Hi theATL.social (Mastodon) and yall.theATL.social (Lemmy) friends. Your friendly admin, @michael, here.
Currently, theATL.social blocks two domains from federation but does not use any block lists. The Lemmy instance, yall.theATL.social, does not block any domains.
My general admin philosophy is to let users decide what content they want to see, or not see. However, the Mastodon UI can make the adding/removing of domain block lists a bit tedious. (There are some tech/UI-related options to make this easier.)
On the other hand, I am personally not a free speech absolutist, and there are limits to what content could/should be relayed through theATL.social's servers.
For example, illegal content, instances dedicated solely to hate speech/harassment, etc. To that end, the Oliphant Tier 0 block list offers a "floor" to remove literally the worst instances operating on the Fediverse: https://codeberg.org/oliphant/blocklists/src/branch/main/blocklists
As your admin, I don't want to make any unilateral decisions - rather, I'd prefer a user/stakeholder conversation, with as many Q&As as helpful.
I skimmed over the Oliphant unified tier 0 blocklist, and it does not block any instances that I interact with, for what that's worth. I am a bit concerned with the idea of routinely updating the block list based on the Oliphant algorithm. I suspect that the Oliphant project tends to attract the administrators who are most enthusiastic about blocking. The justifications I see for blocking would often apply to a lot of mainstream/establishment opinion in the USA (and probably more so in GA).
That said, I'm fully in favor of blocking obnoxious sites simply for the purpose of maintaining a pleasant and productive environment. For instance, brighton.social is on the tier0 list -- this is basically a conspiracy theory propaganda mill; the local feed screams paranoia. I'd support blocking brighton.social purely to protect theATL users from wandering into a conversation prompted by their nonsense. I consider this to be the same as public parks having a policy that people need to clean up after their dogs. I consider it very unpleasant to be at the receiving end of a propaganda firehose, even if the propaganda is not personally threatening. (I don't know if this captures the entire story for why they are on the blocklist -- there could be other problems with them)
I did notice two sizable instances that are often considered "borderline problematic" in the defederation debates -- and both were set to "silence" rather than "suspend". One is newsie.social (where Decaturish is located). I vaguely remember some criticisms of newsie.social a few months back -- I think part of it was that people were reposting or repeating opinions that were published in the Washington Post or New York Times. Many of the critics wanted nothing to do with journalists. As we've seen with the BBC instance, there are a decent number of people in the fediverse who would block those mainstream publishers, either because they have different standards from mainstream society or because they simply don't want that experience on their Mastodon instance. I wouldn't necessarily take their 'block' as meaning that the instance is anything we'd consider horrible.
The other 'borderline' instance I saw silenced was "QOTO" -- this instance is controversial because its policy is to only defederate when absolutely necessary (e.g. spam overwhelms the server). Many other instances block QOTO either because they view this as aiding hate-mongers, or because they just want to keep the hate-mongers a couple steps removed from them. (Disclaimer: my account was hosted on Qoto before I came here.)
So I guess I don't have a clear opinion here. I think even tier0 may be a bit too stringent, but I can't suggest a better way to quickly distinguish between those that are exceptionally horrible and those that are everyday horrible. Perhaps just silence them, and treat that as a strike, so that if we have any problems with harassment we quickly block the whole server. I suspect that some need to be blocked outright -- if you look at Qoto's blocklist, that could give you a small number that are absolutely intolerable.
Thank you, @DecaturNature - earlier this evening, I conducted a careful review of the domain names of the blocked Tier 0 instances. Many instances' names were self-descriptions of the type of content hosted, and those descriptions were sufficient criteria for exclusion. Others, as you mentioned, were banned for unclear or highly subjective reasons. The instances that you mentioned were not blocked or silenced because people on theATL.social were already following/engaging with them.
With regards to auto-updating the Tier 0 list, I agree that an automatic update procedure is not ideal, as instances may again be blocked without cause.
Perhaps it is best, now that the worst of the worst are blocked, to have a better-documented review process for any future additions to the block list.
And I know that we did have a moderation council from our earlier meetings with Andy, but coordination with that council fell apart a bit - which I'll take the blame for. Perhaps re-convening that group would be helpful going forward.
Perhaps make an exception for the “silence” tier0 instances.
I was surprised to see raspberrypi.social on the tier0 list but then noticed the "silence" category. I suspect that this instance is on the tier0 list because it doesn't have server rules on its About page, rather than due to the content hosted there. It's a single-user instance that claims to be the official instance of RaspberryPi.com.
I believe the admin has sufficient skills to determine whether any instances listed in tier-0 are consumed by users of theatl.social. I trust they're making sound calls on what to include from the list and what to leave out. That said, an auto-update could be deployed with some automated list scrubbing (i.e.: scan the list for changes, check whether newly added instances are consumed, and post a message to theatl.social clearly calling out changes or calls for review). But until then, a manual periodic review is appreciated.
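The "scan the list for changes" step could be sketched roughly like this -- a minimal example that compares two snapshots of a blocklist CSV and reports additions and removals. The `domain` column name and the sample snapshots are assumptions for illustration; the actual Oliphant CSVs use Mastodon's export header, so the column name would need adjusting.

```python
import csv
import io

def blocklist_changes(old_csv: str, new_csv: str):
    """Compare two snapshots of a blocklist CSV (assumed to have a
    'domain' column) and return (added, removed) domain lists."""
    def domains(text):
        reader = csv.DictReader(io.StringIO(text))
        return {row["domain"].strip().lower()
                for row in reader if row.get("domain")}
    old, new = domains(old_csv), domains(new_csv)
    return sorted(new - old), sorted(old - new)

# Made-up snapshot data for illustration:
old_snapshot = "domain,severity\nexample-bad.social,suspend\n"
new_snapshot = ("domain,severity\n"
                "example-bad.social,suspend\n"
                "new-bad.social,suspend\n")
added, removed = blocklist_changes(old_snapshot, new_snapshot)
# added == ["new-bad.social"], removed == []
```

A script like this could run periodically, cross-check the `added` list against domains our users actually follow, and post the diff for review rather than applying it automatically; the `removed` list would also answer the later question of how to notice instances taken off the upstream list.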
I feel the general Mastodon community has a well-established, consistent baseline of acceptable use/content, with guardrails requiring content warnings (CWs) and proper flagging of material not suitable for general audiences. That said, there's a time, place, and instance that can provide people with outlets to consume and share non-"G-rated" (NSFW) content and other content not generally accepted in polite company. TheATL.social is managed in such a way as to avoid publishing NSFW content, and that's fine. My litmus test before posting on theatl.social: "would I feel okay if my mom saw this post?" If not, rethink the content or post to a more appropriate instance.
However...I want to call out one sentence that made me bristle a bit:
I’d support blocking brighton.social purely to protect theATL users from wandering into a conversation prompted by their nonsense...
Assuming the content/instance in question doesn't violate established community standards or terms of use, I would not appreciate the admin (or anyone) doing something to "protect" me from accidental conversation wandering. I'm a fully functional adult who can choose what to read, what to "believe", and what to reject. The same goes for blocking mainstream news orgs and other entities with knee-jerk reactions, such as preemptively blocking threads.net. Why actively close yourself off from the world around you? Echo chambers can be quite toxic and lead to uninformed world views. Why block something without a clear observation of impact? If I don't want to see it, I can simply block it from my account.
To round out the public park analogy: I've wandered through many public parks while people hand out flyers, yell from megaphones, and try to recruit for their cults. I just walk by and ignore them, just as I do when I see something on social media I don't want to engage with. I don't need a nanny to "protect" me. As a gay man, I've heard plenty of mainstream news orgs propagate ideas I find personally offensive; I take the content for what it is and move on, or engage in healthy public debate. By blocking the org, I'd lose context around the discussion, the people engaging in it, and the impact of the topics. I'd rather be informed of the unsavory things than be blindsided because someone put my head in the sand.
I too am a big advocate for free speech and robust public debate, which is why I support the fediverse. But that doesn't mean that individual instances need to include access to everything that's legally published. The ability to access everything is supported by the fediverse as a whole, just as it's supported by the publishing industry as a whole, not by individual magazines.
Behaviors that are tolerable in individuals can become a problem when they are organized and professionalized, as Brighton has done with conspiracy-theorism (some background info here). Brighton is a noise machine. A community dedicated to conspiracy theories is a community that is not only dedicated to lies, it is dedicated to figuring out how to promote these lies with manipulative arguments and by slowly drawing people into a fantasy world. It's frankly a lot of work to assess these lies on a case-by-case basis and I don't think people will be attracted to theATL if the site expects them to do this work for themselves. This isn't a matter of letting people voice their opinions and hear other people's opinions -- it's a matter of turning down the volume on a propaganda campaign. We can see the world around us better when we filter out other people's attempts to mislead us; when those attempts to mislead us are coordinated at the community level, it's appropriate to silence them at the community level.
Tangentially, a community dedicated to conspiracy theories is bound to contain a lot of slander and antisemitism (along with other hateful attitudes).
Hi Michael and everyone. I wrote up some general notes for thinking about this...
# The need for community-level moderation:
I think most Mastodon users get by fine with the standard individual-level options -- you follow people you find interesting and block people who act obnoxious when you encounter them. There's no algorithm pushing obnoxious content onto your 'home' feed, nor is it easy for harassers to discover your posts. The first level of discovery is the 'local timeline' -- this is the main place where members of TheATL.social interact with each other, and why the top priority is to have good moderation of content posted on TheATL.social itself. We already have a clear set of policies for members of the community.
These same moderation policies can be applied to members of other communities -- blocking those who violate our rules from interacting with members of our community. However, this strategy has some limitations -- first, it effectively asks our moderators to moderate the entire Fediverse, which is an impossible task, especially when harassers may have multiple accounts. It also leaves our users exposed to coordinated harassment campaigns across communities -- as long as the harassers are allowed to keep their accounts, they can swarm one target after another and systematically drive anyone they dislike off Mastodon.
Finally, I think we also need to consider indirect interactions, such as slander. It's not enough for a user to be shielded from abusive messages if the abuser continues to spread rumors among the user's acquaintances. For this reason, I'd be inclined to simply cut off any community that allows an abuser to participate... though of course, we don't have the resources to litigate slander accusations.
# What are the options for community-level moderation?
The Mastodon software provides three options for moderating entire websites: 'suspend' (aka defederation/block) is a complete block; 'limit' (aka silence) only prevents content from showing on the federated timeline (if I understand it correctly); and 'reject media' avoids copying/showing any images.
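For the technically curious, these three levers also surface in Mastodon's admin API (`POST /api/v1/admin/domain_blocks`, which requires an admin-scoped token). A minimal sketch of building the request payload -- the helper function, its defaults, and the example domain are my own illustration, not part of any existing tooling:

```python
# Valid severities for Mastodon's admin domain-block endpoint:
# 'noop' (list only), 'silence' (limit), 'suspend' (defederate).
SEVERITIES = {"noop", "silence", "suspend"}

def domain_block_payload(domain: str, severity: str = "silence",
                         reject_media: bool = False) -> dict:
    """Build the form payload for POST /api/v1/admin/domain_blocks.
    'silence' corresponds to limiting; 'suspend' fully defederates."""
    if severity not in SEVERITIES:
        raise ValueError(f"severity must be one of {sorted(SEVERITIES)}")
    return {"domain": domain,
            "severity": severity,
            "reject_media": reject_media}

# Hypothetical example: suspending one of the domains discussed above.
payload = domain_block_payload("brighton.social", severity="suspend")
```

The payload would then be POSTed to the instance with an `Authorization: Bearer <admin token>` header; the same distinctions (silence vs. suspend vs. reject media) apply whether blocks are entered through the API or the admin UI.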
TheATL.social currently blocks 5 servers (1 is a subdomain) (see 'Moderated Servers'). My understanding is that this is a pretty light touch in terms of typical Mastodon moderation. Other community administrators have created long lists of communities that they consider problematic, and others have compiled these into composite blocklists such as Oliphant. Communities can be placed on these lists for a wide range of reasons, so there's some risk of going along with over-zealous blocking.
Note also that individual users can import these blocklists -- so they can hide themselves from these communities, but doing it alone means that a user may be cut out of conversations that their other friends/contacts/acquaintances are participating in.
I wanted to thank you for your comment! I'm out of time (and steam) for today, but I will follow up on thoughts on your points tomorrow. (I'm in general agreement with what you wrote)
To correct my initial post, @DecaturNature is correct. There are ~5 domains currently blocked, not 2. And, I'm in agreement thus far that a "light" approach to blocked domains is no longer appropriate. I'm looking forward to thoughts and feedback on this thread to inform the next steps on policy related to future domain blocks! Thank you for your time contributing to this discussion.
I would agree completely with utilizing the Tier 0 block list at the minimum. If things got bad, and users were being harassed, then going further would be justifiable to me.
I prioritize safe communities over free speech, if a trade-off has to be made.
Also, I believe the research on Mastodon CSAM that made all that noise last week pointed out that utilizing the Tier 0 block list would have filtered out ~80% of the stuff they found.
Agreed - that report is what drove the urgency to implement this. It is one thing if "bad stuff" ended up on the server after reasonable steps were taken to prevent. However, if reasonable steps were not taken to prevent and bad stuff showed up...well, I don't want to be in that position!
I think it’s entirely appropriate to block hate speech and clearly illegal content (CP etc). That stuff has no place in any of our feeds. If there’s a block list that can make that easy I say use it. Obviously be transparent and possibly revisit this discussion periodically but I’m all for it.
I think the tier 0 list is a good start. Most of the servers on it that I've encountered tolerate or were actively stood up to promote the vile behavior of their users, and they don't make any community they interact with better. At best they simply take up oxygen; at worst they drive people out of the fediverse.
Funny thing is that threads.net is already 'suspended' on the Tier 1 list (but not the Tier 0 list).
Later on, it may be worthwhile to consider using these bigger lists for silencing -- it may help improve the quality of the content on the "federated timeline". For instance, the tier 3 list excludes a lot of politics-focused instances (both left and right). Not that politics on Mastodon is a problem, but the timeline may be more enjoyable for most people, with more substantial discussions, if the loudest political accounts are turned down a notch.
Anyway, that would be a question for the people who actually use the federated timeline (not me)
Likewise, I am not a free speech absolutist- limits should be in place to censor the most repugnant content. Using the Tier-0 list for instance-level blocking is a good approach: light touch, using globally accepted standards.
For the rest, if a user doesn't want to see content from a user or domain, they can block either/both at their level. As a user, I should be permitted to follow whom I choose, consume the text/media I wish to see, and block content that offends me. I don't need an admin making those decisions for me. An overzealous admin (or instance leadership) blocking domains at the instance level that /they/ find distasteful is far too arbitrary.
However, the admin must balance risk - they own the underlying infrastructure and could be held accountable for the content transacted and stored, both legally and with respect to the broader community. Consider a situation where a user is acting against de facto global content standards and the admin does nothing: all users of the instance are at risk of global defederation.
I feel the guidelines established to moderate theatl.social are broad enough to foster open dialog, but also protect the instance from broad defederation. Adding a Tier-0 block is a sensible addition.
Thank you all for your comments. I will be responding to each point in the next day or so. However, I do see from the comments that there is an immediate need to implement the Tier 0 Oliphant block list, which has now occurred.
Note that there were some domains with which theATL.social users currently engage. I am giving a 2-week grace period for anyone who wants to keep those domains unblocked; otherwise, the remaining domains will be blocked after that point.
Thank you all for your comments and contributions to making this instance a good place to be.
How do we unblock any instance that got on Oliphant tier 0 but are later removed? Is that something that will be checked for?
Another thought -- how does blocking and silencing affect the new search system? I guess blocking is obvious...but does silence block search? If so, I think that makes silencing much more impactful and we'd need to be more careful with it.
Hi @[email protected] - apologies for the slight delay. Work and some other stuff kept my dance card more full than expected this past week.
How do we unblock any instance that got on Oliphant tier 0 but are later removed? Is that something that will be checked for?
I'd like to have a users council or other group-type decision process for blocking/unblocking going forward. If I were put on the spot for my own personal thoughts, I'd say that unless an instance is primarily dedicated to illegal, hateful, malicious, or otherwise disruptive activities, I'm inclined not to defederate. My own personal differences with the speech or opinions of another instance have never been a factor in that decision. So, if there is an instance blocked via Oliphant that needs to be unblocked, then let's unblock it!
how does blocking and silencing affect the new search system? I guess blocking is obvious…but does silence block search?
Yeah, blocking/defederating would prevent search, so agreed that we need to be a bit more careful with who we block or not.
I wanted to circle back on this discussion thread. Things got a bit busy with the instance upgrade (which is now upgraded to 4.1.6).
With regards to Tier 0 Oliphant, I had initially thought that the list was primarily bottom-of-the-barrel sites, but on closer look, it included sites that, well, people on the instance use - such as newsie.social. I thought that our discussion on our Lemmy server, plus DMs and feedback from the Mastodon side, sorted out which sites we wanted to exclude from the domain block list. All that being said, my objective with the blocks was to remove known sources of abusive, illegal, or malicious (e.g., spam, malware) content.
And then that leaves us in the gray zone. There are instances where 98% of the users are great Fediverse citizens and 2% are horrible trolls. There are instances where teams of moderators carefully watch posts, and others where moderators only step in during extreme situations. And there are instances that are for-profit, non-profit, from different countries and contexts, and so on. I don't have any straightforward answer as to where the line is located in this gray area with regards to whether a domain should be defederated or not.
@DecaturNature had a great point regarding what happens if theATL.social becomes either a target or a source of disinformation. A group of users (real or automated) could create accounts on the instance in a coordinated fashion and upset the apple cart of our existing community by posting disinformation and slander. It would seem clear that those individuals would not be welcome, but if someone is merely being a jerk or insincere, is that enough justification to ban them?
Final Thoughts
I'm going to chew through the above points. Obviously other, err...large social networks have figured something out. And smaller Mastodon servers have done the same. It is quite likely that the current server rules are not sufficient to cover all the potential issues, situations, and possibilities of human (or bot) behavior by theATL.social users, or of content that is federated to theATL.social. If you have any additional thoughts, please feel free to include them below. I don't have a timeline for changes to the server rules, but when there are changes (or if you want to propose changes), I would prefer that to be a collaborative exercise with the core group of users on this instance.