Entrusting our speech to multiple different corporate actors is always risky. Yet given how most of the internet is currently structured, our online expression largely depends on a set of private companies ranging from our direct Internet service providers and platforms, to upstream ISPs (sometimes....
Check out the company Aqua: they have been buying up public water companies across the country and running them into the ground with no government oversight, since they are a private company.
It’s an unpopular opinion, but crippling platforms over CSAM is a lot more harmful than what would happen if we did not have such draconian laws around it. Do people think there would be some dramatic explosion of CSAM? I don’t buy that for a second, and the act of producing such material has always been and will always be illegal, so, like everything else, it seems ridiculous to prosecute the particular crime of possession.
Seize all funds received for distributing it, throw anyone involved in producing it in prison and throw away the key, and stop holding the threat of social death over anybody’s head if some idiots throw a bunch of digital gunk at them.
it seems ridiculous to prosecute the particular crime of possession
what does this even mean? you mean people hoarding CSAM shouldn't be charged because they're not distributing it?
Do people think there would be some dramatic explosion of CSAM?
Yes. This is not your local backwater town where you know there are a few visibly shitty & disgusting people, people tell their kids to stay away, and everyone stays safe. And if you think shit doesn't explode on the internet, you must have been living under a rock for the last two decades.
That's stupid on a whole new level, and your made-up scenario doesn't make it any better. No one is threatened for having been sent some questionable content. The person who sent it, however, might be, and the tech today makes it incredibly easy to prove where anything came from, since everyone is being tracked.
Seize all funds received for distributing it, throw anyone involved in producing it in prison and throw away the key,
How about we prevent such things from happening by discouraging them in the first place? Sure, they won't be down to zero, but a solution that only kicks in after distribution has already started is highly disturbing.
As a CSA survivor, who had images taken of me while I was abused... Fuck you.
People wanting to possess it is exactly what encourages people to produce the material. If you let people possess it with no consequences, demand will shoot up, and basic economics should tell you what happens next on the supply side.
That is disgusting. Seriously. You should feel ashamed of yourself.
ISPs are, to me, like infrastructure. They are like roads or power lines. Asking ISPs to block malicious activity is like asking the electrical power grid to be responsible for stopping its electricity from being used for illegal activity, or like asking the road builders to be responsible for bank robbers and murderers driving on the roads. It's simply ridiculous to assign responsibility like that.
Yup. Too many people don't get it. They're all for heavy regulation so long as it's their side doing the regulating; then five years later they're crying about being regulated.
I understand that censorship can be misused, but I also understand that fighting fascistic right-wing propaganda and demagoguery is more important than sticking to general liberal ideals like "free speech", if radically following those ideals leads to bad outcomes.
So yes, I am in favor of not giving people who are against democracy a platform to push their lies and propaganda. With the current level of education and media literacy in the broad population, lies are much more easily spread than countered. Ignoring that means handing them their victories.
Facts are boring and feelings can easily be abused and misled.
I don't have an answer where the line is, and where and how censorship/blocking/deplatforming is effective. I just think that this isn't a simple issue.
But I would mostly agree that this shouldn't be decided by ISP companies. They probably shouldn't have a TOS. And if you ask me, they are infrastructure providers, so they have a monopoly, and therefore they should be non-commercial and under democratic control. Because democracy has proven to be a good way to handle monopolies.
I agree the situation looks hopeless regarding fighting misinformation, but conversation is the only viable tool. Failing that, when the topic is important enough, only the tool of violence remains. A person about to blow themselves up in a crowd likely can't be talked out of it. Hopefully the situation isn't so bad that a lot of people are like that. I think it's better to promote education than to trust anyone to draw a line on what speech I can't hear (or say).
This was literally the Net Neutrality debate from 2013 to 2016 or so... and y'all can correct me if I'm misremembering the argument. IIRC it was about whether an ISP wanted to be classified as a public utility or a private service, and the outcome was something along the lines of:
Public utility > protected under Title II of the Communications Act of 1934 (amended in 1996, so it's not that old) > could NOT be sued for the content they transmitted, but they could not police that content; otherwise they were liable for what their customers used it for.
The reasoning was that you could not sue a public utility for someone using it to do something illegal. However, if it was a private service:
Private service > not protected under Title II > could police content, as it was private infrastructure. The fear was that Time Warner or someone would throttle connections to streaming platforms to ruin the experience so people would go back to watching cable. This was kicked off when Netflix, Level 3, and Comcast all got into a spat over content usage, data volumes, and who was responsible for paying for hardware upgrades.
The issue was that ISPs were poorly classified at the time (unsure if that has changed) and had a habit of flip-flopping classifications as they saw fit in different cases (they claimed to be both and would only argue in favor of whichever classification was more useful at the time). I don't think this was ever resolved; it was on Chairman Wheeler's to-do list, but 45's nominee to the FCC was a wet blanket and intentionally did nothing. Now the seat is empty, because congressional approval is required for appointees, and we're doing the "think of the kids / ruin the internet" bill again... /Sigh.
Y'all know the drill: call your congress critter n' shit, remind them not to break the internet again. And if you're in a red state, just fart loudly into the phone; it's funny, and they won't do anything constructive anyway, even if you asked nicely. (Sorry, I'm just tired of this cycle of regulatory lights on, lights off.)
Man, this is so weird to read from a post-Soviet country. Here, a "red" state/region in the '90s meant a region with a communist majority. And they probably would have been for the public utility.
Anyway, it doesn't matter now in a personalist resource autocracy.
That's the part that makes this doubly frustrating: by all accounts it should be a public utility. Back in the early '00s the US federal government basically gave all these companies a blank check to provide "broadband internet" to every home in America (see the US Postal Service for the government doing shit like this before).
They (the ISPs) have since taken the money and done some of the work (with the promise to get it done someday, eventually, maybe never), and the term "broadband" is borderline useless as a measure of an acceptable internet connection. Every few years there is some scuttlebutt about raising the standard of what "broadband" means, but the last update set it at 25 Mbps down / 3 Mbps up... which in 2023 is pretty embarrassing.
I don't want my local ISP to be making judgments about whether my neighbor is pirating movies or posting hate speech.
But I do want my local ISP to be able to cut off connectivity to a house that is directly abusing neighborhood-level network resources, in order to protect the availability of the network for my house and the rest of the neighborhood.
Back in the early 2000s there was a spate of Windows worms known as "flash worms" or "Warhol worms"¹, which could flood out whole network segments with malware traffic. If an end-user machine is infected by something like this, it's causing a problem for everyone in the neighborhood.
And the ISP should get to cut them off as a defensive measure. Worm traffic isn't speech; it's fully-automated malware activity.
¹ From Andy Warhol's aphorism that "in the future, everyone will be famous for 15 minutes", a Warhol worm is a worm that can take over a large swath of vulnerable machines across the Internet in 15 minutes. https://en.wikipedia.org/wiki/Warhol_worm
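To make the distinction concrete, a content-blind cutoff like this can key on traffic volume alone, never payloads. Here's a purely illustrative sketch of that kind of check at a neighborhood edge; the function names, record shape, and threshold are all made up, not any real ISP's tooling:

```python
# Illustrative only: flag hosts by send rate alone, never by content.
# All names and the threshold are hypothetical.
from collections import defaultdict

WORM_PPS_THRESHOLD = 50_000  # packets/sec; arbitrary illustrative cutoff


def hosts_to_quarantine(flow_records):
    """flow_records: iterable of (source_ip, packet_count, seconds) tuples."""
    rates = defaultdict(float)
    for source_ip, packet_count, seconds in flow_records:
        rates[source_ip] += packet_count / max(seconds, 1e-9)
    # The decision is content-blind: only sustained volume trips it.
    return [ip for ip, pps in rates.items() if pps > WORM_PPS_THRESHOLD]


if __name__ == "__main__":
    sample = [("203.0.113.7", 9_000_000, 60), ("203.0.113.8", 12_000, 60)]
    print(hosts_to_quarantine(sample))  # ['203.0.113.7']
```

The point of keying on volume is exactly the speech distinction above: a flooding worm looks nothing like a person posting, so no judgment about content is ever involved.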
A fascinating read. I'm sure there will be plenty of people complaining about their "centrist / fence-sitting" takes, but what they're saying is perfectly valid. These top-level providers shouldn't be interfering with arguably critical infrastructure.
Users need more control over the kind of content they want to see. The problem Lemmy has is very similar to the main problem with the internet as a whole: the current model is that of a "regulator" who controls the flow of information for us.
What I'd like to see is giving users the tools to filter for themselves, and that means the internet as a whole. Not interested in sports? Let me filter it all out myself, instead of blocking individual parts piecemeal.
The problem is that no company has an incentive to work on something like that, and I wouldn't even know where to start designing such interface tools on my own. But there is, for example, a keyword blocker for YouTube that prevents videos containing certain terms from appearing on my timeline. I've used it to block everything "Trump", for example. I'd like to see more of that.
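The core of such a tool is genuinely simple. As a toy sketch of the idea (the feed item shape and names here are invented; a real blocker would hook the page's DOM from a browser extension rather than run as a script):

```python
# Toy version of a user-side keyword blocker: drop feed items whose titles
# contain any user-chosen term. The data shape here is hypothetical.
BLOCKED_TERMS = {"trump"}  # user-chosen, per the comment above


def filter_feed(feed_items, blocked_terms=BLOCKED_TERMS):
    """Keep only items whose title mentions none of the blocked terms."""
    return [
        item for item in feed_items
        if not any(term in item["title"].lower() for term in blocked_terms)
    ]


if __name__ == "__main__":
    feed = [{"title": "Cute otters"}, {"title": "Trump rally highlights"}]
    print(filter_feed(feed))  # [{'title': 'Cute otters'}]
```

The hard part isn't this logic; it's that platforms don't expose their feeds in a filterable form, which is exactly the incentive problem above.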
The idea sounds nice in theory, but there is a reason people bring their car to a shop instead of changing their own oil. There are a lot of things we could or should take responsibility for directly, but they are far too numerous for us to take responsibility for every one of them. Sometimes we just have to place trust in groups we loosely vetted (if at all) and hope for the best. We all do it every day in all sorts of capacities.
To put it another way: do you think we should have the FDA? Or do you think everybody should have to test everything they eat and put on their skin?
To put it another way: do you think we should have the FDA? Or do you think everybody should have to test everything they eat and put on their skin?
There is a middle ground. The FDA shouldn't have the power to ban a product from the market. They should be able to publish their recommendations, however, and people who trust them can choose to follow those recommendations. Others should be free to publish their own recommendations, and some people will choose to follow those instead.
Applied to online content: Rather than having no filter at all, or relying on a controversial, centralized content policy, users would subscribe to "reputation servers" which would score content based on where it comes from. Anyone could participate in moderation and their moderation actions (positive or negative) would be shared publicly; servers would weight each action according to their own policies to determine an overall score to present to their followers. Users could choose a third-party reputation server to suit their own preferences or run their own, either from scratch or blending recommendations from one or more other servers.
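To make the mechanics of that concrete, here is a bare-bones sketch of the scoring model described above; every name is invented, and a real system would also need identity, federation, and anti-brigading machinery:

```python
# Hypothetical "reputation server": public moderation actions are shared,
# each server weights moderators per its own policy, followers see the score.
from dataclasses import dataclass


@dataclass
class ModAction:
    moderator: str   # who acted
    content_id: str  # what they acted on
    vote: int        # +1 endorse, -1 flag


class ReputationServer:
    def __init__(self, moderator_weights):
        # A server's whole policy is just how much it trusts each moderator.
        self.weights = moderator_weights

    def score(self, content_id, actions):
        """Weighted sum of all public actions targeting this content."""
        return sum(
            self.weights.get(a.moderator, 0.0) * a.vote
            for a in actions
            if a.content_id == content_id
        )


if __name__ == "__main__":
    actions = [ModAction("alice", "post-1", -1), ModAction("bob", "post-1", +1)]
    strict = ReputationServer({"alice": 1.0, "bob": 0.2})
    lenient = ReputationServer({"alice": 0.1, "bob": 1.0})
    print(strict.score("post-1", actions))   # -0.8: hidden for strict followers
    print(lenient.score("post-1", actions))  # 0.9: shown for lenient followers
```

Note how the same public actions yield different outcomes per server: that's the whole pitch, moderation data is shared while the editorial judgment stays with whichever server you choose to follow.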
If you ask me (and nobody ever does, for good reason), one of the only times an ISP should be pulling the plug on online speech is when you start posting actual malicious links that have a good chance of costing your grandma her retirement funds, or getting a crypto miner installed on your tech-illiterate uncle's laptop, or something equally destructive.
Your ISP should have no insight into your traffic at all, and should therefore be unable to make any judgement about what traffic to block and what not to block, with the exception of the volume of traffic and where it is going.
I remember getting a warning years ago from an ISP I don't use anymore that my service would be cut off if I downloaded a pirate torrent again. Why the fuck were they paying attention to what I was doing? Even if it was piracy, it was none of their fucking business and they wouldn't be implicated. I've used a VPN ever since.
Although I do agree they should have no insight, I'd rather the insight they currently have be used to actually block bad actors' sites than just to spy on you and, most likely, sell your data.
We already see how badly stuff gets censored or punished on social media, with things like bots that cast such a wide net that lots of innocents get caught, with no human to appeal to.
You only have to look at how the first amendment is abused in the US to understand just how bad free speech without consequence is.
Yuval Noah Harari did a session with The Rest Is Politics podcast. He brought up a concept that was totally alien to me beforehand. In the past, fake news has sought to shock people into taking notice. With AI this will change dramatically: rogue states will use AI to befriend individuals and then manipulate their thought processes by gentle integration. I found this an immensely scary prospect to dwell on. People rarely think about the person on the other end of a conversation being something other than what they portray. The concept is very credible.
To me, this is one of the reasons why we must police activity online. But it must be done without government interference. Ideally an international effort should be made. An international fact-authentication group would go a long way, too.
I'll agree that ISPs should not be in the business of policing speech, buuuut
I really think it's about time platforms and publishers were held responsible for content on their platforms, particularly if, in their quest to monetize that content, they promote antisocial outcomes like the promulgation of conspiracy theories, hate, and straight-up crime.
For example, Meta is not modding down outright advertising and sales of stolen credit cards at the moment
Also, Meta selling information with which to target voters... to foreign entities.
The issue with this is that holding tech companies liable for every possible infraction will mean that platforms like Lemmy and Mastodon can't exist, because they could be sued out of existence.
The issue with this is that holding tech companies liable for every possible infraction
That concern was the basis for Section 230 of the 1996 Communications Decency Act, which is in effect in the USA but is not the law in places like, say, the EU. It made sense at the time, but today it is desperately out of date.
Today we understand that by absolving platforms like Meta of a duty of care to take reasonable steps not to harm their customers, we let their profit motive guide them to look the other way when their platforms are used to disseminate disinformation about vaccines that gets people killed; that the money has them protecting Nazis; and that algorithms intended to promote engagement become a tool not just for advertisers but for propagandists and information-warfare operations.
I'm not particularly persuaded that reforming Section 230 of the Communications Decency Act in the US would doom nonprofit social media like most of the fediverse. If you look around at all, most of it already follows a well-considered duty-of-care standard that gives its operators substantial legal protection from liability for what third parties post to their platforms. Also, consider even briefly that this is the standard in effect in much of Europe, and social media still exists there; it's just less profitable and has fewer Nazis.
The problem is that your definitions are incredibly vague.
What is a "platform" and what is a "host"?
A host, in the technical sense, could mean a hosting company that you "host" a website with. If it's a private website, how would the hosting company moderate that content?
And that's putting aside the legality and ethics of one private company policing not only another private company, but also one that's a client.
Fair point about hosts. I'm talking about platforms, as if we held them to the standards we hold publishers to. Publishing is protected speech so long as it's not libelous or slanderous, and the only reason we don't hold social media platforms to that kind of standard is that they demanded (and received) complete unaccountability for what their users put on them. That seemed okay as a choice to let social media survive as a new form of online media, but the result is that for-profit social media, being the de facto public square, have all the influence they want over speech but no responsibility to use that influence in ways that aren't corrosive to democracy or the public interest.
Big social media already censors content it doesn't like; I'm not calling for censorship in an environment that has none. What I'm calling for is some sort of accountability, to nudge them in the direction of maybe not looking the other way when offshore troll farms and botnets spread division and disinformation.
I'm a psycho because I don't like how toxic the internet and society have become, and how easily people are manipulated by hate speech and hateful ideologies?