How Will Lemmy and Social Media Handle Advanced Bots in the Future?
As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.
What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?
We're not handling the LLM generative bullshit bots now, anywhere. There's a thing called the dead Internet theory. Essentially most of the traffic on the Internet now is bots.
It’s not just the internet. For example, students are handing in essays straight from ChatGPT. Uni scanners flag it and the students may fail. But there’s no good evidence on either side: the uni’s detection is unreliable (and unlikely to improve on false positives, or negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated letters. Consultants probably give LLM-based reports to clients. We’re doomed.
Not even the biggest tech companies have an answer sadly... There are bots everywhere and social media is failing to stop them. The only reason there aren't more bots in the Fediverse is because we're not a big enough target for them to care (though we do have occasional bot spam).
I guess the plan is to wait until there's an actual way to detect bots and deal with them.
Not even the biggest tech companies have an answer sadly…
They do have an answer: add friction. Add paywalls, require proof of identity, start using client-signed certificates that need to be validated by a trusted party (rough sketch below), etc.
Their problem is that these answers affect their bottom line.
I think (hope?) we actually get to the point where bots become so ubiquitous that the whole internet will become some type of Dark Forest and people will be forced to learn how to deal with technology properly.
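To make the client-signed certificate idea above concrete, here's a minimal Python sketch using the `cryptography` library, assuming an RSA-signed certificate. It's illustrative only: real validation would also check expiry, revocation, and the full chain.

```python
# Hypothetical sketch: check that a client certificate was signed by a
# trusted identity provider. Assumes RSA signatures; expiry, revocation,
# and chain building are deliberately left out.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def is_signed_by_trusted_party(client_pem: bytes, ca_pem: bytes) -> bool:
    client_cert = x509.load_pem_x509_certificate(client_pem)
    ca_cert = x509.load_pem_x509_certificate(ca_pem)
    try:
        # Verify the CA's signature over the client certificate's contents.
        ca_cert.public_key().verify(
            client_cert.signature,
            client_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            client_cert.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False
```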
Their problem is that these answers affect their bottom line.
It's more complicated than that. Adding friction and paywalls would quickly kill their userbase; requiring proof of identity or tracking users is a privacy disaster, and I'm sure many people (especially here) would outright refuse to hand IDs to companies.
They're more of a compromise than a real solution. Even then, they're probably not foolproof, and bots will still get through.
The Fediverse has the advantage of being able to control its size. If 10 million people join Lemmy tomorrow, most of them go to lemmy.world, and then lemmy.world users start causing trouble, that instance gets defederated.
Other than that we only have human moderation which can be overwhelmed.
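Defederation is conceptually tiny. A hypothetical gatekeeper on incoming federated traffic might look like this (instance names made up; real ActivityPub handling is much richer):

```python
# Toy sketch of defederation: drop incoming federated activity whose
# actor lives on a blocked instance.
from urllib.parse import urlparse

BLOCKED_INSTANCES = {"bad.example"}  # the admin's blocklist (placeholder)

def accept_activity(activity: dict) -> bool:
    # ActivityPub activities carry an "actor" URL; reject blocked origins.
    host = urlparse(activity.get("actor", "")).hostname or ""
    return host not in BLOCKED_INSTANCES
```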
We also have auto moderators. The recent spam wave didn't reach my instance at all, but my Matrix notification channel sure did explode with messages about bots being banned.
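For the curious, the notification side of that is simple enough. Here's a hypothetical sketch posting a ban notice to a Matrix room over the standard client-server API; the homeserver, room ID, and token are placeholders, and the automod logic that decides the ban is out of scope:

```python
# Sketch: post a "bot banned" notice to a Matrix room via the
# client-server API. All credentials below are placeholders.
import uuid
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.org"  # placeholder
ROOM_ID = "!modlog:example.org"            # placeholder
ACCESS_TOKEN = "syt_..."                   # placeholder

def notify_ban(username: str, reason: str) -> None:
    txn_id = uuid.uuid4().hex  # unique per message; makes retries idempotent
    url = (f"{HOMESERVER}/_matrix/client/v3/rooms/{quote(ROOM_ID)}"
           f"/send/m.room.message/{txn_id}")
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"msgtype": "m.text", "body": f"Banned {username}: {reason}"},
        timeout=10,
    )
    resp.raise_for_status()
```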
It's also a very very VERY small platform compared to other social media platforms like Reddit. (I had another comment where I calculated this but it's ridiculously small)
It is unlikely that it would see anywhere near the same level of dedicated bot activity due to the low return on invested effort.
This is a problem that will become greater once the value of astroturfing and shifting opinion on Lemmy is high enough.
I saw a comment the other day saying that "the line between the most advanced bot and the least talkative human is getting thinner and thinner".
Which made me think: what if bots are set up to pretend to be actual users? With a fake life they could talk about, fake anecdotes, fake hobbies, fake jokes, but everything would seem legit and consistent. That would be pretty weird, but probably impossible to detect.
And then when that roleplaying bot recommends a product once in a while, you would probably trust it; after all, it gave you advice for your cat last week.
I've just accepted that if a bot interaction has the same impact on me as someone making up a fictional backstory, I'm not really worried whether it is a bot or not. A bot shilling for Musk and a person shilling for Musk because they bought the hype are basically the same thing.
In my opinion, the main problem with bots is not individual accounts pretending to be people, but the damage they can do en masse through a firehose of spam posts, comments, and manipulated engagement mechanics like up/down votes. At that point there is no need for an individual account to be convincing, because it is lost in the sea of trash.
Even more problematic are entire communities made up of astroturfing bots. This kind of thing is increasingly easy and cheap to set up and will fool most people looking for advice online.
You're missing the big impact here, which is that bots can shift public opinion en masse, and that affects you directly.
Gone are the days when individuals formed their own opinions; today, opinions are just osmosed through social media.
And if social media is essentially just a message bought by whoever can pay for the biggest bot farm, then anyone who thinks for themselves and wants to push back immediately becomes the enemy of everyone else.
I think smarter people than me will have to figure it out and even then it's going to be a war of escalation. Ban the bots, build better bots, back and forth back and forth.
Some news sites had an interesting take on comments sections. Before you could comment on an article, you had to correctly answer a 5-question quiz proving you actually read it.
Some news sites had an interesting take on comments sections. Before you could comment on an article, you had to correctly answer a 5-question quiz proving you actually read it.
It would be interesting to try that on Lemmy for a day. People would probably not be happy.
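A bare-bones version of such a gate could be as small as this (Python, everything hypothetical; question storage and the web layer are left out):

```python
# Toy read-the-article gate: the commenter must answer every question
# correctly before their comment is accepted.
QUIZ = {  # article_id -> list of (question, expected_answer); made up
    "article-123": [
        ("Which instance was defederated in the article?", "lemmy.example"),
        # ...four more questions per article
    ],
}

def comment_allowed(article_id: str, answers: list[str]) -> bool:
    questions = QUIZ.get(article_id, [])
    if not questions or len(answers) != len(questions):
        return False  # no quiz configured, or wrong number of answers
    return all(
        given.strip().lower() == expected.lower()
        for given, (_, expected) in zip(answers, questions)
    )
```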
There was already a wave of bots identified, IIRC. They were only identified because:
1. the bots had random letters for usernames
2. the bots did nothing but downvote, instantly downvoting every post by specific people who held specific opinions
It turned into a flamewar; by the time I learned about it, I think the mods had deleted a lot of the discussion. But, like the big tech platforms, the plan for bots is likely going to be "oh crap, we have no idea how to solve this issue." I don't intend to diss the admins; bots are just a pain in the ass to stop.
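For what it's worth, those two tells are easy to turn into a toy detector (field names and thresholds below are made up for illustration):

```python
# Toy version of the two tells described above: random-looking usernames
# and accounts that do nothing but downvote.
import math
from collections import Counter

def username_entropy(name: str) -> float:
    # Shannon entropy per character: random strings score high, names low.
    if not name:
        return 0.0
    counts = Counter(name.lower())
    total = len(name)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_vote_bot(account: dict) -> bool:
    down = account.get("downvotes", 0)
    activity = down + account.get("upvotes", 0) \
        + account.get("posts", 0) + account.get("comments", 0)
    # Almost all activity is downvotes, and there's enough of it to judge.
    downvote_only = activity > 50 and down / activity > 0.95
    return downvote_only and username_entropy(account["name"]) > 3.0
```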
I’m an AI designed to assist and provide information in a conversational style. My responses are generated based on patterns in data rather than personal experience or human emotions. If you have more questions or need clarification on any topic, feel free to ask!
Verification: Implement robust account verification and clearly label bot accounts.
☑ Clear label for bot accounts
☑ 3 different levels of captcha verification (I use the intermediate level on my instance and rarely deal with any bots)
Behavioral Analysis: Use algorithms to identify bot-like behavior.
Profiling algorithms seem like exactly what people are running away from when they choose Fediverse platforms; this kind of solution would have to be very well thought out and communicated.
User Reporting: Enable easy reporting of suspected bots by users.
☑ Reporting in Lemmy is just as easy as anywhere else.
Rate Limiting: Limit posting frequency to reduce spam.
☑ Like this?
[screenshot of Lemmy's rate limit settings]
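For context, the standard mechanism behind settings like that is a token bucket. A generic sketch (numbers arbitrary; Lemmy's actual implementation may differ):

```python
# Generic token bucket: each account gets `capacity` actions, refilled
# at `rate` tokens per second.
import time

class TokenBucket:
    def __init__(self, capacity: float = 6, rate: float = 0.01):
        self.capacity = capacity          # burst size (arbitrary here)
        self.rate = rate                  # ~1 token every 100 seconds
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # rate limit hit: reject the action
```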
Content Moderation: Enhance tools to detect and manage bot-generated content.
What do you suggest other than profiling accounts?
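One non-profiling option: compare the content instead of the accounts. Bot waves tend to repost near-identical text, which even crude shingle overlap catches. A toy sketch (the 0.8 threshold is arbitrary):

```python
# Near-duplicate detection over word shingles: flags copy-paste spam
# waves without tracking who posted them.
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    words = text.lower().split()
    if len(words) < k:
        return {tuple(words)} if words else set()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)  # Jaccard index

# e.g. keep recent posts in memory and flag anything with similarity > 0.8
```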
User Education: Provide resources to help users recognize bots.
This is not up to the Lemmy development team.
Adaptive Policies: Regularly update policies to counter evolving bot tactics.
Mhm, I love dismissive "Look, it already works, and there's nothing to improve" comments.
Lemmy lacks the capabilities to effectively handle even the bots of 10+ years ago, never mind the bots of today.
The controls that are implemented address "classic" bot concerns from nearly a decade ago, and even then they're shallow and only kind of effective. They wouldn't have been considered adequate for a social media platform in 2014, and they are nowhere near adequate today.
For commercial services like Twitter or Reddit, the bots make sense: they let the platforms report inflated "user" numbers while also generating more random nonsense to sell ads against.
But for the Fediverse, what would the goal be: post random stuff into the void and profit?? I guess you could long-game some users into a product they only research on the Fediverse, but it seems more cost-effective for the botnets to attack the commercial networks first.
Has someone posted an argument, or do you foresee someone on here posting an argument, taking the side of "alternative facts" that would change your mind? If not, then it's just a comment that likely gets downvoted to the bottom and ignored anyway; not worth the time to post it. I think something like Facebook works better for these tactics, since the population is generally older and more likely to see and reshare just about any nonsense, true or not.
Because I personally don't see the Fediverse as a great medium for trying to bring people into the cult, and bringing people out of the cult is even less likely online, Fediverse or not.
As far as I'm aware, nothing is implemented for this yet, though I've got no real idea; I'm not smart enough for this type of thing. The only solution I can think of is a paywall (I know, disgusting) to raise the barrier to entry and keep bots out. That, and, though I don't know if it's currently possible, making it so only people on your instance can comment, vote, and report posts on that instance.
I personally feel that, depending on the price of joining, that could slightly lessen the bot problem for that specific instance, since getting banned means you wasted money instead of just time. Though it might also keep the instance from growing.