What do you think? Is it some sort of a bug or do people run bot farms?
Edit2: It's now been 3 days, and we've gone from 150 000 user accounts 3 days ago to 700 000 user accounts today, making for 550 000+ bot accounts and counting. Almost 80% of accounts on lemmy are now bots, and it may end up being a very serious issue for the lemmy platform once they become active.
Edit3: It's now the 4th day of the attack, and the number of accounts on lemmy has almost reached 1 200 000. Almost 90% of the total userbase is now bots.
Edit 3.1: My numbers are outdated; there are currently 1 700 000 accounts, which makes it even worse: https://fedidb.org/software/lemmy
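For anyone wanting to check the math: assuming the ~150 000 accounts that existed before the signup wave are legitimate, the bot share works out like this (a quick Python sketch using the numbers from the edits above):

```python
# Quick sanity check of the percentages above, assuming the
# ~150 000 accounts that existed before the signup wave are legitimate.
baseline = 150_000  # pre-attack account count (from the original post)

snapshots = {
    "day 3": 700_000,      # Edit2
    "day 4": 1_200_000,    # Edit3
    "fedidb": 1_700_000,   # Edit 3.1
}

for label, total in snapshots.items():
    bots = total - baseline
    print(f"{label}: {bots:,} suspected bots -> {bots / total:.0%} of all accounts")
```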
More robust instances will have to defederate instances with a high concentration of bots and monitor their own new users. Maybe also implement email verification or captchas.
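For admins who want to turn those on right away: Lemmy's site-settings endpoint exposes flags for both. The field names below follow the EditSite payload as I understand it, and the auth mechanism has changed between releases (older versions take the JWT in the request body, newer ones as a bearer header), so treat this as a sketch and check your instance's API docs; the instance URL and token are placeholders.

```python
import requests

# Sketch: enable email verification and signup captchas via the admin API.
# Field names follow Lemmy's EditSite payload -- verify against your
# version's API docs; the instance URL and JWT here are placeholders.
INSTANCE = "https://example-instance.social"
ADMIN_JWT = "..."  # obtained by logging in as an admin via /api/v3/user/login

payload = {
    "require_email_verification": True,  # new accounts must confirm an email
    "captcha_enabled": True,             # show a captcha on the signup form
    "captcha_difficulty": "medium",
    "auth": ADMIN_JWT,  # newer versions expect an Authorization: Bearer header instead
}

resp = requests.put(f"{INSTANCE}/api/v3/site", json=payload, timeout=30)
resp.raise_for_status()
print("Site settings updated:", resp.status_code)
```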
There are almost 1000 lemmy instances already. Getting individuals to fix their signup settings so that they mandate CAPTCHA likely will have to be driven from the lemmy product update level and an agreed upon defederation list for non-conformant instances.
And bot farms would be able to spin up new instances themselves, so a blacklist-based federation model (federate with everyone by default except x, y, and z) isn't going to be viable. There's going to have to be a whitelist (federate only with a, b, and c), and maintaining that as new instances get added will be problematic without an overarching way of pushing updates of known "good" instances automatically.
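To make that concrete: Lemmy's site settings include an allowed_instances list, which switches federation to allowlist mode. Something like the sketch below could pull a shared "known good" list and push it to an instance; the allowlist URL is made up, and the field names and auth handling should be checked against your instance's version.

```python
import requests

# Sketch: push a shared allowlist to an instance so it federates only with
# vetted servers. "allowed_instances" is part of Lemmy's site settings;
# the allowlist URL below is hypothetical.
INSTANCE = "https://example-instance.social"
ADMIN_JWT = "..."  # admin JWT from /api/v3/user/login
ALLOWLIST_URL = "https://example.org/lemmy-known-good-instances.txt"  # made-up source

allowed = [
    line.strip()
    for line in requests.get(ALLOWLIST_URL, timeout=30).text.splitlines()
    if line.strip() and not line.startswith("#")
]

resp = requests.put(
    f"{INSTANCE}/api/v3/site",
    json={"allowed_instances": allowed, "auth": ADMIN_JWT},
    timeout=30,
)
resp.raise_for_status()
print(f"Now federating only with {len(allowed)} instances")
```

Run on a schedule (a daily cron job, say) and that also covers new instances being added, as long as someone maintains the central list.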
Anyone can spin up an instance and create a bajillion bots. That doesn't matter at all. You can't solve that while being open source.
The question is: is whoever is doing this actually USING the bots? Doesn't seem like it yet. And doing it this way would be stupid anyway; those bot instances would just get insta-blocked.