Could somebody explain why this is bad?
I'm not a fan of all this AI stuff.
But I can't think of an argument besides "Big tech is bad and they should not make money if they use public information to do so."
I'm genuinely curious. There may be massive amounts of data being processed. But only public data, right? If they can use that data for something, isn't that something positive? Or at the very least nothing negative?
I always thought anything that is posted in public spaces means making it available for anyone to use anyway. So what am I missing here?
“public” does not mean you’re allowed to steal it and republish it as a work of your own
That is not what they or LLMs do. And while the morals around it are questionable, acting like they are straight-up stealing and republishing work hurts having serious discussions about it.
LLMs create statistical distributions of words and phrases based on ingested data, and then sample those distributions given conditional probabilities.
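To make that concrete, here's a toy sketch of the "build conditional distributions from ingested text, then sample them" idea. This is a simple bigram model in Python, not how real LLMs work (they use neural networks over token embeddings), but the statistical principle is the same:

```python
from collections import defaultdict, Counter
import random

# Tiny "ingested" corpus standing in for scraped text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
# These counts define conditional distributions P(next | previous).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    """Sample the next word from the conditional distribution P(next | word)."""
    dist = counts[word]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short sequence conditioned on a starting word.
out = ["the"]
for _ in range(5):
    out.append(sample_next(out[-1]))
print(" ".join(out))
```

Nothing from the corpus is stored verbatim as "a document"; only co-occurrence statistics survive, yet the output is still entirely derived from the ingested text, which is exactly what the debate is about.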
Why should for-profit companies have the right to create these statistical distributions based on our written works without consent? They're not publishing these distributions, and the purpose of ingesting these texts is not to report on the distributions.
They're just bottom-trawling the internet and acting as if they have every right to use other peoples' written works. While people are having "serious discussions" around it, they're moving forward, ignoring the discussions entirely, and trying to force the conclusion of those discussions to be "well, it's too late now, anyway".
Original analysis of public data is not stealing. If it were stealing to do so, it would gut fair use, and hand corporations a monopoly of a public technology. They already have their own datasets, and the money to buy licenses for more. Regular consumers, who could have had access to a corporate-independent tool for creativity and social mobility, would instead be left worse off with fewer rights than where they started.
You should have the discussions first. Not after you already profited off someone else's work. If the argument should be about whether they can use the data or not, then harvesting it first is absolutely harmful to the discussion you claim is important. You can't just argue one side is in bad faith when the other side is already objectively acting in bad faith if we are to assume the discussion is real.
But we are way past that. And legally, while they are walking a thin line, it seems that LLMs are going to win the legal challenges.
I don't think stopping or slowing LLM development is going to work, because then more questionable countries who really don't give a fuck about IP will pull ahead.
If you want my honest opinion, I don't think these LLM companies are stealing, and I do think artists are getting the shit end of the stick at the same time. We are heading towards an AI dystopia, and I think the way to address it is through more solid social welfare programs instead of fighting about IP. While artists are the focus, this AI revolution is coming for all labor. Artists are unfortunately just the first ones being impacted by it.
I think people should stop fighting about the minor things and instead prep for the inevitable unemployment this will bring. LLMs are really just the tip of the iceberg.
Yeah, you're right that it is different from simply stealing content. However, LLMs still use protected material as input, and it seems that at least parts of those works can be uniquely identified in the output. That can be considered problematic, even if the data is deconstructed into embeddings in between input and output.
As a freelance writer, I write an article for a respected tech website. That article gets views, which in part determines if I get any sort of a performance bonus.
Along comes an AI that scrapes my content, so that when someone asks it a question about how to do "x" on Mac, it spits out an answer based on what it learned from MY article, sometimes regurgitating it word for word, and in doing so deprives me and my publisher of a much-needed page view.
It affects their revenue, since it affects ad views. It affects my performance bonus.
This isn't about big tech being "bad". It's about writers and other artists not being credited or paid for their work.
This is a good explanation, thank you. I didn't think about people who literally post stuff to earn money. Since so much talk already revolved around scraping sites like Lemmy, that was all I had in mind.
What you describe sounds like the same problem with services that avoid paywalls or ads of news sites.
In this case I fully agree that some solution needs to be found.
I don't consent to my copyrighted material -- which is literally everything I write and post online, including this comment -- being included in these products. In some cases, I have implicitly consented to allowing this to happen via the EULA of websites I've used over the years, but having them actively scraping the web for content means they're directly bypassing any agreements I may have made with service providers, and that they're collecting my copyrighted works without my ever having done business of any sort with them.
I haven't agreed to contribute to their for-profit operation, I'm not being compensated in any way for this participation -- whether financially or via the providing of a service -- and I don't believe they have any moral right to decide that I'm going to contribute whether I want to or not.
Yeah, I don't really care what they harvest either. I suppose if conversations showed up in chat that would be an issue, but the internet is a public forum anyway and there's no expectation of privacy here.
If copyright law can work against the individual, it should work against the corporation as well. We can't only enforce it against the little people. Enforce it for all or for none.
In this instance they're not even taking copyrighted content. Random forum posts aren't being reproduced; they're just being read to create a derivative work.
The expectation that things are not private is totally different from the expectation that things are not being harvested for profit, though. Harvesting things for profit is transforming the public into the private.
Just because something is public, does it mean the source is irrelevant? Not to mention, there's a lot of stuff that's not meant to be public that is. A computer won't know the difference. Public or not, it's theft to steal the content without credit and monetize it privately.
Let's say I use AI to write a book; in that case, the AI will just grab what someone else wrote.
Let's say I use AI to write code; the AI will just copy someone else's code.
Let's say I use AI to make art; the AI uses someone else's art.
Then, let's say I sell the book, use the code, and make NFTs with the art. Since the AI "did it", I don't have to follow any license or give credit to anyone.
As for using only public information, that should be opt-in, but instead AI companies are just taking the public internet, putting it in a can and selling it, whether you like it or not.