The problem isn't the rise of "AI" but how we're using it.
If a company wants to create a machine learning model that analyzes metrics on an automated production line and spits out parameters to improve the efficiency of their equipment, that's a great use of the technology. We don't need an LLM to produce a useless summary of what it thinks my question is when all I want is a page of search results.
That's fucking bullshit. The people developing it and shipping it as a product have been very clear and upfront about its uses, and none of them are ethical.
This is a strawman argument. AI is a tool. Like any tool, it's used for negative things and positive things. Focusing on just the negative is disingenuous at best. And focusing on AI's climate impact while completely ignoring the big picture is asinine (the oil industry knew they were the primary cause of climate change more than 60 years ago).
AI has many positive use-cases yet they are completely ignored by people who lack logic and rationality.
Yes, AI is a tool. And the person in the screenshot is criticizing generative, GPT-like and Midjourney-like AI, which has a massive climate impact and almost no useful results.
In your examples, as far as I can see, they always train their own model (supernovae research, illegal fishing) or heavily customize it and use it in close conjunction with people (cancer screenings).
And so I think we're talking about two different things, so I want to clarify:
AI as in a neural-network algorithm that can digest massive amounts of data and give meaningful results: absolutely useful, and I think that as more time passes (and more grifters move on to other fields), more genuinely useful niches and cases will be served by neural nets.
But AI as in we're-gonna-shove-this-bot-down-your-throat GPT-like bots, trained on all the data from all over the internet (mostly Reddit), that struggle with basic questions, hallucinate glue on pizza, generate six-fingered hands, and are close to useless in any use case, is absolutely abysmal and not worth ruining our climate for.
Obviously by AI they mean stuff like ChatGPT. An energy intensive toy where the goal is to get it into the hands of as many paying customers as possible. And you're doing free PR for them by associating it with useful small scale research projects. I don't think most researchers will want to associate their projects with AI now that the term has been poisoned, though they might have to because many bigwigs have been sucked into the hype. The term AI has basically existed nebulously since the beginning of computing, so whether we call one thing or another AI is basically personal taste. Companies like OpenAI have successfully attached their product to the term and have created the strongest association, so ultimately if you say AI in a contemporary context a lot of people are hearing GPT-like.
Yeah, but it doesn't really help that this is a community called "Fuck AI", described as "A place for all those who loathe machine-learning...". It's like saying "I loathe Dijkstra's algorithm". The term machine learning has been used since at least the 1950s, and it involves a lot of elegant mathematics, all of which essentially tries to optimize various functions in various ways. And yet, at least in the places I'm exposed to, people constantly present any instance of machine learning as useless, morally wrong, theft, or ineffective compared to "traditional methods", to the point where I feel uneasy telling people that I'm doing research in that area, since there's so much hate towards the entire field, not just LLMs. It might be because of LLMs, sure, but in my experience, the popular hating of AI is not limited to ChatGPT, corporations and the like.
Let's instead make an honest attempt to de-poison the term, rather than just giving in. It is indeed like saying "all math is bad" because math can be used in bad ways.
But those are the cool, interesting, research-related AIs, not the venture-capital hype LLMs that will gift us AGI any day now with just a bit more training data/compute.
But the reason the planet burns is how we generate energy, not the fact that we use energy. I'm not defending all these fucked-up greedy corporations and their use of AI, machine learning, LLMs or whatever crap they're trying to get us to use whether we want it or not, but our real problem is energy generation, not consumption.
But these are other applications of AI. I think he meant LLMs. That would be like saying "fitting functions has many other applications and can be used for good".
If you're complaining about climate impact, looking at the big picture isn't whataboutism. It's the biggest part of the dataset which is important for anyone who actually cares about the issue.
You're complaining about a single cow fart, then when someone points out there are thousands of cars with greater emissions - you cry whataboutism. It's clear you just want to have a knee-jerk reaction without rationally looking at the problem.
If all fossil fuel power plants were converted to nuclear then tech power consumption wouldn't even matter. Again, it was the oil industry that railroaded nuclear power as being unsafe.
If we had infinite money, plus infinite people with the required skills to design and build nuclear power plants, plus a magical method to build nuclear reactors in 2 months (or even instantly!), plus managed to convince public opinion that nuclear energy is actually fine, then the climate crisis would be only partially solved! Hurray! (This doesn't in and of itself solve food production and consumption, transportation, and other sources of land-use-change emissions; we'd need a whole lot more work on many other subjects.)
In more serious terms (Net Zero research), nuclear isn't perfect, nor is it the be-all and end-all solution, but it IS globally a part of the solution for generating cleaner electricity and cutting emissions. However, since we don't have all the magical things I was listing earlier, its development encounters many roadblocks, and it turns out that wind and solar scale extremely well and integrate pretty well into grids, as long as we're willing to develop the (mostly known) solutions to counter their variability (there are several examples of high integration rates in different settings). The issue is that all of this (both nuclear and renewables) demands a lot of investment in terms of money, of people with the required skill sets, and of educating the public that this is needed and desirable. And that's a MASSIVE challenge.
Which is why, to get to the point, the enormous electricity use of AI is actually a problem because its additional power consumption is keeping fossil power plants running or making them run more when emissions should be declining due to advancements in low carbon electricity production (mostly renewables). In general, it makes reaching Net Zero goals harder.
Most of the people on this website hate AI without even understanding it, and refuse to make an honest assessment of its capabilities, instead pretending that it's nothing more than a good auto correct predictor engine.
Everyone makes their own risk analysis, and a lot of people think that whatever you say it can do, it's not worth the cost overall.
Unfortunately, it's your problem to disentangle useful AI from predatory AI. It would probably make sense to just call it something else (a neural network, a new programming language, a new data analysis model), but then how would you trick investors?
bullshit take. OP didn't post a screenshot about AI, it's about LLMs. They are absolutely doing more harm than good.
And the examples you are quoting are also highly misleading at best:
science assistance: that's machine learning, not AI
helping doctors? yes, again, machine learning. Expedite screening rates? That's horribly dangerous and will get people killed. What it could do is scan medical data that has already been seen by a qualified doctor / radiologist / scientist, and re-submit them for a second opinion in case it "finds" a pattern.
powering robots that have moving parts: that's where you want actual AI, logical rules from sensor to action, putting deep learning or LLM bullshit in there is again fucking dangerous and will get people killed
helping to catch illegal fishing / etc: again, deep learning, not AI.
You seem to be arguing against another strawman. OP didn't say they only dislike LLMs; the sub is even called "Fuck AI". And this thread is talking about AI in general.
Machine learning is a subset of AI and always has been. Also, LLMs are a subset of machine learning. You are trying to split hairs, or at least pull a "no true Scotsman" on the above post.
We can do all those things without AI. Why do you care how fast it happens? If we could cure cancer twice as fast by grinding up baby animals would you do it?
Probably not the best look to imply you want cancer treatment research to slow down simply because you don't like the tool used to do it. There's a lot of shit wrong with our current implementations of AI, but let's not completely throw the baby out with the bathwater, eh?
Yes. I love animals, but if my kid had a cold and I knew a puppy's breathing caused it, I would drown that puppy myself. Let alone finding a cure for fucking cancer.
That being said, AI isn't doing that, and even if it were, I wouldn't trust the results.
That's not the entire picture: we are destroying our planet to generate bad art, fake titties, and to search a little bit faster, with the same chance of being entirely wrong as just googling it.
In the grand scheme of things, I suspect we actually don't have that much power to stop the industrial machine.
Even if every person on here, on Reddit, and every left-leaning social media revolted against the powers that be right now, we wouldn't resolve anything. Not really. They'd send the military out, shoot us down (possibly quite literally), then go back to business as usual.
Unless there's a business incentive to change our ways, capitalism will not follow; instead it'll do everything it can to resist that change.
By the time there is enough economic incentive, it'll be far too late to be worth fixing.
I mean, this isn't just a social media thing. It was part of the reason there was a writer's strike in Hollywood and they did manage to accomplish something. I don't see why protests/strikes/politics would be useless here.
You're right, but I was making a point, as social media is most often where you hear people calling for revolution.
I'll agree that strikes can work, especially employment strikes - but that's usually because there's a specific, private entity to target, an employer to back into the metaphorical corner.
As far as protesting/striking against the system goes, you need only look at the strikes and protests relating to Palestine to know what kind of force such a revolutionary strike would be met with.
A lot of people on Lemmy are expecting the glorious revolution to happen any time now and then we will live in whatever utopia they believe makes a utopia. Even if something like that happens, and I'm less certain by the day that it ever will, the result isn't necessarily any better than what came before. And often worse.
It'll almost certainly be worse. When revolutions happen, the people who seize power are the ones who were most prepared, organized and willing to exercise violence. Does that at all sound like leftists in the West?
See, the thing is, dead people don't buy as many things as live ones, so extreme capitalism doesn't want to kill you directly either. Slow poison is fine if it's profitable enough, but a fast, intentional bullet to their main customer base? Not so much.
> Even if every person on here, on Reddit, and every left-leaning social media revolted against the powers that be right now, we wouldn't resolve anything. Not really. They'd send the military out, shoot us down (possibly quite literally), then go back to business as usual.
What are your thoughts on 2A and private gun ownership?
The US military will always have more firepower than your group of armed civilians. Maybe guns are good for defending against other armed civilians, but don't act like you could take on the military.
I mean, it also made the first image of a black hole, so there's that part.
I'd also flag that you shouldn't use one of these to do basic sums, but in fairness the corporate shills are so desperate to find a sellable application that they've been pushing that sort of use super hard, so on that one I blame them.
Like, humans aren't really the "smartest" animals. We're just the best at language and tool use. Other animals routinely demolish us in everything else measured on an IQ test.
Pigeons get a bad rap for being stupid, but their brains are just different from ours. Their image and pattern recognition is so insane that they can recognize that words they've never seen aren't gibberish, just by letter structure.
We weren't even trying to get them to do it. Researchers were just introducing new words and expecting the pigeons to have to learn them, but the pigeons could already tell, despite never having seen the word before.
Why the hell are we jumping straight to human consciousness as a goal when we don't even know what human consciousness is? It's like picking up Elden Ring and going straight for the final boss on your very first time playing the game. Maybe you'll eventually beat it. But why wouldn't you just start from the beginning and work your way up as the game gets harder?
We should at least start with pigeons and get an artificial pigeon and work our way up.
Like, that old reddit repost about pigeon guided bombs, that wasn't a hail Mary, it was incredibly effective.
Who's jumping to human consciousness as a goal? LLMs aren't human consciousness. The original post is demagoguery, but it's not misrepresenting the mechanics. Chatbots already have more to do with your pigeons than with human consciousness.
I hate that the stupidity about AGI some of these techbros are spouting is being taken at face value by critics of the tech.
Do they? I guess I haven't encountered that much. I think about messenger pigeons in wars and such...
Disgusting? Sure, I've heard that a lot. But I haven't heard 'stupid' really as a word to describe pigeons.
Anyway, I don't disagree with you otherwise. My dogs are super stupid in my perception but I know which one of us would be better at following a trail after someone had left the scene. (Okay, maybe Charlie would still be too stupid to do that one, but Ghost could do it).
Something that blows my mind about dogs is that their sense of smell is so good that, when combined with routine, they use it to track time i.e. if their human leaves the house for 8 hours most days to go to work, the dog will be able to discern the difference between "human's smell 7 hours after they left" and "human's smell 8 hours after they left", and learn that the latter means their human should be home soon. How awesome is that?!
In 2021, Google’s total electricity consumption was 18.3 TWh, with AI accounting for 10%–15% of this total.
Let's call it 10% to make it seem as energy-efficient as possible. That's 1.83 TWh a year, or about 5 GWh a day. An average US home uses 10.5 MWh a year. You could power 476 US homes for a year, and still have some energy left over, with the amount of energy Google uses on their AI-powered search in a single day.
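The arithmetic above can be sanity-checked in a few lines. Assumed figures are the ones in the comment (18.3 TWh/yr total, a 10% AI share, 10.5 MWh/yr per average US home); nothing here is an official measurement.

```python
# Back-of-the-envelope check of the comment's numbers.
total_twh_per_year = 18.3   # Google's 2021 electricity use (stated above)
ai_share = 0.10             # low end of the 10%-15% AI share

ai_twh_per_year = total_twh_per_year * ai_share   # 1.83 TWh/yr on AI
ai_mwh_per_day = ai_twh_per_year * 1e6 / 365      # ~5,013 MWh/day (~5 GWh)

home_mwh_per_year = 10.5                          # average US home, per year
homes_powered_for_a_year = ai_mwh_per_day / home_mwh_per_year

print(f"{ai_twh_per_year:.2f} TWh/yr on AI")
print(f"{ai_mwh_per_day:,.0f} MWh per day")
print(f"= a year of power for {homes_powered_for_a_year:.0f} US homes")
```

One day of that AI energy budget works out to roughly 477 home-years, which matches the "476 homes, with some energy left over" framing above.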
But then the problem is how Google uses AI, not AI itself. I can run an LLM locally for my own purposes without it consuming crazy amounts of energy.
So blaming AI is absurd; we should blame OpenAI, Google, Amazon... This whole hatred for AI is misplaced when it's not the real source of the problem. We should concentrate on blaming, and ideally punishing, companies for this kind of use (abuse, more like) of energy. Energy usage also isn't an issue in itself, as long as we use adequate energy sources. If companies started deploying huge solar panel fields on top of their buildings, parking lots and whatnot to cover part of their energy use, we could all end up better off than before.
Now it's being given the system prompt "you can use the command MATH for mathematical questions, and SEARCH to look up up-to-date information from the internet".
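A minimal sketch of the wrapper loop such a system prompt implies: the model's reply is scanned for a command prefix and routed to a tool, and the tool's answer replaces the model's guess. Only the MATH/SEARCH command names come from the quoted prompt; the dispatcher, the tiny arithmetic evaluator, and the stubbed search are all illustrative, not any vendor's actual implementation.

```python
import ast
import operator

# Map AST operator nodes to real arithmetic, so we never call eval() on model output.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str):
    """Evaluate a basic arithmetic expression (+, -, *, /, parentheses)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def dispatch(model_output: str) -> str:
    """Route a model reply like 'MATH 2*(3+4)' to the matching tool."""
    if model_output.startswith("MATH "):
        return str(safe_eval(model_output[5:]))
    if model_output.startswith("SEARCH "):
        # Stub: real code would call a search API and feed results back to the model.
        return f"<results for {model_output[7:]!r}>"
    return model_output  # plain text: no tool call needed

print(dispatch("MATH 2*(3+4)"))  # → 14
```

The point of the pattern is exactly the one the comment makes: the model doesn't do the sum itself, it emits a command and ordinary deterministic code does the work.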