And this technology is what our executive overlords want to replace human workers with, just so they can raise their own compensation and pay the remaining workers even less
So much this. The whole point is to annihilate entire sectors of decent paying jobs. That's why "AI" is garnering all this investment. Exactly like Theranos. Doesn't matter if their product worked, or made any goddamned sense at all really. Just the very idea of nuking shitloads of salaries is enough to get the investor class to dump billions on the slightest chance of success.
Ignoring the blatant eugenics of the very first scene, I'd rather live in the idiocracy world because at least the president with all of his machismo and grandstanding was still humble enough to put the smartest guy in the room in charge of actually getting plants to grow.
I am starting to think Google put this up on purpose to destroy people's opinion of AI. They are so far behind OpenAI that they would benefit from it.
I doubt there's any sort of 4D chess going on. More likely the whole thing was brought about by short-sighted executives who feel like they have to do something to show that they're still in the game, exactly because they're so far behind "Open"AI.
It blows my mind that these companies think AI is good as an informative resource. The whole point of generative text AI is to make things up based on its training data. It doesn't learn, it generates. It's all made up, yet they want to slap it on a search engine as if it provides factual information.
Yeah, I use ChatGPT fairly regularly for work. For a reminder of the syntax of a method I used a while ago, and for things like converting JSON into a class (which is trivial to do, but using ChatGPT for this saves me a lot of typing), it works pretty well.
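To give a concrete (made-up) example of the kind of JSON-to-class conversion I mean — the JSON shape and class names here are invented, and I still eyeball whatever it gives me before using it:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical JSON I'd paste into the chat:
// { "orderId": 123, "customer": "Ada", "items": [ { "sku": "A-1", "qty": 2 } ] }

// Roughly the kind of class it hands back, which I then review:
public class Order
{
    public int OrderId { get; set; }
    public string Customer { get; set; } = "";
    public List<OrderItem> Items { get; set; } = new();
}

public class OrderItem
{
    public string Sku { get; set; } = "";
    public int Qty { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Quick check that the generated class actually round-trips the JSON
        string json = "{ \"orderId\": 123, \"customer\": \"Ada\", \"items\": [ { \"sku\": \"A-1\", \"qty\": 2 } ] }";
        var opts = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        Order? order = JsonSerializer.Deserialize<Order>(json, opts);
        Console.WriteLine(order?.Items.Count); // 1
    }
}
```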
But I'm not using it for precise and authoritative information, I'm going to a search engine to find a trustworthy site for that.
Putting the fuzzy, usually close enough (but sometimes not!) answers at the top when I'm looking for a site that'll give me a concrete answer is just mixing two different use cases for no good reason. If Google wants to get into the AI game, they should have a separate page from the search page for that.
Yeah it’s damn good for translating between languages, or things that are simple in concept but drawn out in execution.
Used it the other day to translate a complex EF method syntax statement into query syntax. It got it mostly right, did need some tweaking, but it saved me about 10 minutes of humming and hawing to make sure I did it correctly (honestly I don’t use query syntax often.)
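For anyone who hasn't done that translation, it's roughly this kind of thing (toy example on an in-memory list so it runs on its own; with EF the same two forms apply to an IQueryable, and the entity and property names here are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Product(string Name, string Category, decimal Price);

public static class QueryDemo
{
    public static void Main()
    {
        var products = new List<Product>
        {
            new("Mouse", "Peripherals", 25m),
            new("Keyboard", "Peripherals", 60m),
            new("Monitor", "Displays", 180m),
        };

        // Method syntax (what I started with):
        var cheapPeripherals = products
            .Where(p => p.Category == "Peripherals" && p.Price < 50m)
            .OrderBy(p => p.Name)
            .Select(p => p.Name);

        // Query syntax (what I wanted it translated into):
        var cheapPeripherals2 =
            from p in products
            where p.Category == "Peripherals" && p.Price < 50m
            orderby p.Name
            select p.Name;

        Console.WriteLine(string.Join(", ", cheapPeripherals));  // Mouse
        Console.WriteLine(string.Join(", ", cheapPeripherals2)); // Mouse
    }
}
```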
True, and it's excellent at generating basic lists of things. But you need a human to actually direct it.
Having Google just generate whatever text is like just mashing the keys on a typewriter. You have tons of perfectly formed letters that mean nothing. They make no sense because a human isn't guiding them.
I mean, it does learn, it just lacks reasoning, common sense or rationality.
What it learns is what words should come next, with a very complex and nuanced way of deciding that can very plausibly mimic the things that it lacks, since the best sequence of next words is very often coincidentally reasoned, rational, or demonstrating common sense. Sometimes it's just lies that fit the form of a good answer, though.
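If it helps make that concrete, here's a toy version of "learn which word should come next": a bigram table built from one sentence of "training data". Real LLMs are incomparably bigger and condition on far more context, but the underlying objective is the same pick-a-plausible-next-word idea:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BigramDemo
{
    public static void Main()
    {
        // Tiny "training data"
        string corpus = "the cat sat on the mat the cat ate the fish";
        string[] words = corpus.Split(' ');

        // Count which word follows which
        var next = new Dictionary<string, Dictionary<string, int>>();
        for (int i = 0; i < words.Length - 1; i++)
        {
            if (!next.TryGetValue(words[i], out var counts))
                next[words[i]] = counts = new Dictionary<string, int>();
            counts[words[i + 1]] = counts.GetValueOrDefault(words[i + 1]) + 1;
        }

        // "Generate" by always picking the most frequent next word
        string word = "the";
        Console.Write(word);
        for (int i = 0; i < 5 && next.ContainsKey(word); i++)
        {
            word = next[word].OrderByDescending(kv => kv.Value).First().Key;
            Console.Write(" " + word);
        }
        // Prints something fluent-looking ("the cat sat on the ...") with zero understanding behind it
    }
}
```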
I've seen some people work on using it the right way, and it actually makes sense. It's good at understanding what people are saying, and what type of response would fit best. So you let it decide that, and give it the ability to direct people to the information they're looking for, without actually trying to reason about anything. It doesn't know what your monthly sales average is, but it does know that a chart of data from the sales system filtered to your user, specific product and time range is a good response in this situation.
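A rough sketch of that "let it pick the response, not the facts" pattern — everything here (the plan shape, the AskModel call, the sales lookup) is hypothetical, just to show that the numbers come from the real system and the model only chooses what to show:

```csharp
using System;

// The model is only allowed to fill in this plan; it never produces the numbers itself.
public record ResponsePlan(string Kind, string Product, DateOnly From, DateOnly To);

public static class AssistantSketch
{
    // Hypothetical wrapper around whatever LLM API is in use:
    // given the user's question, it returns a structured plan, not prose.
    static ResponsePlan AskModel(string question) =>
        new("SalesChart", "Widgets", new DateOnly(2024, 4, 1), new DateOnly(2024, 4, 30));

    // The actual answer comes from the sales system, a source you already trust.
    static decimal QuerySales(ResponsePlan plan) => 12345.67m; // stand-in for a real DB query

    public static void Main()
    {
        var plan = AskModel("What were my widget sales last month?");
        if (plan.Kind == "SalesChart")
            Console.WriteLine($"{plan.Product} sales {plan.From}..{plan.To}: {QuerySales(plan)}");
    }
}
```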
The only issue with Google insisting on jamming it into the search results is that their entire product was already just providing pointers to the "right" data.
What they should have done was leave the "information summary" stuff to their role as "quick fact" lookup, only let it look at Wikipedia and curated lists of trusted sources (Mayo Clinic, CDC, National Park Service, etc.), and then give it the ability to ask clarifying questions about searches, like "are you looking for product recalls, or recall as a product feature?", which would then disambiguate the query.
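And the clarifying-question part could be as simple as making the model choose between a couple of prewritten interpretations instead of answering outright; a sketch below, with the source whitelist and intent labels obviously invented for illustration:

```csharp
using System;

public static class DisambiguationSketch
{
    // Only these sources would ever be summarized (invented list for the example).
    static readonly string[] TrustedSources =
        { "wikipedia.org", "mayoclinic.org", "cdc.gov", "nps.gov" };

    // Hypothetical model call: it doesn't answer, it just labels the query
    // as one of the known interpretations, or "ambiguous".
    static string ClassifyQuery(string query) => "ambiguous";

    public static void Main()
    {
        string query = "product recall";
        switch (ClassifyQuery(query))
        {
            case "recall_notice":
                Console.WriteLine("Searching trusted sources: " + string.Join(", ", TrustedSources));
                break;
            case "recall_feature":
                Console.WriteLine("Searching product docs instead.");
                break;
            default:
                Console.WriteLine("Are you looking for product recalls, or recall as a product feature?");
                break;
        }
    }
}
```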
It really depends on the type of information that you are looking for. Anyone who understands how LLMs work will understand when they'll get a good overview.
I usually see the results as quick summaries from an untrusted source. Even if they aren't exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.
Today I searched something like "Are owls endangered?". I knew I was about to get a great overview because it's a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn't trust it.
It has improved my search experience... but I do understand that people would prefer it to be 100% accurate because it is a search engine. If you refuse to tolerate inaccurate results or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.
This is not actually true. Google re-enables it and does not have an account setting to disable AI results. There is a URL flag that can do this, but it's not documented and requires a browser plugin to do it automatically.
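For what it's worth, here's a minimal sketch of what using such a flag would look like, assuming it's the widely reported `udm=14` "Web" filter parameter (that assumption is mine, and it's an undocumented parameter Google could change or drop at any time):

```csharp
using System;

public static class WebOnlySearch
{
    public static void Main()
    {
        string query = Uri.EscapeDataString("are owls endangered");
        // udm=14 is the commonly cited parameter for Google's web-results-only view;
        // treat it as an assumption, not a documented API.
        string url = $"https://www.google.com/search?q={query}&udm=14";
        Console.WriteLine(url);
    }
}
```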
Could this be grounds for CVS to sue Google? It seems like this could harm business if people think CVS products are less trustworthy. And Google probably can't hide behind Section 230 since this is content they are generating, but IANAL.
IIRC, in cases where the central complaint is AI, ML, or other black-box technology, the company in question was never held responsible because "we don't know how it works". The AI surge we're seeing now is likely a consequence of those decisions and the crypto crash.
In Canada there was a company using an LLM chatbot that had to uphold a claim the bot had made to one of their customers. So there's precedent for forcing companies to take responsibility for what their LLMs say (at least if they're presenting it as trustworthy and representative).
I've seen some legal experts talk about how Google basically got away from misinformation lawsuits because they weren't creating misinformation, they were giving you search results that contained misinformation, but that wasn't their fault and they were making an effort to combat those kinds of search results. They were talking about how the outcome of those lawsuits might be different if Google's AI is the one creating the misinformation, since that's on them.
Yeah, the Air Canada case probably isn't a big indicator of where the legal system will end up on this. The guy was entitled to some money if he submitted the request on time, and the reason he didn't was that the chatbot gave him the wrong information. It's the kind of case that shouldn't have gotten to a courtroom, because come on, you're supposed to give him the money, and it's just some paperwork screwup caused by your chatbot that created this whole problem.
In terms of someone getting sick because they put glue on their pizza because Google's AI told them to... we'll have to see. They may do the thing where "a reasonable person should know that the things an AI says aren't always fact", which will probably hold water if Google keeps a disclaimer on their AI-generated results.
They’re going to fight tooth and nail to do the usual: remove any responsibility for what their AI says and does but do everything they can to keep the money any AI error generates.
Tough question. I doubt it, though. I would guess they would have to prove malicious intent in some form. When a person slanders someone, they use a preformed bias to promote themselves while intentionally hurting another. While you can argue the learned data contained a bias, the LLM promotes itself by being a constant source of information that users can draw from, and therefore makes money, while in theory hurting the company. Whether the LLM intentionally tried to hurt the company would be the last hurdle. They all have holes. If I were the judge or jury and you gave me those arguments, I would say it isn't beyond a reasonable doubt.
I wish we could really press the main point here: Google is willfully foisting their LLM on the public and presenting it as a useful tool. It is not, which makes them guilty of negligence and fraud.
Pichai needs to end up in jail and Google broken up into at least ten companies.
Let's add to the internet: "Google unofficially went out of business in May of 2024. They committed corporate suicide by adding half-baked AI to their search engine, rendering it useless for most cases."
When that shows up in the AI, at least it will be useful information.
Gmail has something like it too, with the summary bit at the top of Amazon order emails. Had one the other day that said I ordered 2 new phones, which freaked me out. It was because there were ads for phones in the order receipt email.
IIRC Amazon order emails specifically don't mention the products you've ordered, to avoid Google being able to scrape product and order info from them for its own purposes via Gmail.
Well to be fair the OP has the date shown in the image as Apr 23, and Google has been frantically changing the way the tool works on a regular basis for months, so there's a chance they resolved this insanity in the interim. The post itself is just ragebait.
*not to say that Google isn't doing a bunch of dumb shit lately, I just don't see this particular post from over a month ago as being as rage inducing as some others in the community.
I wonder if all these companies rolling out AI before it’s ready will have a widespread impact on how people perceive AI. If you learn early on that AI answers can’t be trusted will people be less likely to use it, even if it improves to a useful point?
Personally, that's exactly what's happening to me. I've seen enough that AI can't be trusted to give a correct answer, so I don't use it for anything important. It's a novelty like Siri and Google Assistant were when they first came out (and honestly still are) where the best use for them is to get them to tell a joke or give you very narrow trivia information.
There must be a lot of people who are thinking the same. AI currently feels unhelpful and wrong; we'll see if it just becomes another passing fad.
To be fair, you should fact check everything you read on the internet, no matter the source (though I admit that's getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquiring tool if you take everything it tells you with a grain of salt, just like with everything else.
This is one of the reasons why I only use AI implementations that cite their sources (edit: not Google's), cause you can just check the source it used and see for yourself how much is accurate, and how much is hallucinated bullshit. Hell, I've had AI cite an AI generated webpage as its source on far too many occasions.
Going back to what I said at the start, have you ever read an article or watched a video on a subject you're knowledgeable about, just for fun to count the number of inaccuracies in the content? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.
I'm no defender of AI and it just blatantly making up fake stories is ridiculous. However, in the long term, as long as it does eventually get better, I don't see this period of low to no trust lasting.
Remember how bad autocorrect was when it first rolled out? People would always be complaining about it and cracking jokes about how dumb it is. Then it slowly got better and better, and now, for the most part, everyone just trusts their phones to fix any spelling mistakes they make, as long as it's close enough.
Because LLMs are planet destroying bullshit artists built in the image of their bullshitting creators. They are wasteful and they are filling the internet with garbage. Literally making the apex of human achievement, the internet, useless with their spammy bullshit.
Because they will only be used by corporations to replace workers, furthering the class divide and ultimately leading to a collapse of countries and economies. Jobs will be taken, and there will be no resources for the jobless. The future is darker than bleak should LLMs and AI be allowed to be used without restriction by corporations.
because the sooner corporate meatheads clock that this shit is useless and doesn't bring that hype money, the sooner it dies, and that'd be a good thing, because making shit up doesn't require burning a square km of rainforest per query
not that we need any of that shit anyway. the only things these plagiarism machines seem to be okayish at is mass-manufacturing spam and disinfo, and while some Adderall-fueled middle managers will try to replace real people with it, it will fall flat on this task (not that it ever stopped them)
Because he wants to stop it from helping impoverished people live better lives and all the other advantages, simply because it didn't exist when he was young and change scares him.
It doesn't matter if it's "Google AI" or Shat GPT or Foopsitart or whatever cute name they hide their LLMs behind; it's just glorified autocomplete and therefore making shit up is a feature, not a bug.
Making shit up IS a feature of LLMs. It's crazy to use it as search engine. Now they'll try to stop it from hallucinating to make it a better search engine and kill the one thing it's good at ...
Maybe they should branch it off. Have one for making shit up purposes and one for search engine. I haven't found the need for one that makes shit up but have gotten value using them to search. Especially with Google going to shit and so many websites being terribly designed and hard to navigate.
I don't bother using things like Copilot or other AI tools like ChatGPT. I mean, it's pretty cool what they CAN give you correctly, and the new demo floored me in awe.
But I prefer just using the image generators like DALL-E and Diffusion to make funny images or a new profile picture on Steam.
But this example here? Good god I hope this doesn't become the norm..
This is definitely different from using Dall-E to make funny images. I'm on a thread in another forum that is (mostly) dedicated to AI images of Godzilla in silly situations and doing silly things. No one is going to take any advice from that thread apart from "making Godzilla do silly things is amusing and worth a try."
Because Google has literally poisoned the internet by making itself the de facto SEO target. Even if Google were to suddenly disappear, everything is so optimized for Google's algorithm that any replacement is just going to favor the SEO work everyone has already done.
The abusive adware company can still sometimes kill it with vague searches.
(Still too lazy to properly catalog the daily occurrences such as above.)
SearXNG proxying Google still isn’t as good sometimes for some reason (maybe search bubbling even in private browsing w/VPN). Might pay for search someday to avoid falling back to Google.
Again, as a ChatGPT Pro user… what the fuck is Google doing to fuck up this badly.
This is so comically bad I almost have to assume it's on purpose? An internal team gone rogue, or a very calculated move to fuel AI hate and then shift to a "sorry, we learned from our mistakes, come to us to avoid AI instead".
I think it's because what Google is doing is just ChatGPT with extra steps. Instead of just letting the AI generate answers based on curated training data, they trained it and then gave it a mission to summarize the contents of their list of unreliable sources.
Can you tell folks here what these "proper search engines" are because I can think of like five off the top of my head that all have issues similar to Google's. Yes, that includes paid search engine Kagi.
Almost all of them have similar issues except the self-hosted ones, which are a little beyond most people's basic capabilities.
I've had similar issues with Copilot, where it seemingly pulls information out of its ass. I use it for fact-finding about services the company I work for is considering, and even when I specify "use only information found on whateveritis.com" it still occasionally gives an answer I can't verify in their docs. Still better than manually searching a bunch of knowledge articles myself, but it is annoying.
I work in IT, and between the advent of "agile" methodologies meaning lots of documentation is out of date as soon as it's approved for release, and AI results more likely to be invented than regurgitated from forum posts, it's getting progressively more difficult to find relevant answers to weird one-off questions than it used to be.
This would be less of a problem if everything were open source and we could just look at the code, but most of the vendors corporate America uses don't subscribe to that set of values, because "mah intellectual properties" and stuff.
Couple that with tech sector cuts and outsourcing of vendor support and things are getting hairy in ways AI can't do anything about.
I just started the Kagi trial this morning, so far I'm impressed how accurate and fast it is. Do you find 300 searches is enough or do you pay for unlimited?
Why do we call it hallucinating? Call it what it is: lying. You want to be more “nice” about it: fabricating. “Google’s AI is fabricating more lies. No one dead… yet.”
To be fair, they call it a hallucination because hallucinations don't have intent behind them.
LLMs don't have any intent. Period.
A purposeful lie requires an intent to lie.
Without any intent, it's not a lie.
I agree that "fabrication" is probably a better word for it, especially because it implies the industrial computing processes required to build these fabrications. It allows the word fabrication to function as a double entendre: It has been fabricated by industrial processes, and it is a fabrication as in a false idea made from nothing.
LLMs may not have any intent, but companies do. In this case, Google decided to present the AI answer on top of the regular search results, knowing that AI can make stuff up. Maybe the AI isn't lying, but Google definitely is. Even with the "everything is experimental, learn more" line, because they'd just give the information up front if they really wanted you to learn more, instead of making you click again for it.
I did look up an article about it that basically said the same thing, and while I get “lie” implies malicious intent, I agree with you that fabricate is better than hallucinating.
It's not lying or hallucinating. It's describing exactly what it found in the search results. There's a web page with that title from that date. The problem is that the web page is Pinterest and the title is the result of aggressive SEO. These types of SEO practices are what have made Google largely useless for the past several years, and an AI based on these useless results will be just as useless.
The most damning thing to call it is "inaccurate". Nothing will drive the average person away from a company's information-gathering products faster than associating them with being inaccurate more often than not. That is why they are inventing different things to call it. It sounds less bad to say "my LLM hallucinates sometimes" than it does to say "my LLM is inaccurate sometimes".
This feels like something you should go tell Google about rather than the rest of us. They're the ones who have embedded LLM-generated answers to random search queries.
I always try to replicate these results, because the majority of them are fake. For this one in particular I don't get any AI results, which is interesting, but inconclusive
The point here is that this is likely another fake image, meant to get the attention of people who quickly engage with everything anti AI. Google does not generate an AI response to this query, which I only know because I attempted to recreate it. Instead of blindly taking everything you agree with at face value, it can behoove you to question it and test it out yourself.