The hype cycle for Google’s fabulous new AI Co-Scientist tool, based on the Gemini LLM, includes a BBC headline about how José Penadés’ team at Imperial College asked the tool about a problem…
I fucking hate that everyone has just accepted calling them AI. They are not AI; they are LLMs, which are nowhere close to the sci-fi idea of AI, and no matter what Sam says, I don't think they are the way to get to AGI.
For clarification, I understand that LLMs are a subset of AI, but it feels like calling squares rectangles: technically true but misleading.
I fucking knew this story was bullshit, and the scientist's shocked email to a random plebe at Google (as if some low-level employee would know, or be allowed to be honest, about AI shitfuckery) was a joke, too. Pretty disappointed with a scientist feeding into this horseshit.
When the story first ran, I posited that if they had emailed their documentation to anyone with a Gmail address, it might have been up for grabs to be sucked down the maw of Google's AI monstrosities.
Finally, even when it comes up with the "right" answer, there is no way to know whether it hallucinated its way there! Which makes getting the "right" answer effectively pointless.
The AI guys really are pulling the exact same cheat every time, aren't they? Thanks to pivot-to-ai for continuing to shine a light on this... I hope the wider press eventually learns about it, too.
No, the typewriters are supposed to be random; this is guided by previous work, so a whole space of outputs becomes extremely unlikely (so, without doing the math, those outputs would show up very rarely if you then ran the infinite-monkey experiment infinitely many times).
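To put that distinction in toy terms (a minimal sketch, nothing to do with the actual story; the vocabulary and weights below are made up purely for illustration): a random typewriter samples every character uniformly, while a guided generator weights its choices by previous work, so whole regions of the output space end up with vanishingly small, or even zero, probability.

import random

# Toy comparison (made-up numbers): a uniform "monkey at a typewriter"
# versus a guided sampler whose weights stand in for "previous work".
VOCAB = ["a", "b", "c", "d"]
GUIDED_WEIGHTS = {"a": 0.70, "b": 0.29, "c": 0.01, "d": 0.00}

def monkey_char():
    # Every character equally likely -- the infinite-monkey setup.
    return random.choice(VOCAB)

def guided_char():
    # Guided sampling: "d" has probability zero, so any output containing
    # it is not just rare but impossible, and "c"-heavy outputs are
    # extremely unlikely even over many runs.
    return random.choices(VOCAB, weights=[GUIDED_WEIGHTS[v] for v in VOCAB])[0]

# Over enough draws the uniform monkey eventually covers the whole space,
# while the guided sampler keeps revisiting the same small corner of it.
print("monkey:", "".join(monkey_char() for _ in range(20)))
print("guided:", "".join(guided_char() for _ in range(20)))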