What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?
I've seen this argument so many times and it makes zero sense to me. I don't think by predicting the next word; I think by imagining things both physical and metaphysical, basically running a world simulation in my head. I don't think, "I just said 'predicting', so what's the next likely word to come after it?" That's not even remotely similar to how I think.
Playing chess was the sign of AI, until a computer beat Kasparov; then it suddenly wasn't AI anymore. Then it was Go, then classifying images, then having a conversation, but as each of these was achieved, it stopped being AI and became "machine learning" or "a model".
Language is a method for encoding human thought. Mastery of language is mastery of human thought. The problem is, predictive text heuristics don't have mastery of language, and they cannot predict the desired output.
I thought this was an insightful comment. Language is a kind of 'view' (in the model-view-controller sense) of intelligence. It signifies a thought or meme. But language is imprecise and flawed. It's a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.
Language-based AIs will always carry the biases of the language they speak. I am certain a properly trained bilingual AI would be smarter than a monolingual AI of the same skill level.
But I take your point. This stuff will continue to advance.
But the important argument today isn't over what it could become; it's an attempt to clarify things for people who are confused.
While the current LLMs are an important and exciting step, they're also largely just a math trick (repeated next-word prediction; see the sketch at the end of this comment), and they are not a sign that thinking machines are almost here.
Some people are being fooled into thinking general artificial intelligence has already arrived.
If we give these unthinking LLMs human rights today, we expand corporate control over us all.
These LLMs can't yet take a useful ethical stand, so we shouldn't rely on them that way if we don't want things to go really badly.
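To make "math trick" concrete, here is a minimal sketch of the autoregressive next-word-prediction loop the comments above are arguing about. It uses a toy bigram model; the corpus, function names, and parameters are illustrative assumptions, not anything from the thread or any real LLM's internals:

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for "the data seen in the past" (illustrative only).
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count bigrams: for each word, how often each other word follows it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        """Sample a next word in proportion to how often it followed `prev`."""
        counts = follows[prev]
        words = list(counts)
        weights = [counts[w] for w in words]
        return random.choices(words, weights=weights)[0]

    def generate(start, length=8):
        """Autoregressive loop: each word is predicted from the one before it."""
        out = [start]
        for _ in range(length):
            if out[-1] not in follows:  # no known continuation; stop
                break
            out.append(next_word(out[-1]))
        return " ".join(out)

    print(generate("the"))

A real LLM swaps the count table for a neural network conditioned on the entire context, but the generation loop has the same shape: compute a probability distribution over next tokens, sample one, append it, repeat.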