Are modern LLMs closer to AGI or to next-word predictors? Where do they fall on this graph, with 10 on the x-axis being human intelligence?
Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word prediction.
Also not sure if this graph is the right way to visualize it.
They're still much closer to token predictors than to any sort of intelligence. Even the latest models "with reasoning" still can't answer basic questions most of the time and just end up spitting back an answer lifted straight from some SEO blogspam. If the answer never appeared anywhere in the training dataset, the model is completely incapable of coming up with it.
Such a massive waste of electricity for barely any tangible benefit, but it sure looks cool, and VCs will shower you with cash for it, as they do with all fads.
They are, programmatically, token predictors. They will never get "closer" to intelligence for that very reason. The broader question should be: "can a token predictor simulate intelligence?"
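To make "token predictor" concrete, here's a minimal sketch of the autoregressive loop these models run at inference time. The "model" here is just bigram counts over a made-up corpus (the corpus and names are my own, purely for illustration); a real LLM replaces the count table with a neural net that outputs a probability distribution over its whole vocabulary, but the outer loop is the same: predict the next token, append it, repeat.

```python
import random
from collections import defaultdict, Counter

# Toy stand-in for an LLM: bigram counts over a tiny made-up corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which token follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Sample the next token from P(next | current)."""
    followers = counts[token]
    if not followers:  # dead end: token never appeared mid-corpus
        return None
    tokens, weights = zip(*followers.items())
    return random.choices(tokens, weights=weights)[0]

# "Inference": the autoregressive loop — predict, append, repeat.
out = ["the"]
for _ in range(8):
    nxt = predict_next(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

Everything impressive about a real LLM lives inside predict_next (a transformer conditioning on the entire context instead of one token), but structurally it's still this loop.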