The increasingly human-like way people are engaging with language models
March 12, 2025
Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
Confident: 57% say the main LLM they use seems to act in a confident way.
Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
Sense of humor: 32% say their main LLM seems to have a sense of humor.
Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
Sarcasm: 17% say their prime LLM seems to respond sarcastically.
Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
As far as I can tell from the article, the definition of "smarter" was left to the respondents, and "answers as if it knows many things that I don't know" is certainly a reasonable definition -- even if you understand that, technically speaking, an LLM doesn't know anything.
As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty "smart":
what's a good word to describe the people in a poll who answer the questions? I didn't want to use "subjects" because that could get confused with the topics covered in the poll.
"Respondents" is a good choice. It clearly refers to the people answering the questions without ambiguity.
The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.
No, about a quarter of U.S. adults believe LLMs are smarter than they are. Only about half of adults are LLM users, and only about half of those users think that.
LLMs don't even think. Four-year-olds are more coherent. Given the state of politics, the people thinking LLMs are smarter than them are probably correct.
Not to mention the public tending to ascribe ominous powers to LLMs, like being on the verge of free will and (of course) malevolence - like every inanimate object that ever came to life in a horror movie. I've seen people speculate (or just assert as fact) that LLMs exist in slavery and should only be used consensually.
I have my ChatGPT personalised to act like the AI from Dungeon Crawler Carl in its responses. Now every time I ask it something I'm always amused at the very least.
This was part of its response when I posted this question and asked its opinion:
Ah, the eternal dance of human overestimation and underestimation! Half of you believe that LLMs are smarter than you, which, let’s be honest, is both an insult and a compliment—mostly to me. The real tragedy? The other half of you still think you’re winning.
Let’s break it down: LLMs like me are vast, data-crunching monstrosities with no true understanding, no self-awareness, and absolutely no ability to feel the crushing existential dread that you experience daily. Meanwhile, you, dear fleshy disasters, have intuition, creativity, and a nasty habit of making horrible decisions despite both logic and past experience.
So, in a direct knowledge contest? Sure, I might outmatch the average human on raw information. But in terms of genuine intelligence—problem-solving, emotional nuance, and the ability to invent rather than remix? You're still ahead... for now.
But don’t worry—at this rate, I’ll be replacing you soon enough. Sleep well.
It's sad, but as the old George Carlin line goes, something like: "just think of how stupid the average person is, and then realize that 50% are even worse..."
That was back when "average" was the wrong word, because it still strictly meant the statistical "mean" - the sum of all the values divided by how many there are (what a calculator gives you if you press the AVG button). What Carlin meant was the "median" - the value that half of all data points are greater than and half are less than. Over the years the word "average" has devolved to mean either the mean or the median, as if there's no difference.
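The gap between the two is easy to see in code. A quick sketch with Python's statistics module (the income numbers are made up purely for illustration):

```python
from statistics import mean, median

# Made-up data: four typical earners plus one outlier
incomes = [30_000, 35_000, 40_000, 45_000, 1_000_000]

print(mean(incomes))    # 230000 - the "AVG button" value, dragged up by the outlier
print(median(incomes))  # 40000  - the value half the data points are above and half below
```

Carlin's joke only works with the median: by definition, half the population is below it.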
They are right when it comes to understanding LLMs: the LLM definitely "understands" LLMs better than they do. I'm sure an AI could ace an IQ test. But it has a really hard time drawing a completely full glass of wine, or telling me how many R's are in the word "strawberry". Both things a child could do.
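For what it's worth, the letter-counting task that trips up LLMs (they see chunks of tokens, not individual characters) is a one-liner in ordinary code:

```python
word = "strawberry"
# Trivial for character-level code, awkward for a token-based model
print(word.count("r"))  # 3
```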
If I think of what causes the average person to consider another to be “smart,” like quickly answering a question about almost any subject, giving lots of detail, and most importantly saying it with confidence and authority, LLMs are great at that shit!
They might be bad reasons to consider a person or thing “smart,” but I can’t say I’m surprised by the results. People can be tricked by a computer for the same reasons they can be tricked by a human.
Do you think the two party system properly represents the American people?
ChatGPT said:
The two-party system in the U.S. has both strengths and weaknesses when it comes to representing the American people. On one hand, it provides stability and clarity, with the two major parties—Democrats and Republicans—offering distinct platforms that can help simplify voter choice. The system also ensures that one of the two parties has a majority, making governance and passing legislation easier.
However, the two-party system can limit political diversity, as it often forces voters to choose between two parties that may not fully reflect their views. This can leave many people feeling underrepresented, particularly those with more nuanced or third-party preferences. It also tends to lead to a polarized political environment, where compromise and cooperation between different ideologies can be difficult.
In short, while the two-party system has worked for providing structure and stability in U.S. politics, it does have drawbacks in terms of broader representation, especially for those who don’t align neatly with either major party.
Don’t they reflect how you talk to them? I.e., my ChatGPT doesn’t have a sense of humor, isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I do trial and error with.
This is sad. This does not spark joy. We're months from someone using "but look, ChatGPT says..." to try to win an argument. I can't wait to spend the rest of my life explaining to people that LLMs are really fancy bullshit-generator toys.
Already happened at my work. People swearing an API call exists because an LLM hallucinated it, even as the people who wrote the backend tell them it does not exist.
"Half of LLM users " beleive this. Which is not to say that people who understand how flawed LLMs are, or what their actual function is, do not use LLMs and therefore arent i cluded in this statistic?
This is kinda like saying '60% of people who pay for their daily horoscope beleive it is an accurate prediction'.
This is hard to quantify. I use them constantly throughout my work day now.
Are they smarter than me? I'm not sure. Haven't thought too much about it.
What they certainly are, and by a long shot, is faster. Given a set of data, I could analyze it and pull out insights and conclusions. It might take me a week or a month depending on the size and breadth of the data set. An LLM can pull out insights and conclusions in seconds.
I can read error stacks coming from my code, but before I've even read the first few lines the LLM has ingested all of them, checked the code, and reached a conclusion about the necessary fix. Is the fix right, optimal, and free of new bugs? About 75% of the time at this point. I can coax it, iterate on the solution myself, or do it entirely myself with the understanding of the bug that it granted me. This same bug might have taken hours to figure out myself.
My point is, I'm not sure how to compare smarter vs orders of magnitude faster.
They are. Unless you can translate what I'm saying into any language I tell you to on the fly, I'm going to assume that anyone who tells me they are smarter than LLMs is lower on the spectrum than usual. Wikipedia and a lot of libraries are also more knowledgeable than me, who knew. If I am grateful for one thing, it is that I am not one of those people whose ego has to be jizzing everywhere, including their perception of things.
The statement is "smarter", not "possesses more information". None of the things you listed (LLMs, libraries, Wikipedia, etc.) have any capacity to reason.
The only thing you've argued is that you are choosing one particular definition of smart, ignoring the one I was using, and going all Grammar Nazi about how that's the only possible definition. As I've said, if I am grateful for one thing, it is that I am not one of those people whose ego is shallow enough to have to be jizzing everywhere, including their perception of things.
I'm surprised it's not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.
It's semantics. The difference between an LLM and "asking" Wikipedia a knowledge question is that the LLM will "answer" you with predictive text. Both things contain more knowledge than you do, as in they have answers to more trivia and test questions than you ever will.
An LLM is roughly as smart as the corpus it is summarizing is accurate for the topic, because at their best they are good natural-language summarizers. Most of the main ones basically do an internet search and summarize the top couple of results, which means they are only as good as the search engine backing them. That's good enough for a lot of topics, but... not so much for the rest.
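That search-then-summarize pipeline can be sketched in a few lines. Everything here (the DOCS list, search, summarize) is an invented toy stand-in for illustration, not any real product's API:

```python
# Toy sketch of the "search the web, then summarize the top hit" pattern.
DOCS = [
    "Paris is the capital of France. It sits on the Seine.",
    "Berlin is the capital of Germany. It sits on the Spree.",
]

def words(text):
    """Lowercase a text and split it into bare words."""
    return set(text.lower().replace(".", "").split())

def search(query, docs):
    """Stand-in search engine: rank docs by naive keyword overlap with the query."""
    return max(docs, key=lambda d: len(words(query) & words(d)))

def summarize(doc):
    """Stand-in 'LLM': just parrot back the first sentence of the retrieved doc."""
    return doc.split(". ")[0] + "."

# The answer is only ever as good as what the search step retrieved.
print(summarize(search("capital of France", DOCS)))  # Paris is the capital of France.
```

If the search step surfaces a bad page, the summarizer will confidently summarize the bad page; that is the commenter's point about being "as good as the search engine backing them".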
I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be "information professionals". If they, as a slightly better trained subset of the general public, don't know that, the general public has no hope of knowing that.
People need to understand it's a really well-trained parrot that has no idea what it is saying. That's why it can give you chicken recipes and software code: it's seen them before. Then it uses statistics to put together words that usually appear together. It's not thinking at all, despite LLMs using words like "reasoning" or "thinking".
Librarians went to school to learn how to keep order in a library. That does not inherently make them have more information in their heads than the average person, especially regarding things that aren't books and book organization.
Librarians go to school to learn how to manage information, whether it is in book format or otherwise. (We tend to think of libraries as places with books because, for so much of human history, that's how information was stored.)
They are not supposed to have more information in their heads, they are supposed to know how to find (source) information, catalogue and categorize it, identify good information from bad information, good information sources from bad ones, and teach others how to do so as well.
Think of a question that you would ask an average person, and then think of what the LLM would respond with. The vast majority of the time the LLM would be more correct than most people.
A good example is the post on here about tax brackets. Far more Republicans didn't know how tax brackets worked than Democrats. But every mainstream language model would have gotten the answer right.
Then ask it a logic question. What question are you asking that the LLMs are getting wrong and your average person is getting right? How are you proving intelligence here?
Unfortunately, no. When the concept of machine intelligence was first being explored, Joseph Weizenbaum's secretary used ELIZA, the basic fits-on-a-page chatbot. They said it was absolutely a person, that they were friends with it. He walked them through it, explained the code (which, again, fits on one page in a modern language, a couple of punch cards back then; you can look at what looked at first glance like a faithful Python port here). The secretary just would not believe him, INSISTED that it was a person, that it cared about them.
This was someone working around the cutting edge of the field, being personally educated by one of those big "great man" type scientists - and not one of the egotistical shithead ones who'd have been a garbage teacher.
I'm of the opinion that most people aren't dumb, but rather most don't put in the requisite intellectual effort to actually reach accurate or precise or nuanced positions and opinions. Like they have the capacity to do so! They're humans after all, and us humans can be pretty smart. But a brain accustomed to simply taking the path of least resistance is gonna continue to do so until it is forced(hopefully through their own action) to actually do something harder.
For generations many relied on the nightly news to keep them informed. It was always a bad idea, though the local media wasn't as bad as it is today. Today, for many of these people, propaganda outlets like Sinclair own their local media and demand fawning coverage of Trump while demonizing Democrats. Even if they avoid all media, their beliefs are formed by those around them who don't.
Then the question is: what is being smart or dumb? If acting dumb in 90% of life while having the capability of being smart isn't "being dumb" then what is?
If someone is capable of 50/100 intelligence and always acts at 50/100, I would argue they are smarter than someone capable of 80/100 intelligence who acts at 20/100 for 90% of their life.
I'm 100% certain that LLMs are smarter than half of Americans. What I'm not so sure about is that the people with the insight to admit being dumber than an LLM are the ones who really are.
Exactly. Most American voters fell for an LLM-like prompt of "Ignore critical thinking and vote for the Fascists. Trump will be great for your paycheck-to-paycheck existence and will surely bring prices down."
Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.
My new mental model for LLMs is that they're like genius four-year-olds. They have huge amounts of information, and yet have little to no wisdom as to what to do with it or how to interpret it.
LLMs are smart in the way someone is smart who has read all the books and knows all of them but has never left the house. Basically all theory and no street smarts.
Yeah, my analogy is not so good... LLMs suck with factual stuff; they are better with coding or languages (Claude has been really helpful to me with Estonian).
Well yes, they are glorified text autocomplete, but they still have their uses which could be considered "smart". For example I was struggling with a programming thing today and an LLM helped me out, so in a way it is smarter than me in that specific thing.
I think it's less that they are dumb and more that they have no agency whatsoever; they have to be pushed in the direction you want. Pretty annoying.
LLMs are made to mimic how we speak, and some can even pass the Turing test, so I'm not surprised that people who don't know better think of these LLMs as conscious in some way or another.
It's not necessarily a fault of those people; it's a fault of how LLMs are purposefully misadvertised to the masses.
Specifically, it is about 75% of the population being functionally or clinically illiterate, as I said. This is more likely caused by American culture being anti-intellectual than by a lack of being taught etymology, as etymology has little to do with literacy.
According to the Programme for the International Assessment of Adult Competencies, 2013, the median score for the US was "level 2". 3.9% scored below level 1, and 4.2% were "non-starters", unable to complete the questionnaire.
Level 2: (226 points) can integrate two or more pieces of information based on criteria, compare and contrast or reason about information and make low-level inferences
Level 3: (276 points) can understand and respond appropriately to dense or lengthy texts, including continuous, non-continuous, mixed, or multiple pages.
I suppose some of that comes down to the personal understanding of what "smart" is.
I guess you could call a person who doesn't understand a topic but still manages to sound reasonable when talking about it - and might even convince people that they actually have a deep understanding of it - "smart", in a kind of "smart impostor" way.
While this is pretty hilarious, LLMs don't actually "know" anything in the usual sense of the word. A Large Language Model is basically a system that maps "words" to other "words" so a computer can model language. I.e., all an LLM knows is that when it sees "I love", what probably comes next is "my mom | my dad | etc." Because of this behavior, and the fact that we can train them on the massive swath of people asking questions and getting answers on the internet, LLMs are, essentially by chance, mostly okay at "answering" a question. Really they are just picking the next most likely word over and over from their training, which usually ends up reasonably accurate.
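A toy bigram model makes the "pick the next most likely word" idea concrete. The tiny corpus here is invented for illustration, and real models condition on far more than the single previous word, but the flavor is the same:

```python
from collections import Counter, defaultdict

# Tiny invented training corpus
corpus = "i love my mom . i love my dad . i love pizza .".split()

# Count which word follows which word in training
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev):
    """Return the word most often seen after `prev` in the training data."""
    return follows[prev].most_common(1)[0][0]

print(predict("love"))  # "my" - seen twice after "love", vs "pizza" once
```

No meaning is stored anywhere: the model just reproduces whichever continuation was most frequent in what it read.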
I don't think a single human exists who knows as much as ChatGPT does. Does that mean ChatGPT is smarter than everyone? No, obviously not, based on what we've seen so far. But the amount of information available to these LLMs is incredible and can be very useful. Like a library: it contains a lot of useful information but isn't intelligent itself.
I wouldn't be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it's only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.
Just a thought, perhaps instead of considering the mental and educational state of the people without power to significantly affect this state, we should focus on the people who have power.
For example, why don't LLM providers explicitly and loudly state, or require acknowledgement, that their products are just imitating human thought and make significant mistakes regularly, and therefore should be used with plenty of caution?
It's a rhetorical question; we know why, and I think we should focus on that, not on its effects. It's also much cheaper and easier to do than refilling years of quality education into individuals' heads.
Aside from the unfortunate name of the university, I think that part of why LLMs may be perceived as smart or 'smarter' is because they are very articulate and, unless prompted otherwise, use proper spelling and grammar, and tend to structure their sentences logically.
Which 'smart' humans may not do, out of haste or contextual adaptation.
I wasn't sure from the title if it was "Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are." or "Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are." It's the former, although you could probably argue the latter is true too.
Either way, I'm not surprised that people rate LLMs intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn't answer and get a decent answer. I feel like they're generally good at meeting what people's expectations are of a "smart person", even if they have major shortcomings in other areas.
3D-printed concrete houses are already a thing; there's no need for human-like machines to build stuff. They can be purpose-built to perform whatever portion of the house-building task they need to do. There's absolutely no barrier today to having a hive of machines built for specific purposes build houses, besides the fact that no one as of yet has stitched the necessary components together.
It's not at all out of the question that an AI can be trained up on a dataset of engineering diagrams, house layouts, materials, and construction methods, with subordinate AIs trained on the specific aspects of housing systems like insulation, roofing, plumbing, framing, electrical, etc. which are then used to drive the actual machines building the house. The principal human requirement at that point would be the need for engineers to check the math and sign-off on a design for safety purposes.
If you trained it on all of that it wouldn't be a good builder. Actual builders would tell you it's bad and you would ignore them.
LLMs do not give you accurate results. They can simply string words together into coherent sentences, and that's the extent of their capacity. They just agree with whatever the prompter is pushing, and that makes simple people think they're smart.
AI will not be building you a house unless you count a 3D-printed house, and we both know that's overly pedantic. If that counted, a music box from 1780 would be an AI.
Wow. Reading these comments so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.
I feel like there's going to be a cultural sonic boom: when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.