Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:
- Confident: 57% say the main LLM they use seems to act in a confident way.
- Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
- Sense of humor: 32% say their main LLM seems to have a sense of humor.
- Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
- Sarcasm: 17% say their primary LLM seems to respond sarcastically.
- Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
I guess the “90% marketing” line (re: Linus Torvalds) is working.
He’s probably a little high on the reality side to be honest.
I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be “information professionals”. If they, as a slightly better trained subset of the general public, don’t know that, the general public has no hope of knowing that.
It’s so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt in Covid.
It’s so weird watching the masses ignore industry experts and jump on weird media hype trains.
Is it though?
I’m the expert in this situation, and I’m getting tired of explaining to junior engineers and laymen that it is a media hype train.
I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.
I see how that is a hype train, and I also work with machine learning (though I’m far from an expert), but I’m not convinced these things are not getting intelligent. I know what their problems are, but I’m not sure whether the human brain works the same way, just (for now) more effectively.
That is, we have visual information and some evolutionary BIOS, while LLMs have to read the whole internet and use a power plant to function. But what if our brains are just the same bullshit generators, and we’re simply unaware of it?
I work in a closely related field and spend my days embedded in ML/AI projects. I’ve seen teams make some cool stuff, and I’ve seen teams make crapware with “AI” slapped on top. I guarantee you that you are wrong.
What if our brains…
Here’s the thing: you can go look this information up. You don’t have to guess. This information is readily available to you.
LLMs work by agreeing with you and stringing together coherent text in patterns they recognize from huge samples. It’s not particularly impressive, and it’s far, far closer to the initial chatbots from last century than to real AGI or some sort of singularity. The limits we’re at now are physical: look up how much electricity and water it takes just to answer trivial queries. Progress has plateaued, as it frequently does with tech like this. That’s okay; it’s still a neat development. The only big takeaway from LLMs is that agreeing with people makes them think you’re smart.
In fact, at higher levels of engineering, LLMs are a glorified Google. When most of the stuff you need to do doesn’t have a million Stack Overflow articles to train on, it’s going to be difficult to get an LLM to contribute in any significant way. I’d go so far as to say it hasn’t introduced any tool I didn’t already have. It’s just mildly more convenient than some of them while the costs are low.
People need to understand it’s a really well-trained parrot that has no idea what it is saying. That’s why it can give you chicken recipes and software code: it’s seen them before. Then it uses statistics to put together words that usually appear together. It’s not thinking at all, despite LLMs using words like “reasoning” or “thinking”.
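To make the “statistics about which words usually appear together” point concrete, here’s a toy next-word generator in Python. To be clear, this is a deliberately crude sketch with a made-up two-sentence corpus, not how a real LLM works (those use deep neural networks over subword tokens), but the core loop of “predict a plausible next token from patterns seen in training text” is the same idea:

```python
import random
from collections import defaultdict

# Made-up toy corpus; a real model trains on vast amounts of text.
corpus = (
    "season the chicken and roast the chicken until the skin is crisp "
    "write the code and test the code until the tests pass"
).split()

# Count which word follows which in the corpus.
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a word that followed the last one."""
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # dead end: the word never appeared mid-corpus
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))    # e.g. "the chicken until the code and test the code"
print(generate("write"))  # e.g. "write the skin is crisp write the code and"
```

It spits out recipe-ish or code-ish text because those patterns are in its data, with zero understanding of chickens or code. Scale that up by a few hundred billion parameters and you get fluent prose instead of word salad, but the parrot point stands.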
Librarians went to school to learn how to keep order in a library. That doesn’t inherently mean they have more information in their heads than the average person, especially regarding things that aren’t books and book organization.
Librarians go to school to learn how to manage information, whether it is in book format or otherwise. (We tend to think of libraries as places with books because, for so much of human history, that’s how information was stored.)
They are not supposed to have more information in their heads; they are supposed to know how to find (source) information, catalogue and categorize it, tell good information from bad and good sources from bad ones, and teach others how to do so as well.
Half of all voters voted for Trump. So an LLM might be smarter than them. Even a bag of pea gravel might be.
Do you think the two party system properly represents the American people?
ChatGPT said:
The two-party system in the U.S. has both strengths and weaknesses when it comes to representing the American people. On one hand, it provides stability and clarity, with the two major parties—Democrats and Republicans—offering distinct platforms that can help simplify voter choice. The system also ensures that one of the two parties has a majority, making governance and passing legislation easier.
However, the two-party system can limit political diversity, as it often forces voters to choose between two parties that may not fully reflect their views. This can leave many people feeling underrepresented, particularly those with more nuanced or third-party preferences. It also tends to lead to a polarized political environment, where compromise and cooperation between different ideologies can be difficult.
In short, while the two-party system has worked for providing structure and stability in U.S. politics, it does have drawbacks in terms of broader representation, especially for those who don’t align neatly with either major party.
The system also ensures that one of the two parties has a majority, making governance and passing legislation easier.
It also tends to lead to a polarized political environment, where compromise and cooperation between different ideologies can be difficult.
LoL! Okay, they aren’t ready yet. At least these things are fun to play with.
Less than a third of all voters voted for Trump. Most voters stayed home.
If you didn’t vote then you’re not a voter.
Most eligible voters stayed home.
Don’t Americans vote on a work day? They stayed at work.
Goddammit that’s how you get Helldivers
does nobody pay attention…
A bag of frozen peas is smarter than some of these Trump followers. Even half a frozen pea is.
Next you’ll tell me half the population has below average intelligence.
Not really endorsing LLMs, but some people…
Pathologically stupid, and still wrong. Yes.
Don’t they reflect how you talk to them? I.e., my ChatGPT doesn’t have a sense of humor and isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I do trial and error with.
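That matches how these products work: tone is steered by instructions sent along with your messages (ChatGPT’s “custom instructions” feature works along these lines). As a hedged sketch, assuming the official OpenAI Python SDK, with the model name and prompt text purely illustrative:

```python
# Minimal sketch: the same model, two different "personalities,"
# differing only in the system message that precedes the conversation.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the env;
# the model name and prompts are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

def ask(system_style: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_style},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

formal = ask("Respond formally. No humor, sarcasm, or emoji.",
             "Give me three ideas for speeding up a slow test suite.")
playful = ask("Respond playfully, with jokes and emoji.",
              "Give me three ideas for speeding up a slow test suite.")
print(formal)
print(playful)
```

Same question, very different apparent “personality.” The sense of humor or sadness that poll respondents perceive is largely whatever the deployment’s instructions, and their own chat history, cue the model to produce.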
I wouldn’t be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it’s only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.
Two things can be true at once! Though I suppose it depends on what you define as “a few.”
It’s a one trick pony.
That trick also happens to be a really neat one that can make people think it’s a Swiss Army knife instead of a shovel.
That’s called a self-proving statement.
Nearly half of U.S. adults
Half of LLM users (49%)
No, about a quarter of U.S. adults believe LLMs are smarter than they are. Only about half of adults are LLM users, and only about half of those users think that.
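(Worked out from those premises: if about half of adults are LLM users and 49% of those users hold the belief, that’s 0.50 × 0.49 ≈ 0.25, i.e. roughly a quarter of all adults.)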
To be fair, they’re American and they’re LLM users, so for a selected group like that, odds are they really are as stupid as LLMs.
I’m surprised it’s not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.
As far as I can tell from the article, the definition of “smarter” was left to the respondents, and “answers as if it knows many things that I don’t know” is certainly a reasonable definition – even if you understand that, technically speaking, an LLM doesn’t know anything.
As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty “smart”:
what’s a good word to describe the people in a poll who answer the questions? I didn’t want to use “subjects” because that could get confused with the topics covered in the poll.
“Respondents” is a good choice. It clearly refers to the people answering the questions without ambiguity.
The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.
Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.
So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.
or they’re dumb enough to be correct.
That’s a bingo
Do the other half believe it is dumber than it actually is?
Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.
My new mental model for LLMs is that they’re like genius four-year-olds: they have huge amounts of information, yet little to no wisdom about what to do with it or how to interpret it.
They’re right. AI is smarter than them.
Am American.
…this is not the flex that the article writer seems to think it is.