Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their primary LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • @[email protected]

    I had to tell a bunch of librarians that LLMs are literally language models made to mimic language patterns, and are not made to be factually correct. They understood it when I put it that way, but librarians are supposed to be “information professionals”. If they, as a slightly better trained subset of the general public, don’t know that, the general public has no hope of knowing that.

    • @[email protected]

      It’s so weird watching the masses ignore industry experts and jump on weird media hype trains. This must be how doctors felt during Covid.

      • @[email protected]

        It’s so weird watching the masses ignore industry experts and jump on weird media hype trains.

        Is it though?

        • @[email protected]

          I’m the expert in this situation, and I’m getting tired of explaining to junior engineers and laymen that it is a media hype train.

          I worked on ML projects before they got rebranded as AI. I get to sit in the room when these discussions happen with architects and actual leaders. This is hype. Anyone who tells you otherwise is lying or selling you something.

          • @[email protected]

            I see how that is a hype train, and I also work with machine learning (though I’m far from an expert), but I’m not convinced these things are not getting intelligent. I know what their problems are, but I’m not sure whether the human brain works the same way, just (for now) more effectively.

            That is, we have visual information and some evolutionary BIOS, while LLMs have to read the whole internet and use a power plant to function. But what if our brains are just the same bullshit generators, and we are simply unaware of it?

            • @[email protected]

              I work in an extremely related field and spend my days embedded into ML/AI projects. I’ve seen teams make some cool stuff and I’ve seen teams make crapware with “AI” slapped on top. I guarantee you that you are wrong.

              What if our brains…

              There’s the thing: you can go look this information up. You don’t have to guess. This information is readily available to you.

              LLMs work by agreeing with you and stringing together coherent text in patterns they recognize from huge samples. It’s not particularly impressive, and it’s far, far closer to the early chatbots of last century than to real AGI or some sort of singularity. The limits we’re at now are physical. Look up how much electricity and water it takes just to do trivial queries. Progress has plateaued, as it frequently does with tech like this. That’s okay, it’s still a neat development. The only big takeaway from LLMs is that agreeing with people makes them think you’re smart.

              In fact, LLMs are a glorified Google at higher levels of engineering. When most of the stuff you need to do doesn’t have a million Stack Overflow articles to train on, it’s going to be difficult to get an LLM to contribute in any significant way. I’d go so far as to say it hasn’t introduced any tool I didn’t already have. It’s just mildly more convenient than some of them while the costs are low.

    • @[email protected]

      People need to understand it’s a really well-trained parrot that has no idea what it is saying. That’s why it can give you chicken recipes and software code; it’s seen them before. Then it uses statistics to put words together that usually appear together. It’s not thinking at all, despite LLMs using words like “reasoning” or “thinking.”
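
      If you want a toy picture of what “uses statistics to put words together that usually appear together” means, here’s a minimal bigram sketch in Python. It’s only an illustration of the statistical idea (real LLMs use neural networks trained on vastly more text), and the tiny corpus below is made up:

        import random
        from collections import defaultdict, Counter

        # A made-up three-sentence corpus; real models see trillions of words.
        corpus = (
            "season the chicken with salt and pepper . "
            "roast the chicken until golden . "
            "serve the chicken with rice ."
        ).split()

        # Count which word tends to follow which ("words that usually appear together").
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        # String words together by sampling from those counts.
        # Nothing here "knows" anything about cooking; it only repeats patterns it has seen.
        def continue_text(word, length=8):
            out = [word]
            for _ in range(length):
                choices = following.get(out[-1])
                if not choices:  # dead end: no word ever followed this one
                    break
                words, counts = zip(*choices.items())
                out.append(random.choices(words, weights=counts)[0])
            return " ".join(out)

        print(continue_text("the"))  # one possible output: "the chicken with salt and pepper . roast the"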

    • @[email protected]

      Librarians went to school to learn how to keep order in a library. That does not inherently mean they have more information in their heads than the average person, especially regarding things that aren’t books and book organization.

      • @[email protected]

        Librarians go to school to learn how to manage information, whether it is in book format or otherwise. (We tend to think of libraries as places with books because, for so much of human history, that’s how information was stored.)

        They are not supposed to have more information in their heads; they are supposed to know how to find (source) information, catalogue and categorize it, distinguish good information from bad and good sources from bad ones, and teach others how to do the same.

  • @[email protected]

    Half of all voters voted for Trump. So an LLM might be smarter than them. Even a bag of pea gravel might be.

    • @[email protected]

      Do you think the two party system properly represents the American people?

      ChatGPT said:

      The two-party system in the U.S. has both strengths and weaknesses when it comes to representing the American people. On one hand, it provides stability and clarity, with the two major parties—Democrats and Republicans—offering distinct platforms that can help simplify voter choice. The system also ensures that one of the two parties has a majority, making governance and passing legislation easier.

      However, the two-party system can limit political diversity, as it often forces voters to choose between two parties that may not fully reflect their views. This can leave many people feeling underrepresented, particularly those with more nuanced or third-party preferences. It also tends to lead to a polarized political environment, where compromise and cooperation between different ideologies can be difficult.

      In short, while the two-party system has worked for providing structure and stability in U.S. politics, it does have drawbacks in terms of broader representation, especially for those who don’t align neatly with either major party.

      • @[email protected]

        The system also ensures that one of the two parties has a majority, making governance and passing legislation easier.

        It also tends to lead to a polarized political environment, where compromise and cooperation between different ideologies can be difficult.

        LoL! Okay, they aren’t ready yet. At least these things are fun to play with.

    • @[email protected]

      A bag of frozen peas is smarter than some of these Trump followers. Even half a frozen pea is.

  • @[email protected]

    Next you’ll tell me half the population has below average intelligence.

    Not really endorsing LLMs, but some people…

  • @[email protected]

    Don’t they reflect how you talk to them? For instance, my ChatGPT doesn’t have a sense of humor and isn’t sarcastic or sad. It only uses formal language and doesn’t use emojis. It just gives me ideas that I test by trial and error.

  • @[email protected]

    I wouldn’t be surprised if that is true outside the US as well. People who actually (have to) work with the stuff usually learn quickly that it’s only good at a few things, but if you just hear about it in the (pop, non-techie) media (including YT and such), you might be deceived into thinking Skynet is just a few years away.

    • [email protected]

      Two things can be true at once! Though I suppose it depends on what you define as “a few.”

    • Singletona082

      It’s a one-trick pony.

      That trick also happens to be a really neat trick that can make people think it’s a Swiss Army knife instead of a shovel.

  • @[email protected]

    Nearly half of U.S. adults

    Half of LLM users (49%)

    No, about a quarter of U.S. adults believe LLMs are smarter than they are. Only about half of adults are LLM users, and only about half of those users think that.
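
    Quick arithmetic, taking the survey’s figures at face value: roughly half of adults use LLMs, and 49% of those users think the model is smarter than they are, so about 0.5 × 0.49 ≈ 0.25 of all adults.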

    • @[email protected]

      To be fair, they’re American and they’re LLM users, so for a selected group like that, odds are they really are as stupid as LLMs.

  • Lovable Sidekick

    I’m surprised it’s not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.

  • @[email protected]

    As far as I can tell from the article, the definition of “smarter” was left to the respondents, and “answers as if it knows many things that I don’t know” is certainly a reasonable definition – even if you understand that, technically speaking, an LLM doesn’t know anything.

    As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty “smart”:

    what’s a good word to describe the people in a poll who answer the questions? I didn’t want to use “subjects” because that could get confused with the topics covered in the poll.

    “Respondents” is a good choice. It clearly refers to the people answering the questions without ambiguity.

    The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.

  • 👍Maximum Derek👍

    Reminds me of that George Carlin joke: Think of how stupid the average person is, and realize half of them are stupider than that.

    So half of people are dumb enough to think autocomplete with a PR team is smarter than they are… or they’re dumb enough to be correct.

  • TrackinDaKraken

    Intelligence and knowledge are two different things. Or, rather, the difference between smart and stupid people is how they interpret the knowledge they acquire. Both can acquire knowledge, but stupid people come to wrong conclusions by misinterpreting the knowledge. Like LLMs, 40% of the time, apparently.

    • ZephyrXero

      My new mental model for LLMs is that they’re like genius four-year-olds. They have huge amounts of information, and yet have little to no wisdom as to what to do with it or how to interpret it.

  • Singletona082

    Am American.

    …this is not the flex that the article writer seems to think it is.