• @[email protected]
    link
    fedilink
    English
    35 months ago

    I can already see it…

    Ad: CAN YOU SOLVE THIS IMPOSSIBLE RIDDLE THAT AI CAN’T SOLVE?!

    With OP’s image. And then it will have the following once you solve it: “congratz, send us your personal details and you’ll be added to the hall of fame at CERN Headquarters”

  • @[email protected]
    link
    fedilink
    English
    325 months ago

    This is a bad example… If I ask a friend, “Is strawberry spelled with one or two r’s?” they would think I’m asking about the last part of the word.

    The question seems to be specifically made to trip up LLMs. I’ve never heard anyone ask how many of a certain letter are in a word. I’ve heard people ask how you spell a word and whether it’s with one or two of a specific letter, though.

    If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.

    • @[email protected]
      link
      fedilink
      English
      285 months ago

      If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.

      This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.

      • @[email protected]
        link
        fedilink
        English
        15 months ago

        What situations are you thinking of that requires reasoning?

        I’ve used LLMs to create software I needed but couldn’t find online.

        • @[email protected]
          link
          fedilink
          English
          15 months ago

          Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.

          It rarely makes syntax errors, but it will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. It also constantly misunderstands the assignment and creates something that technically works but doesn’t accomplish the intended task.

      • @[email protected]
        link
        fedilink
        English
        75 months ago

        Maybe if you focus on pro- or anti-AI sources, but if you talk to actual professionals or hobbyists solving actual problems, you’ll see very different applications. If you go into it looking for problems, you’ll find them, likewise if you go into it for use cases, you’ll find them.

        • @[email protected]
          link
          fedilink
          English
          15 months ago

          Personally I have yet to find a use case. Every single time I try to use an LLM for a task (even ones they are supposedly good at), I find the results so lacking that I spend more time fixing its mistakes than I would have just doing it myself.

          • @[email protected]
            link
            fedilink
            English
            25 months ago

            So you’ve never used it as a starting point to learn about a new topic? You’ve never used it to look up a song when you can only remember a small section of lyrics? What about when you want a block of code that is simple but monotonous to write yourself? Or to suggest plans for how to create simple structures/inventions?

            Anything with a verifiable answer that you’d ask on a forum can generally be answered by an LLM, because they’re largely trained on forums, and there’s a decent chance the training data included someone asking the question you are currently asking.

            Hell, ask ChatGPT what use cases it would recommend for itself; I’m sure it’ll have something interesting.

            • @[email protected]
              link
              fedilink
              English
              15 months ago

              as a starting point to learn about a new topic

              No. I’ve used several models to “teach” me about subjects I already know a lot about, and they all frequently get many facts wrong. Why would I then trust it to teach me about something I don’t know about?

              to look up a song when you can only remember a small section of lyrics

              No, because traditional search engines do that just fine.

              when you want a block of code that is simple but monotonous to write yourself

              See this comment.

              suggest plans for how to create simple structures/inventions

              I guess I’ve never tried this.

              Anything with a verifiable answer that you’d ask on a forum can generally be answered by an LLM, because they’re largely trained on forums, and there’s a decent chance the training data included someone asking the question you are currently asking.

              Kind of, but here’s the thing: it’s rarely faster than just using a good traditional search, especially if you know where to look and how to use advanced filtering features. Also (and this is key), verifying the accuracy of an LLM’s answer requires about the same amount of work as just not using an LLM in the first place, so I default to skipping the middle-man.

              Lastly, I haven’t even touched on the privacy nightmare that these systems pose if you’re not running local models.

    • @[email protected]
      link
      fedilink
      English
      55 months ago

      If you think of LLMs as something with actual intelligence you’re going to be very unimpressed

      Artificial sugar is still sugar.

      Artificial intelligence implies there is intelligence in some shape or form.

      • JohnEdwa
        3
        5 months ago

        Something that pretends to be, or looks like, intelligence but actually isn’t at all is a perfectly valid interpretation of the word artificial: fake intelligence.

      • @[email protected]
        link
        fedilink
        English
        35 months ago

        Artificial sugar is still sugar.

        Because it contains sucrose, fructose or glucose? Because it metabolises the same and matches the glycemic index of sugar?

        Because those are all wrong. What are your criteria?

        • @[email protected]
          link
          fedilink
          English
          15 months ago

          In this example a sugar is something that is sweet.

          Another example is artificial flavours still being a flavour.

          Or like artificial light being in fact light.

      • @[email protected]
        link
        fedilink
        English
        15 months ago

        That’s because it wasn’t originally called AI. It was called an LLM. Techbros trying to sell it and articles wanting to fan the flames started calling it AI, and eventually it became common parlance. No one in the field seriously calls it AI; they generally reserve that term for general AI, or at least narrow AI, neither of which an LLM is.

      • @[email protected]
        link
        fedilink
        English
        15 months ago

        Exactly. The naming of the technology would make you assume it’s intelligent. It’s not.

  • Fubarberry
    7
    5 months ago

    I asked mistral/brave AI and got this response:

    How Many Rs in Strawberry

    The word “strawberry” contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
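
    You can see the tokenization effect for yourself. A minimal sketch in Python, assuming the third-party tiktoken package (OpenAI’s open-source tokenizer) is installed; the exact splits and IDs vary by model:

    ```python
    # What an LLM actually "sees": opaque token IDs, not letters.
    # Assumes tiktoken is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # a GPT-4-era encoding

    ids = enc.encode("strawberry")
    print(ids)                                   # integer token IDs
    print([enc.decode([i]) for i in ids])        # e.g. ['str', 'aw', 'berry']

    # Ordinary code operating on characters counts trivially:
    print("strawberry".count("r"))               # 3
    ```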

    • @[email protected]
      link
      fedilink
      English
      25 months ago

      Yes, at some point the meme becomes the training data, and the LLM doesn’t need to count because it sees the answer all over the damn place.

  • @[email protected]
    link
    fedilink
    English
    515 months ago

    I’ve already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I show them how it can sometimes lie and explain that it’s not a primary source for anything, I just get that blank stare, like I have two heads.

    • @[email protected]
      link
      fedilink
      English
      135 months ago

      Me too. More than once on a language learning subreddit for my first language: “I asked ChatGPT whether this was correct grammar in German, it said no, but I read this counterexample”, then everyone correctly responded “why the fuck are you asking ChatGPT about this”.

  • Lovable Sidekick
    5
    5 months ago

    What would have been different about this if it had impressed you? It answered the literal question and also the question the user was actually trying to ask.

  • @[email protected]
    link
    fedilink
    English
    45 months ago

    Doc: That’s an interesting name, Mr…

    Fletch: Babar.

    Doc: Is that with one B or two?

    Fletch: One. B-A-B-A-R.

    Doc: That’s two.

    Fletch: Yeah, but not right next to each other, that’s what I thought you meant.

    Doc: Isn’t there a children’s book about an elephant named Babar?

    Fletch: Ha, ha, ha. I wouldn’t know. I don’t have any.

    Doc: No children?

    Fletch: No elephant books.

  • @[email protected]
    link
    fedilink
    English
    965 months ago

    Because you’re using it wrong. It’s good for generative text and chains of thought, not symbolic calculations including math or linguistics

    • @[email protected]
      link
      fedilink
      English
      245 months ago

      Because you’re using it wrong.

      No, I think you mean to say it’s because you’re using it for the wrong use case.

      Well, this tool has been marketed as if it would handle such use cases.

      I don’t think I’ve actually seen any AI marketing that was honest about what it can do.

      I personally think image recognition is the best use case as it pretty much does what it promises.

      • lime!
        15
        5 months ago

        i’m still not entirely sold on them but since i’m currently using one that the company subscribes to i can give a quick opinion:

        i had an idea for a code snippet that could save me some headache (a mock for primitives in lua, to be specific) but i foresaw some issues with commutativity (aka how to make sure that a + b == b + a). so i asked about this, and the llm created some boilerplate to test this code. i’ve been chatting with it for about half an hour and testing the code it produces, and had it expand the idea to all possible metamethods available on primitive types, together with about 50 test cases with descriptive assertions. i’ve now run into an issue where the __eq metamethod isn’t firing correctly when one of the operands is a primitive rather than a mock, and after having the llm link me to the relevant part of the docs, that seems to be a feature of the language rather than a bug.

        so in 30 minutes i’ve gone from a loose idea to a well-documented proof-of-concept to a roadblock that can’t really be overcome. complete exploration and feasibility study, fully tested, in less than an hour.
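
        for anyone who doesn’t know lua, the shape of the idea carries over to python’s dunder methods. a rough sketch of the concept (hypothetical, not my actual code; note that python’s __eq__ does fire against a raw primitive, unlike lua’s __eq):

        ```python
        # rough python analogue of the lua mock idea: wrap a primitive,
        # record operations, keep arithmetic commutative via __radd__.
        class Mock:
            def __init__(self, value):
                self.value = value
                self.calls = []

            def _unwrap(self, other):
                return other.value if isinstance(other, Mock) else other

            def __add__(self, other):    # fires for: mock + primitive
                self.calls.append(("add", other))
                return Mock(self.value + self._unwrap(other))

            def __radd__(self, other):   # fires for: primitive + mock
                self.calls.append(("radd", other))
                return Mock(self._unwrap(other) + self.value)

            def __eq__(self, other):     # unlike lua's __eq, this fires even
                return self.value == self._unwrap(other)  # against a primitive

        a, b = Mock(2), 3
        assert a + b == b + a == 5       # commutativity holds in both orders
        ```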

      • @[email protected]M
        link
        fedilink
        English
        34
        edit-2
        5 months ago

        Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”

        Dumbing down technical information “word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users”

        Another is feeding it an article and asking for a summary; https://hackingne.ws does that for its Bsky posts.

        Coding is another good example, “write me a Python script that moves all files in /mydir to /newdir”
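
        For that file-moving prompt, the answer you get back is typically something like this sketch (/mydir and /newdir are just the placeholder paths from the prompt):

        ```python
        # Typical shape of the generated script; the paths are the
        # placeholders from the prompt above, adjust before running.
        import shutil
        from pathlib import Path

        src = Path("/mydir")
        dst = Path("/newdir")
        dst.mkdir(parents=True, exist_ok=True)   # create target if missing

        for item in src.iterdir():
            if item.is_file():                   # move files only, skip subdirs
                shutil.move(str(item), str(dst / item.name))
        ```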

        Asking for it to summarize a theory or protocol, “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”

        • @[email protected]
          link
          fedilink
          English
          265 months ago

          Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online

          How does this work in practice? I suspect you’re just going to get an email that takes longer for everyone to read, and doesn’t give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.

          If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.

          I think we’ll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.

          • @[email protected]M
            link
            fedilink
            English
            18
            edit-2
            5 months ago

            Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify; I was trying to keep my examples short. Talking to non-technical people about a technical issue is not the easiest for me; AI has helped me dumb it down when sending an email, and it helps correct my shitty grammar at times.

            As for accuracy, you review what it gives you; you don’t just copy and send it without review. You will also have to tweak some pieces where the output doesn’t make the most sense, such as when it uses wording you wouldn’t typically use. It is fairly accurate in my use cases, though.

            Hallucinations are a thing, so validating what it spits out is definitely needed.

            Another example: if you feel your email is too stern or gives the wrong tone, I’ve used it for that as well. “Make this sound more relaxed: well maybe if you didn’t turn off the fucking server we wouldn’t of had this outage!” (Just a silly example)

            • @[email protected]
              link
              fedilink
              English
              75 months ago

              I think these are actually valid examples, albeit ones that come with a really big caveat: you’re using AI in place of a skill that you really should be learning for yourself. As an autistic IT person, I get the struggle of communicating with non-technical and neurotypical people, especially clients who you have to be extra careful with. But the reality is, you can’t always do all your communication by email. If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill that is every bit as important to doing your job well as it is to know how to correctly configure an ACL on a Cisco managed switch.

              That said, I can also see how relying on the AI at first can be a helpful learning tool as you build those skills. There’s certainly an argument that by using tools, but paying attention to the output of those tools, you build those skills for yourself. Learning by example works. I think used in that way, there’s potentially real value there.

              Which is kind of the broader story with Gen AI overall. It’s not that it can never be useful; it’s that, at best, it can only ever aspire to “useful.” No one, yet, has demonstrated any ability to make AI “essential” and the idea that we should be investing hundreds of billions of dollars into a technology that is, on its best days, mildly useful, is sheer fucking lunacy.

              • snooggums
                5
                5 months ago

                If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill that is every bit as important to doing your job well as it is to know how to correctly configure an ACL on a Cisco managed switch.

                This is such a good example of how AI/LLMs/whatever are being used as a crutch, one far more impactful than a spellchecker. A spellchecker catches typos or helps with unfamiliar words, but doesn’t replace the underlying skill of communicating to your audience.

                • @[email protected]
                  link
                  fedilink
                  English
                  3
                  edit-2
                  5 months ago

                  Noted, I’ll be giving that a proper read after work. Thank you.

                  Edit to add: Yeah, that pretty much mirrors my own experiences of using AI as a coding aid. Even when I was learning a new language, I found that my comprehension of the material very quickly outstripped whatever ChatGPT could provide. I’d much rather understand what I’m building because I built it myself. A lot of the time, when you use a solution someone else provided you don’t find out until much later how badly that solution held you back because it wasn’t actually the best way to tackle the problem.

            • @[email protected]
              link
              fedilink
              English
              215 months ago

              As for accuracy, you review what it gives you; you don’t just copy and send it without review.

              Yeah, I don’t get why so many people seem to not get that.

              It’s like people who were against IntelliSense in IDEs because “What if it suggests the wrong function?”…you still need to know what the functions do. If you find something you’re unfamiliar with, you check the documentation. You don’t just blindly accept it as truth.

              Just because it can’t replace a person’s job doesn’t mean it’s worthless as a tool.

              • @[email protected]
                link
                fedilink
                English
                95 months ago

                The issue is that AI is being invested in as if it can replace jobs. That’s not an issue for anyone who wants to use it as a spellchecker, but it is an issue for the economy, for society, and for the planet, because billions of dollars of computer hardware are being built and run on the assumption that trillions of dollars of payoff will be generated.

                And correcting someone’s tone in an email is not, and will never be, a trillion dollar industry.

              • @[email protected]
                link
                fedilink
                English
                65 months ago

                Yeah, I don’t get why so many people seem to not get that.

                The disconnect is that those people use their tools differently, they want to rely on the output, not use it as a starting point.

                I’m one of those people, reviewing AI slop is much harder for me than just summarizing it myself.

                I find function-name suggestions useful because it’s a lookup tool. That’s not the same as a summary tool, which doesn’t help me find a needle in a haystack; it just hands me a needle when I already have access to many needles. I want the good/best needle, and it can’t do that.

          • @[email protected]
            link
            fedilink
            English
            35 months ago

            It works well. For example, we had a work exercise where we had to write a press release based on an example, then write a Shark Tank pitch to promote the product we came up with in the release.

            I gave AI the link to the example and a brief description of our product, and it spit out an almost perfect press release. I only had to tweak a few words because there were specific requirements I didn’t feed the AI.

            Then I told it to take the press release and write the pitch based on it.

            Again, very nearly perfect with only having to change the wording in one spot.

        • lurch (he/him)
          6
          5 months ago

          it’s not good for summaries. it often gets important bits wrong, like embedded instructions that can’t be summarized.

          • @[email protected]M
            link
            fedilink
            English
            4
            edit-2
            5 months ago

            My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account mentioned above is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.

        • snooggums
          6
          5 months ago

          The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t write outrage instead of outage, just like if you wrote it yourself.

          How do you know the answer on why RIP was replaced with RIPv2 is accurate and not just a load of bullshit like putting glue on pizza?

          Are you really saving time?

          • @[email protected]M
            link
            fedilink
            English
            6
            edit-2
            5 months ago

            Yes, I’m saving time. As I mentioned in my other comment:

            Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify; I was trying to keep my examples short.

            And

            and helps correct my shitty grammar at times.

            And

            Hallucinations are a thing, so validating what it spits out is definitely needed.

            • snooggums
              4
              5 months ago

              How do you validate the accuracy of what it spits out?

              Why don’t you skip the AI and just use the thing you use to validate the AI output?

              • @[email protected]M
                link
                fedilink
                English
                55 months ago

                Most of what I’m asking it are things I have a general idea of, and AI has the capability of making short explanations of complex things. So typically it’s easy to spot a hallucination, but the pieces that I don’t already know are easy to Google to verify.

                Basically I can get a shorter response with the same outcome, and validating those small pieces saves a lot of time (I no longer have to read a 100-page white paper; instead I read a few paragraphs and then verify small bits).

            • snooggums
              4
              5 months ago

              If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed-down text, then the only time you saved was by not learning how to write dumbed-down text. Plus you need to know what dumbed-down text should look like to know whether the output is dumbed down but still accurate.

      • chiisana
        8
        5 months ago

        Ask it for a second opinion on medical conditions.

        Sounds insane, but they are leaps and bounds better than blindly Googling and self-prescribing every condition under the sun when the symptoms only vaguely match.

        Once the LLM helps you narrow in on a couple of possible conditions based on the symptoms, then you can dig deeper into those specific ones, learn more about them, and have a slightly more informed conversation with your medical practitioner.

        They’re not a replacement for your actual doctor, but they can help you learn and have better discussions with your actual doctor.

        • Sippy Cup
          6
          5 months ago

          So can WebMD. We didn’t need AI for that. Googling symptoms is a great way to just be dehydrated and suddenly think you’re in kidney failure.

          • chiisana
            6
            5 months ago

            We didn’t stop trying to make faster, safer and more fuel efficient cars after Model T, even though it can get us from place A to place B just fine. We didn’t stop pushing for digital access to published content, even though we have physical libraries. Just because something satisfies a use case doesn’t mean we should stop advancing technology.

            • Sippy Cup
              2
              5 months ago

              We also didn’t make the Model T suggest replacing the engine when the oil light comes on. Cars, as it happens, aren’t that great at self-diagnosis, despite that technology being far simpler and further along than generative models are. I don’t trust the model to tell me what temperature to bake a cake at, so I’m sure as hell not going to trust it with medical information. Googling symptoms was risky at best before. It’s a horror show now.

            • snooggums
              4
              5 months ago

              AI is slower and less efficient than the older search algorithms and is less accurate.

      • @[email protected]
        link
        fedilink
        English
        25 months ago

        I have it write emails for me in German. I moved there not too long ago, and it works wonders for getting doctor’s appointments, car service, etc. I also have it explain the text, so I’m learning the language.

        I also use it as an alternative to internet search, which is now terrible. It’s not going to help you find something super location-specific, but I can ask it to tell me something about a game/movie without spoilers, or to list Metacritic scores in a table, etc.

        It also works great in summarizing long texts.

        An LLM is a tool; what matters is how you use it. It is stupid, it doesn’t think, and it’s mostly hype to call it AI. But it definitely has its benefits.

      • The Hobbyist
        7
        5 months ago

        One thing I find useful is turning installation/setup instructions into Ansible roles and tasks. If you’re unfamiliar, Ansible is a tool for automating configuration across large-scale server infrastructures. In my case I only manage two servers, but it is useful to parse instructions and convert them to Ansible, helping me learn and understand Ansible at the same time.

        Here is an example of the kind of instructions I find interesting: how to set up Docker on Alpine Linux: https://wiki.alpinelinux.org/wiki/Docker

        Results are actually quite good even for smaller 14B self-hosted models like the distilled versions of DeepSeek, though I’m sure there are other usable models too.

        I find it helpful as a programming assistant too (both for getting things done and for learning).

        I would not rely on it for factual information, but it usually does a decent job of pointing in the right direction. Another use I have is help with spell-checking in a foreign language.

      • @[email protected]
        link
        fedilink
        English
        25 months ago

        I mean, I would argue that the answer in the OP is a good one. No human asking that question honestly wants to know the sum total of Rs in the word; they either want to know how many are in “berry”, or they’re trying to trip up the model.

      • chaosCruiser
        5
        5 months ago

        Here’s a bit of code that’s supposed to do stuff. I got this error message. Any ideas what could cause this error and how to fix it? Also, add this new feature to the code.

        Works reasonably well as long as you have some idea how to write the code yourself. GPT can do it in a few seconds, and debugging the result takes maybe 5–10 minutes, which is still faster than my best. Besides, GPT is fairly fluent in many functions I have never used before. My approach would be clunky and convoluted, while the code generated by GPT is a lot shorter.

        If you’re already well familiar with the code you’re working on, GPT’s code will look convoluted by comparison. In that case, you can ask GPT for the rough alpha version and do the debugging and refining yourself in a few minutes.

        • @[email protected]
          link
          fedilink
          English
          55 months ago

          That makes sense as long as you’re not writing code that needs to know how to do something as complex as …checks original post… count.

          • TimeSquirrel
            4
            5 months ago

            It can do that just fine, because it has seen enough examples of working code. It can’t directly count correctly, sure, but it can write “i++;”, incrementing a variable by one in a loop and returning the result. The computer running the generated program is going to be doing the counting.
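
            A minimal sketch of the kind of code it can generate, where the counting happens at runtime rather than inside the model:

            ```python
            # The model doesn't count the letters; the generated program does.
            def count_letter(word: str, letter: str) -> int:
                count = 0
                for ch in word:          # the "i++" pattern: increment per match
                    if ch == letter:
                        count += 1
                return count

            print(count_letter("strawberry", "r"))  # 3
            ```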

      • @[email protected]
        link
        fedilink
        English
        21
        edit-2
        5 months ago

        This, but actually. Don’t use an LLM to do things LLMs are known to be bad at. The companies selling these tools would do well to list specifically what they’re bad at, so that using them doesn’t require background knowledge, not unlike needing to somehow know that one corner of those old iPhones was an antenna and not to bridge it.

        • @[email protected]
          link
          fedilink
          English
          45 months ago

          Yup, the problem with that iPhone (4?) wasn’t that it sucked, but that it had limitations. You could just put a case on it and the problem goes away.

          LLMs are pretty good at a number of tasks, and they’re also pretty bad at a number of tasks. They’re pretty good at summarizing, but don’t trust the summary to be accurate, just to give you a decent idea of what something is about. They’re pretty good at generating code, just don’t trust the code to be perfect.

          You wouldn’t use a chainsaw to build a table, but it’s pretty good at making big things into small things, and cleaning up the details later with a more refined tool is the way to go.

          • snooggums
            8
            5 months ago

            They’re pretty good at summarizing, but don’t trust the summary to be accurate, just to give you a decent idea of what something is about.

            That is called being terrible at summarizing.

            • desktop_user [they/them]
              0
              5 months ago

              If you want to find a few articles out of a few hundred about, say, the benefits of nuclear weapons or other controversial topics with significant literature, it can be helpful for eliminating the 90% that probably aren’t what you’re looking for.

              • snooggums
                2
                5 months ago

                Or you might eliminate some that are what you are looking for because the summaries are inaccurate.

                Guess it depends on whether an unreliable system is still better than being overwhelmed with choices.

            • @[email protected]
              link
              fedilink
              English
              75 months ago

              That depends on how you use it. If you need the information from an article but don’t want to read it, I agree an LLM is probably the wrong tool. If you have several articles and want to decide which one has the information you need, an LLM is a pretty good option.

      • @[email protected]
        link
        fedilink
        English
        125 months ago

        I think there’s a fundamental difference between someone saying “you’re holding your phone wrong, of course you’re not getting a signal” to millions of people and someone saying “LLMs aren’t good at that task you’re asking it to perform, but they are good for XYZ.”

        If someone is using a hammer to cut down a tree, they’re going to have a bad time. A hammer is not a useful tool for that job.

    • @[email protected]
      link
      fedilink
      English
      25 months ago

      So for something you can’t objectively evaluate? Looking at Apple’s garbage generator, LLMs aren’t even good at summarising.

  • @[email protected]
    link
    fedilink
    English
    39
    edit-2
    5 months ago

    There is an alternate reality out there where LLMs were never marketed as AI and were instead marketed as random text generators.

    In that world, tech-savvy people would embrace this tech instead of having to constantly educate people that it is in fact not intelligence.

    • @[email protected]
      link
      fedilink
      English
      55 months ago

      That was this reality, very briefly. Remember AI Dungeon and the other clones that were popular prior to the mass ML marketing campaigns of the last two years?

    • @[email protected]
      link
      fedilink
      English
      45 months ago

      They are not random per se; they are statistical, with some degree of randomization.

  • @[email protected]
    link
    fedilink
    English
    145 months ago

    It’s like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.

  • @[email protected]
    link
    fedilink
    English
    135 months ago

    This is literally just a tokenization artifact. If I asked you how many r’s are in /0x5273/0x7183 you’d be confused too.

  • @[email protected]
    link
    fedilink
    English
    25 months ago

    I’ve been avoiding this question up until now, but here goes:

    Hey Siri …

    • how many r’s in strawberry? 0
    • how many letter r’s in the word strawberry? 10
    • count the letters in strawberry. How many are r’s? ChatGPT ……2
  • Aatube
    9
    5 months ago

    I mean, that’s how I would think about it…

      • Aatube
        5
        5 months ago

        The typo in “strawbery” leads to a conversation like “hey, you spelt this wrong, there are two r’s (after the e), not one”.

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          5 months ago

          It happens even if you ask how many “r”s are in “strawberry”. It’s a well-known AI gotcha that happens on most if not all current models. The typo in the original post is a little misleading and not that relevant.

  • @[email protected]
    link
    fedilink
    English
    3
    edit-2
    5 months ago

    Works fine for me in o3-mini-high:

    Counting letters in “strawberry”

    Alright, I’m checking: the word “strawberry” is spelled S T R A W B E R R Y. Let me count the letters: S (1), T (2), R (3), A (4), W (5), B (6), E (7), R (8), R (9), Y (10). There are three R’s: in positions 3, 8, and 9. So, the answer is 3. Even if we ignore case, the count still holds. Therefore, there are 3 r’s in “strawberry.”