If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

  • @[email protected]
    13 points · 6 months ago

    People writing off AI because it isn’t fully replacing humans. Sounds like writing off calculators because they can’t work without human input.

    Used correctly and in the right context, it can still significantly increase productivity.

    • Jyek
      12 points · 6 months ago

      Except it has gotten progressively worse as a product due to misuse, corporate censorship of the engine and the dataset feeding itself.

    • @[email protected]
      4 points · 6 months ago

      No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they cost this much power, they’re doing far more harm than good.

      • @[email protected]
        1 point · 6 months ago

        Imagine if the engineers for computers were just as short sighted. If they had stopped prioritizing development when computers were massive, room sized machines with limited computing power and obscenely inefficient.

        Not all AI development is focused on increasing complexity. Much is focused on refinement, and increasing efficiency. And there’s been a ton of progress in this area.

        • @[email protected]
          1 point · 6 months ago

          This article and discussion is specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers which require multiple, dedicated grid-scale nuclear reactors.

          I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.

          • @[email protected]
            1 point · 5 months ago

            Now that DeepSeek has been released and proven drastically more efficient, just wanted to pop back in and gloat.

            Hope this causes you to reexamine your beliefs and biases. Try reading more next time.

          • @[email protected]
            1 point · 6 months ago

            Sounds like you’re only reading a certain narrative then. There’s plenty of articles about increasing efficiency, too.

  • Optional
    40 points · 6 months ago

    The GPT Era Is Already Ending

    Had it begun? Alls I saw was a frenzy of idiot investment cheered on shamelessly by hypocritical hypemen.

    • @[email protected]
      17 points · 6 months ago

      Oh, I saw a ton of search results feed me to worthless ai generated vomit. It definitely changed things.

    • @[email protected]
      6 points · 6 months ago

      Seriously, I tried using ChatGPT for my work sooo often, and it never gave me results I could work with. I was promised a tool that would replace me, yet all it does is suggest things that don't work correctly.

  • @[email protected]
    9 points · 6 months ago

    I’ve been playing around with AI a lot lately for work purposes. A neat trick LLM vendors like OpenAI have pushed onto the scene is the ability for a large language model to “answer questions” about a dataset of files. This is done by building a RAG (retrieval-augmented generation) agent. It’s neat, but I’ve come to two conclusions after about a year of screwing around.

    1. It’s pretty good with words - asking it to summarize multiple documents, for example. But it’s still pretty terrible at data. As an example: scanning through an Excel log/export/CSV file and asking it to perform a calculation, like “based on this badge data, how many people, and who, are in the building right now?” It would be super helpful to get answers to those types of questions, but I haven’t found any tool or combination of models that can do it accurately even most of the time. I think this is exactly what happened to Spotify Wrapped this year - instead of doing the data analysis, they tried to have an LLM/RAG agent do it, and it’s hallucinating.
    2. These models can be run locally, and just about as fast. Yeah, it takes some nerd power to set them up now, but it’s only a short matter of time before it’s as simple as installing a program. I can’t imagine how companies like OpenAI are going to survive.
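The “RAG agent” workflow described above can be sketched in a few lines: retrieve the documents most relevant to a question, then paste them into the prompt the model actually sees. This is a toy illustration, not any vendor's real pipeline; the file names are invented, and the bag-of-words scoring stands in for the embedding similarity real systems use.

```python
from collections import Counter
import math

# Toy document store; a real RAG agent would chunk and embed real files.
DOCS = {
    "meeting_notes.txt": "Q3 planning meeting: budget approved for new badge readers",
    "badge_log.csv": "badge scans today: alice in, bob in, alice out",
    "recipe.txt": "pancakes require flour, eggs, and milk",
}

def score(query: str, text: str) -> float:
    """Bag-of-words cosine similarity (stand-in for embedding similarity)."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    overlap = sum(q[w] * t[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in t.values()))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the names of the k most relevant documents for the query."""
    ranked = sorted(DOCS, key=lambda name: score(query, DOCS[name]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """The 'augmented' prompt the LLM sees: retrieved context plus the question."""
    context = "\n".join(DOCS[name] for name in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("who is in the building right now based on badge scans"))
```

The weak spot the comment identifies lives in the last step: the LLM still has to do the counting over whatever text lands in the prompt, and that is where hallucinated numbers creep in.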
    • @[email protected]
      3 points · 6 months ago

      I think this is exactly what happened to Spotify Wrapped this year - instead of doing the data analysis, they tried to have an LLM/RAG agent do it, and it’s hallucinating.

      Interesting - I don’t use Spotify anymore, but I overheard a conversation on the train yesterday where some teens were complaining that the results were super weird, and that they couldn’t recognize themselves in them at all. It seems really strange to me to use LLMs for this purpose, except perhaps for coming up with different ways of formulating the summary sentences so that it feels more unique. Showing the most played songs and artists is not a difficult analysis task and does not require any machine learning. Unless it now does something completely different from two years ago, when I got my last one…
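For what it's worth, the “most played” part really is plain counting; a sketch over a hypothetical play log:

```python
from collections import Counter

# Hypothetical play log: one entry per stream (artist, track).
plays = [
    ("Artist A", "Song 1"), ("Artist B", "Song 2"),
    ("Artist A", "Song 3"), ("Artist A", "Song 1"),
    ("Artist C", "Song 4"), ("Artist B", "Song 2"),
]

# "Wrapped" style top lists: straightforward aggregation, no ML involved.
top_artists = Counter(artist for artist, _ in plays).most_common(2)
top_tracks = Counter(plays).most_common(1)

print(top_artists)  # [('Artist A', 3), ('Artist B', 2)]
print(top_tracks)   # [(('Artist A', 'Song 1'), 2)]
```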

      • @[email protected]
        1 point · 6 months ago

        Showing the most played songs and artists is not a difficult analysis task and does not require any machine learning.

        You want to dimension reduce to get that “people who listen to stuff like you also like to listen to” recommendation. To have an idea whom to play a new song to, you ideally want to analyse the song itself and not just people’s reactions to it, and there we’re deep in the weeds of classifiers.

        Using LLMs in particular, though, is probably suit-driven development, because when you’re trying to figure out whether a song sounds like pop or rock or classical, LLMs are, at best, overkill. Analysing lyrics might warrant LLMs, but I don’t think it’d gain you much. If you re-train them on music instead of language you might also get something interesting, classifying music by phrasal structure and whatnot (don’t look at me, I may own a guitar but am no musician). And, of course, “interesting” doesn’t necessarily mean “business case”, unless you’re in the business of giving Adam Neely video ideas. “Spotify, play me all pop songs that sing ‘caught in the middle’ in the same way”… not a search that’s going to make Spotify money, nor one anyone asked for.
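A minimal sketch of the “people who listen to stuff like you” idea from the comment above: represent each song by the set of users who played it, and recommend by cosine similarity between those sets. Real recommenders dimension-reduce far larger matrices (e.g. with matrix factorization); the songs and users here are invented.

```python
import math

# Hypothetical listening data: song -> set of users who played it.
listeners = {
    "pop_song_1": {"ana", "ben", "cho"},
    "pop_song_2": {"ana", "ben"},
    "metal_song": {"dan"},
}

def cosine(a: set, b: set) -> float:
    """Cosine similarity between two songs' binary listener vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def similar_to(song: str) -> str:
    """Most similar other song: the 'listeners also liked' pick."""
    others = (s for s in listeners if s != song)
    return max(others, key=lambda s: cosine(listeners[song], listeners[s]))

print(similar_to("pop_song_1"))  # pop_song_2
```

Note this only needs play counts, not any analysis of the audio itself; classifying a brand-new song with no listeners is where the classifier weeds begin.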

      • @[email protected]
        4 points · 6 months ago

        They are using LLM’s because the companies are run by tech bros who bet big on “AI” and now have to justify that.

    • @[email protected]
      12 points · 6 months ago

      This is exactly how we use LLMs at work… The LLM is trained on our work data so it can answer questions about meeting notes from 5 years ago or something. There are a few genuinely helpful use cases like this amongst a sea of hype and mania. I wish lemmy would understand this instead of having just a blanket policy of hate on everything AI

      the spotify thing is so stupid… There is simply no use case here for AI. Just spit back some numbers from my listening history like in the past. No need to have AI commentary and hallucination

      The even more infuriating part of all this is that i can think of ways that AI/ML (not necessarily LLMs) could actually be really useful for spotify. Like tagging genres, styles, instruments, etc… “Spotify, find me all songs by X with Y instrument in them…”

      • HubertManne
        1 point · 6 months ago

        This is, to me, what it’s useful for. There’s so much reinventing the wheel at places, but if the proper information could be found quickly enough, we could use a wheel we already have.

      • @[email protected]
        11 points · edited · 6 months ago

        The problem is that the actual use cases (which are still incredibly unreliable) don’t justify even 1% of the investment or energy usage the market is spending on them. (Also, as you mentioned, there are actual approaches that are useful that aren’t LLMs that are being starved by the stupid attempt at a magic bullet.)

        It’s hard to be positive about a simple, moderately useful technology when every person making money from it is lying through their teeth.

    • @[email protected]
      1 point · 6 months ago

      I run local AI using a program called “GPT4All”. It’s free, and you can download several models.

  • @[email protected]
    28 points · 6 months ago

    We’re hitting the end of free/cheap innovation. We can’t just make a one-time adjustment to training and make a permanent and substantially better product.

    What’s coming now are conventionally developed applications using LLM tech. o1 is trying to fact-check itself and use better sources.

    I’m pretty happy it’s slowing down right at this point.

    I’d like to see non-profit open systems for education. Let’s feed these things textbooks and lectures. Model the teaching after some of our best minds. Give individuals 1:1 time with a system 24x7 that they can just ask whatever they want, as often as they want, and have it keep track of what they know and teach them the things they need to advance.

    • @[email protected]
      1 point · 6 months ago

      I mean, isn’t that already included in the datasets? It’s pretty much a mix of everything.

      • @[email protected]
        1 point · 6 months ago

        Not everything in the dataset is retrievable. It’s very lossy. It’s also extremely noisy with a lot of training data that’s not education-worthy.

        I suspect they’d make a purpose-built model trained mainly on what they actually would want to teach especially from good educators.

    • @[email protected]
      3 points · 6 months ago

      That’s the job I need. I’ve spent my whole life trying to be Data from Star Trek. I’m ready to try to mentor and befriend a computer.

  • @[email protected]
    3 points · edited · 6 months ago

    *doesn’t read the article*

    This is true. I’ve already moved onto Gemini. GPT already feels dated by comparison.

    • ms.lane
      3 points · 6 months ago

      I don’t trust Gemini to get anything right; it’s just a million SEO monkeys.

      • @[email protected]
        0 points · 6 months ago

        To be fair, there is currently no AI that is reliable for fact checking.

        I like it because it generates faster, more detailed responses. Currently I’m using it extensively for resumes and cover letters, and for making my correspondence with potential employers sound more intelligent by having it rewrite my messages for me. It’s really good at that.

        It also helped me reposition my 5G mmWave antenna perfectly, literally doubling my home internet speeds. It also seems to be better at writing code, or at least better at understanding what I’m trying to get out of the code.

    • LiveLM
      2 points · 6 months ago

      Really?
      Last I tried the Gemini assistant on my phone, it wouldn’t even let me finish labeling my alarms before cutting me off

  • IamG0rb
    22 points · 6 months ago

    “In OpenAI’s early tests, scaling o1 showed diminishing returns: Linear improvements on a challenging math exam required exponentially growing computing power.”

    Sounds like most other drugs, too.

  • Null User Object
    10 points · edited · 6 months ago

    a million monkeys typing for a million years generating the works of Shakespeare

    FFS, it’s one monkey and infinite years. This is the second time I’ve seen someone make this mistake in an AI article in the past month or so.

    • @[email protected]
      3 points · edited · 6 months ago

      A million isn’t even close.
      There are about a few million characters in Shakespeare’s works. That means the chance of typing them randomly is, very conservatively, 1 in 26^1,000,000.

      If a monkey types a million characters a week, the number of “attempts” a million monkeys make in a million years is somewhere in the order of 52,000,000 × 1,000,000 × 1,000,000 = 5.2 × 10^19.

      The difference is hilariously big. Like, if we multiply both the monkey count and the number of years by the number of atoms in the observable universe, it still isn’t even getting close.

    • @[email protected]
      7 points · edited · 6 months ago

      FFS, it’s one monkey and infinite years.

      it is definitely not that long. we already had a monkey generating works of shakespeare. its name was shakespeare and it did not take longer than ~60 million years

    • L3ft_F13ld!
      0 points · 6 months ago

      I always thought it was a small team, not millions. But yeah, one monkey with infinite time makes sense.

      • @[email protected]
        3 points · 6 months ago

        The whole point is that one of the terms has to be infinite. But it also works with an infinite number of monkeys; one will almost surely start typing Hamlet right away.

        The interesting part is that it has already happened, since an ape already typed Hamlet; we call him Shakespeare. But at the same time, monkeys aren’t random letter generators; they are very intentional and conscious beings, and not truly random at all.

        • @[email protected]
          2 points · edited · 6 months ago

          one will almost surely start typing Hamlet right away

          This is guaranteed with infinite monkeys. In fact, they will begin typing every single document to have ever existed, along with every document that will exist, right from the start. Infinity is very, very large.

          • @[email protected]
            1 point · 6 months ago

            This is guaranteed with infinite monkeys.

            no, it is not. the chance of it happening will be really close to 100%, not 100% though. there is still small chance that all of the apes will start writing collected philosophical work of donald trump 😂

            • @[email protected]
              3 points · edited · 6 months ago

              There’s 100% chance that all of Shakespeare’s and all of Trump’s writings will be started immediately with infinite monkeys. All of every writing past, present, and future will be immediately started (also, in every language assuming they have access to infinite keyboards of other spelling systems). There are infinite monkeys, if one gets it wrong there infinite chances to get it right. One monkey will even write your entire biography, including events that have yet to happen, with perfect accuracy. Another will have written a full transcript of your internal monologue. Literally every single possible combination of letters/words will be written by infinite monkeys.

            • @[email protected]
              1 point · 6 months ago

              It’s not close to 100%, it is by formal definition 100%. It’s a calculus thing, when there’s a y value that depends on an x value. And y approaches 1 when x approaches infinity, then y = 1 when x = infinite.

              • @[email protected]
                2 points · edited · 6 months ago

                it is by formal definition 100%.

                it is not

                And y approaches 1 when x approaches infinity, then y = 1 when x = infinite.

                you weren’t paying attention in your calculus.

                y is never 1, because x is never infinite. if you could reach the infinity, it wouldn’t be infinity.

                for any n within the function’s domain: abs(value of y in n minus limit of y) is number bigger than zero. that is the definition of the limit. brush up on your definitions 😆

                • @[email protected]
                  1 point · edited · 6 months ago

                  Except, that’s in the real world of physics. In this mathematical/philosophical hypothetical metaphysical scenario, x is infinite. Thus the probability is 1. It doesn’t just approach infinite, it is infinite.

        • @[email protected]
          1 point · 6 months ago

          But it also works with an infinite number of monkeys; one will almost surely start typing Hamlet right away.

          Wouldn’t it even be not just one, but an infinite number of them that would start typing out Hamlet right away?

          • @[email protected]
            1 point · edited · 6 months ago

            In typical statistical mathematician fashion, it’s ambiguously “almost surely at least one”. Infinite is very large.

            • @[email protected]
              1 point · edited · 6 months ago

              That’s the thing though, infinity isn’t “large” - that is the wrong way to think about it, large implies a size or bounds - infinity is boundless. An infinity can contain an infinite number of other infinities within itself.

              Mathematically, if the monkeys are generating truly random sequences of letters, then an infinite number (and not just “at least one”) of them will by definition immediately start typing out Hamlet, and the probability of that is 100% (not “almost surely” edit: I was wrong on this part, 100% here does actually mean “almost surely”, see below). At the same time, every possible finite combination of letters will begin to be typed out as well, including every possible work of literature ever written, past, present or future, and each of those will begin to be typed out each by an infinite number of other monkeys, with 100% probability.

              • @[email protected]
                2 points · 6 months ago

                Almost surely: I’m quoting mathematicians. Because an infinite anything also includes events that exist but have probability zero. So, sure, the probability is 100% (more accurately, it tends to 1 as the number of monkeys approaches infinity), but that doesn’t mean it will occur. Just like 0% doesn’t mean it won’t, because, well, infinity.

                Calculus is a bitch.
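For reference, the standard statement both sides of this subthread are circling, as a sketch rather than a quotation:

```latex
% n independent monkeys, each typing Hamlet with probability p > 0:
P(\text{at least one succeeds}) = 1 - (1 - p)^n,
\qquad \lim_{n \to \infty} \bigl(1 - (1 - p)^n\bigr) = 1.
% "Almost surely" means probability 1. The complement (no monkey ever
% succeeds) has probability 0, yet it is not a logically impossible
% outcome -- which is exactly the distinction being argued here.
```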

  • @[email protected]
    10 points · 6 months ago

    Yesterday, alongside the release of the full o1, OpenAI announced a new premium tier of subscription to ChatGPT that enables users, for $200 a month (10 times the price of the current paid tier), to access a version of o1 that consumes even more computing power—money buys intelligence.

    We poors are going to have to organize and make best use of our human intelligence to form an effective resistance against corporate rule. Or we can see where this is going.

    • @[email protected]
      11 points · 6 months ago

      The thing I’m heartened by is that there is a fundamental misunderstanding of LLMs among the MBA/“leadership” group. They actually think these models are intelligent. I’ve heard people say, “Well, just ask the AI,” meaning ChatGPT. Anyone who actually does that and thinks they have a leg up is insane and kidding themselves. If they outsource their thinking and coding to an LLM, they might start getting ahead quickly, but they will then fall behind just as quickly, because the quality will be middling at best. They don’t understand how to best use the technology, and they will end up hanging themselves with it.

      At the end of the day, all AI is just stupid number tricks. They’re very fancy, impressive number tricks, but it’s just a number trick that just happens to be useful. Solely relying on AI will lead to the downfall of an organization.

      • @[email protected]
        8 points · 6 months ago

        If they outsource their thinking and coding to an LLM, they might start getting ahead quickly

        As a programmer, I have yet to see evidence that LLMs can even achieve that. So far, everything they produce is a mess that needs significant effort to fix before it even does what was originally asked of the LLM, unless we are talking about programs that have literally been written thousands of times already (like Hello World or Fibonacci generators, …).

        • @[email protected]
          3 points · 6 months ago

          I’m not a programmer, more like a data scientist, and I use LLMs all day. I write my shitty, pretty specific code, check that it works, and then pass it to the LLM asking for refactoring and optimization. Sometimes their version saves me 2 seconds on a 30-second script; other times it saves me 35 minutes on a 36-minute script. They’re also pretty good at helping you make graphics.

        • @[email protected]
          2 points · 6 months ago

          I find LLMs great for creating shorter snippets of code. They can also be great as a starting point, or for getting started with something you are not familiar with.

          • @[email protected]
            2 points · 6 months ago

            Even asking for an example on how to use a specific API has failed about 50% of the time, it tends to hallucinate entire parts of the API that don’t exist or even entire libraries that don’t exist.

        • @[email protected]
          4 points · 6 months ago

          I’ve seen a junior developer use it to more quickly get a start on things like boilerplate code, configuration, or just as a starting point for implementing an algorithm. It’s kind of like a souped-up version of piecing together Stack Overflow code snippets. Just like using SO, it needs tweaking, and someone who relies too much on either SO or AI will not develop the proper skills to do so.

  • @[email protected]
    8 points · 6 months ago

    I had a bunch of roofers hammering nails in with hammers.

    I bought a bunch of nail guns and then fired all the roofers. Now less roofing is being done! It is the end to the Era of nail guns! Everyone should just go back to hammers.

  • @[email protected]
    48 points · edited · 6 months ago

    It’s a great article IMO, worth the read.

    But :

    “This is back to a million monkeys typing for a million years generating the works of Shakespeare,”

    This is such a stupid analogy; the chance for said monkeys to accidentally match even a single full page is so slim, it’s practically zero.
    To type a simple word like “stupid”, a 6-letter word, there are 25⁶ possible combinations of letters, which is 244,140,625 combinations for that one simple word!
    A page has about 2,000 letters, giving roughly 7.586 × 10^2795 combinations. And that’s disregarding punctuation, capital letters, special characters and numbers.
    A million monkeys times a million years times 365 days times 24 hours times 60 minutes times 60 seconds times 10 random typos per second is only 315,360,000,000,000,000,000, or 3.15 × 10^20, attempts, assuming none are repeated. That’s only 21 digits, making it about 2,775 digits short of producing a single page even once.

    I’m so sick of seeing this analogy, because it is missing the point by an insane margin. It is extremely misleading, and completely misrepresenting getting something very complex right by chance.

    To generate a work of Shakespeare by chance is impossible in the lifespan of this universe. The mathematical likelihood is so staggeringly low that it’s considered impossible by AFAIK any scientific and mathematical standard.
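The comment's arithmetic checks out, and Python's arbitrary-precision integers make it easy to verify (using the comment's own 25-letter alphabet and 2,000-character page):

```python
# Possible 2000-character pages from a 25-letter alphabet (the comment's figures).
page_combinations = 25 ** 2000
digits = len(str(page_combinations))  # number of decimal digits in that count

# Characters typed in total: 1e6 monkeys x 1e6 years x seconds/year x 10 keys/s.
attempts = 10**6 * 10**6 * 365 * 24 * 60 * 60 * 10

print(digits)    # 2796, i.e. roughly 7.59 x 10^2795 combinations
print(attempts)  # 315360000000000000000, i.e. 3.1536 x 10^20
```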

    • Uriel238 [all pronouns]
      10 points · 6 months ago

      In the meantime weasel programs are very effective, and a better, if less known metaphor.

      Sadly the monkeys thought experiment is a much more well known example.

      Irrelevant nerd thought, back in the early nineties, my game development company was Monkey Mindworks based on a joke our (one) programmer made about his method of typing gibberish into the editor and then clearing the parts that didn’t resemble C# code.

        • Uriel238 [all pronouns]
          1 point · 6 months ago

          It may have been C++ or merely C with OOP features. I was writing the enemy-AI code (not to be confused with actual learning systems) in Visual Basic (and made some sweet pathfinding algorithms at the time), but took it too seriously and ended up breaking my brain.

          We had a publisher and it was going to be awesome and then Windows 95 came out and broke all our code.

    • @[email protected]
      25 points · edited · 6 months ago

      The quote is misquoting the analogy. It is an infinite number of monkeys.

      The point of the analogy is about randomness and infinity. Any page of gibberish is exactly as likely as a word-perfect page of Shakespeare, given equal weighting to the entry of characters. There are factors introduced by the behaviours of monkeys and the placement of keys, but I don’t think that is the point of the analogy.

      • pikl
        2 points · 6 months ago

        It was a big YouTube science video subject last week… Suddenly everyone has a real educated opinion on the matter with statistics and everything.

    • @[email protected]
      34 points · 6 months ago

      the actual analogy isn’t a million monkeys. you only need one monkey, but for an infinite amount of time. the probability isn’t practically zero, it’s one. that’s how infinity works. not only will it happen, but it will happen again, infinitely many times.

      • @[email protected]
        7 points · edited · 6 months ago

        That’s not true. Something can be infinite and still not contain every possibility. This is a common misconception.

        For instance, consider an infinite series of numbers created by adding an additional “1” to the end of the previous number.

        So we can start with 1. The next term is 11, followed by 111, then 1111, etc. The series is infinite since we can keep the pattern going forever.

        However at no point will you ever see a “2” in the sequence. The infinite series does not contain every possible digit.

        • @[email protected]
          15 points · edited · 6 months ago

          why do you keep changing the parameters? yeah, if you exclude the possibility of something happening it won’t happen. duh?

          that’s not what’s happening in the infinite monkey theorem. it’s random key presses. that means every character has an equal chance of being pressed.

          no one said the monkey would eventually start painting. or even type arabic words. it has a typewriter, presumably an English one. so the results will include every possible string of characters ever.

          it’s not a common misconception, you just don’t know what the theorem says at all.

          • @[email protected]
            5 points · 6 months ago

            so the results will include every possible string of characters ever.

            That’s just not true. One monkey could spend eternity pressing “a”. It doesn’t matter that he does it infinitely; he will never type a sentence.

            If the keystrokes are random that is just as likely as any other output.

            Being infinite does not guarantee every possible outcome.

            • @[email protected]
              7 points · 6 months ago

              no. you don’t understand infinity, and you don’t understand probability.

              if every keystroke is just as likely as any other keystroke, then each of them will be pressed an infinite number of times. that’s what just as likely means. that’s how random works.

              if the monkey could press a for an eternity, then by definition it’s not as likely as any other keystroke. you’re again changing the parameters to a monkey whose probability of pressing a is 1 and every other key is 0. that’s what your claim amounts to.

              for a monkey that presses the keys randomly, which means the probability of each key is equal, every string of characters will be typed. you can find the letter a typed a million times consecutively, and a billion times and a quadrillion times. not only will you find any number of consecutive keystrokes of every letter, but you will find it repeated an infinite number of times throughout.

              being infinite does guarantee every possible outcome. what you’re ruling out from infinity is literally impossible by definition.

            • @[email protected]
              8 points · 6 months ago

              Any possibility, no matter how small, becomes a certainty when dealing with infinity. You seem to fundamentally misunderstand this.

          • @[email protected]
            4 points · edited · 6 months ago

            if you exclude the possibility of something happening it won’t happen

            That’s exactly my point. Infinity can be constrained. It can be infinite yet also limited. If we can exclude something from infinity then we have shown that an infinite set does NOT necessarily include everything.

        • @[email protected]
          3 points · edited · 6 months ago

Anything with a nonzero probability will happen infinitely many times. The complete works of Shakespeare run to 5,132,954 characters drawn from 78 distinct ones. 1/78^5,132,954 is an incomprehensibly tiny number, millions of zeroes after the decimal point, but it is not zero. So the probability of it appearing after infinitely many trials is 1: lim(1 - (1 - P)^n) as n approaches infinity is 1 for any nonzero P.

          An outcome that you’d never see would be a character that isn’t on the keyboard.
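The limit in the comment above can be checked numerically. A minimal Python sketch, using an illustrative per-trial probability of one in a million (not Shakespeare's actual odds):

```python
# 1 - (1 - P)^n -> 1 as n grows, for any nonzero P: even a tiny
# per-trial success probability becomes near-certain given enough trials.
P = 1e-6  # illustrative per-trial probability
for n in (10**6, 10**7, 10**8):
    print(n, 1 - (1 - P) ** n)
```

At n = 10^6 the probability is roughly 0.63 (about 1 − 1/e); by n = 10^8 it is indistinguishable from 1.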

          • @[email protected]
            link
            fedilink
            English
4
6 months ago

            The original statement was that if something is infinite it must contain all possibilities. I showed one of many examples that do not, therefore the statement is not true. It’s a common misconception.

            Please use your big boy words to reply instead of calling something “dumb” for not understanding.

      • @[email protected]
        link
        fedilink
        English
        4
        edit-2
        6 months ago

Infinite monkeys and infinite time are equally stupid, because obviously you can’t have either; the universe is finite. And apart from that, it’s stupid because if you use an infinite random string, EVERYTHING is contained in it!

I’m sorry, it just annoys the hell out of me, because it’s a thought experiment, and it’s stupid to use it as an analogy or example to describe anything in the real world.

        • @[email protected]
          link
          fedilink
          English
10
6 months ago

          You wouldn’t need infinite time if you had infinite monkeys.

          An infinite number of them would produce it on the very first try!
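The "first try" point can be made quantitative without invoking infinity: with m monkeys each taking one shot, the chance that at least one succeeds is 1 - (1 - p)^m, which approaches 1 as m grows. A sketch with a hypothetical 5-letter target on a 26-key keyboard:

```python
# Probability that at least one of m monkeys types the target on its
# first attempt. Target length and keyboard size are illustrative.
p = (1 / 26) ** 5  # one specific 5-letter string, roughly 8.4e-8
for m in (10**7, 10**8, 10**9):
    print(m, 1 - (1 - p) ** m)
```

A billion monkeys make even this tiny p a near-certainty; an infinite number makes it a probability-1 event.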

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            6 months ago

            You wouldn’t need infinite time if you had infinite monkeys.

Obviously, but as I wrote, BOTH are impossible, so it’s irrelevant. I just didn’t think I’d have to explain WHY infinite monkeys are impossible, while some might think the universe is also infinite in time, which it is not.

I also already wrote that if you have an infinite string, everything is contained in it.
But even with infinite monkeys it’s not instant, because technically each monkey needs to finish a page.

But I understand what you mean, and that’s exactly why the theorem is so stupid IMO. You could also have one monkey and infinite time.
But both are still impossible.

When I say it’s stupid, I don’t mean as a thought experiment, which is its purpose. The stupid part is when people think they can use it as an analogy or example to describe something real.

            • @[email protected]
              link
              fedilink
              English
2
6 months ago

              It’s a theorem. It’s theoretical. This is like complaining about the 20 watermelon example being unrealistic: that’s not what it is about.

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                6 months ago

It’s OK that it exists; it’s a curious enough thought. I’d even go so far as to say it can serve an educational function for children.
I just don’t get why some people seem to think it’s relevant in so many situations where it clearly isn’t.

    • @[email protected]
      link
      fedilink
      English
      3
      edit-2
      6 months ago

      You are missing a piece of the analogy.

After each keypress the sizes of the keys change, so some become more likely to be hit than others.

How the sizes of the keys vary is the secret being sought, and learning it requires many, many more monkeys than just producing Shakespeare.

      • @[email protected]
        link
        fedilink
        English
3
6 months ago

        AI data analyst here. The above is an excellent extension of the analogy.

        Now, imagine another monkey controlling how the size of the keys vary. There might even be another monkey controlling that one.

        The analogy doesn’t seem to break until we start talking about the assumptions humans make for efficiency.
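The "keys change size" analogy can be sketched as weighted sampling with a feedback rule. Everything below is a toy illustration, not any real training algorithm: the update simply grows the weight of a key whenever it matches the target at the current position.

```python
import random

random.seed(0)
keys = "abcd"                      # a toy 4-key keyboard
weights = {k: 1.0 for k in keys}   # all keys start the same size
target = "dad"                     # illustrative target string

for step in range(2000):
    pos = step % len(target)
    # Sample a key with probability proportional to its current "size".
    pressed = random.choices(keys, weights=[weights[k] for k in keys])[0]
    if pressed == target[pos]:
        weights[pressed] += 0.05   # grow the key that helped

# Keys used by the target end up "bigger" than the unused ones.
print(sorted(weights, key=weights.get, reverse=True))
```

Real models adjust millions of weights by gradient descent rather than this additive bump, but the feedback shape (sample, compare, adjust) is the part of the analogy this captures.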

    • @[email protected]
      link
      fedilink
      English
11
6 months ago

      I hear you. My fucking dog keeps barking up stupid Mexican novellas and Korean pop. C’mon Rosco! Go get me the stick buddy! The stick! No! C’mon! The cat didn’t kill your father and then betray you for the chicken!!! Nobody likes your little dance that you do either, you do it because you sick in the brain for the Korean Ladies! Get otta here!

    • @[email protected]
      link
      fedilink
      English
3
6 months ago

      Don’t look for statistical precision in analogies. That’s why it’s called an analogy, not a calculation.

  • irotsoma
    link
    fedilink
    English
7
6 months ago

The monkeys typing and generating Shakespeare are supposed to show how absurd the concept of infinity is. It does not mean it would happen in years, or millions of years, or billions, or trillions. So unless the “AI” can step outside the flow of time, take an infinite amount of time, and also have a human or other actual intelligence review every single result to verify when it comes up with the right one… yeah, not real. This is what happens when we give power to people with no understanding of the problem, much less how to solve it: they come up with random ideas from random slivers of information. Maybe in an infinite amount of time a million CEOs could make a long-term profitable company.

  • @[email protected]
    link
    fedilink
    English
0
6 months ago

I mean, after reading the article, I’m still unsure how this makes ChatGPT any better at the things I’ve found it to be useful for: proofreading, generating high-level overviews of well-understood topics, and asking it goofy questions, for instance. If it is ever gonna be a long-term thing, “AI” needs to have useful features at a cost people are willing to pay, or be able to replace large numbers of workers without significant degradation in quality of work. This new model appears to be more expensive without being either of those other things and is therefore a less competitive product.

    • @[email protected]
      link
      fedilink
      English
1
6 months ago

The new model, now that it’s out of preview, is performing significantly less “thinking.”

  • @[email protected]
    link
    fedilink
    English
4
6 months ago

The only thing that stands out as a viable point is the energy consumption; everything else is word salad. As long as the average person isn’t being deprived of their energy needs, I see no problem. It’s the early stages; efficiency can come later in all sorts of ways.

    What interests me is that all this hype paves the way for intelligence that can interact with the physical world — advanced robots.

    And as far as ChatGPT is concerned, its usefulness is a mystery only to contrarians and counter-culture types.