We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

  • @[email protected]
    link
    fedilink
    English
    1613 days ago

    What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?

  • 4F6C69766572 · 23 points · 14 days ago

    So “deleting errors” means rewriting history, further fuckin’ up facts and definitely sowing hatred and misinformation. Just call it what it is: the techbro’s new reality. 🖕🏻

  • JackbyDev · 37 points · 14 days ago

    Training an AI model on AI output? Isn’t that like the one big no-no?

    • @[email protected]
      link
      fedilink
      English
      2414 days ago

      We have seen from his many other comments about this that he just wants a propaganda bot that regurgitates all of the right wing talking points. So that will definitely be easier to achieve if he does it that way.

      • Schadrach · 7 points · 13 days ago

        that he just wants a propaganda bot that regurgitates all of the right wing talking points.

        Then he has utterly failed with Grok. One of my new favorite pastimes is watching right wingers get angry that Grok won’t support their most obviously counterfactual bullshit and then proceed to try to argue it into saying something they can declare a win from.

    • @[email protected]
      link
      fedilink
      English
      113 days ago

      People used to think that, but it is actually not that bad.

      As with other iterations in machine learning, you can train on the model’s own output as long as that output is filtered or optimized first.

      That isn’t the dumb part about this.
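
      A minimal sketch of that kind of iteration, assuming a classic pseudo-labeling setup in which only the model’s high-confidence outputs are folded back into the training set. The dataset, classifier, and confidence threshold below are illustrative stand-ins, not anything Grok-specific:

      ```python
      # Illustrative pseudo-labeling loop: retrain on the model's own *filtered* output.
      # Dataset, model, and the 0.95 confidence threshold are arbitrary choices for the sketch.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      labeled, unlabeled = slice(0, 200), slice(200, None)   # pretend most labels are unknown

      model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

      for _ in range(3):                                     # a few self-training rounds
          probs = model.predict_proba(X[unlabeled])
          keep = probs.max(axis=1) > 0.95                    # keep only high-confidence outputs
          X_aug = np.vstack([X[labeled], X[unlabeled][keep]])
          y_aug = np.concatenate([y[labeled], probs.argmax(axis=1)[keep]])
          model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

      print("accuracy on the full set:", round(model.score(X, y), 3))
      ```

      The filter does all the work here; drop it and you are back in the model-collapse scenario mocked elsewhere in this thread.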

  • @[email protected]
    link
    fedilink
    English
    10514 days ago

    “If we take this 0.84 accuracy model and train another 0.84 accuracy model on it, that will make it a 1.68 accuracy model!”

    ~Fucking Dumbass

  • Flukas88 · 12 points · 14 days ago

    Just when you think he can’t be more of a wanker with an amoeba brain… he surprises you.

  • ViatorOmnium · 44 points · 14 days ago

    Because neural networks aren’t known to suffer from model collapse when using their output as training data. /s

    Most billionaires are mediocre sociopaths, but Elon Musk takes it to “Emperor’s New Clothes” levels of intellectual destitution.
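
    For anyone curious what that collapse looks like in the simplest possible setting, here is a toy numerical sketch (not an LLM, just a Gaussian repeatedly refit to samples from its own previous fit); the fitted variance drifting toward zero stands in for the diversity lost by recursive training:

    ```python
    # Toy model collapse: each "generation" fits a Gaussian to samples drawn from
    # the previous fit instead of from the real data. Sample size and generation
    # count are arbitrary; the downward drift of the variance is the point.
    import numpy as np

    rng = np.random.default_rng(0)
    real_data = rng.normal(loc=0.0, scale=1.0, size=20)   # small "real" dataset
    mu, sigma = real_data.mean(), real_data.std()

    for generation in range(1, 101):
        synthetic = rng.normal(mu, sigma, size=20)        # train only on model output
        mu, sigma = synthetic.mean(), synthetic.std()
        if generation % 20 == 0:
            print(f"generation {generation:3d}: fitted variance = {sigma**2:.5f}")
    ```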

  • FireWire400 · 27 points · 13 days ago (edited)

    How high on ketamine is he?

    3.5 (maybe we should call it 4)

    I think calling it 3.5 might already be too optimistic

  • @[email protected]
    link
    fedilink
    English
    1613 days ago

    First error to correct:

    We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding ~~missing information~~ errors and deleting ~~errors~~ information.

    • Lord Wiggle · 7 points · 13 days ago

      He should be locked up in a mental institution. Indefinitely.

  • @[email protected]
    link
    fedilink
    English
    5313 days ago

    Whatever. The next generation will have to learn to judge whether material is true or not by checking sources like Wikipedia or books by well-regarded authors.

    The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
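
    That description in miniature, with an invented six-word vocabulary and made-up conditional probabilities standing in for an LLM’s output layer; nothing in the sampling step knows or cares which continuation is factually true:

    ```python
    # Toy next-token sampler: the "model" is just a table of P(next token | current token).
    # Vocabulary and probabilities are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    vocab = ["the", "earth", "is", "flat", "round", "."]
    probs = {                       # each row sums to 1; "." ends generation
        "<s>":   [0.9, 0.1, 0.0, 0.0, 0.0, 0.0],
        "the":   [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
        "earth": [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
        "is":    [0.0, 0.0, 0.0, 0.3, 0.7, 0.0],   # "flat" vs "round" is just a weight
        "flat":  [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
        "round": [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
        ".":     None,
    }

    token, sentence = "<s>", []
    while probs[token] is not None:
        token = rng.choice(vocab, p=probs[token])   # pick the next word probabilistically
        sentence.append(token)
    print(" ".join(sentence))       # e.g. "the earth is round ." -- or "flat", with probability 0.3
    ```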

    • @[email protected]
      link
      fedilink
      English
      1913 days ago

      Thinking Wikipedia or other unbiased sources will still be available in a decade or so is wishful thinking. Once the digital stranglehold kicks in, it’ll be mandatory sign-in with a government-vetted identity provider, and your sources will be limited to what that government allows you to see. MMW.

      • @[email protected]
        link
        fedilink
        English
        2613 days ago

        Wikipedia is quite resilient - you can even put it on a USB drive. As long as you have a free operating system, there will always be ways to access it.

        • @[email protected]
          link
          fedilink
          English
          1013 days ago

          I keep a partial local copy of Wikipedia on my phone and backup device with an app called Kiwix. Great if you need access to certain items in remote areas with no access to the internet.

      • @[email protected]
        link
        fedilink
        English
        313 days ago

        Yes. There will be no websites, only AI and apps. You will be automatically logged in to the apps. Linux and Lemmy will be banned. We will be classed as hackers and criminals. We’ll probably have to build our own mesh network for communication, or access it from a secret location.

    • @[email protected]
      link
      fedilink
      English
      313 days ago

      The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context.

      That’s a massive oversimplification; it’s like saying humans don’t remember things, we just have neurons that fire based on context.

      LLMs do actually “know” things. They work with tokens and weights, which act like the nodes and edges of a high-dimensional graph, and the LLM traverses this graph as it processes inputs and generates new tokens.

      You can do brain surgery on an LLM and change what it knows; we have a very good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it’ll describe how you have a great view of the Colosseum from the top.

      The problem is that it’s very complicated and complex, and researchers are currently developing new math to let us do this in a useful way.
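
      A toy numpy version of that “single link” edit, treating one weight matrix as a key-to-value associative memory; the column labeled “Eiffel Tower” and the vector standing in for “Rome” are pure illustration, but published editing methods (ROME and its successors) apply the same rank-one idea inside a transformer’s MLP layers:

      ```python
      # Toy "brain surgery": a linear layer used as an associative memory, edited with a
      # rank-one update so that exactly one stored association changes. Names are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      d = 8
      keys = np.linalg.qr(rng.normal(size=(d, d)))[0]   # orthonormal "fact" keys, one per column
      values = rng.normal(size=(d, d))                  # the "facts" each key should retrieve

      W = values @ keys.T                               # now W @ keys[:, i] == values[:, i]

      eiffel_key = keys[:, 3]                           # pretend column 3 encodes "the Eiffel Tower is in ..."
      new_value = rng.normal(size=d)                    # a vector standing in for "... Rome"

      W_edited = W + np.outer(new_value - W @ eiffel_key, eiffel_key)   # the rank-one edit

      print("edited fact changed:", np.allclose(W_edited @ eiffel_key, new_value))
      print("other facts intact: ", np.allclose(W_edited @ keys[:, 0], W @ keys[:, 0]))
      ```

      With orthonormal keys the edit is exact and perfectly localized; real transformer keys are not orthogonal, which is where the extra math mentioned above comes in.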

      • Schadrach · 10 points · 13 days ago

        Wikipedia is not a trustworthy source of information for anything regarding contemporary politics or economics.

        Wikipedia presents the views of reliable sources on notable topics. The trick is what sources are considered “reliable” and what topics are “notable”, which is why it’s such a poor source of information for things like contemporary politics in particular.

      • Green Wizard · 20 points · 13 days ago

        Wikipedia gives a list of its sources; judge what you read based on that. Or just skip to the sources and read them instead.

        • @[email protected]
          link
          fedilink
          English
          313 days ago

          Just because Wikipedia offers a list of references doesn’t mean that those references reflect what knowledge is actually out there. Wikipedia is trying to be academically rigorous without any of the real work. A big part of doing academic research is reading articles and studies that are wrong or that prove the null hypothesis. That’s why we need experts, not just an AI regurgitating information. Wikipedia is useful if people understand its limitations; I think a lot of people don’t, though.

          • Green Wizard · 3 points · 13 days ago

            For sure. Wikipedia is for researching the most basic subjects, or as the first step of any research (it can still offer helpful sources): basic stuff, or a quick glance at something for conversation.

            • @[email protected]
              link
              fedilink
              English
              3
              edit-2
              13 days ago

              This very much depends on the subject, I suspect. For math or computer science, Wikipedia is an excellent source, and the credentials of the editors maintaining those areas are formidable (to say the least). Their explanations of the underlying mechanisms are, in my experience, a little variable in quality, but I haven’t found one that’s even close to outright wrong.