• @[email protected]
    link
    fedilink
    English
    2
    6 days ago

    You have to know what an AI can and can’t do to effectively use AI.

    Finding bugs is one of the worst things to “vibe code”: LLMs can’t debug programs (at least as far as I know), and if the repository is bigger than the context window they can’t even get an overview of the whole project. LLMs can only run the program and guess what the error is based on the error messages and user input. They can’t even control most programs.

    I’m not surprised by the results, but it’s hardly a fair assessment of the usefulness of AI.

    Also, I would rather wait for the LLM and see if it can fix the bug than hunt for bugs myself - hell, I could solve other problems while waiting for the LLM to finish. If it’s successful, great; if not, I can do it myself.
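    The context-window point above can be made concrete with a rough sketch: estimate how many tokens a repository’s source would occupy and compare that to a model’s limit. The ~4-characters-per-token heuristic, the 128k-token window, and the extension list are all assumptions for illustration, not properties of any particular model.

    ```python
    import os

    CHARS_PER_TOKEN = 4            # rough heuristic; real tokenizers vary
    CONTEXT_WINDOW_TOKENS = 128_000  # hypothetical model limit

    def estimate_repo_tokens(root, exts=(".py", ".js", ".ts", ".go", ".rs")):
        """Estimate how many tokens the source files under `root` would occupy."""
        total_chars = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(exts):
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, encoding="utf-8", errors="ignore") as f:
                            total_chars += len(f.read())
                    except OSError:
                        pass  # unreadable file; skip it
        return total_chars // CHARS_PER_TOKEN

    def fits_in_context(root):
        """True if the whole repo would (roughly) fit in one prompt."""
        return estimate_repo_tokens(root) <= CONTEXT_WINDOW_TOKENS
    ```

    For any non-trivial codebase this check fails, which is why an LLM sees only fragments of the project at a time.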

    • @[email protected]
      link
      fedilink
      English
      23
      5 days ago

      To be fair, you have to have a very high IQ to effectively use AI. The methodology is extremely subtle, and without a solid grasp of theoretical computer science, most of an LLM’s capabilities will go over a typical user’s head. There’s also the model’s nihilistic outlook, which is deftly woven into its training data - its internal architecture draws heavily from statistical mechanics, for instance. The true users understand this stuff; they have the intellectual capacity to truly appreciate the depths of these limitations, to realize that they’re not just bugs—they say something deep about an AI’s operational boundaries. As a consequence, people who dislike using AI for coding truly ARE idiots- of course they wouldn’t appreciate, for instance, the nuance in an LLM’s inability to debug a program, which itself is a cryptic reference to the halting problem. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as the LLM fails to get an overview of a repository larger than its context window. What fools… how I pity them. 😂 And yes, by the way, I DO have a favorite transformer architecture. And no, you cannot see it. It’s for the ladies’ eyes only- and even they have to demonstrate that they’re within 5 IQ points of my own (preferably lower) beforehand. Nothing personnel kid 😎

    • @[email protected]
      link
      fedilink
      English
      17
      6 days ago

      “This study that I didn’t read, which has a real methodology for evaluating LLM usefulness instead of just trusting what AI bros say about it, is wrong; they should just trust us, bros”, that’s you

      • @[email protected]
        link
        fedilink
        English
        5
        6 days ago

        It may be hard to believe but I am not a ‘tech bro’. Never traded crypto or NFTs. My workplace doesn’t even allow me to use any LLMs. As a software developer that’s a bit limiting but I don’t mind.

        But in my own time I have dabbled with AI and ‘vibe coding’ to see what the fuss is all about. Is it the co-programmer AI bros promise to the masses? No, or at least not currently. But it’s useful nonetheless if you know what you’re doing.

    • @[email protected]
      link
      fedilink
      English
      14
      5 days ago

      I’m not surprised by the results, but it’s hardly a fair assessment of the usefulness of AI.

      It’s a more than fair assessment of the claims of usefulness of AI, which are more or less “fire all your devs, this machine is better than them already.”

      • @[email protected]
        link
        fedilink
        English
        9
        edit-2
        5 days ago

        And the other “nuanced” take, common on my LinkedIn feed, is that people who learn how to use (useless) AI are going to replace everyone with their much-increased productive output.

        Even if AI becomes not so useless, the only people whose productivity will actually improve are the people who aren’t using it now (because they correctly notice that it’s a waste of time).