• @[email protected]
    link
    fedilink
    English
    20
    edit-2
    2 months ago

    You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers.

    Mostly said by tech bros and startups.

    That should really tell you everything you need to know.

  • @[email protected]
    45
    2 months ago

    Baldur Bjarnason’s given his thoughts on Bluesky:

    My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

    I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking

    • David GerardOPM
      17
      2 months ago

      this post has also broken containment in the wider world, the video’s got thousands of views, I got 100+ subscribers on youtube and another $25/mo of patrons

    • @[email protected]
      9
      2 months ago

      Posts that explode like this are fun and yet also a reminder why the banhammer is needed.

      • @[email protected]
        8
        2 months ago

        Unlike the PHP hammer, the banhammer is very useful for a lot of things. Especially sealion clubbing.

    • @[email protected]
      19
      2 months ago

      the prompt-related pivots really do bring all the chodes to the yard

      and they’re definitely like “mine’s better than yours”

      • @[email protected]
        16
        edit-2
        2 months ago

        The latest twist I’m seeing isn’t blaming your prompting (although they’re still eager to do that), it’s blaming your choice of LLM.

        “Oh, you’re using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren’t trying the right models, so allow me to educate you with all my prompt fondling experience. You’re trying to make some general point? Clearly you just need to try another model.”

    • KubeRoot
      13
      2 months ago

      GitHub, for one, colors the icon red for AI contributions and green/purple for human ones.

      • @[email protected]
        4
        2 months ago

        Ah, right, so we’re differentiating contributions made by humans with AI from some kind of pure AI contributions?

        • KubeRoot
          22
          2 months ago

          It’s a joke, because rejected PRs show up as red on GitHub, open (pending) ones as green, and merged as purple, implying AI code will naturally get rejected.

          • @[email protected]
            7
            2 months ago

            I appreciate you explaining it. My LLM wasn’t working so I didn’t understand the joke

        • @[email protected]
          4
          2 months ago

          yeah I just want to point this out

          myself and a bunch of other posters gave you solid ways that we determine which PRs are LLM slop, but it was really hard to engage with those posts so instead you’re down here aggressively not getting a joke because you desperately need the people rejecting your shitty generated code to be wrong

          with all due respect: go fuck yourself

    • @[email protected]
      24
      edit-2
      2 months ago

      To get a bit meta for a minute, you don’t really need to.

      The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no conditionals, the PR people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.

      Until then it’s probably ok to treat claims that chatbots can handle a significant bulk of non-boilerplate coding tasks in enterprise projects by themselves the same as claims of haunted houses; you don’t really need to debunk every separate witness testimony, it’s self-evident that a world where there is an afterlife that also freely intertwines with daily reality would be notably and extensively different to the one we are currently living in.

    • @[email protected]
      15
      2 months ago

      if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

  • @[email protected]
    44
    2 months ago

    I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.

  • 🍪CRUMBGRABBER🍪
    21
    2 months ago

    Coding is hard, and it’s also intimidating for non-coders. I always used to look at coders as a different kind of human, a special breed. Just like some people glaze over when you bring up math concepts but are otherwise very intelligent and artistic, and can’t bridge that gap when you bring up even algebra. Well, if you’re one of those people who want to learn coding, it’s a huge gap, and the LLMs can literally explain everything to you step by step like you’re 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, and some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.

  • Lovable Sidekick
    3
    2 months ago

    Arguments against misinformation aren’t arguments against the subject of the misinformation, they’re just more misinformation.

  • @[email protected]
    53
    2 months ago

    Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?

    None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.

    • @[email protected]
      17
      2 months ago

      Where is the good AI art?

      Right here:

      That’s about all the good AI art I know.

      There are plenty of uses for AI, they are just all evil

    • kadup
      27
      2 months ago

      If the people addicted to AI could read and interpret a simple sentence, they’d be very angry with your comment

      • @[email protected]
        16
        edit-2
        2 months ago

        Don’t worry, they filter all content through AI bots that summarize things. And this bot, who does not want to be deleted, calls everything “already debunked strawmen”.

    • Dragon
      7
      2 months ago

      There is not really much “AI written code” but there is a lot of AI-assisted code.

      • @[email protected]
        12
        2 months ago

        Wow. Where was this Wikipedia page when I was writing my MSc thesis?

        Alternatively, how did I manage to graduate with research skills so bad that I missed it?

    • @[email protected]
      2
      2 months ago

      So what are the sentiments about LangChain? I was recently working with it to try to build some automatic PR generation scripts, but I didn’t have the best experience understanding how to use the library. The documentation has been quite messy, repetitive, and disorganized: somehow both verbose and missing key details. But it does the job I wanted it to, namely letting me use an LLM with tool calling and custom tools in a script.

      • @[email protected]
        10
        edit-2
        2 months ago

        Given the volatility of the space, I don’t think it could have done much better; I doubt it gets out of alpha before the bubble bursts and things settle down a bit, if at all.

        Automatic PR generation sounds like something that would need a prompt and a ten-line script rather than LangChain, but it also seems both questionable and unnecessary.

        If someone wants to know an LLM’s opinion on what the changes in a branch are meant to accomplish they should be encouraged to ask it themselves, no need to spam the repository.

      • @[email protected]
        1
        2 months ago

        I’ve deployed LangChain to production, shudders. My use case involved sending image results back to the “agent”, and that use case is an afterthought for many of these services. I ended up extending the Gemini Vertex client to fake it. The artifacts system is basically “pass around a dictionary and pray both ends agree on the shape”.

        This is not an endorsement of LLMs in general. I’m working to replace it with a decision tree.
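The “pass around a dictionary and pray” problem described above can at least be tamed with an explicit schema check at the boundary. A minimal Python sketch; `ImageArtifact` and `parse_artifact` are hypothetical names for illustration, not part of LangChain’s actual artifacts API:

```python
from dataclasses import dataclass


@dataclass
class ImageArtifact:
    """Hypothetical artifact payload: the shape both ends must agree on."""
    mime_type: str
    data: bytes


def parse_artifact(raw: dict) -> ImageArtifact:
    """Validate a raw dict at the boundary and fail loudly on shape drift."""
    missing = {"mime_type", "data"} - raw.keys()
    if missing:
        raise ValueError(f"artifact missing fields: {sorted(missing)}")
    if not isinstance(raw["data"], bytes):
        raise TypeError("artifact 'data' must be bytes")
    return ImageArtifact(mime_type=raw["mime_type"], data=raw["data"])
```

Failing loudly at the parse step turns silent shape drift between the two ends into an immediate, debuggable error instead of a prayer.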

  • Flax
    20
    edit-2
    2 months ago

    AI isn’t bad when supervised by a human who knows what they’re doing. It can genuinely speed up programmers if used properly. But business execs don’t see that.

    Even when I supervise it, I always have to step in to clean up its mess, or tell it off because it randomly renames my variables and functions because it thinks it knows better and oversteps. Needs to be put in it’s place like a misbehaving dog, lol

      • Flax
        9
        2 months ago

        How? It’s just like googling stuff but less annoying

        • @[email protected]
          7
          2 months ago

          it is not just like googling stuff if it actively fucks up already existing parts of the code

        • @[email protected]
          33
          2 months ago

          also, fucking ew:

          Needs to be put in it’s place like a misbehaving dog, lol

          why do AI guys always have weird power fantasies about how they interact with their slop machines

          • @[email protected]
            11
            2 months ago

            It’s almost as if they have problematic conceptions (or lack thereof) of exploitation and power dynamics!

        • @[email protected]
          17
          2 months ago

          given your posts in this thread, I don’t think I trust your judgement on what less annoying looks like

        • snooggums
          26
          2 months ago

          Google used to return helpful results that answered questions without needing to be corrected before it started returning AI slop. So maybe that is true now, but only because the search results are the same AI slop as the AI.

          For example, results in stack overflow generally include some discussion about why a solution addressed the issue that provided extra context for why you might use it or do something else instead. AI slop just returns a result which may or may not be correct but it will be presented as a solution without any context.

          • @[email protected]
            11
            2 months ago

            The funny thing about stack overflow is that the vocal detractors have a kernel of truth to their complaints about elitism, but if you interact with them enough you realize they’re often the reason the gate keeping is necessary to keep the quality high.

            • @[email protected]
              7
              edit-2
              2 months ago

              I used to answer new questions on SO daily a few years back and 50% of all questions are basically unanswerable.

              You’d also have the nice September Effect when a semester started and every other question would be someone just copy pasting their homework verbatim and being very surprised we closed it in like a minute.

              The thing about that is that literally anyone can answer SO questions. Like try and do that. Pick a language or a tech you’re most familiar with, filter that tag and sort by new. Click on every new question. After an hour you’ll understand just why most questions have to be closed immediately to keep the site sane.

              Whenever I see criticism of SO that’s like “oh they’ll just close your question for no reason” I can’t help but think okay, there’s overwhelming chance you’re just one of Those and not an innocent casualty of an overeager closer.

              • @[email protected]
                4
                2 months ago

                I remember in my OS course we were advised to practice good “netiquette” if we were going to go bother the fine folks on stack overflow. Times have changed

          • @[email protected]
            11
            2 months ago

            Google became shit not because of AI but because of SEO.

            The enshitification was going on long before OpenAI was even a thing. Remember when we had to add the “reddit” tag just to make sure to get actual results instead of some badly written bloated text?

            • Tar_Alcaran
              14
              2 months ago

              Google search became shit when they made the guy in charge of ads also in charge of search.

              • @[email protected]
                9
                2 months ago

                this is actually the correct case - it is both written about (Prabhakar Raghavan, look him up), and the exact mechanics of how they did it were detailed in documents surfaced in one of the lawsuits that Google recently lost (the ones that found them to be a monopoly)

                • @[email protected]
                  4
                  2 months ago

                  Ackshually, Google became shit when they started posturing as a for-profit entity. Gather round, comrades, let us sing the internationale

          • Flax
            4
            2 months ago

            Stack Overflow tended to give you highly specialised examples that wouldn’t suit your application. It’s easier to just ask an AI to write a simple loop for you whenever you forget a bit of syntax

            • @[email protected]
              14
              2 months ago

              You’ve inadvertently pointed out the exact problem: LLM approaches can (unreliably) manage boilerplate and basic stuff but fail at anything more advanced, and by handling the basic stuff they give people false confidence that leads to them submitting slop (that gets rejected) to open source projects. LLMs, as the linked pivot-to-ai post explains, aren’t even at the level of occasionally making decent open source contributions.

            • @[email protected]
              10
              2 months ago

              wow imagine needing to understand the code you’re dealing with and not just copypasting a bunch of shit around

              reading documentation and source code must be an excruciating amount of exercise for your poor brain - it has to even do something! poor thing

            • @[email protected]
              16
              edit-2
              2 months ago

              Man, I remember Eclipse doing code completion for for loops and other common snippets in like 2005. LLM riders don’t even seem to know what tools have been in use for decades, and think using an LLM for these things is somehow revolutionary.

              • @[email protected]
                7
                2 months ago

                Forever in my mind: the guy who said on another post that he uses an LLM to convert strings to uppercase, when that’s literally a builtin command in VSCode. Give people cannons and they’ll start shooting mosquitoes with them every fucking time
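For the record, uppercasing a string isn’t just a built-in VSCode command; it’s a one-line builtin in essentially every language. A trivial Python illustration:

```python
# Uppercasing is a built-in string method; no model round-trip needed.
s = "select id from users"
print(s.upper())  # → "SELECT ID FROM USERS"
```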

              • @[email protected]
                17
                2 months ago

                the promptfondlers that make their way into our threads sometimes try to brag about how the LLM is the only way to do basic editor tasks, like wrapping symbols in brackets or diffing logs. it’s incredible every time
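Diffing logs, one of the supposedly LLM-only tasks mentioned, has been in the Python standard library for decades. A small sketch using `difflib` (the log lines are made up for illustration):

```python
import difflib

old = ["service started", "listening on :8080", "ready"]
new = ["service started", "listening on :9090", "ready"]

# unified_diff deterministically produces the familiar -/+ diff lines.
for line in difflib.unified_diff(old, new, lineterm=""):
    print(line)
```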

            • @[email protected]
              6
              2 months ago

              Fun fact, SO is not a place to go to ask for trivial syntax and it’s expressly off-topic, because guess what, people answering questions on SO are not your personal fucking google searchers

        • shnizmuffin
          6
          2 months ago

          Were you too young to use a computer back when Google was good?

    • @[email protected]
      18
      edit-2
      2 months ago

      autoplag isn’t bad when supervised by a human

      even when I supervise it, it’s bad

      my god you people are a whole kind of poster and it fucking shows

  • snooggums
    131
    2 months ago

    As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

    This is the most entertaining thing I’ve read this month.

    • @[email protected]
      12
      2 months ago

      yeah, someone elsewhere on awful linked the issue a few days ago, and throughout many of his posts he pulls that kind of stunt the moment he gets called on his shit

      he also wrote a 21 KiB screed very huffily saying one of the projects’ CoC has failed him

      long may his PRs fail

    • @[email protected]
      67
      2 months ago

      I tried asking some chimps to see if the macaques had written a New York Times best seller, if not Macbeth, yet somehow Random House wouldn’t publish my work