• @[email protected]
        1 point · 9 months ago

        Programming used to be a challenge: you had a problem and you needed to solve it. You’d search the internet, Stack Overflow, test different chunks of code, read documentation, etc. Nowadays it’s simply splitting a problem into pieces and then copy-pasting.

  • Eager Eagle
    4 points · 9 months ago

    I like to use suggestions to feel superior when trash talking the generated code

    • Pennomi
      6 points · 9 months ago

      Flying cars exist, they’re just not cost effective. AFAICT there’s no GPT that is proficient at coding yet.

      • @[email protected]
        3 points · 9 months ago

        As far as I know, right now the main problem with flying cars is that they are nowhere near as idiot-proof as a normal car, and don’t really solve any transportation problem since most countries’ air regulations agencies would require them to exclusively take off and land in airports… Where you can usually find tons of planes that can go much further (and are much more cost effective, as you pointed out)

        • sepi
          12 points · edited · 9 months ago

          The more people using ChatGPT to generate low-quality code they don’t understand, the more job security and the greater salary I get.

  • @[email protected]
    3 points · 9 months ago

    Judging this article by its title (refuse to click): calling BS. ChatGPT has been a game changer for me personally.

  • @[email protected]
    14 points · 9 months ago

    Who are these people they keep asking this question over and over? And how are they not able to use such a simple tool to increase their productivity?

  • Destide
    11 points · 9 months ago

    It’s just fancier spell check and boilerplate generator

  • ggppjj
    27 points · 9 months ago

    It introduced me to the basics of C# in a way that traditional googling at my previous level of knowledge would’ve made difficult.

    I knew what I wanted to do and I didn’t know what was possible or how to ask without my question being closed as a duplicate with a link to an unhelpful post.

    In that regard, it’s very helpful. If I had already known the language well enough, I can see it being less helpful.

      • @[email protected]
        5 points · edited · 9 months ago

        Even with amazing documentation, it can be hard to find the thing you’re looking for if you don’t know the right phrasing or terminology yet. It’s easily the most usable thing I’ve seen come out of “AI”, which makes sense. Using a Language Model to parse language is a very literal application.

        • @[email protected]
          2 points · 9 months ago

          The person I replied to was talking about learning the basics of a language… This isn’t about searching for something specific, this is about reading the very basic introduction to a language before trying to Google your way through it. Avoiding the basic documentation is always a bad idea. Replacing it with the LLMed version of the original documentation probably even more so.

    • Boomer Humor Doomergod
      7 points · 9 months ago

      This is what I’ve used it for and it’s helped me learn, especially because it makes mistakes and I have to get them to work. In my case it was with Terraform and Ansible.

      • ggppjj
        4 points · edited · 9 months ago

        Haha, yeah. It really loves to refactor my code to “fix” bracket list initialization (e.g. List<string> stringList = [];) because it keeps forgetting that the syntax has been valid for a while now.

        Its newest favorite hang-up is incessantly suggesting null checks without first checking whether the property is even nullable. I think I’m almost at the point where it’s becoming less useful to me.

    • @[email protected]
      11 points · edited · 9 months ago

      Great for Coding 101 in a language I’m rusty with or otherwise unfamiliar.

      Absolutely useless when it comes time to optimize a complex series of functions or upgrade to a new version of the .NET library. All the “AI” you need is typically baked into IntelliSense or some equivalent anyway. We’ve had code-assist/advice features for over a decade and it’s always been mid. All that’s changed is the branding.

    • @[email protected]
      2 points · 9 months ago

      I learned bash thanks to AI!

      For years, all I did was copy and paste bash commands. I didn’t understand arguments, how to chain things, or how it all connected.
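
      For anyone in the same boat, the chaining mentioned here comes down to a handful of operators; a minimal sketch (the filenames and strings are invented for illustration):

      ```shell
      # Exit status drives chaining: && runs the next command only if the previous
      # one succeeded, || only if it failed, and | pipes stdout into the next stdin.
      cd "$(mktemp -d)" && echo "in a fresh temp dir"   # cd only runs if mktemp worked
      printf 'b\na\nc\n' > letters.txt                  # write three lines to a file
      sort letters.txt | head -n 1                      # pipe: sorts, then keeps line 1 ("a")
      grep -q zzz letters.txt || echo "no match"        # echo runs because grep fails
      ```

      Each command’s success or failure (its exit status) is what the && and || operators test, which is also why scripts can make decisions without explicit if statements.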

      • @[email protected]
        2 points · 9 months ago

        You do realize that a very thorough manual is but a man bash away? Perhaps it’s not the most accessible source available, but it makes up for that in completeness.

        • ggppjj
          3 points · edited · 9 months ago

          I believe accessibility is the part that makes LLMs helpful, when they are given an easy enough task to verify. Being able to ask a thing that resembles a human for what you need, instead of reading through possibly a textbook’s worth of documentation to figure out what is available and making it fit what you need, is fairly powerful.

          If it were actually capable of reasoning, I’d compare it to asking a linguist the origin of a word vs looking it up in a dictionary. I don’t think anyone disagrees that the dictionary would be more likely to be fully accurate, and also I personally would just prefer to ask the person who seemingly knows and, if I have reason to doubt, then go back and double-check.

          Here are the statistics for bash’s manpage, from wordcounter.net:

          • @[email protected]
            1 point · 9 months ago

            Perhaps LLMs can be used to gain some working vocabulary in a subject you aren’t familiar with. I’d say anything more than that is a gamble, since there’s no guarantee that hallucinations have not taken place. Remember, that to spot incorrect info, you need to already be well acquainted with the matter at hand, which is at the polar opposite of just starting to learn the basics.

            • ggppjj
              1 point · 9 months ago

              I do try to keep the “unknown unknowns” problem in mind when I use it, and I’ve been using it far less as I’ve latched on to how OOP actually works and built up the lexicon and my own preferences. I try to only ask it for high-level stuff that I can then use to search the wider (hopefully more human) internet more traditionally. I fully appreciate that it’s nothing more than an incredibly fancy auto-completion engine, and that the basic task of auto-complete just so happens to appear intelligent as it gets better and more complex while continuing to lack any form of real logical thought.

  • @[email protected]
    2 points · 9 months ago

    I’m fine with searching stack exchange. It’s much more useful. More info, more options, more understanding.

  • @[email protected]
    8 points · 9 months ago

    I partly disagree: complex algorithms are indeed a no, but for learning a new language it is awesome.

    Currently learning Rust and although it cannot solve everything, it does guide you with suggestions and usable code fragments.

    Highly recommended.

    • @[email protected]
      1 point · 9 months ago

      Is there anything it provided you so far that was better than the guidance from the Rust compiler errors themselves? Every error ends with “run this command for a tutorial on why this error happened and how to fix it” type of info. A lot of times the error will directly tell you how to fix it too.

    • @[email protected]
      1 point · 9 months ago

      Currently learning Rust and although it cannot solve everything, it does guide you with suggestions and usable code fragments.

      as do the compiler and the Rust book

  • @[email protected]
    19 points · 9 months ago

    I truly don’t understand the tendency of people to hate these kinds of tools. Honestly seems like an ego thing to me.

    • The Yungest Onion
      4 points · 9 months ago

      Typical lack of nuance on the Internet, sadly. Everything has to be Bad or Good. Black or White. AI is either The best thing ever™ or The worst thing ever™. No room for anything in between. Considering negative news generates more clicks, you can see why the media tend to take the latter approach.

      I also think much of the hate is just people jumping on the AI = bad bandwagon. Does it have issues? Absolutely. Is it perfect? Far from it. But the constant negativity has gotten tiresome. There’s a lot of fascinating discussion to be had around AI, especially in the art world, but God forbid you suggest it’s anything but responsible for the total collapse of civilisation as we know it…

      • @[email protected]
        8 points · 9 months ago

        If it didn’t significantly contribute to the cooking of all lifeforms on planet Earth, most of us would not mind. We would still deride it because of its untrustworthiness. However, it’s not just useless: it’s also harmful. That’s the core of the beef I (and a lot of other folks) have against the tech.

        • The Yungest Onion
          2 points · 9 months ago

          Oh for sure. How we regulate AI (including how we power it) is really important, definitely.

      • @[email protected]
        11 points · 9 months ago

        Like I told another person ITT, hiring terrible devs isn’t something you can blame on software.

          • @[email protected]
            4 points · 9 months ago

            I would argue that it’s obvious if someone doesn’t know how to use a tool to do their job, they aren’t great at their job to begin with.

            Your argument is to blame the tool and excuse the person who is awful with the tool.

              • @[email protected]
                3 points · 9 months ago

                Using a tool to speed up your work is not lazy. Using a tool stupidly is stupid. Anyone who thinks these tools are meant to replace humans using logic is misunderstanding them entirely.

                You remind me of some of my coworkers who would rather do the same mind-numbing task for hours every day than write a script that handles it. I judge them for thinking working smarter is “lazy” and I think it’s a fair judgement. I see them as the lazy ones. They’d rather not think more deeply about the scripting aspect because it’s hard. They’d rather zone out and mindlessly click, copy/paste, etc. I’d rather analyze and break down the problem so I can solve it once and then move on to something more interesting to solve.

                • @[email protected]
                  2 points · 9 months ago

                  They’d rather zone out and mindlessly click, copy/paste, etc. I’d rather analyze and break down the problem so I can solve it once and then move on to something more interesting to solve.

                  From what I’ve seen of AI code in my time using it, it often is an advanced form of copying and pasting. It frequently takes problems that could be better solved more efficiently with fewer lines of code or by generalizing the problem and does the (IMO evil) work of making the solution that used to require the most drudgery easy.

              • @[email protected]
                4 points · 9 months ago

                Some tools deserve blame. In the case of this one, you’re supposed to use it to automate certain things away, but that automation isn’t really reliable. If it has to be babysat to the extent that I’d argue it does, then it deserves some blame for being a crappy tool.

                If, for instance, getter and setter generating or refactor tools in IDEs routinely screwed up in the same ways, people would say that the tools were broken and that people shouldn’t use them. I don’t get how this is different just because of “AI”.

                • @[email protected]
                  2 points · 9 months ago

                  Okay, so if the tool seems counterproductive for you, it’s quite presumptuous to generalize that and assume it’s the same for everyone else too. I definitely do not have that experience.

    • @[email protected]
      12 points · 9 months ago

      Having to deal with pull requests defecated by “developers” who blindly copy code from chatgpt is a particularly annoying and depressing waste of time.

      At least back when they blindly copied code from stack overflow they had to read through the answers and comments and try to figure out which one fit their use case better and why, and maybe learn something… now they just assume the LLM is right (despite the fact that they asked the wrong question and even if they had asked the right one it’d’ve given the wrong answer) and call it a day; no brain activity or learning whatsoever.

      • @[email protected]
        13 points · 9 months ago

        That is not a problem with the ai software, that’s a problem with hiring morons who have zero experience.

        • @[email protected]
          14 points · 9 months ago

          No. LLMs are very good at scamming people into believing they’re giving correct answers. It’s practically the only thing they’re any good at.

          Don’t blame the victims, blame the scammers selling LLMs as anything other than fancy but useless toys.

          • jungle
            5 points · 9 months ago

            Did you get scammed by the LLM? If not, what’s the difference between you and the dev you mentioned?

            • @[email protected]
              5 points · edited · 9 months ago

              I was lucky enough to not have access to LLMs when I was learning to code.

              Plus, over the years I’ve developed a good thick protective shell (or callus) of cynicism, spite, distrust, and absolute seething hatred towards anything involving computers, which younger developers yet lack.

              • jungle
                3 points · 9 months ago

                Sorry, you misunderstood my comment, which was very badly worded.

                I meant to imply that you, an experienced developer, didn’t get “scammed” by the LLM, and that the difference between you and the dev you mentioned is that you know how to program.

                I was trying to make the point that the issue is not the LLM but the developer using it.

                • @[email protected]
                  3 points · 9 months ago

                  And I’m saying that I could have been that developer if I were twenty years younger.

                  They’re not bad developers, they just haven’t yet been hurt enough to develop protective mechanisms against scams like these.

                  They are not the problem. The scammers selling the LLMs as something they’re not are.

    • @[email protected]
      2 points · 9 months ago

      It’s really weird.

      I want to believe people aren’t this dumb, but I also don’t want to be crazy for suggesting such nonsensical sentiment is manufactured. Such is life in the disinformation age.

      Like, what are we going to do, tell all countries and fraudsters to stop using AI because it turns out it’s too much of a hassle?

      • @[email protected]
        2 points · 9 months ago

        We can’t do that, nobody’s saying we can. But this is an important reminder that the tech savior bros aren’t very different from the oil execs.

        And constant activism might hopefully achieve the goal of pushing the tech out of the mainstream, with its friend crypto, along other things not to be taken seriously anymore like flying cars and the Hyperloop.

        • @[email protected]
          1 point · 9 months ago

          You are speaking for everyone, so right away I don’t see this as an actual conversation, but a decree of fact by someone I know nothing about.

          What are you saying is an important reminder? This article?

          By constant activism, do you mean anything that occurs outside of Lemmy comments?

          Why would we not take LLMs seriously?

          • @[email protected]
            1 point · edited · 9 months ago

            I’m talking about people criticizing LLMs. I’m not a politician. But I’ve seen a few debates about LLMs on this platform, enough to know about the common complaints against ShitGPT. I’ve never seen anyone on this platform seriously arguing for a ban. We all know it’s stupid and that it will be ineffective, just like crackdowns on VPNs in authoritarian countries.

            The reminder is the tech itself. It’s yet another tech pushed by techbros to save the world that fails to deliver and is costing the rest of the planet dearly in the form of ludicrous energy consumption.

            And by activism, I mean stuff happening on Lemmy as well as outside (coworkers, friends, technical people at conferences/meetups). Like it or not, the consensus among techies in my big Canadian city is that, while the tech sure is interesting, it’s regarded with a lot of mistrust.

            You can take LLMs seriously if you’d like. But the evidence that the tech is unsound for software engineering keeps piling up. I’m fine with your skepticism. But I think the future will look bleaker and bleaker as time goes by. Not a week goes by without its lot of AI fuck-ups being reported in the press. This article is one of many examples.

            • @[email protected]
              1 point · 9 months ago

              There’s no particular fuck-up mentioned by this article.

              The company that conducted the study this article speculates on said these tools are getting rapidly better, and that it isn’t suggesting banning AI development assistants.

              Also, as quoted in the article, the use of these coding assistants is a process in and of itself. If you aren’t using AI carefully and iteratively, then you won’t get good results with current models. How we interact with models is as important as the models’ capability. The article quotes that if models are used well, a coder can be 2x or 3x faster. Not sure about that personally… seems optimistic, depending on what’s being developed.

              It seems like a good discussion with no obvious conclusion, given the infancy of the tech. Yet the article headline and accompanying image suggest it’s wreaking havoc.

              Reduction of complexity in this topic serves nobody. We should have the patience and impartiality to watch it develop and form opinions independently of commenter and headline sentiment. Groupthink has been particularly dumb on this topic from what I’ve seen.

              • @[email protected]
                1 point · 9 months ago

                Nobody talked about banning them, once again. I don’t want to do that. I want it to leave the mainstream, for environmental reasons first and foremost.

                The fuckup is, IDK, the false impression of productivity, and the 41% more bugs? That seems like a huge deal to me, even though I’d like to see this study being reproduced to draw real conclusions.

                This, with strawberries, Air Canada’s chatbot, the Three Mile Island stuff, the delaying of Google’s carbon-neutrality efforts, the cursed Google results telling you to add glue to your pizza, the distrust of the general public toward anything with an AI label on it, to mention just a few examples… It’s starting to become a lot.

                Even if you omit the ethical aspects of cooking the planet for a toy, the technology is wildly unsound. You seem to think it can get better, and I can respect that. But I’m very skeptical, and there’s a lot of people with the same opinion, even in tech.

    • @[email protected]
      4 points · 9 months ago

      Also, when a tool increases your productivity but your salary and paid time off don’t increase, it’s a tool that only benefits the overlords and as such deserves to be hated.

  • @[email protected]
    16 points · 9 months ago

    No shit. Senior devs have been saying this the whole time. AI, in its current form, for developers, is like handing a spatula to a gourmet chef. Yes it is useful to an extremely small degree, but that’s it…for now.

    • @[email protected]
      5 points · 9 months ago

      A convoluted spatula that sometimes accidentally cuts what you’re cooking in half instead of flipping it, and consumes as much power as the entirety of Japan.

    • @[email protected]
      1 point · 9 months ago

      It’s when you only have a pot and your fingers that a spatula is awesome. I could never be bothered to finish learning C and its awkward syntax. Even though I know how to code in some other languages, I just couldn’t write much C at all, and it was painful and slow. And too much time passed between attempts, so I forgot most of it in between. Now I can easily make simple C apps: I just explain the underlying logic, with an example of how I would do it in my preferred language, and piece by piece it quickly comes together, and I don’t have to remember whether the for loop needs brackets or parentheses, nor whether the line terminator is a colon or a semicolon.

      • @[email protected]
        1 point · 9 months ago

        The problem is that you’re still not learning, then. Maybe that’s your goal, and if so, no worries, but AI is currently a hammer that everyone seems to be falling over themselves finding nails for.

        All I can do is sigh and shake my head. This bubble will burst, and AI will still take decades to get to the point people think it is already at.

  • Greg Clarke
    45 points · 9 months ago

    Generative AI is great for loads of programming tasks like helping create regular expressions or syntax conversions between languages. The main issue I’ve seen in codebases that rely heavily on generative AI is that the “solutions” often fix today’s bug while making future debugging more difficult. Generative AI makes it easy to go fast in the wrong direction. Used right it’s a useful tool.
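
    As an illustration of the regex use case: the kind of pattern one might ask a model to draft, then verify by hand rather than trust blindly (the date strings and the pattern itself are made up for the example):

    ```shell
    # Hypothetical LLM-drafted pattern for ISO dates (YYYY-MM-DD); always test it yourself.
    pattern='^[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])$'
    echo "2024-09-30" | grep -Eq "$pattern" && echo valid     # month 09, day 30: accepted
    echo "2024-13-01" | grep -Eq "$pattern" || echo invalid   # month 13: rejected
    ```

    A couple of throwaway checks like these catch most of the “fixes today’s bug, breaks tomorrow’s input” failures the parent comment describes.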

  • @[email protected]
    1 point · 9 months ago

    I use it as a second-to-last resort, and in those times, it did work out. I had to test, verify, and make changes. Even so, I avoid using it.