• dinckel
    86 · 10 months ago

    I’ll take “things business people don’t understand” for $100.

    No one hires software engineers to code. You’re hired to solve problems. All of this AI bullshit has zero capability to solve your problems, because it can only spit out what it’s already stolen from somewhere else

    • @[email protected]
      8 · 10 months ago

      I’ve worked with a few PMs over my 12-year career who think devs are really only there to code like trained monkeys.

      • @[email protected]
        3 · 10 months ago

        I’m at the point where what I work on requires such a depth of knowledge that I just manage my own projects. Doesn’t help that my work’s PM team consistently brings in new hires only to toss them on the difficult projects no one else is willing to take. They see a project is doomed to fail so they put their least skilled and newest person on it so the seniors don’t suffer any failures.

        Simplifying things to a level that is understandable for the PMs just leads to overlooked footguns. Trying to explain a small subset of the footguns just leads to them wildly misinterpreting what is going on, causing more work for me to sort out what terrible misconceptions they’ve blasted out to everyone else.

        If you can’t actually be a reliable force multiplier, or even someone I can rely on to get accurate information from other teams, just get out of my way please.

    • @[email protected]
      16 · 10 months ago

      It can also throw things against the wall with no concern for fitness-for-purpose. See “None pizza, left beef”.

  • @[email protected]
    9 · 10 months ago

    Can I join anyone’s band of AI server farm raiders 24 months from now? Anyone forming a group? I will bring my meat bicycle.

  • JackbyDev
    34 · 10 months ago

    Let’s assume this is true, just for discussion’s sake. Who’s going to be writing the prompts to get the code then? Surely someone who can understand the requirements, make sure the code functions, and then test it afterwards. That’s a developer.

    • @[email protected]
      9 · 10 months ago

      I don’t believe for a second that what he says is going to happen; this is just a play for funding. But if it were to happen, I’m pretty sure most companies would hire anything that moves for those jobs. There are many examples of companies offloading essential parts of their products externally.

      I’ve also seen companies hire tourism graduates (and other non-engineering majors), give them a 3–4 week programming course, slap a “software engineer” sticker on them, and send them off to work on products they have no experience with. Then it’s up to senior engineers to handle all that crap.

    • @[email protected]
      2 · 10 months ago

      No, going by them, they just talk to an AI voice and it will pop out a finished product.

    • William
      4 · 10 months ago

      I think that’s the point? They’re saying that those coders will turn into prompt engineers. They didn’t say they wouldn’t have a job, just that they wouldn’t be “coding”.

      Which I don’t believe for a minute. I could see it eventually, but it’s not “2 years” away by any stretch of the imagination.

      • @[email protected]
        2 · 10 months ago

        We’ll definitely be coding less, I think. Coding or programming is basically the “grunt work”. The real skill is understanding requirements and translating that into some product.

      • JackbyDev
        2 · 10 months ago

        Possibly. But… here’s the thing. I’ve dealt with “business rules” engines before at a job. I used a few different ones. The idea is always to make coding simpler so non-technical people can do it. In case you couldn’t tell from context, I’m a software engineer lol. I was the one writing and troubleshooting those tools. And it was harder than if it had just been written in a “normal” language like Java or whatever.

        I have a soft spot for this area, and there’s a non-zero chance this comment makes me obsess over them again for a bit lol. But the point I’m making is that “normal” coding was always better and more useful.

        It’s not a perfect comparison because LLMs output “real” code and not code that is “Scratch-like”, but I just don’t see it happening.

        I could see using LLMs exclusively over search engines (as a first place to look that is) in 2 years. But we’ll see.

      • @[email protected]
        2 · 10 months ago

        Looking at your examples, I have to object to putting Scratch in there.

        My kids use it in clubs, and it’s great for getting algorithmic basics down before the keyboard proficiency is there for real coding.

        • @[email protected]
          14 · 10 months ago

          It’s still code. What makes Scratch special is that it structurally rules out syntax errors while still looking quite like ordinary code. Node editors – I have a love–hate relationship with them. When you’re in, e.g., Blender throwing together a shader, it’s very, very nice to have easy visualisation of literally everything, but then you know you want to compute abs(a) + sin(b) + c^2, and yep, that’s five nodes right there, because apparently even the possibility of typing in a formula is too confusing for artists. Never mind that Blender allows you to input formulas (without variables, though) into any field that accepts a number.
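
          For the curious, the five-node decomposition is easy to see if you write each node as its own assignment; a quick illustrative sketch in Python (the input values are made up):

```python
import math

# Arbitrary example inputs
a, b, c = -2.0, 1.0, 3.0

# One line of ordinary code:
formula = abs(a) + math.sin(b) + c ** 2

# The node-editor equivalent: every operation becomes its own node.
n_abs = abs(a)            # node 1: absolute value
n_sin = math.sin(b)       # node 2: sine
n_sq = c ** 2             # node 3: power
n_add1 = n_abs + n_sin    # node 4: add
n_add2 = n_add1 + n_sq    # node 5: add

assert n_add2 == formula
```

          Same result either way; the difference is purely in how much clicking it takes.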

  • @[email protected]
    321 · 10 months ago

    The only people who would say this are people that don’t know programming.

    LLMs are not going to replace software devs.

    • Angry_Autist (he/him)
      8 · 10 months ago

      I don’t know if you noticed but most of the people making decisions in the industry aren’t programmers, they’re MBAs.

      • @[email protected]
        7 · 10 months ago

        Irrelevant, anyone who tries to replace their devs with LLMs will crash and burn. The lessons will be learned. But yes, many executives will make stupid ass decisions around this tech.

        • Angry_Autist (he/him)
          2 · 10 months ago

          It’s really sad how even techheads ignore how rapidly LLM coding has advanced in the last 3 years and what that means in the long run.

          Just look how rapidly voice recognition developed once Google started exploiting all of its users’ voice-to-text data. At one point, industry experts stated, ‘There will never be a general voice recognition system that is 90%+ accurate across all languages and dialects.’ Google made one within 4 years.

          The natural bounty of a no-salary programmer in a box is too great for this to ever stop being developed; the people with the money only want more money, and not paying devs is something they’ve wanted since the coding industry literally started.

          Yes, it’s terrible now, but it is also in its infancy. Like voice recognition in the late 90s, it is a novelty with many hiccoughs. That won’t be the case for long, and anyone who confidently thinks it can’t ever happen will be left without recourse when it does.

          But that’s not even the worst part of all this. I’m not going into black-box code, because all of you just argue stupid points when I do, but just so you know: human programming will be a thing of the past outside of hobbyists and ultra-secure systems within 20 years.

          Maybe sooner.

          • @[email protected]
            5 · 10 months ago

            Maybe in 20 years. Maybe. But this article is quoting CEOs saying 2 years, which is bullshit.

            I think it’s just as likely that in 20 years they’ll be crying because they scared enough people away from the career that there aren’t enough developers, when the magic GenAI that can write all code still doesn’t exist.

            • Angry_Autist (he/him)
              1 · 10 months ago

              Yeah, 2 years is bullshit, but with innovation, 10 years is still reasonable and fucking terrifying.

    • @[email protected]
      5 · 10 months ago

      The one thing that LLMs have done for me is to make summarizing and correlating data in documents really easy. Take 20 docs of notes about a project and have it summarize where they are at so I can get up to speed quickly. Works surprisingly well. I haven’t had luck with code requests.

    • @[email protected]
      138 · 10 months ago

      Wrong, this is also exactly what people selling LLMs to people who can’t code would say.

      • @[email protected]
        51 · 10 months ago

        It’s this. When boards and non-tech savvy managers start making decisions based on a slick slide deck and a few visuals, enough will bite that people will be laid off. It’s already happening.

        There may be a reckoning after, but wall street likes it when you cut too deep and then bounce back to the “right” (lower) headcount. Even if you’ve broken the company and they just don’t see the glide path.

        It’s gonna happen. I hope it’s rare. I’d argue it’s already happening, but I doubt enough people see it underpinning recent layoffs (yet).

    • @[email protected]
      24 · 10 months ago (edited)

      I can see the statement in the same way word processing displaced secretaries.

      There used to be two tiers in business. Those who wrote ideas/solutions and those who typed out those ideas into documents to be photocopied and faxed. Now the people who work on problems type their own words and email/slack/teams the information.

      In the same way, there are the programmers who design and solve the problems, and then the coders who take those outlines and make them actually compile.

      LLMs will disrupt the coders, leaving the problem solvers.

      There are still secretaries today. But there aren’t vast secretary pools in every business like 50 years ago.

      • @[email protected]
        15 · 10 months ago

        There is no reason to believe that LLMs will disrupt anyone any time soon. As it stands now, the level of workmanship is absolutely terrible, and there are more things to be done than anyone has enough labor to do. Making it so skilled professionals can do more just means more companies can produce work that is not complete garbage.

        Juniors produce progressively more directly usable work, with reasoning and autonomy, and are the only way you develop seniors. As it stands, LLMs do nothing with autonomy, and they do much of the work they do wrong. Even with improvements, they won’t in the near term actually be a coworker. They remain something that you, a skilled person, use like a wrench. In the hands of someone who knows nothing, they are worth nothing. Thinking this will replace any segment of workers is just wrong.

      • Badabinski
        12 · 10 months ago

        I wrote a comment about this several months ago on my old kbin.social account. That site is gone and I can’t seem to get a link to it, so I’m just going to repost it here since I feel it’s relevant. My kbin client doesn’t let me copy text posts directly, so I’ve had to use the Select feature of the android app switcher. Unfortunately, the comment didn’t emerge unscathed, and I lack the mental energy to fix it due to covid brain fog (EDIT: it appears that many uses of I were not preserved). The context of the old post was about layoffs, and it can be found here: https://kbin.earth/m/[email protected]/t/12147

        I want to offer my perspective on the AI thing from the point of view of a senior individual contributor at a larger company. Management loves the idea, but there will be a lot of developers fixing auto-generated code full of bad practices and mysterious bugs at any company that tries to lean on it instead of good devs. A large language model has no concept of good or bad, and it has no logic. It’ll happily generate string-templated SQL queries that are ripe for SQL injection. I’ve had to fix this myself. Things get even worse when you have to deal with a shit language like Bash that is absolutely full of God-awful footguns. Sometimes you have to use that wretched piece of trash language, and the scripts generated are horrific. Remember that time when Steam on Linux was effectively running rm -rf /* on people’s systems? I’ve had to fix that same type of issue multiple times at my workplace.
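
        A minimal sketch of the string-templated SQL problem, using Python’s built-in sqlite3; the table and the malicious input are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"

# String-templated: the kind of query generated code loves to emit.
# The quote in the input closes the literal early, so the WHERE clause
# becomes always-true and every row comes back.
unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # both users leak out

# Parameterized: the driver treats the input purely as data, never as SQL.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no match, as intended
```

        The second form is the one a reviewer ends up re-inserting wherever generated code reached for the first.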

        I think LLMs will genuinely transform parts of the software industry, but I absolutely do not think they’re going to stand in for competent developers in the near future. Maybe they can help junior developers who don’t have a good grasp on syntax and patterns and such. I’ve personally felt no need to use them, since I spend about 95% of my time on architecture, testing, and documentation.

        Now, do the higher-ups think the way that I do? Absolutely not. I’ve had senior management ask me about how I’m using AI tooling, and they always seem so disappointed when I explain why I personally don’t feel the need for it and what I feel its weaknesses are. Bossman sees it as a way to magically multiply IC efficiency for nothing, so I absolutely agree that it’s likely playing a part in at least some of these layoffs.

        Basically, I think LLMs can be helpful for some folks, but my experience is that the use of LLMs by junior developers absolutely increases the workload of senior developers. Senior developers using LLMs can experience a productivity bump, but only if they’re very critical of the output generated by the model. I am personally much faster just relying on traditional IDE autocomplete, since I don’t have to switch from “I’m writing code” mode to “I’m reviewing code” mode.

        • @[email protected]
          8 · 10 months ago

          The one colleague using AI at my company produced (CUDA) code with lots of memory leaks that required two expert developers to fix. LLMs produce code based on vibes instead of following language syntax and proper coding practices. Maybe that would be OK in a more forgiving high-level language, but I don’t trust them at all for low-level languages.

          • @[email protected]
            5 · 10 months ago

            I was trying to use it to write a program in Python for this macropad I bought, and I have yet to get anything usable out of it. It got me closer than I would have been by myself, and I don’t have a ton of coding experience, so its problems are probably partially on me, but everything it’s given me has required corrections from me to work.

          • @[email protected]
            4 · 10 months ago

            The same one they have now, perhaps with a steeper learning curve. The market for software developers is already saturated with disillusioned junior devs who attended a boot camp with promises of 6 figure salaries. Some of them did really well, but many others ran headlong into the fact that it takes a lot more passion than a boot camp to stand out as a junior dev.

            From what I understand, it’s rough out there for junior devs in certain sectors.

      • @[email protected]
        11 · 10 months ago

        The problem with this take is the assertion that LLMs are going to take the place of secretaries in your analogy. The reality is that replacing junior devs with LLMs is like replacing secretaries with a network of typewriter monkeys who throw sheets of paper at a drunk MBA who decides what gets faxed.

        • @[email protected]
          4 · 10 months ago

          I’m saying that devs will use LLMs in the same way they currently use word processing to send emails, instead of handing handwritten notes to a secretary to format, grammar/spell check, and type.

      • Optional
        7 · 10 months ago

        I thought by this point everyone would know how computers work.

        That, uh, did not happen.

      • @[email protected]
        22 · 10 months ago

        It’ll have to improve by an order of magnitude for that effect. Right now it’s basically an improved Stack Overflow.

        • @[email protected]
          5 · 10 months ago

          …and only sometimes improved. And it’ll stop improving if people stop using Stack Overflow, since that’s one of the main places it’s mined for data.

    • @[email protected]
      41 · 10 months ago

      AI as a general concept probably will at some point. But LLMs have all but reached the end of the line and they’re not nearly smart enough.

      • @[email protected]
        15 · 10 months ago

        LLMs have already reached the end of the line 🤔

        I don’t believe that. At least from an implementation perspective we’re extremely early on, and I don’t see why the tech itself can’t be improved either.

        Maybe its current iteration has hit a wall, but I don’t think anyone can really say what the future holds for it.

        • @[email protected]
          1 · 10 months ago (edited)

          we’re extremely early on

          Oh really! The analysis has been established since the ’80s. It’s so far from early on that the statement is comical.

          • Todd Bonzalez
            4 · 10 months ago

            Transformers, the foundation of modern “AI”, were proposed in 2017. Whatever we called “AI” and “Machine Learning” before that was mostly convolutional networks inspired by the ’80s “Neocognitron”, which is nowhere near as impressive.

            The most advanced thing a Convolutional network ever accomplished was DeepDream, and visual Generative AI has skyrocketed in the 10 years since then. Anyone looking at this situation who believes that we have hit bedrock is delusional.

            From DeepDream to Midjourney in 10 years is incredible. The next 10 years are going to be very weird.

        • @[email protected]
          6 · 10 months ago

          I’m not trained in formal computer science, so I’m unable to evaluate the quality of this paper’s argument, but there’s a preprint out that claims to prove that current computing architectures will never be able to advance to AGI, and that rather than accelerating, improvements are only going to slow down due to the exponential increase in resources necessary for any incremental advancements (because it’s an NP-hard problem). That doesn’t prove LLMs are end of the line, but it does suggest that additional improvements are likely to be marginal.

          Reclaiming AI as a theoretical tool for cognitive science

        • @[email protected]
          25 · 10 months ago (edited)

          LLMs have been around since roughly 2017 (a comment below corrected me: the Attention paper was 2017). While scaling them up has improved their performance and capabilities, there are fundamental limitations to the actual approach. Behind the scenes, LLMs (even multimodal ones like GPT-4) are trying to predict what is most expected; while that can be powerful, it means they can never innovate or be truth systems.

          For years we used things like tf-idf to vectorize words, then embeddings, now transformers (souped-up embeddings). Each approach has its limits, and LLMs are no different. The results we see now are surprisingly good, but they don’t overcome the baseline limitations of the underlying model.
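
          To make that progression concrete, tf-idf is simple enough to sketch from scratch; the toy corpus and function here are illustrative, not from any particular library:

```python
import math
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]

def tf_idf(word, doc, docs):
    # Term frequency: how prominent the word is within this document.
    tf = Counter(doc)[word] / len(doc)
    # Inverse document frequency: words found in fewer documents
    # are treated as more informative.
    containing = sum(1 for d in docs if word in d)
    idf = math.log(len(docs) / (1 + containing))
    return tf * idf

# "the" appears in every document, so it scores 0.0;
# "cat" is distinctive to one document, so it scores higher.
print(tf_idf("the", corpus[0], corpus))
print(tf_idf("cat", corpus[0], corpus))
```

          Embeddings and transformers swap this kind of counting for learned vectors, but each generation of the approach still carries its own ceiling.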

          • Todd Bonzalez
            7 · 10 months ago

            The “Attention Is All You Need” paper that birthed modern AI came out in 2017. Before Transformers, “LLMs” were pretty much just Markov chains and statistical language models.

      • Optional
        6 · 10 months ago

        “at some point” being like 400 years in the future? Sure.

        Ok that’s probably a little bit of an exaggeration. 250 years.

    • @[email protected]
      39 · 10 months ago (edited)

      I’m pretty sure I could write a bot right now that just regurgitates pop science bullshit and how it relates to Line Go Up business philosophy.

      Edit: did it, thanks ChatJippity

      import sys

      def main():
          # Check that exactly one argument was provided
          if len(sys.argv) != 2:
              print("Usage: python script.py <PopScienceBS>")
              sys.exit(1)
          # Get the input from the command line
          PopScienceBS = sys.argv[1]
          # Assign the input variable to the output variable
          LineGoUp = PopScienceBS
          # Print the output
          print(f"Line Go Up if we do: {LineGoUp}")

      if __name__ == "__main__":
          main()
      
  • @[email protected]
    18 · 10 months ago

    Says the person who is primarily paid in Amazon stock, wants to see that stock price rise for their own benefit, and won’t be in that job two years from now to be held accountable. Also, someone who has never written a line of code. Yeah… OK. 🤮

  • @[email protected]
    36 · 10 months ago

    Sure, Microsoft is happy to let their AIs scan everyone else’s code, but is anyone aware of any software houses letting AIs scan their in-house code?

    Any lawyer worth their salt won’t let AIs anywhere near their company’s proprietary code until they are positive that the AI isn’t going to be blabbing the code out to every one of their competitors.

    But of course, IANAL.

    • @[email protected]
      14 · 10 months ago

      The LLMs they train on their code will only be accessible internally. They won’t leak their own intellectual property.

      • Jack
        4 · 10 months ago

        Will that not be more expensive than having developers?

        • Echo Dot
          5 · 10 months ago

          Yeah which is why this is a dumb statement from Amazon. But then again I don’t expect C-suite managers to really understand the intricacies of their own companies.

        • androogee (they/she)
          4 · 10 months ago

          Of course not. It will be more expensive and they’ll still have to pay developers to figure out what’s wrong with their AI code.

        • @[email protected]
          2 · 10 months ago

          Possibly. It’s hard to know without seeing the numbers and assessing output quality and volume.

          Also, it’s not unheard of for some bigwig to waste millions of the company’s money on some project they fancy (billions, if they happen to be Elon).

        • @[email protected]
          3 · 10 months ago

          Depends on the use case. Training local LLMs is a lot cheaper after GaLore, and there are ways to get useful local models with only a moderate amount of effort; see e.g. augmentoolkit.

          This may or may not be practical in many use cases.

          24 months is pretty generous, but no doubt there will be significantly less demand for junior developers in the near future.

      • @[email protected]
        2 · 10 months ago (edited)

        If only we had an overarching structure that everyone in society has agreed exists for the purposes of enforcing laws and regulating things. Something that governs people living in a region… Maybe then they could be compelled to show exactly what they’re using, and what those models are being trained with.

        Oh well.

  • @[email protected]
    45 · 10 months ago

    A company I used to work for outsourced most of their coding to a company in India. I say most because when the code came back, the internal teams always had to put a bunch of work in to fix it and integrate it with existing systems. I imagine that, if anything, LLMs will just take the place of that overseas coding farm. The code they spit out will still need to be fixed and modified so it works with your existing systems, and that work is going to require programmers.

    • @[email protected]
      24 · 10 months ago

      So instead of spending 1 day writing good code, we’ll be spending a week debugging shitty code. Great.

  • @[email protected]
    62 · 10 months ago

    This will be used as an excuse to try to drive down wages while demanding more responsibilities from developers, even though this is absolute bullshit. However, if they actually follow through with their delusions and push to build platforms on AI-generated trash code, then soon after they’ll have to hire people to fix such messes.

  • Aurelius
    14 · 10 months ago

    All the manufacturers of mechanical keyboards just cried 🥺

    • geogle
      10 · 10 months ago

      I’m sure they’ll hold strong to that prediction in 24 months. It’s just 24 more months away.

        • Dizzy Devil Ducky
          5 · 10 months ago (edited)

          I remember, a little over a decade ago when I was still in public school, hearing that super-advanced self-driving cars were coming soon, yet we’re hardly any closer to that goal (if you don’t count the incidents of Tesla vehicles running red lights).

          • @[email protected]
            3 · 10 months ago

            In Phoenix you can take a Waymo (self driving taxi) just like an Uber. They have tons of ads and they’re everywhere on the roads.

            • @[email protected]
              2 · 10 months ago

              I am in Phoenix and just took one to the airport. First time riding in a Waymo. It was uncannily good and much more confident than the FSD Tesla I’ve ridden in a few times.

              • @[email protected]
                2 · 10 months ago

                I haven’t taken one yet, but several friends have. Besides being generally good, one of the best parts is that, unlike Uber, there’s no chance you get a weird driver who wants to talk to you the whole ride.

          • @[email protected]
            3 · 10 months ago

            A subscription to Popular Science magazine through most of my teen years did wonders for my skepticism.

            We should all be switched to hydrogen fuel by now, riding public transport lines with per-person carriages that can split off from the main line seamlessly at speed to take side routes to your individual destination, then automatically rejoin the main line when you’re done. They were talking about all of that pre-2010.

            • Dizzy Devil Ducky
              1 · 10 months ago

              I think I remember the hydrogen fuel thing.

              Also, fuck Popular Science for making me think there was gonna be a zombie apocalypse due to some drug that turns you into a zombie.

        • geogle
          5 · 10 months ago

          This is the year of the Linux Desktop

    • lurch (he/him)
      8 · 10 months ago

      15 years at least, probably more like 30. And it will be questionable, because it will use a lot of energy for every query and a lot of resources for cooling.

      • @[email protected]
        3 · 10 months ago

        it will use a lot of energy for every query and a lot of resources for cooling

        Well, so do coders. Coffee can be quite taxing on the environment, as can air conditioning!

    • @[email protected]
      8 · 10 months ago

      That’s probably the amount of time remaining before they move on to selling the next tech buzz word to some suckers.

      • @[email protected]
        3 · 10 months ago

        And just like that, they’ll forget about these previous statements as well.

        I fear Elon Musk’s broken promises method is being admired and copied.

  • @[email protected]OP
    21 · 10 months ago

    I’m curious about what the “upskilling” is supposed to look like, and what’s meant by the statement that most execs won’t hire a developer without AI skills. Is the idea that everyone needs to know how to put ML models together and train them? Or is it just that everyone employable will need to be able to work with them? There’s a big difference.

    • @[email protected]
      2 · 10 months ago

      We will all be given old-school Casio calculators and sent to crunch numbers in the bitcoin mines.