Now I understand why each Windows 11 update introduces more bugs than ever

    • @[email protected]
      link
      fedilink
      English
      14 months ago

      They’re lying about using AI to write software. They’ve probably required all their programmers to have an AI plugin installed, and are thus counting any code those programmers write as “written by AI”, and then counting any minor edit to existing code as the entire thing being “written by AI”.

      The software is bad because it’s written to serve the infinite growth imperative. They claim they’re writing code with AI because that being true is the only hope they have of achieving that infinite growth. It’s a con, it’s a cult; they’re extracting as much value as they can before everything falls apart.

  • @[email protected]
    link
    fedilink
    English
    204 months ago

    This makes sense, and it would explain the mainline Windows versioning and probably the Xbox versioning too!

    Microsoft to AI: List all the integers from one to eleven.

    AI: 3. 95. 98. 2000. XP. Vista. 7. 8. 10. 11.

  • Dr. Moose · 5 points · 4 months ago

    They include GitHub Copilot tab completions, which are often as little as a single dot. Same thing they’ve done with all the GitHub Copilot stats.

  • @[email protected]
    link
    fedilink
    English
    7
    edit-2
    4 months ago

    Fun fact: Nadella was replaced with an AI agent a couple of months ago and nobody has noticed yet. “Copilot, while I’m away, generate bs on AI adoption and fire a bunch of employees, ok?”

  • @[email protected]
    link
    fedilink
    English
    234 months ago

    Code written by software doesn’t mean AI, unless you ignore compilers.

    Executives lie to boost profits and to justify their decisions. I doubt MS execs even know how much of their code is AI-generated, just like the ad-sales company exec they were talking to in the article.

  • @[email protected]
    link
    fedilink
    English
    64 months ago

    If this were true there would be massive data breaches. AI is really bad at keeping private keys private. Not to mention the default credentials it would use, because it doesn’t have the common sense to change them.

    • @[email protected]
      link
      fedilink
      English
      4
      edit-2
      4 months ago

      I disagree. It feels like you’re making this assumption from the point of view that people using AI to develop turn their whole brain off and let the AI take the wheel. Any dev I know using AI uses it as a time-saving measure, i.e. advanced autocomplete, or to assist with troubleshooting as a form of advanced search engine. Also, you would have no need to give the AI the actual key itself; at most you would give it the name of the variable the key is saved in.
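
      For illustration, here’s a minimal Go sketch of that idea. The PAYMENT_API_KEY name is made up; the point is just that an assistant only ever sees an identifier, never the secret itself:

      package main

      import (
          "fmt"
          "os"
      )

      func main() {
          // Hypothetical variable name: the secret lives outside the code,
          // so an AI assistant only ever sees the identifier, never the key.
          apiKey := os.Getenv("PAYMENT_API_KEY")
          if apiKey == "" {
              fmt.Fprintln(os.Stderr, "PAYMENT_API_KEY is not set")
              os.Exit(1)
          }
          fmt.Println("key loaded, length:", len(apiKey))
      }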

      • @[email protected]
        link
        fedilink
        English
        54 months ago

        I find it hard to believe because I work at an adjacent company that has made similar claims, and it is complete bullshit.

        I do think there is some amount of “AI provided a smart completion and the developer hit ‘tab’ to take the changes” equalling “this code was written by AI” in some metrics that go to the execs. And since the execs mandated high AI use, everyone is fine just saying they have high AI use, regardless of how true it is.

    • @[email protected]
      link
      fedilink
      English
      34 months ago

      Likely a lot of manpower was focused on that, and/or the employees just wrote their own code and then lied about the AI use (I’ve heard a lot about that happening).

  • @[email protected]
    link
    fedilink
    English
    3784 months ago

    Horseshit.

    The current state of code generated by AI is sketchy at best. I often get plainly wrong answers because the model tries to derive an answer instead of knowing it. It comes up with calls to functions and properties that just do not exist.

    “You are right, I made a mistake. Here is a better answer.” Continues to give wrong answers.

    Apart from that, apps that are glued together from AI-generated code are not maintainable at all. What if there is a bug somewhere and you do not comprehend what is actually happening? Ask AI to fix it? Yeah, good luck with that.

    I do use AI for simple questions, and it works fairly well for that, but this claim by MS is just marketing bullshit.

    • @[email protected]
      link
      fedilink
      English
      1194 months ago

      This ^

      “20%-30% of code inside the company’s repositories”

      Now, if they had said “20%-30% of code written in the past 6 months…” I might buy that.

      The repositories are going to have all the current codebase, likely going back years now. AI-generated code is barely viable at this point, and it has only been viable at all pretty recently.

      No way a third of the entire current codebase is AI.

      • @[email protected]
        link
        fedilink
        English
        544 months ago

        Even 20% of new code would be a stretch, unless they count every first iteration of code written by AI that later has to be replaced by a human because it was plain wrong.

    • @[email protected]
      link
      fedilink
      English
      304 months ago

      They say that because they are selling it.

      And yeah, my experience is the same. The most frustrating part is when writing typed Python and it gives answers that are clearly incorrect, making up attributes that don’t even exist, etc.

      • Balder · 11 points · 4 months ago

        My brother said his superior asked him to use more AI autocomplete so that they could brag to investors that X percent of the company’s code is written by AI. That told me everything about the current state of this bullshit.

    • @[email protected]
      link
      fedilink
      English
      174 months ago

      I didn’t RTA, but if they mean ALL code at MS, that just can’t be true. They have legacy stuff going back decades, well beyond just their Windows platform. There’s no way 30% of all their code has been replaced or newly created by AI.

    • @[email protected]
      link
      fedilink
      English
      114 months ago

      “You are right, I made a mistake. Here is a better answer.” Continues to give wrong answers

      The exact same wrong answer. Copilot is especially bad for that. I’ve practically given up on using it outside of VS Code because the actual Copilot AI is dog-shit stupid.

    • fmstrat · 5 points · 4 months ago

      “Auto complete generated 30% of characters”

      Fixed it.

    • @[email protected]
      link
      fedilink
      English
      4
      edit-2
      4 months ago

      “You are right, I made a mistake. Here is a better answer.” Continues to give wrong answers.

      To be fair, the AI’s not wrong. It’s probably better, but just a teeny tiny bit so.

      Honestly, AI is like a genie: whatever you come up with, it’ll just butcher and misinterpret it until you start questioning both your own sanity and the semantics of language. Good thing these genies have no wish limit; bad thing that they murder rainforests while generating their non-sequitur replies.

    • nickwitha_k (he/him) · 1 point · 4 months ago

      I do use AI for simple questions, and it works fairly well for that, but this claim by MS is just marketing bullshit.

      This is my experience. It can be useful for simple things that used to be easy to find with a web search before AI slop broke that. For example, I was having trouble getting a simple CGO program for a POC to communicate with the main Go process. This should have been easily solvable with documentation, but the CGO docs are pretty bad and sample code was near impossible to find due to AI slop in the search results. GPT was able to provide the needed sample code to unblock me.
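
      For anyone else hunting for the same thing, a minimal cgo round trip looks roughly like this (the toy c_greeting function is made up purely for illustration):

      package main

      /*
      #include <stdlib.h>
      #include <string.h>

      // Toy C function: allocates a message for the Go side to read and free.
      static char* c_greeting(void) {
          const char* s = "hello from the C side";
          char* out = (char*)malloc(strlen(s) + 1);
          strcpy(out, s);
          return out;
      }
      */
      import "C"

      import (
          "fmt"
          "unsafe"
      )

      func main() {
          // Call into C from the Go process.
          msg := C.c_greeting()
          // Copy the C string into Go-managed memory, then free the C allocation.
          goMsg := C.GoString(msg)
          C.free(unsafe.Pointer(msg))
          fmt.Println("Go received:", goMsg)
      }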

  • @[email protected]
    link
    fedilink
    English
    4
    edit-2
    4 months ago

    It would be interesting to see how they measured that metric. Are they tagging individual lines as AI-generated?

    What those lines are would also be interesting: AI as autocomplete is less dangerous than complete generation, but probably also less useful.

    • @[email protected]
      link
      fedilink
      English
      34 months ago

      AI as auto complete is exactly what I was thinking.

      I’ve seen lots of cases where AI appears as an autocomplete suggestion and I can just hit <TAB> and it finishes the current line. It’s essentially filling in the boilerplate text (see the sketch at the end of this comment). Heck, in some cases it isn’t even right, but it’s close enough that I can change a few values.

      I also want to point out that this isn’t particularly new technology. It existed before AI. It has perhaps expanded more, but it isn’t a revolutionary improvement, it’s an incremental one. So when we talk about usefulness, I think it is actually more useful than full generation.

      Now if it could do all the magic planning and thinking, that would be more useful, but we’re not there yet.
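
      For what it’s worth, the kind of line it finishes is usually something like the error-handling block below; loadConfig is a made-up helper, just to show the shape of the boilerplate:

      package main

      import (
          "fmt"
          "os"
      )

      // loadConfig is a made-up example; after typing "if err" the suggestion
      // typically completes the whole wrap-and-return block below.
      func loadConfig(path string) ([]byte, error) {
          data, err := os.ReadFile(path)
          if err != nil {
              return nil, fmt.Errorf("loading config %q: %w", path, err)
          }
          return data, nil
      }

      func main() {
          if _, err := loadConfig("app.yaml"); err != nil {
              fmt.Fprintln(os.Stderr, err)
          }
      }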

    • BlackEco · 12 points · 4 months ago

      Most probably Microsoft has set objectives for how many LoC come from LLMs, and developers invented numbers to match that metric (because they probably have more important things to do than counting LoC).

  • [unknown author] · 8 points · 4 months ago

    Satya Nadella has given an evasive answer there and both Zuckerberg and the journalists have been taken in.

    It is common, in programming languages that have a lot of boilerplate, to use code generation, where you take some information about the data and generate code automatically, like code that translates data between formats (for example, reading and writing XML for saving to disk, or JSON to send over the network). Being very routine to write and easy to deduce logically from other information, this kind of code has been automated for years and years, long before AI existed (see the sketch at the end of this comment).

    Microsoft’s flagship software, such as its operating systems and office software, is unbelievably vast and complex, far beyond the complexity of most business software, and has been developed over decades. They absolutely have not replaced 30% of their code since the very recent advent of useful AI. I can believe that 30% of it is automatically generated, but not by AI.
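
    As a sketch of the kind of routine data-translation code meant above, here is a small Go example where struct tags drive the JSON and XML encoding; it is purely an illustration, not anything specific to Microsoft’s codebase:

    package main

    import (
        "encoding/json"
        "encoding/xml"
        "fmt"
    )

    // Invoice is a made-up record; the struct tags tell the encoders how to
    // translate it to JSON and XML, the routine kind of data plumbing that
    // has been automated for decades, no AI involved.
    type Invoice struct {
        ID     int     `json:"id" xml:"id"`
        Amount float64 `json:"amount" xml:"amount"`
    }

    func main() {
        inv := Invoice{ID: 42, Amount: 19.99}

        j, _ := json.Marshal(inv) // JSON to send over the network
        fmt.Println(string(j))

        x, _ := xml.Marshal(inv) // XML to save to disk
        fmt.Println(string(x))
    }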

  • Teknikal · 11 points · 4 months ago

    Windows is 95 percent pure bloat now, IMO. An OS just needs to handle my hardware and launch my programs; anything else is just eating my resources.

    • @[email protected]
      link
      fedilink
      English
      24 months ago

      I don’t need any assistance from anything while my phasers and quantums aren’t doing anything. I don’t need AI doing anything when I finally get the proper setup for crashing a Tomcat into a big old mountain that only a fool would miss. I don’t need any bloat while I’m ripping off an old cartoon character for a D&D campaign.