Over half of all tech industry workers view AI as overrated

  • @[email protected]
    8 points · 2 years ago

    Well, it depends on your bubble I guess. But personally I’d say it’s underrated and overrated at the same time, but mostly underrated.

    It depends on your expectations and on how you use it in your toolbox, I’d say. It keeps surprising me weekly how fast progress is. But perhaps we get used to it.

  • @[email protected]
    7 points · 2 years ago

    It’s an effective tool at providing introductory information to well documented topics. A smarter Google Search, basically. And that’s all I really want it to be. Overrated? Probably not. It’s useful if you use it correctly. Overhyped? Yeah, but that’s more a fault of marketing than technology.

  • @[email protected]
    19 points · 2 years ago

    In a podcast I listen to where tech people discuss security topics they finally got to something related to AI, hesitated, snickered, said “Artificial Intelligence I guess is what I have to say now instead of Machine Learning” then both the host and the guest started just belting out laughs for a while before continuing.

  • danque
    3 points · 2 years ago

    It’s not the magic that all people think it is. They even warn you that the facts might not be true facts.

  • Dizzy Devil Ducky
    7 points · 2 years ago

    As a college student, I agree with the statement that AI is overrated. It’ll definitely have its place in this world, but I don’t foresee us being able to utilize it to the fullest before we end up in a nuclear hellhole.

  • @[email protected]
    7 points · 2 years ago

    It’s not overrated.

    Using the “Mistral instruct” model to select your food in the canteen works like a charm.

    Just provide it with the daily options, tell it to select one main dish, one side dish, and a dessert, and to explain the selection. It’s never let me down: it consistently selects the healthier option that still tastes good.
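    A sketch of what such a prompt might look like. The menu items and wording here are invented for illustration, and sending the prompt to a local “Mistral instruct” model (e.g. through an Ollama server) is an assumption about the commenter’s setup:

```python
def build_menu_prompt(mains, sides, desserts):
    """Build an instruction prompt asking the model to pick one main,
    one side, and one dessert, and to justify the choice."""
    return (
        "Today's canteen options are:\n"
        f"Mains: {', '.join(mains)}\n"
        f"Sides: {', '.join(sides)}\n"
        f"Desserts: {', '.join(desserts)}\n"
        "Select exactly one main dish, one side dish and one dessert, "
        "preferring the healthier options that still taste good, "
        "and briefly explain your selection."
    )

# Example menu (made up for illustration):
prompt = build_menu_prompt(
    mains=["schnitzel", "grilled salmon"],
    sides=["fries", "steamed vegetables"],
    desserts=["chocolate cake", "fruit salad"],
)
print(prompt)
# This prompt would then be sent to the local model; the endpoint
# and model name depend on the serving setup (assumption).
```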

  • @[email protected]
    15 points · 2 years ago

    I use github copilot. It really is just fancy autocomplete. It’s often useful and is indeed impressive. But it’s not revolutionary.

    I’ve also played with ChatGPT and tried to use it to help me code but never successfully. The reality is I only try it if google has failed me, and then it usually makes up something that sounds right but is in fact completely wrong. Probably because it’s been trained on the same insufficient data I’ve been looking at.

    • @[email protected]
      2 points · 2 years ago

      For me it depends a lot on the question. For tech questions like programming language questions, it’s much faster than a search engine. But when I did research for cars and read reviews, I used Kagi.

    • thelastknowngod
      1 point · 2 years ago

      Yeah agreed. I use copilot too. It’s fine for small, limited tasks/functions but that’s about it. The overwhelming majority of my work is systems design and maintenance though… There’s no AI for that…

    • MeanEYE
      3 points · 2 years ago

      I still consider Copilot to be a serial license violator. So many things on GitHub are GPL-licensed, and completing your code with someone else’s, or at least a variation of it, without giving credit is a clear violation of the license.

  • @[email protected]
    2 points · edited · 2 years ago

    That depends heavily on the framing. AI right now is overrated, it can do a ton of stuff, but rarely anything useful. AI in the (near) future however will eat all our jobs and everything else.

    It’s always worth keeping in mind under what restrictions current AI still operates. ChatGPT never saw a compiler or a shell, never could test its code. It’s pure book-knowledge without any practical experience. Yet it still can produce quite good code, at least for small problems. What it might be capable of once it has access to a compiler and runtime environment could far surpass what we have today. But it’s speculation, since we don’t have that yet.
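    The compile-and-retry loop described above can be sketched in a few lines. The `ask_model` function is a stand-in for a real LLM call (an assumption, not any particular API), and Python’s built-in `compile` plays the role of the compiler:

```python
def compile_ok(source):
    """Try to compile Python source; return (success, error message)."""
    try:
        compile(source, "<candidate>", "exec")
        return True, ""
    except SyntaxError as e:
        return False, str(e)

def refine(ask_model, task, max_rounds=3):
    """Feedback loop: ask the model for code, check it with the
    compiler, and feed errors back until it compiles or we give up."""
    feedback = ""
    for _ in range(max_rounds):
        code = ask_model(task, feedback)
        ok, err = compile_ok(code)
        if ok:
            return code
        feedback = f"Your last attempt failed to compile: {err}"
    return None

# Stand-in "model" for illustration: fails once, then corrects itself.
attempts = iter(["print('hello'", "print('hello')"])
result = refine(lambda task, feedback: next(attempts), "print a greeting")
print(result)  # the second, syntactically valid attempt survives
```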

  • LittleHermiT
    11 points · 2 years ago

    I asked chatGPT to generate a list of 5 random words, and then tell me the fourth word from the bottom. It kept telling me the third. I corrected it, and it gave me the right word. I asked it again, and it made the same error. It does amazing things while failing comically at simple tasks. There is a lot of procedural code added to plug the leaks. Doesn’t mean it’s overrated, but when something is hyped hard enough as being able to replace human expertise, any crack in the system becomes ammunition for dismissal.

    I see it more as a revolutionary technology going through evolutionary growing pains. I think it’s actually underrated in its future potential and worrisome in the fact that its processing is essentially a black box that can’t be understood at the same level as traditional coding. You can’t debug it or trace the exact procedure that needs patching.
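    For comparison, the same task is deterministic and trivial in ordinary code (the word list is made up for illustration):

```python
words = ["apple", "river", "candle", "orbit", "mosaic"]  # 5 made-up words
# "Fourth from the bottom", counting 1-based from the end, is index -4.
fourth_from_bottom = words[-4]
print(fourth_from_bottom)  # river
```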

    • @[email protected]
      4 points · 2 years ago

      There is a lot of procedural code added to plug the leaks.

      It’s definitely feasible, like what they tried to do with Wolfram Alpha, but do you have a source for this?

    • @[email protected]
      3 points · 2 years ago

      I believe I saw that this kind of issue is due to the token system. If you tell it to find a word starting with a certain letter, it can’t really do it without a hard-coded workaround, because it doesn’t know about single letters, only about tokens, which are chunks of the sentence.
      It’s definitely more complicated than that, but it doesn’t mean AI is bad, only that the current implementation can’t do these kinds of tasks.
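      A toy greedy longest-match tokenizer illustrates the point. The vocabulary below is made up (real models use learned BPE merge tables), but the effect is the same: the model sees a few multi-character chunks, not individual letters, so letter-level questions are not directly visible to it:

```python
VOCAB = {"straw", "berry", "st", "raw", "ber", "ry",
         "a", "b", "e", "r", "s", "t", "w", "y"}  # toy vocabulary

def tokenize(text):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest chunk first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {text[i]!r}")
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry']
# The model sees two opaque chunks, not ten letters, which is why
# letter-counting and first-letter questions need workarounds.
```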

  • @[email protected]
    64 points · edited · 2 years ago

    It is overrated, at least when people look at AI as some sort of brain crutch that absolves them from learning stuff.

    My boss now believes he can “program too” because he lets ChatGPT write scripts for him that more often than not are poor bs.

    He also enters chunks of our code into ChatGPT when we report bugs or aren’t finished with everything in 5 minutes, as some kind of “gotcha moment”, ignoring that the solutions he then provides don’t work.

    Too many people see LLMs as authorities they just aren’t…

    • @[email protected]
      8 points · 2 years ago

      It bugs me how easily people (a) trust the accuracy of the output of ChatGPT, (b) feel like it’s somehow safe to use output in commercial applications or to place output under their own license, as if the open issues of copyright aren’t a ten-ton liability hanging over their head, and (c) feed sensitive data into ChatGPT, as if OpenAI isn’t going to log that interaction and train future models on it.

      I have played around a bit, but I simply am not carefree/careless or am too uptight (pick your interpretation) to use it for anything serious.

    • @[email protected]
      6 points · 2 years ago

      Too many people see LLMs as authorities they just aren’t…

      This is more a ‘human’ problem than an ‘AI’ problem.

      In general it’s weird as heck that the industry is full force going into chatbots as a search replacement.

      Like, that was a neat demo for a low hanging fruit usecase, but it’s pretty damn far from the ideal production application of it given that the tech isn’t actually memorizing facts and when it gets things right it’s a “wow, this is impressive because it really shouldn’t be doing a good job at this.”

      Meanwhile nearly no one is publicly discussing their use as classifiers, which is where the current state of the tech is a slam dunk.

      Overall, the past few years have opened my eyes to just how broken human thinking is, not as much the limitations of neural networks.
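      The classifier framing mentioned above can be sketched like this. The labels, prompt wording, and simulated reply are all invented for illustration; a real setup would send the prompt to a model and parse its reply:

```python
LABELS = ["positive", "negative", "neutral"]

def build_classifier_prompt(text):
    """Frame classification as a constrained-choice question."""
    return (
        "Classify the sentiment of the following text as exactly one of: "
        + ", ".join(LABELS)
        + ". Reply with the label only.\n\nText: " + text
    )

def parse_label(response):
    """Map a free-form model reply back onto the fixed label set."""
    reply = response.strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return None  # refuse rather than guess on an off-list reply

prompt = build_classifier_prompt("The update broke nothing and runs faster.")
print(parse_label("Positive."))  # positive  (simulated model reply)
```

      Constraining the output to a fixed label set, and refusing off-list answers, is what makes this use case far more robust than open-ended fact retrieval.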

  • @[email protected]
    30 points · 2 years ago

    I have a doctorate in computer engineering, and yeah it’s overhyped to the moon.

    I’m oversimplifying it and someone will ackchyually me, but once you understand the core mechanics the magic is somewhat diminished. It’s linear algebra and matrices all the way down.

    We got really good at parallelizing matrix operations and storing large matrices and the end result is essentially “AI”.
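    The “matrices all the way down” point, sketched in a few lines of plain Python. The weights and inputs are made up; a real network just stacks many such layers and learns W and b by gradient descent:

```python
import math

def layer(W, b, x):
    """One neural-network layer: matrix-vector product plus bias,
    passed through a sigmoid nonlinearity."""
    z = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
         for row, b_i in zip(W, b)]
    return [1.0 / (1.0 + math.exp(-z_i)) for z_i in z]

# Tiny made-up weights: 2 inputs -> 2 hidden units.
W = [[0.5, -0.25],
     [1.0,  0.75]]
b = [0.1, -0.2]
activations = layer(W, b, [1.0, 2.0])
print(activations)
# Stacking many such layers, and parallelizing the matrix products
# on GPUs, is mechanically all a modern network is.
```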

    • HMN
      2 points · 2 years ago

      Big emphasis on the ‘A’