• katy ✨
    0
    22 days ago

    chatbots and ai are just dumber 1990s search engines.

    • mycelium underground
      1
      22 days ago

      I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed, but I’m pretty sure I can get better answers from the LLMs tied to Kagi search.

      AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse if you use a “search” engine like Google now).

      Don’t be the product.

    • Lovable Sidekick
      9
      edit-2
      23 days ago

      Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let’s not think about that either. AI Bad!

      • @[email protected]
        15
        23 days ago

        This is a salient point that’s well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It’s super easy to call out a bad research study and have it retracted. But you can’t just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

      • @[email protected]
        3
        23 days ago

        AI Bad!

        Yes, it is. But not in, like, a moral sense. It’s just not good at doing things.

      • @[email protected]
        10
        edit-2
        23 days ago

        I’ll take the bait. Let’s think:

        • there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it

        • now there is an llm (fuck capitalization, I hate how much they are shoved everywhere) trained on their output

        • now the llm is asked about the topic and computes the answer string

        By definition that answer string can contain all the probably-wrong things without the proper indicators (“might”, “under such and such circumstances”, etc.)

        If you want to say a 40% wrong llm means 40% wrong sources, prove me wrong

        • Lovable Sidekick
          4
          23 days ago

          It’s more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data this seems futile, but if that’s how you want to spend your time, hey knock yourself out.

    • Terrasque
      9
      23 days ago

      The quote was originally about news and journalists.

      • @[email protected]
        4
        edit-2
        23 days ago

        Can, and opt not to. Big difference. I’m sure I could ask ChatGPT to write a better comment than this, but I value the human interaction involved with it, and the ability to perform these tasks on my own.

        Same with many aspects of modern technology. Like, I’m sure it’s very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but when somebody else’s computer turns off, I’d like to keep control over my things.

  • tisktisk
    49
    24 days ago

    I plugged this into gpt and it couldn’t give me a coherent summary.
    Anyone got a tldr?

    • Skua
      31
      24 days ago

      Based on the votes it seems like nobody is getting the joke here, but I liked it at least

    • tisktisk
      20
      24 days ago

      For those genuinely curious, I made this comment as a joke before reading the article–I had no idea it would be funnier after reading.

    • veee
      68
      24 days ago

      It’s short and worth the read, however:

      tl;dr you may be the target demographic of this study

  • @[email protected]
    19
    23 days ago

    It’s too bad that some people seem not to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.
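    To illustrate the “word prediction” point, here is a minimal toy sketch. It is not how ChatGPT actually works (that is a neural network over tokens, not a bigram table); it only shows the general idea of choosing whichever word most often followed the current one in the training text:

    ```python
    from collections import Counter, defaultdict

    # Toy "training data" standing in for the web-scale text a real model sees.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word that most often followed `word` in the corpus."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    # "Generate" text by repeatedly asking which next word fits best.
    word, output = "the", ["the"]
    for _ in range(6):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)

    # Prints a fluent-looking chain of words with no understanding behind it.
    print(" ".join(output))
    ```

    A real LLM replaces the bigram table with a transformer trained on vastly more text, but the loop is the same: predict the next token, append it, repeat.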

  • @[email protected]
    2
    24 days ago

    There is something I don’t understand… OpenAI collaborates in research that probes how awful its own product is?

    • @[email protected]
      5
      24 days ago

      If I believed that they were sincerely interested in trying to improve their product, then that would make sense. You can only improve yourself if you understand how your failings affect others.

      I suspect however that Saltman will use it to come up with some superficial bullshit about how their new 6.x model now has a 90% reduction in addiction rates; you can’t measure anything, it’s more about the feel, and that’s why it costs twice as much as any other model.

  • @[email protected]
    25
    24 days ago

    Correlation does not equal causation.

    You have to be a little off to WANT to interact with ChatGPT that much in the first place.

    • Echo Dot
      8
      24 days ago

      I don’t understand what people even use it for.

      • @[email protected]
        3
        23 days ago

        Compiling medical documents into one, anything of that sort: summarizing, compiling, coding issues. It saves a wild amount of time compiling lab results; a human could do it, but it would take many times longer.

        Definitely needs to be cross-referenced and fact-checked, as the image processing or general responses aren’t always perfect. It’ll get you 80 to 90 percent of the way there. For me it falls under “solving 20 percent of the problem gets you 80 percent of the way to your goal.” It needs a shitload more refinement. It’s a start, and it hasn’t been a straight progress path, as nothing is.

      • @[email protected]
        3
        edit-2
        23 days ago

        There are a few people I know who use it for boilerplate templates for certain documents, who then of course go through it with a fine-toothed comb to add relevant context and fix obvious nonsense.

        I can only imagine there are others who aren’t as stringent with the output.

        Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (very, very conflict-averse when it comes to beating up bad guys, etc.). There are probably ways to prompt-engineer around these limitations, but a) there are other, better-suited AI tools for this use case, b) text adventure was a prolific genre for a bit, and a huge chunk made by actual humans can be found here: ifdb.org, c) real, actual humans still make them (if a little artsier and moodier than I’d like most of the time), so eventually I stopped.

        Did like the huge flexibility vs. the parser available in most human-made text adventures, though.

      • @[email protected]
        9
        edit-2
        23 days ago

        I use it many times a day for coding and solving technical issues. But I don’t recognize what the article talks about at all. There’s nothing affective about my conversations, other than the fact that using typical human expression (like “thank you”) seems to increase the chances of good responses. Which is not surprising since it better matches the patterns that you want to evoke in the training data.

        That said, yeah of course I become “addicted” to it and have a harder time coping without it, because it’s part of my workflow just like Google. How well would anybody be able to do things in tech or even life in general without a search engine? ChatGPT is just a refinement of that.

      • @[email protected]
        3
        23 days ago

        I use it to generate a little function in a programming language I don’t know so that I can kickstart what I need to look for.

      • Bilb!
        2
        23 days ago

        I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn’t like something I do, too bad. The genius AI knows better, and I only care about what it has to say.

  • @[email protected]
    8
    23 days ago

    New DSM / ICD is dropping with AI dependency. But it’s unreadable because image generation was used for the text.

    • @[email protected]
      4
      23 days ago

      This is perfect for the billionaires in control: now if you suggest that “hey, maybe these AIs have developed enough to be sentient and sapient beings (not saying they are now) and probably deserve rights”, they can just label you (and that argument) mentally ill.

      Foucault laughs somewhere.

  • Lovable Sidekick
    46
    edit-2
    23 days ago

    TIL becoming dependent on a tool you frequently use is “something bizarre” - not the ordinary, unsurprising result you would expect with common sense.

    • @[email protected]
      19
      23 days ago

      If you actually read the article, I’m pretty sure the bizarre thing is really these people using a ‘tool’ forming a toxic parasocial relationship with it, becoming addicted and beginning to see it as a ‘friend’.

      • @[email protected]
        8
        23 days ago

        No, I basically get the same read as OP. Idk, I like to think I’m rational enough & don’t take things too far, but I like my car. I like my tools; people just get attached to things they like.

        Give it an almost-human, almost-friend type of interaction & yes, I’m not surprised at all that some people, particularly power users, are developing parasocial attachments or addiction to this non-human tool. I don’t call my friends. I text. ¯\(°_o)/¯

        • @[email protected]
          9
          23 days ago

          I loved my car. Just had to scrap it recently. I got sad. I didn’t go through withdrawal symptoms or feel like I was mourning a friend. You can appreciate something without building an emotional dependence on it. I’m not particularly surprised this is happening to some people either, especially with the amount of brainrot out there surrounding these LLMs, so maybe bizarre is the wrong word, but it is a little disturbing that people are getting so attached to something that is so fundamentally flawed.

          • @[email protected]
            3
            23 days ago

            Sorry about your car! I hate that.

            In an age where people are prone to feeling isolated & alone, for various reasons… this, unfortunately, is filling the void(s) in their life. I agree, it’s not healthy or for the best.

      • Lovable Sidekick
        4
        edit-2
        23 days ago

        Yes, it says the neediest people are doing that, not simply “people who use ChatGPT a lot”. This article is like “Scientists warn civilization-killer asteroid could hit Earth”, where the article then clarifies that there’s a 0.3% chance of impact.

      • @[email protected]
        1
        23 days ago

        You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most friends. Bonding with objects isn’t that weird, especially one that can talk to you like it’s human.

      • Komodo Rodeo
        4
        23 days ago

        What the Hell was the name of the movie with Tom Cruise where the protagonist’s friend was dating a fucking hologram?

        We’re a hair’s breadth from that bullshit, and TBH I think that if falling in love with a computer program becomes the new de facto normal, I’m going to completely alienate myself by making fun of those wretched chodes non-stop.

  • @[email protected]
    64
    24 days ago

    those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.

    That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.

    Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.

    • @[email protected]
      29
      24 days ago

      AI and ads… I think that is the next dystopia to come.

      Think of asking ChatGPT about something, and it randomly looks for excuses to push you to buy Coca-Cola.

      • @[email protected]
        4
        24 days ago

        that is not a thought i needed in my brain just as i was trying to sleep.

        what if gpt starts telling drunk me to do things? how long would it take for me to notice? I’m super awake again now, thanks

      • @[email protected]
        17
        24 days ago

        That sounds really rough, buddy, I know how you feel, and that project you’re working is really complicated.

        Would you like to order a delicious, refreshing Coke Zero™️?

        • ivanafterall ☑️
          6
          edit-2
          23 days ago

          I can see how targeted ads like that would be overwhelming. Would you like me to sign you up for a free 7-day trial of BetterHelp?

          • Dale
            5
            23 days ago

            Your fear of constant data collection and targeted advertising is valid and draining. Take back your privacy with this code for 30% off Nord VPN.

      • @[email protected]
        11
        edit-2
        24 days ago

        “Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”

        edit: how does this make you feel

        • @[email protected]
          3
          23 days ago

          It makes me wish my government actually fucking governed and didn’t just agree with whatever businesses told them

      • @[email protected]
        6
        24 days ago

        Or all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners.

    • @[email protected]
      10
      24 days ago

      It’s a roundabout way of writing “it’s really shit for this use case, and people that actively try to use it that way quickly find that out”.

  • @[email protected]
    23
    23 days ago

    This makes a lot of sense, because what we have been seeing over the last decade or so is that digital-only socialization isn’t a replacement for in-person socialization. Increased social media usage shows increased loneliness, not a decrease. It makes sense that something even more fake, like ChatGPT, would make it worse.

    I don’t want to sound like a Luddite, but overly relying on digital communications for all interactions is a poor substitute for in-person interactions. I know I have to prioritize seeing people in the real world, because I work from home and spending time on Lemmy during the day doesn’t fulfill that need.