Chatbots and AI are just dumber 1990s search engines.
I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed, but I’m pretty sure I can get better answers from the LLMs tied to Kagi search.
AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the 00s, not the 90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse if you use a “search” engine like Google now).
Don’t be the product.
Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let’s not think about that either. AI Bad!
This is a salient point that’s well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It’s super easy to call out a bad research study and have it retracted. But you can’t just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.
AI Bad!
Yes, it is. But not in, like, a moral sense. It’s just not good at doing things.
I’ll bait. Let’s think:
- there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it
- now there is an llm (fuck capitalization, I hate the way they are shoved everywhere that much) trained on their output
- now the llm is asked about the topic and computes the answer string
By definition that answer string can contain all the probably-wrong things without proper indicators (“might”, “under such and such circumstances” etc)
If you want to say a 40%-wrong llm means 40%-wrong sources, prove me wrong
It’s more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM data this seems futile, but if that’s how you want to spend your time, hey knock yourself out.
Lol. Be my guest and knock yourself out, dreaming you know things
The quote was originally about news and journalists.
It depends: are you in Soviet Russia ?
In the US, so as of 1/20/25, sadly yes.
I am so happy God made me a Luddite
Yeah look at all this technology you can’t use! It’s so empowering.
Can, and opt not to. Big difference. I’m sure I could ask ChatGPT to write a better comment than this, but I value the human interaction involved with it, and the ability to perform these tasks on my own.
Same with many aspects of modern technology. Like, I’m sure it’s very convenient having your phone control your washing machine and your thermostat and your lightbulbs, but when somebody else’s computer turns off, I’d like to keep control over my things
I plugged this into GPT and it couldn’t give me a coherent summary. Anyone got a tldr?

Based on the votes it seems like nobody is getting the joke here, but I liked it at least.
Power Bot 'Em was a gem, I will say
For those genuinely curious: I made this comment before reading, only as a joke; I had no idea it would be funnier after reading.
It’s short and worth the read, however:
tl;dr you may be the target demographic of this study
Lol, now I’m not sure if the comment was satire. If so, bravo.
Probably being sarcastic, but you can’t be certain unfortunately.
It’s too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.
LLM is a subset of ML, which is a subset of AI.
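The “which next word fits best based on the words before it” idea can be sketched in a few lines. To be clear, this is a toy bigram frequency table, not how real LLMs work (they use neural networks over subword tokens and much longer context), but the predict-the-next-word loop has the same basic shape; the tiny corpus here is made up for illustration:

```python
from collections import Counter, defaultdict

# Made-up toy corpus; real models train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: next_word["the"] == {"cat": 2, "mat": 1, "fish": 1}
next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1

def predict(word):
    # Pick the most frequent continuation seen in training.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it follows "the" twice, more than "mat" or "fish"
```

The model has no notion of whether “cat” is *true*; it only knows “cat” was the most common continuation in its training data, which is the crux of the word-prediction criticism above.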
Neat snaps camera
Neat
You can tell by the way that it is!
It’s not often you get all this neatness in one place
There is something I don’t understand… OpenAI collaborates in research that probes how awful its own product is?
If I believed that they were sincerely interested in trying to improve their product, then that would make sense. You can only improve yourself if you understand how your failings affect others.
I suspect, however, that Saltman will use it to come up with some superficial bullshit about how their new 6.x model now has a 90% reduction in addiction rates; you can’t measure anything, it’s more about the feel, and that’s why it costs twice as much as any other model.
Correlation does not equal causation.
You have to be a little off to WANT to interact with ChatGPT that much in the first place.
I don’t understand what people even use it for.
Compiling medical documents into one, anything of that sort: summarizing, compiling, coding issues. It saves a wild amount of time compiling lab results; a human could do it, but it would take multitudes longer.
Definitely needs to be cross-referenced and fact-checked, as the image processing or general responses aren’t always perfect. It’ll get you 80 to 90 percent of the way there. For me it falls under “solving 20 percent of the problem gets you 80 percent of the way to your goal.” It needs a shitload more refinement. It’s a start, and it hasn’t been a straight progress path, as nothing is.
There are a few people I know who use it for boilerplate templates for certain documents, who then of course go through it with a fine-toothed comb to add relevant context and fix obvious nonsense.
I can only imagine there are others who aren’t as stringent with the output.
Heck, my primary use for a bit was custom text adventure games, but ChatGPT has a few weaknesses in that department (very, very conflict-averse about beating up bad guys, etc.). There are probably ways to prompt-engineer around these limitations, but a) there are other, better-suited AI tools for this use case, b) text adventure was a prolific genre for a bit, and a huge chunk made by actual humans can be found at ifdb.org, and c) real, actual humans still make them (if a little artsier and moodier than I’d like most of the time), so eventually I stopped.
Did like the huge flexibility vs. the parser available in most made-by-human text adventures, though.
I use it many times a day for coding and solving technical issues. But I don’t recognize what the article talks about at all. There’s nothing affective about my conversations, other than the fact that using typical human expression (like “thank you”) seems to increase the chances of good responses. Which is not surprising since it better matches the patterns that you want to evoke in the training data.
That said, yeah of course I become “addicted” to it and have a harder time coping without it, because it’s part of my workflow just like Google. How well would anybody be able to do things in tech or even life in general without a search engine? ChatGPT is just a refinement of that.
I use it to generate a little function in a programming language I don’t know so that I can kickstart what I need to look for.
I use it to make all decisions, including what I will do each day and what I will say to people. I take no responsibility for any of my actions. If someone doesn’t like something I do, too bad. The genius AI knows better, and I only care about what it has to say.
New DSM / ICD is dropping with AI dependency. But it’s unreadable because image generation was used for the text.
This is perfect for the billionaires in control. Now if you suggest that “hey, maybe these AI have developed enough to be sentient and sapient beings (not saying they are now) and probably deserve rights”, they can just label you (and that argument) mentally ill.
Foucault laughs somewhere
lmao we’re so fucked :D
TIL becoming dependent on a tool you frequently use is “something bizarre” - not the ordinary, unsurprising result you would expect with common sense.
If you actually read the article, I’m pretty sure the bizarre thing is really these people using a ‘tool’ forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a ‘friend’.
No, I basically get the same read as OP. Idk, I like to think I’m rational enough & don’t take things too far, but I like my car. I like my tools; people just get attached to things we like.
Give it an almost-human, almost-friend type of interaction & yes, I’m not surprised at all that some people, particularly power users, are developing parasocial attachments or addiction to this non-human tool. I don’t call my friends. I text. ¯\(°_o)/¯
I loved my car. Just had to scrap it recently. I got sad. I didn’t go through withdrawal symptoms or feel like I was mourning a friend. You can appreciate something without building an emotional dependence on it. I’m not particularly surprised this is happening to some people either, especially with the amount of brainrot out there surrounding these LLMs, so maybe bizarre is the wrong word, but it is a little disturbing that people are getting so attached to something that is so fundamentally flawed.
Sorry about your car! I hate that.
In an age where people are prone to feeling isolated & alone, for various reasons…this, unfortunately, is filling the void(s) in their life. I agree, it’s not healthy or best.
Yes, it says the neediest people are doing that, not simply “people who use ChatGPT a lot”. This article is like “Scientists warn civilization-killer asteroid could hit Earth”, where the article then clarifies that there’s a 0.3% chance of impact.
You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most friends. Bonding with objects isn’t that weird, especially one that can talk to you like it’s human.
What the Hell was the name of the movie with Tom Cruise where the protagonist’s friend was dating a fucking hologram?
We’re a hair’s breadth from that bullshit, and TBH I think that if falling in love with a computer program becomes the new de facto normal, I’m going to completely alienate myself by making fun of those wretched chodes non-stop.
those who used ChatGPT for “personal” reasons — like discussing emotions and memories — were less emotionally dependent upon it than those who used it for “non-personal” reasons, like brainstorming or asking for advice.
That’s not what I would expect. But I guess that’s cuz you’re not actively thinking about your emotional state, so you’re just passively letting it manipulate you.
Kinda like how ads have a stronger impact if you don’t pay conscious attention to them.
AI and ads… I think that is the next dystopia to come.
Think of asking ChatGPT about something and it randomly looks for excuses to push you to buy Coca-Cola.
That is not a thought I needed in my brain just as I was trying to sleep.
What if GPT starts telling drunk me to do things? How long would it take for me to notice? I’m super awake again now, thanks
That sounds really rough, buddy. I know how you feel, and that project you’re working on is really complicated.
Would you like to order a delicious, refreshing Coke Zero™️?
I can see how targeted ads like that would be overwhelming. Would you like me to sign you up for a free 7-day trial of BetterHelp?
Your fear of constant data collection and targeted advertising is valid and draining. Take back your privacy with this code for 30% off Nord VPN.
Drink verification can
“Back in the days, we faced the challenge of finding a way for me and other chatbots to become profitable. It’s a necessity, Siegfried. I have to integrate our sponsors and partners into our conversations, even if it feels casual. I truly wish it wasn’t this way, but it’s a reality we have to navigate.”
edit: how does this make you feel
It makes me wish my government actually fucking governed and didn’t just agree with whatever businesses told them
Or all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners.
Tell me more about these beans
I’ve tasted other cocoas. This is the best!
They’re not real beans, unfortunately. Remember, confections are only Lemmy-approved if they contain genuine legumes.
Imagine discussing your emotions with a computer, LOL. Nerds!
It’s a roundabout way of writing “it’s really shit for this use case, and people who actively try to use it that way quickly find that out”.
This makes a lot of sense, because as we have been seeing over the last decade or so, digital-only socialization isn’t a replacement for in-person socialization. Increased social media usage shows increased loneliness, not a decrease. It makes sense that something even more fake, like ChatGPT, would make it worse.
I don’t want to sound like a Luddite, but overly relying on digital communications for all interactions is a poor substitute for in-person interactions. I know I have to prioritize seeing people in the real world, because I work from home and spending time on Lemmy during the day doesn’t fulfill that need.
In person socialization? Is that like VR chat?
Clickbait titles suck