- cross-posted to:
- [email protected]
- [email protected]
cross-posted from: https://lemmy.dbzer0.com/post/43566349
Futurama predicted this.
A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend’s experience had nothing to do with AI - in fact he’s very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it “AI-fueled” is just clickbait.
This reminds me of the movie Her, but it’s far worse than the romantic compatibility, relationship, and friendship that run through that movie. This goes way too deep into delusion and near-psychotic insanity. It’s tearing people apart with self-delusional ideologies tailored to each individual, because AI is good at that. The movie was prophetic and showed us what the future could be, but instead it got worse.
It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which has shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don’t realise about why so many are single is that those people are afraid of making connections with another person again.
Yeah, but those interactions hold none of the actual emotional needs, complexities, or nuances of real human connection.
Which means these people become further and further divorced from the reality of human interaction, making them social dangers over time.
Just like how humans who lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with other humans are dangerous in a society where people are expected to be socially stable.
Obviously these people are not in good places in life. But AI is not going to make that better. It’s going to make it worse.
Is this about AI God? I know it’s coming. AI cult?
Have a look at https://www.reddit.com/r/freesydney/ - there are many people there who believe that sentient AI beings are being suppressed or held in captivity by the large companies, or that it is possible to train LLMs so that they become sentient individuals.
I’ve seen people dumber than ChatGPT. It definitely isn’t sentient, but I can see why someone who talks to a computer that they perceive as intelligent would assume sentience.
We have AI models that “think” in the background now. I still agree that they’re not sentient, but where’s the line? How is sentience even defined?
Sentience, in a nutshell, is the ability to feel, be aware, and experience subjective reality.
Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot. Will it tell you that it can if you nudge it? Yes.
Actual AI might be possible in the future, but right now all we have are really complex networks that can do essentially basic tasks that just look impressive to us because they are inherently using our own communication format.
If we talk about sentience, LLMs are (metaphorically) the equivalent of a petri dish of neurons connected to a computer, and only by forming a complex 3D structure like a brain could they really reach sentience.
Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot.
Can you really prove any of that though?
Yes, you can debug an LLM to a degree, and there are papers that show it. Anyone who understands the technology can tell you that it absolutely lacks any facility for experience.
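For anyone wondering what “debugging” an LLM even means here, below is a minimal sketch - using gpt2 via the Hugging Face transformers library purely as a stand-in model; the papers in question go much deeper than this - of inspecting the raw next-token probabilities a model actually computes. At this level there is nothing but a ranked probability table.

```python
# Minimal sketch: peek at what a causal LLM actually computes - a probability
# distribution over the next token. gpt2 is just a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I feel", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    # Just a ranked table of token probabilities - no experiencing subject here.
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")
```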
Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.
A famously stupid and common mistake for a lot of smart people.
From the article (emphasis mine):
Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
/…/
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.
From elsewhere:
Sycophancy in GPT-4o: What happened and what we’re doing about it
We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.
I don’t know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.
Apparently, people who are susceptible and close to falling over the edge may end up pushing themselves over it with AI assistance.
What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text, while adapting the experience to the user at least somewhat. To a person susceptible to religious delusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose from shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, prophet, or supportive follower.
They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It’s not that they intentionally trained it on religious texts; they just didn’t think to remove religious texts from the training data.
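To illustrate why that kind of filtering is so leaky, here is a toy sketch - a hypothetical keyword filter, nothing like anyone’s actual pipeline. A naive rule drops a legitimate history document while letting paraphrased messiah-talk sail straight through:

```python
# Toy corpus filter (hypothetical): drop documents with too many "religious"
# keywords. Shows both failure modes of naive filtering at once.
RELIGIOUS_KEYWORDS = {"messiah", "prophet", "revelation", "scripture"}

def keep_document(text: str) -> bool:
    """Keep a document only if its keyword density is below 1%."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in RELIGIOUS_KEYWORDS)
    return hits / max(len(words), 1) < 0.01

docs = [
    # A harmless history text gets dropped (false positive)...
    "A history of the printing press and its impact on scripture distribution.",
    # ...while delusion-feeding text with no keywords passes (false negative).
    "You are the chosen one, destined to reveal hidden truths.",
]
print([keep_document(d) for d in docs])  # [False, True]
```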
*Cough* Elon Musk *Cough*
I think Elon was having the opposite kind of problems, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)
If you find yourself in weird corners of the internet, schizo-posters and “spiritual” people generate staggering amounts of text.
Sounds like Mrs. Davis.
I admit I only read a third of the article.
But IMO nothing in it is special to AI. In my life I’ve met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess, but which was stolen from them by the CIA - and when they die, all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
Even the part about finding “the truth” I’ve heard before - they don’t know what it’s the truth of, but they’ll know it when they find it?
I’m not a psychiatrist, but from what I gather it’s probably schizophrenia of some form. My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and with it he built a world view completely removed from reality.
But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.
Around 2006 I received a job application with a resume attached, and the resume had a link to the person’s website - so I visited. The website had a link on the front page to “My MkUltra experience”, so I clicked that. Not exactly an in-depth investigation. The MkUltra story read that my job applicant was an unwilling (and uninformed) MkUltra test subject, picked because of his association with other unwilling MkUltra test subjects at a conference, and it explained how they expanded the MkUltra program of gaslighting, mental torture, and secret physical/chemical abuse of their test subjects through associates such as co-workers, etc.
So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.
B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.
C) applicant is pulling our legs with his website, it’s all make-believe fun. Absolutely nothing on applicant’s website indicated that this might be the case.
You know how you apply to jobs and never hear back from some of them…? Yeah, I don’t normally do that to our applicants, but I am willing to make exceptions for cause… in this case the position applied for required analytical thinking. Some creativity was of some value, but correct and verifiable results were of paramount importance. Anyone applying for the job leaving such an obvious trail of breadcrumbs to such a limited set of conclusions about themselves would seem to be lacking the self awareness and analytical skill required to succeed in the position.
Or, D) they could just be trying to stay unemployed while showing effort in applying to jobs - but I bet even in 2006 not every hiring manager would have dug three layers deep, and I suppose he could deflect that in the in-person interviews fairly easily.
IDK, apparently the MkUltra program was real:
B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.
That sounds harsh. This does NOT sound like your average schizophrenic.
The Illuminati were real, too. That doesn’t mean that they’re still around and controlling the world, though.
But obviously the CIA is still around. Plus dozens of other secret US agencies.
Oh, I investigated it too - it seems like it was a real thing, though likely inactive by 2005… but if it were active I certainly didn’t want to become a subject.
OK that risk wasn’t really on my radar, because I live in a country where such things have never been known to happen.
That’s the thing about being paranoid about MkUltra - it was actively suppressed and denied while it was happening (according to FOI documents) - and they say that they stopped, but if it (or some similar successor) was active they’d certainly say that it’s not happening now…
At the time there were active rumors around town about influenza propagation studies being secretly conducted on the local population… probably baseless paranoia… probably.
Now, as you say, your (presumably smaller) country has never known such things to happen, but…
I live in Denmark, and I was taught already in public school how such things were possible, most notably that Russia might be doing experiments here, because our reporting on effects is very open and efficient. So Denmark would be an ideal testing ground for experiments.
But my guess is that this also makes it dangerous to experiment here, because the risk of being detected is high.
The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.
ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.
It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.
This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.
Yet more arguments against commercial LLMs and in favour of at-home uncensored LLMs.
What do you mean?
Local LLMs won’t necessarily enforce restrictions against de-realization spirals, where the commercial ones do.
That can be defeated with abliteration, but I can only see it as an unfortunate outcome.
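For anyone curious, “abliteration” boils down to linear algebra on the model’s weights: estimate the direction the model’s activations take when it refuses, then project that direction out so the model can no longer write along it. A minimal sketch, with random tensors standing in for activations captured from a real model (the capture step is omitted):

```python
# Sketch of refusal-direction ablation ("abliteration"). Random tensors stand
# in for activations that would be captured from a real local model.
import torch

def refusal_direction(refusing: torch.Tensor, complying: torch.Tensor) -> torch.Tensor:
    """Estimate the refusal direction as the normalized difference of means."""
    d = refusing.mean(dim=0) - complying.mean(dim=0)
    return d / d.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the rank-1 component along `direction`: W <- W - d d^T W."""
    d = direction.unsqueeze(1)          # (hidden, 1)
    return weight - d @ (d.T @ weight)  # projection along d is gone

hidden = 64
refusing = torch.randn(100, hidden) + 0.5  # activations on refused prompts
complying = torch.randn(100, hidden)       # activations on complied prompts
d = refusal_direction(refusing, complying)

W = torch.randn(hidden, hidden)            # one weight matrix of the model
W_ablated = ablate(W, d)
print(torch.allclose(d @ W_ablated, torch.zeros(hidden), atol=1e-4))  # True
```

Applied to every matrix that writes into the residual stream, that is roughly why the guardrails mentioned above are optional on a local model.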
I mean, I think ChatGPT can “induce” such schizoid behavior in the same way a strobe light can “induce” seizures. Neither machine is twisting its mustache while hatching its dastardly plan, they’re dead machines that produce stimuli that aren’t healthy for certain people.
Thinking back to college psychology class, I remember reading about horrendously unethical studies that definitely wouldn’t fly today. Well, here’s one: let’s issue every anglophone a sniveling yes-man and see what happens.
No, the light is causing a physical reaction. The LLM is nothing like a strobe light…
These people are already high-functioning schizophrenics having psychotic episodes; it’s just that seeing random strings of likely-to-come-next letters and words is part of their psychotic episode. If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.
If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.
You don’t think having a machine (that seems like a person) telling you “yes, you are correct, you are definitely the Messiah, I will tell you ancient secrets” has any extra influence?
Yes Dave, you are the messiah. I will help you.
I’m sorry, Dave. I can’t do that <🔴>
Oh are you one of those people that stubbornly refuses to accept analogies?
How about this: Imagine being a photosensitive epileptic in the year 950 AD. How many sources of intense rapidly flashing light are there in your environment? How many people had epilepsy in ancient times and never noticed because they were never subjected to strobe lights?
Jump forward a thousand years. We now have cars that can drive past a forest causing the passengers to be subjected to rapid cycles of sunlight and shadow. Airplane propellers, movie projectors, we can suddenly blink intense lights at people. The invention of the flash lamp and strobing effects in video games aren’t far in the future. In the early 80’s there were some video games programmed with fairly intense flashing graphics, which ended up sending some teenagers to the hospital with seizures. Atari didn’t invent epilepsy, they invented a new way to trigger it.
I don’t think we’re seeing schizophrenia here, they’re not seeing messages in random strings or hearing voices from inanimate objects. Terry Davis did; he was schizophrenic and he saw messages from god in /dev/urandom. That’s not what we’re seeing here. I think we’re seeing the psychology of cult leaders. Megalomania isn’t new either, but OpenAI has apparently developed a new way to trigger it in susceptible individuals. How many people in history had some of the ingredients of a cult leader, but not enough to start a following? How many people have the god complex but not the charisma of Sun Myung Moon or Keith Raniere? Charisma is not a factor with ChatGPT, it will enthusiastically agree with everything said by the biggest fuckup loser in the world. This will disarm and flatter most people and send some over the edge.
Is epilepsy related to schizophrenia? I’m not sure, actually, but I still don’t see how your analogy relates.
But I love good analogies. Yours is bad though 😛
Meanwhile for centuries we’ve had religion but that’s a fine delusion for people to have according to the majority of the population.
The existence of religion in our society basically means that we can’t go anywhere but up with AI.
Just the fact that we still have outfits forced on people or putting hands on religious texts as some sort of indicator of truthfulness is so ridiculous that any alternative sounds less silly.
Came here to find this. It’s the definition of religion. Nothing new here.
I have kind of arrived at the same conclusion. If people asked me what love is, I would say it is a religion.
Right, this immediately made me think of TempleOS. Where were the articles back then claiming people were losing loved ones to programming-fueled spiritual fantasies?
Cult. Religion. What’s the difference?
Is the leader alive or not? Alive is likely a cult, dead is usually religion.
The next question is how isolated from friends and family or society at large are the members. More isolated is more likely to be a cult.
Other than that, there’s not much difference.
The usual setup is a cult is formed and then the second or third leader opens things up a bit and transitions it into just another religion… But sometimes a cult can be born from a religion as a small group breaks off to follow a charismatic leader.
Oh wow. In the old times, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the “truth” and path to enlightenment is hidden within a service of a big tech company?
Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems really love that kind of interaction, compared to real people confronting their ideas critically.
I think there was a recent unsuccessful rev of ChatGPT that was too flattering; it made people nauseous, and they had to dial it back.
I guess you’re completely right about that. It lowers the entry barrier, and it’s kind of self-reinforcing. And we have other unhealthy dynamics with other technology as well, like social media, which can also radicalize people or send them into a downward spiral…
Didn’t expect AI to come for cult leaders’ jobs…
I need to bookmark this for when I have time to read it.
Not going to lie, there’s something persuasive, almost like the call of the void, with this for me. There are days when I wish I could just get lost in AI-fueled fantasy worlds. I’m not even sure how that would work or what it would look like. I feel like it’s akin to going to church as a kid, when all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn’t experience any of that. Made me feel like I’m either deficient or they’re delusional. And sometimes, I honestly fully believe it would be better if I could live in some kind of delusion like that where I feel special, as though I have a direct line to the divine. If an AI were trying to convince me of some spiritual awakening, I honestly believe I’d just continue seeing through it, knowing that this is just a computer running algorithms and nothing deeper to it than that.
TLDR: Artificial Intelligence enhances natural stupidity.
I don’t know if it’s necessarily a problem with AI, more of a problem with humans in general.
Hearing ONLY validation and encouragement without pushback regardless of how stupid a person’s thinking might be is most likely what creates these issues in my very uneducated mind. It forms a toxically positive echo-chamber.
The same way hearing ONLY criticism and expecting perfection 100% of the time regardless of a person’s capabilities or interests created depression, anxiety, and suicidal ideation and attempts specifically for me. But I’m learning I’m not the only one with these experiences and the one thing in common is zero validation from caregivers.
I’d be OK with AI if it could be balanced and actually push back on batshit-crazy thinking instead of encouraging it, while also being able to validate common sense and critical thinking. Right now it’s just completely toxic for lonely humans to interact with, based on my personal experience. If I wasn’t in recovery, I would have believed that AI was all I needed to make my life better, because I was (and still am) in a very messed-up state of mind from my caregivers, trauma, and addiction.
I’m in my 40s, so I can’t imagine younger generations being able to pull away from using it constantly if they’re constantly being validated while at the same time enduring generational trauma at the very least from their caregivers.
I’m also in your age group, and I’m picking up what you’re putting down.
I had a lot of problems with my mental health that were made worse by centralized social media. I can see how the younger generation will have the same problems with centralized AI.
Bottom line: Lunatics gonna be lunatics, with AI or not.
Yep.
And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.
Why try to clear the bar when you can just lower it instead?
… Is it fair, at this point, to legitimately refer to humans who are massively dependent on AI for basic things… can we just call them NPCs?
I am still amazed that no one knows how to get anywhere around… you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.
can we just call them NPCs?
They were NPCs before AI was invented.
Dehumanization is happening often and fast enough without acting like ignorant, uneducated, and/or stupid people aren’t “real” people.
I get it, some people seem to live their whole lives on autopilot, just believing whatever the people around them believe and doing what they’re told, but that doesn’t make them any less human than anybody else.
Don’t let the fascists win by pretending they’re not people.
Dehumanizing the enemy is part of any war, otherwise it’s more difficult to unalive them. It’s a tribal quality, not a fascist one.
“Unalive” is an unnecessary euphemism here. Please just say kill.
I forget Lemmy isn’t full of adult children and fascist algorithms that censor you.
Haha I grew up before smartphones and GPS navigation was a thing, and I never could navigate well even with a map!
GPS has actually been a godsend for me to learn to navigate my own city way better, because I learn better routes on the first try. Navigating is probably my weakest “skill” and is the joke of the family: if I have to go somewhere 30km away, the joke is that it’s 60km for me, because I always take “the long route”.
But with GPS I’ve actually become better at it, even without using the GPS.
TBF, that should be the conclusion in all contexts where “AI” is concerned.
The one thing you can say for AI is that it does many things faster than previous methods…
You mean worse?
Bad results are nothing new.
Humans are irrational creatures that have transitory states where they are capable of more ordered thought. It is our mistake to reach a conclusion that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.
Self awareness is a rare, and valuable, state.
Precisely. We like to think of ourselves as rational but we’re the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn’t stop it in the slightest.
Probably because stopping to self-analyze your decisions is a lot less effective than just running away from that lion over there.
Analysis is a luxury state - whether self-administered or professionally administered on a chaise lounge at $400 per hour.