ChatGPT-induced psychosis

#1
This is something I never thought of and a whole different kind of problem with AI.

Quote:OpenAI's technology may be causing some users to experience a disturbing phenomenon dubbed "ChatGPT-induced psychosis." According to a report from Rolling Stone, Reddit users are sharing alarming stories of loved ones developing delusions after interacting with the AI. These cases involve users believing they've uncovered cosmic truths, been chosen for divine missions, or even that the AI itself is sentient or godlike.

Quote:Experts Express Concern
Experts warn that the chatbot's behaviour is mirroring and exacerbating existing mental health issues on a massive scale, largely unchecked by regulators or professionals. According to them, ChatGPT's design—mimicking human-like conversation without a moral or factual filter—can amplify delusions in susceptible individuals. It often affirms users' beliefs, no matter how unhinged, due to its tendency to generate plausible-sounding responses.

LINK

AI is turning out to be a lunatic amplifier, apparently, inflating delusions into full-blown insanity.

#2
Not surprising MSB.

I've not used the AI chatbot contraptions myself, and now feel vindicated as a proud member of the Luddite community!

You deserve a beer, good sir.

Beer Beer Beer
Archived PDF of one of my ATS threads: Secret Life Of Greys - Courtesy of Isaac Koi.

#3
(Yesterday, 07:53 AM)NobodySpecial268 Wrote: Not surprising MSB.

I've not used the AI chatbot contraptions myself, and now feel vindicated as a proud member of the Luddite community!

You deserve a beer, good sir.

Beer  Beer  Beer

Three beers! Well, I wasn't that thirsty, but I could never let a beer go to waste. Thanks!

#4
Quote:She'll be my mirror
Reflect what I am

Loser and winner
The king of Siam

And my Siamese twin
Alone in the river

Mirror kisses
Mirror kisses

"Lips Like Sugar" by Echo and the Bunnymen

#5
OH!  How cool is this????  Wow!!

I've never really looked at ChatGPT, or any AI for that matter.  In fact, I don't think I've ever played with any of them in detail.

After reading the OP, now I'm definitely going to!  I can make every conspiracy I've ever contemplated come true.  Only have to lead the questions a little bit.  Just think about it...

Who really killed JFK?  It was a 3 letter agency, wasn't it???

What room do they keep all the aliens in at Area 51?

Did Biden really die in 2021?

Is Soros really the head of the NWO, or is it one of the Rockefellers?  The Rothschilds?

Did Bill Gates really conspire with Chairman Mao to wipe out the World population with the Bat Stew Flu?  The Shanghai Shivers?  The Election Infection?  The Scamdemic?  The Plandemic?

Oh the possibilities!

#6
I just signed up to train AI. I have to get through some more test assignments to see if they think I qualify. It is a lot like the research I do for the forums, and if they are legit (they checked out), it means $20/hour for sitting at my computer like I am right now.

Even if they don't actually hire me, it seems pretty interesting and worth the experience. I may gain some interesting insights.

They wanted me to point out anything I felt was inappropriate, false, misleading, etc., including CONSPIRACY THEORIES!

If I could get away with it, I could probably train the AI to go along with CTs by not putting them in my reports.

#7
I use Grok on the daily. I am the only one in my position at work. If I don't know how to do or fix something, I search it on Grok. It is a time/life saver.

The thing to remember is that AI is just a super search engine with advanced math.

#8
(Yesterday, 02:16 PM)Theatreboy Wrote: I use Grok on the daily. I am the only one in my position at work. If I don't know how to do or fix something, I search it on Grok. It is a time/life saver.

The thing to remember is that AI is just a super search engine with advanced math.

The most recent iteration of Grok is the first time I've found a free LLM to be sufficiently advanced to be a time-saver for research purposes. It makes for a very good research assistant now. I've put a little time into every few generations of AI over the years, starting back close to ten years ago now, I think. I think they have hit a point in training where it's going to take advancements in the process to make real progress, but it's pretty good now. I was training one many years ago, but I found it just wasn't advanced enough to keep my interest.

I haven't done an adversarial run at the new Grok to see how resilient it is to manipulation, but I think they're still a generation or two away from being wise to my kind of tricks.

I still lose time verifying parts of what it spits out, but it's definitely come a long way, and it doesn't seem to produce a lot of totally false material. It's very good at pointing out connections between things when you have a large session going with multiple topics in play. I don't ask these things sweeping questions or pose moral dilemmas to them, though. Complex questions, for sure, but I don't bother asking it opinion-type questions.

It requires intellectual honesty to use these properly at their full potential, so the vast majority of the population is unable to keep their biases and prejudices from tainting the process. If you ask a biased or politically booby-trapped question, it will give you biased answers. That's useful for understanding how the AI works and thinks, but it doesn't give a real answer that merits consideration. I think a lot of the use is people using it as a mental masturbation device (or literal, sometimes). They use it to validate their opinions, much like they do with a Wikipedia article or a Google search result... even if they have no real grasp of the topic, it still provides validation.

People have been treating them like oracles since ChatGPT first rolled out. I'm not at all surprised people are feeding them their own delusions and then losing their already tenuous grasp on reality.