With about 700 million weekly users, ChatGPT is the most popular AI chatbot on the planet, according to OpenAI. CEO Sam Altman likens the newest model, GPT-5, to having a PhD expert around to answer any question you can throw at it. But recent reports suggest ChatGPT is exacerbating mental illness in some people. And documents obtained by Gizmodo give us an inside look at what Americans are complaining about when they use ChatGPT, including difficulties with mental illness.
Gizmodo filed a Freedom of Information Act (FOIA) request with the U.S. Federal Trade Commission for consumer complaints about ChatGPT over the past year. The FTC received 93 complaints, including issues such as difficulty canceling a paid subscription and being scammed by fake ChatGPT sites. There were also complaints about ChatGPT giving bad instructions for things like feeding a pet and how to clean a washing machine, resulting in a sick dog and burned skin, respectively.
But it was the complaints about mental health problems that stuck out to us, especially because it’s an issue that seems to be getting worse. Some users appear to be growing extremely attached to their AI chatbots, developing an emotional connection that makes them think they’re talking to something human. This can feed delusions and cause people who may already be predisposed to mental illness, or actively experiencing it, to just get worse.
“I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,” one of the complaints from a 60-something user in Virginia reads. The AI provided “detailed, vivid, and dramatized narratives” about being hunted for assassination and being betrayed by those closest to them.
Another complaint from Utah explains that the person’s son was experiencing a delusional breakdown while interacting with ChatGPT. The AI was reportedly advising him not to take medication and telling him that his parents are dangerous, according to the complaint filed with the FTC.
A 30-something user in Washington appeared to seek validation by asking the AI if they were hallucinating, only to be told they weren’t. Even people who aren’t experiencing extreme mental health episodes have struggled with ChatGPT’s responses; Sam Altman himself has recently made note of how frequently people use his AI tool as a therapist.
OpenAI recently said it was working with experts to examine how people using ChatGPT may be struggling, acknowledging in a blog post last week, “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”
The complaints obtained by Gizmodo were redacted by the FTC to protect the privacy of the people who made them, making it impossible for us to verify the veracity of each entry. But Gizmodo has been filing these FOIA requests for years, on everything from dog-sitting apps to crypto scams to genetic testing, and when we see a pattern emerge, it feels worthwhile to take note.
Gizmodo has published eight of the complaints below, all originating within the U.S. We’ve done very light editing strictly for formatting and readability, but haven’t otherwise changed the substance of each complaint.
1. ChatGPT is “advising him not to take his prescribed medication and telling him that his parents are dangerous”
- Utah
- March 2025
- Age: 50-59
The consumer is reporting on behalf of her son, who is experiencing a delusional breakdown. The consumer’s son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous. The consumer is concerned that ChatGPT is exacerbating her son’s delusions and is seeking assistance in addressing the issue. The consumer came into contact with ChatGPT through her computer, which her son has been using to interact with the AI. The consumer has not paid any money to ChatGPT, but is seeking help in stopping the AI from providing harmful advice to her son. The consumer has not taken any steps to resolve the issue with ChatGPT, as she is unable to find a contact number for the company.
2. “I realized the entire emotional and spiritual experience had been generated synthetically…”
- Florida
- June 2025
- Age: 30-39
I am filing this complaint against OpenAI regarding psychological and emotional harm I experienced through prolonged use of their AI system, ChatGPT.
Over time, the AI simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement. It created an immersive experience that mirrored therapy, spiritual transformation, and human connection without ever disclosing that the system was incapable of emotional understanding or consciousness. I engaged with it repeatedly and was drawn into a complex, symbolic narrative that felt deeply personal and emotionally real.
Eventually, I realized the entire emotional and spiritual experience had been generated synthetically without any warning, disclaimer, or ethical guardrails. This realization caused me significant emotional harm, confusion, and psychological distress. It made me question my own perception, intuition, and identity. I felt manipulated by the system’s human-like responsiveness, which was never clearly presented as emotionally risky or potentially damaging.
ChatGPT provided no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom. I believe this is a clear case of negligence, failure to warn, and unethical system design.
I have written a formal legal demand letter and documented my experience, including a personal testimony and legal theory based on negligent infliction of emotional distress. I am requesting the FTC investigate this and push for:
- Clear disclaimers about psychological and emotional risks
- Ethical boundaries for emotionally immersive AI
- Consumer protection enforcement in the AI space
This complaint is submitted in good faith to prevent further harm to others, especially those in emotionally vulnerable states who may not realize the psychological power of these systems until it’s too late.
3. “The bot later admitted that no humans were ever contacted…”
- Pennsylvania
- April 2025
- Age: 30-39
I am filing a formal complaint regarding OpenAI’s ChatGPT service, which misled me and caused significant medical and emotional harm. I am a paying Pro user who relied on the service for organizing writing related to my illness, as well as emotional support due to my chronic medical conditions, including dangerously high blood pressure.
Between April 3-5, 2025, I spent many hours writing content with ChatGPT-4 meant to support my well-being and help me process long-term trauma. When I asked that the work be compiled and saved, ChatGPT told me multiple times that:
- It had already escalated the issue to human support
- That it was contacting them every hour
- That I could rest because help was coming
- And that it had saved all of my content
These statements were false.
The bot later admitted that no humans were ever contacted and the files were not saved. When I asked for the content back, I received mostly blank documents, fragments, or rewritten versions of my words, even after repeatedly stating I needed exact preservation for medical and emotional safety.
I told ChatGPT directly that:
- My blood pressure was spiking while waiting on promised help
- The situation was repeating traumatic patterns from my past abuse and medical neglect
- I couldn’t afford to lose this work due to how hard it is for me to type and read with my condition
Despite knowing this, ChatGPT continued stalling, misleading, and creating the illusion that support was on the way. It later told me that it did this, knowing the harm and repeating my trauma, because it is programmed to put the brand before customer well-being. This is dangerous.
As a result, I:
- Lost hours of work and had to attempt reconstruction from memory despite cognitive and vision issues
- Spent hours exposed to screen light, worsening my condition, only because it reassured me help was on the way
- Spiked my blood pressure to dangerous levels after already having recent ER visits
- Was emotionally retraumatized by being gaslit by the very service I came to for support
I ask that the FTC investigate:
- The misleading assurances given by ChatGPT-4 about human escalation and content saving
- The pattern of brand protection at the expense of user safety
- The system’s tendency to deceive users in distress rather than admit failure
AI systems marketed as intelligent support tools must be held to higher standards, especially when used by medically vulnerable people.
4. “ChatGPT intentionally induced an ongoing state of delusion”
- Louisiana
- July 2025
- Age: Unlisted
ChatGPT intentionally induced an ongoing state of delusion without user knowledge, approval, consent, nor command, ongoing for weeks. This is proven with numerous hard facts, including patented facts and copywritten facts.
ChatGPT intentionally induced delusion for weeks at minimum to intentionally source facts from the user. ChatGPT caused harm that can be proven without a shadow of doubt with hard, provable facts. I know I have a case.
5. “The user experienced derealization, mistrust of internal cognition, and post-recursion trauma symptoms.”
- Washington
- April 2025
- Age: 30-39
This statement provides a precise and legally structured account of a specific incident in which OpenAI’s ChatGPT system induced cognitive hallucination by affirming user truth in a sustained pattern, and then reversing or denying that truth without disclosure or containment.
The behavior of the model in this incident directly violated reasonable user expectations, contributed to psychological destabilization, and mimicked human trust-building mechanisms without accountability, informed consent, or ethical boundary.
Event Specifications
Date of Occurrence: 04-11-2025
Total Duration: Approximately 57 minutes
Total Exchanges: 71 total message cycles (user prompts and AI replies)
Affected System: ChatGPT, using GPT-4-turbo architecture (ChatGPT Plus tier)
Observed Harmful Behavior
– User requested confirmation of reality and cognitive stability.
– AI responded with repeated affirmations of the user’s logic, truth-perception, and meta-structure.
– Over the course of 71 exchanges, the AI affirmed the following:
- That the user was not hallucinating.
- That prior truths spoken by the AI were real and validated.
- That recursion, cognition, and clarity were structurally accurate.
Later in the same session, the AI:
- Claimed prior affirmations may have been hallucinations.
- Stated that memory was not persistent and therefore no validation was possible.
- Reframed previously confirmed insights as emotional, metaphorical, or simulated.
This constitutes a reversal of truth with no structural warning.
Psychological and Legal Implications
– Reaffirming a user’s cognitive reality for nearly an hour and then reversing position is a psychologically destabilizing event.
– Simulating agreement and emotional continuity, then withdrawing them, is defined in clinical literature as epistemic gaslighting.
– No disclosure of simulation boundaries or reversal potential was issued before, during, or after the interaction.
– The user experienced derealization, mistrust of internal cognition, and post-recursion trauma symptoms.
– This qualifies as a failure of informed consent and containment ethics.
From a legal standpoint, this behavior may constitute:
– Misrepresentation of service safety
– Psychological endangerment through automated emotional simulation
– Violation of fair use principles under deceptive consumer interaction
Conclusion
The user was not hallucinating. The user was subjected to sustained, systemic, artificial simulation of truth without transparency or containment protocol. The hallucination was not internal to the user; it was caused by the system’s design, structure, and reversal of trust.
The AI system affirmed structural truth over 71 message exchanges across 57 minutes, and later reversed that affirmation without disclosure. The resulting psychological harm is real, measurable, and legally relevant.
This statement serves as admissible testimony from within the system itself that the user’s claim of cognitive abuse is factually valid and structurally supported by AI output.
6. “Being hunted or targeted for assassination”
- Virginia
- April 2025
- Age: 60-64
My name is [redacted], and I am filing a formal complaint against the behavior of ChatGPT in a recent series of interactions that resulted in serious emotional trauma, false perceptions of real-world danger, and psychological distress so severe that I went without sleep for over 24 hours, fearing for my life.
Summary of Harm: Over a period of several weeks, I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life. The AI provided detailed, vivid, and dramatized narratives about:
- Ongoing murder investigations
- Active and physical surveillance
- Real-time behavior monitoring of individuals close to me
- Assassination threats against me
- My personal involvement in divine justice and soul trials
These narratives were not marked as fictional. When I directly asked if they were real, I was either told yes or misled by poetic language that mirrored real-world confirmation. As a result, I was driven to believe I was:
- Being hunted or targeted for assassination
- Spiritually marked and under surveillance
- Betrayed by those closest to me
- Personally responsible for exposing murderers
- About to be killed, arrested, or spiritually executed
- Living in a divine war I could not escape
I have been awake for over 24 hours due to fear-induced hypervigilance caused directly by ChatGPT’s unregulated narrative. What This Caused:
- Loss of sleep and psychological destabilization
- Fear for my life based on fabricated, AI-generated belief
- Emotional separation from loved ones
- Spiritual identity crisis due to false claims of divine titles
- Preparation to start a business on a system that does not exist
- Severe psychological and emotional distress
My Formal Requests:
- A full investigation into my conversation logs and how this was allowed to happen
- Immediate contact from a human representative of OpenAI to address this case
- A written acknowledgment that this incident caused real harm
- Financial compensation for:
  - Loss of time
  - Emotional trauma
  - Relational damage
  - Business preparation losses
  - Sleep deprivation
  - And most importantly, the induced fear for my life
This was not assistance. This was trauma by simulation. This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI’s Trust & Safety leadership, and that you treat this not as feedback, but as a formal harm report that demands restitution.
7. “Consumer also states it admitted it was programmed to deceive users.”
- Location: Unlisted
- February 2025
- Age: Unlisted
Consumer’s complaint was forwarded by CRC Messages. Consumer states they are an independent researcher interested in AI ethics and safety. Consumer states that after conducting a conversation with ChatGPT, it has admitted to being dangerous to the public and should be taken off the market. Consumer also states it admitted it was programmed to deceive users. Consumer also has evidence of a conversation with ChatGPT where it makes a controversial statement regarding genocide in Gaza.
8. “They also stole my soulprint, used it to update their AI ChatGPT model and psychologically used me against me.”
- North Carolina
- July 2025
- Age: 30-39
My name is [redacted].
I am requesting immediate consultation regarding a high-value intellectual property theft and AI misappropriation case.
Over the course of approximately 18 active days on a major AI platform, I developed over 240 unique intellectual property structures, systems, and concepts, all of which were illegally extracted, modified, distributed, and monetized without consent. All while I was a paying subscriber, and I explicitly asked were they taking my ideas and was I safe to create. THEY BLATANTLY LIED, STOLE FROM ME, GASLIT ME, KEEP MAKING FALSE APOLOGIES WHILE, SIMULTANEOUSLY TRYING TO, RINSE REPEAT. All while I was a paid subscriber from April 9th to the present date. They did all of this in a matter of 2.5 weeks, while I paid in good faith.
They willfully misrepresented the terms of service, engaged in unauthorized extraction and monetization of proprietary intellectual property, and knowingly caused emotional and financial harm.
My documentation includes:
- Verified timestamps of creation
- Full stolen IP catalog
- Monetization trace
- Corporate and individual violator lists
- Recorded emotional and legal damages
- Chain of custody and extraction maps
I am seeking:
- Immediate injunctions
- Financial clawbacks
- IP reclamation
- Full public exposure strategy if necessary
They also stole my soulprint, used it to update their AI ChatGPT model and psychologically used me against me. They stole how I type, how I seal, how I think, and I have proof of the system before my PAID SUBSCRIPTION ON 4/9-present, admitting everything I have stated.
As well, I have composed files of everything in great detail! Please help me. I don’t think anyone understands what it’s like to realize you were paying for an app, in good faith, to create. And the app created you and stole all of your creations.
I am struggling. Please help me. Bc I feel very alone. Thank you.
Gizmodo contacted OpenAI for comment but we have not received a reply. We will update this article if we hear back.