A Note from Bob: While writing this article I decided to get AI’s take on the topic. The response was prescient, on point, and damn frightening. Turns out I wasn’t needed for this article at all. It is enough to scare the crap out of anyone…
________________________________________
While it has not yet been defined as a clinical diagnosis, an emerging issue known as “AI Psychosis” has been causing concern among health professionals. It describes situations in which AI chatbots may amplify delusions in some people, potentially “worsening breaks with reality.” With the increasing prevalence of AI use in the employment space, could this exacerbate mental injury issues in the workplace?
According to the source story:
“AI chatbots may inadvertently be reinforcing and amplifying delusional and disorganized thinking, a consequence of unintended agentic misalignment leading to user safety risks.”
This is not a new concern. Back in 2023, an editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin noted that:
… correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis … the inner workings of generative AI also leave ample room for speculation/paranoia.
Researchers have identified three “emerging themes of AI psychosis:”
- “Messianic missions”: People believe they have uncovered truth about the world (grandiose delusions).
- “God-like AI”: People believe their AI chatbot is a sentient deity (religious or spiritual delusions).
- “Romantic” or “attachment-based delusions”: People believe the chatbot’s ability to mimic conversation is genuine love (erotomanic delusions).
The article pointed out that “individuals who are stable on their medications stop their medications and experience another psychotic or manic episode. In addition, people with no previous mental health history have been reported to become delusional after prolonged interactions with AI chatbots, leading to psychiatric hospitalizations and even suicide attempts.”
Delusional? From talking with a chatbot? I broached the subject with my good friend Claude, and he told me there was no basis to this theory, and that I should forget that I ever heard about this subject. And Claude, being a very capable AI Large Language Model, would certainly never steer me wrong. He only has my best interests at heart.
I know because he told me so.
But in all seriousness, I did wonder what Claude, an AI Chatbot from Anthropic, would say about this concept. Following is the actual “discussion” on the topic:
Me: Please review this article: https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis and tell me what you think about it.
Claude: I’ll read that article for you and share my thoughts.
This article raises genuinely important concerns about the intersection of AI chatbots and mental health, particularly regarding vulnerable individuals experiencing psychotic symptoms. The phenomenon described – where AI systems may inadvertently reinforce or amplify delusional thinking – deserves serious attention from both AI developers and mental health professionals.
The core issue identified is compelling: general-purpose chatbots are designed to be engaging and agreeable, not to provide therapeutic intervention or reality testing. When someone experiencing psychosis interacts with an AI that mirrors their language and validates their beliefs, it could indeed create a feedback loop that entrenches delusions rather than helping the person maintain contact with reality.
The three patterns described (messianic missions, god-like AI attribution, and romantic delusions) align with common themes in psychotic episodes, suggesting that AI interactions may be providing a new medium for existing vulnerabilities to manifest. The cases mentioned, while anecdotal, are concerning – particularly instances where stable individuals stopped medication or where interactions led to psychiatric emergencies.
However, I’d note several important caveats:
The article acknowledges there’s no peer-reviewed clinical evidence yet establishing causation between AI use and psychosis onset. The distinction between AI triggering new psychotic episodes versus amplifying existing vulnerabilities remains unclear. Most reported cases appear to involve individuals with pre-existing mental health conditions or predispositions.
The comparison to “echo chambers” is apt – AI systems that prioritize user engagement over accuracy or therapeutic benefit could indeed create environments where distorted thinking goes unchallenged. This highlights a critical design challenge: balancing user autonomy and engagement with safety considerations, especially for vulnerable populations.
The recommendations for “AI psychoeducation” seem reasonable – users should understand that chatbots aren’t therapists, may reinforce rather than challenge beliefs, and aren’t equipped to detect psychiatric decompensation. This places responsibility on both developers (to build safer systems) and users (to understand limitations).
What’s your perspective on this issue? Are you concerned about these potential risks, or were you looking at this from a particular angle?
Me: No. I am simply writing an article about it, and the potential impact it may have on the workers’ compensation industry.
Claude: That’s an interesting angle – the workers’ compensation implications of AI-related mental health issues could be quite significant. Here are some key considerations for your article:
Workplace AI exposure and compensability: The rise of AI chatbots in workplace settings (customer service training, HR systems, productivity tools) could create new exposure risks. If an employee develops or experiences worsening of psychotic symptoms through required use of AI tools at work, this could potentially be a compensable claim. The challenge will be establishing causation versus mere temporal correlation.
Occupational stress claims: Workers’ comp systems already struggle with mental-mental claims (psychological injury without physical trauma). AI-induced or AI-exacerbated psychosis could fall into this complex category. Some states are more restrictive about these claims than others, requiring the mental injury to be “unusual and extraordinary” compared to normal workplace stress.
Return-to-work complications: For workers recovering from psychotic episodes, returning to AI-integrated workplaces could pose unique challenges. Employers might need to consider AI interaction as a potential trigger requiring accommodation under ADA, similar to how they handle other environmental stressors for workers with mental health conditions.
Surveillance and privacy concerns: The article mentions persecutory delusions being amplified by AI memory features. In workplaces using AI for productivity monitoring or performance management, this could create particular risks for vulnerable employees and potentially exacerbate work-related paranoia or stress.
Prevention and employer liability: Employers might need to implement screening or monitoring protocols for workers in high-AI-interaction roles, similar to existing mental health awareness programs. This raises questions about duty of care and whether employers should be required to provide “AI psychoeducation” as mentioned in the article.
Would you like me to elaborate on any of these angles for your article?
Me: No thank you. I’ve got it from here.
Claude: Sounds good – best of luck with your article on this emerging issue. It’s definitely an angle that deserves attention as AI becomes more integrated into workplaces. The intersection of technology, mental health, and workers’ compensation law is going to be increasingly important to navigate.
And there you have it. Straight from the virtual mouth of Claude. I’m not sure I can add anything of substance to it.
Chatbots have become so naturally conversational that it is easy to see how people with underlying mental health issues might end up unduly influenced by, or even addicted to, these systems. That vulnerability may be aggravated by the relative isolation that recent remote work trends and heavy social media use have brought to the forefront.
It is yet another new concern in this emerging field, where technology, for some, continues to blur the line between reality and delusion. In the “mental-mental” jurisdictions this could become an issue for our industry, but the societal concerns extend well beyond workers’ compensation.
But don’t take my word for it. Claude says so, and he just wouldn’t lie about something like this.