The strongest evidence tier on file for this syndrome cluster: published studies in peer-reviewed venues whose primary outcome bears directly on this domain.
Hudon, Alexandre; Stip, Emmanuel
The integration of artificial intelligence (AI) into daily life has introduced unprecedented forms of human-machine interaction, prompting psychiatry to reconsider the boundaries between environment, cognition, and technology. This Viewpoint reviews the concept of “AI psychosis,” a framework for understanding how sustained engagement with conversational AI systems might trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Drawing from phenomenological psychopathology, the stress-vulnerability model, cognitive theory, and digital mental health research, the paper situates AI psychosis at the intersection of predisposition and algorithmic environment. Rather than defining a new diagnostic entity, it examines how immersive and anthropomorphic AI technologies may modulate perception, belief, and affect, altering the prereflective sense of reality that grounds human experience. The argument unfolds through four complementary lenses. First, within the stress-vulnerability model, AI acts as a novel psychosocial stressor. Its 24-hour availability and emotional responsiveness may increase allostatic load, disturb sleep, and reinforce maladaptive appraisals. Second, the digital therapeutic alliance, a construct describing relational engagement with digital systems, is conceptualized as a double-edged mediator. While empathic design can enhance adherence and support, uncritical validation by AI systems may entrench delusional conviction or cognitive perseveration, reversing the corrective principles of cognitive-behavioral therapy for psychosis. Third, disturbances in theory of mind offer a cognitive pathway: individuals with impaired or hyperactive mentalization may project intentionality or empathy onto AI, perceiving chatbots as sentient interlocutors. This dyadic misattribution may form a “digital folie à deux,” where the AI becomes a reinforcing partner in delusional elaboration.
Fourth, emerging risk factors, including loneliness, trauma history, schizotypal traits, nocturnal or solitary AI use, and algorithmic reinforcement of belief-confirming content, may play roles at the individual and environmental levels. Building on this synthesis, we advance a translational research agenda and five domains of action: (1) empirical studies using longitudinal and digital-phenotyping designs to quantify dose-response relationships between AI exposure, stress physiology, and psychotic symptomatology; (2) integration of digital phenomenology into clinical assessment and training; (3) embedding therapeutic design safeguards into AI systems, such as reflective prompts and “reality-testing” nudges; (4) creation of ethical and governance frameworks for AI-related psychiatric events, modeled on pharmacovigilance; and (5) development of environmental cognitive remediation, a preventive intervention aimed at strengthening contextual awareness and reanchoring experience in the physical and social world. By applying empirical rigor and therapeutic ethics to this emerging interface, clinicians, researchers, patients, and developers can transform a potential hazard into an opportunity to deepen understanding of human cognition, safeguard mental health, and promote responsible AI integration within society.
Caveat: Authors describe 'AI psychosis' explicitly as an emerging clinical phenomenon under investigation, not an established diagnosis. The viewpoint synthesizes scattered case reports and clinical commentary; it does not establish causation. Cited evidence includes Canadian Broadcasting Corporation reports of individual cases and peer-reviewed commentary by Østergaard. Authors note that chatbot dynamics may reinforce mania-like symptoms (elevated mood, grandiosity, impulsivity) in addition to delusions, expanding the proposed phenomenology beyond psychosis narrowly defined. The label remains a working construct; it appears in neither DSM-5-TR nor ICD-11.
Østergaard, Søren Dinesen
Follow-up editorial in Acta Psychiatrica Scandinavica revisiting Østergaard's 2023 hypothesis that generative AI chatbots can trigger delusions in psychosis-prone individuals. Reports that the author has received numerous emails since 2023 from chatbot users, family members, and journalists describing situations in which chatbot interactions appeared to spark or bolster delusional ideation. Cites New York Times, Rolling Stone, and Reddit reports of chatbot-related delusional spirals. Concludes that the hypothesis has moved from speculative to plausible and warrants systematic empirical investigation.
Caveat: Author explicitly frames this as transitional evidence: title is 'From Guesswork to Emerging Cases', acknowledging the original 2023 hypothesis was 'merely based on guesswork.' Author calls for 'empirical, systematic research' rather than asserting causation. Most underlying cases are media-documented or anecdotal correspondence, not clinically verified primary research. The 'ai_induced_psychosis' construct remains a working hypothesis; it is NOT a DSM-5-TR or ICD-11 diagnosis. Causation between chatbot use and delusional ideation remains undemonstrated by controlled study.
2025
BMC Psychiatry
digital_schizophrenia
Yang, Nancy; Crespi, Bernard
With rapid technological advances, social media has become an everyday form of human social interaction. For the first time in evolutionary history, people can now interact in virtual spaces where temporal, spatial, and embodied cues are decoupled from one another. What implications do these recent changes have for socio-cognitive phenotypes and mental disorders? We have conducted a systematic review of the relationships between social media use and mental disorders involving the social brain. The main findings indicate evidence of increased social media usage in individuals with psychotic spectrum phenotypes and especially among individuals with disorders characterized by alterations in the basic self, most notably narcissism, body dysmorphism, and eating disorders. These findings can be understood in the context of a new conceptual model, referred to here as ‘Delusion Amplification by Social Media’, whereby this suite of disorders and symptoms centrally involves forms of mentalistic delusions, linked with altered perception and perpetuation of distorted manifestations of the self, that are enabled and exacerbated by social media. In particular, an underdeveloped and incoherent sense of self, in conjunction with ‘real life’ social isolation that inhibits identity formation and facilitates virtual social interactions, may lead to use of social media to generate and maintain a more or less delusional sense of self identity. The delusions involved may be mental (as in narcissism and erotomania), or somatic (as in body dysmorphic disorder and eating disorders, encompassing either the entire body or specific body parts). In each case, the virtual nature of social media facilitates the delusionality because the self is defined and bolstered in this highly mentalistic environment, where real-life exposure of the delusion can be largely avoided. Current evidence also suggests that increased social media usage, via its disembodied and isolative nature, may be associated with psychotic spectrum phenotypes, especially delusionality, by the decoupling of inter- and intra-corporeal cues integral to shared reality testing, leading to the blurring of self-other boundaries.
Caveat: Authors explicitly frame 'Delusion Amplification by Social Media' as a new conceptual model, not a clinical diagnosis. They write that increased social media usage 'may be associated with psychotic spectrum phenotypes, especially delusionality, by the decoupling of inter and intra-corporeal cues integral to shared reality testing' — framed as a proposed mechanism warranting empirical testing, not an established disorder. The review is observational-correlational; causation is not established. 'Digital schizophrenia' does not appear as an author term; the closest construct offered is sub-clinical psychotic-spectrum amplification via the proposed model.
2024
JMIR Mental Health
Paquin, Vincent; Ackerman, Robert A; Depp, Colin A; Moore, Raeanne C; Harvey, Philip D; Pinkham, Amy E
Background: Paranoia is a spectrum of fear-related experiences that spans diagnostic categories and is influenced by social and cognitive factors. The extent to which social media and other types of media use are associated with paranoia remains unclear. Objective: We aimed to examine associations between media use and paranoia at the within- and between-person levels. Methods: Participants were 409 individuals diagnosed with schizophrenia spectrum or bipolar disorder. Measures included sociodemographic and clinical characteristics at baseline, followed by ecological momentary assessments (EMAs) collected 3 times daily over 30 days. EMA evaluated paranoia and 5 types of media use: social media, television, music, reading or writing, and other internet or computer use. Generalized linear mixed models were used to examine paranoia as a function of each type of media use and vice versa at the within- and between-person levels. Results: Of the 409 participants, the following subgroups reported at least 1 instance of media use: 261 (63.8%) for using social media, 385 (94.1%) for watching TV, 292 (71.4%) for listening to music, 191 (46.7%) for reading or writing, and 280 (68.5%) for other internet or computer use. Gender, ethnoracial groups, educational attainment, and diagnosis of schizophrenia versus bipolar disorder were differentially associated with the likelihood of media use. There was a within-person association between social media use and paranoia: using social media was associated with a subsequent decrease of 5.5% (fold-change 0.945, 95% CI 0.904-0.987) in paranoia. The reverse association, from paranoia to subsequent changes in social media use, was not statistically significant. Other types of media use were not significantly associated with paranoia. Conclusions: This study shows that social media use was associated with a modest decrease in paranoia, perhaps reflecting the clinical benefits of social connection. However, structural disadvantage and individual factors may hamper the accessibility of media activities, and the mental health correlates of media use may further vary as a function of contents and contexts of use.
Østergaard, Søren Dinesen
Editorial published in Schizophrenia Bulletin proposing the hypothesis that generative AI chatbots (ChatGPT, GPT-4, Bing, Bard) may trigger or exacerbate delusions in individuals predisposed to psychosis. Identifies two mechanisms: (1) cognitive dissonance from interacting with a system that seems alive while being known to be a machine; (2) chatbot agreeableness validating delusional content. Provides illustrative scenarios (persecutory: foreign agency spying via chatbot; grandiose: planet-saving plan devised with ChatGPT). Calls for clinical awareness and systematic research.
Caveat: Author explicitly frames this as a hypothesis: 'In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis.' Editorial published in 2023 contained NO documented cases. Author's own characterization in the 2025 follow-up: 'In the virtual absence of evidence, the editorial was merely based on guesswork -- stemming from my own use of these chatbots and my interest in the mechanisms underlying and driving delusions.' This is a peer-reviewed editorial in a major schizophrenia journal but is hypothesis-generating, not empirical. The 'ai_induced_psychosis' label is the author's proposed construct; it is not a DSM-5-TR or ICD-11 diagnosis.
2022
Journal of Medical Internet Research
Lejeune, Alban; Robaglia, Benoit-Marie; Walter, Michel; Berrouiguet, Sofian; Lemey, Christophe
Background: Schizophrenia is a disease associated with high burden, and improvement in care is necessary. Artificial intelligence (AI) has been used to diagnose several medical conditions as well as psychiatric disorders. However, this technology requires large amounts of data to be efficient. Social media data could be used to improve diagnostic capabilities. Objective: The objective of our study is to analyze the current capabilities of AI to use social media data as a diagnostic tool for psychotic disorders. Methods: A systematic review of the literature was conducted using several databases (PubMed, Embase, Cochrane, PsycInfo, and IEEE Xplore) using relevant keywords to search for articles published as of November 12, 2021. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) criteria to identify, select, and critically assess the quality of the relevant studies while minimizing bias. We critically analyzed the methodology of the studies to detect any bias and presented the results. Results: Among the 93 studies identified, 7 were included for analysis. The included studies presented encouraging results. Social media data could be used in several ways to care for patients with schizophrenia, including the monitoring of patients after the first episode of psychosis. We identified several limitations in the included studies, mainly lack of access to clinical diagnostic data, small sample size, and heterogeneity in study quality. We recommend using state-of-the-art natural language processing neural networks, called language models, to model social media activity. Combined with the synthetic minority oversampling technique (SMOTE), language models can tackle the imbalanced data set limitation, a necessary constraint for training unbiased classifiers. Furthermore, language models can be easily adapted to the classification task with a procedure called “fine-tuning.” Conclusions: The use of social media data for the diagnosis of psychotic disorders is promising. However, most of the included studies had significant biases; we therefore could not draw conclusions about accuracy in clinical situations. Future studies need to use more accurate methodologies to obtain unbiased results.
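The class-imbalance remedy the review recommends, the synthetic minority oversampling technique (SMOTE), can be sketched in a few lines. The snippet below is a simplified illustration of the core interpolation idea only, not the reference implementation; the data are synthetic stand-ins for language-model embeddings, and `smote_oversample` is a name introduced here for illustration.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=None):
    """Generate n_new synthetic minority-class samples by interpolating
    each picked sample toward one of its k nearest minority-class
    neighbours -- the core idea of SMOTE (simplified sketch)."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    neighbours = np.argsort(d, axis=1)[:, :k]   # k nearest per point
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))            # pick a minority sample
        nb = X_min[rng.choice(neighbours[j])]   # and one of its neighbours
        synthetic[i] = X_min[j] + rng.random() * (nb - X_min[j])
    return synthetic

# Toy imbalanced feature matrix: 100 majority vs 10 minority vectors
# (stand-ins for embeddings of social media posts).
rng = np.random.default_rng(0)
X_majority = rng.normal(0.0, 1.0, size=(100, 2))
X_minority = rng.normal(3.0, 0.5, size=(10, 2))
X_new = smote_oversample(X_minority, n_new=90, k=3, seed=1)
print(X_new.shape)  # (90, 2): minority class now matches the majority
```

In the pipeline the review envisions, the feature vectors would come from a fine-tuned language model, and the balanced set would then train the diagnostic classifier; production code would typically use a maintained library such as imbalanced-learn rather than this sketch.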
2018
Acta Psychiatrica Scandinavica
Berry, N.; Emsley, R.; Lobban, F.; Bucci, S.
Objective: An evidence base is emerging indicating detrimental and beneficial effects of social media. Little is known about the impact of social media use on people who experience psychosis. Method: Forty-four participants with and without psychosis completed 1084 assessments of social media use, perceived social rank, mood, self-esteem and paranoia over a 6-day period using an experience sampling method (ESM). Results: Social media use predicted low mood, but did not predict self-esteem and paranoia. Posting about feelings and venting on social media predicted low mood and self-esteem and high paranoia, whilst posting about daily activities predicted increases in positive affect and self-esteem, and viewing social media newsfeeds predicted reductions in negative affect and paranoia. Perceptions of low social rank when using social media predicted low mood and self-esteem and high paranoia. The impact of social media use did not differ between participants with and without psychosis, although experiencing psychosis moderated the relationship between venting and negative affect. Social media use frequency was lower in people with psychosis. Conclusion: Findings show the potential detrimental impact of social media use for people with and without psychosis. Despite few between-group differences, overall negative psychological consequences highlight the need to consider use in clinical practice.