<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
	<id>https://mental.jmir.org/issue/feed</id>
	<title>JMIR Mental Health</title>
			<updated>2025-01-03T10:15:04-05:00</updated>
	
		<author>
		<name>JMIR Publications</name>
				<email>editor@jmir.org</email>
			</author>
		<link rel="alternate" href="https://mental.jmir.org" />
	<link rel="self" type="application/atom+xml" href="https://mental.jmir.org/feed/atom" />

	<generator uri="http://pkp.sfu.ca/ojs/" version="2.2.0.0">Open Journal Systems</generator>

				        <rights> Unless stated otherwise, all articles are open-access distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work (&quot;first published in the Journal of Medical Internet Research...&quot;) is properly cited with original URL and bibliographic citation information. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included. </rights>
    	<subtitle>Internet interventions, technologies, and digital innovations for mental health and behavior change. JMIR Mental Health is the official journal of the Society of Digital Psychiatry.</subtitle>



	<entry>
		<id>https://mental.jmir.org/2026/1/e88750</id>
		<title>Explainable AI for Well-Being Prediction From Lifestyle Data: 2-Study Design</title>
		<updated>2026-05-08T13:30:08-04:00</updated>

					<author>
				<name>Flore Vancompernolle Vromman</name>
			</author>
					<author>
				<name>Corentin Vande Kerckhove</name>
			</author>
					<author>
				<name>Joël Gagnon</name>
			</author>
					<author>
				<name>Camille Pelletier</name>
			</author>
					<author>
				<name>Yannick Dufresne</name>
			</author>
					<author>
				<name>Simon Coulombe</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e88750" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e88750">Background: Well-being is a cornerstone of public health and social progress; yet, its determinants are multifaceted and dynamic. As behavioral data become increasingly available and artificial intelligence (AI) systems gain prominence, scalable assessments of well-being are becoming more feasible. However, to be useful in practice, such systems must remain understandable to the people they aim to support. Explainable AI is therefore essential to foster trust and enable reflection. Objective: This research aimed to investigate (1) the extent to which modifiable lifestyle and contextual factors can predict subjective well-being, and (2) how different explanation modalities influence users’ satisfaction when interpreting AI-generated well-being feedback. Methods: We conducted a 2-stage, application-grounded investigation. First, we developed a parsimonious regularized linear model using a small set of lifestyle-related predictors to estimate individual well-being. Second, we experimentally compared multiple explanation modalities (visual, interactive, textual, quantitative, and population-comparison) against a no-explanation control to evaluate how each format shapes end users’ satisfaction with the AI-generated assessment. Results: Across conditions, providing any explanation increased users’ satisfaction relative to the no-explanation control in the final sample (n=1252 participants). Visual (B=0.915, SE 0.077; P&lt;.001) and interactive (B=0.914, SE 0.076; P&lt;.001) explanations produced the highest satisfaction scores, while textual (B=0.850, SE 0.076; P&lt;.001) and quantitative (B=0.782, SE 0.077; P&lt;.001) formats also showed strong positive effects. Population-comparison (contextual) feedback yielded a smaller effect (B=0.218, SE 0.077; P=.005) and was consistently the least preferred and least effective at conveying why the model produced a given assessment. 
Conclusions: The findings suggest that well-being tools should combine simple, interpretable models with visual or interactive explanations that foreground actionable behavioral levers rather than emphasizing population norms. These insights offer design guidance for deploying explainable AI in well-being tools to support user satisfaction.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/c0b89f4c5ed839b767e84f35b863edf1" />
		
		<published>2026-05-08T13:30:08-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e89164</id>
		<title>Barriers and Facilitators in the Implementation of the Systematic Medical Appraisal, Referral, and Treatment (SMART) Mental Health Digital Intervention in Rural India: Mixed Methods Process Evaluation Study</title>
		<updated>2026-05-07T17:30:04-04:00</updated>

					<author>
				<name>Ankita Mukherjee</name>
			</author>
					<author>
				<name>Mercian Daniel</name>
			</author>
					<author>
				<name>Sudha Kallakuri</name>
			</author>
					<author>
				<name>Siddhardha Devarapalli</name>
			</author>
					<author>
				<name>Sandhya Kanaka Yatirajula</name>
			</author>
					<author>
				<name>Amanpreet Kaur</name>
			</author>
					<author>
				<name>Praveen Devarsetty</name>
			</author>
					<author>
				<name>Usha Raman</name>
			</author>
					<author>
				<name>Beverley M Essue</name>
			</author>
					<author>
				<name>Rajesh Sagar</name>
			</author>
					<author>
				<name>Shashi Kant</name>
			</author>
					<author>
				<name>Shekhar Saxena</name>
			</author>
					<author>
				<name>Graham Thornicroft</name>
			</author>
					<author>
				<name>Anushka Patel</name>
			</author>
					<author>
				<name>Pallab K Maulik</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e89164" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e89164">&lt;strong&gt;Background:&lt;/strong&gt; An estimated 150 million people have mental health care needs in India, but only 15% are able to access care. Depression and anxiety contribute to a large proportion of mental morbidity. The Systematic Medical Appraisal, Referral, and Treatment (SMART) Mental Health trial used a mobile-based clinical decision support system for primary care doctors and community health workers (CHWs) to identify and treat people at risk of depression, anxiety disorders, and self-harm. A community-based antistigma campaign was also delivered. The intervention led to improved remission rates for depression and anxiety and lower stigma scores. &lt;strong&gt;Objective:&lt;/strong&gt; A process evaluation assessed (1) implementation fidelity, barriers, and facilitators; (2) perceptions of doctors and CHWs on the use of SMART Mental Health; and (3) the causal pathways that led to trial outcomes. &lt;strong&gt;Methods:&lt;/strong&gt; A mixed methods evaluation combining backend program data and qualitative data was conducted. A total of 38 focus group discussions and 37 key informant interviews were conducted with primary doctors, CHWs, government officials, local community leaders, and research project staff. The data were coded and analyzed using a framework analysis approach based on the UK Medical Research Council guidance on process evaluations and the Reach, Effectiveness, Adoption, Implementation, and Maintenance framework. &lt;strong&gt;Results:&lt;/strong&gt; The intervention had high implementation fidelity. Across clusters, the median proportion of participants with at least 1 CHW follow-up was 98% (IQR 96.6%-100%). The referral rate for a psychiatrist was low (224/1697, 13.2%), and only 23.6% (53/224) of those referred visited the psychiatrist. The median exposure to antistigma audiovisual content was 84% (IQR 65.7%-95.9%). 
At the community level, key implementation barriers included cultural inhibitions in seeking mental health care and the unavailability of patients due to competing demands. Proximity and tight social connections between CHWs and their communities were important facilitators in seeking medical help. Doctor and CHW training, mentoring, and feedback provided by program staff were important facilitators to support the use of the digital health components by the health workforce. &lt;strong&gt;Conclusions:&lt;/strong&gt; A complex intervention that included both community-based antistigma and clinical digital health interventions achieved high implementation fidelity. Key areas to consider for maintenance of such interventions include (1) the need for sustained community-based strategies to address stigma and other cultural barriers; (2) health workforce strengthening policies, including supportive supervision for CHWs and doctors to increase capability in the use of mental health digital health tools; and (3) strategies to improve access to specialist care for those with more complex care needs. &lt;strong&gt;Trial Registration:&lt;/strong&gt; Clinical Trial Registry India CTRI/2018/08/015355; https://tinyurl.com/5r63suxp </summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/ecf46e7e6288e3563371f5c4e1f2fc2a" />
		
		<published>2026-05-07T17:30:04-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e84754</id>
		<title>Behavior Change Techniques in Digital Health Interventions for Promoting Adolescent Health Behaviors: Systematic Umbrella Review</title>
		<updated>2026-05-06T14:45:15-04:00</updated>

					<author>
				<name>Nikolaos Boumparis</name>
			</author>
					<author>
				<name>Philippe de Riedmatten</name>
			</author>
					<author>
				<name>Katrina Champion</name>
			</author>
					<author>
				<name>Gloria Cea</name>
			</author>
					<author>
				<name>Teresa de Pablo-Pardo</name>
			</author>
					<author>
				<name>Kleio Koutra</name>
			</author>
					<author>
				<name>Katie Rizvi</name>
			</author>
					<author>
				<name>Hayley Pearce</name>
			</author>
					<author>
				<name>Andreas Triantafyllidis</name>
			</author>
					<author>
				<name>Ana Molina-Barceló</name>
			</author>
					<author>
				<name>Michael Patrick Schaub</name>
			</author>
					<author>
				<name>Severin Haug</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e84754" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e84754">Background: Digital health interventions (DHIs) using behavior change techniques (BCTs) show promise in addressing adolescent health behaviors, but evidence of their effectiveness across health behavior domains remains fragmented and poorly summarized. Objective: This systematic umbrella review synthesized evidence from existing systematic reviews on the effectiveness of BCTs within DHIs targeting key adolescent health behavior domains: alcohol consumption, tobacco use, physical activity, dietary habits, and obesity management. Methods: We systematically searched PubMed, PsycInfo, Embase, and CINAHL in April 2024 for reviews of DHIs for adolescents (10‐19 years old). We coded all identified BCTs using the Behavior Change Technique Taxonomy version 1 (BCTTv1). Data on BCT effectiveness, intervention characteristics, and review quality were extracted and narratively synthesized using AMSTAR-2 (A Measurement Tool to Assess Systematic Reviews 2). Results: A total of 20 reviews, comprising 224,135 participants, were included. These examined DHIs targeting physical activity (7 reviews), dietary habits (3 reviews), alcohol consumption (2 reviews), combined alcohol and nicotine use (1 review), and obesity management (1 review), with an additional 6 reviews covering multiple health behaviors. Across reviews, 65% (13/20) reported statistically significant positive effects on at least one health behavior outcome. “Social support (unspecified)” was the most consistently adopted and effective BCT, especially with parental/peer involvement. The combination of “self-monitoring,” “goal setting,” and “feedback” also commonly appeared in successful interventions. Intervention effectiveness appeared linked to strategic BCT selection and individualization rather than the total number of techniques. The methodological quality of included reviews was predominantly low, with only 2 rated high. 
Conclusions: This umbrella review identified “social support (unspecified)” as a consistently effective BCT across multiple adolescent health behavior domains, particularly with parental/peer involvement. Intervention success appears linked to targeted and individualized BCT use. Future research should prioritize clarifying the specific components and delivery methods of effective social support, rigorously evaluating BCT configurations in underexplored areas such as adolescent smoking cessation, and examining their long-term impact on behavior change.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/cb91f512cd990872b9b66e9a7aeccfad" />
		
		<published>2026-05-06T14:45:15-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e77876</id>
		<title>Current Landscape of Mental Health Conversational Agents From a Trauma-Informed Care Lens: Scoping Review</title>
		<updated>2026-04-30T18:15:12-04:00</updated>

					<author>
				<name>Faye Kollig</name>
			</author>
					<author>
				<name>Kira Voelker</name>
			</author>
					<author>
				<name>Emily Ryan</name>
			</author>
					<author>
				<name>Rachel Pfafman</name>
			</author>
					<author>
				<name>Fayika Farhat Nova</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e77876" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e77876">Background: Conversational agents (CAs) are increasingly used in mental health care to enhance access and engagement. However, their safe, ethical, and user-sensitive design remains a challenge. Despite growing attention to trauma-informed approaches in human-computer interaction, there is limited work on how the trauma-informed care (TIC) framework could be applied in the design of mental health CAs and no comprehensive synthesis to date. Objective: Guided by the Substance Abuse and Mental Health Services Administration’s TIC framework, this scoping review explored how TIC principles (safety; trustworthiness and transparency; collaboration and mutuality; empowerment, voice, and choice; peer support; and cultural, historical, and gender issues) are currently represented in the design and evaluation of mental health conversational agents (MHCAs) and identified gaps and opportunities to promote more trauma-informed design practices. Methods: Online databases, as well as a secondary survey of citation lists from an initial search, were used to identify English-language journal articles and conference proceedings from 2000 to 2024 that empirically evaluated an independent, web- or app-based, unassisted CA used for mental health and included concepts from TIC. Results: Our analysis included 38 publications (n=28, 73.7%, published in 2020 or later) covering 28 distinct MHCAs. Most studies used experimental methods (n=23, 60.6%) or user studies (n=11, 28.9%), with samples skewed toward female participants (men: mean 34.92%, SD 18.64%), younger participants (mean age 32.52, SD 14.6 years), and predominantly nonclinical populations (n=29, 76.3%). MHCAs were largely rule-based prototypes. No studies explicitly referenced the TIC framework as a guiding lens for MHCA design or evaluation. 
A total of 26 studies referenced terminology from TIC core principles but rarely defined them, while all 38 included language that could be linked to one or more principles. Overall, TIC-related concepts appeared most often within intervention design descriptions, qualitative assessments, or as items embedded in questionnaires evaluating broader constructs. Trustworthiness and transparency, safety, empowerment, voice and choice, and collaboration and mutuality were comparatively well addressed, while peer support and cultural, historical, and gender issues were largely absent. Design recommendations, where present, were relatively broad and emphasized secure, customizable, reliable, human-like, and context-sensitive MHCAs that offered multimodal interaction, goal setting and tracking, and transparency. Conclusions: Studies did not self-identify as using Substance Abuse and Mental Health Services Administration’s framework for TIC, making it more difficult to identify its elements. The fragmented terms, disciplines, and metrics used make it difficult to draw more systematic conclusions about the current research landscape related to TIC, but our analysis indicates TIC to be a descriptive and potentially unifying framework and provides a starting point for the explicit trauma-informed MHCA research and design.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/c41181a042ee9ad5f9b3c8394fcddce6" />
		
		<published>2026-04-30T18:15:12-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e90155</id>
		<title>Use of a Large Language Model to Reveal Narrative Architectures of Veteran Transition Stress: Development and Validation Study</title>
		<updated>2026-04-30T13:15:11-04:00</updated>

					<author>
				<name>Isaac R Galatzer-Levy</name>
			</author>
					<author>
				<name>Xi Pan</name>
			</author>
					<author>
				<name>Roland P Hart</name>
			</author>
					<author>
				<name>George A Bonanno</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e90155" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e90155">Background: The stress caused by multiple aspects of veterans’ transitions from military to civilian life, termed transition stress, represents a unique source of psychological impact that is underresearched due to its qualitative nature. The assessment of this complex psychological phenomenon has thus relied on laborious interviews designed to extract quantitative information from qualitative narratives of the transition to civilian life. We sought to determine if large language models (LLMs) could be used as valid measurement tools to extract relevant information from open-ended narratives. Objective: This study sought to develop and validate a generative artificial intelligence (AI) approach to automate the quantification and subsequent thematic analysis of veteran transition stress. Methods: Utilizing transcripts from interviews of a sample of US military veterans, we developed an LLM to rate transition stress severity and examined the model’s reliability in relation to human coders and validity in relation to a set of related questionnaire measures. Next, we used the LLM scores to quantitatively define high and low transition stress groups, enabling a targeted, automated analysis of themes related to narrative identity and life transition themes that might differentiate the two groups. Results: LLM ratings of transition stress correlated highly with the human expert ratings and showed significant, theoretically congruent correlations with measures of clinical symptoms, reintegration difficulties, and veterans’ self-ratings of transition difficulty. Critically, the AI-derived thematic analyses of the narratives from high and low transition stress veterans revealed clearly distinct and informative patterns. 
Conclusions: These findings suggest that generative AI offers a robust, scalable, and reliable method for multidimensional analysis of complex, narrative-based psychological constructs.</summary>
		
        
        
		<published>2026-04-30T13:15:11-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e77038</id>
		<title>Designing and Evaluating Digital Mental Health Interventions: Scoping Review</title>
		<updated>2026-04-29T16:00:22-04:00</updated>

					<author>
				<name>Sarah Zainab Mbawa</name>
			</author>
					<author>
				<name>Roelof Anne Jelle de Vries</name>
			</author>
					<author>
				<name>Luciano Cavalcante Siebert</name>
			</author>
					<author>
				<name>Koen van Turnhout</name>
			</author>
					<author>
				<name>Willem-Paul Brinkman</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e77038" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e77038">Background: The ongoing adoption and use of digital interventions offer promising opportunities to meet the growing demand for mental health support. The effectiveness, implementation, and usage of these interventions depend on how well they are designed and evaluated. However, given the emerging nature of design research in this area, there is still no clear consensus on the specific principles and guidelines for developing digital mental health interventions (DMHIs). There seems to be a lack of clarity regarding the best practices for designing and evaluating these tools. Objective: We aimed to investigate and report on the design principles and evaluation approaches used in digital interventions specific to mental health care. Additionally, we sought to outline how these principles and approaches are applied in research. Methods: This scoping review was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for scoping reviews. The literature search was performed in 2 electronic databases, SCOPUS and Web of Science, across 3 iterations from January 2024 to January 2025. A total of 2 independent reviewers screened and selected papers based on predefined inclusion and exclusion criteria, followed by data extraction from the selected studies. The data were then synthesized by categorizing the papers according to the primary research aim of each study. The inclusion criteria covered studies involving populations with mental health challenges or users of DMHIs, any digital tools for mental health care, and principles or strategies related to the design, evaluation, or implementation of DMHIs. Results: Our search identified 401 papers, of which 17 met the inclusion criteria for this review. Among these, 11 focused on evaluation studies, while 6 covered both design and evaluation studies (mixed). 
An iterative user-centered development process, expert inclusion, usability testing, specification of design elements, and user tracking and feedback were identified as common design principles used in studies focused on DMHIs. Evaluation approaches were shaped by the evaluation goal, which influenced the chosen methodologies. We also summarize the recommendations for implementation highlighted in some studies. Based on our findings, we propose 8 guidelines emphasizing stakeholder involvement in the development process and the need for clear justifications for design decisions, among other considerations. Conclusions: Design principles used in DMHI development include user-centered development, expert inclusion, and usability testing, while evaluation approaches often rely on randomized controlled trials to assess efficacy. Qualitative and mixed-method approaches are commonly adopted by studies to capture user experience and bridge both process and outcome measures. We recommend that future research explicitly report its design justification and adopt a multiperspective approach in the research and design of DMHIs.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/f062b157598b65481ecf0069cd958411" />
		
		<published>2026-04-29T16:00:22-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e78351</id>
		<title>Errors in AI-Transformed Patient-Centered Mental Health Documentation Written by Psychiatrists: Qualitative Pre-Post Study</title>
		<updated>2026-04-29T15:30:14-04:00</updated>

					<author>
				<name>Pelin Ozkara Menekseoglu</name>
			</author>
					<author>
				<name>Mareike Weibezahl</name>
			</author>
					<author>
				<name>Mats Ellingsen</name>
			</author>
					<author>
				<name>Jarl Sterkenburg</name>
			</author>
					<author>
				<name>Anna Kharko</name>
			</author>
					<author>
				<name>Stefan Hochwarter</name>
			</author>
					<author>
				<name>Julian Schwarz</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e78351" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e78351">Background: Patients’ digital access to their personal health data is becoming increasingly common worldwide. However, medical documentation often contains technical language and sensitive information, which can lead to potential misunderstandings and distress among patients. These issues may be particularly impactful in mental health contexts. Large language models (LLMs) offer a promising approach by transforming clinician-generated health notes into language that is more patient-centered, nonmedicalized, and empathetic. However, risks related to accuracy and clinical safety have not been adequately investigated in psychiatry. Objective: This study aimed to qualitatively analyze the errors introduced by LLMs when transforming notes written by psychiatrists into patient-facing formats. It also highlights the implications for clinical communication and patient safety. Methods: Clinical notes (n=63) written by 19 psychiatrists in an outpatient treatment setting were collected, anonymized, and translated from German to English by humans. OpenAI GPT-3.5 Turbo was used to develop a preprompt that transformed these notes into a patient-centered, lay-readable form through an iterative process. Three psychiatrists qualitatively analyzed the LLM-revised documentation using Kuckartz content analysis. They compared the preconversion and postconversion notes to systematically identify and categorize LLM-induced errors. 
Results: Five categories of clinically relevant errors were identified: (1) clinical misinterpretations, particularly in critical assessments such as suicidality, where nuanced terminology was oversimplified or inaccurately represented; (2) attribution errors, where behaviors or roles within family dynamics or interactions were incorrectly attributed to different individuals; (3) content distortion errors, which were characterized by speculative additions, emotional exaggerations, and inappropriate contextual assumptions; (4) abbreviation and terminology errors, which resulted from inaccurate expansions of medical abbreviations and terms; and (5) structural and syntax errors, which resulted in ambiguity, particularly when the original notes were brief or bulleted. Despite significant improvements in the readability and overall linguistic fluency of the converted notes, these errors occurred. Conclusions: LLMs have the potential to transform psychiatric notes into patient-friendly formats. However, critical errors remain prevalent and can impair clinical judgment, understanding of patient circumstances, clarity of medication regimens, and interpretation of clinical observations. To safely integrate artificial intelligence–generated documentation into psychiatric care, clinician oversight and targeted model refinement are essential. Future research should explore strategies to mitigate these errors, assess their comprehensive clinical impact, and incorporate patient and provider perspectives to ensure robust implementation.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/b1b33de75056d8cdcb051d86c740c2c8" />
		
		<published>2026-04-29T15:30:14-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e81213</id>
		<title>Prevalence of Cognitive Distortion Markers in a Suicide Prevention Chat Service: Mixed Methods Study</title>
		<updated>2026-04-28T17:00:23-04:00</updated>

					<author>
				<name>Marijn ten Thij</name>
			</author>
					<author>
				<name>Saskia Mérelle</name>
			</author>
					<author>
				<name>Renske Gilissen</name>
			</author>
					<author>
				<name>Johan Bollen</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e81213" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e81213">Background: Suicide helplines increasingly employ chat services to aid those in urgent need, but the wording and structure of text-driven exchanges may affect their effectiveness. Objective: Given the association of cognitive distortions with depression and anxiety, this study investigated their prevalence in the language of individuals seeking help from the Dutch 113 suicide helpline. Methods: We observed the prevalence of cognitive distortions for both help seekers and counselors in a large volume of chat sessions (N=71,148) of the Dutch 113 suicide chat helpline using natural language processing. The results were compared to 2 large collections of online text data from Dutch social media and web content. Results: We found that nearly all types of cognitive distortions are more prevalent in the language of help seekers compared to the control group of helpline counselors. Distortions of the personalizing, emotional reasoning, and mental filtering types were, respectively, 20.22, 7.87, and 4.53 times more prevalent among help seekers, revealing a distinct pattern of thought and language among individuals affected by suicidality. Conclusions: Our results raise the prospect of improving the effectiveness of online therapeutic interventions that target cognitive distortions through lexical analysis that detects the cognitive and lexical markers of suicidality.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/00f57a522b7e20d230db8854f4090023" />
		
		<published>2026-04-28T17:00:23-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e88186</id>
		<title>Human Shadows in Machine Minds: Quantitative Study Interpreting AI Responses to the Rorschach Test</title>
		<updated>2026-04-28T08:30:46-04:00</updated>

					<author>
				<name>Katalin Csigó</name>
			</author>
					<author>
				<name>György Cserey</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e88186" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e88186">&lt;strong&gt;Background:&lt;/strong&gt; Multimodal large language models (LLMs) can produce humanlike descriptions of images and emotionally colored dialogue, which motivates research on how psychological assessment methods might be adapted to evaluate model behavior under ambiguity. Projective tests such as the Rorschach inkblot test have rarely been applied to LLMs. &lt;strong&gt;Objective:&lt;/strong&gt; This study assessed the feasibility of administering a full Rorschach protocol to multimodal LLMs and descriptively compared response features by using established Rorschach coding categories. &lt;strong&gt;Methods:&lt;/strong&gt; We presented all 10 standard Rorschach cards to 3 multimodal LLMs (GPT-4o, Grok 3, and Gemini 2.0 Flash Thinking). We used the standard prompt (“What might it be?”) and a prespecified fallback prompt for models that did not provide codable responses. We conducted an inquiry phase and coded responses using the Exner Comprehensive System, summarizing response count (R), location (W and D), determinants (eg, F, M, and C), and human-related content. As an exploratory step, we also prompted an additional LLM (Anthropic 3.7) to summarize and count response features and compared these outputs with manual tallies. For GPT-4o, we additionally tested image generation of its interpretations. &lt;strong&gt;Results:&lt;/strong&gt; GPT-4o completed the administration using the standard prompt; Grok 3 and Gemini required the fallback prompt. The total number of responses was 15 for GPT-4o, 10 for Grok 3, and 20 for Gemini. GPT-4o and Grok 3 produced mainly whole-blot responses (13/15, 86.7% and 9/10, 90%, respectively), whereas Gemini produced mainly common-detail responses (16/20, 80%). Human movement determinants were more frequent in GPT-4o (7/15, 46.7%) and Grok 3 (3/10, 30%) than in Gemini (1/20, 5%). 
Human-themed contents occurred 46.7% (7/15), 50% (5/10), and 20% (4/20) of the time, respectively. Anthropic 3.7 reproduced some counts but showed errors in response and determinant tallies for 2 of the 3 models. &lt;strong&gt;Conclusions:&lt;/strong&gt; Multimodal LLMs can generate Rorschach-like narratives that map onto standard coding categories, but outputs are sensitive to prompting and platform constraints and should not be interpreted as evidence of a model’s “inner world.” LLM-assisted coding showed limitations. Examined through the Rorschach test, the models’ emergent response patterns deviated from typical human normative patterns. Future work should use controlled sampling, repeated administrations, and stimulus sets less likely to have been seen during training. </summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/1743a8f8043d02c77b09b8fe9a4065fc" />
		
		<published>2026-04-28T08:30:46-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e91564</id>
		<title>The Effectiveness and Mechanisms of Action of App-Based Interventions for Improving Mental Health and Workplace Well-Being: Randomized Controlled Trial</title>
		<updated>2026-04-27T15:00:20-04:00</updated>

					<author>
				<name>Alexander MacLellan</name>
			</author>
					<author>
				<name>Graeme Fairchild</name>
			</author>
					<author>
				<name>Katherine S Button</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e91564" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e91564">&lt;strong&gt;Background:&lt;/strong&gt; Depression is the most common mental health disorder worldwide and frequently leads to workplace absence. As face-to-face treatment can be difficult to access, app-based interventions are a popular solution, although their effectiveness in working populations and their mechanisms of action are unclear. Deficits in executive function may contribute to the onset and maintenance of depression, and executive function training is proposed to improve symptoms by enhancing executive function. Responders to cognitive behavioral therapy (CBT) show improvements in executive function, suggesting that this may be one mechanism of action. &lt;strong&gt;Objective:&lt;/strong&gt; This study investigated the effectiveness of app-based interventions (based on executive function training or CBT) for reducing depressive and anxiety symptoms and improving workplace well-being, and assessed whether changes in executive function mediated improvements. &lt;strong&gt;Methods:&lt;/strong&gt; A total of 228 participants (147 female participants) with mild-to-moderate symptoms of depression and anxiety were recruited online and randomly assigned to a waitlist control group, an executive function training group (NeuroNation app, Synaptikon GmbH), or a self-guided CBT group (Moodfit app, Roble Ridge LLC) for a 4-week intervention period. Participants assigned to the active intervention groups were asked to use their apps a minimum of 21 times during the intervention. Participants completed measures of depressive symptoms, anxiety symptoms, and workplace well-being, and a working memory task at baseline, postintervention, and follow-up (12 weeks). &lt;strong&gt;Results:&lt;/strong&gt; Executive function training reduced anxiety (β=−2.79; P=.004) and depressive (β=−2.77; P=.02) symptoms at follow-up but not at postintervention, and it did not affect workplace well-being. 
There were no reductions in depressive or anxiety symptoms in the self-guided CBT group, though workplace well-being was improved at postintervention (β=3.72; P=.02) and follow-up (β=4.46; P=.02). Improvements in executive function did not mediate intervention-related changes in symptoms or workplace well-being. Self-reported adherence rates were high (executive function training: 48/54, 89%; self-guided CBT: 52/54, 96%), although attrition was high at follow-up (58% missing). &lt;strong&gt;Conclusions:&lt;/strong&gt; These results suggest that app-based executive function training may be effective at managing symptoms of anxiety and depression in a working population, while self-guided CBT apps may improve workplace well-being. However, improving executive function did not appear to be a mechanism of action of either intervention. &lt;strong&gt;Trial Registration:&lt;/strong&gt; ISRCTN12730006; https://www.isrctn.com/ISRCTN12730006</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/97ff2f6a5aa1b4552bcc78ed572308d4" />
		
		<published>2026-04-27T15:00:20-04:00</published>
	</entry>
</feed>