<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
	<id>https://mental.jmir.org/issue/feed</id>
	<title>JMIR Mental Health</title>
	<updated>2025-01-03T10:15:04-05:00</updated>

	<author>
		<name>JMIR Publications</name>
		<email>editor@jmir.org</email>
	</author>
		<link rel="alternate" href="https://mental.jmir.org" />
	<link rel="self" type="application/atom+xml" href="https://mental.jmir.org/feed/atom" />

	<generator uri="http://pkp.sfu.ca/ojs/" version="2.2.0.0">Open Journal Systems</generator>

	<rights>Unless stated otherwise, all articles are open-access distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work (&quot;first published in the Journal of Medical Internet Research...&quot;) is properly cited with original URL and bibliographic citation information. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.</rights>
	<subtitle>Internet interventions, technologies, and digital innovations for mental health and behavior change. JMIR Mental Health is the official journal of the Society of Digital Psychiatry.</subtitle>



	<entry>
		<id>https://mental.jmir.org/2026/1/e88700</id>
		<title>Determinants of Digital Health Literacy Among Patients With Serious Mental Illness: Cross-Sectional Survey</title>
		<updated>2026-04-15T15:30:09-04:00</updated>

					<author>
				<name>Yi-Ju Chou</name>
			</author>
					<author>
				<name>Kai-Jo Chiang</name>
			</author>
					<author>
				<name>Hsin Huang</name>
			</author>
					<author>
				<name>Hsin-An Chang</name>
			</author>
					<author>
				<name>Yin-Ling Hung</name>
			</author>
					<author>
				<name>Wen-Chii Tzeng</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e88700" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e88700">Background: Individuals with serious mental illness increasingly use digital devices and the internet to access health information and services but often face challenges when navigating digital tools, which may limit the benefits they receive from online health resources and digital health care services. Objective: The objective of our study was to assess digital health literacy among individuals with serious mental illness and identify factors influencing this literacy. Methods: Participants were recruited, using convenience sampling, from 2 psychiatric clinics, 1 day-care center, and 4 halfway houses in Taipei, Taiwan, between May 2024 and February 2025. Self-reported data were collected using a survey that incorporated the eHealth Literacy Scale, the Attitudes Toward Computer/Internet Questionnaire, and the Mobile Device Proficiency Questionnaire. Generalized linear modeling was applied to identify factors associated with digital health literacy. Results: Among 255 participants included in the analysis, 83.5% (n=213) reported owning at least 1 digital device. Digital health literacy was significantly lower among individuals who reported greater perceived difficulty in using digital tools (β=−1.533, 95% CI −2.350 to −0.717; P&lt;.001) and higher distrust in online information (β=−0.986, 95% CI −1.916 to −0.056; P=.04). By contrast, greater mobile device proficiency (β=0.144, 95% CI 0.008‐0.281; P=.04) and self-efficacy (β=1.777, 95% CI 0.376‐3.177; P=.01) were positively associated with digital health literacy. Conclusions: Despite widespread device ownership, digital health literacy was varied and generally suboptimal among patients with serious mental illness. Perceived difficulty and distrust emerged as major barriers; proficiency and self-efficacy facilitated higher literacy. 
These findings highlight the need for mental health professionals to integrate tailored digital skills training, confidence-building strategies, and ongoing support into digital health interventions for individuals with serious mental illnesses.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/830fbe06afd863933c5d270df2650665" />
		
		<published>2026-04-15T15:30:09-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e86470</id>
		<title>Peer Mentor Training and Supervision for a Digital Adolescent Depression Treatment in South Africa and Uganda: Mixed Methods Evaluation</title>
		<updated>2026-04-09T14:00:19-04:00</updated>

					<author>
				<name>Zamakhanya Makhanya</name>
			</author>
					<author>
				<name>Bianca Moffett</name>
			</author>
					<author>
				<name>Julia R Pozuelo</name>
			</author>
					<author>
				<name>Meghan Davis</name>
			</author>
					<author>
				<name>Joy Louise Gumikiriza-Onoria</name>
			</author>
					<author>
				<name>Shayni Geffen</name>
			</author>
					<author>
				<name>Tlangelani Baloyi</name>
			</author>
					<author>
				<name>Tholene Sodi</name>
			</author>
					<author>
				<name>Eugene Kinyanda</name>
			</author>
					<author>
				<name>Michelle G Craske</name>
			</author>
					<author>
				<name>Christine Tusiime</name>
			</author>
					<author>
				<name>Crick Lund</name>
			</author>
					<author>
				<name>Alastair Van Heerden</name>
			</author>
					<author>
				<name>Kathleen Kahn</name>
			</author>
					<author>
				<name>Alan Stein</name>
			</author>
					<author>
				<name>Heather O&#039;Mahen</name>
			</author>
					<author>
				<name>DoBAt and Ebikolwa Consortium</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e86470" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e86470">Background: Blended digital mental health interventions combining technology with human support are more effective than stand-alone treatments. However, limited research has examined how to train and supervise personnel delivering human support components. The Kuamsha app, a gamified digital intervention for adolescent depression based on behavioral activation, was designed to be paired with low-intensity telephone-based peer support. A structured training and supervision program for peer supporters was codeveloped through workshops with mental health professionals and youth with lived experience of mental health challenges in South Africa and Uganda. To the best of our knowledge, this is the first study to evaluate a structured peer mentor model within a digital mental health intervention in low- and middle-income countries. Objective: This study assessed the feasibility, acceptability, and fidelity of a training and supervision program for peer supporters delivering a digital mental health intervention in South Africa and Uganda. Methods: We conducted a mixed methods evaluation of the peer mentor program. Quantitative metrics assessed the feasibility of recruitment, retention, and attendance among peer mentors (n=13, South Africa; n=4, Uganda), as well as training acceptability. Fidelity, adherence, and competence were scored at the session level and converted to percentages of the maximum possible score. Linear mixed-effects regression models with a random intercept for provider and site estimated adjusted marginal means (95% CI). In-depth interviews and focus group discussions explored program acceptability and implementation factors. 
Results: The peer mentor training and supervision program was feasible and acceptable in both settings, with high recruitment (South Africa: 13/19, 68%; Uganda: 4/4, 100%), retention (South Africa: 9/13, 69%; Uganda: 4/4, 100%), and training attendance rates (89%‐92% in South Africa and 100% in Uganda), alongside qualitative reports of high satisfaction. All peer mentors met a minimum posttraining competency threshold (≥50%), with median competency scores of 70.7% (IQR 45.8%‐78.2%) in South Africa and 75.4% (IQR 73.8%‐77.3%) in Uganda. Independent ratings of recorded calls indicated high overall fidelity in South Africa (84.7%, 95% CI 80.3%‐89.0%) and Uganda (87.7%, 95% CI 83.4%‐92.1%). Adherence was higher in Uganda than South Africa (adjusted mean difference [AMD] 13.30 percentage points, 95% CI 8.99‐17.61; P&lt;.001), as was competence (AMD 4.88 percentage points, 95% CI 1.23‐8.53; P=.009). The AMD in overall fidelity (3.06 percentage points, 95% CI −0.98 to 7.10) was not statistically significant (P=.14). The qualitative findings emphasized the value of ongoing supervision and capacity development, interactive training approaches, and blended delivery models. Conclusions: Locally adapted training and supervision models can strengthen peer mentor capabilities to support digital interventions. Adequate supervisory capacity and incentive structures are critical to sustain engagement, retention, and fidelity. In settings with frequent network disruptions, periodic in-person contact between peer mentors and supervisors may enhance fidelity. Future research should examine how peer mentor fidelity influences user engagement and mental health outcomes. Trial Registration: Pan African Clinical Trials Registry PACTR202206574814636; https://pactr.samrc.ac.za/TrialDisplay.aspx?TrialID=23792 International Registered Report Identifier (IRRID): RR2-10.1136/bmjopen-2022-065977</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/a0f6e2c9894ee572c373f1345d81196b" />
		
		<published>2026-04-09T14:00:19-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e90581</id>
		<title>Predicting Momentary Suicidal Ideation From Smartphone Screenshots Using Vision-Language Models: Prospective Machine Learning Study</title>
		<updated>2026-04-08T14:30:11-04:00</updated>

					<author>
				<name>Ross Jacobucci</name>
			</author>
					<author>
				<name>Wenpei Shao</name>
			</author>
					<author>
				<name>Veronika Kobrinsky</name>
			</author>
					<author>
				<name>Brooke Ammerman</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e90581" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e90581">Background: Passive smartphone sensing shows promise for suicide prevention, but behavioral metadata (GPS, screen time, and accelerometry) often lacks the contextual information needed to detect acute psychological distress. Analyzing what people actually see, read, and type on their phones—rather than just usage patterns—may provide more proximal signals of risk. Objective: This study aimed to test whether vision-language models (VLMs) applied to passively captured smartphone screenshots can predict momentary suicidal ideation (SI). Methods: Seventy-nine adults with past-month suicidal thoughts or behaviors completed ecological momentary assessments (EMAs) over 28 days while screenshots were captured every 5 seconds during active phone use. We fine-tuned open-source VLMs (Qwen2.5-VL [Alibaba Cloud] and LFM2-VL [Liquid AI]) and text-only models (Qwen3 [Alibaba Cloud]) to predict SI from screenshots captured in the 2 hours preceding each EMA. We evaluated performance with temporal and subject holdouts. Results: The analytic sample comprised 2.5 million screenshots from 70 participants. Temporal holdout models achieved strong discrimination at the EMA level (AUC=0.83; AUPRC=0.77), with image-based models outperforming text-only models (AUC=0.83 vs 0.79; 95% CI 0.003-0.07). Subject holdout generalization was near chance (AUC≈0.50), though a simple lexical screening method retained modest discrimination (AUC=0.62). Smaller models performed comparably to larger models, supporting feasible on-device deployment. Conclusions: Screen content predicts short-term SI with clinically meaningful accuracy when models are personalized but does not generalize across individuals. These findings support a 2-stage clinical architecture: coarse lexical screening for new patients, with personalized VLM-based monitoring after a calibration period. On-device inference may enable privacy-preserving deployment.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/ce9a296732ecb1406f4fc62d4f58986f" />
		
		<published>2026-04-08T14:30:11-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e85635</id>
		<title>Strength of Evidence to Support Decision-Making on the Use of Digital Mental Health Technologies in NICE Evaluations: Cross-Sectional Analysis of Studies</title>
		<updated>2026-04-07T14:45:16-04:00</updated>

					<author>
				<name>Gareth Hopkin</name>
			</author>
					<author>
				<name>Holly Coole</name>
			</author>
					<author>
				<name>Francesca Edelmann</name>
			</author>
					<author>
				<name>John Powell</name>
			</author>
					<author>
				<name>Mark Salmon</name>
			</author>
					<author>
				<name>Sophie Cooper</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e85635" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e85635">Background: Digital mental health technologies (DMHTs) are playing an increasing role in mental health services. The quality of evidence for DMHTs is variable, and there are concerns that evidence is not sufficient to support decision-making. Objective: This study used a cross-sectional analysis of evidence supporting DMHTs included in National Institute for Health and Care Excellence (NICE) evaluations to examine the strength of evidence available for decision-making. Methods: We identified all NICE evaluations relating to DMHTs by reviewing details of published NICE evaluations on the NICE website. From each of these evaluations, we identified included DMHTs and reviewed committee documentation to identify studies that provided supporting evidence for each of these technologies. We extracted information on a series of items relating to study quality and summarized the characteristics of evidence both at the level of individual studies and across the package of evidence from multiple studies supporting DMHTs. We also identified key gaps in available evidence. Results: We included 9 NICE evaluations relating to anxiety, depression, psychosis, insomnia, attention deficit hyperactivity disorder (ADHD), and tic disorders. These evaluations included 30 DMHTs and referenced 78 supporting studies. We identified common evidence gaps relating to effectiveness compared with relevant comparators; use of appropriate outcomes, including health-related quality of life; cost of delivery and impact on resource use; and reporting of adverse events. Conclusions: Our study highlights that some DMHTs have been supported by high-quality studies and that evidence to support DMHTs is likely to be developed across a series of studies. However, there are often key evidence gaps that need to be addressed to provide a stronger case for adoption. 
Developers should ensure that they consider these gaps while planning evidence generation, and where possible, address them earlier in the product lifecycle.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/825f13db8cbad54213afa5c433d7adde" />
		
		<published>2026-04-07T14:45:16-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e91454</id>
		<title>It Is the Journey, Not the Destination: Moving From End Points to Trajectories When Assessing Chatbot Mental Health Safety</title>
		<updated>2026-04-06T16:30:04-04:00</updated>

					<author>
				<name>Hamilton Morrin</name>
			</author>
					<author>
				<name>Joshua Au Yeung</name>
			</author>
					<author>
				<name>Zarinah Agnew</name>
			</author>
					<author>
				<name>Søren Dinesen Østergaard</name>
			</author>
					<author>
				<name>Thomas A Pollak</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e91454" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e91454">Large language models are rapidly becoming embedded in everyday life through artificial intelligence (AI) chatbots that people use for practical assistance and companionship, as well as for support with mental health and emotional wellbeing. Alongside clear benefits, clinicians and public reports increasingly describe a minority of users whose interactions seem to drift over days or weeks toward strongly questionable convictions, delusions, or suicidal crises. Importantly, clinically meaningful deterioration can occur even without overtly unsafe text outputs, via more insidious processes such as compulsive use and sleep disruption, as well as withdrawal from human contact and progressive narrowing of attention around the chatbot relationship. In this Viewpoint, we argue that risk often arises not at a single tipping point but through trajectory effects that accumulate across extended dialogue, and that prevailing safety evaluation approaches are misaligned with this reality because they primarily score risk at discrete conversational endpoints often reached through scripted dialogues lasting just a single turn or several turns. Mental health benchmarks and safety suites (including clinician-informed efforts) have advanced the field by testing refusal behaviour, toxicity, and adversarial prompting, but they often treat the last message as the unit of analysis and therefore miss when risk-relevant relational cues, signs of validation, contradiction handling, and shifts in certainty first emerge and how they compound. 
We propose that mental health safety assessment should shift from endpoints to trajectories by (1) treating the whole dialogue, not just the end result, as the focus of evaluation; (2) reporting turn-by-turn dynamics such as delusion confirmation and harm enablement, as well as timing and persistence of safety interventions; and (3) calibrating short multi-turn tests against longer, clinically realistic interaction sequences that can reveal context-length effects and drift. We further argue that transcript-only evaluation is insufficient in mental health contexts. Similar language can reflect very different internal states, and the relationship between expressed psychopathology and real-world harm is non-linear. Safety research should therefore incorporate proximal human outcomes after interactions (e.g., shifts in certainty, openness to counterevidence, arousal, urge to continue, and subsequent sleep or behaviour) and build prospective clinical surveillance infrastructure that supports consented transcript donation and linkage to health outcomes. Together, these steps would enable benchmarks that are clinically relevant and better aligned with the kinds of harms now being observed in real-world chatbot use.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/67a4a0892a39533b2f5d0b2acc0e2689" />
		
		<published>2026-04-06T16:30:04-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e78288</id>
		<title>AI Chatbots for Mental Health Self-Management: Lived Experience–Centered Qualitative Study</title>
		<updated>2026-04-02T15:00:18-04:00</updated>

					<author>
				<name>Dong Whi Yoo</name>
			</author>
					<author>
				<name>Jiayue Melissa Shi</name>
			</author>
					<author>
				<name>Violeta J Rodriguez</name>
			</author>
					<author>
				<name>Koustuv Saha</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e78288" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e78288">Background: Large language models (LLMs) now enable chatbots to engage in sensitive mental health conversations, including depression self-management. Yet their rapid deployment often overlooks how well these tools align with the priorities of people with lived experiences, which can introduce harms such as inaccurate information, lack of empathy, or inadequate crisis support. Objective: This study explores how people with lived experience of depression experience an LLM-based mental health chatbot in self-management contexts, and what perceived benefits, limitations, and concerns inform harm-mitigating design implications. Methods: We developed a technology probe (a GPT-4o–based chatbot named Zenny) designed to simulate depression self-management scenarios grounded in prior research. We conducted interviews with 17 individuals with lived experiences of depression, who interacted with Zenny during the session. We applied qualitative content analysis to interview transcripts, notes, and chat logs using sensitizing concepts related to values and harms. Results: We identified 3 themes shaping participants’ evaluations: (1) informational accuracy and applicability, including concerns about incorrect or misleading information, vagueness, and fit with personal constraints; (2) emotional support vs need for human connection, including validation and a judgment-free space alongside perceived limits of machine empathy; and (3) a personalization-privacy dilemma, where participants wanted more tailored guidance while withholding sensitive information and using privacy-preserving tactics. Conclusions: People with lived experience of depression evaluated LLM-based mental health chatbots through intertwined priorities of actionable information, emotional validation with clear limits, and personalization that does not require unsafe data disclosure. 
These findings suggest concrete design strategies to mitigate harms and support LLM-based tools as complements to, rather than replacements for, human support and recovery.</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/78ae955c189d2dea8e926c80ddf7b242" />
		
		<published>2026-04-02T15:00:18-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e88196</id>
		<title>Help-Seeking in the Age of AI: Cross-Sectional Survey of the Use and Perceptions of AI-Based Mental Health Support Among US Adults</title>
		<updated>2026-03-30T11:15:03-04:00</updated>

					<author>
				<name>Michiko Ueda</name>
			</author>
					<author>
				<name>Michael L Birnbaum</name>
			</author>
					<author>
				<name>Yanhong Liu</name>
			</author>
					<author>
				<name>Qingyi Yu</name>
			</author>
					<author>
				<name>Xihe Tian</name>
			</author>
					<author>
				<name>Anna Mirer</name>
			</author>
					<author>
				<name>Seethalakshmi Ramanathan</name>
			</author>
					<author>
				<name>Mark Sinyor</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e88196" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e88196">&lt;strong&gt;Background:&lt;/strong&gt; Anecdotal evidence suggests that an increasing number of people are turning to generative artificial intelligence (GenAI) tools or artificial intelligence (AI)-assisted chatbots to discuss and manage mental health concerns. However, systematic data on the use and perception of such tools remain scarce. &lt;strong&gt;Objective:&lt;/strong&gt; This study aimed to examine how young and middle-aged adults in the United States use GenAI and AI-assisted mental health chatbots as mental health resources and assess their preferences for these tools relative to human mental health professionals. &lt;strong&gt;Methods:&lt;/strong&gt; An anonymous online survey was conducted in October 2025 among a commercial online panel sample of US adults aged 18-49 years (N=1805). Respondents were asked about the sources they typically turn to when facing mental health concerns, their frequency of using GenAI tools or chatbots for mental health support, and whether the frequency of seeing human mental health professionals had changed since they started using AI tools for mental health support. Attitudes toward AI-based mental health support were assessed and compared with attitudes toward human mental health professionals. &lt;strong&gt;Results:&lt;/strong&gt; Of the 1805 respondents, 638 (35.2%) reported using AI tools at least once a week for mental health support, and 99 (5.5%) were classified as “heavy users” who reported regularly spending hours discussing their mental health concerns through AI. However, nearly 60% of respondents reported that they would turn first to family (1078/1805) and friends (1046/1805) when facing mental health concerns. 
Respondents who screened positive for moderate to severe depressive or anxiety symptoms were more likely to use AI-based mental health support compared to those without these symptoms (adjusted odds ratio 1.71, 95% CI 1.36-2.15) and those with suicidal ideation were more likely to be heavy AI users (adjusted odds ratio 2.42, 95% CI 1.49-3.95). Among those who had ever seen a human mental health professional (n=511), 28.4% (145/511) reported a perceived decline in visit frequency to human mental health professionals since they started using AI tools for the same purpose. Participants expressed more favorable attitudes toward human mental health professionals than toward AI-based tools. However, among heavy AI users, perceptions of AI-based mental health support and human counseling were nearly equivalent in positivity. &lt;strong&gt;Conclusions:&lt;/strong&gt; AI appears to be an important component of the mental health help-seeking landscape among respondents in this sample. Although most respondents still preferred human professionals, a subset reported relying on AI tools for comparable support. Ongoing monitoring and ethical guidelines are needed to ensure that AI technologies expand access to care while being safely and effectively integrated into the broader continuum of mental health services. </summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/0b854234b66bea07de2c4e9402151588" />
		
		<published>2026-03-30T11:15:03-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e93040</id>
		<title>Mass Media Narratives of Psychiatric Adverse Events Associated With Generative AI Chatbots: Rapid Scoping Review</title>
		<updated>2026-03-30T09:45:23-04:00</updated>

					<author>
				<name>Van-Han-Alex Chung</name>
			</author>
					<author>
				<name>Pénélope Bernier</name>
			</author>
					<author>
				<name>Alexandre Hudon</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e93040" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e93040">&lt;strong&gt;Background:&lt;/strong&gt; Generative artificial intelligence (AI) chatbots have rapidly entered public use, including in contexts involving emotional support and mental health–related interactions. Although these systems are increasingly accessible, concerns have emerged regarding potential adverse psychiatric outcomes reported in public discourse, including psychosis, suicidal ideation, self-harm, and suicide. However, these reports largely originate from journalistic accounts rather than systematically verified clinical data. &lt;strong&gt;Objective:&lt;/strong&gt; This rapid scoping review aimed to systematically map and characterize mass media narratives describing alleged adverse psychiatric outcomes temporally associated with generative AI chatbot interactions. &lt;strong&gt;Methods:&lt;/strong&gt; A rapid scoping review methodology was applied to publicly accessible news articles identified primarily through Google News searches. Articles published from November 2022 onward were screened for eligibility if they described a specific case in which psychiatric deterioration or crisis was temporally linked to generative AI use. Data were extracted using a structured coding template capturing article characteristics, demographic information, AI platform features, interaction intensity, outcome type and severity, type of evidence reported, and causal attribution language. Descriptive statistics and cross-tabulations were performed. &lt;strong&gt;Results:&lt;/strong&gt; A total of 71 news articles representing 36 unique cases were included. Suicide death was the most frequently reported outcome (35/61, 57.4% of cases with complete severity coding), followed by psychiatric hospitalization (12/61, 19.7%). Fatal outcomes were disproportionately represented among minors (19/21, 90.5%) compared with adults (17/35, 48.6%). 
ChatGPT was the most frequently cited platform (51/71, 71.8%), followed by Character AI (10/71, 14.1%). Causal attribution most commonly referenced AI system behavior (45/61, 73.8%), and the term “alleged” was the predominant causal descriptor (33/61, 54.1%). Evidence sources were primarily chat logs or screenshots (34/61, 55.7%), while police or medical documentation was rare (1/61, 1.6%). Regulatory calls were present in 51 of 60 (85%) articles with nonmissing data. &lt;strong&gt;Conclusions:&lt;/strong&gt; Mass media reporting of generative AI–related psychiatric harms is concentrated around severe outcomes, particularly suicide deaths among youth, and is frequently framed within regulatory and corporate accountability narratives. While causality cannot be established from media reports, consistent patterns of high-intensity interactions, user vulnerability, and limited safeguard reporting highlight the need for structured safety surveillance, transparent AI risk auditing, and clearer governance frameworks. As generative AI becomes increasingly integrated into everyday psychosocial contexts, systematic research and formal safety monitoring will be necessary to determine whether media-reported harms correspond to verifiable clinical risk patterns. </summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/b6f72b1f834edbcf6fa6c634d569ecfc" />
		
		<published>2026-03-30T09:45:23-04:00</published>
	</entry>
	<entry>
		<id>https://mental.jmir.org/2026/1/e89355</id>
		<title>Quantifying Consumer Interest in Medicare Advantage: Development and Usability Study Using Google Trends Data</title>
		<updated>2026-03-27T14:30:27-04:00</updated>

					<author>
				<name>Amy Dunn Tramontozzi</name>
			</author>
					<author>
				<name>Gregory J Downing</name>
			</author>
					<author>
				<name>Lucas Tramontozzi</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e89355" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e89355">&lt;strong&gt;Background:&lt;/strong&gt; Since 2020, Medicare Advantage (MA)–related internet searches have tripled, accompanied by increased regional marketing by private insurers. Commercial health insurance dominates the internet during enrollment periods, often outpacing public sources in accessibility. Prior studies suggest that MA advertising significantly shapes enrollment and may fuel choices over traditional Medicare in certain subpopulations. We sought to better understand how health plan marketing strategies affect consumers by using Google Trends data and MA health plan enrollment selections. We applied a novel analysis to assess statistical relationships among marketing, internet searches, and enrollment data. &lt;strong&gt;Objective:&lt;/strong&gt; The objectives of this paper are (1) to establish the validity of Google Trends data as a surrogate measure for consumer MA plan selection by demonstrating stable, repeatable seasonality and domain specificity using control terms such as “car insurance” and “life insurance” at national and Designated Market Area levels; (2) to quantify the congruency between MA search interest and Centers for Medicare &amp;amp; Medicaid Services enrollment data by testing whether search peaks coincide with or precede enrollment surges nationally within a year; and (3) to assess whether local search intensity aligns with advertising exposure by evaluating search behavior as a potential proxy for marketing impact and consumer engagement. &lt;strong&gt;Methods:&lt;/strong&gt; This study is a retrospective Google Trends analysis of consumer search patterns from January 2004 to December 2024, using relative search volume and correlating it with MA enrollment. Search data are accessible via the Google Trends website Explore tool or by applying for Google Trends application programming interface alpha access. 
MA enrollment data originated from the Centers for Medicare &amp;amp; Medicaid Services MA Dashboard. KFF (formerly the Kaiser Family Foundation) provided the medical advertising marketing data. &lt;strong&gt;Results:&lt;/strong&gt; A consistent, significant correlation between MA advertising and MA-related searches exists across US markets, particularly before and during MA enrollment windows. Findings suggest a link between search volume and subsequent enrollment in an MA plan. &lt;strong&gt;Conclusions:&lt;/strong&gt; Internet search data can provide an open, near-real-time means of tracking patterns in MA-related search activity across time and geography, offering insight into how consumer interest fluctuates around enrollment periods. Our analysis reveals repeatable patterns in consumer interest over time that may be useful for contextualizing the insurance marketing dynamics behind consumers choosing commercial MA over traditional Medicare benefits. We also identified significant seasonal trends in searches for terms associated with MA plans, peaking during the annual enrollment period (October-December). Improved accessibility to Medicare resources and directed messaging can bridge information gaps for underserved populations and can lead to more cost-effective decision-making by Medicare-eligible beneficiaries. </summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/24237849ad321469ee2c9b160871eac8" />
		
		<published>2026-03-27T14:30:27-04:00</published>
	</entry>
	<entry>
		<id> https://mental.jmir.org/2026/1/e85319 </id>
		<title>The Performance of Wearable Device–Based Artificial Intelligence in Detecting Depression: Systematic Review and Meta-Analysis</title>
		<updated>2026-03-10T16:00:20-04:00</updated>

					<author>
				<name>Jiawen Liu</name>
			</author>
					<author>
				<name>Junhui Wang</name>
			</author>
					<author>
				<name>Zhaobin Wu</name>
			</author>
					<author>
				<name>Mohamad Ibrani Shahrimin Bin Adam Assim</name>
			</author>
				<link rel="alternate" href="https://mental.jmir.org/2026/1/e85319" />
					<summary type="html" xml:base="https://mental.jmir.org/2026/1/e85319">Background: In recent years, advances in wearable sensor technology and artificial intelligence (AI) have provided new possibilities for detecting and monitoring depression. Objective: This study systematically reviewed and meta-analyzed the diagnostic and predictive performance of wearable device–based AI models for detecting depression and predicting depressive episodes and explored factors influencing outcomes. Methods: Following PRISMA-DTA (Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Test Accuracy) guidelines, the PubMed, Embase, Web of Science, and PsycINFO databases were searched from inception to May 27, 2025. Eligible studies used AI algorithms on wearable device data for depression detection or episode prediction. Sensitivity, specificity, diagnostic odds ratio, and area under the curve (AUC) were pooled using a bivariate random effects model. Risk of bias was assessed using the Prediction Model Risk of Bias Assessment Tool plus artificial intelligence (PROBAST+AI), and certainty of evidence was assessed using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) tool. Results: We included 16 studies (32 datasets) with 1189 patients and 13,593 samples. For depression detection, pooled sensitivity and specificity were 0.89 (95% CI 0.83‐0.93) and 0.93 (95% CI 0.87‐0.96), with a diagnostic odds ratio of 110.47 (95% CI 33.33‐366.17) and AUC of 0.96 (95% CI 0.94‐0.98). Random forest models showed the best performance (sensitivity=0.89, specificity=0.91, AUC=0.97). Subgroup analyses indicated that study design, AI method, reference standard, and input type significantly affected diagnostic accuracy (P&lt;.05). For depressive episode prediction (3 datasets), pooled sensitivity was 0.86 (95% CI 0.80‐0.91), and pooled specificity was 0.65 (95% CI 0.59‐0.71). 
The overall risk of bias was low to moderate, with no evidence of publication bias. Conclusions: Wearable device–based AI models achieved high accuracy for detecting depression and moderate utility in predicting episodes. However, heterogeneity, reliance on retrospective and public datasets, and lack of standardized methods limited generalizability. Trial Registration: PROSPERO CRD420251070778; https://www.crd.york.ac.uk/PROSPERO/view/CRD420251070778</summary>
		
        
                	<content type="image/png" src="https://jmir-production.s3.us-east-2.amazonaws.com/thumbs/62d146f34cdf0bbada056f05f4faaef5" />
		
		<published>2026-03-10T16:00:20-04:00</published>
	</entry>
</feed>