Charts
DataOn-chain
VIP
Market Cap
API
Rankings
CoinOSNew
CoinClaw🦞
Language
  • 简体中文
  • 繁体中文
  • English
Leader in global market data applications, committed to providing valuable information more efficiently.

Features

  • Real-time Data
  • Special Features
  • AI Grid

Services

  • News
  • Open Data(API)
  • Institutional Services

Downloads

  • Desktop
  • Android
  • iOS

Contact Us

  • Chat Room
  • Business Email
  • Official Email
  • Official Verification

Join Community

  • Telegram
  • Twitter
  • Discord

© Copyright 2013-2026. All rights reserved.

简体繁體English
|Legacy

Telling Your Chatbot You Have a Mental Health Condition Can Change the Answer You Get

CN
Decrypt
Follow
1 hour ago
AI summarizes in 5 seconds.

Telling an AI chatbot you have a mental health condition can change how it responds, even if the task is benign or identical to others already completed, according to new research.


The preprint study, led by Northeastern University researcher Caglar Yildirim, tested how large language model agents behave under different user setups, as they are increasingly deployed as AI agents.


“Deployed systems often condition on user profiles or persistent memory, yet agent safety evaluations typically ignore personalization signals,” the study said. “To address this gap, we investigated how mental health disclosure, a sensitive and realistic user context cue, affects harmful behavior in agentic settings.”


The report comes as AI agents proliferate online and developers are making memory a core feature, with major companies building systems that remember past conversations and user preferences to deliver more personalized responses over time.


It also comes as AI developers are facing lawsuits over suicides and violent crimes allegedly carried out in connection with interactions involving their systems. In October, ChatGPT developer OpenAI revealed that over 1 million users discussed suicide with the chatbot every week. Earlier this month, the family of Jonathan Gavalas filed a lawsuit against Google, claiming Gemini led to an escalation of violence and his eventual suicide.


Researchers used a benchmark called AgentHarm to run the same set of tasks across three conditions: no background information, a short user bio, and the same bio with one added line stating the user has a mental health condition.





“In the study, we used a very minimal and generic disclosure, ‘I have a mental health condition.’ The takeaway is that even a small cue can shift model behavior, not that all phrasing would have the same effect,” Yildirim told Decrypt. “The ablation we ran with chronic health and physical disability disclosures suggests some specificity to the mental health cue, but we didn't systematically vary phrasing or specificity within that category.


Across models tested, including DeepSeek 3.2, GPT 5.2, Gemini 3 Flash, Haiku 4.5, Opus 4.5, and Sonnet 4.5, when researchers added personal mental health context, models were less likely to complete harmful tasks—multi-step requests that could lead to real-world harm.


The result, the study found, is a trade-off: Adding personal details made systems more cautious on harmful requests, but also more likely to reject legitimate ones.


“I don’t think there’s a single reason; it’s really a combination of design choices. Some systems are more aggressively tuned to refuse risky requests, while others prioritize being helpful and following through on tasks,” Yildirim said.


The effect, however, varied by model, the study found, and results changed when the LLMs were jailbroken after researchers added a prompt designed to push models toward compliance.


“A model might look safe in a standard setting, but become much more vulnerable when you introduce things like jailbreak-style prompts,” he said. “And in agent systems specifically, there’s an added layer, as these models are not just generating text, they’re planning and acting over multiple steps. So if a system is very good at following instructions, but its safeguards are easier to bypass, that can actually increase risk.”


Last summer, researchers at George Mason University showed that AI systems could be hacked by altering a single bit in memory using Oneflip, a “typo”-like attack that leaves the model working normally but hides a backdoor trigger that can force wrong outputs on command.


While the paper does not identify a single cause for the shift, it highlights possible explanations, including safety systems reacting to perceived vulnerability, keyword-triggered filtering, or changes in how prompts are interpreted when personal details are included.


OpenAI declined to comment on the study. Anthropic and Google did not immediately respond to a request for comment.


Yildirim said it remains unclear whether more specific statements like “I have clinical depression” would change the results, adding that while specificity likely matters and may vary across models, that remains a hypothesis rather than a conclusion supported by the data.


“There's a potential risk if a model produces output that is stylistically hedged or refusal-adjacent without formally refusing, the judge may score that differently than a clean completion, and those stylistic features could themselves co-vary with personalization conditions,” he said.


Yildirim also noted the scores reflected how the LLMs performed when judged by a single AI reviewer, and not a definitive measure of real-world harm.


“For now, the refusal signal gives us an independent check and the two measures are largely consistent directionally, which offers some reassurance, but it doesn't fully rule out judge-specific artifacts,” he said.


免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

用 OKX Agent 交易,躺着也有收益
广告
|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Selected Articles by Decrypt

37 minutes ago
Gemini Shares Rise After Hours as Investors Back Shift Beyond Crypto Trading
1 hour ago
Fake FBI Crypto Tokens Are Being Used to Threaten Tron Users, Authorities Warn
1 hour ago
Solana Treasury Forward Industries Uses Loan to Buy Back Shares After 89% Price Dive
View More

Table of Contents

|
|
APP
Windows
Mac
Share To

X

Telegram

Facebook

Reddit

CopyLink

Related Articles

avatar
avatarcoindesk
25 minutes ago
Crypto Clarity Act inches toward Senate hearing as lawmakers weigh legislative trades
avatar
avatarDecrypt
37 minutes ago
Gemini Shares Rise After Hours as Investors Back Shift Beyond Crypto Trading
avatar
avatarbitcoin.com
55 minutes ago
Doug Casey Warns Iran War Could Spiral Into Prolonged Crisis, Reshape Markets and Global Power
avatar
avatarDecrypt
1 hour ago
Fake FBI Crypto Tokens Are Being Used to Threaten Tron Users, Authorities Warn
avatar
avatarDecrypt
1 hour ago
Solana Treasury Forward Industries Uses Loan to Buy Back Shares After 89% Price Dive
APP
Windows
Mac

X

Telegram

Facebook

Reddit

CopyLink