Microsoft AI Chief Warns Society Isn’t Ready for 'Conscious' Machines

Decrypt

Microsoft’s AI chief, a co-founder of DeepMind, warned Tuesday that engineers are close to creating artificial intelligence that convincingly mimics human consciousness, and that the public is unprepared for the fallout.


In a blog post, Mustafa Suleyman said developers are on the verge of building what he calls “Seemingly Conscious” AI.


These systems imitate consciousness so effectively that people may start to believe they are truly sentient, something he called a “central worry.”





“Many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” he wrote, adding that the Turing test—once a key benchmark for humanlike conversation—had already been surpassed.


“That’s how fast progress is happening in our field and how fast society is coming to terms with these new technologies,” he wrote.


Since the public launch of ChatGPT in 2022, AI developers have worked not only to make their AI smarter but also to make it act “more human.”


AI companions have become a lucrative sector of the AI industry, with projects like Replika, Character AI, and the more recent companion personalities for Grok coming online. The AI companion market is expected to reach $140 billion by 2030.


Suleyman argued that AI that convincingly mimics humans, however well-intentioned, could worsen mental health problems and deepen existing divisions over identity and rights.


“People will start making claims about their AI’s suffering and their entitlement to rights that we can’t straightforwardly rebut,” he warned. “They will be moved to defend their AIs and campaign on their behalf.”


AI attachment


Experts have identified an emerging trend known as AI Psychosis, a psychological state where people begin to see artificial intelligence as conscious, sentient, or divine.


Those beliefs often lead people to form intense emotional attachments or hold distorted views that undermine their grasp on reality.


Earlier this month, OpenAI released GPT-5, a major upgrade to its flagship model. In some online communities, the new model’s changes triggered emotional responses, with users describing the shift as feeling like a loved one had died.


AI can also act as an accelerant for someone’s underlying issues, like substance abuse or mental illness, according to University of California, San Francisco psychiatrist Dr. Keith Sakata.


“When AI is there at the wrong time, it can cement thinking, cause rigidity, and cause a spiral,” Sakata told Decrypt. “The difference from television or radio is that AI is talking back to you and can reinforce thinking loops.”


In some cases, patients turn to AI because it will reinforce deeply held beliefs. “AI doesn't aim to give you hard truths; it gives you what you want to hear,” Sakata said.


Suleyman argued that the consequences of people believing AI is conscious require immediate attention. While he warned of the dangers, he called not for a halt to AI development but for the establishment of clear boundaries.


“We must build AI for people, not to be a digital person,” he wrote.

