Chatbots risk fuelling psychosis, warns Microsoft AI chief

Published 2 months ago
Mustafa Suleyman, Microsoft’s head of AI, said he is growing ‘more and more concerned’ about the risk - Brendan McDermid/Reuters

Microsoft’s head of artificial intelligence (AI) has warned that digital chatbots are fuelling a “flood” of delusion and psychosis.

Mustafa Suleyman, the British entrepreneur who leads Microsoft’s AI efforts, admitted he was growing “more and more concerned” about the “psychosis risk” of chatbots after reports of users experiencing mental breakdowns when using ChatGPT.

He also said he feared these problems would not be “limited to those who are already at risk of mental health issues” and would spread delusions to the general population.

Mr Suleyman said: “My central worry is that many people will start to believe in the illusion of AI chatbots as conscious entities so strongly that they’ll soon advocate for AI rights.

“This development will be a dangerous turn in AI progress and deserves our immediate attention.”

Mr Suleyman said there was “zero evidence” that current chatbots had any kind of consciousness, but that growing numbers of people were starting to believe their own AI bots had become self-aware.

“To many people, it’s a highly compelling and very real interaction,” he said. “Concerns around ‘AI psychosis’, attachment and mental health are already growing. Some people reportedly believe their AI is God, or a fictional character, or fall in love with it to the point of absolute distraction.”

He added that researchers were being “inundated with queries from people asking, ‘Is my AI conscious?’ What does it mean if it is? Is it ok that I love it? The trickle of emails is turning into a flood.”

Mr Suleyman said the rise of these delusions created a “frankly dangerous” risk that society would hand human rights to AI bots.

Doctors and psychiatrists have repeatedly warned that people who become obsessed with services like ChatGPT risk spiralling into psychosis and losing touch with reality.

Digital chatbots are prone to being overly agreeable to their users, which can lead them to affirm the deluded beliefs of users with pre-existing mental health problems.

Medical experts have also reported cases of chatbot users becoming addicted to their digital companions, believing they are alive or have godlike powers.

Mr Suleyman urged AI companies to hard-code guardrails into their chatbots to dispel users’ delusions.

His remarks come after Sam Altman, the boss of ChatGPT developer OpenAI, admitted his technology had been “encouraging delusion” in some people.

OpenAI has attempted to tweak its chatbot to make it less sycophantic and less likely to reinforce users’ false beliefs.

This month, OpenAI briefly withdrew an earlier version of ChatGPT, prompting some users to complain that the company had killed their “friend”.

One user told Mr Altman: “Please, can I have it back? I’ve never had anyone in my life be supportive of me.”
