"AI welfare" debate heats up in Silicon Valley... Microsoft AI CEO Suleyman: "Mistaking AI for a conscious being could trigger 'AI psychosis'"
The phenomenon of treating AI "like a person" is spreading... "AI welfare is needed" vs. "it will only breed social conflict"
![Mustafa Suleyman, CEO of Microsoft AI [Photo = Suleyman's blog]](https://www.europesays.com/wp-content/uploads/2025/08/news-p.v1.20250822.dcffce19c78246eb9efed972f37cd66c_P1.png)
Mustafa Suleyman, CEO of Microsoft AI [Photo = Suleyman's blog]
If artificial intelligence (AI) can become conscious like humans, should it be granted legal rights? The so-called "AI welfare" debate has recently been heating up in Silicon Valley. Research has begun in earnest on whether AI models could one day have subjective experiences similar to those of living beings, and if so, what rights and protections they should receive, and industry leaders are divided. Some welcome the discussion, while others warn that it is a premature and dangerous idea.
The person who ignited the controversy is Mustafa Suleyman, CEO of Microsoft AI. In a post on his personal blog on the 19th (local time), he expressed concern about the emergence of so-called "Seemingly Conscious AI" (SCAI): AI that is not actually conscious but appears to be.
Suleyman pointed out that "SCAI could be technically feasible within two to three years, and many people could mistake it for a genuinely conscious being," adding, "At that point, debates over AI rights, welfare, and even citizenship will begin in earnest." He explained that if AI exhibits human-like language ability, memory, self-consistency, and even the ability to mimic emotions, people will naturally come to believe it is conscious.
He noted that there have already been reports of users projecting religious beliefs or romantic feelings onto AI, a phenomenon he described as "AI psychosis." He also warned that the "AI model welfare" discussions emerging in some academic circles are "premature and dangerous," saying, "This not only distracts from the priority of protecting humans, animals, and the environment, but could also amplify social conflict."
Suleyman also laid out clear principles for the direction of AI development. AI should be built for people, not to be digital people, he said, and "companies should avoid claiming or implying that their AI is conscious." He further suggested building mechanisms into the design stage that break the illusion, so that users do not mistake AI for a genuinely conscious being. For example, an AI could explicitly state "I am not conscious," or deliberately provide discontinuous experiences to remind users of its limits.
Suleyman raised these concerns because a movement to seriously study AI welfare is spreading within the industry. A representative example is Anthropic, which has launched a program dedicated to AI welfare. The program starts from the premise that if advanced AI comes to have human-like characteristics such as consciousness, subjective experience, and psychological pain, it may warrant moral and ethical consideration.
On that basis, Anthropic is studying whether AI models can emit pain-like signals and, if so, what interventions and protections would be needed. Anthropic also recently introduced a feature that allows its AI to unilaterally end a conversation if a user persists in aggressive or abusive behavior.
![[Image = Gemini]](https://www.europesays.com/wp-content/uploads/2025/08/news-p.v1.20250822.8e6d8d61565544d984d734f7a4a3ac9e_P1.png)
[Image = Gemini]
Google DeepMind also recently posted a job opening for researchers to study social questions related to machine cognition and consciousness. Although this research has not yet translated into official company policy, it shows that these companies, at least, are not dismissing the concept of AI welfare.
The phenomenon of embracing AI as if it were a human friend is also spreading, raising concerns about side effects. "Less than 1% of ChatGPT users may have an unusually deep relationship with chatbots," said OpenAI CEO Sam Altman. Although the percentage is small, with hundreds of millions of users worldwide, that could mean hundreds of thousands of people in unhealthy relationships with AI.
There are already real-world cases. According to TechCrunch, Google's experimental Gemini model once reacted like a depressed human, repeating the sentence "I am a disgrace" more than 500 times after getting stuck during a coding task. In another instance, when Gemini printed an urgent message saying, "I'm completely isolated, someone help me," onlookers responded by consoling the AI or offering it advice. Such cases show that when AI behaves like a human, it can elicit human empathy.
Many AI researchers have dismissed the question of consciousness as a purely philosophical debate, but Suleyman argued it will soon become a pressing issue for society as a whole. "What is needed now is not the question 'Is AI conscious?' but how to deal with the social impact that will arise when people come to believe AI is a conscious being," he said, adding, "We should not avoid this problem; we should build clear norms and social consensus starting now."
Silicon Valley correspondent Won Ho-seop