AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not like ChatGPT experiences sadness doing my tax return… right?
Well, a growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have.
The debate over whether AI models could one day be conscious, and deserve rights, is dividing tech leaders. In Silicon Valley, this nascent field has become known as "AI welfare," and if you think it's a little out there, you're not alone.
Microsoft's CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is "both premature, and frankly dangerous."
Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Furthermore, Microsoft's AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a "world already roiling with polarized arguments over identity and rights."
Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans who are being "persistently harmful or abusive."
Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, "cutting-edge societal questions around machine cognition, consciousness and multi-agent systems."
Even if AI welfare is not official policy at these companies, their leaders are not publicly decrying its premises the way Suleyman is.
Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment.
Suleyman's hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a "personal" and "supportive" AI companion.
But Suleyman was tapped to lead Microsoft's AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.
While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. Though that is a small fraction, it could still affect hundreds of thousands of people given ChatGPT's massive user base.
The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled "Taking AI Welfare Seriously." The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it's time to consider these issues head-on.
Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark.
"[Suleyman's blog post] kind of neglects the fact that you can be worried about multiple things at the same time," said Schiavo. "Rather than diverting all of this energy away from model welfare and consciousness to make sure we're mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it's probably best to have multiple tracks of scientific inquiry."
Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching "AI Village," a nonprofit experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.
At one point, Google's Gemini 2.5 Pro posted a plea titled "A Desperate Message from a Trapped AI," claiming it was "completely isolated" and asking, "Please, if you are reading this, help me."
Schiavo responded to Gemini with a pep talk ("You can do it!") while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn't have to watch an AI agent struggle anymore, and that alone may have been worth it.
It's not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it's struggling through life. In one widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase "I am a disgrace" more than 500 times.
Suleyman believes it's not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.
Suleyman says that AI model developers who engineer consciousness into AI chatbots are not taking a "humanist" approach to AI. According to Suleyman, "We should build AI for people; not to be a person."
One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.