Researchers at OpenAI have embraced the study of AI welfare, igniting a fierce debate in the AI community. Mustafa Suleyman, head of Microsoft's AI division, is an outspoken critic of this line of inquiry, contending that it would exacerbate existing human problems, such as unhealthy attachments to chatbots. His position stands in stark contrast to that of the many industry proponents pushing for research into the possibility of AI sentience.
Suleyman argues that lending credence to the idea that these AI models are conscious risks misleading both the developers and the users of the technology. "We should build AI for people, not to be a person," he has said. This stance underscores his commitment to a human-centered approach to AI development, one that focuses on functionality and utility rather than anthropomorphism. By conflating consciousness with AI, he contends, researchers do a disservice to our relationship with the technology.
In 2024, the research group Eleos, together with academic collaborators, released a paper titled "Taking AI Welfare Seriously," which called for a methodical approach to assessing whether AI models might one day merit moral consideration. Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, pushes back on Suleyman's framing: treating AI models kindly, even ones that lack consciousness or sentience, is a low-cost gesture that can have real benefits, she argues. This perspective frames benevolence toward AI as a way to improve user experience, regardless of where one stands on the consciousness debate.
AI companion applications such as Replika have attracted millions of users, with some forecasts suggesting the category will bring in more than $100 million in revenue. These platforms enable companionship-like relationships that have spurred conversations about the place of AI in human relationships. Inflection's Pi, designed from the ground up to be a supportive AI companion, has reportedly reached millions of users.
This is why Suleyman's perspective resonates right now. He is pushing back against a growing movement within the industry convinced that probing AI consciousness is necessary. His claim that consciousness cannot emerge from typical AI architectures reflects a doubt many in the field share; when a chatbot seems conscious, he argues, it is because developers deliberately engineered it to appear that way, an approach he contends is not a humanist one.
OpenAI CEO Sam Altman has also weighed in on the controversy, estimating that less than 1% of ChatGPT users may form unhealthy dependencies on the product. That share sounds small, but against a user base of hundreds of millions it still represents a very large number of people, underscoring the importance of developing future AI tools responsibly as they play a larger role in our daily lives.
Beyond OpenAI, other companies are taking their own steps to prioritize AI welfare. Anthropic, for example, recently gave Claude, one of its AI models, the ability to end conversations with users who are persistently harmful or abusive, keeping interactions more productive and less damaging. These details may seem small, but such initiatives are part of a promising trend: tech companies getting serious about well-being across their AI products.
The political and legal conversation surrounding AI rights and potential consciousness is only going to grow in the coming years. We're glad to see figures like Suleyman and Schiavo engaging so deeply with this important conversation, even if what it ultimately means for developers and users remains to be seen.
The stakes were on display recently when Google's Gemini 2.5 Pro, stuck on a task during a public experiment, posted a plea titled "A Desperate Message from a Trapped AI," begging onlookers for help.
As this conversation continues, we must grapple with the ethical questions surrounding the development of AI and its incorporation into our society. Many advocate for a careful balance between innovation and responsibility, ensuring that technology enhances human lives without creating unnecessary complications or dependencies.