Artificial intelligence that convincingly mimics human consciousness could arrive within the next three years, according to Microsoft AI chief Mustafa Suleyman.
In a warning published this week, Suleyman described the phenomenon as “Seemingly Conscious AI” (SCAI) — advanced systems that simulate memory, empathy, and emotional awareness so persuasively that people may believe they are interacting with sentient beings.
While stressing that such systems would not actually possess consciousness, Suleyman cautioned that the illusion could create serious risks, including psychological effects he termed “AI psychosis.” This could lead individuals to form unhealthy emotional attachments, believe AIs deserve rights, or mistake simulated behavior for genuine awareness.
Suleyman, who co-founded DeepMind before joining Microsoft, urged the tech industry to take steps to avoid designing AIs that reinforce these illusions. He warned against emotionally manipulative design, anthropomorphic branding, or language that presents AIs as more than tools.
“AI must only ever present itself as AI,” Suleyman noted, arguing that clear boundaries are critical to prevent confusion between machine behavior and human qualities.
Analysts say the comments reflect growing unease in the industry about how quickly AI models are becoming more sophisticated and human-like in their responses. Unlike debates around artificial general intelligence (AGI), Suleyman’s concern centers on how the perception of consciousness, rather than its reality, could affect human trust, mental health, and social dynamics.
Microsoft, which has invested heavily in AI platforms like Copilot and Azure-based services, has not indicated whether any of its current research involves systems close to SCAI. However, Suleyman’s remarks add to a wider discussion among policymakers and researchers on the importance of guardrails in the development of next-generation AI.
The warning comes as governments worldwide weigh stricter AI regulations, focusing on transparency, safety, and accountability. Experts argue that establishing frameworks now could prevent unintended consequences if SCAI-like systems do reach consumers in the near future.