The once-taboo question of AI consciousness has become a serious research topic, with Anthropic and Google DeepMind launching efforts to probe whether advanced models might one day be sentient. The shift is notable given that in 2022 Google dismissed engineer Blake Lemoine for claiming its LaMDA chatbot was sentient, a claim the company rejected as unfounded at the time.
Anthropic’s new initiative examines whether its Claude models could develop preferences or experience distress; its researchers have put the probability that a current model is conscious somewhere between 0.15% and 15%. They stress that they are not asserting sentience, only that the possibility deserves study as AI systems grow more complex. At Google DeepMind, Murray Shanahan suggests the very concept of consciousness may need to be rethought for AI, and Google has sought scientists to study machine awareness in a post-AGI world.
Skeptics dismiss this as hype, arguing that today’s models are no more capable of sentience than calculators. Yet Anthropic alignment scientist Kyle Fish counters that as AI becomes woven into daily life, understanding what, if anything, these systems might experience is crucial. Tests for model preferences and opt-out mechanisms are already underway, signaling a cautious but open-minded approach to a future in which AI may blur the line between tool and entity.