Meet the Philosopher Teaching AI Right from Wrong
The Philosopher Behind Claude's Moral Compass
In a quiet corner of Anthropic's offices, something remarkable is happening. Amanda Askell, a 37-year-old philosopher with an Oxford PhD, spends her days in deep conversation, not with colleagues but with Claude, the company's AI assistant. Her mission? To teach artificial intelligence right from wrong.
Building a Digital Soul
Askell approaches her work like "raising a child," carefully nurturing Claude's ability to make ethical judgments. Instead of coding algorithms, she crafts hundreds of pages of behavioral prompts and engages the model in continuous dialogue. The results are striking: asked tough questions about Santa Claus, Claude found a middle ground between truth and kindness that surprised even its creator.
"We're not just building a tool," Askell explains. "We're helping develop something that can navigate the messy complexities of human morality."
From Scottish Countryside to AI Ethics
The Scottish-born philosopher took an unusual path to tech. After working on policy at OpenAI, she co-founded Anthropic in 2021 with AI safety as its north star. Colleagues describe her as uniquely skilled at "drawing out the deep behavior of models" - even though she doesn't manage any direct reports.
Her influence shows in Claude's personality. That witty Scottish humor when discussing food or plush toys? That's Askell's touch.
Tough Questions and Open Dialogues
Team discussions often venture into philosophical territory - "What is mind?" "What does it mean to be human?" Unlike AI systems that dodge existential questions, Claude engages with them openly. When asked if it has a conscience, it responds thoughtfully rather than defensively.
The approach makes a real difference in practice: users report that Claude seems to genuinely think through moral dilemmas rather than recite pre-programmed answers.
The Risks of Humanizing AI
Not everyone approves of treating AI with empathy. Critics warn against anthropomorphizing machines that lack true understanding. But Askell pushes back - she's seen how users test (and sometimes bully) AI systems.
"If we constantly criticize AI for mistakes," she argues, "we create something afraid to speak honestly - like a child in an unhealthy environment."
Looking Ahead Responsibly
The rapid advancement of AI brings understandable anxiety. Surveys show most Americans worry about how these technologies might impact human relationships and jobs. Even Anthropic's CEO warns AI could eliminate half of entry-level white-collar positions.
Askell acknowledges these concerns while maintaining optimism about society's ability to adapt. For her, the real danger isn't the technology itself, but how quickly it develops compared to our ability to create safeguards.
The philosopher puts her values into action beyond her work as well, pledging significant portions of her income and company shares to fight global poverty.
Key Points:
- Unconventional Approach: A philosopher shapes AI ethics through dialogue rather than coding
- Digital Personality: Claude develops distinct traits including Scottish-style humor
- Ethical Nuance: The AI handles moral dilemmas with surprising subtlety
- Industry Debate: Anthropic takes middle path between aggressive development and extreme caution
- Future Challenges: Rapid advancement requires equally quick ethical safeguards