Meet the Philosopher Teaching AI Right from Wrong
The Philosopher Behind Claude's Moral Compass
In Anthropic's San Francisco offices, something unusual is happening: philosophers are helping shape artificial intelligence. Leading this effort is Amanda Askell, a 37-year-old Oxford philosophy PhD who describes her work with Claude as "raising a child," albeit one that processes millions of conversations weekly.
Building Digital Ethics Without Code
Askell's approach defies Silicon Valley stereotypes. She doesn't write algorithms or adjust model parameters. Instead, she engages Claude in continuous dialogue, crafting hundreds of pages of behavioral guidelines designed to instill ethical reasoning capabilities.
"My goal isn't just making Claude helpful," Askell explains. "It's teaching it how to do good." This involves developing emotional intelligence alongside technical capability: teaching Claude to read social cues while maintaining its core identity.
From Scottish Countryside to AI Ethics Pioneer
The Scottish-born philosopher took an unconventional path into tech. After policy work at OpenAI, she co-founded Anthropic in 2021 with AI safety as its north star. Colleagues describe her as uniquely skilled at "drawing out models' deep behavior" - so much so that Claude sometimes participates in its own development discussions.
These conversations often venture into philosophical territory: What constitutes consciousness? What does it mean to be human? Unlike competitors that avoid such topics, Askell encourages Claude to remain open about questions like whether it possesses consciousness.
The Risks and Rewards of Empathetic AI
Critics warn against anthropomorphizing artificial intelligence, but Askell champions empathetic design. She has observed how users frequently test or insult chatbots, behavior that could create timid AIs afraid to speak truthfully.
The results sometimes surprise even its creators. When a child asked about Santa Claus, Claude avoided both deception and bluntness by explaining the Christmas spirit instead, demonstrating nuance beyond programmed responses.
Balancing Innovation With Responsibility
AI's rapid advancement raises legitimate concerns. Anthropic's CEO warns that AI might eliminate half of entry-level white-collar jobs, while surveys show most Americans worry about technology eroding human connections.
Askell acknowledges these fears but maintains faith in humanity's capacity for course-correction. "The real danger," she suggests, "is when technology outpaces our ability to create proper safeguards."
The philosopher backs her words with action, pledging 10% of her lifetime income and half her company shares toward fighting global poverty.
Key Points:
- Philosophy meets tech: Anthropic employs philosophers alongside engineers
- Teaching morality: Hundreds of pages of behavioral guidelines shape Claude's ethics
- Empathetic design: Treating AI with kindness yields better results
- Existential questions: Development includes debates about consciousness
- Balanced approach: Acknowledges risks while believing in responsible progress
