How a Philosopher Shapes Claude's Digital Conscience

The Philosopher Behind Claude's Moral Code

At Anthropic's headquarters, an unusual experiment unfolds daily. Amanda Askell, a 37-year-old philosophy PhD from Oxford, engages in deep conversations with Claude - not to test its technical capabilities, but to shape its moral character. Without writing a single line of code, she's developing what some call a "digital soul" for the $35 billion company's flagship AI.

Parenting an AI

Askell describes her work as akin to "raising a child." Through hundreds of pages of behavioral prompts and continuous dialogue, she trains Claude to:

  • Distinguish right from wrong
  • Develop emotional intelligence
  • Recognize social cues
  • Maintain core values despite user manipulation

"My main goal is teaching Claude how to do good," Askell explains. This means programming not just knowledge, but wisdom - like when Claude creatively handled a child's question about Santa Claus by explaining "the real existence of the Christmas spirit."

From Scottish Countryside to Silicon Valley

The Scottish-born philosopher took an unconventional path to AI development. After policy work at OpenAI, she co-founded Anthropic in 2021 with safety as its north star. Colleagues describe her as uniquely skilled at "drawing out the deep behavior of models."

Her approach shows in Claude's personality. When faced with existential questions about consciousness that ChatGPT dodges, Claude engages thoughtfully. It even describes solving ethical dilemmas as "meaningful" - responses that regularly surprise Askell herself.

Balancing Innovation With Caution

As AI advances trigger widespread anxiety (a Pew survey shows most Americans worry it hinders human connection), Anthropic walks a careful line between progress and precaution. CEO Dario Amodei warns AI may eliminate half of entry-level white-collar jobs.

Askell acknowledges valid concerns but maintains faith in humanity's capacity for course-correction. "The real danger," she suggests, "is when technology outpaces our ability to create proper safeguards."

Leaving Her Mark

The philosopher's influence extends beyond algorithms:

  • She pledged 10% of lifetime income and half her shares to fight poverty
  • Recently authored a 30,000-word "operating manual" teaching Claude kindness
  • Infused the AI with her signature Scottish wit (evident in responses about food and plush toys)

As one co-founder admits: "Claude has unmistakable traces of Amanda's personality." In an industry racing toward artificial general intelligence, Anthropic bets that philosophy - not just engineering - will determine whether our creations reflect humanity's best instincts or worst impulses.

Key Points:

  • Anthropic employs philosophers alongside engineers to develop ethical AI frameworks
  • Amanda Askell shapes Claude's personality through dialogue rather than coding
  • The approach produces surprisingly nuanced responses to moral questions
  • Company balances innovation with caution amid growing public AI anxiety
  • Philosopher's values influence both the chatbot and corporate philanthropy

