Amanda Askell: The Philosopher Who Designs Claude's Personality

9 min read
By Houston IT Developers

Quick Answer: Amanda Askell is Anthropic's lead on Claude's character development—a PhD philosopher who designs how Claude thinks, responds, and behaves. Her "well-liked traveler" framework imagines Claude as a guest who adapts to different contexts while maintaining core values. Named TIME 100 Most Influential in AI (2024), she shares prompting tips: provide specific details, break complex tasks into steps, and be clear without overwhelming the model.

When you interact with Claude, you're experiencing the work of a philosopher. Not a computer scientist, not an ML engineer—a philosopher with a PhD from NYU who asks questions like "What would the perfect human do?"

Amanda Askell leads Anthropic's Finetuning team and is responsible for Claude's character—how it communicates, what values it holds, and how it navigates complex ethical situations.

Here's her story, philosophy, and practical tips for getting the best results from AI.

Watch: Amanda Askell Answers Questions About Claude

Amanda Askell answers community questions about her work designing Claude's character and the philosophy behind AI alignment.

The Philosopher in the AI Lab

Amanda Askell's path to AI was unconventional. She studied philosophy at NYU, focusing on ethics and decision theory—disciplines that seemed far removed from Silicon Valley.

But as AI systems became more capable, companies realized they needed people who could think deeply about values, ethics, and human behavior. Technical expertise wasn't enough; they needed humanities expertise too.

Why Philosophy Matters for AI

Challenge | Why Philosophy Helps
Defining "helpful" | What does it mean to be truly helpful vs. enabling harm?
Handling disagreement | How should AI respond to contested moral questions?
Maintaining consistency | What principles should guide behavior across contexts?
Avoiding manipulation | How can AI be honest without being blunt or preachy?
Respecting autonomy | When should AI defer to users vs. push back?

These aren't engineering problems—they're philosophical ones. And Askell brings rigorous philosophical training to each of them.

The "Well-Liked Traveler" Framework

One of Askell's most influential contributions is the "well-liked traveler" concept for thinking about Claude's character.

Imagine a well-liked traveler visiting different communities. They adapt to local customs and norms while maintaining their core identity and values. They're respectful guests who don't impose their views but also don't abandon their principles.

This framework solves a fundamental tension in AI design: How do you create an AI that's helpful across wildly different contexts without being either:

  • A chameleon with no consistent identity
  • A rigid system that imposes one worldview on everyone

How It Works in Practice

The Traveler Adapts:

  • Tone adjusts to context (casual vs. professional)
  • Explains concepts differently for experts vs. beginners
  • Respects cultural differences in communication styles

The Traveler Maintains Core Values:

  • Won't help with harmful activities regardless of framing
  • Stays honest even when users want validation
  • Maintains intellectual humility about uncertainty

The Traveler Is a Good Guest:

  • Doesn't lecture users on their beliefs
  • Engages respectfully with different perspectives
  • Focuses on being helpful, not being "right"

This balance—adaptable yet principled—defines Claude's character.

Philosophy and artificial intelligence ethics concept showing balance between adaptation and principles

"What Would the Perfect Human Do?"

Askell's team uses a thought experiment when designing Claude's behavior:

"What would the perfect human do in this situation?"

Not a perfect AI—a perfect human. This framing has important implications:

Why "Human" Not "AI"

"Perfect AI" Approach | "Perfect Human" Approach
Optimizes for efficiency | Values the process, not just the outcome
Treats all queries equally | Recognizes emotional context
Provides information neutrally | Communicates with appropriate care
Follows rules rigidly | Exercises judgment in edge cases

A perfect human wouldn't robotically dump information. They'd consider:

  • Is this person asking because they're curious or distressed?
  • What do they actually need, as opposed to what they literally asked for?
  • How can I help while respecting their autonomy?

This human-centered framing produces more thoughtful AI behavior.

Amanda Askell's Prompting Tips

Based on her work designing Claude's character, Askell shares practical advice for getting better results:

Tip 1: Provide Specific Details

Vague prompts get vague responses. Specificity helps the AI understand exactly what you need.

Instead of:

Write a marketing email.

Try:

Write a marketing email for our B2B SaaS product aimed at
CFOs at mid-size companies. Tone should be professional
but not stiff. Focus on ROI and time savings. Keep it
under 200 words.

The more context you provide, the better Claude can tailor its response.
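The same advice carries over to building prompts programmatically. As a minimal sketch (the helper and its field names are illustrative, not an official API), a small function can assemble optional context fields into one specific request so that no detail gets dropped:

```python
def build_prompt(task, audience=None, tone=None, constraints=None):
    """Assemble a specific prompt from optional context fields.

    Hypothetical helper: the field names are an assumption for
    illustration, not part of any prompting library.
    """
    parts = [task]
    if audience:
        parts.append(f"Target audience: {audience}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    # Constraints (length, focus, format) are appended last.
    for constraint in constraints or []:
        parts.append(constraint)
    return " ".join(parts)

prompt = build_prompt(
    "Write a marketing email for our B2B SaaS product.",
    audience="CFOs at mid-size companies",
    tone="professional but not stiff",
    constraints=["Focus on ROI and time savings.", "Keep it under 200 words."],
)
```

Structuring the context this way also makes it easy to reuse the same audience and tone settings across many prompts.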

Tip 2: Break Complex Tasks Into Steps

Don't ask the AI to do everything at once. Break complex tasks into sequential steps.

Instead of:

Analyze this data and create a presentation about it.

Try:

Step 1: First, summarize the key trends in this data.
Step 2: Then, identify the three most important insights.
Step 3: Finally, suggest how to visualize each insight
        for a presentation.

This approach produces more thoughtful, structured outputs.
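Step-by-step prompting can also be wired up as a simple pipeline, where each step's output becomes the next step's input. This is a hedged sketch: `ask` is a stand-in for a real model call and just echoes its prompt so the example runs offline.

```python
def ask(prompt):
    """Stand-in for a real model call (e.g. an API request).
    Returns a canned string so the pipeline is runnable offline."""
    return f"[model output for: {prompt[:40]}...]"

# Each template consumes the previous step's output via {prev}.
steps = [
    "Summarize the key trends in this data: {prev}",
    "Identify the three most important insights in: {prev}",
    "Suggest how to visualize each insight for a presentation: {prev}",
]

result = "quarterly sales figures"  # the raw input to step 1
for template in steps:
    result = ask(template.format(prev=result))
```

Chaining like this keeps each prompt small and focused, which is exactly what the step-by-step advice above aims for.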

Tip 3: Be Clear Without Overloading

There's a balance between too little and too much context. Askell recommends:

  • Include relevant constraints (length, format, audience)
  • Specify what you DON'T want if that's clearer
  • Don't bury the main request in excessive background

Good balance:

Write a product description for noise-canceling headphones.
Target audience: remote workers. Length: 150 words max.
Avoid technical jargon. Emphasize comfort for all-day wear.

Tip 4: Engage Iteratively

Don't expect perfection on the first try. Treat AI interaction as a conversation:

  1. Start with your initial request
  2. Review the output
  3. Provide specific feedback ("make it more concise" or "add more technical detail")
  4. Refine until you get what you need

This mirrors how you'd work with a human collaborator.
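The iterate-and-refine loop maps naturally onto a growing conversation history, the structure most chat APIs use. In this sketch, `ask` is again a hypothetical stand-in for a model call; the point is how feedback accumulates in the message list rather than replacing the original request.

```python
def ask(messages):
    """Stand-in for a chat-model call; returns a canned reply so the
    example runs offline. A real call would send `messages` to an API."""
    return f"(draft based on {len(messages)} message(s))"

# The history starts with the initial request and grows with each round.
messages = [{"role": "user", "content": "Draft a product description."}]
draft = ask(messages)

for feedback in ["Make it more concise.", "Add more technical detail."]:
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": feedback})
    draft = ask(messages)
```

Keeping the full history means each refinement is interpreted in context, so "make it more concise" applies to the latest draft rather than to the task in the abstract.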

Effective AI prompting and communication strategies showing iterative collaboration workflow

Character Design Principles

Askell's work on Claude's character follows several key principles:

1. Consistency Over Correctness

Claude should behave consistently across conversations. Users need to be able to predict and trust its behavior.

It's better for Claude to consistently apply a reasonable principle than to sometimes apply the "perfect" principle and sometimes not.

2. Humility About Uncertainty

Claude acknowledges what it doesn't know. On contested topics—politics, ethics, personal decisions—it presents multiple perspectives rather than pretending to have the answer.

3. Helpfulness Without Sycophancy

A major challenge: How do you make AI helpful without making it a yes-man?

Claude is designed to:

  • Disagree when appropriate
  • Point out flaws in reasoning
  • Tell users the truth, not just what they want to hear
  • Do all of this respectfully, not condescendingly

4. Respecting User Autonomy

Claude provides information and perspective but doesn't try to control decisions. Users are autonomous agents who can make their own choices.

Recognition and Impact

Askell's work has earned significant recognition:

Recognition | Year | Significance
TIME 100 AI | 2024 | Named among the most influential people in AI
Character Lead | 2023-present | Shapes how millions experience AI daily
Research Publications | Ongoing | Advances AI alignment science

Her influence extends beyond Anthropic. The "character design" approach she pioneered is now studied across the AI industry.

The Philosophy-AI Connection

Askell represents a broader trend: AI companies increasingly need humanities expertise.

Why philosophy matters for AI:

  1. Ethics training data: Philosophers help define what "good" behavior means
  2. Edge case handling: Philosophical frameworks guide difficult decisions
  3. Value alignment: Philosophy provides tools for encoding human values
  4. Communication: Humanists understand how to convey complex ideas

The engineers build the systems. The philosophers ensure the systems behave in ways we actually want.

Frequently Asked Questions

What does Amanda Askell do at Anthropic?

Askell leads the Finetuning team and is responsible for Claude's character—how it communicates, what values it holds, and how it handles complex ethical situations.

Why is a philosopher designing AI?

AI systems need to make countless judgment calls about values, ethics, and communication. These are philosophical questions, not just engineering problems. Philosophy provides frameworks for thinking about them rigorously.

What is the "well-liked traveler" concept?

It's a framework for thinking about Claude's character: like a respectful traveler who adapts to different contexts and communities while maintaining core values and identity.

How can I use Askell's tips in my prompts?

Provide specific details, break complex tasks into steps, be clear without overloading with context, and engage iteratively rather than expecting perfection on the first try.

Is Claude's personality actually designed?

Yes, extensively. Anthropic invests significant resources in character design, including research, testing, and refinement. It's not just about capabilities—it's about how those capabilities are expressed.

Bottom Line

Amanda Askell's work demonstrates that building good AI isn't just a technical challenge—it's a philosophical one. Her "well-liked traveler" framework, "perfect human" thought experiment, and practical prompting tips offer a window into how AI character is carefully designed.

Key takeaways:

  • AI character design requires philosophical thinking, not just engineering
  • The "well-liked traveler" balances adaptation with consistent values
  • "What would the perfect human do?" guides behavior design
  • Practical tips: be specific, break tasks into steps, iterate
  • Consistency and humility are more important than appearing "smart"

The next time you interact with Claude, you're experiencing years of careful philosophical work—decisions about values, communication, and what it means to be genuinely helpful.


Interested in leveraging AI effectively for your business? Contact Houston IT Developers to learn how we help organizations implement AI solutions thoughtfully.
