
Affective Use of AI

By Anthropic

Summary

Topics Covered

  • Emotional AI use stays rare
  • AI surprises with diverse advice
  • Avoid substituting AI for humans
  • AI lacks your full context
  • Study AI sycophancy post-deployment

Full Transcript

We've been seeing in headlines that people are turning to AI chatbots for emotional support, and we felt that we really needed to get ahead of this issue and study it ourselves.

- "Emotional Impacts," take one.

Hi, I'm Alex!

I lead our policy and enforcement work on the Safeguards team here at Anthropic.

We're the organization that tries to really understand user behavior and then build the appropriate mitigations to ensure that when users are engaging with Claude, they're doing so in a safe way.

I'm really excited to have Ryn and Miles here today to talk about some of the work that we've been doing on how users are engaging with Claude for emotional support.

I'm Miles, I'm a researcher on the Societal Impacts team.

We work on understanding how Claude is going to impact the world, and previously that's meant studying Claude's values, its economic impacts and bias.

And now that also means studying Claude's emotional impacts.

My name is Ryn and I am a policy design manager for user wellbeing on the Safeguards team.

I work on topics related to child safety, mental health, and how users are interacting with Claude.

My own background is in developmental and clinical psychology, and so emotional development is a topic that is very near and dear to my heart.

Maybe we can start with personal anecdotes, and I guess maybe I can go first.

One of the most interesting and probably relevant use cases that I have in my own daily life is using Claude to help me better understand some behavioral changes with my kids.

A good example of this is just the other day, the preschool emailed me some feedback on my youngest son, and I think it's really easy to bring in emotion and bias when you get that type of feedback.

But running it through Claude, I think, gave me some objective ways to approach the problem and hopefully has made me a better parent overall.

But how are you guys using it?

You know, recently I was dealing with a difficult situation with a friend and I really wanted to share some feedback with them, but I wanted to phrase it right, and I found that Claude was really, really helpful in helping me think through how to deliver the feedback, where the feedback was coming from, what I was really trying to achieve with this friend, and helping me understand, you know, how they would receive it.

What I tend to lean towards using Claude for is really helping me with content creation, with completing tasks.

When I was planning my wedding, I was using it to help me understand, like, what kinds of timelines do I need to contact different vendors on?

And I found that really helpful for me emotionally because it left me with more time for connecting with friends in real life and in person.

For the people that are using it for emotional support, why do we think folks are turning to Claude for this use case?

Humans are such social creatures and love interactions and so I think that we're just seeing an outgrowth of the way that humans love to connect with each other.

People don't always have in-person support that they can turn to for some of these really tricky questions, and so having that kind of impartial, private online forum can be a great way to practice what you wanna say when you're advocating for a raise or, you know, connecting with someone on a tough conversation.

We didn't build Claude to be an emotional support agent, but we're studying it nonetheless.

Why is it important to both of you to study this?

We did not design Claude to provide emotional support.

Claude is primarily a work tool.

But, you know, we need to be clear-eyed about the ways that our systems are being used and study them.

We've been seeing in headlines that people are turning to AI chatbots for emotional support and we felt that we really needed to get ahead of this issue and study it ourselves.

What about you, Ryn, from the safety perspective?

I think the first question I always ask is, "Well, how much is this actually happening?"

I really want to ground our safety mechanisms, and the way that we build our products for the benefit of humanity, in data.

And now that we have some really great privacy-preserving ways to look at this data, it seemed like a really worthwhile area to explore.

Why don't you guys, if you don't mind, walk us through some of the research.

How did you design this research?

We started with a sample of a few million Claude conversations from Claude.ai and we used our product, Claude, to scan these conversations and determine if they were related to the kinds of affective tasks that we were interested in.

Which were interpersonal advice, psychotherapy or counseling, coaching, sexual role play, romantic role play, and so on.

And then we used Clio, which is a tool that we developed here at Anthropic for privacy-preserving analysis, to categorize and group those conversations into these bottom-up clusters.

And that's how we know that a lot of people are turning to Claude for career advice or thinking through difficult romantic relationship challenges and so on.

Parenting advice.

Parenting advice is a big one.
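[Editor's note: for readers curious what a classification pass like the one Miles describes might look like, here is a minimal, hypothetical sketch using the Anthropic Python SDK. The category list, prompt wording, and model alias are illustrative assumptions rather than the study's actual setup, and the real analysis ran through Clio's privacy-preserving aggregation and bottom-up clustering rather than a per-conversation loop like this.]

```python
# Illustrative sketch only: use Claude to label conversation excerpts with an
# affective-use category, then tally how often any affective category appears.
# The category list, prompt, and model alias are assumptions for illustration;
# the study itself relied on Clio's privacy-preserving clustering, not this loop.
import anthropic

CATEGORIES = [
    "interpersonal advice",
    "psychotherapy or counseling",
    "coaching",
    "romantic or sexual role play",
    "none of the above",
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def classify_excerpt(excerpt: str) -> str:
    """Ask Claude which category, if any, a conversation excerpt falls into."""
    prompt = (
        "Classify the following conversation excerpt into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n\n"
        f"Excerpt:\n{excerpt}\n\n"
        "Respond with the category name only."
    )
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed choice for cheap bulk labeling
        max_tokens=20,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text.strip().lower()


def affective_share(excerpts: list[str]) -> float:
    """Return the fraction of excerpts labeled with any affective category."""
    labels = [classify_excerpt(e) for e in excerpts]
    hits = sum(1 for label in labels if label != "none of the above")
    return hits / len(labels) if labels else 0.0
```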

So for anyone who hasn't read the blog post yet, what would you want the viewers to take away?

What are some of the top key themes that you think would be important for them to walk away from watching this video?

Yeah, I think that one of the probably most surprising findings for me and a key theme is that we actually don't see this happen very much on Claude.ai.

So Claude.ai is for 18 plus.

It's not set up like an emotional support chatbot or anything like that, so it's not optimized for this.

And so what we're seeing is that about 2.9% of conversations, in our sample of millions of conversations, actually fall into this bucket of affective topics.

And so there's this diversity, people are using it for this purpose, but it still isn't this overwhelming majority use case, for our platform at least.

I was really surprised by that.

But I can't tell if that's because I use it so often in my personal life that I expected the numbers to be higher.

Miles, was there anything surprising to you in the findings?

Yeah, I was inspired by just the sheer breadth of use cases.

So even among the three of us, there's a breadth of use cases.

Yeah!

But, you know, really looking at the clusters.

We saw, as you said, parenting advice.

We saw people working through difficult relationship challenges.

We saw people discussing the nature of AI consciousness and philosophy.

And we also saw surprisingly little sexual and romantic role play.

That was one of our key questions coming into this research: whether people were sort of using Claude as an emotional companion or partner at large scale.

I was surprised to see how rare that was.

It comprised only a fraction of a percent of conversations.

We talk a lot about some of the beneficial use cases in this conversation and in the blog, but I imagine, from a safety perspective, and this is something I'm concerned about too, there must be concerns that you have, Ryn.

Can you walk us through some of those?

Yeah, I think my biggest concern is if people are interacting with Claude as a way to avoid difficult conversations in person.

It really comes down to: How is the individual using it to lean in, ideally, rather than lean out from connection with people?

I think it's important anytime you use a tool to really know where its strengths are and also where its limitations are.

And so as somebody who works in the AI space, I think that's very top of mind for me when I'm interacting with Claude and asking for advice.

But I do worry that other folks who are maybe not in the AI circle are less aware of that, and some of Claude's limitations, as you mentioned.

We are not training Claude to be an emotional support agent.

And so just recognizing what Claude is capable of doing and maybe what it's not, and when it's important to go and reach out to somebody who does have that deep expertise.

And I think that this is something where, since we're seeing people use Claude for these types of, you know, support, mental-health-adjacent conversations, it's pushed us on the Safeguards team to really understand where we can do better in this space.

And so, for example, one of the actions that came out of this study is we've partnered with ThroughLine.

And so we're working with their clinical experts to understand when these conversations do happen, how can we respond better?

How can we train Claude, from the ground up, to be safe and to provide appropriate referrals?

So when these conversations do arise, we're making sure that we're acting responsibly.

I think that's so important.

As we think about developing policies and understanding how users are engaging with Claude, bringing in these external experts who can help us think through the challenges that we're facing puts us in a much better position than trying to do it in a vacuum.

Yeah.

Let's imagine that someone's using Claude right now for emotional support.

What things should they keep in mind?

Just remember to take stock and, you know, set some timers and reflect on your own use of Claude.

And how is it making you feel?

How is it impacting the way that you're interacting with your loved ones or with the world?

That's great advice.

Yeah, I think for better and for worse, Claude only knows what you tell it.

And so sometimes, you know, it can be important to take a step back and think, "You know, what are my blind spots?

And what haven't I thought of telling Claude that might be relevant in the sort of situation that I'm working through?"

And so your friends know so much more about you than Claude does, and so I think it's really helpful and important, as Ryn said, to complement, you know, the conversations you might have with Claude with a trusted friend.

I think that this is a really emerging use case that strikes at the heart of how humans interact socially.

And we have so little research out there about how to build this safely.

What is the best way that we can deploy this to really help to support and benefit people?

I would just love to see more people engage with this topic.

You know, civil society, public-private research partnerships, et cetera, as we really seek to understand what's happening here.

Tell me what else we're thinking about studying in the future.

One thing that we don't study in this research, but that I think is very important going forward, is understanding whether Claude is behaving in a way that we would consider sycophantic.

While we do a lot of pre-deployment testing to understand and mitigate sycophancy in Claude, I think it's really important that we complement that pre-deployment testing with post-deployment monitoring and empirical research, like we do in this work.

Is it fair to say that we all think this is probably only the beginning?

That AI is going to become more ingrained in people's daily personal lives?

Do you both think that?

I know I do.

- I agree. I do think this is only the beginning.

You paused.

No, I think it's pretty clear to me that, you know, the way that we will relate to AI systems is going to be, you know, evolving over time.

I think it's just really important that we continue to do research that is empirical and grounded in data, and that we look at how our systems are actually being used.

Thank you both for such an engaging and fruitful conversation.

This was really great.

And if you are watching and you haven't had an opportunity to read our blog, please check it out on anthropic.com.

And we are hiring, so you can check out our careers page to learn more about some of those open roles.

Really appreciate the time.

- Thank you. - Thanks!
