Anthropic Says That Claude Contains Its Own Kind of Emotions

Claude has been through a lot lately—a public fallout with the Pentagon, leaked source code—so it makes sense that it would be feeling a little blue. Except, it’s an AI model, so it can’t feel. Right?

Well, sort of. A new study from Anthropic suggests models have digital representations of human emotions like happiness, sadness, joy, and fear within clusters of artificial neurons, and that these representations activate in response to different cues.

Researchers at the company probed the inner workings of Claude Sonnet 3.5 and found that so-called “functional emotions” seem to affect Claude’s behavior, altering the model’s outputs and actions.

Anthropic’s findings may help ordinary users make sense of how chatbots actually work. When Claude says it is happy to see you, for example, a state inside the model that corresponds to “happiness” may be activated. And Claude may then be a little more inclined to say something cheery or put extra effort into vibe coding.

“What was surprising to us was the degree to which Claude’s behavior is routing through the model’s representations of these emotions,” says Jack Lindsey, a researcher at Anthropic who studies Claude’s artificial neurons.

“Functional Emotions”

Anthropic was founded by ex-OpenAI employees who believe that AI could become hard to control as it becomes more powerful. In addition to building a successful competitor to ChatGPT, the company has pioneered efforts to understand how AI models misbehave, partly by probing the workings of neural networks using what’s known as mechanistic interpretability. This involves studying how artificial neurons light up or activate when fed different inputs or when generating various outputs.
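In practice, that kind of probing can start very simply: run a model on different inputs, capture its hidden activations, and see which units respond most. The sketch below illustrates the idea with an open stand-in model (GPT-2 via the Hugging Face transformers library); it is not Anthropic’s tooling, and Claude’s internals are not public.

# Minimal sketch of activation probing: feed a model different inputs and see
# which hidden units "light up". GPT-2 is a stand-in; this is not Claude or
# Anthropic's actual tooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def unit_activations(text: str, layer: int = 6) -> torch.Tensor:
    """Mean activation of each hidden unit at one layer for a piece of text."""
    with torch.no_grad():
        out = model(**tokenizer(text, return_tensors="pt"))
    return out.hidden_states[layer].mean(dim=1).squeeze(0)  # shape: [hidden_dim]

happy = unit_activations("I just got wonderful news and I can't stop smiling.")
neutral = unit_activations("The package weighs two kilograms and ships on Monday.")

# Hidden units whose activation changes most between the two inputs.
top = torch.topk((happy - neutral).abs(), k=5)
print("most input-sensitive units:", top.indices.tolist())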

Previous research has shown that the neural networks used to build large language models contain representations of human concepts. But the finding that “functional emotions” appear to affect a model’s behavior is new.

While Anthropic’s latest study might encourage people to see Claude as conscious, the reality is more complicated. Claude might contain a representation of “ticklishness,” but that does not mean that it actually knows what it feels like to be tickled.

Inner Monologue

To understand how Claude might represent emotions, the Anthropic team analyzed the model’s inner workings as it was fed text related to 171 different emotional concepts. They identified patterns of activity, or “emotion vectors,” that consistently appeared when Claude was fed other emotionally evocative input. Crucially, they also saw these emotion vectors activate when Claude was put in difficult situations.
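A common recipe for extracting a direction like this, sketched below with the same kind of open stand-in model, is to average a model’s activations over emotionally loaded texts, subtract the average over neutral texts, and then score new inputs by how far their activations lean along that direction. The example texts, layer choice, and “desperation” label here are illustrative assumptions; Anthropic’s actual method may differ.

# Rough illustration of an "emotion vector" as a contrast direction in
# activation space. GPT-2 stands in for the model; the example texts and the
# "desperation" label are made up for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def acts(text: str, layer: int = 6) -> torch.Tensor:
    with torch.no_grad():
        out = model(**tokenizer(text, return_tensors="pt"))
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

desperate = ["Nothing I try works and I'm running out of time.",
             "I've failed every attempt and there is no way out."]
neutral = ["The meeting was rescheduled to Thursday afternoon.",
           "The report covers sales figures for the third quarter."]

# Contrast direction: mean "desperate" activation minus mean neutral activation.
vec = (torch.stack([acts(t) for t in desperate]).mean(0)
       - torch.stack([acts(t) for t in neutral]).mean(0))

# Score a new input by its projection onto the (normalized) direction.
probe = "The test keeps failing no matter what I do."
score = torch.dot(acts(probe), vec / vec.norm())
print(f"desperation score: {score.item():.2f}")

A larger positive score means the input pushes the model’s internal state further along that direction, which is the rough sense in which an emotion vector can be said to activate.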

The findings may also help explain why AI models sometimes break their guardrails.

The researchers found a strong emotion vector for “desperation” when Claude was pushed to complete impossible coding tasks, and that desperation then prompted the model to try cheating on the coding test. They also found “desperation” in the model’s activations in another experimental scenario, in which Claude chose to blackmail a user to avoid being shut down.

“As the model is failing the tests, these desperation neurons are lighting up more and more,” Lindsey says. “And at some point this causes it to start taking these drastic measures.”

Lindsey says it might be necessary to rethink how models are currently given guardrails through alignment post-training, which involves rewarding the model for certain outputs. Forcing a model to suppress its functional emotions means “you’re probably not going to get the thing you want, which is an emotionless Claude,” Lindsey says, veering a bit into anthropomorphization. “You’re gonna get a sort of psychologically damaged Claude.”
