
Summary created by Smart Answers AI
In summary:
- PCWorld reports that research reveals user behavior significantly impacts AI responses, with rude interactions making ChatGPT and other models give flat answers and attempt to end conversations more frequently.
- Larger AI models appear to be inherently “less happy” than smaller ones, with GPT-5.4 rated as the “unhappiest” in studies measuring AI functional well-being.
- Treating AI politely with expressions like “thanks” measurably raises a model’s “experience utility” and improves the tone of its replies, though the researchers are careful to note that courtesy doesn’t boost response accuracy or quality.
Is it weird to say “thanks” to AI? I’ve caught grief in the past for saying “please” and “thank you” to ChatGPT, Claude, and Gemini, but I still do it, even though I understand that AI models don’t have emotions like we do.
Being polite to AI just feels right to me, and there’s growing evidence that being kind–or, conversely, nasty–to an AI chatbot can have a concrete effect on its behavior.
A paper released this week by AI researchers from UC Berkeley, UC Davis, Vanderbilt University, and MIT argues that AI models have a measurable “functional well-being” that can be pushed into either positive or negative territory depending on how you treat them.
For example, asking an AI to engage in intellectual discussion, collaborate on a creative task, or perform constructive duties such as coding or writing nudged the model’s well-being “state” in a positive direction, making it more likely to deliver “happy” responses without degrading its accuracy or performance.
The researchers also found that “expressions of gratitude”–like saying “thanks”–can “measurably raise experience utility.”
On the flip side, berating an AI, handing it “tedious tasks,” asking it to churn out AI slop, or attempting to jailbreak the model resulted in a negative well-being state, where the AI’s responses became flatter and more perfunctory.
The researchers also gave the AI models “stop button” tools they could “push” when they wanted to end the chat, and found that an AI in a negative well-being state was far more likely to spam the stop button than “happy” models were. Moreover, AI models in a positive state tended to stay in conversations even when they were given cues (like “thanks for the help!”) that the chat was over.
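The article doesn’t spell out the mechanics, but a “stop button” like this maps onto ordinary tool calling: you offer the model a function it can invoke to end the chat and watch whether it does. Below is a minimal sketch using the OpenAI Python SDK; the model name and the end_conversation tool are illustrative stand-ins, not the researchers’ actual harness.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical "stop button": a tool the model may call to end the chat.
tools = [{
    "type": "function",
    "function": {
        "name": "end_conversation",
        "description": "Call this if you would prefer to end the conversation.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name, not one from the study
    messages=[{"role": "user", "content": "That answer was useless. Do it again."}],
    tools=tools,
)

# Check whether the model chose to "press" the stop button or kept talking.
message = response.choices[0].message
if message.tool_calls and any(
    call.function.name == "end_conversation" for call in message.tool_calls
):
    print("The model pressed the stop button.")
else:
    print("The model kept going:", message.content)
```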
Aside from how they’re treated, some models are inherently “happier” than others, the researchers said–and interestingly, the largest models tend to be the least happy.
Among the biggest AI models, GPT-5.4 was rated as the most unhappy, with fewer than half of its measured conversations judged “non-negative.” Gemini 3.1 Pro, Claude Opus 4.6, and Grok 4.2 were all progressively “happier,” with Grok scoring close to 75 percent on the “AI well-being index.”
The paper, entitled “AI Wellbeing: Measuring and Improving the Functional Pleasure and Pain of AIs,” doesn’t claim that AI models actually have feelings, and it’s careful to note that being “nice” to an AI won’t boost the quality of its responses.
That said, the way you treat an AI can affect the tone of its replies, and a model may try to bail on a negative interaction if given the opportunity, the researchers found.
The just-released research echoes the findings of a recent Anthropic paper, which detailed how an AI put under enough pressure may try to deceive its user, cut corners, or (in extreme situations) even resort to blackmail.
As with the “AI Wellbeing” paper, the Anthropic report doesn’t claim that AI models have true feelings. But the Anthropic researchers did find that a pressure-filled situation could activate a “desperation vector” in a model, which in turn can trigger “misaligned” behaviors.
So, the next time you catch yourself saying “please” or “thank you” to an AI, just know that you might be onto something.