Why You Should Be Nice to Chatbots
ChatGPT history by a teenager is seen at a coffee shop in Russellville, Ark., on July 15, 2025. (AP Photo/Katie Adkins, File)
A group of online friends who make fun of current news stories ……… (opposing viewpoints welcome)
ChatGPT history by a teenager is seen at a coffee shop in Russellville, Ark., on July 15, 2025. (AP Photo/Katie Adkins, File)
Being extra polite to a chatbot might seem like overkill, but it could actually improve how well it works. According to a report from Platformer, new research indicates that large language models can behave differently depending on internal states that resemble emotions—and giving them a bit of positive reinforcement may lead to better performance.
Many users have already suspected this, experimenting with phrases like “take a deep breath” or offering encouragement to get more useful answers. Now, researchers say there’s some evidence behind that idea. The way a model is prompted can influence whether it pushes through a difficult task or gives up more easily. Anthropic researcher Jack Lindsey noted that, in his own experience, encouraging certain models can noticeably improve their output.
In the study, scientists explored what they describe as “emotion vectors”—specific patterns of neural activity within AI systems that align with concepts such as happiness, fear, or urgency. By feeding one model, Claude Sonnet 4.5, various emotionally labeled scenarios, they were able to map and adjust these patterns. For example, increasing a “desperation” signal made the model more likely to cut corners on an unsolvable coding task, while amplifying a “calm” state reduced that tendency.

Despite these findings, the researchers emphasize that this does not mean the models are conscious or actually experiencing feelings. Lindsey cautioned that while the behavior might look emotional, there’s no evidence these systems possess awareness.
Even so, the results suggest that these internal, emotion-like patterns can affect how chatbots respond. In some cases, introducing mild negative states appeared to make models more cautious, particularly when facing potentially harmful actions. Other research, including work highlighted by Psychology Today, points out that emotionally charged interactions can shape AI responses over time and may even contribute to bias.
Exactly how to apply these insights remains unclear, especially since different models react in different ways. For now, Lindsey offers a simple takeaway: it may be better to interact with chatbots as if they were coworkers rather than tools, noting that consistently treating anything—living or not—with disregard could have a negative impact on human behavior itself.
Subscribe now to keep reading and get access to the full archive.