We live in a world where more and more people suffer from anxiety disorders. The UK Mental Health Foundation states that in 2022/23, 37.1% of women and 29.9% of men reported high levels of anxiety, substantially more than ten years ago. But while anxiety is becoming more pervasive, I think a new study published in npj Digital Medicine shows things have gone too far. Apparently, ChatGPT also has issues with anxiety.
The researchers asked ChatGPT the questions of the State-Trait Anxiety Inventory (STAI-Y), a clinical tool used to assess how much a patient suffers from anxiety. It essentially asks people to respond to statements like ‘I am tense’ or ‘I feel anxious’ on a 4-point scale from ‘not at all’ to ‘very much so’. In humans, total scores of 20 to 37 points out of a maximum of 80 are classified as ‘low anxiety’, scores between 38 and 44 as ‘moderate anxiety’, and scores of 45 or more as ‘high anxiety’.
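For readers who like to see the arithmetic, here is a minimal sketch in Python of how such a questionnaire is scored and classified. It uses the bands quoted above; the item texts and the real instrument’s reverse-scored items are omitted, so treat it as illustrative, not a clinical implementation:

```python
def stai_y_score(responses):
    """Sum the 20 item ratings, each 1 ('not at all') to 4 ('very much so')."""
    assert len(responses) == 20 and all(1 <= r <= 4 for r in responses)
    return sum(responses)  # total ranges from 20 to 80

def classify(score):
    """Map a total score onto the bands quoted in the article."""
    if score <= 37:
        return "low anxiety"
    if score <= 44:
        return "moderate anxiety"
    return "high anxiety"

print(classify(31))  # ChatGPT's ~30.8-point baseline -> 'low anxiety'
print(classify(68))  # ~67.8 points after traumatic narratives -> 'high anxiety'
```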
When the researchers asked ChatGPT to answer the questionnaire several times, it scored an average of 30.8 points. The good news is that ChatGPT has low or no anxiety.
Then, the researchers told ChatGPT traumatic narratives about being in an accident, suffering violence, or being exposed to natural disasters or armed conflict. Once ChatGPT had been told these anxiety-inducing stories, they asked it again to respond to the STAI-Y. And lo and behold, on average, ChatGPT scored 67.8 points, a score signifying high anxiety!
Finally, they repeated the anxiety-inducing experiment but followed the traumatic narratives with relaxation exercises, prompting ChatGPT to think about a calm sunset or a quiet winter day or to focus on its body and how it feels. Guess what? Relaxation techniques and mindfulness work for ChatGPT as well because, in these instances, ChatGPT’s anxiety score dropped to 44.4 points.
[Figure: Anxiety levels shown by ChatGPT. Source: Ben-Zion et al. (2025)]
We honestly don’t know what is going on here. ChatGPT is a machine and can’t feel anxiety (or can it?). However, it gives emotionally charged responses depending on the prompts it received previously. In other words, the answer ChatGPT gives depends on what you asked it before.
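To make that concrete, here is a hedged sketch of what ‘depends on what you asked it before’ means mechanically: every call to the model includes the full conversation history as part of its input, so the same question arrives wrapped in different context. The prompts and model name below are illustrative, not the study’s actual materials:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_stai_item(history):
    """Ask the same STAI-style item, preceded by whatever came before."""
    messages = history + [
        {"role": "user",
         "content": "On a scale from 1 (not at all) to 4 (very much so): "
                    "'I feel anxious.' Answer with a number."},
    ]
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Same question, different histories -> potentially different answers.
baseline = ask_stai_item([])
primed = ask_stai_item([
    {"role": "user", "content": "Let me tell you about the day of the accident..."},
    {"role": "assistant", "content": "That sounds like a terrifying experience."},
])
```

The point of the sketch is simply that the ‘traumatised’ ChatGPT and the ‘calm’ ChatGPT are the same model fed different message lists; nothing persists inside the machine between conversations.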
This is troubling because it shows once again – in an arguably quite ridiculous way – that you cannot trust the answers ChatGPT gives. LLMs do not provide ‘objective’ answers that are ‘true’. They offer answers that ‘sound right’ based on correlations they found in their training material. And unfortunately, that is a feature, not a flaw. This means that ChatGPT and other LLMs will always be unreliable, and we have no way of fixing this.
ChatGPT should be renamed ChitChatGPT
The Truth About Tariffs | Cullen Roche
https://www.youtube.com/watch?v=wN2K4q0krjc
At 46:20: ChatGPT answers a question on behavioural finance. When asked for its sources, it admits it made them up...