
Customer emotions have a price: are we willing to pay it?
Updated on April 7, 2025
Imagine a future in which a company's customer service can perceive, evaluate, and even adapt to the mood of callers. A world where AI detects signals of frustration or uncertainty, not only from the tone of voice but also from clicks on a website or the rhythm of typing on a keyboard.
No longer science fiction, but an imminent reality.
This was first envisioned by American scientist Rosalind Picard, who exactly 30 years ago coined the term "Affective Computing" to describe an interdisciplinary field of research aimed at developing systems and devices capable of recognizing, interpreting, processing, and simulating human emotions.
The dream of equipping machines with emotional intelligence, enabling them to interact more naturally and effectively with real people, is gradually taking shape before our eyes.
In fact, today AI no longer merely recognizes faces or voices but is learning to understand people's moods, thanks to extraordinary advances in neural networks and deep learning. Computer vision, vocal pattern analysis, the assessment of facial microexpressions, and natural language processing are opening new frontiers in interpreting emotions: artificial intelligence can detect anger, sadness, frustration, or enthusiasm with surprising accuracy.
You've certainly noticed, in recent weeks, an acceleration in the release of new AI models by Western and Asian companies. These models are increasingly powerful and efficient.
One of these technologies particularly impressed me: R1-Omni, a model developed by a research laboratory of the Chinese giant Alibaba that is specifically designed to "read" human emotions. In demonstration videos, R1-Omni identified moods such as "happiness" or "anger" from simple video clips.
I've long thought that all this would have an enormous impact on customer experience: AI could, for instance, detect customer dissatisfaction and suggest in real time the intervention of an operator with the appropriate emotional skills, providing a more effective service.
Alternatively, an advanced chatbot could, through mood analysis, thoroughly understand a customer's frustration and respond empathetically and reassuringly: paradoxically, in a more human way.
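To make the idea concrete, here is a minimal sketch of what such an escalation flow might look like. It assumes a generic emotion classifier: the detect_emotion function, the labels, and the confidence threshold are purely illustrative and do not correspond to any specific vendor's API.

```python
# Illustrative sketch only: routing logic for an emotion-aware contact center.
# detect_emotion stands in for any emotion-recognition model (text, voice, or video);
# the labels, threshold, and responses are hypothetical.

from dataclasses import dataclass


@dataclass
class EmotionReading:
    label: str         # e.g. "frustration", "neutral", "enthusiasm"
    confidence: float  # 0.0 - 1.0


def detect_emotion(message: str) -> EmotionReading:
    # Placeholder: in practice this would call a trained classifier
    # (vocal-pattern, facial-expression, or text-based model).
    negative_cues = ("not working", "again", "unacceptable", "refund")
    if any(cue in message.lower() for cue in negative_cues):
        return EmotionReading("frustration", 0.85)
    return EmotionReading("neutral", 0.60)


def route_interaction(message: str, escalation_threshold: float = 0.75) -> str:
    reading = detect_emotion(message)
    if reading.label == "frustration" and reading.confidence >= escalation_threshold:
        # Hand the conversation over to a human operator with the right emotional skills.
        return "escalate_to_human_agent"
    # Otherwise the chatbot continues, adapting its tone to the detected mood.
    return f"chatbot_reply(tone={reading.label})"


if __name__ == "__main__":
    print(route_interaction("This is the third time my order is not working. Unacceptable!"))
    print(route_interaction("Hi, I'd like to know my delivery date."))
```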
However, while this revolutionary technology promises a leap in quality in our industry, it also raises ethical and regulatory questions.
The AI Act, whose first provisions became applicable in the European Union on February 2, 2025, establishes precise rules on the development and use of artificial intelligence systems based on their level of risk.
Emotion-reading AIs are considered "high-risk" and are already banned in workplaces and educational institutions to avoid scenarios of invasive emotional surveillance.
The use of artificial intelligence to infer emotions during job interviews and probationary periods, or to monitor students' moods during lessons, falls among practices prohibited by the AI Act.
However, outside of these contexts, there remains a margin for applying systems capable of recognizing feelings and intentions, provided their use does not lead to harmful manipulation.
The new regulations do not entirely halt progress. Instead, they impose clear boundaries to protect people's rights.
This means there is room to explore the use of more sophisticated AI in customer experience. Yet the horizon, in my view, is uncertain and hides another issue we must consider. Specifically, we know that artificial intelligence suffers from "hallucinations," namely the tendency to generate false or misleading responses.
These limitations also apply to biometric data analysis. Therefore, even in emotion detection, AI can distort reality.
An emblematic example is Amazon Rekognition, a facial recognition system developed by Amazon that has previously shown significant biases in emotion classification and person identification, with more frequent errors involving Black people and women, as established by an MIT study whose findings Amazon called "inaccurate".
To prevent such prejudices and biases from becoming embedded in customer experience strategies, companies will have to adopt processes for the continuous validation and monitoring of their emotion analysis algorithms.
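By way of illustration, a validation process of this kind could include a periodic fairness audit of the model's outputs. The sketch below, with hypothetical field names and an arbitrary disparity threshold, simply compares error rates across demographic groups on a labelled audit set and flags the model for review when the gap grows too wide.

```python
# Illustrative sketch: a periodic fairness audit for an emotion-analysis model.
# The record fields ("group", "true_label", "predicted_label") and the
# disparity threshold are assumptions, not a prescribed standard.

from collections import defaultdict


def error_rate_by_group(audit_records):
    """audit_records: iterable of dicts with 'group', 'true_label', 'predicted_label'."""
    totals, errors = defaultdict(int), defaultdict(int)
    for rec in audit_records:
        totals[rec["group"]] += 1
        if rec["predicted_label"] != rec["true_label"]:
            errors[rec["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}


def flag_disparity(rates, max_gap=0.05):
    """Flag the model for human review if error rates differ by more than max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap


if __name__ == "__main__":
    sample = [
        {"group": "A", "true_label": "anger", "predicted_label": "anger"},
        {"group": "A", "true_label": "joy", "predicted_label": "joy"},
        {"group": "B", "true_label": "anger", "predicted_label": "joy"},
        {"group": "B", "true_label": "joy", "predicted_label": "joy"},
    ]
    rates = error_rate_by_group(sample)
    needs_review, gap = flag_disparity(rates)
    print(rates, "gap:", round(gap, 2), "review needed:", needs_review)
```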
In short, today's real challenge is not to determine whether AI can interpret our emotions, but whether we will be able to manage its growing intrusiveness and profit from it appropriately, with our deepest humanity.
Will we succeed?