
AI, Customer Experience, Customer Service
Published on Thu May 15 2025
Updated on Fri Aug 08 2025
5 minute read
Imagine a future in which a company's customer service can perceive, evaluate, and even adapt to the mood of callers. A world where AI detects signals of frustration or uncertainty, not only from the tone of voice but also from clicks on a website or the rhythm of typing on a keyboard.
No longer science fiction, but an imminent reality. This was first envisioned by American scientist Rosalind Picard, who coined the term "Affective Computing" exactly 30 years ago to describe an interdisciplinary field of research aimed at developing systems and devices capable of recognizing, interpreting, processing, and simulating human emotions. The dream of equipping machines with emotional intelligence, enabling them to interact more naturally and effectively with real people, is gradually taking shape before our eyes.

Today, AI no longer merely recognizes faces or voices; it is learning to understand people's moods, thanks to extraordinary advances in neural networks and deep learning. Computer vision, vocal pattern analysis, the assessment of facial microexpressions, and natural language processing are opening new frontiers in interpreting emotions: artificial intelligence can detect anger, sadness, frustration, or enthusiasm with surprising accuracy.

You have certainly noticed, in recent weeks, an acceleration in the release of new AI models by Western and Asian companies, each more powerful and efficient than the last. One of these technologies impressed me profoundly: R1-Omni, developed by a research laboratory belonging to the Chinese giant Alibaba. It is a model built specifically to "read" human emotions; in demonstration videos, R1-Omni identified moods such as "happiness" or "anger" from simple video clips.

I have long thought that all this would have an enormous impact on customer experience. AI could, for instance, detect customer dissatisfaction and suggest, in real time, the intervention of an operator with the appropriate emotional skills, providing more effective service. Alternatively, an advanced chatbot could use mood analysis to thoroughly understand customer frustration and respond empathetically and reassuringly, and, paradoxically, more humanly.
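To make the routing idea concrete, here is a minimal sketch of how a contact center might act on emotion scores. Everything here is an assumption for illustration: the `EmotionReading` labels, the score scale, and the thresholds are hypothetical, not the output format of R1-Omni or any real API.

```python
from dataclasses import dataclass

@dataclass
class EmotionReading:
    """Hypothetical per-interaction emotion scores, each in 0.0-1.0."""
    anger: float
    frustration: float
    enthusiasm: float

def route_interaction(reading: EmotionReading, escalate_at: float = 0.7) -> str:
    """Decide how to handle the interaction based on detected affect.

    High negative affect -> hand off to a human operator with
    de-escalation skills; moderate frustration -> keep the chatbot
    but switch it to an empathetic register; otherwise proceed normally.
    """
    if max(reading.anger, reading.frustration) >= escalate_at:
        return "escalate_to_agent"
    if reading.frustration >= 0.4:
        return "chatbot_empathetic_mode"
    return "chatbot_standard_mode"

# A visibly frustrated caller is escalated to a human agent.
print(route_interaction(EmotionReading(anger=0.1, frustration=0.8, enthusiasm=0.0)))
# escalate_to_agent
```

The thresholds would in practice be tuned per channel (voice, chat, email) and validated against outcomes such as resolution rate, not hard-coded as above.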
However, while this revolutionary technology promises a leap in quality for our industry, it also raises ethical and regulatory questions. The AI Act, whose first prohibitions took effect in the European Union on February 2nd, establishes precise rules on the development and use of artificial intelligence systems based on their level of risk. Emotion-reading AIs are considered "high-risk" and are already banned in workplaces and educational institutions, to avoid scenarios of invasive emotional surveillance. Using artificial intelligence to infer emotions during job interviews and probationary periods, or to monitor students' moods during lessons, falls among the practices prohibited by the AI Act. Outside of these contexts, however, there remains a margin for applying systems capable of recognizing feelings and intentions, provided their use does not lead to harmful manipulation. The new rules do not halt progress; they impose clear boundaries to protect people's rights. This means there is room to explore more sophisticated uses of AI in customer experience.

Yet the horizon, in my view, is uncertain and hides another issue we must consider. We know that artificial intelligence suffers from "hallucinations", the tendency to generate false or misleading responses, and these limitations also apply to biometric data analysis. Even in emotion detection, therefore, AI can distort reality. An emblematic example is Amazon Rekognition, Amazon's facial recognition system, which has previously shown significant biases in emotion classification and person identification, with errors more frequently involving Black people and women, as documented in an MIT study whose findings Amazon disputed as "inaccurate". To prevent such prejudices and biases from becoming an integral part of customer experience strategies, companies will have to adopt processes for continuous validation and control of their emotion analysis algorithms.
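What "continuous validation and control" might mean in practice can be sketched very simply: periodically score the model on labeled audit data and compare error rates across demographic groups. The data, group names, and the 10-point disparity threshold below are illustrative assumptions, not a prescribed methodology.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_emotion, true_emotion).

    Returns the misclassification rate per demographic group, the kind
    of metric an audit pipeline would track over time to surface biases
    like those reported for facial analysis systems.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def disparity_flagged(rates, max_gap=0.10):
    """Flag when best- and worst-served groups differ by more than max_gap."""
    return max(rates.values()) - min(rates.values()) > max_gap

# Toy audit sample: (group, model output, ground-truth label)
audit = [
    ("group_a", "anger", "anger"), ("group_a", "joy", "joy"),
    ("group_a", "anger", "anger"), ("group_a", "joy", "joy"),
    ("group_b", "anger", "joy"),   ("group_b", "anger", "anger"),
    ("group_b", "anger", "joy"),   ("group_b", "joy", "joy"),
]
rates = error_rates_by_group(audit)
print(rates, disparity_flagged(rates))  # group_b errs far more often: flagged
```

Running this kind of check on every model update, rather than once at deployment, is what turns a one-off fairness test into the continuous control the paragraph above calls for.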
In short, today's real challenge is not to determine whether AI can interpret our emotions, but whether we will be able to manage its growing intrusiveness and profit from it appropriately, with our deepest humanity. Will we succeed?