
Published on Thu May 15 2025
Updated on Fri Aug 08 2025
5 minute read
Imagine a future in which a company's customer service can perceive, evaluate, and even adapt to the mood of callers. A world where AI detects signals of frustration or uncertainty, not only from the tone of voice but also from clicks on a website or the rhythm of typing on a keyboard.
No longer science fiction, but an imminent reality. This future was first envisioned by American scientist Rosalind Picard, who exactly 30 years ago coined the term "Affective Computing" to describe an interdisciplinary field of research aimed at developing systems and devices capable of recognizing, interpreting, processing, and simulating human emotions. The dream of equipping machines with emotional intelligence, enabling them to interact more naturally and effectively with real people, is gradually taking shape before our eyes.

Today, AI no longer merely recognizes faces or voices; it is learning to understand people's moods, thanks to extraordinary advances in neural networks and deep learning. Computer vision, vocal pattern analysis, the assessment of facial microexpressions, and natural language processing are opening new frontiers in interpreting emotions: artificial intelligence can detect anger, sadness, frustration, or enthusiasm with surprising accuracy.

You've certainly noticed, in recent weeks, an acceleration in the release of new AI models by Western and Asian companies, each more powerful and efficient than the last. One of these technologies profoundly impressed me: R1-Omni, developed by a research laboratory belonging to the Chinese giant Alibaba. It is a model specifically designed to "read" human emotions. In demonstration videos, R1-Omni identified moods such as "happiness" or "anger" from simple video clips.

I've long thought that all this would have an enormous impact on customer experience. AI could, for instance, detect customer dissatisfaction and suggest, in real time, the intervention of an operator with the appropriate emotional skills, providing more effective service. Alternatively, an advanced chatbot could, through mood analysis, thoroughly understand customer frustration and respond empathetically and reassuringly: paradoxically, more humanly.
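To make the escalation idea concrete, here is a minimal sketch of how a chatbot might score a message for frustration and decide whether to hand off to a human operator. The keyword lexicon, the `assess_message` function, and the threshold are all hypothetical illustrations; a real system like the ones described above would rely on a trained multimodal classifier, not a word list.

```python
from dataclasses import dataclass

# Toy lexicon of frustration signals. Purely illustrative: a production
# system would use a trained classifier over text, voice, and behavior.
FRUSTRATION_CUES = {"angry", "ridiculous", "useless", "again", "cancel", "waste"}

@dataclass
class MoodReading:
    score: float    # 0.0 (calm) .. 1.0 (highly frustrated)
    escalate: bool  # hand off to a human operator?

def assess_message(text: str, threshold: float = 0.3) -> MoodReading:
    """Estimate customer frustration from one chat message (toy heuristic)."""
    words = text.lower().split()
    if not words:
        return MoodReading(0.0, False)
    hits = sum(1 for w in words if w.strip(".,!?") in FRUSTRATION_CUES)
    score = min(1.0, hits / len(words) * 5)  # crude normalization to [0, 1]
    return MoodReading(score, score >= threshold)
```

A message like "This is useless, I want to cancel again!" would trip the threshold and route the conversation to an operator, while a routine thank-you would not.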
However, while this revolutionary technology promises a leap in quality for our industry, it also raises ethical and regulatory questions. The AI Act, whose ban on the riskiest practices took effect in the European Union on February 2nd, establishes precise rules on the development and use of artificial intelligence systems based on their level of risk. Emotion-reading AI is classified as at least "high-risk," and its use in workplaces and educational institutions is already banned outright, to avoid scenarios of invasive emotional surveillance. Using artificial intelligence to infer emotions during job interviews and probationary periods, or to monitor students' moods during lessons, falls among the practices prohibited by the AI Act. Outside of these contexts, however, there remains a margin for applying systems capable of recognizing feelings and intentions, provided their use does not lead to harmful manipulation.

The new regulations do not entirely halt progress. Instead, they impose clear boundaries to protect people's rights. This means there is room to explore the use of more sophisticated AI in customer experience.

Yet the horizon, in my view, is uncertain and hides another issue we must consider. We know that artificial intelligence suffers from "hallucinations," that is, the tendency to generate false or misleading responses, and these limitations also apply to biometric data analysis. Even in emotion detection, therefore, AI can distort reality. An emblematic example is Amazon Rekognition, a facial recognition system developed by Amazon that has previously shown significant biases in emotion classification and person identification, with more frequent errors involving Black people and women, as established by an MIT study that Amazon called "inaccurate." To prevent such prejudices and biases from becoming integral parts of customer experience strategies, companies will have to adopt processes for continuous validation and control of their emotion analysis algorithms.
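What might that continuous validation look like in practice? One common building block is comparing a classifier's error rate across demographic groups, which is how disparities like those found in the MIT study surface. The sketch below is a minimal, assumption-laden illustration: the record format and function names are invented for this example, not part of any standard auditing API.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Per-group misclassification rates for an emotion classifier.

    `records` is an iterable of (group, true_label, predicted_label)
    tuples; the grouping and field layout are illustrative only.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Gap between the worst- and best-served groups: a simple alarm metric."""
    return max(rates.values()) - min(rates.values())
```

Run periodically on labeled evaluation data, a check like this turns "validate the algorithm" from a slogan into a number a team can monitor and set thresholds on.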
In short, today's real challenge is not to determine whether AI can interpret our emotions, but whether we will be able to manage its growing intrusiveness and put it to good use while preserving our deepest humanity. Will we succeed?
