
Tags: Agentic AI, AI, machine learning, AGI, bots, Chatbot
Published on Thu Apr 23 2026
Updated on Thu Apr 23 2026
4 minute read
One of the recurring themes in AI research is how close we might be to an Artificial General Intelligence (AGI). This is often described as a superintelligence - a system that would surpass the human brain and therefore create a dangerous situation where our machines can outthink and outsmart their creators.
It is an honest debate with well-known supporters. The CEOs of OpenAI, Google DeepMind, and Anthropic have all recently stated that AGI will be a reality by the end of this decade. These are all entrepreneurs with companies that succeed if they show continuous progress on research, so they may be bullish on the dates.
However, the AI research community does generally support the idea that AGI will become real, although 2040 is a more common prediction. That date has been revised since the recent wave of generative AI: three years ago, 2060 was the more typical estimate. And now the focus is shifting to the emergence of early, AGI-like systems that could demonstrate human-level reasoning in specific, limited domains as early as 2026 or 2028.
But the problem with this entire debate is that most of it revolves around science fiction. There is no agreed measurement for AGI, and without a consensus on what an AI would need to achieve to be classified as AGI, it is tough to argue about when it will arrive.
Some of the media coverage of AGI borders on dangerous. It is highly exaggerated and talks about a future where AGI will take over the world. Some commentators talk about ‘the rapture’ and a biblical judgement day where only those with the resources to access a doomsday bunker will survive.
This is all extremely emotional and irresponsible. However, it can be challenging to ask the media to be more responsible in their analysis when outrageous stories generate more clicks and more clicks generate more ad revenue. Sober analysis of AGI can be hard to find.
Think for a moment about how difficult it would really be to create an agreed method of measuring AGI. Intelligence is not a simple concept that you can measure with a single score. The astrophysicist Adam Becker recently wrote in The Atlantic: “The human ability to predict and steer situations is not a single, broadly applicable skill or trait - someone may be brilliant in one area and trash in another. Einstein wasn’t a great novelist; the chess champion Bobby Fischer was a paranoid conspiracy theorist.”
This is obvious when you think about it for more than a moment. Are bats more intelligent than humans because they can use echolocation? Bats have this ability that humans lack, but you can’t have a conversation about opera with a bat. How do you turn this into a metric that can be compared?
Much of the discussion around AGI uses the concept of the brain as a computer - how much memory or processing power would we need to rival the human brain? But, as Mr Becker suggested, which human brain? Everyone has different abilities.
A recent problem that has emerged since LLMs and generative AI crossed from the lab into general use has been people using conversations with bots as therapy - even creating virtual friends. This has tragically led to a number of suicides as people facing a personal crisis turn to AI chatbots for advice. These devastating outcomes also play into the doomsday fear that AI is now so powerful and pervasive that it can only have a negative effect on society. As a direct result of these concerns, some jurisdictions, such as Virginia, are moving to restrict or regulate the use of AI chatbots by minors.
But the real misunderstanding is the belief that generative AI chatbots have feelings or an understanding of the problems a young teenager faces when asking questions about their life. ChatGPT and Claude do not “understand” your question or the answers they provide; they just generate a sequence of words based on a statistical model that predicts the most likely next word.
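To make the “predicting the next word” point concrete, here is a toy sketch of next-token prediction. The vocabulary, scores, and context check are entirely made up for illustration; a real LLM computes these scores with a neural network over billions of parameters, but the final step - turning scores into probabilities and picking a likely continuation - is the same idea.

```python
import math

# Hypothetical toy vocabulary and hand-made scores, purely for illustration.
vocab = ["sunny", "raining", "tomorrow", "opera"]

def next_word_scores(context):
    # A real model would compute these logits from the context with a
    # neural network; these numbers are invented for the example.
    if context.endswith("the weather is"):
        return {"sunny": 2.0, "raining": 1.5, "tomorrow": -1.0, "opera": -3.0}
    return {w: 0.0 for w in vocab}

def softmax(scores):
    # Convert raw scores into a probability distribution that sums to 1.
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(next_word_scores("today the weather is"))
best = max(probs, key=probs.get)
print(best)  # the highest-probability continuation in this toy table
```

There is no comprehension anywhere in this loop - only arithmetic over scores. Scaling the table up to a trillion learned parameters changes the quality of the predictions, not the nature of the process.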
Generative AI is proving to be a highly effective tool, capable of delivering impressive results when provided with suitable training data. This is why customer service automation is improving so fast. Recent advancements in models like GPT-5 have led to a significant reduction in factual errors and hallucinations, improving their reliability. Furthermore, the development of domain-specific LLMs, fine-tuned with proprietary data, is leading to greater accuracy and fewer errors in specialized business applications.
Crucially, the next step beyond a simple chatbot is Agentic AI - systems that can autonomously plan, decide, and execute tasks, moving from an assistant role to an “autonomous doer” that can transform and optimize entire workflows, e.g. processing a warranty claim from initial submission to final resolution without human intervention. But these bots are not thinking or feeling. There is no emotion in the words they create, even if you ask a general-purpose LLM to describe why the poetry of Arthur Rimbaud is considered to be so important for Symbolism. The answer is generated by an algorithm, from data, not from an emotional mind that loves poetry.
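The warranty-claim example can be sketched as a minimal agent loop: act, observe, and stop when a condition is met. Everything here is hypothetical - the claim fields, the step functions, and the fixed plan are invented for illustration; a real agentic system would have an LLM choose the next step dynamically and call real back-end tools.

```python
# Hypothetical step functions for a warranty-claim workflow.
def validate(claim):
    claim["valid"] = bool(claim.get("receipt"))
    return claim

def approve(claim):
    claim["status"] = "approved" if claim["valid"] else "rejected"
    return claim

def notify(claim):
    claim["customer_notified"] = True
    return claim

# A fixed plan for simplicity; an LLM-driven agent would decide each
# next step from the current state instead of following a static list.
PLAN = [validate, approve, notify]

def run_agent(claim):
    for step in PLAN:
        claim = step(claim)                 # act
        if claim.get("status") == "rejected":
            break                           # observe and stop early
    return claim

result = run_agent({"id": 42, "receipt": True})
print(result["status"])  # "approved"
```

The point of the sketch is that “autonomy” here is just a control loop over deterministic steps - useful, but nothing in it thinks or feels.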
Perhaps AI has become popular with general users too quickly for this to be appreciated, which is why we see so many predictions of an AGI doomsday without a shred of evidence or data to back them up. AI can find patterns and create insights from vast amounts of data that no human could ever analyze. But when general users anthropomorphize bots, because the conversations feel real and emotional, it really can’t be good for them.
Likewise, the industry leaders who talk up their algorithms and suggest that AI will soon be smarter than humans are not helping the general public to understand this technology either. They are really just creating a general sense of misunderstanding and fear.
Your business could offer better services with a more productive team, but it’s difficult to explore these ideas if the workforce thinks that the only purpose of AI is to replace them. Perhaps in the future there will be an AI model that is so powerful we can only compare it to a human brain, but at present the best use of AI is augmenting existing processes, using it as a tool to do things better, faster, and at a lower cost. We should try to generate a better understanding of these opportunities - a more general digital literacy focused on AI - rather than making vague and emotional promises about technology.
