In this episode of The Business of Thinking, Richard Reid speaks with AI expert Calum Chace about one of the biggest questions of our time: what happens as artificial intelligence becomes more capable, more autonomous, and potentially more intelligent than we are?
It is a conversation that moves well beyond the usual headlines about productivity hacks and chatbot prompts. Instead, it explores the short-term realities of AI adoption, the longer-term prospect of superintelligence, and the very human questions this raises about work, purpose, security and survival.
The short-term picture: powerful, useful, but not foolproof
Calum begins by explaining that we are already living through a major AI shift. Most people now have some familiarity with tools like ChatGPT, Gemini or Claude, and many organisations are actively exploring how to use them at scale. These systems are astonishingly capable, but they are not infallible.
They still make mistakes. They still hallucinate. And, crucially, they still lack common sense.
That means businesses should not treat AI output as automatically reliable. Calum is clear on this point: nothing produced by AI should be allowed into the real world unchecked. Humans still need to review the work, especially in environments where accuracy, confidentiality and judgment matter.
He also points to the next major development: AI agents. These are systems that go beyond generating information and begin to act in the world, completing strings of tasks with a degree of autonomy. They may book travel, reconcile accounts, use tools like Excel or Jira, or interact with other systems and agents online. In time, they may begin to function almost like digital colleagues.
For businesses, this creates huge opportunity, but also significant risk. If agents are operating with reduced supervision, organisations must be confident they will do what they are supposed to do — and not do what they are not supposed to do. That is where verification, safety testing and thoughtful governance become essential.
The long-term picture: superintelligence changes everything
The conversation becomes even more provocative when it turns to what may come next.
Calum references long-standing predictions that by around 2029 we may see something close to artificial general intelligence or even superintelligence. He is careful not to claim certainty about that timeline, but he notes that some of the most influential figures in AI believe it could happen before the end of the decade.
Superintelligence, in simple terms, would mean a system capable of doing everything a human can do cognitively — and in many areas, doing it vastly better. If that happens, we would be creating a new, more intelligent species.
As Calum points out, that is not a small step forward in technology. It is a civilisational shift.
The upside could be extraordinary. A benign superintelligence could help solve problems that have plagued humanity for centuries: poverty, disease, war, environmental destruction. It could drive abundance on a scale we can barely imagine.
But the risks are equally profound. If machines can do all economically valuable work better, faster and more cheaply than humans, then our current system of earning a living through employment begins to break down.
A world without jobs?
One of the most striking parts of the discussion is Calum’s argument that the automation of jobs is likely. Not just some jobs, but eventually most or all of them.
At first glance, that sounds deeply unsettling. Richard raises an important psychological concern: work gives many people more than money. It gives identity, structure, status and a sense of contribution. What happens when that disappears?
Calum offers a more optimistic perspective. He argues that while jobs may disappear, meaningful activity will not. Humans are not simply lazy creatures waiting to do nothing. We are naturally driven to explore, learn, create, improve and connect. In his view, a world without jobs could free people to focus on what really matters: relationships, curiosity, play, care, creativity and community.
The real problem, then, is not whether people can find purpose. It is whether the wealth created by intelligent machines will be shared fairly.
The question governments are not asking
Perhaps the most urgent theme in the episode is that we are not prepared.
Calum does not believe governments are taking this seriously enough. Politicians, he suggests, are either only vaguely aware of the scale of the challenge or not yet willing to grapple with it in a meaningful way. Meanwhile, development continues at pace because the incentives are enormous: no nation or company wants to fall behind in the AI race.
That leaves us in a dangerous position. We may be accelerating towards a future that could transform everything, without a serious collective plan for how to manage it.
The core issue is distribution. If AI systems create vast wealth, who owns it? Who benefits? How do we ensure that a handful of powerful companies or individuals do not control the productive engine of the future while everyone else is left behind?
These are not abstract philosophical questions. They are political, economic and moral questions that need serious attention now, not later.
The challenge ahead
What makes this conversation so compelling is that it refuses easy answers. Calum is not blindly utopian, nor is he relentlessly pessimistic. He sees enormous promise in AI, but also genuine danger. Richard’s questions bring a vital human dimension to the discussion, especially around meaning, behaviour and what people need in order to thrive.
The result is a conversation that encourages a more mature response to AI: neither panic nor passivity, but deeper thought.
And perhaps that is the most important takeaway of all. The future of AI is not just a technical issue. It is a human one. And if we want that future to be safe, fair and worth living in, we need to start thinking about it much more seriously than we are now.


