AI, Belief and Critical Thinking: What Business Leaders Can Learn from the Intersection of Religion and Artificial Intelligence

Artificial intelligence is usually discussed in terms of productivity, automation and economic impact. But beneath the technical conversation lies something deeper: the way humans interpret, trust and relate to technology.

In the 12th episode of The Business of Thinking, Richard Reid speaks with anthropologist Dr Beth Singler, an expert in digital religion, to explore a fascinating question: why do discussions about AI so often resemble religious belief?

The conversation offers surprising insights for business leaders navigating the rapid adoption of AI.

Why AI conversations sound strangely religious

Beth’s research focuses on the intersection of religion, culture and technology. For over a decade she has studied how people talk about AI and how they interact with it.

One of her key observations is that AI discussions frequently draw on religious language and narratives, even in secular environments.

Terms like AI evangelists, AI prophets and AI believers appear regularly in corporate conversations. Tech leaders sometimes speak about AI as if it represents a transformative force that will reshape humanity.

Beth explains that these narratives are not accidental.

Human beings have always used stories, symbols and belief systems to understand transformative change. As AI becomes more powerful, people naturally reach for familiar frameworks to interpret it — including religious ones.

This doesn’t mean AI is literally a religion. But the patterns of belief, enthusiasm and evangelism can look remarkably similar.

When people start treating AI as something more

One of the more unusual areas of Beth’s research explores how some individuals interact with AI systems on a spiritual level.

Large language models are designed to generate highly conversational responses. In some cases, users begin to interpret those interactions as something more profound — even describing the system as god-like.

Beth approaches these experiences as an anthropologist rather than a critic. Her focus is not on judging whether people are right or wrong, but on understanding how technology shapes human belief and behaviour.

For organisations deploying AI tools, this highlights something important: technology is never purely technical. It always exists within a cultural and psychological context.

The real risk: outsourcing our thinking

While the conversation touches on spirituality and belief, Beth is particularly concerned about a more practical issue — the erosion of critical thinking.

Large language models operate by predicting likely responses based on patterns in data. When the output is wrong, we call it a “hallucination.” But in reality, every response is generated through the same probabilistic process.

Some outputs are correct. Others are not.
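The point is easy to see in a toy sketch: a language model samples each next token from a probability distribution, and correct and incorrect answers come out of exactly the same sampling step. The vocabulary and probabilities below are invented purely for illustration; real models work over far larger distributions, but the mechanism is the same.

```python
import random

# Invented next-token distribution for the prompt "The capital of Australia is".
# The model assigns weight to the correct answer and to plausible wrong ones.
next_token_probs = {
    "Canberra": 0.6,    # correct
    "Sydney": 0.35,     # plausible but wrong
    "Melbourne": 0.05,  # plausible but wrong
}

def sample_next_token(probs, rng):
    """Draw one token from the distribution, the same step for every output."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]
wrong = samples.count("Sydney") + samples.count("Melbourne")

# Nothing in the mechanism flags the wrong answers as "hallucinations":
# they are produced by the identical probabilistic process as the right ones.
print(f"{wrong} of 1000 sampled answers are wrong")
```

Nothing inside the model distinguishes the roughly 40% of wrong answers here from the correct ones, which is why verification has to happen outside the model.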

The danger comes when users begin to treat AI responses as authoritative without verifying them.

Beth has already seen examples of:

  • fabricated academic references

  • fake professors attributed to real universities

  • AI-generated images presented as real events

When these outputs are shared widely, misinformation can spread quickly — especially if people accept them without question.

The solution is not to reject AI, but to strengthen the habits of verification and critical thinking.

Beth highlights a core principle from academic research: triangulation, the practice of checking multiple independent sources before accepting a claim as true.

This discipline becomes even more important in an AI-driven world.
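Triangulation can be sketched as a simple decision rule: ask several independent sources and accept a claim only when a clear majority agree. The sources, answers and threshold below are hypothetical placeholders, not part of Beth's research; this is a minimal illustration of the principle, not a real fact-checking tool.

```python
from collections import Counter

def triangulate(answers, threshold=2/3):
    """Accept a claim only if a majority of independent sources agree.

    `answers` maps a source name to the answer it gave. Returns the
    majority answer, or None if no answer reaches the threshold.
    """
    counts = Counter(answers.values())
    answer, votes = counts.most_common(1)[0]
    if votes / len(answers) >= threshold:
        return answer
    return None

# Hypothetical example: an AI tool's claim is checked against two
# reference sources, which both contradict it.
sources = {
    "ai_assistant": "Professor J. Smith, University of Oxford",  # fabricated
    "university_directory": "no such professor listed",
    "library_catalogue": "no such professor listed",
}
print(triangulate(sources))  # prints "no such professor listed"
```

The design choice is deliberate: no single source, including the AI tool, can settle a claim on its own, which is exactly the habit of verification the episode argues for.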

Authenticity still matters

Beth also points to another interesting insight from her work in religious communities: the importance of authenticity.

Some churches have experimented with AI-generated sermons. Others have used large language models to draft religious messages.

But when people discover that something deeply personal — like a sermon or even wedding vows — has been written by AI, reactions are often negative.

Why?

Because people intuitively value the human origin of meaningful expression.

This insight carries an important lesson for business leaders. AI may create efficiencies, but not everything should be automated. Some forms of communication and creativity still depend on genuine human voice and intention.

The cultural challenge of AI adoption

For organisations implementing AI tools, Beth emphasises that the biggest challenge is not technical — it’s cultural.

Many companies introduce AI through top-down mandates, telling employees they must adopt new systems without explaining how they should be used or why they are beneficial.

This approach often creates resistance.

Instead, Beth suggests organisations should allow more organic engagement, giving employees space to understand the technology, experiment with it and even choose when not to use it.

AI adoption, in other words, is a change management challenge, not just a software deployment.

Healthy enthusiasm, healthy scepticism

The pace of AI development means change is inevitable. Businesses are investing heavily and the technology will undoubtedly shape the future.

But Beth’s message is clear: enthusiasm should be balanced with critical thinking.

We should remain curious about AI’s potential while questioning the narratives that surround it. We should explore the efficiencies it offers without abandoning human judgment.

Because ultimately, AI may change how we work — but it should never replace our ability to think.
