The intersection of theology and artificial intelligence is opening up new conversations about the ethics of emerging technologies.
Researchers from the University of Oxford’s Institute for Ethics in AI argue that spiritual perspectives have a critical role to play in shaping the future of AI, raising questions about human identity, meaning, and ethics.
Dr Lyndon Drake, research fellow in AI at the Faculty of Theology and Religion, said: “Theology has substantive contributions to offer on issues such as language and meaning — areas where the assumptions embedded in AI systems are rarely made explicit.”
The discussion gained international attention in March this year when the 14th Dalai Lama met with Oxford researchers and shared his views on artificial intelligence.
The Dalai Lama said AI, regardless of its capability, will never match the depth and pace of the human mind: “Although these [AI systems] may have many functions, ultimately it depends on the human mind.
“Therefore, no matter how sophisticated they become, they cannot keep up with the pace of the human mind… my own thoughts are changing moment by moment.”
His comments, and the growing involvement of religious leaders in AI discourse, underscore the importance of theology in public conversations about technology.
Dr Caroline Emmer De Albuquerque Green, director of research at the Institute for Ethics in AI, said: “The question is no longer whether theology has a place in this conversation, but whether we can afford to exclude it.”
As AI continues to advance, reshaping society in ways that raise profound ethical and existential questions, people are increasingly turning to faith and religious leaders to interpret what it means to be human in an age of intelligent machines.
Different religious traditions are beginning to articulate their own frameworks for understanding and guiding the development of AI.
At the Civic AI Conference in March 2026, Buddhist scholar Lobsang Monlam emphasised the need for AI to operate with wisdom, compassion, and sound reasoning, warning that it should not act unless it can ‘prove the absence of harm through non-observation of negative causes’.
The Vatican, meanwhile, has focused on the importance of human dignity and the common good, advocating for AI that serves humanity.
Despite the growing interest, challenges remain.
Religious leaders may lack technological expertise, and faith traditions vary widely in their ethical approaches and worldviews, complicating unified global governance.
In response to these challenges, the Oxford Collaboration on Theology and Artificial Intelligence (OCTAI) has produced the Oxford Oath for AI Practitioners.
The oath is theologically informed but intended for a broadly secular audience.
It offers a guide for moral deliberation by scientists, engineers, and industry leaders advancing AI science and developing AI products.
Work in this space continues across the University of Oxford, including at the Schwarzman Centre for the Humanities.
Researchers are exploring how centuries of theological thought can offer valuable guidance for one of the most urgent ethical questions of the 21st century.