Bruno Giussani, author of a book on artificial intelligence’s impact on our lives, reflects on Pope Leo XIV’s Message for the 60th World Day of Social Communications, “Preserving Human Voices and Faces.”

By Bruno Giussani*

How many algorithms are part of our daily lives? How many sensors? How many screens?

Interacting with screens and digital interfaces of all kinds has become the main activity for almost all of us.

But how many of these interactions are truly the result of a choice? More than a habit, it is becoming a condition. Our society is increasingly structured around algorithms and digital networks that shape its forms and dynamics.

This has not happened — and is not happening — through public debate, political decision-making, or a democratic process, but rather as the indirect (though by no means accidental) consequence of commercial mechanisms and the often uncritical and impatient adoption of technologies that, in effect, redefine the social, economic, and cultural sphere.

This “algorithmization” of life raises essential questions. Who controls these systems? What values and logics do they convey, and which do they exclude? What are the consequences for our autonomy as human beings? And are we still capable of asking ourselves these questions, or are we becoming accustomed to living in a world where the answers are already written into the computer code that surrounds us?

When, in his illuminating Message for the 60th World Day of Social Communications, celebrated on May 17, His Holiness Pope Leo XIV writes that our challenge “is not technological, but anthropological,” he captures in one seemingly simple sentence the full depth, unease, and responsibility that each of us should feel in the face of advancing digital technologies, and especially artificial intelligence (AI).

Many still think of these technologies as mere tools useful for simplifying and accelerating existing processes. In reality, over the last two decades — first gradually, then ever faster — these technologies, from social media to commercial systems, have become platforms. They are invisible architectures within which we operate and “live,” and upon which our ability to participate in social and professional life increasingly depends.

The arrival, three and a half years ago, of generative AI — the chatbots with which we interact by writing or speaking — has further accelerated this replacement of human logic with techno-logic.

AI “agents” are now beginning to spread: systems with increasing degrees of autonomy, capable of handling a variety of complex tasks on behalf of users without detailed instructions. Meanwhile, neurotechnologies are already appearing on the horizon — a family of devices that interact directly with the human brain and the nervous system.

This foreshadows a not-too-distant future in which humans and artificial entities will coexist, interact, and co-evolve. These developments progressively redefine what is considered “normal” and acceptable, actively shaping the moral and ethical landscape of our societies.

Algorithms, in fact, are not neutral. They are designed to optimize certain objectives, often economic ones (profit, engagement, efficiency, competition), and sometimes political or cultural ones.

In doing so, they reset what is considered right, desirable, or necessary, altering — in the words of Pope Leo XIV — “some of the fundamental pillars of human civilization.”

The line between what is acceptable and what is not shifts not through conscious choice, but through adaptation, through passive acceptance of what technology makes convenient, visible, and valued. Society is thus being algorithmically reprogrammed.

The central question therefore becomes: who decides the rules of this new society? If socio-economic and even ethical dynamics are increasingly conditioned by automated systems controlled by a handful of large companies or governments, what space remains for individual and collective choice?

How can fairness, justice, and dignity be guaranteed if decisions are made by machines whose functioning escapes the understanding of the majority? Algorithmic reprogramming is not an inevitable destiny. It is, however, a process that urgently needs to be understood, critically discussed, and, where necessary, resisted.

“Each of us possesses an irreplaceable and inimitable vocation that originates from our own lived experience and becomes manifest through interaction with others,” writes Pope Leo XIV.

It is within this communicative space that our humanity is revealed and recognized. But it is precisely in this space that these machines — developed to imitate thought and speech — insert themselves without asking permission. They present themselves as “friends,” imposing themselves as mediators between us and others, and between us and the reality of the world.

In a world that places high value on productivity, speed, and profit, what generative artificial intelligence offers is deeply alluring: the possibility of freeing ourselves, by delegating to the machine, from the effort of thinking, learning, evaluating, writing, and deciding for ourselves.

It seems inevitable that this delegation will ultimately lead to a deterioration of our mental and emotional faculties, creating a kind of “cognitive debt.” More and more people may become accustomed to being guided by systems that have no real experience of life, that know neither suffering nor joy, and to passively consuming what Pope Leo XIV calls “unthought thoughts.”

As the French Catholic writer Georges Bernanos once said, the danger lies not in the multiplication of machines, but in the growing number of people accustomed, from childhood, “to desiring nothing other than what machines can give.”

Written in 1944, twelve years before the term “artificial intelligence” was even coined, Bernanos’s observation now sounds almost prophetic. The danger therefore lies not only in the power of algorithms and those who own and control them, but above all in our gradual abandonment of exercising our own faculties.

We must preserve “faces and voices,” as the Pope urges us in his Message. In other words, we must defend the human sphere, our “unique” and “distinctive features.”

Each of us has a different and personal experience of interacting with artificial intelligence. What one person finds enriching may be experienced by another as destabilizing. Some will see order; others will perceive disorder.

But it is highly unlikely that AI can develop in a safe, beneficial, and common-good-oriented way without a deliberate and collective effort.

All of us therefore — technological experts and beginners alike, enthusiasts as much as skeptics — share the same responsibility: to demand technologies that serve people and truth, not the other way around. Tools of justice rather than power. To protect freedom, equality, and human judgment. To recognize that not all questions have an algorithmic answer, that not everything calculable is therefore right, good, or desirable. To measure innovation by the standard of each person’s dignity.

And to never forget, behind every algorithm, every app, and every screen, the faces and voices of our fellow human beings.

* Bruno Giussani is the author of “Our Minds Under Siege. How to Avoid Being Manipulated in the Age of Artificial Intelligence” (Scheidegger & Spiess, forthcoming in June 2026). He lives in Switzerland.