Executive summary

  • The report maps how algorithmic amplification operates across major platforms, why it matters for democratic accountability, and what lessons can be drawn from Germany’s regulatory and legal actions.
  • While “algorithm” has clear technical meanings, algorithmic amplification – the process by which systems selectively boost the visibility of certain content – remains poorly defined, opaque, and inconsistently documented across academia, civil society, and industry.
  • Germany has become a testing ground for algorithmic accountability, with four landmark cases (against X, TikTok, Amazon, and Meta) that span the DSA, GDPR, AI Act, and competition law, collectively probing how platform architectures shape visibility, behaviour, and market power.
  • These cases show that algorithmic amplification can constitute a systemic risk, distorting information flows and amplifying disinformative or manipulative content. Access to platform data and transparency in recommender systems are essential for independent scrutiny and democratic oversight.
  • Moving forward, integrated enforcement and sustained research capacity – including cross-platform investigations and algorithmic audits – are vital to bridge the gap between regulation and evidence, ensuring a more transparent, accountable, and rights-based digital ecosystem in Europe. At the same time, these frameworks should remain proportionate and grounded in fundamental rights, supporting innovation while ensuring that systemic risk mitigation measures are applied responsibly and without unnecessary restrictions on freedom of expression.

Introduction

Technological development has created online spaces that barely existed a few years ago – platforms, websites, and apps that we now use to communicate, shop, learn, and express ourselves. The words we use to describe these digital environments influence how we behave within them, yet we often depend on terms and systems we only partly understand. For example, algorithms quietly decide which news we see, which friends’ posts appear first, and which videos we are encouraged to watch, shaping our sense of reality without our even noticing.

Social media brought terms such as “algorithm” and “algorithmic amplification” into everyday conversation. These terms are now used constantly, yet they are rarely defined with precision, and most people have only a partial understanding of how algorithms operate or how algorithmic amplification occurs. The terminology is ubiquitous in discussions about our digital environment, yet its substance often remains unclear.
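To make the mechanism concrete, the sketch below shows, in Python, a deliberately simplified engagement-weighted ranker. Every element – the Post fields, the weights, the scores – is hypothetical and chosen purely for illustration; it represents only the general principle of ranking content by predicted engagement, not any platform’s actual system.

```python
# A deliberately simplified illustration of "algorithmic amplification":
# posts are ordered by a predicted-engagement score, so content the model
# expects to provoke reactions gains visibility (is amplified) while
# low-scoring content is pushed down (de-amplified). All fields and
# weights below are hypothetical; real platform systems are far more
# complex and are not publicly documented.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    predicted_clicks: float   # hypothetical model output
    predicted_shares: float   # hypothetical model output

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares count more than clicks because they
    # spread content to new audiences.
    return 1.0 * post.predicted_clicks + 3.0 * post.predicted_shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort descending: the highest-scoring posts appear first in the feed.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("A", "calm policy analysis", predicted_clicks=0.2, predicted_shares=0.1),
    Post("B", "outrage-inducing rumour", predicted_clicks=0.8, predicted_shares=0.9),
])
for post in feed:
    print(post.author, round(engagement_score(post), 2))
```

In this toy example, the post predicted to generate more shares is surfaced first: amplification here is nothing more exotic than a sort order driven by a model’s predictions, which is precisely why the choice of what those models optimise for matters.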

This report, produced by EU DisinfoLab for the Friedrich Naumann Foundation for Freedom, aims to clarify what algorithms are, how they shape our digital experiences on social media platforms through their power to amplify and give visibility to certain content while de-amplifying other content, and ultimately why they matter for understanding our digital ecosystems.

The first section provides a theoretical framework for algorithmic amplification, bringing together perspectives from academia, civil society organisations, private companies, and social media platforms. These perspectives are largely based on observation and inference, since no outside party has full visibility into how algorithmic amplification works. The lack of a universal definition reflects platforms’ unwillingness to define or disclose these mechanisms, underscoring that these systems remain a black box marked by a profound lack of transparency.

This opacity has significant repercussions for our digital lives and may ultimately even constitute a violation of EU legislation such as the Digital Services Act (DSA).

The second section focuses on the pursuit of greater transparency and accountability in Germany regarding how platforms design and deploy their algorithms. Various initiatives led by German organisations illustrate ongoing efforts to better understand algorithmic amplification on social media platforms and to counter its malicious or self-serving use for political or economic purposes.

This publication is a deliverable written by Maria Giovanna Sessa and Raquel Miguel (EU DisinfoLab) for the Friedrich Naumann Foundation for Freedom.