Berlin — Germany’s federal government and a group of leading Holocaust memorial institutions have urged social media companies to take stronger action against artificial intelligence-generated images that present fabricated scenes from Nazi concentration camps and the Holocaust.
In a joint letter dated 13 January, memorial sites and documentation centres said a recent surge of low-quality or deliberately manipulative AI content, often described online as “AI slop”, was spreading across major platforms and distorting public understanding of the Second World War.
The signatories include memorial institutions at former camps such as Bergen-Belsen, Buchenwald and Dachau, which said they had observed a growing volume of images depicting invented moments inside the camps or during their liberation, presented in an emotionally charged style. Examples cited in reporting included fictionalised reunions between prisoners and liberators, and children shown behind barbed wire.
The institutions said the material appeared to be produced for two broad purposes: to attract attention and generate revenue, and to blur historical facts by altering the roles of victims and perpetrators or promoting revisionist narratives.
Their letter argued that the spread of such imagery risked undermining confidence in authentic historical documents and photographs by flooding feeds with plausible-looking fabrications. The organisations described this as a form of trivialisation and “kitschification” of history, warning that it could erode the evidentiary basis on which public knowledge of Nazi crimes rests.
Germany’s state minister for culture and media, Wolfram Weimer, said he supported the memorial institutions’ call for clearer labelling of AI-generated images and, where necessary, their removal. In remarks reported by Reuters, he described the issue as one of respect for the millions of people murdered and persecuted under the Nazi regime.
The memorial institutions’ demands were directed at platform practices rather than at individual users. They called for services to identify and act on Holocaust-related falsifications proactively, rather than relying on reports submitted by users after content has circulated. They also urged companies to ensure clear labelling of AI-generated images and to prevent the monetisation of such material.
The intervention comes as European governments and regulators increase scrutiny of how large platforms handle manipulated media, including content generated by increasingly capable image and video tools. Weimer has separately urged the European Commission to use legal mechanisms under the EU’s Digital Services Act to address harmful or unlawful content on major platforms, in comments made in early January in relation to AI-generated imagery on X.
Holocaust memorial institutions have also drawn a distinction between legitimate uses of technology in education and research and the mass production of fabricated imagery presented without context. Reuters has previously reported on projects using AI for Holocaust-related purposes, such as searching archival records to identify unnamed victims, and on individual survivors working with AI-generated imagery to preserve personal testimony. Such efforts typically involve transparent sourcing, documentation and explicit explanation of what has been reconstructed.
The German letter sits within a wider European debate about how historical memory is protected in public life. In November 2025, a planned auction in Germany of hundreds of Holocaust-related artefacts and documents drew criticism in Poland and among survivor organisations. The sale was later cancelled; Poland’s foreign minister, Radosław Sikorski, said he had discussed the matter with Germany’s foreign minister, Johann Wadephul.
The new appeal focuses on the mechanics of online distribution, and on the incentives created when sensational content is rewarded by algorithmic reach and advertising revenue. The memorial institutions’ central claim is that fabricated Holocaust imagery, particularly when presented as historical, can change the information environment in which authentic evidence competes for attention.
The letter does not propose a single technical solution, but its emphasis on labelling, de-monetisation and proactive enforcement reflects a view that platform design choices can shape the scale of manipulated content. It also places responsibility on large networks to treat Holocaust-related falsification as a distinct category of harm, given the established historical record of Nazi crimes and the continued presence of denial and revisionism online.
Following Europe’s 80th anniversary commemorations marking the end of the Second World War, the dispute over AI-generated images highlights a practical question for platforms and regulators: how to ensure that synthetic content is identified and contextualised before it is consumed as evidence.