Conclusion

This conclusion forms the basis for proposed amendments to the Criminal Code and the Administrative Offences Act, aimed at curbing the widespread dissemination of disinformation, which erodes the shared factual basis for democratic dialogue.

Problem: Disinformation

Russia and China are attacking Western democracies with hybrid methods, and disinformation is among the most potent weapons in that arsenal.

Complication: Scaling

In the age of ‘grey media’ such as Fox News and social media, virtually anyone can become a ‘journalist’ and ‘publisher’ with potentially vast reach; some influencers reach wider audiences than established media outlets. The legal distinction between “freedom of expression” and “freedom of the press” no longer does justice to this situation: the line between private and public communication has blurred, and a few hundred media outlets have given way to millions of amateur content creators, some with enormous reach. The existing tools for upholding minimum journalistic standards do not scale with this number of actors and publications. Hundreds of thousands of complaints to the Press Council, which in any case lacks jurisdiction over most of these actors? Hundreds of thousands of legal proceedings under Section 84a of the Criminal Code? Unworkable. We need new legal frameworks, procedures and institutions that can scale with the vast number of actors and publications.

Complication: Power

Social media is dominated by tech oligarchs pursuing political agendas. Its platforms are psychologically engineered to make users addicted, and they reward hate, incitement and disinformation. Worse still, they lack transparency, promote extremist content, overwhelm users and resist regulation. Through their power and opacity, social networks have become a battleground in the information war waged by Russia and China against a free Europe. Russia has decades of experience in hybrid warfare and is investing billions in non-military warfare (active measures, reflexive control, propaganda, the “firehose of falsehoods”). Authoritarian actors are spending billions on industrial-scale disinformation campaigns: funding extremist parties, agents of influence, troll farms, automated bots, thousands of fake websites (“lookalikes”), Wikipedia manipulation and, increasingly, AI, deepfakes and AI agents.

Complication: Passivity

The response from the EU and its member states has largely been characterised by passivity. No regulation obliges platform providers to ensure algorithmic neutrality and transparency (for instance, through open-source ranking algorithms). The result is a competitive disadvantage for European providers, who stand no chance against the network effects of the established, monopoly-like platforms.

Whilst, in the face of Russia’s war of aggression against Ukraine – which violates international law – politicians in individual member states are gradually allocating budgets for military defence and speaking of “Total Defence” as a “task for society as a whole”, there are no significant budgets for defence against hostile disinformation on an industrial scale. This represents a success for hostile disinformation, which immediately frames any effort at countermeasures within the context of “restrictions on freedom of expression”.

Countermeasures: A Failure

So far, political leaders have been shifting the state’s responsibility for protection back onto citizens, whilst calling for and promoting greater media literacy. However, the power of social media and the lavishly funded hostile campaigns cannot be countered by a handful of voluntary organisations and activists, nor by media-literate citizens in their everyday lives. A few examples:

  • “Fit Against Disinformation” Workshops: As part of the Bavarian Alliance Against Disinformation, the Bavarian State Ministry for Digital Affairs is funding around 100 free, practical online and face-to-face workshops designed to help politically and socially active individuals recognise disinformation and respond to it confidently. This is valuable, but only for a select few opinion leaders. One of the topics covered in the workshops is recognising fake photos and videos. Even today, experienced participants regularly fail at this. In a few years (or rather months), it will be completely impossible to distinguish deepfakes from genuine videos without extensive contextual knowledge and without access to sophisticated analysis tools and reference data. Media literacy is important, but media-literate citizens as a weapon against disinformation on an industrial scale: that is a fiction.

  • Debunking: Fact-checks typically come too late and are rarely read. Because fact-checks are labour-intensive, they are reserved for a small number of particularly influential or randomly selected publications. The EUvsDisinfo.eu debunking database has refuted around 20,000 publications over ten years of excellent work. Debunking alone cannot scale, either to millions of publications or to millions of recipients.

  • Prebunking: Rather than reacting too late, warning people in advance of impending disinformation is a good idea, but it has its own difficulties. How does one find out or guess in advance what campaigns the opponent is planning? How does one avoid the description of the expected disinformation becoming the very thing people remember, so that the person issuing the warning ends up doing the disinformer’s job and becomes part of the campaign? How many warnings can a citizen process in a day? How badly is public trust undermined if the warning turns out to be wrong, or if the opponent has deliberately called off their campaign? Prebunking, too, fails to scale, either to millions of publications or to millions of recipients.

  • Disinfo Trends: Through the SPARTA project, the University of the German Armed Forces in Munich analyses current trends on social media ahead of major elections. This is important work, but the budgets are insufficient to provide this service on a daily basis. Trend analyses are time-consuming, cost money, and, at best, are only effective on days when they are available and when informed citizens take the time to check what questionable trending content has already been identified.

If we want to win the battle to preserve democracy and combat disinformation, we need more measures that can be scaled up more effectively, as well as the necessary funding. And the political courage to implement them, just as we did in the 1970s when seat belts became compulsory.

Vision: Secure Communication Paths

The aim should be for our communication channels to become as civilised and as safe as our road traffic. We will then have rules that ensure civilised communication, and scalable institutions that make communication convenient, safe and trustworthy. When it comes to cars, we are willing to pay for safety: manufacturers adhere to safety standards for every car, every town has a vehicle inspection body, every car is regularly tested for safety, every town has a driving school, every driver has a driving licence, every road meets quality criteria and is regularly maintained and repaired. Where the road is damaged, there are warning signs. There are traffic lights that regulate traffic, and we can rely on them. Almost no one drives through red lights; almost no one parks in the fire brigade access lane. Why? Because we sanction misconduct with a scalable instrument: administrative offence law. And because we have criminal law in reserve for completely unacceptable, repeated misconduct.

Given: Scientific basis

In addition to positive experiences with a well-functioning system of regulation, monitoring and enforcement in road traffic, there is also scientific evidence that sanctions help to reduce lying in communication, provided certain conditions are met:

“Highly efficient punishment does not only affect senders’ honesty in a positive way but also leads to substantially higher trust levels among receivers.”¹

Consequently, sanctions are effective when the sender expects a sanction that is both sufficiently likely and sufficiently severe (the sanction expectation value), which requires three conditions:

  1. a relevant probability that the lie will be detected, i.e. a scalable monitoring or reporting system
  2. a relevant probability that the detected lie will be sanctioned, i.e. a scalable sanctioning process with low costs for reporting and imposing the sanction
  3. a sanction of a relevant magnitude that is expected to be enforced
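The three conditions above combine multiplicatively: deterrence depends on the product of detection probability, sanctioning probability and sanction magnitude. A minimal sketch, using hypothetical numbers of my own (not figures from the cited study), illustrates the arithmetic:

```python
def sanction_expectation(p_detect: float, p_sanction: float, fine: float) -> float:
    """Expected cost of a lie for the sender: probability the lie is
    detected x probability a detected lie is sanctioned x sanction
    magnitude (here modelled as a fine in euros)."""
    return p_detect * p_sanction * fine

# Hypothetical numbers: deterrence requires the expected cost of lying
# to exceed the sender's expected benefit from the lie.
benefit = 100.0  # assumed gain from publishing the lie
expected_cost = sanction_expectation(0.5, 0.8, 500.0)  # 0.5 * 0.8 * 500 = 200.0
deterred = expected_cost > benefit
```

The sketch also shows why all three conditions must hold at once: if any factor is near zero (lies are rarely detected, rarely sanctioned, or only trivially fined), the product collapses and the sanction loses its deterrent effect, however large the other factors are.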

  1. Behnk S, Barreda-Tarrazona I, García-Gallego A. Punishing liars—How monitoring affects honesty and trust. PLoS One. 2018 Oct 10;13(10):e0205420. doi:10.1371/journal.pone.0205420. PMID: 30304019; PMCID: PMC6179269.↩︎