Conclusion
This conclusion forms the basis for proposed amendments to the Criminal Code and the Administrative Offences Act aimed at curbing the widespread dissemination of disinformation, which erodes the shared factual basis for democratic dialogue.
Problem: Disinformation
Russia and China are attacking Western democracies with hybrid methods, and disinformation is among the most potent weapons in that arsenal.
Complication: Scaling
In the age of ‘grey media’ such as Fox News and social media, virtually anyone can become a ‘journalist’ and ‘publisher’ with potentially vast reach; some influencers now reach wider audiences than established media outlets. The legal distinction between “freedom of expression” and “freedom of the press” no longer does justice to this situation: the line between private and public communication has blurred, and a few hundred media outlets have given way to millions of amateur content creators with occasionally enormous reach. The existing tools for upholding minimum journalistic standards do not scale to this number of actors and publications. Hundreds of thousands of complaints to the Press Council, which has no jurisdiction over most of these actors anyway? Hundreds of thousands of criminal proceedings under Section 84a of the Criminal Code? – Unworkable. We need legal frameworks, procedures and institutions that scale with the vast number of actors and publications.
Complication: Power
Social media is dominated by tech oligarchs pursuing political agendas. It is psychologically engineered to make users addicted, and it rewards hate, incitement and disinformation. Worse still, it lacks transparency, promotes extremist content, overwhelms users and resists regulation. Through their power and opacity, social networks have become a battleground in the information war waged by Russia and China against a free Europe. Russia has decades of experience in hybrid warfare and is investing billions in non-military warfare (active measures, reflexive control, propaganda, the firehose of falsehoods). Authoritarian actors are spending billions on disinformation campaigns on an industrial scale: funding extremist parties, agents of influence, troll farms, automated bots, thousands of fake websites (“lookalikes”), Wikipedia manipulation and, increasingly, AI, deepfakes and AI agents.
Complication: Passivity
The response from the EU and its member states has largely been characterised by passivity. There is no regulation obliging platform providers to ensure algorithmic neutrality and transparency (open-source). This results in competitive disadvantages for European providers, who stand no chance against the network effects of the established, monopoly-like providers.
Whilst, in the face of Russia’s war of aggression against Ukraine – which violates international law – politicians in individual member states are gradually allocating budgets for military defence and speaking of “Total Defence” as a “task for society as a whole”, there are no significant budgets for defence against hostile disinformation on an industrial scale. This is itself a success for hostile disinformation, which immediately frames any attempted countermeasure as a “restriction on freedom of expression”.
Countermeasures: A Failure
So far, political leaders have been shifting the state’s responsibility for protection back onto citizens, whilst calling for and promoting greater media literacy. However, the power of social media and the lavishly funded hostile campaigns cannot be countered by a handful of voluntary organisations and activists, nor by media-literate citizens in their everyday lives. A few examples:
“Fit Against Disinformation” Workshops: As part of the Bavarian Alliance Against Disinformation, the Bavarian State Ministry for Digital Affairs is funding around 100 free, practical online and face-to-face workshops designed to help politically and socially active individuals recognise disinformation and respond to it confidently. This is valuable, but only for a select few opinion leaders. One of the topics covered in the workshops is recognising fake photos and videos. Even now, experienced participants regularly fail at this. In a few years (or rather months), it will be completely impossible to distinguish deepfakes from genuine videos without extensive contextual knowledge and without access to sophisticated analysis tools and reference data. Media literacy is important, but media-literate citizens as a weapon against disinformation on an industrial scale: that is a fiction.
Debunking: Fact-checks typically come too late and are rarely read. Because fact-checks are labour-intensive, they are reserved for a small number of particularly influential or randomly selected publications. Over ten years of excellent work, the EUvsDisinfo.eu debunking database has refuted around 20,000 publications – roughly five per day. Debunking alone scales neither to millions of publications nor to millions of recipients.
Prebunking: Rather than reacting too late, warning people in advance of impending disinformation is a good idea, but it has its own difficulties. How does one find out or guess in advance what campaigns the opponent is planning? How does one prevent the description of the expected disinformation from becoming the very thing people remember, so that the person issuing the warning ends up doing the disinformer’s job and becomes part of the campaign? How many warnings can a citizen process in a day? How much public trust is lost if the warning turns out to be wrong, or if the opponent has deliberately called off their campaign? Prebunking, too, scales neither to millions of publications nor to millions of recipients.
Disinfo Trends: Through the SPARTA project, the University of the German Armed Forces in Munich analyses current trends on social media ahead of major elections. This is important work, but the budgets are insufficient to provide this service on a daily basis. Trend analyses are time-consuming and costly, and at best they are effective only on the days they are available – and only for informed citizens who take the time to check which questionable trending content has already been identified.
If we want to win the battle to preserve democracy and combat disinformation, we need more measures that can be scaled up more effectively, as well as the necessary funding. And the political courage to implement them, just as we did in the 1970s when seat belts became compulsory.
Vision: Secure Communication Paths
The aim should be for our communication channels to become as civilised and as safe as our road traffic. We would then have rules that ensure civilised communication, and scalable institutions that make communication convenient, safe and trustworthy. When it comes to cars, we are willing to pay for safety: manufacturers adhere to safety standards for every car, every town has a vehicle inspection body, every car is regularly tested for safety, every town has a driving school, every driver has a driving licence, every road meets quality criteria and is regularly maintained and repaired. Where the road is damaged, there are warning signs. There are traffic lights that regulate traffic, and we can rely on them. Almost no one drives through red lights; almost no one parks in the fire brigade access lane. Why? – Because we sanction misconduct with a scalable instrument: administrative offence law. And because we have criminal law in reserve for completely unacceptable, repeated misconduct.
Given: Scientific basis
In addition to positive experience with a well-functioning system of regulation, monitoring and enforcement in road traffic, there is also scientific evidence that sanctions help to reduce lying in communication, provided certain conditions are met:
“Highly efficient punishment does not only affect senders’ honesty in a positive way but also leads to substantially higher trust levels among receivers”1
Consequently, sanctions are effective when the sender expects a relevant sanction with a relevant probability (a sanction expectation value), which requires three conditions (formalised in the sketch after this list):
- a relevant probability that the lie will be detected, i.e. a scalable monitoring or reporting system
- a relevant probability that the detected lie will be sanctioned, i.e. a scalable sanctioning process with low costs for reporting and imposing the sanction
- a sanction of a relevant magnitude that is expected to be enforced
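One way to make this explicit – my own formalisation, not taken from the cited study – is to write the sanction expectation value as the product of the three factors:

```latex
% Sketch: "sanction expectation value" as a product of three factors.
% The symbols p_d, p_s and S are assumed names for illustration only.
\[
  \mathbb{E}[\text{sanction}]
    = \underbrace{p_d}_{\text{detection probability}}
      \cdot
      \underbrace{p_s}_{\text{sanctioning probability}}
      \cdot
      \underbrace{S}_{\text{sanction magnitude}}
\]
```

On this reading, a draconian penalty cannot compensate for a detection probability near zero: if any one factor collapses, the product – and with it the deterrent effect – collapses too.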
Required: Legal Basis
As a reminder (from Federal Constitutional Court, judgment of 22 June 1982, 1 BvR 1376/79, para. 24):
However, anything that does not contribute to the constitutionally required formation of opinion is not protected, in particular the assertion of facts that are proven or knowingly false
§ 268a StGB Forgery of Recordings (Draft)
Deepfakes undermine the foundation of truth upon which differing opinions can be meaningfully discussed. Without a commonly accepted factual basis, a peaceful democratic society cannot exist – and it is precisely this shared factual basis that Russian disinformation campaigns are undermining. The Criminal Code already penalises the “falsification of technical records” under Section 268, but only “for the purpose of deception in legal transactions”; deepfakes aimed at deceiving the public therefore fall outside its scope. That is why deepfakes need a dedicated offence. And because detecting deepfakes is complex – meaning the probability of detection is low – the penalty must be severe in order to create a deterrent effect.
Draft:
§ 268a StGB Forgery of Recordings
(1) A person shall be liable to imprisonment for a term of not less than one year if they
1. falsify a recording (photograph, audio or video) with the intention that it be circulated as genuine or that such circulation be facilitated (or: falsify a recording with this intention in such a way as to create the impression of a different reality),
2. obtain or offer for sale a false recording with that intention, or
3. circulate as genuine a false recording which they have produced, falsified or obtained under the conditions set out in points 1 or 2.
(2) If the offender acts on a commercial basis or as a member of a gang that has joined together for the purpose of the continued dissemination of disinformation, the penalty shall be imprisonment for a term of not less than two years.
(3) In less serious cases under paragraph (1), the penalty shall be imprisonment for a term of three months to five years; in less serious cases under paragraph (2), imprisonment for a term of one year to ten years.
(4) If an artificially generated recording is clearly marked as such at every point in the recording in accordance with an agreed standard, it does not fall under paragraph (1).
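What “clearly marked in accordance with an agreed standard” could mean in practice is machine-checkable provenance metadata; existing work such as the C2PA / Content Credentials standard points in this direction. The following is a minimal sketch under assumed names – the Segment type, its fields and the checking function are invented for illustration and are not taken from any real specification:

```python
# Sketch only: checks a hypothetical per-segment "ai_generated" marker.
# The metadata layout and field names are assumptions for illustration,
# not part of C2PA or any other real standard.

from dataclasses import dataclass

@dataclass
class Segment:
    start_s: float          # segment start, in seconds
    end_s: float            # segment end, in seconds
    ai_generated: bool      # hypothetical provenance flag
    signature_valid: bool   # hypothetical check of a cryptographic seal

def exempt_under_paragraph_4(segments: list[Segment]) -> bool:
    """Paragraph (4) sketch: the exemption applies only if EVERY point
    of the recording is verifiably marked as AI-generated."""
    if not segments:
        return False
    return all(s.ai_generated and s.signature_valid for s in segments)

# A clip that is only partially marked would not qualify:
clip = [Segment(0.0, 5.0, True, True), Segment(5.0, 9.0, False, True)]
print(exempt_under_paragraph_4(clip))  # False
```

The point of the all-or-nothing check is that a marker covering only parts of a recording would be trivial to crop away, which is presumably why the draft requires marking “at every point in the recording”.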
§ 118a OWiG Dissemination of False Information (Draft)
As already explained, criminal proceedings under the Criminal Code (StGB) are not scalable for mass offences. Here – as in road traffic law – there is a need for a predefined list of common offences that can be dealt with in large numbers at low cost. Under administrative offence law, there is currently only the very vague Section 118, “Disturbance of the public”, which is why we propose that this be made more specific.
Draft:
(1) A person commits an administrative offence if they disseminate facts that are proven to be false and that are likely to disturb the public peace, in particular if the dissemination denies a person or group of persons their human rights or their right to defend those rights.
(2) The administrative offence may be punished by an administrative fine if the act cannot be punished under other provisions.
(3) The amount of the administrative fine shall be determined by
- the severity of the denial of human rights
- the extent to which particularly vulnerable groups are affected (children, defenceless persons, prisoners)
- any reaffirmation of the false statement following notification of the offence
- the expected reach of the dissemination
- the actual reach of the dissemination
- the frequency and severity of offences under paragraph (1) committed in the last 365 days
- the involvement of a human decision-maker, see paragraph (6)
(4) In less serious cases where an early correction suggests a mere oversight, no fine shall be imposed.
(5) The sanctioning authority shall maintain a public list of proven false facts:
- The list shall be made available in a human-readable format (accessible website)
- The list shall be made available in a common machine-readable format (e.g. CSV file)
- The list shall be published under a Creative Commons licence
- Changes to the list shall be published with a special mark four weeks before they take effect
(6) If the offence is determined by an automated process, the following applies:
- There is a right to review by a human decision-maker within a reasonable timeframe
- If the human decision-maker confirms the offence, a correspondingly higher administrative fine shall be imposed
(7) If the offence is committed on a social media platform, the following applies:
- The administrative fine shall be advanced by the platform operator
- The platform operator may reclaim the administrative fine from the user
- The platform operator must offer an anonymous payment option for anonymous accounts
- If the user fails to pay the administrative fine within a reasonable period, the platform operator may issue a reminder and suspend the account until payment is made
- If, following a reminder, the user still fails to pay within a reasonable period, the platform operator may delete the account
- An objection under paragraph (6) shall have suspensive effect on the measures under paragraph (7)
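To illustrate how the machine-readable list under paragraph (5) and the automated determination under paragraph (6) might interact, here is a minimal sketch. The CSV column names, the entry IDs and the naive verbatim matching are all invented for illustration; the draft prescribes none of them:

```python
# Sketch: consuming a hypothetical CSV export of the public list of
# proven false facts (paragraph (5)) in an automated check (paragraph (6)).
# Column names and the naive substring matching are illustrative only.

import csv
import io
from datetime import date

# Assumed layout of the machine-readable list:
SAMPLE_LIST = """entry_id,false_claim,effective_from
DF-0001,example false claim a,2025-01-01
DF-0002,example false claim b,2099-01-01
"""

def load_active_entries(csv_text: str, today: date) -> list[dict]:
    """Keep only entries already in force; changes are announced with a
    special mark four weeks before they take effect (paragraph (5))."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if date.fromisoformat(r["effective_from"]) <= today]

def flag_post(text: str, entries: list[dict]) -> list[str]:
    """Return entry IDs whose false claim appears verbatim in the post.
    Any automated hit must remain reviewable by a human decision-maker
    on request (paragraph (6))."""
    lowered = text.lower()
    return [e["entry_id"] for e in entries if e["false_claim"] in lowered]

entries = load_active_entries(SAMPLE_LIST, date.today())
print(flag_post("a post repeating example false claim a verbatim", entries))
# ['DF-0001'] – the entry dated 2099 is not yet in force and is ignored
```

A real system would need far more robust matching than a verbatim substring check; the sketch only shows why the draft insists on a published effective date and a human review path: both make a fully automated, scalable first pass contestable rather than final.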