Reducing disinformation and hate in election campaigns: how can we detox the debating culture?

Analysis

The German parliamentary election campaign played out on the internet as never before. In the midst of the coronavirus pandemic, this was certainly necessary, but it brought with it all the evils we have previously seen in US election campaigns: disinformation campaigns and hate speech were used to discredit candidates, while paid political online advertising and foreign influence undermined basic democratic values. As a woman, Annalena Baerbock, the Greens' candidate for Chancellor, was particularly affected. The elections have shown that what we need, in Germany and the EU, are better media skills for the population and clear rules for communication platforms.

Germany's three top candidates for Chancellor: Annalena Baerbock (Alliance 90/The Greens), Olaf Scholz (SPD) and Armin Laschet (CDU/CSU).

This article is part of our dossier "Germany has voted - The aftermath of the 2021 German federal election".

 

Anybody entering an election campaign in the hashtag age needs a thick skin. Before the German parliamentary elections, the CDU candidate Armin Laschet was mercilessly subjected to ridicule and hatred on the internet: hashtags like #laschetluegt (Laschet lies) and #laschetschreibtab (Laschet the copycat) began trending on Twitter as soon as accusations of plagiarism were made against him. When he visited the areas of west Germany hit by the disastrous flooding in the summer, he made the mistake of being caught on camera joking around with his advisers in the background whilst President of Germany Frank-Walter Steinmeier was making a speech. Instantly, he had a new hashtag: #laschetlacht (Laschet laughs).

When politicians are confronted with hate campaigns and digital violence, this constitutes a danger to the democratic process that must be taken seriously. What Laschet had to face as a political candidate (here in the case of the platform Twitter) affects female politicians tenfold. According to HateAid founder Anna-Lena von Hodenberg, the Greens' candidate for Chancellor, Annalena Baerbock, was not targeted any more frequently by hate comments, but hers came with added misogyny. As soon as Baerbock's campaign was launched in April, a fake nude photo of her started to make the rounds along with suggestions that she had carried out sex work.

Digital election campaigns come with inherent risks. In this federal election campaign, parties frequently used communication platforms for (paid) political advertising. Through micro-targeting practices, the platforms feed tailor-made advertisements to particular target groups. If potential voters are presented only with certain information in their news feeds on the basis of personal characteristics, this carries the risk of manipulating political competition in a targeted manner. Additionally, disinformation campaigns from other countries represent a growing danger. Back in March 2021, the European External Action Service noted that Germany was the main target of campaigns carried out by media with close links to the Kremlin. There had been phishing attacks against members of the German federal and state parliaments before that – the data captured in such attacks can, in certain circumstances, be used for targeted interference. The email leaks targeting then presidential candidate Emmanuel Macron in the dying hours of the 2017 French presidential election campaign demonstrated as much.

Digital smear campaigns and disinformation

Even before the parliamentary election race proper began, it became clear that targeted hate campaigns against politicians were a serious problem. A HateAid investigation showed that in these elections, it was the parties’ lead candidates who were chiefly affected by hatred and smear campaigns on the internet. Accusations of plagiarism against Armin Laschet, similar to those previously made against Annalena Baerbock, circulated in Telegram groups. On the platform Twitter, the CDU candidate took by far the most heat, followed by the Greens' and SPD's candidates, Annalena Baerbock and Olaf Scholz respectively.

Disinformation played a massive role in this election campaign and is becoming an ever greater online risk to the formation of political opinions. Disinformation means information that is false and deliberately intended to do harm – it is targeted against people, groups or even against a certain country. Disinformation campaigns spread like wildfire on the internet. They stir up hatred, deepen mistrust in institutions and encourage extremist behaviour. According to a Forsa survey carried out in July of this year on behalf of the media authority of North Rhine-Westphalia, 82% of respondents expressed the concern that political disinformation campaigns from Germany and abroad could influence the democratic process. 71% of respondents had come across disinformation in the course of the political election campaign on the internet (up from 66% in 2020).

Disinformation narratives frequently have their roots in Germany, but their originators are often inspired by “examples” set by campaigns in other countries, such as the USA. For instance, the AfD started early, attempting to sow seeds of doubt as to the legitimacy of the elections. The party warned of “postal vote fraud”, with a video opining that “ballot slips belong in the ballot box, not the post box”. The network TikTok, which is principally popular with younger people, also became a home for canvassing and disinformation in the parliamentary elections. Young influencers use the platform to speak openly in favour of the political views of the AfD, according to Khesrau Behroz and Patrick Stegemann in the podcast Noise.

Gender-specific hate

Disinformation campaigns frequently have a gender-specific angle. In the 2021 German parliamentary election campaign, Green Chancellor candidate Annalena Baerbock was a major target of disinformation campaigns, as highlighted by a study carried out by the Institute for Strategic Dialogue (ISD) in September with a focus on Facebook and Telegram. More than half of all adults in Germany received false information about the candidate. Experts have expressed shock at how quickly and intensively gender-specific hate content and disinformation about women in election campaigns spread. Josephine Ballon, a lawyer for HateAid, explained: “what is going on here is gender-specific hate. This type of hate aims to discredit the target and force them into silence”. This is not a fact-based criticism of her as a candidate, but a hurtful and targeted attack. According to ISD, Baerbock was exposed to more verbal attacks, sexual insults and threats of violence than her male rivals, Scholz and Laschet.

Hate crime and constant narratives of disinformation against electoral candidates paint a bleak political picture – the ISD study noted a worrying increase in both. These developments make it a very real prospect that women in particular will think twice about becoming politically engaged and standing in elections. Germany has had the Network Enforcement Act (NetzDG) against internet hate in place for four years. This has led to a handful of judgements against smear campaigners on the internet and to hate comments being taken down. When it comes to digital violence, however, it is proving less effective. For instance, in 2020 the Counter Extremism Project observed that YouTube removed just 35% of content reported by the project as clearly unlawful. Nor does NetzDG cover messenger services such as Telegram, which have grown to play an important role in spreading hatred and disinformation digitally.

Online political advertising: non-transparent and unregulated

Digital communication platforms are increasingly important in forming political opinions: for years, the use of platforms as a source of news has grown steadily – last year, according to a study carried out by the Hans-Bredow-Institut, 37% of respondents said that social networks had helped them to make up their minds. Partly because of the pandemic, the parliamentary election campaign frequently took place in the digital sphere. At the same time, the relevance of these platforms for paid political online canvassing increased correspondingly. However, this – unlike “traditional” election campaigning on television or billboards – is almost entirely unregulated. In particular, there is no transparency, as it is the platforms themselves which govern the procedures for (paid) political advertising. “For direct mail, there are ‘clear rules on what demographic data may be used by whom’”, writes Julian Jaursch of the Stiftung Neue Verantwortung (SNV). These rules should also apply to micro-targeting on major platforms.

This development brings with it a range of challenges: with the growth of online political advertising comes a growth in the number of advertisers. These include lobby organisations, influencers and extremist associations. This becomes particularly problematic if certain content is not even accessible, for instance when politicians communicate in groups via messaging services such as Telegram or WhatsApp. These services, originally intended for individual communication, are increasingly developing into platforms organised around groups. They often go under the radar when it comes to content moderation. The enormous amount of behavioural data on the platforms makes it possible to feed tailor-made adverts to specific groups of people. The majority of this data-based ecosystem is controlled by Big Tech companies, which cream off the advertising revenue. Publishing houses, quality journalism and other creative sectors are cut off from this source of financing. At the same time, a tracking-based advertising industry enables the financing and spread of disinformation, as this often builds on negative emotions and has no connection to the truth. Negative, populist and simplistic content, as disinformation often is, achieves higher user engagement. This increases its likely appeal to users, leading in turn to higher advertising revenue.

Tackling disinformation more effectively

What goes on in Europe in the field of tech regulation often inspires countries outside the EU, as was the case with the General Data Protection Regulation, for example. The protection of democracy is a duty of the rule of law. For this reason, in December 2020, the European Commission ushered in a paradigm change in platform regulation with the Digital Services Act (DSA). In future, YouTube, Facebook and the rest must proactively demonstrate that they have taken sufficient measures to counter the spread of illegal content on their services. This EU initiative was, however, unfortunately too late for the German federal election campaign of 2021.

In the fight against disinformation, the DSA and the Code of Practice on Disinformation provide precious little hope as things stand. The DSA requires communication platforms to take down “illegal content” as soon as it is recognised or reported to them, on pain of punitive fines. The accompanying voluntary code of practice, which was introduced in 2018 and updated this year, applies the same principle: it requires companies to either remove or downgrade content considered false and the accounts that disseminate it. This places the responsibility onto the shoulders of private technology companies. Decisions are made behind closed doors as to what constitutes illegal or harmful content. There is no clear definition of what is considered false information and disinformation. The inclusion of hate speech and hate crimes in the EU list of crimes, as set out in the European Commission's 2020 working programme for this autumn, would provide a little more clarity.

Any approach to tackling disinformation should predominantly look at why people create it in the first place and why this content is spread so very virulently on communication platforms. The members of the European Parliament involved in the negotiations on the Digital Services Act must ensure that the problem is tackled at its root.

Bolstering basic democratic values – at legal and political level

According to a study by the Stiftung Neue Verantwortung, 24% of German citizens believe that the media systematically lies to the German population. Another 30% are of the opinion that this is the case at least sometimes. Democracies rely on well-informed citizens, but citizens are often left to their own devices. Whether people are able to understand news, correctly attribute it or question it affects trust in democratic institutions. The media skills and training of the population are therefore an important factor in the democratic debate – alongside independent journalism and a strong civil society.

A massive task awaits Germany's new government. Neither the government nor the platform providers had a clear strategy against disinformation and hate campaigns in the run-up to the elections. A proper concept for protection against targeted hacking attacks from other countries is also needed. Certainly, with the Digital Services Act, an important law on holding platforms responsible is currently being negotiated. But lessons should also be learnt from the experience of these election campaigns on dealing with risks on communication platforms in the future.

What is currently lacking is transparency about the platforms' processes. What is needed is an enforceable right of data access for researchers. Those platforms that previously flew under the radar in the spread of hate and disinformation campaigns, but also online political canvassing, should come in for greater scrutiny by the regulators. This includes messenger services such as Telegram, but also live videos on platforms such as YouTube and TikTok. The parties themselves can also make a contribution in the forthcoming legislative period and undertake to create a code of conduct for fair and transparent election campaigns – not least as far as online political canvassing is concerned.

 

The original German version of this article is available on boell.de.