EU approach to dealing with fake news and disinformation


The spread of disinformation and fake news has been a key concern for EU policy makers in the run-up to May’s elections. In a now familiar narrative, online election meddling affected some of the most significant votes held in Western societies over the past few years, including the 2016 UK referendum on EU membership, the 2016 US presidential elections and the 2017 French national elections.

While EU regulators and Member State governments have put plans in place to prepare for potential disruption, including stricter rules and supervision of social media platforms as well as national awareness campaigns, the risk of manipulation is considered even higher due to the unique character of the elections, which are held separately in each Member State. A main concern is that the integrity of the entire European Parliament could be compromised if online trolls or hackers were to disrupt the elections in just one Member State, which in the current climate of political instability, right-wing activism and Euroscepticism could damage the credibility of the European Union.

The fight against disinformation

Since Russian trolls successfully disrupted the 2016 US elections through a large-scale disinformation campaign, and allegedly influenced the result of the referendum on the UK’s withdrawal from the EU, hostile foreign election intervention has become an increasingly pressing threat for the EU. While the spread of fake news and disinformation is by no means a new phenomenon, the ecosystem created by digital media and online platforms provides unprecedented opportunities for its rapid and largely uncontrolled spread. At the same time, the speed at which disinformation is created and shared continues to outpace EU initiatives to regulate the issue. This is not for lack of trying: fake news does not lend itself easily to regulation, given its proneness to clash with legal, ethical and practical considerations, most notably the freedom of expression. Laws against fake news are often compared to censorship, as they may restrict free speech and could allow politicians to block any damaging information simply by labelling it “fake”. A controversial law introduced by Macron and adopted by the French Parliament in late 2018 has been heavily criticised on the same basis, with the opposition arguing that it would be ineffective and could be used to censor free speech.

Against this background, the EU’s High-Level Expert Group on Fake News advised the Commission against “simplistic solutions”, considering that any form of censorship should be avoided. Instead, it recommended that the Commission’s approach be centred on transparency, providing short-term responses to the most pressing issues and longer-term responses to increase society’s resilience to disinformation.

Commission action plan

In December 2018, the European Commission stepped up its efforts to combat fake news by launching its Action Plan on Fake News. Key actions proposed by the Commission included setting up a Rapid Alert System to facilitate real-time data-sharing between the EU and the Member States, such as warnings of waves of fake news; raising awareness amongst citizens; and the swifter implementation of commitments made by online platforms. The Commission argued earlier this year that, if these measures prove insufficient, it will consider further action, including possible regulation.

During the March Council Summit, the European Council discussed the Commission’s efforts to combat fake news as set out in the Action Plan, and called for continued and coordinated efforts to combat disinformation. The Commission and the Romanian Council Presidency will monitor the elections and provide a “lessons learned” report for the June European Council, which will inform a longer-term response.

Online platforms

Online platforms are at the core of the EU’s fake news policy in the run-up to the elections. Experience from previous hostile disinformation campaigns, as well as a study conducted by the Commission’s research service, the JRC, indicated the important role that online platforms play in the spread of disinformation. Results from a survey launched by the JRC indicated that over 50% of respondents use social media platforms such as Twitter and Facebook as a daily news outlet. These platforms generally have no third-party technologies in place to verify the authenticity of posted information and do not conduct systematic fact-checking or apply editorial judgement. As a consequence, it has been estimated that between 2013 and 2018, Russian and Iranian “troll farms” shared more than 10 million tweets, including tweets carrying the hashtag #ReasonsToLeaveEU.

To tackle this problem, platforms such as Twitter, Facebook and Google voluntarily signed the Code of Practice on Disinformation, committing to actions such as ensuring the transparency of political advertising and stepping up efforts to close fake accounts. In its March monthly update on progress made, the Commission concluded that more action is required in all key areas, and that more systematic information will be necessary for the Commission to understand the measures taken against bots and fake accounts. It also encouraged platforms to work together with researchers and fact-checkers to build a comprehensive and independent picture of disinformation trends. The Commission aims to carry out a full assessment of the Code of Practice by the end of 2019. In the meantime, the next progress report is expected to be published in mid-April.


Member State preparedness

Each Member State is in charge of organising the European elections domestically. Some Member States have adopted comprehensive fake news strategies and taken measures to prepare for potential disruption. These include the French law on disinformation, which allows candidates or political parties to seek a court injunction preventing the publication of fake information in the three months ahead of an election; the German Network Enforcement Act, which imposes fines of up to €50 million on social media companies that fail to remove illegal content within 24 hours of receiving a complaint; and Italy’s introduction of school curricula teaching children to distinguish between false and credible information. Yet other Member States are far less prepared. The stakes for the EU are high: if the election results of a specific Member State are considered so disrupted as to threaten the integrity of the outcome, re-elections may need to be held and the Parliament may not be able to assemble.