SMARTeD Against fake news

Fighting ‘fake news’ online: How soldiers in Latvia got fooled by bots
Thu, 17 Oct 2019
By Hannes Heine | Der Tagesspiegel | translated by Daniel Eck

When NATO’s Centre for Strategic Communication in Riga discovered how easy it was to dupe its soldiers online, it started looking for ways of countering false information, which comes, in large part, from Russia. EURACTIV’s media partner der Tagesspiegel reports.

The Latvian forest, not far from the Russian border. Thousands of soldiers from different NATO member states are training there to ensure continued military presence in Eastern Europe.

But during manoeuvres spanning several days, some soldiers winding down with their mobile phones stumbled across a professional-looking website claiming to be designed by and for soldiers. There, the men chatted about the army, the weather and life in general. A few of them also ordered T-shirts on the site, agreeing to give their home addresses for delivery.

On Tinder, a popular dating app, some even communicated with a woman, sending pictures of themselves in uniform. One evening, two soldiers even arranged to meet the virtual woman. They both left their posts for her, a move which proved to be a mistake.

The website and the Tinder profile turned out to be a trap – a test carried out by a team of NATO experts on behalf of the Latvian army in the summer of 2018 to identify weaknesses in its own ranks. Soldiers were prompted to send their addresses, spread photos of a manoeuvre and even leave their posts, all with little effort.

For NATO cyber experts, the experiment showed the same could happen anywhere. Who could rule out the possibility that soldiers might betray their positions, or be blackmailed over the information they spread online?

The website for soldiers and the Tinder profile were the work of a group of men and women in a low-rise building in Riga, in NATO’s Centre for Strategic Communication, which is supported by Germany.

Janis Sarts, who previously worked for the Latvian defence ministry, is now leading a group of 50 people in the NATO centre, which analyses disinformation campaigns, also called ‘Fake News’.

According to Sarts, it is essential to develop strategies to win virtual battles that are often not recognised. With the test, the NATO cyber experts wanted to show what is possible with smartphones alone: “We want to shake things up,” he said.

Virtual battle for ‘sovereignty of interpretation’

In years gone by, anyone who wanted to use propaganda and ‘muddy the waters’ within the enemy’s ranks had to print flyers, deceive journalists, or send their agents.

Today, it only takes a few minutes to create a target-group-specific profile in a popular online forum. People are frequently misled online, usually for financial gain, such as credit card fraud.

These days, however, what appears increasingly significant is the virtual struggle for so-called ‘sovereignty of interpretation’, a notion which can be boiled down to one question: whose narrative is true?

In Latvia’s neighbouring country Lithuania, a rumour started spreading in 2017 via social media and chain e-mails that German soldiers who were part of NATO had raped a 17-year-old girl. Many suspect Russian sources were behind the rumour.

Although Lithuania’s government made it clear that there was no such incident, the rumour persisted.

“Deception and lies are spreading faster than ever before,” Sarts warned. According to him, “it’s about sensitising the population and governments to take a closer look”.

German Chancellor Angela Merkel is aware of the risks. The German military needs to learn “how to deal with so-called ‘fake news’ as part of hybrid warfare,” she said at the opening of the headquarters of the Federal Intelligence Service in Berlin last February. According to Merkel, this will be essential for “Germany’s security and social cohesion in the future”.

The notion of ‘hybrid warfare’ is broad. It describes all forms of open and covert, regular and irregular, military and non-military fighting. In modern conflicts, it is sometimes even difficult to make a clear distinction between “war” and “peace”.

Experts examine forums and debates

According to Merkel, much false reporting is state propaganda. For Janis Sarts, Russia is the most active in this respect.

However, experts in Brussels and Helsinki also stressed that many states, movements and criminal cartels are willing and capable of disinformation campaigns. For instance, Islamist groups are said to be systematically spreading ‘fake news’ to serve their interests.

This is why NATO experts in Riga are paying close attention to online forums and debates.

If a multitude of comments and links are posted at the same time, this could mean a bot is at work. If a commentator systematically weighs in on very different topics, such as elections in Brazil and Latvia’s football league, this could also suggest that more than one person is behind the account.
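The two heuristics described here can be sketched in a few lines of code. The thresholds and function names below are illustrative assumptions for the sake of the example, not values or tools used by the NATO centre:

```python
from collections import Counter

def flag_account(posts, burst_window=60, burst_threshold=20, topic_threshold=5):
    """Apply two simple heuristics to an account's posts, given as
    (timestamp_in_seconds, topic) pairs. Thresholds are illustrative."""
    times = sorted(t for t, _ in posts)
    topics = Counter(topic for _, topic in posts)
    # Heuristic 1: many posts inside one short time window suggests automation.
    for i in range(len(times)):
        j = i + burst_threshold - 1
        if j < len(times) and times[j] - times[i] <= burst_window:
            return "possible bot (posting burst)"
    # Heuristic 2: activity across many unrelated topics suggests a team,
    # not a single person.
    if len(topics) >= topic_threshold:
        return "possibly multiple operators (topic spread)"
    return "no flag"
```

For instance, an account that fires off 20 comments within a minute trips the first rule, while one that comments on five unrelated subjects trips the second. Real coordinated-behaviour detection of course combines far more signals.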

And what about pictures? Do the photos even match the profile? For instance, the profile of an alleged Russian citizen who posted prolifically turned out to use a profile picture of Canadian actress Nina Dobrev.

However, dubious online actors do not always act in the interests of states. In Russia alone, according to one estimate, some 300 companies sell “likes” and “followers” on social media, mostly for “commercial manipulation”, according to the NATO experts. For example, these are used to draw attention to certain restaurants, traders and even doctors’ practices.

Meanwhile, the NATO centre in Riga is growing, and talks with other countries are currently underway. Already today, the centre welcomes non-NATO members, including Sweden and Finland.

In Riga, however, the information war is only being researched and documented – it is not actually being fought.

But when hostile actors deliberately spread unrest online, they may provoke manoeuvring mistakes, accidents and even riots, meaning affected states would then have to react.

For example, Latvia, as a NATO member state, could rely on Article 5 of the NATO Treaty in case of a virtual attack, meaning other NATO members would have to provide assistance.

However, even hardliners in Brussels and Washington would not want to risk military strikes based on ‘fake news’ allegations, even well-documented ones. In any case, no common approach on the issue is currently in sight, either among the 29 NATO members or among the 28 EU member states.

Russia’s propaganda links up with real problems

Only governments themselves can change this. That is why they have set up the European Centre of Excellence for Countering Hybrid Threats in Helsinki. Twenty-six states, including formally neutral ones, take part in the think tank’s activities. This centre is also growing, and Portugal plans to join soon.

In Helsinki, the young Lithuanian Vytautas Kersanskas is currently examining what measures should be taken in the event of propaganda attacks. Could this involve counter-campaigns, the imposition of sanctions, or the intelligence services?

“We want to develop a catalogue of measures in the coming months to see how to respond appropriately based on the type of attack,” Kersanskas said.

It is no coincidence that Latvians and Lithuanians, of all people, are disturbed by Russia’s policies.

Until 1991, the Baltic states – Latvia and Lithuania as well as Estonia – belonged to the Soviet Union as three republics that many considered Russian-dominated. This is not only because Moscow was the Soviet capital, but also because, after 1945, many workers and officials’ families moved from Russia to the Baltic states.

The Baltic states joined NATO in 2004 and Moscow has felt provoked ever since.

In Riga, people have been worried about a Russian invasion similar to Moscow’s intervention in Ukraine. In Latvia, 25% to 30% of the inhabitants are of Russian origin.

And Latvia’s governments don’t make it easy for them. Around 200,000 Russians living in the country still do not have Latvian citizenship. Moreover, 1,000 Latvians marched across Riga to commemorate Latvian veterans who fought alongside the Waffen-SS, making it easier for Moscow to speak of anti-Russian discrimination.

Viktors Makarovs is a security advisor to the Latvian government – and a Russian. As the son of Russian-speaking Soviet citizens, Makarovs also had no Latvian citizenship after the fall of communism.

“But the procedure for obtaining citizenship is simple,” Makarovs said, adding that “not everything the Russian media spreads about Latvia is wrong”. According to Makarovs, Russian propaganda ties in with real problems, such as the recent large-scale emigration of Balts and unemployment that is particularly widespread in the country’s Russian-speaking communities.

The question, therefore, remains: what kind of information actually enriches the debate, and what can be considered false?

Sometimes people lie

According to Rita Rudusa, a reporter known beyond Latvia, the core of such campaigns is often correct – but facts are then exaggerated or taken out of context to paint a particular picture.

The Kremlin often portrays Latvia as a “failed state” and Western European states as threatened by disintegration. With this in mind, the question arises as to what qualifies as a legitimate debate and what can be described as propaganda.

But Russian sources often convey plain lies, according to Rudusa. For instance, Russian media recently published pictures of a forest strewn with plastic bottles, reporting that Canadian soldiers had defaced a forest in Latvia. Research showed that these plastic bottles are not even sold in Latvia, only in Russia.

In Riga and Helsinki, they say in unison that the West should not restrict freedoms online, which they ultimately see as freedom of opinion and of the press. However, the regulation of social media should nevertheless be discussed, because anyone who reads ‘fake news’ on social media automatically receives articles, clips and comments that follow from it.

This is ensured by algorithms used on Facebook, Twitter, Instagram, YouTube and Google, which have been programmed to provide users with similar content.
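The platforms’ actual ranking systems are proprietary, but the mechanism the article describes is broadly that of content-based filtering: recommend items whose features overlap with what the user has already consumed. The sketch below is a minimal, hypothetical illustration of that idea – the tags, titles and `recommend` helper are all invented for the example:

```python
def jaccard(a, b):
    """Similarity between two tag sets: size of overlap over size of union."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(seen_tags, catalogue, k=2):
    """Rank catalogue items (title -> tags) by tag overlap with what the
    user has already read, and return the k closest titles. A reader of
    one piece of 'fake news' is thus steered toward similar pieces."""
    scored = sorted(catalogue.items(),
                    key=lambda item: jaccard(seen_tags, item[1]),
                    reverse=True)
    return [title for title, _ in scored[:k]]
```

This is the self-reinforcing loop the article points to: the more similar content a user clicks, the more of it the ranking surfaces, whether the content is accurate or not.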

These companies are based in the US, but standards and regulations already apply to publishing houses, banks and the car industry, Sarts noted. Perhaps similar rules could also be applied to the internet giants.

[Edited by Frédéric Simon]


A Europe that protects: EU reports on progress in fighting disinformation ahead of European Council
Tue, 08 Oct 2019
European Commission – Press release

Today the Commission and the High Representative report on the progress achieved in the fight against disinformation and the main lessons drawn from the European elections, as a contribution to the discussions by EU leaders next week.

Protecting our democratic processes and institutions from disinformation is a major challenge for societies across the globe. To tackle this, the EU has demonstrated leadership and put in place a robust framework for coordinated action, with full respect for European values and fundamental rights. Today’s joint Communication sets out how the Action Plan against Disinformation and the Elections Package have helped to fight disinformation and preserve the integrity of the European Parliament elections. 

High Representative/Vice President Federica Mogherini, Vice-President for the Digital Single Market Andrus Ansip, Commissioner for Justice, Consumers and Gender Equality Věra Jourová, Commissioner for the Security Union Julian King, and Commissioner for the Digital Economy and Society Mariya Gabriel said in a joint statement:

“The record high turnout in the European Parliament elections has underlined the increased interest of citizens in European democracy. Our actions, including the setting-up of election networks at national and European level, helped in protecting our democracy from attempts at manipulation.

We are confident that our efforts have contributed to limiting the impact of disinformation operations, including from foreign actors, through closer coordination between the EU and Member States. However, much remains to be done. The European elections, after all, were not free from disinformation; we should not accept this as the new normal. Malign actors constantly change their strategies. We must strive to be ahead of them. Fighting disinformation is a common, long-term challenge for EU institutions and Member States.

Ahead of the elections, we saw evidence of coordinated inauthentic behaviour aimed at spreading divisive material on online platforms, including through the use of bots and fake accounts. So online platforms have a particular responsibility to tackle disinformation. With our active support, Facebook, Google and Twitter have made some progress under the Code of Practice on disinformation. The latest monthly reports, which we are publishing today, confirm this trend. We now expect online platforms to maintain momentum and to step up their efforts and implement all commitments under the Code.”

While it is still too early to draw final conclusions about the level and impact of disinformation in the recent European Parliament elections, it is clear that the actions taken by the EU – together with numerous journalists, fact-checkers, platforms, national authorities, researchers and civil society – have helped to deter attacks and expose attempts at interfering in our democratic processes. Increased public awareness made it harder for malicious actors to manipulate the public debate.

In particular, EU action focused on four complementary strands: 

  1. The EU has strengthened its capabilities to identify and counter disinformation, via the Strategic Communication Task Forces and the EU Hybrid Fusion Cell in the European External Action Service. It has also improved the coordinated response by setting up a Rapid Alert System to facilitate the exchange of information between Member States and the EU institutions.
  2. The EU worked with online platforms and industry through a voluntary Code of Practice on disinformation to increase the transparency of political communications and to prevent the manipulative use of their services, ensuring users know why they see specific political content and ads, where they come from and who is behind them.
  3. The Commission and the High Representative, in cooperation with the European Parliament, helped increase awareness and resilience to disinformation within society, notably through more dissemination of fact-based messaging and renewed efforts to promote media literacy.
  4. The Commission has supported Member States’ efforts to secure the integrity of elections and strengthen the resilience of the Union’s democratic systems. The establishment of election networks at EU and national level, with links to the Rapid Alert System, improved cooperation on potential threats.

However, more remains to be done to protect the EU’s democratic processes and institutions. Disinformation is a rapidly changing threat. The tactics used by internal and external actors, in particular linked to Russian sources, are evolving as quickly as the measures adopted by states and online platforms. Continuous research and adequate human resources are required to counter new trends and practices, to better detect and expose disinformation campaigns, and to raise preparedness at EU and national level. 

Update by online platforms under the Code of Practice

Online platforms have a particular responsibility in tackling disinformation. Today the Commission also publishes the latest monthly reports by Google, Twitter and Facebook under the self-regulatory Code of Practice on Disinformation. The May reports confirm the trend of previous Commission assessments. Since January, all platforms have made progress with regard to the transparency of political advertising and public disclosure of such ads in libraries that provide useful tools for the analysis of ad spending by political actors across the EU. Facebook has taken steps to ensure the transparency of issue-based advertising, while Google and Twitter need to catch up in this regard.

Efforts to ensure the integrity of services have helped narrow the scope for attempted manipulation targeting the EU elections, but platforms need to explain better how the removal of bots and fake accounts has limited the spread of disinformation in the EU. Google, Facebook and Twitter reported improvements to the scrutiny of ad placements to limit malicious click-baiting practices and reduce advertising revenues for those spreading disinformation. However, insufficient progress was made in developing tools to increase the transparency and trustworthiness of websites hosting ads.

Despite the achievements, more remains to be done: all online platforms need to provide more detailed information allowing the identification of malign actors and targeted Member States. They should also intensify their cooperation with fact checkers and empower users to better detect disinformation. Finally, platforms should give the research community meaningful access to data, in line with personal data protection rules. In this regard, the recent initiative taken by Twitter to release relevant datasets for research purposes opens an avenue to enable independent research on disinformation operations by malicious actors. Furthermore, the Commission calls on the platforms to apply their political ad transparency policies to upcoming national elections.

Next Steps

As set out in its conclusions in March, the European Council will come back to the issue of protecting elections and fighting disinformation at its June Summit. Today’s report will feed into this debate by EU leaders who will set the course for further policy action.

The Commission and the High Representative remain committed to continuing their efforts to protect the EU’s democracy from disinformation and manipulation. Later this year, the Commission will report on the implementation of the elections package and assess the effectiveness of the Code of Practice. On this basis, the Commission may consider further actions to ensure and improve the EU’s response to the threat.


The European Union has been actively tackling disinformation since 2015. Following a decision of the European Council in March 2015, the East StratCom Task Force was set up in the European External Action Service (EEAS) to challenge Russia’s ongoing disinformation campaigns. In 2016, the Joint Framework on countering hybrid threats was adopted, followed in 2018 by the Joint Communication on increasing resilience and bolstering capabilities to address hybrid threats.

In April 2018, the Commission outlined a European approach and self-regulatory tools to tackle disinformation online. In October 2018, the Code of Practice was signed by Facebook, Google, Twitter and Mozilla, as well as the trade associations representing online platforms, the advertising industry and advertisers. In addition, Facebook, Google and Twitter committed to reporting monthly on measures taken ahead of the European Parliament elections. The Commission, with support from the European Regulators Group for Audiovisual Media Services (ERGA), closely monitored progress and published monthly assessments together with the submitted reports. On 22 May, Microsoft also joined the Code of Practice and subscribed to all its commitments.

The Code of Practice goes hand-in-hand with the Recommendation included in the election package announced by President Juncker in the 2018 State of the Union Address to ensure free, fair and secure European Parliament elections. The measures include greater transparency in online political advertisements and the possibility to impose sanctions for the illegal use of personal data to influence the outcome of the European elections. Member States were also advised to set up a national election cooperation network and to participate in a European election network.


Kariņš: European-wide regulation for social media should be considered
Tue, 18 Jun 2019
Prime Minister Krišjānis Kariņš, in his speech at “The Riga StratCom Dialogue 2019”, stressed the need to introduce European-level regulation for social media platforms.

“Social media platforms are used throughout the world in ways their founders never intended. On the one hand, you can find groups of people who think like you and maybe share your hobbies; on the other hand, social media can be used in ways that undermine our society as such. There are states that are in the business of doing their best to undermine our societies – to create discord among people and unrest in society,” said K. Kariņš.

Although the EU regulates many spheres, the Prime Minister acknowledged that the sphere of social media is not regulated.

“I believe that there is a fine line between freedom of speech, free information flow and responsibility for Internet content and social media platforms. While avoiding censorship, we have to consider clever, European-wide legislation to bring the social media platforms under the umbrella of responsibility of what sorts of information they are disseminating. We have to introduce it in order to defend democracy, our values and way of life. New technologies require new thinking,” stressed the Prime Minister.


SMARTeD survey results presented at a media literacy event
Fri, 12 Apr 2019
As part of the European Media Literacy Week, a conversation on media literacy took place on 21 March at the Trubar Literature House in Ljubljana. The event was organised by the Slovene Association of Journalists and Časoris – Slovenia’s newspaper for kids.

Participants from the media, government, education and non-governmental organisations addressed the importance of media literacy in a society in which fake news and other forms of disinformation influence election results and encourage hatred and intolerance.

During the conversation, Simon Delakorda from the Institute for Electronic Participation presented the SMARTeD project survey results. On average, 46% of the population in the Czech Republic, Estonia, France, Greece, Latvia and Slovenia cannot identify disinformation. Anonymous social media accounts and politicians, followed by political parties, are the agents most likely to create and disseminate disinformation. The survey results suggest promoting media and information literacy and encouraging critical thinking about the origin of information on the internet.

Invited speakers highlighted that more focus should be given to educating young people about how digital technologies function and how they affect daily life. Participants also stressed that, in a world of information overabundance, and at a time when disinformation and fake news are eroding people’s trust in the media and other public institutions, the ability to find and use credible information is of crucial importance.

At the end of the event, the Slovene translation of the online game Bad News was presented by Časoris editor Dr Sonja Merljak Zdovc.

Statement on the Code of Practice against disinformation: Commission asks online platforms to provide more details on progress made
Fri, 01 Mar 2019
European Commission – Statement

Brussels, 28 February 2019

The European Commission published reports by Facebook, Google and Twitter covering the progress made in January 2019 on their commitments to fight disinformation. These three online platforms are signatories of the Code of Practice against disinformation and have been asked to report monthly on their actions ahead of the European Parliament elections in May 2019.

More specifically, the Commission asked to receive detailed information to monitor progress on the scrutiny of ad placement, transparency of political advertising, closure of fake accounts and marking systems for automated bots. Vice-President for the Digital Single Market Andrus Ansip, Commissioner for Justice, Consumers and Gender Equality Věra Jourová, Commissioner for the Security Union Julian King, and Commissioner for the Digital Economy and Society Mariya Gabriel said in a joint statement:

“The online platforms, which signed the Code of Practice, are rolling out their policies in Europe to support the integrity of elections. This includes better scrutiny of advertisement placements, transparency tools for political advertising, and measures to identify and block inauthentic behaviour on their services.

However, we need to see more progress on the commitments made by online platforms to fight disinformation. Platforms have not provided enough details showing that new policies and tools are being deployed in a timely manner and with sufficient resources across all EU Member States. The reports provide too little information on the actual results of the measures already taken.

Finally, the platforms have failed to identify specific benchmarks that would enable the tracking and measurement of progress in the EU. The quality of the information provided varies from one signatory of the Code to another depending on the commitment areas covered by each report. This clearly shows that there is room for improvement for all signatories.

The electoral campaigns ahead of the European elections will start in earnest in March. We encourage the platforms to accelerate their efforts, as we are concerned by the situation. We urge Facebook, Google and Twitter to do more across all Member States to help ensure the integrity of the European Parliament elections in May 2019.

We also encourage platforms to strengthen their cooperation with fact-checkers and academic researchers to detect disinformation campaigns and make fact-checked content more visible and widespread.”

Main outcomes of the signatories’ reports:

  • Facebook has not reported on results of the activities undertaken in January with respect to scrutiny of ad placements. It had earlier announced that a pan-EU archive for political and issue advertising would be available in March 2019. The report provides an update on cases of interference from third countries in EU Member States, but does not report on the number of fake accounts removed due to malicious activities targeting specifically the European Union.
  • Google provided data on actions taken during January to improve scrutiny of ad placements in the EU, divided per Member State. However, the metrics supplied are not specific enough and do not clarify the extent to which the actions were taken to address disinformation or for other reasons (e.g. misleading advertising). Google published a new policy for ‘election ads’ on 29 January, and will start publishing a Political Ads Transparency Report as soon as advertisers begin to run such ads. Google has not provided evidence of concrete implementation of its policies on integrity of services for the month of January.
  • Twitter did not provide any metrics on its commitments to improve the scrutiny of ad placements. On political ads transparency, contrary to what was announced in the implementation report in January, Twitter postponed the decision until the February report. On integrity of services, Twitter added five new account sets, comprising numerous accounts in third countries, to its Archive of Potential Foreign Operations, which are publicly available and searchable, but did not report on metrics to measure progress.

Next steps

Today’s reports cover measures taken by the online companies in January 2019. The next monthly report, covering activities in February, will be published in March 2019. This will allow the Commission to verify that effective policies to ensure the integrity of electoral processes are in place before the European elections in May 2019.

By the end of 2019, the Commission will carry out a comprehensive assessment of the Code’s initial 12-month period. Should the results prove unsatisfactory, the Commission may propose further actions, including of a regulatory nature.


The monitoring of the Code of Practice is part of the Action Plan against disinformation that the European Union adopted last December to build up capabilities and strengthen cooperation between Member States and EU institutions to proactively address the threats posed by disinformation.

The reporting signatories committed to the Code of Practice in October 2018 on a voluntary basis. In January 2019 the European Commission published the first reports submitted by signatories of the Code of Practice against disinformation. The Code aims at achieving the objectives set out by the Commission’s Communication presented in April 2018 by setting a wide range of commitments articulated around five areas:

  • Disrupt advertising revenue for accounts and websites misrepresenting information and provide advertisers with adequate safety tools and information about websites purveying disinformation.
  • Enable public disclosure of political advertising and make effort towards disclosing issue-based advertising.
  • Have a clear and publicly available policy on identity and online bots and take measures to close fake accounts.
  • Offer information and tools to help people make informed decisions, and facilitate access to diverse perspectives about topics of public interest, while giving prominence to reliable sources.
  • Provide privacy-compliant access to data to researchers to track and better understand the spread and impact of disinformation.

Between January and May 2019, the Commission is carrying out a targeted Monthly Intermediate Monitoring of the platform signatories’ actions to implement Code commitments that are the most relevant and urgent to ensure the integrity of elections. Namely: scrutiny of ad placements (Commitment 1); political and issue-based advertising (Commitments 2 to 4); and integrity of services (Commitments 5 & 6).

The Code of Practice also goes hand-in-hand with the Recommendation included in the election package announced by President Juncker in his 2018 State of the Union Address to ensure free, fair and secure European Parliament elections. The measures include greater transparency in online political advertisements and the possibility to impose sanctions for the illegal use of personal data to deliberately influence the outcome of the European elections. As a result, Member States have set up national election cooperation networks of relevant authorities – such as electoral, cybersecurity, data protection and law enforcement authorities – and appointed contact points to participate in a European-level election cooperation network. The first meeting of this network took place on 21 January 2019 and a second one on 27 February 2019.

How can you participate? Join the national workshops!
Tue, 12 Feb 2019
The Riga conference on the disinformation and fake news challenge to democracy, as well as the international survey on disinformation, was only a kick-off activity for the year-long project “Smart E-democracy Against Fake News (SMARTeD)”.

On 30 November, eight organisations from seven EU countries came together to develop a methodology for workshops that will be put into practice across Europe next year! The workshops aim to raise citizens’ practical awareness of skilful participation through eDemocracy tools.

🔔 Follow our activities and stay engaged!

“Fake news is not the problem – people are.”
Tue, 08 Jan 2019
On 29 November 2018, the organisation ManaBalss hosted the international conference “Disinformation and fake news challenge to democracy”! What a great event it was, with lots of knowledgeable speakers, fresh ideas and an even better audience!

During the conference, the preliminary results of the survey on disinformation and fake news (carried out in 6 EU member states – CZ, EE, FR, LV, SL, GR) were also presented. According to the report, the highest percentage of people who cannot tell the difference between fake and real news is in Greece – 56.1%.

Conference participants pointed out that, while disinformation is visibly present in the information space around us, the first step towards determining the “truth” and expecting informed and reasoned behaviour from society is to raise awareness that this problem exists.

Self-regulated quality mechanisms used by content makers are another effective way to deal with this challenge. This means taking stronger responsibility for published content, as well as building the collective knowledge and ability to monitor and control the presence of disinformation in publicly available information.
