Disinformation

A controversial Facebook policy faces a major stress test ahead of UK elections

We begin and end this week’s newsletter with a series of hashtags.

#GetBrexitDone, #RealChange, #StopBrexit — in case you hadn’t seen, it is election season (again) in the United Kingdom. 

Ahead of the December 12 polls, which could determine the manner of Britain's exit from the European Union, the country's leading politicians will visit TV studios and pose awkwardly with nurses, teachers, and factory employees. They will hope to avoid consuming greasy products on camera, unlike then Labour Party leader Ed Miliband, whose campaign was derailed for 48 hours during the 2014 local elections after he struggled with a bacon sandwich.

More seriously, the election also marks an important test for Facebook’s controversial policy of allowing politicians to run false ads on the platform. Facebook says it will not fact-check ads run by British political parties or by the thousands of candidates contesting seats in the House of Commons. But ads from other political groups — the pro-Brexit group Leave.EU, for example — will be fact-checked.

The policy has been in place for a year, but outrage was recently reignited ahead of next year’s U.S. election.

Last week, the EU warned that political disinformation is still rife on Facebook and Twitter and more needs to be done to eliminate it. Ahead of elections in the UK, the European Commission said social media giants must take “serious further steps” to tackle disinformation by the end of the year. If the platforms fail to improve, the Commission has said it may introduce “regulatory or co-regulatory measures”, which could force the companies to share their data more openly. This would allow journalists, developers, and academics to carry out a much closer analysis of disinformation operations.

While the EU praised initiatives by social media giants to tackle the problem, it said large-scale disinformation continues. “Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the code. We cannot accept this as a new normal,” the EU commissioners for justice, security and digital economy said in a joint statement.

In light of Facebook’s policy, British politicians are worried about how disinformation could affect the election. Damian Collins, a Conservative MP who has been leading parliamentary hearings on Facebook, said: “People shouldn’t be able to spread disinformation during election campaigns just because they are paying Facebook to do so.”

The UK election, already one of the country’s most important since World War II because of Brexit, will act as a test site for Facebook’s policy in a major contest. While Facebook has announced a dedicated operations center to monitor and remove activity that breaks its rules, the company says its role is not to judge the veracity of what politicians say.

The Open Knowledge Foundation has been campaigning to stop the spread of disinformation on Facebook and other social media sites. Its chief executive, Catherine Stihler, said: “The social media giants have been at the center of a series of rows about disinformation, particularly in connection with the Brexit referendum, and that simply cannot be allowed to happen once again in the run-up to December’s UK general election.”

My favorite Coda Story this week:

Last week, Rui Zhong wrote about China’s commercial social credit systems, which are often depicted as abusive and Orwellian in the West. She found that in China, social credit is popular and has even been gamified by users. A major development is due next year when the country’s planned nationwide social credit scheme launches. The scoring system will be backed by some 200 million CCTV cameras and supported by artificial intelligence and machine-learning algorithms. For hundreds of millions of Chinese users, it will fundamentally change what it means to be a citizen.

Elsewhere: 

This data analysis examines President Trump’s 11,000 tweets since taking office and offers a comprehensive view of a political leader who spends significant amounts of time circulating conspiracy theories and giving encouragement to extremists and impostors. According to the analysis, fake accounts tied to intelligence services in China, Iran and Russia have directed thousands of tweets at President Trump. Russian accounts have tagged the president more than 30,000 times. A political career that began with the hashtag #FakeBirthCertificate now faces possible ruin under the shadow of another hashtag, #FakeWhistleblower. (New York Times)

Pitch alert: We are seeking story ideas on anti-science movements worldwide. Send your ideas for character-driven narratives explaining how anti-science movements are created, and how they thrive. Write to me: [email protected]


Burhan Wazir

Burhan Wazir is an award-winning journalist and editor based in London. He has previously worked at The Observer, The Times, Al Jazeera and WikiTribune. He lived in the Middle East from 2008 to 2016.

Get in touch via [email protected]
