In the Battle Against Fake News, the Bots May Be Winning
Andrea Stroppa on the state of the ongoing struggle against fake news.
During three recent Congressional hearings, lawyers from top tech companies were asked about the role their firms played in the 2016 United States presidential election.
Propaganda, disinformation and misinformation messages on Facebook and Instagram reached approximately 146 million American citizens – almost half the population – Facebook revealed in prepared testimony.
Twitter accounts linked to Russia "generated approximately 1.4 million automated, election-related tweets, which collectively received approximately 288 million impressions" between September 1 and November 15, 2016, according to company executives.
These revelations echo my own research from November 2016. The then Republican candidate Donald Trump’s Instagram account was amassing followers at a striking rate, but significant tranches of them were bots or Russia-linked accounts – even though there was no clear evidence of direct involvement by the Russian government.
Hatred, confusion and disarray
These figures give rise to many urgent questions, such as whether these social media activities influenced the outcome of the election. In a wider context, consider how the use of social media tools has changed since their inception. The same media that gained traction by promoting freedom and democracy around the world – during the 2011 Arab Spring, for example – now seem ideal tools to manipulate opinions and spread hatred, confusion and disarray. Public perception of social media has shifted accordingly in recent months.
US law-makers, academics and tech experts are pressuring Facebook, Twitter and Google to prevent such digital propaganda and disinformation campaigns, and rightly so. But there are two significant issues. Firstly, from a technical standpoint, it is difficult to stop these “botnets” – large networks of fake accounts, run by dedicated software, that swamp user newsfeeds and timelines with fake news or deceptive posts.
Secondly, the value of social networks such as Facebook, Instagram and Twitter is based on the volume of their active users. Therefore, the primary and constant goal of these tech companies is to expand their user base. (This is why Facebook is so eager to enter the Chinese market.)
These networks want to give access to people living under repressive governments that censor the internet. So they support techniques that circumvent national surveillance systems, such as proxies, VPNs and other anonymisation tools.
This makes life very easy for botnet creators, who are developing sophisticated software to imitate real users. For example, in order to bypass the phone verification typically required by social media platforms, botnet owners use virtual phone numbers and private IP proxies. And thanks to the work of some tech experts who reverse-engineer apps to find out how their deepest processes work, these ‘smart’ bots can quickly adapt to evade new security measures. Such operations are cheap to run and available to almost any organisation or government.
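To make the detection side of this cat-and-mouse game concrete, here is a minimal sketch, in Python, of how a platform might screen sign-ups against a blocklist of IP ranges associated with proxy or VPN providers. The CIDR blocks and the `looks_like_proxy` helper are purely illustrative assumptions, not any platform's actual implementation; real blocklists are far larger and constantly updated.

```python
import ipaddress

# Hypothetical blocklist of CIDR ranges attributed to proxy/VPN providers.
# These are reserved documentation ranges used here as stand-ins.
KNOWN_PROXY_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def looks_like_proxy(ip_string: str) -> bool:
    """Return True if the address falls inside a known proxy range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in KNOWN_PROXY_RANGES)

# Flag a sign-up coming from a listed range.
print(looks_like_proxy("203.0.113.42"))  # True: inside a blocked range
print(looks_like_proxy("192.0.2.1"))     # False: not on the list
```

The weakness of this approach is exactly the one described above: botnet operators rotate through fresh residential proxies faster than blocklists can be updated.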
How to spot a bot
Particularly on Facebook, these propaganda or disinformation campaigns run on pages either professing support for a social cause or simply offering generic entertainment news. How are users to distinguish a genuine page from a fake one? Does this power lie only with Facebook engineers?
One answer may lie in the appearance of a post, such as its format and featured links. By applying data analysis and open-source intelligence, experts could have understood in advance what was actually happening. Facebook, for its part, could simply have applied common sense before accepting Russian rubles for sponsored, politically motivated posts.
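The surface-level signals available to outside analysts can also be combined into simple heuristics. The sketch below scores an account on a few widely discussed indicators (posting rate, account age and the share of posts that are just links); the field names and thresholds are assumptions chosen for illustration, not a validated detection model.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Illustrative fields an outside analyst might collect;
    # not any platform's real API schema.
    posts_per_day: float
    account_age_days: int
    link_share_ratio: float  # fraction of posts that are shared links

def bot_score(acct: Account) -> int:
    """Crude heuristic score: higher means more bot-like.

    Thresholds are assumptions chosen for illustration only.
    """
    score = 0
    if acct.posts_per_day > 50:       # inhumanly high posting rate
        score += 1
    if acct.account_age_days < 30:    # very young account
        score += 1
    if acct.link_share_ratio > 0.9:   # almost nothing but shared links
        score += 1
    return score

suspect = Account(posts_per_day=120, account_age_days=10, link_share_ratio=0.95)
print(bot_score(suspect))  # 3: worth a closer look
```

Real detection systems weigh hundreds of such features statistically; the point here is only that many of the telltale signals are visible from outside the platform.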
On a technical level, the digital propaganda strategies in question mostly rely on botnets and exploiting online communities, using paid content and sharing to disseminate material. In response, social media companies are planning new measures to better manage paid content on their platforms.
An upcoming bipartisan bill is focused on tightening rules for political advertising. However, as respected tech journalists such as Anthony de Rosa have noted, the real problem is that people continue to share links, posts and spam on Facebook pages with a high number of followers.
Dangers of digital propaganda
This is a problem which independent research projects could help with, if the internet giants were only willing to share their internal data. In recent years, it is independent researchers who have shed light on previously unknown issues affecting social media users, such as software bugs, counterfeit item trafficking, phishing campaigns and malware dissemination.
More generally, we are also facing a cultural problem. Broadsheet newspapers, research papers, government inquiries, Congressional hearings, and tech companies have all underscored the dangers of digital propaganda in recent weeks. But some prominent public voices still tend to minimise or dismiss the phenomenon.
Nobody could seriously “believe that a post on FB could ever swing an election”, suggested a well-known commentator at the Italian newspaper Corriere della Sera on Twitter recently. Such glib comments miss the point. Digital propaganda strategies take aim at real issues, such as immigration, economic crisis, terrorism and social inequality, in order to push millions of people surreptitiously towards a particular political agenda or viewpoint.
The revelations from the 2016 US election matter for other countries too. Italy, for example, has its own general election coming up. It has 30 million Facebook users, and 35% of its adult population gets its daily news from that platform (only 14% still rely on newspapers).
Some organised networks have already launched their social media election strategy to influence voters, using propaganda and fake news campaigns. Media pundits are on high alert for what promises to be another challenging situation worthy of global attention.
Andrea Stroppa writes about security and technology for the World Economic Forum. This post first appeared on the Agenda blog.
Image Credit: Zen Skillicorn via Flickr (CC BY-ND 2.0)