In a whitepaper by members of Facebook’s security team, the company said it plans to expand its security focus to include attempts to “manipulate civic discourse and deceive people”.
In addition to account hacking, malware, spam and financial scams, the security team will work to counteract more subtle and insidious forms of misuse, including what Facebook terms “information operations” by governments and non-state actors to distort domestic or foreign political sentiment.
These “information operations” are most frequently aimed at achieving a strategic and/or geopolitical outcome using a combination of methods, such as false news, disinformation, or networks of fake accounts aimed at manipulating public opinion, the whitepaper said.
According to the social network’s security team, online information operations have been attempted through Facebook via targeted data collection, content creation and false amplification.
Targeted data collection refers to stealing, and often exposing, non-public information that can provide unique opportunities for controlling public discourse.
To combat this, the Facebook security team said it has long focused on helping people protect their accounts from compromise, and that it closely monitors a range of threats to defend both people on Facebook and the company itself against targeted data collection and account takeover.
Specific steps to counteract targeted data collection include:
- Providing a set of customisable security and privacy features, including multiple options for two-factor authentication and in-product marketing to encourage adoption.
- Notifications to specific people if they have been targeted by sophisticated attackers, with custom recommendations depending on the threat model.
- Proactive notifications to people who have yet to be targeted, but who may be at risk based on the behaviour of malicious actors.
- Where appropriate, working directly with government bodies responsible for election protections to notify and educate people who may be at greater risk.
False amplification is defined as “co-ordinated activity by inauthentic accounts with the intent of manipulating political discussion”, such as discouraging specific parties from participating in discussion or amplifying sensationalistic voices over others.
“We detect this activity by analysing the inauthenticity of the account and its behaviours, and not the content the accounts are publishing,” Facebook said.
The company said it has long invested in both preventing fake account creation and identifying and removing fake accounts. “Through technical advances, we are increasing our protections against manually created fake accounts and using new analytical techniques, including machine learning, to uncover and disrupt more types of abuse.
“We build and update technical systems every day to make it easier to respond to reports of abuse, detect and remove spam, identify and eliminate fake accounts, and prevent accounts from being compromised,” said the Facebook security team.
In France, for example, the social networking firm said it had suspended more than 30,000 fake accounts in the run-up to the French presidential election.
Information operations detected
During the 2016 US presidential election season, Facebook said it responded to several situations that fitted the “pattern of information operations”, but uncovered no evidence of any Facebook accounts being compromised as part of this activity.
However, Facebook said it did see malicious actors using conventional and social media to share information stolen from other sources, such as email accounts, with the intent of harming the reputation of specific political targets.
Private and/or proprietary information was accessed and stolen from systems and services outside of Facebook, dedicated sites hosting this data were registered, and fake personas were created on Facebook and elsewhere to point to and amplify awareness of this data, Facebook said.
“While we acknowledge the ongoing challenge of monitoring and guarding against information operations, the reach of known operations during the US election of 2016 was statistically very small compared to overall engagement on political issues,” the whitepaper said.
Facebook data doesn’t contradict US intelligence
Facebook said it was “not in a position to make definitive attribution to the actors sponsoring this activity” but said its data “does not contradict” the attribution provided by a US national intelligence report that points to Russian involvement in attempts to influence the election.
The Pawn Storm Russian hacking group, which was widely linked to cyber attacks on the Democratic National Committee and Hillary Clinton’s campaign in the 2016 US presidential election, has more recently been found to be targeting French presidential candidate Emmanuel Macron, according to a report by security firm Trend Micro. Pawn Storm is also believed to have targeted the German political party Christian Democratic Union (CDU), the Turkish parliament, the parliament in Montenegro, and the World Anti-Doping Agency (WADA).
Facebook said it recognises that in today’s information environment, social media plays a significant role in facilitating communications, and in some circumstances the risk of malicious actors seeking to use the site to mislead people or otherwise promote “inauthentic communications” can be higher.
To counteract this, Facebook said it is taking a “multifaceted approach” including:
- Continually studying and monitoring the efforts of those who try to negatively manipulate civic discourse on Facebook.
- Innovating in the areas of account access and account integrity, including identifying fake accounts and expanding its security and privacy settings and options.
- Participating in multi-stakeholder efforts to notify and educate at-risk people of the ways they can best keep their information safe.
- Supporting civil society programmes around media literacy.