After the Cambridge Analytica scandal, Facebook announces election security improvements

Pierluigi Paganini April 02, 2018

After the Cambridge Analytica case, Facebook announced security improvements to prevent future interference with elections.

Facebook is under fire after the revelation of the Cambridge Analytica case and its role in the alleged interference with the 2016 US presidential election.

While analysts question possible interference with other events, including the Brexit vote, Facebook is now looking to prevent such operations against any kind of election.

Guy Rosen, Facebook VP of Product Management, declared that everyone is responsible for preventing the same kind of attack on democracy and announced the significant effort Facebook will devote to doing so.

“By now, everyone knows the story: during the 2016 US election, foreign actors tried to undermine the integrity of the electoral process. Their attack included taking advantage of open online platforms — such as Facebook — to divide Americans, and to spread fear, uncertainty and doubt,” said Guy Rosen.

“Today, we’re going to outline how we’re thinking about elections, and give you an update on a number of initiatives designed to protect and promote civic engagement on Facebook.”

Facebook plans to improve the security of elections in four main areas: combating foreign interference, removing fake accounts, increasing ads transparency, and reducing the spread of false news.

Alex Stamos, Facebook’s Chief Security Officer, added that the company is continuously fighting “fake news,” explaining that the term is used to describe many malicious activities, including:

  1. Fake identities – this is when an actor conceals their identity or takes on the identity of another group or individual;
  2. Fake audiences – so this is using tricks to artificially expand the audience or the perception of support for a particular message;
  3. False facts – the assertion of false information; and
  4. False narratives – which are intentionally divisive headlines and language that exploit disagreements and sow conflict. This is the most difficult area for us, as different news outlets and consumers can have completely different views on what an appropriate narrative is even if they agree on the facts.

“When you tease apart the overall digital misinformation problem, you find multiple types of bad content and many bad actors with different motivations,” said Alex Stamos.

“Once we have an understanding of the various kinds of ‘fake’ we need to deal with, we then need to distinguish between motivations for spreading misinformation, because our ability to combat different actors is based upon preventing their ability to reach these goals,” said Stamos.

“Each country we operate in and election we are working to support will have a different range of actors, with techniques that are customized for that specific audience. We are looking ahead by studying each upcoming election and working with external experts to understand the actors involved and the specific risks in each country.”

Stamos highlighted the importance of profiling the attackers; he distinguished profit-motivated organized groups, ideologically motivated groups, state-sponsored actors, people who enjoy causing chaos and disruption, and groups driven by multiple motivations.

Facebook is working to distinguish between the motivations for spreading misinformation and to implement the necessary countermeasures for each.


Facebook already invests significant effort in combating fake news and any interference with elections.

Samidh Chakrabarti, Product Manager at Facebook, explained that the social media giant is currently blocking millions of fake accounts each day, with a specific focus on pages created to spread inauthentic civic content.

Chakrabarti explained that the number of pages and domains used to share fake news is increasing; in response, Facebook is doubling the number of people working on safety issues from 10,000 to 20,000. This hard job is made possible largely by sophisticated machine learning systems.

“Over the past year, we’ve gotten increasingly better at finding and disabling fake accounts. We’re now at the point that we block millions of fake accounts each day at the point of creation, before they can do any harm,” said Chakrabarti.

“Rather than wait for reports from our community, we now proactively look for potentially harmful types of election-related activity, such as Pages of foreign origin that are distributing inauthentic civic content. If we find any, we then send these suspicious accounts to be manually reviewed by our security team to see if they violate our Community Standards or our Terms of Service. And if they do, we can quickly remove them from Facebook.”
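The workflow Chakrabarti describes is essentially a triage pipeline: an automated pass scores accounts at scale, and anything suspicious is queued for human review against the Community Standards. The Python sketch below is purely illustrative of that structure; the Page fields, the toy scoring heuristic, and the threshold are all assumptions standing in for Facebook's actual large-scale machine learning classifiers, which are not public.

```python
# Minimal, hypothetical sketch of a "flag then manually review" triage
# pipeline like the one described in the quote. None of these names,
# signals, or thresholds come from Facebook; they are placeholders.
from dataclasses import dataclass
from typing import List


@dataclass
class Page:
    page_id: str
    country_of_origin: str
    shares_civic_content: bool
    account_age_days: int


def suspicion_score(page: Page, audience_country: str = "US") -> float:
    """Toy heuristic standing in for an ML model's risk score."""
    score = 0.0
    # Foreign origin + civic content mirrors the example in the quote.
    if page.country_of_origin != audience_country and page.shares_civic_content:
        score += 0.6
    # Newly created accounts are weighted as riskier.
    if page.account_age_days < 30:
        score += 0.3
    return score


def triage(pages: List[Page], threshold: float = 0.5) -> List[Page]:
    """Automated pass: anything above the threshold is queued for the
    manual-review step, rather than being removed automatically."""
    return [p for p in pages if suspicion_score(p) >= threshold]


if __name__ == "__main__":
    pages = [
        Page("p1", "US", True, 400),
        Page("p2", "XX", True, 10),  # foreign origin, new, civic content
    ]
    for page in triage(pages):
        print(f"queue {page.page_id} for manual review")
```

The structural point, per the quote, is that automation only prioritizes what humans ultimately decide on: removal happens after the security team confirms a violation, not at the scoring step.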

Of course, Facebook is a business that needs to increase profits; for this reason, ads are very important to it.

Facebook is building a new transparency feature for ads on the platform, dubbed View Ads, which is currently being tested in Canada. View Ads allows anyone to view all the ads that a Facebook Page is running on the platform.

“You can click on any Facebook Page, select About, and scroll to View Ads,” explained Rob Leathern, Product Management Director.

“Next we’ll build on our ads review process and begin authorizing US advertisers placing political ads. This spring, in the run-up to the US midterm elections, advertisers will have to verify and confirm who they are and where they are located in the US,” he added.

This summer, Facebook will launch a public archive of all the ads that ran with a political label.

Stay tuned ….


Pierluigi Paganini

(Security Affairs – Facebook, fake news)





