Facebook publishes its online abuse numbers


In a company first, Facebook has published details of how it deleted or added warnings to about 29 million posts that broke its rules on hate speech, graphic violence, terrorism and sex.

The report, published on Facebook's site, covers its enforcement efforts from October 2017 to March 2018 across six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts. It is the first time the company has outlined its efforts to enforce its own rules.

In a post on the company’s website, vice president of product Guy Rosen says Facebook still has a lot of work to do to prevent abuse.

“It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”

Help WikiTribune pull out key data from the report and from each of the categories Facebook outlines. Add the metrics beneath the subheadings below.

  • Graphic violence

  • An estimated 0.22 percent to 0.27 percent of views were of content that violated Facebook's standards for graphic violence in Q1 2018, up from an estimated 0.16 percent to 0.19 percent in Q4 2017.

    In other words, of every 10,000 content views, an estimated 22 to 27 contained graphic violence, compared with an estimated 16 to 19 the previous quarter.

  • Adult nudity and sexual content

  • An estimated 0.07 percent to 0.09 percent of views were of content that violated Facebook's standards for adult nudity and sexual activity in Q1 2018, slightly higher than the estimated 0.06 percent to 0.08 percent of views in Q4 2017.

    In other words, of every 10,000 content views, an estimated 7 to 9 contained adult nudity and sexual activity that violated Facebook's standards, compared with 6 to 8 views the previous quarter.

  • Terrorist propaganda 

  • This metric is not available because Facebook can’t reliably estimate it.

    Compared to some other violation types such as graphic violence, the number of views of terrorist propaganda content related to ISIS, al-Qaeda and their affiliates on Facebook is extremely low. That’s because there’s relatively little of it and because the majority is removed before people see it.

  • Spam

  • Facebook is updating measurement methods for this violation type. It is unable to provide reliable data for this time period.
  • Hate speech

  • Facebook is updating measurement methods for this violation type. It is unable to provide reliable data for this time period.
  • Fake accounts

  • Fake accounts are estimated to represent approximately 3 percent to 4 percent of monthly active users (MAU) on Facebook during Q1 2018 and Q4 2017. This estimate may vary each quarter based on spikes or dips in automated fake account creation.
  • In Q1 2018, Facebook disabled 583 million fake accounts, down from 694 million in Q4 2017.
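The "per 10,000 views" figures above are a straight rescaling of the prevalence percentages Facebook reports. A minimal sketch of that conversion (the function name and rounding choice here are illustrative, not from the report):

```python
def views_per_10000(pct_low, pct_high):
    """Convert a prevalence range given in percent of views
    to the equivalent count per 10,000 content views."""
    return round(pct_low / 100 * 10000), round(pct_high / 100 * 10000)

# Graphic violence, Q1 2018: 0.22 percent to 0.27 percent of views
print(views_per_10000(0.22, 0.27))  # -> (22, 27)

# Adult nudity and sexual activity, Q1 2018: 0.07 percent to 0.09 percent
print(views_per_10000(0.07, 0.09))  # -> (7, 9)
```

This matches the report's own restatement: 0.22 percent of views is 22 views in every 10,000.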
