Facebook closed 583m fake accounts in first three months of 2018

Facebook's latest transparency move is showing you how much objectionable content it removes

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam and shut down a further 583m fake accounts on the site over the three months.

A spokeswoman later said that Facebook blocks "disturbing or sensitive content such as graphic violence" so that users under 18 cannot see it "regardless of whether it is removed from Facebook".

The report's authors note that while such AI systems are promising, it will take years before they are effective at removing all objectionable content. Facebook, which has pledged to expand its moderation workforce, has not provided more details on the hiring plan, including how many of the new hires will be full-time Facebook employees and how many will be contractors.

"Yes there are clear skews in many of these metrics", said Schultz.

"We took down or applied warning labels to about three and a half million pieces of violent content in Q1 2018, 86 per cent of which was identified by our technology before it was reported to Facebook".

While artificial intelligence is able to sort through nearly all spam and content glorifying al-Qaeda and ISIS, as well as most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes such as race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report. But a recent report from the Washington Post found that Facebook's facial recognition technology may be limited in how effectively it can catch fake accounts, as the tool does not yet scan a photo against all of the images posted by all 2.2 billion of the site's users.
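
Facebook has not said how this division of labour works internally, but the pattern the report describes (automatic action on high-confidence categories such as spam and terror propaganda, with hate speech routed to human reviewers) can be sketched in a few lines of Python. Everything below, including the category names, thresholds, and scores, is our illustrative assumption, not Facebook's actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds, loosely inspired by the proactive-detection rates
# in the report (near-100% for spam, 99.5% for terror propaganda, 95.8% for
# nudity). None of these values come from Facebook.
AUTO_ACTION_THRESHOLDS = {
    "spam": 0.90,
    "terror_propaganda": 0.90,
    "adult_nudity": 0.90,
    "graphic_violence": 0.95,
}

@dataclass
class Post:
    post_id: int
    text: str

def triage(post: Post, scores: dict) -> str:
    """Route a post based on per-category classifier scores in [0, 1]."""
    # Hate speech is never auto-actioned in this sketch: per the report, it
    # "often requires detailed scrutiny by our trained reviewers".
    if scores.get("hate_speech", 0.0) >= 0.5:
        return "queue_for_human_review"
    for category, threshold in AUTO_ACTION_THRESHOLDS.items():
        if scores.get(category, 0.0) >= threshold:
            # "Action" need not mean removal: it can also be a warning
            # label or an age gate, as the article notes.
            return f"auto_action:{category}"
    return "no_action"

print(triage(Post(1, "..."), {"spam": 0.97}))        # -> auto_action:spam
print(triage(Post(2, "..."), {"hate_speech": 0.6}))  # -> queue_for_human_review
```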

Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4. Of the 2.5 million hate speech posts removed, only 38 per cent were pulled by Facebook's technology before users reported them. Compare that with the 95.8 per cent of nudity or 99.5 per cent of terrorist propaganda that Facebook purged automatically.
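
For scale, that prevalence range works out to only a handful of violating views in every 10,000. A quick back-of-the-envelope conversion (ours, not part of Facebook's methodology):

```python
# Convert the reported prevalence range for adult nudity and sexual
# activity (0.07% to 0.09% of content views in Q1) into views per 10,000.
low, high = 0.0007, 0.0009
per = 10_000
print(f"Roughly {low * per:.0f} to {high * per:.0f} of every {per:,} "
      "content views included such material.")
# Output: Roughly 7 to 9 of every 10,000 content views included such material.
```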

Most of the content was found and flagged before users had a chance to spot it and alert the platform.

The number of pieces of nude and sexual content the company took action on during the period was 21 million, unchanged from the final quarter of 2017.

Facebook noted in the report that "Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards".

The social network says that when action is taken on flagged content, it does not necessarily mean the material has been taken down; action can also mean applying a warning label. It says it found and flagged almost 100% of spam content in both Q1 and Q4, and it purged 583 million fake accounts, "most of which were disabled within minutes of registration".

While Facebook uses what it calls "detection technology" to root out offending posts and profiles, the software still has difficulty detecting hate speech. The report has also drawn calls for more transparency towards users: "We'd also like to see better messaging to users when an action has been taken on their account, so they know the specific violation".
