Facebook Details Amount Of Hate Speech On Its Platform

The prevalence of hate speech on Facebook in the July-September period this year was 0.10 to 0.11 per cent, meaning that out of every 10,000 views of content on Facebook, 10 to 11 included hate speech, the company has said.
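As a rough illustration of the arithmetic behind that figure, the sketch below converts a prevalence percentage into views per 10,000. It is a minimal example using only the reported rates, not Facebook's internal data or measurement methodology.

```python
# Illustrative arithmetic only: convert a reported prevalence percentage
# into the equivalent number of hate-speech views per 10,000 content views.
def views_per_ten_thousand(prevalence_percent: float) -> float:
    return prevalence_percent / 100 * 10_000

for rate in (0.10, 0.11):  # the reported July-September range
    print(f"{rate:.2f}% prevalence = {views_per_ten_thousand(rate):.0f} views per 10,000")
```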

In its latest quarterly Community Standards Enforcement Report, Facebook has made the "prevalence" of hate speech on its platform public for the first time.

"You can think of prevalence as an air quality test", said Guy Rosen, vice president of integrity at Facebook, in a conference call with journalists. Though this does not sound like a lot, it is. "Important speech got swept into the maw of the Facebook filter - and risky content, like self-harm, stayed up", they wrote.

Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. And it removed 265,000 pieces of content for breaking the company's rules against voter interference.

Though Facebook's AI detection has improved, it still leaves roughly a million pieces of content to be flagged by users and reviewed by human moderators - work that continues despite confirmed coronavirus cases in the buildings where moderators work.

Facebook moderators have long reported poor working conditions, and exposure to disturbing content has led some of them to develop PTSD-like symptoms.

The employees say the reason Facebook is forcing them to return to the office is that the company's AI-based moderation tools are "years away" from working effectively.

The workers are calling on Facebook to improve their safety, with demands including hazard pay for moderators returning to the office, maximising at-home working, and better healthcare and mental health support. Employees were sent home during the early days of the pandemic, but workers say Accenture, the contractor that employs many of them, has recently started requiring them to come back to the office.

"By outsourcing our jobs, Facebook implies that the 35,000 of us who work in moderation are somehow peripheral to social media", notes the letter. In the letter, it was also stated that Facebook does not provide mental health services to its moderators, who have to see countless harmful content from child abuse to uncensored violence every day.

"The AI wasn't up to the job", the moderators wrote. "The lesson is clear". Like many other social networks, Facebook relies on AI to help a crew of humans moderate an ever-growing mountain of content on its eponymous platform and on Instagram, which it also owns. But in the moderators' view, those automated tools "may never get there". "This raises a stark question", said the letter.

The world's largest social media company, under scrutiny over its policing of abuses, particularly around November's US presidential election, released the estimate in its quarterly content moderation report. Facebook has described the precautions in place at moderators' offices: "These include vastly reduced building occupancy, extensive social distancing and masks, daily office cleaning, individual transportation and other measures".

During the first six months of 2020, government requests for user data increased by 23 per cent from 140,875 to 173,592, it said.

Facebook has previously said it would welcome more regulation of the sector. The report revealed new metrics around the company's work to combat hate speech. Rosen also said that between March and October, Facebook removed nearly 12 million posts. "The idea of moving to an online detection system optimized to detect content in real time is a pretty big deal", he said.

In the weeks leading up to the election on 3 November, Facebook attached warning labels to more than 180 million user posts suspected of spreading misinformation and hate speech.
