Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

Guy Rosen, Facebook's vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content, and that it wanted to be "accountable to the community".

Facebook took moderation action against nearly 1.5bn accounts and posts that violated its community standards in the first three months of 2018, the company has revealed.

Some 837 million pieces of spam were detected and removed during the first quarter, up 15% on the previous period, while 583 million fake accounts were disabled, a reduction of 16%.

The figures are contained in an updated transparency report published by the company, which for the first time includes data on content that breaches Facebook's community standards.

The report did not directly cover the spread of false news, which Facebook has previously said it is trying to stamp out by increasing transparency around who buys political ads, strengthening enforcement and making it harder for so-called "clickbait" to show up in users' feeds.

"It's why we're investing heavily in more people and better technology to make Facebook safer for everyone," Rosen said, adding that for some violations the company finds and flags most content before users report it. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important."

The company did not provide a figure for how many views such content received before removal, but said the number was "extremely low". Rosen added that reviewers speak 50 languages so they can understand as much context as possible, since in many cases context is everything in determining whether something is, say, a racial epithet aimed at someone or a self-referential comment.

Facebook attributed an increase in removals to its enhanced use of photo-detection technology.

Facebook took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018, 86% of which was identified by its technology before being reported to Facebook. In total, it took action on 21 million pieces of content in the quarter, similar to Q4 2017.

While artificial intelligence can sort through nearly all spam, content glorifying al-Qaeda and ISIS, and most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes such as race, ethnicity, religion, or sexual and gender identity, the company said in its first-ever Community Standards Enforcement Report.

The improved detection led to old as well as new content of this type being taken down.

As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day. The inaugural report was meant to "help our teams understand what is happening" on the site, Rosen said.

Almost all of the 837 million pieces of spam removed in the quarter were flagged before any users reported them.

The company estimates that 3% to 4% of its monthly active users are "fake", up from 2% to 3% in Q3 of 2017, according to filings.

Facebook also increased the amount of content taken down using new AI-based tools, which find and moderate content without needing individual users to flag it as suspicious.
