Facebook Shuts Down 583 Million Accounts, Check If Yours Is Still Up

Facebook's self-assessment, released Tuesday, May 15, 2018, came three weeks after the company tried to give a clearer explanation of the kinds of posts it won't tolerate, such as sexual activity and hate speech. In that effort, it has removed around 1.5 billion posts and accounts since the start of 2018. The company also said it removed more posts from its platform in the first quarter of 2018 than in the last quarter of 2017, though it did not provide specific numbers to validate that assertion. Even so, the supposedly tiny fraction of violating content that escapes Facebook's filters, a mere 0.5 percent, is surprisingly easy to find online, according to a report by an internet safety organization.

Facebook stated that artificial intelligence has played an essential role in helping the social media company flag violating content.

Taking down fake accounts is important for more than just fighting spam.

The report also doesn't address how Facebook is tackling another vexing issue: the proliferation of fake news stories planted by Russian agents and other fabricators trying to sway elections and public opinion.

According to the report, 97 percent of the content deleted was spam. Facebook estimates that out of every 10,000 pieces of content viewed on the platform, just seven to nine views, roughly 0.07 to 0.09 percent, were of content that violated its adult nudity and pornography standards... which, by the way, have a history of head-scratching decisions (TL;DR: it has to do with the nuances of nipples).

An estimated 3 to 4 percent of Facebook accounts were fake, the company said.

When it comes to nudity, sex, fake accounts, and spam, Facebook is capable of flagging over 95 percent of these posts.

Facebook said Tuesday it took down almost 2 million posts related to terrorist propaganda this year before users reported them.

For detecting hate speech, by contrast, the company still relies mostly on human reviewers rather than on its automated systems. Facebook says its technology "still doesn't work that well and so it needs to be checked by our review teams".

"Even if they remove 100 million posts that are offensive, there will be one or two that have some really bad stuff and those will be the ones everyone winds up talking about on the cable-TV news", said Timothy Carone, who teaches about technology at the University of Notre Dame. For the past couple of months, Facebook co-founder Mark Zuckerberg has been in charge of providing damage control for his company, and finding ways of filtering out questionable content and preventing a situation like Cambridge Analytica from happening again.

Spotting hate speech, for example, is complex, as Facebook's Guy Rosen described in a detailed post following F8, and the company doesn't always have much training data, particularly in less widely used languages.
