Do The Morons Running Facebook Really Think They Are Fighting Spam When They Censor Political Views?

Facebook has released its first quarterly Community Standards Enforcement Report, bragging about how they have closed fake accounts and shut down spam. Never mind that much of what they block includes political views on both the left and the right which diverge from their establishment Democratic Party views, along with certain verboten topics which I will not list here to decrease the risk that this post is censored from Facebook groups. While Facebook brags about eliminating spam, among the “spam” they have censored this year was this post on the corporate money being taken by members of the party with the donkey symbol, even when they make noise about not taking corporate money. That post was enough to put me in Facebook Jail for three days, and I have many Facebook friends who have spent far more time there this year.

The Guardian has more on the report:

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam, and shut down a further 583m fake accounts on the site in the three months. But Facebook also moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity…

Facebook also managed to increase the amount of content taken down with new AI-based tools which it used to find and moderate content without needing individual users to flag it as suspicious. Those tools worked particularly well for content such as fake accounts and spam: the company said it managed to use the tools to find 98.5% of the fake accounts it shut down, and “nearly 100%” of the spam.

Automatic flagging worked well for finding instances of nudity, since, Schultz said, it was easy for image recognition technology to know what to look for. Harder, because of the need to take contextual clues into account, was moderation for hate speech. In that category, Facebook said, “we found and flagged around 38% of the content we subsequently took action on, before users reported it to us”.

Facebook has made moves to improve transparency in recent months. In April, the company released a public version of its guidelines for what is and is not allowed on the site – a year after the Guardian revealed Facebook’s secret rules for content moderation.

Either their AI-based tools are failing miserably, producing an incredible number of false positives, or they are failing to disclose their true criteria for blocking material. Either way, it is a serious problem for a site which has become a major avenue for speech around the world.