Facebook has released its first quarterly Community Standards Enforcement Report, bragging about how it has closed fake accounts and shut down spam. Never mind that much of what it blocks includes political views on both the left and right which vary from its establishment Democratic Party views, along with certain verboten topics which I will not list here, to decrease the risk that this post is censored from Facebook groups. While Facebook brags about eliminating spam, among the “spam” it has censored this year was a post on the corporate money being taken by members of the party with the donkey symbol, even when they make noise about not taking corporate money. That post was enough to put me in Facebook Jail for three days, and I have many Facebook friends who have spent far more time there this year.
The Guardian has more on the report:
In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of moderation action was against spam posts and fake accounts: it took action on 837m pieces of spam, and shut down a further 583m fake accounts on the site in the three months. But Facebook also moderated 2.5m pieces of hate speech, 1.9m pieces of terrorist propaganda, 3.4m pieces of graphic violence and 21m pieces of content featuring adult nudity and sexual activity…
Facebook also managed to increase the amount of content taken down with new AI-based tools which it used to find and moderate content without needing individual users to flag it as suspicious. Those tools worked particularly well for content such as fake accounts and spam: the company said it managed to use the tools to find 98.5% of the fake accounts it shut down, and “nearly 100%” of the spam.
Automatic flagging worked well for finding instances of nudity, since, Schultz said, it was easy for image recognition technology to know what to look for. Harder, because of the need to take contextual clues into account, was moderation for hate speech. In that category, Facebook said, “we found and flagged around 38% of the content we subsequently took action on, before users reported it to us”.
Facebook has made moves to improve transparency in recent months. In April, the company released a public version of its guidelines for what is and is not allowed on the site – a year after the Guardian revealed Facebook’s secret rules for content moderation.
Either their AI-based tools are failing miserably, with an incredible number of false positive results, or they are failing to disclose their true criteria for blocking material on Facebook. Either way, it is a serious problem for a site which has become a major avenue for speech around the world.
I have you on my blogroll, so unless Google starts taking an ax to Blogger blogs, I'm OK.
Unfortunately you are in the minority. A huge percentage of readers these days come from Facebook links, and this is down since Facebook changed its algorithms. I sure miss the pre-Facebook days when I had 10,000 subscribers to the RSS feed. There are still a fair number of RSS feed subscribers, but that is way down from the old days.
I agree … I miss the pre-Facebook days. Hang in there.
We will probably never go back to the pre-Facebook days. When Facebook works (i.e., lets us post without censorship), it provides far more exposure to posts than can be obtained these days on blogs. You aren’t on Facebook at all, are you? While the last few weeks have been crazy and I haven’t put much on the blog, I’ve had multiple brief items on Facebook which have led to long discussion threads, along with many shares. While they are generally rather brief for a blog post, maybe I should gather some of the better ones and post them as a single post later this week.