How Facebook Tracks Content That Violates Its Community Standards



Facebook, the leading social networking platform, has been grappling with offensive and threatening content, amid growing criticism of how it controls what shows up in the News Feed.

Facebook formerly relied on users to report offensive content; now the company has turned to artificial intelligence (AI) to help clean up the News Feed, although the technology still needs to improve before it can effectively tackle more linguistically nuanced issues.

While Facebook still lets people report offensive content, it has thrown its full weight behind AI to weed out offensive content before anyone even sees it.

Facebook has categorized offensive content into six areas: terrorist propaganda, graphic violence, adult nudity and sexual activity, hate speech, spam, and fake news.

According to the company, AI has played an increasing role in flagging inappropriate content: over 85 percent of the 3.4 million posts containing graphic violence that were acted on in the first quarter were flagged by AI before users reported them.

Facebook maintains that human users remain strategic, with their reports accounting for the remaining share.

In its Community Standards Enforcement Preliminary Report, released on Tuesday, Facebook says that this combination of people and AI has helped it find and flag potentially violating content at scale before many people see or report it.
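The two-pronged approach the report describes can be illustrated with a toy sketch: automated classifiers flag posts proactively, while user reports catch what the AI misses. Everything below is a hypothetical illustration; the category names follow the article, but the functions, thresholds, and keyword heuristic are invented for the example and bear no relation to Facebook's actual systems.

```python
# Hypothetical sketch of a moderation pipeline combining AI flagging
# with user reports. Names and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

# The six categories named in the article.
CATEGORIES = {
    "terrorist_propaganda", "graphic_violence",
    "adult_nudity_sexual_activity", "hate_speech",
    "spam", "fake_news",
}

@dataclass
class Post:
    post_id: int
    text: str
    user_reports: set = field(default_factory=set)  # categories users reported

def ai_score(post: Post) -> dict:
    """Stand-in for a trained classifier: returns a confidence per category.
    A trivial keyword heuristic is used purely for illustration."""
    keywords = {"spam": ["buy now", "free money"]}
    scores = {c: 0.0 for c in CATEGORIES}
    lowered = post.text.lower()
    for category, words in keywords.items():
        if any(w in lowered for w in words):
            scores[category] = 0.9
    return scores

def review_queue(posts, threshold=0.8):
    """Flag a post if the AI is confident in any category (proactive)
    or if any user has reported it (reactive)."""
    flagged = []
    for post in posts:
        ai_flags = {c for c, s in ai_score(post).items() if s >= threshold}
        if ai_flags or post.user_reports:
            flagged.append((post.post_id, ai_flags | post.user_reports))
    return flagged

posts = [
    Post(1, "Lovely sunset photo"),
    Post(2, "buy now free money!!!"),          # caught proactively by AI
    Post(3, "Ordinary-looking post",
         user_reports={"hate_speech"}),        # caught via a user report
]
print(review_queue(posts))
```

The key design point mirrored here is that the AI pass runs before any user sees the post, while user reports remain a fallback for content the classifier misses.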

The social network disabled about 583 million fake accounts during the first quarter of the year, the majority within minutes of registration.