How a Twitter Algorithm Flaw Let Advertisers Target Users With Racial Slurs and Derogatory Terms



Twitter was enmeshed in an algorithmic fiasco that allowed advertisers to target users with derogatory slurs, with ad campaigns using such terms potentially reaching millions of users on the network.

The social media company claims that the flaw which allowed campaigns to target millions of people based on derogatory terms was a bug.

Although Twitter’s ad platform allows advertisers, when creating an advertising campaign, to target any keyword via what it calls “audience features”, the company does not reveal the audience size, particularly for terms of two or more words.

Twitter also suggests what it calls “follower look-alike” accounts, explaining: “Target people with interests similar to an account's followers.”

The automated keyword-targeting feature allows advertisers to reach users based on keywords in their search queries, recent Tweets, and engagements on the network.
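To make the mechanism concrete, here is a minimal, hypothetical sketch of how keyword-based targeting of this kind can work, and how a simple denylist check could screen campaign keywords before launch. None of the names or terms below come from Twitter's actual systems; they are illustrative assumptions only.

```python
# Hypothetical sketch of keyword-based ad targeting (NOT Twitter's code).
# A campaign is matched to a user if any campaign keyword appears in the
# user's recent activity; a denylist screens keywords before launch.

BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}  # placeholder denylist


def screen_campaign_keywords(keywords):
    """Drop any campaign keyword found on the denylist (case-insensitive)."""
    return [kw for kw in keywords if kw.lower() not in BLOCKED_TERMS]


def matches_user(campaign_keywords, user_activity):
    """Return True if any campaign keyword appears in the user's recent
    searches, Tweets, or engagements (joined here as plain strings)."""
    activity_text = " ".join(user_activity).lower()
    return any(kw.lower() in activity_text for kw in campaign_keywords)


if __name__ == "__main__":
    keywords = screen_campaign_keywords(["gardening", "offensive_term_a"])
    print(keywords)  # the blocked term is removed before targeting
    print(matches_user(keywords, ["Looking for gardening tips", "new tools"]))
```

The point of the sketch is that without a screening step like `screen_campaign_keywords`, any term an advertiser types flows straight into the matching stage, which is consistent with the behavior described in this incident.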

But Twitter maintains that it prohibits "the promotion of hate speech globally," saying its ad policy covers race, ethnicity, and national origin, and tells marketers: "You are responsible for all your promoted content on Twitter."

Nevertheless, the company has been urged to implement more proactive human oversight of its algorithms, especially to filter inappropriate and hateful speech.