Facebook claims to be working to contain misinformation and other harmful content. But internal documents show that Facebook's own algorithm promoted the spread of controversial content by giving preferential weight to emoji reactions.
Five years ago, Facebook added five more reactions alongside the familiar "Like" button. The symbols "Love", "Haha", "Wow", "Sad" and "Angry" were meant to let users express their opinions and emotions about a post more precisely. What very few people were likely aware of: behind the scenes, Facebook programmed its algorithm to favor posts with many emoji reactions. The new buttons were weighted five times as heavily as the familiar "Like". This is evident from internal documents obtained by the Washington Post.
The background: the more emotional and provocative content users saw, the more engagement the platform could record. The algorithm used the emoji reactions, including "Sad" and "Angry", to identify exactly that kind of content.
Provocative posts were preferred
The problem with favoring emoji reactions was that it pushed more controversial posts, as well as spam and clickbait, into users' feeds. This concern, raised by employees as early as 2017, was later confirmed: in 2019, Facebook's data scientists found that content triggering the "Angry" reaction contained a disproportionate amount of misinformation, toxic content and low-quality news.
Several employees suggested weighting the "Angry" reaction lower than the other emojis, but the proposal was rejected several times. Only last year did Facebook decide to give all emoji reactions 1.5 times the value of a like. Shortly afterwards, the weight of the "Angry" emoji was finally set to zero.
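The evolution of these weights can be sketched as a simple scoring function. This is purely illustrative: the function and variable names are invented, and only the multipliers come from the reporting (emojis at five times a like in 2017, later reduced to 1.5 times, with "Angry" eventually zeroed out).

```python
# Illustrative sketch only -- not Facebook's actual code. The weight values
# (5x, 1.5x, 0) are the multipliers described in the internal documents.

def engagement_score(reactions, weights):
    """Sum each reaction count multiplied by its ranking weight.
    Unknown reaction types default to a weight of 1.0."""
    return sum(weights.get(name, 1.0) * count for name, count in reactions.items())

# 2017: every emoji reaction counted five times as much as a plain Like.
WEIGHTS_2017 = {"like": 1.0, "love": 5.0, "haha": 5.0,
                "wow": 5.0, "sad": 5.0, "angry": 5.0}

# Later revision: emojis reduced to 1.5x a like; "Angry" finally set to zero.
WEIGHTS_REVISED = {"like": 1.0, "love": 1.5, "haha": 1.5,
                   "wow": 1.5, "sad": 1.5, "angry": 0.0}

post = {"like": 100, "angry": 50}
print(engagement_score(post, WEIGHTS_2017))     # 100 + 5*50 = 350.0
print(engagement_score(post, WEIGHTS_REVISED))  # 100 + 0*50 = 100.0
```

Under the 2017 weights, a post drawing 50 angry reactions outscores one with 100 plain likes by a wide margin, which is exactly the incentive employees warned about.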
Facebook versus Facebook
It sounds like a bad joke: while Facebook's content moderators were trying to curb the spread of misinformation and other harmful content, the algorithm favored exactly that content over harmless posts for years. Facebook spokeswoman Dani Lever commented on the situation as follows:
We continue to work to understand what content creates negative experiences, so we can reduce its distribution. This includes content that has a disproportionate amount of angry reactions, for example.
It is welcome that Facebook has recognized the harmful effects of amplifying posts with many angry reactions. Still, it remains an open question why the platform stayed inactive for so long despite its own employees' concerns.