A former Meta employee who worked on its content moderation systems and policy, and who spoke to WIRED on the condition of anonymity, says, however, that mass reporting could at least get certain pieces of content or accounts flagged for review. And the more frequently a certain type of content is flagged, the more likely the algorithm will be to flag it in the future. However, with languages like Bulgarian, where there is less material to train the algorithm and AI may be less accurate, the former employee says it is more likely that a human moderator will make the final call about whether or not to remove a piece of content.
Meta spokesperson Ben Walters told WIRED that Meta does not remove content based on the number of reports. “If a piece of content does not violate our Community Standards, no matter how high the number of reports is, it won’t lead to content removal,” he says.
Some moderation issues could be the result of human error. “There are going to be error rates, there are going to be things that get taken down that Meta did not mean to take down. This happens,” they say. And these errors are even more likely in non-English languages. Content moderators are often given only seconds to review a post before deciding whether it stays online, a metric by which their job performance is measured.
There is also a real possibility of bias among human moderators. “The majority of the population actually supports Russia even after the war in Ukraine,” says Galev. He says it’s not unreasonable to think that some moderators might hold these views as well, particularly in a country with limited independent media.
“There’s a lack of transparency around who is deciding, who is making the decision,” says Ivan Radev, a board member of the Association of European Journalists Bulgaria, a nonprofit, which put out a