Social media content moderators go by many names: censors, social media cleaners, silent guardians, custodians, to list just a few. In essence, these moderators are tasked with examining and evaluating visual and textual content flagged by social media users as breaking community standards, and with deciding on a course of action.
Ignore or Delete: The Logic behind Content Moderation
Social media content moderation is a fallible business. Those who moderate online content are asked to examine it, ensure that it follows community guidelines, and, if it does not, erase it from the platforms. Most of this moderation is still done by humans, as the AI needed to monitor and moderate online content independently is still in development. The individuals who currently do the work must struggle to interpret the rules objectively and to understand the context and meaning of the content they evaluate, all while working under intense psychological pressure.
Individuals who work as social media content moderators can be divided into two categories. The first follows the “Logic of Care”: usually experienced professionals and volunteers who use their work to maintain the delicate balance between a healthy, safe online environment and freedom of speech and expression. This logic operates at local levels, such as in Finland, where such jobs are not outsourced but pay well and take into account the demands and interests of the content moderators. It is contrasted with the “Logic of Choice” practiced by large social media companies, which employ outsourced workers in global content moderation hubs like Manila, offering low-income employment and no psychological help while forcing an algorithmic approach to content review: “Ignore” or “Delete”. As a result, the consequences for the overall context, discussion, or debate are often overlooked.
Why, then, is the “Logic of Care” disappearing from social media content moderation? For a simple reason: moderators cannot keep up with the amount of content being posted and are expected to react in almost real time, which pushes them toward a simplified moderation practice.
The Human Toll on the ‘Cleaners’
The consequences of this shift towards the so-called “Logic of Choice” are becoming more and more apparent. Social media moderators are often recruited in low-income countries and must independently moderate thousands of textual and visual posts throughout their working day. They are not directly employed by big social media companies like Facebook, Instagram, Twitter, or YouTube, but are contracted through large outsourcing companies.
To put this into perspective, in the course of a single day moderators face 65 years’ worth of visual content published on YouTube and over a million user reports of flagged content on Facebook, Twitter, or Instagram. This takes an overwhelming psychological toll on moderators, many of whom can hardly keep up emotionally with processing the content they view, which ranges from terrorist content and torture to pornography and bestiality. As a result, many employees leave their positions not long after starting, since their companies do not offer adequate psychological support and counselling services. At the same time, moderators risk being penalized or having their contracts terminated if they make a mistake in their judgement.
It seems apparent that AI-led content moderation is years away. Facebook’s AI algorithms, for example, failed to detect terrorist content in time: a live-stream of the Christchurch attacks was posted on the network and viewed by thousands of people before being erased 29 minutes later. Somebody, in other words, still has to watch it.
While it is clear that all content moderators suffer to some degree psychologically, in extreme cases some have even become addicted to the graphic content they are required to moderate. These unregulated practices have led to lawsuits against social media giants such as Facebook, which was forced to pay $52 million in compensation for the PTSD suffered by more than 11,000 of its moderators.
Content moderators, cleaners, guardians, custodians – call them what you like – are one part of the invisible workforce that keeps the online space safe from damaging content, a safety that comes at a terrible price.
Nemanja Dukic is a teaching assistant at the Corvinus University of Budapest, Hungary where he also works as a research assistant at the Cold War History Research Centre working on creating an exhaustive chronology of post-World War II events. He specializes in geopolitics in Greenland and China’s influence in the Arctic Region.