Over the past few months, the BBC has been investigating a hidden side of the internet – a place where the most extreme, disturbing and frequently illegal online material converges.

Content such as beheadings, mass killings, child abuse and hate speech ultimately reaches the inboxes of a global workforce of content moderators. These individuals, rarely seen or heard from, are tasked with reviewing and, when necessary, removing content that is either reported by other users or flagged automatically by technological tools.

The issue of online safety has become increasingly prominent, placing greater pressure on technology companies to remove harmful material quickly. Despite significant research and investment in technological solutions, human moderators largely retain the final say for now.

Moderators are often employed by third-party firms, but they work on content posted directly to major social networks, including Instagram, TikTok and Facebook. They are based around the world.

The individuals interviewed for the BBC series “The Moderators”, broadcast on Radio 4 and BBC Sounds, were mostly based in East Africa and had all since left the industry.

Their accounts were harrowing; some of what was recorded was too brutal for broadcast. At times, my producer Tom Woolfenden and I would finish a recording session and simply sit in silence.

“If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator who handled TikTok content. “But in the background, I personally was moderating, in the hundreds, horrific and traumatising videos.

“I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.”

Numerous legal claims now assert that this work has severely damaged the mental health of such moderators. Some former workers in East Africa have come together to form a union.

“Really, the only thing that’s between me logging onto a social media platform and watching a beheading, is somebody sitting in an office somewhere, and watching that content for me, and reviewing it so I don’t have to,” explains Martha Dark, who leads Foxglove, a campaign group supporting these legal actions.

In 2020, Meta, then known as Facebook, agreed to pay a $52m (£40m) settlement to moderators who had developed mental health conditions because of their work. The legal action was initiated by Selena Scola, a former moderator in the US. She described moderators as the “keepers of souls”, because of the amount of footage they see showing the final moments of people’s lives.

All the former moderators I spoke with used the word “trauma” to describe the work’s impact on them. Some had difficulty sleeping and eating. One recounted how the sound of a baby crying caused a colleague to panic. Another said he struggled to interact with his wife and children because of the child abuse content he had witnessed.

I had expected them to say that this work was so emotionally and mentally demanding that no human should have to do it – that they would fully endorse automating the industry, with AI tools evolving to handle the task. But that was not their view.

What emerged powerfully was the profound pride moderators felt in their role of safeguarding the world from online harm.
They saw themselves as an essential emergency service. One said he wanted a uniform and a badge, likening himself to a paramedic or firefighter.

“Not even one second was wasted,” says a man we are calling David, who asked to remain anonymous. He worked on material used to train the widely popular AI chatbot ChatGPT, ensuring it was programmed not to reproduce horrific content.

“I am proud of the individuals who trained this model to be what it is today.”

Yet the very tool David helped develop might one day become his competitor.

Dave Willner, former head of trust and safety at OpenAI, ChatGPT’s creator, says his team developed a basic moderation tool, built on the chatbot’s underlying technology, which identified harmful content with an accuracy rate of around 90%.

“When I sort of fully realised, ‘oh, this is gonna work’, I honestly choked up a little bit,” he recalls. “[AI tools] don’t get bored. And they don’t get tired and they don’t get shocked… they are indefatigable.”

Nevertheless, not everyone is convinced that AI offers a complete solution for this troubled sector.

“I think it’s problematic,” says Dr Paul Reilly, a senior lecturer in media and democracy at the University of Glasgow. “Clearly AI can be a quite blunt, binary way of moderating content.

“It can lead to over-blocking freedom of speech issues, and of course it may miss nuance human moderators would be able to identify. Human moderation is essential to platforms,” he adds. “The problem is there’s not enough of them, and the job is incredibly harmful to those who do it.”

The BBC also contacted the technology companies mentioned in the series.

A TikTok spokesperson said the company acknowledges content moderation is a difficult task, and that it strives to foster a supportive working environment for employees. This includes providing clinical support and developing programmes to enhance moderators’ wellbeing. They added that automated technology reviews videos first, which they say removes a large volume of harmful content.

Meanwhile, OpenAI – the company behind ChatGPT – expressed gratitude for the important and sometimes challenging work human workers do to train the AI to detect such images and videos. A spokesperson added that, in collaboration with its partners, OpenAI enforces policies to safeguard the wellbeing of these teams.

And Meta – the parent company of Instagram and Facebook – said it requires all the companies it works with to offer 24-hour on-site support from trained professionals. It added that moderators can customise their review tools to blur graphic content.

“The Moderators” is broadcast on BBC Radio 4 at 13:45 GMT from Monday 11 November to Friday 15 November, and is available on BBC Sounds.