The Online Safety Act (OSA) will impose financial penalties on online platforms that fail to begin assessing, by 16 March 2025, whether their services expose users to illegal content. On Monday, Ofcom, the regulator responsible for enforcing the UK’s internet safety legislation, published its final codes of practice setting out how companies are expected to handle illegal online material.

Platforms now have three months to carry out risk assessments identifying potential harms on their services, or they face fines of up to 10% of their worldwide turnover.

Dame Melanie Dawes, head of Ofcom, told BBC News that this represents the “last chance” for the industry to make changes. “If they don’t start to seriously change the way they operate their services, then I think those demands for things like bans for children on social media are going to get more and more vigorous,” she said. She added, “I’m asking the industry now to get moving, and if they don’t they will be hearing from us with enforcement action from March.”

Ofcom’s codes require platforms to identify if, where, and how illegal content could appear on their services, and to establish ways of preventing it from reaching users. Under the OSA, this covers child sexual abuse material (CSAM), controlling or coercive behavior, extreme sexual violence, and the promotion or facilitation of suicide and self-harm.

However, critics argue that the Act fails to address a broad spectrum of harms affecting children. The Molly Rose Foundation, established in memory of teenager Molly Russell, who died by suicide in 2017 following exposure to self-harm images on social media, said the OSA has “deep structural issues.” Andy Burrows, the foundation’s chief executive, said the organization was “astonished and disappointed” by the absence of specific, targeted provisions within Ofcom’s guidance requiring platforms to address suicide and self-harm material. “Robust regulation remains the best way to tackle illegal content, but it simply isn’t acceptable for the regulator to take a gradualist approach to immediate threats to life,” he said.

The children’s charity NSPCC has also voiced its concerns. Acting chief executive Maria Neophytou said: “We are deeply concerned that some of the largest services will not be required to take down the most egregious forms of illegal content, including child sexual abuse material.” She added: “Today’s proposals will at best lock in the inertia to act, and at worst create a loophole which means services can evade tackling abuse in private messaging without fear of enforcement.”

The OSA became law in October 2023, after years of political debate over its detail and scope, and campaigning by people concerned about social media’s influence on young people. Ofcom began consulting on its illegal content codes that November and says it has since “strengthened” its guidance for technology firms in several areas. According to Ofcom, the codes now offer greater clarity on the requirements for removing intimate image abuse content, along with fuller guidance on identifying and removing material related to women being coerced into sex work.

The codes also include child safety features, such as requiring social media platforms to stop suggesting children’s accounts as potential friends and to warn children about the risks of sharing personal information. In addition, certain platforms must use hash-matching technology to detect child sexual abuse material (CSAM), a requirement that now extends to smaller file hosting and storage sites. Hash matching assigns a unique digital signature, or hash, to a piece of media, which can then be compared against hashes taken from databases of known content, in this case known CSAM.
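To make the mechanism concrete, below is a minimal Python sketch of hash matching. The KNOWN_HASHES set, the file names, and the helper functions are hypothetical stand-ins; it also uses an ordinary cryptographic hash (SHA-256), which only matches byte-identical copies of a file. Production systems, such as those built on Microsoft’s PhotoDNA, instead use perceptual hashes that still match after an image is resized or re-encoded, and they compare against vetted industry databases rather than a local set.

```python
import hashlib
from pathlib import Path

# Hypothetical set of known-content digests. In a real deployment this
# would be a vetted industry database of known CSAM signatures, and the
# entries would be perceptual hashes rather than SHA-256 digests.
KNOWN_HASHES = {
    # SHA-256 digest of the bytes b"test", seeded for the demo below.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_content(path: Path) -> bool:
    """True if the file's digest matches an entry in the known-hash set."""
    return file_hash(path) in KNOWN_HASHES


if __name__ == "__main__":
    # Simulate an upload whose bytes match a seeded known hash.
    sample = Path("sample.bin")
    sample.write_bytes(b"test")
    print(is_known_content(sample))  # True: byte-identical match
```

In practice the comparison happens at upload or scan time, and a match triggers blocking and reporting to the relevant authorities rather than a simple boolean result; the set membership test here stands in for a lookup against an industry hash database.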
Numerous major technology companies have already introduced safety measures for teenage users and controls giving parents greater oversight of their social media activity, both to address risks to teenagers and to get ahead of regulatory requirements. On Facebook, Instagram, and Snapchat, for example, users under 18 cannot be found via search or messaged by accounts they do not follow. In October, Instagram also began blocking some screenshots in direct messages to counter sextortion attempts, which experts say are on the rise and frequently target young men.

Technology Secretary Peter Kyle called Ofcom’s publication of its codes a “significant step” toward the government’s goal of making the internet safer for people in the UK. “These laws mark a fundamental reset in society’s expectations of technology companies. I expect them to deliver and will be watching closely to make sure they do,” he said.

Throughout the development of the OSA, concerns have been raised about how its rules will apply to the vast array of online services it covers, with campaigners repeatedly highlighting the privacy implications of requiring platforms to verify users’ ages. Parents whose children died following exposure to illegal or harmful content have also previously criticized Ofcom for moving at a “snail’s pace.”

The regulator’s codes on illegal content still require parliamentary approval before they come fully into force on 17 March 2025. Ofcom is nonetheless telling platforms now, on the assumption that the codes will pass through parliament unhindered, that they must have measures in place to prevent users from accessing outlawed material by that deadline.