The UK’s Online Safety Bill has finally become law, and not without complication and controversy. Here’s everything you need to know.
The Online Safety Bill, a much-debated law passed after months of back-and-forth in Parliament, aims to protect children and place more responsibility on tech firms to manage the content on their platforms.
The bill requires tech companies, including the likes of Google and Meta (which owns Instagram, Facebook, Threads and WhatsApp), to protect children and young adults from harmful and illegal material. This includes content depicting sexual abuse, sexual violence, animal cruelty, self-harm and suicide, illegal immigration and terrorism, as well as content involving coercive or controlling behaviour, such as bullying.
The law also creates new offences, such as ‘deepfake’ pornography that uses AI, and ‘cyber-flashing’, the sending of unsolicited sexual images online.

What is the controversy?
The government has stated that the bill’s key aim is to increase user safety online while “defending freedom of expression”. The need to strike a balance between these two policy objectives is what led to the act’s long delays in Parliament. However, critics continue to raise concerns about the implications for users’ privacy, and much of the push-back has come from the tech companies themselves.
A number of platforms, including WhatsApp and iMessage, say it is impossible to access people’s messages without eroding security and privacy protections for their users, and have threatened to leave the UK rather than compromise message security.
On the other hand, several organisations have welcomed the bill, including the NSPCC, the Internet Watch Foundation and the Equality and Human Rights Commission.
Sir Peter Wanless, NSPCC chief executive, said the law “will mean that children up and down the UK are fundamentally safer in their everyday lives.”
How will the new law be regulated?
The media regulator Ofcom will have greater enforcement powers to oversee compliance by all companies. It has said it will require tech firms to access messages only once suitable technology has been developed to enable this.
Ofcom plans to publish guidance next month clarifying how businesses can comply with the rules, with a consultation process launching on 9 November. It will then take a phased approach to bringing the act into force, prioritising the most harmful content.
“Ofcom is not a censor, and our new powers are not about taking content down. Our job is to tackle the root causes of harm,” said Ofcom CEO, Dame Melanie Dawes. “We know a safer life online cannot be achieved overnight, but Ofcom is ready to meet the scale and urgency of the challenge.”
The act calls for age-checking measures to be used for certain pieces of content or sections of a platform. Many social media companies have already started making changes: TikTok has implemented stronger age verification on its platform, while Snapchat has begun removing the accounts of underage users.
Companies found to be in breach of the act will face fines of up to £18m or 10% of their annual global turnover, whichever is greater.
Keen to explore more?
Watch our programme The Cyber Impact here, where we talk to Viscount Camrose, Minister for AI and Intellectual Property. Or discover more about ITN Business’ upcoming programme, AI and Big Data: A Force for Good.