
The Impact of Ofcom on Online Safety: A Brief Note on Social Media, Terrorism, and Pornography

The Online Safety Act took a long time to pass in the UK, and Ofcom has now published its first guidelines for tech firms under the law. If the proposals are adopted, social media platforms, search engines, and pornography sites will be expected to tackle child sexual abuse material (CSAM), terrorist content, and fraud.

The aim is to push sites to stop the spread of illegal content before it circulates rather than after the fact. The guidelines are meant to encourage a switch from a reactive to a proactive approach, says lawyer Claire Wiseman, who specializes in tech, media, telecoms, and data.

Large tech platforms already follow many of these practices, but Ofcom hopes to see them applied more consistently. The regulator regards the measures as current best practice but says they are not applied across the board: some firms use them only occasionally or in part, so it sees a significant benefit in broader, more uniform adoption.

The platform known as X is an outlier. The UK's trust and safety legislation was drawn up before Elon Musk's acquisition of Twitter, but it passed just as he was implementing a number of changes at the company that could put it at odds with regulators. Musk has publicly stated his intention to remove X's block feature, even though Ofcom's guidelines specify that users should be able to easily block other users. He has clashed with the EU over similar rules and reportedly even considered pulling out of the European market to avoid them. When I asked whether X was cooperating in talks with Ofcom, the regulator declined to say.

Ofcom can require online platforms to use accredited technology to detect CSAM. But WhatsApp, other encrypted messaging services, and digital rights groups say such scanning would require breaking apps' encryption and invading user privacy. The full impact of this part of the proposal remains uncertain: Ofcom plans to consult on it next year.
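To see why scanning and end-to-end encryption are in tension, here is a minimal, hypothetical sketch of how known-content detection typically works: matching an image's hash against a database of hashes of already-identified material. All names and data below are invented for illustration, and real systems (such as Microsoft's PhotoDNA) use perceptual hashes that survive minor edits, not the cryptographic hash used here to keep the sketch self-contained.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Exact-match digest; a stand-in for the perceptual hashing
    # real deployments use so near-duplicate images still match.
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of digests of already-identified material.
known_hashes = {sha256_hex(b"example-known-image-bytes")}

def matches_known_content(plaintext_image: bytes) -> bool:
    # The check needs the *plaintext* image. On an end-to-end encrypted
    # service the server only ever sees ciphertext, so a server-side scan
    # like this is impossible; it would have to run on the user's device
    # instead ("client-side scanning"), which is exactly what encrypted
    # messaging services and digital rights groups object to.
    return sha256_hex(plaintext_image) in known_hashes

if __name__ == "__main__":
    print(matches_known_content(b"example-known-image-bytes"))   # True
    print(matches_known_content(b"some-unrelated-image-bytes"))  # False
```

The design point, not the code, is the crux: either the provider can read message content in order to scan it, or it cannot read message content at all, and critics argue there is no accredited technology that squares that circle without weakening encryption for everyone.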

There’s another technology notably absent from today’s consultation: artificial intelligence. That doesn’t mean AI-generated content escapes the rules. The Online Safety Act tries to address online harms in a technology-neutral way: AI-generated CSAM is in scope by virtue of being CSAM, and a deepfake used to commit fraud is in scope by virtue of the fraud. The approach regulates the context, not the technology.

The Regulator’s Recommendations for Technology Platforms

MacKinnon says the platforms agree that they have responsibilities, but that it is problematic to treat the work as a zero-sum exercise.

The act is a sprawling piece of legislation covering issues from how technology platforms should protect children from abuse to scam advertising and terrorist content. Today, the regulator released a proposal laying out what technology companies will need to do to comply.
