Why are AI software makers lobbying for kids’ online safety laws?

THINK OF THE CHILDREN

Last week, the U.K. passed the Online Safety Bill, a law that’s meant to help snuff out child sexual exploitation and abuse on the internet. The law will require websites and services to scan for content that could be harmful to kids and somehow remove it before it appears online.

This could fundamentally change the rules of the game not only for big social media sites but also for any platform that offers messaging services. A provision within the law requires companies to develop technology that enables them to scan encrypted messages, effectively banning end-to-end encryption. There is powerful backing for similar laws in both the U.S. and the European Union.

Scouring the web in an effort to protect children from the worst kinds of abuse sounds like a noble endeavor. But practically speaking, this means the state would be surveilling literally everything we write or post, whether on a public forum or in a private message. If you don’t already have a snoopy government on your hands, a law like this could put you just one election away from a true mass surveillance regime of unprecedented scale. Surely, there are other ways to keep kids safe that won’t be quite so detrimental to democracy.

As a parent of two tiny children, I feel a little twinge when I criticize these kinds of laws. Maybe the internet really is rife with content that is harmful to children. Maybe we should be making these tradeoffs after all. But is kids’ safety really what’s driving the incredibly powerful lobbying groups that somehow have a seat at every table that matters on this issue, from London to D.C. to Brussels?