Imagine someone driving a high-end sports car (picked at random: a £1.5 million Koenigsegg Regera) to a pub, parking it, and sauntering inside. It's the pub where you're drinking. They start walking around the patrons, slip their hands into people's pockets in plain view, and smile at you as they pull out your wallet and empty it of cash and cards.
This not-so-sophisticated pickpocket will stop only if you ask out loud, "What the hell are you doing?" "We apologise for the inconvenience," they reply. "It's an opt-out system, dude."
It sounds ridiculous. But this appears to be the approach the government is pursuing to appease AI companies. The Financial Times reports that talks will soon begin, allowing AI companies to collect content from individuals and organizations unless they explicitly opt out of having their data used.
The AI revolution is both rapid and comprehensive. Whether you're one of the 200 million people who log on to ChatGPT every week, or you've merely dabbled in generative AI competitors like Claude or Gemini, you're almost certainly interacting with AI systems, whether you know it or not. But to keep the AI fire from burning out, two sources must be constantly replenished. One is energy, which is why AI companies are getting into the nuclear power plant business. The other is data.
Data is essential to AI systems because it teaches them to mimic how we communicate. Whatever "knowledge" an AI has (a disputed term, given that it is really a sophisticated pattern-matching machine) comes from the data used to train it.
One study predicts that large language models like ChatGPT will run out of fresh training data by 2026; their appetite is voracious. Without that data, the AI revolution could stall. Tech companies know this, which is why they are licensing content left, right and centre. But licensing creates friction, and friction is unwelcome in an industry whose unofficial motto for the past decade-plus has been "move fast and break things".
This is why they are already trying to steer us towards an opt-out approach to copyright, in which everything we type, post and share becomes AI training data by default unless we say no, rather than an opt-in regime, in which companies must ask before using our data. We can already see how companies are nudging us towards this reality. This week, X began notifying users of changes to its terms of service that allow all posts to be used to train Grok, Elon Musk's AI model designed to compete with ChatGPT. Meta, the parent company of Facebook and Instagram, made similar changes, spawning the widespread "Goodbye Meta AI" urban legend: a copy-and-paste post that purportedly, but does not, invalidate such terms.
It's clear why AI companies want an opt-out system. Ask most people whether the books they write, the music they produce, or the posts and photos they share on social networks should be used to train an AI, and they'll probably say no. The wheels of the AI revolution would grind to a halt. It is less clear why the government would want to allow such a change to a concept of copyright ownership that has existed for more than 300 years and been enshrined in law for more than 100 years. But like many things, it seems to come down to money.
The government faces lobbying from big tech companies suggesting that an opt-out regime is a prerequisite for the UK to be considered a place to invest in AI innovation and share in the spoils. A lobbying document produced by Google suggests that supporting its approach to copyright "could make the UK a competitive place to develop and train AI models in the future". The government's framing of the issue, which already puts an opt-out option on the table, is therefore a major victory for big tech lobbyists.
With so much money flowing into the tech industry and high levels of investment going into AI projects, Keir Starmer understandably doesn’t want to miss out on the potential benefits. It would be remiss of the Government not to consider how to appease the tech companies developing world-changing technology and help turn Britain into an AI powerhouse.
But this is not the answer. To be clear, the opt-out regime under discussion in the UK would let companies effectively treat every post we make, every book we write and every song we create as theirs to train on, hoovering up our data without penalty unless we object. It would require us to contact every individual service and say, "No, we don't want you to chop up our data and spit out a poor composite image of us." Those services could number in the hundreds, from large technology companies to small research institutes.
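To see how lopsided that burden is, consider the main opt-out mechanism publishers have today: blocking AI crawlers one by one in a site's robots.txt file. The sketch below, using Python's standard-library robots.txt parser, shows such a policy in action. GPTBot (OpenAI) and Google-Extended (Google's AI-training token) are real, documented crawler names; the site, the URL and the "SomeSearchBot" crawler are hypothetical, and each new AI crawler that appears would need its own entry.

```python
# A minimal sketch of opting out of AI training via robots.txt.
# GPTBot and Google-Extended are documented crawler tokens; the
# site and the generic "SomeSearchBot" crawler are hypothetical.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# AI-training crawlers are refused; ordinary crawlers are not.
print(parser.can_fetch("GPTBot", "https://example.com/article"))
print(parser.can_fetch("SomeSearchBot", "https://example.com/article"))
```

Note the asymmetry this illustrates: silence means consent, and the list of user agents to refuse must be maintained by the site owner forever, which is exactly the opt-out dynamic the proposed copyright regime would generalise.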
As a reminder, OpenAI, a company now valued at more than $150 billion, plans to abandon its founding non-profit principles and become a for-profit company. It has enough in its coffers to pay for training data rather than relying on the charity of the general public. Surely such companies can afford to fill their own pockets without emptying ours. So: hands off.