Imagine some people driving up to a pub in a top-of-the-range sports car – say a £1.5m [€1.8m] Koenigsegg Regera – parking it and casually getting out. They stroll into the pub where you’re drinking, work their way around the patrons, reach into your pockets in full view of everyone, and smile as they pull out your wallet and empty it of cash and cards.
The not-so-subtle theft only stops when you shout and ask what the hell they’re doing. “Sorry for the trouble,” says the pickpocket. “You had the chance to refuse, friend.”
It sounds absurd. Yet this appears to be the approach the government is taking to placate artificial intelligence (AI) companies. According to the Financial Times, a consultation will soon open on allowing AI companies to scrape content from individuals and organizations unless those individuals and organizations explicitly opt out of having their data used.
The AI revolution has been as sweeping as it has been rapid. Even if you’re not one of the 200 million people who log into ChatGPT every week, or a user of its generative AI competitors such as Claude and Gemini, you’ve undoubtedly interacted with an AI system – whether you know it or not. But the AI boom needs a continuous supply of two things if it is not to fizzle out. One is energy – which is why AI companies are getting into the business of buying nuclear power plants. The other is data.
Data is essential to AI systems because it is what they learn from. Whatever “knowledge” an AI has – and whether it has any is highly debatable, given that it is really just a pattern-matching machine – comes from the data it was trained on.
One study predicts that large language models like ChatGPT will run out of fresh training data by 2026, such is their voracious appetite. And without that data, the AI revolution could stall. Tech companies know this, which is why they’re signing blanket content-licensing deals all over the place. But that creates obstacles, and a sector whose unofficial motto for the past decade has been “move fast and break things” does not like obstacles.
For this reason, they are already trying to push us towards an opt-out approach to copyright, in which everything we write, post and share becomes AI training data by default unless we say no – rather than one in which companies must seek our permission before using our data. We can already see how companies are preparing us for this reality: this week, X began notifying users of a change to its terms and conditions that would allow all posts to be used to train Grok, Elon Musk’s AI model designed to compete with ChatGPT. Meanwhile, Meta, the company that owns Facebook and Instagram, has made a similar change – prompting the viral “Goodbye Meta AI” post, an urban legend that sharing a status update could somehow override legal terms.
The reason AI companies want inclusion by default is clear: ask most people whether they want the books they write, the music they produce, or the posts and photos they share on social media to be used to train AI, and they’ll say no. And then the AI revolution stalls. Why governments want to enable such a change to copyright – a concept that has existed for more than 300 years and been enshrined in law for more than 100 – is less clear. But, like many things, it seems to come down to money.
The government has faced lobbying from big tech companies suggesting that such a change is a prerequisite for them to treat the UK as a place worth investing in and sharing the benefits of AI innovation with. A lobbying document written by Google suggested that its preferred approach – automatic inclusion of all copyrighted work, with the option for individuals to opt out – would “ensure that the UK can be a competitive place to develop and train future AI models”. The government’s proposed framework, which already leans towards opt-out, is a major win for big tech’s lobbyists.
Given the amount of money sloshing around the technology sector and the scale of investment being poured into AI projects, it’s no surprise that Keir Starmer doesn’t want to miss out on potential profits. The government would be remiss if it did not consider how to court companies developing world-changing technology and how to make the UK a powerhouse in AI.
But this is not the answer. Let’s be clear: the proposed UK copyright scheme would effectively allow companies to steal our data – every post we make, every book we write, every song we create – without any consequence. It would require us to go to each separate service and tell them no, we don’t want them processing our data and churning out poor imitations of it – potentially hundreds of services, from large technology companies to small research labs.
Lest we forget, OpenAI – a company now valued at more than $150 billion – is planning to abandon its non-profit founding principles and become a for-profit company. It has more than enough money to pay for its training data, rather than relying on the enforced generosity of the general public. Such companies can certainly afford to put their hands in their own pockets – not ours. So: hands off.