- The British government said this week it would consider “appropriate legislation” for the world’s most powerful artificial intelligence models.
- But details of the actual AI bill that many tech executives and commentators had been hoping for were not released.
- The new Labour government faces the challenge of striking a delicate balance between making rules strict enough and allowing for innovation.
An internet user checks ChatGPT on his mobile phone in Suqian, Jiangsu Province, China, on April 26, 2023.
Future Publishing | Getty Images
LONDON — Britain is set to introduce its first artificial intelligence laws, but Prime Minister Keir Starmer’s new Labour government faces the delicate challenge of making the rules tough enough while still leaving room for innovation.
In the King’s Speech, delivered Wednesday by King Charles III on behalf of Starmer’s government, the government said it would “seek to bring in appropriate legislation to impose requirements on those working on developing the most powerful artificial intelligence models.”
But the speech did not include any mention of the actual AI legislation that many tech executives and commentators had been hoping for.
In the European Union, authorities have introduced comprehensive legislation, the “AI Act,” that will impose stricter regulations on companies that develop and use artificial intelligence.
Many tech companies, large and small, are hoping the UK won’t go down the same path and apply rules they consider overly strict.
The Labour Party is expected to introduce formal rules on AI, as outlined in its election manifesto.
Starmer’s government has promised to introduce “binding regulation of the small number of companies developing the most powerful AI models” as well as legislation to ban sexually explicit deepfakes.
By targeting the most powerful AI models, Labour would impose tougher regulation on companies such as OpenAI, Microsoft, Google and Amazon, as well as AI start-ups such as Anthropic, Cohere and Mistral.
“The largest AI companies are going to face much more scrutiny than they’ve ever faced before,” Matt Calkins, CEO of software company Appian, told CNBC.
“What we need is an enabling environment for widespread innovation, governed by a clear regulatory framework that provides a level playing field and transparency for all,” he added.
Lewis Liu, head of AI at contract management software company Sirion, warned that governments should avoid a “broad-brush hammer approach to regulating every use case.”
Use cases such as clinical diagnostics, which involve sensitive medical data, shouldn’t be put in the same category as, say, enterprise software, he said.
“The UK has an opportunity to get this nuance right and deliver huge benefits for the tech industry,” Liu told CNBC, adding that there were “positive signs” about Labour’s AI plans so far.
An AI bill would mark a contrast with the approach of Starmer’s predecessor, Rishi Sunak, whose Conservative government opted for a light-touch approach to AI and sought instead to apply existing rules to the technology.
The previous Conservative government said in a policy paper in February that introducing binding measures too early could “fail to effectively address risks, quickly become outdated or stifle innovation”.
Peter Kyle, now the UK’s technology secretary, said in February that Labour would make it legally necessary for companies to share safety test data about their AI models with the government.
Kyle, who was shadow technology secretary at the time, told the BBC in an interview that the results of those tests “must be made available to the government” by law.
The Sunak government had secured agreements from tech companies to share safety testing information with the AI Safety Institute, a government-backed body that tests advanced AI systems, but this was done on a voluntary basis.
The UK government wants to avoid pushing AI rules so hard that they stifle innovation. Labour also emphasized in its manifesto that it would “support a variety of business models that bring innovation and new products to market.”
Zahra Bahrololoumi, CEO of Salesforce UK and Ireland, told CNBC that any regulation would need to be “nuanced,” with responsibility allocated “accordingly,” adding that she welcomed the government’s commitment to bring in “appropriate legislation.”
Matthew Houlihan, Cisco’s senior director of government relations, said any regulation of AI “needs to be centered around a thoughtful, risk-based approach.”
Other proposals already put forward by British politicians offer some insight into what Labour’s AI bill might contain.
Chris Holmes, a Conservative peer who sits in the UK’s House of Lords, introduced a bill to regulate AI last year. It passed its third reading in the Lords in May and moved to the House of Commons.
Holmes’ bill is less likely to become law than legislation proposed by the government, but it does offer some ideas for how Labour could craft its own AI rules.
Among its proposals is the creation of a centralized AI authority to oversee enforcement of the rules governing the technology.
Under the bill, companies would have to provide that authority with details of any third-party data and intellectual property used to train their models, and confirm that the data and IP are being used with the consent of the original source.
In some ways, this mirrors the role of the EU’s AI Office, which is responsible for overseeing the development of advanced AI models.
Another of Holmes’ suggestions is that companies appoint dedicated AI officers responsible for ensuring the safe, ethical and fair use of AI, and for ensuring that the data used in any AI technology is unbiased.
Based on Labour’s promises so far, any such legislation would inevitably be “a far cry from the sweeping scope” of the EU’s AI Act, Matthew Holman, a partner at law firm Cripps, told CNBC.
Holman added that the UK was more likely to find a “middle ground” than to impose heavy-handed disclosure requirements on AI model makers. For example, the government could ask AI companies to share what they are working on in closed-door sessions with the AI Safety Institute, without requiring them to reveal trade secrets or source code.
Kyle previously said at London Tech Week that Labour would not pass anything as tough as the EU’s AI Act because it did not want to stifle innovation or deter investment from major AI developers.
Still, a UK AI law would go a step further than the US, which currently has no federal AI legislation, while China’s regulations are stricter than anything the EU or the UK is likely to propose.
Chinese regulators finalized rules last year governing generative AI, aimed at combating illegal content and strengthening security protections.
Sirion’s Liu said one thing he would like the UK government to avoid is restricting open-source AI models. “It is important that the UK’s new AI regulations do not stifle open source or fall into a regulatory trap,” Liu told CNBC.
“There’s a big difference between the harm that can be done by a large LLM like OpenAI’s and a specific, customized open-source model that a startup uses to solve a specific problem.”
Herman Narula, CEO of metaverse venture builder Improbable, agreed that restricting open-source AI innovation is a bad idea. “New government action is needed, but this action needs to be focused on creating a world in which open-source AI companies can survive. This is necessary to prevent monopolies,” Narula told CNBC.