Jacques Silva | Nurphoto | Getty Images
LONDON — Britain says it wants to do “its own thing” when it comes to regulating artificial intelligence, hinting at a possible divergence from approaches taken by its main Western counterparts.
“It’s very important that we, as a UK, do our own thing when it comes to regulation,” Feryal Clark, the UK’s minister for AI and digital government, told CNBC in an interview that aired on Tuesday.
She added that the government already has “good relationships” with AI companies such as OpenAI and Google DeepMind, which have voluntarily opened up their models to the government for security testing.
“It is very important that we are driven by this safety from the outset when the models are developed … and so we will work with the sector on any safety measures that come forward,” Clark added.
Her comments echoed remarks by Prime Minister Keir Starmer on Monday that the UK had “the freedom now to regulate to do it as we think is best for the UK” after Brexit.
“You have different models around the world, you have the EU approach and the US approach, but we have the opportunity to choose the one that we think is in our best interest and we intend to do that,” Starmer said in response to a journalist’s question following the announcement of a 50-point plan to make the UK a world leader in artificial intelligence.
Until now, Britain has refrained from introducing formal laws to regulate artificial intelligence, instead leaving individual regulators to enforce existing rules for businesses when it comes to the development and use of artificial intelligence.
This differs from the EU, which has introduced comprehensive EU-wide legislation aimed at harmonizing rules for technology across the bloc, using a risk-based approach to regulation.
The US, meanwhile, has no AI regulation at the federal level; instead, a patchwork of regulatory frameworks has emerged at the state and local level.
During Starmer’s election campaign last year, the Labour Party pledged in its manifesto to introduce regulation focused on so-called “frontier” AI models — meaning large language models such as OpenAI’s GPT.
However, the UK has so far not confirmed the details of the proposed AI safety legislation, instead saying it will consult with the industry before proposing formal regulations.
“We’re going to work with the sector to develop that and push that forward in line with what we said in our manifesto,” Clark told CNBC.
Chris Mooney, partner and head of commercial at London-based law firm Marriott Harrison, told CNBC that the UK is taking a “wait and see” approach to AI regulation, despite the EU moving forward with an AI law.
“While the UK government says it has taken a ‘pro-innovative’ approach to AI regulation, our experience with clients shows that they find the current position uncertain and therefore unsatisfactory,” Mooney told CNBC via email.
One area where the Starmer government has been vocal about reforming AI rules is copyright.
At the end of last year, the UK opened a consultation on revising the country’s copyright system to assess possible exemptions from existing rules for AI developers who use the works of artists and media publishers to train their models.
Sachin Dev Duggal, CEO of London-headquartered startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” acting without clear rules is “borderline reckless.”
“We’ve already missed important regulatory periods twice — first with cloud computing and then with social media,” Duggal said. “We can’t afford to make the same mistake with AI, where the stakes are exponentially higher.”
“UK data is our crown jewel; it should be used to build sovereign AI capabilities and create British success stories, not simply to fuel foreign algorithms that we cannot effectively regulate or control,” he added.
Details of Labour’s plans for AI legislation were originally expected to appear in King Charles III’s speech opening the UK Parliament last year.
However, the government has only committed to creating “appropriate legislation” for the most powerful AI models.
“The UK government needs to bring clarity here,” John Buyers, international head of AI at law firm Osborne Clarke, told CNBC, adding that he had learned from sources that the consultation on formal AI safety laws was “awaiting release.”
“By issuing piecemeal advice and plans, the UK has missed an opportunity to provide a holistic view of where its AI economy is heading,” he said, adding that a failure to reveal details of the new AI safety laws would lead to uncertainty for investors.
However, some in the UK tech scene believe a more relaxed, flexible approach to AI regulation may be in order.
“It’s clear from recent discussions with the government that there is a significant effort to protect AI,” Russ Shaw, founder of advocacy group Tech London Advocates, told CNBC.
He added that the UK was well placed to adopt a “third way” on AI safety and regulation — sector-specific rules for industries such as financial services and healthcare.