US President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas on November 19, 2024.
Brandon Bell | Via Reuters
The year 2025 will see some changes in the US political landscape—and these changes will have major implications for the regulation of artificial intelligence.
President-elect Donald Trump will be inaugurated on January 20. He will be joined at the White House by a group of top business advisers, including Elon Musk and Vivek Ramaswamy, who are expected to influence policy thinking on emerging technologies such as artificial intelligence and cryptocurrencies.
On the other side of the Atlantic, a tale of two jurisdictions has emerged: Great Britain and the European Union have diverged in their regulatory thinking. While the EU has taken a heavier hand with the Silicon Valley giants behind the most powerful AI systems, Britain has adopted a lighter-touch approach.
In 2025, the state of AI regulation around the world could be in for a major overhaul. CNBC takes a look at some of the key developments to watch, from the evolution of the EU’s landmark AI Act to what the Trump administration might do for the U.S.
Elon Musk walks on Capitol Hill during a meeting with Senate Republican Leader-elect John Thune (R-SD), in Washington, U.S., on December 5, 2024.
Benoit Tessier | Reuters
Although AI was not an issue that featured heavily in Trump's election campaign, it is expected to be one of the key sectors set to benefit under the incoming US administration.
For one, Trump appointed Musk, CEO of electric car maker Tesla, to co-chair his "Department of Government Efficiency" alongside Ramaswamy, the American biotech entrepreneur who dropped out of the 2024 presidential race to endorse Trump.
Matt Calkins, CEO of Appian, told CNBC that Trump's close relationship with Musk could put the US in a strong position when it comes to artificial intelligence, which he sees as a positive indicator.
“We finally have one person in the US administration who really knows about artificial intelligence and has an opinion about it,” Calkins said in an interview last month. Musk has been one of Trump’s most prominent supporters in the business community, even appearing at some of his campaign rallies.
There is no confirmation yet of what Trump has planned in terms of possible presidential directives or executive orders. But Calkins believes Musk is likely to propose guardrails to ensure that the development of artificial intelligence does not threaten civilization, a risk Musk has warned about several times in the past.
"He has an unquestionable reluctance to allow artificial intelligence to bring catastrophic consequences for humans. He's certainly worried about that, and he was talking about it long before he took a political position," Calkins told CNBC.
There is currently no comprehensive federal AI legislation in the US. Rather, there is a patchwork of regulation at the state and local level, with numerous AI bills introduced across 45 states as well as Washington, D.C., Puerto Rico, and the US Virgin Islands.
The European Union is so far the only jurisdiction in the world to promote comprehensive rules for artificial intelligence with its AI Act.
Jacques Silva | Nurphoto | Getty Images
The European Union has so far been the only jurisdiction in the world to push through comprehensive statutory rules for the artificial intelligence industry. Earlier this year, the bloc's AI Act, a first-of-its-kind AI regulatory framework, officially entered into force.
The law is not yet fully in effect, but it is already causing tension among large US technology companies, which worry that some aspects of the regulation are too strict and may stifle innovation.
In December, the EU AI Office, a newly created body overseeing models under the AI Act, published a second draft code of practice for general-purpose AI (GPAI) models, which covers systems like OpenAI's GPT family of large language models, or LLMs.
The second draft included exemptions for providers of certain open-source AI models. Such models are typically made publicly available so that developers can build their own custom versions. It also includes a requirement for developers of "systemic" GPAI models to undergo rigorous risk assessments.
The CCIA, a trade association for the computer and communications industry whose members include Amazon, Google and Meta, warned that the draft "contains measures that go far beyond the agreed scope of the Act, such as far-reaching copyright measures."
The AI office was not available for comment when contacted by CNBC.
It is worth noting that the EU AI Act is still far from being fully implemented.
As Shelley McKinley, chief legal officer of the popular code repository platform GitHub, told CNBC in November, “the next phase of work has begun, which may mean we have more ahead of us than behind us.”
For example, the first provisions of the Act will take effect in February. Those provisions cover "high-risk" AI applications such as remote biometric identification, loan decisioning and educational scoring. A third draft of the code of practice for GPAI models is scheduled for publication that same month.
European tech leaders are worried about the risk that EU punitive measures against US tech companies could provoke a backlash from Trump, which could in turn force the bloc to soften its approach.
Take, for example, antitrust regulation. The EU has been an active player in taking steps to curb the dominance of US tech giants, but that could lead to a backlash from Trump, according to Andy Yen, CEO of Swiss VPN firm Proton.
"(Trump's) view is that he probably wants to regulate his own tech companies himself," Yen told CNBC in a November interview at the Web Summit tech conference in Lisbon, Portugal. "He doesn't want Europe to interfere."
British Prime Minister Keir Starmer gives an interview to the media while attending the 79th United Nations General Assembly at the United Nations Headquarters in New York, the United States, on September 25, 2024.
Leon Neal | Via Reuters
One country to watch is the UK. Britain has previously shied away from introducing statutory obligations for makers of AI models over concerns that new legislation could prove too restrictive.
However, Keir Starmer's government has said it plans to draw up legislation for artificial intelligence, although details remain thin. The general expectation is that the UK will take a more principles-based approach to AI regulation, as opposed to the EU's risk-based framework.
Last month, the government gave its first major indication of where regulation is heading by announcing a consultation on measures to regulate the use of copyrighted content for training AI models. Copyright is a big issue for generative AI, and for LLMs in particular.
Most LLMs use publicly available data from the open web to train their AI models. But that often includes works of art and other copyrighted material. Artists and publishers like The New York Times allege that these systems are unfairly scraping their valuable content without consent to generate original output.
To address this issue, the UK government is considering creating an exception to copyright law for AI model training, while still allowing rights holders to opt out of having their works used for training purposes.
Appian’s Calkins said the UK could become a “world leader” on the issue of copyright infringement by AI models, adding that the country was not “subject to the same overwhelming lobbying attack from national AI leaders as the US”.
U.S. President Donald Trump, right, and Chinese President Xi Jinping walk past members of the People’s Liberation Army (PLA) during a welcoming ceremony outside the Great Hall of the People in Beijing, China, Thursday, Nov. 9, 2017.
Tilai Sheng | Bloomberg | Getty Images
Finally, as governments around the world seek to regulate rapidly developing artificial intelligence systems, there is a risk that geopolitical tensions between the US and China could escalate under Trump.
During his first term as president, Trump enacted several hawkish policy measures on China, including the decision to add Huawei to a trade blacklist restricting it from doing business with American technology suppliers. He also launched a bid to ban TikTok, which is owned by Chinese firm ByteDance, in the US, although he has since softened his stance on TikTok.
China is racing to beat the US for dominance in AI. At the same time, the US has taken measures to restrict China's access to key technologies, mainly chips like those designed by Nvidia, which are required to train more advanced AI models. China has responded by attempting to build its own homegrown chip industry.
Technologists worry that the geopolitical divide between the US and China over artificial intelligence could lead to other risks, such as the possibility for one of the two to develop a form of AI smarter than humans.
Max Tegmark, founder of the nonprofit Future of Life Institute, believes the US and China could in future each create a form of AI that can improve itself and design new systems without human supervision, potentially forcing both countries' governments to come up with rules around AI safety.
"My optimistic path forward is that the US and China unilaterally impose national safety standards preventing their own companies from doing harm and building uncontrollable AGI, not to appease the rival superpowers, but just to protect themselves," Tegmark told CNBC in a November interview.
Governments are already trying to work together to figure out how to create regulations and frameworks around AI. In 2023, the UK hosted a global AI Safety Summit, which the US and Chinese administrations attended, to discuss potential guardrails around the technology.
– CNBC's Arjun Kharpal contributed to this report