
OpenAI presents its preferred version of AI regulation in new ‘blueprint’


OpenAI on Monday published what it calls an “economic plan” for AI: a living document that sets out the policies the company thinks it can build on with the US government and its allies.

The plan, which includes a foreword from Chris Lehane, OpenAI’s VP of global affairs, states that the United States must act to attract billions in funding for the chips, data, energy, and talent needed to “win on AI.”

“Today, while some countries shrug off AI and its economic potential,” Lehane wrote, “the US government can pave the way for its AI industry to continue the country’s global leadership in innovation while protecting national security.”

OpenAI has repeatedly called on the US government to take more substantive action on AI and infrastructure to support technology development. The federal government has largely left the regulation of AI to the states, a situation that OpenAI describes in the plan as unsustainable.

In 2024 alone, state legislators introduced almost 700 AI-related bills, some of which conflict with one another. The Texas Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.

OpenAI CEO Sam Altman has also criticized existing federal laws on the books, such as the CHIPS Act, which aims to revitalize the US semiconductor industry by attracting domestic investment from the world’s leading chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “(hasn’t) been as effective as some of us hoped,” and that he thinks there is “a real opportunity” for the Trump administration to “do something much better as a follow-on.”

“What I really agree with (Trump) on is, it’s wild how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, any of those kinds of things. I understand how bureaucracy grows, but it’s not helpful to the country in general. It’s not particularly helpful when you think about what needs to happen for the United States to lead in AI. And the US really needs to lead in AI.”

To power the data centers needed to develop and run AI, OpenAI’s plan recommends “drastically” increasing federal spending on power and data transmission, along with a significant buildout of “new energy sources” such as solar, wind, and nuclear. OpenAI – along with its AI rivals – has previously backed nuclear power projects, arguing they are needed to meet the electricity demands of next-generation server farms.

Tech giants Meta and AWS have hit snags with their own nuclear efforts, albeit for reasons that have nothing to do with nuclear power itself.

In the near term, OpenAI’s plan proposes that the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that allow sharing models with allies while “limit(ing)” their export to “adversary nations.” In addition, the plan encourages the government to share certain national security-related information, such as briefings on threats to the AI industry, with vendors, and to help vendors secure resources to evaluate their models for risks.

“The federal government’s approach to frontier model safety and security should streamline requirements,” the plan says. “Responsibly exporting … models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on US technology, not technology funded by the Chinese Communist Party.”

OpenAI already counts a few US government departments as partners and – if its plan gains currency among policymakers – stands to add more. The company has partnered with the Pentagon on cybersecurity work and other related projects, and has teamed up with defense startup Anduril to supply its AI technology for systems the US military uses to counter drone attacks.

In its plan, OpenAI calls for the drafting of standards “recognized and respected” by other nations and international bodies on behalf of the US private sector. But the company stops short of endorsing mandatory rules or edicts. “(The government can create) a defined, voluntary pathway for companies that develop (AI) to work with the government to define model evaluations, test models, and exchange information to support the companies’ safeguards,” the plan says.

The Biden administration took a similar tack with its AI Executive Order, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the US AI Safety Institute (AISI), a federal government body that studies risks in AI systems and has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification – and the AISI – at risk of being dismantled.

OpenAI’s plan also addresses AI-related copyright, a hot-button issue. The company makes the case that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.

OpenAI, along with many other AI companies, trains its models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and offers limited ways for creators to “opt out” of its model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.

“(Other) actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights,” the plan says. “If the US and like-minded nations don’t address this imbalance through sensible measures that help advance AI for the long term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. (The government should ensure) that AI has the ability to learn from universal, publicly available information, just as humans do, while also protecting creators from unauthorized digital replicas.”

It remains to be seen which parts of OpenAI’s plan, if any, will influence legislation. But the proposals are a sign that OpenAI intends to remain a key player in the race for a unifying US AI policy.

In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including former Defense Department official Sasha Baker, former NSA chief Paul Nakasone, and Aaron Chatterji, formerly the Commerce Department’s chief economist under President Joe Biden.

As it makes hires and expands its global affairs division, OpenAI has become more vocal about which AI laws and rules it prefers, for instance throwing its weight behind Senate bills that would establish a federal regulatory body for AI and provide federal grants for AI R&D. The company has also opposed bills, notably California’s SB 1047, arguing that it would stifle AI innovation and push out talent.
