Almost all the big AI news this year was about how fast the technology is advancing, the damage it’s causing, and speculation about how soon it will grow beyond the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here’s a breakdown of the most important AI legislation and regulatory efforts of the past year at the state, federal, and international levels.
US state legislatures took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, such as creating study committees, while others would have imposed serious civil liability on AI developers whose creations cause catastrophic harm to society. Most of the bills did not pass, but several states enacted significant legislation that could serve as a model for other states or for Congress (assuming Congress ever begins to function again).
As AI slop flooded social media ahead of the election, politicians from both parties got behind anti-deepfake laws. More than 20 states now prohibit misleading AI-generated political ads in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.
Unsurprisingly, given that the state is the tech industry’s backyard, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damage caused by their systems. The bill passed both houses of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.
Newsom, however, signed more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibits the distribution of a dead person’s AI-generated likeness without prior consent and mandates that agreements for the AI-generated likenesses of living people clearly specify how the content will be used.
Colorado passed a first-of-its-kind law in the US requiring companies that develop and use AI systems to take reasonable steps to ensure the tools are non-discriminatory. Consumer advocates have called the legislation an important baseline. Similar bills are likely to be widely debated in other states in 2025.
And, in a middle finger to our future robot overlords and the planet, Utah enacted a law which prohibits any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants and other non-human things.
Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers did very little.
Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden’s 2023 executive order on AI. And several regulators, notably the Federal Trade Commission and the Department of Justice, cracked down on deceptive and harmful AI systems.
The work agencies did to comply with the AI executive order wasn’t particularly sexy or headline-grabbing, but it laid important foundations for governing public and private AI systems in the future. For example, federal agencies embarked on an AI-talent hiring spree and created standards for responsible model development and harm mitigation.
And, in a big step toward increasing the public’s understanding of how the government uses AI, the Office of Management and Budget got (most of) its fellow agencies to disclose critical information about the AI systems they use that can impact people’s rights and safety.
On the enforcement side, the FTC’s Operation AI Comply targeted companies that use AI in deceptive ways, such as writing fake reviews or providing legal advice, and sanctioned AI gun-detection company Evolv for making misleading claims about what its product could do. The agency also settled an investigation into facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of racial and gender bias, and banned drugstore chain Rite Aid from using facial recognition for five years after an investigation determined the company was using the tools to discriminate against shoppers.
The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing real estate software company RealPage of a massive algorithmic price-fixing scheme that has raised rents across the nation. It also won several antitrust cases against Google, including one involving the company’s monopoly over internet search, a ruling that could significantly shift the balance of power in the burgeoning AI search industry.
In August, the European Union’s AI Act went into effect. The law, which already serves as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards for training data quality and human oversight. It bans other AI systems outright, such as algorithms that could be used to assign social scores to a country’s residents that are then used to deny them rights and privileges.
In September, China issued a major AI safety governance framework. Like similar frameworks published by the US National Institute of Standards and Technology, it is non-binding, but it creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.
One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country’s Senate passed a comprehensive AI safety bill. It faces a challenging path forward, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose what copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material is used.
Like the EU AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.