California Gov. Gavin Newsom vetoes a bill that would have created the nation's first safety measures for artificial intelligence

SACRAMENTO, Calif. – California Gov. Gavin Newsom on Sunday vetoed a landmark bill that aimed to establish first-in-the-nation safety measures for large artificial intelligence models.

The decision is a major blow to efforts to rein in the homegrown industry, which is evolving rapidly with little oversight. The bill would have established some of the first regulations for large-scale AI models in the country and paved the way for AI safety rules nationwide, advocates said.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California needs to be a leader in regulating AI given federal inaction, but that the proposal “may have a chilling effect on the industry.”

The proposal, which drew strong opposition from startups, tech giants and several Democratic members of the U.S. House, could have hurt the homegrown industry by imposing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Instead, Newsom announced Sunday that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections for workers.

The bill’s author, Democratic Sen. Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”

“The companies developing advanced AI systems recognize that the risks these models pose to the public are real and rapidly increasing. While the major AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary industry commitments are unenforceable and rarely work well for the public,” Wiener said in a statement Sunday afternoon.

Wiener said the debate around the bill has dramatically advanced the issue of AI safety and he will continue to press that point.

The bill was among a series of measures passed by lawmakers this year to regulate AI, combat deepfakes and protect workers. State lawmakers said California had to act this year, citing the hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that cost more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, advocates said.

A number of leading AI companies have volunteered over the past year to follow security measures outlined by the White House, such as testing and sharing information about their models. California’s bill would have required AI developers to meet requirements similar to those commitments, the measure’s supporters said.

But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.

The governor said earlier this summer that he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are based in the state.

He has promoted California as an early adopter because the state could soon use generative AI tools to combat highway congestion, provide tax advice and streamline homeless programs. The state also announced last month a voluntary partnership with AI giant Nvidia to train students, faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They may either copy it or do something similar in the next legislative session,” Rice said. “So it’s not going away.”

The Associated Press and OpenAI have a license and technology agreement that allows OpenAI to access a portion of AP’s text archives.

Copyright © 2024 by The Associated Press. All rights reserved.