California Governor Gavin Newsom has vetoed a landmark artificial intelligence (AI) safety bill that had faced significant resistance from major tech companies, according to BBC News. The proposed legislation, which would have imposed some of the first AI regulations in the US, was rejected over concerns it could stifle innovation and drive developers out of the state.
The bill, authored by Senator Scott Wiener, sought to introduce strict safety testing for advanced AI systems and mandate a “kill switch” to disable potentially dangerous models. It also aimed to establish official oversight of the development of powerful “Frontier Models.”
In his veto statement, Newsom explained that the legislation “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” adding that the bill would apply strict regulations even to basic AI functions, so long as they were deployed by large systems.
Newsom announced plans to protect the public from the risks posed by AI, requesting input from leading experts to help shape future safeguards. In recent weeks, he has signed 17 other bills related to technology, including measures to combat misinformation and deepfakes created with generative AI.
Senator Wiener criticised the decision, claiming the veto allows AI companies to continue developing “extremely powerful technology” without government oversight. He argued that without this regulation, AI firms would face “no binding restrictions from US policymakers,” especially given the lack of action from Congress.
The bill had drawn opposition from tech giants such as OpenAI, Google, and Meta, who warned it would hinder AI’s development. Wei Sun, an analyst at Counterpoint Research, called the restrictions premature, arguing that regulating specific AI applications, rather than the technology itself, would be more beneficial in the long term.