CalChamber, Tech Experts, Leading Calif. Democrat Oppose AI Bill

An artificial intelligence (AI) bill opposed by the California Chamber of Commerce and other groups was sent to the Assembly Appropriations Committee Suspense File this week.

SB 1047 (Wiener; D-San Francisco) enacts the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

In addition to being opposed by a CalChamber-led coalition of business and industry groups, SB 1047 was opposed this week in a Fortune commentary by a renowned computer scientist, and letters from the ranking member of the U.S. House Committee on Science, Space and Technology, and academic AI researchers from seven University of California campuses plus the University of Southern California and Stanford University.

SB 1047

SB 1047 requires frontier AI developers, among other things, to make a positive safety determination before initiating training of a covered model, subject to harsh penalties, including criminal penalties. The bill creates significant uncertainty for businesses due to vague, overbroad, impractical, and at times infeasible standards, requirements, and definitions. It focuses almost exclusively on developer liability, creating liability for failing to foresee and block any and all conceivable uses of a model that might do harm—even if a third party jailbreaks the model.

As a consequence of these issues, the bill deters open-source development and undermines technological innovation and the economy. It further imposes unreasonable requirements on operators of computing clusters, including a requirement to predict whether a prospective customer “intends to utilize the computing cluster to deploy a covered model” and to implement a “kill switch” to enact a full shutdown in the event of an emergency. It also establishes an entirely new regulatory body, the “Frontier Model Division,” within the Department of Technology, with an ambiguous and ambitious purview.

Commentary

In an August 6 commentary for Fortune, Dr. Fei-Fei Li warns that SB 1047 would have significant unintended consequences that will stifle innovation.

Widely credited with being the “Godmother of AI,” Li is a professor and co-director of Stanford’s Human-Centered AI Institute.

In her commentary, Li calls SB 1047 “well-meaning,” but warns that due to the penalties and restrictions the legislation sets on open-source development, SB 1047 will not just harm innovation in California, but in the entire country as well.

“If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and ‘little tech,’” she says. “SB-1047 will unnecessarily penalize developers, stifle our open-source community, and hamstring academic AI research, all while failing to address the very real issues it was authored to solve.”

Li points out that it’s impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model. SB 1047’s penalties unduly punish developers and will force them to pull back.

The bill also “shackles” open-source development by mandating, in certain cases, a “kill switch”: a mechanism by which the program can be shut down at any time.

“If developers are concerned that the programs they download and build on will be deleted, they will be much more hesitant to write code and collaborate,” she says.

Open-source development is also vital to academia and the restrictions on open-source development would be a “death knell” to academic AI, Li warns.

“Take computer science students, who study open-weight AI models. How will we train the next generation of AI leaders if our institutions don’t have access to the proper models and data? A kill switch would even further dampen the efforts of these students and researchers, already at such a data and computation disadvantage compared to Big Tech,” she says.

Rather than pass an “overly and arbitrarily restrictive” mandate such as SB 1047, California should adopt a policy that will empower open-source development and put forward uniform and well-reasoned rules, Li states.

Congressional Letter

In an August 7 letter to the author of SB 1047, Congresswoman Zoe Lofgren (D-San Jose) says that while she firmly supports AI governance to guard against demonstrable risks to public safety, “unfortunately, this bill would fall short of these goals — creating unnecessary risks for both the public and California’s economy.”

Lofgren is the ranking member of the U.S. House Committee on Science, Space, and Technology, which has jurisdiction over AI.

She notes that the science surrounding AI safety is still in its infancy and that SB 1047 requires firms to adhere to voluntary guidance from industry and the National Institute of Standards and Technology that does not yet exist.

“Further, SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement,” Lofgren writes.

She also voices concern that SB 1047 could have unintended consequences from its treatment of open-source models. “Given that most of the discoveries that led us to this moment were achieved through open source and open science, SB 1047 could have a pernicious impact on U.S. competitiveness in AI, especially in California,” Lofgren says.

She urges the California Legislature to put the bill aside for further study and consideration.

AI Researchers

Signing a statement of opposition to SB 1047 are academic AI researchers — faculty, postdoctoral researchers, and graduate students of the University of California at Berkeley, Davis, Los Angeles, Riverside, San Diego, Santa Barbara, and Santa Cruz, and postdoctoral researchers and graduate students of the University of Southern California and Stanford University.

“We agree that this bill will have broad negative consequences, hamper economic dynamism, and weaken California’s position as a global AI hub, in the service of questionable, unscientific, and hypothetical public benefits,” the statement asserts.

As part of what the statement describes as a “researcher-centric” perspective in opposition to SB 1047, the signers cite the bill’s chilling effects on open-source model releases, to the detriment of research; comment on the “unscientific nature of AI risk forecasting and ‘capability’ assessment”; and express concerns about “the insufficiency of near-term carve outs for open-weight models,” among other concerns.