California's controversial AI safety bill, SB 1047, is now one step away from becoming law. But as it inches closer to reality, the debate around its potential impact on innovation and safety is heating up.
The Bill at a Glance
SB 1047 has cleared both the California State Assembly and Senate, leaving just one final procedural vote before it lands on Governor Gavin Newsom's desk. If signed, it would require AI companies operating in California to implement several safety measures before training advanced foundation models, including:
- The ability to quickly shut down a model in case of a safety breach
- Protection against unsafe post-training modifications
- Testing procedures to evaluate whether a model poses a risk of critical harm
Sound reasonable? Not everyone thinks so.
A House Divided
The AI industry is split on SB 1047. OpenAI opposes the bill. Anthropic initially pushed back, but now appears supportive after several of its proposed amendments were adopted. AI experts are divided, too.
Some, like Andrew Ng and Fei-Fei Li, argue that the bill focuses too heavily on catastrophic harm and could stifle innovation, particularly in open-source development. Others, like Geoffrey Hinton, believe it’s a sensible and necessary approach to AI regulation.
Why This Matters Beyond California
Here's the kicker: This isn't just about companies based in California, says Marketing AI Institute founder and CEO Paul Roetzer on Episode 113 of The Artificial Intelligence Show.
"It's not just companies in California, it's companies that do business in California,” he says.
Given that fact, and California's massive economy, SB 1047 could have an impact on AI companies—and firms that rely on their products—far beyond California’s borders.
Corporate America Is Watching
The uncertainty around AI regulation is already sending ripples through the business world:
- 27% of Fortune 500 companies cited AI regulation as a risk in recent SEC filings
- Concerns range from higher compliance costs to potential drags on revenue
- Some corporations are proactively setting their own AI guidelines
“This uncertainty matters to businesses,” says Roetzer.
If the law is signed, many companies will need to comply with it, and that will affect everyone at the company. “The CMO [for instance] is all of a sudden going to have to care about this law,” says Roetzer.
The Unintended Consequences
If SB 1047 becomes law, we might also see some significant shifts in the AI landscape, says Roetzer.
The extra layers of safety checks and potential government interventions could extend the development cycle of new AI models from 8-12 months to 18-24 months.
Instead of big "model drops," we might see more frequent, smaller capability updates to navigate regulatory hurdles.
And major AI companies might line up to voluntarily participate in federal initiatives, using them as cover to continue development.
The Regulation Dilemma
The core challenge lawmakers face is striking a balance between safety and innovation.
The bill attempts to set thresholds based on the compute and cost required to train a model, covering models trained with more than 10^26 operations at a cost above $100 million. But in a field advancing as rapidly as AI, today's "unsafe" model could be tomorrow's obsolete technology.
An emerging school of thought suggests regulating at the application level rather than the model level.
AI expert Andrew Ng compared AI to other general purpose technologies in a TIME editorial, writing:
“Consider the electric motor. It can be used to build a blender, electric vehicle, dialysis machine, or guided bomb. It makes more sense to regulate a blender, rather than its motor. Further, there is no way for an electric motor maker to guarantee no one will ever use that motor to design a bomb. If we make that motor manufacturer liable for nefarious downstream use cases, it puts them in an impossible situation. A computer manufacturer likewise cannot guarantee no cybercriminal will use its wares to hack into a bank, and a pencil manufacturer cannot guarantee it won’t ever be used to write illegal speech. In other words, whether a general purpose technology is safe depends much more on its downstream application than on the technology itself.”