Is California Going a Bit Too Far?
The state’s tech bills are as ambitious as they are ambiguous.
I’m a fan of some form of smart regulation on AI to make sure it is developed and used in a responsible way. I’ll admit that this is hard to do: Technology is moving really quickly, and you want regulation that enables innovation without accidentally stifling it. Lots of people have been taking shots on goal, and the devil is in the details, but as I’ve written in past newsletters, I like parts of what the White House Executive Order on AI has put forward, as well as elements of what the Europeans are doing. But let’s be honest: This is going to be tricky to get right.
To begin with, there are a lot of constituencies. There are the country’s economic and national security concerns about staying competitive with China. There are the companies inventing this technology in the US. And there is us, the people who have to live in the world that is being (re)shaped around us.
Enter California. The state is flexing its muscles as both one of the largest states and one of the world’s largest economies. (It’s a similar play to the way it sets vehicle emissions standards that the rest of the country eventually ends up following, or maybe it’s akin to the Brussels Effect, where the rest of the world sometimes just follows what the Europeans are up to.) There have been a lot of AI-related bills showing up in California recently, and they’ve become a topic of national discussion. Just look at this tracker for an idea of the AI- and data-related bills that have been making their way through the state’s legislature this year.
I’m going to focus on the biggest: SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which will be voted on this week. Here are the big points:
The proposal would require safety testing for many of the most advanced AI models: those that cost more than $100 million to develop (think the models behind ChatGPT or those built by Google DeepMind) or that exceed a specified threshold of computing power.
Developers of AI software operating in California would need to provide methods for turning off the AI models should they go awry — basically a kill switch in case they, like, take over the Department of Defense server or threaten to extinguish humanity — and to report each safety incident affecting their model.
Developers would be required to hire third-party auditors to assess their safety practices.
The bill would give the state attorney general the power to sue developers that don’t comply.
Whistleblowers speaking out against AI abuses would be granted protections.
The biggest opposition has come from the companies and their investors — and, most recently, from California Congressional Democrats such as Ro Khanna, whose districts include much of Silicon Valley. There are real problems with this bill. Philosophically, there is nothing wrong with saying that developers need to be more responsible when building their models. But the ambiguity in this bill is really, really bad. How are companies supposed to know when they’ve done a good enough job? This isn’t building a bridge, where the usage is clear and you can demonstrate that you’ve met all the requirements. This is more like building a tool. I’m not saying that we shouldn’t be regulating tools — look, I’m not sympathetic to the argument that “guns don’t kill people, people kill people,” which we touched on in last week’s newsletter. We need to do an AND here: smartly regulate both the uses and, to some degree, the tool itself.
The ambiguity is further complicated by enforcement: Developers are supposed to certify their compliance in a document submitted to the state attorney general, who would be authorized to bring civil actions against them. Ambiguity + lawyers, etc. = not the best combination. If we give the government carte blanche to sue me when I get something wrong, it is going to take a lot of lawyers to make sure that I’ve certified correctly, which will both cost me a lot of money and maybe cause me not to release software in the first place.
I’ll also come out and say that I’m a huge fan of open source (I’m on the board of the Mozilla Foundation), and this bill could potentially hold the developers of open-source software responsible for all the forked versions — and that isn’t how open source usually works, with new projects stemming from existing ones. So that seems really bad, too. When big tech companies talk about the bill stifling innovation, this is where my mind goes first. While it has been dispiriting to see how much sway those companies have had over State Senator Scott Wiener, who brought forth the bill and has since conceded on some key points, the good news is that there was enough of an outcry around open source to cause him to amend the bill, raising the bar for when open-source models are covered.
As the state recently saw, big tech has the power to negotiate its own backroom deals. Look at AB-886, the California Journalism Preservation Act. It was supposed to fund local newsrooms to blunt the financial hardships that have hit the news business as Google and Meta grew to dominate digital advertising and technology radically changed the way people consume news. Covered platforms such as Google and Meta were to pay an annual fee to compensate digital journalism providers and would have been prohibited from retaliating against any of those providers by, say, blocking access to their sites or changing their rankings. On the news-provider side, the law would have required them to spend 70% of the funds they received on journalists and support staff. Again, it was a good thing philosophically, but there were definitely some implementation challenges. For instance, it could have pushed news outlets toward clickbait to attract readers and thereby extract even more money from the platform companies. There is a world where this bill could have actually made published journalism worse.
Tech companies stepped in and dealt directly with legislators to block the law. In its place, a group of companies helmed by Google, along with state taxpayers, will allocate $250 million for two new initiatives administered by UC Berkeley’s Graduate School of Journalism: a fund to distribute millions of dollars to California news outlets, and an “AI accelerator” to develop ways for journalists to use the powerful technology.
There are two things at play here: focus and implementation. For SB-1047, the focus is on big existential risks; the bill is moving because it tries to address what might happen if these big models go completely awry. But there are really hard issues, like bias and discrimination, that are happening today. And second, implementation is hard, but it matters. Unexpected side effects — such as clickbait journalism and making tech CTOs certify their models — are a real risk. It’s super interesting that we are starting to talk about this and to get the philosophies and values we care about into the zeitgeist, but we’re watching the sausage get made: government trying to regulate, companies pushing back, and all of it at our expense.
Above I said that we live in a world that is being shaped around us. But this is the time for us to be shaping the world. It's not time to stand passively in the background. We need to demand that our government engage with regular people, too. Aren’t we the ultimate stakeholders here? I think of what Senator Blumenthal did when he convened roundtables of kids to make sure he was properly representing them with the Kids Online Safety Act. Let’s raise our voices to make sure that becomes the norm in these less-than-normal times.
Worth the Read
Is the Department of Homeland Security considering using facial recognition software to identify migrant children stopped at the border, effectively creating a huge data set of real children’s faces for aging research? Depends on who you ask.