Buckle up: 2025 is going to bring massive changes to how AI is governed in America. The rules we've just gotten used to? They're about to be rewritten, and we all need to understand what's coming. Yesterday, I hosted a conversation with Mozilla’s Nabiha Syed and Justin Hendrix of Tech Policy Press about what AI policy may look like in the year(s) ahead, so it’s top of mind. I’ll share a recording of it in the next few weeks.
Let's be real: With the Trump administration's return, Biden's careful executive order on AI is last year's news; the incoming team has gone on the record to say as much. The EO, through their eyes, is probably too regulation-heavy and too multi-agency, a far cry from Trump's 2019 executive order on AI. This isn't clickbait speculation: It's the reality we need to prepare for. I mean, just today, they announced that the AI and crypto czar will be former PayPal COO David Sacks.
Here's what's likely to matter starting next month
It's all going to be about beating China in the AI race, with every policy decision viewed through that competitive, more permissive lens.
That, and "national security," will be the magic words that open every door and justify every decision, spurring more investment in AI for the military.
Economic growth will dominate the discussion, crowding out everything from safety concerns to social impact.
Regulation, such as antitrust enforcement, will be seen as red tape holding America back from winning the AI race.
Export restrictions will get even harsher.
What’s unclear
Biden's EO focused heavily on transparency and related concerns. It's unclear what will happen to the National Institute of Standards and Technology's AI Safety Institute, created just last February, not to mention all of the chief AI officers that agencies hired. It's all up in the air, especially with the Department of Government Efficiency advisory commission coming down the pike, helmed by Elon Musk and Vivek Ramaswamy.
Where it gets interesting
While Washington might step back, the states aren't sitting still: there are over 700 state-level AI bills in the works. California is already planning its next moves in AI oversight, Colorado is refining thoughtful anti-discrimination regulation that balances innovation and consumer safety, and even Texas is exploring what responsible AI development could look like, with a bill modeled on Colorado's AI Act.
And globally? The game is still on: The EU is pushing ahead with ambitious regulatory frameworks, France is positioning itself as an AI hub that balances innovation and oversight, and international cooperation on AI safety continues, with or without US federal leadership.
What you should watch for
Regulatory power and action will probably shift increasingly to state capitals, though state rules risk being preempted by federal legislation under the Republican trifecta.
Companies will step up their own governance to fill the vacuum.
International frameworks, such as the EU's General Data Protection Regulation, will become more important than ever.
And yes, there will be some chaos as everyone adjusts.
The big question will be whether an AI race takes shape outside the US that we either don't participate in or participate in on different terms.
There are some rays of hope
I believe that innovation and safety aren't opposites; they can reinforce each other. On top of that, the science of AI safety isn't going away: even if federal oversight recedes, the technical work will probably continue in labs and companies. And states like California are ready to lead with thoughtful oversight. Most importantly? People like you are paying attention and demanding better, both from big tech and the government.
Here’s what you can do
Watch your state legislature: This is where the action is moving.
Engage with your local representatives: They're more accessible than you think.
Pay attention to what companies are doing: They're the ones building this future.
Stay informed about the changes ahead. Knowledge is power.
The work of making AI safe and beneficial isn't stopping — it's just changing venues. And sometimes, that's exactly when the most important work gets done.
Worth the Read
Could it be that Google’s grip on search advertising is slipping? Because of TikTok?!
OpenAI announced the release of ChatGPT Pro: $200 a month gets you access to OpenAI o1, voice, and more. OpenAI has claimed that o1 brings it closer to artificial general intelligence (AGI), but a recent article in Nature says not so fast.
Google's GenCast, which we looked at before, has been shown to predict weather and deadly storms accurately up to 15 days out.
Can airlines use AI to exploit passengers? Senator Richard Blumenthal says yes: an airline that knows a passenger's loved one has just died could charge them a higher price to fly. "Airlines could use these tools to exploit passengers' worst moments," Blumenthal said. Hard to believe, but not impossible…
Amid the fallout of the Salt Typhoon hack, US security officials recommend that Americans use encrypted messaging apps to stay safe from foreign hackers. See you on Signal!
The newsletter Getting Out of Control has an excellent recap of 10 policy implications of the incoming administration. Two that stood out for me: a focus on AI talent and immigration controls, and a rollback of the anti-bias approach to AI.