The White House Weighs in on AI
Breaking down this week’s (very big) Executive Order on AI safety.
“We will see more technological change, maybe, in the next five years than we have seen in the last 50 years. That is a fact. It is accelerating at a warp speed.”
On Monday, I had the good fortune of being seated in the East Room of the White House as President Biden issued his administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
As Suresh Venkatasubramanian, co-author of the White House’s Blueprint for an AI Bill of Rights and a guest on Episode 1 of our first season, reminded me from his nearby seat, “An EO is extremely important. It is both a powerful signal to agencies that says this is where our priorities are, but it's also very specific things that they will do. It's very, very clear.”
I have a longer exchange with Suresh, as well as a conversation with Bloomberg reporter Courtney Rozen, about the Executive Order on our podcast feed.
Suresh later tweeted: “The executive branch can't pass law, but it can interpret law and execute it as the administration deems appropriate. EOs have force of law! You can't make new laws with an EO, but they must be complied with. You should think of an EO not as in opposition to or as a replacement for legislation. Rather, it's another lever of power for the government to take action on a matter of broad concern, and can complement Congressional action.”
The timing was no coincidence: Vice President Harris also appeared at the UK’s AI Safety Summit on Wednesday, and the US wants to position itself as a leader in that space. It did in fact make strides in that department, with Harris announcing not only more machine-learning standards but also the establishment of the United States AI Safety Institute within the National Institute of Standards and Technology (NIST). The new institute, said the VP, will publish guidelines and develop best practices, benchmark tests, and more for evaluating potentially threatening AI systems.
The Technically Optimistic take on the 100-plus-page document
There are so many themes in the 100-plus-page document that it's going to take weeks and tons of people to unpack it. For the purposes of this newsletter (and brevity!), I want to try to reframe this in the way I think about AI regulation. Generally, there are three broad themes I look for: Does it incentivize widespread and equitable adoption? Does it push for cross-sector collaboration and accountability? And does it put people first?
When seen through this lens, this executive order actually delivers on a lot of those fronts! (There's a national security theme, too, but I’ll get to that another week.) And the government now seems to be focused on some real-world risks, not the pie-in-the-sky existential ones.
Let’s break it down a bit.
Widespread and equitable adoption
Rather than banning the use of AI, I believe we should embrace it. However, we need to make sure that everyone can do so, by offering training and ensuring that this transformative technology is not limited to only a few. On Episode 4 of the first season, Teemu Roos spoke about how Finland set out to educate 1% of its population about AI and ended up reaching 10%, with people from all walks of life engaging with it. The US should aim to do the same, at all levels: from regular citizens all the way up to government.
The EO addresses this! One of the last sections is about the federal government’s current and future use of AI. It essentially says: We discourage federal agencies from outright banning generative AI. Instead, we want you to conduct risk assessments for particular systems in particular uses.
To give you a sense of what a sweeping order this is, there are even mentions of using AI to improve and accelerate the processing of applications for the Supplemental Nutrition Assistance Program. It also calls for investing in AI-related education, training, development, research, and capacity, and supports programs to give Americans the skills they’ll need for the age of AI, as well as attracting global AI talent to the US. That is really worth noting. The order asks Homeland Security to re-evaluate its immigration pathways, especially for experts in AI and other critical and emerging technologies. (In the links section of a previous newsletter, we specifically highlighted a story about how the US missed out on developing 5G technologies because of our immigration policies. It looks like the administration does not want to miss out again.)
Cross-sector collaboration with shared accountability
There are so many parties involved in AI development, but the path forward is to collaborate and to hold one another accountable. Voluntary commitments are not enough, and collaboration alone (probably along the lines Ian Bremmer and Mustafa Suleyman laid out in their Foreign Affairs article on the AI power paradox; Bremmer also spoke with us about it on the podcast) is not enough either. Rather, we need real accountability to one another.
The administration is actually starting to put some real measures in place:
It’s requiring developers of the most powerful AI systems to share their safety test results and other critical information with the US government via the Defense Production Act (though it would be nice if they were asked to share it with the public, too).
NIST is being asked to create real standards for red-teaming (i.e., authorizing ethical hackers to find flaws in systems, policies, etc.) to ensure safety before public release.
Risk-mitigation measures will be put in place to protect against these systems being used for biological warfare.
I’ve been harping on the fact that academia has been captured by industry (listen to our season 1 episode featuring Kyunghyun Cho), and there is a note that we need to invest in academia to catalyze AI research.
Put people first
Humans are the core of these technologies, from producing the data that they train on to being the downstream recipients of their power. These technologies should honor that provenance, and should be developed to minimize harm and maximize potential.
What are the wins for everyday Americans?
Look, the bottom line is that the executive order actually proposes to do a lot! The President is using the entirety of the government to flex some control over artificial intelligence and its deployment, as well as applying existing laws and powers. There is a whole civil rights framework as well. The benefits to us are many.
The Department of Commerce is tasked with figuring out content authentication and watermarking so that regular people can understand whether content they are interacting with has been algorithmically generated. (A toy sketch of how watermark detection can work appears after this list.)
Privacy, which is basically what I testified about on Oct. 18, plays a huge role in this order: Federal support will go toward accelerating privacy-preserving techniques, especially given that data minimization is currently at odds with how deep-learning systems are trained. (See the second sketch after this list.)
There is guidance to keep AI algorithms from being used to exacerbate discrimination, effectively taking a shot at addressing algorithmic discrimination in many contexts (landlords, federal benefits, job displacement, labor standards, etc.), even ordering the Department of Justice to create best practices for investigating and prosecuting civil rights violations related to AI.
We’ve spoken a lot about education and AI (we devoted an entire podcast episode to it), and the EO specifically calls for resources to go to educators who are deploying AI-enabled educational tools, such as personalized tutoring.
Seriously, the list keeps going. On the people front, this is a very strong start.
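Since “watermarking” can sound abstract, here’s a toy sketch of the statistical idea behind one family of text-watermarking schemes (“green list” watermarking; that’s my choice of example, not anything the EO prescribes). A watermarked generator quietly prefers tokens from a pseudo-random “green” half of the vocabulary, keyed on the previous token, so a detector just counts how often that preference shows up:

```python
# Toy detector for "green list" text watermarks. Illustrative only:
# real schemes operate on model tokenizers, not whitespace words.
import hashlib
import math

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens landing in the 'green' half of the vocabulary,
    where green/red is pseudo-randomly assigned based on the previous
    token. Unwatermarked text should hover near 0.5."""
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        seed = hashlib.sha256(prev.encode()).digest()
        # First byte of a keyed hash decides the bucket for `cur`.
        bucket = hashlib.sha256(seed + cur.encode()).digest()[0]
        hits += bucket < 128  # green = lower half of the byte range
    return hits / max(len(tokens) - 1, 1)

def z_score(frac: float, n: int, expected: float = 0.5) -> float:
    """How many standard deviations above chance the green rate is."""
    return (frac - expected) / math.sqrt(expected * (1 - expected) / n)

tokens = "the quick brown fox jumps over the lazy dog".split()
frac = green_fraction(tokens)
print(f"green fraction: {frac:.2f}, z = {z_score(frac, len(tokens) - 1):.2f}")
```

A high z-score over a long passage is strong evidence the text came from a watermarked generator; short snippets, as you can see, don’t give you much signal either way.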
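And on the privacy point: one concrete privacy-preserving technique the federal support could accelerate is differentially private training. Here’s a minimal NumPy sketch of the core DP-SGD update, clipping each example’s gradient and adding calibrated Gaussian noise; the shapes and hyperparameters are illustrative assumptions on my part, not anything the order specifies:

```python
import numpy as np

def dp_average(per_example_grads: np.ndarray,
               clip_norm: float = 1.0,
               noise_multiplier: float = 1.1,
               rng: np.random.Generator | None = None) -> np.ndarray:
    """Clip each example's gradient to `clip_norm`, average them, and
    add Gaussian noise scaled to the clipping bound -- the core step of
    DP-SGD, which bounds any single person's influence on the model."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean.shape)
    return mean + noise

# Toy usage: a batch of 32 examples with 10-dimensional gradients.
grads = np.random.default_rng(0).normal(size=(32, 10))
print(dp_average(grads))
```

The tension I mentioned is visible right in the code: the noise that protects individuals also degrades the gradient signal, which is exactly why data minimization and deep learning are currently at odds.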
Okay, how about companies?
If you’re one of the major technology companies, there may be some big pills to swallow and things to watch out for. We have to see how the guidance is implemented, and the devil will be in the details. The order requires companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training it, and to share the results of all red-team safety tests. The order also tries to set some “compute limits” (reporting requirements kick in above a training-compute threshold of 10^26 operations), which seems hard to implement and may just be the wrong measure. Companies will not like disclosing this information.
Also, this EO directs federal agencies to flex their purchasing power: everything from dictating how federal money can or cannot be used to issuing research grants. And while it’s a small amount of money for the government, it's a large amount of money for these companies. That spending power is important, and these companies are going to have to make some changes in order to comply.
It’s impossible for an EO to be perfect, and this one isn’t; there is something for every sector to worry about. ReNika Moore, director of the ACLU’s Racial Justice Program, says, “We’re encouraged that the Biden administration recognizes the need for a whole-of-government approach to address discrimination and other real-world harms of artificial intelligence and other automated systems in critical areas of people’s lives, such as in the workplace and in housing.” But the administration, she says, “essentially kicks the can down the road for these tools in national security and law enforcement — areas where the use of AI is widespread and growing, and where there are often profound impacts on liberty, equity, and due process.”
And of course, we have to wait for the rest of the government to catch up to this order; the Office of Management and Budget just released draft guidance implementing it. All in all, though, strengthening privacy protections, addressing algorithmic fairness, and supporting workers is a great place to start. I’m definitely optimistic about it, but I also hope it's just a start and will not be used as a checkbox to say “We’re done with our part.” Now we just need good people to go into government to help out. And we definitely still need actual legislation….
For those of you who dug into the EO, I’d love to hear your thoughts! Write to me at us@technicallyoptimistic.com.
Speaking of letters, thanks so much to Dr. Niobe Way, NYU professor of applied psychology and founder of the Project for the Advancement of our Shared Humanity, for sharing her thoughts on last week’s interview with MIT’s Daron Acemoglu on what’s missing in AI education:
“If there were more developmental psychologists like myself in the world of tech, our tech would look very, very different. You have to understand our natural human brilliance that’s revealed when we are young to understand the capacity of technology.”
Worth the Read
“Assistant Professor/Associate Professor Without Tenure (tenure-track) AI and Human Experience” - The MIT Media Lab, my alma mater, is launching a search for a new professor who will focus on how AI-powered tools and algorithms intersect with how we live in the world. I expect more academics will be studying the changes in the human condition as we become a symbiotic AI society. Maybe I can get the professor to make Technically Optimistic required listening…
“California regulators suspend recently approved San Francisco robotaxi service for safety reasons” - If you’ve been in San Francisco recently, you’ve probably seen cars zipping around the city with no one in the front seat. As someone who worked on self-driving cars, it's amazing to see. But to some, it’s also pretty disconcerting. Now add the DMV to the latter list: It has called the robotaxi service “an unreasonable risk to public safety.” Putting a multiple-ton piece of hardware on the road, controlled completely by a computer, is probably the starkest example of testing on real people, and this whole situation raises real questions about when it's safe to do this type of testing, potentially without others’ consent.
“An Industry Insider Drives an Open Alternative to Big Tech’s A.I.” - Open source continues to be the big variable in all these conversations. Open source has historically been viewed as a security win: You can literally see what is going on inside the code, and potentially fix it if there is a problem! Now there are real questions about what governance over these open systems may mean. The Allen Institute is driving “radical openness,” and Mozilla released a letter (which I signed, as did others like Maria Ressa, the Nobel laureate who appeared on episode six of season one) also calling for openness. Open source was not addressed in the EO, and there are certainly questions around these open systems, but for now it seems that increasing public access and scrutiny makes technology safer, not more dangerous.
Leica M11-P - In a previous newsletter, I mentioned Content Credentials, a proposed system to help users both track the provenance of digital images and see which tools were used to edit the photo (and that sort of watermarking is also mentioned in the EO!). Leica has just released a new drool-worthy (and pricey) camera that implements this standard. We may be on the verge of having all new digital cameras cryptographically stamp their images so we know that a photo is real.
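To make the “cryptographic stamp” idea concrete, here’s a minimal sketch using the Python `cryptography` package’s Ed25519 API as a stand-in. The real Content Credentials (C2PA) system involves signed manifests and certificate chains rather than a bare signature, so treat this as illustrative only:

```python
# Sketch of capture-time image signing. In a real camera, the private
# key lives in a secure hardware chip and never leaves the device.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"...raw sensor data..."  # stand-in for an actual capture
digest = hashlib.sha256(image_bytes).digest()
signature = private_key.sign(digest)  # stamped at the moment of capture

# Later, a viewer app with the camera maker's public key verifies the
# file; this raises InvalidSignature if even one byte has changed.
public_key.verify(signature, digest)
print("Image provenance verified.")
```

The design point is that verification requires nothing from the photographer after the fact: the proof travels with the file.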
“EU Common Charger Law May Be Final Nail in Coffin for DSLRs” - On the topic of cameras, if you remember our very first newsletter, I wrote about the power of the government to move companies toward change. Take the EU, which influenced the largest tech company in the world to move from Lightning ports to USB-C. (Hello, iPhone 15!) Turns out there are potential casualties of this ruling as well: There may not be grandfather clauses in the EU’s common charger rules, and popular DSLR cameras do not have USB-C ports, so you may not be able to buy these cameras in the EU after December 28, 2024.