It’s been a big week for larger conversations about AI. On Monday, OpenAI’s Sam Altman posted his thoughts on the world-changing progress that will be ushered in by AI over the next “few thousand days.” This was perfectly timed to the release of the final report on governing AI for humanity from the UN Secretary-General’s Advisory Body on Artificial Intelligence.
Altman’s essay, which some joked was written in “God mode” rather than “founder mode,” posited that we are about to leave the Industrial Age behind to enter the Intelligence Age, a glorious — and inevitable — era of massive prosperity and ease, all thanks to the never-before-seen progress that’s about to be unleashed by AI. That is, as long as we “drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.”
There is no direct mention of regulation, only the statement that we need to act “wisely but with conviction. … It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.”
Sam Altman has said that global governance is necessary – companies are “calling for oversight,” but are they really? Aren’t they? I don’t even know who to believe anymore.
Also, I can’t help but ask: Who is the “we” he’s writing about who will be navigating these risks? And how do “we,” as in the rest of the world who will be impacted most by AI, get a say in how this plays out?
Enter the UN
The UN report, a blueprint on how to navigate AI, was based on consultations with 2,000 individuals around the world and overseen by an advisory body of 39 AI leaders from 33 countries, whose expertise extends beyond tech to science, anthropology, human rights, public policy, and more – and who work at companies such as OpenAI, Mozilla, Microsoft and Sony. But, sadly, if the opinions of the rest of us were heard at all, it was most likely through organizational filters. In a perfect world, regular people would have been included directly. For example, my friend Divya Siddarth is working on the Collective Intelligence Project, an organization set up to try to ensure that all of us can weigh in on these things. It hosts alignment assemblies where regular people talk about their feelings and needs around AI.
What do you need to know about the report? I broke it down into three pros and three cons, as I see them:
Pros
Ethical standards!
Transparency!
Global cooperation!
Cons
One-size-fits-all governance won’t work – the needs and ways of thinking clearly differ from place to place.
Bureaucracy may slow down innovation – we don’t need innovation at all costs, but if AI can, say, cure cancer, are we really going to slow it down?
How would we even enforce rules on a global scale?
Given the lack of global governance, it’s a good move for the UN to be having these high-level conversations. But they can’t happen only at the UN: They also need to happen in the US (any news on the Executive Order on AI…?), at the local level (California’s SB 1047 is flawed, but it really got the nation talking) and among regular people.
We need to stay not just informed but engaged. AI is part of our lives and will be forever. (Just wait until Apple Intelligence enters the iOS 18 chat.) If you’re not comfortable speaking up (check out the excellent example in the links below), you can do it with your wallet. Advocate for yourselves, your kids and your local institutions, because you will be impacted sooner than in a “few thousand days.”
To quote Sam Altman out of context: We owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.
Worth the Read
In the AI-for-good department, AI is being used to find archeological sites in the Arabian desert. Meanwhile, it has been used to uncover over 300 never-before-seen geoglyphs in the Peruvian desert.
Europe’s AI Pact already has signatures from 100 companies. Still missing: Apple, Meta and Mistral.
As OpenAI transitions to a for-profit company, some execs are heading for the exit.
“I warned you guys in 1984, and you didn’t listen.” That’s director James Cameron talking about Terminator and today’s AI landscape. He’s just joined the board of the visual media company Stability AI.
Check out this open letter to LinkedIn’s privacy team about the company’s generative AI and privacy policies, which use personal data from its members to train its models “for largely unspecified purposes.” We should be writing more of these!