The First Act
The EU is getting AI regulation over the finish line. How far does it still have to go?
Last Friday, after I hit “publish” on the newsletter, the European Union came together on a deal to regulate AI — the world’s first such act. We’ve been following the drama: In the links section a few weeks ago, we noted that they had hit a roadblock, specifically around what the regulation would do to homegrown generative AI companies such as Mistral. (It’s a lot easier to be cavalier about regulation if your country has no commercial interests.) But after marathon sessions, a deal is on the table.
There’s a lot to parse in the EU AI Act, which was first drafted in 2021 — before generative AI spurred pressure to get something on the books. What do you need to know about it?
It’s not a done deal. The European Parliament still needs to vote on it in the coming weeks (though some think it’s a formality), and the rules won’t go into effect until 2025.
The act introduces a risk-based approach to AI systems, categorizing them into minimal, high, and unacceptable risk. High-risk systems, such as those used in critical infrastructure, medical devices, and law enforcement, will need to comply with strict requirements, including risk-mitigation systems, high-quality data sets, and human oversight. Systems posing an unacceptable risk, such as those that manipulate human behavior or employ social scoring, are banned. For comparison, a "principles-based approach" would be organized around ethical guidelines or vision-setting statements such as "respect for human autonomy." A risk-based approach is fundamentally about mitigating downsides; a principles-based one could instead frame regulation around affirmative goals and investments.
There are big transparency and disclosure requirements. The regulation mandates that AI systems be designed so that AI-generated media can be detected, and that tech companies notify people when they are interacting with AI systems, including chatbots, biometric categorization, and emotion recognition systems. It also requires that deepfakes and AI-generated content be labeled, a proactive measure to ensure users can distinguish between human- and AI-generated content.
There are big penalties at play, too. They range from 1.5% to 7% of a firm’s global sales turnover, depending on its size and the severity of the offense.
Surprise! Not everyone’s happy. President Emmanuel Macron, for one, said that the regulations will hamper French ingenuity and put its tech companies well behind those in the US and China, not to mention the UK. And while many civil society groups led the charge to get something on the books to protect fundamental rights and to close loopholes that would let AI developers self-regulate, support seems to be wavering now that we can see where the deal actually landed, especially when it comes to allowing the police to use AI. (Biometric identification is seriously curtailed, but not fully, and not in the case of law enforcement or national security, which get carve-outs.) Amnesty International, for example, sees the failure to ban mass public surveillance and live facial recognition as setting a devastating global precedent.
The good news: Consumers have the right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights. Hopefully Europeans will take full advantage of this, setting a strong precedent and showing others around the world that they have a stake in how AI is designed and deployed. (We covered what’s known as the Brussels effect in a podcast episode about AI and regulation with Phil Howard. In the same episode, Representative Obernolte said the EU was over its skis with regard to regulation. I would be very interested to hear what he has to say now.)
There are some very real critiques beyond those outlined above:
What’s with that definition of AI?! The EU’s definition basically describes all software…
The significant compliance costs, including administrative burdens, legal uncertainty, and the testing and monitoring of AI systems, may indeed hamper innovation.
And finally, as mentioned, this is a risk-based approach, not a principles-based one. The EU is mitigating downsides, not necessarily investing in its society to learn how to use AI. You may mitigate all the risks, at the risk of not reaping the benefits.
My opinion? Definitely interesting. It sets the bar for the rest of the world, and it sets up the debate between regulation and innovation. But we’ll have to wait and see how it’s actually implemented before we know how best to proceed in the States. Even I’ve lost track of what’s up next here! (This legislative tracker from the Brennan Center is a great way to see what’s ahead.)
What do you think about the EU AI Act and what the ripple effect will be? Write to me at us@technicallyoptimistic.com.
Worth the Read
Forecasting the weather is hard! How many times have you been frustrated by a forecast being wrong? Well, Google’s DeepMind is taking a different approach to weather forecasting. Most forecasters use Numerical Weather Prediction, which depends on very careful modeling, all the way down to the underlying physics, and that is computationally expensive! Researchers have been chasing ever more complex (and more accurate) physical models. What DeepMind did instead was train a neural network on decades of past weather data to learn the cause-and-effect relationships directly. And it seems to work! GraphCast is now the most accurate 10-day global weather forecasting system in the world. It’s all open source, if you want to tinker.
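To make that concrete, here’s a minimal sketch of the learned-forecasting idea. This is not DeepMind’s actual GraphCast architecture (which is a graph neural network operating over a mesh of the globe); it’s a toy model on synthetic data standing in for historical reanalysis. A small network learns to map the atmospheric state at time t to the state six hours later, and rolling it forward autoregressively produces a multi-day forecast without ever solving the physics:

```python
# A toy sketch of learned weather forecasting (not GraphCast itself):
# learn the 6-hour step from data, then roll it forward autoregressively.
import torch
import torch.nn as nn

STATE_DIM = 128  # a flattened grid of weather variables (toy size)

class StepModel(nn.Module):
    """Maps the atmospheric state at time t to the state at t + 6h."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM),
        )

    def forward(self, x):
        return x + self.net(x)  # predict the *change* in state (a residual)

# Synthetic "historical weather": consecutive (state, next state) pairs.
# A real system would train on decades of reanalysis data instead.
t = torch.linspace(0, 50, 2000).unsqueeze(1)
states = torch.sin(t + torch.linspace(0, 6, STATE_DIM))  # smooth fake dynamics
x_train, y_train = states[:-1], states[1:]

model = StepModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

# Autoregressive rollout: a 10-day forecast is 40 six-hour steps, each one
# feeding the model's own output back in as the next input.
state = states[-1:]
with torch.no_grad():
    for _ in range(40):
        state = model(state)
print("10-day forecast state:", state.shape)
```

The design choice worth noticing is the rollout: the model is only ever trained to take one step, and long-range forecasts come from chaining its own predictions, which is also why error accumulation is the central challenge for this family of models.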
The next robocall you receive from a campaigning politician may be an actual robot: Last weekend, thousands in Pennsylvania were called by “Ashley,” an AI-generated campaign volunteer for Shamaine Daniels. Her developer, Civox, built her to answer a range of questions from those who don’t hang up, from where Daniels attended law school to her policy on law reform. You can listen here, starting at 07:50. More and more AI-powered technology is going to make its way into the campaign space for 2024, and some organizations are working to ensure it’s the benevolent kind: Higher Ground Labs, the progressive tech accelerator, just announced its cohort of Progressive AI Lab grant recipients, who run the gamut of campaign use cases.
AI-powered drive-thrus have a secret when it comes to their remarkable efficiency: Humans. Guess this means we’re not there yet.
This week in terrifying AI news: Japanese researchers have generated images from people’s brain waves with 75% accuracy. Not from pictures they were looking at, but from their thoughts. I’m going to let that one sink in. Also, AI-powered avatars are being used to commune with the dead. Discuss!
To end on a (possibly) happier note, Project CETI published a poster at NeurIPS 2023, one of the largest machine-learning conferences in the world, about the potential of using neural networks to translate animal communication. (In their case, whales.) There has been a lot of work on translating between two human languages that have no “parallel translations” available. (Think of the Rosetta Stone, which carries the same text in multiple languages.) It turns out that many languages have the same “shape”: align the two embedding spaces, and nearby points become candidate translations, so you can actually translate without a Rosetta stone. Could you apply that thinking to go between human language and…animal language?
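Here’s a toy sketch of that “same shape” idea. To be clear, this is not Project CETI’s method; it’s the classic orthogonal-Procrustes alignment of two embedding spaces (the trick behind unsupervised dictionary-induction systems like MUSE), and to keep the demo self-contained, “language B” is fabricated as a hidden rotation of “language A”:

```python
# Toy demo of embedding-space alignment ("languages have the same shape").
# We fabricate language B as a hidden rotation of language A, then recover
# the mapping with orthogonal Procrustes and check nearest-neighbor matches.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 500  # embedding dimension, vocabulary size

emb_a = rng.normal(size=(n, d))                      # "language A" embeddings
hidden_q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # unknown true rotation
emb_b = emb_a @ hidden_q + 0.01 * rng.normal(size=(n, d))  # noisy "language B"

# Orthogonal Procrustes: the rotation W minimizing ||emb_a @ W - emb_b|| is
# U @ Vt, where U, S, Vt = SVD(emb_a.T @ emb_b).
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
w = u @ vt

# "Translate" each word in A by mapping it into B's space and taking the
# nearest neighbor by cosine similarity.
mapped = emb_a @ w
mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
targets = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
matches = (mapped @ targets.T).argmax(axis=1)
print("fraction correctly matched:", (matches == np.arange(n)).mean())  # ~1.0
```

Real unsupervised methods bootstrap this alignment without any seed dictionary at all; whether whale vocalizations embed into a space with a compatible “shape” is exactly the open question.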