It’s been almost two years since ChatGPT launched. We’re also 31 days away from the presidential election here in the States. So it’s no surprise that a bunch of books on how to regulate AI are coming out — and that its effects on democracy are a common theme.
Gary Marcus just published Taming Silicon Valley: How We Can Ensure that AI Works for Us. I actually got to interview Gary — the NYU professor who challenged OpenAI’s Sam Altman during the 2023 Senate hearing — about it at an event hosted by Sustainable Media a few weeks ago. We talked about the risks of AI and how Big Tech has captured our nation’s policymakers.
Gary claims to love AI (but maybe not generative AI…) and wants it to succeed. But he’s also pretty worried about where things are headed, especially when it comes to the impact of deepfakes and AI-generated misinformation on democracy. As he writes, “The only chance at all is for us to speak up, really loudly.”
Marietje Schaake’s new book, The Tech Coup: How to Save Democracy from Silicon Valley, finds the international policy director at Stanford University’s Cyber Policy Center (and former member of the European Parliament) in peak form. She has great insight into how tech companies have infiltrated governments around the world in order to avoid regulation, and she pokes holes in the companies’ popular argument that regulation stifles innovation. (She points to regulated industries that have continued to innovate.) Policymakers would be smart to read her recommendations for resisting corporate lobbying and designing a balanced framework, and we can all benefit from her recommendations for citizens.
Both books start from the premise that Big Tech is bad and is eroding democracy. Gary, especially, is vocal and incisive in the language he uses. Look, I’ve been vocal myself that there are issues with some people in Silicon Valley, who unfortunately control the biggest platforms — and that’s making the world a difficult place to navigate. They have also seriously muddied the waters when it comes to how the rest of us view tech and its leaders. Just look at the shitstorm that X has become! And now Meta is starting to inject AI-generated content into your feed, essentially saying, “If we can’t find content from your social network that you’ll find engaging — and our business model depends on your clicks — then we’ll make it up and tailor it to what you love so that you click on something.”
As one of my close friends, the incredible Judy Estrin, writes in her excellent essay, “Stop Drinking from the Toilet!,” digital companies intentionally mix fresh water with sewage in the same pipes, thereby “polluting our information systems and undermining the foundations of our culture, our public health, our economy, and our democracy.”
While I agree with many of the points made in these books, when it comes to actually driving change, I’m not sure such divisive language and thinking is productive. We are, generally, the same people who complain about divisive language in politics — and yet here we are talking about “taming Silicon Valley,” as if it were an unchecked animal, or “the tech coup.” If the goal is to get us all working together, this isn’t the most productive path.
As I’ve said before, I’ve been thinking a lot about the trilateral relationships needed between technology companies, government actors and the social sector as emerging technologies arrive — and how those relationships need to be synergistic, not adversarial.
While we can all agree that AI, and specifically generative AI, has some issues, we may disagree on what its downsides are and how to mitigate them. Regardless of where we stand, we can’t deny its promise. So we need to work hard, together, to figure out how to use it for good while mitigating the harms. Otherwise we risk a repeat of the hysteria behind the “red flag” laws the UK passed in the early days of the automobile, which required a person to walk ahead of each car waving a red flag. If we’re going to put all this tech to good use, we’d better start getting along.
Worth the Read
Two Harvard students modified Meta’s smart glasses to be able to dox people’s identities in real time, proving that they could instantly pull up the names, phone numbers and addresses of people they came across — even the names of their relatives — just by adding AI-based facial recognition technology to the Ray-Bans.
A federal judge blocked California’s political deepfakes law — the one stemming from the Kamala Harris parody ad that Elon Musk made viral — just weeks after Gov. Newsom signed it.