As my guests and I have been discussing on Season One of the Technically Optimistic podcast, the one thing that we can take away from the explosion of technology over the past few decades is that as tech changes, so does society: Social media, smartphones, even the internet itself – while these things haven’t been around for long, they dominate the way many of us live, work and interact.
So naturally, people are talking about the AI revolution — the scary existential risks and/or the massive innovations that everyone should be prepared for — even if the future is totally unclear.
That gets at why I’ve called my podcast and newsletter “Technically Optimistic.” I am optimistic because I believe in people. I believe that if people are curious about all this technology around us and come to learn how it works and intersects with our lives – not to mention learn to ask big questions for themselves – then we can broaden the conversations that our society is having. Because those conversations need more people engaged in them – not just engineers, entrepreneurs, professors and politicians. All of us.
We can have discussions that are at once in awe of what technology can do and honest about our worries over that same power. We can live in what other people consider a contradiction – and use that tension to help guide our world toward something better.
Most importantly, we need to learn how to be in awe and to advocate for ourselves at the same time. Maria Ressa, the Nobel Peace Prize-winning journalist and co-founder of the Philippine news site Rappler, said it herself on Episode 6 of the podcast: We have to stop thinking of ourselves merely as users of technology and start acting like what we are: citizens.
And so…welcome to the newsletter. I want to help you decode what’s going on every week in technology news – and, hopefully, to see it through a technically optimistic lens. My hope is that you will not only be able to break down how technology and our society are colliding, but also get excited and form your own opinions about it all. I want you to be able to see the news for what it really is: tensions between technology, humanity and power.
The community and the conversation begin here: Please email me! Tell me what this newsletter gets right (and wrong). Tell me how you see the world and what you’d like to learn more about. Let’s navigate this together.
And now, for this week’s (actual) newsletter…
Last Friday, hundreds of thousands of iPhone 15s landed in people’s hands. Put aside the impressive specs – the new camera, the new processor, even a new physical button. The most newsworthy change? iPhones now use USB-C instead of Lightning cables.
For those of you who, like me, have been around the iPhone block, you might remember that this isn’t the first cable switch. When I bought my first iPhone, it used what was called the 30-pin dock connector. In 2012, with the launch of the iPhone 5, Apple shifted, introducing the Lightning cable that most of us know today.
That small connector caused reverberations throughout the industry. Consumers weren’t the only ones who had to upgrade; an entire industry of manufacturers had to respond, too. Immediately. Approximately 20% of the world’s mobile phone owners use iPhones, and Apple’s replacement of this one wire made the company billions, between selling the cables and licensing the connector to accessory makers through its MFi (“Made for iPhone”) program.
That’s the power that Apple has.
This time, somebody flexed their power over Apple.
Look, if you’re a longtime Apple user like me, then the Lightning cable was a minor annoyance. But if you live in an Android-iPhone household, it became a real one. And remember what happened when the iPhone 7 appeared? You couldn’t even use Apple’s standard-issue headphones anymore. You either had to upgrade to wireless AirPods or buy the wired ones with…a Lightning connector. Changes like these have real economic impacts: Those cables add up.
There may be real environmental impacts, too. The math is hard to do, but the crux of the argument is that devices can’t really share cables because of Apple’s proprietary connector – so we all have to buy more of them. And when things change? Suddenly, all those old accessories get tossed out in favor of new ones.
This issue first came to a head on September 23, 2021, when the European Commission put forward a “revised Radio Equipment Directive” ordering that charging ports and fast-charging technology be “harmonized.” With that directive, USB-C was deemed the standard port for all smartphones, tablets, cameras, headphones and the like, and the industry was given a 24-month transition period to adopt it.
All this drama started in Malta, thanks to Alex Agius Saliba, the Vice-Chair of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament. He went on a mission to, in his words, “make a practical difference in the lives of Maltese citizens.” And he won: On October 24, 2022, his legislation passed in the EU with near unanimity.
Aside from us having new cables for our new iPhones, what does this all mean? This is a very practical example of the Brussels Effect, a term coined by Columbia professor Anu Bradford to describe the impact that European regulations and legislation have on the rest of the world. The EU has the ability to shift standards and markets globally: Companies want to operate in the EU, and, honestly, once they have to comply with EU laws, it’s generally easier to just do the same thing everywhere. This is how a politician from Malta was able to move one of the world’s largest companies – and why we all now have different charging cables. (You can learn more about the Brussels Effect on Episodes 2 and 3 of the podcast.) And by the way, it doesn’t seem like the EU is going to stop pushing.
Should We Be Looking at the EU with Gratitude — or Concern?
The same thing is now playing out digitally, specifically with artificial intelligence. The EU is gearing up to flex its power in this field. Coming down the road? The EU AI Act. Right now it’s just a proposal, but it has the potential to define what some of the guardrails should be. The act sets up different levels of risk, along with a uniform way for everybody to talk about them – from banned practices like “cognitive behavioral manipulation of people or specific vulnerable groups” to high-risk systems in products already covered by safety legislation, like toys, cars and planes. It also imposes transparency requirements on generative AI – things like ChatGPT – forcing companies to disclose what content was used to train these systems and to put safeguards in place so that they won’t produce illegal content.
Unsurprisingly, most tech companies are not happy with it. They may have valid concerns: This technology may have too much potential upside to be constrained by regulation, and limiting companies, specifically EU companies, could push them to leave the EU in search of greener pastures. Effectively, the argument is that there is a tradeoff between regulation and innovation – more regulation may inherently mean less innovation. In fact, they explicitly say, “Building a transatlantic framework is also a priority,” because they know they may find greener pastures across the pond. (But Europeans, and the companies that operate there, are facing the inevitable and learning to adapt to the more stringent controls that their governments are putting in place.)
The pastures in the US, however, are not entirely greener. According to polling by the Artificial Intelligence Policy Institute, the majority of Americans want governmental action on AI, and 82% of Americans don’t trust AI executives to self-regulate. There are numerous bills in Congress trying to make an impact here, each targeting a different aspect, some aligning with that polling better than others. And yet the Senate’s first real hearing on AI, convened by Senator Chuck Schumer, was both held behind closed doors and featured the very executives the American people say they don’t trust.
Just like in the EU, it is popular to ask questions about the regulation-vs.-innovation tradeoff, or to bring up national security concerns with regard to China. And Silicon Valley has become the powerhouse it is largely because of the hands-off approach the government has taken. On top of this, others argue that these tech companies are as powerful as countries and should have an equal seat at the table when it comes to regulation. (You’ll find me lightly pushing back on that idea – along with some great quotes on the subject from Ian Bremmer, author of “The AI Power Paradox” – in his bonus episode, “Ian Bremmer’s plan for global AI governance.”) But even with all this potential “demand” that the US government do something, most of this may be moot for the foreseeable future, as the US is facing a government shutdown that will suck up all the legislative oxygen.
We need a checks-and-balances system when it comes to technology. Companies should do what they do best: Build amazing technology. Innovate! Push us forward in myriad ways that we can’t possibly anticipate. But we should also remember that companies are built to optimize for financial gain, so we need somebody who will stand up for the everyday person. Traditionally, people would turn to the government for this. If the US cannot figure that out (which is likely, given our laissez-faire attitude toward Silicon Valley), perhaps the Brussels Effect is our best hope, with the EU forcing those guardrails on the entire world.
The EU has shown that a single person can make a difference – one Maltese politician changed an entire industry simply by trying to make the lives of his people better.
And that’s where you come in. These technologies are intertwined with every aspect of your life, and it’s not inevitable that these companies get to dictate how your life and our society will be shaped.
“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has,” said Margaret Mead. So, where can you start? It all begins with a conversation. Check out the latest bonus episode of our podcast with Justin Hendrix of Tech Policy Press, which covers the intersection of technology and democracy.
Maybe you, along with the millions of Americans who actually use this technology, will become part of the conversation. Because Lightning cables can strike more than once. What’s your take? Email me at us@technicallyoptimistic.com.
Worth the Read
“Summary of the 2023 WGA MBA” 148 days after it started, it seems like the Hollywood writers’ strike may be over? In Episode 5 of the podcast, we spoke with Justine Bateman about the strike, and this summary of the agreement covers some of its key points – notably that writers can’t be forced to use generative AI (presumably a studio cost-cutting measure), nor can AI be used to generate “source material.” Ms. Bateman questioned whether these systems can be “creative,” and while the agreement doesn’t address that big question, it certainly draws some lines in the sand. The Verge has a good summary, too.
“The Coming Creator Economy Revolution: How A.I. Will Unlock New Value Across Industries” Speaking of AI in the creative industry, this is a good rundown of how AI could collide with the creator economy – and amplify it.
“Google and Howard University Are Changing the Future of Voice Technology With Project Elevate Black Voices” There is a lot of evidence that software systems are not very culturally sensitive; automatic speech recognition systems, in particular, have much higher error rates for Black speakers. Regardless of how you feel about systems like Siri, Google Assistant or Alexa, Black speakers have a harder time using them. Google is attempting to fix this and make the technology more accessible. Notably, Google seems to recognize that data collection, especially in the Black community, comes with a long history of mistrust.
“The Man Who Trapped Us in Databases” The story of Hank Asher, the multimillionaire king of the data brokers. He created software and products used by the computer systems of the FBI, the IRS and ICE; by 80 percent of the Fortune 500; by nine of the world’s 10 biggest banks; and by a good part of America’s roughly 18,000 law-enforcement agencies. Your data is in his products.
“Spotify’s AI Voice Translation Pilot Means Your Favorite Podcasters Might Be Heard in Your Native Language” You might soon be hearing me on Técnicamente Optimista, or Techniquement Optimiste, or Technisch Optimistisch if you listen on Spotify. Spotify is “cloning” the voices of popular podcasters and computer-generating their episodes, in their own voices, but translated into a different language. Yes, this opens up a much larger audience for podcasters, and yes, I want more people to hear my thoughts on my show… but am I willing to sell my voice (and my humanity) for it? That’s an interesting tradeoff.
“NYPD Unveils K5, the Subway’s New Robot Guardian” Instead of building community, the NYPD is deploying surveillance on wheels in its subways. Unsurprisingly, it's raising concerns amongst privacy advocates.
“California’s Governor Vetoes State Ban on Driverless Trucks” The summer of 2023 has really been the summer of labor, one that has highlighted the intersection of AI and jobs. California Governor Gavin Newsom just vetoed a bill, AB 316, that would have required a human on board an autonomous semi truck. Labor is clearly unhappy – the unions wanted guarantees for their drivers. But this does pave the way for more autonomy on the roads.
“These 183,000 Books Are Fueling the Biggest Fight in Publishing and Tech” The Atlantic released a tool to help authors determine whether their work was used in generative AI tools built by Meta, Bloomberg and others. It’s based on a dataset called Books3. Meredith Broussard, the data journalist and NYU professor who appeared in Episode 2 of the podcast, posted that one of her books was in this dataset without her consent.
ChatGPT as therapy, by one of its safety engineers The potential for AI to address mental health issues is immense – we spoke with Roz Picard about this in Episode 6 of the podcast. And we all should be excited about the products we work on. But this tweet reminded me of Del Harvey, who built the original Trust and Safety team at Twitter. She loved Twitter, but she was also deeply suspicious of it and double-checked everything, because she was responsible for the health of the platform and its users. She and her team were a model of self-governance – checks and balances on the inside. The post in question comes from a member of OpenAI’s safety team, and it leads me to wonder about the culture of that team within OpenAI – whether the awe and excitement internally are so immense that an external party is needed to provide that check and balance.