How Technology Shapes Opportunity
Who really benefits from new tech — and how we can close the gaps.
It’s been a minute! Big changes are coming to the newsletter — starting with monthly themes and Q+As with some of the most fascinating minds in tech. First up? Vint Cerf, one of the fathers of the internet. Without him, my 1s and 0s wouldn’t be reaching your screen right now. I can’t wait to talk to him about the future of AI, the internet, and everything in between. Who else would you like to hear from? Let me know.
This month, we’re talking about how technology shapes opportunity. The printing press shattered elite control of knowledge. The internet put information in billions of hands. AI is now influencing who gets hired, who receives medical care, and who has access to education. But technology doesn’t just change opportunity — it decides who gets it.
AI is often framed as neutral, but it reflects the priorities of those who design it. It can remove barriers and expand access, but it can also create blind spots, favoring some groups over others. The difference comes down to how it’s built — and who it’s built for. Right now, a small group of engineers at a handful of companies makes the choices — what data to train on, which objectives to optimize for, what trade-offs to accept — and those choices shape how AI affects millions of people.
Most of the time, this isn’t about malice — it’s about incentives and blind assumptions. Engineers move quickly, optimizing for performance, engagement, or efficiency, often without stopping to ask who might be left behind. They assume data is neutral when, in reality, it reflects history’s patterns. But sometimes, bias is designed with intent. Robert Moses famously built New York’s parkway overpasses just low enough to keep out buses — ensuring that poorer, Black, and immigrant communities couldn’t easily access Jones Beach. The bias in that system wasn’t accidental.
The same patterns show up in AI.
Take Amazon’s hiring algorithm, one of the first major AI bias stories, reported by Reuters in 2018. Trained on past hiring data, it learned to favor men over women. Amazon scrapped it, but similar issues keep surfacing across industries. Need more examples? Here are just a handful, followed by a quick sketch of how this kind of pattern gets learned in the first place:
A study published in Science found that an AI tool used in US healthcare systematically deprioritized Black patients for advanced care — because it was trained on cost data, not medical need.
A study at New Mexico State University showed that AI graders gave lower scores to students perceived as coming from inner-city schools based on subtle context clues like “listens to rap music.”
UC Santa Cruz researchers found that OpenAI’s GPT-4o expresses more empathy when told it is responding to women.
A Berkeley study of 133 AI systems found that 44% exhibited gender bias, with 25% exhibiting both gender and racial bias.
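To make the mechanism concrete, here is a minimal, purely illustrative sketch — not any real company’s system — of how a model trained on skewed historical decisions reproduces the skew even when the sensitive attribute is never an explicit input. The data, the “proxy” feature, and all the numbers are made up for illustration.

```python
# Toy illustration: a classifier trained on historically skewed hiring decisions
# reproduces the skew, even though the group label itself is never a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "history": group B candidates were hired far less often
# than group A candidates with the same underlying skill.
group = rng.integers(0, 2, n)                      # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)
hired = (skill + rng.normal(0, 0.5, n) - 1.0 * group) > 0   # biased past decisions

# A proxy feature correlated with group (think: word choice on a resume),
# standing in for the subtle cues a real model would pick up.
proxy = group + rng.normal(0, 0.3, n)

X = np.column_stack([skill, proxy])                # note: group itself is excluded
model = LogisticRegression().fit(X, hired)

# Score fresh candidates with identical skill across both groups.
test_skill = np.zeros(1000)
for g in (0, 1):
    X_test = np.column_stack([test_skill, g + rng.normal(0, 0.3, 1000)])
    rate = model.predict(X_test).mean()
    print(f"predicted hire rate, group {'A' if g == 0 else 'B'}: {rate:.2f}")
```

Run it and the predicted hire rates diverge sharply for candidates with identical skill, simply because a proxy feature carries the historical pattern forward.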
The pattern is clear: AI doesn’t just reflect bias — it reinforces it, turning historical trends into automated decisions. And here’s the kicker: Bias doesn’t have to be intentional to be profitable.
Companies don’t necessarily set out to build unfair systems. Efficiency and profitability tend to win out over fairness. AI that prioritizes speed, engagement, or cost savings can end up favoring the groups that have historically been best served by these systems — because they’re the easiest to optimize for. Addressing bias takes extra work: testing for fairness, diversifying training data, slowing down deployment to catch unintended harms. And when there’s no requirement to do it, it often doesn’t happen.
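What does “testing for fairness” actually look like? Even a basic audit helps: log a model’s decisions alongside a group label and compare selection rates. Below is a minimal, hypothetical version of that check; the log data and group labels are invented, and the four-fifths threshold is a common rule of thumb rather than a universal standard.

```python
# A minimal fairness check over a (hypothetical) decision log:
# compare selection rates per group and flag large gaps.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's selection rate to the highest's."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log of (group, was_selected) outcomes.
log = [("A", True)] * 62 + [("A", False)] * 38 + [("B", True)] * 35 + [("B", False)] * 65
rates = selection_rates(log)
print(rates)                                     # {'A': 0.62, 'B': 0.35}
print(f"ratio: {disparate_impact_ratio(rates):.2f}")  # 0.56, well below the 0.8 rule of thumb
```

None of this is hard to run; the point is that it only happens when someone decides it has to.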
So how do we close the gaps? Three things need to change.
First, transparency. Until recently, AI companies had to disclose what their models were trained on, so independent researchers could check for hidden biases. That requirement, part of a White House executive order, was recently rescinded. Without clear oversight, it’s harder to catch systemic issues before they become deeply embedded in decision-making systems.
Second, better design. AI shouldn’t just optimize for efficiency — it should be built with fairness and empathy in mind. That means using diverse training data, testing for unintended bias, and including the voices of those most affected in the design process. Some companies are trying to get this right, but it can’t be an afterthought. Groups like Joy Buolamwini’s Algorithmic Justice League are pushing for industry-wide standards, and we need more of that work.
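One concrete version of that design work, sketched rather than prescribed: before training, rebalance the data so an underrepresented group isn’t simply drowned out. The group labels, counts, and weights below are hypothetical, and real mitigation pipelines involve much more than this.

```python
# Sketch of one common pre-processing mitigation: give each group
# equal total weight in training, regardless of how many examples it has.
import numpy as np

def balanced_sample_weights(groups):
    """Return per-example weights so every group contributes equally overall."""
    groups = np.asarray(groups)
    unique, counts = np.unique(groups, return_counts=True)
    per_group = {g: len(groups) / (len(unique) * c) for g, c in zip(unique, counts)}
    return np.array([per_group[g] for g in groups])

groups = ["A"] * 900 + ["B"] * 100          # group B is badly underrepresented
weights = balanced_sample_weights(groups)
print(weights[0], weights[-1])               # about 0.56 per A example, 5.0 per B example

# Most training APIs accept per-example weights, e.g. scikit-learn's
# estimator.fit(X, y, sample_weight=weights).
```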
Third, public pressure. AI isn’t an inevitability — it’s a set of choices. Companies respond to scrutiny, and they respond to market pressure. The more AI bias gets exposed — by journalists, researchers, and everyday users — the harder it is to ignore. The systems we use every day are shaped by what people demand of them.
This is where you come in. AI is shaping the systems we rely on every day. Be curious. Ask questions. Challenge assumptions. When you interact with an AI — whether it’s a chatbot, a hiring tool, or a recommendation system — ask: Who does this serve? Who might be left out? And if something feels off, don’t just accept it. Push back. The more pressure there is to get this right, the better these systems will be.
If we get this right, AI can expand opportunity, not just reinforce old barriers. Next time, we’ll talk about designing AI that works for all — and we’ll hear directly from Vint Cerf. Stay tuned.