Invisible at the Crossroads: How Autonomous Tech Still Fails to See Us

2025-07-01
6 min read.
As AI enters roads, prisons, and war zones, safety is on the line. Darshana Patel highlights how invisible code can marginalize visible lives, and why oversight can’t wait.

In 2022, I spent months on foot, drifting through San Diego and LA, just another brown, short, unassuming woman blending into the city’s everyday tapestry. I became part of the urban fabric in a way few drivers ever notice, least of all, it turns out, the ones that weren’t really “driving” at all.

The crossroads of Southern California had become a live testbed for autonomous vehicles, with Teslas gliding through intersections, robotaxis waiting at curbs, and cameras and sensors pointed at every edge of the street. I quickly noticed a pattern: I would step into the crosswalk, right-of-way in hand, and the car turning left or right wouldn’t stop. Over and over. It wasn’t just inattentive humans anymore. The machines didn’t see me either.

After losing count of these near-misses, a cold realization set in: I was an anomaly. Not to people, but to the algorithms guiding these cars.

From Anecdote to Evidence

The first time it happened, I shrugged it off. The fifth time, I started to wonder. By the tenth, I questioned if I’d become invisible, not to society, but to the complex code now rolling down our streets. Later, I learned I wasn’t imagining things.

Multiple studies have revealed that many early versions of self-driving car software were more likely to recognize white, adult pedestrians than anyone else. For people with darker skin, smaller statures, or physical differences, the risks are higher. The machines struggle to see what they haven’t been thoroughly trained on.

Research spotlight:

A 2019 study from Georgia Tech found that pedestrian-detection systems were roughly five percentage points less accurate at identifying people with darker skin tones than people with lighter skin tones. These algorithms, like those used in Tesla’s and other automakers’ autonomous vehicle systems, are trained on datasets that are not truly representative of the public they serve. [Georgia Tech Study].

Similarly, MIT researchers have documented consistent accuracy gaps in facial-analysis AI across skin tone and gender. [MIT Gender Shades Project].
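To make the kind of gap these studies describe concrete, here is a minimal sketch of a disaggregated evaluation, the basic technique behind such audits: detection accuracy is computed separately for each skin-tone group instead of as a single average. The records, group labels, and numbers below are illustrative assumptions, not data or code from either study.

```python
from collections import defaultdict

# Hypothetical evaluation records: each pedestrian instance carries a
# skin-tone group label (the Georgia Tech study grouped Fitzpatrick I-III
# vs. IV-VI) and whether the detector actually found the person.
records = [
    {"group": "lighter", "detected": True},
    {"group": "lighter", "detected": True},
    {"group": "darker",  "detected": True},
    {"group": "darker",  "detected": False},
    # ... thousands more labeled instances in a real audit
]

def detection_rate_by_group(records):
    """Return the share of pedestrians detected, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["detected"])
    return {group: hits[group] / totals[group] for group in totals}

rates = detection_rate_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates)                         # e.g. {'lighter': 1.0, 'darker': 0.5}
print(f"detection gap: {gap:.1%}")   # headline figures like "5%" refer to a gap of this kind
```

A single aggregate accuracy score would average this disparity away; reporting results per group is what surfaces the problem in the first place.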

Algorithmic Blind Spots and Real-World Risk

Why does this happen? Autonomous vehicles depend on enormous datasets and “machine vision,” but if those datasets underrepresent people who look like me, I’m literally out of the picture.

Add in the fact that AI developers are rarely drawn from the most diverse backgrounds, and blind spots are almost inevitable. The “edge cases”—whether that means wheelchair users, children, darker-skinned women, or someone wearing non-standard clothing—are all at greater risk of being unseen.

But on the street, edge cases aren’t theoretical. They’re flesh and blood. They are the difference between safety and tragedy.

Beyond Cars: Autonomous Policing and War Machines

This pattern isn’t limited to Teslas and Waymos. Cities from Atlanta to New York to San Francisco are now piloting robot “dogs” and humanoids for policing, and military contractors are deploying AI-powered drones and jets in war zones. In both cases, the promise is the same: greater efficiency, safety, and the “neutrality” of machines. They take the human out of the equation. The peril is just as clear: when bias is embedded in the code, it gets multiplied at machine speed and scale.

Credit: Tesfu Assefa

Recent example:

In 2022, San Francisco’s Board of Supervisors briefly approved a policy allowing police to deploy robots equipped with potentially lethal tools, before reversing course under public pressure. Civil rights advocates warned that introducing armed robots into policing, especially in vulnerable communities, risked disproportionate targeting, mission creep, and the normalization of force without human empathy. Though the policy was pulled back, it highlighted how autonomous systems in law enforcement raise the same urgent questions: Who programs the “see something, do something” trigger? Who is held accountable when a machine misses or misjudges? [ACLU News].

In 2025, the U.S. Air Force officially designated Anduril’s autonomous jet, nicknamed Fury, as the YFQ-44A, one of the first-ever “uncrewed fighter jets.” With software-driven targeting and surveillance systems that can operate with or without direct human control, this marks a major shift in how aerial warfare is waged and raises urgent questions: What criteria determine “legitimate targets”? Who controls the mission logic? And who is held accountable if the AI misidentifies an enemy or a civilian? As these systems begin to enter active deployment, we’re witnessing a new frontier of invisible bias and real-world consequences. [Business Insider].

More and more, we’re seeing autonomous tech move into spaces where trust and bias carry real weight. In Georgia, Cobb County’s sheriff deployed nearly six-foot-tall “jailbots” equipped with 360° cameras, night vision, heat sensors, and remote tasing capabilities inside its adult detention center as part of a 90-day autonomous security pilot. Promoted as “game-changers” for perimeter patrol and inmate monitoring, these robots raise urgent questions: Who decides when to deploy non-lethal force? How are edge cases, such as mental health crises, skin tone, or cultural expression, accounted for in high-stakes settings? And who is liable if a robot misjudges and harms someone? These questions echo those around self-driving cars, and they’re no less critical when applied inside prison walls. [Cobb County Sheriff’s Department]

The Myth of Neutral Technology

We’ve been told that code is objective, that machines don’t discriminate. But the data shows otherwise: technology always reflects the values, gaps, and blind spots of its creators.

When we trust our safety or our lives to machines that can’t see us, we’re not just outsourcing judgment. We’re codifying bias. The line between human error and algorithmic error isn’t as clear as we’d like to believe.

Reclaiming the Conversation

So what comes next? If bias is built in, it can be corrected, but only if we’re willing to see the problem and demand change.

  • Audit the black boxes: AI systems, especially those that govern public safety, must be transparent, independently tested, and regularly audited for fairness.
  • Diverse datasets, diverse teams: If the people building AI, and the data they train it on, don’t reflect the people in the world, someone will always be left out (see the sketch after this list).
  • Human in the loop: Even as we accelerate toward autonomous everything, human oversight and moral judgment are more essential than ever.
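As a concrete illustration of the second point, here is a minimal sketch of a representation audit: comparing how often each group appears in a training set against an assumed share of the population the system will actually serve. The group names, counts, reference shares, and the 0.8 cutoff are all illustrative assumptions, not a prescribed standard.

```python
# Hypothetical counts of labeled pedestrians in a training set, by group.
dataset_counts = {
    "lighter-skin adult": 70_000,
    "darker-skin adult": 18_000,
    "child": 8_000,
    "wheelchair user": 400,
}

# Assumed share of each group among the people the system will encounter.
reference_share = {
    "lighter-skin adult": 0.55,
    "darker-skin adult": 0.30,
    "child": 0.12,
    "wheelchair user": 0.03,
}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    observed = count / total                # share of the training data
    expected = reference_share[group]       # share of the real population
    ratio = observed / expected
    flag = "UNDERREPRESENTED" if ratio < 0.8 else "ok"  # illustrative cutoff
    print(f"{group:20s} {observed:6.1%} of data vs {expected:5.1%} expected  [{flag}]")
```

Checks like this are cheap to run and easy to publish alongside a model, which is exactly the kind of transparency independent audits should be able to demand.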

I’m sharing this not just as an anecdote but as a wake-up call. If I can be invisible to the most advanced vehicles on our roads, who else are our systems failing to see?

As we stand at the threshold of AI policing, autonomous war machines, and self-driving everything, let’s ask harder questions before we’re all out of sight and out of mind.

Have you had your own experience of being “unseen” by technology? What stories, data, or ideas do you want to share to shape better systems? The future is being written and coded, now. Let’s make sure we’re all in the picture.

#AIEthics

#AIFavoritism

#AutomatedReasoning

#BiasinAI


