How Clearview AI’s Facial Recognition Technology Can Identify Anyone Online

Ever posted a photo on Facebook? Tagged yourself on Instagram? Congratulations…there’s a decent chance your face is now sitting in a massive database used by police departments across the country, and you probably had no idea it was happening.

The culprit is Clearview AI, a secretive startup that’s been quietly scraping billions of photos from social media sites since 2017. We’re talking 30 billion images pulled from Facebook, Instagram, LinkedIn, YouTube, and countless other websites. All without asking permission. All without telling anyone.

Here’s how it works: a police officer uploads a photo of someone, maybe from a security camera, maybe from a crime scene, and Clearview’s software scans its enormous database for matches. Within seconds, it spits out other photos of that person, along with links to their social media profiles, revealing their name, where they live, and who their friends are.
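
Under the hood, face-search engines of this kind are generally built on vector embeddings: a neural network turns each face into a long list of numbers, and “matching” means finding the closest lists of numbers in a giant index. Clearview hasn’t published its implementation, so the sketch below is purely illustrative; the embedding size, the similarity threshold, and the example URLs are all assumptions, and the random vectors stand in for the output of a real face-recognition model.

```python
import numpy as np

EMBEDDING_DIM = 512      # a common size for face embeddings (assumption)
DATABASE_SIZE = 10_000   # stand-in for billions of scraped photos

rng = np.random.default_rng(0)

# Each scraped photo becomes one embedding vector plus the URL it came from.
database_embeddings = rng.normal(size=(DATABASE_SIZE, EMBEDDING_DIM))
database_urls = [f"https://example.com/photo/{i}" for i in range(DATABASE_SIZE)]

def normalize(v):
    """Scale vectors to unit length so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

database_embeddings = normalize(database_embeddings)

def search(query_embedding, top_k=5, threshold=0.8):
    """Return the photos most similar to the query face, best match first."""
    query = normalize(query_embedding)
    scores = database_embeddings @ query        # cosine similarity to every photo
    best = np.argsort(scores)[::-1][:top_k]     # indices of the top-scoring photos
    return [(database_urls[i], float(scores[i]))
            for i in best if scores[i] >= threshold]

# In a real system the query embedding would come from running a face-detection
# and embedding model on the uploaded photo; here we reuse a database vector
# so the demo is guaranteed to find a match.
print(search(database_embeddings[42]))  # roughly [('https://example.com/photo/42', 1.0)]
```

A production system would swap the brute-force scan for an approximate nearest-neighbor index so billions of photos can be searched in seconds, but the matching logic is the same idea.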

It’s like having a Google search for faces, except way more invasive and with way fewer guardrails.

When Privacy Meets Reality

New York Times reporter Kashmir Hill first exposed Clearview AI back in 2020, and what she found was disturbing. The company had been operating in the shadows, refusing to talk to journalists and giving its app to police officers without their departments even knowing about it.

When Hill finally got to test the technology on herself, the results were unsettling. Clearview had found photos of her that she didn’t even remember existed, including one where she was barely visible in the background of someone else’s picture, wearing a coat she’d bought years earlier in Tokyo. The algorithm spotted her anyway.

Even more troubling: when Hill started investigating the company, Clearview put an alert on her face. Every time a police officer tried to show her how the app worked by running her photo through it, the company would call that officer and tell them to stop talking to the reporter.

The Bias Problem

Like most facial recognition technology, Clearview’s system has a troubling track record with people of color. The algorithms struggle more with darker skin tones because they were originally trained mostly on photos of white people. While the technology has improved, the consequences of false matches are real and devastating.

Take Randal Reid, a Black man from Atlanta who was driving to his mother’s house the day after Thanksgiving when four police cars suddenly surrounded him. He was arrested on a warrant from Louisiana for crimes he’d never committed, in a state he’d never visited. Clearview AI had matched his face to security footage of someone stealing expensive purses with a stolen credit card.

Reid spent a week in jail before lawyers could prove the obvious: he wasn’t the man in the video. But by then, he’d already hired lawyers in two states and had his life turned upside down.

Beyond Police Work

While Clearview AI now claims it only sells to law enforcement, the technology has found its way into some pretty creepy situations. Madison Square Garden uses facial recognition to ban lawyers whose firms have sued the company. When one attorney tried to attend a Knicks game on tickets bought by someone else, security spotted her face within minutes and kicked her out.

In its early days, Clearview AI was more aggressive about expanding into private business. The company pitched grocery stores, hotels, and real estate buildings on using the technology. One businessman tested it at his chain stores to catch shoplifters, and reportedly used the app to identify his daughter’s date at a restaurant.

Meanwhile, other tech giants like Google and Facebook actually developed similar technology years earlier but decided it was too dangerous to release. They worried about the potential for abuse and how authoritarian governments might use it to control their citizens. Clearview AI saw this hesitation as an opportunity rather than a warning.

The Wild West of Surveillance

What makes this whole situation even more frustrating is how little oversight exists. Police departments are using facial recognition technology with virtually no rules or supervision. Officers often load the app directly onto their personal phones. There’s no requirement for warrants, no monitoring of searches, no accountability when things go wrong.

And once your photo is in Clearview’s database, it’s basically stuck there forever. The company claims its facial recognition technology has been used by U.S. police nearly a million times since 2017. Every search puts more innocent people in what privacy advocates call a “perpetual police lineup.”

Some states are starting to push back. Illinois residents can request to have their photos removed, the result of a successful lawsuit under the state’s biometric privacy law. California, Colorado, Virginia, and Connecticut have privacy laws that give people similar rights. But for everyone else? You’re out of luck.

Why You Can’t Escape This System

The really insidious thing about Clearview AI is that you can’t opt out by being careful with your own photos. Say you never post pictures on social media and keep your privacy settings locked down, the whole nine yards. Doesn’t matter. If you’re in the background of a friend’s wedding photos, if someone tags you in a group shot from high school, if you happen to walk past a street photographer whose work ends up online, then boom: you’re in the database.

Your face becomes a permanent biometric fingerprint that can be searched by thousands of police departments whenever they want, for whatever reason they want.

This isn’t some distant dystopian future. It’s happening right now. While we’ve been busy arguing about whether tech companies know too much about our browsing habits, they’ve quietly built a system that can identify us anywhere we go, anytime, based on nothing more than our faces.

The question isn’t whether this technology will spread; it already has. Companies like PimEyes are offering similar services to regular consumers. The question is whether we’re going to do something about it before facial recognition becomes so embedded in daily life that we can’t imagine a world without it.

Because once that happens, anonymous existence becomes impossible. And privacy, as we’ve known it, really will be dead.

Sources: NPR, Business Insider



About Author

Malvin Simpson

Malvin Christopher Simpson is a Content Specialist at Tokyo Design Studio Australia and contributor to Ex Nihilo Magazine.
