Clearview’s Controversial Facial Recognition AI Automates Mass Surveillance

There’s no doubt that facial recognition tech can make our lives easier, but it also automates practices that are arguably far more questionable, such as the mass surveillance of ordinary citizens. As protests against police brutality continue across the United States and around the globe, law enforcement’s response, by and large, has been to deploy advanced facial recognition tools that gather large amounts of biometric data to identify, track and monitor citizens who are merely exercising their right to peacefully assemble and express their grievances in public.
While one might argue that it’s in the interest of public safety to use this kind of technology, the lack of regulation around such tools, their questionable accuracy, and AI systems’ tendency toward algorithmic bias against marginalized groups all point to a slippery slope toward abuse that could be automated on a systemic scale.
One particularly alarming example is the American tech startup Clearview AI, which offers a powerful facial recognition tool built on a massive database of more than three billion photos scraped from social media profiles and websites, mostly without users’ knowledge or consent and in apparent violation of the policies of tech giants like Facebook and Google. The app is reportedly powerful enough to identify someone even when half of the face is covered, allowing users to then pull up personal information such as addresses and employment history.
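For readers unfamiliar with how this class of tooling works under the hood, the sketch below shows a generic face-matching flow using the open source face_recognition library. It illustrates the general technique of computing a numeric “faceprint” (an embedding) from a photo and comparing it against stored encodings; it is not a description of Clearview’s proprietary system, and the file names and matching threshold here are assumptions made for the example.

```python
# Illustrative sketch only: a generic face-matching flow with the open source
# face_recognition library, NOT Clearview's proprietary pipeline.
# File paths and the 0.6 threshold are hypothetical example values.
import face_recognition

# Build a toy "database": one encoding (faceprint) per known photo.
known_image = face_recognition.load_image_file("scraped_profile_photo.jpg")
known_encodings = face_recognition.face_encodings(known_image)  # list of 128-d vectors

# Encode the face found in a new photo (e.g., a still from event footage).
query_image = face_recognition.load_image_file("query_photo.jpg")
query_encodings = face_recognition.face_encodings(query_image)

if known_encodings and query_encodings:
    # Lower distance means more similar; ~0.6 is a commonly used cutoff.
    distance = face_recognition.face_distance([known_encodings[0]], query_encodings[0])[0]
    print(f"Match: {distance < 0.6} (distance={distance:.3f})")
```

Scaled from a single stored encoding to billions of scraped photos, this basic mechanism is what makes such a database so powerful, and so difficult to escape once an image of you is in it.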
Earlier this year, reports in the New York Times and BuzzFeed News revealed that the company’s app has been used by hundreds of law enforcement agencies, including local police forces as well as the FBI, Customs and Border Protection (CBP) and Interpol. Despite the company’s claims that its tools are intended primarily for law enforcement to solve crimes, recent disclosures make it clear that Clearview is aggressively pushing to find new clients in the commercial sector, such as Walmart and Macy’s, as well as in countries where human rights abuses are rife, such as Saudi Arabia.
The uproar caused by these revelations has led Facebook, Twitter and LinkedIn to demand that Clearview stop trawling their platforms for biometric data, while Apple has suspended the company from its developer program. Clearview is now also promising to delete photos of those who submit a formal request to opt out, though this only applies to residents of certain countries. Advocacy groups like the American Civil Liberties Union are now suing Clearview for running afoul of biometric data privacy laws, in this case Illinois’ Biometric Information Privacy Act (BIPA). According to the ACLU, Clearview’s facial recognition technology not only makes it “dangerously easy” to identify law-abiding citizens at protests, political rallies and religious gatherings, but could also be used to target members of vulnerable communities.
“This case is about Clearview AI capturing people’s faceprints in secret and without consent in order to build and operate a gargantuan face recognition database,” as ACLU senior staff attorney Nathan Freed Wessler explained to The New Stack. “The loss of control over our unique biometric identifiers, including our faceprints, puts us at risk of serious violations of our privacy and security, from pervasive tracking and surveillance to identity theft. The plaintiffs in this case are organizations that represent the interests of survivors of domestic violence and sexual assault, undocumented immigrants, current and former sex workers, and members of other vulnerable communities, who have the most to lose when law enforcement, corporations, or individuals exploit abusive face surveillance tools like Clearview’s.”
Last month, the continuing backlash led Clearview to terminate all contracts with private companies and other non-law enforcement entities in Illinois. Nevertheless, these concessions only underscore the overall lack of regulation in the industry. Individual states like Texas and Washington do have biometric privacy laws, but those can only be enforced by the states’ attorneys general. Illinois’ BIPA, by contrast, is considered one of the nation’s strongest biometric privacy laws because it allows individuals to sue over violations of their own rights. That it stands nearly alone, however, highlights the absence of comprehensive, nationwide policies that would protect citizens’ privacy no matter what state they reside in.
“Although this case is brought on behalf of Illinois organizations under an Illinois law, it illustrates the growing resistance to face recognition technology across the country,” said Freed Wessler. “Alert to the dangers of this technology, cities like Oakland and San Francisco, California, and Springfield and Cambridge, Massachusetts, have banned police use of face recognition technology. The New Jersey Attorney General has barred police in the state from using Clearview. This case aims to end Clearview’s violations of Illinois residents’ rights under BIPA, but it also serves to illustrate the dangers of Clearview’s conduct specifically and of face recognition technology in general.”
There’s no question that facial recognition AI can be useful, but Clearview’s pattern of questionable practices plainly illustrates how little we know about how these biometric data collection systems work: how they affect citizens’ privacy and civil liberties, how possible inaccuracies are caught and handled, and what limits need to be in place so that these tools aren’t misused. Even more problematic, policymakers have been slow to regulate these formidable tools, at the expense of people’s rights to privacy and freedom of speech, meaning we could one day wake up in a surveillance society where anonymity is impossible.
Feature image by Josh Hild via Unsplash.