
Industry Facial Recognition AI Moratoriums Don’t Address Flaws, Privacy Concerns

In a move that surprised civil liberties activists and the tech world alike, Amazon — arguably the world's largest provider of facial recognition AI to law enforcement — implemented a one-year moratorium earlier this month on selling its Rekognition software to police departments.
Jul 10th, 2020 9:21am by

In an industry with little to no regulation, Amazon's move is a step in the right direction, but critics warn it may be too little, too late. After all, Amazon's announcement came only after IBM said it would cease development of its facial recognition systems entirely, prompting Microsoft to follow suit and halt sales of its software to police forces. These responses come against the backdrop of ongoing global protests against police brutality, which have sparked a broader discussion about the tools local police forces use against citizens, particularly powerful facial recognition technologies that can identify people in real time from photos and videos, sometimes with disastrous, life-altering consequences.

Amazon’s Rekognition software presents a particularly troubling case. Launched in 2016, Rekognition was initially marketed as a general-purpose computer vision tool that uses deep learning to identify objects, scenes and faces in real time, and to search and compare faces against databases containing tens of millions of faces. Central to its operation is a user-set confidence score, or confidence threshold. Because facial recognition systems predict whether one face matches a face in another image rather than delivering certainties, the threshold determines which candidate matches the software reports: matches that meet the cutoff are returned, while those that don’t are discarded. Amazon recommends a 95% confidence threshold for Rekognition in law enforcement situations, but no regulation compels authorities to follow that recommendation.
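The threshold mechanism described above can be sketched in a few lines. This is an illustrative simplification, not Amazon's actual implementation or API; the face IDs and scores are invented for the example. The key point is that lowering the threshold manufactures more "matches" from the same underlying scores:

```python
# Illustrative sketch of confidence-threshold filtering (not Amazon's
# actual code or API). Each candidate is a (face_id, confidence) pair,
# with confidence expressed as a percentage from 0 to 100.

def filter_matches(candidates, threshold):
    """Report only candidate matches whose confidence meets the threshold.

    Everything below the cutoff is silently discarded, so the same raw
    scores produce very different "match" lists at different thresholds.
    """
    return [(face_id, conf) for face_id, conf in candidates if conf >= threshold]

# Hypothetical similarity scores for one probe image.
candidates = [("face-001", 97.2), ("face-002", 88.4), ("face-003", 81.0)]

# At the recommended 95% threshold, only one match is reported.
print(filter_matches(candidates, 95))   # [('face-001', 97.2)]

# At an 80% threshold, all three come back as "matches".
print(filter_matches(candidates, 80))
```

Because the threshold is entirely under the operator's control, a department that drops it from 95 to 80 will see more matches reported, with no indication that those extra matches are far less reliable.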

Hidden Algorithmic Biases

Beginning in 2017, the Washington County Sheriff’s Office in Oregon became the first law enforcement agency in the US to use Rekognition for facial analysis of suspects, and by 2018 it had run over 1,000 facial searches, something that wasn’t publicly known until May 2018. In February of that same year, a team of MIT and Microsoft researchers published a landmark study revealing how facial recognition software can have harmful racial and gender biases unintentionally baked into its algorithms. By showing that these tools misclassified darker-skinned women significantly more often than white male subjects, the study indicated that such systems are less accurate than their makers claimed.

Another MIT-affiliated audit, conducted in mid-2018 and published in early 2019, focused on Rekognition itself, along with facial recognition systems from Microsoft, IBM, Face++ and Kairos, and found that Rekognition had the highest rates of inaccuracy, misclassifying women of color 31% of the time, far more often than their white male counterparts. These results were reinforced by a separate experiment by the American Civil Liberties Union in July 2018, which found that Amazon’s software erroneously matched images of 28 members of the US Congress with mugshots of criminals, with people of color disproportionately misidentified.

Instead of addressing these concerns, Amazon dismissed the findings, prompting a group of 80 experts — including a former Amazon researcher and several other big names in the AI field — to pen an open letter supporting the MIT researchers’ findings and urging Amazon to cease selling Rekognition to law enforcement. Despite its public refutations, Amazon eventually released an updated version of Rekognition, while behind the scenes it invested in “fairness in AI” research and lobbied Congress on privacy, labor and antitrust regulations.

“Accuracy Hasn’t Improved Much”

More recently in May 2020, yet another eye-opening study was conducted with Rekognition, using the same parameters as the 2018 ACLU experiment. This time around, the analysis included photos of members of the UK parliament.

“We repeated our experiment four times, each with a different sample of 25,000 arrest photos from Jailbase.com,” explained consumer privacy expert Paul Bischoff of tech research firm Comparitech, who oversaw the study. “At the same [confidence] threshold (80%), an average of 32 US Congresspeople were misidentified, four more than the ACLU’s study two years ago. This would seem to indicate that Rekognition’s accuracy hasn’t improved much. Out of the 12 politicians who were misidentified at a confidence threshold of 90% or higher, six were not white. That means half of the misidentified people were people of color, even though non-whites only make up about one-fifth of US Congress and one-tenth of UK parliament.”
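Bischoff's disparity figures can be checked with a bit of arithmetic. The baseline shares below (roughly one-fifth of the US Congress and one-tenth of the UK parliament being non-white) are his approximations from the quote, not exact demographic data:

```python
# Back-of-the-envelope check of the disparity in the Comparitech figures.
misidentified_total = 12     # politicians misidentified at >= 90% confidence
misidentified_nonwhite = 6   # of whom six were not white

share_of_errors = misidentified_nonwhite / misidentified_total  # 6/12 = 0.5

# Approximate non-white share of each legislature, per Bischoff's quote.
baseline_us_congress = 0.20   # "about one-fifth of US Congress"
baseline_uk_parliament = 0.10 # "one-tenth of UK parliament"

# How overrepresented non-white politicians are among the errors:
us_factor = share_of_errors / baseline_us_congress    # roughly 2.5x
uk_factor = share_of_errors / baseline_uk_parliament  # roughly 5x

print(share_of_errors, us_factor, uk_factor)
```

In other words, under these rough baselines, non-white politicians appeared among the high-confidence misidentifications at about 2.5 times their share of the US Congress and about 5 times their share of the UK parliament.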

These recent findings once again underscore the need for more laws that would protect citizens from powerful but imperfect tools. “At the moment, there are almost no regulations on police use of face recognition,” added Bischoff. “Who can use it, how it’s used, where it’s used, who data can be shared with, and whether people are informed that their faces are being scanned are all up to police discretion. This is dangerous because it allows for abuse of the system, can lead to false arrests due to misidentification, and breaches of privacy.”

These fears aren’t unfounded: there’s already been one confirmed case of an innocent man being arrested and detained based on an incorrect facial recognition match earlier this year. Although experts are welcoming Amazon’s one-year pause on offering Rekognition to police departments, many are also pointing out that the company didn’t mention whether that would also apply to federal agencies like the FBI or ICE, which reportedly have used or are considering using Amazon’s software.

“I think there are some legitimate causes for police to use face recognition,” said Bischoff. “Kidnappings and human trafficking are a couple examples. But we need regulations in order to prevent rampant abuse and misuse by authorities. The consequences of not regulating face recognition can ultimately impact freedom of movement and assembly.”

Read the latest study here.

Amazon Web Services is a sponsor of The New Stack.

Images: teguhjati pras via Pixabay; Amazon and Comparitech

