Are We Ready for AI-Powered Security Cameras?

Jun 23rd, 2019 6:00am
Feature image via Pixabay.

What happens when powerful cloud-based analytics tools can be applied to footage from security cameras?

Jay Stanley, a senior policy analyst for the American Civil Liberties Union’s speech, privacy, and technology project, has explored the issue in a new 50-page report — and a new video arguing that smart cameras “pose a major threat to our privacy.” Stanley’s outlook is dire. “An Army of Robot Surveillance Guards Is Coming,” warned the teaser article on the ACLU’s “Free Future” blog.

Today’s cameras can capture high-resolution video and transmit it to the cloud for storage and analysis, points out the report, which explores the issues raised by the emerging $3.2 billion “video analytics” industry. As we approach the moment when the world’s cameras are augmented with AI, “It is as if a great surveillance machine has been growing up around us, but largely dumb and inert — and is now, in a meaningful sense, ‘waking up,’” he wrote.

The ACLU examined over 40 papers on computer vision to help predict what technologies may be coming. “What we found is that the capabilities that computer scientists are pursuing, if applied to surveillance and marketing, would create a world of frighteningly perceptive and insightful computer watchers monitoring our lives.” AI can already recognize actions being taken by humans — like putting on a hat, or taking off eyeglasses. But the report also warns that researchers are investigating AI that can recognize emotions, behaviors, and “the patterns of our movements.”

While there’s no knowing which of those capabilities the market will ultimately adopt, the report warns that any of them, applied to surveillance, would push us toward that world of “frighteningly perceptive” computer watchers.

Machines Watching Machines

The ACLU report points out that today’s most sophisticated cameras offer ultra-high resolution and night-vision sensors — and that AI is now being applied to retroactively increase the resolution of images. Other forms of “computational photography” include light-field cameras, which, according to Wikipedia, allow a kind of refocusing after the picture has been taken by capturing not only the intensity of light but also the direction the light rays are traveling.

The report then cites a study which found that, as of 2014, America had one surveillance camera for every 8.1 people — more than any other nation on earth.

A footnote points out the Chicago police can already access 20,000 camera feeds, and “a number of other cities are working at integrating private cameras into police networks as well.”

Now, what if that footage could be cross-referenced with information gathered online?

The report also cites New York City’s sophisticated “Domain Awareness System,” which was developed in partnership with Microsoft and includes both video cameras and license plate readers. In 2012 police commissioner Ray Kelly said the system could already “track where a car associated with a suspect is located, and where it has been in past days, weeks or months.” And if a suspicious package is discovered, “the NYPD can immediately…look back in time and see who left it there” — and then also determine where that person went.

But why stop there? Looking towards the future, the report wonders if this technology will be used preemptively. “Another goal of wide-area surveillance research is to carry out ‘pattern of life’ analysis on subjects — to detect regularities in their movements and activities in order to discover things about them, predict where they will be, or to sound alerts when variations and anomalies in those patterns arise.”

And what about all the video footage that’s already been captured over the years? “The same analytical abilities that enable real-time scrutiny also allow for analysis of existing video libraries,” the ACLU reports.

It’s Already Here

The report provides more examples of where these early technologies are already being used. Video analytics software has been installed to detect suspicious or “anomalous” behavior in the schools around Parkland, Florida, where a shooting occurred in 2018. Commercial railroads began using such systems for security monitoring in 2007. One company claims it can spot shoplifters before they shoplift, by recognizing signs like “fidgeting, restlessness, and other potentially suspicious body language.” There’s even a company offering “crowd management solutions.”

Some companies are already selling “voice analysis” products which claim to detect specific emotions.

And one video analytics vendor claims its technology can identify which people in a crowd have a fever, using thermometric cameras to detect “targets of interest within a defined temperature range.”
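
As an illustration of what that kind of temperature-range filtering involves, here’s a minimal sketch in Python, assuming the camera exposes each frame as a 2D array of per-pixel temperatures. The fever range and the frame values are invented for demonstration; they aren’t any vendor’s actual thresholds or API.

```python
import numpy as np

# Illustrative "target of interest" range in degrees Celsius.
# These are demonstration values, not any vendor's real thresholds.
MIN_TEMP_C, MAX_TEMP_C = 38.0, 41.0

def find_hot_pixels(frame):
    """Return (row, col) coordinates of pixels whose measured
    temperature falls inside the configured range. `frame` is
    assumed to be a 2D NumPy array of per-pixel temperatures,
    as a thermometric camera might report them."""
    mask = (frame >= MIN_TEMP_C) & (frame <= MAX_TEMP_C)
    return list(zip(*np.nonzero(mask)))

# A fake 4x4 thermal frame: one pixel sits in the fever range.
frame = np.full((4, 4), 36.5)
frame[2, 1] = 38.7
print(find_hot_pixels(frame))  # [(2, 1)]
```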

The report also mentions Amazon’s cloud services — specifically, the facial-recognition service they’ve been offering to police departments. “Cloud service means that the deployment of real-time face recognition on surveillance cameras is no longer restricted to sophisticated and deep-pocketed organizations; any small police department or corner store can now bring the substantial analytics expertise of a company like Amazon to bear on its local camera feeds.” But beyond facial recognition, Amazon is also offering more sophisticated video analytics. And meanwhile, Google’s Nest home security system also offers its own anomaly-detecting cloud analytics service.
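
To give a sense of how low that barrier has become, here’s a minimal sketch of matching a camera frame against a watchlist using Amazon Rekognition’s face-search API via the boto3 Python SDK. The collection name and image file are hypothetical, and a real deployment would first enroll known faces into the collection with index_faces(); this illustrates the general workflow, not any particular department’s system.

```python
import boto3

# Hypothetical setup: a face collection assumed to already exist
# (populated earlier via index_faces) and one frame captured from
# a local camera and saved to disk.
COLLECTION_ID = "storefront-watchlist"
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("camera_frame.jpg", "rb") as f:
    frame_bytes = f.read()

# Ask Rekognition whether the largest face in the frame matches
# anyone already indexed in the collection.
response = rekognition.search_faces_by_image(
    CollectionId=COLLECTION_ID,
    Image={"Bytes": frame_bytes},
    FaceMatchThreshold=90,  # only report matches at >= 90% similarity
    MaxFaces=1,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"Matched face {face['FaceId']} "
          f"with similarity {match['Similarity']:.1f}%")
```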

The report adds something else to consider. “The term ‘anomaly detection’ is widely used in the research literature with seemingly no recognition of, let alone embarrassment over, the concern that identifying ‘anomalous’ people and behavior is a central part of the kind of oppressively conformist society that pervasive monitoring and surveillance threatens to create.”

Noting that video-watching AI agents are “cheap and scalable,” the report stops to ask a few questions, “as we move from collection-and-storage surveillance to mass automated real-time monitoring.”

Unintended Consequences

The change may even come as an unintended consequence of other projects. “The quest for autonomous robots and self-driving cars is driving a lot of research that will, as a kind of spin-off, result in smarter surveillance-video analytics.” After all, self-driving cars will need to accurately predict the future movements of other cars (and of walking humans). “If a computer can understand the video streaming through its ‘eyes’ well enough to allow a robot to move around in the world and interact with people, it can understand much of what’s happening in a surveillance camera stream.”

And when it comes to analyzing digital data, the report points out that deep learning provides a “short cut.” The hardest part then becomes finding a training dataset.
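
As a rough sketch of that “short cut,” here’s what transfer learning looks like in Python with PyTorch and torchvision: a network pretrained on generic images keeps its learned visual features, and only a small final layer is retrained against new labels. The two classes and the random tensors below are placeholders for the labeled footage that, per the report, is the genuinely hard part to obtain.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on generic images and freeze it,
# so its visual features are reused rather than learned from scratch.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new 2-class head ("normal" vs.
# "anomalous" -- illustrative labels, not any product's real ones).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for a labeled video-frame dataset.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One training step: only the new head's weights are updated.
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")
```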

As AI capacity increases, will the use of surveillance technologies expand just to keep supplying the needed data?

The report acknowledges that “As in most fast-moving technology markets, there is almost certainly a hefty proportion of snake-oil being marketed by video analytics companies.” But as Stanley says in his blog post, “Based on experience, however, that often won’t stop them from being deployed — and from hurting innocent people.”

And with the rapid pace of change, these AI camera-watchers may arrive suddenly — billions of ’em, “representing an extension of corporate and bureaucratic power into the tendrils of our lives, watching over each of us and constantly shaping our behavior… Think about what it feels like when we’re driving down the highway and we see a police cruiser driving behind us. Do we want to feel that way at all times?”

And what happens when the camera systems actually know our names and identities? “Face recognition is an enormous privacy threat that has the potential to turn every surveillance camera into a digital checkpoint: a node in a comprehensive distributed tracking network capturing people’s identities, associations, and locations on a mass scale.”

What happens when the cameras can correctly guess the details of our social interactions with others?

The report ultimately warns of “omnipresent AI-powered cameras” and their possible effects on society: tracking “our every conscious and unconscious behavior that, combined with our innate social self-consciousness, turns us into quivering, neurotic beings living in a psychologically oppressive world in which we’re constantly aware that our every smallest move is being charted, measured, and evaluated against the like actions of millions of other people — and then used to judge us in unpredictable ways…”

Their conclusion? “Policymakers must contend with this technology’s enormous power. They should prohibit its use for mass surveillance, narrow its deployments, and create rules to minimize abuse.” The report includes specific recommendations to minimize “darker possibilities,” including:

  • “No government entity should be permitted to deploy video analytics systems without first receiving approval from the relevant governing legislative body following a transparent consideration process…”
  • “Individuals should be legally entitled to access any particular data about them that has been stored and to challenge and correct inaccurate data…”
  • “The full logic and operation behind any decisions or determinations made by video analytics systems that are used in legal proceedings should be subject to mandatory disclosure to all parties… Governments should comply with contemporary best practices for algorithmic transparency and fairness developed by disinterested parties such as academics and non-governmental organizations as well as affected communities.”
  • “[W]ide-area surveillance…should not be engaged in at all by government agencies within the United States.”

And the report’s final pages suggest the same restrictions should apply to private companies: “Video analytics should not be used for the collection of customer data for marketing purposes, for example. And any companies that decide to use this technology need to be transparent about it.”

But the biggest message of the report is that video analytics “is just one example” of an emerging surveillance infrastructure.

The report ends by noting we can be “watched” in other ways — by closely analyzing all the sundry data trails we leave behind. (It specifically notes the new kinds of data that could be collected by IoT devices.) So in that sense, it argues, these concerns aren’t just limited to the specific possibility of AI monitoring security cameras. Instead, that scenario offers a starting point for a larger discussion, “a case study for how we may be affected by increased computer scrutiny in the absence of strong privacy protections.”

Both private and government groups already have an incentive to collect data on people, though they’ve been “limited until now by the practical expense of actually analyzing this flood of data.”

So what happens when that changes?

