Camouflaged Graffiti on Road Signs Can Fool Machine Learning Models

Sep 14th, 2017 11:00am

The technology behind self-driving cars has improved considerably in the last couple of years, with car manufacturers relying on a combination of radar, laser, motion sensor and digital camera systems to help their vehicles navigate the roads safely. Deep learning algorithms power the car’s computer vision system, helping it recognize pedestrians and understand road signage. But in a world where signs might be vandalized, or obscured by dirt, snow or foliage, how accurate and reliable can these computer “eyes” be?

Unfortunately, as one recent study shows, it’s not difficult to fool visual classification algorithms by merely altering the physical appearance of road signage slightly — something far more achievable and likely than electronically hacking into an autonomous driving system. A group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley found that adding a few stickers or a bit of spray paint to a sign caused deep neural network-based classifiers to mistake it for another type of sign — understandably, a big cause for concern.

A New Kind of Adversarial Attack

Experts working in the field of A.I. and machine learning are familiar with the problem of “adversarial examples” — inputs into machine learning models that are deliberately designed to flummox machines, causing them to mistakenly classify these inputs as something else. For visual input systems, we’ve seen previous instances where researchers apply gradient-based pixel perturbations to images that are unnoticeable to the human eye yet confound machines most of the time.
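
To make the idea concrete, here is a minimal sketch — not code from any of the studies mentioned here — of one widely cited digital technique, the fast gradient sign method, assuming a generic Keras image classifier (model), a batched input image scaled to [0, 1], and the image’s correct class index (true_label):

```python
import tensorflow as tf

def fgsm_example(model, image, true_label, epsilon=0.01):
    """Craft a digital adversarial example with the fast gradient sign method."""
    image = tf.convert_to_tensor(image)   # shape (1, height, width, 3), values in [0, 1]
    labels = tf.constant([true_label])    # the image's correct class index
    with tf.GradientTape() as tape:
        tape.watch(image)
        probs = model(image)              # classifier outputs class probabilities
        loss = tf.keras.losses.sparse_categorical_crossentropy(labels, probs)
    # Nudge every pixel slightly in the direction that increases the loss.
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```

The perturbation budget epsilon keeps each pixel change tiny, which is why such examples can look unchanged to a person while still flipping the classifier’s prediction.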

But these types of adversarial attacks would be difficult to execute in a real-world scenario unless hackers had direct access to a vehicle’s electronic and navigation systems. On the other hand, it’s much easier to change objects in real life, and fool a computer that way. In a paper titled “Robust Physical-World Attacks on Machine Learning Models,” the researchers describe how they developed a new “attack algorithm” capable of creating “adversarial perturbations” — visual alterations made to signs in a number of real-world ways that cause computer vision systems to misclassify them, regardless of distance or viewing angle.

The researchers call these modifications “robust physical perturbations” (or RP2). Producing them in the physical world can be as simple as pasting a printed, adulterated sign over a real one, or placing stickers on it. What might be most unsettling is that the technique produces changes a human would dismiss as ordinary wear or graffiti — or not notice at all — yet fool the computer into thinking it is seeing a completely different sign.
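
The core idea can be sketched as an optimization over many photographs of the same sign, so that a single modification keeps working as distance and viewing angle change. The snippet below is a simplified illustration of that idea rather than the researchers’ actual RP2 implementation (which includes further refinements); the classifier, the stack of photos and the target class are all placeholder inputs:

```python
import tensorflow as tf

def robust_perturbation(model, sign_photos, target_label,
                        steps=300, learning_rate=0.05, reg_weight=0.02):
    """Find one perturbation that pushes every photo of a sign, taken from
    different distances and angles, toward the attacker's target class.

    sign_photos: float32 tensor of shape (n, height, width, 3), values in [0, 1].
    model: any Keras classifier that outputs class probabilities.
    """
    delta = tf.Variable(tf.zeros_like(sign_photos[0]))
    targets = tf.constant([target_label] * int(sign_photos.shape[0]))
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    optimizer = tf.keras.optimizers.Adam(learning_rate)

    for _ in range(steps):
        with tf.GradientTape() as tape:
            # The same delta is applied to every view of the sign at once.
            perturbed = tf.clip_by_value(sign_photos + delta, 0.0, 1.0)
            attack_loss = loss_fn(targets, model(perturbed))
            # Penalize large changes so the alteration stays inconspicuous.
            loss = attack_loss + reg_weight * tf.norm(delta)
        gradients = tape.gradient(loss, [delta])
        optimizer.apply_gradients(zip(gradients, [delta]))
    return delta
```

Because the loss is computed over every view simultaneously, the resulting change cannot rely on one lucky camera position — which is what makes this kind of attack a concern on a real road.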

Posters and Stickers

The team used a variety of methods to achieve this illusion, ranging from camouflage graffiti and camouflage art to subtle fading. They first tested an implementation they called “poster-printing attacks,” in which a subtly altered sign was produced with their attack algorithm, then printed and affixed over a real sign. The visual changes are noticeable only if one looks very closely. The team found that with this approach, they were able to fool a machine 100 percent of the time into classifying a stop sign as a 45-mile-per-hour speed limit sign, and a right-turn sign as a stop sign.

Examples of poster-printed adversarial attacks.

The second approach, dubbed a “sticker attack,” had the team affixing stickers to signs in a pattern resembling the sticker graffiti that’s common in cities. While this method is more apparent than the printed poster trick, these attacks are easier to execute: would-be pranksters would only need a color printer, and wouldn’t have to cover an entire sign with a printout. The researchers found that with sticker camouflage graffiti perturbations — using stickers to form a “LOVE HATE” configuration — they were able to make the system misread a stop sign as a speed limit sign in about 67 percent of cases. With sticker camouflage art perturbations, where the stickers were placed in a more abstract pattern, the system misclassified the same sign in 100 percent of test instances.

Sticker-based adversarial attacks, showing camouflage abstract art attack (top) and camouflage graffiti attack (below).
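
Confining the change to sticker-shaped regions is conceptually a small addition to the sketch above: multiply the perturbation by a binary mask before adding it to the sign, so pixels outside the sticker areas are never touched. The patch coordinates below are invented purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical sticker layout: the mask is 1 only where stickers may be placed
# (two rectangular patches on a 32x32 crop of the sign) and 0 everywhere else.
mask = np.zeros((32, 32, 3), dtype=np.float32)
mask[4:10, 6:26, :] = 1.0    # upper sticker strip
mask[22:28, 6:26, :] = 1.0   # lower sticker strip
mask = tf.constant(mask)

# Inside the optimization loop from the earlier sketch, the masked perturbation
# would replace the unrestricted one:
#     perturbed = tf.clip_by_value(sign_photos + mask * delta, 0.0, 1.0)
```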

To carry out their experiments, the team trained their model in TensorFlow on a public dataset of road signs. While the dataset of a few thousand training examples was relatively small, the results plainly show the potential vulnerabilities of the deep neural networks used in autonomous driving systems when real-world objects are modified.
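
For context, training a baseline sign classifier in TensorFlow can be as simple as the sketch below. The directory name, input size and network architecture are illustrative assumptions, not details taken from the paper:

```python
import tensorflow as tf

# Assumes sign crops are stored as one folder per class under "road_signs/".
train_ds = tf.keras.utils.image_dataset_from_directory(
    "road_signs/", image_size=(32, 32), batch_size=64)
num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),   # pixels arrive in [0, 255]
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

A classifier trained this way is the kind of white-box target the researchers describe: with access to the trained model, an attacker can compute its gradients directly, which is what the perturbation sketches above rely on.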

“Unlike prior work, […] here we focus on evasion attacks where attackers can only modify the testing data instead of training data (poisoning attack),” explained the researchers. “In evasion attacks, an attacker can only change existing physical road signs. Here we assume that an attacker gains access to the classifier after it has been trained (‘white-box’ access).”

The team’s motivation for doing these tests is simple: to investigate the latent weaknesses of autonomous systems currently in use, so that they may be protected against such adversarial attacks, which have the potential to do great harm, especially in split-second situations on the road. “This assumption is practical since even without access to the actual model itself, by probing the system, attackers can usually figure out a similar surrogate model based on feedback,” they added. “We need to evaluate the most powerful attacker in order to inform future defenses that guarantee system robustness.”

In the meantime, such instances demonstrate that the development of autonomous vehicles still has a long way to go. They also raise the question of whether we need to completely rethink infrastructure itself: in the interest of safety, should roads be given over entirely to autonomous vehicles, with human control, signage and other opportunities for human error and mischief eliminated? Should we install smart roadway markers that broadcast sign information wirelessly and would be harder to modify? While there’s little doubt that self-driving vehicles will eventually proliferate on our roads, many questions remain as to how to make them safe and secure against a variety of attacks.

Images: The University of Washington, the University of Michigan, Stony Brook University, and the University of California Berkeley.
