GhostStripe Attack Haunts Self-Driving Cars
Six boffins, mostly hailing from Singapore-based universities, say they have shown it's possible to interfere with autonomous vehicles by exploiting the machines' reliance on camera-based computer vision, causing them to fail to recognize road signs.
The technique, dubbed GhostStripe in a paper [PDF] to be presented at the ACM International Conference on Mobile Systems next month, is undetectable to the human eye, but could be deadly to Tesla and Baidu Apollo drivers as it exploits the CMOS camera sensors employed by both brands.
It basically involves using LEDs to shine patterns of light on road signs so that the cars’ self-driving software fails to understand the signs; it’s a classic adversarial attack on machine-learning software.
Crucially, it abuses the rolling digital shutter of typical CMOS camera sensors. The LEDs rapidly flash different colors onto the sign as the active capture line moves down the sensor. For example, the shade of red on a stop sign could look different on each scan line to the car due to the artificial illumination.
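To get a feel for the effect, here's a minimal sketch, not taken from the paper, that simulates a rolling shutter sampling a flickering light source line by line; the frame geometry, line readout time, flicker frequency, and LED colors are all illustrative assumptions.

    import numpy as np

    # Illustrative assumptions, not the paper's parameters
    ROWS = 480                 # image height in scan lines
    LINE_TIME = 30e-6          # seconds to read out one line (rolling shutter)
    FLICKER_HZ = 600           # LED color-switching frequency
    LED_COLORS = np.array([[1.0, 0.2, 0.2],   # reddish
                           [0.2, 1.0, 0.2],   # greenish
                           [0.2, 0.2, 1.0]])  # bluish

    # Each scan line is exposed a little later than the one above it,
    # so it sees whichever LED color happens to be active at that moment.
    line_times = np.arange(ROWS) * LINE_TIME
    color_index = (line_times * FLICKER_HZ).astype(int) % len(LED_COLORS)

    # A uniform red "stop sign" patch, tinted line by line by the LED light
    sign = np.ones((ROWS, 640, 3)) * np.array([0.8, 0.1, 0.1])
    striped = sign * LED_COLORS[color_index][:, None, :]

    print("distinct per-line tints in one frame:", len(np.unique(color_index)))
    print("line 0 sees  ", np.round(striped[0, 0], 2))
    print("line 240 sees", np.round(striped[240, 0], 2))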
[Figure: the GhostStripe paper's illustration of the 'invisible' adversarial attack against a self-driving car's traffic sign recognition]
The result is a camera capturing an image made up of scan lines whose colors don't match one another as expected. The picture is cropped and sent for interpretation to a classifier within the car's self-driving software, which is usually based on deep neural networks. Because the crop is striped with mismatched colors, the classifier doesn't recognize the image as a traffic sign, and the vehicle therefore doesn't act on it.
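To see why those stripes derail recognition, here's a toy illustration that stands in for the real pipeline: it applies a deliberately crude color-consistency check in place of Apollo's actual deep-network classifier, which works very differently, so treat it only as a cartoon of the failure mode.

    import numpy as np

    def looks_like_stop_sign(patch):
        # Toy stand-in for a deep-network classifier: it just checks that the
        # crop is predominantly red and roughly uniform from line to line.
        per_line = patch.mean(axis=1)                  # average color of each scan line
        mostly_red = (per_line[:, 0] > per_line[:, 1]).mean() > 0.9
        consistent = per_line.std(axis=0).max() < 0.1  # little line-to-line variation
        return mostly_red and consistent

    h, w = 120, 120
    clean = np.ones((h, w, 3)) * np.array([0.8, 0.1, 0.1])      # clean red crop
    stripe = np.tile(np.array([[0.8, 0.1, 0.1],
                               [0.1, 0.1, 0.8]]), (h // 2, 1))  # alternating line tints
    striped = np.ones((h, w, 3)) * stripe[:, None, :]

    print("clean crop recognised:  ", looks_like_stop_sign(clean))    # True
    print("striped crop recognised:", looks_like_stop_sign(striped))  # False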
So far, all of this has been demonstrated before.
Yet these researchers say they not only distorted the appearance of the sign as described, but managed to do so repeatedly and in a stable manner. Rather than trying to confuse the classifier with a single distorted frame, the team ensured that every frame captured by the cameras looked wrong, making the attack technique practical in the real world.
“A stable attack … needs to carefully control the LED’s flickering based on the information about the victim camera’s operations and real-time estimation of the traffic sign position and size in the camera’s [field of view],” the researchers explained.
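The timing problem they describe boils down to simple arithmetic. Assuming illustrative values for the frame period and line readout time, plus a hypothetical estimate of which image rows the sign currently occupies, the attacker only needs to switch LED colors while those rows are being read out:

    import numpy as np

    # Illustrative camera parameters (assumptions, not from the paper)
    FRAME_PERIOD = 1 / 30      # 30 fps
    LINE_TIME = 30e-6          # per-line readout time of the rolling shutter
    FRAME_START = 0.0          # time the current frame's first line is read

    # Hypothetical real-time estimate of the sign's position in the frame
    sign_top_row, sign_bottom_row = 180, 260

    # Window during which the shutter reads the rows covering the sign
    window_start = FRAME_START + sign_top_row * LINE_TIME
    window_end   = FRAME_START + sign_bottom_row * LINE_TIME

    # Schedule color switches so several different tints fall inside that window
    SWITCHES = 6
    switch_times = np.linspace(window_start, window_end, SWITCHES, endpoint=False)
    print("flicker the LEDs at t =", np.round(switch_times * 1e3, 3), "ms into the frame")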
The team developed two versions of this stabilized attack. The first, GhostStripe1, does not require access to the vehicle, we're told. It employs a tracking system to monitor the target vehicle's real-time location and dynamically adjusts the LED flickering accordingly to ensure a sign isn't read properly.
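For a rough sense of what that tracking has to produce, here's a minimal pinhole-projection sketch, with made-up focal length, sign size, and geometry rather than anything from the researchers' setup, that predicts which image rows the sign covers as the car approaches:

    # Rough pinhole-projection sketch; all figures below are assumptions
    FOCAL_PX = 1400             # camera focal length in pixels
    IMG_HEIGHT = 1080
    SIGN_HEIGHT_M = 0.75        # physical height of the sign face
    SIGN_CENTRE_OFFSET_M = 1.2  # sign centre above the camera's optical axis

    def sign_rows(distance_m):
        """Predict which image rows the sign covers at a given distance."""
        height_px = FOCAL_PX * SIGN_HEIGHT_M / distance_m
        centre_row = IMG_HEIGHT / 2 - FOCAL_PX * SIGN_CENTRE_OFFSET_M / distance_m
        top = int(centre_row - height_px / 2)
        bottom = int(centre_row + height_px / 2)
        return max(top, 0), min(bottom, IMG_HEIGHT - 1)

    # As the tracked car closes in, the sign grows and drifts in the frame,
    # so the LED timing window has to be updated continuously.
    for d in (40, 20, 10, 5):
        print(f"{d:>2} m away -> sign covers rows {sign_rows(d)}")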
GhostStripe2 is targeted and does require access to the vehicle, which could perhaps be arranged covertly by a miscreant while the vehicle is undergoing maintenance. It involves placing a transducer on the camera's power wire to detect frame capture moments, refining the timing control to pull off a perfect or near-perfect attack.
“Therefore, it targets a specific victim vehicle and controls the victim’s traffic sign recognition results,” the academics wrote.
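Conceptually, GhostStripe2 amounts to recovering the camera's frame timing from its power draw. Purely as an illustration of that idea, the sketch below detects periodic current spikes in a synthetic power trace and estimates the frame period and phase; the signal model, sample rate, and threshold are invented for the example:

    import numpy as np

    # Synthetic power trace: a current spike at the start of every frame,
    # plus noise. The numbers are invented for illustration.
    FS = 100_000                      # samples per second
    FRAME_PERIOD = 1 / 30             # true frame period to be recovered
    t = np.arange(0, 0.5, 1 / FS)
    rng = np.random.default_rng(0)
    trace = 0.02 * rng.standard_normal(t.size)
    frame_starts = np.arange(0.004, 0.5, FRAME_PERIOD)    # unknown 4 ms phase
    for s in frame_starts:
        trace[int(s * FS):int(s * FS) + 20] += 1.0        # readout current spike

    # Detect spikes with a simple threshold and take the rising edges
    above = trace > 0.5
    edges = np.flatnonzero(above[1:] & ~above[:-1]) / FS

    period = np.median(np.diff(edges))
    phase = edges[0]
    print(f"estimated frame period {period*1e3:.2f} ms, first frame at {phase*1e3:.2f} ms")
    # With period and phase in hand, LED switching can be locked to the shutter.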
The team tested their system on a real road, using a car fitted with a Leopard Imaging AR023ZWDR, the camera used in Baidu Apollo's hardware reference design, and targeted stop, yield, and speed limit signs.
GhostStripe1 achieved a 94 percent success rate and GhostStripe2 a 97 percent success rate, the researchers claim.
One thing of note was that strong ambient light decreased the attack’s performance. “This degradation occurs because the attack light is overwhelmed by the ambient light,” said the team. This suggests miscreants would need to carefully consider the time and location when planning an attack.
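The lighting dependence is easy to picture as a contrast problem: the modulation the camera sees is roughly the LED contribution divided by the total light falling on the sign. With made-up illuminance figures, a back-of-the-envelope sketch shows how quickly it shrinks:

    # Back-of-the-envelope numbers; the lux values are illustrative, not measured
    LED_CONTRIBUTION = 200.0   # light the attack LEDs add to the sign, in lux

    for ambient in (50.0, 500.0, 5000.0, 50000.0):   # dusk to bright sunlight
        modulation = LED_CONTRIBUTION / (ambient + LED_CONTRIBUTION)
        print(f"ambient {ambient:>7.0f} lux -> per-line modulation {modulation:.1%}")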
Countermeasures are available. Most simply, the rolling-shutter CMOS camera could be replaced with a global-shutter sensor that captures the whole frame at once, or the order in which lines are scanned could be randomized. Adding more cameras could also lower the attack's success rate or force a more complicated hack, and the attack could be included in the AI's training data so the system learns to cope with it.
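One of those mitigations, randomizing the line-scanning order, can be sketched in a few lines; this is a conceptual illustration of the idea, not how any shipping camera implements it:

    import numpy as np

    ROWS = 480
    LINE_TIME = 30e-6
    FLICKER_HZ = 600

    rng = np.random.default_rng(0)
    readout_order = rng.permutation(ROWS)        # rows read in a scrambled order

    # The attacker's flicker still changes color over time, but because
    # neighbouring rows are no longer read at neighbouring times, the tints
    # land on scattered rows instead of forming coherent stripes.
    read_times = np.empty(ROWS)
    read_times[readout_order] = np.arange(ROWS) * LINE_TIME
    random_tint = (read_times * FLICKER_HZ).astype(int) % 3
    sequential_tint = (np.arange(ROWS) * LINE_TIME * FLICKER_HZ).astype(int) % 3

    print("bands with sequential readout:", np.count_nonzero(np.diff(sequential_tint) != 0) + 1)
    print("bands with randomized readout:", np.count_nonzero(np.diff(random_tint) != 0) + 1)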
The study joins the ranks of others that have used adversarial inputs to trick the neural networks of autonomous vehicles, including one that forced a Tesla Model S to swerve out of its lane.
The research indicates there are still plenty of AI and autonomous vehicle safety concerns to address. The Register has asked Baidu to comment on its Apollo camera system and will report back should a substantial reply materialize. ®
Editor’s note: This story was revised to clarify the technique and to include an illustration from the paper.