The latest hack demonstrated by a team of experts is as simple as it is effective: a sticker attached to a road sign can confuse a self-driving car and potentially lead to an accident.
The hack was devised by a group of researchers from the University of Washington, who explained that an attacker can print stickers or posters and attach them to road signs to deceive “most” autonomous cars into misinterpreting the signs.
The alterations the researchers made to the signs in their tests were so subtle that they can go unnoticed by humans, yet the algorithm behind the camera’s software misinterpreted the road signs.
The problem affects the image recognition system used by most self-driving cars, as explained in the research paper titled “Robust Physical-World Attacks on Machine Learning Models.”
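To illustrate the class of attack involved, below is a minimal, hypothetical sketch of a gradient-based adversarial perturbation in the style of the well-known fast gradient sign method (FGSM). It is not the researchers’ own algorithm; model, image and true_label are placeholders standing in for an arbitrary PyTorch image classifier, an input tensor and its correct label:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Untargeted attack sketch: nudge pixels to push the classifier
    away from the correct label (not the UW researchers' code)."""
    # Compute the gradient of the classification loss w.r.t. the pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss,
    # then clip back to the valid [0, 1] range so the change stays tiny.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```

A perturbation of this kind is barely visible to a person looking at the image, which is exactly why altered signs can pass unnoticed by human drivers while still fooling a classifier.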
“Given these real world challenges, an attacker should be able to account for the above changes in physical conditions while computing perturbations, in order to successfully physically attack existing road sign classifiers. In our evaluation methodology, we focus on three major components that impact how a road sign is classified by, say, a self-driving car.” reads the paper.
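The quoted requirement, that a perturbation must keep working as distance and viewing angle change, can be sketched by optimizing the attack objective over random image transformations. The snippet below is only a rough illustration of that idea under assumed placeholders (a PyTorch classifier, a sign image tensor, and random affine jitter as a crude stand-in for physical variation); it is not the authors’ published attack code:

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def robust_perturbation(model, image, true_label, steps=200, epsilon=0.1, lr=0.01):
    """Sketch: optimize a small additive perturbation whose
    misclassification effect holds across viewpoint-like transforms."""
    delta = torch.zeros_like(image, requires_grad=True)
    # Random rotation/scaling crudely mimics changes in angle and distance.
    jitter = T.RandomAffine(degrees=15, scale=(0.7, 1.3))
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        # Maximize the classification loss under a freshly sampled transform.
        loss = -F.cross_entropy(model(jitter(adv)), true_label)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the altered sign stays inconspicuous.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0.0, 1.0).detach()
```

Averaging over transforms is what separates a lab-only adversarial image from a sticker that keeps fooling a camera as the car approaches the sign.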
Using just a color printer and a camera, the experts demonstrated several tricks to interfere with the mechanisms modern self-driving cars use to read and classify road signs.
A similar camouflage was tested on a RIGHT TURN sign, and the cars wrongly classified it as a STOP sign in 66.67 percent of the cases.
The researchers also tried a Camouflage Abstract Art Attack, applying smaller stickers to a STOP sign. To a passerby the stickers could pass for street art, yet they fooled the car systems into misclassifying the sign 100 percent of the time.
“Our attack reports a 100% success rate for misclassification with 66.67% of the images classified as a Stop sign and 33.7% of the images classified as an Added Lane sign. It is interesting to note that in only 1 of the test cases was the Turn Right in the top two classes.” reads the paper. “In most other cases, a different warning sign was present. We hypothesize that given the similar appearance of warning signs, small perturbations are sufficient to confuse the classifier. In future work, we plan to explore this hypothesis with targeted classification attacks on other warning signs.”
The experts did not reveal which manufacturer’s self-driving car they used in their tests; in any case, their research demonstrates the importance of improving the safety and security of such vehicles.
(Security Affairs – self-driving car, car hacking)