Even modest makeup can thwart facial recognition
Researchers at cyber-defense contractor PeopleTec have found that facial-recognition algorithms’ focus on specific areas of the face opens the door to subtler surveillance avoidance strategies.
In a pre-print paper titled “Novel AI Camera Camouflage: Face Cloaking Without Full Disguise,” David Noever, chief scientist, and Forrest McKee, data scientist, describe their efforts to baffle face recognition systems through the minimal application of makeup and manipulation of image files.
Noever and McKee recount various defenses that have been proposed against facial recognition systems, including CV Dazzle, which creates facial asymmetries with high-contrast makeup; adversarial attack graphics that confuse algorithms; and Juggalo makeup, which can be used to obscure jaw and cheek detection.
And of course, there are masks, which have the advantage of simplicity and tend to be reasonably effective regardless of the facial recognition algorithm being used.
But as the authors observe, these techniques draw attention.
“While previous efforts, such as CV Dazzle, adversarial patches, and Juggalo makeup, relied on bold, high-contrast modifications to disrupt facial detection, these approaches often suffer from two critical limitations: their theatrical prominence makes them easily recognizable to human observers, and they fail to address modern face detectors trained on robust key-point models,” they write.
“In contrast, this study demonstrates that effective disruption of facial recognition can be achieved through subtle darkening of high-density key-point regions (e.g., brow lines, nose bridge, and jaw contours) without triggering the visibility issues inherent to overt disguises.”
The research focuses on two areas: applying minimal makeup to fool Haar cascade classifiers – a technique used for object detection in machine learning – and hiding faces in image files by manipulating the alpha transparency layer in a way that keeps faces visible to human observers but conceals them from specific reverse image search systems like BetaFaceAPI and Microsoft Bing Visual Search.
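To make the mechanics concrete, here is a rough Python sketch of those two building blocks – not the authors' code. It runs OpenCV's stock frontal-face Haar cascade, crudely darkens the brow, nose-bridge, and jaw bands of any detected face as a stand-in for subtle makeup, re-runs detection, and then zeroes the alpha channel over the face box with Pillow. The file names, darkening factor, band positions, and the particular alpha edit are illustrative assumptions, not the paper's published parameters.

```python
# Illustrative sketch only -- not the authors' method. File names, the darkening
# factor, and the band positions are assumptions made for demonstration.
import cv2
import numpy as np
from PIL import Image

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_img):
    """Run OpenCV's stock frontal-face Haar cascade and return (x, y, w, h) boxes."""
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

img = cv2.imread("portrait.png")                     # assumed input file
faces = detect_faces(img)
print("before:", len(faces), "face(s) detected")

# (1) Crude stand-in for subtle makeup: darken horizontal bands roughly where
# the brow line, nose bridge, and jaw sit inside each detected face box. Haar
# cascades key on local light/dark contrast patterns in exactly these regions.
shaded = img.copy().astype(np.float32)
for (x, y, w, h) in faces:
    for top, bottom in [(0.20, 0.35), (0.35, 0.60), (0.80, 1.00)]:  # rough brow / nose / jaw bands
        y0, y1 = y + int(top * h), y + int(bottom * h)
        shaded[y0:y1, x:x + w] *= 0.80               # assumed darkening factor
shaded = shaded.clip(0, 255).astype(np.uint8)
print("after shading:", len(detect_faces(shaded)), "face(s) detected")

# (2) Alpha-layer edit: zero the alpha over the face box. Renderers that ignore
# transparency still show the face, while pipelines that flatten the image
# before indexing may lose it -- the kind of inconsistency the paper probes.
rgba = Image.open("portrait.png").convert("RGBA")
alpha = np.array(rgba.split()[3])
for (x, y, w, h) in faces:
    alpha[y:y + h, x:x + w] = 0
rgba.putalpha(Image.fromarray(alpha))
rgba.save("portrait_cloaked.png")
```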
Facial recognition, said Noever in a phone interview with The Register, represents a third-rail issue – one that poses significant risks.
“It brings up all the best and worst parts of AI,” he said, “from bias to counting crowds, to all the useful things that can be done with traffic movement.”
“You can systematically attack the recognition algorithm without necessarily drawing attention to yourself,” he said, adding that it provides a way around the Streisand effect – “the moment you seek anonymity, you attract it.”
Noever isn’t taking a position on whether AI and facial recognition are good or bad – as the paper acknowledges, there are beneficial and harmful uses. As far as his company is concerned, “we get a lot of technical evaluation work and so this is more like the Good Housekeeping seal of approval,” he explained. “How do we have confidence that this algorithm does what it says it’s going to?”
But overall, he argues, this kind of technology needs to be viewed with caution. “You saw it with the recent New York street assassin – that manhunt relied on basically everyone being involuntarily in a police lineup because of the way those images are collected from social media. It’s incredibly useful technology for good and incredibly dangerous. AI and facial recognition combined really is Pandora’s Box, so we have to be careful with it.”
Emily Wenger, assistant professor of electrical and computer engineering at Duke University, who has worked on anti-facial recognition projects like Glaze and Fawkes, told The Register in an email that technical defenses against facial recognition and AI face challenges.
“Some colleagues and I wrote a paper taxonomizing this space a few years ago,” she explained. “Our main conclusions were that: (1) methods targeting the ‘setup’ phases of facial recognition systems, like data collection and reference database creation are underexplored; and that (2) there is a fundamental information asymmetry in this problem that puts people trying to evade these systems at a disadvantage.
“If you don’t know where the system is operating, what underlying machine learning/AI model it uses, or whether you’re part of the reference database, you’re left with very few guaranteed options for evasion beyond just wearing a mask.”
Wenger said masks are practical for now, since they’re easy to wear and have become more acceptable in the wake of the COVID-19 pandemic.
“Despite a lot of research, masks remain one of the few surefire ways of evading these systems [for now],” she said. “However, gait recognition is becoming quite powerful, and it’s also unclear if this will supplant face recognition. It is harder to imagine practical and effective evasion strategies against this technology.” ®