The Insecurity of Image-Recognition Technology

Attackers are finding ways to fool AI-powered recognition technology as the software develops and expands.

June 10, 2019

NEW YORK – A Wall Street Journal article reports that companies are encountering multiple forms of digital camouflage, adversarial attacks and deliberate misclassification efforts, all of which pose security risks to image-recognition technology.

Image-recognition software is being used in security systems, self-driving cars and on social-networking sites, and its rapid expansion is going global. But attacks and infiltrations are making it hard to keep these high-tech systems safe.

Facebook, Google and YouTube are investing heavily in AI-powered software, which has helped the media giants block toxic content or remove certain propaganda. But scientific evidence shows that image-recognition systems are vulnerable to adversarial attacks.

“There’s a bunch of work on attacking AI algorithms, changing a few pixels,” says a senior technology executive. “There have been groups trying to use these attacks on some of the large social-media companies in the U.S.” It doesn’t help that the tools to trick image-recognition systems are easily available online.
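To illustrate the kind of pixel-level attack the executive describes, here is a minimal sketch of the fast gradient sign method (FGSM), a well-known adversarial technique, applied to a hypothetical toy linear classifier. The weights, inputs and the `fgsm_perturb` helper are all invented for illustration; real attacks target deep networks, but the principle of nudging each pixel slightly in the gradient direction is the same.

```python
import numpy as np

def fgsm_perturb(x, w, epsilon):
    # Hypothetical helper: for a linear score s = w @ x, the gradient
    # of s with respect to x is simply w. FGSM shifts every input
    # component by a small step epsilon in the direction that lowers
    # the score, pushing the example toward misclassification.
    return x - epsilon * np.sign(w)

# Toy "image": four pixel values, classified positive when w @ x > 0.
w = np.array([0.5, -0.25, 0.75, 0.1])   # invented classifier weights
x = np.array([0.2, 0.1, 0.1, 0.3])      # invented input pixels

score_before = w @ x                    # positive: classified "benign"
x_adv = fgsm_perturb(x, w, epsilon=0.2)
score_after = w @ x_adv                 # negative: classification flips
```

No pixel changes by more than 0.2, yet the classifier's decision reverses; this is why such attacks are hard to spot by eye and why the ready availability of attack tools online is a concern.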

Entire cities are considering the implications of facial-recognition technology. San Francisco recently banned the use of it, saying that it can perpetuate police bias and give authorities excessive surveillance powers. Across the country, officials, activists and businesses are debating how to balance the usefulness of rapidly improving artificial-intelligence technologies against their potential to invade privacy and challenge civil liberties.