Finding the Balance with AI

C-stores weigh how to use AI to stop crime, while easing privacy concerns.

July 08, 2019

SEATTLE—At Jacksons Food Store in Tacoma, Washington, a camera takes a picture of each customer and uses artificial intelligence (AI) to cross-reference a database of known robbers and shoplifters during evening and overnight hours. But the Seattle Times found that some shoppers are wary of the approach.

“That’s a privacy violation because you should be notified about it,” one customer told the paper. “They should have a sign to notify you that they’re comparing it to photos of criminals.”

When Jacksons’ system goes fully into effect, it will operate from 8 p.m. to 6 a.m. The store will add a sign out front to notify customers that facial recognition technology is in use, and a speaker will ask customers to look at the camera. The door won’t unlock for someone wearing a mask, or for anyone previously flagged for criminal activity in in-store camera footage.
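The access rules described above amount to a simple decision: within operating hours, the door unlocks only if a face is visible and the person isn't flagged. A minimal sketch of that logic, with hypothetical function names and no real recognition backend:

```python
from datetime import time

# Illustrative sketch only; names and logic are assumptions,
# not Jacksons' actual system.

ACTIVE_START = time(20, 0)  # 8 p.m.
ACTIVE_END = time(6, 0)     # 6 a.m.

def system_active(now: time) -> bool:
    """The window wraps past midnight, so check both sides."""
    return now >= ACTIVE_START or now < ACTIVE_END

def door_unlocks(now: time, face_visible: bool, flagged: bool) -> bool:
    """Door stays locked if the face is obscured (e.g., by a mask)
    or the person was previously flagged for criminal activity."""
    if not system_active(now):
        return True  # outside operating hours, the door behaves normally
    return face_visible and not flagged
```

The wrap-around hours check is the only subtle part: because the active window crosses midnight, the two bounds are joined with `or` rather than `and`.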

AI is an up-and-coming practice that many c-stores are implementing, but winning over the general community is taking time: some shoppers feel violated, while others don’t understand why it’s necessary. Still, the technology is catching on, with Target, Walmart and Lowe’s using AI cameras to deter criminal activity.

Civil liberties groups are worried that “if it goes unregulated, the increased use of facial recognition software in stores and elsewhere could perpetuate biases and lead to mass surveillance.”

The technology is still in its early stages as stores and companies search for the right balance and configuration: one that makes people feel comfortable while remaining effective at stopping crime, and even at monitoring merchandise, a new practice some stores are testing.