- Smart Cities
- Smart Infrastructure
- Data Privacy
- IoT
- Smart Technology
How Do We Find the Right Role for AI-Powered Smart Cameras?
Most Internet of Things sensors on the market share the same problem: each measures only one thing. An air quality sensor can tell you the level of airborne particulates but not the average speed of traffic. A parking sensor can’t tell you how many cyclists use a bike lane at that time of day.
One type of sensor, however, can perform multiple tasks at once, though it faces challenges of its own. Cameras backed by AI have the potential to drive smart city capabilities forward. These digital eyes can fill multiple roles and add functionality with a few lines of code, but their use raises concerns about privacy, profiling, and authoritarianism. Should we allow their use, and under what conditions? And what should we even call the technology?
Smart Cameras Have Potential for Misuse
In Michigan, a man was detained for 30 hours on suspicion of burglary after facial recognition software falsely matched grainy security footage of the suspect to his driver’s license photo. Police in Ningbo, China, had to apologize for publicly shaming a woman when an AI-backed camera detected a picture of her face in an advertisement on the side of a bus and accused her of jaywalking. In a demonstration by the American Civil Liberties Union, running photos of 120 California state legislators against a database of 25,000 mugshots falsely identified 26 lawmakers as criminals.
While these examples may be early stumbling blocks of the technology, many see the whole endeavor as an unacceptable breach of privacy. IBM, Amazon, and Microsoft have announced limits on the use of their facial recognition technology by law enforcement, while numerous cities have banned the technology or will soon. Depending on local culture, the idea of cameras watching you in nearly any public setting is seen as invasive. That mentality poses problems for deploying visual sensors in the smart city context.
Sales and branding can be difficult when it’s not clear how to discuss your product and its potential. Calling the products “camera systems” underscores the element people fear most: the camera with its never-blinking digital eye. “Surveillance systems” may be worse, since it carries a connotation of nefariousness in certain cultures. “Visual sensing” may be a better, though somewhat vague, alternative.
Edge Computing and Good Communication Can Help
These missteps in deployment are unfortunate since camera-based systems can be incredibly flexible problem-solvers. One camera with the proper software and angle can track parking occupancy, traffic flow patterns, pedestrian intersection usage, and bicycle users all at the same time. So how can their presence be made more acceptable to the general public?
One possibility is edge computing. Instead of images being processed on a powerful, centralized server with access to large databases of identifying information, they are processed on the camera itself. Only critical, anonymized information is then passed on to servers. This approach lowers the bandwidth a camera needs and also serves as a privacy check: if the camera cannot store and process hundreds of thousands of faces, and the data passed to the servers is limited to a few categorical variables, privacy becomes far easier to protect.
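As a rough illustration of that pattern, the sketch below shows what the on-camera side might look like. It is a minimal, hypothetical example, not any vendor’s actual software: the detect_objects placeholder stands in for whatever local vision model the camera runs, and the endpoint URL is invented. The point is that only a handful of anonymized counts ever leave the device; raw frames and anything identifying stay on the camera.

```python
# Minimal sketch of on-camera (edge) processing: detect objects locally,
# reduce them to categorical counts, and send only that small summary upstream.
import json
import time
from urllib import request


def detect_objects(frame):
    """Hypothetical on-device inference step.

    Returns a list of class labels (e.g. 'car', 'bicycle', 'pedestrian')
    for objects in the frame. No images, identities, or biometric
    templates are produced or stored.
    """
    raise NotImplementedError("replace with the camera's local model")


def summarize(labels):
    # Collapse per-object detections into a few categorical variables.
    return {
        "timestamp": int(time.time()),
        "cars": labels.count("car"),
        "bicycles": labels.count("bicycle"),
        "pedestrians": labels.count("pedestrian"),
    }


def report(summary, endpoint="https://example-city.gov/api/telemetry"):
    # Only the small JSON summary is transmitted; raw frames never leave the camera.
    body = json.dumps(summary).encode("utf-8")
    req = request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    request.urlopen(req)
```

Under this kind of design, the central server can still compute traffic flow, bike lane usage, or parking occupancy over time, but it never receives the imagery it would need to identify an individual.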
How can the public be educated about, and convinced by, this limited-scope camera system? And what name do we pair it with? Before abandoning the Quayside project in Toronto, Sidewalk Labs attempted to communicate how it was collecting data with a set of 33 data collection symbols, but this did little to assuage public worry. As with most promising new technologies, how we situate smart camera systems within our broader cultural norms and engage with the public is critically important. Our approach thus far has been slapdash.