
Avoiding the Pitfalls of AI Bias in the Smart City

Grant Samms
May 12, 2023

[Image: A city skyline with interconnected icons representing various city services (electricity, WiFi, mobility, etc.), all connecting to a central digital brain icon]

AI tools like ChatGPT have been a mainstay of news headlines and casual conversation in recent months. For smart cities, AI can be a powerful tool that allows budget- and time-constrained municipalities to increase their efficiency and effectiveness at relatively low cost. AI-powered tools are already monitoring road surface conditions, helping alleviate traffic congestion, and predicting dangerous flooding, all of which helps cities improve operations. The upcoming Smart Cities Connect conference in Denver, Colorado, has three panels explicitly devoted to smart city AI, and the topic will no doubt surface in many more. But alongside the excitement about how AI can aid the smart city, there is significant concern about the ways in which AI can be biased. Two forms of bias in particular stand out for cities to guard against: algorithmic bias and human bias.

Algorithmic Biases

If the data used to train an AI reflects historic inequalities, those inequalities will be reproduced in the AI’s outputs. This has become especially pertinent in criminal justice, as in the case of the COMPAS tool used by some US courts to predict which defendants are most likely to reoffend. An investigation by ProPublica found that the system was far more likely to misclassify Black defendants as high recidivism risks than white defendants, an apparent bias introduced by the datasets selected during the AI training phase.

Cities should also be aware of an especially pernicious form of algorithmic bias known as feedback loop bias, in which an initially biased predictive algorithm becomes steadily more biased over time as its results feed back into the training set. Feedback loop bias is of particular concern in criminal justice and policing, where a supposedly “neutral algorithm” that slowly grows more biased can be used to defend deeply unjust systems. This underscores why it is important to address bias not only when selecting training data but also in determining how predictive models assimilate data generated by their own operation.
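To make the feedback loop concrete, here is a minimal, purely illustrative Python sketch. The district labels, incident rate, and patrol counts are invented for demonstration and do not model any real deployment: two districts have the same underlying incident rate, but the one with more historical records keeps attracting more attention, so its record count, and therefore its predicted “risk,” keeps pulling further ahead.

```python
import random

# Toy simulation of feedback loop bias. All numbers are hypothetical.
TRUE_RATE = 0.10              # both districts have the same true incident rate
records = {"A": 10, "B": 20}  # biased starting data: B was historically over-monitored

random.seed(0)
for week in range(52):
    # The "predictive" model sends most patrols wherever past records are highest.
    top = max(records, key=records.get)
    patrols = {d: (80 if d == top else 20) for d in records}
    for district, n in patrols.items():
        # Incidents are only recorded where patrols are present, and those
        # detections feed straight back into next week's training data.
        records[district] += sum(random.random() < TRUE_RATE for _ in range(n))

print(records)  # the initial gap widens even though the true rates are identical
```

The point of the sketch is that nothing in the loop is overtly discriminatory; the amplification comes entirely from letting the system retrain on data its own allocations produced.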

Human Biases

AI can also lead to biased outcomes when it is implemented in a biased way. This is seen in criminal justice and policing but also in more mundane municipal uses of predictive algorithms. Electric scooter companies, for instance, may use AI to decide where to place assets each morning based on which locations are most likely to be profitable. That process often mirrors long-standing inequalities experienced by certain communities, and if it is not actively accounted for, it can continue to entrench lower levels of mobility access and economic opportunity in those places. The city of Baltimore, Maryland, addressed this issue by requiring, as a condition of a company’s approval to operate dockless scooters, that a certain percentage of its fleet be placed in “equity zones” every day.
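As a rough illustration of how such a requirement might be encoded in a placement workflow, the sketch below assumes a hypothetical 20% minimum share and invented zone names; Baltimore’s actual thresholds and zone definitions may differ. Demand-driven allocations are computed first, then units are shifted until the equity zones hold the required share.

```python
# Hypothetical equity-zone placement rule; the 20% minimum and zone names are
# assumptions for illustration, not Baltimore's actual program terms.
MIN_EQUITY_SHARE = 0.20

def plan_placements(demand_scores, equity_zones, fleet_size):
    """Allocate scooters in proportion to predicted demand, then shift units
    until equity zones hold at least MIN_EQUITY_SHARE of the fleet."""
    total = sum(demand_scores.values())
    placements = {zone: int(fleet_size * s / total) for zone, s in demand_scores.items()}

    required = int(fleet_size * MIN_EQUITY_SHARE)
    shortfall = required - sum(n for zone, n in placements.items() if zone in equity_zones)
    while shortfall > 0:
        # Move one unit from the largest non-equity allocation to the
        # smallest equity-zone allocation until the quota is met.
        donor = max((z for z in placements if z not in equity_zones), key=placements.get)
        if placements[donor] == 0:
            break  # nothing left to redistribute
        receiver = min(equity_zones, key=lambda z: placements.get(z, 0))
        placements[receiver] = placements.get(receiver, 0) + 1
        placements[donor] -= 1
        shortfall -= 1
    return placements

print(plan_placements({"downtown": 0.6, "midtown": 0.3, "eastside": 0.1},
                      equity_zones={"eastside"}, fleet_size=100))
# -> {'downtown': 50, 'midtown': 30, 'eastside': 20}
```

The design choice worth noting is that the equity constraint sits on top of the profit-driven prediction rather than replacing it, which is how a city can preserve a vendor’s optimization while still guaranteeing a floor of access.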

While AI can be a powerful tool for smart cities, how these tools are trained and used should always undergo the utmost scrutiny. Steps like hiring more diverse staff and reducing siloing among city departments can help spot potential biases in AI tools before they become problematic. Extreme care should also be taken in how training data is initially selected.

This is a critical moment for conversations about the ethical use of AI, at conferences like Smart Cities Connect and through the organizations working to set ethical AI guidelines. Only by being intentional in the creation and application of AI tools can cities ensure that these systems strengthen communities rather than entrench our greatest inequities.