By Gov Business Review | Tuesday, March 04, 2025
While no technical solution can guarantee total security, the broad use of AI in urban planning and administration offers considerable benefits. Like any technological development, however, it introduces security vulnerabilities that must be addressed before AI can be safely employed in this environment.
Fremont, CA: Governments and experts face severe challenges in planning and development as urban populations continue to rise. Traditional urban planning relies on historical data and demographic trends, yet continued population growth makes it much more difficult to anticipate future needs and complications.
The advent of artificial intelligence (AI) has dramatically aided urban planning and development. For example, urban specialists can use machine learning (ML) to evaluate massive amounts of historical data, forecast future patterns in urban development, and identify potential issues.
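As a simplified illustration of this kind of forecasting (not drawn from the article or any real deployment), the Python sketch below fits a basic regression to invented historical census figures and projects future population; the data, model choice, and planning horizon are all hypothetical.

```python
# A minimal sketch of ML-based urban forecasting.
# All census figures below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical census data: year -> population (millions)
years = np.array([[2000], [2005], [2010], [2015], [2020]])
population = np.array([1.20, 1.38, 1.61, 1.85, 2.10])

# Fit a simple trend model to the historical records
model = LinearRegression().fit(years, population)

# Project population for future planning horizons
future = np.array([[2030], [2040]])
for year, pred in zip(future.ravel(), model.predict(future)):
    print(f"Projected population in {year}: {pred:.2f} million")
```

In practice, planners would use far richer features (land use, mobility data, economic indicators) and more capable models, but the workflow of learning from historical records to anticipate future demand is the same.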
Smart cities emerged as a natural extension of conventional urban development. They use various digital data technologies, AI, and networking to overcome many of the issues that traditional cities face.
Several countries hope to use AI to build and administer their future smart cities. Increasing reliance on AI technologies to power future cities will pose significant cybersecurity risks. For example, cyberattackers can use numerous tactics to carry out malicious operations during the smart city's planning phase before attacking its digital infrastructure and connectivity.
Data Bias Risk
Massive datasets from various sources, including historical records, GIS systems, and IoT sensors, are used to train the ML models that fuel smart city development and planning.
There are various types of cyberattacks against ML models; poisoning attacks are the most prominent in the context of smart city planning. Threat actors may attempt to poison the training data with biased information during the training phase so that the model produces altered responses that serve the attackers' harmful aims.
A poisoned ML model, for example, may produce inefficient resource allocations or dangerous infrastructure recommendations (e.g., siting public facilities in the wrong locations or distributing them in a way that does not benefit all populations equitably), as the sketch below illustrates.
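As a rough illustration (again, not from any real system), the following sketch trains two versions of a hypothetical transit-capacity model, one on clean records and one with a small batch of attacker-injected records, and compares their recommendations for a dense district; all feature names and figures are invented.

```python
# A minimal sketch of how a small amount of poisoned training data
# can skew a planning model's output. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Clean training records: population density -> required transit capacity
density = rng.uniform(1_000, 10_000, size=200).reshape(-1, 1)
capacity = 0.05 * density.ravel() + rng.normal(0, 20, size=200)
clean_model = LinearRegression().fit(density, capacity)

# Attacker-injected records claim dense districts need almost no capacity
poison_density = np.full((20, 1), 9_500.0)
poison_capacity = np.full(20, 10.0)
poisoned_model = LinearRegression().fit(
    np.vstack([density, poison_density]),
    np.concatenate([capacity, poison_capacity]),
)

# Compare recommendations for a dense district
query = np.array([[9_000.0]])
print("Clean model estimate:   ", round(clean_model.predict(query)[0], 1))
print("Poisoned model estimate:", round(poisoned_model.predict(query)[0], 1))
```

Even though the poisoned records make up a small fraction of the dataset, the second model systematically under-allocates capacity to dense districts, which is exactly the kind of quiet, hard-to-spot distortion that makes poisoning attacks dangerous in planning contexts.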
Risk of Cyberattacks
Smart cities rely primarily on digital infrastructure to provide essential public services to their citizens. Electricity grids, water treatment facilities, transportation, and communication systems in a smart city depend on AI and automated technologies.
A cyberattack on any of these services or communication technologies would have disastrous effects on the city's operations (e.g., traffic mayhem, a contaminated water supply, blackouts).
Ransomware and Denial of Service (DoS) attacks are likely to be the most common cyberattacks against smart cities.