367R_transcript_The fundamental issues and development trends of AI-driven transformations in urban transit and urban space

Check out the episode:

You can find the shownotes through this link.


Are you interested in AI-driven urban transformations?


Our debate today is based on the article titled The fundamental issues and development trends of AI-driven transformations in urban transit and urban space from 2025, by Haishan Xia, Renwei Liu, Lu Li, and Yilan Zhang, published in the journal Sustainable Cities and Society.

This is great preparation for our next interview with Josh Rands in episode 368, talking about AI prediction for urban transportation.

Since we are investigating the future of cities, I thought it would be interesting to see how we can utilise AI and Machine Learning tools and techniques to uncover non-linear urban relationships. This article highlights that AI technology helps address the spatiotemporal imbalance and proposes a future of human and artificial intelligence collaboration.

[intro music]


Welcome to today’s What is The Future for Cities podcast and its Research episode; my name is Fanni, and today we will introduce a research paper by summarising it. The episode really is just a short summary of the original investigation, and, in case it is interesting enough, I would encourage everyone to check out the whole documentation. This conversation was produced and generated with NotebookLM, with two hosts dissecting the whole research.


[music]

Speaker 1: Our topic today is a really fundamental change happening in urban planning: the rapid integration of artificial intelligence into city management. We’ve moved past that static idea of a smart city to what experts are calling urban AI or AI urbanism, cities that are dynamically perceived and, crucially, self-evolving. This technology is profoundly impacting the physical structure of our cities and how we approach transportation networks. So the central question we face is this: does urban AI, with its capacity for autonomous decision making and for handling complex non-linear data models, offer the only essential future pathway to coordinated sustainable development? Or, conversely, does its inherent reliance on existing data and its mandate for efficiency risk deepening existing social and spatial inequities? I’ll be arguing that AI’s technical capacity provides the necessary mechanism to find and maintain a comprehensive, multidimensional equilibrium in urban systems, something traditional planning tools simply cannot do.

Speaker 2: And I come at it from a different perspective. While the analytical sophistication of AI is, yeah, undeniable, I maintain that its reliance on historical societal data and its systemic prioritization of efficiency create fundamental risks. These risks, I believe, intensify social and spatial inequalities, which directly conflicts with the broader goal of truly sustainable, inclusive urban development.

Speaker 1: Okay, so urban AI is a true paradigm shift because it moves beyond crude, sort of one-dimensional planning. We are leveraging machine learning and autonomous algorithms to transform infrastructure into what amounts to almost an organic entity, with dynamic perception and adaptive capabilities. The major breakthrough is AI’s potential for mining the complex, high-dimensional, non-linear relationships between urban factors, the messy realities that traditional linear models, frankly, were forced to ignore. This technological core allows us to precisely align urban structures with human behaviour, pushing us toward that sought-after sustainable multifactor equilibrium. And to address the common technical objection, we are solving the historical black box criticism. We use techniques like gradient boosting decision trees, or GBDTs, and a transparency tool called Shapley Additive Explanations, or SHAP. Think of SHAP as, what, an X-ray machine for the AI’s decision process. It tells us not just what decision was made, but why, allowing planners to understand the precise contribution of every variable. This transparency is crucial for accountability.
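
To make the GBDT-plus-SHAP idea concrete, here is a minimal Python sketch of the workflow the speaker describes: fit a gradient-boosted model, then use SHAP to attribute predictions to individual variables. The feature names and synthetic data are purely illustrative assumptions, not the study’s.

```python
# Minimal sketch: explaining a gradient-boosted model with SHAP.
# Feature names and synthetic data are illustrative only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 500

# Hypothetical urban features for n neighbourhoods
X = np.column_stack([
    rng.uniform(0.1, 5.0, n),   # distance to nearest rail station (km)
    rng.uniform(20, 120, n),    # median income (thousands)
    rng.uniform(0.0, 1.0, n),   # job accessibility index
])
# Synthetic land-value target with a non-linear effect of distance
y = 300 - 40 * np.minimum(X[:, 0], 1.0) + 0.5 * X[:, 1] + 30 * X[:, 2]

model = GradientBoostingRegressor().fit(X, y)

# SHAP decomposes each prediction into per-feature contributions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value = overall importance of each variable
for name, importance in zip(
    ["station_distance_km", "median_income", "job_access"],
    np.abs(shap_values).mean(axis=0),
):
    print(f"{name}: {importance:.2f}")
```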

Speaker 2: I appreciate the focus on technical transparency, but I’m sceptical that just exposing the bias automatically leads to removing it. The shift toward autonomous decision making is powerful, sure, but those predictive capabilities are trained on historical data, which is nothing more than recorded human societal experience. If that data foundation reflects a history of ignoring vulnerable groups, or is skewed toward optimizing economic profits, which, let’s be honest, it often is, then the AI’s autonomy will inevitably reinforce and intensify existing social inequalities. We see this play out in the economics of transportation. Rail transit development increases accessibility, okay? But it simultaneously drives up housing prices, leading to gentrification and displacement of the very people who need that public transit the most. The AI, acting as a purely rational optimizer, simply makes the existing biased system hyper-efficient.

Speaker 1: I accept the concern about embedding historical bias, but the technical tools prove their worth precisely because they allow us to analyse that complexity with unprecedented precision, which leads to better solutions. Let’s look at your example of rail transit and land value. Using tools like XGBoost and SHAP values, researchers can analyse the relationship between property prices, income levels, and accessibility. This reveals non-linear threshold effects. For instance, we can define the exact impact radius where a rail transit station causes land value appreciation, and, crucially, where it stops. This precision is vital for sustainability because it allows cities to stop extensive, resource-intensive land development and instead focus on precision investment based on marginal benefit. We can target resources much more effectively, decoupling economic value growth from pressure on environmental carrying capacity.
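
As a rough illustration of that threshold-effect analysis, the sketch below fits an XGBoost model on synthetic data and watches how the SHAP contribution of station distance decays toward zero; the first flat bin stands in for the “impact radius”. The data, the decay curve, and the binning are assumptions, not the researchers’ actual pipeline.

```python
# Sketch: locating the radius where a station's effect on prices
# flattens out, via SHAP dependence on distance. Synthetic data only.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 1000
dist = rng.uniform(0.0, 4.0, n)          # distance to station (km)
income = rng.uniform(20, 120, n)         # median income (thousands)
# Synthetic prices: the premium decays and vanishes beyond ~1.5 km
price = 250 + 60 * np.exp(-2 * dist) * (dist < 1.5) + 0.4 * income

X = np.column_stack([dist, income])
model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X, price)

shap_values = shap.TreeExplainer(model).shap_values(X)

# Bin distance and watch the mean SHAP contribution decay to ~zero;
# the first flat bin marks the estimated impact radius.
bins = np.linspace(0, 4, 17)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (dist >= lo) & (dist < hi)
    effect = shap_values[mask, 0].mean()
    print(f"{lo:.2f}-{hi:.2f} km: mean SHAP {effect:+.2f}")
```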

Speaker 2: But I find that a bit counterintuitive. That precision, while technically impressive, seems to me like using a high-powered microscope to analyse a social disease without actually offering a cure. You’re simply quantifying the inequality, not correcting the systemic issues driving it. The research you cite often confirms that land value appreciation is highly correlated with community gentrification. AI is simply optimizing the economic function, which has the negative social consequence built into the objective function itself. We are designing systems that favour one outcome, efficiency and profit, over others, such as genuine social fairness. If we fail to override this efficiency-first mandate with clear human ethical boundaries, the macro impact on social and spatial inequalities will only worsen.

Speaker 1: That’s a huge assumption, that the data’s reflection of past imbalance dictates the future outcome in an autonomous system. The purpose of these analytical tools isn’t just to quantify; it’s to isolate variables and test interventions in a simulation before real-world deployment. We can model the counterfactual: what happens if we impose, say, rent controls in that appreciation radius? We can use the AI to find the sweet spot between necessary investment and social protection. This level of preventative insight was simply impossible before urban AI.
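
A minimal sketch of that counterfactual exercise, assuming a simple exponential appreciation curve, a 1.5 km impact radius, and a 10% cap as the stand-in “rent control”; none of these numbers come from the study.

```python
# Hedged sketch of the counterfactual idea: compare simulated prices
# inside the appreciation radius with and without a cap intervention.
import numpy as np

rng = np.random.default_rng(7)
dist = rng.uniform(0.0, 3.0, 200)                # km to planned station
baseline = np.full(200, 250.0)                   # pre-transit prices (k)
premium = 60 * np.exp(-2 * dist) * (dist < 1.5)  # appreciation effect

no_policy = baseline + premium
capped = baseline + np.minimum(premium, 0.10 * baseline)  # cap at +10%

inside = dist < 1.5
print(f"Mean price inside radius, no policy: {no_policy[inside].mean():.1f}")
print(f"Mean price inside radius, capped:    {capped[inside].mean():.1f}")
```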

Speaker 2: I still maintain that the speed and scale of AI deployment mean that the immediate, often profit-driven decisions frequently outweigh the slow deliberative process of ethical governance. If the default optimization is economic gain, that is the path the city’s self-governing entity will naturally take. It’s the path of least resistance for the algorithm.

Speaker 1: Okay, let’s shift our focus, perhaps, from predictive risk to demonstrated benefit. We cannot ignore the tangible, real-world efficiency gains that AI delivers right now, which are critical for environmental sustainability. Look at Hangzhou’s City Brain system: by dynamically adapting traffic signals based on real-time data, the city dramatically improved traffic efficiency, dropping its national congestion ranking from fifth down to 57th. That’s significant. Similarly, Pittsburgh’s Surtrac system has dynamically optimized traffic flows, reducing average pedestrian crossing times by over 25%. This efficiency isn’t just about faster cars; it translates directly into major environmental sustainability gains. Big-data-empowered traffic control has the potential to reduce urban carbon emissions significantly. Projections suggest an annual reduction of, what was it, 31.73 million tons of CO2 emissions just by optimizing mobility in congested Chinese cities. This is a vital mechanism for restructuring urban carbon metabolism pathways at scale.
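
As a toy illustration of what “dynamically adapting traffic signals based on real-time data” can mean, here is a sketch where each cycle’s green time is reallocated in proportion to observed queue lengths. This is a deliberately simplified stand-in, not City Brain’s or Surtrac’s actual algorithm.

```python
# Toy adaptive signal control: green time is reallocated each cycle
# in proportion to observed queues. Illustration only.
def allocate_green_time(queues: dict[str, int],
                        cycle_s: int = 90,
                        min_green_s: int = 10) -> dict[str, int]:
    """Split a fixed cycle across approaches, proportional to demand.

    Rounding may shift the total by a second or two.
    """
    total = sum(queues.values())
    if total == 0:  # no demand: split the cycle evenly
        return {a: cycle_s // len(queues) for a in queues}
    flexible = cycle_s - min_green_s * len(queues)
    return {
        approach: min_green_s + round(flexible * q / total)
        for approach, q in queues.items()
    }

# Example: a heavy northbound queue gets the lion's share of the cycle
print(allocate_green_time({"north": 24, "south": 6, "east": 3, "west": 3}))
```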

Speaker 2: I’m not entirely convinced by that line of reasoning, because those efficiency gains, they’re often accompanied by significant new layers of exclusion. While traffic flow might be optimized for the majority of drivers, the deployment of AI-driven systems like contactless ticketing or mandatory smart city dashboards creates a severe digital divide. We are effectively excluding those with low digital literacy, think of the elderly or low-income populations, from accessing essential services, and this exclusion creates a vicious cycle. The lack of behavioural data from these vulnerable groups reinforces the mainstream bias in the algorithmic models. The system optimizes based on the affluent, data-rich segments of the population, leading to further misallocation of resources away from those who need them most. Furthermore, we really have to address the environmental paradox here. Does optimizing traffic flow truly justify the massive carbon debt incurred just to build and train the model in the first place? Training a single large natural language processing model, for example, can result in CO2 emissions equivalent to over 300,000 kilograms. That’s not insignificant.

Speaker 1: That figure on the carbon cost of training large models is concerning, but we need to frame it as an infrastructure investment. Surely you wouldn’t argue that the environmental cost of building a massive new subway line isn’t worth the decades of emission savings it generates down the line. The initial expenditure on training is arguably rapidly offset by the long-term operational savings achieved through efficiency. Non-AI systems simply cannot handle the sheer scale and complexity of modern urban traffic flow. The optimization achieved by systems like City Brain isn’t just a marginal improvement; it’s a required adaptation to prevent total gridlock, which has its own massive carbon footprint.

Speaker 2: But the comparison to infrastructure like a subway line kind of breaks down when you consider the social cost, doesn’t it? That subway line, once built, serves everyone equally, or at least it’s designed to. AI systems inherently prioritize certain data sets, reinforcing the digital gap. We’re talking about efficiency gains that are structurally biased towards specific demographics, marginalizing those who lack the means or the skills to interact with the new digital interfaces. If your goal is true sustainability, you cannot sacrifice the social pillar at the altar of the environmental and economic pillars. They have to work together.

Speaker 1: The framework for addressing both the digital divide and the carbon debt is embedded in the proposed human intelligence plus artificial intelligence equilibrium model. This model is specifically designed to integrate human expertise, necessary checks, and ethical judgment directly into the autonomous process, moving away from simple reactive cleanup to proactive intervention. This is achieved through mechanisms like algorithm transparency audits and continuous feedback loops. The core concept here is predictive intervention, or what the literature calls temporal penetration: using machine learning to process high-dimensional relationships and predict where resource imbalances will occur before they become structural issues. This symbiosis leverages AI’s power while ensuring ethical and social boundaries are maintained, achieving dynamic balance across all dimensions simultaneously. That’s the goal, anyway.
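
One way to picture such a feedback mechanism is a gate that auto-approves algorithmic proposals only while a predicted equity-risk score stays under a human-set boundary, and escalates the rest for review. The fields and the 0.3 threshold below are hypothetical, loosely inspired by the equilibrium model rather than taken from it.

```python
# Conceptual human-in-the-loop gate: high equity-risk proposals are
# routed to a human reviewer instead of executing autonomously.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    efficiency_gain: float   # predicted benefit, 0..1 (hypothetical)
    equity_risk: float       # predicted harm to vulnerable groups, 0..1

EQUITY_RISK_THRESHOLD = 0.3  # assumed policy boundary, set by humans

def route(proposal: Proposal) -> str:
    """Auto-approve low-risk proposals; escalate the rest for review."""
    if proposal.equity_risk > EQUITY_RISK_THRESHOLD:
        return f"ESCALATE to human review: {proposal.action}"
    return (f"AUTO-APPROVE ({proposal.efficiency_gain:.0%} gain): "
            f"{proposal.action}")

print(route(Proposal("retime signals on corridor A", 0.4, 0.1)))
print(route(Proposal("reroute bus line away from district B", 0.6, 0.5)))
```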

Speaker 2: That’s a compelling argument for the theoretical future, an idealized governance framework perhaps, but I’m sceptical of that theoretical ideal when facing immediate, practical obstacles. This equilibrium model requires massive, expensive infrastructure that many cities, especially in developing economies, simply cannot afford right now: complex digital twin models, sophisticated AIoT sensor networks, specialized auditing teams. It’s a huge lift. Furthermore, the fundamental absence of clear and enforceable regulatory frameworks concerning responsibility, safety, and data privacy is a massive barrier. Without those regulations, the risk of profit-oriented AI driving unregulated urban sprawl, a longstanding issue, particularly in developing countries, remains completely unmanaged. Unless human governance has the legal mandate, and frankly the capability, to decisively override the purely rational, sometimes self-serving logic of the machine, the ideal of equilibrium will likely fail in practice.

Speaker 1: I fully agree that institutional and regulatory maturity must catch up to technological capability; that’s undeniable. But we cannot halt progress waiting for a perfect policy environment. The complexity facing high-density cities today demands the tools of urban AI now. The non-linear dynamics of climate change, population growth, and resource allocation simply cannot be modelled, let alone managed, by older linear methodologies. Pursuing the human-machine symbiotic model, even imperfectly, I believe, is the only viable path to managing these inherent risks and achieving coordinated development at scale.

Speaker 2: While the analytical power of AI to uncover hidden high dimensional relationships is undeniable and offers clear benefits, the fundamental challenge for me remains one of priority and governance. We must be extremely vigilant that our pursuit of operational efficiency doesn’t simply cement structural biases into the very operating logic of our cities for decades to come. True sustainability demands that this necessary technological evolution is carefully synchronized with genuine social equity and principles of universal inclusion. We can’t let the tech outpace the ethics.

Speaker 1: What the material shows clearly is the immense transformative power of AI, shifting cities from merely smart to genuinely self-evolving entities. The path forward requires rigorous attention to the very real institutional and social barriers, the high implementation costs, the inherent risk of data bias, and the current regulatory void that prevent this technological capability from translating into universally fair and equitable outcomes. There is clearly much more to explore in this complex interaction between data, autonomy, and the future of urban life.


[music]

What is the future for cities podcast?


Episode and transcript generated with Descript assistance (affiliate link).
