Artificial Intelligence and environmental racism: initial reflections

fires, landslides, earthquakes, hurricanes, tsunamis, storms, and floods). This program identifies buildings and their level of damage, which may help speed rescue and recovery initiatives (Software Engineering Institute, 2020). Extreme weather events statistically have more negative impacts on historically marginalized communities, such as Black, Latino, and Indigenous peoples. Early warnings and efficient post-event relief are therefore measures that address at least the consequences of environmental racism by reducing the likelihood of casualties.

In this context, AI emerges as a promising tool for improving predictions of a variety of high-impact events. Nevertheless, if it is not developed and applied ethically and responsibly, it may incorporate geographical or population biases and rely on non-representative data, which is likely to lead to environmental racism. Among the risks is that data-driven AI climate initiatives can deepen inequality in climate response by prioritizing resources, such as knowledge and funding, for some areas while others remain excluded (Nost, 2022). For example, studies have shown that many majority African American areas in the southeastern United States are relatively far from radar sites, making it harder to gather information about storms impacting them (McGovern et al., 2022, p. 4). India’s 100 Smart Cities Challenge, for example, had postcolonial and exclusionary impacts: accessibility was a requirement for participation, and those lacking it were often the very share of the population deprived of property and housing. The program thus expanded existing inequalities (Datta, 2018).
Studies have also shown the racial violence related to algorithms deployed to aid decisions on urban development (including whether or not to invest in public services) in various United States municipalities, decisions that reinforced the historical spatialization of race (Safransky, 2019). An additional challenge of the big data landscape is “function creep” (Brayne, 2019; Innes, 2001): the risk that data originally collected for one purpose is later used for another. In this scenario, data collected for environmental forecasting and relief purposes might also be used in other settings, such as law enforcement and social scoring, with racially discriminatory effects. When digital data can be easily stored and