How does ML solve disaster management problems?

Azzadiva Sawungrana
5 min read · Mar 22


Disaster management is one of the most important application areas for machine learning (ML), since it has a direct impact on lives. However, the subject is under-discussed, especially when it comes to disaster management problems rooted in geospatial science. The topic is also personal for me: I saw first-hand the grief a disaster brings during the 2006 Jogja earthquake. In this article, I would like to briefly share my experience with decision-making and some stories from building a disaster response model.

As you may already know, in machine learning, there are three main types of learning: supervised, unsupervised, and reinforcement learning. Each of these learning paradigms serves different purposes and is suitable for different types of problems in disaster management. I’ll go through them one by one and illustrate their uses with examples related to disaster management.

Photo by Jose Antonio Gallego Vázquez on Unsplash

Supervised Learning

Supervised learning involves learning from labeled data, which means each data point is associated with a known output or label. The goal is to learn a mapping from inputs (features) to outputs (labels) based on the labeled training data. In disaster management, supervised learning can be used for tasks like damage assessment or predicting the severity of a disaster.

Example: Given satellite images of an area before and after an earthquake, a supervised learning model can be trained to predict the extent of damage caused by the earthquake. The images are labeled with information about the damage, and the model learns to recognize patterns and features that correlate with different levels of destruction.
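To make the idea concrete, here is a minimal sketch of supervised damage classification. The "change scores" below are hypothetical per-tile values standing in for before/after image differences, and the nearest-centroid classifier is a deliberately simple stand-in for the image models used in practice.

```python
# Minimal sketch: nearest-centroid classifier for damage levels.
# Features are hypothetical per-tile "change scores" derived from
# before/after satellite image differences; labels are damage classes.

def train_centroids(samples):
    """samples: list of (change_score, label). Returns label -> mean score."""
    sums, counts = {}, {}
    for score, label in samples:
        sums[label] = sums.get(label, 0.0) + score
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, score):
    """Assign the damage class whose centroid is closest to the score."""
    return min(centroids, key=lambda label: abs(centroids[label] - score))

# Hypothetical labeled training data: (change score, damage level)
training = [
    (0.05, "none"), (0.10, "none"),
    (0.40, "moderate"), (0.45, "moderate"),
    (0.85, "severe"), (0.90, "severe"),
]

centroids = train_centroids(training)
print(predict(centroids, 0.08))   # a tile with little change  -> "none"
print(predict(centroids, 0.88))   # a tile with heavy change   -> "severe"
```

The structure is the same whether the model is this toy classifier or a deep network: labeled examples in, a learned mapping from features to damage levels out.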

GeoEye image of Sendai before and after earthquake. Credit: Amusing Planet

Unsupervised Learning

Unsupervised learning deals with finding patterns or structures in data without any labels. The goal is to discover the underlying structure or relationships among the data points. In disaster management, unsupervised learning can be used for tasks like clustering or anomaly detection.

Example: Given geospatial data from many sources (such as satellite imagery, weather data, and social media), an unsupervised learning model can find clusters of activity or patterns of unexpected behavior during a disaster. This can help disaster response teams prioritize their work and allocate resources.
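A tiny sketch of the anomaly-detection side of this: the readings below are hypothetical values from a single data stream (say, a river gauge), and a simple z-score rule flags values far from the rest, with no labels involved.

```python
# Minimal sketch: unlabeled anomaly detection with a z-score rule.
# Readings are hypothetical values from one geospatial data stream
# (e.g., river gauge levels in meters); no labels are used.

from statistics import mean, stdev

def find_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) / sigma > threshold]

levels = [2.1, 2.0, 2.2, 2.1, 2.3, 2.0, 2.2, 9.8]  # one sudden spike
print(find_anomalies(levels, threshold=2.0))  # -> [9.8]
```

Real pipelines replace the z-score rule with clustering or density-based methods, but the principle is the same: the model decides what "unexpected" means from the data itself.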

Photo by Cristofer Maximilian on Unsplash

Reinforcement Learning

Reinforcement learning involves an agent learning to make decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties, depending on the actions it takes. The goal is to learn a policy that maximizes the cumulative reward over time. In disaster management, reinforcement learning can be used for tasks like planning and resource allocation.

Example: In my deep Dyna-Q model for earthquake disaster response, the agent (disaster management system) interacts with the environment (the simulated disaster situation) and learns to make decisions (e.g., allocating rescue teams, evacuating people) to minimize casualties and economic damage. The agent receives a reward when it makes decisions that effectively reduce the impact of the disaster, and it learns to improve its policy over time based on the feedback it receives.
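As a simplified stand-in for that deep Dyna-Q model, here is tabular Dyna-Q on a toy decision: dispatch a rescue team to zone A or zone B, where zone B yields a higher hypothetical reward. The rewards and zones are invented for illustration; the point is the Dyna-Q loop of direct learning plus planning from a learned model.

```python
# Minimal sketch of tabular Dyna-Q (a simplified stand-in for a deep
# Dyna-Q disaster response model). One decision point: send a team to
# zone A (action 0) or zone B (action 1); rewards are hypothetical.

import random

random.seed(0)

def reward(action):
    return 1.0 if action == 1 else 0.1  # zone B reduces impact more

ALPHA, EPSILON, PLAN_STEPS = 0.1, 0.1, 5
Q = [0.0, 0.0]          # action values for the single state
model = {}              # remembered action -> reward transitions

for episode in range(200):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        a = random.randrange(2)
    else:
        a = 0 if Q[0] > Q[1] else 1
    r = reward(a)
    Q[a] += ALPHA * (r - Q[a])      # direct RL update (one-step episode)
    model[a] = r                    # update the learned model
    for _ in range(PLAN_STEPS):     # planning: replay simulated experience
        pa = random.choice(list(model))
        Q[pa] += ALPHA * (model[pa] - Q[pa])

print(Q[1] > Q[0])  # -> True: the agent learns to prefer zone B
```

The "deep" version replaces the Q table and the hand-written model with neural networks, but the feedback loop — act, observe reward, update policy — is the same.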

Mexico disaster response. Credit: Miyamoto International

Supervised learning is suitable for tasks where labeled data is available, and the goal is to learn a mapping from inputs to outputs. Unsupervised learning is suitable for tasks where the goal is to discover patterns or structures in data without any labels. Reinforcement learning is suitable for tasks involving decision-making and learning through interaction with an environment. Each of these learning paradigms can be applied to different aspects of disaster management to improve the efficiency and effectiveness of disaster response efforts.

In my case

I attempted to use Deep Dyna-Q to identify earthquake-prone areas. However, when it comes to understanding the distribution of population and identifying high-risk areas for disasters, reinforcement learning (RL) might not be the most suitable approach. RL is designed for sequential decision-making problems, where an agent interacts with an environment to learn an optimal policy. In my case, I was more focused on analyzing existing data and identifying patterns, which is more of a supervised or unsupervised learning problem.

We might want to consider conventional machine learning or deep learning methods for this task instead. For instance, we can group regions with comparable features, such as population density and disaster risk, using clustering techniques like K-means, DBSCAN, or hierarchical clustering. As an alternative, we can categorize areas into several risk categories based on the available data using classification methods like logistic regression, decision trees, or support vector machines.
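A minimal K-means sketch of that grouping idea, with hypothetical region features (population density, hazard score), both pre-scaled to [0, 1]:

```python
# Minimal sketch: K-means (k=2) grouping regions by two hypothetical
# features, (population density, hazard score), both scaled to [0, 1].

def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # update step: each center moves to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centers, clusters

regions = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # low density, low risk
           (0.8, 0.9), (0.9, 0.8), (0.85, 0.95)]   # high density, high risk
centers, clusters = kmeans(regions, centers=[(0.0, 0.0), (1.0, 1.0)])
print(len(clusters[0]), len(clusters[1]))  # -> 3 3
```

In practice a library implementation (e.g., scikit-learn's KMeans) with proper feature scaling and a chosen k would replace this hand-rolled loop, but the assignment/update cycle is exactly what those libraries run.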

References to note

There are several references we can use to understand more about the usage of geospatial-related machine learning for disaster management:

Supervised Learning

Gupta, R., & Nair, A. (2018). “Flood detection using machine learning in Sentinel-1 satellite data”. In 2018 IEEE Geoscience and Remote Sensing Letters (pp. 1–5). IEEE.

This paper presents a supervised learning approach using machine learning algorithms to detect flooding in Sentinel-1 satellite imagery.

Unsupervised Learning

Bishop, C. M. (2006). "Pattern recognition and machine learning". Springer.

This book provides an extensive introduction to machine learning techniques, including unsupervised learning methods such as clustering and dimensionality reduction. It provides a solid foundation for understanding and applying unsupervised learning in various disaster management scenarios; you can go directly to the chapters on unsupervised learning.

Reinforcement Learning

Abdulazeez, I. A., & Sadiq, S. (2020). “A review of the application of deep reinforcement learning in urban traffic management and disaster response planning”. Journal of Ambient Intelligence and Humanized Computing, 11(9), 3615–3628.

This review paper covers the use of deep reinforcement learning in disaster response planning and urban traffic management. It summarizes contemporary methods and discusses possible future research directions.
