This model was trained to locate buildings in satellite imagery and determine their damage level, helping identify areas of impact after a natural disaster. Disasters occur unpredictably, and the extent of the damage they inflict can be difficult, time-consuming, and dangerous to assess from the ground. When satellite imagery is available, however, it can be analyzed not only to assess the areas of impact but also to determine the areas most in need of critical aid. This model was designed with that goal in mind: to assist humanitarian and disaster relief efforts. It first locates buildings and then assesses whether each is undamaged, slightly damaged, majorly damaged, or destroyed. With this information, assistance and resources can be planned safely and allocated efficiently to those in need. This model is open source and was developed by the Modzy data science team.
See the model in action with a Modzy MLOps platform demo or start a trial
51% Classification F1 Score
80.5% Localization F1 Score
This model was trained on the xBD dataset, which was provided by the xView2 competition for building damage assessment in overhead imagery. The training set consisted of over 850,000 building polygons from six different types of natural disasters around the world, with five damage classification labels. The model was tested on a dataset provided by the xView2 competition, and its output was submitted for scoring. It achieves a localization F1 score of 0.8048 and a damage classification F1 score of 0.5096, as reported by the official xView2 evaluation script. Both F1 scores are calculated on a pixel-by-pixel basis.
The F1 score is calculated as the harmonic mean of precision and recall, with a best value of 1.0, and measures the balance between the two metrics. The classification score measures how well the model assigns the correct damage level to building pixels in the post-disaster imagery, calculated on a pixel-by-pixel basis.
The F1 score is calculated as the harmonic mean of precision and recall, with a best value of 1.0, and measures the balance between the two metrics. The localization score measures how well the model localizes buildings in the pre-disaster imagery, calculated on a pixel-by-pixel basis.
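The sketch below illustrates how a pixel-wise F1 score can be computed from binary masks. It is a minimal example of the standard F1 formula, not the official xView2 evaluation script; the function name and the binary-mask inputs are illustrative assumptions.

```python
import numpy as np

def pixel_f1(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Pixel-wise F1: the harmonic mean of precision and recall over binary masks."""
    pred = pred_mask.astype(bool).ravel()
    true = true_mask.astype(bool).ravel()
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```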
This model takes a pre-disaster satellite image and a post-disaster satellite image of the same area as its input. These images should be aligned with each other for best results. The U-Net architecture was used to localize buildings and produce segmentation masks in pre-disaster images. The segmentation masks were then used to extract the buildings in post-disaster images. Each pre- and post-disaster image pair is then fed into a cascaded VGG16 with Batch Normalization classifier. The first part of the classifier determines whether a building was damaged. If the classifier predicts that the building is damaged, then the image is passed to the second part of the classifier which predicts whether the building sustained minor damage, major damage, or was destroyed. The output of the model is a JSON file containing the Well-Known Text (WKT) polygon information of each detected building, as well as its predicted damage level.
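The following sketch outlines the two-stage inference flow described above: a U-Net localizes buildings in the pre-disaster image, each building is cropped from both images, and a cascaded classifier first decides damaged vs. undamaged, then grades the damage. This is a hypothetical illustration, not the model's actual code; the helper `extract_building_crops`, the six-channel pre/post concatenation, the 224x224 resize, and the class index ordering are all assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy import ndimage
from shapely.geometry import box

def extract_building_crops(pre_img, post_img, mask, pad=8):
    """Hypothetical helper: split the binary mask into connected components and
    yield a padded crop of each building from the pre- and post-disaster images."""
    labeled, n = ndimage.label(mask.cpu().numpy())
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labeled == i)
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, mask.shape[-2])
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, mask.shape[-1])
        # Bounding box as WKT for illustration; the real model emits building polygons.
        yield box(x0, y0, x1, y1).wkt, pre_img[:, y0:y1, x0:x1], post_img[:, y0:y1, x0:x1]

def assess_damage(pre_img, post_img, unet, damage_detector, damage_grader):
    """Sketch of the cascaded pipeline: localize, crop, then classify each building."""
    with torch.no_grad():
        # Stage 1: U-Net produces a building segmentation mask from the pre-disaster image.
        mask = unet(pre_img.unsqueeze(0)).sigmoid()[0, 0] > 0.5

        results = []
        for polygon_wkt, pre_crop, post_crop in extract_building_crops(pre_img, post_img, mask):
            pair = torch.cat([pre_crop, post_crop], dim=0).unsqueeze(0)
            pair = F.interpolate(pair, size=(224, 224), mode="bilinear", align_corners=False)
            # Cascade stage 1: damaged vs. undamaged.
            if damage_detector(pair).argmax(dim=1).item() == 0:
                label = "no-damage"
            else:
                # Cascade stage 2: minor / major / destroyed.
                grade = damage_grader(pair).argmax(dim=1).item()
                label = ["minor-damage", "major-damage", "destroyed"][grade]
            results.append({"wkt": polygon_wkt, "damage": label})
    return results
```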
The xBD dataset, provided by the xView2 competition, was used to train and test this model. The xBD dataset contains pairs of 1024 x 1024 pixel pre- and post-disaster overhead images, along with annotations consisting of the disaster type, the individual building chips, and their damage levels. The U-Net weights used were provided by the xView2 competition in their baseline repository. The training set is highly unbalanced, with over 300,000 samples labeled “no-damage” and only approximately 30,000 samples of each of the other three damage types. To remedy this, balanced random undersampling was used. Resizing, random cropping, random horizontal and vertical flips, and normalization were used during preprocessing. The VGG16 portion of the classifier, which determines the level of damage for each detected building, was trained for 151 epochs with a learning rate of 0.001 and a batch size of 64. Each half of the classification network took 8 hours to train using one NVIDIA Tesla V100 GPU.
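As a rough illustration of the training setup just described, the sketch below shows balanced random undersampling and the stated augmentations and hyperparameters. The crop size, resize size, and normalization statistics are assumptions for illustration; only the augmentation types, epochs, learning rate, and batch size come from the description above.

```python
import random
from collections import defaultdict

from torch.utils.data import DataLoader, Subset
from torchvision import transforms

# Augmentations named above: resize, random crop, random flips, normalization.
# Sizes and normalization statistics here are illustrative assumptions.
train_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def balanced_undersample(dataset, labels, seed=0):
    """Balanced random undersampling: keep the same number of samples per damage class."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    n = min(len(indices) for indices in by_class.values())
    rng = random.Random(seed)
    keep = [i for indices in by_class.values() for i in rng.sample(indices, n)]
    return Subset(dataset, keep)

# Hyperparameters reported for the damage classifier.
EPOCHS, LR, BATCH_SIZE = 151, 0.001, 64

def make_train_loader(dataset, labels):
    return DataLoader(balanced_undersample(dataset, labels),
                      batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
```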
The test dataset consists of 933 pre-disaster and post-disaster image pairs provided by the xView2 competition.
The input(s) to this model must adhere to the following specifications:
The “preimage.png” (before disaster) and “postimage.png” (after disaster) files should be overhead images of the same scene.
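A simple pre-flight check like the one below can help confirm the two inputs are comparable before submitting them to the model. It is a minimal sketch: the file names come from the specification above, and matching pixel dimensions is only a proxy for the images covering the same aligned scene.

```python
from PIL import Image

# "preimage.png" and "postimage.png" are the input file names from the specification.
pre = Image.open("preimage.png").convert("RGB")
post = Image.open("postimage.png").convert("RGB")

if pre.size != post.size:
    raise ValueError(f"Pre- and post-disaster images should cover the same scene "
                     f"and match in size: {pre.size} vs {post.size}")
```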
This model will output the following:
The “results.json” file contains a list of detected buildings. Each entry in this list contains the Well-Known Text (WKT) polygon segmentation information, as well as the corresponding predicted damage level: “no-damage”, “minor-damage”, “major-damage”, or “destroyed”.
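The snippet below shows one way the output could be consumed, parsing each WKT polygon with Shapely and reading its damage label. The exact key names inside “results.json” are assumptions for illustration; the file is only specified as a list of detected buildings with WKT polygons and predicted damage levels.

```python
import json

from shapely import wkt  # shapely.wkt parses Well-Known Text geometries

with open("results.json") as f:
    buildings = json.load(f)

for entry in buildings:
    polygon = wkt.loads(entry["wkt"])  # assumed key holding the WKT string
    damage = entry["damage"]           # "no-damage", "minor-damage", "major-damage", or "destroyed"
    print(f"{damage:>12}  area={polygon.area:.1f} px^2  centroid={polygon.centroid.coords[0]}")
```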
See how quickly you can deploy and run models, connect to pipelines, autoscale resources, and integrate into workflows with Modzy MLOps platform