CrowdAI’s Building Segmentation model uses cutting-edge deep learning and image segmentation techniques to precisely locate and label building footprints on Maxar satellite imagery. Detections are output as both a GeoTIFF and polygons, providing denser, more precise labels than traditional bounding boxes. This model is globally flexible in line with Maxar’s coverage.
See the model in action with a Modzy MLOps platform demo or start a trial
74% F1 Score
99% Pixelwise Accuracy
75% Precision
74% Recall
The Building Segmentation model’s accuracy scores demonstrate that this model correctly detects and labels buildings approximately 75% of the time, when accounting for both false positives and false negatives. Known objects that sometimes cause false positives include water towers, large industrial facilities, and industrial equipment.
F1 Score is the harmonic mean of precision and recall, with a best value of 1. It measures the balance between the two metrics.
Pixelwise accuracy measures the percent of pixels in a predicted area that are classified or predicted correctly.
A higher precision score indicates that the majority of labels predicted by the model for different classes are accurate.
A higher recall score indicates that the model finds and predicts correct labels for the majority of the classes it is supposed to find.
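These four scores can be reproduced directly from a predicted mask and a ground-truth mask. The sketch below is an illustrative example, not CrowdAI’s evaluation code; the function name and the binary mask convention (1 = building, 0 = background) are assumptions.

```python
# Illustrative only: how precision, recall, F1, and pixelwise accuracy relate
# for a binary building mask. Not CrowdAI's evaluation code.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute pixelwise metrics for binary masks (1 = building, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.sum(pred & truth)    # building pixels correctly detected
    fp = np.sum(pred & ~truth)   # background pixels labeled as building
    fn = np.sum(~pred & truth)   # building pixels that were missed
    tn = np.sum(~pred & ~truth)  # background pixels correctly ignored

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)

    return {"precision": precision, "recall": recall,
            "f1": f1, "pixelwise_accuracy": accuracy}
```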
Unlike traditional computer vision techniques, the Building Segmentation model is powered by CrowdAI’s custom-built neural network for image segmentation. By classifying each pixel in the image, the model can provide precise roof footprints, which are then converted into smoothed polygons as an output. This results in tighter detection profiles for each building—and thus better measurements of area—without sacrificing speed or accuracy.
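As a rough illustration of the mask-to-polygon step described above, the sketch below vectorizes a binary building mask and simplifies the resulting footprints. This is not CrowdAI’s pipeline; the use of rasterio and shapely, the function name, and the simplification tolerance are all assumptions.

```python
# Illustrative sketch: convert a pixelwise building mask (GeoTIFF) into
# simplified footprint polygons. Not CrowdAI's implementation.
import rasterio
from rasterio import features
from shapely.geometry import shape

def mask_to_polygons(mask_path: str, simplify_tolerance: float = 1.0):
    """Vectorize a single-band binary GeoTIFF mask into footprint polygons."""
    with rasterio.open(mask_path) as src:
        mask = src.read(1)          # 1 = building, 0 = background
        transform = src.transform   # maps pixel coordinates to map coordinates

    polygons = []
    # features.shapes yields (GeoJSON-like geometry, pixel value) pairs
    for geom, value in features.shapes(mask, mask=(mask == 1), transform=transform):
        footprint = shape(geom).simplify(simplify_tolerance, preserve_topology=True)
        if footprint.is_valid and not footprint.is_empty:
            polygons.append(footprint)
    return polygons
```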
This algorithm was trained specifically to identify and label building footprints based on the building’s roof. Interior open courtyards and other similar open-air structures are often excluded.
This algorithm is compatible with Maxar imagery that meets the following criteria:
* Compatible sensors: WorldView-1, WorldView-2, WorldView-3, GeoEye-1
* 30-50 cm/pixel 4-band RGBN imagery
* Pansharpening OFF, Dynamic Range Adjustment (DRA) OFF, Atmospheric Compensation (AComp) OFF
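A pre-flight check against these criteria might look like the sketch below. It is purely hypothetical, not part of any CrowdAI or Modzy API, and the metadata field names are assumptions.

```python
# Hypothetical pre-flight compatibility check; metadata field names are
# assumptions, not a real CrowdAI/Modzy API.
COMPATIBLE_SENSORS = {"WorldView-1", "WorldView-2", "WorldView-3", "GeoEye-1"}

def is_compatible(meta: dict) -> bool:
    """Return True if image metadata satisfies the model's input requirements."""
    return (
        meta.get("sensor") in COMPATIBLE_SENSORS
        and 0.30 <= meta.get("gsd_m", 0.0) <= 0.50   # 30-50 cm per pixel
        and meta.get("bands") == 4                    # RGBN
        and not meta.get("pansharpened", False)       # Pansharpening OFF
        and not meta.get("dra_applied", False)        # DRA OFF
        and not meta.get("acomp_applied", False)      # AComp OFF
    )
```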
This model was trained on over 25,000 image chips of Maxar 30-50cm, orthorectified imagery across a wide variety of nations/states, geographies, biomes, seasons, and climates on all populated continents.
This model was validated against CrowdAI’s internal Maxar validation set, a globalized set of imagery drawn from the WorldView and GeoEye constellations.
See how quickly you can deploy and run models, connect to pipelines, autoscale resources, and integrate into workflows with Modzy MLOps platform