SOD Coverage Model

Model by Booz Allen

This model uses a 10-layer classifier network to segment an aerial image into regions of healthy and unhealthy sod coverage. A sliding-window approach is used: 64×64 pixel tiles are taken from the image and each tile is classified according to the health of the sod it contains, as sketched below. This gives the model great flexibility in image resolution and shape; however, care must be taken because the model does not distinguish between healthy vegetation in general and healthy sod coverage.
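
The sliding-window behavior can be pictured with a short sketch. This is a minimal illustration under stated assumptions: `tile_scores` and `classify_tile` are hypothetical names, and non-overlapping tiles (with any edge remainder discarded) are assumed, since the windowing details are not documented.

```python
import numpy as np

TILE = 64  # tile edge length quoted in the description


def tile_scores(image: np.ndarray, classify_tile) -> np.ndarray:
    """Classify each non-overlapping 64x64 tile and return a grid of per-tile scores."""
    rows, cols = image.shape[0] // TILE, image.shape[1] // TILE
    scores = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            patch = image[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]
            scores[r, c] = classify_tile(patch)  # stand-in for the 10-layer classifier
    return scores


# Example with a dummy "classifier" that scores a tile by its mean brightness.
image = np.random.rand(256, 320, 3)
print(tile_scores(image, lambda patch: patch.mean()).shape)  # (4, 5)
```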


    Product Description

    PERFORMANCE METRICS:

    99% Accuracy

    75% F1 Score

    74% Precision

    75% Recall

    Accuracy is the fraction of correct predictions made by the classifier: the number of correct predictions divided by the total number of predictions.

    F1 Score is the harmonic mean of precision and recall, with a best value of 1; it measures the balance between the two metrics.

    Precision is the fraction of the model's predictions for a class that are actually correct; a higher precision score indicates that most of the labels the model predicts are accurate.

    Recall is the fraction of the true instances of a class that the model successfully finds; a higher recall score indicates that the model detects most of the labels it is supposed to find.
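
    For reference, the snippet below shows how these four metrics are typically computed, using scikit-learn on a small made-up binary example; the model's actual evaluation setup (including whether it is binary or multi-class) is not detailed here.

    ```python
    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

    # Toy labels for illustration only; not the model's validation data.
    y_true = [0, 0, 1, 1, 1, 0]
    y_pred = [0, 1, 1, 1, 0, 0]

    print("Accuracy :", accuracy_score(y_true, y_pred))   # correct predictions / all predictions
    print("Precision:", precision_score(y_true, y_pred))  # true positives / predicted positives
    print("Recall   :", recall_score(y_true, y_pred))     # true positives / actual positives
    print("F1 Score :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
    ```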

    This model was primarily trained on the sod coverage of levees but has been shown to work well in other settings where the image is mostly sod, such as golf courses. It has not been tested on, and is not recommended for, urban areas or other images where vegetation is not a significant portion of the scene.

    Drone imagery is recommended due to its higher quality, though the model has also performed reasonably on high-quality satellite imagery. Note that performance may degrade around trees or other vegetation that can be mistaken for sod at low resolutions.

    OVERVIEW:

    This model uses a 10-layer classification network to score the sod coverage of each 64×64 tile of the input image; lower values indicate healthier and more complete sod coverage. The per-tile scores are compiled into a final image mask, which is returned as the output (see the sketch below).
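
    As a rough sketch of this compilation step, the per-tile scores could be expanded back to pixel resolution as shown below; the model's actual post-processing is not documented, so this assumes one score per non-overlapping 64×64 tile.

    ```python
    import numpy as np

    def scores_to_mask(scores: np.ndarray, tile: int = 64) -> np.ndarray:
        """Expand a (rows, cols) grid of per-tile scores into a pixel mask by repeating each score over its tile."""
        return np.kron(scores, np.ones((tile, tile), dtype=scores.dtype))

    # A 2x3 grid of tile scores becomes a 128x192 mask; 0.0 marks healthy sod.
    grid = np.array([[0.0, 0.2, 0.9],
                     [0.1, 0.0, 0.4]])
    print(scores_to_mask(grid).shape)  # (128, 192)
    ```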

    TRAINING:

    This model was trained on aerial drone imagery of a collection of levees in Kansas. Approximately 2,000 images were hand labeled, verified, and used for training. The model was trained using the Adam optimizer with a learning rate of 0.001.
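
    A hedged sketch of that training configuration is shown below. The layer stack is a placeholder rather than the actual 10-layer architecture, and the binary cross-entropy loss is an assumption; only the optimizer and learning rate (Adam, 0.001) come from the description above.

    ```python
    import tensorflow as tf

    # Placeholder layer stack; the real 10-layer architecture is not published here.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # per-tile sod-health score
    ])

    # Optimizer and learning rate match the values quoted above; the loss is assumed.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    ```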

    VALIDATION:

    Validation was performed on a 10% holdout portion of the originally collected data.
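
    A minimal sketch of that split, assuming a standard random holdout (the exact splitting procedure is not described):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split

    # Placeholder arrays standing in for the ~2,000 hand-labeled 64x64 tiles.
    images = np.random.rand(2000, 64, 64, 3)
    labels = np.random.randint(0, 2, size=2000)

    # Hold out 10% of the data for validation.
    train_x, val_x, train_y, val_y = train_test_split(
        images, labels, test_size=0.10, random_state=42, stratify=labels
    )
    print(train_x.shape, val_x.shape)  # (1800, 64, 64, 3) (200, 64, 64, 3)
    ```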

    INPUT SPECIFICATION:

    The input(s) to this model must adhere to the following specifications:

    Filename      Maximum Size    Accepted Format(s)
    input.jpg     10 MB           .jpg

    OUTPUT DETAILS:

    This model will output the following:

    Filename        Maximum Size    Format
    results.jpg     10 MB           .jpg
    results.json    1 MB            .json

    The results.jpg output is a mask of the same size as the input image, with higher values indicating less sod coverage; a value of 0 indicates healthy sod coverage.
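
    As an illustration of how the outputs might be consumed, the sketch below reads results.jpg as a grayscale mask and estimates the fraction of the image with reduced sod coverage; the threshold value is arbitrary (JPEG compression can introduce small non-zero values), and the schema of results.json is not documented here, so it is only loaded and inspected.

    ```python
    import json

    import numpy as np
    from PIL import Image

    mask = np.array(Image.open("results.jpg").convert("L"))
    THRESHOLD = 10  # arbitrary cutoff; values near 0 indicate healthy sod coverage
    unhealthy_fraction = float((mask > THRESHOLD).mean())
    print(f"Area flagged with reduced sod coverage: {unhealthy_fraction:.1%}")

    with open("results.json") as f:
        summary = json.load(f)
    print(sorted(summary.keys()))  # inspect available fields
    ```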