COVID-NET

Model by DarwinAI

The COVID-Net neural network accepts chest X-rays in JPEG and PNG format and analyzes each image to return the percentage likelihood that the patient has COVID-19 pneumonia, non-COVID-19 pneumonia, or is healthy. The model works only on frontal chest X-ray scans, and can be run on a single image or on sets of images.
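    As an illustration of the output format, the sketch below maps a softmax probability vector to per-class percentages. The class ordering and the dummy probability vector are assumptions for illustration, not the model's actual API; in real use the vector would come from running the COVID-Net graph on a preprocessed frontal CXR image.

```python
import numpy as np

# Assumed class ordering for illustration; check the COVID-Net
# repository's inference scripts for the authoritative mapping.
CLASSES = ["normal", "pneumonia", "COVID-19"]

def report(probs):
    """Map a softmax output vector to percentage likelihoods per class."""
    return {c: f"{p * 100:.1f}%" for c, p in zip(CLASSES, probs)}

# Dummy softmax output standing in for a real model prediction:
print(report(np.array([0.05, 0.10, 0.85])))
```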

  • Description

    Product Description

    PERFORMANCE METRICS:

    93.9% Positive-Predictive Value (PPV): COVID-19-negative, Pneumonic

    98.9% Positive-Predictive Value (PPV): COVID-19-positive, Pneumonic

    88.9% Positive-Predictive Value (PPV): Non-Pneumonic (normal)

    92.0% Sensitivity: COVID-19-negative, Pneumonic

    93.0% Sensitivity: COVID-19-positive, Pneumonic

    96.0% Sensitivity: Non-Pneumonic (normal)

    A higher precision (PPV) score indicates that the majority of the labels the model predicts for a class are accurate; a higher recall (sensitivity) score indicates that the model finds the majority of the true cases of a class.
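    Both metrics are standard per-class quantities derived from a confusion matrix. A short sketch of how they are computed (the matrix values below are illustrative, not COVID-Net's reported results):

```python
import numpy as np

# Rows = true class, columns = predicted class.
# Values are illustrative, not COVID-Net's actual confusion matrix.
CLASSES = ["normal", "pneumonia", "COVID-19"]
cm = np.array([
    [96,  3,  1],   # true normal
    [ 5, 92,  3],   # true pneumonia
    [ 2,  5, 93],   # true COVID-19
])

for i, name in enumerate(CLASSES):
    ppv = cm[i, i] / cm[:, i].sum()          # precision: TP / predicted positive
    sensitivity = cm[i, i] / cm[i, :].sum()  # recall: TP / actual positive
    print(f"{name}: PPV={ppv:.1%}, sensitivity={sensitivity:.1%}")
```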

    The COVID-Net-CXR4-B model was trained on the COVIDx4 dataset, with 473 COVID-19-positive CXR images used for training and 100 reserved for testing.

    The model achieves state-of-the-art performance on CXR data in terms of sensitivity and positive-predictive value. The positive-predictive values are 88.9% for non-Pneumonic (normal) samples, 93.9% for COVID-19-negative (Pneumonic) samples, and 98.9% for COVID-19-positive (Pneumonic) samples, with similarly high sensitivities (96.0%, 92.0%, and 93.0% respectively).

    These results indicate that COVID-Net is capable of distinguishing COVID-19 from other pneumonia cases without relying on time-consuming reverse transcriptase-polymerase chain reaction (RT-PCR) testing.

    Note that it is recommended to use a GPU when performing inference with this model.

    Please see the links below for more information on how to replicate these results.

    OVERVIEW:

    The COVID-Net design was developed using a human-machine collaborative design strategy, in which a human-selected residual architecture prototype is combined with machine-driven design exploration to produce the resulting deep neural network.

    The COVID-Net design demonstrates high architectural diversity and selective long-range connectivity with heavy use of the projection-expansion-projection design pattern. This design facilitates enhanced representational capacity without incurring a significant computational cost when compared to the human-driven prototype model.
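    A generic sketch of the projection-expansion-projection pattern in Keras is shown below. This is an illustration of the pattern only, not COVID-Net's machine-designed blocks; the layer widths and kernel sizes here are arbitrary assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def pep_block(x, mid_channels, out_channels):
    """Projection-expansion-projection sketch (illustrative widths only):
    project to a narrow channel count, expand with a spatial conv,
    then project back down with another 1x1 conv."""
    x = layers.Conv2D(mid_channels, 1, activation="relu")(x)   # projection
    x = layers.Conv2D(mid_channels * 4, 3, padding="same",
                      activation="relu")(x)                     # expansion
    x = layers.Conv2D(out_channels, 1)(x)                       # projection
    return x

# Build a tiny model around one block to inspect its shape:
inputs = tf.keras.Input(shape=(64, 64, 32))
outputs = pep_block(inputs, mid_channels=8, out_channels=32)
model = tf.keras.Model(inputs, outputs)
model.summary()
```

    The narrow 1x1 projections keep the parameter count low while the wider spatial convolution preserves representational capacity, which is the trade-off the pattern is designed to exploit.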

    The model is implemented in TensorFlow.

    TRAINING:

    The model was trained using COVIDx, an open-access benchmark dataset we generated, consisting of over 14000 CXR images drawn from a combination of 5 open-access data repositories. To the best of our knowledge, COVIDx contains the largest number of publicly available COVID-19-positive cases.

    The model was trained on a 13898-image subset of the COVIDx dataset: 473 COVID-19-positive images, 5459 Pneumonic (non-COVID-19) images, and 7966 non-Pneumonic images.
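    The training-split counts above can be sanity-checked, and the class imbalance made explicit, with a few lines:

```python
# Training-split counts taken from the listing above.
splits = {"COVID-19-positive": 473, "pneumonic": 5459, "non-pneumonic": 7966}

total = sum(splits.values())
print(f"total training images: {total}")  # 13898
for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
```

    The COVID-19-positive class makes up only a few percent of the training set, which is why the reported per-class PPV and sensitivity figures matter more than raw accuracy here.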

    VALIDATION:

    The model was validated on a 300-image subset of the COVIDx dataset: 100 COVID-19-positive images, 100 Pneumonic (non-COVID-19) images, and 100 non-Pneumonic images. Note that, due to COVID-19 CXR data limitations, the validation and testing datasets are the same, and the split between detection classes in the training data is not even.