Running Computer Vision Models at the Edge
Watch this tech talk to learn how to build an automated deployment pipeline for running computer vision models at the edge.
Modzy Head of ML Engineering Brad Munday recently spoke at the ML REPA meetup on Automated Deployment Pipelines: Running Computer Vision Models at the Edge. Computer vision has the potential to transform today's applications into the solutions of the future - from smart traffic cameras, to MRI image processing, to quality monitoring for manufacturers, the possibilities are endless. But how can you set up a deployment pipeline that lets you run your computer vision models anywhere?
In this talk, we'll walk through the steps to build an automated deployment pipeline that lets you run a personal protective equipment (PPE) object detection model from Hugging Face on a Raspberry Pi. We'll begin with the trends driving the need to run machine learning models in heterogeneous environments, specifically at the edge, and an edge-native architecture that allows you to run models in multiple locations. Then, we'll move into the demonstration: first, we package the pre-trained model into a container using the open-source Chassis.ml (sketched below); then, using Modzy, we'll show you how to deploy and run the model on a single-board computer (SBC), the Raspberry Pi. At the end of the talk, you'll walk away with a better understanding of what it takes to build an automated deployment pipeline that enables you to efficiently serve and scale your computer vision models to fleets of SBCs.
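To make the packaging step concrete, here is a minimal sketch assuming the v1 chassisml Python SDK (`ChassisClient.create_model` and `publish`). The Chassis service URL, registry credentials, and model metadata are placeholders, and a generic Hugging Face object-detection checkpoint stands in for the PPE model used in the talk - swap in your own checkpoint and values.

```python
import io
import json

import chassisml
from PIL import Image
from transformers import pipeline

# Stand-in detector; the talk uses a PPE-specific checkpoint from Hugging Face.
detector = pipeline("object-detection", model="facebook/detr-resnet-50")

def process(input_bytes: bytes) -> bytes:
    """Chassis inference entrypoint: raw image bytes in, JSON detections out."""
    image = Image.open(io.BytesIO(input_bytes)).convert("RGB")
    detections = detector(image)
    results = [
        {"label": d["label"], "score": round(d["score"], 4), "box": d["box"]}
        for d in detections
    ]
    return json.dumps({"detections": results}).encode()

# Connect to a running Chassis service and wrap process() as a model.
client = chassisml.ChassisClient("http://localhost:5000")  # placeholder URL
model = client.create_model(process_fn=process)

# Local smoke test against a sample image before kicking off a build.
print(model.test("sample_ppe_image.jpg"))  # hypothetical sample file

# Publishing builds the container image and pushes it to a registry
# that Modzy can pull from when deploying to the Raspberry Pi.
response = model.publish(
    model_name="PPE Object Detector",  # placeholder metadata
    model_version="0.0.1",
    registry_user="<docker-user>",
    registry_pass="<docker-password>",
)
```

From there, the published container can be deployed through Modzy and dispatched to the Pi; the exact deployment calls depend on your Modzy environment, so they're covered in the demo rather than here.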