Edge ML Architectures
This post walks through four edge-first design patterns for running your ML models at the edge.
Edge deployment means running machine learning (ML) models on devices at the edge of a network. Running models at the edge enables real-time predictions, lower latency, and increased security, but it also presents unique architectural challenges.
In this webinar, we explore different paradigms for deploying ML models at the edge, including cloud-edge hybrid architectures and standalone edge models. We cover why device dependencies like power consumption and network connectivity make setting up and running ML models on edge devices so chaotic today, and we discuss the elements of an ideal edge architecture and the benefits of that approach.
In this video, we walk through four edge ML architectures:

Native edge
Network-local
Edge cloud
Remote batch

We also show three demos of how these design patterns power real ML-enabled solutions running at the edge: an edge-centric NLP web app, defect detection at the edge, and computer vision in parking lots. Join us as we go out on the edge of glory to learn more about an edge-centric approach to ML deployments.
Want a breakdown of what we cover? Skip ahead to:
06:00 Why run ML at the edge? Edge defined
08:56 Device dependencies: power consumption and network connectivity
12:05 Factors that make setting up one device today...chaos!
17:18 Elements for an ideal edge architecture
21:20 Recipe for building an edge-centric architecture
23:57 Benefits of an edge-centric architecture
25:35 Demo 1: Edge-centric NLP web app
32:41 Edge-first design pattern: Native edge
35:11 Edge-first design pattern: Network-local
37:46 Edge-first design pattern: Edge cloud
40:13 Edge-first design pattern: Remote batch
41:07 Demo 2: Defect detection at the edge
47:45 Demo 3: Computer vision in parking lots
53:39 Conclusion: the edge of glory