ODSC Webinar: Architectures for Running ML at the Edge

This webinar will explore different paradigms for edge deployment of ML models.

Edge deployment means running machine learning (ML) models directly on devices at the edge of a network rather than in a centralized data center. Keeping inference on-device enables real-time, low-latency predictions and can improve security and privacy, since raw data never has to leave the device, but it also presents unique architectural challenges such as limited compute, memory, and power budgets.

In this webinar, we will explore different paradigms for edge deployment of ML models, including federated learning, cloud-edge hybrid architectures, and standalone edge models. We will discuss the trade-offs and considerations for each, as well as best practices for designing and deploying ML models at the edge.
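As a taste of the kind of pattern the webinar will cover, here is a minimal sketch (not taken from the webinar material) of one common cloud-edge hybrid approach: a lightweight on-device model serves most requests, and low-confidence inputs are deferred to a cloud endpoint. All names in it (EdgeModel, CLOUD_URL, predict_remote, the confidence threshold) are hypothetical placeholders.

```python
# Illustrative cloud-edge hybrid inference sketch. The edge model is a
# stand-in; a real deployment would use an on-device runtime such as
# TFLite or ONNX Runtime, and CLOUD_URL is a hypothetical endpoint.
import json
import urllib.request

CLOUD_URL = "https://example.com/predict"  # hypothetical cloud inference endpoint
CONFIDENCE_THRESHOLD = 0.8                 # defer to the cloud below this score


class EdgeModel:
    """Stand-in for a small, quantized model running on the device."""

    def predict(self, features):
        # Dummy scoring logic purely for illustration.
        score = sum(features) % 1.0 if features else 0.0
        return {"label": "positive" if score > 0.5 else "negative",
                "confidence": score}


def predict_remote(features):
    """Send features to the cloud endpoint when the edge model is unsure."""
    payload = json.dumps({"features": features}).encode("utf-8")
    req = urllib.request.Request(CLOUD_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=2.0) as resp:
        return json.loads(resp.read())


def hybrid_predict(features, edge_model=EdgeModel()):
    """Cloud-edge hybrid: try the edge model first, fall back to the cloud."""
    local = edge_model.predict(features)
    if local["confidence"] >= CONFIDENCE_THRESHOLD:
        return {"source": "edge", **local}
    try:
        return {"source": "cloud", **predict_remote(features)}
    except OSError:
        # Network unavailable: degrade gracefully to the edge prediction.
        return {"source": "edge-fallback", **local}


if __name__ == "__main__":
    print(hybrid_predict([0.2, 0.3, 0.4]))
```

The key design choice this pattern illustrates is deciding, per request, whether the edge prediction is good enough or whether the extra latency and bandwidth of a cloud round-trip is worth it; the other paradigms above make that trade-off differently.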

Attendees will come away with a deeper understanding of the various approaches to edge deployment and the key factors to consider when designing an architecture for their specific use cases.

Registration details.