Teams can deploy, run, and manage AI models at the edge, enabling real-time predictions, low latency, and better security.
When it comes to deploying and running artificial intelligence (AI) and machine learning (ML) models, the speed at which a model reaches a prediction can have a significant impact on automated business or mission-critical systems. Running AI models at the edge is the future of enabling faster speed to insight for the many systems that demand near real-time processing.
Traditional software deployment patterns see services deployed in geographically dispersed data centers. When interacting with a traditionally deployed application, data typically needs to travel to the application servers to be processed, and the results then travel back to the source of the request. Cloud computing remains a central pillar of modern software architecture and development, and its myriad advantages have led to remarkable innovations in recent years, but the networking landscape is evolving.
Advancements in networking technology, such as 5G, allow applications to tap into a dedicated high-bandwidth, low-latency network to process data at the edge. These dedicated networks allow services to process live, real-time data much closer to the source, enabling faster speed to insight. Modzy offers the capability to deploy AI at the edge where data is collected, which enhances security and reduces latency.
Modzy Edge is a single, lightweight binary that users can deploy on either x86 or ARM architectures. The binary provides an HTTP server that responds to a subset of Modzy’s existing API endpoints, delivering a consistent user experience. The edge version processes and returns model data closest to its point of origin.
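To make that concrete, here is a minimal sketch of what a client-side job submission to a locally running edge server might look like. The host, port, route, model identifier, and payload shape below are illustrative assumptions for this example, not documented Modzy Edge values.

```python
import json

# Hypothetical local endpoint for the edge server's job-submission route.
EDGE_URL = "http://localhost:55000/api/jobs"

def build_job_request(model_id: str, model_version: str, text: str) -> dict:
    """Build a JSON job-submission body for a text-input model.

    The field names here are an assumed shape for illustration only.
    """
    return {
        "model": {"identifier": model_id, "version": model_version},
        "input": {
            "type": "text",
            "sources": {"job-input": {"input.txt": text}},
        },
    }

# Build a request body for a hypothetical sentiment model.
body = build_job_request("sentiment-analysis", "1.0.0", "great product")
payload = json.dumps(body)  # ready to POST to EDGE_URL with any HTTP client
```

Because the edge binary serves a subset of the same API, client code written against the cloud endpoints would, in principle, need little more than a different base URL to target a device on the local network.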
To illustrate, picture a scenario where cameras are installed to monitor a factory room floor. ML models analyze the video feeds to identify anomalies in the manufacturing process. In a typical cloud deployment, users must first expose the video stream over a network. Next, a connection between the video feed and a remotely hosted server is established, and the data must travel to a remote processing location. Once processed, it makes the same trek back. This can result in lengthy waits if network speed lags. By the time a user is alerted to a detected anomaly, substantial damage may have already occurred.
Now, consider processing the same data at the edge with Modzy. The Modzy Edge binary is small enough to install on the cameras themselves, and can tap directly into the feed to process data in near real time. Since the data does not have to travel to a remote datacenter for processing, users can be alerted almost immediately of impactful model predictions. Not only are they able to take swift action based on the model’s results, but they also don’t have to worry about exposing sensitive data over a network. During processing, Modzy utilizes these edge networks to transmit model results and telemetry data to a server for human review and analysis, without sacrificing the speed at which it delivers AI results.
But what about situations without network connectivity? Since Modzy Edge can run a model and process the data on the device itself, there is no impact on the model processing operation. Result and telemetry data can be retained on a local filesystem and downloaded to a datastore at any time without affecting model operations.
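That retain-locally, sync-later pattern can be sketched as a small buffer that appends results to a local file while offline and drains them to a remote datastore once connectivity returns. `ResultBuffer` and its file format are hypothetical, not part of the Modzy Edge binary.

```python
import json
import os

class ResultBuffer:
    """Retain model results on the local filesystem while offline,
    then drain them to a remote datastore when connectivity returns.
    Illustrative sketch only; uses a JSON-lines file as the local store."""

    def __init__(self, path: str):
        self.path = path

    def record(self, result: dict) -> None:
        """Append one model result to the local file (works offline)."""
        with open(self.path, "a") as f:
            f.write(json.dumps(result) + "\n")

    def drain(self, upload) -> int:
        """Send all buffered results via `upload` (e.g. an HTTP POST),
        then clear the local file. Returns the number of results sent."""
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            pending = [json.loads(line) for line in f]
        for result in pending:
            upload(result)
        os.remove(self.path)
        return len(pending)
```

Since inference writes only to the local file, an outage never blocks the model itself; the drain step can run whenever a network link happens to be available.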
Edge computing provides exciting opportunities to enhance current model deployments and deploy AI in remote environments, all while ensuring high security, low latency, and faster processing via better networking. Modzy is the AI platform that allows organizations to run models anywhere and to capitalize on the opportunity to leverage AI at the edge.