Introducing Faster, Smaller Container Builds for ML Models with Chassis V1.5 Beta Release

This release yields faster container build times and smaller, faster model containers.

🚀 Exciting News: Chassis V1.5 Beta is Here! 🚀

Hey there, amazing members of our community! We are absolutely thrilled to announce a major milestone on our journey – our Chassis v1.5 beta release is now live! 🎉

Chassis is your go-to solution for effortlessly containerizing ML and AI models, streamlining the process of deploying them into production environments. With this new release, we've taken a giant leap forward, enhancing both performance and efficiency.

🔥 What's New in Chassis v1.5 Beta?

This release marks the culmination of an extensive refactor of the Chassis codebase. We've been hard at work behind the scenes to bring you a host of exciting improvements:

Faster Build Times: We understand the importance of speed in your development workflow. That's why we've optimized Chassis to deliver lightning-fast build times. Say goodbye to long waiting periods!

Smaller, Faster Model Containers: We've managed to shrink the size of the model containers without compromising performance. This means quicker deployment and reduced resource consumption – a win-win situation!

Here's the full list of improvements:
  • Container build times are now 2-5x faster, especially for models with few `pip` dependencies
  • Model containers built with Chassis v1.5 are now up to 10x smaller (a 2-4x size reduction is more typical) than those generated by previous versions
  • Support for Docker builds, which means installing Chassis on a K8s cluster is no longer a requirement
  • The ability to configure the Chassis build server via Helm charts
  • No object storage dependency
  • Support for multi-platform builds (e.g. `amd64`, `arm32v5`, `arm64v8`, etc.)
  • Reduced CPU and RAM usage
  • An updated Chassisml SDK with improvements to usability and performance, a smaller install size, and fewer extra dependencies
  • A convenience OMI client that makes it easier to run inferences against OMI model containers

🌟 Why Chassis?

Chassis empowers you to focus on what you do best – building incredible ML/AI models – while we handle the nitty-gritty of containerization. Seamlessly transition your models into production with ease, speed, and reliability.

🌐 Start Working with the Beta Version

Want to be among the first to experience the power of Chassis v1.5? We’d love for you to give this beta version a try and provide your feedback. While this version is feature complete, we need your help to work out the kinks and fix any uncaught bugs. Your insights will play a crucial role in helping us refine and enhance Chassis further.

🚀 How to Get Started

Getting started with Chassis is a breeze:

  1. Head over to our website at
  2. Install the Chassis Python library with a simple pip command (the `--pre` flag opts in to the pre-release): `pip install --pre chassisml`
  3. Check out the getting started guide and join our Discord server to stay up-to-date.
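Once installed, containerizing a model with the SDK looks roughly like the sketch below. Treat this as an illustrative outline only: the class and method names (`ChassisModel`, `DockerBuilder`, `build_image`) and the `predict` function are assumptions, not the documented API, so consult the getting started guide for the authoritative workflow.

```python
# Hypothetical sketch of containerizing a simple model with the
# Chassisml SDK. Names here are assumptions -- see the getting
# started guide for the real v1.5 API.
from chassisml import ChassisModel
from chassis.builder import DockerBuilder

def predict(input_bytes: bytes) -> dict:
    # Your inference logic goes here; Chassis wraps this function
    # inside the generated model container.
    text = input_bytes.decode("utf-8")
    return {"length": len(text)}

# Wrap the inference function in a model object...
model = ChassisModel(process_fn=predict)

# ...and build a local Docker image -- with v1.5's Docker build
# support, no K8s cluster is required.
builder = DockerBuilder(model)
builder.build_image(name="my-first-chassis-model", tag="0.0.1")
```

The key idea is that you hand Chassis a plain Python inference function and it produces a runnable, OMI-compliant model container for you.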

💌 Your Feedback Matters

As always, your feedback is incredibly valuable to us. Feel free to share your thoughts, suggestions, and any issues you encounter during the beta phase. Together, we can shape Chassis into the ultimate tool for seamless ML and AI deployment. We're beyond excited to embark on this journey with you. Thank you for being a part of the Chassis community. Here's to a future filled with cutting-edge technology and boundless innovation!

Stay tuned for more updates and happy containerizing! 🚢🤖