Modzy Enterprise AI Platform
Does Modzy support cluster computing capabilities?
Modzy supports horizontal scalability to process workloads in parallel. Modzy itself is built on top of Kubernetes and takes full advantage of the power and flexibility that Kubernetes provides.
What kind of infrastructure is required to run Modzy?
Modzy is intentionally designed to ride on existing infrastructure to make the best use of investments our customers have already made. Modzy can run on prem, in the cloud, or on edge devices.
How does Modzy help me develop and train new algorithms?
Modzy becomes a powerful tool to deploy and manage AI/ML algorithms once they have been developed and trained. There are a wide range of tools, many of which are open source, that support development and others that support training. Rather than replicate those capabilities, the power of Modzy is supporting organizations large and small in managing the range of algorithms they create and ensuring those algorithms can support human and machine users across their enterprise.
How many resources are available to the model containers? Are GPUs available?
GPUs are available, but the resources allotted to your containers are ultimately up to the end user. We recommend making your containers flexible enough to run on a variety of resource combinations, but if your models or containers have specific hardware requirements (e.g., 2 GPUs and 4 CPUs), that is something you can specify.
How do I get a trial of Modzy?
Contact us for more information about setting up a Modzy trial. We’d love to speak with you.
Can I store my data on Modzy?
Modzy isn’t a data storage or management tool, so you’ll want to find another location to store your data. Modzy provides a number of flexible integrations, however, so it’s easy for Modzy to process your data no matter where it’s located.
Where can I find the list of commercial models available through Modzy?
A full list of Modzy’s available commercial models can be found in the Modzy Model Marketplace. Here you’ll find not only a listing of models, but detailed information about the origin, training, and performance of each model available on our marketplace.
How much does Modzy cost?
Modzy’s platform is priced based on the number of processing engines in your computing cluster: the more infrastructure Modzy is managing and optimizing, the more you’ll pay. The prices of models on Modzy’s marketplace are set by their creators, so those prices vary from model to model. For more specific pricing, please visit our pricing page.
Can I train new models with Modzy?
No. Modzy isn’t a training platform. Instead, Modzy makes it easy to take models you’ve trained in your favorite tool of choice, and deploy them into an environment that supports massive scalability, governance, and a range of integrations.
Where can Modzy be deployed?
Modzy can be deployed just about anywhere you might need to run AI models at scale, including cloud infrastructure, private data centers, and tactical compute resources. We support on-prem deployments as well as hybrid cloud and public cloud deployment options.
Can I host my models in Modzy?
Yes. You can host your models in Modzy. If your business has models you’d like to make available to a wider audience, Modzy has a robust Channel Partner program, which is the first step to hosting models in the marketplace. If you are interested in using Modzy in your business or operational environment and want to be able to deploy your own models to your instance of Modzy, the Model Deployment Tool will allow you to containerize and host your own models in just a few clicks. These models will remain yours and exist only in your instance of Modzy.
Can I use Modzy to compare models?
Yes. Modzy is built with transparency in mind, so all models in the Marketplace come with a Model Description page that outlines what the model does, what type of model it is, how it was trained, and even identifies data sets originally used for training and validation. Model Description pages also provide insight into performance metrics like accuracy and precision. In addition to comparing these descriptions, users can compare actual performance. With access to multiple models that provide similar capabilities, users can run their data sets through multiple models to compare outputs.
If I don’t have AI models to deploy yet, how can Modzy help me?
Modzy includes a Model Marketplace that comes stocked with 60+ pre-trained algorithms that you and your organization can begin using immediately. This serves as a cost-effective way to begin investing in AI and assessing how it can benefit your environment.
The Modzy model deployment tool requires me to indicate how much memory my model requires. What does this mean?
You can think of a Docker container like a mini computer that can run with as many or as few resources as you specify. By default, containers have no resource constraints and will use as many of the host machine’s resources as availability permits. To control for this, we try to make sure each container is allotted only as much memory as it needs to run. You can roughly estimate the amount of memory your container needs based on the size of your Docker image, or use some of Docker’s recommended tools.
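Another rough way to arrive at a number is to measure the peak memory of your own model code while it runs a representative inference, then add headroom. A minimal, standard-library-only sketch (the 1.5x headroom factor is just an assumption, not a Modzy recommendation; `resource` is Unix-only):

```python
# Rough sketch: measure this process's peak memory after running a
# representative inference, then request that amount plus headroom.
import resource
import sys

def peak_memory_mb() -> float:
    # ru_maxrss is reported in kilobytes on Linux and bytes on macOS
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    divisor = 1024 * 1024 if sys.platform == "darwin" else 1024
    return rss / divisor

# ... load your model and run a typical inference here, then:
peak = peak_memory_mb()
print(f"peak: {peak:.0f} MB -> request ~{peak * 1.5:.0f} MB with headroom")
```

Whatever value you measure, round up: a container that hits its memory limit is typically killed, so generous headroom is safer than a tight fit.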
Once I deploy a model to Modzy, how can I ensure my packages and frameworks will always be up-to-date?
The great thing about Modzy models is that they are completely self-contained in Docker containers. This means the versions of the programming language, frameworks, and libraries will remain the same each time our API spins up the model container and runs inference. As a result, once your container runs successfully, you will never have to worry about conflicting package versions. If you would prefer to update a particular framework or package within your model container, you can make the desired changes in a new container, which would represent a new version of your model that you can also deploy to the platform.
Does Modzy provide any tips or resources to help me with packaging a model in a Docker container?
Yes. Our Data Science and Engineering teams want to make this process as quick and easy for you as possible, so we’ve assembled plenty of materials to help you through this. If you’ve built your model in Python, you can watch a tutorial to help you package your model starting from our template repository. Otherwise, check out our containerization specifications page to help you get started building a Docker container from scratch!
How should I name my input data when trying to pass data through a Modzy model?
Each Modzy model in the marketplace contains a Model Card and Model Details page. These are there to help you understand how we built the model, what data the model was trained on, specific use cases where the model performs well, and much more. Under the “Technical Details” section, we include an “Input Specification” subsection with the critical information needed to run the model. The “Filename” header in that table tells you the specific filename the API will expect you to pass to the model. If you pass a file with a different filename, the API will return a message indicating it cannot find the input file.
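A small convenience sketch for that step: copy your local file to the exact filename listed in the Input Specification before submitting it. The name "image.jpg" below is only an example; use whatever name the model's table actually lists.

```python
# Sketch: copy a local file to the filename the model's API expects.
# The expected name comes from the model's Input Specification table.
import shutil
from pathlib import Path

def prepare_input(local_file: str, expected_name: str, workdir: str = ".") -> Path:
    """Copy local_file to the filename the Modzy API will look for."""
    target = Path(workdir) / expected_name
    shutil.copyfile(local_file, target)
    return target

# e.g. prepare_input("my_photo.png", "image.jpg")
```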
Am I able to test models in the marketplace on my data?
All models allow for a trial period so you can test their performance on your own data. If a model does not perform to your expectations, you are welcome to contact the provider organization directly.
I have a model that is wrapped up in a Docker container. How can I take this model container and deploy it to Modzy?
Most of the work to get your model into Modzy is already complete, but you must make sure your container abides by a few of our requirements: it must be self-contained, making no external calls to other containers, APIs, or databases, and it must expose the three endpoints the Modzy API will call – a GET /status route, a POST /run route, and a POST /shutdown route. Visit our Modzy documentation page for more information.
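To illustrate the shape of those three routes, here is a minimal standard-library sketch of a model server. Only the route names come from the requirements above; the handler logic and response bodies are placeholders, not Modzy's actual schema.

```python
# Minimal illustration of the three routes the Modzy API calls.
# Response payloads here are placeholders, not Modzy's actual schema.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class ModelHandler(BaseHTTPRequestHandler):
    def _reply(self, code: int, body: dict) -> None:
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def do_GET(self):
        if self.path == "/status":          # is the model loaded and ready?
            self._reply(200, {"status": "ready"})
        else:
            self._reply(404, {"error": "not found"})

    def do_POST(self):
        if self.path == "/run":             # run inference on a job's inputs
            self._reply(200, {"result": "placeholder"})
        elif self.path == "/shutdown":      # clean up and exit
            self._reply(202, {"status": "shutting down"})
        else:
            self._reply(404, {"error": "not found"})

    def log_message(self, *args):           # silence default request logging
        pass

# To serve: ThreadingHTTPServer(("0.0.0.0", 8080), ModelHandler).serve_forever()
```

In practice you would put your model-loading code at container startup and your inference code behind /run, in whatever framework you prefer.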
Can models in Modzy be pipelined and daisy chained together?
Yes. This can be done in two ways.
1. An application developer can make a call to a Modzy model, take the result from that call, adjust the result to their requirements, and then use those results to make a second call to another algorithm. This approach doesn’t require an understanding of how to package models in Modzy but it does require some I/O overhead in between calls.
2. A Data Scientist can build a custom Modzy container. Modzy containers can call other Modzy containers, so if a Data Scientist wants to string containers together, they can create a container whose run method explicitly calls the required platform models in order. This approach requires familiarity with Modzy model development, but it avoids the back-and-forth I/O and time delays of the first approach.
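The application-developer approach can be sketched as follows. Here `run_model` is a stand-in for whatever client call your application actually uses (an SDK method or a raw HTTP request), and the model names and input keys are hypothetical:

```python
# Sketch of the application-side approach: chain two model calls.
# run_model is a placeholder for your actual Modzy client call.
def run_model(model_id: str, inputs: dict) -> dict:
    # placeholder: submit a job to model_id, wait, and fetch results
    return {"model": model_id, "output": inputs}

def translate_then_summarize(text: str) -> dict:
    # first call: a (hypothetical) translation model
    first = run_model("translation-model", {"input.txt": text})
    # adjust the first result to fit the second model's input spec
    adapted = {"input.txt": first["output"]["input.txt"]}
    # second call: a (hypothetical) summarization model
    return run_model("summarization-model", adapted)
```

The "adjust the first result" step is exactly the I/O overhead mentioned above; the custom-container approach moves that adaptation inside a single container's run method.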
Do I need to do anything special to use GPUs on Modzy?
Modzy leverages containers for its processing engines, which produces a special consideration for GPU-based models. In order to leverage a GPU from inside a container, the host computer running the container needs to have a GPU driver installed. As long as this condition is met, you can place any compatible version of CUDA, TensorFlow, or any other GPU-enabled software component in your container when you build the image, and Modzy can run it.
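One practical consequence: a GPU model can verify at startup that the host's driver is actually visible from inside the container before loading weights onto the device. A hedged sketch using `nvidia-smi` (this assumes NVIDIA hardware; other vendors expose different utilities):

```python
# Sketch: check that the host's NVIDIA driver is reachable from inside
# the container before loading a GPU model. Falls back cleanly if not.
import shutil
import subprocess

def gpu_driver_available() -> bool:
    """True if nvidia-smi exists on PATH and runs successfully."""
    if shutil.which("nvidia-smi") is None:
        return False
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0
```

A check like this lets the container's status route report a clear error ("no GPU driver visible") instead of failing obscurely at inference time.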
What AI/ML platforms and programming languages are supported by Modzy?
Preparing a model to run on the Modzy platform requires placing it in a Docker container image. That extra step insulates Modzy from having to know the specifics of your implementation and allows you to use whatever development environment, language, or toolset you wish. The team currently leverages a variety of Linux distributions, deep learning platforms, and languages within the Modzy ecosystem.
How often do you update the models on the marketplace?
We continually test, update, and maintain the AI models on our marketplace depending on their application and training dataset. We version control our datasets and models, and we update our AI models according to the latest architectures and technology available in the field.
How do you make sure your AI models generalize well to clients’ datasets?
Our AI models are trained on large datasets consisting of diverse classes and data points from a range of possible distributions so that once the model is trained, it can generalize to perform well in different environments and on different test datasets. Further, we provide limited re-training solutions which utilize transfer learning to make sure our AI models are trained to provide their best performance on the users’ datasets.
How do you deal with model drift and data drift?
Modzy Labs is actively working on developing cutting edge solutions for detecting data drift and concept drift. Our drift detection solution will produce a confidence score indicating the possibility of drift happening in the data so that the user can make appropriate decisions accordingly.
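As a toy illustration of the idea only (not Modzy Labs' detector), one of the simplest drift statistics compares an incoming batch's mean to the training distribution, measured in training standard deviations; a larger score suggests stronger drift:

```python
# Toy drift statistic, for illustration only: how far has the incoming
# batch's mean moved from the training mean, in training std deviations?
import statistics

def drift_score(train: list, incoming: list) -> float:
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    return abs(statistics.mean(incoming) - mu) / sigma
```

Real drift detectors compare whole distributions (and model outputs) rather than single moments, but the interface is similar: a score the user can threshold to decide when to retrain.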
Can I use your re-training solution to re-train my own AI models?
Modzy Labs is continually developing re-training solutions for different AI architectures, such as the YOLO and R-CNN families. You can use our re-training solution to re-train Modzy models available on the Modzy Marketplace on your own datasets.
Can I apply your explainability solution to my own AI models?
Yes. The AI explainability solution developed by Modzy Labs is designed to work with any black-box AI model.
How does your explainability solution work?
Our explainability solution uses adversarial AI to understand how a model makes its predictions and then explains the outcomes of an AI model by producing the most important input features affecting those predictions. This sheds light on how an AI model distinguishes between different classes and what factors greatly contribute to model predictions.
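Modzy's own method is adversarial; as a generic and much simpler illustration of the feature-importance idea, one can mask each input feature in turn and measure how much the model's score drops:

```python
# Generic perturbation-based importance (NOT Modzy's adversarial method):
# mask each feature in turn; a bigger score drop = a more important feature.
from typing import Callable, List, Sequence

def perturbation_importance(
    predict: Callable[[Sequence[float]], float],
    x: Sequence[float],
    baseline: float = 0.0,
) -> List[float]:
    base = predict(x)
    importances = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline        # "remove" feature i
        importances.append(base - predict(masked))
    return importances
```

The output has the same length as the input, which is what makes this family of explanations easy to visualize, e.g., as a heatmap over an image.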
Why is it important to explain the outcomes/predictions of AI models?
Explaining the outcomes of AI models is a prerequisite for establishing trust between machines and users. As humans increasingly rely on AI to process large amounts of data and make decisions, it is crucial to develop solutions that can interpret the predictions of deep neural networks in a user-friendly manner. Explaining the outcomes of a model can help reduce bias and contribute to improvements in model design, performance, and accountability by providing beneficial insights into how AI models behave.
Is there a trade-off between adversarial robustness and prediction performance/accuracy?
Mostly no. However, depending on the application, training dataset, and type of architecture, adding robustness can sometimes reduce an AI model’s prediction confidence for specific classes.
Can I use your defensive solutions to enhance the robustness of my own AI models?
Yes, the detection AI model can be used to filter out adversarial inputs before they are fed into any AI model. Further, the training methodology can be used to make AI models more robust.
How do I make my AI models robust against adversarial attacks?
Modzy has developed two unique solutions for dealing with adversarial attacks. First, we train our AI models with a novel robust training methodology developed by Modzy Labs, which ensures robustness against adversarial attacks by encouraging AI models to learn and reason based on a more holistic understanding of the context provided in the inputs. Second, Modzy Labs has developed a novel AI model that can detect adversarial inputs before they are fed into models, ensuring an AI model only makes predictions on clean inputs similar to those it was trained on.
What makes my AI model vulnerable to adversarial attacks?
AI models are usually trained to rely on only a few input features to make their predictions. This means an adversary can introduce a small amount of engineered noise to those specific input features and drastically affect the model’s predictions. At Modzy, we have designed a new training method that produces AI models that make predictions based on a larger group of input features and, as a result, do not have this vulnerability.
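As a concrete toy example (not Modzy's training method), consider a linear classifier that scores an input as a weighted sum of its features. Shifting every feature by a small amount eps, each in the direction opposing its weight, can flip the sign of the score even though no individual feature moves much:

```python
# Toy illustration: a small engineered perturbation flips a linear
# classifier's decision even though each feature barely moves.
def sign(v: float) -> float:
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def adversarial_perturb(w, x, eps):
    """Shift each feature by eps against its weight's sign."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [2.0, -1.0, 0.5]                        # model weights
x = [0.1, 0.2, 0.3]                         # clean input, score = +0.15
x_adv = adversarial_perturb(w, x, eps=0.2)  # score goes negative: flipped
```

This is the intuition behind gradient-based attacks on deep networks, where the same "move each feature slightly in the worst direction" step is computed from the model's gradients.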
What are adversarial attacks?
AI models are vulnerable to inputs maliciously crafted by an adversary to mislead them into making incorrect predictions. These adversarial inputs are hard to detect because they are nearly indistinguishable from the original inputs the AI model expects, yet they can quietly fool the model and degrade its overall performance. For example, we’ve inserted two sample images (original and adversarial) below.