AI models are deployed every day to support human decision-making in high-stakes domains. In many environments, an AI model is the sole decision-maker, or a user’s final decision is informed by the model’s predictions. A successful partnership between a user and an AI model requires that the user track the model’s performance, including its failures. Further, users should always have a way to provide feedback to improve AI models. Modzy calls this human-in-the-loop machine learning: users either provide feedback that is later used to re-train our AI models, or they perform limited on-premise training that uses transfer learning to improve performance on their own data sets.
What you need to know
Not even a carefully constructed deep neural network trained on a large data set of well-labeled instances can maintain high performance forever; the model’s predictive ability decays over time. Broadly speaking, there are two ways a deep learning model’s performance can decay: concept drift and data drift (Widmer and Kubat). Concept drift happens when the underlying hypothesis about the phenomenon the deep learning model is trained to predict changes. For instance, a deep learning model trained to detect fraud in financial transactions may perform worse over time as fraudsters change their methods. Data drift happens naturally as data evolves over time, potentially introducing previously unseen patterns in the input data. For example, images captured by newer sensors may have a slightly different pixel distribution than the data the model was trained on, which in turn may degrade model performance over time.
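Data drift of the kind described above can be checked with a simple distribution comparison. The sketch below, a toy illustration rather than any production detector, compares a feature from the training baseline against a live production batch using a hand-rolled two-sample Kolmogorov-Smirnov statistic; the `0.2` threshold and all names are illustrative assumptions.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Max absolute difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    def cdf(sorted_sample, x):
        # Fraction of the sample that is <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)
    points = sorted(set(a) | set(b))
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

def drift_detected(train_feature, live_feature, threshold=0.2):
    # Threshold is an illustrative assumption; in practice it would be tuned.
    return ks_statistic(train_feature, live_feature) > threshold

# Toy usage: pixel intensities from an older vs. a newer sensor.
train = [0.1 * i for i in range(100)]        # roughly uniform on [0, 10)
live = [0.1 * i + 3.0 for i in range(100)]   # same shape, shifted mean
print(drift_detected(train, live))
```

A shifted distribution trips the detector even when the overall shape is unchanged, which is exactly the newer-sensor scenario above.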
One solution is a solidly engineered pipeline from the input data to the output of the deep learning model that can be re-trained over time on new labeled or unlabeled data through a user-provided feedback mechanism. Another solution is based on transfer learning, where the last layers of the deep neural network are fine-tuned on-premise on the specific data set of interest before inference starts. Much of machine learning involves acquiring general concepts from specific training examples (Mitchell). Broadly speaking, the shallow layers of a deep neural network learn general concepts that transfer from one data set to another relatively easily, while the deeper layers learn more data-set-specific concepts. For example, the first layers of a deep neural network trained to classify faces learn to recognize basic concepts such as edges and shapes, while the deeper layers learn to identify specific concepts, such as eyes or noses. This limited on-premise training does not require much computational power, as it only fine-tunes the last layers of the model to align with the objectives of the user’s data set.
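The freeze-and-fine-tune idea can be sketched in a few lines. In this toy illustration (not Modzy's implementation), the "shallow layers" are a frozen feature extractor that receives no gradient updates, and only a final logistic layer is trained on the user's small on-premise data set; the extractor, data, and hyperparameters are all illustrative assumptions.

```python
import math

def frozen_extractor(x):
    # Stand-in for features produced by pre-trained shallow layers.
    # These weights are frozen: fine-tuning never touches them.
    return [x, x * x]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune_last_layer(data, lr=0.5, epochs=200):
    w, b = [0.0, 0.0], 0.0            # only these parameters are trained
    for _ in range(epochs):
        for x, y in data:
            f = frozen_extractor(x)   # no gradient flows into the extractor
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y                 # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# Toy on-premise data set: label 1 when x > 0.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = fine_tune_last_layer(data)

def predict(x):
    f = frozen_extractor(x)
    return sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) > 0.5
```

Because only the last layer's handful of parameters is updated, the compute cost stays small, which is the point made in the paragraph above.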
Modzy’s solutions are actively designed to deal with concept drift and data drift (Figure 1). Modzy proposes two solutions to this problem. Our first solution relies on human-in-the-loop feedback re-training, using active learning on labeled data and semi-supervised learning on unlabeled data to enhance our models’ performance over time. The second solution relies on restricted on-premise training with limited computational overhead. Our objective is to combine these solutions with a framework capable of detecting concept drift and data drift post-production. As an example, concept drift causes the decision boundaries between classes to change. Re-labeling some of the new data, re-training a parallel model, and comparing the new decision boundaries with the old ones can provide a measure of this drift and indicate whether it is necessary to re-train or re-design the model per our current understanding of the data distribution during inference.
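One simple way to compare an old decision boundary with a re-trained one is to measure how often the two models disagree over a probe sample. The sketch below is a minimal illustration of that idea, assuming two hypothetical one-dimensional threshold classifiers and an illustrative 0.1 alert level; it is not Modzy's detection framework.

```python
def old_model(x):
    return 1 if x > 0.5 else 0    # decision boundary learned at training time

def retrained_model(x):
    return 1 if x > 0.8 else 0    # boundary after re-training on re-labeled data

def boundary_disagreement(model_a, model_b, sample):
    """Fraction of probe inputs on which the two models disagree."""
    return sum(model_a(x) != model_b(x) for x in sample) / len(sample)

sample = [i / 100 for i in range(100)]    # probe points in [0, 1)
drift = boundary_disagreement(old_model, retrained_model, sample)
print(f"disagreement = {drift:.2f}")      # the boundary moved from 0.5 to 0.8
if drift > 0.1:                           # illustrative alert threshold
    print("significant drift: consider re-training or re-designing the model")
```

The disagreement rate serves as a scalar proxy for how far the boundary has moved, which can then drive the re-train versus re-design decision described above.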
What this means to you
Concept drift and data drift can affect the performance of deep learning models over time. Concept drift arises when our interpretation of the data changes over time; data drift occurs when the distribution of the input data received by the model during deployment changes. Continuous re-labeling of the data and re-training of the models is necessary to maintain performance. If concept drift is detected, the affected old data and the new data need to be re-labeled and the model needs to be re-trained. If data drift is detected, the new data needs to be labeled to introduce new classes, and the model needs to be re-designed and re-trained. User feedback is a crucial part of this process, and at Modzy we’re committed to a human-centric approach that incorporates user feedback into the decision-making process of AI systems to ensure successful AI development.
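The maintenance policy described above can be captured as a small decision routine. This is a hypothetical sketch of the logic, not a Modzy API; the detector flags and action strings are illustrative assumptions.

```python
def maintenance_action(concept_drift: bool, data_drift: bool) -> list:
    """Map drift-detector outputs to the maintenance steps described above."""
    if concept_drift:
        # Interpretation of the labels changed: fix the labels, then re-train.
        return ["re-label affected old and new data", "re-train model"]
    if data_drift:
        # Input distribution changed: new patterns or classes may need support.
        return ["label new data (introduce new classes)",
                "re-design model", "re-train model"]
    return ["continue monitoring"]

print(maintenance_action(concept_drift=True, data_drift=False))
```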
- Gerhard Widmer and Miroslav Kubat, “Learning in the presence of concept drift and hidden contexts,” https://rd.springer.com/article/10.1023/A:1018046501280
- Tom M. Mitchell, “Machine Learning,” http://www.cs.cmu.edu/~tom/mlbook.html
The effectiveness and predictive power of machine learning models are highly dependent on the quality of data used during the training phase. In most real-world scenarios, models are trained using domain-specific data provided by known and trusted…