In simple terms, transfer learning is a machine learning approach in which a model already trained on one dataset for one task is reused as the starting point for training on a different dataset and a different task. Transfer learning is a popular approach to model training, and pre-trained computer vision and natural language processing models are commonly used as starting points for specific user applications. At Modzy, our models are designed to work out of the box for your specific applications, but we also offer the option of limited re-training via CPU, using transfer learning and domain adaptation so that our models are even more tailored to your specific application.
What you need to know
Transfer learning works remarkably well across a wide range of computer vision and natural language processing applications. Transfer learning and domain adaptation refer to situations where what was learned in one setting is exploited to improve generalization in another setting. There is always a distribution mismatch between the source and target data distributions, but the reason for the mismatch differs from one setting to another; transfer learning and domain adaptation approaches are designed to address this issue. The problem is a challenging one because a core assumption in machine learning is that the training and test datasets are drawn from the same probability distribution, which is rarely the case in real-world applications. Thus, it is important to leverage the source knowledge a model possesses after training on the source task to solve a different target problem, even though the source and target may exhibit a distribution mismatch.
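To make the mismatch concrete, here is a minimal sketch with entirely synthetic, placeholder data: a classifier fit on a source distribution loses accuracy when the target features are shifted, even though the labels are unchanged.

```python
# Toy illustration of source/target distribution mismatch: a classifier fit
# on source data degrades when the target marginal distribution shifts.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source domain: two Gaussian classes centred at 0 and 3.
X_src = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                   rng.normal(3.0, 1.0, (500, 2))])
y_src = np.array([0] * 500 + [1] * 500)

# Target domain: same labels, but the marginal distribution is shifted.
X_tgt = X_src + np.array([2.0, 2.0])
y_tgt = y_src

clf = LogisticRegression().fit(X_src, y_src)
print("source accuracy:", clf.score(X_src, y_src))  # high
print("target accuracy:", clf.score(X_tgt, y_tgt))  # much lower under shift
```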
The underlying assumption in domain adaptation is that the source and target domains differ in their marginal data distributions but share the same task, i.e., the same label space. There are also transfer learning settings where the marginal distributions of the source and target datasets are related, but the source and target tasks have different label spaces. Depending on the specific application, the transferred knowledge can take the form of data instances, feature representations, or model parameters. At Modzy, we focus on the features learned when the model was trained on the source dataset for the source task, and then adapt those features to a new target dataset and target task. As an example, if we have a YOLO-based object detection model that detects buildings in a specific dataset, the features learned by that model can be reused to detect buildings in a dataset with a different pixel distribution; this is done by performing limited re-training of a few of the layers in the YOLO model.
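As a rough illustration of this feature-based approach, the sketch below freezes a pre-trained backbone and replaces only the final layer. A torchvision ResNet stands in for the YOLO model described above, and the target class count is a placeholder; this is the general pattern, not Modzy's specific pipeline.

```python
# Minimal PyTorch sketch of feature-based transfer: keep the features learned
# on the source task frozen and re-train only a new head for the target task.
import torch
import torch.nn as nn
from torchvision import models

# Load weights learned on the source dataset (ImageNet here as a stand-in).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every parameter so the learned source features stay intact.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; its fresh parameters are trainable by default.
num_target_classes = 3  # placeholder for the target task's label set
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head's parameters need to be passed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```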
At Modzy, we approach transfer learning and domain adaptation from a “learning to learn” perspective. The learning-to-learn ability, shared by humans and animals, means that as a biological cognitive system gains more experience, it becomes better at learning new tasks. We train our models on large datasets consisting of data points drawn from different probability distributions, with the objective that our pre-trained models achieve very low generalization error. We also offer customized, limited re-training of some of our models on the user’s dataset. This limited re-training operates under the following conditions:
- Limited time and computation power. Re-training our models should take only a short time and require limited computational resources, yet still enhance performance on the user’s dataset.
- Re-training follows the science behind feature-based transfer learning and domain adaptation. For example, a deep learning model undergoing re-training will have most of its layers frozen, so that only the weights in a few layers are updated and the previously learned features are reused efficiently for the new application and dataset (see the sketch after this list).
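A self-contained sketch of what such limited re-training might look like on CPU is shown below. The synthetic target dataset, class count, and epoch budget are placeholders, not Modzy's actual re-training pipeline.

```python
# Sketch of limited re-training on CPU: most layers frozen, only the new
# head updated for a few epochs on a small target dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

device = torch.device("cpu")  # limited compute: no GPU required

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                # preserve source features
model.fc = nn.Linear(model.fc.in_features, 3)  # fresh, trainable head
model.to(device).train()

# Synthetic stand-in for the user's (small) target dataset.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 3, (32,))
target_loader = DataLoader(TensorDataset(images, labels), batch_size=8)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

for epoch in range(3):  # a handful of epochs keeps wall-clock time short
    for x, y in target_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()   # gradients reach only the unfrozen head
        optimizer.step()
```

Because the backbone is frozen, only the head's gradients and optimizer state are computed and stored, which is what keeps re-training tractable in time and memory on a CPU.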
What this means for you
Continual learning, or learning to learn, is an important topic in the AI field, and domain adaptation and transfer learning are important emerging solutions within it for computer vision and natural language processing tasks. At Modzy, we develop our models not only to perform well on a range of datasets and applications, but also to transfer well to very specific applications for which large datasets may not exist. We use limited re-training options based on transfer learning and domain adaptation to bridge the probability distribution gap between the source and target datasets and to minimize the effects of domain-induced changes in the learned feature distribution. In this way, we can reduce the generalization error as we apply a model to different tasks.