Transfer learning is a technique in which knowledge gained by a model on one problem or domain is transferred to a related one.
Rather than training a model from scratch for a new task, one starts with a pre-trained model (often trained on a large dataset like ImageNet or on language corpora) and fine-tunes it on the target task. This works because the early layers of a model often learn general features (edges, shapes in vision; syntax in language) that are useful across tasks. By leveraging a pre-trained model’s learned features, transfer learning can significantly speed up training and improve performance, especially when the new task has limited data.
For example, one can take a network pre-trained on millions of images and adapt it to a medical imaging task with only thousands of images. Transfer learning is prevalent in deep learning, enabling state-of-the-art results in many applications with relatively low computational cost compared to training from scratch.
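The fine-tuning pattern described above can be sketched in PyTorch. This is a minimal illustration, not a full recipe: the small `backbone` below stands in for a genuinely pre-trained network (in practice one would load, e.g., a torchvision ResNet with ImageNet weights), and the four-class head and dummy batch are hypothetical placeholders for a target task.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained feature extractor. In a real setting this
# would be loaded with pre-trained weights, e.g.
#   torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early layers: general features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the pre-trained layers so their weights are not updated.
for param in backbone.parameters():
    param.requires_grad = False

# Attach a new task-specific head (hypothetical 4-class target task).
head = nn.Linear(16, 4)
model = nn.Sequential(backbone, head)

# Only the new head's parameters are handed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

# One fine-tuning step on a dummy batch of target-task images.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```

A common variant is to unfreeze the last few backbone layers as well and train them with a smaller learning rate, which adapts the general features to the new domain without overwriting them.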