One-shot learning is a machine-learning scenario, most prominent in classification, where a model must learn to recognize a category from a single training example. It is the extreme case of few-shot learning (one example per class rather than a handful). For instance, given one image of a new class, the system should correctly identify future images of that class. This is difficult for standard models, which typically need many examples to generalize.

Common approaches include metric learning, which learns a similarity function so that images of the same class are closer together than images of different classes, combined with nearest-neighbor inference; Siamese networks (two identical networks comparing a pair of inputs) were an early deep-learning approach in this vein. Another approach is data augmentation, synthetically creating variations of the single example. Bayesian methods and prior knowledge transferred from previously learned classes can also help. In practice, one-shot learning is often addressed by meta-learning: training a model on many small tasks so that it learns how to learn from one example.

Applications include character recognition (the Omniglot dataset is a well-known one-shot benchmark) and face recognition, where we often have only one reference image per person.
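The metric-learning plus nearest-neighbor idea above can be sketched in a few lines. This is a minimal illustration, not a full system: the hand-written vectors below stand in for the embeddings a trained encoder (e.g. a Siamese network branch) would produce, and the function names are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_shot_classify(query, support_set):
    """Assign `query` the label of its most similar support embedding.

    support_set maps each class label to a single embedding vector,
    i.e. one example per class: the 1-shot setting.
    """
    return max(support_set, key=lambda label: cosine_similarity(query, support_set[label]))

# Toy embeddings standing in for the output of a trained encoder.
support = {
    "cat": np.array([1.0, 0.1, 0.0]),
    "dog": np.array([0.0, 1.0, 0.2]),
}
query = np.array([0.9, 0.2, 0.1])
print(one_shot_classify(query, support))  # -> cat
```

The key point is that all learning happens in the embedding function; at inference time, classifying a brand-new class requires only one stored embedding and a nearest-neighbor lookup, no retraining.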