Triplet loss is a loss function commonly used in metric learning (e.g., face recognition) to learn an embedding space where similar examples lie closer together and dissimilar examples lie farther apart.

It operates on triplets of samples: an anchor (the reference sample), a positive (a sample of the same class or otherwise similar to the anchor), and a negative (a sample of a different class or dissimilar to the anchor).

The triplet loss is formulated to ensure that the distance between the anchor and positive embeddings is smaller, by a margin, than the distance between the anchor and negative embeddings. Mathematically, if f(x) is the embedding function and d is a distance such as the Euclidean distance, triplet loss aims to satisfy d(f(anchor), f(positive)) + α < d(f(anchor), f(negative)) for a chosen margin α > 0. This constraint is enforced with a hinge-style loss: L = max(d(f(anchor), f(positive)) − d(f(anchor), f(negative)) + α, 0), which is zero once the negative is farther from the anchor than the positive by at least the margin.

By training with triplet loss, the model learns a representation where intra-class similarity is high and inter-class similarity is low, which is vital for tasks like clustering or identification without explicit classification.
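As a minimal sketch of the formula above, the following NumPy snippet computes the hinge-style triplet loss for a batch of embeddings. The function name, the use of squared Euclidean distance, and the toy inputs are illustrative choices, not part of any particular library's API:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Per-sample triplet loss for batches of embeddings (one row per sample)."""
    # Squared Euclidean distances anchor<->positive and anchor<->negative
    d_ap = np.sum((anchor - positive) ** 2, axis=1)
    d_an = np.sum((anchor - negative) ** 2, axis=1)
    # Hinge: zero once the negative is farther than the positive by the margin
    return np.maximum(d_ap - d_an + margin, 0.0)

# Toy example: the positive sits near the anchor, the negative far away,
# so the margin constraint is already satisfied and the loss is zero.
anchor   = np.array([[0.0, 0.0]])
positive = np.array([[0.1, 0.0]])
negative = np.array([[3.0, 0.0]])
print(triplet_loss(anchor, positive, negative))  # -> [0.]
```

Swapping the positive and negative in this example yields a large positive loss, which is the gradient signal that pushes the embedding of the negative away from the anchor during training.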