The F1 score is the harmonic mean of precision and recall, used as a single measure of a classifier's accuracy, especially in binary classification problems with class imbalance. It is given by F1 = 2 * (precision * recall) / (precision + recall), where precision = TP / (TP + FP) and recall = TP / (TP + FN). The F1 score ranges from 0 to 1, with 1 corresponding to perfect precision and recall. Because the harmonic mean is dominated by the smaller of its inputs, a classifier achieves a high F1 only if both precision and recall are high; if either is low, F1 is pulled toward that lower value. This makes it useful when a single metric must trade off false positives against false negatives, for example in information retrieval or medical testing, where both kinds of error matter. Generalizations include Fβ, which weights recall β times as much as precision, and micro- and macro-averaged F1 for multi-class evaluation.
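The definitions above can be sketched in a few lines of Python. This is a minimal illustration computing precision, recall, and the Fβ score directly from confusion-matrix counts; the function name and the zero-division handling are choices made here, not taken from any particular library.

```python
def f_beta(tp, fp, fn, beta=1.0):
    """F-beta score from confusion-matrix counts; beta=1.0 gives the F1 score.

    Precision = TP / (TP + FP); Recall = TP / (TP + FN).
    Returns 0.0 when both precision and recall are zero (a common convention).
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)


# Hypothetical example: 8 true positives, 2 false positives, 4 false negatives.
# precision = 0.8, recall = 8/12 ≈ 0.667, so F1 ≈ 0.727 -- between the two,
# pulled toward the lower value (recall).
print(f_beta(8, 2, 4))           # F1
print(f_beta(8, 2, 4, beta=2))   # F2 weights recall more, so it sits closer to 0.667
```

Note how F2 (β = 2) is lower than F1 here: weighting recall more heavily penalizes this classifier, whose recall is its weaker number.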