In binary classification, precision (also called positive predictive value) is the fraction of predicted positives that are truly positive: TP / (TP + FP), where TP is the count of true positives and FP the count of false positives. Precision answers the question: “when the model predicts positive, how often is it correct?” High precision means that when the model flags something, it is usually right, i.e., there are few false alarms. Precision matters most where false positives are costly; for example, a spam filter with low precision flags many legitimate emails as spam.

Precision is usually considered together with recall (TP / (TP + FN), where FN counts false negatives), because the two trade off: tuning a model to catch more positives (higher recall) typically also flags more negatives as positives, lowering precision. The F1-score combines the two as their harmonic mean. In multi-class settings, precision can be defined per class (“of the examples predicted as class X, how many truly were X”) and then averaged across classes (macro-precision, weighted precision).
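The definitions above can be sketched directly from confusion-matrix counts. A minimal example, using hypothetical counts for the spam-filter scenario (the specific numbers are made up for illustration):

```python
def precision(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Sensitivity: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Hypothetical spam-filter counts: 90 spam mails flagged correctly,
# 10 legitimate mails flagged by mistake, 30 spam mails missed.
p = precision(tp=90, fp=10)  # 0.9  -> 90% of flagged mails really are spam
r = recall(tp=90, fn=30)     # 0.75 -> 75% of all spam mails were caught
print(p, r, round(f1(p, r), 3))
```

Note how the example exposes the trade-off: loosening the filter to recover the 30 missed spam mails would raise recall, but any extra legitimate mails it then flags would push precision down.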