Mean Average Precision (mAP) is a metric used to evaluate object detection (and sometimes information retrieval) performance. It is the mean of the per-class Average Precision (AP) scores.

For each class, AP is computed from the detector's precision-recall curve: as the confidence threshold varies, precision is plotted against recall, and the precision is averaged across recall levels (often approximated by numerical integration, or by 11-point interpolation in older definitions such as PASCAL VOC 2007). In object detection, a detection counts as a true positive if its predicted class matches a ground-truth object and the Intersection over Union (IoU) between the predicted and ground-truth boxes exceeds a threshold (e.g., 0.5); otherwise it is a false positive. Sweeping the confidence threshold over all detections traces out the precision-recall curve, and AP is essentially the area under that curve.

mAP is then the mean of AP over all object classes. Common variants are mAP@0.5 (IoU threshold of 0.5) and mAP@[.5:.95] (AP averaged over IoU thresholds 0.5, 0.55, …, 0.95, as used in the COCO benchmark). mAP provides a single-number summary that balances precision and recall across classes.
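As a minimal sketch of the computation described above, the function below calculates AP for a single class using all-point interpolation (the precision envelope used by COCO-style evaluation). The inputs are assumed to be a list of detection confidences, a matching list of true-positive flags (already decided via the IoU-and-class check), and the number of ground-truth objects; these names are illustrative, not from any particular library.

```python
def average_precision(confidences, is_tp, num_gt):
    """AP for one class via the area under the interpolated PR curve.

    confidences: per-detection confidence scores
    is_tp:       per-detection flags (True if IoU > threshold and class matches)
    num_gt:      number of ground-truth objects for this class
    """
    # Sort detections by descending confidence, mimicking a threshold sweep.
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    tp = fp = 0
    precisions, recalls = [], []
    for i in order:
        if is_tp[i]:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Precision envelope: interpolated precision at recall r is the
    # maximum precision observed at any recall >= r.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Sum rectangle areas wherever recall increases.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += (r - prev_r) * p
        prev_r = r
    return ap


# Example: 3 detections, 2 ground-truth objects for this class.
ap = average_precision([0.9, 0.8, 0.7], [True, False, True], num_gt=2)
```

mAP would then simply be the arithmetic mean of `average_precision` over all classes (and, for mAP@[.5:.95], additionally averaged over the IoU thresholds).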