The True Positive Rate (TPR), also known as sensitivity or recall, is the proportion of actual positive instances that the model correctly identifies as positive. In other words, TPR = TP / (TP + FN). It measures coverage of the positive class: a TPR of 1.0 (100%) means the model finds all positives in the data, whereas a lower TPR indicates that some positives went undetected. TPR is often discussed alongside the False Positive Rate (FPR) in the context of ROC curves, and alongside precision in precision-recall analysis. A high TPR is crucial in applications like disease screening (catching all patients who have the disease), although it may come at the expense of a higher false positive rate. Improving TPR usually involves making the model more sensitive (e.g., lowering the classification threshold), which must be balanced against precision or specificity depending on the use case.
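The formula above can be sketched directly from label/prediction pairs. The function name and toy data below are illustrative, not from any particular library:

```python
def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN): the fraction of actual positives
    that the model predicted as positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Toy example: 4 actual positives, the model catches 3 of them.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
print(true_positive_rate(y_true, y_pred))  # 0.75
```

Note that the false positive on the last-but-one sample does not affect TPR at all; only missed positives (false negatives) lower it, which is why TPR is always read together with FPR or precision.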