A confusion matrix is a table used to evaluate a classification model by comparing its predictions against the ground truth. For an N-class problem it is an N×N matrix in which the cell at row i, column j counts the samples whose true class is i but were predicted as class j. In the binary case (2×2), one common layout places True Positives (TP) and False Negatives (FN) in the first row, and False Positives (FP) and True Negatives (TN) in the second row; other conventions (e.g. TN in the top-left) also exist, so always check the library's documentation. From the confusion matrix one can derive metrics such as accuracy (overall fraction correct), precision (TP/(TP+FP)), recall (TP/(TP+FN)), and the F1 score (the harmonic mean of precision and recall). Unlike a single aggregate score, the matrix shows exactly which classes the model confuses with one another.
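The counts and metrics above can be sketched in plain Python. This is a minimal illustration with made-up labels, not a production implementation (libraries such as scikit-learn provide vectorized equivalents); the function name `confusion_counts` and the example data are assumptions for this sketch.

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count TP, FN, FP, TN for a binary problem, treating `positive` as the positive class."""
    tp = fn = fp = tn = 0
    for t, p in zip(y_true, y_pred):
        if t == positive:
            if p == positive:
                tp += 1      # true positive: positive sample, predicted positive
            else:
                fn += 1      # false negative: positive sample, predicted negative
        else:
            if p == positive:
                fp += 1      # false positive: negative sample, predicted positive
            else:
                tn += 1      # true negative: negative sample, predicted negative
    return tp, fn, fp, tn

# Example data (illustrative only)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp, fn, fp, tn = confusion_counts(y_true, y_pred)

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(tp, fn, fp, tn)                       # 3 1 1 3
print(accuracy, precision, recall, f1)      # 0.75 0.75 0.75 0.75
```

Arranging the four counts as `[[tp, fn], [fp, tn]]` reproduces the 2×2 layout described above, with rows indexing the true class and columns the predicted class.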