Model accuracy refers to the proportion of correct predictions made by a classification model. More formally, accuracy = (number of correct predictions) / (total number of predictions). It is a basic metric for evaluating a model's performance, but it can be misleading on imbalanced datasets: a model that always predicts the majority class reaches 95% accuracy if that class makes up 95% of the data, while failing entirely on the minority class. For that reason, accuracy is usually reported alongside precision, recall, and related metrics when classes are imbalanced. For regression, "accuracy" is not used; metrics such as RMSE or R² are appropriate instead. Colloquially, "model accuracy" usually means accuracy on a held-out test set, which reflects generalization, and it is often the basis for comparing models, e.g., a model with 90% accuracy versus one with 85% on the same test set.
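A minimal sketch in plain Python makes the imbalance pitfall concrete (the `accuracy` and `recall` helpers and the 95/5 class split are illustrative, not taken from any particular library or dataset):

```python
def accuracy(y_true, y_pred):
    """Accuracy = correct predictions / total predictions."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Recall for the positive (minority) class:
    true positives / actual positives."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return tp / actual_pos if actual_pos else 0.0

# Imbalanced labels: 95 majority-class (0) samples, 5 minority-class (1) samples.
y_true = [0] * 95 + [1] * 5

# A degenerate "model" that always predicts the majority class.
y_pred = [0] * 100

print(accuracy(y_true, y_pred))  # 0.95 -- looks strong...
print(recall(y_true, y_pred))    # 0.0  -- ...but the minority class is never detected
```

The 95% accuracy here carries no information about the minority class, which is why precision and recall are reported alongside accuracy in imbalanced settings.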