A loss function (or cost function) is a measure of how well a machine learning model's predictions match the target values. It maps the model's outputs and the ground truth to a non-negative number, where 0 indicates a perfect prediction and larger values indicate worse performance. During training, learning algorithms aim to minimize the loss.

Common examples include:
- Mean Squared Error (MSE) for regression: the average of squared differences between predicted and actual values.
- Mean Absolute Error (MAE) for regression: the average of absolute differences.
- Cross-Entropy (log loss) for classification: the negative log likelihood of the true class given the model's predicted probabilities.
- More specialized losses such as hinge loss for SVMs or IoU-based losses for segmentation.

Some models use custom losses; for example, perceptual loss in super-resolution compares feature representations instead of raw pixel differences. The choice of loss function determines what the model optimizes: MSE penalizes large errors more heavily due to squaring, while MAE treats all errors linearly. Correctly specifying the loss is crucial for model performance and for alignment with the problem's goals.
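A minimal sketch of the three losses named above, implemented with NumPy (function names and the example values are illustrative, not from any particular library). The outlier in the regression example shows how squaring makes MSE far more sensitive to large errors than MAE:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared differences."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute differences."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def cross_entropy(true_class_idx, probs, eps=1e-12):
    """Log loss: mean negative log probability assigned to the true class."""
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    rows = np.arange(len(true_class_idx))
    return float(np.mean(-np.log(probs[rows, true_class_idx])))

# One large regression error dominates MSE but not MAE:
y_true = [0.0, 0.0, 0.0, 0.0]
y_pred = [0.1, 0.1, 0.1, 4.0]  # one outlier
print(mse(y_true, y_pred))     # 4.0075 -- dominated by the 4.0 error
print(mae(y_true, y_pred))     # 1.075  -- outlier counted linearly

# Classification: confident correct predictions yield low log loss.
probs = [[0.9, 0.1], [0.2, 0.8]]
print(cross_entropy([0, 1], probs))
```

Note that the cross-entropy implementation clips probabilities away from 0 before taking the log; without this, a model assigning probability 0 to the true class would produce an infinite loss.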