In machine learning, variance refers to the sensitivity of a model to fluctuations in the training data. A high-variance model fits the training data very closely (including its noise), so its predictions can change significantly with small changes in the training dataset. This typically manifests as overfitting: the model performs well on seen data but generalizes poorly to unseen data. Variance is one component of the bias-variance tradeoff (the other being bias). While low variance is desirable for generalization, very low variance can indicate an overly simple model (high bias) that underfits. Techniques to reduce variance include gathering more training data, simplifying the model (reducing complexity), regularization (such as dropout or weight decay), and ensemble methods (bagging in particular reduces variance). Ultimately, managing variance is about finding the model complexity that captures true patterns without fitting idiosyncratic noise in the training set.
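The variance-reduction effect of bagging mentioned above can be sketched with a small, self-contained experiment (illustrative only; the 1-nearest-neighbour model, the toy dataset, and all function names here are assumptions, not from the source). A 1-NN regressor is a classic high-variance model, so averaging its predictions over bootstrap resamples of the training set should make predictions at a fixed point vary less across independent training sets:

```python
import random

random.seed(0)

def true_fn(x):
    # Underlying pattern the model should capture.
    return 2.0 * x

def make_dataset(n=30):
    # Draw a fresh noisy training set each call.
    xs = [random.uniform(0.0, 1.0) for _ in range(n)]
    return [(x, true_fn(x) + random.gauss(0.0, 0.5)) for x in xs]

def predict_1nn(data, x):
    # 1-nearest-neighbour regression: fits noise, hence high variance.
    return min(data, key=lambda p: abs(p[0] - x))[1]

def predict_bagged(data, x, n_models=50):
    # Bagging: average 1-NN predictions over bootstrap resamples.
    preds = []
    for _ in range(n_models):
        boot = [random.choice(data) for _ in data]
        preds.append(predict_1nn(boot, x))
    return sum(preds) / len(preds)

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# Variance of predictions at a fixed query point, measured across
# many independent training sets; bagging should shrink it.
x0 = 0.5
var_single = variance([predict_1nn(make_dataset(), x0) for _ in range(200)])
var_bagged = variance([predict_bagged(make_dataset(), x0) for _ in range(200)])
print(f"single 1-NN prediction variance: {var_single:.3f}")
print(f"bagged 1-NN prediction variance: {var_bagged:.3f}")
```

Note that bagging leaves bias mostly unchanged; it attacks only the variance term of the tradeoff, which is why it pairs well with flexible, low-bias base models.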