Model validation is the process of evaluating a trained model on data it was not trained on, in order to assess its generalization performance and to tune hyperparameters. Typically, the available data is split into training and validation sets (and often a separate test set as well). The model is trained on the training set, and its performance (accuracy, loss, etc.) is measured on the validation set. This guides choices such as model architecture, hyperparameter values, and early stopping, where training is halted once validation performance stops improving in order to avoid overfitting.

Validation is a crucial part of the development cycle because it provides a nearly unbiased estimate of model performance on unseen data, helping to detect overfitting to the training set. Techniques like k-fold cross-validation run multiple validation rounds, partitioning the data differently each time, to obtain a more robust estimate. Once hyperparameters are finalized, a final model is often retrained on the combined training and validation data and then evaluated once on a held-out test set to report final performance.
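The split-train-evaluate loop above can be sketched with plain Python. This is a minimal illustration, not a production recipe: the "model" here is a hypothetical mean predictor (it always predicts the training-set mean), chosen only so the cross-validation mechanics stay visible.

```python
import random

def kfold_indices(n, k, seed=0):
    """Partition indices 0..n-1 into k shuffled folds for cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def mean_predictor(train_y):
    """Trivial stand-in model: always predict the training-set mean."""
    mu = sum(train_y) / len(train_y)
    return lambda x: mu

def mse(predict, xs, ys):
    """Mean squared error of a predictor on a labeled set."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(ys)

def cross_validate(xs, ys, k=5):
    """Average validation MSE over k rounds, each holding out one fold."""
    folds = kfold_indices(len(xs), k)
    scores = []
    for i in range(k):
        held_out = set(folds[i])
        tr_y = [y for j, y in enumerate(ys) if j not in held_out]
        va_x = [x for j, x in enumerate(xs) if j in held_out]
        va_y = [y for j, y in enumerate(ys) if j in held_out]
        model = mean_predictor(tr_y)        # fit on training folds only
        scores.append(mse(model, va_x, va_y))  # score on the held-out fold
    return sum(scores) / k

# Usage: noisy near-constant labels; the mean predictor should achieve a small MSE.
ys = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.05]
xs = list(range(len(ys)))
print(cross_validate(xs, ys, k=5))
```

Averaging the per-fold scores is what makes the estimate more robust than a single split: every example is used for validation exactly once, so the result depends less on one lucky or unlucky partition.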