Hyperparameters are the configuration settings of a learning algorithm that are fixed before training rather than learned from the data (in contrast to parameters, which the model learns). Examples include the learning rate, number of epochs, batch size, number of layers in a neural network, number of neurons per layer, regularization strength (such as lambda in ridge regression), and tree depth in a random forest.

These values can significantly affect model performance and training behavior, but they must be chosen by methods such as grid search, random search, or more advanced techniques like Bayesian optimization. They typically control either model complexity (e.g., the depth of a tree) or optimization dynamics (e.g., learning rate, momentum). Hyperparameter tuning is the process of finding the best combination of hyperparameters for a given model and dataset, typically by evaluating performance on a held-out validation set.
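The tuning loop described above can be sketched with a minimal grid search: try each candidate value of a hyperparameter (here, the ridge regularization strength lambda in a one-dimensional closed-form ridge regression), score each on a validation set, and keep the best. The data, function names, and grid values below are illustrative assumptions, not from any particular library.

```python
def ridge_fit_1d(xs, ys, lam):
    # Closed-form 1D ridge regression (no intercept):
    # w = sum(x*y) / (sum(x^2) + lambda)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def val_mse(w, xs, ys):
    # Mean squared error of predictions w*x on a validation set
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy data: y is roughly 2x with a little noise (illustrative only)
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [2.1, 3.9, 6.2, 7.8]
val_x = [1.5, 2.5, 3.5]
val_y = [3.0, 5.1, 6.9]

# Grid search: fit once per candidate lambda, score on the validation set
grid = [0.0, 0.1, 1.0, 10.0]
scores = {lam: val_mse(ridge_fit_1d(train_x, train_y, lam), val_x, val_y)
          for lam in grid}
best_lam = min(scores, key=scores.get)
print(best_lam, scores[best_lam])
```

The same pattern generalizes to multiple hyperparameters (iterate over the Cartesian product of grids) and to random search (sample candidate combinations instead of enumerating them); libraries such as scikit-learn wrap this loop with cross-validation.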