Model parameters are the internal coefficients or weights that a machine learning algorithm tunes during training; they define the model and are what is learned from data. In linear regression, the weights and the bias term are the parameters. In a neural network, the parameters are all of the connection weights and biases, often millions of them. In a decision tree, the parameters are the split thresholds and the feature chosen at each node (these are fit from data, though usually not by continuous optimization). Parameters are distinct from hyperparameters, such as the learning rate or regularization strength, which the practitioner sets before training and which are not learned directly by the model. Once training is done, the parameter values are what gets saved in a model checkpoint and used to make predictions. Model complexity is often discussed in terms of parameter count; for example, a model with billions of parameters, such as GPT-3 with its 175 billion, is highly complex and expressive.
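The parameter/hyperparameter distinction can be made concrete with a minimal sketch: fitting a linear regression by gradient descent on synthetic data. The learning rate and epoch count below are hyperparameters chosen up front, while the weight `w` and bias `b` are parameters updated from the data; the specific values and data here are illustrative, not from the original text.

```python
import numpy as np

# Synthetic data generated from y = 2x + 1 (no noise, for clarity).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * X + 1.0

# Hyperparameters: chosen by the practitioner, not learned.
learning_rate = 0.1
n_epochs = 500

# Parameters: initialized arbitrarily, then tuned during training.
w, b = 0.0, 0.0
for _ in range(n_epochs):
    pred = w * X + b
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean((pred - y) * X)
    grad_b = 2.0 * np.mean(pred - y)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

# After training, (w, b) is what a checkpoint would store;
# here they recover the generating values w ≈ 2, b ≈ 1.
```

Changing `learning_rate` or `n_epochs` changes how training proceeds, but the saved model is just the final `(w, b)` pair.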