Dropout is a regularization technique for neural networks that randomly "drops out" (sets to zero) a subset of neurons in a layer during each training iteration. This means that for each training example, each neuron (and its connections) has a certain probability (e.g., 0.5) of being temporarily removed from the network. Dropout prevents the network from relying too much on any single neuron, forcing it to learn redundant representations that remain useful in conjunction with different subsets of other neurons. This generally improves the network's ability to generalize and reduces overfitting. At test time (or when not using dropout), all neurons are active, and their outgoing weights are scaled by the keep probability (1 minus the dropout rate) to match the expected activations seen during training; equivalently, "inverted dropout" scales the surviving activations by 1/(1 − p) during training so that no scaling is needed at test time. Dropout is simple yet very effective and widely used in deep learning.
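To make the mechanics concrete, here is a minimal NumPy sketch of inverted dropout (the variant described above, where surviving activations are rescaled during training so inference needs no adjustment). The function name and signature are illustrative, not from any particular library:

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each element with probability p during training,
    scaling survivors by 1/(1 - p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x  # at inference, activations pass through unscaled
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

x = np.ones((4, 8))
out = dropout(x, p=0.5, training=True, rng=np.random.default_rng(0))
# surviving entries are scaled up to 2.0; the rest are zeroed
```

Because each entry of `out` is either 0 or 1/(1 − p), the expected value of every activation equals its original value, which is why the network can be used unchanged at test time.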