In machine learning, especially deep learning, an epoch is one full pass through the entire training dataset during the training process. When training with mini-batch gradient descent, multiple batches constitute one epoch, so that by the end of an epoch every training example has been seen once. If you have N training samples and a batch size of B, an epoch takes N/B iterations (weight updates) when B divides N evenly, and ⌈N/B⌉ otherwise, with a smaller final batch. Training usually involves multiple epochs; during each epoch the model parameters are updated and ideally the model's performance (on training and validation data) improves until convergence. The number of epochs to train is typically chosen via validation or based on convergence criteria, such as stopping when further epochs yield no significant improvement. Training for too many epochs can cause the model to overfit the training data.
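The relationship between samples, batch size, epochs, and weight updates can be sketched with a toy loop. This is an illustrative example, not a real training routine; the function and variable names are assumptions made for the sketch:

```python
import math

def count_updates(num_samples, batch_size, num_epochs):
    """Toy loop counting gradient updates; names are illustrative only."""
    # ceil handles the case where the last batch is smaller than batch_size
    iters_per_epoch = math.ceil(num_samples / batch_size)
    total_updates = 0
    for epoch in range(num_epochs):
        # each start index marks one mini-batch, i.e. one weight update
        for start in range(0, num_samples, batch_size):
            total_updates += 1
    return iters_per_epoch, total_updates

# N = 1000 samples, B = 32: ceil(1000 / 32) = 32 iterations per epoch
per_epoch, total = count_updates(1000, 32, 5)
```

With N = 1000 and B = 32, each epoch performs 32 updates (31 full batches plus one batch of 8), so 5 epochs perform 160 updates in total.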