In machine learning, normalization usually refers to rescaling input features to have certain properties, often to make optimization easier. Common normalization techniques:

- Min-Max normalization: scale features to the [0, 1] range (or [-1, 1]) by subtracting the minimum and dividing by the range.
- Z-score normalization (standardization): subtract the mean and divide by the standard deviation, so each feature has mean 0 and variance 1.

Normalization helps gradient descent converge faster, especially for models that are sensitive to feature scale (neural networks, logistic regression, KNN, SVM, etc.), and prevents some features from dominating simply because of their larger scale. In some contexts, normalization instead refers to adjusting a probability distribution so that it sums (or integrates) to 1, as the softmax function does when it converts a vector of logits into probabilities. There are also normalization layers such as Batch Normalization (already discussed) or Layer/Instance Normalization used in deep networks. In summary, normalization generally means making data "normalized" in range or distribution so that subsequent processing is easier or more meaningful.
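As a minimal illustrative sketch (not tied to any particular library API), the NumPy snippet below shows min-max scaling, z-score standardization, and softmax normalization on a small toy array; the feature matrix and variable names are made up for the example.

```python
import numpy as np

# Toy feature matrix: rows are samples, columns are features on very different scales.
X = np.array([
    [1.0,   200.0],
    [2.0,   400.0],
    [3.0,   600.0],
    [4.0,  1000.0],
])

# Min-max normalization: rescale each feature (column) to the [0, 1] range.
x_min = X.min(axis=0)
x_max = X.max(axis=0)
X_minmax = (X - x_min) / (x_max - x_min)

# Z-score normalization (standardization): zero mean, unit variance per feature.
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_standardized = (X - mu) / sigma

# Softmax: normalize a vector of logits into probabilities that sum to 1
# (subtracting the max is a standard trick for numerical stability).
logits = np.array([2.0, 1.0, 0.1])
exp_shifted = np.exp(logits - logits.max())
probs = exp_shifted / exp_shifted.sum()

print(X_minmax)            # all values now lie in [0, 1]
print(X_standardized)      # each column has mean ~0 and std ~1
print(probs, probs.sum())  # a probability distribution summing to 1.0
```

In practice, the min/mean/std statistics would be computed on the training set only and then reused to transform validation and test data, so that no information leaks across the split.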