Fisher’s Linear Discriminant (FLD) is a method used in pattern recognition to find a linear combination of features that separates two or more classes of objects. In the two-class case (often called Linear Discriminant Analysis (LDA) when used for classification), it seeks a projection onto a line such that the separation between the two class means is maximized relative to the variability within each class. Concretely, it maximizes the Fisher criterion: the ratio of between-class variance to within-class variance in the projected data. The resulting linear discriminant function can then serve as a one-dimensional classifier by thresholding along that line. For k classes, LDA generalizes to finding a (k−1)-dimensional subspace. It is effectively dimensionality reduction supervised by class labels, and it is optimal as a classifier when the data are Gaussian with equal covariance matrices across classes. The discriminant directions are the leading generalized eigenvectors of the between-class scatter matrix with respect to the within-class scatter matrix.
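The two-class case described above can be sketched directly with NumPy: the optimal direction is proportional to the inverse within-class scatter matrix applied to the difference of class means, w ∝ S_W⁻¹(m₁ − m₀). The data below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic two-class data (illustrative only): two Gaussian blobs.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))  # class 0
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(100, 2))  # class 1

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Within-class scatter S_W: sum of the per-class scatter matrices.
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)

# Fisher direction: w proportional to S_W^{-1} (m1 - m0), which
# maximizes between-class over within-class variance after projection.
w = np.linalg.solve(Sw, m1 - m0)
w /= np.linalg.norm(w)

# One-dimensional classifier: threshold the projection at the
# midpoint of the two projected class means.
threshold = ((m0 + m1) @ w) / 2.0
pred_class1 = (X1 @ w) > threshold  # True where class 1 is predicted
```

With well-separated blobs like these, nearly all points fall on the correct side of the threshold; with equal class covariances this midpoint rule matches the Gaussian-optimal decision boundary up to the class priors.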