Not to be confused with Latent Dirichlet Allocation, Linear Discriminant Analysis (LDA) in machine learning is both a dimensionality reduction technique and a classifier.

As a classifier, LDA assumes that each class's data follows a Gaussian distribution and that all classes share the same covariance matrix. From these assumptions it derives a linear decision boundary.

As a dimensionality reduction method, LDA finds a linear combination of features (a projection) that best separates the classes by maximizing the between-class variance relative to the within-class variance (Fisher's criterion). In the two-class case this is a single direction onto which the data is projected and thresholded; for C classes it finds at most C-1 discriminant axes. Because the resulting features are explicitly optimized for discrimination, LDA is closely related to Fisher's Linear Discriminant.

LDA works well when its assumptions hold and the classes are approximately linearly separable in some subspace. It is less flexible than non-linear methods, but it can be very efficient and robust when those conditions are met.
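As a minimal sketch of both uses, here is scikit-learn's `LinearDiscriminantAnalysis` fit on synthetic 3-class Gaussian data (the data, means, and sizes below are illustrative assumptions, not from the text):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy 3-class data with shared (identity) covariance, matching LDA's assumptions.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0, 0], size=(50, 3)),
    rng.normal(loc=[4, 0, 0], size=(50, 3)),
    rng.normal(loc=[0, 4, 0], size=(50, 3)),
])
y = np.repeat([0, 1, 2], 50)

# With C = 3 classes, LDA yields at most C-1 = 2 discriminant axes.
lda = LinearDiscriminantAnalysis(n_components=2)
X_proj = lda.fit_transform(X, y)   # dimensionality reduction: project onto 2 axes
preds = lda.predict(X)             # classification via the linear decision boundary

print(X_proj.shape)    # (150, 2)
print(lda.score(X, y))
```

Because the classes here really are Gaussians with a common covariance, the fitted model recovers a near-perfect linear separation; on data that violates these assumptions, a non-linear classifier may be needed.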