The Expectation-Maximization (EM) algorithm is an iterative method for finding maximum likelihood estimates of parameters in statistical models that depend on unobserved latent variables. EM alternates between two steps per iteration: the Expectation step (E-step), which computes the expected value of the log-likelihood with respect to the current estimate of the distribution of the latent variables; and the Maximization step (M-step), which maximizes this expected log-likelihood to update the parameter estimates. The classic use case is fitting Gaussian Mixture Models, where the component assignment of each point is latent. In the E-step, one computes "soft" assignments of points to Gaussians (posterior probabilities given the current parameters), and in the M-step, one updates the Gaussian parameters (means, variances, mixture weights) using those assignments. EM is guaranteed never to decrease the likelihood at each iteration and converges to a local maximum. It is widely applicable, from clustering to incomplete-data scenarios.
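The E-step and M-step for a Gaussian mixture can be sketched as follows. This is a minimal one-dimensional, two-component illustration (the function name `em_gmm_1d`, the deterministic min/max initialization, and the fixed iteration count are choices made for this sketch, not part of any standard API):

```python
import math
import random

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (a minimal sketch)."""
    n = len(x)
    # Initialize: equal weights, the data extremes as means, global variance.
    w = [0.5, 0.5]
    mu = [min(x), max(x)]
    mean = sum(x) / n
    var0 = sum((xi - mean) ** 2 for xi in x) / n
    var = [var0, var0]

    def pdf(xi, m, v):
        # Gaussian density N(xi; m, v).
        return math.exp(-0.5 * (xi - m) ** 2 / v) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: posterior ("soft") assignment of each point to each Gaussian.
        resp = []
        for xi in x:
            p = [w[k] * pdf(xi, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from the soft counts.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
    return w, mu, var

# Two well-separated clusters: EM should recover means near 0 and 5.
rnd = random.Random(1)
x = [rnd.gauss(0, 1) for _ in range(300)] + [rnd.gauss(5, 1) for _ in range(300)]
w, mu, var = em_gmm_1d(x)
print(sorted(round(m, 1) for m in mu))
```

Because each M-step maximizes the expected log-likelihood computed in the preceding E-step, the data likelihood is monotonically non-decreasing across these iterations, which is the convergence guarantee described above.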