An Extreme Learning Machine (ELM) is a learning model for single-hidden-layer feedforward neural networks that forgoes iterative tuning of the hidden-layer weights. In an ELM, the hidden layer's weights and biases are randomly assigned and fixed (not learned); only the output-layer weights are learned, typically in one step by a least-squares fit. This makes training extremely fast: it amounts to solving a linear system for the output weights. Despite the random hidden layer, a network with enough hidden nodes can still fit complex functions, thanks to the universal approximation capability of such networks. ELMs commonly use sigmoid or RBF activation functions for the hidden nodes. They trade a larger hidden layer (often requiring more nodes than a fully tuned network would) for very quick training. ELMs have been applied to classification and regression tasks where rapid training is important, though they may not match the accuracy of fully optimized multi-layer networks on complex tasks.
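The training procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the function names (`elm_fit`, `elm_predict`), the tanh activation, and the hidden-layer size are all arbitrary choices for the example. The key point is that the random hidden weights are never updated; only the output weights are obtained, in one shot, via least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Train an ELM: random (fixed) hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never tuned
    b = rng.normal(size=n_hidden)                 # random biases, never tuned
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-step least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: regress a noisy sine wave.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only `beta` is solved for, "training" here is a single linear-algebra call; no gradient descent or backpropagation is involved.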