A perceptron is one of the earliest and simplest artificial neural networks: a single layer of linear weights followed by a step (threshold) activation function. Introduced by Frank Rosenblatt in the late 1950s, it computes a weighted sum of its inputs and outputs +1 if the sum exceeds a threshold and -1 otherwise (or 1/0, depending on convention), making it a binary linear classifier. The perceptron learning algorithm adjusts the weights iteratively over a set of labeled examples and is guaranteed to converge to a separating linear decision boundary if the data is linearly separable.

However, a single perceptron cannot solve problems that are not linearly separable; the classic counterexample is XOR. Multi-layer perceptrons (with hidden layers) overcome this limitation, but training them only became practical with backpropagation decades later. Historically, the perceptron matters both as the ancestor of modern neural networks and because Minsky and Papert's book Perceptrons (1969), which highlighted its limitations, contributed to a temporary decline in neural network research.
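The learning rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the function names, learning rate, and epoch limit are all illustrative choices. It trains on the OR function, which is linearly separable, so the algorithm converges.

```python
def predict(weights, bias, x):
    # Step activation: output +1 if the weighted sum exceeds 0, else -1
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else -1

def train_perceptron(samples, labels, lr=0.1, epochs=100):
    """Classic perceptron rule: on each mistake, nudge the weights
    toward (label +1) or away from (label -1) the misclassified example."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            if predict(weights, bias, x) != y:
                # Update rule: w <- w + lr * y * x,  b <- b + lr * y
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
                errors += 1
        if errors == 0:  # a full error-free pass means convergence
            break
    return weights, bias

# OR gate: linearly separable, so training converges to a separating line
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, 1]
w, b = train_perceptron(X, y)
```

Running the same loop on XOR labels (`[-1, 1, 1, -1]`) would exhaust the epoch budget without ever reaching an error-free pass, which is exactly the limitation Minsky and Papert emphasized.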