Logistic regression is a widely used statistical model for binary classification. Despite its name, it is a classification algorithm, not a regression algorithm. It models the probability that an instance belongs to the positive class as p = σ(w·x + b), where σ is the logistic sigmoid function σ(z) = 1/(1+e^{-z}), which maps any real-valued input to a probability in [0, 1]. The parameters w and b are fit by maximizing the likelihood of the training labels or, equivalently, by minimizing the logistic loss (the binary cross-entropy), which is convex. The decision boundary of logistic regression is linear in the feature space, at w·x + b = 0 (where p = 0.5), so it is a linear classifier. Multinomial logistic regression (a.k.a. softmax regression) generalizes it to multi-class problems. The model is relatively interpretable (each weight indicates how that feature shifts the log-odds of the positive class) and, if trained properly, outputs well-calibrated probabilities that are useful for downstream decision-making.
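As a minimal sketch of the training procedure described above, the snippet below fits w and b by batch gradient descent on the mean cross-entropy loss; the function names, learning rate, and toy dataset are illustrative choices, not part of the original text.

```python
import numpy as np

def sigmoid(z):
    # logistic sigmoid: maps any real z to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    """Fit w, b by gradient descent on the (convex) logistic loss."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)        # predicted P(y = 1 | x)
        grad_w = X.T @ (p - y) / n    # gradient of mean cross-entropy w.r.t. w
        grad_b = np.mean(p - y)       # gradient w.r.t. the bias b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy 1-D dataset: labels flip from 0 to 1 between x = 1 and x = 2
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logistic(X, y)
p = sigmoid(X @ w + b)
print(np.round(p, 2))
```

Because the loss is convex, plain gradient descent converges to the global optimum; the learned probabilities increase monotonically with x, crossing 0.5 at the linear decision boundary w·x + b = 0.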