Histogram of Oriented Gradients (HOG) is a feature descriptor used in computer vision for object detection. The idea is to characterize local object appearance and shape by the distribution of intensity gradients or edge orientations in localized regions of an image. To compute HOG for an image: (1) divide the image into small regions called cells; (2) for each cell, compute a histogram of gradient directions (e.g., 9 bins covering 0–180°) weighted by gradient magnitude; (3) normalize these histograms over larger, typically overlapping regions called blocks to gain robustness to changes in illumination and contrast. The concatenated, normalized block histograms form the HOG descriptor. HOG features, when used with a linear SVM, were famously effective for detecting pedestrians (Dalal and Triggs, 2005). They capture edge structure in a way that is invariant to small geometric and photometric transformations, making them robust hand-crafted features for vision tasks.
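The three steps above can be sketched in NumPy. This is a simplified illustration, not the full Dalal–Triggs pipeline: it skips gamma correction and the soft interpolation of votes between neighboring bins and cells, and the function name and parameters (`hog_descriptor`, `cell`, `block`) are hypothetical, not from any library.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9, block=2, eps=1e-6):
    """Simplified HOG: per-cell orientation histograms + block L2 normalization."""
    img = img.astype(np.float64)
    # (1)/gradients: finite-difference gradients, magnitude, and
    # unsigned orientation folded into [0, 180)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # (2) per-cell histograms of gradient direction, weighted by magnitude
    n_cy, n_cx = img.shape[0] // cell, img.shape[1] // cell
    bin_idx = (ang / (180.0 / bins)).astype(int) % bins
    hist = np.zeros((n_cy, n_cx, bins))
    for cy in range(n_cy):
        for cx in range(n_cx):
            m = mag[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            b = bin_idx[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            for k in range(bins):
                hist[cy, cx, k] = m[b == k].sum()
    # (3) L2-normalize overlapping blocks of cells; the concatenation
    # of all normalized block vectors is the descriptor
    feats = []
    for by in range(n_cy - block + 1):
        for bx in range(n_cx - block + 1):
            v = hist[by:by + block, bx:bx + block].ravel()
            feats.append(v / np.sqrt(np.sum(v ** 2) + eps ** 2))
    return np.concatenate(feats)
```

For a 64×64 image with 8×8-pixel cells and 2×2-cell blocks, this yields 7×7 = 49 blocks of 36 values each, i.e., a 1764-dimensional descriptor. In practice, a tested implementation such as `skimage.feature.hog` is preferable.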