A-Z of Machine Learning and Computer Vision Terms

Ghost Frames
Gradient Descent
Greyscale
Ground Truth
H
Hierarchical Clustering
Histogram of Oriented Gradients (HOG)
Human Pose Estimation
Human in the Loop (HITL)
Hyperparameter Tuning
Hyperparameters
I
Image Annotation
Image Augmentation
Image Captioning
Image Classification
Image Degradation
Image Generation
Image Processing
Image Recognition
Image Restoration
Image Segmentation
Imbalanced Data
Imbalanced Dataset
In-Context Learning
Instance Segmentation
Interpolation
Interpretability
Intersection over Union (IoU)
J
Jaccard Index
Jupyter Notebooks
K
K-Means Clustering
Keypoints
Knowledge Graphs
L
LIDAR
Label
Label Errors
Large Language Model (LLM)
Latent Dirichlet Allocation (LDA)
Latent Space
Learning Rate
Linear Discriminant Analysis (LDA)
Linear Regression
Logistic Regression
Long Short-Term Memory (LSTM)
Loss Function
M
Machine Learning (ML)
Manifold Learning
Markov Chains
Mean Average Precision (mAP)
Mean Squared Error (MSE)
Medical Image Segmentation
Micro-Models
Model Accuracy
Model Parameters
Model Validation
Motion Detection
Motion Estimation
Multi-Task Learning
N
NIfTI
Natural Language Processing (NLP)
Neural Architecture Search
Neural Networks
Neural Style Transfer
Noise
Normalization
O
Object Detection
Object Localization
Object Recognition
Object Tracking
One-Shot Learning
Optical Character Recognition (OCR)
Optimization Algorithms
Outlier Detection
Overfitting
P
PACS (Picture Archiving and Communication System)
PR AUC
Pandas and NumPy
Panoptic Segmentation
Parameter-Efficient Fine-Tuning (Prefix-Tuning)
Pattern Recognition
Perceptron
Pixel
Pool-Based Sampling
Pooling
Pose Estimation
Precision
Predictive Model Validation
Principal Component Analysis
Prompt Chaining
Prompt Engineering
Prompt Injection
C

Clustering

Clustering is an unsupervised learning technique that involves grouping a set of data points into clusters such that points in the same cluster are more similar to each other than to points in other clusters. Unlike classification, clustering operates on unlabeled data – the algorithm tries to discover inherent groupings or structure in the data without any ground truth labels. The goal is to maximize intra-cluster similarity (data points within a cluster should be as alike as possible) and maximize inter-cluster difference (distinct clusters should be well separated or different in characteristics).

A classic example is clustering customers based on their purchase behavior: the algorithm might find one cluster of customers who buy mainly baby products, another cluster who buy luxury items, and so on – without having been told what those groups are beforehand. The “similarity” is defined via a distance or similarity measure (Euclidean distance is common for numeric data, but other measures or learned embeddings can be used). There are many clustering algorithms, each with different assumptions about cluster shape or formation. K-means clustering assumes clusters are roughly spherical in the feature space and partitions data into k clusters by iteratively assigning points to the nearest cluster centroid and updating the centroids. Hierarchical clustering builds a tree of clusters by either successively merging the closest clusters (agglomerative) or splitting clusters (divisive), which allows one to choose a clustering at any level of granularity. DBSCAN defines clusters as areas of high density and can find arbitrarily shaped clusters while marking outliers as noise, which makes it well suited to datasets with irregular cluster shapes. Gaussian mixture models assume the data is generated from a mixture of Gaussian distributions and use statistical inference (the EM algorithm) to soft-cluster points. Despite their different approaches, the common theme is that clustering algorithms try to capture the natural structure in the data.

Clustering is often used for exploratory data analysis – to discover patterns that weren’t immediately apparent. For example, in biology, gene expression data might be clustered to find groups of genes with similar expression profiles (perhaps indicating co-regulation). In image processing, one might cluster pixel colors to compress images (color quantization) or cluster images in an unsupervised way to organize a photo collection by content. It is also used in anomaly detection: points that don’t fit well into any cluster can be considered anomalies. One challenge with clustering is evaluating the results: since there are no true labels, validation relies on metrics like the silhouette score or Davies–Bouldin index (which assess cohesion and separation of clusters), or on domain knowledge to interpret the clusters. Another challenge is that clustering can be sensitive to the scaling of features and the choice of distance metric, so some preprocessing (such as PCA for dimensionality reduction or feature normalization) is often done to make clustering more effective. Overall, clustering is a powerful tool for letting the data speak for itself by revealing potential groupings that can lead to insights or serve as a preprocessing step for other tasks (e.g., cluster then classify, or initialize labels via clustering).
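
As a concrete illustration, here is a minimal sketch (assuming Python with scikit-learn and synthetic data in place of real records) that partitions unlabeled points with K-means and evaluates the grouping with the silhouette score described above; the choice of k=3 is purely illustrative.

```python
# Minimal clustering sketch (assumed scikit-learn setup, not part of the glossary):
# run K-means on synthetic, unlabeled data and score the result without ground truth.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real data (e.g., customer purchase features);
# the generated group labels are discarded because clustering is unsupervised.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.2, random_state=0)

# Normalize features so no single dimension dominates the distance metric.
X = StandardScaler().fit_transform(X)

# K-means: assign each point to the nearest centroid, then update centroids,
# repeating until assignments stabilize. k=3 is an illustrative choice.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Silhouette score assesses cohesion vs. separation without true labels;
# values closer to 1 indicate tighter, better-separated clusters.
print("Silhouette score:", silhouette_score(X, labels))
```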
