A-Z of Machine Learning and Computer Vision Terms

PyTorch

Q

Quantum Machine Learning
Query Strategy (Active Learning)
Query Synthesis Methods

R

RAG Architecture
ROC (Receiver Operating Characteristic) Curve
Random Forest
Recall (Sensitivity or True Positive Rate)
Recurrent Neural Network (RNN)
Region-Based CNN (R-CNN)
Regression (Regression Analysis)
Regularization Algorithms
Reinforcement Learning
Responsible AI

S

Scale Imbalance
Scikit-Learn
Segment Anything Model (SAM)
Selective Sampling
Self-Supervised Learning
Semantic Segmentation
Semi-supervised Learning
Sensitivity and Specificity of Machine Learning
Sentiment Analysis
Sliding Window Attention
Stream-Based Selective Sampling
Supervised Learning
Support Vector Machine (SVM)
Surrogate Model
Synthetic Data

T

Tabular Data
Text Generation Inference
Training Data
Transfer Learning
Transformers (Transformer Networks)
Triplet Loss
True Positive Rate (TPR)
Type I Error (False Positive)
Type II Error (False Negative)

U

Unsupervised Learning

V

Variance (Model Variance)
Variational Autoencoders

W

Weak Supervision
Weight Decay (L2 Regularization)

X

XAI (Explainable AI)
XGBoost

Y

YOLO (You Only Look Once)
Yolo Object Detection

Z

Zero-Shot Learning
C

Canonical Correlation Analysis (CCA)

Canonical Correlation Analysis (CCA) is a multivariate statistical technique that explores the relationships between two sets of variables by finding linear combinations (projections) of each set that are maximally correlated with each other. In formal terms, given two random vectors $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}^q$, CCA finds vectors $a$ and $b$ such that the canonical variables $U = a^T X$ and $V = b^T Y$ have the highest possible Pearson correlation coefficient. It produces a sequence of such pairs $(U_1, V_1), (U_2, V_2), \dots$, where each subsequent pair captures the next largest remaining correlation under the constraint of being uncorrelated with the previous pairs. These $U_i$ and $V_i$ are called canonical variates. By examining the coefficients (the elements of $a$ and $b$), one can interpret how the original variables contribute to the shared patterns between the two datasets.

CCA is particularly useful when we have two different sets of features describing the same observations and want to understand the common underlying factors. It has been applied in fields such as psychology (e.g., relating test scores to physiological measurements) and neuroscience (relating brain-activity features to stimulus features), among others. Notably, CCA generalizes several other techniques: many multivariate significance tests (MANOVA, multivariate regression) can be framed as special cases of CCA.

The method was first introduced by Harold Hotelling in 1936 and remains a cornerstone of multi-view learning, with modern extensions such as kernel CCA and deep CCA allowing nonlinear and deep learning-based correlations to be captured. In summary, CCA finds the best cross-covariance structure between two sets of variables, helping to uncover latent associations that are not apparent from the correlations within either set alone.
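The classical solution can be sketched in a few lines of NumPy: whiten each view by its within-set covariance, take the SVD of the whitened cross-covariance matrix, and read off the first canonical pair. The function name and the toy two-view dataset below are illustrative, not from any particular library; production code would typically use `sklearn.cross_decomposition.CCA` instead.

```python
import numpy as np

def cca_first_pair(X, Y):
    """First canonical weight vectors (a, b) and their canonical
    correlation, via SVD of the whitened cross-covariance matrix."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)   # within-set covariance of X
    Syy = Yc.T @ Yc / (n - 1)   # within-set covariance of Y
    Sxy = Xc.T @ Yc / (n - 1)   # cross-covariance between the views

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive-definite matrix
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Singular values of K are the canonical correlations
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    a = inv_sqrt(Sxx) @ U[:, 0]   # weights for X: U1 = a^T X
    b = inv_sqrt(Syy) @ Vt[0, :]  # weights for Y: V1 = b^T Y
    return a, b, s[0]

# Toy data: one latent factor shared by both views, plus pure-noise columns
rng = np.random.default_rng(0)
z = rng.normal(size=1000)
X = np.column_stack([z + 0.1 * rng.normal(size=1000),
                     rng.normal(size=1000)])
Y = np.column_stack([z + 0.1 * rng.normal(size=1000),
                     rng.normal(size=1000), rng.normal(size=1000)])

a, b, rho = cca_first_pair(X, Y)
print(f"first canonical correlation: {rho:.3f}")  # close to 1: the shared factor
```

The empirical Pearson correlation of the projections $Xa$ and $Yb$ equals the returned singular value, matching Hotelling's formulation above.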
