A-Z of Machine Learning and Computer Vision Terms

PyTorch
Q
Quantum Machine Learning
Query Strategy (Active Learning)
Query Synthesis Methods
R
RAG Architecture
ROC (Receiver Operating Characteristic) Curve
Random Forest
Recall (Sensitivity or True Positive Rate)
Recurrent Neural Network (RNN)
Region-Based CNN (R-CNN)
Regression (Regression Analysis)
Regularization Algorithms
Reinforcement Learning
Responsible AI
S
Scale Imbalance
Scikit-Learn
Segment Anything Model (SAM)
Selective Sampling
Self-Supervised Learning
Semantic Segmentation
Semi-supervised Learning
Sensitivity and Specificity of Machine Learning
Sentiment Analysis
Sliding Window Attention
Stream-Based Selective Sampling
Supervised Learning
Support Vector Machine (SVM)
Surrogate Model
Synthetic Data
T
Tabular Data
Text Generation Inference
Training Data
Transfer Learning
Transformers (Transformer Networks)
Triplet Loss
True Positive Rate (TPR)
Type I Error (False Positive)
Type II Error (False Negative)
U
Unsupervised Learning
V
Variance (Model Variance)
Variational Autoencoders
W
Weak Supervision
Weight Decay (L2 Regularization)
X
XAI (Explainable AI)
XGBoost
Y
YOLO (You Only Look Once)
Yolo Object Detection
Z
Zero-Shot Learning
C

Case-Based Reasoning

Case-Based Reasoning (CBR) is an approach to problem-solving in artificial intelligence that reuses past experiences (cases) to solve new problems. Instead of relying on general rules or an explicit model, a CBR system stores a knowledge base of cases, where each case is a specific problem situation paired with its solution (and often an explanation or outcome). When a new problem arises, the system retrieves a prior case that is similar to the current one and then reuses (adapts) that case's solution to fit the new problem's context. This process is inspired by how humans often reason by analogy, recalling how a similar issue was resolved in the past and applying that experience to the current situation.

In practice, case-based reasoning typically follows a structured four-step cycle:

  • Retrieve: Identify the most similar past case(s) from the case library that resemble the new problem. (E.g., a help-desk system finds a past incident report that matches a new customer's issue.)
  • Reuse: Copy or adapt the solution from the retrieved case to propose a solution for the current problem. Some adaptation may be needed if there are differences between the old and new cases.
  • Revise: Test the proposed solution in the real world (or through simulation) and revise it if necessary. If the solution doesn't fully solve the problem, adjust it until it works; this step is essentially error correction based on feedback.
  • Retain: After successfully solving the new problem, incorporate this experience as a new case into the case base for future reference. The system thus "learns" by storing the solved case, enriching its knowledge for future queries.

CBR has been used in domains such as legal reasoning (where past legal cases inform decisions on new cases), customer support systems, and medical diagnosis. One advantage of CBR is its ability to provide explanations for solutions: since a solution is derived from a specific past case, the system can present that analogy ("We solved a similar issue this way before"). This approach naturally supports incremental learning (each new case solved becomes a training example for future problems) and can work even when an explicit general theory of the domain is hard to formulate. However, maintaining an efficient and relevant case library (avoiding case overload or redundancy) and designing good similarity metrics are important challenges in case-based reasoning systems.
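To make the retrieve / reuse / revise / retain cycle concrete, here is a minimal sketch of one CBR iteration in Python. The case representation (flat feature dictionaries), the overlap-based similarity, the `CaseLibrary` class, and the help-desk style example cases are illustrative assumptions rather than a standard implementation; real CBR systems typically use domain-specific similarity metrics and adaptation rules.

```python
# Minimal case-based reasoning cycle: Retrieve -> Reuse -> Revise -> Retain.
# A case is a (problem_features, solution) pair; similarity is the fraction of
# attribute keys on which two problems agree (an illustrative choice).

class CaseLibrary:
    def __init__(self):
        self.cases = []  # each entry: (problem_features, solution)

    def similarity(self, a, b):
        # Fraction of attribute keys where the two problems have the same value.
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

    def retrieve(self, problem):
        # Retrieve: most similar stored case, or None if the library is empty.
        return max(self.cases, key=lambda c: self.similarity(c[0], problem), default=None)

    def retain(self, problem, solution):
        # Retain: store the solved case so it can inform future queries.
        self.cases.append((problem, solution))


def solve(library, problem, revise_fn=None):
    """Run one CBR cycle; `revise_fn` optionally adapts the reused solution."""
    case = library.retrieve(problem)                              # Retrieve
    proposal = case[1] if case else "escalate to a human expert"  # Reuse
    if revise_fn is not None:
        proposal = revise_fn(problem, proposal)                   # Revise
    library.retain(problem, proposal)                             # Retain
    return proposal


if __name__ == "__main__":
    lib = CaseLibrary()
    # Seed the case base with past (hypothetical) help-desk incidents.
    lib.retain({"os": "linux", "symptom": "no_network", "vpn": True}, "restart VPN client")
    lib.retain({"os": "windows", "symptom": "slow_boot", "vpn": False}, "disable startup apps")

    new_issue = {"os": "linux", "symptom": "no_network", "vpn": False}
    print(solve(lib, new_issue))  # -> "restart VPN client" (closest past case)
```

The sketch mirrors the four steps directly: `retrieve` picks the nearest stored case, `solve` reuses its solution, an optional `revise_fn` adapts it based on feedback, and `retain` writes the solved case back so the library grows with every query.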
