In-Context Learning is a capability of large language models (LLMs) that allows them to learn new tasks or behaviors at inference time, without any parameter updates or fine-tuning. Instead of retraining the model, users provide task instructions, examples, or demonstrations directly within the input prompt, and the model uses that context to generate appropriate responses.
For example, given this prompt:
Translate to French:
Dog → Chien
Cat → Chat
Bird → ...
The model will likely complete with “Oiseau” — not because it was specifically trained on this prompt, but because it can generalize patterns from the examples provided in the context.
In-context learning supports zero-shot, one-shot, and few-shot prompting, depending on how many demonstrations are included in the prompt.
This behavior emerges in large transformer-based models as their scale increases, and is a defining characteristic of modern LLMs like GPT-3 and GPT-4. It enables rapid adaptation to new tasks without retraining, making LLMs flexible and powerful for a wide variety of real-world applications.
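The translation prompt above can be sketched programmatically. This is a minimal illustration of assembling a few-shot prompt from demonstrations; `build_few_shot_prompt` is a hypothetical helper, not part of any specific LLM API:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Compose an in-context learning prompt from an instruction,
    (input, output) demonstration pairs, and a final query.
    Hypothetical helper for illustration only."""
    lines = [instruction]
    for source, target in examples:
        lines.append(f"{source} → {target}")
    # Leave the final mapping incomplete so the model fills it in.
    lines.append(f"{query} →")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate to French:",
    [("Dog", "Chien"), ("Cat", "Chat")],
    "Bird",
)
print(prompt)
```

The resulting string would be sent as-is to a model; no fine-tuning step is involved, which is the defining property of in-context learning.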