Build Better Models Faster with Self-supervised Pre-training

LightlyTrain is a model training plug-in for self-supervised learning. It helps you pre-train models, generate embeddings, and export backbones with a single line of code.

What are the benefits of self-supervised pre-training?

Models were pre-trained on the full COCO training set without labels, then fine-tuned on 10% of the same training set with labels.

29.6% higher mAP with pre-training

Better generalization
Reduced need for labeled data
Higher accuracy

Get more insights

Self-supervised learning enables models to extract meaningful features from vast amounts of unlabeled images.
Embeddings obtained from self-supervised pre-training are robust and generalizable, improving performance across a variety of computer vision tasks.
This reduces reliance on large labeled datasets and extensive training.
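To make the idea concrete, here is a minimal, stdlib-only sketch of the contrastive objective (InfoNCE) that underlies many self-supervised methods: two augmented views of the same image are pulled together in embedding space while other images are pushed away, no labels required. The function names and toy embeddings are illustrative, not part of LightlyTrain's API.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Contrastive loss: the anchor should be most similar to its
    # positive (another view of the same image) among all candidates.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Toy 2-D embeddings: when the two views agree, the loss is low...
low = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.2], [0.0, -1.0]])
# ...when the "positive" looks nothing like the anchor, the loss is high.
high = info_nce([1.0, 0.0], [-1.0, 0.1], [[0.9, 0.0], [0.0, 1.0]])
print(low < high)
```

Minimizing this loss over many unlabeled images is what produces the robust, transferable embeddings described above.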

Case Study

Just 10% of the data already yields >80% of the full-dataset accuracy

Which version is right for you?

Lightly SSL

Open-source version for research and individuals with community support

  • Low-level building blocks for research
  • SOTA self-supervised learning methods
  • Compatible with PyTorch and PyTorch Lightning
GitHub
LightlyTrain

Enterprise version for embedding model endpoints and co-shaping the roadmap

  • Off-the-shelf modules for pre-training, optimized for downstream tasks such as object detection, classification, and segmentation
  • Easy-to-use interface for training embedding models and generating embeddings with a single command
  • Automatic SSL method selection
  • Export to multiple model formats
  • Available as a Python package or Docker image
  • Tailoring & hands-on support
Contact Us
Trusted by major companies & research organizations

Experience LightlyTrain to optimize your data pipeline.

Take advantage of pre-training and self-supervised learning for your machine learning pipeline. Contact us to learn more.

Get a demo