Build Better Models Faster with Self-supervised Pre-training

LightlyTrain is a model training plug-in for self-supervised learning. It lets you pre-train models, generate embeddings, and export backbones with a single line of code.
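
As a sketch of that one-line workflow (the paths are placeholders, and the lightly_train.train call follows the public quick-start interface as we understand it):

    import lightly_train

    if __name__ == "__main__":
        # Pre-train a backbone on a folder of unlabeled images.
        # "my_data_dir" and "out/my_experiment" are placeholder paths.
        lightly_train.train(
            out="out/my_experiment",       # checkpoints, logs, and exports land here
            data="my_data_dir",            # folder of unlabeled images
            model="torchvision/resnet50",  # backbone to pre-train
        )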

What are the benefits of self-supervised pre-training?

Models were pre-trained on the full COCO training set without labels, then fine-tuned on 10% of the training set with labels

16.9% higher mAP with pre-training
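
A sketch of the fine-tuning half of this recipe, assuming a backbone exported by the pre-training run (the weight path below is hypothetical and depends on your run configuration):

    import torch
    import torchvision

    # Hypothetical path to backbone weights exported by the pre-training run.
    weights = torch.load("out/coco_pretrain/exported_last.pt", weights_only=True)

    # Load the pre-trained weights into a standard torchvision backbone, then
    # fine-tune on the labeled 10% subset with your usual training loop.
    backbone = torchvision.models.resnet50()
    backbone.load_state_dict(weights)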

  • Better generalization
  • Reduced need for large labeled datasets
  • Higher accuracy

Case Study

10% of the data already yields >80% of the full-dataset accuracy

Embeddings become significantly more meaningful through self-supervised pre-training

Which version is right for you?

Lightly SSL

Open-source version for researchers and individuals, with community support

  • Low-level building blocks for research (see the sketch below)
  • SOTA self-supervised methods
  • Compatible with PyTorch and PyTorch Lightning
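
For instance, a SimCLR-style setup can be assembled from these building blocks. A minimal sketch based on the library's documented modules, with random tensors standing in for augmented image views:

    import torch
    import torchvision
    from lightly.loss import NTXentLoss
    from lightly.models.modules import SimCLRProjectionHead

    # Assemble a SimCLR-style model from low-level building blocks:
    # a standard backbone, a projection head, and a contrastive loss.
    resnet = torchvision.models.resnet18()
    backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
    projection_head = SimCLRProjectionHead(512, 512, 128)
    criterion = NTXentLoss()

    # One training step on two augmented views of the same batch
    # (random tensors stand in for real augmented images).
    x0 = torch.randn(8, 3, 224, 224)
    x1 = torch.randn(8, 3, 224, 224)
    z0 = projection_head(backbone(x0).flatten(start_dim=1))
    z1 = projection_head(backbone(x1).flatten(start_dim=1))
    loss = criterion(z0, z1)
    loss.backward()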
GitHub
LightlyTrain

Enterprise version for embedding model endpoints and co-shaping the roadmap

  • Off-the-shelf modules for pre-training, optimized for downstream tasks such as object detection, classification, and segmentation
  • Easy-to-use interface for training embedding models and generating embeddings with a single command (see the sketch below)
  • Automatic SSL method selection
  • Export to multiple model formats
  • Available as a Python package or Docker image
  • Tailored features & hands-on support
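
A sketch of the single-command embedding workflow (paths are placeholders, and the lightly_train.embed call mirrors the documented interface as we understand it):

    import lightly_train

    if __name__ == "__main__":
        # Embed a folder of images with the checkpoint from a pre-training run.
        # All paths below are placeholders.
        lightly_train.embed(
            out="my_embeddings.pth",                               # output file, one vector per image
            data="my_data_dir",                                    # folder of images to embed
            checkpoint="out/my_experiment/checkpoints/last.ckpt",  # pre-trained checkpoint
        )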
Contact Us
Trusted by major companies & research organizations

Experience LightlyTrain to optimize your data pipeline.

Take advantage of pre-training and self-supervised learning for your machine learning pipeline. Contact us to learn more.

Get a demo