A decision tree is a tree-structured model for classification or regression that splits data based on feature values to arrive at a prediction. Each internal node corresponds to a decision on a feature (e.g., “Is age < 30?”), each branch represents an outcome of that decision, and each leaf node gives a final prediction (a class label or a continuous value).

Building a tree typically uses a greedy algorithm such as CART or C4.5, which at each step chooses the split that maximizes information gain or impurity reduction.

Decision trees are popular because they are easy to interpret (they resemble human decision processes) and handle heterogeneous data and non-linear relationships well. However, they readily overfit, so pruning or limits on depth and complexity are used to control this. Ensembles of trees (Random Forests, Gradient Boosted Trees) overcome many limitations of single decision trees at the cost of some interpretability.
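To make the greedy split search concrete, here is a minimal sketch of one CART-style step: computing Gini impurity and picking the threshold on a single numeric feature that yields the largest impurity reduction. The dataset and variable names are hypothetical, and a real implementation would recurse on both sides of the split and handle multiple features.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Greedy CART-style search: try each value as a threshold on one
    numeric feature and keep the split with the largest impurity reduction."""
    parent = gini(ys)
    best = None  # (impurity_reduction, threshold)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if not left or not right:
            continue  # skip degenerate splits
        # Impurity of the children, weighted by their sizes.
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        gain = parent - weighted
        if best is None or gain > best[0]:
            best = (gain, t)
    return best

# Toy data (hypothetical): ages and whether each person bought a product.
ages = [22, 25, 28, 33, 41, 47, 52, 60]
bought = ["no", "no", "no", "yes", "yes", "yes", "yes", "yes"]
gain, threshold = best_split(ages, bought)
print(f"best split: age < {threshold} (impurity reduction {gain:.4f})")
```

On this toy data the search recovers the rule "age < 33", which separates the two classes perfectly, so the impurity reduction equals the parent's Gini impurity. Growing a full tree repeats this search recursively on each resulting subset until the leaves are pure or a stopping criterion (e.g., maximum depth) is hit.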