Chi-squared Automatic Interaction Detection (CHAID) is a decision tree learning algorithm that uses chi-square statistical tests to determine how to split the data at each step. It is one of the earliest decision tree methods (developed by Gordon V. Kass in 1980) and is designed to handle categorical predictors by finding statistically significant splits without requiring binary partitions. At each node of the tree, CHAID examines all possible splits of the input features and performs a chi-square independence test between each feature and the target outcome to evaluate the significance of the association. The algorithm may merge categories of a predictor that are not significantly different with respect to the target (using methods like Bonferroni or Holm corrections to adjust for multiple comparisons) and chooses the split that yields the most significant p-value. This often results in multi-way splits: a single categorical feature may split into more than two branches if multiple distinct groups of categories are found, as opposed to the binary splits in CART. Tree growing continues recursively, performing chi-square tests at each node, until no further statistically significant splits can be made (based on a chosen significance threshold) or until other stopping criteria, such as a minimum node size, are reached.

The output of CHAID is a decision tree in which each internal node is an attribute and its branches correspond to groups of attribute values that differentiate the outcomes. Because CHAID uses a significance test, it naturally handles interaction detection: it finds interactions between predictors in terms of how they affect the response (hence the name). It has some advantages: because it can produce multi-branch splits, the resulting tree can be more compact and interpretable in certain cases, and it does not assume monotonic relationships or require binary encoding of categorical variables.
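The split-selection step described above can be sketched in a few lines of Python. This is a minimal illustration, not a full CHAID implementation: it omits the category-merging step, assumes binary features and a binary target (so each test has one degree of freedom), and applies a Bonferroni correction across the candidate features. The function names and toy data are hypothetical.

```python
import math
from collections import Counter

def chi2_sf_1df(x):
    """Survival function (p-value) of the chi-square distribution, 1 d.o.f."""
    return math.erfc(math.sqrt(x / 2.0))

def chi2_stat(xs, ys):
    """Pearson chi-square statistic for the contingency table of xs vs ys."""
    n = len(xs)
    obs = Counter(zip(xs, ys))
    row_tot, col_tot = Counter(xs), Counter(ys)
    stat = 0.0
    for r in row_tot:
        for c in col_tot:
            expected = row_tot[r] * col_tot[c] / n
            stat += (obs[(r, c)] - expected) ** 2 / expected
    return stat

def best_chaid_split(data, features, target, alpha=0.05):
    """Pick the feature whose chi-square independence test with the target
    gives the smallest Bonferroni-adjusted p-value, or None if no split
    is significant at level alpha."""
    pvals = {}
    for f in features:
        stat = chi2_stat([row[f] for row in data], [row[target] for row in data])
        pvals[f] = chi2_sf_1df(stat)
    best = min(pvals, key=pvals.get)
    adjusted = min(pvals[best] * len(features), 1.0)  # Bonferroni over features
    return (best, adjusted) if adjusted < alpha else (None, adjusted)

# Toy data: "region" perfectly predicts the target, "noise" is independent.
data = ([{"region": "A", "buys": "yes"} for _ in range(10)]
        + [{"region": "B", "buys": "no"} for _ in range(10)])
for i, row in enumerate(data):
    row["noise"] = "x" if i % 2 == 0 else "y"  # balanced, unrelated to target

feat, p = best_chaid_split(data, ["region", "noise"], "buys")
# feat == "region"; p is far below alpha
```

A real CHAID implementation would additionally test which categories of each predictor can be merged before comparing features, and would recurse on each resulting child node.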
However, multi-way splits mean that CHAID typically needs a large sample size to ensure that each child node has enough data; otherwise, the chi-square tests may not detect significance reliably. CHAID is non-parametric and does not assume linear relationships, which makes it flexible for exploration. It has been widely used in marketing analytics (for example, customer segmentation based on survey responses) and in other social science and medical domains where researchers want an explainable tree that highlights which input variables (and value groupings) lead to different outcomes. Its reliance on chi-square tests means it works best when the target is categorical (for classification tasks) or can be discretized into categories (for, say, ranking or segmentation tasks). Overall, CHAID is a useful tool when the goal is to uncover statistically significant splits and interactions in the data and produce an easily interpretable decision tree.
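As a small illustration of the discretization mentioned above, a continuous target can be binned into categories before applying a chi-square-based method like CHAID. The bin edges and variable names below are arbitrary choices for the example, not part of the algorithm.

```python
def discretize(values, edges):
    """Map each continuous value to a category label based on which
    interval between the (sorted) edges it falls into."""
    labels = []
    for v in values:
        for i, edge in enumerate(edges):
            if v < edge:
                labels.append(f"bin{i}")
                break
        else:  # value is >= every edge
            labels.append(f"bin{len(edges)}")
    return labels

# Hypothetical continuous target (e.g. customer spend) binned at 25/50/75.
spend = [12.0, 48.5, 75.2, 30.1, 99.9]
categories = discretize(spend, [25, 50, 75])
# → ['bin0', 'bin1', 'bin3', 'bin1', 'bin3']
```

The resulting categorical labels can then serve as the target in the chi-square independence tests.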