Tree-based methods divide the predictor space, that is, the set of possible values for X1, X2, …, Xp, into J distinct and non-overlapping regions, R1, R2, …, RJ. In theory, the regions could have any shape. However, we choose to divide the predictor space into high-dimensional rectangles, or boxes, for simplicity and for ease of interpretation of the resulting predictive model.

The goal is to find boxes R1, R2, …, RJ that minimize the residual sum of squares (RSS), given by

RSS = Σ_{j=1}^{J} Σ_{i ∈ R_j} (y_i − ŷ_{R_j})²,

where ŷ_{R_j} is the mean response of the training observations within the j-th box.

Unfortunately, it is computationally infeasible to consider every possible partition of the feature space into J boxes. For this reason, we take a **top-down, greedy** approach known as **recursive binary splitting**. The approach is top-down because it begins at the top of the tree and then successively splits the predictor space; each split is indicated via two new branches further down on the tree.

It is greedy because at each step of the tree-building process, the best split is made at that particular step, rather than looking ahead and picking a split that will lead to a better tree in some future step.

We first select the predictor *Xj* and the cutpoint *s* such that splitting the predictor space into the regions {X | Xj < s} and {X | Xj ≥ s} leads to the greatest possible reduction in RSS.
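This first split can be sketched as an exhaustive scan over every predictor and candidate cutpoint (a minimal pure-Python illustration; the function and variable names here are my own, not from any particular library):

```python
def rss(values):
    """Residual sum of squares of a region around its mean prediction."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(X, y):
    """Scan every predictor j and cutpoint s; return the (j, s, total_rss)
    minimizing RSS over the half-planes {X | Xj < s} and {X | Xj >= s}."""
    best_j, best_s, best_rss = None, None, float("inf")
    for j in range(len(X[0])):
        for s in sorted({row[j] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[j] < s]
            right = [y[i] for i, row in enumerate(X) if row[j] >= s]
            if not left or not right:
                continue  # a split must produce two non-empty regions
            total = rss(left) + rss(right)
            if total < best_rss:
                best_j, best_s, best_rss = j, s, total
    return best_j, best_s, best_rss
```

For example, `best_split([[1], [2], [3], [10], [11], [12]], [1, 1, 1, 5, 5, 5])` returns `(0, 10, 0.0)`: cutting the single predictor at s = 10 separates the two response groups perfectly.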

Next, we repeat the process, looking for the best predictor and best cutpoint in order to split the data further so as to minimize the RSS within each of the resulting regions.

However, this time, instead of splitting the entire predictor space, we split one of the two previously identified regions. We now have three regions. Again, we look to split one of these three regions further, so as to minimize the RSS. The process continues until a stopping criterion is reached; for instance, we may continue until no region contains more than five observations.
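The full recursion described above can be sketched as follows. This is a self-contained toy implementation with a minimum-region-size stopping rule; all names are illustrative rather than taken from any library:

```python
def _rss(values):
    """Residual sum of squares of a region around its mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def _best_split(X, y):
    """Greedy exhaustive search for the (predictor, cutpoint) minimizing RSS."""
    best = (None, None, float("inf"))
    for j in range(len(X[0])):
        for s in sorted({row[j] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[j] < s]
            right = [y[i] for i, row in enumerate(X) if row[j] >= s]
            if left and right:
                total = _rss(left) + _rss(right)
                if total < best[2]:
                    best = (j, s, total)
    return best

def build_tree(X, y, min_size=5):
    """Recursive binary splitting; stop once a region holds <= min_size points."""
    if len(y) <= min_size:
        return {"leaf": True, "pred": sum(y) / len(y)}
    j, s, _ = _best_split(X, y)
    if j is None:  # no valid split found (e.g., all rows identical)
        return {"leaf": True, "pred": sum(y) / len(y)}
    li = [i for i, row in enumerate(X) if row[j] < s]
    ri = [i for i, row in enumerate(X) if row[j] >= s]
    return {"leaf": False, "feature": j, "cut": s,
            "left": build_tree([X[i] for i in li], [y[i] for i in li], min_size),
            "right": build_tree([X[i] for i in ri], [y[i] for i in ri], min_size)}

def predict(tree, x):
    """Route an observation down the tree to its leaf's mean prediction."""
    while not tree["leaf"]:
        tree = tree["left"] if x[tree["feature"]] < tree["cut"] else tree["right"]
    return tree["pred"]
```

Each leaf predicts the mean response of the training observations that fall into its box, which is exactly the ŷ_{R_j} that the RSS criterion is built around.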

Example:

Since extreme values, or outliers, rarely cause much reduction in RSS, they are seldom chosen as the basis for a split.

Hence, tree-based methods are relatively insensitive to outliers in the predictors: a split depends only on the ordering of the predictor values, not on how extreme they are.
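This can be illustrated with a toy RSS split search (data and names are illustrative): appending an observation with an extreme predictor value leaves the chosen cutpoint unchanged, because splitting off that one point would barely reduce the RSS.

```python
def rss(values):
    """Residual sum of squares of a region around its mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(X, y):
    """Exhaustive search for the (predictor, cutpoint) minimizing total RSS."""
    best = (None, None, float("inf"))
    for j in range(len(X[0])):
        for s in sorted({row[j] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[j] < s]
            right = [y[i] for i, row in enumerate(X) if row[j] >= s]
            if left and right:
                total = rss(left) + rss(right)
                if total < best[2]:
                    best = (j, s, total)
    return best

# Clean data: best cut at x = 10 separates the two response groups.
j1, s1, _ = best_split([[1], [2], [3], [10], [11], [12]],
                       [1, 1, 1, 5, 5, 5])

# Add an extreme predictor value (x = 1000): the cutpoint stays at 10,
# since isolating the single extreme point cannot beat the existing split.
j2, s2, _ = best_split([[1], [2], [3], [10], [11], [12], [1000]],
                       [1, 1, 1, 5, 5, 5, 6])
```

Here both searches choose the cutpoint s = 10 on the same predictor, with and without the extreme value.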
