Explain decision tree in simple terms
Answers (6)
A Decision Tree is an algorithm that can be used to solve both regression and classification problems.
While building a decision tree, the most important predictor is placed at the top, and based upon a cut-off value that predictor is split into two nodes. The same procedure is carried out for the nodes that are created, until all the samples in the
training data fall into one leaf node or another. A leaf node is one that is not split any further.
Behind every split, the aim is to achieve the maximum reduction in the error rate. Categorical variables are split according to their categories.
Decision trees also form the basis of other popular algorithms such as Random Forest and XGBoost.
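As a rough sketch of this idea (assuming scikit-learn is available; the datasets and the max_depth value are only illustrative), the same tree algorithm can be fit to a classification problem and to a regression problem:

```python
from sklearn.datasets import load_iris, load_diabetes
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Classification: predict the iris species from flower measurements.
X_cls, y_cls = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3).fit(X_cls, y_cls)
print("classification accuracy:", clf.score(X_cls, y_cls))

# Regression: predict disease progression from patient features.
X_reg, y_reg = load_diabetes(return_X_y=True)
reg = DecisionTreeRegressor(max_depth=3).fit(X_reg, y_reg)
print("regression R^2:", reg.score(X_reg, y_reg))
```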
What are you going to do tomorrow? I don't know your plan, but I can tell you mine.
I will set my priorities based on some rules. From the image, you can see that if I have to work, I will stay in; otherwise I will check the outlook. If it is sunny outside, I will go to the beach with friends; if it is overcast, I will go running; and if it is rainy, I will ask my friends if they are free: if they are, we can go for a movie, else I will stay in.
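These rules can be written down directly as nested if/else checks. Here is a minimal plain-Python sketch; the names work_to_do, outlook and friends_free are made up for illustration:

```python
def plan_my_day(work_to_do: bool, outlook: str, friends_free: bool) -> str:
    if work_to_do:                 # root question: do I have work?
        return "stay in"
    if outlook == "sunny":         # decision node: check the outlook
        return "go to the beach with friends"
    if outlook == "overcast":
        return "go running"
    # rainy: ask whether friends are free
    return "go to a movie" if friends_free else "stay in"

print(plan_my_day(work_to_do=False, outlook="rainy", friends_free=True))
```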
I hope this gives you a glimpse of what a decision tree looks like in general. First, let us go over some of the technical terms used in a decision tree.
1) Root Node – represents the entire sample, which further gets divided into two or more homogeneous sets. In our case, work_to_do is the root node.
2) Splitting – the process of dividing a node into two or more sub-nodes.
3) Decision Node – a sub-node that is further divided into two or more sub-nodes. In our example, Outlook is a decision node.
4) Leaf/Terminal Node – the final node, which cannot be split further. In our case, go to beach, go running, etc. are leaf nodes.
A decision tree uses a set of decision rules that split our population into multiple subsets. But the question arises: with which feature should we start the splitting, and what criterion should we use for splitting?
It depends on your task. If it is a classification task, then we use Information Gain (entropy) as our splitting criterion; if it is a regression task, then we use mean squared error as one of our criteria. (I am not going deep into these terms.)
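As a small illustration (assuming scikit-learn; in older versions the regression criterion was named "mse" rather than "squared_error"), the splitting criterion is just a parameter of the tree:

```python
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# "entropy" makes the classifier split by information gain;
# "squared_error" is the mean-squared-error criterion for regression.
clf = DecisionTreeClassifier(criterion="entropy")
reg = DecisionTreeRegressor(criterion="squared_error")
print(clf.criterion, reg.criterion)
```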
I hope this helps newcomers understand the decision tree in a simple way.
Decision Trees: they help to capture non-linear relationships in the data.
The main idea is that, in real life, data may not follow a linear relationship, so we first segregate the data; then it does not matter whether the data is linear or non-linear.
In a decision tree, the most important variable or predictor is at the top of the tree (the root), and based on some cut-off value it is split into two nodes; this process continues until all the samples in the training data end up in one leaf node or another (a leaf node cannot be split further).
Decision trees by themselves have low bias and high variance, which is why a small change in the data can result in a completely different tree. So segregating the data, building a decision tree model on each subset, and then aggregating the models results in low bias and lower variance; this is also known as Ensemble Learning.
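A rough sketch of this variance-reduction effect (assuming scikit-learn; the iris dataset and 5-fold cross-validation are only illustrative) compares a single tree with a bagged ensemble of trees:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
single_tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)

# Averaging many trees trained on resampled data is usually more stable
# than a single deep tree.
print("single tree:", cross_val_score(single_tree, X, y, cv=5).mean())
print("forest     :", cross_val_score(forest, X, y, cv=5).mean())
```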
1) A decision tree is a flowchart-like structure in which each internal node represents a “test” on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes).
2) Decision Trees are mostly used for classification problems and are a type of supervised machine learning algorithm.
3) The decision tree chooses the most important variable or predictor to be at the top of the tree (the root), and based on some cut-off values it is split into two nodes; this process continues until all the samples in the training data end up in one leaf node or another (which cannot be split further).
4) The main idea of a decision tree is to identify the features which contain the most information regarding the target feature and then split the dataset along the values of these features such that the target feature values at the resulting nodes are as pure as possible. A feature that best separates the uncertainty from information about the target feature is said to be the most informative feature. The search for the most informative feature goes on until we end up with pure leaf nodes.
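One way to see "most informative features" in practice (a sketch assuming scikit-learn; the iris data is only illustrative) is to fit a tree and look at feature_importances_, which reflects how much each feature reduced impurity across the splits:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)

# Features with higher importance contributed more to making nodes pure.
for name, importance in zip(data.feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.3f}")
```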
A decision tree is similar to the tree data structure, with a root node, leaf nodes, etc. It is used to solve both regression and classification problems. It works on the concept of reducing entropy, or simply reducing variation in the data, which means the same type of data is clustered into a group. The most important predictor is at the top, the data is divided using it, and the same happens for the following nodes until a leaf node is reached.
Decision trees are also the basis of Random Forest and other popular algorithms.
A decision tree is a type of supervised learning algorithm. It works for both categorical and continuous input and output variables. In this technique, we split the population or sample into two or more homogeneous sets (or sub-populations) based on the most significant splitter/differentiator among the input variables.
The type of decision tree depends on the type of target variable we have. It can be of two types:
1. Binary Variable Decision Tree: a decision tree that has a binary target variable. Example: a student problem where the target variable is "Will the student play cricket?", i.e. YES or NO.
2. Continuous Variable Decision Tree: a decision tree that has a continuous target variable.
Terminology:
ROOT Node: It represents the entire population or sample, which further gets divided into two or more homogeneous sets.
SPLITTING: It is the process of dividing a node into two or more sub-nodes.
Decision Node: When a sub-node splits into further sub-nodes, it is called a decision node.
Leaf/Terminal Node: Nodes that do not split are called Leaf or Terminal nodes.
Pruning: When we remove sub-nodes of a decision node, this process is called pruning. You can think of it as the opposite of splitting.
Branch/Sub-Tree: A subsection of the entire tree is called a branch or sub-tree.
Parent and Child Node: A node which is divided into sub-nodes is called the parent node of those sub-nodes, whereas the sub-nodes are the children of the parent node.
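To connect this terminology to an actual fitted tree, here is a small sketch (assuming scikit-learn; the dataset and depth are only illustrative) that walks the fitted tree structure and labels each node as a root, decision, or leaf node:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y).tree_

for node in range(tree.node_count):
    # In scikit-learn's tree structure, a leaf has no left child (-1).
    if tree.children_left[node] == -1:
        kind = "leaf/terminal node"
    elif node == 0:
        kind = "root node"
    else:
        kind = "decision node"
    print(f"node {node}: {kind}")
```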
The decision of making strategic splits heavily affects a tree's accuracy, and the decision criteria are different for classification and regression trees. Decision trees use multiple algorithms to decide how to split a node into two or more sub-nodes.
1. Gini:
Gini says that if we select two items from a population at random, then they must be of the same class, and the probability of this is 1 if the population is pure.
It works with a categorical target variable such as "Success" or "Failure".
It performs only binary splits.
The higher the value of Gini, the higher the homogeneity.
CART (Classification and Regression Tree) uses the Gini method to create binary splits.
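A toy sketch of this Gini score in plain Python (the labels are made up for illustration): sum the squared class probabilities in a node, so a pure node scores 1 and a 50/50 node scores 0.5:

```python
def gini_score(labels):
    # Sum of squared class probabilities: higher means more homogeneous.
    n = len(labels)
    return sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini_score(["yes"] * 10))              # pure node -> 1.0
print(gini_score(["yes"] * 5 + ["no"] * 5))  # 50/50 node -> 0.5
```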
2. Chi-Square:
It is an algorithm to find out the statistical significance of the differences between sub-nodes and the parent node. We measure it by the sum of squares of the standardized differences between the observed and expected frequencies of the target variable.
It works with a categorical target variable.
It can perform two or more splits.
The higher the value of Chi-Square, the higher the statistical significance of the differences between the sub-node and the parent node.
The Chi-Square of each node is calculated using the formula:
Chi-square = ((Actual – Expected)^2 / Expected)^(1/2)
It generates a tree called CHAID (Chi-square Automatic Interaction Detector).
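A toy sketch of this calculation for a single sub-node in plain Python (the assumption of an even 50/50 expected split is made up for illustration; in practice the expected counts come from the parent node's distribution):

```python
def chi_square(actual_success, actual_failure):
    total = actual_success + actual_failure
    expected = total / 2  # expected count per class under an assumed 50/50 parent
    chi_s = ((actual_success - expected) ** 2 / expected) ** 0.5
    chi_f = ((actual_failure - expected) ** 2 / expected) ** 0.5
    return chi_s + chi_f  # node Chi-Square = sum over the target classes

print(chi_square(8, 2))  # skewed node -> larger Chi-Square
print(chi_square(5, 5))  # balanced node -> 0.0
```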
3. Information Gain:
A less impure node requires less information to describe it, and a more impure node requires more information. Information theory gives us a measure of this degree of disorganization in a system, known as Entropy. If the sample is completely homogeneous, the entropy is zero, and if the sample is equally divided (50% – 50%), it has an entropy of one.
Entropy can be calculated using the formula: Entropy = -p*log2(p) - q*log2(q)
Information Gain = 1 – Entropy
Here p and q are the probabilities of success and failure, respectively, in that node. Entropy is also used with a categorical target variable. It chooses the split which has the lowest entropy compared to the parent node and other splits. The lower the entropy, the better.
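A toy sketch of the entropy formula above in plain Python (the probabilities passed in are only illustrative):

```python
import math

def entropy(p):
    # p = probability of success, q = probability of failure in the node.
    q = 1 - p
    if p in (0.0, 1.0):  # a completely pure node has zero entropy
        return 0.0
    return -p * math.log2(p) - q * math.log2(q)

print(entropy(1.0))  # completely homogeneous -> 0.0
print(entropy(0.5))  # 50/50 split -> 1.0
```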
4. Reduction in Variance:
Reduction in variance is an algorithm used for continuous target variables. This algorithm uses the standard formula of variance to choose the best split. The split with the lower variance is selected as the criterion to split the population.
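A toy sketch of this criterion in plain Python (the numbers are made up): compute the weighted variance of the candidate child nodes and prefer the split that reduces it most relative to the parent:

```python
def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def weighted_child_variance(left, right):
    # Weight each child's variance by its share of the samples.
    n = len(left) + len(right)
    return len(left) / n * variance(left) + len(right) / n * variance(right)

parent = [10, 12, 30, 32]
print(variance(parent))                             # parent variance
print(weighted_child_variance([10, 12], [30, 32]))  # a good split -> much lower
```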