Just a little example of how to use Decision Trees and Random Forests in Python. Basically, a decision tree is a type of “flow chart”: data moves from node to node along edges, splitting on feature values until a leaf gives a likely outcome. Single trees tend to overfit and can change drastically with small changes in the training data, so Random Forests build many decision trees on random subsets of the data and features and then aggregate their predictions (a majority vote for classification). This is a very brief and rough explanation – these posts are meant to be quick shots of how-to code, not lessons on the theory behind the method. In this example we’re using a small healthcare dataset that predicts whether spinal surgery for an individual was successful in helping a particular condition (Kyphosis). Both methods are run so we can compare their accuracy. The dataset can be found on my github here.
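Here’s a rough sketch of the workflow with scikit-learn. It assumes the kyphosis CSV has the classic columns `Kyphosis` (absent/present), `Age`, `Number`, and `Start`; a tiny synthetic stand-in frame is generated below so the snippet runs on its own – swap in `pd.read_csv('kyphosis.csv')` for the real data (column names and the 70/30 split are my assumptions, not a fixed recipe).

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the kyphosis dataset; replace with:
#   df = pd.read_csv('kyphosis.csv')
rng = np.random.default_rng(101)
n = 200
df = pd.DataFrame({
    'Age': rng.integers(1, 200, n),     # age in months
    'Number': rng.integers(2, 10, n),   # vertebrae involved
    'Start': rng.integers(1, 18, n),    # topmost vertebra operated on
})
# fake target loosely tied to 'Start' so the models have signal to learn
df['Kyphosis'] = np.where(df['Start'] + rng.normal(0, 3, n) < 9,
                          'present', 'absent')

X = df.drop('Kyphosis', axis=1)   # features
y = df['Kyphosis']                # target: 'absent' or 'present'
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=101)

# Single decision tree
dtree = DecisionTreeClassifier(random_state=101).fit(X_train, y_train)
tree_acc = accuracy_score(y_test, dtree.predict(X_test))

# Random forest: 200 trees on bootstrapped samples, majority-vote prediction
rfc = RandomForestClassifier(n_estimators=200, random_state=101)
rfc.fit(X_train, y_train)
forest_acc = accuracy_score(y_test, rfc.predict(X_test))

print(f'tree accuracy:   {tree_acc:.2f}')
print(f'forest accuracy: {forest_acc:.2f}')
```

On the real dataset you’d typically see the forest match or beat the single tree, since averaging over many decorrelated trees smooths out the variance of any one of them.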