Just had one question... I looked at the CSV files and was wondering how large the original dataset is. I understand that you took data out of a larger dataset to create the training set, using 3,000 rows from the original for the learning set. Or did I misunderstand that aspect?
@DaveOnData · a month ago
The CSVs are a cleaned subset of the original dataset. You can get the original raw data here: archive.ics.uci.edu/dataset/2/adult
@VastCNC · a month ago
Very interesting and helpful. The greedy aspect is what I struggle with: are there alternatives that combine root nodes, or is it a problem that gets solved with larger datasets?
@DaveOnData · a month ago
Glad you enjoyed the video! As to your question, what aspect of greedy selection are you struggling with?
@VastCNC · a month ago
@@DaveOnData I just thought that selecting the first split that met the criteria would neglect a second (or further) parameter that also met the criteria, so it could leave out key criteria that might inter-relate with future nodes more effectively than the first. I think it's more of a glaring issue with the contrived example, and probably moot in the larger data samples it's intended to function on.
@DaveOnData · a month ago
If I understand your concern correctly, one way to think about decision trees' greedy nature is computational complexity. To keep this simple, let's think only about the tree's root node. In an ideal world, the decision tree algorithm would always pick the root node based on the single most optimal combination of dataset and hyperparameter values. However, this is computationally infeasible, as the algorithm would have to search through every possible tree that could be learned before knowing the single best root node.
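To make the greedy behavior concrete, here is a minimal sketch of how a CART-style algorithm picks a root split, using a tiny toy dataset I made up (not the Adult data from the video). Note that it scores each candidate split only by the immediate impurity of the two children and keeps the best one; it never looks ahead to see how a split would interact with future nodes:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Greedily pick the (feature, threshold) pair with the lowest
    weighted Gini impurity of the resulting children. This is the
    one-step-lookahead choice - no deeper search is performed."""
    n_features = len(rows[0])
    best_feature, best_threshold, best_score = None, None, float("inf")
    for f in range(n_features):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue  # skip degenerate splits
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best_score:
                best_feature, best_threshold, best_score = f, t, score
    return best_feature, best_threshold, best_score

# Toy data: two numeric features per row, binary labels.
rows = [(1, 5), (2, 4), (3, 8), (4, 7), (5, 9)]
labels = [0, 0, 1, 1, 1]
feature, threshold, score = best_split(rows, labels)
print(feature, threshold, score)  # the greedily chosen root split
```

An exhaustive alternative would enumerate every possible tree and keep the globally best one, but the number of trees grows explosively with features and thresholds, which is exactly why greedy selection is used in practice.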