
Data exploration using Random Forests

The StumbleUpon web page classification competition on Kaggle ended recently. With some luck, I finished in the final top 10%. During the initial data exploration, I tried to derive a set of linguistic features from the text, such as the ratios of nouns, adjectives, and adverbs on each web page. In addition, I suspected that subjectivity might be important, so I added the ratios of positive and negative words to the feature set as well.
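To make this concrete, here is a minimal sketch of how such ratios could be computed. It assumes NLTK for tokenization and part-of-speech tagging, and the POSITIVE_WORDS / NEGATIVE_WORDS sets are tiny placeholder lexicons, not the lists used in the competition.

```python
import nltk
# Requires the tokenizer and tagger models, e.g.:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

# Placeholder sentiment lexicons; real lists would be much larger.
POSITIVE_WORDS = {"good", "great", "excellent", "love"}
NEGATIVE_WORDS = {"bad", "poor", "terrible", "hate"}

def linguistic_features(text):
    """Return the ratio features for one page's extracted text."""
    tokens = nltk.word_tokenize(text)
    tags = nltk.pos_tag(tokens)
    n = max(len(tokens), 1)  # guard against empty pages
    lowered = [w.lower() for w in tokens]
    return {
        "noun_ratio": sum(t.startswith("NN") for _, t in tags) / n,
        "adj_ratio": sum(t.startswith("JJ") for _, t in tags) / n,
        "adv_ratio": sum(t.startswith("RB") for _, t in tags) / n,
        "pos_ratio": sum(w in POSITIVE_WORDS for w in lowered) / n,
        "neg_ratio": sum(w in NEGATIVE_WORDS for w in lowered) / n,
    }
```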

To see whether these linguistic features were useful or not, I plugged the data into Random Forests. Random Forests is a collection of decision trees: each tree is trained on a bootstrap sample of the original data and grown using a random subset of the input variables. Although it is a black-box model, it is probably one of the best off-the-shelf classifiers, giving good accuracy with virtually no parameter tuning. In addition, because each tree is trained using different variables, a variable importance measure comes as a byproduct [1].
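In practice this check takes only a few lines with scikit-learn, along the lines of the documentation example cited in [1]. The sketch below uses the current API (the 0.14 docs cited used sklearn.cross_validation rather than model_selection) and substitutes random toy data for the real design matrix of linguistic ratios.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

feature_names = ["noun_ratio", "adj_ratio", "adv_ratio",
                 "pos_ratio", "neg_ratio"]

# Toy stand-in for the real design matrix: random ratios and labels.
rng = np.random.RandomState(0)
X = rng.rand(500, len(feature_names))
y = rng.randint(0, 2, size=500)

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validated accuracy estimate (the post reports about 62%).
print("accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())

# Impurity-based importances come for free once the forest is fitted.
clf.fit(X, y)
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda pair: -pair[1]):
    print("%-12s %.3f" % (name, imp))
```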

[Figure: Feature importances measured by Random Forests]

The estimated accuracy of the model is only 62%, which suggests that the linguistic variables are not that useful on their own. However, the picture above still surprised me. It showed that positive and negative word usage is not informative, so my subjectivity hypothesis was completely wrong. On the other hand, the ratio of nouns ranked at the top of the feature list, and that was something I hadn't thought of.
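A bar chart like the figure above can be reproduced from the fitted forest. This sketch assumes the clf and feature_names from the previous snippet and uses matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sort features by importance, most important first.
importances = clf.feature_importances_
order = np.argsort(importances)[::-1]

plt.bar(range(len(order)), importances[order])
plt.xticks(range(len(order)),
           [feature_names[i] for i in order], rotation=45, ha="right")
plt.ylabel("importance")
plt.title("Feature importances measured by Random Forests")
plt.tight_layout()
plt.show()
```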

[1] Feature importances with forests of trees, scikit-learn 0.14 documentation.

