Blog


Recommend-ify Zillow

July 7th, 2015

I love Zillow. It's such an amazing search interface for real estate. But that's it... it's just a search interface. And because it's just search, I have to sort through good properties and bad. Maybe that situation benefits their business model, which I won't pretend to know. However, with a little data science we could take the treasure trove of data they already have, add a few UI elements to capture some more, and provide personalized recommendations to house hunters. Users who get what they want quickly are happy customers! Let's look at what they could do to increase their [...]


Tableau 9 – Binning by Aggregate with Level of Detail Expressions

April 23rd, 2015

We've recently worked on some visualizations in Tableau and overall it's been great. Tableau makes it absurdly easy to drag and drop your way to really slick, interactive visualizations. If you need to build visualizations and you've got the money for a license, it's well worth it. One task that was a bit of an issue during that project was building bins by aggregate: visualizing "how many had how many?" was surprisingly kludgy in Tableau 8.0. Each record in our dataset represents an event, and each has an associated "client". We wanted to look at how many clients had how many [...]
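The aggregate-then-bin pattern the post describes can be sketched outside Tableau too; here's a minimal pandas version with made-up client/event data (the column names and values are stand-ins, not the post's actual dataset):

```python
import pandas as pd

# Hypothetical event log: one row per event, each tagged with a client.
events = pd.DataFrame({
    "client": ["a", "a", "a", "b", "b", "c"],
    "event_id": [1, 2, 3, 4, 5, 6],
})

# Step 1: aggregate -- count events per client.
events_per_client = events.groupby("client")["event_id"].count()

# Step 2: bin by that aggregate -- how many clients had how many events?
clients_per_count = events_per_client.value_counts().sort_index()
print(clients_per_count)
```

The two-step shape (aggregate per entity, then count entities per aggregate value) is exactly what Tableau 9's Level of Detail expressions make expressible in a single calculated field.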


Kaggle Titanic Competition Part XI – Summary

December 16th, 2014

This series was probably too long! I can't even remember the beginning, but once I started I figured I may as well be thorough. Hopefully it will provide some assistance to people who are getting started with scikit-learn and could use a little guidance on the basics. All the code is up on Github with instructions for running it locally; if anyone tries it out and has any issues running it on their machine, please let me know! I'll update the README with whatever steps are missing. Thoughts: it can be tricky figuring out useful ways to transform string features, but with [...]
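On transforming string features: one common approach is one-hot (dummy) encoding, which turns a string column into one binary column per distinct value. A minimal sketch with pandas, using a hypothetical `embarked` column as a stand-in for a Titanic-style string feature:

```python
import pandas as pd

# Hypothetical string feature, like the Titanic "Embarked" column.
df = pd.DataFrame({"embarked": ["S", "C", "S", "Q", "C"]})

# One binary indicator column per distinct value.
dummies = pd.get_dummies(df["embarked"], prefix="embarked")
print(dummies.columns.tolist())  # ['embarked_C', 'embarked_Q', 'embarked_S']
```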


Kaggle Titanic Competition Part X – ROC Curves and AUC

December 16th, 2014

In the last post, we looked at how to generate and interpret learning curves to validate how well our model is performing. Today we'll take a look at another popular diagnostic for assessing model performance. The Receiver Operating Characteristic (ROC) curve is a chart that illustrates how the true positive rate and false positive rate of a binary classifier vary as the discrimination threshold changes. Did that make any sense? Probably not, but hopefully it will by the time we're finished. An important thing to keep in mind is that ROC is all about confidence! [...]
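The definition above can be made concrete with scikit-learn's `roc_curve`: each threshold on the classifier's scores yields one (false positive rate, true positive rate) point, and the area under those points is the AUC. A minimal sketch with hypothetical labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and classifier confidence scores.
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# Sweeping the threshold over y_score traces out the ROC curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(fpr, tpr, auc)
```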


Kaggle Titanic Competition Part IX – Bias, Variance, and Learning Curves

December 12th, 2014

In the previous post, we looked at how to search for the best set of hyperparameters to provide to our model, where "best" means minimizing the cross-validated error. We can be reasonably confident that we're doing about as well as we can with the features we've provided and the model we've chosen. But before we can run off and use this model on totally new data with any confidence, we'd like to do a little validation to get an idea of how the model will do out in the wild. Enter: [...]
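A learning-curve check like the one this post builds can be sketched with scikit-learn's `learning_curve` helper (shown here from the modern `sklearn.model_selection` module, and on the Iris data rather than the Titanic set, so the specifics are stand-ins): score the model on growing slices of the training data, and compare training scores against cross-validated scores. A large gap suggests high variance; low scores on both suggest high bias.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)

# Train on 10%, 32.5%, ... 100% of the training folds, scoring each time.
sizes, train_scores, cv_scores = learning_curve(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)
print(sizes)
print(train_scores.mean(axis=1))  # mean training score per size
print(cv_scores.mean(axis=1))    # mean cross-validated score per size
```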


Kaggle Titanic Competition Part VIII – Hyperparameter Optimization

December 3rd, 2014

In the last post, we generated our first Random Forest model with mostly default parameters so that we could get an idea of how important the features are. From that, we can further reduce the dimensionality of our dataset by throwing out some arbitrary number of the weakest features. We could keep experimenting with the threshold for removing "weak" features, or even go back and tweak the correlation and PCA thresholds to change how many parameters we end up with... but we'll move forward with what we've got. Now that we've got our final [...]
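The feature-importance pruning described above can be sketched like this; the Iris data and the 0.10 cutoff are illustrative stand-ins, not the post's actual dataset or threshold:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Fit a forest with mostly default parameters to get feature importances.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_

# Keep only features above an arbitrary importance threshold.
threshold = 0.10
keep = importances > threshold
X_reduced = X[:, keep]
print(importances.round(3), X_reduced.shape)
```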
