Kaggle Titanic Competition Part IX – Bias, Variance, and Learning Curves

December 12th, 2014

In the previous post, we looked at how we can search for the best set of hyperparameters to provide to our model. Our measure of "best" in this case is the set that minimizes the cross-validated error. We can be reasonably confident that we're doing about as well as we can with the features we've provided and the model we've chosen. But before we can run off and use this model on totally new data with any confidence, we would like to do a little validation to get an idea of how the model will perform out in the wild. [...]
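As a rough sketch of the kind of validation this post builds toward, scikit-learn's `learning_curve` compares training and cross-validated scores at increasing training-set sizes. The Titanic features and model from earlier posts aren't reproduced here, so this example uses a synthetic dataset and a random forest purely as stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

# Synthetic stand-in for the Titanic feature matrix and labels.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

# Score the model on 5 training-set sizes, with 5-fold cross-validation
# at each size.
sizes, train_scores, cv_scores = learning_curve(
    model, X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 5))

for n, tr, cv in zip(sizes,
                     train_scores.mean(axis=1),
                     cv_scores.mean(axis=1)):
    print(f"{n:4d} samples: train={tr:.3f}, cv={cv:.3f}")
```

A large, persistent gap between the training and cross-validated scores suggests high variance (overfitting); two low scores that converge suggest high bias (underfitting).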