A short exploration of using Bayesian optimization to make a good choice of hyperparameters. The detailed Jupyter notebook can be found in RegressionWithBayesOpt.ipynb.
Consider the following data set (pictured at two separate scales):
There seems to be a linear relationship between x and y, and the y-values seem to concentrate near x = 0 and disperse for large values of x. We want to model the data near x = 0 via the following model:

    y = a·x + b + ε(x),

where ε(x) is noise which depends on x.
Notice that there are some extreme outliers, so using a least-squares approach doesn't lead to a good fit:
We need ε(x) to be heavy-tailed, so we fit a Student's t distribution (where the mode, scale, and shape all depend on x) using gradient descent.
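A minimal sketch of such a fit is below. Note the assumptions: the data, the linear parameterization of the mode, and the log-linear parameterizations of the scale and shape are all illustrative choices (not the notebook's actual code), and SciPy's derivative-free Nelder–Mead optimizer stands in for hand-rolled gradient descent:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Illustrative data: linear trend, heavy-tailed noise whose scale grows with x.
x = rng.uniform(0.0, 3.0, size=400)
y = 2.0 * x + stats.t.rvs(df=2.0, scale=0.1 + 0.3 * x, random_state=rng)

def neg_log_likelihood(params, x, y):
    # Mode is linear in x; scale and shape (degrees of freedom) are
    # parameterized through exp(...) so they stay positive.
    a, b, c0, c1, d0, d1 = params
    mode = a * x + b
    scale = np.exp(c0 + c1 * x)
    df = np.exp(d0 + d1 * x)
    return -np.sum(stats.t.logpdf(y, df=df, loc=mode, scale=scale))

result = optimize.minimize(
    neg_log_likelihood,
    x0=np.full(6, 0.5),            # arbitrary nonzero starting point
    args=(x, y),
    method="Nelder-Mead",
    options={"maxiter": 10000, "maxfev": 10000},
)
a_hat, b_hat = result.x[:2]
print(f"fitted slope ~ {a_hat:.2f}, intercept ~ {b_hat:.2f}")
```

Because the t likelihood discounts extreme residuals, the fitted slope tracks the bulk of the data instead of being dragged around by outliers.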
Of course, this is a toy problem, which we are playing with because it is simple to visualize; this exploration is really about Bayesian optimization:
The challenge is that we won't achieve a good fit without proper regularization, and we then need to choose hyperparameters λ to control the regularization. For any given choice of hyperparameters, we can fit our model on a training subset of the data, and then evaluate the fit on a cross-validation subset of the data, leading to an error function E(λ) which we want to minimize. To minimize E(λ) we could use:
- A grid search for optimal values of λ,
- A random search for optimal values of λ,
- Numerical optimization (such as Nelder-Mead),
- Bayesian optimization.
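The cross-validation error function can be sketched as a closure over the two data splits. Here `fit_model` is a hypothetical callable standing in for the regularized Student-t fit, and a ridge-style closed-form fit is substituted purely so the sketch runs end to end:

```python
import numpy as np

def make_error_function(x_train, y_train, x_cv, y_cv, fit_model):
    """Build E(lam): fit the model (with hyperparameters lam) on the
    training split, then score it on the held-out cross-validation split."""
    def E(lam):
        predict = fit_model(lam, x_train, y_train)          # the costly step
        return float(np.mean((y_cv - predict(x_cv)) ** 2))  # CV error
    return E

# Ridge regression with an intercept, standing in for the real model.
def ridge_fit(lam, x, y):
    X = np.stack([x, np.ones_like(x)], axis=1)
    w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
    return lambda xs: np.stack([xs, np.ones_like(xs)], axis=1) @ w

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.1, 200)
E = make_error_function(x[:150], y[:150], x[150:], y[150:], ridge_fit)
print(E(0.01), E(100.0))  # light vs. heavy regularization
```

Each call to E refits the model, which is exactly why we want to evaluate it at as few hyperparameter values as possible.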
Note that sampling E(λ) at a choice of hyperparameters can be costly (since we need to fit our model each time we sample); so rather than sampling E either randomly or on a grid, we'd like to make informed decisions about the best places at which to sample E. Numerical optimization and Bayesian optimization both attempt to make these informed decisions, and we focus on Bayesian optimization in this tutorial.
The basic idea is as follows: we will sample E at a relatively small number of points, and then fit a Gaussian process to that sample: i.e. we model the function E (pictured in red):
This model gives us estimates of both

- the expected (mean) value of E if we were to sample it at novel points (pictured in green), as well as
- our uncertainty (or expected deviation) from that mean (the region pictured in grey),

and we use this information to choose where to sample E next. It is important to note that our primary concern is not to accurately model E everywhere with our Gaussian process; our primary concern is to accurately model E near its minima. So we sample E at points where we have the greatest expected improvement in fitting our model to the minima of E:
and we repeat until our model fits E accurately enough near its minima:

Finally, we use the resulting model to make an optimal choice for our hyperparameters λ.
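The whole loop above can be sketched in a few lines with scikit-learn's Gaussian process regressor and a hand-written expected-improvement acquisition. The one-dimensional toy objective, the Matern kernel, and all the loop sizes here are illustrative assumptions standing in for the notebook's actual setup:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(candidates, gp, best_y):
    # EI for minimization: how much we expect to improve on best_y.
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(E, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))        # small initial sample of E
    y = np.array([E(v[0]) for v in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=1e-6, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                 # model E with a GP
        cand = np.linspace(lo, hi, 500).reshape(-1, 1)
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])                   # sample E where EI peaks
        y = np.append(y, E(x_next[0]))
    best = np.argmin(y)
    return X[best, 0], y[best]

# A cheap toy objective stands in for the (costly) cross-validation error:
E_toy = lambda lam: (lam - 0.7) ** 2 + 0.1 * np.sin(8.0 * lam)
lam_best, E_best = bayes_opt(E_toy, bounds=(0.0, 2.0))
print(f"best lambda ~ {lam_best:.3f} with E ~ {E_best:.3f}")
```

The loop concentrates its later samples near the low points of the GP's posterior, which is exactly the behavior described above: accurate modeling near the minima rather than everywhere.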
This leads to a much better fit (green is the probability density, purple is one standard deviation - only when defined):
The full tutorial (with lots of comments and details) can be found in the jupyter notebook RegressionWithBayesOpt.ipynb.