My goal is to do a grid search over various VW models in their parameter space (trying different loss functions, regularizations, etc.). Since the models can use multiple passes, I would like to use cross-validation. I am wondering whether I should implement my own cross-validation code (perhaps as a bash script) or whether I would be reinventing the wheel. Any pointers on whether this has been done before, or on the best way to proceed, would be useful. I was looking at implementing cross-validation in a bash script and using GNU parallel to parallelize the grid search, roughly as in the sketch below.
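For concreteness, here is the kind of thing I had in mind: a minimal sketch, where the file names, the fold-splitting scheme, the parameter grid and the way the final "average loss" line is parsed are all my own assumptions, not anything built into VW. Only standard vw flags (`-d`, `-c`, `--passes`, `--loss_function`, `--l2`, `-f`, `-t`, `-i`, `--quiet`) are used.

```bash
#!/bin/bash
# Sketch only: k-fold cross-validation of a few VW hyper-parameter
# combinations, with the grid points run in parallel by GNU parallel.
set -e

DATA=train.vw   # training set in VW input format (assumed name)
K=5             # number of folds
export K

# Shuffle the data and split it into K folds: fold_00, fold_01, ...
shuf "$DATA" -o shuffled.vw
split -d -n l/$K shuffled.vw fold_

# Cross-validate one (loss_function, l2) combination and print its mean
# held-out loss.
cv_score() {
    local loss=$1 l2=$2 tag="${1}_${2}" total=0
    for test in fold_*; do
        # train on all folds except the held-out one
        train="train_${tag}_${test}"
        cat $(ls fold_* | grep -v "^${test}\$") > "$train"
        vw -d "$train" -c --passes 5 \
           --loss_function "$loss" --l2 "$l2" \
           -f "model_${tag}_${test}" --quiet
        # vw prints "average loss = ..." on stderr at the end; grab the number
        err=$(vw -d "$test" -t -i "model_${tag}_${test}" 2>&1 |
              awk '/^average loss/ {print $4}')
        total=$(echo "$total + $err" | bc -l)
    done
    echo "$loss $l2 $(echo "$total / $K" | bc -l)"
}
export -f cv_score

# The grid: two loss functions x three l2 values; report the best combination
parallel cv_score {1} {2} ::: squared logistic ::: 1e-6 1e-4 1e-2 \
    | sort -g -k3 | head -1
```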
You should try the vw-hypersearch Perl script ( https://github.com/JohnLangford/vowpal_wabbit/blob/HEAD/utl/vw-hypersearch ), which can also be found in the utl directory of the VW source tree. It can help you tune the VW parameters, but as far as cross-validation goes, you have to implement your own code, feeding the algorithm the data folds you intend to validate on.
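For example (a sketch from memory; the argument order and the `%` placeholder may differ slightly between versions, so check the script's built-in usage message, and the file names below are placeholders):

```bash
# Tune a single parameter with vw-hypersearch: give a lower bound, an upper
# bound, and the full vw command line with the value to be tuned replaced by %.
vw-hypersearch 1e-10 1e-1 vw --l1 % -d train.vw --loss_function logistic

# The cross-validation part is yours to write: for each fold, train on the
# remaining folds and score the held-out fold in test mode (-t).
vw -d train_minus_fold3.vw --loss_function logistic -f model.vw --quiet
vw -d fold3.vw -t -i model.vw    # prints "average loss = ..." at the end
```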
Allow me to answer this question in two parts:
- Cross-validation: there is no built-in flag for it in VW. The reasoning is that even after cross-validation you would still test on a future (held-out) split and evaluate the model with some metric derived from the confusion matrix.
- Hyper-parameter search: vw-hypersearch uses golden-ratio search to find an optimal value of a single parameter over a given range. Golden-ratio search assumes the objective is unimodal in that parameter; when you search over several parameters at once, that assumption generally no longer holds. As you pointed out, this can be handled with:
-- Grid search: very CPU-intensive and time-consuming (and we are always fighting time).
-- Random search: very efficient; see the sketch below. Reference: [http://dl.acm.org/citation.cfm?id=2188395][1]
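A minimal sketch of the random-search idea: sample N random points from the parameter space instead of enumerating a full grid, then cross-validate each sampled point. The parameter ranges, the number of trials, and the `cv_score` helper (assumed to be an exported function that prints "loss l2 mean_cv_error", like the one sketched in the question) are all assumptions.

```bash
# Random search over (loss_function, l2): sample N points, cross-validate each,
# and report the best one.
N=20
for ((i = 0; i < N; i++)); do
    # draw --l2 log-uniformly from [1e-8, 1e-1], pick a loss function at random
    l2=$(awk -v s=$RANDOM 'BEGIN { srand(s); printf "%g\n", 10^(-8 + 7*rand()) }')
    loss=$(printf 'squared\nlogistic\nhinge\n' | shuf -n 1)
    echo "$loss $l2"
done |
parallel --colsep ' ' cv_score {1} {2} | sort -g -k3 | head -1
```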