This site collects and presents results from an ongoing research effort to construct optimization algorithms on a probabilistic basis.
Optimization
Optimization is the problem of finding a minimum, maximum, or root of a function. Put simply, it is about doing things right: maximizing return, minimizing loss, making zero error; reaching a goal as quickly as possible, wasting the least amount of resources, deviating from a target by the smallest possible margin.
Several disparate communities within the scientific, technical, and financial fields deal with optimization problems of various kinds, and their requirements differ. Some need numerically cheap and precise algorithms; others require sample-efficient algorithms that make good use of scarce, expensive data; still others need algorithms robust to evaluation noise. In some problems, we search for the global extremum of a complicated, unknown function; but just finding one local minimum of a relatively simple yet very high-dimensional function can also be a formidable challenge.
Probabilistic Inference
Probability theory is a mathematically rigorous extension of formal (Aristotelian) logic that allows reasoning in the face of uncertainty. What does this mean for optimization? Probabilistic optimization algorithms can ...
- make efficient use of information, both by choosing evaluation points efficiently, and by making efficient use of the information they convey.
- deal with various kinds of noise.
- adapt to, and learn about latent aspects of the problem.
- be tuned to favour low computational cost over information efficiency, or the other way round (a minimal sketch of this probabilistic view follows this list).
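To make these points concrete, here is a minimal sketch of the probabilistic view, not code from this site: the optimizer maintains a Gaussian-process posterior over a noisy objective and picks its next evaluation point where a lower confidence bound on the posterior is smallest. All function names, the toy objective, and the parameter values are illustrative assumptions.

```python
import numpy as np

def gp_posterior(X, y, Xs, noise_var=1e-2, ell=0.5):
    """Posterior mean/variance of a Gaussian process with a unit-variance
    squared-exponential kernel, given noisy observations (X, y)."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(X, X) + noise_var * np.eye(len(X))   # noise enters the likelihood
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mu = Ks.T @ alpha                          # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.maximum(1.0 - np.sum(v**2, axis=0), 0.0)  # posterior variance
    return mu, var

# Toy objective, observed under Gaussian noise.
f = lambda x: np.sin(3 * x) + 0.5 * x
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=5)
y = f(X) + 0.1 * rng.standard_normal(5)

# Pick the next evaluation where a lower confidence bound is smallest:
# a low mean exploits what is known, high variance explores what is not.
Xs = np.linspace(-2.0, 2.0, 200)
mu, var = gp_posterior(X, y, Xs)
x_next = Xs[np.argmin(mu - 2.0 * np.sqrt(var))]
print(f"next evaluation point: {x_next:.3f}")
```

The same posterior also explains the noise-robustness point above: the observation noise is part of the model, so noisy evaluations simply widen the posterior rather than mislead the search.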
Work Available on This Site
At the moment, you can find on this site:
- HKopt, a conceptual generalisation of quasi-Newton algorithms (such as the well-known BFGS algorithm) at very limited additional cost. This is an "inner-loop" algorithm for use in numerical optimization problems; a sketch of the classical quasi-Newton iteration it builds on follows this list.
- Entropy Search, a sample-efficient global optimization algorithm. This is an "outer-loop" algorithm: it has considerable computational cost (each iteration takes several seconds), but if evaluating your objective function has a physical cost (such as performing an experiment, or spending money), then this algorithm may save you time, work, and money.
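For readers unfamiliar with the inner-loop setting, the following sketch (a textbook BFGS iteration, not HKopt itself) shows the kind of quasi-Newton method that HKopt generalises: an inverse-Hessian estimate is built up from successive gradient evaluations and used to choose search directions. The test problem and all parameter values are illustrative.

```python
import numpy as np

def bfgs_minimize(grad, x0, steps=50):
    """Textbook BFGS: maintain an inverse-Hessian estimate H and take
    quasi-Newton steps x <- x - H @ grad(x) (unit step for simplicity)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                      # initial inverse-Hessian estimate
    g = grad(x)
    for _ in range(steps):
        x_new = x - H @ g              # quasi-Newton step
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g   # displacement, gradient change
        sy = s @ yv
        if sy > 1e-12:                 # curvature condition; otherwise skip
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, yv)
            H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS inverse update
        x, g = x_new, g_new
    return x

# Quadratic test problem with minimum at [1, -2].
A = np.array([[1.5, 0.3], [0.3, 0.8]])
x_star = np.array([1.0, -2.0])
print(bfgs_minimize(lambda x: A @ (x - x_star), np.zeros(2)))  # ~ [ 1. -2.]
```

Where this classical iteration keeps a single point estimate of the inverse Hessian, HKopt treats such quantities probabilistically; see the accompanying publications for details.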
Disclaimer: All code provided on this site is research-grade. You should not assume that it is free of bugs, nor that it will work in every case. To improve the code over time, we need input from people (like you!) with real-world optimization problems. We have tried to make our code relatively easy to use. If you cannot get it to work, please write to us. In any case, the code is provided "as is", without any warranty, explicit or implied, including, but not limited to, warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
Despite the above caveats, we are convinced that the publications and code provided on this site can help improve the speed, cost-efficiency, and precision of optimization efforts. If you find our work helpful, please cite it.