Keywords: optimization, random function, random field, Gaussian process, Bayesian optimization
Abstract:
This PhD thesis presents a distributional view of optimization. I motivate this view with an investigation of the failure points of classical worst-case optimization.
After a review of Bayesian optimization, I outline how a distributional view may explain the predictable progress of optimization in high dimensions and provide insights into optimal step-size control for gradient descent. Along the way, we touch on mathematical tools for handling random input to random functions and a characterization of non-stationary isotropic covariance kernels.
Finally, I outline how assumptions about the data can lead to random objective functions in machine learning, and I analyze the resulting loss landscape.
This entry is part of the university bibliography.
The document is provided by the publication server of the University Library of Mannheim.