Bounding the Regret of a Dynamic Bayesian Optimization Algorithm

Contact: Anthony Bardou

Warning: this project is for Master's students only.

Context

Black-box optimization (also called derivative-free or zeroth-order optimization) refers to optimizing an objective function without access to its closed form or to higher-order oracles (e.g., gradient, Hessian). One of the most popular subdomains of black-box optimization is Bayesian optimization (BO), which exploits a Gaussian process (GP) as a surrogate model [1]. BO is used in numerous applications where the objective function is too complex to be modeled analytically (e.g., hyperparameter tuning of deep neural networks [2], wireless communications [3], or computational biology [4]).
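To make the setting concrete, here is a minimal sketch of a BO loop with a GP surrogate and a UCB-style acquisition rule. The toy objective, kernel choice, and exploration constant are illustrative assumptions, not the algorithm studied in this project.

```python
# Minimal BO sketch: GP surrogate + UCB acquisition on a discretized domain.
# All choices below (objective, kernel, beta) are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Black-box toy objective: only point evaluations are available.
    return -np.sin(3.0 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
candidates = np.linspace(-1.0, 2.0, 500).reshape(-1, 1)  # discretized search domain

# A few random evaluations to initialize the surrogate.
X = rng.uniform(-1.0, 2.0, size=(3, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
for t in range(20):
    gp.fit(X, y)                                    # refit the GP surrogate
    mu, sigma = gp.predict(candidates, return_std=True)
    beta = 2.0                                      # exploration weight (hypothetical constant)
    x_next = candidates[np.argmax(mu + np.sqrt(beta) * sigma)]  # UCB maximizer
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).item())

print("Best observed value:", y.max())
```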

In such applications, the objective function often varies with time. Used in this particular context, BO is called dynamic Bayesian optimization (DBO). A few algorithms address DBO (e.g., [5, 6, 7]), but they rely on restrictive assumptions and often behave poorly in practice.
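For illustration, one modeling choice made in this line of work (e.g., in [7]) is to place a GP prior over the joint space-time domain with a product kernel that progressively discounts old observations:

```latex
% Spatio-temporal kernel of the kind used in [7] (illustrative only; this is
% not the model of the algorithm studied in this project).
k\big((x, t), (x', t')\big) = k_{\mathrm{space}}(x, x') \, (1 - \epsilon)^{|t - t'|/2},
\qquad \epsilon \in [0, 1],
```

where the forgetting rate \epsilon controls how quickly past observations lose their influence on the surrogate.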

Subject

A DBO algorithm is currently under development at the INDY lab. It has shown excellent empirical performance, outperforming the reference DBO algorithms by a comfortable margin, but it still lacks a theoretical understanding.

The student will participate in the theoretical analysis of this DBO algorithm. More precisely, they will become familiar with the proof techniques used in BO and work on deriving a regret bound for the algorithm under study.
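For reference, the quantity one typically seeks to bound in this time-varying setting is the dynamic regret (see, e.g., [7]):

```latex
% Dynamic regret over a horizon of T rounds: f_t is the unknown objective at
% round t and x_t is the point queried by the algorithm at that round.
R_T = \sum_{t=1}^{T} \left( \max_{x \in \mathcal{X}} f_t(x) - f_t(x_t) \right)
```

A meaningful result shows that R_T grows sublinearly in T (so that the average regret R_T / T vanishes) under assumptions on how quickly the objective f_t can change.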

What's in it for You?

This project is an opportunity to acquire skills and proof techniques related to the analysis of stochastic processes in a bandit setting, and to apply them to state-of-the-art DBO. Should you make significant contributions to the theoretical understanding of the algorithm, you will be given the opportunity to author a publication submitted to a top AI/ML conference.

Skills

  • Excellent analytical skills.
  • Strong interest in theoretical modeling.
  • Familiarity with stochastic processes.
  • Familiarity with Bayesian optimization would be a plus.
  • Interest in pursuing a research-oriented career would be a plus.

References

[1] Williams, C., & Rasmussen, C. (1995). Gaussian processes for regression. Advances in Neural Information Processing Systems, 8.

[2] Bergstra, J., Yamins, D., & Cox, D. (2013). Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In International Conference on Machine Learning (pp. 115-123). PMLR.

[3] Bardou, A., & Begin, T. (2022). INSPIRE: Distributed Bayesian optimization for improving spatial reuse in dense WLANs. In Proceedings of the 25th International ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (pp. 133-142).

[4] González, J., Dai, Z., Hennig, P., & Lawrence, N. (2016). Batch Bayesian optimization via local penalization. In Artificial Intelligence and Statistics (pp. 648-657). PMLR.

[5] Nyikosa, F. M., Osborne, M. A., & Roberts, S. J. (2018). Bayesian optimization for dynamic problems. arXiv preprint arXiv:1803.03432.

[6] Brunzema, P., von Rohr, A., Solowjow, F., & Trimpe, S. (2022). Event-triggered time-varying Bayesian optimization. arXiv preprint arXiv:2208.10790.

[7] Bogunovic, I., Scarlett, J., & Cevher, V. (2016). Time-varying Gaussian process bandit optimization. In Artificial Intelligence and Statistics (pp. 314-323). PMLR.