The dramatic increase in computational resources, the availability of enormous amounts of data, and the significant advances in machine learning and artificial intelligence are currently shaking up our societies. As part of this change, our use of technology is evolving at a fast pace, with lasting effects on our traditional routines. Decisions that once required considerable expert knowledge can now be made (or prepared) by automated procedures. This automation not only supports manual work, but also drastically changes the objectives of various research domains, including Computer Science itself.
In the Benchmarked: Optimization Meets Machine Learning workshop we will discuss the impact of automated decision-making on an important sub-domain: heuristic optimization. More specifically, we will discuss how the ability to automatically select and configure optimization heuristics changes the requirements for their benchmarking.
The key objectives of this Lorentz Center workshop are:
- to develop a joint vision on the next generation of benchmarking optimization heuristics in the context of automated algorithm selection and configuration, and
- to design a clear road-map guiding the research community towards this vision.
The workshop brings together researchers from different sub-domains in optimization heuristics with colleagues from automated machine learning. Together we will discuss what an ideal benchmarking environment would look like, how such an "ideal tool" compares to existing software, and how we can close the gap by improving the compatibility between ongoing and future projects. Concretely, we aim to design a full benchmarking engine that ranges from modular algorithm frameworks, problem instance generators, and landscape analysis tools to automated algorithm configuration and selection techniques, all the way to a statistically sound evaluation of the experimental data.
|09:30||10:00||Registration and coffee|
|10:00||10:15||Workshop opening by Lorentz Center|
|10:15||10:30||Welcome by organizers, goals of the workshop, organizational matters|
|10:30||11:30||Short introduction of participants (1 slide each)|
|11:30||12:00||Plenary discussion: key components of the benchmarking pipeline|
|13:30||14:00||Tutorial: Automated Algorithm Configuration and Selection|
|14:00||14:30||Presentation of tools I|
|15:00||15:30||Presentation of tools II|
|15:30||16:30||Brainstorming in small groups: Think Big! Next Generation Benchmarking Environments|
|16:30||17:00||Wrap-up of the brainstorming session, organizational matters|
|17:00||18:30||Wine & Cheese welcome reception|
|09:30||10:00||Tutorial: Modular Algorithm Frameworks|
|10:00||10:30||Tutorial: Performance Measures and the Role of Statistics|
|13:30||14:15||Individual discussions, flex time|
|14:45||15:15||Tutorial: Landscape Analysis|
|16:00||16:45||Wrap-up and organization of evening gatherings|
|11:00||12:00||Creativity session: challenges and pitfalls|
|13:30||17:00||Individual discussions, flex time|
|15:30||17:00||Individual discussions, flex time|
|13:30||15:30||Individual discussions, flex time|
|16:00||16:45||Plenary discussion: Open Science, Reproducibility, Communication|
|16:45||17:00||Wrap-up of the day, announcements for evening gatherings|
|11:15||11:45||Flash coaching: The last mile is the longest (yet so important): how to keep up your motivation for the final steps before releasing a research "product"|
|11:45||12:15||Plenary discussion: what have we achieved? Open questions? Next steps?|
|12:15||12:30||Wrap-up & feedback|
|14:00||15:00||Individual discussions and departures|