Large-Scale, Reproducible Implementation and Evaluation of Heuristics for Optimization Problems

What you will learn

Research developing new heuristics for optimization problems is often not reproducible; for instance, only 4% of papers on two well-known optimization problems published their source code. This limits the impact of the research both within the heuristics community and more broadly among practitioners. In this work, the authors built a large-scale, open-source codebase of heuristics and evaluated each heuristic on a library of 3,296 problem instances. This large-scale evaluation yields insight into which heuristics work well on which types of problem instances. The researchers also use machine learning methods to predict which heuristic will perform best on a novel problem instance.
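The idea of predicting the best heuristic for a new instance (often called algorithm selection) can be sketched as follows. This is a minimal illustration, not the authors' method: the feature names, heuristic names, and the 1-nearest-neighbour model are all hypothetical stand-ins for whatever features and learner the study actually used.

```python
import math

# Hypothetical training data: each known instance is described by a
# feature vector (here: instance size, density) and labeled with the
# heuristic that performed best on it in past evaluations.
training = [
    ((100, 0.10), "greedy"),
    ((120, 0.15), "greedy"),
    ((4500, 0.75), "local_search"),
    ((5000, 0.80), "local_search"),
]

def predict_best_heuristic(features, data=training):
    """1-nearest-neighbour selector: recommend the heuristic that
    worked best on the most similar previously seen instance."""
    _, label = min(data, key=lambda pair: math.dist(pair[0], features))
    return label

# A small, sparse instance resembles the "greedy" examples above.
print(predict_best_heuristic((110, 0.12)))
```

In practice one would use a proper learner (e.g. a random forest over many instance features) and normalize feature scales, but the input/output contract is the same: instance features in, recommended heuristic out.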

Assessments of Reproducibility, Generalizable Tools, Reproducible Study Designs