We study optimization in the presence of uncertainty, such as noise in
measurements, and advocate a novel approach to tackling it. The main difference
from existing approaches is that we do not assume any knowledge about the nature
of the uncertainty (such as, for instance, a probability distribution). Instead,
we are given several instances of the same optimization problem as input and,
assuming they are typical w.r.t. the uncertainty, we use them
to compute a solution that is good for the sample instances as well as for
future (unknown) typical instances.
We demonstrate our approach for the case of two typical input instances. We
first propose a measure of similarity of instances with respect to an
objective. This concept allows us to assess whether instances are indeed
typical. Based on this concept, we then choose a solution randomly among all
solutions that are near-optimal for both instances. We show that the exact
notion of near-optimality is intertwined with the proposed measure of similarity. Furthermore, we show that our measure of similarity also allows us to derive
formal statements about the expected quality of the computed solution: if the
given instances are not similar, or are too noisy, our approach detects
this. We demonstrate for a few optimization problems and real-world data that our
approach works well not only in theory, but also in practice.
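To make the selection step concrete, the following is a minimal Python sketch (not taken from the paper; the explicit candidate enumeration, the (1 + eps) notion of near-optimality, and all function names are illustrative assumptions) of choosing a solution uniformly at random among those that are near-optimal for both sample instances of a minimization problem.

```python
import random
from itertools import combinations

def cost(solution, instance):
    """Objective value of a solution on an instance (problem-specific);
    here: total weight of the selected items."""
    return sum(instance[i] for i in solution)

def joint_near_optimal_solution(candidates, inst_a, inst_b, eps=0.05, rng=random):
    """Pick a solution uniformly at random among all candidates whose cost is
    within a (1 + eps) factor of the optimum on BOTH sample instances.
    Returning None (no jointly near-optimal candidate) can be read as a signal
    that the two instances are not similar enough."""
    best_a = min(cost(s, inst_a) for s in candidates)
    best_b = min(cost(s, inst_b) for s in candidates)
    good = [s for s in candidates
            if cost(s, inst_a) <= (1 + eps) * best_a
            and cost(s, inst_b) <= (1 + eps) * best_b]
    return rng.choice(good) if good else None

if __name__ == "__main__":
    # Two noisy measurements of the same item weights; pick 2 of 4 items.
    inst_a = [3.1, 1.0, 2.2, 5.0]
    inst_b = [2.9, 1.1, 2.0, 5.3]
    candidates = [set(c) for c in combinations(range(4), 2)]
    print(joint_near_optimal_solution(candidates, inst_a, inst_b, eps=0.1))
```

In realistic settings the candidate set would not be enumerated explicitly; this sketch only illustrates the idea of restricting attention to solutions that are simultaneously near-optimal on all sample instances and randomizing over them.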