Lorentz Center

International center for scientific workshops


## WorKer 2010: Workshop on Kernelization

Kernelization is a vibrant and rapidly developing area. This update meeting on kernelization aims to consolidate the results achieved in recent years, discuss future research directions, explore further the application potential of kernelization algorithms, and give participants excellent opportunities to engage in joint research and discussion of open problems. Many important optimization problems from various
applications appear to be intractable (e.g., NP-hard): algorithms that
solve them optimally must be assumed to require much time in the worst case.
Much of algorithms research is directed at the design and
analysis of methods that cope with this intractability, e.g., using heuristics
and approximation, looking at special cases, etc. In this workshop, we look at
another aspect that appears in almost every
practical implementation aiming to deal with such NP-hard problems,
namely preprocessing. Using relatively fast (say polynomial time) data
reduction steps that keep the answer to the problem invariant but reduce the
size of the instance, we aim to build a (hopefully) smaller but
equivalent instance. A slower exact algorithm can then be run on this smaller
instance. In this workshop, we look at a mathematical analysis of such
preprocessing algorithms, now termed kernelization. How can we measure the efficiency of such a kernelization subroutine?

For a long time, the mathematical analysis of polynomial-time preprocessing algorithms was not studied. The basic reason was that if we start with an instance I of an NP-hard problem and seek an efficient (polynomial-time) subroutine that replaces I with an equivalent smaller instance I', then success would imply P = NP, discouraging efforts in this research direction from a mathematical point of view. The situation regarding the systematic, mathematically sophisticated investigation of preprocessing subroutines has changed drastically with the advent of parameterized complexity, where the issues are naturally framed. More specifically, we ask for upper bounds on the reduced instance size as a function of a parameter of the input, assuming a polynomial-time reduction/preprocessing algorithm. A typical example is the famous Nemhauser-Trotter kernel for the Vertex Cover problem, showing that a "kernel" of at most 2k vertices can be obtained, with k the requested maximum size of a solution.

A large number of results have been obtained in recent years, and research in this area shows rapid growth, not only in terms of the number of papers appearing in top Theoretical Computer Science and Algorithms conferences and journals, but also in terms of techniques. Important very recent developments were the introduction of new lower bound techniques, showing (under complexity-theoretic assumptions) that certain problems must have kernels of at least certain sizes, and meta-results showing that large classes of problems all have small (e.g., linear) kernels --- these include a large collection of problems on planar graphs.

In this workshop, the recent developments in the area of kernelization will be discussed. To this end, nine keynote lectures are planned, and participants can present their recent results in the field.
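To make the data-reduction idea concrete, here is a minimal sketch of a classical Vertex Cover kernelization. It uses Buss's reduction rules (a simpler, k² -edge kernel) rather than the LP-based Nemhauser-Trotter 2k-vertex kernel mentioned above; the function name and interface are our own for illustration.

```python
def buss_kernel(edges, k):
    """Reduce a Vertex Cover instance (edge list, parameter k) to an
    equivalent instance with at most k*k edges, or answer it outright.

    Returns False if no vertex cover of size at most k can exist,
    otherwise a pair (reduced_edges, reduced_k) equivalent to the input.
    Edges are pairs of distinct vertices.
    """
    edges = {frozenset(e) for e in edges}
    while True:
        # Compute vertex degrees in the current instance.
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # Rule 1: a vertex of degree > k must belong to every cover of
        # size at most k -- take it, delete its edges, decrement k.
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        edges = {e for e in edges if high not in e}
        k -= 1
        if k < 0:
            return False  # forced more than k vertices into the cover
    # Rule 2: now every vertex covers at most k edges, so a yes-instance
    # has at most k * k edges left.
    if len(edges) > k * k:
        return False
    return edges, k

# A star K_{1,5} with k = 1: the center is forced in, leaving a trivial
# yes-instance; a triangle with k = 1 is correctly rejected.
print(buss_kernel([(0, i) for i in range(1, 6)], 1))
print(buss_kernel([(0, 1), (1, 2), (0, 2)], 1))
```

The polynomial-time rules never change the yes/no answer; they only bound the size of what remains, which is exactly the kernelization guarantee discussed in the text.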
In addition, there are excellent opportunities for joint research and discussions on open problems and future directions.