Lorentz Center

International center for scientific workshops

from 12 Nov 2007 through 16 Nov 2007

Model
checking is emerging as a practical tool for automated debugging of complex
reactive systems such as embedded controllers and network protocols. In model
checking, specifications about the system are expressed as (temporal) logic
formulas, and efficient symbolic algorithms are used to traverse the model
defined by the system and check if the specification holds or not. Extremely
large state-spaces can often be traversed in minutes.

Whereas
traditional model checking focuses on the absolute correctness of systems, in
practice such rigid notions are hard, or even impossible, to guarantee.
Instead, systems are subject to various phenomena of stochastic nature, such as
message loss or garbling, unpredictable environments, faults, and delays.
Correctness is thus of a less absolute nature. Accordingly, instead of checking
whether system failures are impossible, a more realistic aim is to establish,
for instance, whether "the chance of shutdown occurring is at most
0.01%". Similarly, the question whether a distributed computation
terminates becomes "will it eventually terminate with probability 1?". Such queries can be answered by stochastic
model checking. Stochastic model checking builds on conventional model checking, since it relies on reachability analysis of the underlying transition system,
but it must also calculate the actual likelihoods through
appropriate numerical methods. In addition to the qualitative statements made
by conventional model checking, this provides the possibility to make
quantitative statements about the system. Stochastic model checking uses
extensions of temporal logics with probabilistic operators, affording the
expression of these quantitative statements.

Stochastic model checking is
typically based on discrete-time and continuous-time Markov chains (DTMCs and CTMCs, respectively),
or Markov decision processes (MDPs). Whereas Markov
chains are fully stochastic, MDPs allow for nondeterminism. The former models are intensively used in
performance and dependability analysis, whereas MDPs
are of major importance in stochastic operations research and automated
planning in AI.

Extensions of model checking to stochastic models originate
from the mid-1980s, first focusing on qualitative properties where the
probability bounds are restricted to 0 or 1, but later also considering
quantitative properties. During the last decade, these methods have been
extended, refined and improved, and - most importantly - been supported by
software tools. Currently, model checkers for DTMCs, CTMCs and MDPs exist and have
been applied to several case studies ranging from randomized distributed
algorithms to dependability analysis of workstation clusters. With the
currently available technology, models of 10^6 - 10^7 states can be
successfully checked.

In December 2002, the VOSS project consortium organized a successful Dagstuhl seminar on Validation of Stochastic Systems aimed
at young(er) researchers, where themes were assigned for the participants to work on and present. In the years since the Dagstuhl seminar, there have been major developments in the
field, and therefore the organizers believe the time is right to organize a
successor workshop: to once more survey state-of-the-art techniques for the modeling
and verification of random aspects of computer systems, but also to identify
the main research challenges for the next decade.
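To make the reachability-plus-numerics approach described above concrete, the following sketch checks a quantitative property in the spirit of "the chance of shutdown occurring is at most 0.01%" on a toy discrete-time Markov chain. The three states, their transition probabilities, and the bound are illustrative assumptions, not drawn from any actual case study or tool.

```python
# Illustrative sketch: checking a quantitative reachability property on a
# tiny DTMC. All states and probabilities below are made up for the example.

# States: 0 = running, 1 = success (absorbing), 2 = shutdown (absorbing)
P = [
    [0.9000, 0.0999, 0.0001],  # running: mostly keeps running
    [0.0,    1.0,    0.0   ],  # success is absorbing
    [0.0,    0.0,    1.0   ],  # shutdown is absorbing
]

def reachability_probs(P, target, max_iters=100_000, tol=1e-12):
    """For each state, the probability of eventually reaching `target`,
    computed by fixed-point iteration of x[s] = sum_t P[s][t] * x[t]
    with x[target] pinned to 1 -- the standard linear-equation
    characterization of reachability probabilities in a DTMC."""
    n = len(P)
    x = [1.0 if s == target else 0.0 for s in range(n)]
    for _ in range(max_iters):
        new = [1.0 if s == target else sum(P[s][t] * x[t] for t in range(n))
               for s in range(n)]
        if max(abs(new[s] - x[s]) for s in range(n)) < tol:
            return new
        x = new
    return x

probs = reachability_probs(P, target=2)
bound = 0.0001  # the hypothetical "at most 0.01%" requirement
print(f"P(eventually shutdown | running) = {probs[0]:.6f}")
print("property holds" if probs[0] <= bound else "property violated")
```

On this toy chain the shutdown probability from the running state works out to 0.0001 / (1 - 0.9) = 0.001, i.e. 0.1%, so the hypothetical 0.01% bound is violated. Actual probabilistic model checkers solve such linear systems (and their MDP and CTMC counterparts) with far more scalable symbolic and numerical techniques.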