Lorentz Center - Error in the Sciences from 24 Oct 2011 through 28 Oct 2011



The aim of the workshop is to explore various practices of dealing with error to attain reliability, and to gain a deeper understanding of what error in science and its treatment entail. While the daily practice of empirical research, in and outside the laboratory, is dominated by dealing with all kinds of errors to increase the reliability of results, no general cross-disciplinary framework for dealing with errors exists. Various sophisticated procedures for the systematic handling of observational and measurement errors, and procedures for data analysis, have been and are still being developed, but they remain fragmented, having mainly been devised to address specific epistemological and methodological problems within a particular scientific domain. The reason a more general account is still lacking is that the kind of error to be corrected differs from case to case and depends on the effects of many conditions peculiar to the subject under investigation, the research design, and the equipment and/or models used; it is therefore context-dependent and field-specific. The various practices of dealing with errors have developed their own separate methods and techniques, with little cross-fertilization. While these different methods are unlikely to be integrated, solutions to their common problem, how to take account of reliability, may well be. That is, while contextual knowledge is not easily transferred between scientific domains, methods for achieving reliability may well share an overarching feature.


Our aim is to develop such a general framework while doing justice to the idiosyncrasies of the circumstances in which errors arise. This means that besides existing statistical analyses of data, which in measurement science are called Type A evaluations, we wish to discuss Type B evaluations of uncertainty. Examples of Type A evaluations are calculating the standard deviation of the mean of a series of independent observations; using the method of least squares to fit a curve to data in order to estimate the parameters of the curve and their standard deviations; and carrying out an analysis of variance in order to identify and quantify random effects in certain kinds of measurements. The assumptions that legitimize Type A evaluations (a large number of independent observations, equally trustworthy so far as skill and care are concerned, and obtained with instruments of known precision) may hold in many experimental practices, but are too idealized for many empirical research practices outside the laboratory. Type B evaluations are instead based on scientific judgment using all of the relevant information available, which may include experience with, or general knowledge of, the behavior and properties of relevant materials and instruments; manufacturers' specifications; models; data provided in calibration and other reports; and uncertainties assigned to reference data taken from handbooks.
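The distinction above can be illustrated with a small numerical sketch. The Type A part below is the first example named in the text (the standard deviation of the mean of independent observations); the Type B part assumes, as an illustration only, a manufacturer's tolerance of ±0.05 mm converted to a standard uncertainty via a rectangular distribution (u = a/√3), the conventional treatment in measurement practice. The observation values and the tolerance are invented for the example.

```python
import math

def type_a_uncertainty(observations):
    """Type A evaluation: mean and standard deviation of the mean
    of a series of independent, equally trustworthy observations."""
    n = len(observations)
    mean = sum(observations) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))
    # standard uncertainty of the mean: s / sqrt(n)
    return mean, s / math.sqrt(n)

def type_b_uncertainty(half_width):
    """Type B evaluation from a stated tolerance of +/- half_width,
    assuming a rectangular (uniform) distribution: u = half_width / sqrt(3)."""
    return half_width / math.sqrt(3)

# Hypothetical repeated length measurements (mm)
obs = [10.02, 10.00, 10.03, 9.99, 10.01]
mean, u_a = type_a_uncertainty(obs)

# Hypothetical manufacturer's specification: accurate to within +/- 0.05 mm
u_b = type_b_uncertainty(0.05)

# Combined standard uncertainty: uncorrelated components add in quadrature
u_c = math.sqrt(u_a ** 2 + u_b ** 2)
print(f"mean = {mean:.3f} mm, u_A = {u_a:.4f}, u_B = {u_b:.4f}, u_c = {u_c:.4f}")
```

Note that here the Type B component dominates: no amount of repeated observation (which shrinks only u_A) would reduce the combined uncertainty below the instrument's specified tolerance, which is one reason judgment-based Type B evaluation cannot be dispensed with.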