Mathematical challenges in climate science
Representing Model Error in Climate Models and Probabilistic Forecasts by Stochastic Parameterizations
Weather and climate predictions are uncertain because there is uncertainty both in the initial conditions and in the formulation of the prediction model. Ensemble prediction systems provide the means to estimate flow-dependent uncertainty, but are commonly underdispersive, leading to overconfident uncertainty estimates and an underestimation of extreme weather events. Climate models have persistent model errors that might arise in part from a misrepresentation of the unresolved subgrid-scale processes. In this talk, we will show how stochastic parameterizations (here, in particular, a stochastic kinetic backscatter scheme) can partially remedy the underdispersion in probabilistic forecasts and certain aspects of model error in climate models.
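The stochastic ingredient of such schemes can be illustrated with a sketch: many backscatter-type parameterizations are driven by a pattern that evolves as a first-order autoregressive (AR(1)) process in time. The grid size, decorrelation time, and amplitude below are illustrative values, not those of any operational scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) evolution of a stochastic perturbation field, the basic building
# block of many stochastic backscatter schemes (illustrative values only).
n_steps, n_grid = 1000, 64
tau = 10.0                       # decorrelation time, in time steps (assumed)
sigma = 0.1                      # target stationary standard deviation
phi = np.exp(-1.0 / tau)         # AR(1) autoregression coefficient
noise_amp = sigma * np.sqrt(1.0 - phi**2)   # keeps the variance stationary

field = np.zeros(n_grid)
history = []
for _ in range(n_steps):
    field = phi * field + noise_amp * rng.standard_normal(n_grid)
    history.append(field.copy())

history = np.array(history)      # std of history approaches sigma after spin-up
```

The choice of `noise_amp` is the point of the sketch: it ties the forcing amplitude to the autocorrelation so that the perturbation variance stays at the prescribed level.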
Equilibrium and Non-equilibrium Cumulus Convection: Implications for Parameterisation and Data Assimilation
The two standard paradigms for cumulus convection in the atmosphere are presented: statistical equilibrium and triggered convection. These situations correspond to different constraints from large-scale dynamics and boundary processes. The amount of convection (measured by mass flux or precipitation) is limited by the creation of instability (
impact of radar or other precipitation data in an assimilation system.
Palaeoclimate challenges: development and calibration of dynamical systems
Climate exhibits a vast range of modes of variability, with characteristic times ranging from a few days to thousands of years. Moreover, all of these modes are interdependent; in other words, they are coupled.
One approach to coping with climate's complexity is to integrate the equations of atmospheric and oceanic motion with the finest possible mesh. But is this the best approach? Aren't we missing another characteristic of the climate system: its ability to destroy and generate information at the macroscopic scale? Paleoclimatologists consider that much of this information is present in palaeoclimate archives. So another approach is to build climate models that are explicitly tuned to these archives.
This is the strategy we pursue here, based on low-order non-linear dynamical systems and Bayesian statistics, and taking due account of uncertainties, including uncertainties arising from the limitations of the dynamic model.
This is illustrated through the problem of the timing of the next great glaciation. Is glacial inception overdue, or do we need to wait another 50,000 years before ice caps grow again? Our (provisional) results indicate glacial inception in 50,000 years.
Data assimilation algorithms
Data assimilation methods are used to combine the results of a large-scale numerical model with the available measurement information in order to obtain an optimal reconstruction of the dynamic behaviour of the model state. Many data assimilation schemes are based on the Ensemble Kalman filter algorithm, and over the last decade many new algorithms have been proposed in the literature. Very recently it was found that the symmetric version of the Ensemble Square Root filter is a very good choice for applications where model error is not relevant. This has been shown in several applications, but can also be motivated by a theoretical analysis. For applications where model error is important, the symmetric Reduced Rank Square Root filter is an attractive alternative for a number of applications. Variational data assimilation, or "the adjoint method", has also been used very often for data assimilation. Using the available data, the uncertain parameters in the model are identified by minimizing a cost function that measures the difference between the model results and the data. In order to obtain a computationally efficient procedure, the minimization is performed with a gradient-based algorithm, where the gradient is determined by solving the adjoint problem. Variational data assimilation therefore requires the implementation of the adjoint model. Even with the adjoint compilers that have become available recently, this is a tremendous programming effort that hampers new applications of the method. We therefore propose an alternative approach to variational data assimilation based on model reduction. This method does not require the implementation of the adjoint of (the tangent-linear approximation of) the original model. Model-reduced variational data assimilation uses a Proper Orthogonal Decomposition (POD) approach to determine a reduced model for the adjoint of the tangent-linear approximation of the original nonlinear forward model.
This results in an ensemble-type algorithm for variational data assimilation. In the presentation we will first formulate the general data assimilation problem and discuss a number of algorithms. For a class of algorithms we will present a convergence theorem. The characteristics and performance of the methods will be illustrated with a number of test problems and with real-life data assimilation applications in storm-surge forecasting and in emission-reconstruction problems in air pollution modelling.
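The symmetric ensemble square root analysis mentioned above can be sketched as follows: a minimal ETKF-style analysis step for a linear observation operator, with variable names of our choosing. The symmetric choice of the square root is what keeps the analysis ensemble centred on the analysis mean, which can be verified against the classical Kalman filter update.

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """One analysis step of a symmetric ensemble square root (ETKF) filter.

    X : (n, m) forecast ensemble of m members, y : (p,) observation,
    H : (p, n) linear observation operator, R : (p, p) obs-error covariance.
    """
    n, m = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    Xp = X - xbar                                  # forecast perturbations
    Y = H @ Xp                                     # perturbations in obs space
    Rinv = np.linalg.inv(R)
    A = (m - 1) * np.eye(m) + Y.T @ Rinv @ Y       # symmetric positive definite
    evals, evecs = np.linalg.eigh(A)
    # Symmetric square root transform: the vector of ones is an eigenvector
    # of A, so the transformed perturbations keep a zero ensemble mean.
    T = evecs @ np.diag(np.sqrt((m - 1) / evals)) @ evecs.T
    w = evecs @ np.diag(1.0 / evals) @ evecs.T @ (Y.T @ Rinv @ (y[:, None] - H @ xbar))
    return xbar + Xp @ w + Xp @ T                  # analysis ensemble (n, m)

# Small illustrative use: 10-member ensemble, 2-dimensional state.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 10))
Xa = etkf_analysis(X, np.array([1.0, -1.0]), np.eye(2), 0.5 * np.eye(2))
```

For an ensemble whose perturbations span the state space, the analysis mean and covariance of this step agree exactly with the Kalman filter formulas.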
Subgrid-stochastic models for organized convection: convective inhibition and moisture preconditioning
In this talk, I shall discuss two stochastic models for the parametrization of subgrid effects of organized tropical convection.
The first model aims at the representation of convective inhibition (CIN). CIN refers to the existence of a thin layer of air of negative buoyancy separating the well-mixed boundary layer near the sea surface from the tropospheric interior. Thus, CIN is viewed as an energy barrier for spontaneous convection and the formation of deep convective clouds. Through ideas borrowed from statistical mechanics and materials science, we propose a stochastic model representing CIN by an order parameter that takes the values zero or one on a discrete lattice, embedded within each large-scale grid box (of the climate model), depending on whether convection is inhibited or not at any given microscopic/lattice site. We assume that the boundary layer is a heat bath for CIN, so that it admits Hamiltonian dynamics with a Gibbs equilibrium distribution in the manner of the Ising model of magnetization. The Hamiltonian has an internal interaction part, accounting for the interactions between neighbouring sites, and an external potential, permitting feedback from the large scales into the microscopic CIN model. Statistically consistent spin-flip dynamics yields a Markov-jump process for each site, switching back and forth between zero and one according to plausible probability laws motivated by physical intuition. A systematic coarse-graining procedure leads to a simple birth-death Markov process for the area fraction of deep convection in each climate-model grid box, to be run in concert with the deterministic large-scale (climate) model at very little computational overhead. Numerical tests for an idealized/toy climate model, together with a detailed numerical demonstration of various regimes of intermittency, will be presented.
The second model is concerned with the
representation of organized convective-cloud clusters. Observations reveal that
tropical convection is organized into a hierarchy of scales ranging from the
individual clouds of a few kilometres and a few hours, to propagating cloud clusters of a few hundred km and one to two days (e.g. squall lines), to superclusters of thousands of km and five to ten days (convectively coupled waves), and their planetary-scale (tens of thousands of km) and intraseasonal (40 to 60 days) envelopes, known as the Madden-Julian oscillation. One striking feature of these propagating/complex convective systems resides in the self-similarity of their
cloud morphology and vertical structure. It has been confirmed by various
observational data sets that
their cloud field is, statistically speaking, composed of
shallow/low-level (congestus) clouds in front of the wave, in the lower troposphere below the 0 °C level, followed by deep convective towers reaching the top of the troposphere, which in turn are trailed by stratiform anvil clouds confined to the top of the troposphere. It is now widely recognized
that this specific `structurization' of the cloud field into three cloud types is essential for the convective
organization, the vertical structure of the associated flow field, and the propagation of the cloud systems. It has
been confirmed by observations, numerical simulations of cloud clusters, and
theory that moisture distribution in the middle of the troposphere plays a
crucial role in the life cycle of these three cloud types and the organization
of the cloud clusters in general. Here, we use a Markov chain lattice model to represent small-scale convective elements which interact with each other and with the large-scale environmental variables through convective available potential energy (CAPE).
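The coarse-grained birth-death process mentioned for the CIN model can be sketched in a few lines. Here the birth and death rates are assumed constant for illustration; in the full model they would depend on the large-scale state (e.g. CAPE or boundary-layer properties).

```python
import numpy as np

rng = np.random.default_rng(1)

# Coarse-grained birth-death process for the deep-convective area fraction
# within one climate-model grid box (illustrative rates, not fitted values).
N = 100            # number of microscopic lattice sites per grid box
beta = 0.02        # birth rate per non-convecting site (assumed constant)
delta = 0.05       # death rate per convecting site (assumed constant)
dt = 0.1

n = 0              # number of convecting sites, starts with none
trajectory = []
for _ in range(20000):
    # Each quiescent site fires with prob 1 - exp(-beta dt) per step,
    # each convecting site dies with prob 1 - exp(-delta dt).
    births = rng.binomial(N - n, 1.0 - np.exp(-beta * dt))
    deaths = rng.binomial(n, 1.0 - np.exp(-delta * dt))
    n = n + births - deaths
    trajectory.append(n / N)

print(np.mean(trajectory))  # fluctuates around beta / (beta + delta)
```

The stationary mean area fraction beta/(beta + delta) follows from balancing the expected births beta(N - n) against the expected deaths delta n, which is the kind of closed macroscopic statement the coarse-graining delivers.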
What (if any) constraints are desirable on near grid-scale noise?
Data assimilation attempts to make the best use of available observations. The assimilation is often subject to certain constraints which ensure that the observations are incorporated in such a way as to respect (approximate) physical laws. In this talk, I want to suggest that a useful way to think about stochastic parameterisation is to focus on the physical constraints that are satisfied by the imposed near grid-scale noise. There is increasing acceptance that it is desirable to incorporate a stochastic aspect in the parameterisations used by weather forecast and climate models. But there is no general agreement about how that aspect should be introduced, and the methods used so far are very wide-ranging: from the simplest noise generators at one extreme, to new parameterisations explicitly designed to be stochastic at the other. Indeed, I have designed one of these new parameterisations myself (Plant and Craig 2008), and I shall discuss the physical and statistical basis of this method as an example at that end of the range. But how complex a method do we actually need? Perhaps the answer can be sought by asking to what extent it is useful for the grid-scale noise to satisfy (approximate) physical constraints.
Mathematical Methods for Data Assimilation in Biogeochemical Models
Data assimilation plays an important role in biogeochemical models of the ocean. The aim is to validate results and improve models and their parameters.
From the mathematical viewpoint, periodic states of the system have to be computed efficiently such that an optimization becomes feasible.
In this presentation, some recent work (which is in progress) is described. This
includes the application of
A Mathematical Framework for Data Assimilation
Data assimilation has evolved, for the most part, in an applied context, and the mathematical framework has not been fully developed. In this talk I will develop an abstract framework for inverse problems, viewed in the framework of Bayesian statistics, and show that a common framework underlies a range of applications, including data assimilation in fluid mechanics, but encompassing other problems arising in fields such as nuclear waste management and oil recovery. This common framework will be demonstrated to be useful for a number of reasons, including the proper specification of prior information (regularization) as well as the development of algorithms which do not degenerate under mesh refinement.
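The connection between prior specification and regularization can be made concrete in the linear-Gaussian case, where the posterior mean coincides with a Tikhonov-regularized least-squares estimate. A minimal sketch, with all dimensions and variances chosen purely for illustration:

```python
import numpy as np

# Linear-Gaussian inverse problem y = G u + eta: the Bayesian posterior mean
# minimizes (1/gamma2)|y - G u|^2 + (1/sigma2)|u|^2, i.e. Tikhonov-regularized
# least squares with the prior covariance supplying the regularization.
rng = np.random.default_rng(2)

n, p = 20, 10                        # more unknowns than data (ill-posed)
G = rng.standard_normal((p, n))      # forward operator (assumed known)
u_true = rng.standard_normal(n)
gamma2, sigma2 = 0.01, 1.0           # noise and prior variances (assumed)
y = G @ u_true + np.sqrt(gamma2) * rng.standard_normal(p)

# Posterior mean via the normal equations; the prior term I/sigma2 makes
# the system invertible even though G alone is rank-deficient.
A = G.T @ G / gamma2 + np.eye(n) / sigma2
u_post = np.linalg.solve(A, G.T @ y / gamma2)
```

The same construction survives the mesh-refinement limit precisely when the prior is specified on the function space rather than re-invented per discretization, which is one motivation for the abstract framework.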
Stochastic modeling, multiscale computations, and sub-grid scale
Many dynamical systems of interest in atmosphere/ocean science are too large for fully resolved simulations. Stochastic modeling offers a way around this difficulty by deriving reduced equations at the macro- or meso-scale for these systems. These reduced models can be used in two different ways. We can perform multi-scale simulations in which the relevant parameters in the equations are computed on the fly via short bursts of simulation with the full model, localized both in space and time. Or we can estimate these parameters once and for all beforehand, by precomputation or from the available data. In this talk, both procedures will be illustrated on a simple example proposed by Lorenz in 1996.
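For reference, a minimal integration of the two-scale Lorenz '96 model commonly used in this context. Parameter values are the usual textbook choices; sign conventions for the coupling terms vary between papers, so the version below is one standard variant rather than the speaker's exact setup.

```python
import numpy as np

K, J = 8, 32                        # 8 slow variables, 32 fast per slow one
F, h, c, b = 10.0, 1.0, 10.0, 10.0  # standard Lorenz '96 parameter values

def tendencies(X, Y):
    # Slow variables X_k, damped, forced, coupled to the sum of their fast Y's.
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
          - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    # Fast variables Y_j on one cyclic lattice of length K*J.
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

def rk4_step(X, Y, dt):
    k1x, k1y = tendencies(X, Y)
    k2x, k2y = tendencies(X + 0.5 * dt * k1x, Y + 0.5 * dt * k1y)
    k3x, k3y = tendencies(X + 0.5 * dt * k2x, Y + 0.5 * dt * k2y)
    k4x, k4y = tendencies(X + dt * k3x, Y + dt * k3y)
    return (X + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            Y + dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

X = F * np.ones(K); X[0] += 0.01    # perturb the uniform steady state
Y = 0.01 * np.ones(K * J)
for _ in range(2000):               # two model time units at dt = 0.001
    X, Y = rk4_step(X, Y, 0.001)
```

A stochastic reduced model for the slow X's alone would replace the coupling term with a fitted deterministic-plus-noise closure, estimated either on the fly from short bursts of this full model or once and for all from a long run.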
Gerard van der Schrier
Reconstructing paleoclimate using a data assimilation method - challenges and problems
Traditional data assimilation techniques require very detailed knowledge of the atmospheric state, down to the level of the smallest spatially resolved scales and at a temporal scale of days. In paleoclimatology, this precise knowledge of the past atmospheric state is absent: one typically has a low-spatial-resolution, statistical reconstruction of atmospheric circulation for a limited domain, based on proxy or documentary data from only a few locations. These reconstructions often have a temporal resolution of months or longer. This renders the traditional data-assimilation techniques useless in paleoclimatology. Here we apply
a recently developed technique which can be used to overcome this problem. This
technique determines small perturbations to the time evolution of the
prognostic variables which optimally adjust the model-atmosphere in the direction
of a target pattern.
These tendency perturbations are referred to as Forcing Singular Vectors (FSVs) and can be computed using an adjoint
model. With FSVs,
it is possible to make a simulation of global climate which reproduces, in a
time-averaged sense, the statistical reconstructions of atmospheric
circulation, while leaving the atmosphere free to respond in a dynamically
consistent way to any changes in climatic conditions. Importantly,
synoptic-scale variability internal to the atmospheric or climatic
system is not suppressed and can
adjust to the changes in the large-scale atmospheric circulation. This gives a
simulation of paleoclimate which is close to the
scarce observations. Two applications of
FSV to paleoclimatology are discussed ("Little
Ice Age" climate in Europe and droughts and pluvials
in the 19th century).
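The flavour of a forcing (tendency) perturbation can be conveyed with a deliberately linear toy: for linear dynamics the constant forcing that steers the time-mean state onto a target pattern is available in closed form, whereas in the actual FSV computation an adjoint model supplies the corresponding sensitivity for the nonlinear model. Everything below is an illustrative assumption, not the authors' setup.

```python
import numpy as np

# Toy "forcing perturbation": for x_{t+1} = A x_t + f with a stable A, the
# long-time state is the fixed point (I - A)^{-1} f, so the forcing that
# drives the mean state to a prescribed target is f = (I - A) target.
rng = np.random.default_rng(4)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = 0.5 * Q                        # spectral radius exactly 0.5: stable

target = rng.standard_normal(n)    # stands in for the reconstructed pattern
f = (np.eye(n) - A) @ target       # tendency perturbation in closed form

x = np.zeros(n)
for _ in range(500):
    x = A @ x + f                  # free dynamics plus the constant forcing

print(np.abs(x - target).max())    # tiny: the state settles on the target
```

Note that only the forcing is prescribed; the trajectory still fluctuates freely during the transient, which is the analogue of leaving synoptic-scale variability unsuppressed.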
Validation of assimilation algorithms
Data assimilation, considered as a problem in Bayesian estimation, requires a priori knowledge of the probability distributions that describe the uncertainty ('errors') in the various data (observations, assimilating model). This raises a basic question: is it possible to objectively determine, either entirely or partially, those probability distributions?
The only way to obtain objective information on the errors is to eliminate the unknowns from the data. In the case of linear Gaussian estimation, this yields the innovation vector. The problem of determining the probability distributions of the errors from the statistics of the innovation is underdetermined, with the consequence that the determination of the required probability distributions must ultimately rely on hypotheses that cannot be objectively verified against the data.
Inconsistencies between the a priori assumed and the a posteriori observed statistics of the innovation can, however, be resolved by ad hoc tuning based on reasonable assumptions and intelligent guesses. Examples are presented and discussed.
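The underdetermination argument rests on a simple identity: for an innovation d = y - Hx_b in the linear Gaussian case, E[dd^T] = HBH^T + R, so only this combination of the background covariance B and observation covariance R is constrained by the data. A Monte Carlo check of the identity (dimensions and covariances are arbitrary assumptions):

```python
import numpy as np

# Check E[d d^T] = H B H^T + R for innovations d = y - H x_b:
# d = (H x_t + eo) - H (x_t + eb) = eo - H eb, so the truth cancels and
# only the combination H B H^T + R is observable from innovation statistics.
rng = np.random.default_rng(3)

n, p, N = 4, 2, 200000
H = rng.standard_normal((p, n))                        # observation operator
L = rng.standard_normal((n, n)); B = L @ L.T           # background covariance
M = rng.standard_normal((p, p)); R = M @ M.T           # observation covariance

eb = rng.multivariate_normal(np.zeros(n), B, size=N)   # background errors
eo = rng.multivariate_normal(np.zeros(p), R, size=N)   # observation errors
d = eo - eb @ H.T                                      # sample innovations

S_emp = d.T @ d / N
S_theory = H @ B @ H.T + R
print(np.abs(S_emp - S_theory).max())                  # small sampling error
```

Any pair (B', R') with HB'H^T + R' = HBH^T + R would produce statistically identical innovations, which is the underdetermination at issue.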
Regularization models of the Navier-Stokes equations
Since most turbulent flows cannot be computed directly from the Navier-Stokes equations, a dynamically less complex mathematical formulation is sought. In the quest for such a formulation, we consider regularizations of the Navier-Stokes equations that can be analyzed within the mathematical framework devised by Leray, Foias, Temam, et al. Basically, the nonlinear term in the Navier-Stokes equations is altered to restrain the convective energetic exchanges. This can be done in various ways, yielding different regularization models. Unlike subgrid models, which enforce the dissipative processes, regularization models modify the spectral distribution of energy. Ideally, the large scales remain unaltered, whereas the tail of the modulated spectrum falls off much faster than the tail of the Navier-Stokes spectrum. Existence and uniqueness of smooth solutions can be proven. Additionally, it can be shown that the solution of some of the regularized systems - on a periodic box in dimension three - actually has a range of scales with wavenumber k for which the rate at which energy is transferred (from scales >k to those <k) is independent of k. In this so-called inertial subrange the energy behaves like k^(-5/3). Compared to Navier-Stokes, the inertial subrange is shortened, yielding a problem more amenable to numerical solution.
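The flavour of such a regularization can be conveyed in one dimension with a Leray-type modification of viscous Burgers, in which the advecting velocity is smoothed by a Helmholtz filter. This is a toy analogue under assumed parameters, not one of the three-dimensional models analyzed in the talk.

```python
import numpy as np

# Leray-type regularization of 1D viscous Burgers: u_t + ubar u_x = nu u_xx,
# with ubar = (1 - alpha^2 d^2/dx^2)^{-1} u the Helmholtz-filtered velocity.
# Pseudo-spectral, explicit Euler on advection with an integrating factor
# for diffusion; parameters are illustrative.
N = 256
L = 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.rfftfreq(N, d=L / N) * 2 * np.pi     # integer wavenumbers 0..N/2
nu, alpha, dt = 0.05, 0.1, 1e-3

u = np.sin(x)                                   # steepens toward a shock
filt = 1.0 / (1.0 + (alpha * k) ** 2)           # Helmholtz inverse in Fourier
decay = np.exp(-nu * k**2 * dt)                 # exact viscous decay per step

for _ in range(2000):                           # integrate to t = 2
    uh = np.fft.rfft(u)
    ubar = np.fft.irfft(filt * uh, n=N)         # smoothed advecting velocity
    ux = np.fft.irfft(1j * k * uh, n=N)
    uh = (uh + dt * np.fft.rfft(-ubar * ux)) * decay
    u = np.fft.irfft(uh, n=N)
```

Replacing `filt` by 1 recovers plain Burgers; the filter only weakens the convective exchanges at wavenumbers beyond 1/alpha, which is the mechanism by which the regularized spectrum falls off faster in the tail.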
* * * * *