
Extracting Information from Weak Lensing: Small Scales = Big Problem
Lorentz Center, 16 – 20 February 2015

 

Scientific Report of the Workshop: Extracting Information from Weak Lensing Surveys

 

Organizers: Thomas Kitching, Catherine Heymans, Henk Hoekstra, Katrin Heitmann

Our meeting focussed on arguably the most important question in physics today: what is dark energy? The observational evidence for dark energy, which causes the expansion of the Universe to accelerate, indicates that either general relativity is incorrect on cosmic scales, that there is a fundamental misunderstanding of vacuum energy, that there is a new fundamental field, or that we live in a multiverse. One of the most promising approaches proposed to measure dark energy properties from cosmological data is weak gravitational lensing: the effect whereby images of distant galaxies are distorted by the presence of mass along the line of sight. By measuring this distortion we can simultaneously trace the growth of massive structures over the evolution of the Universe and the geometry of the Universe.
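
As an aside for concreteness (this expression is standard lensing theory, quoted here for context rather than taken from the report): for sources at comoving distance \chi_s in a flat universe, the lensing convergence is

    \kappa(\vec\theta) = \frac{3 H_0^2 \Omega_m}{2 c^2} \int_0^{\chi_s} \mathrm{d}\chi \, \frac{\chi (\chi_s - \chi)}{\chi_s \, a(\chi)} \, \delta(\chi\vec\theta, \chi) ,

where the distance ratios in the kernel carry the geometric information and the evolving density contrast \delta carries the growth of structure.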

 

Understanding the origin of dark energy will require measuring its impact on both the growth of structure and the expansion history of the Universe. Weak lensing can measure both effects, but the majority of the information comes from small scales, where feedback between the dark matter distribution and baryonic matter complicates the interpretation of the measurements. This workshop looked at simulations, theory and observations to address how best to analyze the information-rich, but poorly understood, small-scale matter distribution for cosmology.
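
To make explicit why small scales dominate the signal (again a standard result, not from the report itself): in the Limber approximation the convergence power spectrum is a line-of-sight integral over the matter power spectrum,

    C_\ell^{\kappa\kappa} = \int_0^{\chi_s} \mathrm{d}\chi \, \frac{W^2(\chi)}{\chi^2} \, P_\delta\!\left(k = \frac{\ell}{\chi}; \chi\right) ,

with W(\chi) the lensing kernel of the integral above. High multipoles \ell therefore probe high wavenumbers k, exactly the regime where baryonic feedback modifies P_\delta.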

 

The most tangible outcome of the meeting was the creation of several new collaborations that cross between theory, observations and simulations; one example was a “simulation task force” set up to begin comparing N-body simulation codes. Key moments in the meeting came in the transfer of knowledge between these disciplines: the pedagogical presentations and generous discussion time meant that “simple” questions could be asked and understanding gained. A particular highlight was the pedagogical lecture on simulations. One speaker reminded us of the parable of the blind men describing an elephant: each person describes one part of the whole, but no one appreciates the entirety of the problem.

 

There were several discussions and presentations on the theoretical side of analytically predicting the distribution of matter on small scales. We heard some brand-new work on bringing particle-physics-style field theory analysis into a cosmological setting, and we were reassured that the LHC “is the other successful dirty physics experiment” and that small-scale-to-big-scale problems are everywhere in physics; the difficulty is that on small scales velocity and momentum interact.

 

There were more questions than answers. How many simulations do we need? How can we combine baryonic feedback with intrinsic alignments? How do we calibrate hydrodynamical simulations? Can we use strong lensing to help? Is the halo model adequate? Are scaling relations sufficient, or too simplistic? Can we use perturbation theory to bootstrap simulations? And many more.

 

There was some lively debate on the correct approach to modelling small scales with current data, the differing opinions being that we should either remove these small scales by creating tailored statistics, or model them using a halo model ansatz. Both approaches were deemed useful. What became clear is that, whilst the halo model approach is promising, some of its fundamental assumptions (e.g. sphericity and randomness) will need to be validated before the next generation of data arrives. We were also reminded that, as if baryonic feedback were not hard enough, we also have to deal with the intrinsic alignment of galaxies. “Clipping” the poorly understood information out of the data was also seen as a promising approach.
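
As a concrete illustration of the “clipping” idea, here is a minimal sketch (our own, not code presented at the workshop) that caps the peaks of a pixelised convergence map above a threshold, so that statistics computed afterwards are less sensitive to the poorly modelled, highly non-linear regions; the threshold value is purely illustrative:

    import numpy as np

    def clip_map(kappa, threshold=0.05):
        # Cap pixel values above `threshold`; return the clipped map
        # and the residual map of peaks that were removed.
        clipped = np.minimum(kappa, threshold)
        residual = kappa - clipped
        return clipped, residual

    # Toy example: a Gaussian noise map with one artificial high peak.
    rng = np.random.default_rng(42)
    kappa = rng.normal(0.0, 0.02, size=(256, 256))
    kappa[100:105, 100:105] += 0.2          # an over-dense, "non-linear" patch
    clipped, residual = clip_map(kappa)
    print(clipped.max(), residual.sum())    # peaks are capped at the threshold

In practice one might analyse the clipped and residual maps separately, with the threshold chosen to trade information loss against modelling uncertainty.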

 

The central problem was seen to be a lack of independent non-lensing data that can be used to calibrate the proposed simple models. Meanwhile, the lensing data continue to accumulate: we were given state-of-the-art reports from the KiDS, DES and HSC surveys, all of which promise cosmology results within the next year. There was no consensus on whether the methods discussed are sufficiently accurate, precise or validated to work for these surveys.

 

The overall mood was one of increasing appreciation in the community for the magnitude of the task ahead: there are challenges in all aspects of the problem – simulations, theory, and observations – but it was extremely useful to meet in the context of a week-long meeting where these problems could be articulated. Our keynote speaker ended the conference by setting us a challenge, saying that it was “unthinkable that we not use high-l [small-scale] information”.

 

A problem shared is a problem halved, and the first step to recovery is admitting you have a problem. Both of these aphorisms held true for this workshop, and we now plan many more workshops on the same topic, to meet the challenge in time for dark energy physics.

 


