Lorentz Center

International center for scientific workshops

## Econophysics and Networks Across Scales

Diego Garlaschelli, Leiden University, Netherlands
H. Eugene Stanley, Boston University, USA
Maria I. Loffredo, University of Siena, Italy
Peter Denteneer, Leiden University, Netherlands
Iman van Lelyveld, De Nederlandsche Bank, Netherlands
Reindert Stoffer, Duyfken Trading Knowledge BV, Netherlands
Ferry Vos, Anthos, Netherlands
“Maximum Entropy Matrices: Binary vs Weighted properties of financial time series”

We introduce a novel information-theoretic approach to the analysis of single and multiple time series, with empirical applications to real financial time series. Our formalism allows us to connect stochastic processes with ensembles of time series inferred from partial information, and to extract and quantify information from single or multiple time series. The method also allows us to assign a level of uncertainty to a time series given its measured properties, and to determine which property of a specific time series is the most informative.
“The dependence between credit default risk and recovery rates”

While many studies in Econophysics have been devoted to equity, we must not forget debt markets. In 2010, for example, the value of outstanding bonds was twice the global equity-market capitalization. Debt analysis is mainly driven by the key figure of credit risk, the default probability. We will show that it is important to also include the recovery rate of the bonds. Investigating historic default events, we examine the behavior of recovery rates. Furthermore, we will illustrate that the two measures empirically follow a functional dependence, which can be derived from the original Merton model of 1974.
“Views of Econophysics from the perspective of Finance”

The most commonly used measure of stock market volatility has been the standard deviation. However, since it exhibits some drawbacks, we propose an alternative approach based on the concept of entropy, whose main advantage is that it permits a more comprehensive description of volatility. Since Shannon entropy is only suitable for describing equilibrium systems, we consider Tsallis entropy, which is more appropriate for anomalous systems, a category into which financial markets appear to fall. More specifically, this research compares the results of the traditional approach, based on the standard deviation, with those provided by the entropy. The sample consists of the returns of the main stock market indexes of the G7 countries from January 1999 to January 2009. The results evidence the limitations of the standard-deviation-based approach in fully characterizing volatility.
“Are your data really power law distributed?”

Power laws have been shown to be very useful models for describing many different phenomena, from physics to finance. In recent years, the econophysics literature has produced a large number of papers and models justifying the presence of power laws in economic data. But is the evidence really that good? In this talk I review three heuristic methods that may be used to search for power laws in empirical data: log-log plots, mean excess plots and self-similarity plots. The first tool has been used extensively in recent years, while the other two methods are less popular. However, I will show that all of these methods can generate serious misunderstandings, and that one needs to use them carefully, and in combination, in order to form a good idea about the nature of the data. Two "new" graphical tools that can help verify the presence of Paretianity are also presented.
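The first two diagnostics are easy to sketch. Below is an illustrative implementation (my own code, not the speaker's): a Pareto sample with tail index α = 2.5 is generated deterministically through the quantile function, the log-log CCDF slope recovers −α, and the mean excess function grows with the threshold, as expected for a power law.

```python
import numpy as np

def loglog_ccdf_slope(x):
    """Slope of the empirical CCDF on log-log axes; for a Pareto tail
    P(X > x) ~ x^(-alpha) the slope estimates -alpha."""
    xs = np.sort(x)
    ccdf = 1.0 - np.arange(1, len(xs) + 1) / (len(xs) + 1.0)
    slope, _ = np.polyfit(np.log(xs), np.log(ccdf), 1)
    return slope

def mean_excess(x, thresholds):
    """Mean excess e(u) = E[X - u | X > u]: roughly linear and increasing
    in u for a power law, flat for an exponential tail."""
    return [x[x > u].mean() - u for u in thresholds]

# Deterministic Pareto(alpha = 2.5, x_min = 1) sample via the quantile function.
u = np.linspace(0.001, 0.999, 2000)
sample = (1.0 - u) ** (-1.0 / 2.5)
```

A lognormal or exponential sample run through the same two functions shows a curved log-log CCDF and a flattening mean excess, which is exactly the kind of contrast these plots are meant to expose.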
“New Metrics for Economic Complexity: Measuring the Intangible Growth Potential of Countries”

Economic Complexity is a new line of research which portrays economic growth as an evolutionary process of ecosystems of technologies and industrial capabilities. Complex-systems analysis, simulation, systems-science methods, and big-data capabilities offer new opportunities to empirically map the technology and capability ecosystems of countries and industrial sectors, analyse their structure, understand their dynamics and measure economic complexity. This approach provides a new perspective for data-driven fundamental economics in a strongly connected, globalised world. In particular, we discuss here how to assess the competitiveness of countries and the complexity of products starting from archival data on export flows, namely the COMTRADE dataset, which provides the matrix of countries and their exported products. According to standard economic theory, the specialization of countries towards certain specific products should be optimal. The observed data show that this is not the case and that diversification is actually more important. Specialization may be the leading effect in a static situation, but the strongly dynamic and globalized world market suggests instead that flexibility and adaptability are essential elements of competitiveness, as in bio-systems. The crucial challenge is therefore how to turn these qualitative observations into quantitative variables. We have introduced a new metric for the Fitness of countries and the Complexity of products, which corresponds to the fixed point of the iteration of two nonlinear coupled equations. The nonlinearity is a key feature because it translates into mathematical terms the fact that the upper bound on the Complexity of a product must be mainly set by the least developed country able to produce it. The information provided by the new metric can be used in several ways. As an example, the direct comparison of the Fitness with the country's GDP per capita (the Fitness-Income Plane) gives an assessment of the non-expressed growth potential of a country. This can be used as a predictor of GDP evolution or of stock-index and sector performance. The global dynamics in the Fitness-Income Plane reveal, however, a large degree of heterogeneity, which implies that countries can evolve with different levels of predictability according to the specific zone of the Fitness-Income Plane they belong to. This heterogeneous dynamics is often disregarded in usual economic analysis. When dealing with heterogeneous systems, in fact, the usual tools of linear regression become inappropriate. Making reliable predictions of growth in the context of economic complexity will then require a paradigm shift in order to catch the information contained in the complex dynamic patterns observed. Among other possible applications, these methods and concepts can make concrete contributions to risk analysis, investment-opportunity analysis, policy modelling of country growth and industrial planning.
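The two coupled nonlinear equations can be sketched in a few lines. The following is a minimal illustrative implementation (the function name and toy matrix are my own; the map is the standard form from this line of research): country Fitness is the complexity-weighted diversification, while product Complexity is bounded by the least fit country exporting it.

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Iterate the nonlinear Fitness-Complexity map on a binary
    country-product export matrix M (rows: countries, cols: products),
    normalizing to the mean at each step; returns (Fitness, Complexity)."""
    F = np.ones(M.shape[0])                 # country fitness
    Q = np.ones(M.shape[1])                 # product complexity
    for _ in range(n_iter):
        F_new = M @ Q                       # diversification weighted by complexity
        Q_new = 1.0 / (M.T @ (1.0 / F))     # dominated by the least fit exporter
        F, Q = F_new / F_new.mean(), Q_new / Q_new.mean()
    return F, Q
```

On a nested (triangular) toy matrix the fixed point behaves as the text describes: the most diversified country gets the highest Fitness, and the product exported only by that country gets the highest Complexity.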
“Spread of risk across financial markets: better to invest in the peripheries”

I will discuss how filtered correlation-based networks [1-2] can be used to build a well-diversified portfolio that effectively reduces investment risk. Specifically, I will show that investments in stocks that occupy peripheral, poorly connected regions of the financial filtered networks are most successful in diversifying investments, even for small baskets of stocks. On the contrary, investments in subsets of central, highly connected stocks are characterized by greater risk and worse performance [3]. I will also introduce a general graph-theoretic approach that uses these filtered networks to simultaneously extract clusters and hierarchies in an unsupervised and deterministic manner, without the use of any prior information and without the need to specify any threshold [4-5]. I will show that applications to financial data sets can meaningfully identify industrial activities and structural market changes.

[1] T. Aste, T. Di Matteo, S. T. Hyde, Physica A 346 (2005) 20-26.
[2] M. Tumminello, T. Aste, T. Di Matteo, R. N. Mantegna, PNAS 102(30) (2005) 10421.
[3] F. Pozzi, T. Di Matteo, T. Aste, Scientific Reports 3 (2013) 1665.
[4] W.-M. Song, T. Di Matteo, T. Aste, Discrete Applied Mathematics 159 (2011) 2135.
[5] W.-M. Song, T. Di Matteo, T. Aste, PLoS One 7(3) (2012) e31929.
“Chained Financial Failures at Nation-wide Scale in Japan”

I will talk about recent studies, based on real data, of the propagation of financial failures at nation-wide scale in Japan, during past financial crises and the present one due to the earthquake. Leading credit-research agencies in Tokyo and Nikkei have accumulated a huge amount of data on bank-firm and supplier-customer links, together with financial information and failures of nodes. Using these large-scale data, we measure the actually occurring propagation of financial distress on real economic networks comprising firms, banks, and their relationships, on the order of millions of nodes and even more. Exogenous shocks due to the global financial crisis and mass destruction by disasters such as earthquakes cause propagation resulting in a sluggish relaxation, typically observed as an Omori law.
“Jan Tinbergen's legacy for economic networks: from the gravity model to quantum statistics”

Jan Tinbergen, the first recipient of the Nobel Memorial Prize in Economics in 1969, obtained his PhD in physics at the University of Leiden in 1929 under the supervision of Paul Ehrenfest. Among his many achievements as an economist after his training as a physicist, Tinbergen proposed the so-called Gravity Model of international trade. The model predicts that the intensity of trade
between two countries is described by a formula similar to Newton's law of
gravitation, where mass is replaced by Gross Domestic Product. Since
Tinbergen's proposal, the Gravity Model has become the standard model of
non-zero trade flows in macroeconomics. However, its intrinsic limitation is
the prediction of a completely connected network, which fails to explain the
observed intricate topology of international trade. Recent network models
overcome this limitation by describing the real network as a member of a
maximum-entropy statistical ensemble. The resulting expressions are formally
analogous to quantum statistics: the international trade network is found to
closely follow the Fermi-Dirac statistics in its purely binary topology, and
the recently proposed mixed Bose-Fermi statistics in its full (binary plus
weighted) structure. This seemingly esoteric result is actually a simple effect
of the heterogeneity of world countries, which imposes strong structural
constraints on the network. Our discussion highlights similarities and
differences between macroeconomics and statistical-physics approaches to
economic networks.
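In symbols (the notation here is illustrative, not taken from the talk), the two predictions contrast as follows. The gravity model assigns an expected trade flow

$$\langle w_{ij}\rangle \;\propto\; \frac{\mathrm{GDP}_i\,\mathrm{GDP}_j}{D_{ij}^{\,\gamma}},$$

which is positive for every pair of countries and therefore yields a completely connected network, while the maximum-entropy ensemble assigns a Fermi-Dirac connection probability

$$p_{ij} \;=\; \frac{x_i x_j}{1 + x_i x_j},$$

where $D_{ij}$ is the distance between countries $i$ and $j$, $\gamma$ is a fitted exponent, and $x_i$ is a country-specific fitness parameter. Since $p_{ij}$ lies strictly between 0 and 1, the ensemble can reproduce the observed sparse topology that the gravity model misses.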
“Onset of cooperation between layered networks”

Functionalities of a
variety of complex systems involve cooperation among multiple components; for
example, a transportation system provides convenient transfers among airlines,
railways, roads, and shipping lines. A layered model with interacting networks
can facilitate the description and analysis of such systems. In this paper we
propose a model of traffic dynamics and reveal a transition at the onset of
cooperation between layered networks. The cooperation strength, treated as an
order parameter, changes from zero to positive at the transition point.
Numerical results on artificial networks as well as two real networks, Chinese
and European railway-airline transportation networks, agree well with our
analysis.
“Individual Expectations and Aggregate Outcomes in Asset Pricing Experiments”

We discuss ‘learning to forecast’ laboratory experiments with human subjects to study the formation of
individual expectations, their interactions and the aggregate market structure
they co-create. Three different patterns in aggregate price behavior have been
observed: slow monotonic convergence, permanent oscillations and dampened
fluctuations. We show that a simple model of individual learning and
evolutionary selection of heterogeneous expectation rules can explain these
different aggregate outcomes within the same experimental setting. In markets with positive feedback, trend-following strategies are likely to survive evolutionary selection, causing persistent deviations and market fluctuations around the rational fundamental benchmark.
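A toy sketch (parameters and the forecasting rule below are my own, not the experimental design) shows how such different aggregate patterns can emerge from one positive-feedback pricing rule: weak trend extrapolation yields slow monotonic convergence to the fundamental, while strong extrapolation yields oscillations around it.

```python
def simulate_prices(g, lam=0.95, fundamental=60.0, p0=65.0, steps=50):
    """Positive-feedback market: the realized price moves toward the
    average forecast, here a trend-following rule
    pe = p[t-1] + g * (p[t-1] - p[t-2]). Illustrative toy model only."""
    p = [p0, p0]
    for _ in range(steps):
        pe = p[-1] + g * (p[-1] - p[-2])          # trend-following forecast
        p.append(fundamental + lam * (pe - fundamental))
    return p
```

With `g = 0.2` the deviation from the fundamental decays monotonically; with `g = 1.0` the price oscillates around the fundamental with slowly damped amplitude, mirroring two of the three experimental patterns.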
“Making Sense of Big Data, Network Science and Economics”

First Thought: A Uniform Framework for Network Data Analysis. Many real-world networks can be analysed at several levels of complexity, such as the binary, weighted and directed levels. A prominent example is the World Trade Web. On each level, different network measures quantify different flow processes, which, in turn, evaluate certain expectations about underlying network processes. Often, those processes are not well understood by themselves, creating the desire for a uniform framework for evaluating and comparing multiple perspectives. Motivated by this, we present the composite centrality framework, which is based on proper measure standardisation. In addition, the exploration of
collective scaling behaviours observed among
different metrics leads to the concept of exceptionality, defined as a
deviation thereof, which allows for the discovery of peculiar graph
configurations.

Second Thought: Early-Warning Indicators for Financial Crises. Since the global financial crisis of 2008, there has been a growing
awareness of the inter-connectedness and inter-dependency of financial markets,
and the resulting systemic risk. However, much of the relevant information/data
is deemed confidential on the institutional level. We present results from the
analysis of financial networks serving as proxies for the global
financial/economic architecture, namely cross-border portfolio investment
networks. A set of indicators for stability and inter-connectedness of
financial markets can be identified. These show signs of potential distress
already well ahead of the actual crisis.
“An effective early-warning signal for the Lehman Brothers collapse based on information theory”

The largest financial crash of the past decades is the bankruptcy of Lehman Brothers, which was followed by a trust-based crisis among banks. In this talk we introduce the information dissipation length (IDL), a leading indicator of global instability of dynamical systems based on the transmission of Shannon information, and apply it to the time series of USD and EUR interest rate swaps (IRS). In both markets, we find that the IDL steadily increases towards the bankruptcy, peaks at the time of the bankruptcy, and decreases afterwards. These results suggest that the IDL may be used as an early-warning signal for critical transitions in financial systems. We strongly believe that the methodology can be applied to a wide range of complex systems.
“Innovation spreading using communication data”

Theories of innovation spreading (or innovation diffusion) rely on social interaction and media influence. Until recently, only sparse data were available about the detailed mechanisms of spreading. We use data from Skype, the largest Internet-based free phone provider, to follow the process in detail. As Skype has free and paid services, the investigated structure is a three-layered network: social network - free-service network - paid-service network. We study innovation spreading in a large number of different countries. We assume an epidemic spreading model with peer pressure plus a media effect and solve the mean-field equations under simple assumptions, which can be checked against the data. The theory works surprisingly well in characterising different scenarios, and it even enables us to make predictions.
“Methods and Techniques for Multifractal Spectrum Estimation in Financial Time Series”

Scaling properties are among the most important signatures of the complexity involved in the dynamics of many real systems, and financial markets are no exception. The presence of scaling usually points to some underlying non-trivial fractal structure in temporal correlations. Techniques of fractal geometry can then be applied to reveal the potential scaling behavior. Systems often exhibit multiple scaling, in which case the scaling exponents can be found via methods of multifractal analysis. The presence of an array of scaling exponents usually points to phenomena such as economic cycles, crises, etc. In this talk we compare some of the existing techniques for estimating the multifractal spectrum. Our particular focus is on multifractal detrended analysis and multifractal entropy analysis. We outline their respective interpretations, and compare the methods from both theoretical and practical points of view. Finally, we apply the methods in question to financial time series of the S&P 500 gathered over a period of 50 years.
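One of the compared techniques, multifractal detrended fluctuation analysis, fits a local polynomial trend in windows of increasing size and reads the generalized Hurst exponent h(q) off the scaling of the q-th order fluctuation function. Below is a minimal sketch (first-order detrending, q ≠ 0; my own illustrative code, not the speakers' implementation):

```python
import numpy as np

def mfdfa_hq(x, scales, q_list, order=1):
    """Minimal multifractal detrended fluctuation analysis (MFDFA).
    Returns the generalized Hurst exponent h(q) for each q != 0."""
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    h = []
    for q in q_list:
        log_fq = []
        for s in scales:
            n_seg = len(y) // s
            f2 = np.empty(n_seg)
            t = np.arange(s)
            for v in range(n_seg):
                seg = y[v * s:(v + 1) * s]
                trend = np.polyval(np.polyfit(t, seg, order), t)
                f2[v] = np.mean((seg - trend) ** 2)  # detrended variance
            log_fq.append(np.log(np.mean(f2 ** (q / 2.0)) ** (1.0 / q)))
        slope, _ = np.polyfit(np.log(scales), log_fq, 1)
        h.append(slope)                              # h(q) = scaling slope
    return np.array(h)
```

For a monofractal series h(q) is roughly constant in q; a q-dependent h(q) signals multifractality. For Gaussian white noise, h(2) ≈ 0.5.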
“Leverage effect between financial returns and volatility: A long-range cross-correlations perspective”

A negative relationship between returns and changes in volatility is a well-documented phenomenon in financial economics. Investigation of the leverage effect is frequently connected to the asymmetric volatility phenomenon, usually understood as the situation in which the volatility of a growing market tends to be lower than the volatility of a falling market. The interconnection is thus very tight, and it is mostly quite hard to distinguish between the two effects. Nonetheless, most authors agree on several regularities: the correlation between returns and volatility is negative but rather weak, and the effect runs from returns to volatility and lasts for several periods, remaining negative and quite persistent. We look at the leverage effect from the long-range cross-correlations perspective. As financial returns are usually treated as serially uncorrelated while volatility is taken as a long-range correlated process regardless of the volatility measure applied, the leverage effect makes the pair an ideal candidate for the inspection of long-range cross-correlations. We focus on 14 stock indices and analyze the leverage effect with attention to the presence of long-range cross-correlations between returns and realized volatility. We then analyze whether the effect arises from the properties of the processes we find, and whether it can be mimicked by a simple model that we propose. We first describe the dataset and its statistical properties. Then, we test whether the processes of returns and volatility are long-range cross-correlated, and since the majority of the analyzed indices turn out to be long-range cross-correlated, we follow with an analysis of power-law coherency. As power-law coherency is not observed, we close with a discussion of the leverage effect and propose a simple model that can mimic the effect as well as other stylized facts of financial returns.
“The relation between the trading activity of financial agents and the stock price dynamics”

In recent years, databases containing the trading activity of all the agents in a financial market have become available to researchers. This opens the possibility of interesting empirical analyses of the ways in which agents are affected by other agents or by external inputs, and of the relation between agents and price dynamics. I review the results of three recent papers in which: (1) we study how the number of agents and the imbalance between the number of buyers and the number of sellers affect, and are affected by, price returns and volatility; (2) we study the relative roles of endogenous (returns and volatility) and exogenous (news) factors in the trading decisions of agents; (3) we identify clusters of investors trading in a financial market characterized by a very high degree of synchronization, both in when they decide to trade and in the trading action taken.
“Is occupational mobility predictable?”

We explore the early career mobility of Romanian higher-education graduates using a network-analysis approach. The nodes are occupations (3-digit groups according to ISCO 88), while the links represent movements of individuals switching from one job to another. A job change is defined as an experience of inter-organisational mobility. The network is constructed as weighted and directed, with self-loops (graduates changing their job within the same occupational category). Keeping in mind the idea that occupations are related to each other via transferable skills, we visualize paths of mobility and calculate network indicators in order to understand patterns of connectivity between occupations. Exploiting a dataset on the working histories of higher-education graduates from Romania during their early careers, we provide novel evidence that individuals move along certain career pathways and that the entrance occupation influences their subsequent career.
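The network construction described above reduces to counting transitions between consecutive jobs. A minimal sketch (the occupation codes below are invented placeholders, not actual ISCO 88 groups):

```python
from collections import Counter

def mobility_network(histories):
    """Weighted, directed occupation-mobility network with self-loops:
    each consecutive pair of jobs in an individual's history adds one
    unit of weight to the edge (previous occupation -> next occupation)."""
    edges = Counter()
    for jobs in histories:
        for a, b in zip(jobs, jobs[1:]):
            edges[(a, b)] += 1
    return edges
```

Self-loops arise naturally whenever two consecutive jobs share the same occupational code, matching the definition in the abstract.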
“Extending Community Detection to Probe the Structure of Financial Time Series”

Community detection in complex networks has been a hot topic over the past decade, with research producing myriad models and algorithms to partition a vast number of structurally different complex networks. One such set of networks is that created from time-series data, where the edge between each pair of nodes is derived in some way from the cross-correlation of the node pair. To date, community-detection algorithms applied to such networks continue to use the original formulation, whose null model is derived from the degrees of the nodes in the network. In this presentation we discuss the use of more appropriate null models, based on covariance, and show how to rework some of the more popular community-detection algorithms to use these null models. We apply this technique to sets of financial time series from a variety of international equity markets to show how it can be used to accurately discern communities of stocks. In particular, we show that these communities are internally more correlated than expected, while at the same time being anti-correlated with the other communities, a property of considerable value in fields such as portfolio optimization and risk management.
“Statistically validated networks of market members trading at the LSE electronic and dealers' market”

We empirically detect and analyze the trading networks present among all market members of the London Stock Exchange (LSE) trading shares of a specific stock in a selected period of time. We analyze the anonymous electronic book and the networked dealers' market separately, and we statistically validate a link between two market members if the number of transactions of a selected stock occurring between them is too large to be explained by a null hypothesis of random trading. Specifically, we separately analyze the trading networks of market members trading five highly liquid stocks, in the two LSE venues, from the daily to the yearly time scale, during the calendar year 2005. For the selected stocks, we find that the trading networks of the dealers' market are larger and more stable over time than those observed for the electronic market. Our results confirm that anonymity in the electronic order book minimizes the probability of preferential pair interactions, and imply that concerns about adverse selection in the dealers' market are somewhat compensated by other positive aspects, such as the possibility of exchanging large volumes in a single transaction or of obtaining a transaction price within the current spread, which are specific to the dealers' market.
“Weighted networks with given strengths and degrees: a fast and unbiased method”

In the analysis of real networks, it is essential to filter out the effects of local topological properties in order to detect nontrivial higher-order patterns. In binary graphs, this is done using a null model that controls for the degrees of all vertices. In weighted networks, the standard approach is to control for the strengths of all vertices. However, recent counter-intuitive results suggest that, even in weighted networks, degrees are as fundamental as strengths, and irreducible to the latter. This conjecture implies that null models of weighted networks should control for both quantities, a computationally hard and bias-prone problem. Here we solve this problem by introducing an analytical and unbiased method that runs in the shortest possible time and does not require the explicit generation of randomized networks. We apply our method to economic systems and rigorously confirm the conjecture by showing that, while the strengths alone are poorly informative, the additional knowledge of the degrees is extremely informative and at the same time does not overfit the network.
“Can online data anticipate economic behaviour?”

Economic crises affect humans worldwide. Vast stock market datasets offer a window into the catastrophic combinations of decisions that lead to such crises, but do not tell us how these decisions were reached. In the work I will describe in this talk, we ask whether Internet usage data might help us understand the early information-gathering stages of traders' decision-making processes. By analysing changes in the frequency with which Wikipedia [1] and Google [2] users look for information related to finance, we find patterns that may be interpreted as “early warning signs” of stock market moves. These results suggest that big data capturing our everyday interactions with the Internet may allow us to gain insight into the early information-gathering stages of collective decision making, on a scale previously impossible to achieve [3].

[1] Moat, H. S., Curme, C., Avakian, A., Kenett, D. Y., Stanley, H. E., & Preis, T. (2013). Quantifying Wikipedia usage patterns before stock market moves. Scientific Reports, 3, 1801.
[2] Preis, T., Moat, H. S., & Stanley, H. E. (2013). Quantifying trading behavior in financial markets using Google Trends. Scientific Reports, 3, 1684.
[3] Moat, H. S., Preis, T., Olivola, C. Y., Liu, C., & Chater, N. (in press). Using big data to predict collective behavior in the real world. Behavioral and Brain Sciences.
“Dynamical analysis of clustering on financial market data”

In this talk I will show the application of the DBHT method [1] to the daily prices of 342 US stocks during the period between 1997 and 2012. The DBHT method is a novel approach to extract cluster structure and to detect hierarchical organization in complex data sets; it is based on the study of the properties of topologically embedded graphs [2], it is deterministic, requires no a-priori parameters and does not need any expert supervision. In the case of financial data, the method yields a clustering of the set of stocks. I will discuss the dynamical evolution of these clusters and show results about their persistence over time, together with analyses of their varying similarity with the Industrial Sectors classification. To this aim I will introduce dynamical measures taken from the theory of temporal networks; these measures point out peculiar behaviours coinciding with the 2007-08 financial crisis [3].

[1] W.-M. Song, T. Di Matteo, T. Aste, "Hierarchical information clustering by means of topologically embedded graphs", PLoS One 7(3) (2012) e31929.
[2] T. Aste, T. Di Matteo, S. T. Hyde, "Complex networks on hyperbolic surfaces", Physica A 346 (2005) 20-26.
[3] N. Musmeci, T. Di Matteo, T. Aste, working paper 2013.
“Multilevel networks in science: from individual careers to Europe”

Quantitative measures are becoming increasingly prevalent
at all scales of scientific evaluation, from countries, to universities,
departments, laboratories, and individuals. In this talk I will discuss the
multi-level scientific networks that can be constructed from these output
measures and the growth factors associated with the knowledge, human, and public capital spillovers which are facilitated
by the network structure. Indeed, there is mounting evidence that both career
growth and economic growth are intrinsically related to underlying features of
co-evolving scientific networks. At the level of careers, I will discuss the
role of strong ties in superstar careers, and the evolution of these ties
longitudinally across the career. At the level of countries, I will discuss
recent results obtained by analyzing 4 networks constructed from 2.4 million
patent applications filed with the European Patent Office (EPO) over the
25-year period 1986-2010 [Science 339, 650-651 (2013)]. Combining econometric
methods with network science, we perform a comparative network analysis across
time and between EU and non-EU countries to determine the “treatment effect”
resulting from EU integration policies. Using non-EU countries as a control
set, we provide quantitative evidence that, despite decades of efforts to build
a European Research Area, there has been little integration above global trends
in patenting and publication. This analysis provides concrete evidence that
Europe remains a collection of national innovation systems.
“Quantifying the complex world we inhabit using big data”

Society's steadily increasing interactions with technology are creating volumes of digital traces documenting our collective human behaviour, fuelling the rapid development of the new field of computational social science. In this talk, I will outline some recent highlights of our research, addressing two questions. Firstly, can we provide insight into international differences in economic wellbeing by comparing patterns of interaction with the Internet? To answer this question, we introduce a future-orientation index to quantify the degree to which Internet users seek more information about years in the future than years in the past. We analyse Google logs and find a striking correlation between a country's GDP and the predisposition of its inhabitants to look forward [1]. Secondly, can data from Flickr, a popular website for sharing personal photographs, provide insights into user attention to the Hurricane Sandy disaster in 2012? We find that the number of photos taken and subsequently uploaded to Flickr with titles, descriptions or tags related to Hurricane Sandy bears a striking correlation to the atmospheric pressure in the US state of New Jersey during this period. Appropriate leverage of such information could be useful to policy makers and others charged with emergency crisis management. Our results illustrate the potential that combining extensive behavioural data sets offers for a better understanding of large-scale human behaviour.

[1] T. Preis, H. S. Moat, H. E. Stanley, S. R. Bishop, Quantifying the Advantage of Looking Forward. Sci. Rep. 2, 350 (2012).
“KNOWeSCAPE - Dynamics of knowledge spaces”

This talk introduces the goals of a COST Action in which physicists, computer scientists, sociologists, digital-humanities scholars, and information professionals try to better understand the dynamics of large information spaces and to develop knowledge maps for better navigation through them.
“Stationary and non-stationary behavior of meso-scale and macro-scale networks” Networks belonging to different scale regimes can show
very different kinds of temporal evolution. In the present talk we consider two
different economic networks, belonging to different scales: the Dutch Interbank
Network (DIN) over the period 1998-2008 (meso-scale)
and the World Trade Web (WTW) over the period 1950-2000 (macro-scale). By
employing a recently proposed analytical pattern-detection method, we study the
role that local properties have in shaping higher-order patterns of both the
WTW, in all its possible representations (binary or weighted, directed or
undirected, aggregated or disaggregated by commodity and across several years)
and the DIN as a binary, directed network. In particular, we focus on the
occurrence of dyadic motifs (two-vertex subgraphs)
and triadic motifs (three-vertex subgraphs). The
two systems behave completely differently: whereas the triadic z-scores
of the WTW show the same profile across the considered temporal period,
indicating the substantial stationarity of this network, the triadic z-scores
of the DIN give rise to four different profiles, subdividing the analysed decade into four subperiods,
directly related to the evolution of the system towards the critical
configuration of 2008. Moreover, whereas the higher-order properties of the WTW
binary representations are well reproduced by constraining the nodes' degrees,
the higher-order structure of the DIN is not reproduced by the same kind of
topological constraints. What is most interesting in the second case is the
detection of a slow and continuous transition of the (otherwise unexplainable)
topological properties from the crisis period to a much earlier stationary
phase, providing a clear early-warning signal of the upcoming “big” event. Our results highlight the importance of understanding the
(non-)stationary character of the considered network, since this can
dramatically affect the possibility of forecasting the specific network's
behavior, as evident when thinking about the risk of a systemic contagion in a
financial network.
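The abstract's pattern-detection method is analytical (maximum-entropy null models); as a rough numerical stand-in, the same kind of triadic z-scores can be estimated by comparing the observed triad counts against randomized networks that preserve each node's in- and out-degree. All names and parameters below are illustrative, not the authors' method:

```python
import numpy as np
import networkx as nx

def triadic_zscores(G, n_null=100, seed=0):
    """z-scores of the 16 directed triad counts in G against a null
    ensemble preserving every node's in- and out-degree."""
    rng = np.random.default_rng(seed)
    din = [d for _, d in G.in_degree()]
    dout = [d for _, d in G.out_degree()]
    observed = nx.triadic_census(G)
    samples = {k: [] for k in observed}
    for _ in range(n_null):
        R = nx.directed_configuration_model(
            din, dout, seed=int(rng.integers(2**31)))
        R = nx.DiGraph(R)                              # collapse parallel edges
        R.remove_edges_from(list(nx.selfloop_edges(R)))
        census = nx.triadic_census(R)
        for k in samples:
            samples[k].append(census[k])
    z = {}
    for k, obs in observed.items():
        mu, sd = np.mean(samples[k]), np.std(samples[k])
        z[k] = (obs - mu) / sd if sd > 0 else 0.0
    return z

# toy directed network standing in for one yearly snapshot of the DIN or WTW
G = nx.gnp_random_graph(40, 0.1, directed=True, seed=1)
z = triadic_zscores(G, n_null=50)
```

Tracking how the profile of these 16 z-scores changes from year to year is what distinguishes the stationary (WTW) from the non-stationary (DIN) regime in the abstract.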
[TBA]
“Market States and Planar Maximally Filtered Graphs” Münnix et al. proposed a method to identify states of a
financial market through a clustering of the similarities in the correlation
structure of (daily) stock returns [1].
Assigning a correlation matrix C(t) to every time point
t, one can measure the similarity (resp. dissimilarity) of the time points. These
can then be clustered into market states. The market states we obtained by
using different clustering methods are all consistent with those in [1]. In
every case the number of market states is not given by the system itself and
requires additional prior information (fixed number of states, clustering
threshold etc.). In [2] the authors present a clustering method based
on topologically embedded graphs – the DBHT technique (an abbreviation
the authors do not expand) – applied to Planar Maximally Filtered
Graphs (PMFGs), which works without any use of prior information. A PMFG is
constructed out of a similarity (resp. dissimilarity) matrix, which is given in
our case. As a future project we want to construct a PMFG, the
nodes of which are not companies or countries, but time points, and to apply
then the DBHT technique to this time point network, since for this clustering
procedure no prior information is needed. [1] M.C. Münnix, T. Shimada,
R. Schäfer, F. Leyvraz, T.H. Seligman, T. Guhr and H.E. Stanley,
'Identifying States of a Financial Market', Scientific Reports 2: 644
(2012). [2] W.-M. Song, T. Di Matteo and T. Aste, 'Hierarchical
Information Clustering by Means of Topologically Embedded Graphs',
PLoS ONE 7(3) (2012).
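The pipeline sketched in the abstract (a correlation matrix C(t) per time point, a dissimilarity between time points, and a clustering step that still needs the number of states as prior input) can be illustrated on synthetic returns. The window length, distance measure and k = 4 below are arbitrary choices for illustration, not those of [1]:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# toy daily returns: T days x N stocks (stand-in for real market data)
T, N, window = 200, 15, 40
returns = rng.standard_normal((T, N))

# one correlation matrix C(t) per time point, over a trailing window
corrs = [np.corrcoef(returns[t - window:t].T) for t in range(window, T)]
n = len(corrs)

# dissimilarity of two time points: mean absolute difference of their C(t)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = np.abs(corrs[i] - corrs[j]).mean()
        dist[i, j] = dist[j, i] = d

# hierarchical clustering of time points into k "market states";
# note that k must be supplied as prior information
Z = linkage(squareform(dist), method="average")
states = fcluster(Z, t=4, criterion="maxclust")
```

The DBHT/PMFG idea in [2] would replace the last two lines with a clustering built from the planar graph of the same dissimilarity matrix, removing the need to fix k in advance.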
“Systemic risk as a multiplex: three lessons from three
of its layers” [Abstract TBA]
“Spreading of economic crisis” Does economic crisis spread like an epidemic? On which types of
economic networks? Can we model the spreading of economic crisis with the susceptible-infected-susceptible
(SIS) or susceptible-infected-recovered (SIR)
epidemic model? In this talk, I would like to introduce a set of research
questions and possible approaches about the epidemics on economic networks and
collect inputs and comments from our audience.
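As a minimal illustration of the SIS dynamics mentioned above, the following discrete-time simulation runs the susceptible-infected-susceptible process on a toy network; the graph and the rates beta and mu are arbitrary placeholders, not calibrated to any economic data:

```python
import numpy as np
import networkx as nx

def sis_step(G, infected, beta, mu, rng):
    """One discrete time step of SIS dynamics on graph G."""
    new = set(infected)
    for node in infected:
        # an infected node transmits to each susceptible neighbour w.p. beta
        for nb in G.neighbors(node):
            if nb not in infected and rng.random() < beta:
                new.add(nb)
        # and recovers back to the susceptible state w.p. mu
        if rng.random() < mu:
            new.discard(node)
    return new

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(200, 3, seed=1)  # toy "economic" network
infected = {0}                                # seed the crisis at one node
for _ in range(50):
    infected = sis_step(G, infected, beta=0.2, mu=0.1, rng=rng)
prevalence = len(infected) / G.number_of_nodes()
```

The SIR variant would move recovered nodes into a permanent third state instead of returning them to the susceptible pool.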
“The formation of a core-periphery network in over-the-counter
markets” Recent evidence suggests that financial networks exhibit
a core-periphery network structure. This paper aims to explain
the existence of such a structure using the tools of network formation
theory. Focusing on intermediation benefits, we find that a core-periphery
network cannot be unilaterally stable when agents are homogeneous. A
core-periphery network structure can be explained if we allow for heterogeneity
among agents.
“A Network Perspective on Regulatory Data” Financial sector regulators and supervisors base their
supervisory approach on the concept of the legal entity in their jurisdiction.
Firms, however, are generally not bound by location or precise legal structure.
Moreover, firms can only report their own exposures – not those of the market
as a whole. Our understanding of the risks firms pose, both individually and as
a group, is therefore limited. Fortunately the collection of data – a necessary first
step – has gained momentum since the 2007-2009 crisis. This opens up the
possibility of improving our understanding of the risk profile not only of individual
institutions but also of the system as a whole. In this talk I will discuss current gaps, how these gaps
are being tackled and what we could expect network methods to contribute.
I propose a stylised dynamic
model of "globalization," understood as the process by which even
agents who are geographically far apart come to interact, thus being able to
overcome what would otherwise be a fast saturation of local opportunities. One
of the main insights of the model is that, in order for the social network to
turn global, the economy needs to display a degree of "cohesion" that
is neither too high (for then global opportunities simply do not arise) nor too
low (then the meeting mechanism displays too little structure for the process
to take off). Our model of the phenomenon admits an interpretation at different
scales, from the micro level (say, at the level of an organization) to a macro
perspective (e.g. at the level of countries). Focusing on the latter, I will
provide systematic empirical evidence that, at the world level, countries that
are more globalized indeed perform better, i.e. they grow faster. This adds a
novel network perspective to economic growth that enriches the received
approach to the phenomenon.
“Innovation diffusion in networks: the microeconomics of
percolation” We implement a diffusion model for an innovative product
in a market with a structure of social relationships. Diffusion is described
with a percolation approach in the price space. Percolation shows a phase
transition from a diffusion to a no-diffusion regime.
This has strong implications for market demand and pricing. Small worlds are
often mentioned as being efficient in spreading innovation, due to short cuts
leading to a short average path length. We show that diffusion, if defined as a
percolation process in line with microeconomic theory, actually benefits from
low clustering rather than low average path length. Network connectivity
"spreading" is the most important factor for diffusion size. Hence, social
structures with low clustering ("individualistic society") are most
beneficial for innovation to spread.
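A percolation-in-price-space mechanism of the kind described above can be sketched as follows: each agent draws a private reservation price, and the innovation spreads through the connected cluster of agents willing to buy at the posted price. This is a simplified stand-in for the authors' model, with all parameters chosen purely for illustration:

```python
import numpy as np
import networkx as nx

def diffusion_size(G, price, seed_node=0, seed=0):
    """Percolation-style diffusion: an agent adopts the innovation if a
    neighbour has adopted AND its private reservation price (uniform on
    [0, 1] here) is at least the posted market price."""
    rng = np.random.default_rng(seed)
    reservation = {v: rng.random() for v in G}
    willing = {v for v in G if reservation[v] >= price}
    if seed_node not in willing:
        return 0
    # adoption spreads over the subgraph of willing agents (site percolation)
    H = G.subgraph(willing)
    return len(nx.node_connected_component(H, seed_node))

# small-world network of social relationships (illustrative)
G = nx.watts_strogatz_graph(500, 6, 0.1, seed=2)
sizes = [diffusion_size(G, p) for p in (0.2, 0.5, 0.8)]
```

Raising the price shrinks the set of willing agents and, past a percolation threshold, the diffusion regime collapses to a no-diffusion regime, which is the phase transition the abstract refers to.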