Bernat Corominas-Murtra

Medical University of Vienna
The comprehension of the mechanisms behind scaling patterns has become one of the hot topics of modern statistical physics [1]. From complex networks to critical phenomena, scaling laws emerge in a strikingly regular way. In these two lectures we will review part of the current explanatory proposals for the emergence of such behaviours, with special emphasis on the so-called Zipf's law [2]. Under this distribution, the probability of observing the k-th most common event scales as 1/k. Its remarkable ubiquity spans from word frequencies in written texts to the distribution of city sizes, family names, wealth, or the size of avalanches in systems exhibiting self-organized criticality. Its origin, and the consequences one can draw from its observation in real systems, are the matter of an intense debate.
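As a quick numerical sanity check (an illustrative sketch, not part of the lectures themselves), one can sample events from an exact 1/k rank distribution and recover the Zipf exponent from a log-log fit of frequency against rank:

```python
import numpy as np

rng = np.random.default_rng(0)

# A Zipf (1/k) rank distribution over N event types.
N = 1000
p = 1.0 / np.arange(1, N + 1)
p /= p.sum()

# Draw a large sample and recover the empirical rank-frequency curve.
sample = rng.choice(N, size=200_000, p=p)
counts = np.sort(np.bincount(sample, minlength=N))[::-1]

# The log-log slope of frequency vs. rank should be close to -1.
ranks = np.arange(1, 101)  # fit the top 100 ranks
slope, _ = np.polyfit(np.log(ranks), np.log(counts[:100]), 1)
print(round(slope, 2))
```

The fitted slope comes out very close to -1, the hallmark of Zipf's law; the sample size and fitting range above are arbitrary illustrative choices.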
We will start with a short critical review of the scope of the results that can be derived from the statistical study of scaling behaviours, from both the fundamental and the practical side [3,4]. We will then briefly review some of the standard approaches to the emergence of such statistical behaviours, such as the 'rich-gets-richer' class of models or the critical exponents arising at the percolation threshold of a random graph [5].
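To give a flavour of the 'rich-gets-richer' class, Simon's classic urn scheme is a standard member of it: each new token is either a brand-new type (with small probability) or a copy of a past token drawn uniformly, i.e. proportionally to current frequency. A minimal simulation (the parameter choices below are illustrative) already produces an approximately Zipfian rank-frequency curve:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simon's 'rich-gets-richer' urn model: at each step, with probability
# alpha introduce a brand-new type; otherwise repeat a token drawn
# uniformly from the past, i.e. proportionally to current frequency.
alpha, steps = 0.1, 200_000
tokens = [0]
n_types = 1
for _ in range(steps):
    if rng.random() < alpha:
        tokens.append(n_types)
        n_types += 1
    else:
        tokens.append(tokens[rng.integers(len(tokens))])

# Rank-frequency curve: for small alpha the decay is close to Zipf
# (the asymptotic rank exponent is 1 - alpha).
counts = np.sort(np.bincount(tokens))[::-1]
ranks = np.arange(1, 51)
slope, _ = np.polyfit(np.log(ranks), np.log(counts[:50]), 1)
print(round(slope, 2))
```

The fitted slope in the head of the distribution is close to -(1 - alpha), i.e. near Zipf's -1 for small innovation rates.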
The main focus will be on two non-standard frameworks that nevertheless have a huge explanatory potential. The first is the study of information-theoretic constraints on the evolution of complex codes. This leads to a mathematical formalisation of the so-called "least effort" hypothesis, informally proposed by G.K. Zipf as the origin of the scaling behaviour observed in complex codes [2,6,7]. The increase of complexity under information-theoretic constraints has, however, a broader range of application than communication systems alone [8].
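To give a flavour of how "least effort" can be formalised: one common information-theoretic rendering, in the spirit of [6,7] (the functional below is a simplified, hypothetical stand-in, not the exact one developed there), weighs the speaker's coding effort H(S) against the hearer's decoding effort H(R|S) through a cost Ω(λ) = λ·H(R|S) + (1−λ)·H(S) over binary signal-object mappings. Even a toy computation shows the tension between the two extreme codes:

```python
import numpy as np

def efforts(A):
    """Speaker effort H(S) and hearer effort H(R|S) for a binary
    signal-object matrix A, taking the joint p(s, r) proportional to A."""
    p = A / A.sum()
    ps = p.sum(axis=1)  # signal marginal
    H_S = -np.sum(ps[ps > 0] * np.log2(ps[ps > 0]))
    H_SR = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return H_S, H_SR - H_S  # H(R|S) = H(S,R) - H(S)

n = 8
one_to_one = np.eye(n)        # unambiguous code: hearer effort is zero
single_signal = np.zeros((n, n))
single_signal[0] = 1          # one signal for everything: speaker effort zero

# The optimal extreme flips as lambda crosses the balance point 1/2.
for lam in (0.25, 0.75):
    for name, A in [("one-to-one", one_to_one), ("one-signal", single_signal)]:
        H_S, H_RgS = efforts(A)
        print(lam, name, round(lam * H_RgS + (1 - lam) * H_S, 2))
```

With n = 8 signals and objects, the one-to-one code costs (1−λ)·3 bits and the one-signal code λ·3 bits, so which code minimises Ω flips at λ = 1/2; Zipf-like ambiguity is argued in [7] to emerge near that balance point.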
The second framework is that of "Sample Space Reducing" (SSR) processes, a recently introduced family of stochastic processes displaying a minimal degree of history dependence [9]. SSR processes offer a totally new route to scaling, which can explain a wide range of power-law exponents from the single assumption that the sample space shrinks as the process unfolds. The intuitive rationale behind SSR processes, together with the surprisingly simple mathematical apparatus needed to understand them, makes this approach a new research area with promising applications.
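The defining rule of an SSR process is easy to simulate with the dice picture of [9]: throw an N-faced die and, if the outcome is k, continue with a (k−1)-faced die, repeating until 1 is reached. The visit frequencies of the states then follow Zipf's law, p(i) ∝ 1/i. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# SSR process: throw an N-faced die; if the outcome is k, throw a
# (k-1)-faced die next, and so on until state 1 is reached.
N, runs = 100, 50_000
visits = np.zeros(N + 1)
for _ in range(runs):
    state = rng.integers(1, N + 1)  # first throw: uniform on {1, ..., N}
    visits[state] += 1
    while state > 1:
        state = rng.integers(1, state)  # next die has state - 1 faces
        visits[state] += 1

# Compare the visit frequencies with the exact Zipf prediction 1/i.
i = np.arange(1, N + 1)
freq = visits[1:] / visits[1:].sum()
zipf = (1 / i) / (1 / i).sum()
print(round(np.abs(freq - zipf).max(), 3))
```

The maximum deviation from the exact 1/i prediction is tiny already for this modest number of runs, reflecting how little machinery the SSR result requires.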
[1] M.E.J. Newman. Power laws, Pareto distributions and Zipf's law. Contemp. Phys. 46 (2005) 323–351.
[2] G.K. Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, Reading, MA, 1949.
[3] G.A. Miller, N. Chomsky. Finitary Models of Language Users. In: Handbook of Mathematical Psychology, Vol. II. Ed. R.D. Luce, R.R. Bush & E. Galanter. New York, Wiley, 1963.
[4] A. Clauset, C.R. Shalizi, M.E.J. Newman. Power-law distributions in empirical data. SIAM Rev. 51 (2009) 661–703.
[5] M.E.J. Newman, S.H. Strogatz, D.J. Watts. Random graphs with arbitrary degree distributions and their applications. Phys. Rev. E, 64 (2001) 026118.
[6] P. Harremoes, F. Topsoe. Maximum entropy fundamentals. Entropy, 3 (2001) 191–226.
[7] B. Corominas-Murtra, J. Fortuny, R.V. Sole. Emergence of Zipf’s law in the evolution of communication. Phys. Rev. E, 83 (2011) 036115.
[8] B. Corominas-Murtra, R.V. Sole. Universality of Zipf’s law. Phys. Rev. E, 82 (2010) 011102.
[9] B. Corominas-Murtra, R. Hanel, S. Thurner. Understanding Zipf's law with playing dice: history-dependent stochastic processes with collapsing sample-space have power-law rank distributions. J. R. Soc. Interface, 12 (2015) 20150330.