Random walks in random environment with Markov dependence on time

1 Dipartimento di Matematica, Università di Roma “La Sapienza”, Piazzale Aldo Moro 2, 00185 Roma, Italy. Partially supported by INdAM (G.N.F.M.) and M.U.R.S.T. research funds.
2 Institute for Problems of Information Transmission, Russian Academy of Sciences, B. Karetnyi Per. 19, 127994, GSP–4, Moscow, Russia. Partially supported by RFBR grants 99–01–024, 97–01–00714 and CRDF research funds N RM1–2085.
3 Dipartimento di Matematica, Università di Roma Tre, Largo S. Leonardo Murialdo 1, 00146 Roma, Italy. Partially supported by INdAM (G.N.F.M.) and M.U.R.S.T. research funds.


Introduction
There are many models of random walk in random environment, describing interesting phenomena of different kinds. Here we are interested in the case of a discrete-time random walk on Z^ν, ν = 1, 2, ..., depending on an environment which evolves in time as a Markov chain, with local dependence in space.
The environment is a random field ξ_t = {ξ_t(x) : x ∈ Z^ν}, t ∈ Z_+ = {0, 1, ...}, with ξ_t(x) ∈ Λ, where Λ is the set of the local values of the field. Ω = Λ^{Z^ν} is the state space of the environment at a given time, and Ω̄ = Λ^{Z^ν×Z_+} is the space of the trajectories of the environment. On Ω and Ω̄ we consider the topology generated by the cylinder sets, and the corresponding Borel σ-algebras.
Throughout the present exposition we consider a version of the model which is as simple as possible, so that the basic facts and constructions are easier to understand. For a general overview we refer to the recent paper [1].
The environment. We assume that the evolution of the environment is given at each site x ∈ Z^ν by an independent copy of an ergodic Markov chain (the local Markov chain) with two states: Λ = {±1}. We take a symmetric stochastic matrix Q, so that the invariant measure is π = (1/2, 1/2). The eigenvalues of Q are µ_0 = 1 and µ_1 = µ, corresponding to the eigenvectors e_0 = (1, 1) and e_1 = (1, −1), respectively. We assume µ ≠ 0; the case µ = 0 corresponds to independence in time and will be only briefly discussed below.
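As an illustration (not part of the paper's formalism), the spectral facts just stated can be checked numerically. The parametrization Q = [[(1+µ)/2, (1−µ)/2], [(1−µ)/2, (1+µ)/2]] below is simply the general symmetric stochastic 2×2 matrix whose second eigenvalue is µ:

```python
import random

def local_Q(mu):
    # General symmetric stochastic 2x2 matrix with eigenvalues 1 and mu.
    p = (1.0 + mu) / 2.0
    return [[p, 1.0 - p], [1.0 - p, p]]

def apply_Q(Q, v):
    # Matrix-vector product Q v for a 2x2 matrix.
    return [Q[0][0] * v[0] + Q[0][1] * v[1],
            Q[1][0] * v[0] + Q[1][1] * v[1]]

def local_step(state, mu, rng):
    # One step of the local chain on Lambda = {+1, -1}:
    # keep the current state with probability (1 + mu) / 2.
    return state if rng.random() < (1.0 + mu) / 2.0 else -state

mu = 0.4
Q = local_Q(mu)
# e0 = (1, 1) is invariant (eigenvalue 1); e1 = (1, -1) is contracted by mu.
print(apply_Q(Q, [1.0, 1.0]))    # eigenvalue 1
print(apply_Q(Q, [1.0, -1.0]))   # eigenvalue mu (up to rounding)

rng = random.Random(1)
traj = [1]
for _ in range(5):
    traj.append(local_step(traj[-1], mu, rng))
print(traj)    # a short sample path of the local chain
```

The invariance of π = (1/2, 1/2) and the contraction of e_1 by the factor µ are exactly the two facts used repeatedly in section 2.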
The environment ξ_t evolves as a Markov chain given by the product of the local Markov chains, and the product measure Π = π^{Z^ν} is a stationary measure for the environment. The measure induced by an initial probability Π_0 on the space of the trajectories Ω̄ = Λ^{Z^ν×Z_+} is denoted by ℘_{Π_0}. Π_0 can also be a δ-measure, which amounts to considering conditional probabilities for a fixed initial configuration of the environment.
As is usual for random walks in random environments, one can consider two different problems that require different techniques and can in fact be considered as two different models. In physical terminology they are called the "quenched" and the "annealed" random walk.
The "quenched" random walk. It is the inhomogeneous random walk for a fixed trajectory of the environment ξ̄ = {ξ_t(x) : (t, x) ∈ Z_+ × Z^ν}. We assume transition probabilities of the form

P(X_{t+1} = x + u | X_t = x, ξ̄) = P_0(u) + a c(u) ξ_t(x),  u ∈ Z^ν,  (1.2)

where P_0 is a non-degenerate random walk on Z^ν, such that the quadratic form of the second derivatives of its characteristic function p̂_0(λ) := ∑_v P_0(v) e^{i(λ,v)}, λ ∈ T^ν, where T^ν is the ν-dimensional torus, is negative definite at λ = 0. The parameter a ∈ (−1, 1) and the function c(u) are such that ∑_u |c(u)| ≤ 1 and P_0(u) ± a c(u) ∈ [0, 1) for all u ∈ Z^ν. We also assume two conditions that greatly simplify the analysis: finite range, i.e., P_0(u) = c(u) = 0 if u ∉ D, for some finite D ⊂ Z^ν, and the condition (1.3). It is often convenient to consider P_0 and c to be fixed and a to be a variable parameter, which provides the weight of the stochastic term. As both P_0 and the expression in (1.2) sum up to 1, we have ∑_u c(u) = 0.
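To fix ideas, here is a minimal simulation of one quenched step. The concrete choice ν = 1, D = {−1, +1}, P_0(±1) = 1/2, c(±1) = ±1/2 (so that ∑_u c(u) = 0 and ∑_u |c(u)| = 1) is a hypothetical example, not the general model:

```python
import random

def quenched_step(x, xi_at_x, a, rng):
    # One step of (1.2) in the toy model: jump to x+1 with probability
    # P0(1) + a*c(1)*xi = 1/2 + (a/2)*xi, and to x-1 otherwise.
    p_right = 0.5 + 0.5 * a * xi_at_x
    return x + 1 if rng.random() < p_right else x - 1

rng = random.Random(0)
a = 0.3
x = 0
for t in range(10):
    xi = rng.choice((-1, 1))   # stand-in for the field xi_t(x) at the walker
    x = quenched_step(x, xi, a, rng)
print(x)   # final position after 10 steps
```

Averaging such runs over independent environment histories produces the annealed law discussed below.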
The pair (ξ_t, X_t), t ∈ Z_+, where ξ_t is the Markov environment and the distribution of X_{t+1}, for X_t and ξ_t fixed, is given by (1.2), is also a Markov chain, which describes the joint evolution of the environment and the random walk. We assume that the walk starts at the origin, so that the initial distribution of (ξ_0, X_0) is Π_0 × δ_{X_0,0}, for some initial probability measure Π_0 on Ω. The corresponding measure on the space of the trajectories of the joint evolution is denoted by ℘_{Π_0,0}.
The "annealed" random walk. Starting with the quenched transition probabilities (1.2), one can consider the marginal distribution of X_t induced by ℘_{Π_0,0}. This is the "annealed" random walk, also called the "averaged" random walk, as we take averages over the environment. It is not a Markov process.
As is usual for such models, the annealed random walk is easier to deal with than the quenched one, especially in the low dimensions ν = 1, 2. However, all the results obtained so far, for either the annealed or the quenched case, require a smallness condition: either the stochastic term or the time dependence (represented by the parameters |a| and |µ|, respectively) must be small enough.
As a guideline to the understanding of the possible behavior of random walks with a Markov dependence in time, one can look at the results for the time-independent model (corresponding to µ = 0). In this case the annealed random walk is trivial and coincides with the random walk with transition probabilities P_0. The quenched random walk is not trivial, and has been completely solved quite recently [7], for all the values of the parameter a that preserve non-degeneracy. For all such values, the quenched random walk is diffusive, for almost all histories of the environment, with the same correlation matrix as for the unperturbed random walk. The environment only affects the corrections to the leading term of the Central Limit Theorem: there are only [(ν+1)/2] environment-dependent terms in the asymptotic expansion for large t.
For our model with Markov dependence, the annealed random walk, due to time correlations, differs from the random walk with transition probabilities P_0, but we can still expect, in analogy with the time-independent case, that, for small |µ| or small |a|, both the annealed and the quenched random walks are diffusive with the same leading term (almost surely, in the quenched case). In fact, for ν ≥ 3 this has been proved in [6], and at present results in the low dimensions ν = 1, 2 are available as well [8].
The main open problem for the case of Markov dependence in time is to understand whether there are thresholds, i.e., critical values of the parameters a, µ, at which either the annealed or the quenched random walk exhibits a transition to a different asymptotic behavior. Models of random walk with Markov dependence in time and local dependence in space, such as the one we consider here, have been studied by different methods. Here we will be mainly concerned with an approach based on the construction of invariant subspaces for the stochastic operator (or "transfer matrix", in physical terminology) of the process.
Other possible approaches are based on graph expansion [4], on renovation times [2] and martingale methods [8].
In section 2 we will show in some detail how one can construct, in the standard L² space of the joint process, appropriate invariant subspaces for the transfer matrix. Due to the simple form of the transition probabilities (1.2), we will be able to give a simple proof of this central fact, with an explicit inequality for the parameters a, µ instead of the usual smallness condition. We will then show how this result makes it possible to prove the Central Limit Theorem for the annealed random walk.
Section 3 of our exposition is devoted to a review of the main results that have been obtained for this model by the study of invariant subspaces of the transfer matrix and spectral methods. The most relevant of these results are related to the existence and the properties of "the environment from the point of view of the random walk".
The results reviewed in section 3 are stated for the model described in section 1, with possible additional hypotheses. We cannot report full proofs, but we give some indication of the methods.

Invariant subspaces for the transfer matrix
The transfer matrix (or stochastic operator) of the joint process (ξ_t, X_t), t ∈ Z_+, is a linear operator which acts on the bounded measurable functions on Ω × Z^ν as

(T f)(ξ, x) = E[f(ξ_1, X_1) | ξ_0 = ξ, X_0 = x].  (2.1)

We introduce, for any bounded subset Γ ⊂ Z^ν, the function

Ψ_Γ(ξ) = ∏_{x∈Γ} e_1(ξ(x)).

(For convenience we identify the eigenvector e_1 = (1, −1) with the identity function on Λ = {±1}.) Clearly e_1²(s) = e_0(s) ≡ 1, s = ±1, and (e_1, e_0)_π = 0, so that, if we take the scalar product in L²(Ω, Π) (Π = π^{Z^ν} is the stationary measure), we have

(Ψ_Γ, Ψ_{Γ′})_{L²(Ω,Π)} = δ_{Γ,Γ′},  Γ, Γ′ ∈ G,

where G denotes the class of the bounded subsets of Z^ν. The set {Ψ_Γ}_{Γ∈G} is clearly an orthonormal basis in L²(Ω, Π).
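On a finite box the orthonormality of {Ψ_Γ} can be verified by direct enumeration; the two-site box {0, 1} below is, of course, only an illustrative finite-volume stand-in for Z^ν:

```python
from itertools import product

# Check that Psi_Gamma(xi) = prod_{x in Gamma} e1(xi(x)), with e1(s) = s
# on {+1, -1}, form an orthonormal family in L^2 with respect to the
# product measure pi x pi, pi = (1/2, 1/2), on a two-site box {0, 1}.
subsets = [(), (0,), (1,), (0, 1)]

def psi(gamma, xi):
    out = 1
    for x in gamma:
        out *= xi[x]          # e1(s) = s
    return out

def inner(g1, g2):
    # <f, g>_Pi: sum over the 4 configurations, each of weight 1/4.
    return sum(psi(g1, xi) * psi(g2, xi)
               for xi in product((1, -1), repeat=2)) / 4.0

for g1 in subsets:
    for g2 in subsets:
        expected = 1.0 if g1 == g2 else 0.0
        assert inner(g1, g2) == expected
print("orthonormal")
```

The empty set gives Ψ_∅ ≡ 1, and orthogonality between different Γ is exactly the vanishing of the odd moments of e_1 under π.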
In the Hilbert space H of the joint process we take as orthonormal basis the set {Ψ_{Γ,z}}_{Γ∈G, z∈Z^ν}, with

Ψ_{Γ,z}(ξ, x) = Ψ_Γ(ξ) δ_{x,z}.

For f ∈ H we write the corresponding expansion as

f = ∑_{Γ∈G, z∈Z^ν} f_{Γ,z} Ψ_{Γ,z}.

By computing the conditional expectation in (2.1) we get a formula, (2.5), which gives the explicit expression of the components of T, where |•| denotes the cardinality of a set. Let ξ + u, u ∈ Z^ν, ξ ∈ Ω, denote the space translations. The Fourier transform of the elements of H is defined as

f̂_Γ(λ) = ∑_{z∈Z^ν} f_{Γ,z} e^{i(λ,z)},  λ ∈ T^ν;

f̂(λ) is, for almost all λ ∈ T^ν, a function of L²(Ω, Π), so that it can be expanded in terms of the orthonormal basis {Ψ_Γ}_{Γ∈G}. By translation invariance we have T_{Γ,x;Γ′,x′} = T_{Γ−x′,x−x′;Γ′−x′,0}, and the Fourier transform T̂(λ) of T has components T̂_{ΓΓ′}(λ) = ∑_u T_{Γ,u;Γ′,0} e^{i(λ,u)}.
Setting Γ′ = ∅ in (2.5), we find that T̂_{Γ∅}(λ) = 0 except for Γ = ∅, {0}; the non-vanishing components are given by (2.8), where ĉ(λ) = ∑_u c(u) e^{i(λ,u)} is the Fourier transform of c(•). For Γ′ ≠ ∅, a similar computation gives the expressions (2.9). All other components of T̂(λ) vanish. For a = 0, the random walk does not depend on the environment, and we have non-vanishing components only if Γ′ is a shift of Γ. For a ≠ 0, new components appear only if Γ′ is a shift of Γ followed by creation or annihilation of a point at the origin. We are interested in the action of T̂(λ) on a convenient subspace of regular vectors H_M, the collection of all vectors {f_Γ}_{Γ∈G} with components that fall off so fast that the norm (2.10) is finite, for a suitable M > 1. H_M, with the norm (2.10), is a Banach space, and it is easy to see, from the explicit expressions (2.8), (2.9), that T̂(λ) is a bounded operator on H_M.
We will also consider the adjoint T̂†(λ) of T̂(λ), with components T̂†_{ΓΓ′}(λ) = T̂_{Γ′Γ}(λ), which is also a bounded operator on H_M.

Theorem 2.1 If |µ|(1 + |a|) < 1, one can find M > 1 and a neighborhood U ⊂ T^ν of the origin such that for λ ∈ U the eigenvalue equations (2.11)

T̂(λ) χ(λ) = p(λ) χ(λ),   T̂†(λ) χ*(λ) = p*(λ) χ*(λ),

with the normalization condition χ_∅(λ) = χ*_∅(λ) = 1, have a unique solution in H_M, which is analytic in λ for λ ∈ U. Moreover, the space H_M is represented as a sum of two subspaces, invariant with respect to T̂(λ):

H_M = {χ(λ)} ⊕ h_λ,   h_λ = {f ∈ H_M : (f, χ*(λ)) = 0}.

Here {•} denotes the linear span of a vector. The component Γ = ∅ of the first equation in (2.11) gives

p(λ) = p̂_0(−λ) + a ĉ(−λ) χ_{{0}}(λ).   (2.12)

Substituting, we get a quadratic equation, (2.13), for the vector χ̃(λ) ∈ H̃_M, where χ̃_Γ(λ) = χ_Γ(λ), Γ ≠ ∅, and H̃_M is the subspace of the functions f ∈ H_M with f_∅ = 0. The equation involves linear operators A_λ, C_λ on H̃_M, which are bounded and such that ‖A_λ‖_{H̃_M} = 1, ‖C_λ‖_{H̃_M} ≤ 1, together with a further linear operator G_λ, given by an explicit expression on a restricted class of sets Γ, with (G_λ f)_Γ = 0 in all other cases. G_λ is also bounded, and ‖G_λ‖_{H̃_M} ≤ |aµ| M.
In order to solve (2.13) we apply the contraction principle in H̃_M. As ĉ(0) = ∑_u c(u) = 0, the quantity s := sup_{λ∈U} M |a ĉ(λ)| / |p̂_0(−λ)| can be made as small as we desire by a suitable choice of U. The norm of the right side of (2.13) is then bounded by s + ζ‖f‖(1 + ‖f‖), where ζ < 1 is a constant, and, for s small enough, by an appropriate choice of U, we can find r > 0 such that s + ζr(1 + r) < r. Hence the operator defined by the right side of (2.13) maps the ball of radius r in H̃_M into itself. It is also a contraction on this ball, as is easy to check, so that, by the contraction principle, we have a unique fixed point χ̃(λ) ∈ H̃_M for λ ∈ U.
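The mechanism of the proof can be illustrated by a scalar caricature of (2.13): iterating x ↦ s + ζx(1 + x) on a ball that the condition s + ζr(1 + r) < r makes invariant. The numbers s, ζ, r below are illustrative and not taken from the model:

```python
# Toy scalar analogue of the fixed-point argument for (2.13): the map
# x -> s + zeta * x * (1 + x) sends [0, r] into itself whenever
# s + zeta * r * (1 + r) < r, and the iteration then converges to the
# unique fixed point in that interval.
def iterate_map(s, zeta, n=200):
    x = 0.0
    for _ in range(n):
        x = s + zeta * x * (1.0 + x)
    return x

s, zeta, r = 0.05, 0.3, 0.2
assert s + zeta * r * (1 + r) < r      # the ball of radius r is invariant
x_star = iterate_map(s, zeta)
residual = abs(x_star - (s + zeta * x_star * (1 + x_star)))
print(x_star < r, residual < 1e-12)    # True True
```

On [0, r] the derivative of the map is ζ(1 + 2x) ≤ ζ(1 + 2r) < 1 for these values, which is the contraction property used above.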
Passing now to the second equation in (2.11), one gets, similarly to equation (2.12),

p*(λ) = p(λ) = p̂_0(λ) + a ĉ(λ) χ*_{{0}}(λ),  (2.14)

and, introducing the adjoint A†_λ of A_λ and a further operator Ḡ_λ, we get the analogue of equation (2.13) for χ̃*(λ) ∈ H̃_M, equation (2.15), where g_Γ(λ) = aµ ∑_v e^{−i(λ,v)} c(−v) I_{Γ={v}}. Since the norm of Ḡ_λ has the same bound as that of G_λ, the right-hand side of (2.15) admits a similar bound. Taking into account that s is as small as we desire for λ near the origin, we see, following the same lines as for equation (2.13), that for ζ < 1 we can again apply the contraction principle in some appropriate neighborhood U of λ = 0.
Observe that the fixed point χ(λ) and the eigenvalue p(λ) are analytic for λ ∈ U, as a consequence of the analyticity of the matrix elements T̂_{ΓΓ′}(λ).
Theorem 2.1 now follows from a well-known fact of the theory of linear operators.
As we will see, the leading eigenvalue of T̂(λ) is p(−λ), and the linear span {χ(λ)} of the eigenvector can be called the "leading" one-particle subspace.

Central Limit Theorem for the annealed random walk
We consider the probabilities of the annealed random walk starting at the origin for a fixed initial configuration of the environment ξ ∈ Ω, i.e., we take a δ-measure as initial distribution for the environment. Such probabilities are written as

P_{δ_ξ,0}(X_t = x) = ⟨Φ^{(x)}(ξ_t, X_t)⟩ = (T^t Φ^{(x)})(ξ, 0),

where ξ̄ denotes, as in (1.1), a point in the space of the trajectories of the environment, Φ^{(x)}(ξ, y) = δ_{x,y} = Ψ_{∅,x}(ξ, y), δ_ξ is the initial distribution of the environment, ℘_{δ_ξ} the corresponding measure over the trajectories of the environment, and we use angular brackets ⟨•⟩ to denote averaging. The Fourier transform is easily computed, (Φ̂^{(x)}(λ))_Γ = δ_{Γ,∅} e^{i(λ,x)}, so that, applying the inverse Fourier transform (see (2.6), (2.7)), we obtain the representation (2.16) of the probabilities as an integral over the torus. By Theorem 2.1, the vector δ_{Γ,∅} can, for λ ∈ U, be decomposed along the invariant subspaces as

δ_{Γ,∅} = d(λ) χ_Γ(λ) + δ_Γ(λ),  δ(λ) ∈ h_λ.  (2.17)

Observe that if a = 0, h_λ is just the span of {Ψ_Γ, Γ ≠ ∅}, and the spectrum of the restriction of T̂(λ) to h_λ is contained in the interval (−|µ|, |µ|). Hence it is almost evident from the expression of the matrix elements (2.8), (2.9) (see [3]) that if a is small enough, then the spectrum of the restriction of T̂(λ) to h_λ is contained in a disc of radius μ̃ < 1 for all λ ∈ U, and, as a → 0, μ̃ → |µ| (relation (2.18)). Furthermore, a similar argument, based on the explicit expression (2.8), (2.9) of the matrix elements, and on condition (1.3), shows that for a small we have, for some μ̃′ < 1, an analogous bound, (2.19), for T̂(λ) with λ outside U. Finally, by an inspection of (2.12), (2.13), we see that χ(0) = 0 and p(0) = 1. Moreover, for a small, p(λ) is very close to p̂_0(−λ), its first derivatives at λ = 0 are purely imaginary, by reality, and the quadratic form of the second derivatives is negative definite, i.e., the expansion of log p(λ) at λ = 0 is of the form

log p(λ) = i(b, λ) − (1/2)(Aλ, λ) + O(|λ|³),  (2.20)

where b ∈ R^ν and A is a positive definite matrix. p(λ) plays the same role as a characteristic function in the usual CLT for i.i.d. variables. In fact, by (2.19) the contribution of the integral outside U in (2.16) falls off exponentially fast in t, and the same happens, by (2.18), to the contribution involving δ(λ) in the representation (2.17). The main contribution is then given by the integral over U of the first term in the representation (2.17). One can now obtain the local Central Limit Theorem in the usual way, by expanding log p(λ) at λ = 0, and recalling that χ_Γ(0) = 0, Γ ≠ ∅, which implies d(0) = 1. Taking into account the expansion (2.20), we arrive at the following theorem.

Theorem 2.2 (Local Central Limit Theorem for the annealed random walk) If a is small enough, then the asymptotics, as t → ∞, of the probabilities of the annealed random walk, in the region |x − bt| < t^α, for α < 2/3, is given, for any choice ξ of the initial configuration of the environment, by the formula

P_{δ_ξ,0}(X_t = x) = (2πt)^{−ν/2} C^{−1} exp{−(A^{−1}(x − bt), x − bt)/(2t)} (1 + o(1)),

where the ν × ν matrix A is defined by the expansion (2.20), C = √(det A), and the correction o(1) is uniformly small for x in the given range.
The leading term of the asymptotics does not depend on the initial environment ξ because, as we already observed, χ_Γ(0) = 0 for Γ ≠ ∅. One can compute the following terms of the usual expansion in powers of t^{−1/2}, which do depend on ξ.
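Under the expansion (2.20), the computation behind Theorem 2.2 is, schematically, the classical Fourier-inversion one (written here in abridged form; d(λ) is the coefficient in (2.17), and the error terms are controlled by (2.18), (2.19)):

```latex
P_{\delta_\xi,0}(X_t = x)
  \;\approx\; \frac{1}{(2\pi)^{\nu}} \int_{\mathcal U}
      e^{-i(\lambda,x)}\, d(\lambda)\, p(\lambda)^{t}\, d\lambda
  \;\approx\; \frac{1}{(2\pi)^{\nu}} \int_{\mathbb R^{\nu}}
      e^{-i(\lambda,\,x-bt) - \frac{t}{2}(A\lambda,\lambda)}\, d\lambda
  \;=\; \frac{1}{(2\pi t)^{\nu/2}\sqrt{\det A}}\,
      \exp\Big\{-\frac{(A^{-1}(x-bt),\,x-bt)}{2t}\Big\}.
```

The second step uses d(0) = 1 and the scaling λ → λ/√t; the gaussian integral then gives the last equality, with C = √(det A) as in Theorem 2.2.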

Random walk of two particles
The annealed random walk of two particles in the same environment can also be studied by constructing invariant subspaces for the transfer matrix [3]. The task is, however, technically more involved, and requires, as we will see, additional assumptions.
Within the framework of the model of section 1, for simplicity we consider the case when the quenched random walks are independent:

P(X^{(1)}_{t+1} = x_1 + u_1, X^{(2)}_{t+1} = x_2 + u_2 | ξ_t, X^{(1)}_t = x_1, X^{(2)}_t = x_2) = (P_0(u_1) + a c(u_1) ξ_t(x_1)) (P_0(u_2) + a c(u_2) ξ_t(x_2)),  u_1, u_2 ∈ Z^ν,

but it would not be hard to allow for a short-range interaction between the particles.
By Theorem 3.1, the leading term in the asymptotics of the annealed probabilities is given by the first element in the decomposition (3.2). One is then reduced to considering the restriction T^{(2)}|_{H^{(2)}} of T^{(2)} to H^{(2)}. As is usual for asymptotic expansions in such cases, it is convenient to apply the general resolvent formula: if R(ζ) is the resolvent of T^{(2)}|_{H^{(2)}}, then

(T^{(2)}|_{H^{(2)}})^t = (1/2πi) ∮_γ ζ^t R(ζ) dζ,

where γ is a path in C which goes around the spectrum of T^{(2)}|_{H^{(2)}}. The computations are fairly involved and are performed in the appropriate Fourier space. As is well known, the asymptotics is determined by the leading part of the spectrum, which is obtained by analyzing the singularities of the resolvent R(ζ). The new conditions on p̂_0(λ), ĉ(λ) (which imply that the drift vanishes: b = 0) are assumed in order for the analysis of the singularities of the kernel of the resolvent to be manageable. We also need the dimension to be at least 3. The result can be stated as follows.
Theorem 3.2 If ν ≥ 3, under the same assumptions as for Theorem 3.1, for any bounded measurable set G ⊂ R^ν × R^ν we have, as t → ∞, for any initial configuration ξ,

℘^{(2)}_{δ_ξ,0}((X^{(1)}_t/√t, X^{(2)}_t/√t) ∈ G) → ∫_G g(u_1) g(u_2) du_1 du_2,

where g(u), u ∈ R^ν, is the limiting gaussian density appearing in Theorem 2.2.

Quenched random walk
The quenched Central Limit Theorem for the random walk of one particle follows, at least in the L² sense, from the annealed random walk of two particles. In fact, denoting for brevity the quenched probabilities, for a fixed trajectory ξ̄ of the environment, by P^{ξ̄}_t(x) = P_{ξ̄}(X_t = x | X_0 = 0), we have, for any bounded measurable set G ⊂ R^ν,

E_℘[(P^{ξ̄}_t(√t G))²] = ℘^{(2)}((X^{(1)}_t/√t, X^{(2)}_t/√t) ∈ G × G),

since, for a fixed trajectory of the environment, the two walks are independent. Now Theorem 3.2 implies that, in the L² sense, we have, as t → ∞, for any initial configuration ξ of the environment,

P^{ξ̄}_t(√t G) → ∫_G g(u) du,

where g(u) is again the limiting gaussian density of Theorem 2.2. Almost-sure asymptotics requires accurate estimates of the correction terms. This is done in [6], where the quenched CLT is proved for ν ≥ 3, almost surely with respect to ℘_Π, where Π = π^{Z^ν} is the equilibrium measure for the environment. The result is as follows.
Theorem 3.3 For ν ≥ 3, if, in addition to the previous assumptions of section 1, we assume that min_{λ∈T^ν} |p̂_0(λ)| > |µ|, then, for a small enough, one can find a subset Ω̃ of the space of the environment trajectories, with ℘_Π(Ω̃) = 0, such that, for ξ̄ ∉ Ω̃ and any measurable set G ⊂ R^ν, as t → ∞ we have

P^{ξ̄}_t(√t G) → ∫_G g(u) du,

where g is the limiting gaussian density for the annealed random walk of Theorem 2.2.
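The L² argument of this section rests on the identity "environment average of the squared quenched probability = two-particle annealed probability", which can be checked by hand in a one-step toy case. The one-dimensional model below (jumps ±1, P_0(±1) = 1/2, c(±1) = ±1/2) is an illustrative assumption, not the paper's general setting:

```python
from fractions import Fraction

# Hypothetical 1-d toy model: from 0, one step goes to +1 with
# probability 1/2 + (a/2)*xi and to -1 otherwise, xi = xi_0(0) = +-1.
a = Fraction(3, 10)

def p_right(xi):
    return Fraction(1, 2) + a * Fraction(1, 2) * xi

# Left side: environment average of the SQUARE of the quenched
# one-step probability of ending at +1.
lhs = Fraction(1, 2) * (p_right(1) ** 2 + p_right(-1) ** 2)

# Right side: annealed probability that TWO walks, independent given
# the same environment, both end at +1 (enumerate the environment and
# both steps explicitly).
rhs = Fraction(0)
for xi in (1, -1):
    for s1 in (1, -1):
        for s2 in (1, -1):
            if s1 == 1 and s2 == 1:
                rhs += Fraction(1, 2) * p_right(xi) * p_right(xi)

assert lhs == rhs == Fraction(1, 4) + a ** 2 / 4
print(lhs)   # 109/400
```

Note that the common value 1/4 + a²/4 strictly exceeds the square (1/2)² of the annealed one-step probability: two walks in the same environment are correlated, which is exactly what the two-particle analysis of Theorem 3.2 controls.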
generated by the variables η(x), x ∈ Γ, for some finite set Γ ⊂ Z^ν. Then the following assertions hold.
i) There are positive constants C_F and κ, with C_F depending only on F and κ independent of F and of the initial measure Π_0, such that the bound (3.5) holds.
ii) There are positive constants C_F and q ∈ (0, 1), with C_F depending only on F and q independent of F and Γ, such that the bound (3.6) holds, where d(Γ, 0) is the distance of the set Γ from the origin in Z^ν.
iii) The probability measures Π_t tend weakly, as t → ∞, to a measure Π̃, which is stationary for the process η_t, t ∈ Z_+.
The second relation, (3.6), follows from an explicit estimate of the components of h*. Finally, assertion iii) follows from assertion i), as convergence on cylinder functions is sufficient for weak convergence of probability measures. Π̃ is obviously invariant for the Markov chain η_t and is absolutely continuous with respect to Π.
The last result that we report here concerns the decay of time correlations of the EPV (the environment from the point of view of the random walk). For simplicity we consider functions of one site only, but it would not be hard to extend the result to general cylinder functions.