Activity induced first order transition for the current in a disordered medium

It is well known that particles can get trapped by randomly placed obstacles when they are pushed too hard. We present a model where the current in a disordered medium dies at large external field, but is reborn when the activity is increased. By activity we mean the time-variation of the external driving at constant time-averaged field. A different interpretation of the resurgence of the current is that the particles become capable of overcoming an infinite sequence of potential barriers via a mechanism similar to stochastic resonance. We add a discussion of the role of "shaking" in relaxation processes.


Introduction
Caging is a widely discussed effect for diffusion and transport in disordered media [1][2][3][4][5]. It is also well known that currents may vanish (or die) when the external field exceeds some threshold. A typical cause is that strong driving keeps the particles trapped in regions without exit in the direction of the field. Thermal fluctuations permit escape up to a point, and their strength determines the field threshold; above that threshold, the current is zero. In this paper, we give an example of a zero-temperature first-order transition for the current in a random environment as a function of the activation, from zero to some finite value. It physically realizes what has been called a dynamical phase transition. The activation or "shaking" is realized by a time-dependence in the driving which counterbalances the possible localization effects induced by disorder and inhomogeneities.
Activity is a more recent but very active subject of statistical mechanics. The most immediate reference is to the many studies today of active particles and, more generally, of active matter. The concept of dynamical activity has, however, also appeared in more formal constructions of nonequilibrium physics, in particular extending it beyond soft matter. It has also appeared in the exploration of dynamical phase transitions and, in response theory, in what we have been calling the frenetic contribution [6,7]. It makes contact with older concerns, such as the understanding of glassy behaviour and jamming transitions. The fact that disorder as well as interactions can slow down relaxation, and can even cause localization of matter or energy, is an important issue within the study of dynamical activity [8][9][10][11].
In the present paper, we concentrate on some simple specific toy-models where the effect of activation in producing a higher conductivity is particularly clear. It is presented as a proof of principle, while the various realizations in condensed matter could vary greatly. We start in section 2 by retelling the story of how negative differential conductivity can be seen as an effect of negative correlation between current and dynamical activity. We show that the non-monotonicity of the current is removed by activation. Section 3 contains our main result, the first-order transition of the current in a zero-temperature overdamped dynamics. In the last section, we present some general ideas on activation and its relation to the notion of a dynamical phase transition.

Removing negative differential conductivity
We recall here, by way of introduction, a number of considerations that have appeared elsewhere [6,7,12]. At the end of this section we can then give a first illustration of how dynamical activation can avoid the decrease of current as a function of external driving.
We start with the mathematically trivial example of a biased random walker in continuous time, to enable the emergence of a conceptually more general idea. We put it on the one-dimensional lattice with transition rates for x → x + 1 given by k(x, x + 1) = ξ(E) p and similarly k(x, x − 1) = ξ(E) q for jumps to the left (where p, q > 0 are fixed in space and sum to one). That is just a simple Markov process, possibly representing a great wealth of physical situations of particles being pushed through a channel, possibly as a coarse-graining of a much more inhomogeneous or disordered dynamics. Physically, the two parameters p, q are determined by an applied external field E but also by more kinetic details of the channel through which the particle is moving. A first natural correspondence is to ask that the ratio p/q = exp(βE d), where β is the inverse temperature and d is the distance of the hopping. The product s(E) = βE d is then the entropy flux (or dissipated work per temperature) per k_B, and we have thus assumed a form of local detailed balance. Yet, p and q are not determined completely by their ratio p/q. A further physical characterization is the escape rate ξ(E), for which one must say how exactly it depends on the applied field E. It gives the inverse of the expected residence time at each cell or position. The stationary current J = J(E), of course, depends on it:

J(E) = d ξ(E) (p − q) = d ξ(E) tanh[s(E)/2].     (2.1)

In particular, when ξ(E) decreases with E for large E, then so will the current J there. We have, in other words, a simple mechanism for negative differential conductivity, where dJ/dE < 0 for large enough E.
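The mechanism can be made concrete in a few lines. In the sketch below, J(E) = d ξ(E) tanh(βEd/2) is the stationary current of the biased walker above; the exponentially decaying escape rate ξ(E) = ξ0 exp(−bE) is an illustrative assumption standing in for trapping kinetics:

```python
import math

def current(E, beta=1.0, d=1.0, xi0=1.0, b=1.0):
    """Stationary current J(E) = d * xi(E) * tanh(beta*E*d/2) for the
    biased walker; xi(E) = xi0*exp(-b*E) is an illustrative choice of a
    trapping-dominated escape rate (an assumption, not from the text)."""
    xi = xi0 * math.exp(-b * E)
    return d * xi * math.tanh(beta * E * d / 2.0)

# Linear response: J grows with E for small driving ...
assert current(0.2) > current(0.1) > 0.0
# ... but decreases again at strong driving: negative differential conductivity.
assert current(5.0) < current(2.0)
```

Any escape rate that decays faster than tanh(βEd/2) saturates produces the same qualitative shape.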
It is easy to find various physical realizations. Typically, the decrease of ξ(E) is caused by the architecture of the channel where, due to some roughness, the particles can get trapped in dead ends. See, e.g., [6] for more details on a model inspired by [13] and represented in figure 1. In many cases, a current will keep flowing, however, no matter how large the driving E. If, on the other hand, the traps (dead ends) have a distribution where the residence time can become infinite for E > E_c, then the current will vanish for E ≥ E_c. That scenario is exactly realized for certain random walks in random environments; see [14]. A well-known example has been constructed by Barma and Dhar for a walker on a percolation cluster [2]. In that paper they also present a one-dimensional version, which has stimulated us to find an activation recipe and which is part of the next section. Before we continue with the true transition (where the previously vanishing current is resurrected by a modulated field), we still add here a specific realization of the above in which, by modulation of the driving, the negative differential conductivity disappears. Consider the random walker as in figure 1. There is a long periodic channel consisting of identical cells, each divided into 4 parts, labelled i = 1, . . . , 4. The transition rates depend on the driving E as specified in (2.2). One imagines a wall between parts 2 and 4 of the same cell and from 4 to 3 of the next cell in the forward direction. There are periodic boundary conditions in the horizontal direction. The stationary current initially increases (in the linear response regime) but then decreases for large field; see the solid black line in figure 2. There is a negative differential conductivity around E = 1. As explained in detail in [6], the negative differential mobility can more generally be attributed to the frenetic contribution.
As an effective model, it is the escape rate ξ(E) in (2.1), decreasing exponentially in E, which causes the non-monotonicity of the current as a function of E. Look now at figure 2. There we take the same rates (2.2) as in figure 1, but with a time-dependent driving field E, either with the modulation switched off (no time-dependence) or switched on. Figure 2 plots the time-averaged current for different values of the period τ. As seen there, intermediate frequencies appear indeed best for avoiding the negative differential conductivity. The linear response regime is mostly unaffected but, at large driving E_0, the modulation makes a serious difference, erasing the negative differential conductivity for periods τ of order 1-10. We take that example as an illustration of what is perhaps a more general idea: the modulation of the field is capable of liberating the particles from the traps and restores a monotonous current as a function of the driving amplitude. The fact that the period must be tuned is not surprising and will depend on the depth of the traps. Another viewpoint is as follows: the walker has to repeatedly overcome a potential hurdle whose height grows with the amplitude; modulation in the correct range of frequencies makes the particles overcome these barriers in the same sense as happens in stochastic resonance [15]. We are now ready to turn that scenario into a first-order dynamical phase transition in the next section.
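A toy simulation illustrates this stochastic-resonance-like liberation. The model below — a walker that hops with local detailed balance, falls into a trap at a rate growing with E, and escapes a trap at rate exp(−E) — is our own minimal caricature, not the four-part channel of figure 1:

```python
import math
import random

def simulate(E0, eps, tau, t_end, seed=0):
    """Toy trap model under a square-wave field: E(t) = E0 - eps on the
    first half-period, E0 + eps on the second (we assume E stays > 0).
    A free walker hops right with rate exp(E/2), left with rate exp(-E/2),
    and falls into a trap with rate E; a trapped walker escapes with rate
    exp(-E), so strong driving traps strongly while the low-field phase
    releases particles.  Returns the time-averaged current x(t_end)/t_end
    (Gillespie scheme with piecewise-constant rates)."""
    rng = random.Random(seed)
    t, x, trapped = 0.0, 0, False
    half = tau / 2.0
    while t < t_end:
        E = E0 - eps if (t % tau) < half else E0 + eps
        if trapped:
            rates = [("escape", math.exp(-E))]
        else:
            rates = [("right", math.exp(E / 2.0)),
                     ("left", math.exp(-E / 2.0)),
                     ("trap", E)]
        total = sum(r for _, r in rates)
        dt = rng.expovariate(total)
        boundary = (t // half + 1.0) * half
        if t + dt > boundary:      # rates change at the phase boundary:
            t = boundary           # advance there and redraw the clock
            continue
        t += dt
        u = rng.random() * total
        for name, r in rates:
            u -= r
            if u < 0.0:
                break
        if name == "right":
            x += 1
        elif name == "left":
            x -= 1
        else:
            trapped = (name == "trap")
    return x / t_end

# Constant strong driving: the walker spends almost all time trapped ...
J_static = simulate(4.0, 0.0, 2.0, 5000.0, seed=1)
# ... while modulation at the same mean field repeatedly frees it.
J_mod = simulate(4.0, 3.9, 2.0, 5000.0, seed=2)
assert J_mod > 3.0 * J_static
```

Redrawing the exponential clock at each phase boundary is statistically exact here, by memorylessness of the exponential waiting time.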

Death and resurrection of current
That currents may die as a function of the external driving is clear from the ideas in the previous section. The negative differential conductivity there is in essence the same phenomenon, but with a fixed scale of trap depths. When the traps are arbitrarily deep, with not too small a probability for such exceeding depths, the dying is unavoidable. Let us first explain this again via the random walk model, but now we must be explicit about the random environment. The latter is represented via a collection w = {w x} x∈Z of independent Bernoulli random variables, equal to 1 with probability ρ and to 0 with probability 1 − ρ, with density ρ ∈ [1/2, 1]. We then take the walker with rates k(x, x + 1) = p when w x = 1 and k(x, x + 1) = 1 − p when w x = 0, with k(x, x − 1) = 1 − k(x, x + 1).
In words, we think of log p/(1 − p) as the strength of the field but, depending on the location, the particle has to move with or against that field. At x where w x = 1, there is a bias to the right for p > 1/2, and there is a bias to the left where w x = 0. Obviously then, there are unbounded (exponentially distributed) intervals where the field points to the left, even though there is a larger density of places where the field points to the right. We know from [14] that there is an asymptotic current: for a given density ρ > 1/2, it vanishes when p grows larger than ρ. That mathematical scenario, as in figure 3, can be physically realized in various ways. A well-known example is that of Barma and Dhar [2], where a random walker is driven on a percolation cluster. They also present a one-dimensional model there, and our next model has been inspired by that setting.
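The threshold at p = ρ follows from Solomon's criterion for one-dimensional random walks in i.i.d. random environments: the walker has strictly positive speed if and only if the expectation of (1 − p_x)/p_x is smaller than 1. A quick check for the Bernoulli environment above:

```python
def sigma_mean(p, rho):
    """E[(1-p_x)/p_x] for the Bernoulli environment: the local bias to the
    right is p with probability rho and 1-p with probability 1-rho.  By
    Solomon's criterion the walker has strictly positive asymptotic speed
    iff this expectation is < 1."""
    return rho * (1.0 - p) / p + (1.0 - rho) * p / (1.0 - p)

rho = 0.75
assert sigma_mean(0.6, rho) < 1.0                 # p < rho: nonzero current
assert abs(sigma_mean(rho, rho) - 1.0) < 1e-12    # threshold exactly at p = rho
assert sigma_mean(0.9, rho) > 1.0                 # p > rho: zero asymptotic current
```

The identity sigma_mean(ρ, ρ) = (1 − ρ) + ρ = 1 shows that the current dies exactly when p reaches the density ρ of right-pointing sites.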

Random wires
We imagine an infinitely long wire which is folded at random points to give the zig-zag-like structure exhibited in figure 4. We think of left to right as the overall bias of an external field, and we will consider the current to the right (upward to the right in figure 4). The black curve is the current-carrying wire, with the red balls indicating particles, possibly responsible for a macroscopic current or a diffusion process. Let us denote by x ∈ R an arc-length coordinate parameterizing the wire, increasing as we move along the wire in the forward direction, and let x n denote the sequence of nodes. Between these nodes there are strands of wire of random length. The even nodes at x 2n are turning points for the wire, taking a left-to-right stretch x 2n−1 → x 2n into a stretch x 2n → x 2n+1 going to the left. From every even node 2n there is a strand forward (towards 2n + 1) with length L + (2n) = x 2n+1 − x 2n , taken independently and identically distributed. Similarly, the lengths of the backward strands L − (2n) = x 2n − x 2n−1 are random and identically distributed, but that distribution differs from that of the L + . We typically want the expected lengths E(L − ) > E(L + ), to have a net trend to the right when the average external field points to the right. In fact, we suppose here that L + is bounded and, for simplicity, we simply assume that L + = L, where L is some finite cut-off length, which will enable the phenomena to be discussed.
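Generating such a wire is straightforward. A minimal sketch, taking for the backward strands the exponential distribution of mean λ > L (one of the examples considered below):

```python
import random

def build_wire(n_cells, L=1.0, lam=2.0, seed=0):
    """Arc-length node positions x_0 < x_1 < ... for the folded wire:
    forward strands out of even nodes have the fixed cut-off length
    L+ = L, backward strands L- are exponential with mean lam > L, so
    the wire has a net left-to-right trend.  Returns the list of x_n."""
    rng = random.Random(seed)
    xs = [0.0]
    for _ in range(n_cells):
        xs.append(xs[-1] + L)                           # forward strand L+
        xs.append(xs[-1] + rng.expovariate(1.0 / lam))  # backward strand L-
    return xs

xs = build_wire(1000)
Lminus = [xs[2 * n] - xs[2 * n - 1] for n in range(1, 1001)]
assert all(b > a for a, b in zip(xs, xs[1:]))   # arc-length is increasing
assert sum(Lminus) / len(Lminus) > 1.0          # E(L-) > L+ = L on average
```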
A first natural dynamics on the random wire would be to take an overdamped dynamics for the position x t of the particle,

ẋ t = σ(x t ) Ē + √(2T) ξ t ,     (3.2)

where σ(x) is +1 in the intervals (x 2n−1 , x 2n ) and σ(x) = −1 in the intervals (x 2n , x 2n+1 ) (where moving forward in the wire is moving against the bias of the field). Keeping the temperature T fixed in front of the standard white noise ξ t , it can be shown that, while for small bias Ē one may see a response in the form of a small current, the random geometry of the wire often leads to a total arrest of that current for all Ē ≥ E c .
Indeed, rare but occasionally very deep traps, such as in figure 4, may hold a particle for ever longer time-intervals, leading to a gradual stagnation. The disproportionate bias Ē quells the dynamical activity and, therefore, the current, as in figure 3.
A way out of this choking is to shake up the internal fabric of the wire by introducing a time-modulation of the field E. The frequencies that are needed depend on the various trapping scales. However, instead of continuing with the dynamics (3.2), we move to a model which is closer to the random walks of the previous section, and in which a first-order transition to resurrection of the current can be exhibited.

Resurrection dynamics
The dynamics of the (non-interacting) particles, when subjected to a strong, slowly time-dependent external field E(t), is taken as an overdamped zero-temperature dynamics and runs as follows: (1) when the particle is not in a node, it travels in the same direction as the field with velocity v(t) = E(t); (2) when the particle is in a node where it is not allowed to proceed in the direction of the field, it remains stuck there for as long as this field orientation persists; (3) when the particle is in a node where the two adjacent strands are in the direction of the field, it immediately leaves the node along one of the two strands. The protocol for choosing either strand is random: probability 1/2 for either option, independently of all other elements of the dynamics.
The field E(t) is piecewise constant, with period τ; see figure 5. There is a first time-interval [0, h] where the field is negative, equal to −ε 1 , with the product ℓ = h ε 1 representing the leftward tendency of the field. Then, in the time-interval [h, τ], the field is positive, equal to ε 2 , with rightward tendency r = (τ − h) ε 2 . The average bias equals E 0 = (r − ℓ)/τ. When we fix E 0 and the period τ, we increase the activation by letting ℓ grow (and r with it). The increase of ℓ (or r) at fixed E 0 and τ represents the amplitude of "shaking" of the field.
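The protocol of figure 5 is easy to encode; the particular numbers below are arbitrary, chosen only to check the bookkeeping ℓ = h ε1, r = (τ − h) ε2, E0 = (r − ℓ)/τ:

```python
def field(t, eps1, eps2, h, tau):
    """Square-wave protocol of figure 5: E(t) = -eps1 on [0, h),
    +eps2 on [h, tau), repeated with period tau."""
    s = t % tau
    return -eps1 if s < h else eps2

eps1, eps2, h, tau = 2.0, 1.0, 0.25, 1.0
ell = h * eps1            # leftward tendency
r = (tau - h) * eps2      # rightward tendency
E0 = (r - ell) / tau      # average bias
# the time average of E(t) over one period indeed equals E0
avg = sum(field(k / 1000.0, eps1, eps2, h, tau) for k in range(1000)) / 1000.0
assert abs(avg - E0) < 1e-9
```

Increasing the activation at fixed E0 and tau amounts to raising eps1 and eps2 together, so that ell and r grow while r − ell stays put.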
In figures 6, 7, 8 we play the dynamics following the times t i indicated in figure 5.
(Figures 6-8 show snapshots of the dynamics at the times t 3 , . . . , t 6 : a particle stuck in an odd node, such as node 2n + 1 at time t 4 , remains there until the next field-reversal, after which it makes a random, unbiased choice to continue either forward or backward, while a free particle has meanwhile proceeded by a length ℓ, the leftward tendency; altogether, 16 equally likely t 6 -states follow from the initial condition at t 1 .)
There are three types of nodes in the random wire which merit special attention:
• A type-0 node is an even-numbered node (2n) which is inescapable. Particles which hit it at some point will thereafter never be capable of journeying to 2n − 1 or 2n + 1, let alone 2n − 2, 2n + 2. A system where type-0 nodes occur in every positive and negative tail (2n) n>M , (−2n) n>M does not allow for a nonzero current or diffusion, as the particles must, almost surely, relax into a periodic orbit in the vicinity of a type-0 node.
• A type-1 node is an even-numbered node (2n) where all journeys are possible. One can easily verify that, within a single period τ, a particle leaving that node will travel to 2n − 2 or 2n + 2 with probability 1/4 each, while the option to return to 2n has probability 1/2.
• A type-2 node is an even-numbered node (2n) where only the journey to 2n + 2 is possible. Within a single period τ such a journey has probability 1/4, while returning to 2n happens with probability 3/4.
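The classification can be made concrete under one consistent reading of the geometry, which is our assumption rather than a statement spelled out above: with forward strands L+ = L and activation ℓ (the leftward tendency of the field), an even node is of type 0 whenever ℓ < L, of type 1 when ℓ also exceeds the adjacent backward strand-length L−, and of type 2 otherwise. For exponentially distributed L− of mean λ, this reading gives a type-1 fraction p = 1 − exp(−ℓ/λ):

```python
import math
import random

def node_type(ell, L, Lminus):
    """Our reading of the classification (an assumption): with forward
    strands of fixed length L+ = L, an even node is type 0 (inescapable)
    whenever the activation ell < L, type 1 (all journeys possible) when
    ell >= max(L, Lminus), and type 2 (forward journey only) when
    L <= ell < Lminus."""
    if ell < L:
        return 0
    return 1 if ell >= Lminus else 2

# For exponential backward strands of mean lam, the fraction of type-1
# nodes should approach 1 - exp(-ell/lam) once ell >= L.
rng = random.Random(1)
ell, L, lam = 1.5, 1.0, 2.0
types = [node_type(ell, L, rng.expovariate(1.0 / lam)) for _ in range(200000)]
p_est = types.count(1) / len(types)
assert abs(p_est - (1.0 - math.exp(-ell / lam))) < 0.01
```

This reading is at least self-consistent with the two examples below: for L− uniform on [0, L′], it gives p = 1 once ℓ > L′, i.e., no type-2 nodes left to rectify the motion.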

First-order transition in the current
We can write down an exact expression for the asymptotic time-averaged current v̄ in a system where only nodes of type 1 and type 2 occur, distributed according to a product measure with 0 ≤ p ≤ 1 the probability for a given node to be of type 1; the resulting formula (3.3) holds for almost all wire geometries. The ℓ 1 , ℓ 2 > 0 are the disorder-averaged wire-lengths between a given node 2n − 2 and the node 2n, conditioned on the latter being of type 1 and type 2, respectively. τ is the period of the field E, while T is the disorder-averaged journey-time for a particle to travel from a node 2n − 2 to the node 2n, conditioned on the latter being of type 2. A derivation of formula (3.3) is given in Appendix A, accompanied by figure 12. In formula (3.3), the bias E 0 enters (only) through T . We now make formula (3.3) explicit. Imagine that L − is exponentially distributed in length — with average λ — while L + = L. As long as the activity ℓ of the field is strictly smaller than L, there are plenty of type-0 nodes, so that all particles enter localized periodic orbits and no current flows. However, as soon as ℓ ≥ L, the following values (3.4) should be plugged into formula (3.3). The resulting current for L = 1, τ = 1, λ = 2 and r − ℓ = 3 is given in figure 9 (blue curve). The two regimes are summarized in figure 12. Note that this scenario is strictly at zero temperature. We expect the orange curve in figure 9 to represent the smoothened-out resurrection at low temperature. Finally, for this model of the geometry of the wire, we can calculate the linear response regime when E 0 = (r − ℓ)/τ is small. In that linear regime we, of course, have zero current, v̄ = 0, when ℓ < L; otherwise, a nonzero current follows by expansion in E 0 at fixed activation ℓ. Consider a second example: suppose now that L − is homogeneously distributed in the interval [0, L′] while still L + = L. Again, no current flows as long as ℓ < L.
However, when L ≤ ℓ ≤ L′, the corresponding values have to be plugged into formula (3.3). The current for the values L = 1, L′ = 4, . . . is again given in figure 9 (green curve). For ℓ > L′, the current is again zero while, on the other hand, the diffusion of the particle has not lessened; see figure 10. In figure 11, we show the dissipated work for the same system. For ℓ > L′, one has a regime of activity where particles can diffuse unhindered along the chain (due to the external field); yet, the external bias of the field is no longer translated into a current/ballistic motion.

Conclusion: on the role of activation
Activation in nonequilibrium situations, as in the above, has been less often considered. Yet, it appears in many kinetic questions, such as in the problem of relaxation to equilibrium. The reason why systems do not relax instantaneously is, of course, related to the fact that the metric for steepest descent in a free energy landscape is controlled by factors of activity. Let us take the example of a Markov jump process to illustrate the issue more concretely. To make it very simple, we consider a mesoscopic process where the reduced system has states x = 1, 2, . . . , n, thought of here as positions on a ring of size n. Each x thus represents a state of an N-particle system. On the considered level of description, we assume a Markov evolution for a random walker with transition rates

k(x, x + 1) = a_x exp{−β[V(x + 1) − V(x)]/2},  k(x, x − 1) = a_{x−1} exp{−β[V(x − 1) − V(x)]/2}.

Note that to each pair {x, x + 1} we have associated an activity or frequency a_x > 0, telling us the local time-scale for the traffic over x ↔ x + 1. Other choices and parameterizations are possible, and sometimes wished for in the case of Arrhenius kinetics, but the main idea of what follows does not depend on it. For every choice, though, there is detailed balance for the potential V at inverse temperature β, and indeed the Boltzmann-Gibbs weight ∼ exp[−βV(x)] gives the stationary occupation at site x. The equation describing relaxation to equilibrium is here the Master equation, but let us simply imagine that at some time all particles sit at position x. There are two interesting questions now concerning the relaxation: one is what the preferred next position is, and the other is about the time it takes to make the move. For the first question, we need to know the fraction of particles that go right, x → x + 1, relative to those that go left, x → x − 1. Clearly, that is given by the ratio of fluxes

k(x, x + 1)/k(x, x − 1) = (a_x/a_{x−1}) exp{−β[V(x + 1) − V(x − 1)]/2}.     (4.1)

Hence, when a_x ≫ a_{x−1}, particles will typically flow to the right when the potential does not vary much, V(x − 1) ≈ V(x + 1).
Even when V(x − 1) < V(x + 1) and the potential would guide the particles to the left, still the reactivities decide when (4.1) is very large. That is a kinetic effect, not visible from the stationary occupations, which are determined thermodynamically. In the long run the effect cancels: the occupations are determined by the potential and the temperature, and all currents disappear. The important reminder is that time-symmetric effects, as quantified here by the local frequencies a_x, can determine the current direction in the transient regime of relaxation to equilibrium; they directly give, of course, a measure of the dynamical activity over the corresponding bonds x ↔ x + 1; see more in [16,17].
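The two roles of the a_x — invisible in the stationary state, decisive in the transient — can be checked numerically. A sketch, assuming the parameterization k(x, x ± 1) = a exp{−β[V(x ± 1) − V(x)]/2}, one standard choice obeying detailed balance:

```python
import math
import numpy as np

def rates_ring(V, a, beta=1.0):
    """Generator (column convention) for the ring walker with bond
    activities a[x] on {x, x+1}, using the detailed-balance rates
    k(x, x±1) = a * exp(-beta*(V(target) - V(x))/2)."""
    n = len(V)
    W = np.zeros((n, n))
    for x in range(n):
        r, l = (x + 1) % n, (x - 1) % n
        W[r, x] = a[x] * math.exp(-beta * (V[r] - V[x]) / 2.0)  # x -> x+1
        W[l, x] = a[l] * math.exp(-beta * (V[l] - V[x]) / 2.0)  # x -> x-1
    np.fill_diagonal(W, -W.sum(axis=0))
    return W

rng = np.random.default_rng(0)
V = rng.uniform(0.0, 2.0, size=6)      # arbitrary potential on the ring
a = rng.uniform(0.5, 3.0, size=6)      # arbitrary activities a_x
W = rates_ring(V, a)
# Stationary state: the kernel of W is the Boltzmann-Gibbs weight,
# independently of the activities a_x (they only shape the transient).
w, vecs = np.linalg.eig(W)
pi = np.real(vecs[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()
boltz = np.exp(-V)
boltz /= boltz.sum()
assert np.allclose(pi, boltz, atol=1e-8)
```

The flux ratio (4.1) is read off the same rates: k(x, x+1)/k(x, x−1) = (a[x]/a[x−1]) exp{−β[V(x+1) − V(x−1)]/2}.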
For the second question, we take all a_x = a fixed, while we assume that for x the differences β[V(x + 1) − V(x)] and β[V(x − 1) − V(x)] are so big that effectively the particles prefer to stay at position x. That is to say, the escape rate k(x, x + 1) + k(x, x − 1) ∼ a exp(−N) is minute for some big N, where N could indeed be an extensive parameter such as the number of particles, or could scale like the volume of the total system. Remember that we are dealing here with an effective and coarse-grained system, where we assume that x is such that an escape from it is thermodynamically not available. The system is then essentially trapped in condition x and will be seen to reside there for macroscopic times. However, if we are able to change the activities a → a e^{sN} in the same way for some parameter s > 0, we will again see activity and eventual relaxation to equilibrium. The reason is clear from looking at the escape rates, but there is a more interesting calculation which involves the path-probabilities and which makes contact with spatially extended models. The point is that the probability of a trajectory ω = (x_s, 0 ≤ s ≤ t) scales like

Prob[ω] ∼ exp{−(β/2) [V(x_t) − V(x_0)]} ∏_x a^{K^t_{x,x+1}},

where K^t_{x,x+1} is the number of jumps over x ↔ x + 1 (always counted positive) in the time-interval [0, t]. Trajectories where β[V(x_t) − V(x_0)] ∼ N ≫ 1 are very damped, unless we change a → a e^{sN}, which amounts here to reweighting the dynamical ensemble with a factor exp[sN Σ_x K^t_{x,x+1}]. We have ignored the change in the escape rates ξ, for which we assume that they remain of order one. We note that for s < 0 the system would not show any activity or relaxational behaviour, while for s > 0 the system is biased to show activity and would relax to equilibrium. That is not unlike the dynamical phase transitions discussed in [9][10][11].
We finally remark that the opposite scenario is much better known. Suppose indeed that a ∼ a_0 exp(−Δ), where the barrier Δ can be huge, depending on certain parameters or constraints in the total system. Even when the escape from x is thermodynamically allowed, we need activation to overcome the barrier set by the a's.
The point of the paper has been to illustrate the previous formalities for a more specific set-up, where (1) the current in a steady system dies when the amplitude of a forcing exceeds a certain value, and (2) the current resurrects to a non-zero saturation value when the forcing is appropriately modulated in time. The latter, the time-modulation, is thus a physical way of introducing activation. It effectively increases the activities and results in the flowing of a current which previously vanished.

Acknowledgement
We thank Pieter Baerts for help in providing the figures, and we are grateful to Luca Avena, Frank den Hollander, François Huveneers and Thimothée Thiery for private communication and discussions.

A. Derivation of the formula for the current
We refer to figure 12. Suppose the activation ℓ > L, so that only nodes of type 1 and type 2 remain. We partition the wire into cells (c k ) k∈Z which begin and end in a node of type 2, with only nodes of type 1 in between. Instead of constructing the wire by choosing the connecting strands as the random variables L − and L + , one can devise a scheme where the wire is constructed by choosing the cells i.i.d. from an appropriate probability space C. Any element c ∈ C contains a random number of type-1 nodes: the probability for precisely m such nodes is given by p m (1 − p). Moreover, the element c also carries the information on the strand-lengths connecting all its constituent nodes. In particular, we denote by |c| the sum of all of c's strand-lengths. (For ℓ ≥ L, particles suddenly start flowing, giving rise to a nonzero current; though nodes of type 2 prevent orbits from deviating to −∞, the motion is nonetheless also diffusive, in the sense that Var(x(t)) ∝ t for large t.)
Next, we create another auxiliary probability space X. The elements Ξ = (c, o) ∈ X are doublets where c ∈ C is a cell as described before, and o = (n 1 (o), n 2 (o), n 3 (o), t(o)) ∈ N 3 × R is the essential information of an orbit-fragment obtained by dropping a particle at the left-end of the cell c, starting the external field E(t) at time t = 0 and playing the dynamics just until the (first) time t(o) = t(Ξ) ∈ τN where the particle has arrived at the right-end of the cell c and the field has just completed a period. Here, n 1 (o) is the orbit o's total number of journeys from the leftmost (type 2) node to itself; similarly, n 2 (o) is the total number of journeys starting at a type-1 node and ending at the same node, and n 3 (o) is the number of journeys where the destination differs from the starting point. Suppose Q ⊂ C is a set of cells such that, for all c ∈ Q, o = (n 1 , n 2 , n 3 , t) is a possible orbit. The probability measure dµ on X is then defined accordingly, where dµ c is the probability measure inherited from the space C.

A.1. The expected cell-length
The expectation value for |Ξ| = |c| is given by

A.2. The expected time to traverse a cell
Now, for every journey which ends in a type-1 node, the travel time is a single period τ. The same is true for journeys from a type-2 node to itself. So, an orbit o ↔ Ξ = (c, o) which needs J(Ξ) ∈ N jumps to traverse the cell c will last for a duration [J(Ξ) − 1]τ + T̂(Ξ). (The last jump was from a node of type 1 toward a node of type 2 of the next cell and, therefore, lasts a time T̂(Ξ) = T̂(c) ∈ τN which is random but only dependent on the strand-length between the two final nodes of the cell. We denote by T the expectation value of the latter variable.) Calculating the traversal time thus reduces to calculating the statistics of J(Ξ), which can be organized via the sub-stochastic single-period transition matrix P over the nodes of the cell. Notice that the entries of the last column of P sum only to 3/4: with probability 1/4, a particle at the last site will jump to the next cell. Initially, the particle starts at the first site, hence ρ(0) = (1, 0, . . . , 0) T . The future probabilities are then given by ρ(J) = P J ρ(0), from which the proportion of orbits where the particle leaves precisely after J periods follows.
However, in that expression, the law of large numbers implies convergence (in probability, respectively strictly almost surely) for n → ∞ in both the numerator and the denominator. So, defining v̄ = lim n→∞ v n , we obtain the formula for v̄. It is not hard to show that the related current lim t→∞ x(t)/t is almost surely equal to v̄, since the orbit is at every time situated between the right-ends of cells k and k + 1; therefore, the equality (3.3) follows by sandwiching.
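The single-period bookkeeping with the sub-stochastic matrix P can be sketched as follows. The journey probabilities (1/4 forward, 1/4 backward, 1/2 stay for a type-1 node; 1/4 forward, 3/4 stay for the type-2 node) are those of section 3; organizing them into an absorbing chain and extracting the mean number of periods via (I − P)^{-1} is our own bookkeeping:

```python
import numpy as np

def cell_matrix(m):
    """Sub-stochastic period-to-period matrix P for a cell: site 0 is the
    type-2 node (forward 1/4, stay 3/4), sites 1..m are type-1 nodes
    (forward 1/4, backward 1/4, stay 1/2); a forward move from the last
    site (probability 1/4) leaves the cell, so its column sums to 3/4."""
    n = m + 1
    P = np.zeros((n, n))
    for x in range(n):
        P[x, x] = 0.75 if x == 0 else 0.5   # stay put for one period
        if x + 1 < n:
            P[x + 1, x] = 0.25              # forward journey
        if x > 0:
            P[x - 1, x] = 0.25              # backward journey (type 1 only)
    return P

def mean_periods(m):
    """Expected number of field-periods J to traverse the cell, from
    rho(J) = P^J rho(0) with rho(0) = (1, 0, ..., 0)^T: the mean of the
    absorption time is 1^T (I - P)^{-1} rho(0)."""
    P = cell_matrix(m)
    rho0 = np.zeros(m + 1)
    rho0[0] = 1.0
    return float(np.ones(m + 1) @ np.linalg.solve(np.eye(m + 1) - P, rho0))

# A bare type-2 node exits with probability 1/4 per period: geometric, mean 4.
assert abs(mean_periods(0) - 4.0) < 1e-12
# Longer cells take longer to traverse.
assert mean_periods(2) > mean_periods(1) > mean_periods(0)
```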