Gradient and Lipschitz estimates for tug-of-war type games

We define a random step size tug-of-war game and show that the gradient of a value function exists almost everywhere. We also prove that the gradients of value functions are uniformly bounded and converge weakly to the gradient of the corresponding $p$-harmonic function. Moreover, we establish an improved Lipschitz estimate when the boundary values are close to a plane. Such estimates are known to play a key role in the higher regularity theory of partial differential equations. The proofs are based on cancellation and coupling methods, as well as an improved version of the cylinder walk argument.


Introduction
Higher regularity of value functions of tug-of-war type games is largely open. In this paper, we develop several techniques for studying the gradient regularity of value functions. In particular, we introduce a version of tug-of-war with noise whose value function, unlike that of the standard tug-of-war type game, has a bounded gradient. We also derive an improved Lipschitz estimate in a ball with boundary values close to a plane. Such estimates are known to play a key role in the higher regularity theory of partial differential equations.
The theory of tug-of-war type games has attracted attention since the seminal paper of Peres, Schramm, Sheffield and Wilson [PSSW09], which showed that solutions of the infinity Laplace equation can be approximated by value functions of a two-player random-turn zero-sum game called tug-of-war. For the 1-Laplacian, Kohn and Serfaty established a deterministic game counterpart in [KS06]. Later, Peres and Sheffield introduced a game-theoretic approach to the p-Laplacian, 1 < p < ∞, in [PS08] by using a tug-of-war with noise. The connection between the tug-of-war with noise and p-harmonic functions can be compared to the classical connection between Brownian motion and the Laplace equation. The p-Laplace operator obtained as a limit case also appears in many applications in physics with different values of p: electrostatics and electric networks, non-Newtonian fluids, reaction-diffusion problems, nonlinear elasticity, glaciology, and the thermal radiation of a hydrogen bomb, to mention a few examples. The analytic and probabilistic results we obtain also apply to this limiting case. Moreover, let us point out that the game-theoretic p-Laplacian has gained interest in image processing and machine learning [Does11,ETT15,EDT17,ELT16]. It also has applications in economic modeling [NP17].
In [MPR12] Manfredi, Parviainen and Rossi studied a variant of the tug-of-war game and its connection to the dynamic programming principle (DPP)

u_ε(x) = (α/2) [ sup_{B_ε(x)} u_ε + inf_{B_ε(x)} u_ε ] + β ⨍_{B_ε(x)} u_ε dy,

where u_ε denotes the value of the game, α and β are given probabilities, and ε > 0 denotes the upper bound for the step size. Roughly, at each round either the game position moves to a random point with probability β, or with probability α the players toss a coin and the winner of the toss decides where to move. The game is played in a domain Ω, and once the game position exits the domain, Player II pays Player I the amount given by a payoff function. As ε → 0, the value functions converge to the corresponding p-harmonic function for suitable choices of α and β. The game in [MPR12] has good symmetry properties, which allows a rather straightforward proof of Lipschitz continuity [LPS13] of p-harmonic functions. The proof is based on a suitable choice of strategies and is thus quite different from the PDE proofs.
In this paper we study a different version of the game where we randomize the step size for the tug-of-war part, that is, the (upper bound for the) step size of the players is chosen according to the uniform distribution on [0, ε]. We give a detailed description of the game in Section 2. The key outcome is that randomizing the step size for the tug-of-war part has a regularizing effect on the value function. We will also show that the game has a value and that the value function satisfies the following DPP:

u_ε(x) = (α/2) ⨍_0^ε [ sup_{B_t(x)} u_ε + inf_{B_t(x)} u_ε ] dt + β ⨍_{B_ε(x)} u_ε dy.

In one of our main results, Theorem 3.2, we show that the gradient of the value function u_ε exists almost everywhere and is bounded. As in the standard tug-of-war with noise, the value functions converge uniformly to the corresponding p-harmonic function as the step size tends to zero, but now the gradients also converge weakly to the gradient of the p-harmonic function, as stated in Theorem 3.3. In order to obtain the existence and boundedness of the gradient in Theorem 3.2, we need to control the small scale behavior of the value function. Such control is missing in the standard tug-of-war game, where the value can even be discontinuous. However, when randomizing over the step size there is a considerable overlap at small scales, and thus we can establish a cancellation effect; see the estimate (3.11).
The sharper Lipschitz estimate when boundary values are close to a plane is obtained in Theorem 4.2. The key idea is to modify the cylinder walk argument introduced in [LPS13] so that boundary values are encoded into the cylinder walk.
Moreover, the modified cylinder walk directly gives an estimate for the oscillation of the value function.
More regular, and in particular continuous, versions of tug-of-war type games have been suggested by Lewicka in [Lew]. Despite the lack of infinitesimal regularity of the standard tug-of-war type game, its regularity can be studied asymptotically. For asymptotic Hölder and Lipschitz regularity results see, in addition to the references mentioned above, for example [Ruo16,AHP17,LP18,ALPR]. We are mostly interested in the regularity theory of games in its own right, but we mention that, as an application, our regularity results for games imply new proofs of regularity results for p-harmonic functions and the corresponding numerical discretization schemes.

Randomized step size game
Consider a bounded domain Ω ⊂ R^n satisfying the uniform exterior sphere condition and let ε ∈ (0, 1). We denote the compact ε-boundary strip by

Γ_ε := {x ∈ R^n \ Ω : dist(x, ∂Ω) ≤ ε}.

We also set Ω_ε := Ω ∪ Γ_ε.
Here and subsequently, we denote by B t (x) the open ball of radius t centered at x. We assume that n ≥ 2 and 2 < p < ∞. Here p is related to the p-Laplacian in the limiting problem.
2.1. Rules of the game. We define a variant of tug-of-war with noise, which we call the random step size TWN, played by Player I and Player II as follows. First, a token is placed at a point x_0 ∈ Ω and the players toss a biased coin with probabilities

α = (p − 2)/(n + p) ∈ (0, 1) and β = (n + 2)/(n + p) = 1 − α.
If they get tails (probability β), the game state moves randomly (according to the uniform distribution) to a point x_1 in the ball B_ε(x_0). If they get heads (probability α), a step size ε_1 is chosen randomly from [0, ε] (according to the uniform distribution) and a fair coin is tossed; the winner of the toss is then allowed to move the game position to any point x_1 ∈ B_{ε_1}(x_0). They continue playing according to the same rules at x_1. The game continues until the token hits Γ_ε for the first time, at which point Player II pays Player I the amount F(x_τ). The point x_τ denotes the first point outside the domain Ω and τ refers to the first time we hit Γ_ε. The payoff function F : Γ_ε → R is a given, bounded, Borel measurable function. Player I attempts to maximize the payoff, while Player II attempts to minimize it. A history of the game up to step k is given by a vector containing:
• the coin tosses c_i ∈ {0, 1, 2}, where 1 denotes that Player I wins, 2 that Player II wins and 0 that a random step occurs,
• the game states x_i.
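To make the rules concrete, the following is a minimal Monte Carlo sketch of a single run of the random step size TWN. It is an illustration, not the paper's construction: the strategies, the payoff F, and the domain test in_domain are hypothetical placeholders, and leaving the domain stands in for hitting the boundary strip Γ_ε.

```python
import random

def random_in_ball(center, radius):
    # Rejection-sample a uniform point in the open ball of given radius.
    n = len(center)
    while True:
        v = [random.uniform(-radius, radius) for _ in range(n)]
        if sum(c * c for c in v) < radius * radius:
            return [c + d for c, d in zip(center, v)]

def play_game(x0, F, in_domain, eps, p, strategy_I, strategy_II):
    """One run of the random step size tug-of-war with noise.

    With probability alpha = (p-2)/(n+p) a tug-of-war round is played with
    a step size bound chosen uniformly from [0, eps]; with probability
    beta = (n+2)/(n+p) the token moves uniformly in B_eps. The run ends
    when the token leaves the domain, with payoff F at the exit point."""
    n = len(x0)
    alpha = (p - 2) / (n + p)
    x = x0
    while in_domain(x):
        if random.random() < alpha:              # tug-of-war round
            t = random.uniform(0.0, eps)         # random step size bound
            mover = strategy_I if random.random() < 0.5 else strategy_II
            x = mover(x, t)                      # winner moves within B_t(x)
        else:                                    # noise round
            x = random_in_ball(x, eps)
    return F(x)
```

Averaging play_game over many runs, for fixed strategies, approximates the expected payoff E^{x_0}_{S_I,S_II}[F(x_τ)] appearing in the value definitions below.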
We associate to the history of the game the filtration {F_k}_{k=0}^∞, where F_0 := σ(x_0) and, for k ≥ 1, F_k is the σ-algebra generated by the history up to step k. A strategy for Player I, denoted for short by S_I, is a collection of measurable functions (with respect to a suitable filtration F′_k) that give the next game position, given the history of the game and the next step size, whenever Player I wins the toss. Similarly, Player II uses a strategy S_II. The rules of the game give one-step probability measures. Using these, with the fixed starting point x_0 and the strategies S_I and S_II, we can construct a unique probability measure P^{x_0}_{S_I,S_II} on the game trajectories.
Let S_I be the strategy of the first player and S_II the strategy of the second player. We define the value of the game for Player I as

u_ε^I(x_0) = sup_{S_I} inf_{S_II} E^{x_0}_{S_I,S_II}[F(x_τ)]

and the value of the game for Player II as

u_ε^II(x_0) = inf_{S_II} sup_{S_I} E^{x_0}_{S_I,S_II}[F(x_τ)].

Since β > 0, the game ends almost surely for any choice of strategies.
2.2. The DPP and the comparison principle. An important property of value functions of tug-of-war type games is the dynamic programming principle (DPP).
Using similar arguments as in [LPS14], we can show that the game has a value and that the value function satisfies the following DPP.
Lemma 2.1 (Existence, uniqueness and the DPP). There exists a unique bounded value function u_ε satisfying

u_ε(x) = (α/2) ⨍_0^ε [ sup_{B_t(x)} u_ε + inf_{B_t(x)} u_ε ] dt + β ⨍_{B_ε(x)} u_ε dy    (2.1)

for x ∈ Ω, with u_ε = F on Γ_ε. A slight modification of the arguments used in [LPS14] implies the existence and uniqueness of the bounded function satisfying the DPP (2.1) and taking boundary values F on Γ_ε. This function can then be used to show that the game has a value, i.e. u_ε^I = u_ε^II; see Appendix A.
The proof of the previous lemma also gives us the following comparison principle.
Proposition 2.2. Let ū be a bounded function satisfying the DPP (2.1) and such that ū ≥ F in Γ_ε, and let u_ε be the value of the game with boundary values F. Then ū ≥ u_ε in Ω_ε. A similar result holds if the inequalities are reversed.
From the comparison principle, we get the uniform boundedness of u_ε.
Lemma 2.3. Let u_ε be the value function of the random step size TWN with boundary values F on Γ_ε. Then

||u_ε||_{L^∞(Ω_ε)} ≤ ||F||_{L^∞(Γ_ε)}.

Existence, boundedness and weak convergence of the gradient
In order to obtain a Lipschitz estimate independent of ε, we proceed in two steps. First we provide a large scale estimate that has an ε-dependent error using a cylinder walk method introduced in [LPS13]. Then we utilize overlap and cancellation in the small scale to improve the estimate. In the sequel C will denote a generic constant which may change from line to line.
Lemma 3.1. Let u_ε be the value function of the random step size TWN with boundary values F. Assume that B_{6r}(z_0) ⊂ Ω with r > ε. Then there exists a constant C > 0, depending only on p, n, r and ||F||_{L^∞(Γ_ε)}, such that for x, y ∈ B_r(z_0) it holds that

|u_ε(x) − u_ε(y)| ≤ C(|x − y| + ε).    (3.2)

Proof.
Suppose that the game starts at x. At every step k we can describe the game position as a sum of vectors

x_k = x + Σ_{j∈J_1^k} v_j + Σ_{j∈J_2^k} w_j + Σ_{j∈J_3^k} h_j.

Here J_1^k denotes the indices of rounds in which Player I has moved and the vectors v_j are her moves, J_2^k denotes the indices of rounds in which Player II has moved and the w_j represent his moves, and J_3^k denotes the indices of the random moves, whose vectors are denoted by h_j. Let us define a strategy S_II^0 for Player II for the game that starts from x. Player II always tries to cancel the moves of Player I that he has not yet been able to cancel, and otherwise he moves in the direction z − x. By "cancellation" we mean that Player II backtracks the path made by Player I and moves towards z. Since a player is only allowed to step to a point within an open ball, he has to choose a radius slightly smaller than ε_k, in a way that does not accumulate too much error. More precisely, if Player II wins the coin toss at the (k + 1)-th step, he first looks at the remaining part of the track made by Player I that he has not yet been able to cancel; denote it by V_k. If all the moves of Player I are canceled at that moment, that is V_k = 0, and Player II wins the coin toss, then he moves in the direction z − x by vectors of the form c(z − x). We stop this process if one of the stopping conditions C1, C2, C3 below holds, where for j ≥ 1 we set a_j = 1 if Player I wins at the j-th step, a_j = −1 when Player II wins, and a_j = 0 if a random move occurs, and a_0 = 0. The quantities ε_j are the (upper bounds of the) step sizes of the game.
We define τ′ as the stopping time given by these conditions. With probability 1 this stopping time is finite. An important point to note here is that this stopping time does not depend on the strategies. Notice that when the game has ended by condition C1 and one of the players is using the cancellation strategy, the final point x_{τ′} is randomly distributed around z. The condition C1 guarantees that Player II has played sufficiently many turns, with sufficiently large step sizes, to place the token at z (modulo the random noise). Indeed, notice that when using the cancellation strategy (since we either add a vector that cancels a move of the other player or add vectors of the form c(z − x)) we have H = 0. We can utilize the cancellation effect by using the symmetry of this construction. Letting S_I^0 be the corresponding cancellation strategy for Player I when starting from the point y, the symmetric contributions agree for any choice of the strategies S_I, S_II. Hence we can eliminate the symmetric part when estimating u_ε(x) − u_ε(y). Also observe that in all cases, while the game is still running, we never exit B_{4r}(z_0). By an abuse of notation, for i ∈ {1, 2, 3}, we denote by C_i the event that the game ends by condition Ci. We then obtain an estimate for u_ε(x) − u_ε(y) in terms of 1 − P, where P denotes the probability that the process ends by C1.
The idea is that we associate the t-component of the cylinder walk process with the random variable Σ_j ε_j a_j, where the a_j are defined as in (3.4). Similarly, we associate the x-position of the cylinder walk process with the accumulated random moves Σ_{j∈J_3^k} h_j. Each of the stopping conditions of the original process is associated with reaching a part of the boundary in the cylinder walk.
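The t-component of the cylinder walk can be simulated directly. The sketch below is purely illustrative (the stopping levels, bottom t ≤ 0 and top t ≥ top, and all parameter values are assumptions for the demonstration); it estimates the probability of exiting through the bottom, which plays the role of the probability P of ending by condition C1.

```python
import random

def cylinder_t_walk(t0, eps, alpha, top, rng):
    """t-component of the cylinder walk: t_{j+1} = t_j + eps_j * a_j, where
    a_j = +1 or -1 with probability alpha/2 each (the coin tosses) and
    a_j = 0 with probability 1 - alpha (the noise rounds), and
    eps_j ~ Uniform[0, eps]. Returns True if the walk reaches the bottom
    (t <= 0) before the top (t >= top)."""
    t = t0
    while 0.0 < t < top:
        u = rng.random()
        if u < alpha / 2:
            a = 1
        elif u < alpha:
            a = -1
        else:
            a = 0
        t += rng.uniform(0.0, eps) * a
    return t <= 0.0

def bottom_hitting_prob(t0, eps, alpha, top, runs=4000, seed=1):
    # Monte Carlo estimate of the bottom-exit probability from height t0.
    rng = random.Random(seed)
    hits = sum(cylinder_t_walk(t0, eps, alpha, top, rng) for _ in range(runs))
    return hits / runs
```

Since the increments are symmetric, the t-walk is a martingale, and starting close to the bottom (t0 small, of order |x − y| + ε) the walk exits through the bottom with probability close to 1, in line with the bound 1 − P ≤ C|x − y| + Cε below.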
By using a function v̄ that satisfies (3.7) and v̄ ≤ 0 on the sides and the top of the cylinder, we can prove that

1 − P ≤ C|x − y| + Cε.    (3.8)

To see this, we use the Taylor expansion and the fact that v̄ satisfies (3.7). It follows that, if we consider the sequence of random variables v̄(x_j, t_j), j = 0, 1, 2, ..., with (x_j, t_j)_{j∈N} being the positions in the cylinder walk, then M_j := v̄(x_j, t_j) + Cjε³ is a submartingale for a constant C depending on n and |D³v̄|. Now we apply optional stopping with the stopping time τ̄ that corresponds to the exit from the domain, and by the convexity of v̄ we obtain the desired bound, up to an error O(ε) coming from not stopping exactly on the boundary of the cylinder. A slight modification of the reasoning of [LPS13, Appendix A], see Appendix C, gives us the required estimate. This estimate, together with (3.5), completes the proof.
Next, combining the previous result and a small scale overlap, we can prove the existence and boundedness of the gradient for value functions.
Theorem 3.2. Let u_ε be the value function of the random step size TWN with boundary values F. Assume that B_{5r}(z_0) ⊂ Ω with r > ε. Then there exists a constant C > 0, depending on p, n, r and ||u_ε||_{L^∞(Ω_ε)}, such that for x, y ∈ B_{2r}(z_0) it holds that

|u_ε(x) − u_ε(y)| ≤ C|x − y|.    (3.9)

Moreover, Du_ε exists almost everywhere in B_r(z_0) and

||Du_ε||_{L^∞(B_r(z_0))} ≤ C.

Proof. Fix x, y ∈ B_{2r}(z_0). If |x − y| ≥ ε, then the estimate (3.9) follows from (3.2), and thus we may focus our attention on the case |x − y| ≤ ε. Using the DPP formulation, we obtain the identity (3.10) for u_ε(x) − u_ε(y). Since |x − y| is small, we can utilize the overlap between the balls and benefit from the resulting cancellations. We treat the tug-of-war part and the random noise part in different manners.
Step 1: Tug-of-war part. Define G as the tug-of-war part of (3.10). We rearrange G as a sum of two terms I and J. We start with an estimate for I; here we use that B_t(x) ⊂ B_{t+|x−y|}(y). Next we estimate the second term in (3.11) by using the result of Lemma 3.1. It follows that the desired bound holds for some C > 0 that depends on p, n, r and ||F||_{L^∞(Γ_ε)}. Similarly for J: using the result of Lemma 3.1, we estimate the first term, and the second term is handled in the same way. Combining the estimates for I and J, we obtain (3.12).

Step 2: Random part. Here we want to estimate the noise term arising from (3.10). Recall that |x − y| ≤ ε. We fix a point h̄ ∈ ∂(B_ε(x) ∩ B_ε(y)). Using the estimate coming from Lemma 3.1, we obtain the first bound, and similarly the symmetric bound (3.13). Here we used that the measure of the part of B_ε(x) not covered by B_ε(y) is bounded by a constant times |x − y| ε^{n−1}, where ω_n is the surface area of the (n − 1)-dimensional unit sphere. Summing the estimates (3.12) and (3.13), we get the desired bound for all x, y ∈ B_{2r}(z_0).
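The overlap used in Step 2 can be checked numerically: the fraction of B_ε(x) not covered by B_ε(y) scales roughly linearly in |x − y|/ε. A rejection-sampling sketch (illustrative, not part of the proof; parameters are arbitrary):

```python
import random

def symmetric_difference_fraction(n, eps, dist, samples=100000, seed=2):
    """Monte Carlo estimate of |B_eps(x) \\ B_eps(y)| / |B_eps(x)| for two
    balls in R^n whose centers are `dist` apart (x at the origin, y shifted
    along the first axis). Points are drawn uniformly in B_eps(x) by
    rejection sampling from the enclosing cube."""
    rng = random.Random(seed)
    outside = 0
    kept = 0
    while kept < samples:
        pt = [rng.uniform(-eps, eps) for _ in range(n)]
        if sum(c * c for c in pt) >= eps * eps:
            continue                        # not in B_eps(x): reject
        kept += 1
        shifted = [pt[0] - dist] + pt[1:]   # coordinates relative to y
        if sum(c * c for c in shifted) >= eps * eps:
            outside += 1                    # in B_eps(x) but not in B_eps(y)
    return outside / samples
```

For |x − y| much smaller than ε the estimated fraction grows roughly linearly in dist, which is the small-scale overlap that produces the cancellation in the noise term.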
We are now in a position to show the weak convergence of the gradient and the relation to p-harmonic functions. For the theory of p-harmonic functions, see for example [HKM93] or [Lin06]. These references mostly deal with the weak theory of partial differential equations. The tug-of-war approach leads to the viscosity solutions of the normalized p-Laplacian, but in the homogeneous case these solutions coincide with the usual p-harmonic functions [JLM01,KMP12].
Theorem 3.3. Let F ∈ C(Γ ε ), 2 < p < ∞ and let u ε be the value function of the random step size TWN with boundary values F . Assume that Ω satisfies a uniform exterior sphere condition. Let u be the unique p-harmonic function in Ω with u = F on ∂Ω. Then u ε → u uniformly on Ω, and for any q ∈ [1, ∞) and B 2r (z 0 ) ⊂ Ω it holds up to a subsequence that Du ε ⇀ Du weakly in L q (B r (z 0 )).
Proof. From Theorem 3.2, we know that for B_{2r}(z_0) ⊂ Ω there exists a constant C independent of ε such that ||Du_ε||_{L^∞(B_{2r}(z_0))} ≤ C. First, a straightforward modification of the arguments used in [MPR12] allows us to prove that, as ε → 0, the value functions converge uniformly to the unique p-harmonic function u in Ω with u = F on ∂Ω. For the convenience of the reader, we work out the details in Appendix B.
The weak convergence of a subsequence in the Sobolev spaces W 1,q (B r (z 0 )) for 1 < q < ∞ also follows from the above estimate since it implies that the sequence is uniformly bounded in these reflexive spaces. The case q = 1 follows from the equi-integrability of Du ε and the Dunford-Pettis theorem.

Lipschitz estimate
In this section we provide a sharper Lipschitz estimate for the value functions u_ε when we have additional knowledge about the boundary values. If the boundary function is close to a plane, the Lipschitz constant of the value function stays close to the slope of the plane inside the domain. This is related to strong convergence in Sobolev spaces; see for example [ES11, Theorem 4.1]. However, due to some subtle error terms, we could not quite reach the estimate |Du_ε| ≤ |ν| + Cδ.
First we state immediate bounds arising from the comparison with planes.
Lemma 4.1. Let ν ∈ R^n, b ∈ R and δ > 0. Assume that F is a continuous function which satisfies in Γ_ε

|F(x) − ν · x − b| ≤ δ.

Let u_ε be the value function for the random step size TWN with boundary values F. Then for x ∈ Ω_ε we have

|u_ε(x) − ν · x − b| ≤ δ.

Proof. Since ū(x) := ν · x + b + δ satisfies the DPP (2.1) and ū ≥ F, the comparison principle of Proposition 2.2 implies that u_ε(x) ≤ ū(x) for x ∈ Ω_ε. The same argument applied to u(x) := ν · x + b − δ implies that u_ε(x) ≥ u(x) for x ∈ Ω_ε.
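The comparison with planes rests on the fact that affine functions satisfy the DPP exactly: for a plane, the midpoint of the supremum and infimum over any ball is the center value, and so is the mean over B_ε(x). A quick numerical check with illustrative parameters (the noise average is estimated by Monte Carlo):

```python
import random

def plane(nu, b, x):
    # Affine function u(x) = nu . x + b.
    return sum(a * c for a, c in zip(nu, x)) + b

def dpp_apply_to_plane(nu, b, x, eps, alpha, samples=50000, seed=3):
    """One application of the DPP operator to the plane u(z) = nu.z + b.
    For a linear u, sup_{B_t(x)} u = u(x) + t|nu| and inf = u(x) - t|nu|,
    so the tug-of-war part equals u(x) for every step size t. The noise
    part (mean of u over B_eps(x)) is estimated by rejection sampling."""
    rng = random.Random(seed)
    n = len(x)
    tug = plane(nu, b, x)          # (1/2)(sup + inf) = u(x) for every t
    total, kept = 0.0, 0
    while kept < samples:
        v = [rng.uniform(-eps, eps) for _ in range(n)]
        if sum(c * c for c in v) < eps * eps:
            total += plane(nu, b, [a + d for a, d in zip(x, v)])
            kept += 1
    noise = total / samples
    return alpha * tug + (1 - alpha) * noise
```

The output agrees with the plane's value at x up to Monte Carlo error, which is the identity behind the proof of Lemma 4.1.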
Proof. The key idea is to again consider the cancellation strategy and the previous stopping rules for the associated cylinder walk, but to use a different barrier function (which we construct explicitly) that directly gives an estimate for the difference of values, and thus immediately yields a Lipschitz estimate for the value function, instead of only giving an estimate for the hitting probabilities. Such a technique should be of independent interest. The proof is divided into four steps.
We define the same cancellation strategy as in Section 3 and the same stopping rules C1, C2, C3. For i ∈ {1, 2, 3}, we denote by C_i the event that the game ends by condition Ci. Using the cancellation strategies, we again have (4.14). Next, using the shorthand P := P^x, we can decompose the expectation according to the events C_i, where the probabilities P(C_i) are independent of the strategies. In the sequel we will use the notation of [Klen14, Section 8.3] for conditional expectations. We introduce a random variable Y taking values in R, with probability distribution µ. Next, notice that for any point z_1 ∈ Ω, we have the bound (4.16). This follows from the comparison Lemma 4.1.
Now, to illustrate, suppose that the original process starting at x has some realization satisfying Σ_{j=0}^{τ′} ε_j a_j = s. We take the corresponding paths, one starting at x and one starting at y, with the same realization. Denote by x_{τ′} and y_{τ′} the end points of the paths. Recalling that one of the players is using the cancellation strategy, the paths we need to concentrate on are given by displacements q and q̃ with |q|, |q̃| ≤ |x − z| + Σ_{j=0}^{τ′} ε_j a_j + ε = s + |x − z| + ε. Thus the bound follows by (4.16).
We denote the probability measure for the cylinder walk by P. When computing the value for the cylinder walk, we use a new barrier function ū, compared to Section 3, with different boundary values given below. The function ū is an explicit solution to

((p − 2)/3) ū_tt + ∆ū = 0    (4.20)

that we construct below. The function ū is a solution to the same equation as the one utilized in Step 2 of the proof of Lemma 3.1, but we modify the boundary values to take into account the more precise behavior of F. The reason to choose again a solution of this equation is to be able to use the fact that, along a sequence (x_j, t_j) of positions in the cylinder walk, the sequence ū(x_j, t_j), suitably corrected, is a supermartingale. The boundary conditions on ū are given in (4.21). The choice of the side values is motivated by the following observations. Suppose that the original process starting at x ends because of stopping condition C3 or condition C2, with some realizations Σ_{j=0}^{i} ε_j a_j and Σ_{j∈J_3^i} h_j. Then its associated path in the cylinder walk hits either the side boundary strip of the cylinder or the top of the cylinder at

(ζ_τ̄, t_τ̄) := ( Σ_{j∈J_3^{τ′}} h_j , |x − z| + Σ_{j=0}^{τ′} ε_j a_j + ε ).

At this point (ζ_τ̄, t_τ̄) we have ū(ζ_τ̄, t_τ̄) ≥ 2|ν| t_τ̄ + 2δ. The case where the original process ends because of stopping condition C1 corresponds to exiting through the bottom strip of the cylinder in the cylinder walk, where we would like to set boundary values 0. However, the explicit function that we use below might be slightly negative, causing a small error.
Next, let τ̄ be the first time the cylinder walk starting from (0, |x − z| + ε) exits the cylinder, and introduce the random variable Y given by the exit value. Define EB as the event that the cylinder walk starting from (0, |x − z| + ε) exits the cylinder through the bottom, ET as the event that it exits through the top, and ES as the event that it exits through the sides. We then decompose the expectation accordingly, where µ is the probability distribution of Y. Observing that by construction the involved probabilities are the same as in (4.19), we may combine this with (4.19) and obtain the desired estimate. The term − inf_{B_r(0)} ū(ζ, −ε) on the last line arises from the fact that our explicit function constructed below can be slightly negative in the bottom strip of the cylinder.
Step 3: Construction of the barrier function ū. In order to construct an explicit solution ū as mentioned above, we define the following domain (see Figure 1). The center of the bottom is at (0, 0), and otherwise the bottom is a part of an ellipsoid E_1 centered at (0, −(√(p − 2)/√3) R), with 2r ≤ R. Let C > 2, and define the function explicitly as below.

Step 4: Estimate of the value function ū on the cylinder walk, giving the estimate of |u_ε(x) − u_ε(y)|. It follows from the Taylor expansion and from (4.20) that the increments of ū along the walk satisfy (4.22), where C depends on n and the third derivatives of ū.
Consider the sequence of random variables ū(ζ_j, t_j), j = 0, 1, 2, ..., where (ζ_j, t_j)_{j∈N} are the positions in the cylinder walk. From (4.22), we see that ū(ζ_j, t_j), corrected by a Cjε³ term, is a supermartingale. Then, applying optional stopping with the stopping time τ̄ that corresponds to the exit from the domain B_r(0) × [0, r + |x − z|], we get (4.23). It remains to estimate the terms on the right hand side.
Appendix A. Existence and uniqueness of functions satisfying the DPP

In this section we prove the existence and uniqueness of the value of the game for the random step size TWN. The proof is an easy adaptation of the arguments of [LPS14]. In the case of the obstacle problem, the existence is considered in [LM17], and in the case p = ∞ in [LS15].
Lemma A.1 (Existence for DPP). There exists a bounded Borel function u_ε satisfying the DPP (2.1).

Proof. We can check that, for a Borel function u, the functions appearing in the DPP operator are also Borel functions. Now consider the following iteration process: u_{j+1} := T(u_j), where T denotes the DPP operator, and the first function is u_0 := inf_{Γ_ε} F in Ω and u_0 := F on Γ_ε. The sequence u_j is increasing and bounded from above by sup_{y∈Γ_ε} F(y).
It follows that u_j converges to a function u_ε as j → ∞. Proceeding by contradiction, we can show that the convergence is uniform. Indeed, if this is not true, then A := lim_{j→∞} sup_Ω (u_ε − u_j) > 0. For any η > 0 we may find x_0 ∈ Ω and l > k large enough such that u_l(x_0) − u_k(x_0) ≥ A − η. Moreover, using the dominated convergence theorem, we may also assume that

sup_{x∈Ω} ⨍_{B_ε(x)} (u_ε(y) − u_k(y)) dy ≤ η.

It follows, using the above estimates, that (1 − α)A ≤ (α + 3)η, and we end up with a contradiction if we choose 0 < η < (1 − α)A / (2(α + 3)). The uniform convergence of u_j to u_ε implies that we can pass to the limit in the DPP; hence the limit u_ε satisfies the DPP, and it has the right boundary values by construction.
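The monotone iteration u_{j+1} := T(u_j) can be sketched numerically. The following 1D discretization is purely illustrative (the paper assumes n ≥ 2, and grid maxima/minima over windows stand in for the suprema/infima over balls); boundary strips of width ε carry constant values F_left and F_right.

```python
def dpp_iteration(F_left, F_right, eps, alpha, m=50, tol=1e-8, max_iter=100000):
    """Fixed-point iteration u_{j+1} = T(u_j) for a 1D discretization of
    the DPP: T u(x) = alpha * avg over step sizes t of
    (max_{B_t(x)} u + min_{B_t(x)} u)/2 + (1 - alpha) * mean over B_eps(x).
    Grid nodes 0..m discretize the interval [0, 1]; k extra nodes on each
    side form the eps-boundary strips. Starts from the infimum of the
    boundary values, so the iterates increase monotonically."""
    h = 1.0 / m
    k = max(1, round(eps / h))              # eps measured in grid steps
    lo = min(F_left, F_right)
    u = [F_left] * k + [lo] * (m + 1) + [F_right] * k
    for _ in range(max_iter):
        new = u[:]
        for i in range(k, k + m + 1):       # interior nodes only
            tug = 0.0
            for r in range(1, k + 1):       # average over random step sizes
                window = u[i - r:i + r + 1]
                tug += 0.5 * (max(window) + min(window))
            noise = sum(u[i - k:i + k + 1]) / (2 * k + 1)
            new[i] = alpha * tug / k + (1 - alpha) * noise
        diff = max(abs(a - b) for a, b in zip(new, u))
        u = new
        if diff < tol:
            break
    return u[k:k + m + 1]
```

With symmetric boundary values 0 and 1 the computed fixed point is symmetric about the midpoint, mirroring the uniqueness given by the comparison argument above.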
The uniqueness of the function u ε satisfying the DPP (2.1) and having boundary values F is a consequence of the following lemma.
Lemma A.2 (Comparison). Let u_ε and ū be bounded functions satisfying the DPP (2.1) in Ω with ū ≥ u_ε on Γ_ε. Then ū ≥ u_ε in Ω_ε.

Proof. We argue by contradiction. Assume that u_ε(y) > ū(y) for some y ∈ Ω. Since u_ε − ū is bounded, we have sup_Ω (u_ε − ū) =: M > 0. Using the DPP (2.1), we obtain the inequality (A.25). The inequality (A.25) and the absolute continuity of the integral imply that the set G of near-maximum points is non-empty and satisfies G ⊂ Ω by the boundary data assumption. We deduce that, if ζ ∈ G, then u_ε − ū = M almost everywhere in the ball B_ε(ζ). Iterating this argument, we eventually contradict the fact that G ⊂ Ω.
The previous lemma also holds if we reverse the inequalities. Thus it implies that the function u ε satisfying the DPP (2.1) with u ε = F on Γ ε is unique. Now we are ready to show that the game has a value.
Lemma A.3. Let u ε be the unique bounded function satisfying the DPP (2.1) with u ε = F on Γ ε . Let u ε I be the value of the game for Player I and u ε II be the value function of the game for Player II. Then u ε II = u ε = u ε I .
Proof. Since we always have u_ε^I ≤ u_ε^II, in order to show that u_ε = u_ε^I = u_ε^II it is enough to prove that u_ε^II ≤ u_ε ≤ u_ε^I. We will only show that u_ε^II ≤ u_ε, since the proof of u_ε^I ≥ u_ε is analogous. Fix a point x ∈ Ω, a starting point for the game. Player I plays with any strategy and Player II plays with the following strategy S_II^0: from a point x_{k−1} ∈ Ω, given that the radius t has been selected, Player II steps to a point that almost minimizes u_ε over the corresponding ball, up to an error η2^{−k}, for some fixed η > 0. In order to ensure that strategies of this kind are Borel, we can adapt the arguments used in [LPS14]. It follows that the process M_k := u_ε(x_k) + η2^{−k} is a supermartingale when using the strategies S_I and S_II^0. Applying optional stopping, we conclude that u_ε^II(x) ≤ u_ε(x) + η. Since η > 0 was arbitrarily chosen, we get that u_ε^II ≤ u_ε. A similar argument, where Player II chooses any strategy and Player I steps to a point almost maximizing u_ε, gives that u_ε^I ≥ u_ε in Ω_ε.
Hence M_k := |x_k − z|² − C(n)ε²k is a supermartingale. It follows, together with (B.27), that v̄(x_k) + kε² is a supermartingale (by using a pulling-towards-z strategy in the whole annulus). Defining the stopping time τ̄ as τ̄ := inf{k : x_k ∈ B_r(z)}, we obtain the desired bound, where the process is defined through a reflection at the outer boundary; see [MPR12, Lemma 4.5]. Since τ ≤ τ̄, we get the desired estimate. The triangle inequality and the uniform continuity of the boundary function, together with the previous estimate, give the desired result for x ∈ Ω and y ∈ Γ_ε: there exist r_0 > 0 and ε_1 > 0 such that if |y − x| < r_0, we have |u_ε(x) − u_ε(y)| ≤ η/2.
The triangle inequality also gives the desired result for points x, y ∈ Ω satisfying dist({x, y}, Γ_ε) ≤ r_0/2. Finally, when dist({x, y}, Γ_ε) ≥ r_0/2, we use the local Lipschitz continuity of u_ε to get the desired result.
Identifying the limit. Next, we prove that the limit function u is p-harmonic. The proof is similar to [MPR10]. Observe that, by [JLM01] (usual p-Laplacian) and [KMP12] (normalized p-Laplacian), we can restrict the class of test functions ϕ to those with non-vanishing gradient at the contact points.
Let ϕ be a smooth test function, and suppose that ϕ touches u from below at x ∈ Ω and that Dϕ(x) ≠ 0. From the uniform convergence of u_ε, we get that there exists a sequence x_ε converging to x such that (B.28) holds. Without loss of generality, we can assume that ϕ(x_ε) = u_ε(x_ε). Using that u_ε satisfies the DPP, and plugging the inequality (B.28) into the DPP, we obtain an inequality for ϕ. Denote by x̃_ε^t a point at which ϕ attains its minimum over the ball B_t(x_ε). Evaluating the Taylor expansion at y = x̃_ε^t and then at the opposite point y = 2x_ε − x̃_ε^t, we have

ϕ(x̃_ε^t) = ϕ(x_ε) + Dϕ(x_ε) · (x̃_ε^t − x_ε) + (1/2) D²ϕ(x_ε)(x̃_ε^t − x_ε) · (x̃_ε^t − x_ε) + o(t²)

and

ϕ(2x_ε − x̃_ε^t) = ϕ(x_ε) − Dϕ(x_ε) · (x̃_ε^t − x_ε) + (1/2) D²ϕ(x_ε)(x̃_ε^t − x_ε) · (x̃_ε^t − x_ε) + o(t²).