An alternative extragradient projection method for quasi-equilibrium problems

For the quasi-equilibrium problem, in which both the players’ costs and their strategy sets depend on the rivals’ decisions, we design an alternative extragradient projection method. Unlike the classical extragradient projection method, whose generated sequence is contractive with respect to the solution set, the newly designed method possesses an expansion property with respect to a given initial point. The global convergence of the method is established under the assumptions of pseudomonotonicity of the equilibrium function and continuity of the underlying multi-valued mapping. Furthermore, we show that the generated sequence converges to the point of the solution set nearest to the initial point. Numerical experiments demonstrate the efficiency of the method.


Introduction
The equilibrium problem is widely regarded as an important and general framework for describing various problems arising in different areas of mathematics, including optimization problems, problems in mathematical economics and Nash equilibrium problems.
This formulation has long been studied under different headings, such as the quasi-equilibrium problem, the mixed equilibrium problem, the ordered equilibrium problem, the vector equilibrium problem and so on [1][2][3][4]. One of the attractions of this common formulation is that many techniques developed for a particular case may be extended, with suitable adaptations, to the general equilibrium problem, and then applied to other particular cases [5][6][7][8][9][10][11][12][13][14][15][16]. In this paper, we mainly deal with the existence of solutions and with approximating solutions of the quasi-equilibrium problem.
Let X ⊂ R^n be a nonempty closed convex set, let K be a point-to-set mapping from X into itself such that, for any x ∈ X, K(x) is a nonempty closed convex subset of X, and let f : X × X → R be a function such that, for any x ∈ X, f(x, x) = 0 and f(x, ·) is convex on X. The quasi-equilibrium problem QEP(K, f) is to find a vector x* ∈ K(x*) such that

f(x*, y) ≥ 0, ∀y ∈ K(x*). (1.1)

Throughout this paper, we denote the solution set of (1.1) by K*.
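As a concrete illustration of definition (1.1), consider a hypothetical one-dimensional instance with X = [0, 2], K(x) = [x/2 + 1/2, 2] and f(x, y) = x(y − x); the instance, the function names and the grid-based check below are our own illustrative choices, not part of the method:

```python
import numpy as np

# Hypothetical 1-D instance of QEP (1.1), for illustration only:
#   X = [0, 2],  K(x) = [x/2 + 1/2, 2],  f(x, y) = x * (y - x).
# Note f(x, x) = 0 and f(x, .) is affine (hence convex), as required.
def f(x, y):
    return x * (y - x)

def K(x):
    return (x / 2 + 0.5, 2.0)  # the interval [lower, upper]

def is_qep_solution(x_star, tol=1e-12, grid=1001):
    lo, hi = K(x_star)
    if not (lo - tol <= x_star <= hi + tol):    # x* must lie in K(x*)
        return False
    ys = np.linspace(lo, hi, grid)
    return bool(np.all(f(x_star, ys) >= -tol))  # f(x*, y) >= 0 on K(x*)

print(is_qep_solution(1.0))   # x* = 1 is a fixed point of the lower bound
print(is_qep_solution(0.2))   # 0.2 is not even in K(0.2) = [0.6, 2]
```

Here x* = 1 satisfies 1 ∈ K(1) = [1, 2] and f(1, y) = y − 1 ≥ 0 on K(1), so it solves this instance.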
Certainly, when f(x, y) = ⟨F(x), y − x⟩ with F being a vector-valued mapping from X to R^n, the quasi-equilibrium problem reduces to the generalized variational inequality, or quasi-variational inequality, problem [17][18][19][20], which is to find a vector x* ∈ K(x*) such that

⟨F(x*), y − x*⟩ ≥ 0, ∀y ∈ K(x*). (1.2)

To move on, we recall the classical equilibrium problem and the classical Nash equilibrium problem (NEP) [21]. Suppose that there are N players, where player i controls the variables x_i ∈ R^{n_i}, the function f_i : R^n → R is continuous, and K_i is a nonempty closed set in R^{n_i}, for i = 1, 2, …, N with n = ∑_{i=1}^N n_i. Denote x = (x_1, …, x_N) and x_{−i} = (x_1, …, x_{i−1}, x_{i+1}, …, x_N). Based on the other players' strategies x_{−i}, player i needs to take an x_i ∈ K_i ⊂ R^{n_i} that solves the following optimization problem:

min f_i(x_i, x_{−i}) s.t. x_i ∈ K_i.

If these N players do not cooperate, then each player's strategy set may vary with the other players' strategies; that is, the ith player's strategy set varies with x_{−i}. In this case, we need to write K_i(x_{−i}) instead of K_i for the ith player's strategy set, and the ith player needs to choose a strategy x*_i ∈ K_i(x_{−i}) that solves the following optimization problem:

min f_i(x_i, x_{−i}) s.t. x_i ∈ K_i(x_{−i}).

In [22], this non-cooperative game model is called the generalized Nash equilibrium problem (GNEP); it can be formulated as a quasi-equilibrium problem even when the involved functions are nondifferentiable [23].
For the GNEP, when the functions f_i(·, x_{−i}) are convex and differentiable, the problem can be equivalently formulated as the quasi-variational inequality (1.2) by setting

F(x) = (∇_{x_1} f_1(x), …, ∇_{x_N} f_N(x)).

When the functions f_i(·, x_{−i}) are convex but nondifferentiable, the GNEP reduces to the quasi-equilibrium problem (1.1) [24] via the Nikaidô–Isoda function

f(x, y) = ∑_{i=1}^N [f_i(y_i, x_{−i}) − f_i(x_i, x_{−i})].

On the other hand, the quasi-equilibrium problem (QEP) has received much attention from researchers in mathematics, economics, engineering, operations research, etc. [17,22]; for more information, see [19,25,26]. There are many methods for solving the QEP. Recently, [27] considered an optimization reformulation approach based on the regularized gap function. In contrast to the variational inequality case, the regularized gap function is in general not differentiable, but only directionally differentiable. Furthermore, since gap functions are nonconvex, supplementary conditions must be imposed to guarantee that any stationary point of these functions is a global minimum [28]. It should be noted that such conditions are known for the variational inequality problem but not for the QEP. For this reason, [23] proposed several projection and extragradient methods rather than methods based on gap functions; these generalize the double-projection methods for the variational inequality problem to equilibrium problems with a moving constraint set K(x).
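For a hypothetical two-player game with illustrative quadratic costs, the Nikaidô–Isoda construction can be sketched as follows; we use the sign convention f(x, y) = ∑_i [f_i(y_i, x_{−i}) − f_i(x_i, x_{−i})], and all cost functions and data below are invented for the example:

```python
# Sketch of the Nikaido-Isoda function for a hypothetical two-player game.
def f1(x1, x2):                 # player 1's cost, given rival strategy x2
    return (x1 - 0.5 * x2) ** 2

def f2(x2, x1):                 # player 2's cost, given rival strategy x1
    return (x2 - 0.5 * x1) ** 2

def nikaido_isoda(x, y):
    # f(x, y) = sum_i [ f_i(y_i, x_{-i}) - f_i(x_i, x_{-i}) ]
    x1, x2 = x
    y1, y2 = y
    return (f1(y1, x2) - f1(x1, x2)) + (f2(y2, x1) - f2(x2, x1))

x = (0.3, 0.7)
print(nikaido_isoda(x, x))      # f(x, x) = 0 by construction
# At the Nash point x* = (0, 0), each player's cost is already minimal
# against the rival's strategy, so f(x*, y) >= 0 for every y.
print(nikaido_isoda((0.0, 0.0), (0.4, -0.2)) >= 0)
```

With this convention, x* is an equilibrium exactly when f(x*, y) ≥ 0 for all admissible y, matching problem (1.1).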
It is well known that the extragradient projection method is an efficient solution method for variational inequalities due to its low memory requirements and low computational cost [29,30].
Based on those advantages, it was recently extended to solve the QEP [20,23,31], which opened a new approach to the problem. An important feature of this method is that the generated sequence is contractive with respect to the solution set of the problem [29], i.e.,

‖x^{k+1} − x*‖ ≤ ‖x^k − x*‖, ∀x* ∈ K*.
The numerical experiments given in [20,23,31] show that the extragradient projection method is a practical solution method for the QEP.
It should be noted that not all extragradient projection methods have the contraction property [32]. Interestingly, the absence of this property does not slow down the convergence significantly: although the extragradient projection method in [32] has no contraction property, it still exhibits good numerical performance. Two questions then arise naturally: can such a method be applied to solve the QEP, and if so, how does it perform? These questions constitute the main motivation of this paper.
Inspired by the works [23,32], we propose a new type of extragradient projection method for the QEP in this paper. Different from the extragradient projection method proposed in [23], the sequence generated by the newly designed method possesses an expansion property with respect to the initial point, i.e.,

‖x^{k+1} − x^0‖ ≥ ‖x^k − x^0‖, ∀k ≥ 0.

The remainder of this paper is organized as follows. In Section 2, we recall some concepts and related results to be used in the sequel. The newly designed method and its global convergence analysis are developed in Section 3. Some preliminary computational results are presented in Section 4.

Preliminaries
Let X be a nonempty closed convex set in R^n. For any x ∈ R^n, the orthogonal projection of x onto X is defined as

P_X(x) = argmin{‖y − x‖ : y ∈ X}.

Basic properties of the projection operator are as follows [33].

Lemma 2.1 Suppose X is a nonempty, closed and convex subset of R^n. For any x, y ∈ R^n and z ∈ X, we have
(i) ⟨P_X(x) − x, z − P_X(x)⟩ ≥ 0;
(ii) ‖P_X(x) − z‖² ≤ ‖x − z‖² − ‖P_X(x) − x‖².

Remark 2.1 The first statement in Lemma 2.1 also provides a sufficient condition for a vector y ∈ X to be the projection of a vector x, i.e., y = P_X(x) if and only if

⟨y − x, z − y⟩ ≥ 0, ∀z ∈ X.

To proceed, we present the following definitions [34].
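The characterization in Remark 2.1 can be checked numerically on a set with a closed-form projection; here X is taken to be the Euclidean unit ball, an illustrative choice:

```python
import numpy as np

# Numeric check of Remark 2.1: y = P_X(x) iff <y - x, z - y> >= 0 for all
# z in X.  For the Euclidean unit ball, the projection has the closed form
# P_X(x) = x / max(1, ||x||).
def project_unit_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x = np.array([3.0, -4.0])            # a point outside the ball
y = project_unit_ball(x)             # = (0.6, -0.8)

# Sample points z in the ball and verify the variational characterization.
zs = rng.uniform(-1, 1, size=(1000, 2))
zs = zs[np.linalg.norm(zs, axis=1) <= 1.0]
print(y, bool(np.all((zs - y) @ (y - x) >= -1e-12)))
```

For this x, y − x = −4y with ‖y‖ = 1, so ⟨y − x, z − y⟩ = 4(1 − ⟨y, z⟩) ≥ 0 for every z in the ball, in agreement with the sampled check.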
Definition 2.2 Suppose X is a nonempty, closed and convex subset of R^n. A multi-valued mapping K : X → 2^{R^n} is said to be (i) upper semicontinuous at x̄ ∈ X if, for any sequence {x^k} ⊂ X converging to x̄ and any convergent sequence {y^k} with y^k ∈ K(x^k) and limit ȳ, one has ȳ ∈ K(x̄); (ii) lower semicontinuous at x̄ ∈ X if, for any sequence {x^k} converging to x̄ and any y ∈ K(x̄), there exists a sequence {y^k} with y^k ∈ K(x^k) converging to y; (iii) continuous at x̄ ∈ X if it is both upper semicontinuous and lower semicontinuous at x̄.
To end this section, we make the following blanket assumption on bifunction f : X × X → R and multi-valued mapping K : X → 2 R n [20,23].

Assumption 2.1
For the closed convex set X ⊂ R^n, the bifunction f and the multi-valued mapping K satisfy conditions (i)–(iv) of [20,23]; in particular, f is continuous on X × X and pseudomonotone, and K is continuous on X with nonempty closed convex values. As noted in [23], assumption (iv) in Assumption 2.1 guarantees that the solution set K* of problem (1.1) is nonempty.

Algorithm and convergence
In this section, we develop a new type of extragradient projection method for solving the QEP. The basic idea of the algorithm is as follows. At each step, we obtain a point y^k by solving a convex subproblem. If x^k = y^k, then stop with x^k being a solution of the QEP; otherwise, find a trial point z^k by a back-tracking search at x^k along the direction x^k − y^k, and obtain the new iterate by projecting x^0 onto the intersection of X with two halfspaces which are, respectively, associated with z^k and x^k. Repeat the process until x^k = y^k. The detailed description of our designed algorithm is as follows.
Algorithm 3.1

Step 1. For the current iterate x^k, compute y^k by solving the convex optimization problem

y^k = argmin{ f(x^k, y) + (1/2)‖y − x^k‖² : y ∈ K(x^k) }.

If x^k = y^k, then stop. Otherwise, let z^k = (1 − η_k)x^k + η_k y^k, where η_k = γ^{m_k} and m_k is the smallest nonnegative integer m such that, with z = (1 − γ^m)x^k + γ^m y^k,

f(z, x^k) ≥ cγ^m ‖x^k − y^k‖². (3.1)

Step 2. Compute x^{k+1} = P_{H^1_k ∩ H^2_k ∩ X}(x^0), where

H^1_k = {x ∈ R^n : f(z^k, x^k) + ⟨g^k, x − x^k⟩ ≤ 0} with g^k ∈ ∂f(z^k, x^k),
H^2_k = {x ∈ R^n : ⟨x − x^k, x^0 − x^k⟩ ≤ 0},

and ∂f(z, x) denotes the subdifferential of the convex function f(z, ·) at x. Set k = k + 1 and go to Step 1.
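For intuition, Step 1 can be sketched in the variational-inequality special case f(x, y) = ⟨F(x), y − x⟩ with a fixed box constraint, where the subproblem reduces to a single projection. The operator F, the box and the parameter values below are illustrative choices of ours, and Step 2's projection onto the intersection of the two halfspaces with X (a small QP in general) is omitted:

```python
import numpy as np

# Sketch of Step 1 in the VI special case f(x, y) = <F(x), y - x> on the
# fixed box K = [0, 1]^2 (illustrative; not the full algorithm).
def F(x):                       # a hypothetical affine monotone operator
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    return A @ x - np.array([1.0, 1.0])

def proj_box(x):
    return np.clip(x, 0.0, 1.0)

def step1(x, c=0.5, gamma=0.5, max_m=60):
    # y^k = argmin { f(x^k, y) + (1/2)||y - x^k||^2 : y in K }
    #     = P_K(x^k - F(x^k))  for this choice of f.
    y = proj_box(x - F(x))
    if np.allclose(x, y):
        return y, None          # x^k already solves the problem
    d2 = np.dot(x - y, x - y)
    for m in range(max_m):      # back-tracking search mimicking (3.1)
        eta = gamma ** m
        z = (1 - eta) * x + eta * y
        if np.dot(F(z), x - z) >= c * eta * d2:   # f(z, x) = <F(z), x - z>
            return y, z
    raise RuntimeError("line search failed")

y, z = step1(np.array([0.9, 0.2]))
print(y, z)                     # accepted trial point z between y and x
```

For the starting point (0.9, 0.2), the search accepts η = γ² = 0.25, illustrating how the trial point z^k backtracks from y^k toward x^k until (3.1) holds.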
Indeed, if x^k ∈ K(x^k), then y^k ∈ K(x^k) and, being a convex combination of x^k and y^k, z^k ∈ K(x^k) as well.

To establish the convergence of the algorithm, we first discuss the relationship of the halfspace H^1_k with x^k and the solution set K*.

Lemma 3.1 Let Assumption 2.1 hold. Then, for every k, x^k ∉ H^1_k and K* ⊆ H^1_k.

Proof First, taking x = x^k in the inequality defining H^1_k and using (3.1), we obtain f(z^k, x^k) + ⟨g^k, x^k − x^k⟩ = f(z^k, x^k) ≥ cη_k‖x^k − y^k‖² > 0, which means x^k ∉ H^1_k. On the other hand, by Assumption 2.1, the solution set K* is nonempty. For any x ∈ K*, from the definition of K* and the pseudomonotonicity of f, one has f(z^k, x) ≤ 0. Since f(z^k, ·) is convex and g^k ∈ ∂f(z^k, x^k), the subgradient inequality gives f(z^k, x^k) + ⟨g^k, x − x^k⟩ ≤ f(z^k, x) ≤ 0, so x ∈ H^1_k. Hence the hyperplane ∂H^1_k separates the point x^k from the set K*, and the desired result follows.
The justification of the termination criterion can be seen from Proposition 2 in [23], and the feasibility of the stepsize rule (3.1), i.e., the existence of the integer m_k, is guaranteed by Proposition 7 in [23].
Next, to show that the algorithm is well defined, we prove that the nonempty set K* is always contained in H^1_k ∩ H^2_k ∩ X, so that the projection step is well posed.

Lemma 3.2 Let Assumption 2.1 be true. Then K* ⊆ H^1_k ∩ H^2_k ∩ X for all k ≥ 0.
Proof From the analysis in Lemma 3.1, it is sufficient to prove that K* ⊆ H^2_k for all k ≥ 0. We argue by induction. If k = 0, then H^2_0 = {x : ⟨x − x^0, x^0 − x^0⟩ ≤ 0} = R^n, so K* ⊆ H^2_0 obviously. Suppose that K* ⊆ H^2_l for some l ≥ 0. For any x* ∈ K*, by Lemma 2.1 and the fact that x^{l+1} = P_{H^1_l ∩ H^2_l ∩ X}(x^0) with x* ∈ H^1_l ∩ H^2_l ∩ X, we know that ⟨x* − x^{l+1}, x^0 − x^{l+1}⟩ ≤ 0, i.e., x* ∈ H^2_{l+1}. Thus K* ⊆ H^2_{l+1}, which means that K* ⊆ H^2_k for all k ≥ 0, and the desired result follows.
In the following, we show the expansion property of the algorithm with respect to the initial point.

Lemma 3.3 Suppose {x^k} is the sequence generated by Algorithm 3.1. Then, for all k ≥ 0, ‖x^{k+1} − x^0‖² ≥ ‖x^k − x^0‖² + ‖x^{k+1} − x^k‖²; in particular, ‖x^{k+1} − x^0‖ ≥ ‖x^k − x^0‖.
Proof By Algorithm 3.1, one has x^{k+1} ∈ H^2_k. By the definition of H^2_k, we have ⟨x^0 − x^k, x − x^k⟩ ≤ 0 for all x ∈ H^2_k. Thus x^k = P_{H^2_k}(x^0) by Remark 2.1. Then, from Lemma 2.1, we obtain ‖x^{k+1} − x^k‖² ≤ ‖x^{k+1} − x^0‖² − ‖x^k − x^0‖², which can be written as ‖x^{k+1} − x^0‖² ≥ ‖x^k − x^0‖² + ‖x^{k+1} − x^k‖², and the proof is completed.
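For the reader's convenience, the computation behind Lemma 3.3 can be written out in full; it uses only the inclusion x^{k+1} ∈ H^2_k and the identity x^k = P_{H^2_k}(x^0), assuming the halfspace has the form H^2_k = {x : ⟨x − x^k, x^0 − x^k⟩ ≤ 0}:

```latex
% Since x^{k+1} \in H_k^2, the definition of H_k^2 gives
% \langle x^{k+1} - x^k,\, x^0 - x^k \rangle \le 0, and therefore
\begin{aligned}
\|x^{k+1} - x^0\|^2
  &= \|x^{k+1} - x^k\|^2
     + 2\langle x^{k+1} - x^k,\, x^k - x^0 \rangle
     + \|x^k - x^0\|^2 \\
  &\ge \|x^{k+1} - x^k\|^2 + \|x^k - x^0\|^2
   \ \ge\ \|x^k - x^0\|^2 .
\end{aligned}
```

Thus the distance from the iterates to x^0 is nondecreasing, which is precisely the expansion property.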
To prove the boundedness of the generated sequence {x^k}, we assume for simplicity that the algorithm generates an infinite sequence.

Lemma 3.4 Let Assumption 2.1 hold. Then the sequence {x^k} generated by Algorithm 3.1 is bounded.

Proof By Assumption 2.1, we know that K* ≠ ∅. Since x^{k+1} is the projection of x^0 onto H^1_k ∩ H^2_k ∩ X, by Lemma 3.2 and the definition of the projection, we have ‖x^{k+1} − x^0‖ ≤ ‖x* − x^0‖ for any x* ∈ K*, so {x^k} is bounded. Hence {x^k} has an accumulation point. Without loss of generality, assume that the subsequence {x^{k_j}} converges to x̄. Then the sequences {y^{k_j}}, {z^{k_j}} and {g^{k_j}} are bounded by Proposition 10 in [23], where g^{k_j} ∈ ∂f(z^{k_j}, x^{k_j}).
Before giving the next result, we need the following lemma (for details, see [23]).

Lemma 3.5 For every y ∈ K(x^k),

f(x^k, y) + (1/2)‖y − x^k‖² ≥ f(x^k, y^k) + (1/2)‖y^k − x^k‖² + (1/2)‖y − y^k‖².

In particular, taking y = x^k gives f(x^k, y^k) + ‖x^k − y^k‖² ≤ 0.
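The consequence f(x^k, y^k) + ‖x^k − y^k‖² ≤ 0 can be verified numerically in the variational-inequality special case f(x, y) = ⟨F(x), y − x⟩ on a box, where the Step-1 subproblem reduces to y^k = P_K(x^k − F(x^k)); the operator F below is an illustrative choice:

```python
import numpy as np

# Numeric check of f(x^k, y^k) + ||x^k - y^k||^2 <= 0 in the VI special
# case f(x, y) = <F(x), y - x> on K = [0, 1]^2, where the subproblem
# solution is y^k = P_K(x^k - F(x^k)).
def F(x):                       # hypothetical affine operator
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    return A @ x - np.array([1.0, 1.0])

def check(x):
    y = np.clip(x - F(x), 0.0, 1.0)          # y^k = P_K(x^k - F(x^k))
    return float(F(x) @ (y - x) + np.dot(x - y, x - y))

rng = np.random.default_rng(1)
vals = [check(rng.uniform(0, 1, 2)) for _ in range(100)]
print(max(vals) <= 1e-12)       # the bound holds at every sampled point
```

The inequality follows from the strong convexity (with modulus 1) of the subproblem objective y ↦ f(x^k, y) + (1/2)‖y − x^k‖², whose value at x^k is zero.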

Lemma 3.6 Suppose {x^{k_j}} is the sequence presented as in Lemma 3.4. Then lim_{j→∞} ‖x^{k_j} − y^{k_j}‖ = 0.
Proof We distinguish two cases.

(1) Suppose lim inf_{k→∞} η_k > 0. By Lemmas 3.3 and 3.4, the sequence {‖x^k − x^0‖} is nondecreasing and bounded, hence convergent; combined with the inequality ‖x^{k+1} − x^0‖² ≥ ‖x^k − x^0‖² + ‖x^{k+1} − x^k‖² of Lemma 3.3, this implies that ‖x^{k+1} − x^k‖ → 0 as k → ∞. On the other hand, by (3.1), one has f(z^{k_j}, x^{k_j}) ≥ cη_{k_j}‖x^{k_j} − y^{k_j}‖² > 0.
Next we prove that

‖x^{k_j+1} − x^{k_j}‖ ≥ f(z^{k_j}, x^{k_j}) / ‖g^{k_j}‖, (3.2)

where g^{k_j} ∈ ∂f(z^{k_j}, x^{k_j}). Since x^{k_j+1} ∈ H^1_{k_j}, it suffices to show that

P_{H^1_{k_j}}(x^{k_j}) = x^{k_j} − (f(z^{k_j}, x^{k_j}) / ‖g^{k_j}‖²) g^{k_j}, (3.3)

for then the left-hand side of (3.2) is at least the distance from x^{k_j} to H^1_{k_j}, which equals f(z^{k_j}, x^{k_j}) / ‖g^{k_j}‖. Denote the right-hand side of (3.3) by x̂^{k_j}; clearly x̂^{k_j} ∈ H^1_{k_j}. By Remark 2.1, we only need to prove that ⟨x̂^{k_j} − x^{k_j}, x − x̂^{k_j}⟩ ≥ 0 for all x ∈ H^1_{k_j} which, since f(z^{k_j}, x^{k_j}) > 0, is equivalent to ⟨g^{k_j}, x − x̂^{k_j}⟩ ≤ 0. From the definition of H^1_{k_j}, for all x ∈ H^1_{k_j} we have f(z^{k_j}, x^{k_j}) + ⟨g^{k_j}, x − x^{k_j}⟩ ≤ 0, and, by the definition of x̂^{k_j}, ⟨g^{k_j}, x̂^{k_j} − x^{k_j}⟩ = −f(z^{k_j}, x^{k_j}); subtracting gives ⟨g^{k_j}, x − x̂^{k_j}⟩ ≤ 0. Hence (3.3) holds, and (3.2) follows.
By (3.2), the fact that ‖x^{k_j+1} − x^{k_j}‖ → 0 and the existence of a constant M > 0 such that ‖g^{k_j}‖ ≤ M, we obtain

cη_{k_j}‖x^{k_j} − y^{k_j}‖² ≤ f(z^{k_j}, x^{k_j}) ≤ M‖x^{k_j+1} − x^{k_j}‖ → 0,

which, together with lim inf_{k→∞} η_k > 0, implies that ‖x^{k_j} − y^{k_j}‖ → 0 as j → ∞, and the desired result holds.
(2) Suppose that lim inf_{k→∞} η_k = 0, and choose a subsequence {η_{k_j}} with η_{k_j} → 0. Let {x^{k_j}} → x̄ as j → ∞ and let ȳ be the limit of {y^{k_j}} (passing to a further subsequence if necessary). By the definition of η_{k_j} = γ^{m_{k_j}}, the stepsize rule (3.1) fails for the exponent m_{k_j} − 1; that is, with η'_j = η_{k_j}/γ and z̃^{k_j} = (1 − η'_j)x^{k_j} + η'_j y^{k_j},

f(z̃^{k_j}, x^{k_j}) < cη'_j‖x^{k_j} − y^{k_j}‖².

Since f(z̃^{k_j}, ·) is convex and f(z̃^{k_j}, z̃^{k_j}) = 0, we have 0 ≤ (1 − η'_j)f(z̃^{k_j}, x^{k_j}) + η'_j f(z̃^{k_j}, y^{k_j}), and hence f(z̃^{k_j}, y^{k_j}) > −c(1 − η'_j)‖x^{k_j} − y^{k_j}‖². Taking j → ∞ and remembering that f is continuous (note z̃^{k_j} → x̄), we obtain f(x̄, ȳ) ≥ −c‖x̄ − ȳ‖². On the other hand, by Lemma 3.5, f(x^{k_j}, y^{k_j}) + ‖x^{k_j} − y^{k_j}‖² ≤ 0, and letting j → ∞ gives f(x̄, ȳ) ≤ −‖x̄ − ȳ‖². Since c ∈ (0, 1), the two inequalities force ‖x̄ − ȳ‖ = 0. So ‖x^{k_j} − y^{k_j}‖ → 0 as j → ∞, and the desired result follows.
Based on the analysis above, we can establish the main result of this section: the sequence {x^k} converges globally to a solution of problem (1.1).

Theorem 3.1 Let Assumption 2.1 hold. Then the sequence {x^k} generated by Algorithm 3.1 converges to a solution of problem (1.1).

Proof By Lemma 3.4, without loss of generality, assume that the subsequence {x^{k_j}} converges to x̄. By Lemma 3.6, one has ‖x^{k_j} − y^{k_j}‖ → 0, so y^{k_j} → x̄ as well, where y^{k_j} ∈ K(x^{k_j}) for every j. Thus x̄ ∈ K(x̄) from the fact that K is upper semicontinuous.
To prove that x̄ is a solution of problem (1.1), note that the optimality condition for the subproblem in Step 1 implies that there exists ω ∈ ∂f(x^k, y^k) such that

0 = ω + y^k − x^k + s^k,

where s^k ∈ N_{K(x^k)}(y^k) is a vector in the normal cone to K(x^k) at y^k. Then we have

⟨ω + y^k − x^k, y − y^k⟩ ≥ 0, ∀y ∈ K(x^k). (3.4)

On the other hand, since ω ∈ ∂f(x^k, y^k), by the well-known Moreau–Rockafellar theorem [35], one has

f(x^k, y) − f(x^k, y^k) ≥ ⟨ω, y − y^k⟩. (3.5)

By (3.4) and (3.5), we have

f(x^k, y) − f(x^k, y^k) ≥ ⟨x^k − y^k, y − y^k⟩, ∀y ∈ K(x^k).

Taking j → ∞ along the subsequence {x^{k_j}} and remembering that f is continuous, that ‖x^{k_j} − y^{k_j}‖ → 0 and that K is lower semicontinuous, we obtain f(x̄, y) ≥ 0 for all y ∈ K(x̄); that is, x̄ is a solution of the QEP, and the proof is completed.

Finally, we show that the whole sequence converges to the point of K* nearest to the initial point.

Theorem 3.2 Let Assumption 2.1 hold. Then the sequence {x^k} generated by Algorithm 3.1 converges to x̂ = P_{K*}(x^0).

Proof By Theorem 3.1 and Lemma 3.4, the sequence {x^k} is bounded and every accumulation point x* of {x^k} is a solution of problem (1.1). Let {x^{k_j}} be a convergent subsequence of {x^k}, let x* ∈ K* be its limit, and let x̂ = P_{K*}(x^0). Then, by Lemma 3.2,

x̂ ∈ K* ⊆ H^1_{k_j−1} ∩ H^2_{k_j−1} ∩ X for all j. (3.7)

From the iterative procedure of Algorithm 3.1, x^{k_j} = P_{H^1_{k_j−1} ∩ H^2_{k_j−1} ∩ X}(x^0). Thus,

‖x^{k_j} − x^0‖ ≤ ‖x̂ − x^0‖,

where the inequality follows from (3.7). Letting j → ∞, it follows that

‖x* − x^0‖ ≤ ‖x̂ − x^0‖. (3.8)

Due to Lemma 2.1 and the facts that x̂ = P_{K*}(x^0) and x* ∈ K*, we have

‖x* − x̂‖² ≤ ‖x* − x^0‖² − ‖x̂ − x^0‖².

Combining this with (3.8), we conclude that x* = x̂, so the sequence {x^{k_j}} converges to x̂. Since x* was taken as an arbitrary accumulation point of {x^k}, it follows that x̂ is the unique accumulation point of this sequence. Since {x^k} is bounded, the whole sequence {x^k} converges to x̂.

Numerical experiment
In this section, we report some numerical experiments and give a numerical comparison with the method proposed in [23] to test the efficiency of the proposed method. The MATLAB codes were run on a PIV 2.0 GHz personal computer under MATLAB version 7.0.1.24704 (R14). In the following, 'Iter.' denotes the number of iterations, and 'CPU' denotes the running time in seconds. The tolerance ε means that the iterative procedure terminates when ‖x^k − y^k‖ ≤ ε.
where q, P, Q are chosen as follows. The moving set is K(x) = ∏_{1≤i≤5} K_i(x), where, for each x ∈ R^5 and each i, the set K_i(x) is defined by This problem was tested in [36] with the initial point x^0 = (1, 3, 1, 1, 2)^T, where an approximate solution was obtained after 21 iterations with the tolerance ε = 10^{−3}.
For this example, we choose X = K(x) and take c = γ = 0.5. During the experiments, we set the stopping tolerance ε = 10^{−6}. The numerical comparisons of our proposed method with the algorithms Alg. 1, Alg. 1a and Alg. 1b proposed in [23] are given in Tables 3 and 4.

Conclusions
In this paper, we have proposed a new type of extragradient projection method for the quasi-equilibrium problem. The sequence generated by the newly designed method possesses an expansion property with respect to the initial point. The global convergence of the method is established under the pseudomonotonicity of the equilibrium function and the continuity of the underlying multi-valued mapping. Furthermore, we have shown that the generated sequence converges to the point of the solution set nearest to the initial point. The given numerical experiments show the efficiency of the proposed method.