 Research
 Open Access
An alternative extragradient projection method for quasi-equilibrium problems
 Haibin Chen^{1},
 Yiju Wang^{1} and
 Yi Xu^{2}
https://doi.org/10.1186/s13660-018-1619-9
© The Author(s) 2018
 Received: 7 November 2017
 Accepted: 23 January 2018
 Published: 30 January 2018
Abstract
For the quasi-equilibrium problem, in which both the players' costs and their strategies depend on the rivals' decisions, we design an alternative extragradient projection method. Unlike the classical extragradient projection method, whose generated sequence has the contraction property with respect to the solution set, the newly designed method possesses an expansion property with respect to a given initial point. The global convergence of the method is established under the assumptions of pseudomonotonicity of the equilibrium function and continuity of the underlying multivalued mapping. Furthermore, we show that the generated sequence converges to the point of the solution set nearest to the initial point. Numerical experiments show the efficiency of the method.
Keywords
 Quasi-equilibrium problems
 Extragradient projection method
 Pseudomonotonicity
 Multivalued mapping
MSC
 90C30
 15A06
1 Introduction
The equilibrium problem is an important and general framework for describing various problems arising in different areas of mathematics, including optimization problems, mathematical economics and Nash equilibrium problems. This formulation has long been pursued in studies of equilibrium problems under different headings, such as the quasi-equilibrium problem, the mixed equilibrium problem, the ordered equilibrium problem and the vector equilibrium problem [1–4]. One appeal of this common formulation is that many techniques developed for a particular case can be extended, with suitable adaptations, to the general equilibrium problem and then applied to other particular cases [5–16]. In this paper, we mainly deal with the existence of solutions and approximate solutions of the quasi-equilibrium problem.
On the other hand, the quasi-equilibrium problem (QEP) has received much attention from researchers in mathematics, economics, engineering, operations research, etc. [17, 22]; for more information, see [19, 25, 26]. Many solution methods exist for the QEP. Recently, [27] considered an optimization reformulation approach based on the regularized gap function. Unlike in the variational inequality problem, the regularized gap function is in general not differentiable, but only directionally differentiable. Furthermore, since the gap function is nonconvex, supplementary conditions must be imposed to guarantee that every stationary point of such a function is a global minimum [28]. Such conditions are known for the variational inequality problem but not for the QEP. Therefore, [23] proposed several projection and extragradient methods, rather than methods based on gap functions, which generalize the double-projection methods for the variational inequality problem to equilibrium problems with a moving constraint set \(K(x)\).
It should be noted that not every extragradient projection method has the contraction property [32]; the lack of this property does not necessarily slow down convergence significantly. Indeed, although the extragradient projection method in [32] has no contraction property, it still shows good numerical performance. A question then arises naturally: can this method be applied to solve the QEP, and if so, how does it perform? This question constitutes the main motivation of this paper.
The remainder of this paper is organized as follows. In Section 2, we recall some concepts and related results used in the sequel. The newly designed method and its global convergence analysis are developed in Section 3. Preliminary computational results are presented in Section 4.
2 Preliminaries
Lemma 2.1
 (i)
\(\langle P_{X}(x)-x, z-P_{X}(x) \rangle\geq0\);
 (ii)
\(\Vert P_{X}(x)-P_{X}(y) \Vert^{2} \leq \Vert x-y \Vert^{2} - \Vert P_{X}(x)-x+y-P_{X}(y) \Vert^{2}\).
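Both inequalities of Lemma 2.1 can be verified numerically for a concrete feasible set. The following sketch does this for the box \(X=[-1,1]^{n}\), where the projection is a componentwise clip; the box domain, random test points and tolerances are our illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection P_X onto the box X = [lo, hi]^n."""
    return np.clip(x, lo, hi)

# Check (i): <P_X(x) - x, z - P_X(x)> >= 0 for every z in X, and
# (ii): ||P_X(x) - P_X(y)||^2 <= ||x - y||^2 - ||P_X(x) - x + y - P_X(y)||^2.
for _ in range(1000):
    x = rng.normal(size=3) * 3.0
    y = rng.normal(size=3) * 3.0
    z = rng.uniform(-1.0, 1.0, size=3)          # an arbitrary point of X
    px, py = proj_box(x), proj_box(y)
    assert np.dot(px - x, z - px) >= -1e-12                      # (i)
    lhs = np.linalg.norm(px - py) ** 2
    rhs = np.linalg.norm(x - y) ** 2 - np.linalg.norm(px - x + y - py) ** 2
    assert lhs <= rhs + 1e-12                                    # (ii)
print("Lemma 2.1 (i)-(ii) hold on all random samples")
```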
Remark 2.1
To proceed, we present the following definitions [34].
Definition 2.1
 (i) strongly monotone on X with \(\beta> 0\) iff$$f(x,y)+f(y,x)\leq-\beta \Vert x-y \Vert ^{2},\quad\forall x, y \in X; $$
 (ii)monotone on X iff$$f(x,y)+f(y,x)\leq0, \quad\forall x,y \in X; $$
 (iii)pseudomonotone on X iff$$f(x,y)\geq0\quad \Rightarrow\quad f(y,x)\leq0,\quad\forall x,y \in X. $$
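As a concrete illustration of Definition 2.1 (our own example, not from the paper): for \(f(x,y)=\langle Ax, y-x\rangle\) with a positive semidefinite matrix A, we have \(f(x,y)+f(y,x)=-\langle A(x-y),x-y\rangle\leq0\), so f is monotone, and monotonicity implies pseudomonotonicity. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
A = B @ B.T                      # A = B B^T is positive semidefinite

def f(x, y):
    """Equilibrium bifunction f(x, y) = <Ax, y - x>."""
    return A.dot(x) @ (y - x)

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # Monotonicity: f(x,y) + f(y,x) = -<A(x-y), x-y> <= 0.
    assert f(x, y) + f(y, x) <= 1e-10
    # Monotone implies pseudomonotone: f(x,y) >= 0  =>  f(y,x) <= 0.
    if f(x, y) >= 0.0:
        assert f(y, x) <= 1e-10
print("f is monotone, hence pseudomonotone, on all random samples")
```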
Definition 2.2
 (i)
upper semicontinuous at \(x\in X\) if for any sequence \(\{x^{k}\}\subset X\) converging to x and any convergent sequence \(\{y^{k}\}\) with \(y^{k}\in K(x^{k})\) and limit y, we have \(y\in K(x)\);
 (ii)
lower semicontinuous at \(x\in X\) if given any sequence \(\{x^{k}\}\) converging to x and any \(y\in K(x)\), there exists a sequence \(\{y^{k}\}\) with \(y^{k}\in K(x^{k})\) converging to y;
 (iii)
continuous at \(x\in X\) if it is both upper semicontinuous and lower semicontinuous at x.
To end this section, we make the following blanket assumption on the bifunction \(f:X\times X\to \mathbb{R}\) and the multivalued mapping \(K: X \to2^{\mathbb{R}^{n}}\) [20, 23].
Assumption 2.1
 (i)
\(f(x,\cdot)\) is convex for any fixed \(x\in X\), f is continuous on \(X\times X\) and \(f(x,x)=0\) for all \(x\in X\);
 (ii)
K is continuous on X and \(K(x)\) is a nonempty closed convex subset of X for all \(x\in X\);
 (iii)
\(x\in K(x)\) for all \(x \in X\);
 (iv)
\(S^{*}=\{x\in S \mid f(x,y)\geq0, \forall y \in T \}\) is nonempty for \(S=\bigcap_{x\in X}K(x)\) and \(T=\bigcup_{x\in X}K(x)\);
 (v)
f is pseudomonotone on X with respect to \(S^{*}\), i.e., \(f(x^{*}, y)\geq0\Rightarrow f(y,x^{*})\leq0\), \(\forall x^{*}\in S^{*}\), \(\forall y\in X\).
As noted in [23], assumption (iv) in Assumption 2.1 guarantees that the solution set \(K^{*}\) of problem (1.1) is nonempty.
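To make Assumption 2.1 concrete, consider a hypothetical one-dimensional instance (entirely our illustration, not from the paper): \(X=[0,1]\), moving set \(K(x)=[x/2,1]\) and bifunction \(f(x,y)=(x-1)(y-x)\). Here \(S=\bigcap_{x}K(x)=[1/2,1]\), \(T=\bigcup_{x}K(x)=[0,1]\), the point \(x^{*}=1\) belongs to \(S^{*}\) since \(f(1,y)=0\) for all y, and f is pseudomonotone with respect to \(S^{*}\). The following sketch checks items (iii)-(v) on a grid:

```python
import numpy as np

# Illustrative instance on X = [0, 1]; all names are our own choices.
f = lambda x, y: (x - 1.0) * (y - x)    # bifunction, f(x, x) = 0
K_lo = lambda x: x / 2.0                # K(x) = [x/2, 1]

grid = np.linspace(0.0, 1.0, 101)

# (iii) x in K(x): x/2 <= x <= 1 for every x in X.
assert all(K_lo(x) <= x <= 1.0 for x in grid)

# (iv) x* = 1 lies in S = [1/2, 1] and f(1, y) >= 0 for every y in T = [0, 1].
assert all(f(1.0, y) >= 0.0 for y in grid)

# (v) pseudomonotonicity w.r.t. S*: f(x*, y) >= 0  =>  f(y, x*) <= 0.
assert all(f(y, 1.0) <= 1e-12 for y in grid)
print("Assumption 2.1 (iii)-(v) hold for this instance")
```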
3 Algorithm and convergence
In this section, we develop a new type of extragradient projection method for solving the QEP. The basic idea of the algorithm is as follows. At each step, we obtain a point \(y^{k}\) by solving a convex subproblem. If \(x^{k}=y^{k}\), then we stop with \(x^{k}\) being a solution of the QEP; otherwise, we find a trial point \(z^{k}\) by a backtracking search at \(x^{k}\) along the direction \(x^{k}-y^{k}\), and the new iterate is obtained by projecting \(x^{0}\) onto the intersection of X with two half-spaces, which are associated with \(z^{k}\) and \(x^{k}\), respectively. The process is repeated until \(x^{k}=y^{k}\). The detailed description of the designed algorithm is as follows.
Algorithm 3.1
 Step 0. :

Choose \(c, \gamma\in(0, 1)\) and \(x^{0}\in X\); set \(k=0\).
 Step 1. :

For the current iterate \(x^{k}\), compute \(y^{k}\) by solving the following optimization problem:$$\min_{y\in K(x^{k})} \biggl\{ f\bigl(x^{k},y\bigr)+ \frac{1}{2} \bigl\Vert y-x^{k} \bigr\Vert ^{2}\biggr\} . $$If \(x^{k}=y^{k}\), then stop. Otherwise, let \(z^{k}=(1-\eta_{k})x^{k}+\eta_{k} y^{k}\), where \(\eta_{k}=\gamma^{m_{k}}\) with \(m_{k}\) being the smallest nonnegative integer such that$$ f\bigl(\bigl(1-\gamma^{m}\bigr)x^{k}+ \gamma^{m} y^{k}, x^{k}\bigr)-f\bigl(\bigl(1-\gamma^{m}\bigr)x^{k}+\gamma^{m} y^{k}, y^{k}\bigr)\geq c \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}. $$(3.1)
 Step 2. :

Compute \(x^{k+1}=P_{H_{k}^{1}\cap H_{k}^{2} \cap X}(x^{0})\), where$$\begin{gathered} H_{k}^{1}=\bigl\{ x\in \mathbb{R}^{n} \mid f\bigl(z^{k},x\bigr)\leq0 \bigr\} , \\ H_{k}^{2}=\bigl\{ x\in \mathbb{R}^{n} \mid \bigl\langle x-x^{k}, x^{0}-x^{k} \bigr\rangle \leq 0 \bigr\} . \end{gathered} $$Set \(k=k+1\) and go to Step 1.
Indeed, since \(x^{k}, y^{k}, z^{k}\in K(x^{k})\) for every k, we have \(K(x^{k})\cap H_{k}^{1}\neq\emptyset\) and \(K(x^{k})\cap H_{k}^{2}\neq\emptyset\). To establish the convergence of the algorithm, we first discuss the relationship of the half-space \(H_{k}^{1}\) with \(x^{k}\) and the solution set \(K^{*}\).
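To illustrate the flow of Algorithm 3.1, the following Python sketch specializes it to the variational inequality case \(f(x,y)=\langle F(x), y-x\rangle\) with a fixed box \(K(x)=X=[-1,1]^{n}\), so that the Step 1 subproblem has the closed form \(y^{k}=P_{X}(x^{k}-F(x^{k}))\). The Dykstra routine used for the Step 2 projection, the test map F(x) = x and all parameter values are our illustrative choices, not part of the paper.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    """P_X for the box X = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):
    """Projection onto the half-space {u : <a, u> <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

def dykstra(x0, projs, iters=200):
    """Dykstra's algorithm: project x0 onto an intersection of convex sets."""
    x, incs = x0.copy(), [np.zeros_like(x0) for _ in projs]
    for _ in range(iters):
        for i, P in enumerate(projs):
            y = P(x + incs[i])
            incs[i] = x + incs[i] - y
            x = y
    return x

def alt_extragradient(F, x0, c=0.4, gamma=0.5, tol=1e-6, max_iter=100):
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for _ in range(max_iter):
        # Step 1: y^k = argmin_{y in X} <F(x^k), y - x^k> + 0.5 ||y - x^k||^2
        #             = P_X(x^k - F(x^k)).
        y = proj_box(x - F(x))
        if np.linalg.norm(x - y) <= tol:          # x^k = y^k: stop
            break
        # Backtracking (3.1): smallest m with <F(z), x^k - y^k> >= c||x^k - y^k||^2,
        # where z = (1 - gamma^m) x^k + gamma^m y^k.
        eta, z = 1.0, y
        for _ in range(60):
            z = (1.0 - eta) * x + eta * y
            if F(z) @ (x - y) >= c * np.linalg.norm(x - y) ** 2:
                break
            eta *= gamma
        # Step 2: x^{k+1} = P_{H1 ∩ H2 ∩ X}(x^0) with the half-spaces
        # H1 = {u : <F(z^k), u - z^k> <= 0}, H2 = {u : <u - x^k, x^0 - x^k> <= 0}.
        Fz = F(z)
        P1 = lambda u, a=Fz, b=Fz @ z: proj_halfspace(u, a, b)
        a2 = x0 - x
        P2 = (lambda u, a=a2, b=a2 @ x: proj_halfspace(u, a, b)) \
            if a2 @ a2 > 0 else (lambda u: u)
        x = dykstra(x0, [P1, P2, proj_box])
    return x

# F(x) = x is monotone; the unique solution on X = [-1, 1]^2 is x* = 0.
sol = alt_extragradient(lambda x: x, [0.5, 0.5])
print(np.round(sol, 4))   # close to the solution (0, 0)
```

Note that, in contrast to classical extragradient schemes, every iterate is a projection of the fixed initial point \(x^{0}\), which is what produces the expansion property discussed above.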
Lemma 3.1
Proof
The justification of the termination criterion follows from Proposition 2 in [23], and the feasibility of the stepsize rule (3.1), i.e., the existence of the integer \(m_{k}\), is guaranteed by Proposition 7 in [23].
Next, to show that the algorithm is well defined, we prove that the nonempty set \(K^{*}\) is always contained in \(H_{k}^{1}\cap H_{k}^{2} \cap X\), so that the projection step is well posed.
Lemma 3.2
Let Assumption 2.1 be true. Then we have \(K^{*}\subseteq H_{k}^{1}\cap H_{k}^{2} \cap X\) for all \(k\geq0\).
Proof
In the following, we show the expansion property of the algorithm with respect to the initial point.
Lemma 3.3
Proof
To prove the boundedness of the generated sequence \(\{x^{k}\}\), we assume for simplicity that the algorithm generates an infinite sequence.
Lemma 3.4
Suppose Assumption 2.1 is true. Then the generated sequence \(\{x^{k}\}\) of Algorithm 3.1 is bounded.
Proof
Since \(\{x^{k}\}\) is bounded, it has an accumulation point. Without loss of generality, assume that the subsequence \(\{x^{k_{j}}\}\) converges to x̄. Then the sequences \(\{y^{k_{j}}\}\), \(\{z^{k_{j}}\}\) and \(\{g^{k_{j}}\}\) are bounded by Proposition 10 in [23], where \(g^{k_{j}} \in\partial f(z^{k_{j}},x^{k_{j}})\).
Before giving the next result, we need the following lemma (for details, see [23]).
Lemma 3.5
Lemma 3.6
Proof
We distinguish two cases in the proof.
Based on the above analysis, we can establish the main result of this section: the generated sequence \(\{x^{k}\}\) converges globally to a solution of problem (1.1).
Theorem 3.1
Suppose \(\{x^{k}\}\) is an infinite sequence generated by Algorithm 3.1 and let the conditions of Lemma 3.6 hold. Then, under Assumption 2.1, each accumulation point of \(\{x^{k}\}\) is a solution of the QEP.
Proof
Theorem 3.2
Proof
Since \(x^{*}\) was taken as an arbitrary accumulation point of \(\{x^{k}\}\), it follows that x̄ is the unique accumulation point of this sequence. Since \(\{ x^{k}\}\) is bounded, the whole sequence \(\{x^{k}\}\) converges to x̄. □
4 Numerical experiment
In this section, we report numerical experiments and a comparison with the method proposed in [23] to test the efficiency of the proposed method. The MATLAB codes were run on a PIV 2.0 GHz personal computer under MATLAB version 7.0.1.24704 (R14). In the following, 'Iter.' denotes the number of iterations, and 'CPU' denotes the running time in seconds. The tolerance ε means that the iterative procedure terminates when \(\Vert x^{k}-y^{k} \Vert\leq\varepsilon\).
Example 4.1
This problem was tested in [36] with the initial point \(x^{0}=(1, 3, 1, 1, 2)^{T}\); an approximate solution was obtained after 21 iterations with the tolerance \(\varepsilon=10^{-3}\).
Numerical results for Example 4.1
Initial point \(x^{0}\)       Iter.   CPU (s)
\((0, 0, 0, 0, 0)^{T}\)       5       0.2060
\((1, 3, 1, 1, 2)^{T}\)       13      0.5340
\((1, 1, 1, 1, 2)^{T}\)       9       0.3590
\((1, 0, 1, 0, 2)^{T}\)       7       0.2190
\((0, 1, 1, 0, 2)^{T}\)       8       0.3430
Now we consider a quasi-variational inequality problem and solve it by Algorithm 3.1 with the equilibrium function \(f(x, y)= \langle F(x), y-x\rangle\).
Example 4.2
Numerical results for Example 4.2
Initial point \(x^{0}\)   Iter.   CPU (s)
(10,10)                   53      0.7030
(10,0)                    2       0.0160
(9,1)                     2       0.0310
(9,3)                     8       0.2030
(9,9)                     51      0.8440
(8,10)                    48      0.7810
Number of iterations for Alg. 3.1, Alg. 1, Alg. 1a and Alg. 1b, respectively
Initial point   Alg. 3.1   Alg. 1   Alg. 1a   Alg. 1b
(10,0)          2          3        2         2
(10,10)         53         255      120       120
CPU time (s) for Alg. 3.1, Alg. 1, Alg. 1a and Alg. 1b, respectively
Initial point   Alg. 3.1   Alg. 1   Alg. 1a   Alg. 1b
(10,0)          0.02       0.26     0.20      0.15
(10,10)         0.70       8.43     3.70      2.57
5 Conclusions
In this paper, we have proposed a new type of extragradient projection method for the quasi-equilibrium problem. The sequence generated by the newly designed method possesses an expansion property with respect to the initial point. The existence results for the problem are established under the pseudomonotonicity condition on the equilibrium function and the continuity of the underlying multivalued mapping. Furthermore, we have shown that the generated sequence converges to the point of the solution set nearest to the initial point. The numerical experiments show the efficiency of the proposed method.
Declarations
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 11601261, 11671228), the Natural Science Foundation of Shandong Province (Grant No. ZR2016AQ12), and the China Postdoctoral Science Foundation (Grant No. 2017M622163).
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Cho, S.Y.: Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 9, 1083–1092 (2016)
 Huang, N., Long, X., Zhao, C.: Well-posedness for vector quasi-equilibrium problems with applications. J. Ind. Manag. Optim. 5(2), 341–349 (2009)
 Li, J.: Constrained ordered equilibrium problems and applications. J. Nonlinear Var. Anal. 1, 357–365 (2017)
 Su, T.V.: A new optimality condition for weakly efficient solutions of convex vector equilibrium problems with constraints. J. Nonlinear Funct. Anal. 2017, Article ID 7 (2017)
 Chen, H.: A new extragradient method for generalized variational inequality in Euclidean space. Fixed Point Theory Appl. 2013, 139 (2013)
 Chen, H., Wang, Y., Zhao, H.: Finite convergence of a projected proximal point algorithm for generalized variational inequalities. Oper. Res. Lett. 40, 303–305 (2012)
 Chen, H., Wang, Y., Wang, G.: Strong convergence of extragradient method for generalized variational inequalities in Hilbert space. J. Inequal. Appl. 2014, 223 (2014)
 Qin, X., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18(5), 925–935 (2017)
 Xiao, Y.B., Huang, N.J., Cho, Y.J.: A class of generalized evolution variational inequalities in Banach spaces. Appl. Math. Lett. 25(6), 914–920 (2012)
 Chen, H.B., Qi, L.Q., Song, Y.S.: Column sufficient tensors and tensor complementarity problems. Front. Math. China (2018). https://doi.org/10.1007/s11464-018-0681-4
 Chen, H.B., Wang, Y.J.: A family of higher-order convergent iterative methods for computing the Moore–Penrose inverse. Appl. Math. Comput. 218, 4012–4016 (2011)
 Sun, H.C., Wang, Y.J., Qi, L.Q.: Global error bound for the generalized linear complementarity problem over a polyhedral cone. J. Optim. Theory Appl. 142, 417–429 (2009)
 Wang, Y.J., Liu, W.Q., Caccetta, L., Zhou, G.L.: Parameter selection for nonnegative \(l_{1}\) matrix/tensor sparse decomposition. Oper. Res. Lett. 43, 423–426 (2015)
 Wang, Y.J., Caccetta, L., Zhou, G.L.: Convergence analysis of a block improvement method for polynomial optimization over unit spheres. Numer. Linear Algebra Appl. 22, 1059–1076 (2015)
 Wang, C.W., Wang, Y.J.: A superlinearly convergent projection method for constrained systems of nonlinear equations. J. Glob. Optim. 40, 283–296 (2009)
 Chen, H.B., Chen, Y.N., Li, G.Y., Qi, L.Q.: A semidefinite program approach for computing the maximum eigenvalue of a class of structured tensors and its applications in hypergraphs and copositivity test. Numer. Linear Algebra Appl. 25(6), e2125 (2018)
 Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
 Harker, P.T.: Generalized Nash games and quasi-variational inequalities. Eur. J. Oper. Res. 54, 81–94 (1991)
 Pang, J.S., Fukushima, M.: Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games. Comput. Manag. Sci. 2, 21–56 (2005)
 Zhang, J., Qu, B., Xiu, N.: Some projection-like methods for the generalized Nash equilibria. Comput. Optim. Appl. 45, 89–109 (2010)
 Nash, J.: Non-cooperative games. Ann. Math. 54, 286–295 (1951)
 Facchinei, F., Kanzow, C.: Generalized Nash equilibrium problems. Ann. Oper. Res. 175, 177–211 (2010)
 Strodiot, J.J., Nguyen, T.T.V., Nguyen, V.H.: A new class of hybrid extragradient algorithms for solving quasi-equilibrium problems. J. Glob. Optim. 56, 373–397 (2013)
 Blum, E., Oettli, W.: From optimization and variational inequality to equilibrium problems. Math. Stud. 63, 127–149 (1994)
 Pang, J.S., Fukushima, M.: Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games. Erratum. Comput. Manag. Sci. 6, 373–375 (2009)
 Pham, H.S., Le, A.T., Nguyen, B.M.: Approximate duality for vector quasi-equilibrium problems and applications. Nonlinear Anal., Theory Methods Appl. 72(11), 3994–4004 (2010)
 Taji, K.: On gap functions for quasi-variational inequalities. Abstr. Appl. Anal. 2008, Article ID 531361 (2008)
 Kubota, K., Fukushima, M.: Gap function approach to the generalized Nash equilibrium problem. J. Optim. Theory Appl. 144, 511–531 (2010)
 He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. 35(1), 69–76 (1997)
 Iusem, A.N., Svaiter, B.F.: A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 42, 309–321 (1997)
 Han, D.R., Zhang, H.C., Qian, G., Xu, L.L.: An improved two-step method for solving generalized Nash equilibrium problems. Eur. J. Oper. Res. 216(3), 613–623 (2012)
 Wang, Y.J., Xiu, N.H., Zhang, J.Z.: Modified extragradient method for variational inequalities and verification of solution existence. J. Optim. Theory Appl. 119, 167–183 (2003)
 Zarantonello, E.H.: Projections on convex sets in Hilbert space and spectral theory. In: Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)
 Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
 Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
 Tran, D.Q., Le Dung, M., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)