Iteration complexity of generalized complementarity problems

Abstract

The goal of this paper is to discuss the iteration complexity of two projection-type methods for solving generalized complementarity problems and to establish their worst-case sub-linear convergence rates, measured by the iteration complexity, in both the ergodic and the nonergodic sense.

1 Background

The distinguishing feature of a complementarity problem is the set of complementarity conditions. Each of these conditions requires that the product of two or more nonnegative quantities should be zero. Complementarity conditions made their first appearance in the optimality conditions for continuous variable nonlinear programs involving inequality constraints, which were derived by Karush [14] in 1939. But the significance of complementarity conditions goes far beyond this. They appear prominently in the study of equilibrium problems and arise naturally in numerous applications from economics, engineering and the sciences.

The motivation for studying the linear complementarity problem (LCP) was that the KKT optimality conditions for linear and quadratic programs (QPs) constitute an LCP of the form (1.1). The complementarity problem is one of the basic problems in nonlinear analysis along with such problems as optimization, fixed point problems, equilibrium problems and variational inequalities. The nonlinear complementarity problem (NCP) was introduced by Cottle in his Ph.D. thesis in 1964 [5], and the closely related variational inequality problem (VIP) was introduced by Hartman and Stampacchia in 1966 [10], primarily with the goal of computing stationary points for nonlinear programs. While these problems were introduced soon after the LCP, most of the progress in developing algorithms for these problems did not begin until the late 1970s.

Recall that the conventional complementarity problem requires that, for a given single-valued mapping \(F:\mathbb{R}^{n}\longrightarrow \mathbb{R}^{n}\), one finds a point \(x\in \mathbb{R}^{n}\) such that

$$ x\geq 0, \quad F(x)\geq 0,\qquad x^{\top }F(x) = 0. $$
(1.1)

The bulk of the literature concerns problems of the form (1.1); this is especially true of the work on numerical methods for finding solutions.
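To make the connection with optimization concrete, consider the convex quadratic program \(\min \frac{1}{2}x^{\top }Qx + c^{\top }x\) subject to \(x\geq 0\), with Q symmetric positive semi-definite; the following worked computation (our own illustration, not part of the cited references) shows that its KKT conditions are exactly an LCP of the form (1.1). Introducing a multiplier \(\lambda \geq 0\) for the constraint \(x\geq 0\), the KKT system reads

$$ Qx + c - \lambda = 0,\qquad x\geq 0,\quad \lambda \geq 0,\quad \lambda ^{\top }x = 0, $$

so with \(F(x)=Qx+c\) we obtain \(x\geq 0\), \(F(x)\geq 0\), \(x^{\top }F(x)=0\), which is (1.1) with an affine mapping F, i.e., the LCP \((c,Q)\).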

Meanwhile, many applications lead to set-valued mappings. As a result, set-valued complementarity problems must be solved; that is, a point \(x\in \mathbb{R}^{n}\) must be found such that

$$ x\geq 0, \quad\exists g\in G(x),\quad g\geq 0,\qquad x^{\top }g = 0, $$
(1.2)

where \(G:\mathbb{R}^{n}\longrightarrow \prod (\mathbb{R}^{n})\) is a set-valued mapping; hereafter, the symbol \(\prod (S)\) stands for the collection of all subsets of the set S.

The simplest and most widely studied of the complementarity problems is the LCP, which has often been described as a fundamental problem: given \(M \in \mathbb{R}^{n\times n}\) and \(q\in \mathbb{R}^{n}\), find \(w = (w_{j})\in \mathbb{R}^{n}\) and \(z = (z_{j})\in \mathbb{R}^{n}\) satisfying

$$ w - Mz = q,\quad w,z \geq 0,\qquad w^{\top }z = 0. $$
(1.3)

We denote this LCP by the symbol \((q,M)\). The LCP \((q,M)\) is said to be monotone if the matrix M is positive semi-definite (PSD). A slight generalization of the LCP is the mixed LCP (mLCP): given \(A\in \mathbb{R}^{n\times n}\), \(B\in \mathbb{R}^{m\times m}\), \(C \in \mathbb{R} ^{n\times m}\), \(D\in \mathbb{R}^{m\times n}\), \(a\in \mathbb{R}^{n}\), \(b \in \mathbb{R}^{m}\), find \(u \in \mathbb{R}^{n}\), \(v\in \mathbb{R}^{m}\) satisfying

$$ a + Au + Cv = 0, \qquad b+ Du + Bv \geq 0,\quad v \geq 0, \qquad v^{\top }(b + Du + Bv) = 0. $$
(1.4)

Another generalization of the LCP is the horizontal LCP or hLCP: given \(N \in \mathbb{R}^{n\times n}\), \(M \in \mathbb{R}^{n\times n}\), \(q\in \mathbb{R}^{n}\), find \(w \in \mathbb{R}^{n}\), \(z \in \mathbb{R}^{n}\) satisfying

$$ Nw - Mz = q,\quad w,z\geq 0,\qquad w^{\top }z = 0. $$
(1.5)

The hLCP (1.5) becomes the standard LCP if \(N =I\). Also, if N is nonsingular, then (1.5) is equivalent to the LCP \((N^{-1}q,N^{-1}M)\). The hLCP is said to be monotone if for any two pairs of points \((w^{1}, z^{1})\) and \((w^{2}, z^{2})\) satisfying \(Nw - Mz = q\) we have

$$ \bigl(w^{1} - w^{2} \bigr)^{\top } \bigl(z^{1} - z^{2} \bigr)\geq 0. $$

Note that if \(N = I\), this is equivalent to the matrix M being PSD.
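For a concrete, hand-checkable instance (an illustrative example of ours), take

$$ M=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},\qquad q=\begin{pmatrix} -3 \\ -1 \end{pmatrix}. $$

Since M is symmetric positive definite, the LCP \((q,M)\) is monotone. The pair \(z=(\frac{3}{2},0)^{\top }\), \(w=Mz+q=(0,\frac{1}{2})^{\top }\) satisfies \(w-Mz=q\), \(w,z\geq 0\) and \(w^{\top }z=0\), so it solves \((q,M)\).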

Now, we will present nonlinear generalizations of the LCP. The most important of these is the NCP: given a mapping \(F(z)=(F_{i}(z)): \mathbb{R}^{n} \longrightarrow \mathbb{R}^{n}\), find a \(z \in \mathbb{R}^{n}\) satisfying

$$ z \geq 0,\quad F(z)\geq 0, \qquad z^{\top }F(z) = 0. $$
(1.6)

If \(F(z)\) is the affine function \(q+ Mz\), then (1.6) becomes the LCP \((q,M)\).

A further generalization of the NCP is the VIP, which is defined thus: given a mapping \(F(x)=(F_{i}(x)):\mathbb{R}^{n}\longrightarrow \mathbb{R}^{n}\) and \(\emptyset \neq C\subset \mathbb{R}^{n}\), find an \(x^{\ast }\in C\) satisfying

$$ \bigl(y - x^{\ast } \bigr)^{\top }F \bigl(x^{\ast } \bigr)\geq 0,\quad \forall y \in C, $$
(1.7)

denoted by \(VIP(C,F)\), where C is a nonempty closed convex subset of \(\mathbb{R}^{n}\). If C is polyhedral and F is affine, it can be verified that \(VIP(C,F)\) is an LCP. Suppose C is a rectangular region defined by

$$ C = \prod_{i=1}^{n}[\ell _{i},u_{i}],\quad -\infty \leq \ell _{i}< u_{i} \leq \infty ,i=1,2,\dots ,n. $$

This is called the box constrained VIP (BCVIP), which is also commonly referred to as the (nonlinear) mixed complementarity problem (MCP).
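For the box-constrained case, the projection onto C is componentwise clipping, so the natural residual \(x-P_{C}[x-F(x)]\), which vanishes exactly at solutions, is cheap to evaluate. A minimal sketch (our own illustration; the mapping F and the data below are arbitrary examples, not taken from the paper):

```python
import numpy as np

def project_box(y, lo, hi):
    """Projection onto C = prod_i [lo_i, hi_i] is componentwise clipping."""
    return np.clip(y, lo, hi)

def natural_residual(x, F, lo, hi):
    """Residual x - P_C[x - F(x)]; it vanishes exactly at solutions of the BCVIP/MCP."""
    return x - project_box(x - F(x), lo, hi)

# Illustrative data (not from the paper): F(x) = Mx + q on the box [0, 2]^2.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
F = lambda x: M @ x + q
x = np.array([1.5, 0.0])
print(natural_residual(x, F, lo=np.zeros(2), hi=2.0 * np.ones(2)))  # [0. 0.]
```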

Our main goal is to analyse the convergence rates of projection methods under mild assumptions; together with the strict contraction property, we establish convergence rates in the asymptotic sense under a certain error bound condition. More specifically, we derive the worst-case sub-linear convergence rate measured by the iteration complexity. This kind of convergence rate analysis based on iteration complexity traces back to [1, 4, 12, 16, 18]. Furthermore, our convergence rate analysis does not require the boundedness of the feasible set, which is usually assumed in the iteration complexity analysis of projection methods for generalized complementarity problems; see [7,8,9, 13, 15].

We assume that \(x^{\ast }\) is a fixed but arbitrary point in the solution set \(C^{\ast }\) of generalized complementarity problems, and \(x^{\ell }\) denotes the ℓ-th iterate, where ℓ is the iteration index. For any matrix M and vector y, we denote their transposes by \(M^{T}\) and \(y^{T}\), respectively. The Euclidean norm is denoted by \(\Vert \cdot \Vert \). Let C be a nonempty closed convex subset of \(\mathbb{R}^{n}\), \(M\in \mathbb{R}^{n\times n}\) and \(q\in \mathbb{R} ^{n}\). We consider the generalized complementarity problem of finding a vector \(x^{\ast }\in C\) such that

$$ x^{\ast }\geq 0,\quad \bigl\langle x-x^{\ast },Mx^{\ast }+q \bigr\rangle = 0,\quad \forall x\in C. $$
(1.8)

We consider the case where the matrix M is positive semi-definite (but not necessarily symmetric). Moreover, the solution set of (1.8), denoted by \(C^{\ast }\), is assumed to be nonempty.

Proposition 1.1

([2])

Let C be a nonempty closed convex subset of \(\mathbb{R}^{n}\) and let G be any \(n\times n\) symmetric positive definite matrix. Then \(x^{\ast }\) solves the problem (1.8) if and only if

$$ x^{\ast }=P_{C}^{G} \bigl[x^{\ast }-G^{-1} \bigl(Mx^{\ast }+q \bigr) \bigr], $$
(1.9)

where \(G\in \mathbb{R}^{n\times n}\) is a symmetric positive definite matrix, and \(P_{C}^{G}(\cdot )\) denotes the projection onto C with respect to the G-norm:

$$ P_{C}^{G}(y)=\arg \min \bigl\{ \Vert x-y \Vert _{G} \mid x\in C\bigr\} , $$

where \(\Vert x \Vert _{G}=\sqrt{x^{T}Gx}\) for any \(x\in \mathbb{R}^{n}\).

Remark 1.2

If \(G=I\), then \(P_{C}^{G}(\cdot )\) becomes \(P_{C}^{I}(\cdot )\), which we write simply as \(P_{C}(\cdot )\).

Moreover, for given \(x\in \mathbb{R}^{n}\), we denote

$$ \bar{x}=P_{C} \bigl[x-(Mx+q) \bigr]. $$

Hence, we have

$$ \bar{x}^{\ell }=P_{C} \bigl[x^{\ell }- \bigl(Mx^{\ell }+q \bigr) \bigr]. $$
(1.10)

We further use the notation

$$ e \bigl(x^{\ell } \bigr)=x^{\ell }- \bar{x}^{\ell }. $$
(1.11)

By Proposition 1.1 (with \(G=I\)), x is a solution of (1.8) if and only if \(x=\bar{x}\). Thus the residual of the projection equation, \(\Vert e(x^{ \ell }) \Vert ^{2}\), can be used to measure how close an iterate \(x^{\ell }\) is to a solution of (1.8).

More specifically, we propose the following iterative scheme:

$$ x^{\ell +1}=x^{\ell }-\zeta \alpha ^{\ast }_{\ell }G^{-1} \bigl(I+M^{T} \bigr) \bigl(x ^{\ell }-\bar{x}^{\ell } \bigr), $$
(1.12)

where \(\zeta \in (0,1)\) is a relaxation factor and the step size \(\alpha ^{\ast }_{\ell }\) is determined by

$$ \alpha ^{\ast }_{\ell }=\frac{ \Vert x^{\ell }-\bar{x}^{\ell } \Vert ^{2}}{ \Vert G ^{-1}(I+M^{T})(x^{\ell }-\bar{x}^{\ell }) \Vert ^{2}_{G}}. $$
(1.13)

Obviously, the step size satisfies

$$ \alpha ^{\ast }_{\ell }\geq \frac{1}{ \Vert (I+M)G^{-1}(I+M)^{T} \Vert _{2}}= \alpha _{\mathrm{min}}, $$
(1.14)

where \(\Vert \cdot \Vert _{2}\) denotes the spectral norm of a matrix.
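A minimal sketch of the scheme (1.12)–(1.13), written for the special case \(C=\mathbb{R}^{n}_{+}\) and \(G=I\) (these simplifications, as well as the data below, are our own illustrative choices; for a general symmetric positive definite G the projection would be taken in the G-norm instead):

```python
import numpy as np

def projection_method(M, q, x0, zeta=0.9, tol=1e-10, max_iter=10_000):
    """Sketch of scheme (1.12) with step size (1.13), taking C = R^n_+ and G = I."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(max_iter):
        x_bar = np.maximum(x - (M @ x + q), 0.0)   # x_bar = P_C[x - (Mx + q)], C = R^n_+
        e = x - x_bar                              # residual e(x) of (1.11)
        if e @ e <= tol:
            break
        d = (I + M.T) @ e                          # direction (I + M^T) e(x)
        alpha = (e @ e) / (d @ d)                  # step size (1.13) with G = I
        x = x - zeta * alpha * d                   # update (1.12)
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
print(projection_method(M, q, x0=np.zeros(2)))     # approaches the solution (1.5, 0)
```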

The sequence \(\{x^{\ell }\}\) generated by (1.12) satisfies the following inequality:

$$ \bigl\Vert x^{\ell +1}-x^{\ast } \bigr\Vert ^{2}_{G} \leq \bigl\Vert x^{\ell }-x^{\ast } \bigr\Vert ^{2}_{G}- \zeta (2-\zeta )\alpha ^{\ast }_{\ell } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}, $$
(1.15)

where \(x^{\ast }\) is an arbitrary solution of (1.8). We note that

$$ \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}=0 $$

if and only if \(x^{\ell }\) is a solution of (1.8).

Now, we consider another iterative scheme for projection method:

$$ x^{\ell +1}=P_{C}^{G} \bigl[x^{\ell }-\zeta \alpha _{\ell }^{\ast }G^{-1} \bigl\{ \bigl(Mx ^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\} \bigr], $$
(1.16)

where the step size \(\alpha _{\ell }^{\ast }\) is also defined in (1.13).

We note that the sequence \(\{x^{\ell }\}\) generated by (1.16) also satisfies the property (1.15), and thus its convergence is ensured.
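Similarly, a minimal sketch of the second scheme (1.16), again with the illustrative simplifications \(C=\mathbb{R}^{n}_{+}\) and \(G=I\) (our own choices, not part of the general setting):

```python
import numpy as np

def projection_method_v2(M, q, x0, zeta=0.9, tol=1e-10, max_iter=10_000):
    """Sketch of scheme (1.16) with step size (1.13), taking C = R^n_+ and G = I."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(max_iter):
        x_bar = np.maximum(x - (M @ x + q), 0.0)              # x_bar of (1.10)
        e = x - x_bar
        if e @ e <= tol:
            break
        alpha = (e @ e) / np.sum(((I + M.T) @ e) ** 2)        # step size (1.13), G = I
        g = (M @ x + q) + M.T @ e                             # modified direction in (1.16)
        x = np.maximum(x - zeta * alpha * g, 0.0)             # projected update (1.16)
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
print(projection_method_v2(M, q, x0=np.zeros(2)))             # approaches the solution (1.5, 0)
```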

In this work, we establish the worst-case \(O(\frac{1}{t})\)-convergence rate measured by the iteration complexity for (1.12) and (1.16), where t is the iteration counter.

In our analysis, we use the facts that

$$ P_{C}^{G}(y)=\arg \min \biggl\{ \frac{1}{2} \Vert x-y \Vert ^{2}_{G} \Bigm| x\in C \biggr\} $$

and

$$ \bigl\langle y-P_{C}^{G}(y) , G \bigl(x-P_{C}^{G}(y) \bigr) \bigr\rangle \leq 0,\quad \forall y \in \mathbb{R}^{n},x\in C. $$
(1.17)

Lemma 1.3

([3])

Let C be a nonempty closed convex subset of a real Hilbert space H. Then, for any \(x,y,z \in H\), the following properties hold:

  1. (i)

    \(\Vert x-P_{C}(x) \Vert \leq \Vert x-y \Vert \);

  2. (ii)

    \(\langle x-P_{C}(x), P_{C}(x)-z\rangle \geq 0\), or \(\langle x-P_{C}(x), z-P_{C}(x)\rangle \leq 0\);

  3. (iii)

    \(\Vert P_{C}(x)-P_{C}(y) \Vert ^{2}\leq \langle x-y, P_{C}(x)-P _{C}(y)\rangle \);

  4. (iv)

    \(\Vert P_{C}(x)-z \Vert ^{2}\leq \Vert x-z \Vert ^{2}- \Vert x-P_{C}(x) \Vert ^{2}\);

  5. (v)

    \(\Vert P_{C}(x)-P_{C}(y) \Vert \leq \Vert x-y \Vert \).

Let \(x^{\ast }\) be any fixed solution of generalized complementarity problems. Since \(\bar{x}^{\ell }\in C\), it follows from (1.8) that

$$ \bigl\langle \bar{x}^{\ell }-x^{\ast }, Mx^{\ast }+q \bigr\rangle = 0,\quad \forall x ^{\ast }\in C^{\ast }. $$
(1.18)

Setting \(y=x^{\ell }-(Mx^{\ell }+q)\), \(G=I\) and \(x=x^{\ast }\) in (1.17), and recalling the notation \(\bar{x}^{\ell }\), we have

$$ \bigl\langle \bar{x}^{\ell }-x^{\ast }, \bigl[x^{\ell }- \bigl(Mx^{\ell }+q \bigr) \bigr]- \bar{x}^{\ell } \bigr\rangle \geq 0,\quad \forall x^{\ast }\in C^{\ast }. $$
(1.19)

Adding Eqs. (1.18) and (1.19) we have

$$ \bigl\langle \bar{x}^{\ell }-x^{\ast }, \bigl(x^{\ell }- \bar{x}^{\ell } \bigr)-M \bigl(x ^{\ell }- x^{\ast } \bigr) \bigr\rangle \geq 0,\quad \forall x^{\ast }\in C^{\ast }, $$

and consequently, using the positive semi-definiteness of M,

$$ \bigl\langle x^{\ell }-x^{\ast }, \bigl(I+M^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle \geq \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2},\quad \forall x^{\ast } \in C^{\ast }. $$
(1.20)
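Before proceeding, a quick numerical sanity check of (1.20) on a randomly generated monotone instance with a known solution (our own construction: we pick \(x^{\ast }\geq 0\) and set \(q=-Mx^{\ast }\), so that \(x^{\ast }\) trivially solves (1.8) with \(C=\mathbb{R}^{n}_{+}\)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T                                  # positive semi-definite matrix
x_star = np.abs(rng.standard_normal(n))      # a nonnegative point, taken as the solution
q = -M @ x_star                              # then M x* + q = 0, so x* solves (1.8) with C = R^n_+

x = rng.standard_normal(n)                   # an arbitrary iterate x^l
x_bar = np.maximum(x - (M @ x + q), 0.0)     # x_bar^l of (1.10) with C = R^n_+
e = x - x_bar                                # e(x^l) of (1.11)

lhs = (x - x_star) @ ((np.eye(n) + M.T) @ e)
print(lhs >= e @ e - 1e-12)                  # inequality (1.20): prints True
```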

2 An ϵ-approximated solution of generalized complementarity problems

To estimate the worst-case convergence rates measured by the iteration complexity for (1.12) and (1.16), we need to precisely define an ϵ-approximated solution of (1.8). We will consider the following two definitions, which are based on the generalized complementarity problems characterization and projection equation residual, respectively.

From [6], we know that \(C^{\ast }\) is convex and can be characterized as follows:

$$ C^{\ast }=\bigcap_{x\in C} \bigl\{ y\in C:\langle x-y, Mx+q\rangle = 0 \bigr\} . $$

Therefore, following [17], \(y\in C\) is an ϵ-approximated solution of (1.8) if it satisfies

$$ y\in C \quad\text{and}\quad \inf_{x\in \mathcal{D}(y)} \bigl\{ \langle x-y, Mx+q\rangle \bigr\} \geq -\epsilon , $$

where

$$ \mathcal{D}(y)=\bigl\{ x\in C: \Vert x-y \Vert _{G}\leq 1\bigr\} . $$

Later, we show that, for given \(\epsilon >0\), after at most \(O(\frac{1}{\epsilon })\) iterations, both (1.12) and (1.16) can find y such that

$$ y\in C \quad\text{and} \quad\sup_{x\in \mathcal{D}(y)} \bigl\{ \langle y-x, Mx+q\rangle \bigr\} \leq \epsilon. $$
(2.1)

As mentioned for \(e(x)\) defined in (1.11), \(\Vert e(x) \Vert ^{2}\) measures the distance between the iterate x and the solution set \(C^{\ast }\). We call y an ϵ-approximated solution of generalized complementarity problems in the sense of the projection equation residual if \(\Vert e(y) \Vert ^{2}\leq \epsilon \).

Now, we define

$$ q^{\ell }(\zeta )=\zeta (2-\zeta )\alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}, $$
(2.2)

where \(\alpha _{\ell }^{\ast }\) is given by (1.13) and use the notation

$$ D=M+M^{T}, $$

where M is the matrix in (1.8). We show that the sequence \(\{x^{\ell }\}\) generated by either (1.12) or (1.16) satisfies the inequality

$$ \zeta \alpha _{\ell }^{\ast } \bigl\langle x- \bar{x}^{\ell }, Mx+q \bigr\rangle \geq \frac{1}{2} \bigl( \bigl\Vert x-x^{\ell +1} \bigr\Vert ^{2}_{G}- \bigl\Vert x-x^{\ell } \bigr\Vert ^{2}_{G} \bigr)+ \frac{1}{2}q^{\ell }(\zeta ),\quad \forall x\in C, $$
(2.3)

where \(q^{\ell }(\zeta )\) is defined in (2.2).

Lemma 2.1

For given \(x^{\ell }\in \mathbb{R}^{n}\), let \(\bar{x}^{\ell }\) be defined by (1.10) and the new iterate \(x^{\ell +1}\) be generated by (1.12). Then the assertion (2.3) is satisfied.

Proof

Setting \(y=x^{\ell }-(Mx^{\ell }+q)\) in (1.17), and using \(\bar{x}^{\ell }=P_{C}[x^{\ell }-(Mx^{\ell }+q)]\), we have

$$ \bigl\langle x-\bar{x}^{\ell }, \bigl(Mx^{\ell }+q \bigr)- \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle \geq 0,\quad \forall x \in C, $$

which can be rewritten as

$$ \bigl\langle x-\bar{x}^{\ell }, (Mx+q)-M \bigl(x-\bar{x}^{\ell } \bigr)+ \bigl(M+M^{T} \bigr) \bigl(x ^{\ell }- \bar{x}^{\ell } \bigr)- \bigl(I+M^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle \geq 0. $$

Using the notation \(M+M^{T}=D\) and the Cauchy–Schwarz inequality, we have

$$\begin{aligned} & \bigl\langle x-\bar{x}^{\ell }, Mx+q \bigr\rangle \\ &\quad\geq \bigl\langle x-\bar{x}^{\ell }, M \bigl(x-\bar{x}^{\ell } \bigr)- \bigl(M+M^{T} \bigr) \bigl(x ^{\ell }- \bar{x}^{\ell } \bigr)+ \bigl(I+M^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad= \bigl\langle x-\bar{x}^{\ell }, \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle + \frac{1}{2} \bigl\Vert x-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}- \bigl\langle x-\bar{x} ^{\ell }, D \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad\geq \bigl\langle x-\bar{x}^{\ell }, \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle - \frac{1}{2} \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}. \end{aligned}$$

Moreover, it follows from (1.12) that

$$ \zeta \alpha _{\ell }^{\ast } \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr)=G \bigl(x ^{\ell }-x^{\ell +1} \bigr). $$

Thus, we obtain

$$ \zeta \alpha _{\ell }^{\ast } \bigl\langle x- \bar{x}^{\ell },Mx+q \bigr\rangle \geq \bigl\langle x-\bar{x}^{\ell }, G \bigl(x^{\ell }-x^{\ell +1} \bigr) \bigr\rangle -\frac{ \zeta \alpha _{\ell }^{\ast }}{2} \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}. $$
(2.4)

For the cross term on the right-hand side of (2.4):

$$ \bigl\langle x-\bar{x}^{\ell }, G \bigl(x^{\ell }-x^{\ell +1} \bigr) \bigr\rangle , $$

it follows from the identity

$$ \bigl\langle a-b,G(c-d) \bigr\rangle =\frac{1}{2} \bigl( \Vert a-d \Vert ^{2}_{G}- \Vert a-c \Vert ^{2} _{G} \bigr)+ \frac{1}{2} \bigl( \Vert c-b \Vert ^{2}_{G}- \Vert d-b \Vert ^{2}_{G} \bigr) $$

that

$$ \bigl\langle x-\bar{x}^{\ell }, G \bigl(x^{\ell }-x^{\ell +1} \bigr) \bigr\rangle = \frac{1}{2} \bigl( \bigl\Vert x-x^{\ell +1} \bigr\Vert ^{2}_{G}- \bigl\Vert x-x^{\ell } \bigr\Vert ^{2}_{G} \bigr)+ \frac{1}{2} \bigl( \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{G}- \bigl\Vert x^{\ell +1}- \bar{x}^{\ell } \bigr\Vert ^{2}_{G} \bigr). $$
(2.5)

Now, we treat the second part of the right-hand side of (2.5). Using (1.12), we get

$$\begin{aligned} & \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{G}- \bigl\Vert x^{\ell +1}- \bar{x}^{\ell } \bigr\Vert ^{2}_{G} \\ &\quad= \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{G}- \bigl\Vert \bigl(x^{\ell }- \bar{x}^{\ell } \bigr)- \zeta \alpha _{\ell }^{\ast }G^{-1} \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\Vert ^{2}_{G} \\ &\quad=2\zeta \alpha _{\ell }^{\ast } \bigl\langle x^{\ell }- \bar{x}^{\ell }, \bigl(I+M ^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle - \bigl(\zeta \alpha _{\ell }^{ \ast } \bigr)^{2} \bigl\Vert G^{-1} \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\Vert ^{2}_{G} \\ &\quad=2\zeta \alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}+ \zeta \alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}_{D} - \bigl( \zeta \alpha _{\ell }^{\ast } \bigr)^{2} \bigl\Vert G^{-1} \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x} ^{\ell } \bigr) \bigr\Vert ^{2}_{G}. \end{aligned}$$
(2.6)

Recall the definition of \(\alpha ^{\ast }_{\ell }\) in (1.13). It follows from (2.6) that

$$ \bigl(\zeta \alpha _{\ell }^{\ast } \bigr)^{2} \bigl\Vert G^{-1} \bigl(I+M^{T} \bigr) \bigl(x^{\ell }- \bar{x} ^{\ell } \bigr) \bigr\Vert ^{2}_{G}=\zeta ^{2}\alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}, $$

and consequently

$$ \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{G}- \bigl\Vert x^{\ell +1}- \bar{x}^{\ell } \bigr\Vert ^{2}_{G}=\zeta (2-\zeta )\alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }-\bar{x} ^{\ell } \bigr\Vert ^{2}+\zeta \alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }-\bar{x}^{ \ell } \bigr\Vert ^{2}_{D}. $$

Substituting it into the right-hand side of (2.5) and using the definition of \(q^{\ell }(\zeta )\), we obtain

$$ \bigl\langle x-\bar{x}^{\ell }, G \bigl(x^{\ell }-x^{\ell +1} \bigr) \bigr\rangle = \frac{1}{2} \bigl( \bigl\Vert x-x^{\ell +1} \bigr\Vert ^{2}_{G}- \bigl\Vert x-x^{\ell } \bigr\Vert ^{2}_{G} \bigr)+ \frac{1}{2}q^{\ell }( \zeta )+\frac{\zeta \alpha _{\ell }^{\ast }}{2} \bigl\Vert x ^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}. $$
(2.7)

Substituting (2.7) into (2.4), we get the assertion (2.3), and the lemma is proved. □

Now, we prove the assertion (2.3) for (1.16) in the following lemma.

Lemma 2.2

For given \(x^{\ell }\in \mathbb{R}^{n}\), let \(\bar{x}^{\ell }\) be defined by (1.10) and the new iterate \(x^{\ell +1}\) be generated by (1.16). Then the assertion (2.3) is satisfied.

Proof

We first estimate the quantity

$$ \bigl\langle x-\bar{x}^{\ell }, Mx+q \bigr\rangle - \bigl\langle x- \bar{x}^{\ell }, \bigl(Mx ^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle . $$

It follows from the Cauchy–Schwarz inequality that

$$\begin{aligned} & \bigl\langle x-\bar{x}^{\ell }, Mx+q \bigr\rangle - \bigl\langle x- \bar{x}^{\ell }, \bigl(Mx ^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad= \bigl\langle x-\bar{x}^{\ell }, M \bigl(x-x^{\ell } \bigr)-M^{T} \bigl(x^{\ell }-\bar{x} ^{\ell } \bigr) \bigr\rangle \\ &\quad= \bigl\langle x-\bar{x}^{\ell }, M \bigl(x-x^{\ell } \bigr)- \bigl(M+M^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad=\frac{1}{2} \bigl\Vert x-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}- \bigl\langle x-x^{\ell },D \bigl(x ^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad\geq -\frac{1}{2} \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}. \end{aligned}$$

Consequently, we obtain

$$ \zeta \alpha _{\ell }^{\ast } \bigl\langle x- \bar{x}^{\ell }, Mx+q \bigr\rangle \geq \bigl\langle x- \bar{x}^{\ell }, \zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx^{ \ell }+q \bigr)+M^{T} \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr] \bigr\rangle -\frac{\zeta \alpha _{\ell }^{\ast }}{2} \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}. $$
(2.8)

Now, we investigate the first term on the right-hand side of (2.8) and split it into the following two terms:

$$ \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, \zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx ^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle $$
(2.9)

and

$$ \bigl\langle x-x^{\ell +1}, \zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx^{\ell }+q \bigr)+M ^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle . $$
(2.10)

First, we deal with the term (2.9). Let us set \(y=x^{\ell }-(Mx ^{\ell }+q)\) in (1.17). Since \(\bar{x}^{\ell }=P_{C}[x ^{\ell }-(Mx^{\ell }+q)]\) and \(x^{\ell +1}\in C\), it follows that

$$ \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, Mx^{\ell }+q \bigr\rangle \geq \bigl\langle x^{\ell +1}-\bar{x}^{\ell },x^{\ell }- \bar{x}^{\ell } \bigr\rangle . $$

Adding the term

$$ \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, M^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle $$

to both sides in the above inequality, we obtain

$$ \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, \bigl(Mx^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle \geq \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, \bigl(I+M ^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle , $$

and it follows that

$$\begin{aligned} & \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, \zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx ^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle \\ &\quad\geq \zeta \alpha _{\ell }^{\ast } \bigl\langle x^{\ell +1}- \bar{x}^{ \ell }, \bigl(I+M^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad= \zeta \alpha _{\ell }^{\ast } \bigl\langle x^{\ell }- \bar{x}^{\ell }, \bigl(I+M ^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\rangle -\zeta \alpha _{\ell }^{\ast } \bigl\langle x^{\ell }-x^{\ell +1}, \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad\geq \zeta \alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}+\frac{ \zeta \alpha _{\ell }^{\ast }}{2} \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2} _{D} -\zeta \alpha _{\ell }^{\ast } \bigl\langle x^{\ell }-x^{\ell +1}, \bigl(I+M ^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle . \end{aligned}$$
(2.11)

For the cross term on the right-hand side of (2.11), using the Cauchy–Schwarz inequality and (1.13), we get

$$\begin{aligned} &{-}\zeta \alpha _{\ell }^{\ast } \bigl\langle x^{\ell }-x^{\ell +1}, \bigl(I+M^{T} \bigr) \bigl(x ^{\ell }-\bar{x}^{\ell } \bigr) \bigr\rangle \\ &\quad=- \bigl\langle x^{\ell }-x^{\ell +1}, G \bigl[ \zeta \alpha _{\ell }^{\ast }G^{-1} \bigl(I+M^{T} \bigr) \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle \\ &\quad\geq -\frac{1}{2} \bigl\Vert x^{\ell }-x^{\ell +1} \bigr\Vert ^{2}_{G}-\frac{1}{2} \zeta ^{2} \bigl( \alpha _{\ell }^{\ast } \bigr)^{2} \bigl\Vert G^{-1} \bigl(I+M^{T} \bigr) \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr\Vert ^{2}_{G} \\ &\quad=-\frac{1}{2} \bigl\Vert x^{\ell }-x^{\ell +1} \bigr\Vert ^{2}_{G}- \frac{1}{2}\zeta ^{2}\alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}. \end{aligned}$$

Substituting it into the right-hand side of (2.11) and using the notation \(q^{\ell }(\zeta )\), we obtain

$$\begin{aligned} & \bigl\langle x^{\ell +1}-\bar{x}^{\ell }, \zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx ^{\ell }+q \bigr)+M^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle \\ &\quad\geq \frac{1}{2}q^{\ell }(\zeta )+\frac{\zeta \alpha _{\ell }^{ \ast }}{2} \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}- \frac{1}{2} \bigl\Vert x^{ \ell }-x^{\ell +1} \bigr\Vert ^{2}_{G}. \end{aligned}$$
(2.12)

Now, we turn to treating the term (2.10). The update form of (1.16) means that \(x^{\ell +1}\) is the projection of the vector

$$ \bigl(x^{\ell }-\zeta \alpha _{\ell }^{\ast }G^{-1} \bigl[ \bigl(Mx^{\ell }+q \bigr)+M^{T} \bigl(x ^{\ell }- \bar{x}^{\ell } \bigr) \bigr] \bigr) $$

onto C. Thus, from (1.17), we have

$$ \bigl\langle \bigl(x^{\ell }-\zeta \alpha _{\ell }^{\ast }G^{-1} \bigl[ \bigl(Mx^{\ell }+q \bigr)+M ^{T} \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr] \bigr)-x^{\ell +1}, G \bigl(x-x^{\ell +1} \bigr) \bigr\rangle \leq 0,\quad \forall x\in C, $$

and consequently

$$ \bigl\langle x-x^{\ell +1},\zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx^{\ell }+q \bigr)+M ^{T} \bigl(x^{\ell }- \bar{x}^{\ell } \bigr) \bigr] \bigr\rangle \geq \bigl\langle x-x^{\ell +1}, G \bigl(x ^{\ell }-x^{\ell +1} \bigr) \bigr\rangle ,\quad \forall x\in C. $$

Using the identity

$$ \langle a,Gb\rangle =\frac{1}{2} \bigl\{ \Vert a \Vert ^{2}_{G}- \Vert a-b \Vert ^{2}_{G}+ \Vert b \Vert ^{2}_{G} \bigr\} $$

for the right hand side of the last inequality, we obtain

$$\begin{aligned} & \bigl\langle x-x^{\ell +1},\zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx^{\ell }+q \bigr)+M ^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle \\ &\quad\geq \frac{1}{2} \bigl( \bigl\Vert x-x^{\ell +1} \bigr\Vert ^{2}_{G}- \bigl\Vert x-x^{\ell } \bigr\Vert ^{2}_{G} \bigr)+ \frac{1}{2} \bigl\Vert x^{\ell }-x^{\ell +1} \bigr\Vert ^{2}_{G}. \end{aligned}$$
(2.13)

Adding (2.12) and (2.13) together, we get

$$\begin{aligned} & \bigl\langle x-\bar{x}^{\ell },\zeta \alpha _{\ell }^{\ast } \bigl[ \bigl(Mx^{\ell }+q \bigr)+M ^{T} \bigl(x^{\ell }-\bar{x}^{\ell } \bigr) \bigr] \bigr\rangle \\ &\quad\geq \frac{1}{2} \bigl( \bigl\Vert x-x^{\ell +1} \bigr\Vert ^{2}_{G}- \bigl\Vert x-x^{\ell } \bigr\Vert ^{2}_{G} \bigr)+ \frac{1}{2}q^{\ell }( \zeta )+\frac{\zeta \alpha _{\ell }^{*}}{2} \bigl\Vert x ^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{D}. \end{aligned}$$
(2.14)

Finally, substituting it into (2.8), the proof is complete. □

Based on Lemmas 2.1 and 2.2, the strict contraction property of the sequences generated by (1.12) and (1.16) can easily be derived; we state it in the following corollary.

Corollary 2.3

The sequence \(\{x^{\ell }\}\) generated by either (1.12) or (1.16) is strictly contractive with respect to the solution set \(C^{\ast }\) of (1.8).

Proof

From Lemma 2.1 and Lemma 2.2, we have proved that the sequence \(\{x^{\ell }\}\) generated by either (1.12) or (1.16) satisfies inequality (2.3). Setting \(x=x^{\ast }\) in (2.3), where \(x^{\ast }\in C^{\ast }\) is an arbitrary solution of (1.8), we get

$$ \bigl\Vert x^{\ell }-x^{\ast } \bigr\Vert ^{2}_{G}- \bigl\Vert x^{\ell +1}-x^{\ast } \bigr\Vert ^{2}_{G} \geq 2\zeta \alpha _{\ell }^{\ast } \bigl\langle \bar{x}^{\ell }-x^{\ast }, Mx ^{\ast }+q \bigr\rangle +q^{\ell }(\zeta ). $$

Since

$$ \bigl\langle \bar{x}^{\ell }-x^{\ast }, Mx^{\ast }+q \bigr\rangle = 0, $$

it follows from the last inequality and (2.2) that

$$ \bigl\Vert x^{\ell +1}-x^{\ast } \bigr\Vert ^{2}_{G} \leq \bigl\Vert x^{\ell }-x^{\ast } \bigr\Vert ^{2}_{G}- \zeta (2- \zeta )\alpha _{\ell }^{\ast } \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}, $$

which means that the sequence \(\{x^{\ell }\}\) generated by either (1.12) or (1.16) is strictly contractive with respect to the solution set \(C^{\ast }\). The proof is completed. □

3 Iteration complexity in ergodic sense

We first derive the worst-case convergence rates measured by iteration complexity in the ergodic sense. For this purpose, we need the definition of an ϵ-approximated solution of generalized complementarity problems.

Theorem 3.1

Let the sequence \(\{x^{\ell }\}\) be generated by either (1.12) or (1.16) starting from \(x^{0}\) and \(\bar{x}^{\ell }\) be given by (1.10). For any integer \(t>0\), let

$$ \bar{x}_{t}=\frac{1}{\lambda _{t}}\sum _{\ell =0}^{t}\alpha _{\ell }^{ \ast } \bar{x}^{\ell }\quad \textit{and}\quad \lambda _{t}=\sum _{\ell =0}^{t}\alpha _{\ell }^{\ast }. $$
(3.1)

Then it satisfies

$$ \langle \bar{x}_{t}-x, Mx+q\rangle \leq \frac{ \Vert x-x^{0} \Vert ^{2}_{G}}{2 \alpha _{\mathrm{min}}\zeta (t+1)}, \quad\forall x\in C. $$
(3.2)

Proof

Lemma 2.1 and Lemma 2.2 hold for any \(\zeta >0\), while the strict contraction in Corollary 2.3 is guaranteed for \(\zeta \in (0,2)\). In this proof, we can slightly extend the restriction on ζ to \(\zeta \in (0,2]\); in this case we still have \(q^{\ell }(\zeta )\geq 0\). It follows from the property of M, (2.2) and (2.3) that

$$ \bigl\langle x-\bar{x}^{\ell }, \alpha _{\ell }^{\ast }(Mx+q) \bigr\rangle +\frac{1}{2 \zeta } \bigl\Vert x-x^{\ell } \bigr\Vert ^{2}_{G} \geq \frac{1}{2\zeta } \bigl\Vert x-x^{\ell +1} \bigr\Vert ^{2}_{G},\quad \forall x\in C. $$

Summing the above inequality over \(\ell =0,1,\ldots ,t\), we obtain

$$ \Biggl\langle \Biggl(\sum_{\ell =0}^{t} \alpha _{\ell }^{\ast } \Biggr)x- \sum_{\ell =0}^{t} \alpha _{\ell }^{\ast }\bar{x}^{\ell }, Mx+q \Biggr\rangle + \frac{1}{2 \zeta } \bigl\Vert x-x^{0} \bigr\Vert ^{2}_{G}\geq 0, \quad\forall x\in C. $$

Using the notation \(\lambda _{t}\) and \(\bar{x}_{t}\) in the above inequality, we obtain

$$ \bigl\langle \bar{x}_{t}-x, Mx+q \bigr\rangle \leq \frac{ \Vert x-x^{0} \Vert ^{2}_{G}}{2 \zeta \lambda _{t}},\quad \forall x\in C. $$
(3.3)

Indeed, \(\bar{x}_{t}\in C\), because it is a convex combination of the iterates \(\bar{x}^{0},\bar{x}^{1},\ldots ,\bar{x}^{t}\).

Since

$$ \alpha _{\ell }^{\ast }\geq \alpha _{\mathrm{min}}\quad(\text{see [11]}), $$

it follows from (3.1) that

$$ \lambda _{t}\geq (t+1)\alpha _{\mathrm{min}}. $$

Substituting it into (3.3), the proof is completed. □
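The ergodic iterate \(\bar{x}_{t}\) of (3.1) can be accumulated on the fly alongside either scheme; a minimal sketch on top of the update (1.12), with the same illustrative simplifications \(C=\mathbb{R}^{n}_{+}\) and \(G=I\) as before (our own choices):

```python
import numpy as np

def ergodic_average(M, q, x0, t, zeta=0.9):
    """Accumulate bar{x}_t of (3.1) along t+1 steps of scheme (1.12), with C = R^n_+, G = I."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    weighted_sum = np.zeros_like(x)     # running sum of alpha_l^* * x_bar^l
    lam = 0.0                           # running sum lambda_t of the step sizes
    for _ in range(t + 1):
        x_bar = np.maximum(x - (M @ x + q), 0.0)
        e = x - x_bar
        d = (I + M.T) @ e
        alpha = (e @ e) / (d @ d) if d @ d > 0 else 1.0   # (1.13); weight 1 if already solved
        weighted_sum += alpha * x_bar
        lam += alpha
        x = x - zeta * alpha * d
    return weighted_sum / lam

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
print(ergodic_average(M, q, x0=np.ones(2), t=200))   # approaches the solution (1.5, 0)
```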

The next theorem establishes the worst-case \(O(\frac{1}{t})\)-convergence rate measured by the iteration complexity in the ergodic sense for (1.12) and (1.16).

Theorem 3.2

For any \(\epsilon >0\) and \(x^{\ast }\in C^{\ast }\), with an initial iterate \(x^{0}\), either (1.12) or (1.16) requires no more iterations than \(\lceil \frac{d}{2\alpha _{\mathrm{min}}\zeta \epsilon } \rceil \) to produce an ϵ-approximated solution of generalized complementarity problems (2.1), where

$$ d=3+9 \bigl\Vert x^{0}-x^{\ast } \bigr\Vert ^{2}_{G}+ \frac{6 \Vert G \Vert _{2} \Vert x^{0}-x^{\ast } \Vert ^{2}_{G}}{\zeta (2-\zeta )\alpha _{\mathrm{min}}}. $$
(3.4)

Proof

For \(x\in \mathcal{D}(\bar{x}_{t})\), using the Cauchy–Schwarz inequality and the convexity of \(\Vert \cdot \Vert ^{2}_{G}\), we have

$$\begin{aligned} \bigl\Vert x-x^{0} \bigr\Vert ^{2}_{G} &\leq 3 \bigl\Vert x- \bar{x}_{t} \bigr\Vert ^{2}_{G}+3 \bigl\Vert x^{0}-x^{ \ast } \bigr\Vert ^{2}_{G}+3 \bigl\Vert \bar{x}_{t}-x^{\ast } \bigr\Vert ^{2}_{G} \\ &\leq 3+3 \bigl\Vert x^{0}-x^{\ast } \bigr\Vert ^{2}_{G}+3 \max_{0\leq \ell \leq t} \bigl\Vert \bar{x}^{\ell }-x^{\ast } \bigr\Vert ^{2}_{G} \\ &\leq 3+3 \bigl\Vert x^{0}-x^{\ast } \bigr\Vert ^{2}_{G}+6 \max_{0\leq \ell \leq t} \bigl\Vert x ^{\ell }-x^{\ast } \bigr\Vert ^{2}_{G}+6 \max_{0\leq \ell \leq t} \bigl\Vert x^{\ell }- \bar{x}^{\ell } \bigr\Vert ^{2}_{G}. \end{aligned}$$
(3.5)

On the other hand, it follows from (1.15) that

$$ \bigl\Vert x^{\ell }-x^{\ast } \bigr\Vert ^{2}_{G} \leq \bigl\Vert x^{0}-x^{\ast } \bigr\Vert ^{2}_{G} $$
(3.6)

and

$$ \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}\leq \frac{ \Vert x^{0}-x^{\ast } \Vert ^{2} _{G}}{\zeta (2-\zeta )\alpha _{\ell }^{\ast }}\leq \frac{ \Vert x^{0}-x^{ \ast } \Vert ^{2}_{G}}{\zeta (2-\zeta )\alpha _{\mathrm{min}}}. $$
(3.7)

Since

$$ \bigl\Vert x^{\ell }-\bar{x}^{\ell } \bigr\Vert ^{2}_{G}\leq \Vert G \Vert _{2} \bigl\Vert x^{\ell }-\bar{x} ^{\ell } \bigr\Vert ^{2}, $$

and from (3.5)–(3.7), we have

$$ \bigl\Vert x-x^{0} \bigr\Vert ^{2}_{G} \leq 3+9 \bigl\Vert x^{0}-x^{\ast } \bigr\Vert ^{2}_{G}+ \frac{6 \Vert G \Vert _{2} \Vert x^{0}-x^{\ast } \Vert ^{2}_{G}}{\zeta (2-\zeta )\alpha _{\mathrm{min}}}=d. $$
(3.8)

This, together with (3.2), completes the proof of the theorem. □

4 Iteration complexity in nonergodic sense

In this section, we derive the worst-case \(O(\frac{1}{t})\) convergence rates measured by the iteration complexity in a nonergodic sense for (1.12) and (1.16). For this purpose, we need the definition of an ϵ-approximated solution of (1.8) in the sense of a projection equation residual characterization.

Theorem 4.1

For any \(\epsilon >0\) and \(x^{\ast }\in C^{\ast }\), with an initial iterate \(x^{0}\), either (1.12) or (1.16) requires no more iterations than \(\lceil \frac{ \Vert x^{0}-x^{\ast } \Vert ^{2}_{G}}{ \alpha _{\mathrm{min}}\zeta (2-\zeta )\epsilon }\rceil \) to obtain an ϵ-approximated solution of (1.8) in the sense of \(\Vert e(x^{\ell }) \Vert ^{2}\leq \epsilon\).

Proof

Summing inequality (1.15) over all \(\ell \geq 0\) and using the inequality \(\alpha _{\ell }^{\ast }\geq \alpha _{\mathrm{min}}\), we derive that

$$ \sum_{\ell =0}^{\infty } \bigl\Vert e \bigl(x^{\ell } \bigr) \bigr\Vert ^{2}\leq \frac{ \Vert x^{0}-x^{ \ast } \Vert ^{2}_{G}}{\alpha _{\mathrm{min}}\zeta (2-\zeta )}. $$
(4.1)

This implies

$$ (t+1)\min_{0\leq \ell \leq t} \bigl\Vert e \bigl(x^{\ell } \bigr) \bigr\Vert ^{2}\leq \sum_{\ell =0} ^{t} \bigl\Vert e \bigl(x^{\ell } \bigr) \bigr\Vert ^{2}\leq \frac{ \Vert x^{0}-x^{\ast } \Vert ^{2}_{G}}{ \alpha _{\mathrm{min}}\zeta (2-\zeta )}, $$

which proves the theorem. □
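For example, with the hypothetical values \(\Vert x^{0}-x^{\ast }\Vert ^{2}_{G}=10\), \(\alpha _{\mathrm{min}}=0.1\), \(\zeta =1\) and \(\epsilon =10^{-3}\) (illustrative numbers of ours, not data from the paper), the bound of Theorem 4.1 gives

$$ \biggl\lceil \frac{ \Vert x^{0}-x^{\ast } \Vert ^{2}_{G}}{\alpha _{\mathrm{min}}\zeta (2-\zeta )\epsilon } \biggr\rceil = \biggl\lceil \frac{10}{0.1\cdot 1\cdot 1\cdot 10^{-3}} \biggr\rceil =10^{5} $$

iterations to guarantee \(\min_{0\leq \ell \leq t} \Vert e(x^{\ell }) \Vert ^{2}\leq 10^{-3}\).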

Indeed, the worst-case \(O(\frac{1}{t})\)-convergence rates in the nonergodic sense established in Theorem 4.1 can easily be refined to an \(o(\frac{1}{t})\) order. We summarize this in the following.

Corollary 4.2

Let the sequence \(\{x^{\ell }\}\) be generated by either (1.12) or (1.16), and \(e(x^{\ell })\) be defined in (1.11). For any integer \(t>0\), we have

$$ \min_{0\leq \ell \leq t} \bigl\Vert e \bigl(x^{\ell } \bigr) \bigr\Vert ^{2}=o \biggl(\frac{1}{t} \biggr),\quad \textit{as } t\longrightarrow \infty. $$
(4.2)

Proof

Notice that

$$ \frac{t}{2}\min_{0\leq \ell \leq t} \bigl\Vert e \bigl(x^{\ell } \bigr) \bigr\Vert ^{2}\leq \sum _{\ell =\lfloor \frac{t}{2}\rfloor +1}^{t} \bigl\Vert e \bigl(x^{\ell } \bigr) \bigr\Vert ^{2}\longrightarrow 0 \quad\text{as } t\longrightarrow \infty , $$
(4.3)

where the convergence in (4.3) holds owing to (4.1) and the Cauchy criterion. The proof is completed. □

References

  1. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation, Numerical Methods. Prentice-Hall, Englewood Cliffs (1989)

  2. Billups, S.C., Murty, K.G.: Complementarity problems. J. Comput. Appl. Math. 124, 303–318 (2000)

  3. Brezis, H.: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. North-Holland, Amsterdam (1973)

  4. Chen, C., Fu, X., He, B.S., Yuan, X.: On the iteration complexity of some projection methods for monotone linear variational inequalities. J. Optim. Theory Appl. 172, 914–928 (2017)

  5. Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, Boston (1992)

  6. Facchinei, F., Pang, J.S.: Finite Dimensional Variational Inequalities and Complementarity Problems, Vol. I and II, Springer Series in Operations. Springer, New York (2003)

  7. Ferris, M.C., Pang, J.-S. (eds.): Complementarity and Variational Problems: State of the Art. SIAM, Philadelphia (1997)

  8. Ferris, M.C., Pang, J.-S.: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669–713 (1997)

  9. Harker, P.T., Pang, J.-S.: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 48, 161–220 (1990)

  10. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)

  11. He, B.S.: A globally linearly convergence projection and contraction method for a class of linear complementarity problems. Schwerpunkprogramm der Deutschen Forschungsgemeinschaft Anwendungsbezogene Optimierung und Steuerung. Report No. 352, (1992)

  12. He, B.S.: A new method for a class of linear variational inequalities. Math. Program. 66, 137–144 (1994)

  13. He, B.S., Yuan, X.M.: On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers. Numer. Math. 130(3), 567–577 (2015)

  14. Karush, W.: Minima of functions of several variables with inequalities as side constraints. M.Sc. Dissertation, Department of Mathematics, Univ. Chicago, Chicago, Illinois, USA (1939)

  15. Nemirovski, A.: Prox-method with rate of convergence \(O(\frac{1}{t})\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15(2), 229–251 (2005)

  16. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(O(\frac{1}{k^{2}})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)

  17. Nesterov, Y.E.: Gradient methods for minimizing composite objective functions. Math. Program. 140(1), 125–161 (2013)

  18. Varga, R.S.: Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs (1966)

Availability of data and materials

The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Funding

This work was supported by the Department of Mathematics, Jazan University, Jazan-45142, KSA.

Author information

Contributions

All authors finished this work together. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Salahuddin or Meshari Alesemi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Salahuddin, Alesemi, M. Iteration complexity of generalized complementarity problems. J Inequal Appl 2019, 79 (2019). https://doi.org/10.1186/s13660-019-2024-8
