Some results on the filter method for nonlinear complementarity problems

Abstract

Recent studies show that the filter method has good numerical performance for nonlinear complementarity problems (NCPs). The approach is to reformulate an NCP as a constrained optimization problem solved by filter algorithms. However, these studies only prove that the iterative sequence converges to a KKT point of the constrained optimization problem. In this paper, we investigate the relation between KKT points of the constrained optimization problem and solutions of the NCP. First, we give several sufficient conditions under which a KKT point of the constrained optimization problem is a solution of the NCP; second, we define regular conditions and regular points, which include and generalize the previous results; third, we prove that the level sets of the objective function of the constrained optimization problem are bounded for a strongly monotone function or a uniform P-function; finally, we present some examples to verify the previous results.

1 Introduction

Consider the following nonlinear complementarity problem (NCP):

$$ \text{find } x\in \mathbb{R}^{n} \text{ such that } x\geq 0, F(x)\geq 0, \text{ and } x^{T}F(x)=0, $$
(1.1)

where \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is continuously differentiable everywhere and the superscript T denotes the transpose operator. When \(F(x)=Mx+q\), with \(M\in \mathbb{R}^{n\times n}\) and \(q\in \mathbb{R}^{n}\), the NCP becomes a linear complementarity problem.
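As a quick illustration (not from the paper), the following Python sketch checks numerically whether a given point satisfies the three conditions of (1.1); the helper name and the matrix M and vector q below are hypothetical test data.

```python
import numpy as np

def solves_ncp(x, F, tol=1e-8):
    """Check x >= 0, F(x) >= 0, and x^T F(x) = 0 up to a tolerance tol."""
    Fx = F(x)
    return bool(np.all(x >= -tol) and np.all(Fx >= -tol) and abs(x @ Fx) <= tol)

# Linear complementarity example F(x) = Mx + q (hypothetical data):
M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
print(solves_ncp(np.array([1/3, 1/3]), F))  # True: x > 0 and F(x) = 0
```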

Nonlinear complementarity problems have many important applications in engineering and equilibrium modeling [10, 12, 18, 28], and many numerical methods have been developed to solve NCPs [1, 6, 17, 29, 32, 38]. Based on NCP functions, some researchers solve NCPs by reformulating them as unconstrained optimization problems [5, 9, 11, 14]; under suitable assumptions, solutions of NCPs are obtained by solving these reformulations. Subsequently, derivative-free algorithms for NCPs were presented [3–5, 11, 14, 17, 19, 23, 25, 37, 40].

In the last 18 years, the filter method [7, 8, 13, 15, 16, 30, 33–35] has been regarded as an efficient constrained optimization method. Its advantage is that trial points are accepted if they improve either the objective function or the constraint violation, rather than a combination of these two measures defined by a merit function. Recently, some authors [21, 22, 27, 31, 36, 39] have naturally reformulated the NCP as the inequality constrained optimization problem:

$$\begin{aligned} &\min\quad \Phi (x)=\frac{1}{2}\sum_{i=1}^{n} \bigl[F_{i}(x)x_{i} \bigr]^{2} \end{aligned}$$
(1.2a)
$$\begin{aligned} &\text{subject to}\quad F_{j}(x)\geq 0,\quad j\in \{1,2,\ldots,n\}, \end{aligned}$$
(1.2b)
$$\begin{aligned} &\phantom{\text{subject to}\quad} x_{j}\geq 0,\quad j\in \{1,2,\ldots,n\}, \end{aligned}$$
(1.2c)

and obtained good numerical performance with filter algorithms. However, they only prove that the iterative sequence converges to a KKT point of the constrained optimization problem. The relation between KKT points of constrained optimization problem (1.2a)–(1.2c) and solutions of NCP (1.1) has not been studied. Under what conditions is a KKT point of (1.2a)–(1.2c) a solution of (1.1)? This is an interesting question which should be answered.

Motivated by the above discussion, we study the relation between solutions of the NCP and KKT points of (1.2a)–(1.2c) and propose several sufficient (and necessary) conditions for a KKT point of (1.2a)–(1.2c) to be a solution of the NCP. This work explains the relation between the optimization problem and the NCP and provides a theoretical basis for filter algorithms for the NCP. The paper is outlined as follows. In Sect. 2, we recall some definitions and basic facts; we give several sufficient conditions in Sect. 3; the definitions of regular points and regular conditions are proposed in Sect. 4; the boundedness of level sets is discussed in Sect. 5; some numerical results are presented to verify the previous results in Sect. 6.

Notation: \(I:=\{1,2,\ldots,n\}\) denotes the index set. Given \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\), the ith component function is denoted by \(F_{i}(x)\). \(F'(x)\) is the Jacobian of F at \(x\in \mathbb{R}^{n}\), and \(\nabla F(x)=[\nabla F_{1}(x),\ldots,\nabla F_{n}(x)]\) denotes the transpose Jacobian of F at x. \(e_{i}\in \mathbb{R}^{n}\) denotes the ith column of the identity matrix \(I_{n}\), and \(x\circ y=(x_{1}y_{1},\ldots,x_{n}y_{n})^{T}\) denotes the componentwise product. For an index set \(S\subseteq I\), \(x_{S}\) denotes the subvector of x indexed by S.

2 Preliminaries

In this section, we recall some background concepts and materials.

Definition 1

([14])

A function \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is called

(1) monotone if, for all \(x, y\in \mathbb{R}^{n}\),

$$ (x-y)^{T}\bigl[F(x)-F(y)\bigr]\geq 0; $$

(2) strictly monotone if, for all \(x, y\in \mathbb{R}^{n}\) with \(x\neq y\),

$$ (x-y)^{T}\bigl[F(x)-F(y)\bigr]> 0; $$

(3) strongly monotone (with modulus \(\omega >0\)) if, for all \(x, y\in \mathbb{R}^{n}\),

$$ (x-y)^{T}\bigl[F(x)-F(y)\bigr]\geq \omega \Vert x-y \Vert ^{2}. $$

Obviously, strongly monotone functions are strictly monotone, and strictly monotone functions are monotone.

Lemma 1

([14])

For a continuously differentiable function \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\),

(1) F is monotone if and only if \(F'(x)\) is positive semidefinite for all \(x\in \mathbb{R}^{n}\);

(2) F is strictly monotone if \(F'(x)\) is positive definite for all \(x\in \mathbb{R}^{n}\);

(3) F is strongly monotone if \(F'(x)\) is uniformly positive definite, i.e., \(d^{T}F'(x)d\geq \omega \|d\|^{2}\) for some \(\omega >0\) and all \(x,d\in \mathbb{R}^{n}\).

Note that the converse direction in Lemma 1(2) is not correct in general. A solution \(x^{*}\in \mathbb{R}^{n}\) of the NCP is said to be nondegenerate if \(x_{i}^{*}+F_{i}(x^{*})>0\) for all \(i\in I\), and degenerate otherwise.

Lemma 2

([14])

Assume that \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is a continuous and strongly monotone function. Then NCP (1.1) has at most one solution.

Definition 2

([11])

A matrix \(M\in \mathbb{R}^{n\times n}\) is said to be a \(P_{0}\)-matrix if all its principal minors are nonnegative.

Lemma 3

([11])

A matrix \(M\in \mathbb{R}^{n\times n}\) is a

(1) \(P_{0}\)-matrix if each of its principal minors is nonnegative;

(2) P-matrix if each of its principal minors is positive;

(3) \(R_{0}\)-matrix if the linear complementarity problem

$$ Mx\geq 0,\quad x\geq 0, x^{T}Mx=0 $$

has 0 as its unique solution.

It is obvious that every P-matrix is also a \(P_{0}\)-matrix, and it is known that every P-matrix is an \(R_{0}\)-matrix. We shall also need the following characterization of \(P_{0}(P)\)-matrices.

Lemma 4

([11])

A matrix \(M\in \mathbb{R}^{n\times n}\) is a \(P_{0}(P)\)-matrix if and only if, for every nonzero vector x, there exists an index i such that \(x_{i}\neq 0\) and \(x_{i}(Mx)_{i}\geq (>)0\).
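The principal-minor characterizations in Lemma 3 translate directly into a brute-force numerical test. The following Python sketch (our illustration; it enumerates all \(2^{n}-1\) principal minors and is meant only for small n) checks the \(P_{0}\)- and P-matrix properties; the upper-triangular test matrix is the Murty matrix that reappears in Example 3.

```python
import itertools
import numpy as np

def principal_minors(M):
    """Yield det(M[alpha, alpha]) over all nonempty index sets alpha."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for alpha in itertools.combinations(range(n), k):
            yield np.linalg.det(M[np.ix_(alpha, alpha)])

def is_P0_matrix(M, tol=1e-10):
    return all(d >= -tol for d in principal_minors(M))

def is_P_matrix(M, tol=1e-10):
    return all(d > tol for d in principal_minors(M))

# Murty matrix (1 on the diagonal, 2 above it): all principal minors equal 1.
n = 4
M = np.triu(2 * np.ones((n, n))) - np.eye(n)
print(is_P_matrix(M), is_P0_matrix(M))  # True True
```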

Lemma 5

([11])

A function \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is a

(1) \(P_{0}\)-function if, for every x and y in \(\mathbb{R}^{n}\) with \(x\neq y\), there is an index i such that

$$ x_{i}\neq y_{i},\quad (x_{i}-y_{i}) \bigl[F_{i}(x)-F_{i}(y)\bigr]\geq 0; $$

(2) P-function if, for every x and y in \(\mathbb{R}^{n}\) with \(x\neq y\), there is an index i such that

$$ (x_{i}-y_{i})\bigl[F_{i}(x)-F_{i}(y) \bigr]> 0; $$

(3) uniform P-function if there exists a constant \(\omega >0\) such that, for every x and y in \(\mathbb{R}^{n}\), there is an index i such that

$$ (x_{i}-y_{i})\bigl[F_{i}(x)-F_{i}(y) \bigr]\geq \omega \Vert x-y \Vert ^{2}. $$

It is obvious that every monotone function is a \(P_{0}\)-function, every strictly monotone function is a P-function, and every strongly monotone function is a uniform P-function. Furthermore, it is known that the Jacobian of every continuously differentiable \(P_{0}\)-function is a \(P_{0}\)-matrix and that if the Jacobian of a continuously differentiable function is a P-matrix for every x, then the function is a P-function. If F is affine (that is, if \(F(x)= Mx+q\)), then F is a \(P_{0}\)-function if M is a \(P_{0}\)-matrix. F is a (uniform) P-function if M is a P-matrix (note that in the affine case, the concepts of uniform P-function and P-function coincide).

3 Sufficient conditions

In this paper, NCP (1.1) is transformed into the following equivalent optimization problem with inequality and nonnegativity constraints:

$$\begin{aligned} &\min\quad \Phi (x)=\frac{1}{2}\sum_{i=1}^{n} \bigl[F_{i}(x)x_{i} \bigr]^{2} \end{aligned}$$
(3.1a)
$$\begin{aligned} &\text{subject to }\quad F_{j}(x)\geq 0,\quad j\in \{1,2,\ldots,n\}, \end{aligned}$$
(3.1b)
$$\begin{aligned} &\phantom{\text{subject to }\quad} x_{j}\geq 0,\quad j\in \{1,2,\ldots,n\}. \end{aligned}$$
(3.1c)

The KKT conditions of (3.1a)–(3.1c) are

$$\begin{aligned} &\Biggl(\sum_{i=1}^{n}F_{i}(x) \nabla F_{i}(x)x_{i}^{2}+F_{i}^{2}(x)x_{i}e_{i} \Biggr)-\nabla F(x)\mu -\nu =0, \end{aligned}$$
(3.2a)
$$\begin{aligned} &F(x)\geq 0,\quad x\geq 0, \end{aligned}$$
(3.2b)
$$\begin{aligned} &\mu \circ F(x)=0,\quad \nu \circ x=0, \end{aligned}$$
(3.2c)
$$\begin{aligned} &\mu \geq 0, \quad\nu \geq 0, \end{aligned}$$
(3.2d)

where \(\mu, \nu \in \mathbb{R}^{n}\) are the vectors of multipliers corresponding to constraints (3.1b) and (3.1c), respectively.
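The stationarity condition (3.2a) is built from the gradient \(\nabla \Phi (x)=\sum_{i=1}^{n} [x_{i}^{2}F_{i}(x)\nabla F_{i}(x)+x_{i}F_{i}^{2}(x)e_{i} ]\). The following Python sketch (our illustration, with hypothetical affine data) verifies this formula against central finite differences.

```python
import numpy as np

def grad_phi(x, F, JF):
    """Gradient of Phi(x) = 0.5 * sum_i (F_i(x) * x_i)^2,
    i.e. sum_i [x_i^2 F_i(x) grad F_i(x) + x_i F_i(x)^2 e_i]."""
    Fx, J = F(x), JF(x)                 # J[i, :] is the row gradient of F_i
    return J.T @ (x**2 * Fx) + x * Fx**2

# Hypothetical affine test data F(x) = Mx + q:
M = np.array([[4.0, -1.0], [-1.0, 4.0]])
q = np.array([-1.0, -1.0])
F, JF = (lambda x: M @ x + q), (lambda x: M)
phi = lambda x: 0.5 * np.sum((F(x) * x) ** 2)

x, h = np.array([0.7, 0.2]), 1e-6
fd = np.array([(phi(x + h*e) - phi(x - h*e)) / (2*h) for e in np.eye(2)])
print(np.allclose(grad_phi(x, F, JF), fd, atol=1e-5))  # True
```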

Remark 1

Previous algorithms [21, 22, 27, 31, 36, 39] treat the nonnegativity constraints simply as inequality constraints. In the following analysis we need to discuss the inequality constraints (3.1b) and the nonnegativity constraints (3.1c) separately.

Lemma 6

Suppose that the NCP has at least one solution. Then \(x^{*}\) solves NCP (1.1) if and only if \(x^{*}\) is a global minimizer of constrained optimization problem (3.1a)–(3.1c).

The problem of finding a global minimizer is quite difficult. It is therefore of interest under which assumptions on the mapping F the KKT points of (3.1a)–(3.1c) are global minimizers. The presence of the multipliers in KKT conditions (3.2a)–(3.2d) complicates the analysis. First, using the relation between the constraints and the objective function, we rearrange the KKT conditions of (3.1a)–(3.1c).

Lemma 7

Suppose that \(x^{*}\) is a KKT point of (3.1a)–(3.1c). Then there exist \(\mu ^{*},\nu ^{*}\geq 0\) such that

$$ \nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix} + \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*})-\nu ^{*}_{1} \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*})-\nu ^{*}_{n} \end{pmatrix}=0. $$
(3.3)

Proof

From assumptions there exist \(\mu ^{*},\nu ^{*}\geq 0\) such that

$$\begin{aligned} &\Biggl(\sum_{i=1}^{n}F_{i} \bigl(x^{*}\bigr)\nabla F_{i}\bigl(x^{*}\bigr) \bigl(x_{i}^{*}\bigr)^{2}+F_{i}^{2} \bigl(x^{*}\bigr)x_{i}^{*}e_{i} \Biggr)- \nabla F\bigl(x^{*}\bigr)\mu ^{*}-\nu ^{*}=0, \end{aligned}$$
(3.4a)
$$\begin{aligned} &F\bigl(x^{*}\bigr)\geq 0,\quad x^{*}\geq 0, \end{aligned}$$
(3.4b)
$$\begin{aligned} &\mu ^{*}\circ F\bigl(x^{*}\bigr)=0,\quad \nu ^{*}\circ x^{*}=0, \end{aligned}$$
(3.4c)
$$\begin{aligned} &\mu ^{*}\geq 0, \quad\nu ^{*}\geq 0. \end{aligned}$$
(3.4d)

Writing \(\nabla F(x^{*})= (\nabla F_{1}(x^{*}),\ldots,\nabla F_{n}(x^{*}) )\), we have

$$\begin{aligned} 0&=\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*}) \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*}) \end{pmatrix} + \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*}) \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*}) \end{pmatrix}-\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} \mu ^{*}_{1} \\ \vdots \\ \mu ^{*}_{n} \end{pmatrix}- \begin{pmatrix} \nu ^{*}_{1} \\ \vdots \\ \nu ^{*}_{n} \end{pmatrix} \\ &=\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix} + \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*})-\nu ^{*}_{1} \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*})-\nu ^{*}_{n} \end{pmatrix}. \end{aligned}$$

 □

Remark 2

The relation between the objective function and constraints is the key to analysis.

Next, we introduce some index sets:

$$\begin{aligned} &C:=\bigl\{ i\in I| x_{i}\geq 0, F_{i}(x)\geq 0, x_{i}F_{i}(x)=0\bigr\} , \\ &R:=I\backslash C, \end{aligned}$$

and further partition the index set C as follows:

$$\begin{aligned} &C_{1}:=\bigl\{ i\in C| x_{i}> 0, F_{i}(x)=0 \bigr\} , \\ &C_{2}:=\bigl\{ i\in C| x_{i}= 0, F_{i}(x)>0 \bigr\} , \\ &C_{3}:=C\backslash (C_{1}\cup C_{2})=\bigl\{ i \in C| x_{i}= 0, F_{i}(x)= 0 \bigr\} . \end{aligned}$$
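Numerically, these index sets are computed with a small tolerance in place of the exact comparisons. A minimal Python sketch (our illustration; the sample vectors are hypothetical) is as follows.

```python
import numpy as np

def classify_indices(x, Fx, tol=1e-8):
    """Partition indices into C1, C2, C3 (complementarity holds) and R."""
    n = len(x)
    C1 = [i for i in range(n) if x[i] > tol and abs(Fx[i]) <= tol]
    C2 = [i for i in range(n) if abs(x[i]) <= tol and Fx[i] > tol]
    C3 = [i for i in range(n) if abs(x[i]) <= tol and abs(Fx[i]) <= tol]
    R  = [i for i in range(n) if i not in C1 + C2 + C3]
    return C1, C2, C3, R

# x = (2, 0, 0, 1) and F(x) = (0, 3, 0, 2): index 3 violates x_i F_i = 0.
print(classify_indices(np.array([2.0, 0, 0, 1]), np.array([0.0, 3, 0, 2])))
# ([0], [1], [2], [3])
```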

Theorem 1

Suppose that the mapping F has a positive semidefinite Jacobian \(F'(x^{*})\). Then \(x^{*}\) is a global minimizer of constrained optimization problem (3.1a)–(3.1c) if and only if it is a KKT point of (3.1a)–(3.1c).

Proof

Suppose that \(x^{*}\) is a KKT point of (3.1a)–(3.1c), so that (3.3) holds. Assume, to the contrary, that \(\Phi (x^{*})\neq 0\); since \(x^{*}\) is feasible, this means \(R\neq \emptyset \). Premultiplying (3.3) by

$$ \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}^{T}, $$

we get

$$\begin{aligned} 0={}& \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}^{T}\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}\\ &{} + \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}^{T} \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*})-\nu ^{*}_{1} \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*})-\nu ^{*}_{n} \end{pmatrix}. \end{aligned}$$

Since \(\nabla F(x^{*})\) is positive semidefinite, we obtain that

$$ \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}^{T}\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}\geq 0. $$

Hence

$$ 0\geq \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix}^{T} \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*})-\nu ^{*}_{1} \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*})-\nu ^{*}_{n} \end{pmatrix}=\sum _{i=1}^{n} \bigl[\bigl(x_{i}^{*} \bigr)^{2}F_{i}\bigl(x^{*}\bigr)-\mu ^{*}_{i} \bigr] \bigl[x_{i}^{*}F_{i}^{2} \bigl(x^{*}\bigr)-\nu ^{*}_{i} \bigr]. $$

By the complementarity conditions (3.2c), the multipliers \(\mu ^{*}\) and \(\nu ^{*}\) satisfy the following on the index sets:

(1) \(\mu ^{*}_{R}=0\) and \(\nu ^{*}_{R}=0\);

(2) \(\mu ^{*}_{C_{2}}=0\) and \(\nu ^{*}_{C_{2}}\geq 0\);

(3) \(\mu ^{*}_{C_{1}}\geq 0\) and \(\nu ^{*}_{C_{1}}=0\);

(4) \(\mu ^{*}_{C_{3}}\geq 0\) and \(\nu ^{*}_{C_{3}}\geq 0\).

So we have

$$\begin{aligned} &\sum_{i=1}^{n} \bigl[ \bigl(x_{i}^{*}\bigr)^{2}F_{i} \bigl(x^{*}\bigr)-\mu ^{*}_{i} \bigr] \bigl[x_{i}^{*}F_{i}^{2} \bigl(x^{*}\bigr)-\nu ^{*}_{i} \bigr] \\ &\quad=\sum_{i\in R}\bigl(x_{i}^{*} \bigr)^{2}F_{i}\bigl(x^{*}\bigr)\cdot x_{i}^{*}F_{i}^{2}\bigl(x^{*} \bigr)+ \sum_{i\in C_{2}}0\cdot \bigl(-\nu ^{*}_{i}\bigr)+\sum_{i\in C_{1}}-\mu ^{*}_{i} \cdot 0+\sum_{i\in C_{3}}\mu ^{*}_{i}\cdot \nu ^{*}_{i} \\ &\quad=\sum_{i\in R}\bigl(x_{i}^{*} \bigr)^{2}F_{i}\bigl(x^{*}\bigr)\cdot x_{i}^{*}F_{i}^{2}\bigl(x^{*} \bigr)+ \sum_{i\in C_{3}}\mu ^{*}_{i} \cdot \nu ^{*}_{i} \\ &\quad>0. \end{aligned}$$

The last expression is strictly positive because \(R\neq \emptyset \), which contradicts \(\sum_{i=1}^{n} [(x_{i}^{*})^{2}F_{i}(x^{*})-\mu ^{*}_{i} ] [x_{i}^{*}F_{i}^{2}(x^{*})-\nu ^{*}_{i} ]\leq 0\). Thus \(\Phi (x^{*})=0\), i.e., \(x^{*}\) is a global minimizer of constrained optimization problem (3.1a)–(3.1c).

Conversely, a global minimizer of the constrained optimization problem is obviously a KKT point of (3.1a)–(3.1c). □

Corollary 1

Suppose that the mapping F is a monotone function, then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).

Proof

Since the mapping F is monotone, the Jacobian \(F'(x^{*})\) is positive semidefinite by Lemma 1(1). By the proof of Theorem 1, a KKT point \(x^{*}\) satisfies \(\Phi (x^{*})=0\), which together with feasibility means that \(x^{*}\) solves NCP (1.1); the converse is obvious. □

Theorem 2

Suppose that the Jacobian \(F'(x^{*})\) of the mapping F is a P-matrix. Then \(x^{*}\) is a global minimizer of constrained optimization problem (3.1a)–(3.1c) if and only if it is a KKT point of (3.1a)–(3.1c).

Proof

Suppose that \(x^{*}\) is a KKT point of (3.1a)–(3.1c), then we know from Lemma 7 that

$$ \nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix} + \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*})-\nu ^{*}_{1} \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*})-\nu ^{*}_{n} \end{pmatrix}=0. $$
(3.5)

Suppose, to the contrary, that \(x^{*}\) is not a solution of the NCP; then \(R\neq \emptyset \). The multipliers \(\mu ^{*}\) and \(\nu ^{*}\) satisfy the same conditions on the index sets as in the proof of Theorem 1. Reordering the components according to the partition \((R, C_{2}, C_{1}, C_{3})\), we may write

$$ \nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix} + \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=0. $$

Since \(F'(x^{*})\) is a P-matrix, so is its transpose \(\nabla F(x^{*})\) (principal minors are invariant under transposition). By Lemma 4, for every nonzero vector x there exists an index i such that \(x_{i}\neq 0\) and \(x_{i}(\nabla F(x^{*}) x)_{i}> 0\). Because \((x_{R}^{*})^{2}\circ F_{R}(x^{*})>0\), we know that

$$ \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix} $$

is a nonzero vector, and there exists an index \(i\in R\cup C_{1}\cup C_{3}\) such that

$$ \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}_{i}\neq 0 $$

and

$$ \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}^{T}_{i}\left ( \nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix} \right )_{i}> 0. $$
(3.6)

At the same time

$$\begin{aligned} \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}^{T}_{i} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}_{i}&=\textstyle\begin{cases} ((x_{R}^{*})^{3}F_{R}^{3}(x^{*}) )_{i}>0 &\text{if } i\in R, \\ (-\mu ^{*}_{C_{1}}\cdot 0 )_{i}=0 & \text{if } i\in C_{1}, \\ (\mu ^{*}_{C_{3}}\nu ^{*}_{C_{3}} )_{i}\geq 0 & \text{if } i\in C_{3}, \end{cases}\displaystyle \\ &\geq 0. \end{aligned}$$
(3.7)

It follows from (3.6) and (3.7) that

$$\begin{aligned} & \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}^{T}_{i}\left (\nabla F \bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix} \right )_{i}\\ &\quad{}+ \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}^{T}_{i} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}_{i}>0, \end{aligned}$$

which contradicts (3.5).

Conversely, a global minimizer of the constrained optimization problem is obviously a KKT point of (3.1a)–(3.1c). □

Corollary 2

Suppose that the mapping F is a P-function, then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).

Proof

Since the mapping F is a P-function, the Jacobian \(F'(x^{*})\) of the mapping F is a P-matrix. □

4 Regular conditions

In this section, we give some sufficient (and necessary) conditions for a KKT point of (3.1a)–(3.1c) to be a solution of the NCP. We call these conditions regular conditions; they can be considered as a generalization of the previous results. First, we give the definitions of regular points and regular conditions.

Definition 3

A point \(x\in \mathbb{R}^{n}\) is called regular if, for every vector \(z\in \mathbb{R}^{n}\) for which \(z\leq 0\) does not hold and which satisfies

$$ z_{C}\leq 0,\qquad z_{R}>0, $$
(4.1)

there exists a vector \(y\in \mathbb{R}^{n}\) such that

$$ y_{C_{2}}\leq 0,\qquad y_{C_{3}}\leq 0,\qquad y_{R} \geq 0,\qquad y_{R}\neq 0 \quad\text{or}\quad y_{C_{3}}\neq 0, $$
(4.2)

and

$$ y^{T}\nabla F(x)z\geq 0. $$
(4.3)

Moreover, a point \(x\in \mathbb{R}^{n}\) is called strictly regular if, for every vector \(z\in \mathbb{R}^{n}\) for which \(z\leq 0\) does not hold and which satisfies

$$ z_{C}\leq 0,\qquad z_{R}>0, $$

there exists a vector \(y\in \mathbb{R}^{n}\) such that

$$ y_{C_{2}}\leq 0,\qquad y_{C_{3}}\leq 0, \qquad y_{R} \geq 0, $$
(4.4)

and

$$ y^{T}\nabla F(x)z> 0. $$
(4.5)

Theorem 3

Let \(x^{*}\in \mathbb{R}^{n}\) be a KKT point of (3.1a)–(3.1c). Then \(x^{*}\) solves the NCP if and only if \(x^{*}\) is regular (or strictly regular).

Proof

If \(x^{*}\in \mathbb{R}^{n}\) is a solution of the NCP, then \(R=\emptyset \) and \(z=z_{C}\leq 0\), so no vector z satisfies (4.1); hence \(x^{*}\) is vacuously regular (and strictly regular).

Suppose that \(x^{*}\) is regular and a KKT point of (3.1a)–(3.1c). From Lemma 7 we obtain that

$$ \nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{1}^{*})^{2}F_{1}(x^{*})-\mu ^{*}_{1} \\ \vdots \\ (x_{n}^{*})^{2}F_{n}(x^{*})-\mu ^{*}_{n} \end{pmatrix} + \begin{pmatrix} x_{1}^{*}F_{1}^{2}(x^{*})-\nu ^{*}_{1} \\ \vdots \\ x_{n}^{*}F_{n}^{2}(x^{*})-\nu ^{*}_{n} \end{pmatrix}=0. $$

Without loss of generality we have that

$$ \nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix} + \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=0. $$

Consequently, we have

$$ y^{T}\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix} +y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=0 $$
(4.6)

for any \(y\in \mathbb{R}^{n}\). Assume that \(x^{*}\) is not a solution of the NCP. Then \(R\neq \emptyset \), and we choose

$$ z= \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}, $$

which satisfies (4.1). Since \(x^{*}\) is regular, there exists a vector \(y\in \mathbb{R}^{n}\) such that (4.2) and (4.3) hold. With some computation, we obtain that

$$ y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=y^{T}_{R} \bigl(x_{R}^{*}\circ F_{R}^{2} \bigl(x^{*}\bigr) \bigr)+y^{T}_{C_{2}}\bigl(- \nu ^{*}_{C_{2}}\bigr)+y^{T}_{C_{1}}\cdot 0+y^{T}_{C_{3}}\bigl(-\nu ^{*}_{C_{3}}\bigr)>0 $$
(4.7)

and

$$ y^{T}\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}=y^{T} \nabla F\bigl(x^{*}\bigr)z\geq 0, $$

which contradicts (4.6). Hence, \(x^{*}\) must be a solution of the NCP.

Suppose that \(x^{*}\) is strictly regular and a KKT point of (3.1a)–(3.1c). Similar to the above proof, we obtain that

$$ y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=y^{T}_{R} \bigl(x_{R}^{*}\circ F_{R}^{2} \bigl(x^{*}\bigr) \bigr)+y^{T}_{C_{2}}\bigl(- \nu ^{*}_{C_{2}}\bigr)+y^{T}_{C_{1}}\cdot 0+y^{T}_{C_{3}}\bigl(-\nu ^{*}_{C_{3}}\bigr) \geq 0 $$
(4.8)

and

$$ y^{T}\nabla F\bigl(x^{*}\bigr) \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}=y^{T} \nabla F\bigl(x^{*}\bigr)z> 0, $$

which contradicts (4.6). Hence, \(x^{*}\) must be a solution of the NCP. □

Remark 3

Theorem 1 is a corollary of Theorem 3. Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). Since \(\nabla F(x^{*})\) is positive semidefinite, for the nonzero vector

$$z= \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}, $$

\(z^{T}\nabla F(x^{*})z\geq 0\) holds. Choose \(y=z\), and it is easy to see that y satisfies (4.2), (4.7), and (4.3), i.e.,

$$\begin{aligned} &y_{C_{2}}\leq 0,\qquad y_{C_{3}}\leq 0,\qquad y_{R}\geq 0,\qquad y_{R}\neq 0, \\ &y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=y^{T}_{R} \bigl(x_{R}^{*}\circ F_{R}^{2} \bigl(x^{*}\bigr) \bigr)+y^{T}_{C_{2}}\bigl(- \nu ^{*}_{C_{2}}\bigr)+y^{T}_{C_{1}}\cdot 0+y^{T}_{C_{3}}\bigl(-\nu ^{*}_{C_{3}}\bigr)>0, \end{aligned}$$

and \(y^{T}\nabla F(x^{*})z=z^{T}\nabla F(x^{*})z\geq 0\). By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.

Remark 4

Theorem 2 is a corollary of Theorem 3. Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). Since the Jacobian \(F'(x^{*})\) of the mapping F is a P-matrix, we have that, for the nonzero vector

$$z= \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}, $$

there exists an index \(i\in R\cup C_{1}\cup C_{3}\) such that \(z_{i}\neq 0\) and \(z_{i}(\nabla F(x^{*})z)_{i}>0\). If \(i\in C_{1}\cup C_{3}\), then \(z_{i}< 0\); otherwise \(z_{i}> 0\). Choose y to be the vector whose components are all 0 except for the ith component, which is equal to \(z_{i}\). It is easy to see that y satisfies (4.4), (4.8), and (4.5), i.e.,

$$\begin{aligned} &y_{C_{2}}\leq 0,\qquad y_{C_{3}}\leq 0,\qquad y_{R}\geq 0, \\ &y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=y^{T}_{R} \bigl(x_{R}^{*}\circ F_{R}^{2} \bigl(x^{*}\bigr) \bigr)+y^{T}_{C_{2}}\bigl(- \nu ^{*}_{C_{2}}\bigr)+y^{T}_{C_{1}}\cdot 0+y^{T}_{C_{3}}\bigl(-\nu ^{*}_{C_{3}}\bigr) \geq 0, \end{aligned}$$

and \(y^{T}\nabla F(x^{*})z=z_{i}(\nabla F(x^{*})z)_{i}> 0\). By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.

Corollary 3

Suppose that, for every nonzero vector

$$ t= \begin{pmatrix} t_{R} \\ 0 \\ t_{C_{1}} \\ t_{C_{3}} \end{pmatrix} $$

with \(t_{R\cup C_{3}}\neq 0\), there exists an index \(i\in R\cup C_{3}\) such that \(t_{i}\neq 0\) and \(t_{i}(\nabla F(x^{*})t)_{i}\geq 0\). Then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).

Proof

Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). By assumptions we have that, for the nonzero vector

$$z= \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ -\mu ^{*}_{C_{1}} \\ -\mu ^{*}_{C_{3}} \end{pmatrix}, $$

there exists an index \(i\in R\cup C_{3}\) such that \(z_{i}\neq 0\) and \(z_{i}(\nabla F(x^{*})z)_{i}\geq 0\). If \(i\in C_{3}\), then \(z_{i}< 0\); otherwise \(z_{i}> 0\). Choose y to be the vector whose components are all 0 except for the ith component, which is equal to \(z_{i}\). It is easy to see that y satisfies (4.2), (4.7), and (4.3), i.e.,

$$\begin{aligned} &y_{C_{2}}\leq 0,\qquad y_{C_{3}}\leq 0,\qquad y_{R}\geq 0,\qquad y_{R}\neq 0 \quad\text{or}\quad y_{C_{3}}\neq 0, \\ &y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}=y^{T}_{R} \bigl(x_{R}^{*}\circ F_{R}^{2} \bigl(x^{*}\bigr) \bigr)+y^{T}_{C_{2}}\bigl(- \nu ^{*}_{C_{2}}\bigr)+y^{T}_{C_{1}}\cdot 0+y^{T}_{C_{3}}\bigl(-\nu ^{*}_{C_{3}}\bigr) \\ &\phantom{y^{T} \begin{pmatrix} x_{R}^{*}\circ F_{R}^{2}(x^{*}) \\ -\nu ^{*}_{C_{2}} \\ 0 \\ -\nu ^{*}_{C_{3}} \end{pmatrix}}\geq y^{T}_{R} \bigl(x_{R}^{*}\circ F_{R}^{2}\bigl(x^{*}\bigr) \bigr)+y^{T}_{C_{3}} \bigl(- \nu ^{*}_{C_{3}}\bigr)> 0, \end{aligned}$$

and \(y^{T}\nabla F(x^{*})z=z_{i}(\nabla F(x^{*})z)_{i}\geq 0\). By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.

If \(x^{*}\) solves NCP (1.1), it is a KKT point of (3.1a)–(3.1c). □

Corollary 4

Suppose that the Jacobian \(F'(x^{*})\) of the mapping F is a \(P_{0}\)-matrix and \(\mu ^{*}_{C_{1}}=0\), then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).

Proof

Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). Since the Jacobian \(F'(x^{*})\) of the mapping F is a \(P_{0}\)-matrix and \(\mu ^{*}_{C_{1}}=0\), we have that, for the nonzero vector

$$z= \begin{pmatrix} (x_{R}^{*})^{2}\circ F_{R}(x^{*}) \\ 0 \\ 0 \\ -\mu ^{*}_{C_{3}} \end{pmatrix}, $$

there exists an index \(i\in R\cup C_{3}\) such that \(z_{i}\neq 0\) and \(z_{i}(\nabla F(x^{*})z)_{i}\geq 0\). If \(i\in C_{3}\), then \(z_{i}< 0\); otherwise \(z_{i}> 0\). The rest of the proof is the same as that of Corollary 3. By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.

If \(x^{*}\) solves NCP (1.1), it is a KKT point of (3.1a)–(3.1c). □

5 Boundedness of level sets

In this section, we prove that the level sets of the objective function of (3.1a)–(3.1c) are bounded for a strongly monotone function or a uniform P-function.

Theorem 4

Suppose that the mapping F is a strongly monotone function or a uniform P-function. Let \(x_{0}\) be any given vector, and let \(L(x_{0})=\{x\in \mathbb{R}^{n} | \Phi (x)\leq \Phi (x_{0})\}\) be the corresponding level set. Then \(L(x_{0})\) is bounded.

Proof

Assume, to the contrary, that there is a sequence \(\{x_{k}\}\subseteq L(x_{0})\) such that \(\lim_{k\rightarrow \infty }\|x_{k}\|=\infty \). Define the index set

$$ J=\bigl\{ i\in I \mid \bigl\{ x_{i}^{k}\bigr\} \text{ is unbounded}\bigr\} \neq \emptyset. $$

Let

$$y_{i}^{k}= \textstyle\begin{cases} 0 & \text{if } i\in J, \\ x_{i}^{k} & \text{if } i\notin J. \end{cases} $$

There are two cases to be considered.

(1) If F is a strongly monotone function, we get

$$\begin{aligned} \omega \sum_{i\in J}\bigl(x_{i}^{k} \bigr)^{2}&=\omega \Vert x_{k}-y_{k} \Vert ^{2}\leq \sum_{i=1}^{n} \bigl(x_{i}^{k}-y_{i}^{k}\bigr) \bigl(F_{i}(x_{k})-F_{i}(y_{k})\bigr) =\sum_{i \in J}x_{i}^{k} \bigl(F_{i}(x_{k})-F_{i}(y_{k})\bigr) \\ &\leq \sqrt{\sum_{i\in J} \bigl(x_{i}^{k}\bigr)^{2}}\sum _{i\in J} \bigl\vert F_{i}(x_{k})-F_{i}(y_{k}) \bigr\vert . \end{aligned}$$

Since \(\sum_{i\in J}(x_{i}^{k})^{2}\neq 0\), we obtain

$$ \omega \sqrt{\sum_{i\in J} \bigl(x_{i}^{k}\bigr)^{2}}\leq \sum _{i\in J} \bigl\vert F_{i}(x_{k})-F_{i}(y_{k}) \bigr\vert . $$

Due to the boundedness of the sequence \(\{y_{k}\}\) and the continuity of \(F_{i}\) (\(i\in J\)), the sequences \(\{F_{i}(y_{k})\}\) remain bounded. Since the left-hand side tends to infinity, there exists an index \(i_{0}\in J\) such that \(\lim_{k\rightarrow \infty }|F_{i_{0}}(x_{k})|=\infty \) (along a subsequence). It follows from \(\lim_{k\rightarrow \infty }|x_{i_{0}}^{k}|=\infty \) that

$$ \lim_{k\rightarrow \infty }\bigl(F_{i_{0}}(x_{k})x_{i_{0}}^{k} \bigr)^{2}= \infty. $$

However, \(\frac{1}{2}(F_{i_{0}}(x_{k})x_{i_{0}}^{k})^{2}\leq \Phi (x_{k})\leq \Phi (x_{0})\), which is a contradiction.

(2) If F is a uniform P-function, there exists an index \(i_{0}\in \{1,2,\ldots,n\}\) such that

$$\begin{aligned} \omega \sum_{i\in J}\bigl(x_{i}^{k} \bigr)^{2}&=\omega \Vert x_{k}-y_{k} \Vert ^{2}\leq \bigl(x_{i_{0}}^{k}-y_{i_{0}}^{k} \bigr) \bigl(F_{i_{0}}(x_{k})-F_{i_{0}}(y_{k}) \bigr)\\ &=\textstyle\begin{cases} x_{i_{0}}^{k}(F_{i_{0}}(x_{k})-F_{i_{0}}(y_{k})) & \text{if } i_{0} \in J, \\ 0 & \text{if } i_{0}\notin J. \end{cases}\displaystyle \end{aligned}$$

The second case is impossible because the left-hand side of the inequality is positive. Thus,

$$\begin{aligned} \omega \sum_{i\in J}\bigl(x_{i}^{k} \bigr)^{2}&\leq x_{i_{0}}^{k}\bigl(F_{i_{0}}(x_{k})-F_{i_{0}}(y_{k}) \bigr),\quad i_{0}\in J \\ &\leq \bigl\vert x_{i_{0}}^{k} \bigr\vert \bigl\vert \bigl(F_{i_{0}}(x_{k})-F_{i_{0}}(y_{k})\bigr) \bigr\vert ,\quad i_{0} \in J \\ &\leq \sqrt{\sum_{i\in J} \bigl(x_{i}^{k}\bigr)^{2}}\cdot \bigl\vert \bigl(F_{i_{0}}(x_{k})-F_{i_{0}}(y_{k})\bigr) \bigr\vert ,\quad i_{0}\in J. \end{aligned}$$

Since \(\sum_{i\in J}(x_{i}^{k})^{2}\neq 0\), we obtain

$$ \omega \sqrt{\sum_{i\in J} \bigl(x_{i}^{k}\bigr)^{2}}\leq \bigl\vert \bigl(F_{i_{0}}(x_{k})-F_{i_{0}}(y_{k})\bigr) \bigr\vert ,\quad i_{0}\in J. $$

Similar to the proof of case (1), we get that \(\lim_{k\rightarrow \infty }|F_{i_{0}}(x_{k})|=\infty, i_{0}\in J\), i.e.,

$$ \lim_{k\rightarrow \infty }\bigl(F_{i_{0}}(x_{k})x_{i_{0}}^{k} \bigr)^{2}= \infty, $$

which contradicts \(\frac{1}{2}(F_{i_{0}}(x_{k})x_{i_{0}}^{k})^{2}\leq \Phi (x_{k})\leq \Phi (x_{0})\). □

Remark 5

Theorem 4 implies that any sequence remaining in \(L(x_{0})\) has at least one accumulation point.

6 Some examples

In this section, we present several examples which are tested by a filter algorithm to verify the previous results. We modify the globally convergent filter algorithm of [16] to solve the NCP. Consider the following optimization problem:

$$\begin{aligned} &\min \quad \Phi (x)=\frac{1}{2}\sum_{i=1}^{n} \bigl[F_{i}(x)x_{i} \bigr]^{2} \end{aligned}$$
(6.1a)
$$\begin{aligned} &\text{subject to}\quad -F_{j}(x)\leq 0,\quad j\in \{1,2,\ldots,n\}, \end{aligned}$$
(6.1b)
$$\begin{aligned} &\phantom{\text{subject to}\quad } -x_{j}\leq 0, \quad j\in \{1,2,\ldots,n\}. \end{aligned}$$
(6.1c)

Define \(c(x)=[-F_{1}(x),\ldots,-F_{n}(x),-x_{1},\ldots,-x_{n}]^{T}\). There are two merit functions in the new algorithm:

$$\begin{aligned} &\Phi (x)=\frac{1}{2}\sum_{i=1}^{n} \bigl[F_{i}(x)x_{i} \bigr]^{2}, \end{aligned}$$
(6.2a)
$$\begin{aligned} &\theta (x)= \bigl\Vert \max \bigl\{ c(x),0\bigr\} \bigr\Vert _{1}=\sum_{i=1}^{n}\bigl(\max \bigl\{ -F_{i}(x),0 \bigr\} +\max \{-x_{i},0\}\bigr). \end{aligned}$$
(6.2b)

In order to prevent the algorithm from cycling, the algorithm maintains a filter

$$ \mathcal{F}=\bigl\{ (\theta,\Phi )\in \mathbb{R}^{2}:\theta \geq \theta (x_{0}) \bigr\} . $$

The search direction \(d_{k}\) is obtained by the QP subproblem:

$$\begin{aligned} &\min \quad \nabla \Phi _{k}^{T}d+\frac{1}{2}d^{T}B_{k}d \end{aligned}$$
(6.3a)
$$\begin{aligned} &\text{subject to}\quad -F(x_{k})-J'(x_{k})d\leq 0, \end{aligned}$$
(6.3b)
$$\begin{aligned} &\phantom{\text{subject to}\quad} -x_{k}-d\leq 0, \end{aligned}$$
(6.3c)

where \(J'(x_{k})\) denotes the Jacobian \(F'(x_{k})\) of F, and \(B_{k}\) denotes an approximation of the Hessian \(\nabla _{xx}^{2}P(x_{k})\) of the Lagrangian function

$$ P(x)=\Phi (x)-\mu ^{T}F(x)-\nu ^{T}x. $$
(6.4)
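For concreteness, the following Python sketch sets up subproblem (6.3a)–(6.3c). The paper solves QP(\(x_{k}\)) with Matlab's quadprog; here SciPy's SLSQP solver is substituted as a stand-in, since it accepts the same linearized inequality constraints. This is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def qp_subproblem(x_k, g_k, B_k, F_k, J_k):
    """Solve min g_k^T d + 0.5 d^T B_k d
       s.t. F(x_k) + F'(x_k) d >= 0 and x_k + d >= 0  (cf. (6.3a)-(6.3c))."""
    obj = lambda d: g_k @ d + 0.5 * d @ B_k @ d
    jac = lambda d: g_k + B_k @ d
    cons = [{'type': 'ineq', 'fun': lambda d: F_k + J_k @ d},
            {'type': 'ineq', 'fun': lambda d: x_k + d}]
    res = minimize(obj, np.zeros_like(x_k), jac=jac,
                   method='SLSQP', constraints=cons)
    return res.x  # search direction d_k
```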

After a search direction \(d_{k}\) has been computed, a step size \(\alpha _{k}\) is determined in order to obtain the next iterate \(x_{k+1}=x_{k}+\alpha _{k}d_{k}\). We say that a trial point \(x_{k}(\alpha _{k,l})=x_{k}+\alpha _{k,l}d_{k}\) is acceptable to the filter if and only if

$$\begin{aligned} &\theta \bigl(x_{k}(\alpha _{k,l})\bigr)\leq \theta (x_{j})-\phi (\alpha _{k,l}) \gamma _{\theta }\theta (x_{j})\quad \text{or} \end{aligned}$$
(6.5a)
$$\begin{aligned} &\Phi \bigl(x_{k}(\alpha _{k,l})\bigr)\leq \Phi (x_{j})- \phi (\alpha _{k,l})\gamma _{\Phi } \theta (x_{j}) \end{aligned}$$
(6.5b)

for all \((\theta (x_{j}),\Phi (x_{j}))\in \mathcal{F}_{k}\), where \(\phi (\alpha )\) is a dwindling function. We say that a trial point \(x_{k}(\alpha _{k,l})\) provides sufficient reduction if

$$\begin{aligned} &\theta \bigl(x_{k}(\alpha _{k,l})\bigr)\leq \theta (x_{k})-\phi (\alpha _{k,l}) \gamma _{\theta }\theta (x_{k}) \quad\text{or} \end{aligned}$$
(6.6a)
$$\begin{aligned} & \Phi \bigl(x_{k}(\alpha _{k,l})\bigr)\leq \Phi (x_{k})- \phi (\alpha _{k,l})\gamma _{\Phi } \theta (x_{k}), \end{aligned}$$
(6.6b)

where \(\gamma _{\theta }\), \(\gamma _{\Phi }\in (0,1)\). But this could result in convergence to a feasible but non-optimal point. In order to prevent this, we change to a different sufficient reduction criterion

$$ m_{k}(1)< 0 \quad\text{and} \quad\bigl(-m_{k}(\alpha _{k,l})\bigr)^{s_{\Phi }}(\alpha _{k,l})^{1-s_{ \Phi }}> \delta \theta (x_{k})^{s_{\theta }},$$
(6.7)

where

$$ m_{k}(\alpha )=\alpha \nabla \Phi ^{T}_{k}d_{k}- \alpha \bigl(\mu ^{T}_{k}F(x_{k})+ \nu ^{T}_{k}x_{k}\bigr)+\alpha \bigl(\bigl(\nabla \mu _{k}^{T}d_{k}\bigr)^{T}F(x_{k})+ \bigl( \nabla \nu _{k}^{T}d_{k}\bigr)^{T}x_{k} \bigr), $$

\(\delta >0\), \(s_{\Phi }>1\), \(s_{\theta }\geq 1\). If condition (6.7) holds, the trial point \(x_{k}(\alpha _{k,l})\) is required to satisfy the Armijo condition

$$ \Phi \bigl(x_{k}(\alpha _{k,l})\bigr)\leq \Phi (x_{k})+\eta _{\Phi }m_{k}( \alpha _{k,l}),$$
(6.8)

where \(\eta _{\Phi }\in (0,\frac{1}{2})\). If condition (6.7) for \(\alpha _{k}\) does not hold, the filter is augmented for the new iteration using the update formula

$$\begin{aligned} \mathcal{F}_{k+1}={}&\mathcal{F}_{k}\cup \bigl\{ (\theta,\Phi )\in \mathbb{R}^{2}: \theta \geq \theta (x_{k})-\phi (\alpha _{k})\gamma _{\theta }\theta (x_{k}), \\ & \Phi \geq \Phi (x_{k})-\phi (\alpha _{k}) \gamma _{\Phi }\theta (x_{k}) \bigr\} . \end{aligned}$$
(6.9)
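A simplified sketch of the acceptability tests (6.5a)–(6.5b) and the filter augmentation (6.9) is given below (our illustration). For simplicity the filter stores the pairs \((\theta (x_{j}),\Phi (x_{j}))\) and the margins are applied inside the test, rather than storing the forbidden regions of (6.9) explicitly.

```python
phi = lambda alpha: alpha ** (4.0 / 3.0)  # dwindling function used in Sect. 6

def acceptable_to_filter(theta_t, phi_t, filter_pairs, phi_alpha,
                         gamma_theta=0.5, gamma_phi=0.5):
    # (6.5a)-(6.5b): improve theta or Phi, with margin phi_alpha = phi(alpha),
    # against every stored pair (theta_j, Phi_j).
    return all(theta_t <= th_j - phi_alpha * gamma_theta * th_j or
               phi_t <= ph_j - phi_alpha * gamma_phi * th_j
               for th_j, ph_j in filter_pairs)

def augment_filter(filter_pairs, theta_k, phi_k):
    # Simplified version of (6.9): remember the current pair so that later
    # trial points must improve on it.
    filter_pairs.append((theta_k, phi_k))
```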

We now formally state the new filter algorithm for the NCP.

Algorithm 1

Given: Starting point \(x_{0}\); \(\mathcal{F}_{0}=\{(\theta,\Phi )\in \mathbb{R}^{2}:\theta >\theta (x_{0}) \}\); \(\gamma _{\theta }, \gamma _{\Phi }\in (0,1)\); \(\delta >0\); \(s_{\Phi }>1\); \(s_{\theta }\geq 1\); \(\eta _{\Phi }\in (0,\frac{1}{2})\); \(0<\tau _{1}\leq \tau _{2}<1\); \(\phi (\alpha )\); \(\epsilon >0\).

1. Compute \(\Phi (x_{k})\), \(\nabla \Phi (x_{k})\), \(F(x_{k})\), \(J'(x_{k})\), \(\theta (x_{k})\).

2. Compute \(d_{k}\) from the QP subproblem QP(\(x_{k}\)).

3. If \(\|d_{k}\|+\theta (x_{k})\leq \epsilon \), stop.

4. Line search.

   4.1. Set \(\alpha _{k,0}=1\) and \(l\leftarrow 0\). Compute \(x_{k}(\alpha _{k,l})=x_{k}+\alpha _{k,l}d_{k}\).

   4.2. If \(x_{k}(\alpha _{k,l})\in \mathcal{F}_{k}\), go to Step 4.4.

   4.3. Check sufficient decrease with respect to the current iterate.

      4.3.1. Case 1, (6.7) holds: if (6.8) holds, set \(x_{k+1}=x_{k}(\alpha _{k,l})\), \(\mathcal{F}_{k+1}=\mathcal{F}_{k}\), and go to Step 5. Otherwise, go to Step 4.4.

      4.3.2. Case 2, (6.7) is not satisfied: if (6.6a)–(6.6b) holds, set \(x_{k+1}=x_{k}(\alpha _{k,l})\), augment the filter using (6.9), and go to Step 5. Otherwise, go to Step 4.4.

   4.4. Choose \(\alpha _{k,l+1}\in [\tau _{1}\alpha _{k,l},\tau _{2}\alpha _{k,l}]\), set \(l\leftarrow l+1\), and go back to Step 4.2.

5. Update \(B_{k}\) by a BFGS update, set \(k\leftarrow k+1\), and go back to Step 1.

Remark 6

The global convergence analysis of Algorithm 1 is similar to that in [16]; see [16] for details.

In the following, some numerical results are given, obtained on an HP i5 personal computer with 4 GB of memory. The selected parameter values are: \(\epsilon =10^{-6}\), \(\gamma _{\theta }=0.5\), \(\gamma _{\Phi }=0.5\), \(\delta =1\), \(s_{\Phi }=3.2\), \(s_{\theta }=1.5\), \(\eta _{\Phi }=0.3\), \(\tau _{1}=\tau _{2}=0.5\), and \(\phi (\alpha )=\alpha ^{\frac{4}{3}}\). The computation terminates when the stopping criterion \(\|d_{k}\|+\theta (x_{k})\leq \epsilon \) is satisfied. We use the Matlab function quadprog to solve the QP(\(x_{k}\)) subproblem. NIT and NF stand for the numbers of iterations and function evaluations, respectively. Gap stands for the absolute value of \(x^{T}F(x)\) at the final iterate.

Example 1

([14, 27])

Let \(F(x)=Mx+q\), where

$$ M= \begin{pmatrix} 4 & -1 & 0 & \cdots & 0 \\ -1 & 4 & -1 & \cdots & 0 \\ 0 & -1 & 4 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & -1 \\ 0 & 0 & 0 & -1 & 4 \end{pmatrix},\qquad q=(-1,-1,\ldots,-1)^{T}. $$

The starting point is \(x_{0}=(0,0,\ldots,0)^{T}\). The results of Example 1 are given in Table 1.

Table 1 Numerical results of Example 1
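For reference, a short Python sketch (our illustration) builds the data of Example 1 and confirms that M is positive definite; its eigenvalues \(4-2\cos (k\pi /(n+1))\) lie in \((2,6)\), so F is strongly monotone by Lemma 1(3).

```python
import numpy as np

n = 8
M = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal (-1, 4, -1)
q = -np.ones(n)

# The smallest eigenvalue is bounded away from 0, uniformly in n:
print(np.linalg.eigvalsh(M).min() > 2)  # True
```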

Algorithm 1 can compete with Nie’s filter algorithm [27]. Because the Jacobian \(F'(x)=M\) of \(F(x)\) is positive definite, F is strictly monotone. By Theorem 1 or Corollary 1, \(x^{*}\) is a solution of the NCP (a regular point).

Example 2

([14, 31])

Let \(F(x)=Mx+q\), where

$$ M=\text{diag} \biggl(\frac{1}{n},\frac{2}{n},\ldots,1 \biggr),\qquad q=(-1,-1, \ldots,-1)^{T}. $$

The starting point is \(x_{0}=(0,0,\ldots,0)^{T}\). The results of Example 2 are given in Table 2.

Table 2 Numerical results of Example 2

Algorithm 1 can compete with Su's filter algorithm [31]. Because the Jacobian \(F'(x)=M\) of \(F(x)\) is a diagonal matrix with positive entries, it is positive definite, so F is strictly monotone and hence monotone. By Theorem 1 or Corollary 1, \(x^{*}\) is a solution of the NCP (a regular point). Note that the smallest diagonal entry \(1/n\) tends to 0, so the condition number of M grows with the dimension n, but the numerical results are not noticeably affected.

Example 3

(Murty problem [2])

Let \(F(x)=Mx+q\), where

$$ M= \begin{pmatrix} 1 & 2 & 2 & \cdots & 2 \\ 0 & 1 & 2 & \cdots & 2 \\ 0 & 0 & 1 & \ddots & 2 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix},\qquad q=(-1,-1,\ldots,-1)^{T}. $$

The starting point is \(x_{0}=(1,1,\ldots,1)^{T}\) and the solution is \(x^{*}=(0,0,\ldots,1)^{T}\). The results of Example 3 are given in Table 3.

Table 3 Numerical results of Example 3
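The claimed solution is easy to verify directly: a Python sketch (our illustration) checks the three conditions of (1.1) for \(x^{*}=(0,0,\ldots,1)^{T}\), for which \(F(x^{*})=(1,\ldots,1,0)^{T}\).

```python
import numpy as np

n = 8
M = np.triu(2 * np.ones((n, n))) - np.eye(n)  # Murty matrix: 1 on diag, 2 above
q = -np.ones(n)

x_star = np.zeros(n)
x_star[-1] = 1.0
Fx = M @ x_star + q                           # equals (1, ..., 1, 0)
print(np.all(x_star >= 0), np.all(Fx >= 0), x_star @ Fx == 0.0)  # True True True
```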

Because the Jacobian \(F'(x)=M\) of \(F(x)\) is a P-matrix, F is a P-function. By Theorem 2 or Corollary 2, \(x^{*}\) is a solution of the NCP (a regular point). Data and images on the convergence rate of Algorithm 1 (Example 3, \(n=8\)) are shown in Table 4 and Fig. 1, where data1 and data2 denote \(\|x^{k}-x^{*}\|\) and \(\frac{\|x^{k+1}-x^{*}\|}{\|x^{k}-x^{*}\|}\), respectively. From Table 4 and Fig. 1 we see that

$$ \lim_{k\rightarrow \infty }\frac{ \Vert x^{k+1}-x^{*} \Vert }{ \Vert x^{k}-x^{*} \Vert }=0, $$

which means that Algorithm 1 converges Q-superlinearly.

Figure 1 The convergence rate of Algorithm 1

Table 4 Experimental results of Example 3

Example 4

([26, 27])

Let

$$\begin{aligned} &F_{1}(x)=3x_{1}^{2}+2x_{1}x_{2}+2x_{2}^{2}+x_{3}+3x_{4}-6, \\ &F_{2}(x)=2x_{1}^{2}+x_{1}+x_{2}^{2}+10x_{3}+2x_{4}-2, \\ &F_{3}(x)=3x_{1}^{2}+x_{1}x_{2}+2x_{2}^{2}+2x_{3}+9x_{4}-9, \\ &F_{4}(x)=x_{1}^{2}+3x_{2}^{2}+2x_{3}+3x_{4}-3. \end{aligned}$$

This example has one degenerate solution \((\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\) and one nondegenerate solution \((1,0,3,0)\). The Jacobian of F is

$$ F'(x)= \begin{pmatrix} 6x_{1}+2x_{2} & 2x_{1}+4x_{2} & 1 & 3 \\ 4x_{1}+1 & 2x_{2} & 10 & 2 \\ 6x_{1}+x_{2} & x_{1}+4x_{2} & 2 & 9 \\ 2x_{1} & 6x_{2} & 2 & 3 \end{pmatrix}. $$

This example is difficult for simple Newton-type methods, since the LCP formed by linearizing F around \(x=0\) has no solution. The results of Example 4 are given in Table 5.

Table 5 Numerical results of Example 4
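Before discussing the individual cases, the following Python sketch (our illustration) verifies that \((1,0,3,0)\) is a nondegenerate solution of the NCP of Example 4.

```python
import numpy as np

def F(x):
    x1, x2, x3, x4 = x
    return np.array([
        3*x1**2 + 2*x1*x2 + 2*x2**2 + x3 + 3*x4 - 6,
        2*x1**2 + x1 + x2**2 + 10*x3 + 2*x4 - 2,
        3*x1**2 + x1*x2 + 2*x2**2 + 2*x3 + 9*x4 - 9,
        x1**2 + 3*x2**2 + 2*x3 + 3*x4 - 3])

x_star = np.array([1.0, 0.0, 3.0, 0.0])
Fx = F(x_star)                                 # equals (0, 31, 0, 4)
print(np.all(x_star >= 0), np.all(Fx >= 0), x_star @ Fx)  # True True 0.0
print(np.all(x_star + Fx > 0))                 # True: nondegenerate solution
```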

There are three cases:

(1) The sequence \(\{x_{k}\}\) generated by Algorithm 1 converges to a KKT point \((0,0,0,2)\) of (3.1a)–(3.1c) which is not a solution of the NCP. If \(x^{*}=(0,0,0,2)\), we have that

$$ \nabla F\bigl(x^{*}\bigr)=F'\bigl(x^{*} \bigr)^{T}= \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 8 & 0 \\ 1 & 10 & 2 & 2 \\ 3 & 2 & 9 & 3 \end{pmatrix}, $$

whose eigenvalues are 11.502, 0.12753, 1.9147, and −8.5447. This indicates that it is not a positive semidefinite matrix. Because

$$D_{2,3}= \begin{vmatrix} 0 & 8 \\ 10 & 2 \end{vmatrix}=-80< 0, $$

\(F'(x^{*})\) is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function.

(2) The sequence \(\{x_{k}\}\) generated by Algorithm 1 converges to a degenerate solution \((\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\) of the NCP. If \(x^{*}=(\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\), we have that

$$ \nabla F\bigl(x^{*}\bigr)=F'\bigl(x^{*} \bigr)^{T}= \begin{pmatrix} 3\sqrt{6} &2\sqrt{6}+1 &3\sqrt{6} &\sqrt{6} \\ \sqrt{6} &0 &\sqrt{6}/2 & 0 \\ 1 & 10 & 2 & 2 \\ 3 & 2 & 9 & 3 \end{pmatrix}, $$

For \(d=(1,-1,0,0)^{T}\) we have \(d^{T}\nabla F(x^{*})d=3\sqrt{6}-(2\sqrt{6}+1)-\sqrt{6}=-1<0\), so \(\nabla F(x^{*})\) is not a positive semidefinite matrix. Because

$$D_{1,2}= \begin{vmatrix} 3\sqrt{6} &2\sqrt{6}+1 \\ \sqrt{6} &0 \end{vmatrix}=-12-\sqrt{6}< 0, $$

it is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function. But \(x^{*}\) is a solution of the NCP (a regular point).

(3) The sequence \(\{x_{k}\}\) generated by Algorithm 1 converges to a nondegenerate solution \((1,0,3,0)\) of the NCP. If \(x^{*}=(1,0,3,0)\), we have that

$$ \nabla F\bigl(x^{*}\bigr)=F'\bigl(x^{*} \bigr)^{T}= \begin{pmatrix} 6 & 5 & 6 & 2 \\ 2 & 0 & 1 & 0 \\ 1 & 10 & 2 & 2 \\ 3 & 2 & 9 & 3 \end{pmatrix}, $$

whose eigenvalues are 11.835, −2.6848, 0.84972, and 1. This indicates that it is not a positive semidefinite matrix. Because

$$D_{1,2}= \begin{vmatrix} 6 & 5 \\ 2 & 0 \end{vmatrix}=-10< 0, $$

it is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function. But \(x^{*}\) is a solution of the NCP (a regular point).

We note that most of the generated sequences converge to the degenerate solution. Although the Jacobian of F is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function in the last two cases, the points \(x^{*}\) are still solutions of the NCP (regular points).

Example 5

(Modified Mathiesen problem [2])

Let

$$\begin{aligned} &F_{1}(x)=-x_{2}+x_{3}+x_{4}, \\ &F_{2}(x)=x_{1}-(4.5x_{3}+2.7x_{4})/(x_{2}+1), \\ &F_{3}(x)=5-x_{1}-(0.5x_{3}+0.3x_{4})/(x_{3}+1), \\ &F_{4}(x)=3-x_{1}. \end{aligned}$$

This example has infinitely many solutions \(x^{*}=(\varrho,0,0,0)\), where \(\varrho \in [0,3]\). For \(\varrho =0\) or 3, the solutions are degenerate, and for \(\varrho \in (0,3)\) nondegenerate. The transpose Jacobian of F at \(x^{*}\) is

$$ \nabla F\bigl(x^{*}\bigr)= \begin{pmatrix} 0 & 1 & -1 & -1 \\ -1 & \frac{4.5x_{3}+2.7x_{4}}{(x_{2}+1)^{2}} & 0 & 0 \\ 1 & -\frac{4.5}{x_{2}+1} & -\frac{0.5-0.3x_{4}}{(x_{3}+1)^{2}} & 0 \\ 1 & -\frac{2.7}{x_{2}+1} & -\frac{0.3}{x_{3}+1} & 0 \end{pmatrix}_{x=x^{*}}= \begin{pmatrix} 0 & 1 & -1 & -1 \\ -1 & 0 & 0 & 0 \\ 1 & -4.5 & -0.5 & 0 \\ 1 & -2.7 & -0.3 & 0 \end{pmatrix}, $$

whose eigenvalues are \(0.56541+2.1271i\), \(0.56541-2.1271i\), −1.6308, and \(-2.5015\times 10^{-11}\) (numerically zero). This indicates that it is not a positive semidefinite matrix. Because

$$D_{1,2,3}= \begin{vmatrix} 0 & 1 &-1 \\ -1 & 0 & 0 \\ 1 &-4.5 &-0.5 \end{vmatrix}=-1\times (-1)^{2+1} \begin{vmatrix} 1 &-1 \\ -4.5 &-0.5 \end{vmatrix}=-5< 0, $$

it is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function. But \(x^{*}\) is a solution of the NCP (a regular point). The results of Example 5 are given in Table 6.

Table 6 Experimental results of Example 5

From Table 6 we see that Algorithm 1 performs better than Chen's algorithm [2]. Different algorithms with the same starting point converge to different solutions. The advantage of the filter method is that it uses two merit functions, which relaxes the requirements on trial points and makes it easier to accept superlinear steps.

Remark 7

Examples 4 and 5 show that there are further cases of the mapping F for which a KKT point of (3.1a)–(3.1c) is a solution of the NCP. Dedicated examples for Corollaries 3 and 4 are hard to construct, because the experimental outcome depends on both the Jacobian \(F'(x^{*})\) and the multipliers. Examples 4 and 5 may in fact be instances of Corollaries 3 and 4.

7 Conclusion

In this paper, we analyze the relation between the constrained optimization reformulation and the NCP, a relation not addressed by the filter algorithms in [21, 22, 27, 31, 36, 39]. First, we give several sufficient conditions under which a KKT point of the constrained optimization problem is a solution of the NCP. Second, we define regular conditions and regular points, which include and generalize the previous results. Third, we prove that the level sets of the objective function of (3.1a)–(3.1c) are bounded for a strongly monotone function or a uniform P-function. Finally, we present some examples to verify the previous results.

The above work explains the principle of the filter method for NCPs and promotes the development of the theory and algorithms. In the future, we will consider the following problems: the influence of different NCP functions [20, 24] on the algorithm and the possibility of other sufficient conditions.

Availability of data and materials

Not applicable.

References

  1. Billups, S.C., Dirkse, P.S., Ferris, M.C.: A comparison of large scale mixed complementarity problem solvers. Comput. Optim. Appl. 7, 3–25 (1997)


  2. Chen, B.L., Ma, C.F.: Superlinear/quadratic smoothing Broyden-like method for the generalized nonlinear complementarity problem. Nonlinear Anal., Real World Appl. 12, 1250–1263 (2011)


  3. Chen, J.S.: The semismooth-related properties of a merit function and a descent method for the nonlinear complementarity problem. J. Glob. Optim. 36, 565–580 (2006)


  4. Chen, J.S., Gao, H.T., Pan, S.H.: An R-linearly convergent derivative-free algorithm for the NCPs based on the generalized Fischer–Burmeister merit function. J. Comput. Appl. Math. 232, 455–471 (2009)


  5. Chen, J.S., Pan, S.H.: A family of NCP functions and a descent method for the nonlinear complementarity problem. Comput. Optim. Appl. 40, 389–404 (2008)


  6. Chen, X.: Smoothing methods for complementarity problems and their applications: a survey. J. Oper. Res. Soc. Jpn. 43, 32–47 (2000)


  7. Chen, Y., Sun, W.: A dwindling filter line search method for unconstrained optimization. Math. Comput. 84, 187–208 (2015)


  8. Chin, C.M., Abdul Rashid, A.H., Nor, K.M.: Global and local convergence of a filter line search method for nonlinear programming. Optim. Methods Softw. 22, 365–390 (2007)


  9. De Luca, T., Facchinei, F., Kanzow, C.: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 75, 407–439 (1996)


  10. Dirkse, S.P., Ferris, M.: MCPLIB: a collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 5, 319–345 (1994)


  11. Facchinei, F., Soares, J.: A new merit function for nonlinear complementarity problems and a related algorithm. SIAM J. Optim. 7, 225–247 (1997)


  12. Ferris, M.C., Pang, J.S.: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669–713 (1997)


  13. Fletcher, R., Leyffer, S.: Nonlinear programming without a penalty function. Math. Program. 91, 239–269 (2002)


  14. Geiger, C., Kanzow, C.: On the resolution of monotone complementarity problems. Comput. Optim. Appl. 5, 155–173 (1996)


  15. Gu, C., Zhu, D.: A secant algorithm with line search filter method for nonlinear optimization. Appl. Math. Model. 35, 879–894 (2011)


  16. Gu, C., Zhu, D.: Global convergence of a three-dimensional dwindling filter algorithm without feasibility restoration phase. Numer. Funct. Anal. Optim. 37, 324–341 (2016)


  17. Gu, W.Z., Lu, L.Y.: The linear convergence of a derivative-free descent method for nonlinear complementarity problems. J. Ind. Manag. Optim. 12, 531–548 (2017)


  18. Harker, P.T., Pang, J.S.: Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 48, 161–220 (1990)


  19. Hu, S.L., Huang, Z.H., Chen, J.S.: Properties of a family of generalized NCP-functions and a derivative free algorithm for complementarity problems. J. Comput. Appl. Math. 230, 69–82 (2009)


  20. Huang, C.H., Weng, K.J., Chen, J.-S., Chu, H.W., Li, M.Y.: On four discrete-type families of NCP-functions. J. Nonlinear Convex Anal. 20, 283–306 (2019)


  21. Lai, M.Y., Nie, P.Y., Zhang, P.A., Zhu, S.J.: A new SQP approach for nonlinear complementarity problems. Int. J. Comput. Math. 86, 1222–1230 (2009)


  22. Long, J., Ma, C.F., Nie, P.Y.: A new filter method for solving nonlinear complementarity problems. Appl. Math. Comput. 185, 705–718 (2007)


  23. Lu, L.Y., Huang, Z.H., Hu, S.L.: Properties of a family of merit functions and a merit function method for the NCP. Appl. Math. J. Chin. Univ. 25, 379–390 (2010)


  24. Ma, P.F., Chen, J.-S., Huang, C.H., Ch, K.: Discovery of new complementarity functions for NCP and SOCCP. Comput. Appl. Math. 37, 5727–5749 (2018)


  25. Mangasarian, O.L., Solodov, M.V.: A linearly convergent derivative-free descent method for strongly monotone complementarity problems. Comput. Optim. Appl. 14, 5–16 (1999)


  26. More, J.J.: Global methods for nonlinear complementarity problems. Math. Oper. Res. 21, 589–614 (1996)


  27. Nie, P.Y.: A filter method for solving nonlinear complementarity problems. Appl. Math. Comput. 167, 677–694 (2005)


  28. Pang, J.S.: Complementarity problems. In: Horst, R., Pardalos, P. (eds.) Handbook of Global Optimization. Kluwer Academic, Boston (1995)


  29. Rui, S.P., Xu, C.X.: A smoothing inexact Newton method for nonlinear complementarity problems. J. Comput. Appl. Math. 233, 2332–2338 (2015)


  30. Su, K.: A globally and superlinearly convergent modified SQP-filter method. J. Glob. Optim. 41, 203–217 (2008)


  31. Su, K., Cai, H.P.: A modified SQP-filter method for nonlinear complementarity problem. Appl. Math. Model. 33, 2890–2896 (2009)


  32. Su, K., Yang, D.: A smooth Newton method with 3-1 piecewise NCP function for generalized nonlinear complementarity problem. Int. J. Comput. Math. 95, 1703–1713 (2018)


  33. Ulbrich, M., Ulbrich, S., Vicente, L.N.: A globally convergent primal-dual interior-point filter method for nonconvex nonlinear programming. Math. Program. 100, 379–410 (2004)


  34. Wächter, A., Biegler, L.T.: Line search filter methods for nonlinear programming: motivation and global convergence. SIAM J. Optim. 6, 1–31 (2005)


  35. Wächter, A., Biegler, L.T.: Line search filter methods for nonlinear programming: local convergence. SIAM J. Optim. 6, 32–48 (2005)


  36. Wang, H., Pu, D.G.: A kind of nonmonotone filter method for nonlinear complementarity problem. J. Appl. Math. Comput. 36, 27–40 (2011)


  37. Yamada, K., Yamashita, N., Fukushima, M.: A new derivative-free descent method for the nonlinear complementarity problem. In: Pillo, G.D., Giannessi, F. (eds.) Nonlinear Optimization and Related Topics, vol. 36, pp. 463–489. Kluwer Academic, Dordrecht (2000)


  38. Yang, Y.F., Qi, L.Q.: Smoothing trust region methods for nonlinear complementarity problems with P0-functions. Ann. Oper. Res. 133, 99–117 (2005)


  39. Zhou, Y.: A smoothing conic trust region filter method for the nonlinear complementarity problem. J. Comput. Appl. Math. 229, 248–263 (2009)


  40. Zhu, J.G., Liu, H.W., Liu, C.H., Cong, W.J.: A nonmonotone derivative-free algorithm for nonlinear complementarity problems based on the new generalized penalized Fischer–Burmeister merit function. Numer. Algorithms 58, 573–591 (2011)



Funding

This research is supported by the National Natural Science Foundation (11971302) of China.

Author information


Contributions

The authors completed the paper. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Chao Gu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, J., Gu, C. & Wang, G. Some results on the filter method for nonlinear complementarity problems. J Inequal Appl 2021, 30 (2021). https://doi.org/10.1186/s13660-021-02558-2

