 Research
 Open Access
Some results on the filter method for nonlinear complementary problems
Journal of Inequalities and Applications volume 2021, Article number: 30 (2021)
Abstract
Recent studies show that the filter method has good numerical performance for nonlinear complementarity problems (NCPs). The approach is to reformulate an NCP as a constrained optimization problem solved by filter algorithms. However, these studies only prove that the iterative sequence converges to a KKT point of the constrained optimization problem. In this paper, we investigate the relation between the KKT point of the constrained optimization problem and the solution of the NCP. First, we give several sufficient conditions under which the KKT point of the constrained optimization problem is the solution of the NCP; second, we define regular conditions and the regular point, which include and generalize the previous results; third, we prove that the level sets of the objective function of the constrained optimization problem are bounded for a strongly monotone function or a uniform P-function; finally, we present some examples to verify the previous results.
Introduction
Consider the following nonlinear complementarity problem (NCP):
where \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is continuously differentiable everywhere and the superscript T denotes the transpose operator. When \(F(x)=Mx+q\), with \(M\in \mathbb{R}^{n\times n}\) and \(q\in \mathbb{R}^{n}\), the NCP becomes a linear complementarity problem.
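For concreteness, the conditions defining NCP (1.1), namely \(x\geq 0\), \(F(x)\geq 0\), and \(x^{T}F(x)=0\), can be verified numerically at a candidate point. The following sketch checks these conditions; the tolerance and the sample linear complementarity data M, q are illustrative choices, not taken from the paper:

```python
import numpy as np

def is_ncp_solution(F, x, tol=1e-8):
    """Check the NCP conditions x >= 0, F(x) >= 0, x^T F(x) = 0 up to tol."""
    Fx = F(x)
    return bool(np.all(x >= -tol) and np.all(Fx >= -tol) and abs(x @ Fx) <= tol)

# Linear complementarity example: F(x) = Mx + q
M = np.array([[2.0, 1.0],
              [0.0, 2.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q

print(is_ncp_solution(F, np.array([0.5, 0.0])))  # True: F(x) = (0, 1)
print(is_ncp_solution(F, np.array([1.0, 1.0])))  # False: x^T F(x) = 5
```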
Nonlinear complementarity problems have many important applications in engineering and equilibrium modeling [10, 12, 18, 28], and many numerical methods have been developed to solve NCPs [1, 6, 17, 29, 32, 38]. Based on NCP functions, some researchers solve NCPs by reformulating them as unconstrained optimization problems [5, 9, 11, 14]. Under some assumptions, the solutions of NCPs are obtained by solving these problems. Subsequently, derivative-free algorithms for NCPs have been presented [3–5, 11, 14, 17, 19, 23, 25, 37, 40].
In the last 18 years, the filter method [7, 8, 13, 15, 16, 30, 33–35] has been regarded as an efficient method for constrained optimization. Its advantage is that trial points are accepted if they improve either the objective function or the constraint violation, instead of a combination of these two measures defined by a merit function. Recently, some authors [21, 22, 27, 31, 36, 39] have naturally reformulated the NCP as an inequality-constrained optimization problem:
and obtained good numerical performance by filter algorithms. However, they can only prove that the iterative sequence converges to the KKT point of the constrained optimization. The relation between the KKT point of constrained optimization (1.2a)–(1.2c) and the solution of NCP (1.1) has not been studied. What are the conditions for the KKT point of (1.2a)–(1.2c) to be the solution of (1.1)? This is an interesting question which should be answered.
From the above discussion, we study the relation between the solution of the NCP and the KKT point of (1.2a)–(1.2c) and propose several sufficient (and necessary) conditions for the KKT point of (1.2a)–(1.2c) to be the solution of the NCP. This work explains the relation between an optimization problem and an NCP and provides a theoretical basis for filter algorithms for the NCP. The paper is organized as follows. In Sect. 2, we recall some definitions and basic facts; we give several sufficient conditions in Sect. 3; the definitions of the regular point and regular conditions are proposed in Sect. 4; the boundedness of level sets is discussed in Sect. 5; some numerical results are presented to verify the previous results in Sect. 6.
Notation: Given \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\), the ith component function is denoted by \(F_{i}(x)\). \(F'(x)\) is the Jacobian of F at \(x\in \mathbb{R}^{n}\). \(\nabla F(x)=[\nabla F_{1}(x),\ldots,\nabla F_{n}(x)]\) denotes the transpose Jacobian of F at x. \(e_{i}\in \mathbb{R}^{n}\) denotes the ith column of the identity matrix \(I_{n}\). \(x\circ y=(x_{1}y_{1},\ldots,x_{n}y_{n})^{T}\).
Preliminaries
In this section, we recall some background concepts and materials.
Definition 1
([14])
A function \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is called
(1) monotone if, for all \(x, y\in \mathbb{R}^{n}\),
(2) strictly monotone if, for all \(x, y\in \mathbb{R}^{n}\) with \(x\neq y\),
(3) strongly monotone (with modulus \(\omega >0\)) if, for all \(x, y\in \mathbb{R}^{n}\),
Obviously, strongly monotone functions are strictly monotone, and strictly monotone functions are monotone.
Lemma 1
([14])
For a continuously differentiable function \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\),
(1) F is monotone if and only if \(F'(x)\) is positive semidefinite for all \(x\in \mathbb{R}^{n}\);
(2) F is strictly monotone if \(F'(x)\) is positive definite for all \(x\in \mathbb{R}^{n}\);
(3) F is strongly monotone if \(F'(x)\) is uniformly positive definite, i.e., \(d^{T}F'(x)d\geq \omega \|d\|^{2}\) for some \(\omega >0\) and all \(x,d\in \mathbb{R}^{n}\).
Note that the converse direction in Lemma 1(2) is not correct in general. A solution \(x^{*}\in \mathbb{R}^{n}\) of the NCP is said to be nondegenerate if \(x_{i}^{*}+F_{i}(x^{*})>0\) for all \(i\in I\), and degenerate otherwise.
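Since \(d^{T}F'(x)d=d^{T}\frac{F'(x)+F'(x)^{T}}{2}d\), the definiteness conditions in Lemma 1 depend only on the symmetric part of the Jacobian. A numerical check can therefore proceed via the eigenvalues of \((F'(x)+F'(x)^{T})/2\); the following is a sketch, with an illustrative tolerance and illustrative test matrices:

```python
import numpy as np

def jacobian_definiteness(J, tol=1e-10):
    """Classify d^T J d via the eigenvalues of the symmetric part (J + J^T)/2."""
    lam_min = np.linalg.eigvalsh((J + J.T) / 2.0).min()
    if lam_min > tol:
        return "positive definite"
    if lam_min >= -tol:
        return "positive semidefinite"
    return "indefinite or negative"

# Nonsymmetric Jacobian whose symmetric part is 2I: still positive definite
print(jacobian_definiteness(np.array([[2.0, -1.0], [1.0, 2.0]])))
# Skew-symmetric matrix: d^T J d = 0 for all d, hence positive semidefinite
print(jacobian_definiteness(np.array([[0.0, 1.0], [-1.0, 0.0]])))
```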
Lemma 2
([14])
Assume that \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is a continuous and strongly monotone function. Then the NCP has at most one solution.
Definition 2
([11])
A matrix \(M\in \mathbb{R}^{n\times n}\) is said to be a \(P_{0}\)-matrix if all its principal minors are nonnegative.
Lemma 3
([11])
A matrix \(M\in \mathbb{R}^{n\times n}\) is a
(1) \(P_{0}\)-matrix if each of its principal minors is nonnegative;
(2) P-matrix if each of its principal minors is positive;
(3) \(R_{0}\)-matrix if the linear complementarity problem
has 0 as its unique solution.
It is obvious that every P-matrix is also a \(P_{0}\)-matrix, and it is known that every P-matrix is an \(R_{0}\)-matrix. We shall also need the following characterization of \(P_{0}\) (P)-matrices.
Lemma 4
([11])
A matrix \(M\in \mathbb{R}^{n\times n}\) is a \(P_{0}\) (P)-matrix if and only if, for every nonzero vector x, there exists an index i such that \(x_{i}\neq 0\) and \(x_{i}(Mx)_{i}\geq (>)0\).
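For small matrices, the principal-minor criteria of Lemma 3 can be tested by brute force. There are \(2^{n}-1\) principal minors, so this sketch is only practical for small n; the tolerance is an illustrative choice:

```python
import numpy as np
from itertools import combinations

def is_P_matrix(M, strict=True, tol=1e-12):
    """P-matrix test (strict=True): every principal minor positive.
    P_0-matrix test (strict=False): every principal minor nonnegative."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            minor = np.linalg.det(M[np.ix_(idx, idx)])
            if (minor <= tol) if strict else (minor < -tol):
                return False
    return True

M = np.array([[1.0, 3.0],
              [0.0, 1.0]])
print(is_P_matrix(M))                 # True: minors 1, 1, and det M = 1
print(is_P_matrix(-M, strict=False))  # False: a diagonal entry is -1
```

The componentwise characterization of Lemma 4 is equivalent but quantifies over all nonzero vectors, so it is better suited to proofs than to finite numerical tests.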
Lemma 5
([11])
A function \(F: \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) is a
(1) \(P_{0}\)-function if, for every x and y in \(\mathbb{R}^{n}\) with \(x\neq y\), there is an index i such that
(2) P-function if, for every x and y in \(\mathbb{R}^{n}\) with \(x\neq y\), there is an index i such that
(3) uniform P-function if there exists a constant \(\omega >0\) such that, for every x and y in \(\mathbb{R}^{n}\), there is an index i such that
It is obvious that every monotone function is a \(P_{0}\)-function, every strictly monotone function is a P-function, and every strongly monotone function is a uniform P-function. Furthermore, it is known that the Jacobian of every continuously differentiable \(P_{0}\)-function is a \(P_{0}\)-matrix and that if the Jacobian of a continuously differentiable function is a P-matrix for every x, then the function is a P-function. If F is affine (that is, if \(F(x)= Mx+q\)), then F is a \(P_{0}\)-function if M is a \(P_{0}\)-matrix, and F is a (uniform) P-function if M is a P-matrix (note that in the affine case, the concepts of uniform P-function and P-function coincide).
Sufficient conditions
In this paper, NCP (1.1) is transformed into the following equivalent optimization problem with inequality and nonnegativity constraints:
The KKT conditions of (3.1a)–(3.1c) are
where \(\mu, \nu \in \mathbb{R}^{n}\) are the vectors of multipliers corresponding to the inequality constraints.
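To illustrate the reformulation, the following sketch solves a small monotone instance with a general-purpose SQP solver (not the filter algorithm of Sect. 6). Two assumptions are made here: the elided objective is taken to be the squared complementarity gap \(\Phi (x)=\frac{1}{2}\sum_{i=1}^{n}(x_{i}F_{i}(x))^{2}\), which is consistent with the bound \(\frac{1}{2}(F_{i_{0}}(x_{k})x_{i_{0}}^{k})^{2}\leq \Phi (x_{k})\) used in Sect. 5, and the data M, q are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

M = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite, so F is monotone
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q

# Phi(x) = 0.5 * sum_i (x_i * F_i(x))^2, minimized subject to F(x) >= 0, x >= 0
phi = lambda x: 0.5 * np.sum((x * F(x)) ** 2)
res = minimize(phi, x0=np.ones(2),
               constraints=[{"type": "ineq", "fun": F}],
               bounds=[(0.0, None), (0.0, None)])
print(res.x)        # near (2/3, 0): F there is (0, 5/3), so complementarity holds
print(phi(res.x))   # near 0
```

Since M here is positive definite, F is monotone, and by Corollary 1 below any KKT point of this reformulation is a global minimum and hence a solution of the NCP.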
Remark 1
Previous algorithms [21, 22, 27, 31, 36, 39] treat nonnegativity constraints as inequality constraints. In the following analysis we need to discuss inequality constraints and nonnegativity constraints separately.
Lemma 6
Suppose that the NCP has at least one solution. Then \(x^{*}\) solves NCP (1.1) if and only if \(x^{*}\) is a global minimum of constrained optimization (3.1a)–(3.1c).
Finding a global minimum is, in general, difficult. It is therefore of interest to know under which assumptions on the mapping F the KKT points of (3.1a)–(3.1c) are global minima. The presence of the multipliers in KKT conditions (3.2a)–(3.2d) complicates the analysis. First, using the relation between the constraints and the objective function, we rearrange the KKT conditions of (3.1a)–(3.1c).
Lemma 7
Suppose that \(x^{*}\) is a KKT point of (3.1a)–(3.1c). Then there exist \(\mu ^{*},\nu ^{*}\geq 0\) such that
Proof
From assumptions there exist \(\mu ^{*},\nu ^{*}\geq 0\) such that
Denote \(\nabla F(x^{*})= (\nabla F_{1}(x^{*}),\ldots,\nabla F_{n}(x^{*}) )\), then we have
□
Remark 2
The relation between the objective function and constraints is the key to analysis.
Next, we introduce some index sets:
and further partition the index set C as follows:
Theorem 1
Suppose that the mapping F has a positive semidefinite Jacobian \(F'(x^{*})\). Then \(x^{*}\) is a global minimum of constrained optimization (3.1a)–(3.1c) if and only if it is a KKT point of (3.1a)–(3.1c).
Proof
Suppose that \(x^{*}\) is a KKT point of (3.1a)–(3.1c), which implies that (3.3) holds. Suppose the contrary, i.e., \(\Phi (x^{*})\neq 0\). Premultiplying (3.3) by
we get
Since \(\nabla F(x^{*})\) is positive semidefinite, we obtain that
Hence
There are four cases for \(\mu ^{*}\) and \(\nu ^{*}\):
(1) \(\mu ^{*}_{R}=0\) and \(\nu ^{*}_{R}=0\);
(2) \(\mu ^{*}_{C_{2}}=0\) and \(\nu ^{*}_{C_{2}}\geq 0\);
(3) \(\mu ^{*}_{C_{1}}\geq 0\) and \(\nu ^{*}_{C_{1}}=0\);
(4) \(\mu ^{*}_{C_{3}}\geq 0\) and \(\nu ^{*}_{C_{3}}\geq 0\).
So we have
Because \(\sum_{i=1}^{n} [(x_{i}^{*})^{2}F_{i}(x^{*})\mu ^{*}_{i} + x_{i}^{*}F_{i}^{2}(x^{*})\nu ^{*}_{i} ]\leq 0\), we obtain a contradiction. Thus, \(\Phi (x^{*})=0\), i.e., \(x^{*}\) is a global minimum of constrained optimization (3.1a)–(3.1c).
A global minimum of the constrained optimization is obviously a KKT point of (3.1a)–(3.1c). □
Corollary 1
Suppose that the mapping F is a monotone function, then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).
Proof
Since the mapping F is a monotone function, the Jacobian \(F'(x^{*})\) of the mapping F is a positive semidefinite matrix. □
Theorem 2
Suppose that the Jacobian \(F'(x^{*})\) of the mapping F is a P-matrix. Then \(x^{*}\) is a global minimum of constrained optimization (3.1a)–(3.1c) if and only if it is a KKT point of (3.1a)–(3.1c).
Proof
Suppose that \(x^{*}\) is a KKT point of (3.1a)–(3.1c), then we know from Lemma 7 that
Suppose the contrary, i.e., that \(x^{*}\) is not the solution of the NCP, then \(R\neq \emptyset \). There are four cases for \(\mu ^{*}\) and \(\nu ^{*}\), which are the same as those in Theorem 1. Without loss of generality we have that
Since \(\nabla F(x^{*})\) is a P-matrix, for every nonzero vector x, there exists an index i such that \(x_{i}\neq 0\) and \(x_{i}(\nabla F(x^{*}) x)_{i}> 0\). Because \((x_{R}^{*})^{2}\circ F_{R}(x^{*})>0\), we know that
is a nonzero vector, and there exists an index \(i\in R\cup C_{1}\cup C_{3}\) such that
and
At the same time
It follows from (3.6) and (3.7) that
which contradicts (3.5).
A global minimum of the constrained optimization is obviously a KKT point of (3.1a)–(3.1c). □
Corollary 2
Suppose that the mapping F is a P-function, then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).
Proof
Since the mapping F is a P-function, the Jacobian \(F'(x^{*})\) of the mapping F is a P-matrix. □
Regular conditions
In this section, we give some sufficient (and necessary) conditions for the KKT point of (3.1a)–(3.1c) to be the solution of the NCP. We call these conditions regular conditions; they can be considered a generalization of the previous conclusions. First, we give the definitions of a regular point and regular conditions.
Definition 3
A point \(x\in \mathbb{R}^{n}\) is called regular if, for every vector \(z\in \mathbb{R}^{n}\) (\(z\leq 0\) does not hold) with
there exists a vector \(y\in \mathbb{R}^{n}\) such that
and
Moreover, a point \(x\in \mathbb{R}^{n}\) is called strictly regular if, for every vector \(z\in \mathbb{R}^{n}\) (\(z\leq 0\) does not hold) with
there exists a vector \(y\in \mathbb{R}^{n}\) such that
and
Theorem 3
Let \(x^{*}\in \mathbb{R}^{n}\) be a KKT point of (3.1a)–(3.1c). Then \(x^{*}\) solves the NCP if and only if \(x^{*}\) is regular (or strictly regular).
Proof
If \(x^{*}\in \mathbb{R}^{n}\) is a solution of the NCP, then \(R=\emptyset \) and \(z=z_{C}\); hence there is no vector z for which \(z\leq 0\) does not hold, and \(x^{*}\) is vacuously regular.
Suppose that \(x^{*}\) is regular and a KKT point of (3.1a)–(3.1c). From Lemma 7 we obtain that
Without loss of generality we have that
Consequently, we have
for any \(y\in \mathbb{R}^{n}\). Assume that \(x^{*}\) is not a solution of the NCP. Then \(R\neq \emptyset \), and choose
which satisfies (4.1). Since \(x^{*}\) is regular, there exists a vector \(y\in \mathbb{R}^{n}\) such that (4.2) and (4.3) hold. With some computation, we obtain that
and
which contradicts (4.6). Hence, \(x^{*}\) must be a solution of the NCP.
Suppose that \(x^{*}\) is strictly regular and a KKT point of (3.1a)–(3.1c). Similar to the above proof, we obtain that
and
which contradicts (4.6). Hence, \(x^{*}\) must be a solution of the NCP. □
Remark 3
Theorem 1 is the corollary of Theorem 3. Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). Since \(\nabla F(x^{*})\) is positive semidefinite, for the nonzero vector
\(z^{T}\nabla F(x^{*})z\geq 0\) holds. Choose \(y=z\), and it is easy to see that y satisfies (4.2), (4.7), and (4.3), i.e.,
and \(y^{T}\nabla F(x^{*})z=z^{T}\nabla F(x^{*})z\geq 0\). By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.
Remark 4
Theorem 2 is the corollary of Theorem 3. Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). Since the Jacobian \(F'(x^{*})\) of the mapping F is a Pmatrix, we have that, for the nonzero vector
, there exists an index \(i\in R\cup C_{1}\cup C_{3}\) such that \(z_{i}\neq 0\) and \(z_{i}(\nabla F(x^{*})z)_{i}>0\). If \(i\in C_{1}\cup C_{3}\), \(z_{i}< 0\), otherwise, \(z_{i}> 0\). Choose y to be the vector whose components are all 0 except for its ith component, which is equal to \(z_{i}\). It is easy to see that y satisfies (4.4), (4.8), and (4.5), i.e.,
and \(y^{T}\nabla F(x^{*})z=z_{i}(\nabla F(x^{*})z)_{i}> 0\). By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.
Corollary 3
Suppose that, for the nonzero vector \(t=(t_{R}^{T}, 0, t_{C_{1}}^{T}, t_{C_{3}}^{T})^{T}\) with \(t_{R\cup C_{3}}\neq 0\), there exists an index \(i\in R\cup C_{3}\) such that \(t_{i}\neq 0\) and \(t_{i}(\nabla F(x^{*})t)_{i}\geq 0\), then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).
Proof
Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). By assumptions we have that, for the nonzero vector
, there exists an index \(i\in R\cup C_{3}\) such that \(z_{i}\neq 0\) and \(z_{i}(\nabla F(x^{*})z)_{i}\geq 0\). If \(i\in C_{3}\), \(z_{i}< 0\), otherwise, \(z_{i}> 0\). Choose y to be the vector whose components are all 0 except for its ith component, which is equal to \(z_{i}\). It is easy to see that y satisfies (4.2), (4.7), and (4.3), i.e.,
and \(y^{T}\nabla F(x^{*})z=z_{i}(\nabla F(x^{*})z)_{i}\geq 0\). By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.
If \(x^{*}\) solves NCP (1.1), it is a KKT point of (3.1a)–(3.1c). □
Corollary 4
Suppose that the Jacobian \(F'(x^{*})\) of the mapping F is a \(P_{0}\)-matrix and \(\mu ^{*}_{C_{1}}=0\), then \(x^{*}\) solves NCP (1.1) if and only if it is a KKT point of (3.1a)–(3.1c).
Proof
Assume that \(x^{*}\) is a KKT point of (3.1a)–(3.1c) and is not a solution of the NCP, i.e., \(R\neq \emptyset \). Since the Jacobian \(F'(x^{*})\) of the mapping F is a \(P_{0}\)-matrix and \(\mu ^{*}_{C_{1}}=0\), we have that, for the nonzero vector
there exists an index \(i\in R\cup C_{3}\) such that \(z_{i}\neq 0\) and \(z_{i}(\nabla F(x^{*})z)_{i}\geq 0\). If \(i\in C_{3}\), \(z_{i}< 0\), otherwise, \(z_{i}> 0\). The rest of the proof is the same as that of Corollary 3. By Theorem 3, \(x^{*}\) is a regular point and must be a solution of the NCP.
If \(x^{*}\) solves NCP (1.1), it is a KKT point of (3.1a)–(3.1c). □
Boundedness of level sets
In this section, we prove that the level sets of the objective function of (3.1a)–(3.1c) are bounded for a strongly monotone function or a uniform P-function.
Theorem 4
Suppose that the mapping F is a strongly monotone function or a uniform P-function. Let \(x_{0}\) be any given vector, and let \(L(x_{0})=\{x\in \mathbb{R}^{n} \mid \Phi (x)\leq \Phi (x_{0})\}\) be the corresponding level set. Then \(L(x_{0})\) is bounded.
Proof
Assume that there is a sequence \(\{x_{k}\}\subseteq L(x_{0})\) such that \(\lim_{k\rightarrow \infty }\|x_{k}\|=\infty \). Define the index set
Let
There are two cases to be considered.
(1) If F is a strongly monotone function, we get
Since \(\sum_{i\in J}(x_{i}^{k})^{2}\neq 0\), we obtain
Due to the boundedness of the sequence \(\{y_{k}\}\) and the continuity of \(F_{i}(x)\) (\(i\in J\)), the sequence \(\{F_{i}(y_{k})\}\) remains bounded. Therefore, there exists an index \(i_{0}\in J\) such that \(\lim_{k\rightarrow \infty }F_{i_{0}}(x_{k})=\infty \). It follows from \(\lim_{k\rightarrow \infty }x_{i_{0}}^{k}=\infty \) that
However, \(\frac{1}{2}(F_{i_{0}}(x_{k})x_{i_{0}}^{k})^{2}\leq \Phi (x_{k})\leq \Phi (x_{0})\), which is a contradiction.
(2) If F is a uniform P-function, there exists an index \(i_{0}\in \{1,2,\ldots,n\}\) such that
The second case is impossible because the left-hand side of the inequality is positive. Thus,
Since \(\sum_{i\in J}(x_{i}^{k})^{2}\neq 0\), we obtain
Similar to the proof of case (1), we get that \(\lim_{k\rightarrow \infty }F_{i_{0}}(x_{k})=\infty, i_{0}\in J\), i.e.,
which contradicts \(\frac{1}{2}(F_{i_{0}}(x_{k})x_{i_{0}}^{k})^{2}\leq \Phi (x_{k})\leq \Phi (x_{0})\). □
Remark 5
Theorem 4 implies that there exists at least one accumulation point of a sequence remaining in \(L(x_{0})\).
Some examples
In this section, we present several examples which are tested by a filter algorithm to verify the previous results. We intend to modify a globally convergent filter algorithm [16] to solve the NCP. Consider the following optimization:
Define \(c(x)=[F_{1}(x),\ldots,F_{n}(x),x_{1},\ldots,x_{n}]^{T}\). There are two merit functions in the new algorithm:
In order to prevent the algorithm from cycling, the algorithm maintains a filter
The search direction \(d_{k}\) is obtained by the QP subproblem:
where \(B_{k}\) denotes the approximation of the Hessian \(\nabla _{xx}^{2}\Phi (x_{k})\) of the Lagrangian function
After a search direction \(d_{k}\) has been computed, a step size \(\alpha _{k}\) is determined in order to obtain the next iterate \(x_{k+1}=x_{k}+\alpha _{k}d_{k}\). We say that a trial point \(x_{k}(\alpha _{k,l})=x_{k}+\alpha _{k,l}d_{k}\) is acceptable to the filter if and only if
for all \((\theta (x_{j}),\Phi (x_{j}))\in \mathcal{F}_{k}\), where \(\phi (\alpha )\) is a dwindling function. We say that a trial point \(x_{k}(\alpha _{k,l})\) provides sufficient reduction if
where \(\gamma _{\theta }\), \(\gamma _{\Phi }\in (0,1)\). But this could result in convergence to a feasible but nonoptimal point. In order to prevent this, we change to a different sufficient reduction criterion
where
\(\delta >0\), \(s_{\Phi }>1\), \(s_{\theta }\geq 1\). If condition (6.7) holds, the trial point \(x_{k}(\alpha _{k,l})\) is required to satisfy the Armijo condition
where \(\eta _{\Phi }\in (0,\frac{1}{2})\). If condition (6.7) for \(\alpha _{k}\) does not hold, the filter is augmented for a new iteration using the updated formula
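The acceptance logic described above can be sketched as follows. This is an illustrative variant of a filter acceptance test in the spirit of (6.6a)–(6.6b): a trial point must sufficiently improve either the constraint violation θ or the objective Φ relative to every filter entry. The margin parameters follow the values \(\gamma _{\theta }=\gamma _{\Phi }=0.5\) used later in Sect. 6, but the exact form of the paper's displayed conditions may differ:

```python
def acceptable_to_filter(theta_trial, phi_trial, filter_entries,
                         gamma_theta=0.5, gamma_phi=0.5):
    """Accept a trial point iff, against every (theta_j, Phi_j) in the filter,
    it sufficiently reduces theta or sufficiently reduces Phi."""
    return all(theta_trial <= (1.0 - gamma_theta) * th
               or phi_trial <= ph - gamma_phi * th
               for th, ph in filter_entries)

entries = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable_to_filter(0.2, 6.5, entries))  # True: theta improved vs both
print(acceptable_to_filter(0.9, 6.5, entries))  # False: improves neither enough
```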
We now formally state the new filter algorithm for the NCP.
Algorithm 1
Given: Starting point \(x_{0}\); \(\mathcal{F}_{0}=\{(\theta,\Phi )\in \mathbb{R}^{2}:\theta >\theta (x_{0}) \}\); \(\gamma _{\theta }\),\(\gamma _{\Phi }\in (0,1)\); \(\delta >0\), \(s_{\Phi }>1\), \(s_{\theta }\geq 1\); \(\eta _{\Phi }\in (0,\frac{1}{2})\); \(0<\tau _{1}\leq \tau _{2}<1\); \(\phi (\alpha )\); \(\epsilon >0\).

1.
Compute \(\Phi (x_{k})\), \(\nabla \Phi (x_{k})\), \(F(x_{k})\), \(J'(x_{k})\), \(\theta (x_{k})\).

2.
Compute \(d_{k}\) from the QP subproblem QP(\(x_{k}\)).

3.
If \(\|d_{k}\|+\theta (x_{k})\leq \epsilon \), stop.

4.
Line search.

4.1.
Set \(\alpha _{k,0}=1\) and \(l\leftarrow 0\). Compute \(x_{k}(\alpha _{k,l})=x_{k}+\alpha _{k,l}d_{k}\).

4.2.
If \(x_{k}(\alpha _{k,l})\in \mathcal{F}_{k}\), go to Step 4.4.

4.3.
Check sufficient decrease with respect to the current iterate.

4.3.1.
Case 1. (6.7) holds: If (6.8) holds, set \(x_{k+1}=x_{k}(\alpha _{k,l})\), \(\mathcal{F}_{k+1}=\mathcal{F}_{k}\) and go to Step 5. Otherwise, go to Step 4.4.

4.3.2.
Case 2. (6.7) is not satisfied: If (6.6a)–(6.6b) holds, set \(x_{k+1}=x_{k}(\alpha _{k,l})\), augment the filter using (6.9), and go to Step 5. Otherwise, go to Step 4.4.

4.4.
Choose \(\alpha _{k,l+1}\in [\tau _{1}\alpha _{k,l},\tau _{2}\alpha _{k,l}]\), set \(l\leftarrow l+1\), and go back to Step 4.2.

5.
Update \(B_{k}\) by a BFGS update and go back to Step 1.
Remark 6
The global convergence of Algorithm 1 is similar to that in [16]. For details, see [16].
In the following, some numerical results are given on an HP i5 personal computer with 4 GB of memory. The selected parameter values are: \(\epsilon =10^{-6}\), \(\gamma _{\theta }=0.5\), \(\gamma _{\Phi }=0.5\), \(\delta =1\), \(s_{\Phi }=3.2\), \(s_{\theta }=1.5\), \(\eta _{\Phi }=0.3\), \(\tau _{1}=\tau _{2}=0.5\), and \(\phi (\alpha )=\alpha ^{\frac{4}{3}}\). The computation terminates when the stopping criterion \(\|d_{k}\|+\theta (x_{k})\leq \epsilon \) is satisfied. We use the Matlab function quadprog to solve the QP(\(x_{k}\)) subproblem. NIT and NF stand for the numbers of iterations and function evaluations, respectively. Gap stands for the absolute value of \(x^{T}F(x)\) at the final iterate.
Example 1
Let \(F(x)=Mx+q\), where
The starting point is \(x_{0}=(0,0,\ldots,0)^{T}\). The results of Example 1 are given in Table 1.
Algorithm 1 can compete with Nie’s filter algorithm [27]. Because the Jacobian \(F'(x)=M\) of \(F(x)\) is positive definite, F is strictly monotone. By Theorem 1 or Corollary 1, \(x^{*}\) is a solution of the NCP (a regular point).
Example 2
Let \(F(x)=Mx+q\), where
The starting point is \(x_{0}=(0,0,\ldots,0)^{T}\). The results of Example 2 are given in Table 2.
Algorithm 1 can compete with Su’s filter algorithm [31]. Because the Jacobian \(F'(x)=M\) of \(F(x)\) is approximately positive semidefinite as \(n\rightarrow \infty \), F is approximately monotone. By Theorem 1 or Corollary 1, \(x^{*}\) is a solution of the NCP (a regular point). Note that the condition number of M increases with the dimension n, but numerical results are not affected by it.
Example 3
(Murty problem [2])
Let \(F(x)=Mx+q\), where
The starting point is \(x_{0}=(1,1,\ldots,1)^{T}\) and the solution is \(x^{*}=(0,0,\ldots,1)^{T}\). The results of Example 3 are given in Table 3.
Because the Jacobian \(F'(x)=M\) of \(F(x)\) is a P-matrix, F is a P-function. By Theorem 2 or Corollary 2, \(x^{*}\) is a solution of the NCP (a regular point). Data and images on the convergence rate of Algorithm 1 (Example 3, \(n=8\)) are shown in Table 4 and Fig. 1, where data1 and data2 denote \(\|x^{k}-x^{*}\|\) and \(\frac{\|x^{k+1}-x^{*}\|}{\|x^{k}-x^{*}\|}\), respectively. From Table 4 and Fig. 1 we see that
which means that Algorithm 1 converges Q-superlinearly.
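The Q-superlinear claim can be checked directly from iterate data: Q-superlinear convergence means the successive ratios \(\|x^{k+1}-x^{*}\|/\|x^{k}-x^{*}\|\) tend to 0. A minimal sketch follows; the error norms below are hypothetical, not the values of Table 4:

```python
def q_ratios(errors):
    """Successive error ratios ||x^{k+1} - x*|| / ||x^k - x*||; ratios
    tending to 0 indicate Q-superlinear convergence."""
    return [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]

errors = [1.0, 3e-1, 5e-2, 2e-3, 1e-5]  # hypothetical error norms
print(q_ratios(errors))  # each ratio smaller than the last
```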
Example 4
Let
This example has one degenerate solution \((\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\) and one nondegenerate solution \((1,0,3,0)\). The Jacobian of F is
It is difficult for simple Newton-type methods, since the LCP formed by linearizing F around \(x=0\) has no solution. The results of Example 4 are given in Table 5.
There are three cases:
(1) The sequence \(\{x_{k}\}\) generated by Algorithm 1 converges to a KKT point \((0,0,0,2)\) of (3.1a)–(3.1c) which is not a solution of the NCP. If \(x^{*}=(0,0,0,2)\), we have that
whose eigenvalues are 11.502, 0.12753, 1.9147, and −8.5447. This indicates that it is not a positive semidefinite matrix. Because
\(F'(x^{*})\) is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function.
(2) The sequence \(\{x_{k}\}\) generated by Algorithm 1 converges to a degenerate solution \((\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\) of the NCP. If \(x^{*}=(\frac{\sqrt{6}}{2},0,0,\frac{1}{2})\), we have that
whose eigenvalues are 11.502, 0.12753, 1.9147, and −8.5447. This indicates that it is not a positive semidefinite matrix. Because
it is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function. But \(x^{*}\) is a solution of the NCP (a regular point).
(3) The sequence \(\{x_{k}\}\) generated by Algorithm 1 converges to a nondegenerate solution \((1,0,3,0)\) of the NCP. If \(x^{*}=(1,0,3,0)\), we have that
whose eigenvalues are 11.835, −2.6848, 0.84972, and 1. This indicates that it is not a positive semidefinite matrix. Because \(D_{1,2}=\left|\begin{matrix}6&5\\2&0\end{matrix}\right|=-10<0\), it is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function. But \(x^{*}\) is a solution of the NCP (a regular point).
We note that most of the sequences converge to a degenerate solution. Although the Jacobian of F is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function in the last two cases, the points \(x^{*}\) are still solutions of the NCP (regular points).
Example 5
(Modified Mathiesen problem [2])
Let
This example has infinitely many solutions \(x^{*}=(\varrho,0,0,0)\), where \(\varrho \in [0,3]\). For \(\varrho =0\) or 3, the solutions are degenerate, and for \(\varrho \in (0,3)\) nondegenerate. The Jacobian of F is
whose eigenvalues are \(0.56541+2.1271i\), \(0.56541-2.1271i\), −1.6308, and \(2.5015\mathrm{e}{-}011\). This indicates that it is not a positive semidefinite matrix. Because
it is not a \(P_{0}\)-matrix and F is not a \(P_{0}\)-function. But \(x^{*}\) is a solution of the NCP (a regular point). The results of Example 5 are given in Table 6.
From Table 6 we see that Algorithm 1 performs better than Chen’s algorithm [2]. Different algorithms with the same starting point converge to different solutions. The advantage of the filter method is that it has two merit functions, which relaxes the requirements on trial points and makes it easier to accept superlinear steps.
Remark 7
Examples 4 and 5 show that there exist other cases of the mapping F for which the KKT point of (3.1a)–(3.1c) is a solution of the NCP. Examples for Corollaries 3 and 4 are hard to construct, because the results of the experiments depend on both the Jacobian \(F'(x^{*})\) and the multipliers. Examples 4 and 5 may in fact serve as examples for Corollaries 3 and 4.
Conclusion
In this paper, we analyze the relation between the constrained optimization reformulation and the NCP, which is not addressed in the filter algorithms of [21, 22, 27, 31, 36, 39]. First, we give several sufficient conditions under which the KKT point of the constrained optimization is the solution of the NCP. Second, we define regular conditions and the regular point, which include and generalize the previous results. Third, we prove that the level sets of the objective function of (3.1a)–(3.1c) are bounded for a strongly monotone function or a uniform P-function. Finally, we present some examples to verify the previous results.
The above work explains the principle of the filter method for NCPs and promotes the development of the theory and algorithm. In the future, we will consider the following problems: the influence of different value functions [20, 24] on the algorithm and the possibility of other conditions.
Availability of data and materials
Not applicable.
References
Billups, S.C., Dirkse, S.P., Ferris, M.C.: A comparison of large scale mixed complementarity problem solvers. Comput. Optim. Appl. 7, 3–25 (1997)
Chen, B.L., Ma, C.F.: Superlinear/quadratic smoothing Broydenlike method for the generalized nonlinear complementarity problem. Nonlinear Anal., Real World Appl. 12, 1250–1263 (2011)
Chen, J.S.: The semismoothrelated properties of a merit function and a descent method for the nonlinear complementarity problem. J. Glob. Optim. 36, 565–580 (2006)
Chen, J.S., Gao, H.T., Pan, S.H.: An Rlinearly convergent derivativefree algorithm for the NCPs based on the generalized Fischer–Burmeister merit function. J. Comput. Appl. Math. 232, 455–471 (2009)
Chen, J.S., Pan, S.H.: A family of NCP functions and a descent method for the nonlinear complementarity problem. Comput. Optim. Appl. 40, 389–404 (2008)
Chen, X.: Smoothing methods for complementarity problems and their applications: a survey. J. Oper. Res. Soc. Jpn. 43, 32–47 (2000)
Chen, Y., Sun, W.: A dwindling filter line search method for unconstrained optimization. Math. Comput. 84, 187–208 (2015)
Chin, C.M., Abdul Rashid, A.H., Nor, K.M.: Global and local convergence of a filter line search method for nonlinear programming. Optim. Methods Softw. 22, 365–390 (2007)
De Luca, T., Facchinei, F., Kanzow, C.: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 75, 407–439 (1996)
Dirkse, S.P., Ferris, M.: MCPLIB: a collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 5, 319–345 (1994)
Facchinei, F., Soares, J.: A new merit function for nonlinear complementarity problems and a related algorithm. SIAM J. Optim. 7, 225–247 (1997)
Ferris, M.C., Pang, J.S.: Engineering and economic applications of complementarity problems. SIAM Rev. 39, 669–713 (1997)
Fletcher, R., Leyffer, S.: Nonlinear programming without a penalty function. Math. Program. 91, 239–269 (2002)
Geiger, C., Kanzow, C.: On the resolution of monotone complementarity problems. Comput. Optim. Appl. 5, 155–173 (1996)
Gu, C., Zhu, D.: A secant algorithm with line search filter method for nonlinear optimization. Appl. Math. Model. 35, 879–894 (2011)
Gu, C., Zhu, D.: Global convergence of a threedimensional dwindling filter algorithm without feasibility restoration phase. Numer. Funct. Anal. Optim. 37, 324–341 (2016)
Gu, W.Z., Lu, L.Y.: The linear convergence of a derivative-free descent method for nonlinear complementarity problems. J. Ind. Manag. Optim. 12, 531–548 (2017)
Harker, P.T., Pang, J.S.: Finitedimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Program. 48, 161–220 (1990)
Hu, S.L., Huang, Z.H., Chen, J.S.: Properties of a family of generalized NCPfunctions and a derivative free algorithm for complementarity problems. J. Comput. Appl. Math. 230, 69–82 (2009)
Huang, C.H., Weng, K.J., Chen, J.S., Chu, H.W., Li, M.Y.: On four discretetype families of NCPfunctions. J. Nonlinear Convex Anal. 20, 283–306 (2019)
Lai, M.Y., Nie, P.Y., Zhang, P.A., Zhu, S.J.: A new SQP approach for nonlinear complementarity problems. Int. J. Comput. Math. 86, 1222–1230 (2009)
Long, J., Ma, C.F., Nie, P.Y.: A new filter method for solving nonlinear complementarity problems. Appl. Math. Comput. 185, 705–718 (2007)
Lu, L.Y., Huang, Z.H., Hu, S.L.: Properties of a family of merit functions and a merit function method for the NCP. Appl. Math. J. Chin. Univ. 25, 379–390 (2010)
Ma, P.F., Chen, J.S., Huang, C.H., Ch, K.: Discovery of new complementarity functions for NCP and SOCCP. Comput. Appl. Math. 37, 5727–5749 (2018)
Mangasarian, O.L., Solodov, M.V.: A linearly convergent derivativefree descent method for strongly monotone complementarity problems. Comput. Optim. Appl. 14, 5–16 (1999)
Moré, J.J.: Global methods for nonlinear complementarity problems. Math. Oper. Res. 21, 589–614 (1996)
Nie, P.Y.: A filter method for solving nonlinear complementarity problems. Appl. Math. Comput. 167, 677–694 (2005)
Pang, J.S.: Complementarity problems. In: Horst, R., Pardalos, P. (eds.) Handbook of Global Optimization. Kluwer Academic, Boston (1995)
Rui, S.P., Xu, C.X.: A smoothing inexact Newton method for nonlinear complementarity problems. J. Comput. Appl. Math. 233, 2332–2338 (2015)
Su, K.: A globally and superlinearly convergent modified SQPfilter method. J. Glob. Optim. 41, 203–217 (2008)
Su, K., Cai, H.P.: A modified SQPfilter method for nonlinear complementarity problem. Appl. Math. Model. 33, 2890–2896 (2009)
Su, K., Yang, D.: A smooth Newton method with 3-1 piecewise NCP function for generalized nonlinear complementarity problem. Int. J. Comput. Math. 95, 1703–1713 (2018)
Ulbrich, M., Ulbrich, S., Vicente, L.N.: A globally convergent primaldual interiorpoint filter method for nonconvex nonlinear programming. Math. Program. 100, 379–410 (2004)
Wächter, A., Biegler, L.T.: Line search filter methods for nonlinear programming: motivation and global convergence. SIAM J. Optim. 6, 1–31 (2005)
Wächter, A., Biegler, L.T.: Line search filter methods for nonlinear programming: local convergence. SIAM J. Optim. 6, 32–48 (2005)
Wang, H., Pu, D.G.: A kind of nonmonotone filter method for nonlinear complementarity problem. J. Appl. Math. Comput. 36, 27–40 (2011)
Yamada, K., Yamashita, N., Fukushima, M.: A new derivativefree descent method for the nonlinear complementarity problem. In: Pillo, G.D., Giannessi, F. (eds.) Nonlinear Optimization and Related Topics, vol. 36, pp. 463–489. Kluwer Academic, Dordrecht (2000)
Yang, Y.F., Qi, L.Q.: Smoothing trust region methods for nonlinear complementarity problems with P0functions. Ann. Oper. Res. 133, 99–117 (2005)
Zhou, Y.: A smoothing conic trust region filter method for the nonlinear complementarity problem. J. Comput. Appl. Math. 229, 248–263 (2009)
Zhu, J.G., Liu, H.W., Liu, C.H., Cong, W.J.: A nonmonotone derivativefree algorithm for nonlinear complementarity problems based on the new generalized penalized Fischer–Burmeister merit function. Numer. Algorithms 58, 573–591 (2011)
Funding
This research is supported by the National Natural Science Foundation (11971302) of China.
Author information
Affiliations
Contributions
The authors completed the paper. The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, J., Gu, C. & Wang, G. Some results on the filter method for nonlinear complementary problems. J Inequal Appl 2021, 30 (2021). https://doi.org/10.1186/s13660-021-02558-2
DOI: https://doi.org/10.1186/s13660-021-02558-2
MSC
 90C30
 65K05
Keywords
 Inequality constrained optimization
 Filter method
 Nonlinear complementarity problems
 KKT point
 Regular conditions