A generalized gradient projection method based on a new working set for minimax optimization problems with inequality constraints
- Guodong Ma^{1} (corresponding author),
- Yufeng Zhang^{2} and
- Meixing Liu^{1}
https://doi.org/10.1186/s13660-017-1321-3
© The Author(s) 2017
Received: 20 December 2016
Accepted: 12 February 2017
Published: 27 February 2017
Abstract
Combining the techniques of working set identification and generalized gradient projection, we present a new generalized gradient projection algorithm for minimax optimization problems with inequality constraints. We propose a new optimal identification function, from which we derive a new working set. At each iteration, the improved search direction is generated by a single explicit generalized gradient projection formula, which is simple and reduces the computational cost. Under mild assumptions, the algorithm is globally and strongly convergent. Finally, numerical results show that the proposed algorithm is promising.
1 Introduction
The first class of algorithms views problem (1) as a constrained nonsmooth optimization problem, which can be solved directly by several classical nonsmooth methods, such as subgradient methods, bundle methods, and cutting plane methods [6–9]. The algorithms in [6–8] share a drawback: their numerical performance is difficult to improve. In [9], however, a feasible descent bundle method for inequality constrained minimax problems is proposed; by using the subgradients of the functions, the bundle idea, and a partial cutting planes model to generate new cutting planes and aggregate the subgradients in the bundle, it overcomes the difficulties of numerical computation and storage.
Recently, several researchers have applied SQCQP methods to unconstrained and constrained minimax problems. The SQCQP method of [18] solves a subproblem with a convex quadratic objective function and convex quadratic inequality constraints, and the authors showed that it is faster than the SQP algorithm, a well-known method for solving constrained minimization problems. Jian et al. [19] also proposed a simple sequential quadratically constrained quadratic programming algorithm for smooth constrained optimization. Unlike previous work, at each iteration the main search direction is obtained by solving only one subproblem, composed of a convex quadratic objective function and simple quadratic inequality constraints, without the second derivatives of the constraint functions.
Although the SQP and SQCQP methods can effectively solve the minimax problem, this transformation may ignore the special structure of the minimax problem and increase the number of constraint functions. Moreover, these methods require the solution of one or two QP (QCQP) subproblems at each iteration, and some subproblems may be complex for large-scale problems. In general, there are many cases where the subproblems cannot be solved easily, which greatly increases the computational cost. Hence, the (generalized) gradient projection method (GGPM), based on the Rosen gradient projection method [23], has been developed for solving inequality constrained optimization problems. The GGPM has the attractive property that the search direction is given by a single explicit gradient projection formula, and it exhibits nice convergence behavior and numerical results for small and medium scale problems. These properties have drawn the attention of many scholars [24–26]. Chapter II of [25] gives a systematic and detailed study of generalized gradient projection algorithms for inequality constrained smooth optimization.
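To illustrate the flavor of such methods, the following NumPy sketch (our illustration, not this paper's algorithm) computes Rosen's projected direction \(d = -(E_{n} - N(N^{T}N)^{-1}N^{T})g\), which projects the negative gradient onto the null space of the active constraint normals \(N\):

```python
import numpy as np

def rosen_projected_direction(g, N):
    """Project -g onto the null space of the active constraint normals.

    g : objective gradient at the current point, shape (n,)
    N : matrix whose columns are the active constraint gradients, shape (n, q),
        assumed to have full column rank.
    """
    n = g.shape[0]
    # P = I - N (N^T N)^{-1} N^T is the orthogonal projector onto null(N^T)
    P = np.eye(n) - N @ np.linalg.solve(N.T @ N, N.T)
    return -P @ g

# Example: one active constraint x1 + x2 <= 1 with normal (1, 1)
g = np.array([1.0, 3.0])
N = np.array([[1.0], [1.0]])
d = rosen_projected_direction(g, N)   # d = (1, -1)
```

The resulting direction is orthogonal to every active constraint gradient, so a small step along it stays (to first order) on the active constraint surface while decreasing the objective (\(g^{T}d < 0\) whenever the projection is nonzero).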
It is well established in the literature that, when the number of constraints is very large, active set identification techniques can improve the local convergence behavior and decrease the computational cost of algorithms for nonlinear programming and minimax problems. An early study of the active set identification technique can be found in [27]. Many satisfactory results for the general nonlinear case have been obtained, e.g., [28, 29]. Facchinei et al. [28] described a technique, based on the algebraic representation of the constraint set, that identifies the active constraints in a neighborhood of a solution. The extension to constrained minimax problems was first presented in [30], without strict complementarity or linear independence. Moreover, the identification technique of active constraints for constrained minimax problems is also suitable for infeasible algorithms, such as strongly sub-feasible direction methods and penalty function methods.
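Since the identification functions (4)-(5) are not reproduced in this excerpt, the sketch below only illustrates the general idea behind such working sets, in the spirit of [28, 30]: an index is kept when its function value lies within an identification radius ρ of the binding value. The function names and the specific rule are our own illustrative assumptions, not the paper's formulas (4)-(5).

```python
import numpy as np

def working_sets(fI, fJ, x, rho):
    """Illustrative working-set identification.

    fI  : component functions of the max objective F(x) = max_i f_i(x)
    fJ  : inequality constraint functions f_j(x) <= 0
    rho : identification radius at x (driven to 0 near a stationary point)
    """
    FI = np.array([f(x) for f in fI])
    F = FI.max()
    # keep objective indices whose value is within rho of the max
    I_k = [i for i, v in enumerate(FI) if F - v <= rho]
    # keep constraint indices whose value is within rho of the bound 0
    J_k = [j for j, f in enumerate(fJ) if f(x) >= -rho]
    return I_k, J_k

# Example: F(x) = max(x^2, x - 0.05) near x = 1, constraint x - 1 <= 0
fI = [lambda x: x**2, lambda x: x - 0.05]
fJ = [lambda x: x - 1.0]
I_k, J_k = working_sets(fI, fJ, 1.0, 0.1)   # I_k = [0, 1], J_k = [0]
```

As ρ shrinks along the iterates, such a set settles down to the exactly active indices, which is what makes the working set smaller than the full index sets \(I\) and \(J\).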
The main contributions of this paper are summarized as follows:
- 1.
We propose a new optimal identification function for the stationary point, from which we provide a new working set.
- 2.
The search direction is generated by only one generalized gradient projection explicit formula, which is simple and could reduce the computational cost.
- 3.
Under some mild assumptions, the algorithm is globally and strongly convergent.
The paper is organized as follows. The next section describes the algorithm. Section 3 discusses the convergence analysis. Section 4 contains numerical results. Finally, some conclusions are drawn in Section 5.
2 Description of algorithm
Assumption A1
The functions \(f_{j}(x)\) (\(j \in I \cup J\)) are all first order continuously differentiable, and there exists an index \(l_{x} \in I(x)\) for each \(x \in X\) such that the gradient vectors \(\{ \nabla f_{i}(x) - \nabla f_{l_{x}}(x), i \in I(x)\backslash \{ l_{x}\};\nabla f_{j}(x),j \in J(x)\}\) are linearly independent.
Assumption A1-1
The vectors \(\{ \nabla f_{i}(x) - \nabla f_{t}(x), i \in I(x)\backslash \{ t\}; \nabla f_{j}(x),j \in J(x)\} \) are linearly independent for arbitrary \(t \in I(x)\).
Lemma 1
- (i)
the matrix \(N_{k}^{T}N_{k} + D_{k}\) is nonsingular and positive definite. Suppose \(x^{k} \to x^{*}\), \(N_{k} \to N_{ *}\), \(D_{k} \to D_{ *}\), then the matrix \(N_{ *}^{T}N_{ *} + D_{ *}\) is also nonsingular and positive definite;
- (ii)
\(N_{k}^{T}P_{k} = D_{k}Q_{k}\), \(N_{k}^{T}Q_{k}^{T} = E_{\vert L_{k}\vert } - D_{k}(N_{k}^{T}N_{k} + D_{k})^{ - 1}\);
- (iii)
\((g_{l_{k}}^{k})^{T}P_{k}g_{l_{k}}^{k} = \Vert P_{k}g_{l_{k}}^{k}\Vert ^{2} + \sum_{j \in L_{k}}(\mu_{j}^{k})^{2}D_{j}^{k}\);
- (iv)
\(\rho_{k} = 0\) if and only if \(x^{k}\) is a stationary point of the problem (1).
Proof
Under Assumption A1, the matrix \(N_{k}^{T}N_{k} + D_{k}\) is nonsingular by [25], Theorem 1.1.9. Then one can prove that the matrix \(N_{ *}^{T}N_{ *} + D_{ *}\) is also nonsingular and positive definite similarly to [25], Lemma 2.2.2. Conclusions (ii) and (iii) can be obtained similarly to [25], Theorem 1.1.9. Next, we prove conclusion (iv) in detail.
Conversely, if \(x^{k} \) is a stationary point of the problem (1) with the multiplier vector \(\lambda^{k}\), then from (6)-(15) one knows that \(\mu_{L_{k}}^{k} = \lambda_{L_{k}}^{k}\) and \(\rho_{k} = 0\). □
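The identities in Lemma 1(i)-(iii) can be checked numerically. The sketch below assumes the standard generalized projection operators of [25], namely \(Q_{k} = (N_{k}^{T}N_{k} + D_{k})^{-1}N_{k}^{T}\), \(P_{k} = E_{n} - N_{k}Q_{k}\), and the multiplier estimate \(\mu^{k} = Q_{k}g\), and verifies the identities on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 6, 3
N = rng.standard_normal((n, q))          # full column rank (almost surely)
D = np.diag(rng.uniform(0.1, 1.0, q))    # positive diagonal matrix D_k
g = rng.standard_normal(n)

M = N.T @ N + D                          # N^T N + D
Q = np.linalg.solve(M, N.T)              # Q_k = (N^T N + D)^{-1} N^T
P = np.eye(n) - N @ Q                    # P_k = E_n - N Q_k
mu = Q @ g                               # multiplier estimate mu^k

# Lemma 1(i): M is nonsingular and positive definite
assert np.all(np.linalg.eigvalsh(M) > 0)

# Lemma 1(ii): N^T P = D Q  and  N^T Q^T = E_q - D (N^T N + D)^{-1}
assert np.allclose(N.T @ P, D @ Q)
assert np.allclose(N.T @ Q.T, np.eye(q) - D @ np.linalg.inv(M))

# Lemma 1(iii): g^T P g = ||P g||^2 + sum_j (mu_j)^2 D_jj
lhs = g @ P @ g
rhs = np.linalg.norm(P @ g)**2 + np.sum(mu**2 * np.diag(D))
assert np.isclose(lhs, rhs)
```

For instance, identity (ii) follows algebraically from \(N^{T}P = N^{T} - N^{T}NQ = [(N^{T}N + D) - N^{T}N]M^{-1}N^{T} = DQ\), which is exactly what the assertions confirm.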
Lemma 2
- (i)
\((g_{l_{k}}^{k})^{T}d^{k} \le - \rho_{k}^{\xi} \bar{\omega}_{k} - \varrho_{k}\);
- (ii)
\((g_{j}^{k})^{T}d^{k} \le - \varrho_{k}\), \(\forall j \in (I(x^{k})\backslash \{ l_{k}\} ) \cup J(x^{k})\);
- (iii)
\(F'(x^{k};d^{k}) \le - \varrho_{k}\), where \(F'(x^{k};d^{k})\) is the directional derivative of \(F(x)\) at the point \(x^{k}\) along the direction \(d^{k}\).
Proof
Then we discuss the following two cases, respectively.
(iii) Since \(F'(x^{k};d^{k}) = \max \{ (g_{i}^{k})^{T}d^{k}, i \in I(x^{k})\}\), we have \(F'(x^{k};d^{k}) \le - \varrho_{k}\) from the conclusions (i) and (ii). □
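The directional derivative formula used in conclusion (iii), \(F'(x;d) = \max \{ \nabla f_{i}(x)^{T}d, i \in I(x)\}\), can be checked against a one-sided difference quotient; the small example below is our own:

```python
import numpy as np

# F(x) = max(f1, f2) with f1(x) = x1^2 + x2^2, f2(x) = 2*x1;
# both components are active at x = (1, 1) since f1(x) = f2(x) = 2.
fs = [lambda x: x[0]**2 + x[1]**2, lambda x: 2 * x[0]]
gs = [lambda x: np.array([2 * x[0], 2 * x[1]]), lambda x: np.array([2.0, 0.0])]

def F(x):
    return max(f(x) for f in fs)

def dir_deriv(x, d):
    """F'(x; d) = max of grad f_i(x)^T d over the active indices I(x)."""
    Fx = F(x)
    active = [i for i, f in enumerate(fs) if abs(f(x) - Fx) < 1e-12]
    return max(gs[i](x) @ d for i in active)

x = np.array([1.0, 1.0])
for d in (np.array([-1.0, 0.0]), np.array([0.0, -1.0])):
    t = 1e-7
    fd = (F(x + t * d) - F(x)) / t     # one-sided difference quotient
    assert abs(dir_deriv(x, d) - fd) < 1e-5
```

Note that the two directions give different pictures: along \(d = (-1,0)\) both active gradients predict decrease (\(F' = -2\)), while along \(d = (0,-1)\) the linear component blocks descent (\(F' = 0\)), which is why the maximum over the active set is needed.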
Based on the improved direction \(d^{k}\) defined by (16) and analyzed above, we are now ready to state the steps of our algorithm.
Algorithm A
Step 0. Choose an initial feasible point \(x^{0} \in X\) and parameters: \(\alpha,\beta \in (0,1)\), \(\varepsilon > 0\), \(p \ge 1\), \(\xi > 0\). Let \(k: = 0\).
Step 1. For the current iterate \(x^{k}\), generate the working sets \(I_{k}\) and \(J_{k}\) by (4) and (5), and calculate the projection matrix \(P_{k}\) and the optimal identification function values \(\rho_{k}\) and \(\varrho_{k}\) by (8), (13)-(15), and (17). If \(\rho_{k} = 0\), then \(x^{k}\) is a stationary point of the problem (1); stop. Otherwise, go to Step 2.
Step 2. Obtain the search direction \(d^{k}\) by (16)-(18).
Step 3. Compute the step size \(t_{k}\), the first number \(t\) of the sequence \(\{ 1,\beta,\beta^{2},\ldots\}\) satisfying the line search inequalities (21) and (22).
Step 4. Let \(x^{k + 1} = x^{k} + t_{k}d^{k}\), \(k: = k + 1\), and go back to Step 1.
Remark 2
The inequality (21) is equivalent to \(f_{i}(x^{k} + td^{k}) \le F^{k} - \alpha t\varrho_{k}\), \(i \in I\).
By Lemma 2, we have \(F'(x^{k};d^{k}) \le - \varrho_{k} < 0 \) and \((g_{j}^{k})^{T}d^{k} \le - \varrho_{k}\), \(\forall j \in J(x^{k})\), so (21) and (22) hold for all \(t > 0\) small enough; hence Algorithm A is well defined.
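In the spirit of the line search inequalities (21)-(22) (with (21) in the equivalent form of Remark 2 and (22) read as feasibility of the constraints, since their exact statement is not reproduced here), an Armijo-type backtracking search over \(t \in \{1,\beta,\beta^{2},\ldots\}\) can be sketched as follows; the function names and the toy problem are our own:

```python
import numpy as np

def line_search(x, d, fI, fJ, varrho, alpha=0.4, beta=0.4, max_iter=50):
    """Accept the first t in {1, beta, beta^2, ...} satisfying
    (21)  f_i(x + t d) <= F(x) - alpha * t * varrho   for all i, and
    (22)  f_j(x + t d) <= 0                           for all j.
    """
    Fx = max(f(x) for f in fI)
    t = 1.0
    for _ in range(max_iter):
        xt = x + t * d
        ok21 = all(f(xt) <= Fx - alpha * t * varrho for f in fI)
        ok22 = all(f(xt) <= 0 for f in fJ)
        if ok21 and ok22:
            return t
        t *= beta
    return t

# Toy problem: F(x) = max(x1^2 + x2^2, (x1 - 1)^2 + x2^2), s.t. x2 - 1 <= 0
fI = [lambda x: x[0]**2 + x[1]**2, lambda x: (x[0] - 1)**2 + x[1]**2]
fJ = [lambda x: x[1] - 1.0]
x = np.array([0.5, 0.5])
d = np.array([0.0, -1.0])          # a feasible descent direction at x
t = line_search(x, d, fI, fJ, varrho=0.1)
```

The defaults \(\alpha = \beta = 0.4\) mirror the parameter values used in the numerical experiments of Section 4; since (21) and (22) hold for all sufficiently small \(t\), the loop terminates with a positive step.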
3 Convergence analysis
In this section, we will analyze the global and strong convergence of Algorithm A. If Algorithm A stops at \(x^{k}\) in a finite number of iterations, then \(\rho_{k} = 0\). From Lemma 1(iv), we know that \(x^{k}\) is a stationary point of the problem (1). Next, we assume that Algorithm A yields an infinite iteration sequence \(\{ x^{k}\}\) of points, and prove that any accumulation point \(x^{*}\) of \(\{ x^{k}\}\) is a stationary point for the problem (1) under some mild conditions including the following assumption.
Assumption A2
The sequence \(\{ x^{k}\}\) generated by Algorithm A is bounded.
Lemma 3
Proof
(i) By Assumption A2 and Lemma 1(i), it is easy to see that the full matrix sequences \(\{ P_{k}\}_{k = 1}^{\infty}\) and \(\{ Q_{k}\}_{k = 1}^{\infty}\) are bounded. Furthermore, from (15) and (18), we know that \(\{ \mu_{L_{k}}^{k}\}\) and \(\{ v_{L_{k}}^{k}\}\) are bounded. This implies that \(\Vert d^{k}\Vert \le c\rho_{k}^{\xi}\).
Based on the lemma above, we can present the global convergence of our algorithm.
Theorem 1
Suppose that Assumptions A1 and A2 hold. Then Algorithm A generates an infinite sequence \(\{ x^{k}\}\) of points such that each accumulation \(x^{*}\) of \(\{ x^{k}\}\) is a stationary point of the problem (1).
Proof
Next, we will prove this theorem in two steps.
A. The step size sequence \(\{ t_{k}\}_{k \in \mathcal{K}}\) is bounded away from zero, that is, \(\bar{t}: = \inf \{ t_{k}, k \in \mathcal{K}\} > 0\). We only need to show that the inequalities (21) and (22) hold for \(k \in \mathcal{K}\) large enough and all \(t > 0 \) sufficiently small.
A1. For \(i \in I\), there are two cases: \(i \notin I(x^{*})\) and \(i \in I(x^{*})\).
A12. Case \(i \in I(x^{*})\), that is, \(f_{i}(x^{*}) = F(x^{*})\). Since \(x^{k} \to x^{ *}\) and \(\varrho_{ *}^{ \sim} > 0\), by (4) and (5) we know that \(I(x^{ *} ) \subseteq I_{k} \) holds for all \(k \in \mathcal{K}\) large enough; hence \(i \in I_{k}\). There are again two sub-cases, \(i = l_{ *}\) and \(i \ne l_{ *}\), which we discuss in turn.
A2. For \(j \in J\), there are likewise two cases: \(j \notin J(x^{*})\) and \(j \in J(x^{*})\).
Subsequently, we further show that Algorithm A has the property of strong convergence.
Theorem 2
Suppose that Assumptions A1 and A2 hold. If the sequence \(\{ x^{k}\}\) of points generated by Algorithm A possesses an isolated accumulation point \(x^{ *}\), then \(\lim_{k \to \infty} x^{k} = x^{ *}\), that is, Algorithm A is strongly convergent.
Proof
Since the sequence \(\{ x^{k}\}\) of points generated by Algorithm A possesses an isolated accumulation point, together with \(\lim_{k \to \infty} \Vert x^{k + 1} - x^{k}\Vert = 0\) (Lemma 3(ii)) and [25], Corollary 1.1.8, we immediately have \(\lim_{k \to \infty} x^{k} = x^{ *}\). Finally, by \(\lim_{k \to \infty} x^{k} = x^{ *}\) and Theorem 1, it follows that \(x^{ *}\) is the stationary point of the problem (1). □
4 Numerical experiments
In this section, in order to validate the efficiency of the proposed Algorithm A, some preliminary numerical experiments have been carried out. The test problems are divided into two groups. The first group consists of 10 problems (P1-P10): the four small-scale problems P1-P4 are taken from [25] (problems 7.3.28, 7.3.32, 7.3.31, and 7.3.29, respectively), and the six medium-to-large-scale problems P5-P10 are from [31], each composed of the corresponding objective and constraint functions in [31]. For the second group, we select six problems from [31] and compare the results of Algorithm A with those of the algorithm from [17] (called Algorithm B). In particular, \(\mathrm{P}5=2.3+4.1(1)\) (which means that the objective and constraints of problem P5 are 2.3 and 4.1(1) in [31], respectively, and similarly below), \(\mathrm{P}6=2.3+4.6(2)\), \(\mathrm{P}7=2.1+4.6(2)\), \(\mathrm{P}8=2.1+4.1(1)\), \(\mathrm{P}9=2.5+4.6(2)\) (\(n=200\)), \(\mathrm{P}10=2.9+4.1(1)\), \(\mathrm{P}11=2.1+4.6(1)\), \(\mathrm{P}12=2.5+4.6(2)\) (\(n=50\)), \(\mathrm{P}13=2.9+4.6(2)\).
Algorithm A is coded in MATLAB R2010b and run on a PC with Windows 7 and an Intel(R) Pentium(R) 2.4 GHz CPU. In the numerical experiments, we set the parameters \(\alpha = 0.4\), \(\beta = 0.4\), \(\varepsilon = 7\), \(p = 1\), \(\xi = 0.2\). Execution is terminated if \(\rho_{k} < 10^{ - 5}\) or \(\mathrm{NI}\ge 150\).
Numerical results of Algorithm A
Prob. | n/l/m | \(x^{0}\) | NI | NF | NC | \(F(x^{*})\) | T (s) |
---|---|---|---|---|---|---|---|
P1 | 2/3/2 | \((1,2.4)^{T}\) | 15 | 115 | 33 | 1.952225 | 0.03 |
P2 | 2/3/2 | \((0,1)^{T}\) | 31 | 44 | 63 | 2.000009 | 0.01 |
P3 | 4/4/3 | \((0,0.9,0.9, - 1.5)^{T}\) | 28 | 143 | 89 | −43.999992 | 0.01 |
P4 | 2/6/2 | \((1,2.4)^{T}\) | 18 | 379 | 37 | 0.616433 | 0.02 |
P5 | 50/2/48 | \((2, \ldots,2)^{T}\) | 85 | 551 | 4299 | −69.296460 | 0.27 |
P6 | 50/2/48 | \((0.5, \ldots,0.5)^{T}\) | 150 | 6061 | 17026 | −56.502976 | 0.79 |
P7 | 100/2/98 | \((0.5, \ldots,0.5)^{T}\) | 22 | 160 | 2157 | 0.000006 | 0.13 |
P8 | 100/100/98 | \((1, \ldots,1)^{T}\) | 98 | 691 | 9605 | 0.500009 | 1.22 |
P9 | 200/3/199 | \((0.5, \ldots,0.5)^{T}\) | 102 | 1087 | 22502 | 398.000010 | 2.72 |
P10 | 200/2/198 | \((1, \ldots,1)^{T}\) | 150 | 456 | 29723 | 111.701918 | 3.92 |
Numerical comparisons for Algorithms A and B
Prob. | n/l/m | \(x^{0}\) | Alg. | NI | NF | NC | \(F(x^{*})\) | T (s) |
---|---|---|---|---|---|---|---|---|
P5 | 50/2/48 | \((1, \ldots,1)^{T}\) | A | 65 | 551 | 4299 | −69.296460 | 0.27 |
 | | | B | 43 | 141 | 4178 | −69.296456 | 4.53 |
P6 | 50/2/48 | \((0.5, \ldots,0.5)^{T}\) | A | 150 | 6061 | 17026 | −56.502976 | 0.79 |
 | | | B | 92 | 278 | 9065 | 56.580323 | 7.61 |
P8 | 50/2/48 | \((1, \ldots,1)^{T}\) | A | 96 | 673 | 4609 | 0.500010 | 0.70 |
 | | | B | 9 | 949 | 887 | 0.500000 | 2.34 |
P11 | 50/2/49 | \((0.4, \ldots,0.4)^{T}\) | A | 82 | 1804 | 4019 | 0.111121 | 0.66 |
 | | | B | 6 | 650 | 638 | 0.111111 | 2.17 |
P12 | 50/3/49 | \((0.5, \ldots,0.5)^{T}\) | A | 24 | 202 | 1177 | 0.000001 | 0.09 |
 | | | B | 10 | 38 | 784 | 0 | 2.05 |
P13 | 50/3/49 | \((0.5, \ldots,0.5)^{T}\) | A | 80 | 652 | 3969 | 98.000010 | 0.35 |
 | | | B | 147 | 631 | 14455 | 98.000001 | 10.94 |
Total | | | A | 497 | | | | 2.84 |
 | | | B | 307 | | | | 29.64 |
From Table 1, we see that Algorithm A generates approximately optimal solutions for all the test problems. The search direction is given by the explicit generalized gradient projection formula, and the new working set reduces the scale of the generalized gradient projection, so the proposed Algorithm A is efficient.
In Table 2, we compare Algorithm A with Algorithm B; all results were obtained on the same PC. The two algorithms are similar in terms of the approximate optimal objective value at the final iterate; although Algorithm B requires fewer evaluations of the component functions of the objective and constraints, our algorithm performs much better with respect to CPU time.
In summary, the numerical results in the two tables and the analysis above show that the proposed Algorithm A is promising for small and medium scale minimax problems.
5 Conclusions
Although the generalized gradient projection algorithms have good theoretical convergence and effectiveness in practice, their applications to minimax problems have not yet been investigated. In this paper, we present a new generalized gradient projection algorithm for minimax optimization problems with inequality constraints.
- (1)
A new approximation working set is presented.
- (2)
Using the technique of generalized gradient projection, we construct a generalized gradient projection feasible descent direction.
- (3)
Under mild assumptions, the algorithm is globally and strongly convergent.
- (4)
Some preliminary numerical results show that the proposed algorithm performs efficiently.
6 Results and discussion
In this work, a new generalized gradient projection algorithm for minimax optimization problems with inequality constraints is presented. As further work, we think the ideas can be extended to minimax optimization problems with equality and inequality constraints and other optimization problems.
Declarations
Acknowledgements
Project supported by the Natural Science Foundation of China (No. 11271086), the Natural Science Foundation of Guangxi Province (No. 2015GXNSFBA139001) and the Science Foundation of Guangxi Education Department (No. KY2015YB242) as well as the Doctoral Scientific Research Foundation (No. G20150010).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Baums, A: Minimax method in optimizing energy consumption in real-time embedded systems. Autom. Control Comput. Sci. 43, 57-62 (2009)
- Li, YP, Huang, GH: Inexact minimax regret integer programming for long-term planning of municipal solid waste management - part A, methodology development. Environ. Eng. Sci. 26, 209-218 (2009)
- Michelot, C, Plastria, F: An extended multifacility minimax location problem revisited. Ann. Oper. Res. 111, 167-179 (2002)
- Cai, XQ, Teo, KL, Yang, XQ, Zhou, XY: Portfolio optimization under a minimax rule. Manag. Sci. 46, 957-972 (2000)
- Khan, MA: Second-order duality for nondifferentiable minimax fractional programming problems with generalized convexity. J. Inequal. Appl. 2013, 500 (2013)
- Gaudioso, M, Monaco, MF: A bundle type approach to the unconstrained minimization of convex nonsmooth functions. Math. Program. 23, 216-226 (1982)
- Polak, E, Mayne, DQ, Higgins, JE: Superlinearly convergent algorithm for min-max problems. J. Optim. Theory Appl. 69, 407-439 (1991)
- Gaudioso, M, Giallombardo, G, Miglionico, G: An incremental method for solving convex finite min-max problems. Math. Oper. Res. 31, 173-187 (2006)
- Jian, JB, Tang, CM, Tang, F: A feasible descent bundle method for inequality constrained minimax problems. Sci. Sin., Math. 45, 2001-2024 (2015) (in Chinese)
- Xu, S: Smoothing method for minimax problems. Comput. Optim. Appl. 20, 267-279 (2001)
- Polak, E, Royset, JO, Womersley, RS: Algorithms with adaptive smoothing for finite minimax problems. J. Optim. Theory Appl. 119, 459-484 (2003)
- Polak, E, Womersley, RS, Yin, HX: An algorithm based on active sets and smoothing for discretized semi-infinite minimax problems. J. Optim. Theory Appl. 138, 311-328 (2008)
- Xiao, Y, Yu, B: A truncated aggregate smoothing Newton method for minimax problems. Appl. Math. Comput. 216, 1868-1879 (2010)
- Zhou, JL, Tits, AL: An SQP algorithm for finely discretized continuous minimax problems and other minimax problems with many objective functions. SIAM J. Optim. 6, 461-487 (1996)
- Zhu, ZB, Cai, X, Jian, JB: An improved SQP algorithm for solving minimax problems. Appl. Math. Lett. 22, 464-469 (2009)
- Jian, JB, Zhang, XL, Quan, R, Ma, Q: Generalized monotone line search SQP algorithm for constrained minimax problems. Optimization 58, 101-131 (2009)
- Jian, JB, Hu, QJ, Tang, CM: Superlinearly convergent norm-relaxed SQP method based on active set identification and new line search for constrained minimax problems. J. Optim. Theory Appl. 163, 859-883 (2014)
- Jian, JB, Chao, MT: A sequential quadratically constrained quadratic programming method for unconstrained minimax problems. J. Math. Anal. Appl. 362, 34-45 (2010)
- Jian, JB, Mo, XD, Qiu, LJ, Yang, SM, Wang, FS: Simple sequential quadratically constrained quadratic programming feasible algorithm with active identification sets for constrained minimax problems. J. Optim. Theory Appl. 160, 158-188 (2014)
- Wang, FS, Wang, CL: An adaptive nonmonotone trust-region method with curvilinear search for minimax problem. Appl. Math. Comput. 219, 8033-8041 (2013)
- Ye, F, Liu, HW, Zhou, SS, Liu, SY: A smoothing trust-region Newton-CG method for minimax problem. Appl. Math. Comput. 199, 581-589 (2008)
- Obasanjo, E, Tzallas-Regas, G, Rustem, B: An interior-point algorithm for nonlinear minimax problems. J. Optim. Theory Appl. 144, 291-318 (2010)
- Rosen, JB: The gradient projection method for nonlinear programming, part I. Linear constraints. J. Soc. Ind. Appl. Math. 8, 181-217 (1960)
- Du, DZ, Wu, F, Zhang, XS: On Rosen’s gradient projection methods. Ann. Oper. Res. 24, 9-28 (1990)
- Jian, JB: Fast Algorithms for Smooth Constrained Optimization - Theoretical Analysis and Numerical Experiments. Science Press, Beijing (2010) (in Chinese)
- Ma, GD, Jian, JB: An ε-generalized gradient projection method for nonlinear minimax problems. Nonlinear Dyn. 75, 693-700 (2014)
- Burke, JV, Moré, JJ: On the identification of active constraints. SIAM J. Numer. Anal. 25, 1197-1211 (1988)
- Facchinei, F, Fischer, A, Kanzow, C: On the accurate identification of active constraints. SIAM J. Optim. 9, 14-32 (1998)
- Oberlin, C, Wright, SJ: Active set identification in nonlinear programming. SIAM J. Optim. 17, 577-605 (2006)
- Han, DL, Jian, JB, Li, J: On the accurate identification of active set for constrained minimax problems. Nonlinear Anal. 74, 3022-3032 (2011)
- Karmitsa, N: Test problems for large-scale nonsmooth minimization. Reports of the Department of Mathematical Information Technology, Series B, Scientific Computing, No. B, 4 (2007)