A method with inertial extrapolation step for convex constrained monotone equations
Journal of Inequalities and Applications volume 2021, Article number: 189 (2021)
Abstract
In recent times, various algorithms have been incorporated with the inertial extrapolation step to speed up the convergence of the sequence generated by these algorithms. As far as we know, very few results exist regarding algorithms of the inertial derivative-free projection method for solving convex constrained monotone nonlinear equations. In this article, the convergence analysis of a derivative-free iterative algorithm (Liu and Feng in Numer. Algorithms 82(1):245–262, 2019) with an inertial extrapolation step for solving large scale convex constrained monotone nonlinear equations is studied. The proposed method generates a sufficient descent direction at each iteration. Under some mild assumptions, the global convergence of the sequence generated by the proposed method is established. Furthermore, some experimental results are presented to support the theoretical analysis of the proposed method.
1 Introduction
Our main aim in this paper is to find approximate solutions of systems of monotone nonlinear equations with convex constraints; precisely, the problem
$$ h(x) = 0, \quad x \in \mathcal{C}, $$
(1)
where \(h: \mathcal{R}^{n} \rightarrow \mathcal{R}^{n}\) is assumed to be a monotone and Lipschitz continuous operator, while \(\mathcal{C}\) is a nonempty, closed, and convex subset of \(\mathcal{R}^{n}\).
The monotone operator was first introduced by Minty [2]. The concept has aided several studies, such as the abstract study of electrical networks [2]. Interest in systems of monotone nonlinear equations with convex constraints (1) stems mainly from their applications in various fields, for instance, power flow equations [3], economic equilibrium problems [4], chemical equilibrium [5], and compressive sensing [6]. These applications have attracted the attention of many researchers. Thus, numerous iterative methods have been proposed to approximate solutions of (1) (see [7–35] and the references therein).
Among the early methods introduced and studied in the literature are Newton method, quasi-Newton method, Gauss–Newton method, Levenberg–Marquardt method, and their modifications (see, e.g., [36–39] and the references therein). These methods have fast local convergence but are not efficient for solving large scale nonlinear monotone equations, because they involve the computation of the Jacobian matrix or its approximation per iteration, which is well known to require a large amount of storage. To overcome this problem, various alternatives and modifications of the early methods have been proposed by several authors. Amongst these methods are conjugate gradient methods, spectral conjugate gradient methods, and spectral gradient methods. Extensions of the conjugate gradient method and its variant to solve large scale nonlinear equations have been obtained by several authors. For instance, motivated by the stability and efficiency of the Dai–Yuan (DY) conjugate gradient method [40] for solving unconstrained optimization problems, Liu and Feng [1] proposed a derivative-free projection method based on the structures of the DY conjugate gradient method [40]. This method inherits the stability of the DY method and greatly improves its computing performance.
In practical applications, it is always desirable to have iterative algorithms with a high rate of convergence [41–46]. An increasingly important acceleration technique is the class of inertial extrapolation type algorithms [47, 48]. They use an iterative procedure in which each new term is obtained from the preceding two terms. This idea was first introduced by Polyak [49] and was inspired by an implicit discretization of a second-order-in-time dissipative dynamical system, the so-called 'Heavy Ball with Friction':
$$ \ddot{x}(t) + \gamma \dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0, $$
(2)
where \(\gamma >0\) and \(f:\mathcal{R}^{n} \rightarrow \mathcal{R}\) is differentiable. System (2) is discretized so that, having the terms \(x_{k-1}\) and \(x_{k}\), the next term \(x_{k+1}\) can be determined using
$$ \frac{x_{k+1} - 2x_{k} + x_{k-1}}{j^{2}} + \gamma \frac{x_{k} - x_{k-1}}{j} + \nabla f(x_{k}) = 0, $$
(3)
where j is the step size. Equation (3) yields the following iterative algorithm:
$$ x_{k+1} = x_{k} + \beta (x_{k} - x_{k-1}) - \alpha \nabla f(x_{k}), $$
(4)
where \(\beta =1-\gamma j\), \(\alpha =j^{2}\) and \(\beta (x_{k}-x_{k-1})\) is called the inertial extrapolation term which is intended to speed up the convergence of the sequence generated by equation (4).
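To make the role of the inertial term concrete, here is a minimal numerical sketch (not from the paper) of iteration (4) on a toy quadratic \(f(x) = \frac{1}{2}x^{T}Ax\); the matrix A and the values of α and β below are illustrative choices only:

```python
import numpy as np

# Inertial (heavy-ball) iteration x_{k+1} = x_k - alpha*grad f(x_k)
# + beta*(x_k - x_{k-1}) on f(x) = 0.5 x^T A x, whose minimizer is 0.
# alpha (step size) and beta (inertial weight) follow equation (4);
# the specific values are illustrative, not taken from the paper.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x

alpha, beta = 0.1, 0.5
x_prev = x = np.array([5.0, -3.0])
for _ in range(200):
    # update the pair (x_k, x_{k-1}) simultaneously
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

print(np.linalg.norm(x))  # the iterates approach the minimizer x* = 0
```

With these parameters the iteration is stable and the inertial term \(\beta (x_{k}-x_{k-1})\) carries momentum across steps, which is the acceleration effect the text describes.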
Several algorithms with inertial extrapolation term have been tested in the solution of several problems (for example, imaging/data analysis problems and motion of a body in a potential field), and the test showed that the inertial steps remarkably increase the convergence speed of these algorithms (see [47, 48, 50] and other references therein). Therefore, this property is very important. As far as we know, there are not many results regarding algorithms of inertial derivative-free projection for solving (1).
Our concern now is the following: Based on the derivative-free iterative algorithm of Liu and Feng [1], can we construct an inertial derivative-free method for solving the system of monotone nonlinear equations with convex constraints?
In this paper, we give a positive answer to the aforementioned question. Motivated and inspired by the algorithm in [1], we introduce an inertial derivative-free algorithm for solving (1). Our proposed method is a combination of inertial extrapolation step and the derivative-free iterative method for nonlinear monotone equations with convex constraints [1]. We obtain the global convergence result under mild assumptions. Using a set of test problems, we illustrate the numerical behaviors of the algorithm in [1] and compare it with the algorithm presented in this paper. The results indicate that the proposed algorithm with the inertial step is superior in terms of the number of iterations and function evaluations.
The rest of the paper is organized as follows. The next section contains some preliminaries. The proposed inertial algorithm is presented in Sect. 3, and its convergence analysis is presented in the fourth section. The last section is devoted to the presentation of examples and numerical results.
2 Preliminaries
We recall some known definitions and results which will be used in the sequel. First, let us denote by \(\mathbf{SOL(h, \mathcal{C})}\) the solution set of (1).
Definition 2.1
Let \(\mathcal{C}\) be a nonempty closed convex subset of \(\mathcal{R}^{n}\). A mapping \(h: \mathcal{R}^{n} \rightarrow \mathcal{R}^{n}\) is said to be:
(i) monotone on \(\mathcal{C}\) if
$$ \bigl(h(x) - h(z)\bigr)^{T} (x-z) \geq 0, \quad \forall x,z \in \mathcal{C}; $$
(ii) L-Lipschitz continuous on \(\mathcal{C}\) if there exists \(L>0\) such that
$$ \bigl\Vert h(x) -h(z) \bigr\Vert \leq L \Vert x-z \Vert ,\quad \forall x, z \in \mathcal{C}. $$
Definition 2.2
Let \(\mathcal{C} \subset \mathcal{R}^{n}\) be a closed and convex set. For a vector \(x \in \mathcal{R}^{n}\), the orthogonal projection of x onto \(\mathcal{C}\), denoted by \(P_{\mathcal{C}}(x)\), is defined by
$$ P_{\mathcal{C}}(x) := \arg \min \bigl\{ \Vert x - z \Vert : z \in \mathcal{C} \bigr\} , $$
where \(\|x\| = \sqrt{x^{T} x}\).
The following lemma gives some well-known characteristics of the projection operator.
Lemma 2.3
Let \(\mathcal{C} \subset \mathcal{R}^{n}\) be a nonempty closed and convex set. Then the following statements hold:
(i) \((x -P_{\mathcal{C}}(x) )^{T}(P_{\mathcal{C}}(x) - z) \geq 0\), \(\forall x \in \mathcal{R}^{n}\), \(\forall z \in \mathcal{C}\).
(ii) \(\|P_{\mathcal{C}}(x) - P_{\mathcal{C}}(z) \| \leq \|x - z\|\), \(\forall x,z \in \mathcal{R}^{n}\).
(iii) \(\|P_{\mathcal{C}}(x) - z\|^{2} \leq \|x -z\|^{2} - \|x - P_{\mathcal{C}}({x}) \|^{2}\), \(\forall x \in \mathcal{R}^{n}\), \(\forall z \in \mathcal{C}\).
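As a concrete illustration of Definition 2.2 and Lemma 2.3(ii), consider the box \(\mathcal{C} = [-1,1]^{n}\), a simple closed convex set for which the projection has the closed form of componentwise clipping; the numerical check below is our own sketch, not part of the paper:

```python
import numpy as np

# For the box C = [l, u]^n, the minimizer in Definition 2.2 is
# obtained componentwise, so P_C(x) = clip(x, l, u).
l, u = -1.0, 1.0
P = lambda x: np.clip(x, l, u)

# Check Lemma 2.3(ii): the projection is nonexpansive,
# ||P_C(x) - P_C(z)|| <= ||x - z||, on random sample pairs.
rng = np.random.default_rng(0)
for _ in range(100):
    x, z = rng.normal(size=5) * 3, rng.normal(size=5) * 3
    assert np.linalg.norm(P(x) - P(z)) <= np.linalg.norm(x - z) + 1e-12
print("nonexpansiveness holds on all samples")
```

Nonexpansiveness is what makes the projection step of derivative-free methods safe: it can never push an iterate farther from the solution set.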
Lemma 2.4
([51])
Let \(\mathcal{R}^{n}\) be a Euclidean space. Then the following inequality holds:
Lemma 2.5
([52])
Let \(\{x_{k}\}\) and \(\{z_{k}\}\) be sequences of nonnegative real numbers satisfying the relation
$$ x_{k+1} \leq x_{k} + z_{k}, \quad \forall k \geq 1. $$
If \(\sum_{k =1}^{\infty }z_{k} < \infty \), then \(\lim_{k \rightarrow \infty } x_{k}\) exists.
3 Proposed method
Based on the derivative-free iterative method of Liu and Feng [1] for monotone nonlinear equations with convex constraints, we now present an inertial extrapolation algorithm for solving the system of nonlinear monotone equations (1). The corresponding algorithm, which we refer to as the inertial projected Dai–Yuan (IPDY) algorithm, tracks the optimal x-value by starting with an initial value \(x_{0}\) and thereafter updating x by performing iterations of the form
$$ x_{k+1} = x_{k} + \alpha _{k} d_{k}, $$
(5)
where \(\alpha _{k}\) is a positive step size obtained by a line search procedure, and \(d_{k}\) is a search direction implemented so that the sufficient descent condition
$$ h(x_{k})^{T} d_{k} \leq -c \bigl\Vert h(x_{k}) \bigr\Vert ^{2}, \quad c > 0, $$
(6)
is fulfilled. Next, we give a precise statement of our method.
Algorithm 1
(Inertial projected Dai–Yuan algorithm (IPDY))
(S.0) Choose \(x_{0}, x_{1} \in \mathcal{C}\), \(\mathit{Tol} \in (0,1)\), \(a \in (0,1]\), \(\sigma > 0\), \(\theta \in [0,1)\), \(r \in (0,1)\). Set \(k := 1\).

(S.1) Compute
$$ w_{k} = x_{k} + \theta _{k} (x_{k} - x_{k-1}), $$
where \(0\leq \theta _{k} \leq \tilde{\theta _{k}}\) with
$$ \tilde{\theta _{k}} := \textstyle\begin{cases} \min \{ \theta , \frac{1}{k^{2} \Vert x_{k} -x_{k-1} \Vert ^{2}} \} & \text{if $x_{k} \neq x_{k-1}$}, \\ \theta & \text{otherwise}. \end{cases} $$
(7)

(S.2) Compute \(h(w_{k})\). If \(\|h(w_{k})\| \leq \mathit{Tol}\), stop. Otherwise, generate the search direction \(d_{k}\) by
$$ d_{k} := \textstyle\begin{cases} -h(w_{k}) & \text{if $k =1$}, \\ -\zeta _{k} h(w_{k}) + \beta _{k}^{\mathrm{IPDY}} d_{k-1} & \text{if $k>1$}, \end{cases} $$
(8)
where
$$ \begin{aligned} & \beta _{k}^{\mathrm{IPDY}} := \frac{ \Vert h(w_{k}) \Vert ^{2}}{d_{k-1}^{T}y_{k-1}},\qquad \zeta _{k}: = c_{0} + \frac{h(w_{k})^{T} d_{k-1}}{d_{k-1}^{T}y_{k-1}},\quad c_{0} >0, \\ & v_{k-1} := h(w_{k}) - h(w_{k-1}), \\ & y_{k-1}:=v_{k-1}+t_{k-1} d_{k-1},\qquad t_{k-1}:=1+\max \biggl\{ 0,-\frac{d_{k-1}^{T} v_{k-1}}{d_{k-1}^{T} d_{k-1}} \biggr\} . \end{aligned} $$
(9)

(S.3) Find \(z_{k} = w_{k} + \alpha _{k} d_{k}\), where \(\alpha _{k} = a r^{i}\) with i being the smallest nonnegative integer such that
$$ {-h(w_{k} + \alpha _{k} d_{k})^{T} d_{k}} \geq \sigma \alpha _{k} \bigl\Vert h(w_{k} + \alpha _{k} d_{k}) \bigr\Vert \Vert d_{k} \Vert ^{2}. $$
(10)

(S.4) If \(z_{k} \in \mathcal{C}\) and \(\|h(z_{k})\| \leq \mathit{Tol}\), stop. Otherwise, compute the next iterate by
$$ x_{k+1} = P_{\mathcal{C}} \bigl[w_{k} - \lambda _{k} h(z_{k}) \bigr], $$
(11)
where
$$ \lambda _{k} := \frac{ h(z_{k})^{T} (w_{k} - z_{k})}{ \Vert h(z_{k}) \Vert ^{2}}. $$

(S.5) Set \(k \leftarrow k+1\), and return to (S.1).
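Purely as an illustration of steps (S.0)–(S.5), the following sketch instantiates the algorithm on a toy problem of our own choosing: the monotone, Lipschitz continuous mapping \(h(x) = x + \sin x\) (componentwise) with \(\mathcal{C} = \mathcal{R}^{n}_{+}\), whose projection is \(P_{\mathcal{C}}(x) = \max (x, 0)\) and whose unique solution is \(x^{*} = 0\). Parameter values follow the experimental section; everything else is an assumption for the demo, not the authors' code.

```python
import numpy as np

def h(x):
    # a monotone, Lipschitz test mapping (our assumption); solution x* = 0
    return x + np.sin(x)

def proj(x):
    # projection onto C = R^n_+ (nonnegative orthant)
    return np.maximum(x, 0.0)

def ipdy(x0, x1, tol=1e-6, a=1.0, r=0.7, sigma=0.01, theta=0.8, c0=1.0,
         max_iter=1000):
    x_prev, x = x0, x1
    d_prev, hw_prev = None, None
    for k in range(1, max_iter + 1):
        # (S.1): inertial extrapolation with theta_k chosen as in (7)
        diff = x - x_prev
        nd = float(diff @ diff)
        theta_k = theta if nd == 0 else min(theta, 1.0 / (k ** 2 * nd))
        w = x + theta_k * diff
        hw = h(w)
        if np.linalg.norm(hw) <= tol:              # (S.2) stopping test
            return w, k
        if k == 1:
            d = -hw
        else:
            v = hw - hw_prev                        # v_{k-1} in (9)
            dd = float(d_prev @ d_prev)
            t = 1.0 + max(0.0, -float(d_prev @ v) / dd)   # t_{k-1}
            dTy = float(d_prev @ v) + t * dd        # d_{k-1}^T y_{k-1}
            beta = float(hw @ hw) / dTy             # beta_k^{IPDY}
            zeta = c0 + float(hw @ d_prev) / dTy    # zeta_k
            d = -zeta * hw + beta * d_prev          # direction (8)
        # (S.3): backtracking line search for the largest a*r^i meeting (10)
        alpha = a
        while True:
            z = w + alpha * d
            hz = h(z)
            if float(-hz @ d) >= sigma * alpha * np.linalg.norm(hz) * float(d @ d):
                break
            alpha *= r
        if np.all(z >= 0) and np.linalg.norm(hz) <= tol:  # (S.4) stopping test
            return z, k
        lam = float(hz @ (w - z)) / float(hz @ hz)  # lambda_k
        x_prev, x = x, proj(w - lam * hz)           # projection step (11)
        d_prev, hw_prev = d, hw
    return x, max_iter

x_star, iters = ipdy(np.full(5, 2.0), np.full(5, 1.5))
print(np.linalg.norm(h(x_star)), iters)
```

On this toy instance the residual \(\|h(\cdot )\|\) drops below the tolerance within a handful of iterations; note that the projection step (11) only ever requires function values of h, never derivatives.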
Remark 3.1
For all \(k\geq 1\), it can be observed from equation (7) that \(\theta _{k} \|x_{k} -x_{k-1}\|^{2} \leq \frac{1}{k^{2}}\). This implies that
$$ \sum_{k=1}^{\infty } \theta _{k} \Vert x_{k} -x_{k-1} \Vert ^{2} \leq \sum_{k=1}^{\infty } \frac{1}{k^{2}} < \infty . $$
Throughout this paper, we make use of the following assumptions.
Assumption 1
(A1) The solution set \(\mathbf{SOL(h, \mathcal{C})}\) of (1) is nonempty.
(A2) h is monotone on \(\mathcal{C}\).
(A3) h is Lipschitz continuous on \(\mathcal{C}\).
4 Convergence result
In this section, convergence analysis of our algorithm is presented. We start by proving some lemmas followed by the proof of the main theorem.
Lemma 4.1
Let \(d_{k}\) be generated by Algorithm 1. Then \(d_{k}\) always satisfies the sufficient descent condition, that is,
$$ h(w_{k})^{T} d_{k} \leq -c \bigl\Vert h(w_{k}) \bigr\Vert ^{2}, \quad \text{where } c := \min \{1, c_{0}\}. $$
(12)
Proof
For \(k=1\), multiplying both sides of (8) by \(h(w_{1})^{T}\), we have
Also for \(k> 1\), multiplying both sides of (8) by \(h(w_{k})^{T}\), we get
□
Remark 4.2
From the definition of \(y_{k-1}\) and \(t_{k-1}\), it holds that
then from (12) we have
This indicates that \(d_{k-1}^{T}y_{k-1}\) is always positive when the solution of (1) is not achieved, which means that the parameters \(\zeta _{k}\) and \(\beta _{k}\) are well defined.
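The inequality behind Remark 4.2 can be checked symbolically: \(d^{T}y = d^{T}v + t\,d^{T}d \geq d^{T}d > 0\) by the choice of \(t_{k-1}\) in (9). The small numerical sanity check below (our own, not part of the paper) confirms this bound on random vectors:

```python
import numpy as np

# Numerical check of Remark 4.2: with t = 1 + max(0, -d^T v / d^T d),
# the quantity d^T y = d^T v + t * d^T d is bounded below by d^T d > 0,
# regardless of the sign of d^T v.
rng = np.random.default_rng(1)
for _ in range(1000):
    d = rng.normal(size=8)
    v = rng.normal(size=8)
    t = 1.0 + max(0.0, -(d @ v) / (d @ d))
    dTy = d @ v + t * (d @ d)
    assert dTy >= d @ d - 1e-9
print("d^T y >= ||d||^2 on all samples")
```

When \(d^{T}v \geq 0\) the bound holds with \(t = 1\); when \(d^{T}v < 0\) the safeguard term cancels it exactly, leaving \(d^{T}y = d^{T}d\).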
Lemma 4.3
The line search condition (10) is well defined. That is, for all \(k\geq 1\), there exists a nonnegative integer i satisfying (10).
Proof
The proof of Lemma 4.3 can be obtained in the same way as [1] with the difference that the sequence \(\{x_{k}\}\) is replaced with the inertial extrapolation term \(w_{k}\). □
Lemma 4.4
Suppose that h is a monotone and Lipschitz continuous mapping, and let \(\{w_{k}\}\) and \(\{z_{k}\}\) be the sequences generated by Algorithm 1. Then
Proof
From the line search (10), if \(\alpha _{k} \neq a\), then \(\alpha _{k} r^{-1}\) does not satisfy (10). That is,
This fact, in combination with the Lipschitz continuity assumption (A3) and the sufficient descent condition (12), expresses
This yields the desired inequality (13). □
Lemma 4.5
Let \(\{x_{k}\}\) and \(\{z_{k}\}\) be generated by Algorithm 1. If \(x^{*} \in \mathbf{SOL(h, \mathcal{C})}\), then under Assumption 1, it holds that
Moreover, the sequence \(\{x_{k}\}\) is bounded and
Proof
By the monotonicity of the mapping h, we have
By Lemma 2.3(iii), (16), and (17), it holds that, for any \(x^{*} \in \mathbf{SOL}(\mathbf{h}, \mathcal{C})\),
From inequality (18), we can deduce that
From Remark 3.1, noting that \(\sum_{k=1}^{\infty } \theta _{k} \|x_{k} - x_{k-1}\| < \infty \), by Lemma 2.5, we deduce that the sequence \(\{\|x_{k} -x^{*}\|\}\) is bounded by a positive number, say \(M_{0}\). Therefore, for all k, we have that
Thus, we can infer that \(\|x_{k} - x_{k-1}\| \leq 2M_{0}\). Using the aforementioned facts, we have
Combining (21) with (18), we have
Thus, we have
Adding (23) for \(k =1,2,3,\ldots \) , we have
But \(\sum_{k=1}^{\infty } ( \|x_{k} -x^{*}\|^{2} -\|x_{k+1} -x^{*} \|^{2} )\) is finite since the sequence \(\{\|x_{k+1} -x^{*}\|\}\) is convergent and \(\sum_{k=1}^{\infty }\theta _{k} \|x_{k} -x_{k-1}\| < \infty \). It implies that
Therefore,
□
Remark 4.6
By the definition of \(\{z_{k}\}\) and (24), we have
Theorem 4.7
Suppose that the conditions of Assumption 1 hold. If \(\{x_{k}\}\) is the sequence generated by (11) in Algorithm 1, then
Furthermore, \(\lbrace x_{k} \rbrace \) converges to a solution of (1).
Proof
We first prove that
Suppose that equality (27) does not hold. Then there exists a constant \(\varepsilon >0\) such that
This fact, in combination with the sufficient descent condition (12), implies that
This shows that
On the other hand, by the Lipschitz continuity assumption (A3) and (20), we have
By using the Cauchy–Schwarz inequality, Remark 4.2, and (28), it follows from (8)–(9) that, for all \(k> 1\),
Then we get from (13) that
which contradicts (29). Thus, (27) holds. Now, since we know that
by the continuity of h, we have that
From the continuity of h, the boundedness of \(\{x_{k}\}\), and (32), the sequence \(\{x_{k}\}\) generated by Algorithm 1 has an accumulation point \(x^{*}\) such that \(h(x^{*})=0\). On the other hand, the sequence \(\{\|x_{k}-x^{*} \|\}\) is convergent by Lemma 2.5, which means that the whole sequence \(\{x_{k}\}\) converges globally to the solution \(x^{*}\) of system (1). □
5 Numerical experiments
In this section, an efficiency comparison between the proposed method, called IPDY, and the method proposed by Liu and Feng in [1], called PDY, is presented. Recall that IPDY modifies PDY by introducing the inertial term. The metrics considered for the comparison are the number of iterations (NI) and the number of function evaluations (NF); the method with the smallest NI and NF is considered the better one. The following settings were used for the experimental comparison:
- Dimensions: 1000, 5000, \(10\text{,}000\), \(50\text{,}000\), \(100\text{,}000\).
- Parameters: for IPDY, we select \(\theta =0.8\), \(a=1\), \(r=0.7\), \(\sigma =0.01\), \(c_{0} =1\); for PDY, all parameters are selected as in [1].
- Terminating criterion: \(\|h(w_{k})\|\leq 10^{-6}\).
- Implementation software: all methods are coded in MATLAB R2019b and run on a PC with an Intel Core i3 processor, 8 GB of RAM, and a 2.30 GHz CPU.
The two methods were compared based on the following test problems, where \(h=(h_{1},h_{2}, \ldots ,h_{n})^{T}\).
Problem 1
(Modified exponential function [53])
Problem 2
(Logarithmic function [53])
Problem 3
(Nonsmooth function [54])
Problem 4
([55])
Problem 5
(Strictly convex function I [53])
Problem 6
(Strictly convex function II [53])
Problem 7
(Tridiagonal exponential function [53])
Problem 8
(Nonsmooth function II [56])
Problem 9
(Trig-Exp function [57])
Problem 10
(Penalty function I [58])
The seven starting points used are listed in Table 1.
The numerical results are given in Tables 2–11 in the Appendix for the sake of comparison. From the tables, it can be observed that the IPDY method has lower NI and NF than PDY on most of the problems. This is the result of the inertial effect possessed by the IPDY method. For all initial points used, the IPDY method was able to solve the test problems, except that, for Problem 3 with the randomly selected initial points, it failed for dimensions 5000 and \(10\text{,}000\). Overall, to visualize the performance of IPDY versus PDY, we employ the well-known performance profiles of Dolan and Moré [59] defined as
$$ \rho _{q}(\tau ) := \frac{1}{ \vert T_{P} \vert } \bigl\vert \bigl\{ p \in T_{P} : r_{p,q} \leq \tau \bigr\} \bigr\vert ,\qquad r_{p,q} := \frac{t_{p,q}}{\min \{ t_{p,q} : q \in Q \}}, $$
where \(T_{P}\) is the test set, \(|T_{P}|\) is the number of problems in \(T_{P}\), Q is the set of solvers, and \(t_{p,q}\) is the NI (or the NF) required by solver \(q \in Q\) on problem \(p \in T_{P}\). Figures 1 and 2 were obtained using these performance profiles.
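As an illustration of how the profile \(\rho _{q}(\tau )\) is computed, the sketch below evaluates it on a small made-up table of iteration counts (rows are problems, columns are solvers; the numbers are invented for the example, not the paper's results):

```python
import numpy as np

# Performance profile of Dolan and More: for each solver q,
# rho_q(tau) is the fraction of problems on which q is within a
# factor tau of the best solver, measured here by NI.
ni = np.array([[10, 14],
               [25, 20],
               [8, 30],
               [40, 41]], dtype=float)

def profile(t, taus):
    # ratio of each solver's cost to the best cost on that problem
    ratios = t / t.min(axis=1, keepdims=True)
    # rho_q(tau): fraction of problems with ratio <= tau, per solver
    return np.array([[np.mean(ratios[:, q] <= tau) for q in range(t.shape[1])]
                     for tau in taus])

print(profile(ni, [1.0, 1.5, 2.0]))
```

Reading the output: the value at \(\tau = 1\) is the fraction of problems each solver wins outright, and a curve that reaches 1 at small τ indicates a robust, efficient solver, which is how Figs. 1 and 2 are interpreted.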
From Figs. 1 and 2, the IPDY method has the least NI and NF in over 80% of the problems, respectively, as can be seen on the y-axis of the plots. In conclusion, the purpose of introducing the inertial effect was achieved, as the IPDY method recorded the lowest number of iterations and function evaluations.
6 Conclusion
The paper has proposed an inertial derivative-free algorithm, called IPDY, for solving systems of monotone nonlinear equations with convex constraints in the Euclidean space. Under some suitable conditions imposed on parameters, we established the global convergence of the algorithm. In all our comparisons, the numerical results as shown in Tables 2–11 and Figs. 1, 2 demonstrate that our method converges faster and is more efficient than the PDY algorithm. In the future, we plan to study different variants of derivative-free methods with the inertial extrapolation step and apply them in various directions like image deblurring and signal processing problems.
Availability of data and materials
Not applicable.
References
Liu, J., Feng, Y.: A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 82(1), 245–262 (2019)
Minty, G.J.: Monotone networks. Proc. R. Soc. Lond. Ser. A, Math. Phys. Sci. 257(1289), 194–212 (1960)
Wood, A.J., Wollenberg, B.F.: Power Generation, Operation and Control, p. 592. Wiley, New York (1996)
Dirkse, S.P., Ferris, M.C.: MCPLIB: a collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 5(4), 319–345 (1995)
Meintjes, K., Morgan, A.P.: A methodology for solving chemical equilibrium systems. Appl. Math. Comput. 22(4), 333–361 (1987)
Figueiredo, M.A., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1(4), 586–597 (2007)
Abubakar, A.B., Kumam, P., Ibrahim, A.H., Chaipunya, P., Rano, S.A.: New hybrid three-term spectral-conjugate gradient method for finding solutions of nonlinear monotone operator equations with applications. Math. Comput. Simul. (2021, in press)
Ibrahim, A.H., Deepho, J., Bala Abubakar, A., Adamu, A.: A three-term Polak–Ribière–Polyak derivative-free method and its application to image restoration. Sci. Afr. 13, e00880 (2021). https://www.sciencedirect.com/science/article/pii/S2468227621001848
Ibrahim, A.H., Kumam, P., Hassan, B.A., Abubakar, A.B., Abubakar, J.: A derivative-free three-term Hestenes–Stiefel type method for constrained nonlinear equations and image restoration. Int. J. Comput. Math. (2021). https://doi.org/10.1080/00207160.2021.1946043
Ibrahim, A.H., Deepho, J., Abubakar, A.B., Aremu, K.O.: A modified Liu–Storey-conjugate descent hybrid projection method for convex constrained nonlinear equations and image restoration. Numer. Algebra Control Optim. (2021). https://doi.org/10.3934/naco.2021022
Ibrahim, A.H., Garba, A.I., Usman, H., Abubakar, J., Abubakar, A.B.: Derivative-free RMIL conjugate gradient method for convex constrained equations. Thai J. Math. 18(1), 212–232 (2019)
Abubakar, A.B., Rilwan, J., Yimer, S.E., Ibrahim, A.H., Ahmed, I.: Spectral three-term conjugate descent method for solving nonlinear monotone equations with convex constraints. Thai J. Math. 18(1), 501–517 (2020)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Jirakitpuwapat, W., Abubakar, J.: A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing. Heliyon 6(3), e03466 (2020)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Abubakar, J., Muhammad, A.B.: Least-square-based three-term conjugate gradient projection method for \(\ell _{1}\)-norm problems with application to compressed sensing. Mathematics 8(4), 602 (2020)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Yusuf, U.B., Rilwan, J.: Derivative-free conjugate residual algorithms for convex constraints nonlinear monotone equations and signal recovery. J. Nonlinear Convex Anal. 21(9), 1959–1972 (2020)
Abubakar, A.B., Ibrahim, A.H., Muhammad, A.B., Tammer, C.: A modified descent Dai–Yuan conjugate gradient method for constraint nonlinear monotone operator equations. Appl. Anal. Optim. 4, 1–24 (2020)
Abubakar, A.B., Kumam, P., Ibrahim, A.H., Rilwan, J.: Derivative-free HS–DY-type method for solving nonlinear equations and image restoration. Heliyon 6(11), e05400 (2020)
Ibrahim, A.H., Kumam, P., Kumam, W.: A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration. IEEE Access 8, 162714–162729 (2020)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Yusuf, U.B., Yimer, S.E., Aremu, K.O.: An efficient gradient-free projection algorithm for constrained nonlinear equations and image restoration. AIMS Math. 6(1), 235 (2020)
Abubakar, A.B., Muangchoo, K., Ibrahim, A.H., Muhammad, A.B., Jolaoso, L.O., Aremu, K.O.: A new three-term Hestenes–Stiefel type method for nonlinear monotone operator equations and image restoration. IEEE Access 9, 18262–18277 (2021)
Ibrahim, A.H., Muangchoo, K., Mohamed, N.S., Abubakar, A.B.: Derivative-free SMR conjugate gradient method for constraint nonlinear equations. J. Math. Comput. Sci. 24(2), 147–164 (2022)
Abubakar, A.B., Muangchoo, K., Ibrahim, A.H., Abubakar, J., Rano, S.A.: FR-type algorithm for finding approximate solutions to nonlinear monotone operator equations. Arab. J. Math. 10, 261–270 (2021)
Abubakar, A.B., Kumam, P., Mohammad, H., Ibrahim, A.H.: PRP-like algorithm for monotone operator equations. Jpn. J. Ind. Appl. Math. 38, 805–822 (2021)
Ibrahim, A.H., Muangchoo, K., Abubakar, A.B., Adedokun, A.D., Mohammed, H.: Spectral conjugate gradient like method for signal reconstruction. Thai J. Math. 18(4), 2013–2022 (2020)
Ibrahim, A.H., Kumam, P.: Re-modified derivative-free iterative method for nonlinear monotone equations with convex constraints. Ain Shams Eng. J. 12(2), 2205–2210 (2021)
Mohammad, H.: Barzilai–Borwein-like method for solving large-scale non-linear systems of equations. J. Niger. Math. Soc. 36(1), 71–83 (2017)
Abubakar, A.B., Kumam, P.: A descent Dai–Liao conjugate gradient method for nonlinear equations. Numer. Algorithms 81(1), 197–210 (2019)
Abubakar, A.B., Kumam, P.: An improved three-term derivative-free method for solving nonlinear equations. Comput. Appl. Math. 37(5), 6760–6773 (2018)
Abubakar, A.B., Muangchoo, K., Ibrahim, A.H., Fadugba, S.E., Aremu, K.O., Jolaoso, L.O.: A modified scaled spectral-conjugate gradient-based algorithm for solving monotone operator equations. J. Math. 2021, Article ID 5549878 (2021)
Waziri, M.Y., Ahmed, K., Sabi’u, J.: A family of Hager–Zhang conjugate gradient methods for system of monotone nonlinear equations. Appl. Math. Comput. 361, 645–660 (2019)
Waziri, M., Ahmed, K., Sabi’u, J.: A Dai–Liao conjugate gradient method via modified secant equation for system of nonlinear equations. Arab. J. Math. 9, 443–457 (2020)
Sabi’u, J., Shah, A., Waziri, M.Y., Ahmed, K.: Modified Hager–Zhang conjugate gradient methods via singular value analysis for solving monotone nonlinear equations with convex constraint. Int. J. Comput. Methods 2020, 2050043 (2020)
Waziri, M.Y., Hungu, K.A., Sabi'u, J.: Descent Perry conjugate gradient methods for systems of monotone nonlinear equations. Numer. Algorithms 85(3), 763–785 (2020)
Waziri, M.Y., Muhammad, H.U., Halilu, A.S., Ahmed, K.: Modified matrix-free methods for solving system of nonlinear equations. Optimization (2020). https://doi.org/10.1080/02331934.2020.1778689
Halilu, A.S., Majumder, A., Waziri, M.Y., Ahmed, K.: Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach. Math. Comput. Simul. 187, 520–539 (2021)
Dennis, J.E., Moré, J.J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 28(126), 549–560 (1974)
Li, D., Fukushima, M.: A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal. 37(1), 152–172 (1999)
Zhou, G., Toh, K.C.: Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 125(1), 205–221 (2005)
Zhou, W.J., Li, D.H.: A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 77(264), 2231–2240 (2008)
Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)
Chen, P., Huang, J., Zhang, X.: A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 29(2), 025011 (2013)
Iiduka, H.: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22(3), 862–878 (2012)
Jolaoso, L.O., Alakoya, T., Taiwo, A., Mewomo, O.: Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space. Optimization 70(2), 387–412 (2021)
Abubakar, J., Kumam, P., Ibrahim, A.H., Padcharoen, A.: Relaxed inertial Tseng's type method for solving the inclusion problem with application to image restoration. Mathematics 8(5), 818 (2020)
Abubakar, J., Kumam, P., Ibrahim, A.H., et al.: Inertial iterative schemes with variable step sizes for variational inequality problem involving pseudomonotone operator. Mathematics 8(4), 609 (2020)
Abubakar, J., Sombut, K., Ibrahim, A.H., et al.: An accelerated subgradient extragradient algorithm for strongly pseudomonotone variational inequality problems. Thai J. Math. 18(1), 166–187 (2019)
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9(1), 3–11 (2001)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Ogwo, G., Izuchukwu, C., Mewomo, O.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 88, 1419–1456 (2021)
Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
Auslender, A., Teboulle, M., Ben-Tiba, S.: A logarithmic-quadratic proximal method for variational inequalities. In: Computational Optimization, pp. 31–40. Springer, Berlin (1999)
La Cruz, W., Martínez, J.M., Raydan, M.: Spectral residual method without gradient information for solving large-scale nonlinear systems: theory and experiments. Citeseer. Technical report RT-04-08 (2004). https://www.ime.unicamp.br/~martinez/lmrreport.pdf
Li, Q., Li, D.H.: A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J. Numer. Anal. 31(4), 1625–1635 (2011)
La Cruz, W.: A spectral algorithm for large-scale systems of nonlinear monotone equations. Numer. Algorithms 76(4), 1109–1130 (2017)
Yu, Z., Lin, J., Sun, J., Xiao, Y., Liu, L., Li, Z.: Spectral gradient projection method for monotone nonlinear equations with convex constraints. Appl. Numer. Math. 59(10), 2416–2423 (2009)
Lukšan, L., Matonoha, C., Vlcek, J.: Problems for nonlinear least squares and nonlinear equations. Technical report (2018)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Jirakitpuwapat, W., Abubakar, J.: A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing. Heliyon 6(3), e03466 (2020)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)
Acknowledgements
We are grateful to the anonymous referees for their useful comments which have made the paper clearer and more comprehensive than the earlier version. The first author was supported by the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi” (Grant no. 16/2561). The authors also acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research project is supported by National Council of Thailand (NRCT) under Research Grants for Talented Mid-Career Researchers (Contract no. N41A640089). The third author acknowledges with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University.
Funding
No funding received.
Author information
Contributions
All authors contributed equally to the manuscript and read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Appendix
Appendix
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ibrahim, A.H., Kumam, P., Abubakar, A.B. et al. A method with inertial extrapolation step for convex constrained monotone equations. J Inequal Appl 2021, 189 (2021). https://doi.org/10.1186/s13660-021-02719-3