
Dai-Kou type conjugate gradient methods with a line search only using gradient

Abstract

In this paper, Dai-Kou type conjugate gradient methods are developed to solve the first-order optimality condition of an unconstrained optimization problem; they utilize only gradient information and therefore have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which also uses only gradients, show that the proposed methods are efficient.

1 Introduction

Consider the following problem of finding \(x\in R^{n}\) such that

$$ g(x)=0, $$
(1)

where \(g:R^{n}\rightarrow R^{n}\) is continuous. Throughout this paper, problem (1) corresponds to the first-order optimality condition of the unconstrained optimization

$$ \min f(x), $$
(2)

where \(f:R^{n}\rightarrow R\) is the function whose gradient is g.

Conjugate gradient methods are very efficient for solving large-scale instances of problem (2), when f is known, owing to their simple iterations and low memory requirements. For any given starting point \(x_{0}\in R^{n}\), an iterative sequence \(\{ x_{k}\}\) is generated by the following form:

$$ x_{k+1}=x_{k}+\alpha_{k} d_{k}, $$
(3)

where \(\alpha_{k}\) is a step-length obtained by some line search, and \(d_{k}\) is a search direction generated by

$$ d_{k}= \textstyle\begin{cases} -g_{k}, &\text{if }k=0, \\ -g_{k}+\beta_{k} d_{k-1}, &\text{if }k\geq1, \end{cases} $$
(4)

where \(g_{k}=g(x_{k})\). Different choices of the parameter \(\beta_{k}\) in (4) lead to different nonlinear conjugate gradient methods. The Fletcher-Reeves [1], Hestenes-Stiefel [2], Polak-Ribière-Polyak [3, 4], Dai-Yuan [5] and Liu-Storey [6] formulas, among others, are well-known formulas for \(\beta_{k}\). In particular, conjugate gradient methods satisfying the following (sufficient) descent condition

$$ g_{k}^{T} d_{k}\leq-c \|g_{k}\|^{2}, \quad \forall k\geq0, c>0, $$
(5)

are very important and tend to be more efficient in practice.
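For readers less familiar with this framework, the following minimal Python sketch (illustrative only, and not the method proposed in this paper) shows the recursion (3)-(4) with the classical Fletcher-Reeves and Polak-Ribière-Polyak choices of \(\beta_{k}\); the function names and the use of NumPy arrays are our own assumptions.

```python
import numpy as np

def beta_fr(g_new, g_old):
    # Fletcher-Reeves: beta_k = ||g_k||^2 / ||g_{k-1}||^2
    return np.dot(g_new, g_new) / np.dot(g_old, g_old)

def beta_prp(g_new, g_old):
    # Polak-Ribiere-Polyak: beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
    return np.dot(g_new, g_new - g_old) / np.dot(g_old, g_old)

def cg_direction(g_new, g_old=None, d_old=None, beta_rule=beta_prp):
    # Recursion (4): d_0 = -g_0 and d_k = -g_k + beta_k d_{k-1} for k >= 1.
    if d_old is None:
        return -g_new
    return -g_new + beta_rule(g_new, g_old) * d_old
```

Swapping the `beta_rule` argument is all that distinguishes the classical variants in such a sketch.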

Recently, Dai and Kou [7] designed a family of conjugate gradient methods for unconstrained nonlinear problems whose search directions are close to the direction of the scaled memoryless BFGS method. More importantly, these directions satisfy the sufficient descent condition (5). Numerical experiments illustrated that the Dai-Kou type conjugate gradient methods are more efficient than the Hager-Zhang type methods presented by Hager and Zhang [8, 9]. For other descent conjugate gradient methods proposed by researchers, please see [7, 9–11] and the references therein.

For conjugate gradient methods, the line search plays an important role in the global convergence. In general, the weak Wolfe line search,

$$\begin{aligned}& f(x_{k}+\alpha_{k} d_{k}) \leq f(x_{k})+\delta\alpha_{k} g_{k}^{T} d_{k}, \end{aligned}$$
(6)
$$\begin{aligned}& \sigma g_{k}^{T} d_{k} \leq g(x_{k}+ \alpha_{k} d_{k})^{T} d_{k}, \end{aligned}$$
(7)

where \(0<\delta<\sigma<1\), was used to obtain the step-length \(\alpha _{k}\). Hager and Zhang [9] showed that the first condition (6) may never be satisfied because of numerical errors (see also [7]). Thus, in order to avoid this numerical drawback of the weak Wolfe line search, they proposed the approximate Wolfe conditions [8, 9], which are a combination of the weak Wolfe line search and

$$ \sigma g_{k}^{T} d_{k}\leq g(x_{k}+\alpha_{k} d_{k})^{T} d_{k}\leq(2\delta-1) g_{k}^{T} d_{k}, $$
(8)

where \(0<\delta<1/2\) and \(\delta<\sigma<1\). Numerical tests showed that the combined line search performs well, but there is no theory guaranteeing its global convergence. Dai and Kou [7] then proposed an improved Wolfe line search, in which the step-length \(\alpha_{k}\) satisfies (7) and

$$ f(x_{k}+\alpha_{k} d_{k})\leq f(x_{k})+\min\bigl\{ \epsilon \bigl\vert g_{k}^{T} d_{k} \bigr\vert , \delta \alpha_{k} \bigl\vert g_{k}^{T} d_{k} \bigr\vert +\eta_{k} \bigr\} , $$
(9)

where \(0<\delta<\sigma<1\), \(\epsilon>0\) is a constant parameter and \(\{\eta_{k}\}\) is a positive sequence satisfying \(\sum_{k\geq0}\eta_{k}<+\infty\). With the improved Wolfe line search, the global convergence of Dai-Kou type conjugate gradient methods was guaranteed.

Although the Hager-Zhang type and Dai-Kou type conjugate gradient methods are efficient in solving problem (2), their implementation requires function evaluations. The goal of this paper is to solve problem (1), which is more general and includes some systems of nonlinear equations, such as those arising from boundary value problems [12]. We therefore aim to improve the Dai-Kou type conjugate gradient methods so that they solve problem (1) directly while retaining their high numerical efficiency. More recently, Dong [13] embedded an Armijo-type line search using only gradients into the PRP+ conjugate gradient method [14] to solve problem (1); the step-length \(\alpha_{k}\) satisfies

$$ g(x_{k}+\alpha_{k} d_{k})^{T} d_{k}+\frac{1}{2}\max\{-\mu_{k},0\} \alpha_{k}\| d_{k}\|^{2}\leq\sigma g_{k}^{T} d_{k}, $$
(10)

where \(\mu_{k}\) is a determined real number and \(0<\sigma<1\). This line search, however, may admit very small step-lengths \(\alpha_{k}\). In order to avoid this drawback, Dong [15] considered the following line search:

$$ \sigma g_{k}^{T} d_{k} \leq g(x_{k}+\alpha_{k} d_{k})^{T} d_{k} \leq\delta g_{k}^{T} d_{k}, $$
(11)

where \(0<\delta<\sigma<1\). Motivated by the work in [15], we embed the line search (11) into the Dai-Kou type conjugate gradient methods. The improved methods of this paper then have several advantages: they retain the positive features of the Dai-Kou type methods for problem (2), they can solve the nonlinear optimization (2) using only gradient information, and they can be applied to some systems of nonlinear equations, such as those arising in boundary value problems.

The rest of this paper is organized as follows. In the next section, we briefly review the Dai-Kou type conjugate gradient methods for unconstrained minimization and develop them to solve problem (1). In Section 3, we prove the global convergence of the improved methods under suitable conditions. In Section 4, we select two classes of test problems: one class is composed of test problems from the CUTEst test environment, and the other consists of some boundary value problems. The numerical performance confirms the broader application scope of the improved methods and is compared with that of the PRP+ conjugate gradient method in [13]. Finally, some conclusions are given in Section 5.

2 Algorithm

In this section, we describe the details of the proposed methods. First, we briefly review the Dai-Kou type conjugate gradient methods in the setting of unconstrained minimization (2). We have mentioned above that nonlinear conjugate gradient methods are identified by the definitions of the parameter \(\beta_{k}\) in (4). For the family of Dai-Kou type conjugate gradient methods, the parameter \(\beta_{k}\) is defined as

$$ \beta_{k}^{N}(\tau_{k-1})=\max \biggl\{ \beta_{k}(\tau_{k-1}),\eta \frac{g_{k}^{T} d_{k-1}}{\|d_{k-1}\|^{2}} \biggr\} . $$
(12)

Here,

$$ \beta_{k}(\tau_{k-1})=\frac{g_{k}^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}- \biggl(\tau_{k-1}+\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}-\frac{s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\|^{2}} \biggr) \frac{g_{k}^{T} s_{k-1}}{d_{k-1}^{T} y_{k-1}}, $$
(13)

where \(y_{k-1}=g_{k}-g_{k-1}\), \(s_{k-1}=\alpha _{k-1}d_{k-1}=x_{k}-x_{k-1}\), \(\tau_{k-1}\) is a parameter corresponding to the scaling parameter in the scaled memoryless BFGS method, and \(\eta\in[0,1)\). The parameters \(\beta_{k}\) in the Dai-Liao type methods [16] and the Hager-Zhang type methods [9] are special cases of formula (13). If \(\tau _{k-1}\) is defined as

$$ \tau_{k-1}=\lambda\tau_{k-1}^{A}+(1- \lambda)\tau_{k-1}^{B} $$
(14)

with \(\lambda\in[0,1]\) and

$$\begin{aligned}& \tau_{k-1}^{A} = \frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}, \end{aligned}$$
(15)
$$\begin{aligned}& \tau_{k-1}^{B} = \frac{s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\|^{2}}, \end{aligned}$$
(16)

then the Dai-Kou type conjugate gradient methods satisfy the sufficient descent condition (5).
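As an illustration, a minimal Python sketch of how the parameter (12), with \(\tau_{k-1}\) given by the convex combination (14)-(16), might be evaluated is given below. The function name `beta_dai_kou` and the NumPy-array arguments are illustrative assumptions, and no safeguards beyond those appearing in the formulas are included.

```python
import numpy as np

def beta_dai_kou(g, g_prev, d_prev, alpha_prev, lam=0.5, eta=0.5):
    """Evaluate beta_k^N(tau_{k-1}) from (12)-(16); all vectors are 1-D NumPy arrays."""
    y = g - g_prev                              # y_{k-1} = g_k - g_{k-1}
    s = alpha_prev * d_prev                     # s_{k-1} = alpha_{k-1} d_{k-1}
    sy = np.dot(s, y)                           # s_{k-1}^T y_{k-1}
    dy = np.dot(d_prev, y)                      # d_{k-1}^T y_{k-1}
    tau_a = np.dot(y, y) / sy                   # (15)
    tau_b = sy / np.dot(s, s)                   # (16)
    tau = lam * tau_a + (1.0 - lam) * tau_b     # (14)
    beta = (np.dot(g, y) / dy
            - (tau + tau_a - tau_b) * np.dot(g, s) / dy)   # (13)
    # Truncation (12): take the maximum with eta * g_k^T d_{k-1} / ||d_{k-1}||^2.
    return max(beta, eta * np.dot(g, d_prev) / np.dot(d_prev, d_prev))
```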

The Dai-Kou type methods are very efficient in solving unconstrained minimization problems, so we hope they can also be used to solve problem (1) while requiring only gradient information. Now we describe the improved methods in detail.

Algorithm 2.1

Step 0.:

Choose \(x_{0}\in R^{n}\), constants \(\sigma\in(0,1)\), \(\delta\in (0,\sigma)\), \(\lambda\in[0,1]\), \(\eta\in[0,1)\), \(\varepsilon>0\). Set \(g_{0}:=g(x_{0})\) and \(k:=0\).

Step 1.:

If \(\|g_{k}\|_{\infty}\leq\varepsilon\), then stop.

Step 2.:

Generate the search direction \(d_{k}\) by (4) with \(\beta_{k}\) from (12), where \(\tau_{k-1}\) is defined by (14).

Step 3.:

Find \(\alpha_{k}\) such that condition (11) holds, then compute the new iterate \(x_{k+1}=x_{k}+\alpha_{k} d_{k}\). Set \(k:=k+1\) and go to Step 1.
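The following condensed Python sketch summarizes one possible implementation of Algorithm 2.1. The helper `beta_dai_kou` was sketched in the previous paragraph, and the helper `line_search_11`, which realizes Step 3, is sketched after Algorithm 2.2 below; both names are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def dai_kou_gradient_only(grad, x0, eps=1e-3, lam=0.5, eta=0.5,
                          sigma=0.9, delta=1e-4, max_iter=50000):
    """Condensed sketch of Algorithm 2.1; `grad` is the mapping g of problem (1)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                       # Step 2 with k = 0: d_0 = -g_0
    alpha_init = 1.0 / np.linalg.norm(g)         # initial trial step for k = 0
    for k in range(max_iter):
        if np.max(np.abs(g)) <= eps:             # Step 1: ||g_k||_inf <= epsilon
            break
        # Step 3: a step-length satisfying the line search (11); sketched below.
        alpha = line_search_11(grad, x, g, d, alpha_init, sigma, delta)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                            # y_k = g_{k+1} - g_k
        # Step 2 of the next iteration: beta from (12) with tau from (14).
        beta = beta_dai_kou(g_new, g, d, alpha, lam, eta)
        # Initial trial step for the next line search (see the end of this section).
        alpha_init = alpha * np.dot(g, d) / np.dot(y, d)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

A call such as `dai_kou_gradient_only(grad, x0)` with a user-supplied gradient mapping `grad` then realizes the gradient-only iteration (3).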

In Step 3, the step-length \(\alpha_{k}\) is determined following the inexact line search strategies of Algorithm 2.6 in [17]. Detailed steps are described in the following line search algorithm.

Algorithm 2.2

Step 0.:

Set \(u=0\) and \(v=+\infty\). Choose \(\alpha>0\). Set \(j:=0\).

Step 1.:

If α does not satisfy

$$g(x_{k}+\alpha d_{k})^{T} d_{k}\leq \delta g_{k}^{T} d_{k}, $$

then set \(j:=j+1\), and go to Step 2. If α does not satisfy

$$\sigma g_{k}^{T} d_{k}\leq g(x_{k}+ \alpha d_{k})^{T} d_{k}, $$

then set \(j:=j+1\), and go to Step 3. Otherwise, set \(\alpha_{k}:=\alpha\), and return.

Step 2.:

Set \(v=\alpha\), \(\alpha=(u+v)/2\). Then go to Step 1.

Step 3.:

Set \(u=\alpha\), \(\alpha=2u\). Then go to Step 1.

The choice of the initial step-length is important for a line search. For conjugate gradient methods, the initial guess should exploit the current iterative information about the problem. So, in Algorithm 2.2, we choose the initial step-length \(\alpha=1/\|g_{0}\|\) if \(k=0\), and \(\alpha=\alpha _{k-1}g_{k-1}^{T} d_{k-1}/y_{k-1}^{T} d_{k-1}\) if \(k\geq1\).
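A minimal Python sketch of Algorithm 2.2, used together with the initial step-length choice above, might read as follows. Following the bracketing strategy of Algorithm 2.6 in [17], the sketch doubles the trial step only while no finite upper bound \(v\) is known and bisects otherwise; the function name `line_search_11` and the fixed trial limit are illustrative assumptions.

```python
import numpy as np

def line_search_11(grad, x, g, d, alpha_init, sigma=0.9, delta=1e-4,
                   max_trials=100):
    """Bracketing search of Algorithm 2.2 for a step-length satisfying (11)."""
    gd = np.dot(g, d)                     # g_k^T d_k < 0 by Lemma 3.1
    u, v = 0.0, np.inf
    alpha = alpha_init
    for _ in range(max_trials):
        slope = np.dot(grad(x + alpha * d), d)   # g(x_k + alpha d_k)^T d_k
        if slope > delta * gd:            # upper inequality of (11) violated
            v = alpha                     # Step 2: shrink by bisection
            alpha = 0.5 * (u + v)
        elif slope < sigma * gd:          # lower inequality of (11) violated
            u = alpha                     # Step 3: enlarge the lower end
            # Double while no finite upper bound v is known (cf. [17]),
            # otherwise bisect within the bracket [u, v].
            alpha = 2.0 * u if np.isinf(v) else 0.5 * (u + v)
        else:
            return alpha                  # both inequalities of (11) hold
    return alpha                          # fallback after max_trials trials
```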

3 Convergence analysis

Assumption 1

Assume that \(f: R^{n}\rightarrow R\) is bounded below, that is, \(f(x)>-\infty\) for all \(x\in R^{n}\), and f is continuously differentiable. Its gradient \(g: R^{n}\rightarrow R^{n}\) is L-Lipschitz continuous, that is, there exists a constant \(L>0\) such that

$$ \bigl\Vert g(x)-g(y) \bigr\Vert \leq L \Vert x-y \Vert ,\quad \forall x, y\in R^{n}. $$
(17)

Assumption 1 implies that there exists a positive constant γ̂ such that

$$ \bigl\Vert g(x) \bigr\Vert \leq\hat{\gamma}, \quad \forall x\in R^{n}. $$
(18)

Lemma 3.1

Assume that \(g: R^{n}\rightarrow R^{n}\) satisfies Assumption  1. If \(d_{0}=-g_{0}\) and \(d_{k-1}^{T} y_{k-1} \neq0\) for all \(k \geq1\), then

$$ g_{k}^{T} d_{k}\leq-\min \biggl\{ \frac{3}{4},1-\eta \biggr\} \|g_{k}\|^{2}. $$
(19)

Proof

Since \(d_{0}=-g_{0}\), we have \(g_{0}^{T} d_{0}=-\|g_{0}\|^{2}\), which satisfies (19). If

$$ \beta_{k}^{N}(\tau_{k-1})=\frac{g_{k}^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}- \biggl(\tau_{k-1}+\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}-\frac {s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\|^{2}}\biggr) \frac{g_{k}^{T} s_{k-1}}{d_{k-1}^{T} y_{k-1}}, $$

from Lemma 2.3 in [5], we have the result that

$$ g_{k}^{T} d_{k}\leq-\frac{3}{4} \|g_{k}\|^{2}. $$

And if

$$ \beta_{k}^{N}(\tau_{k-1})=\eta\frac{g_{k}^{T} d_{k-1}}{\|d_{k-1}\|^{2}}, $$

it is easy to see that

$$ g_{k}^{T} d_{k}\leq-(1-\eta)\|g_{k} \|^{2}. $$

The proof is complete. □

Lemma 3.2

Suppose that \(f:R^{n}\rightarrow R\) is bounded below along the ray \(\{ x_{k}+\alpha d_{k} | \alpha>0\}\), its gradient g is continuous, \(d_{k}\) is a search direction at \(x_{k}\), and \(g_{k}^{T} d_{k}<0\). Then if \(0<\delta <\sigma<1\), there exists \(\alpha_{k}>0\) satisfying the line search (11).

Proof

Define \(\phi(\alpha)=f(x_{k}+\alpha d_{k})\) and \(\psi(\alpha )=f(x_{k})+\alpha\delta g_{k}^{T} d_{k}\). Since \(\phi(\alpha)\) is bounded below for all \(\alpha>0\), while \(\psi(\alpha)\rightarrow-\infty\) as \(\alpha\rightarrow+\infty\) because \(0<\delta<1\) and \(g_{k}^{T} d_{k}<0\), the functions \(\phi(\alpha)\) and \(\psi(\alpha)\) must intersect at at least one positive point. Let \(\alpha_{k}^{*}>0\) be the smallest such intersection value of α, i.e.,

$$ f\bigl(x_{k}+\alpha_{k}^{*} d_{k} \bigr)=f(x_{k})+\alpha_{k}^{*} \delta g_{k}^{T} d_{k}. $$
(20)

Since f is continuously differentiable, by the mean value theorem, there exists \(\alpha_{k}\in(0,\alpha_{k}^{*})\) such that

$$ f\bigl(x_{k}+\alpha_{k}^{*} d_{k} \bigr)-f(x_{k})=\alpha_{k}^{*} g(x_{k}+ \alpha_{k} d_{k})^{T} d_{k}. $$
(21)

By combining (20) and (21), we obtain

$$ \delta g_{k}^{T} d_{k}=g(x_{k}+ \alpha_{k} d_{k})^{T} d_{k}. $$
(22)

Furthermore,

$$ \sigma g_{k}^{T} d_{k}\leq g(x_{k}+\alpha_{k} d_{k})^{T} d_{k}= \delta g_{k}^{T} d_{k}, $$
(23)

since \(0<\delta<\sigma<1\) and \(g_{k}^{T} d_{k}<0\). □

Lemma 3.3

Assume that \(g:R^{n}\rightarrow R^{n}\) is monotone on the interval \(\{ x_{k}+\alpha d_{k} : 0 \leq\alpha\leq\alpha_{k}\}\), where \(\alpha_{k}\) satisfies the line search (11), then the following inequality holds:

$$ f(x_{k}+\alpha_{k} d_{k})\leq f(x_{k})+\delta\alpha_{k} g_{k}^{T} d_{k}, $$
(24)

where \(f:R^{n}\rightarrow R\) is the function whose gradient is g.

Proof

Since g is monotone on the interval \(\{x_{k}+\alpha d_{k} : 0 \leq\alpha \leq\alpha_{k}\}\), then

$$ \bigl(g(x_{k}+\alpha_{k} d_{k})-g(x_{k}+ \alpha d_{k})\bigr)^{T} \bigl((x_{k}+ \alpha_{k} d_{k})-(x_{k}+\alpha d_{k}) \bigr)\geq0. $$

Since \(\alpha\leq\alpha_{k}\), it is not difficult to get that

$$g(x_{k}+\alpha d_{k})^{T} d_{k} \leq g(x_{k}+\alpha_{k} d_{k})^{T} d_{k}\leq\delta g_{k}^{T} d_{k}. $$

Applying this inequality to the following relation

$$f(x_{k}+\alpha_{k} d_{k})=f(x_{k})+ \int_{0}^{\alpha_{k}} g(x_{k}+\alpha d_{k})^{T} d_{k} \, d\alpha $$

yields inequality (24). □

Now, we state the Zoutendijk condition [18] for the line search (11).

Lemma 3.4

Assume that \(g:R^{n}\rightarrow R^{n}\) satisfies Assumption  1. Consider any iterative method in the form (3), where \(d_{k}\) is a descent direction and \(\alpha_{k}\) satisfies the line search (11), then

$$ \sum_{k\geq0}\frac{(g_{k}^{T} d_{k})^{2}}{\|d_{k}\|^{2}}< +\infty. $$
(25)

Proof

It follows from the Cauchy-Schwarz inequality, the Lipschitz condition (17) and the line search (11) that

$$ (\sigma-1)g_{k}^{T} d_{k} \leq(g_{k+1}-g_{k})^{T} d_{k}\leq \alpha_{k} L \|d_{k}\|^{2}. $$
(26)

Then we have

$$ \alpha_{k}\geq\frac{1-\sigma}{L} \frac{-g_{k}^{T} d_{k}}{\|d_{k}\|^{2}}. $$
(27)

Combining (27) with (24) implies that

$$ \frac{(g_{k}^{T} d_{k})^{2}}{\|d_{k}\|^{2}}\leq\frac{L}{(1-\sigma)\delta }\bigl(f(x_{k})-f(x_{k+1}) \bigr). $$
(28)

Summing (28) over k and noting that f is bounded below, we obtain the desired result. □

Now we discuss the convergence properties of Algorithm 2.1. In the following, we will prove that if the gradient \(g:R^{n}\rightarrow R^{n}\) is μ-strongly monotone, that is, there exists a constant \(\mu >0\) such that

$$ \bigl(g(x)-g(y)\bigr)^{T} ( x-y ) \geq\mu\|x-y\|^{2},\quad \forall x, y\in R^{n}, $$
(29)

then Algorithm 2.1 is globally convergent with \(\lim_{k\rightarrow \infty}\|g_{k}\|=0\); for a more general gradient \(g:R^{n}\rightarrow R^{n}\), Algorithm 2.1 is convergent in the sense that \(\liminf_{k\rightarrow\infty}\|g_{k}\|=0\).

Theorem 3.1

Assume that \(g:R^{n}\rightarrow R^{n}\) satisfies Assumption  1 and is μ-strongly monotone. The sequence \(\{x_{k}\}\) is generated by Algorithm 2.1, then

$$ \lim_{k\rightarrow\infty}\|g_{k}\|=0. $$
(30)

Proof

It follows from (17) and (29) that

$$\begin{aligned}& s_{k-1}^{T} y_{k-1}\leq\|s_{k-1}\| \|y_{k-1}\| \leq L\|s_{k-1}\| ^{2}, \end{aligned}$$
(31)
$$\begin{aligned}& \mu\|s_{k-1}\|^{2} \leq s_{k-1}^{T} y_{k-1}. \end{aligned}$$
(32)

By (31) and (32), it is easy to see that

$$\begin{aligned}& \frac{s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\|^{2}} \leq L, \end{aligned}$$
(33)
$$\begin{aligned}& \frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}} \leq \frac{L^{2}}{\mu }. \end{aligned}$$
(34)

Then we have that

$$|\tau_{k-1}|\leq(1-\lambda)\frac{L^{2}}{\mu}+\lambda L. $$

Consequently, we have that

$$\begin{aligned} \bigl\vert \beta_{k}(\tau_{k-1}) \bigr\vert =& \biggl\vert \frac{g_{k}^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}-\biggl(\tau_{k-1}+\frac{\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}- \frac {s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\|^{2}}\biggr)\frac{g_{k}^{T} s_{k-1}}{d_{k-1}^{T} y_{k-1}} \biggr\vert \\ \leq& \biggl[\frac{(2-\lambda)L^{2}}{\mu^{2}}+\frac{(2+\lambda )L}{\mu} \biggr]\frac{\|g_{k}\|}{\|d_{k-1}\|}. \end{aligned}$$

Furthermore,

$$ \bigl\vert \beta^{N}_{k}(\tau_{k-1}) \bigr\vert \leq\max \biggl\{ \frac{(2-\lambda )L^{2}}{\mu^{2}}+\frac{(2+\lambda)L}{\mu},\eta \biggr\} \frac{\|g_{k}\| }{\|d_{k-1}\|}. $$

Then

$$\begin{aligned} \Vert d_{k} \Vert =& \bigl\Vert -g_{k}+ \beta^{N}_{k}(\tau_{k-1})d_{k-1} \bigr\Vert \\ \leq& \Vert g_{k} \Vert + \bigl\vert \beta^{N}_{k}( \tau_{k-1}) \bigr\vert \Vert d_{k-1} \Vert \\ \leq&\zeta \Vert g_{k} \Vert , \end{aligned}$$
(35)

where \(\zeta=1+\max\{\frac{(2-\lambda)L^{2}}{\mu^{2}}+\frac {(2+\lambda)L}{\mu},\eta\}\).

By Lemmas 3.1 and 3.4, we have that

$$\sum_{k\geq0}\frac{\|g_{k}\|^{4}}{\|d_{k}\|^{2}}< \infty. $$

It follows from this and (35) that

$$\sum_{k\geq0}\|g_{k}\|^{2}< \infty, $$

which implies the desired result. □

Theorem 3.2

Assume that \(g:R^{n}\rightarrow R^{n}\) satisfies Assumption  1. Then Algorithm 2.1 is convergent in the sense that

$$ \liminf_{k\rightarrow\infty} \|g_{k}\|=0. $$
(36)

Proof

We prove the theorem by contradiction. Assume that \(g_{k}\neq0\) for all k and that \(\liminf_{k\rightarrow\infty} \|g_{k}\|>0\). Then there must exist some \(\gamma>0\) such that

$$ \|g_{k}\|\geq\gamma,\quad \forall k\geq0, $$
(37)

then \(d_{k}\neq0\), otherwise Lemma 3.1 would imply \(g_{k}=0\).

It follows from (37), Lemma 3.1 and Lemma 3.4 that

$$ \gamma^{2}\sum_{k\geq0} \frac{1}{\|d_{k}\|^{2}}\leq\sum_{k\geq0}\frac {\|g_{k}\|^{2}}{\|d_{k}\|^{2}} $$

and

$$ \sum_{k\geq0}\frac{\|g_{k}\|^{2}}{\|d_{k}\|^{2}}\leq\sum _{k\geq0}\frac {1}{\gamma^{2} }\frac{\|g_{k}\|^{4}}{\|d_{k}\|^{2}}\leq \frac{1}{\gamma^{2} \bar{c}^{2}}\sum_{k\geq0}\frac{(g_{k}^{T} d_{k})^{2}}{\|d_{k}\|^{2}}< \infty, $$
(38)

where \(\bar{c}=\min\{\frac{3}{4},1-\eta\}\). It follows that

$$ \|d_{k}\|\rightarrow+\infty. $$
(39)

This means that there exists a positive integer N such that, for all \(k\geq N\),

$$\begin{aligned} \beta_{k}^{N}(\tau_{k-1}) =& \beta_{k}(\tau_{k-1}) \\ =& \frac{g_{k}^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}- \biggl(\tau_{k-1}+\frac {\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}- \frac{s_{k-1}^{T} y_{k-1}}{\|s_{k-1}\| ^{2}} \biggr)\frac{g_{k}^{T} s_{k-1}}{d_{k-1}^{T} y_{k-1}} \\ =& \frac{g_{k}^{T} y_{k-1}}{d_{k-1}^{T} y_{k-1}}- \biggl((1+\lambda)\frac {\|y_{k-1}\|^{2}}{s_{k-1}^{T} y_{k-1}}-\lambda \frac{s_{k-1}^{T} y_{k-1}}{\| s_{k-1}\|^{2}} \biggr)\frac{g_{k}^{T} s_{k-1}}{d_{k-1}^{T} y_{k-1}}. \end{aligned}$$
(40)

It follows from Lemma 3.1, (11) and (37) that

$$ d_{k-1}^{T} y_{k-1}\geq-(1- \sigma)g_{k-1}^{T} d_{k-1}\geq\bar{c} (1-\sigma) \gamma^{2}. $$
(41)

It follows from (18), (33), (34), (40), (41) and the L-Lipschitz continuity of g that, for all \(k\geq N\),

$$ \bigl\vert \beta_{k}^{N}( \tau_{k-1}) \bigr\vert \leq\frac{\hat{\gamma}(1+\lambda)}{\bar{c}(1-\sigma)\gamma^{2}}\biggl(L+ \frac{L^{2}}{\mu}\biggr)\|s_{k-1}\|. $$
(42)

Define \(u_{k}=d_{k}/\|d_{k}\|\), then similarly to the proof of Lemma 4.3 in [7], we can get the result that

$$ \|u_{k}-u_{k-1}\|\leq2(1+\eta) \frac{\|g_{k}\|}{\|d_{k}\|}. $$
(43)

Then it follows from (38) and (43) that

$$ \sum_{k\geq1}\|u_{k}-u_{k-1} \|^{2}< \infty. $$
(44)

From Assumption 1 and Lemma 3.3, we know that the generated sequence \(\{x_{k}\}\) is bounded, then there exists some positive constant γ̄ such that

$$ \|x_{k}\|\leq\bar{\gamma},\quad \forall k \geq0. $$
(45)

By using inequalities (42), (44) and (45), we can get the desired result similarly to the proof of items II and III of Theorem 3.2 in [9]. □

4 Numerical experiments

In this section, we report numerical experiments that test the performance of the proposed method and compare it with the PRP+ conjugate gradient method in [13]. All codes were written in Matlab and run on a notebook computer with an Intel(R) Core(TM) i5-5200U 2.20 GHz CPU, 8.00 GB of RAM and the Linux operating system Ubuntu 12.04. All test problems were drawn from the CUTEst test library [19, 20] and the literature [12]. From the CUTEst test library, we chose the unconstrained optimization problems whose dimensions were at least 50. Different from the work in the literature such as [5, 7], we solved them using only gradient information. In order to confirm the broader application scope of the proposed method, some boundary value problems were selected from [12]. See Chapter 1 in [21] for the background of the boundary value problems.

In practical implementations, the stopping criterion was \(\|g_{k}\|_{\infty}\leq10^{-3}\). For the proposed method, the values of σ and δ in the line search (11) were taken to be 0.9 and 0.0001, respectively, with \(\lambda=0.5\) and \(\eta=0.5\). For the PRP+ conjugate gradient method, all parameter settings were taken from the reference [13].

The numerical results are reported in Tables 1 and 2, where Name, Dim, Iter, Ng and CPU denote the name of the test problem, the dimension, the number of iterations, the number of gradient evaluations and the CPU time elapsed in seconds, respectively. ‘-’ means the method failed to achieve the prescribed accuracy: the number of iterations exceeded 50,000 or the gradient function generated ‘NaN’. The performances of the two methods were evaluated using the performance profiles of Dolan and Moré [22], which plot the fraction P of the test problems that each method solves within a factor τ of the best performance. In a performance profile, the higher curve corresponds to the more robust method for a given factor τ, and the curve further to the left corresponds to the faster method on the same percentage of test problems. Figures 1-3 show the performance profiles for the test problems from the CUTEst library with respect to the number of iterations, the number of gradient evaluations and the CPU time, respectively. Figures 4-6 show the corresponding profiles for some boundary value problems. These figures reveal that, on the test problems, the proposed method is more efficient and robust than the PRP+ conjugate gradient method. Consequently, the improved method not only can solve problems referring only to gradient information but also inherits the good numerical performance of the Dai-Kou type conjugate gradient methods.
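For reference, the following minimal Python sketch illustrates how Dolan-Moré performance profiles such as those in Figures 1-6 can be computed; it assumes a cost matrix with one row per test problem and one column per solver, with `np.nan` marking the failures reported as ‘-’ in the tables. The function name `performance_profile` and the plotting details are illustrative assumptions, not the code used for the experiments.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(costs, labels, tau_max=10.0):
    """Dolan-More profile: costs[i, s] is the measure (Iter, Ng or CPU) of
    solver s on problem i, with np.nan marking a failure ('-' in the tables)."""
    costs = np.asarray(costs, dtype=float)
    best = np.nanmin(costs, axis=1, keepdims=True)     # best measure per problem
    ratios = costs / best                              # performance ratios
    ratios[np.isnan(ratios)] = np.inf                  # failed runs never count
    taus = np.linspace(1.0, tau_max, 200)
    for s, label in enumerate(labels):
        frac = [np.mean(ratios[:, s] <= t) for t in taus]   # fraction P(tau)
        plt.step(taus, frac, where='post', label=label)
    plt.xlabel('tau')
    plt.ylabel('P')
    plt.legend()
    plt.show()
```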

Figure 1. Performance profile for the test problems from the CUTEst library based on the number of iterations.

Figure 2. Performance profile for the test problems from the CUTEst library based on the number of gradient evaluations.

Figure 3. Performance profile for the test problems from the CUTEst library based on the CPU time.

Figure 4. Performance profile for some boundary value problems based on the number of iterations.

Figure 5. Performance profile for some boundary value problems based on the number of gradient evaluations.

Figure 6. Performance profile for some boundary value problems based on the CPU time.

Table 1 Numerical results for test problems from the CUTEst library
Table 2 Numerical results for some boundary value problems

5 Conclusions

In this paper, we developed improved Dai-Kou type conjugate gradient methods that use only gradient information. They inherit the advantages of the Dai-Kou type conjugate gradient methods for solving unconstrained minimization problems but have a broader application scope. Moreover, the problem considered in this paper can be viewed as the nonlinear equation

$$ F(x)=0 $$
(46)

with \(F=g\). Since the convergence analysis of this paper requires some assumptions on the function f whose gradient is g, our further investigation is to dispense with f and to solve the general nonlinear equation (46) using strategies different from those of this paper and the literature [23–25].

References

  1. Fletcher, R, Reeves, C: Function minimization by conjugate gradients. Comput. J. 7, 149-154 (1964)

  2. Hestenes, MR, Stiefel, EL: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49(5), 409-436 (1952)

  3. Polak, E, Ribière, G: Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inform. Rech. Opér., Sér. Rouge 16, 35-43 (1969)

  4. Polyak, BT: The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys. 9, 94-112 (1969)

  5. Dai, YH, Yuan, Y: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177-182 (1999)

  6. Liu, Y, Storey, C: Efficient generalized conjugate gradient algorithms. Part 1: theory. J. Optim. Theory Appl. 69, 129-137 (1991)

  7. Dai, YH, Kou, CX: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23, 296-320 (2013)

  8. Hager, WW, Zhang, HC: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32, 113-137 (2006)

  9. Hager, WW, Zhang, HC: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170-192 (2005)

  10. Zhang, L, Li, JL: A new globalization technique for nonlinear conjugate gradient methods for nonconvex minimization. Appl. Math. Comput. 217, 10295-10304 (2011)

  11. Nakamura, W, Narushima, Y, Yabe, H: Nonlinear conjugate gradient methods with sufficient descent properties for unconstrained optimization. J. Ind. Manag. Optim. 9, 595-619 (2013)

  12. Spedicato, E, Huang, Z: Numerical experience with Newton-like methods for nonlinear algebraic systems. Computing 58, 69-89 (1997)

  13. Dong, YD: A practical PR+ conjugate gradient method only using gradient. Appl. Math. Comput. 219, 2041-2052 (2012)

  14. Powell, MJD: Convergence properties of algorithms for nonlinear optimization. SIAM Rev. 28, 487-500 (1986)

  15. Dong, YD: The BFGS method only using gradient. Preprint. http://www.optimization-online.org/DB_FILE/2016/07/5560.pdf

  16. Dai, YH, Liao, LZ: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87-101 (2001)

  17. Lewis, AS, Overton, ML: Nonsmooth optimization via quasi-Newton methods. Math. Program. 141(1-2), 135-163 (2013)

  18. Zoutendijk, G: Nonlinear programming, computational methods. In: Abadie, J (ed.) Integer and Nonlinear Programming, pp. 37-86. North-Holland, Amsterdam (1970)

  19. Bongartz, I, Conn, AR, Gould, N, Toint, PL: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. 21, 123-160 (1995)

  20. Gould, NIM, Orban, D, Toint, PL: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373-394 (2003)

  21. Ortega, JM, Rheinboldt, WC: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)

  22. Dolan, ED, Moré, JJ: Benchmarking optimization software with performance profiles. Math. Program. 91, 201-213 (2002)

  23. Solodov, MV, Svaiter, BF: A globally convergent inexact Newton method for systems of monotone equations. In: Fukushima, M, Qi, L (eds.) Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355-369. Kluwer Academic, Dordrecht (1999)

  24. Yan, QR, Peng, XZ, Li, DH: A globally convergent derivative-free method for solving large-scale nonlinear monotone equations. J. Comput. Appl. Math. 234, 649-657 (2010)

  25. Li, QN, Li, DH: A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J. Numer. Anal. 31, 1625-1635 (2011)


Acknowledgements

The authors are very grateful to the associate editor and reviewers for their valuable suggestions which have greatly improved the paper. This work was partially supported by the National Natural Science Foundation of China (No. 11471102) and the Key Basic Research Foundation of the Higher Education Institutions of Henan Province (No. 16A110012).


Corresponding author

Correspondence to Yuanyuan Huang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final version of this paper.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Huang, Y., Liu, C. Dai-Kou type conjugate gradient methods with a line search only using gradient. J Inequal Appl 2017, 66 (2017). https://doi.org/10.1186/s13660-017-1341-z


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13660-017-1341-z

Keywords