A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration
Journal of Inequalities and Applications volume 2019, Article number: 247 (2019)
Abstract
To solve large-scale unconstrained optimization problems, a modified PRP conjugate gradient algorithm is proposed. It is of interest because it combines the steepest descent algorithm with the conjugate gradient method and fully exploits their excellent properties. For smooth functions, the algorithm uses information about the gradient and the previous search direction to determine the next search direction. For nonsmooth functions, a Moreau–Yosida regularization is introduced into the proposed algorithm, which simplifies the treatment of complex problems. The proposed algorithm has the following characteristics: (i) a sufficient descent property as well as a trust region trait; (ii) global convergence; (iii) numerical results for large-scale smooth/nonsmooth functions showing that it compares favorably with similar optimization methods; (iv) successful application to image restoration problems.
Introduction
The concerned problem is given by
where the function \(f: \Re ^{n}\rightarrow \Re \) and \(f\in C^{2}\). The above model is typical yet difficult, arises throughout daily life, industry, and scientific research, and is therefore studied in a great variety of fields. Experts and scholars have conducted numerous in-depth studies and obtained a series of fruitful results (see, e.g., [2, 5, 12, 22, 29,30,31, 40, 50, 52, 53]). The steepest descent method is notably simple, with low computational and memory requirements. Along the negative gradient direction the function value decreases rapidly, which suggests it as a natural search direction, although the convergence rate of the gradient method is not always fast. Later modifications of this method produced the efficient conjugate gradient method, which has a simple form but high performance. An optimization iteration has two ingredients: the step length and the search direction. In general, the iteration formula for (1.1) is
where \(x_{k}\) is the current iteration point, \(\alpha _{k}\) is called the step length, and \(d_{k}\) is the kth search direction. The formula for \(d_{k}\) is often defined by
where \(\beta _{k} \in \Re \). In addition, increasingly efficient and successful conjugate gradient algorithms have been proposed using a variety of expressions for \(\beta _{k}\) as well as \(d_{k}\) (see, e.g., [9, 11, 27, 38, 42,43,44, 47, 49]). The well-known PRP algorithm [26, 27] has the following form:
where \(g_{k}\), \(g_{k+1}\), and \(f_{k}\) denote \(g(x_{k})\), \(g(x_{k+1})\), and \(f(x_{k})\), respectively, and \(g(x_{k+1})=g_{k+1}=\nabla f(x_{k+1})\) is the gradient of the objective function f at \(x_{k+1}\). Remarkably, the PRP conjugate gradient algorithm is extremely effective for large-scale optimization problems. Regrettably, it fails to achieve global convergence for nonconvex functions under the so-called weak Wolfe–Powell (WWP) line search technique, whose formula is as follows:
and
where \(\varphi \in (0, 1/2)\), \(\alpha _{k} > 0\) and \(\rho \in (\varphi , 1)\). To address the above convergence failure, Yuan, Wei, and Lu [48] developed the following innovative modification of the normal WWP line search technique (called the Yuan–Wei–Lu (YWL) line search) and obtained numerous rich theoretical results:
and
where \(\varphi \in (0,1/2)\), \(\rho \in (\varphi ,1)\) and \(\varphi _{1} \in (0,\varphi )\). Further study can be found in [45]. Building on the YWL line search technique, Yuan et al. [41] turned to the usual Armijo line search and proposed the following modified Armijo line search technique:
where \(\lambda , \gamma \in (0,1)\), \(\lambda _{1} \in (0,\lambda )\), and \(\alpha _{k}\) is the largest number in \(\{\gamma ^{k}\mid k=0,1,2,\ldots \}\). Interestingly, some scholars focus not only on the expression for the coefficient \(\beta _{k}\) but also on modifying the formula for the search direction \(d_{k+1}\). Nonlinear conjugate gradient methods are of increasing interest to scholars because of their simplicity and modest computational requirements. Thus, HS (see [13, 16, 33]) and PRP algorithms (see [35, 51]) are widely used to solve complex problems in various fields. Currently, some experts focus on three-term conjugate gradient methods because their search directions sometimes possess the descent and automatic trust region properties. Motivated by the above discussion, a new modified three-term conjugate gradient algorithm based on the modified Armijo line search technique is proposed. The algorithm has the following properties:

The search direction has a sufficient decrease and a trust region property.

For general functions, the proposed algorithm under mild assumptions possesses global convergence.

The new algorithm combines the steepest descent method with the conjugate gradient algorithm through the sizes of the coefficients, and the numerical results demonstrate the method’s good performance compared with established algorithms.

The corresponding numerical results prove that the discussed method is efficient as well as successful at solving general problems.

The paper successfully combines mathematical theory with real-world application. On the one hand, the proposed algorithm performs well on large-scale optimization problems; on the other hand, it is applied to image restoration, which has wide application in biological engineering, medical sciences, and other areas of science and engineering.
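As background for the line searches above, the classical Armijo backtracking rule can be sketched in a few lines of Python (a minimal sketch; the function name and default parameters are our own choices, and the modified rule (1.9) of [41] strengthens this acceptance test with additional terms not shown here):

```python
import numpy as np

def armijo_step(f, g, x, d, lam=1e-4, gamma=0.5, max_iter=50):
    """Backtracking: return the largest alpha in {gamma^0, gamma^1, ...}
    satisfying the classical Armijo condition
        f(x + alpha d) <= f(x) + lam * alpha * g(x)^T d."""
    fx, gd = f(x), float(g(x) @ d)
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + lam * alpha * gd:
            return alpha
        alpha *= gamma
    return alpha
```

For a strongly convex quadratic with the steepest descent direction, the full step \(\alpha =1\) is typically accepted immediately, which illustrates why backtracking searches are cheap in practice.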
The remainder of this paper is organized as follows: the next section presents the motivation and content of the algorithm for large-scale smooth problems, including its important mathematical properties; Sect. 3 presents a similar optimization algorithm for large-scale nonsmooth optimization problems; Sect. 4 applies the algorithm of Sect. 3 to the image restoration problem; the paper’s conclusions and the algorithm’s characteristics are given in Sect. 5. Without loss of generality, \(f(x_{k})\) and \(f(x_{k+1})\) are abbreviated as \(f_{k}\) and \(f_{k+1}\), and \(\|\cdot \|\) is the Euclidean norm.
New three-term conjugate gradient algorithm for smooth problems
The three-term conjugate gradient algorithm has seen extensive study and produced extremely good theoretical results. In light of the work by Touati-Ahmed and Storey [34], Al-Baali [1], and Gilbert and Nocedal [17] on conjugate gradient methods, the sufficient descent condition is crucial for global convergence. From this, a famous formula for the search direction \(d_{k+1}\) emerged. Zhang [51] proposed the following formula:
where \(y_{k}=g_{k+1}-g_{k}\). Notably, this three-term conjugate gradient algorithm was among the first introduced for solving optimization problems, and the numerical results show that it is competitive with similar methods; this paper therefore chooses it as the comparison algorithm in Sects. 2.3 and 3.2. In [23], Nazareth proposed another formula,
where \(y_{k}=g_{k+1}-g_{k}\), \(g_{k}\) is the gradient value at the point \(x_{k}\), and \(d_{0}=d_{1}=0\). In [14], Deng and Zhong expressed a new three-term conjugate gradient formula as follows:
where \(s_{k}=x_{k+1}-x_{k}\). Based on the above discussion, we express the new three-term algorithm under the modified Armijo line search technique (1.9) as follows:
where \(\xi _{2}, \xi _{3}, \xi _{4} > 0\). To gather more information about the objective function and its gradient, let \(y_{k}^{*}=y_{k}+\varphi _{k}(x_{k+1}-x_{k})\), where \(y_{k}=g_{k+1}-g_{k}\), \(\varphi _{k}= \max \{0, B_{k}\}\), and \(B_{k}=\frac{(g_{k+1}+g_{k})^{T}s_{k}+2(f_{k}-f _{k+1})}{\|x_{k+1}-x_{k}\|^{2}}\). This modification plays an important role both in theory and in numerical performance [46]. From the above discussion, we introduce a new PRP algorithm (Algorithm 2.1).
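The modified difference vector \(y_{k}^{*}\) follows directly from the displayed formulas; a minimal Python sketch (the function name is ours):

```python
import numpy as np

def modified_yk(g_next, g_curr, x_next, x_curr, f_next, f_curr):
    """y_k* = y_k + phi_k * s_k, with phi_k = max{0, B_k} and
    B_k = ((g_{k+1} + g_k)^T s_k + 2 (f_k - f_{k+1})) / ||s_k||^2."""
    s = x_next - x_curr
    y = g_next - g_curr
    B = (float((g_next + g_curr) @ s) + 2.0 * (f_curr - f_next)) / float(s @ s)
    return y + max(0.0, B) * s
```

A useful sanity check: for an exact quadratic, \(B_{k}=0\), so \(y_{k}^{*}\) coincides with \(y_{k}\); the correction only activates when curvature information in the function values disagrees with the gradients.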
Algorithm steps
Algorithm 2.1
 Step 1::

(Initiation) Choose an initial point \(x_{0}\), \(\gamma \in (0,1)\), \(\xi _{2}, \xi _{3}, \xi _{4}> 0\), and a constant \(\varepsilon \in (0,1)\). Let \(k=0\), \(d_{0}=-g_{0}\).
 Step 2::

If \(\|g_{k}\| \leq \varepsilon \), then stop.
 Step 3::

Find the step length \(\alpha _{k} = \max \{\gamma ^{k}\mid k=0, 1, 2, \ldots \}\) satisfying (1.9).
 Step 4::

Set the new iteration point \(x_{k+1}=x_{k}+\alpha _{k}d_{k}\).
 Step 5::

Update the search direction by (2.4).
 Step 6::

If \(\|g_{k+1}\|\leq \varepsilon \) holds, the algorithm stops. Otherwise, go to the next step.
 Step 7::

Let \(k:=k+1\) and go to Step 3.
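The steps above can be sketched as a short Python loop. Since formula (2.4) is given only in display form, this sketch substitutes Zhang's three-term direction (2.1) for (2.4) and a classical Armijo backtracking step for (1.9); it illustrates the algorithmic pattern under those stated substitutions, not the exact proposed method:

```python
import numpy as np

def three_term_direction(g_next, g_curr, d_curr):
    """Zhang-style three-term direction, used here as a stand-in for (2.4):
    d_{k+1} = -g_{k+1} + beta_PRP * d_k - theta_k * y_k,
    theta_k = g_{k+1}^T d_k / ||g_k||^2.
    It guarantees the sufficient descent identity g^T d = -||g||^2."""
    y = g_next - g_curr
    gg = float(g_curr @ g_curr)
    beta = float(g_next @ y) / gg
    theta = float(g_next @ d_curr) / gg
    return -g_next + beta * d_curr - theta * y

def cg_solve(f, g, x0, eps=1e-6, gamma=0.5, lam=1e-4, max_iter=500):
    """Sketch of the Algorithm 2.1 loop with backtracking Armijo steps."""
    x = np.asarray(x0, dtype=float)
    d = -g(x)                              # Step 1: d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g(x)) <= eps:    # Steps 2/6: gradient test
            break
        # Step 3: backtracking line search
        alpha, fx, gd = 1.0, f(x), float(g(x) @ d)
        while f(x + alpha * d) > fx + lam * alpha * gd and alpha > 1e-16:
            alpha *= gamma
        x_new = x + alpha * d              # Step 4: new iterate
        d = three_term_direction(g(x_new), g(x), d)  # Step 5: new direction
        x = x_new
    return x
```

On a convex quadratic this sketch converges to the minimizer in a handful of iterations, which matches the descent identity noted in the comment.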
Algorithm characteristics
This section establishes the sufficient descent property, the trust region property, and the global convergence of Algorithm 2.1.
Lemma 2.1
If the search direction \(d_{k}\) is generated by (2.4), then
and
where σ is a positive constant.
Proof
On the one hand, (2.5) and (2.6) clearly hold for \(k=0\).
On the other hand, from (2.4),
and
Inequalities (2.5) and (2.6) show that the search direction has the sufficient descent property and the trust region property, respectively. □
To establish global convergence, we make the following mild assumptions.
Assumption (i)
The level set of \(\varOmega =\{x\mid f(x) \leq f(x _{0})\}\) is bounded.
Assumption (ii)
The objective function \(f(x) \in C^{2}\) is bounded from below, and its gradient function \(g(x)\) is Lipschitz continuous, i.e., there exists a positive constant τ such that
Based on the above discussion and the established conclusion that the modified Armijo line search is reasonable and well defined (see [48]), the global convergence of the algorithm is established as follows.
Theorem 2.1
If assumptions (i)–(ii) are true and the corresponding sequences of \(\{x_{k}\}\), \(\{d_{k}\}\), \(\{g_{k}\}\), \(\{\alpha _{k}\}\) are generated by Algorithm 2.1, then we arrive at the conclusion that
Proof
Suppose that the conclusion of the above theorem is false, i.e., there exist a positive constant \(\sigma _{3}\) and an index \(k^{\prime } \) such that
Summing the above inequality over k from 0 to ∞ and combining with Assumption (ii), we obtain
Based on the convergence theorem of sequences,
Then we have
from formula (2.5); then
This means that \(\alpha _{k} \rightarrow 0\) or \(\|g_{k}\| \rightarrow 0\) as \(k \rightarrow \infty \). We consider two cases:
(i) If \(\alpha _{k} \rightarrow 0\) as \(k \rightarrow \infty \), consider the line search: for every suitable parameter \(\alpha _{k}\),
there thus exists a positive constant \(\lambda ^{*} \leq \lambda _{1}\), such that
Using (2.5), Assumption (ii) and the continuity of \(f(x)\) and \(g(x)\), we have
where \(\eta _{k} \in (0,1)\). Comparing the above two expressions, we have
i.e.,
Thus,
where \(\sigma \in (1+\frac{2}{\xi _{2}}, \infty )\). This contradicts the assumption of case (i).
(ii) If \(\alpha _{k}\) remains a positive finite constant for all sufficiently large k, then \(\{g_{k}\} \rightarrow 0\) follows from formula (2.14). This contradicts assumption (2.10), which completes the proof. □
Numerical results
Related content is presented in this section and consists of two parts: test problems and corresponding numerical results. To measure the algorithm’s efficiency, we compare Algorithm 2.1 with Algorithm 1 in [51] in terms of NI, NFG, and CPU on the test problems listed in Table 2 of Appendix 1, which are from [3]. Here NI, NFG, and CPU indicate the number of iterations, the total number of objective function and gradient evaluations, and the calculation time needed to solve the test problems (in seconds), respectively. Algorithm 1 differs from the proposed algorithm in the formula for \(d_{k+1}\), which is determined by (2.1); the remainder of Algorithm 1 is identical to Algorithm 2.1.
Stopping rule: If \(|f(x_{k})| > e_{1}\), let \(\mathit{stop}1=\frac{|f(x_{k})-f(x_{k+1})|}{|f(x_{k})|}\); otherwise, let \(\mathit{stop}1=|f(x_{k})-f(x_{k+1})|\). If the condition \(\|g(x)\|< \epsilon \) or \(\mathit{stop}1 < e_{2}\) is satisfied, the algorithm stops, where \(e_{1}=e_{2}=10^{-4}\) and \(\epsilon =10^{-4}\). In practice, the algorithm also stops if the number of iterations exceeds 10,000 or if the number of inner iterations for \(\alpha _{k}\) exceeds 5. In Table 2, ‘NO’ and ‘problem’ indicate the number of the tested problem and the name of the problem, respectively.
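A hedged sketch of our reading of this stopping test (the helper name and tolerance defaults are ours):

```python
def should_stop(f_curr, f_next, g_norm, e1=1e-4, e2=1e-4, eps=1e-4):
    """Stopping test: relative decrease of f when |f| is not tiny,
    absolute decrease otherwise, or a small gradient norm."""
    if abs(f_curr) > e1:
        stop1 = abs(f_curr - f_next) / abs(f_curr)
    else:
        stop1 = abs(f_curr - f_next)
    return g_norm < eps or stop1 < e2
```

The relative test guards against scale: for a function value near 10, a decrease of \(5\) is relative progress of \(0.5\) and the iteration continues, while a stagnant value triggers the stop.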
Initiation: \(\lambda =0.9\), \(\lambda _{1}=0.4\), \(\xi _{3}= 300\), \(\xi _{2}=\xi _{4}=0.01\), \(\gamma =0.01\).
Dimension: \(30\text{,}000\), \(90\text{,}000\), \(150\text{,}000\), \(210\text{,}000\).
Calculation environment: The calculation environment is a computer with 2 GB of memory, a Pentium(R) Dual-Core CPU E5800 @3.20 GHz, and the 64-bit Windows 7 operating system.
The algorithms’ numerical results are listed in Table 3 of Appendix 1 with their corresponding NI, NFG, and CPU. Based on the technique in [15], performance profile plots are then presented for the proposed algorithm. Some of the test problems are so complex that the algorithms fail to solve them; thus, the numerical results are listed with the corresponding problem index in Table 3, where ‘NO’ and ‘Dim’ are the index of the problem and the dimension of the variable, respectively. The figures clearly show that the proposed algorithm (Algorithm 2.1) is effective, because its curve lies above those of the other algorithms. In Fig. 1, the value of the red curve at the initial point is close to 0.9, while Algorithm 1 only reaches 0.6; this means that Algorithm 2.1 solves complex problems in fewer iterations. In Fig. 2, the red curve lies above the other because the number of objective function evaluations is smaller, which is an important aspect in measuring the performance of an algorithm. It is well known that the calculation time (CPU time) is an essential metric, since the speed of the algorithm is its most basic feature. The proposed algorithm is not only well defined, as its curve is smooth and plausible, but it also solves most of the complex problems, since the largest value on its curve is close to 0.98, indicating that it is highly effective. Overall, the proposed algorithm not only solves smooth problems but also enriches the knowledge of optimization and lays the foundation for further in-depth studies.
Algorithm for nonsmooth problems
From the previous section, the proposed algorithm is trustworthy and shows good potential based on the numerical results. Thus, this section applies the proposed method to nonsmooth problems. Interestingly, the vast majority of practical conditions are harsh; Newton-type methods are therefore often unsatisfactory for such problems because they require gradient information [37, 39, 46]. Currently, most experts and scholars focus on bundle methods, which successfully solve small-scale problems (see [18, 19, 24, 36]) but fail on large-scale practical problems. With the development of science and technology, designing a simple but effective algorithm for large-scale nonsmooth problems has become an urgent need. Based on the simplicity of the conjugate gradient method, some experts and scholars have proposed relevant algorithms and achieved numerous fruitful theoretical results (see [20, 28]).
Consider the following problem:
where \(\theta (x)\) is possibly nonsmooth and \(x \in R^{n}\). A famous technique, the ‘Moreau–Yosida’ regularization, computes an equivalent solution of the previous problem through a modified objective function with the formula
where χ and \(\|\cdot \|\) denote a positive constant and the Euclidean norm, respectively. Without loss of generality, we denote by \(\theta ^{M}(x)\) the ‘Moreau–Yosida’ regularization of (3.1), i.e.,
The ‘Moreau–Yosida’ regularization technique is used because of its outstanding properties, such as differentiability (see [21, 25]). Assume that the function in (3.1) is convex and that its ‘Moreau–Yosida’ regularization attains its optimal solution at \(\omega (x)=\operatorname{argmin} \theta ^{M}(t)\). Based on mathematical knowledge and established conclusions, we then have
A most striking result is that the function \(\theta ^{M}(x)\) is smooth and its gradient is Lipschitz continuous, which need not be true for the original function. It is worth noting that (3.1) and (3.3) are equivalent because they have the same solution. In the remainder of this discussion, we study (3.3), which is clear and concise to a large extent, and introduce the relevant components of the objective algorithm simultaneously. Necessary properties of sufficient descent and trust region are listed in the next section; we then provide relevant numerical results to show the performance of the proposed algorithm and draw conclusions for the whole paper.
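For a concrete one-dimensional example, the Moreau–Yosida envelope of \(\theta (t)=|t|\) has a closed form via soft-thresholding; the following sketch is our own illustration (function names ours) of the smoothing effect, under the convention \(\theta ^{M}(x)=\min_{t}\{|t|+\|t-x\|^{2}/(2\chi )\}\):

```python
import numpy as np

def prox_abs(x, chi):
    """Minimizer omega(x) of |t| + (t - x)^2 / (2 chi): soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - chi, 0.0)

def moreau_abs(x, chi):
    """Moreau-Yosida envelope of |t|: evaluate the objective at the
    minimizer.  The result is the smooth Huber function, whose gradient
    (x - prox) / chi is Lipschitz with constant 1 / chi."""
    t = prox_abs(x, chi)
    return np.abs(t) + (t - x) ** 2 / (2.0 * chi)
```

Near the kink of \(|t|\) the envelope is quadratic (e.g., \(\theta ^{M}(0.5)=0.125\) for \(\chi =1\)), while far from it the envelope follows \(|x|-\chi /2\); this is exactly the differentiability gained by the regularization.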
New algorithm and its necessary properties
We start with the following formula for \(d_{k+1}\), which is an important component of the proposed algorithm in addressing complex problems:
where \(s_{k}=x_{k+1}-x_{k}\), \(y_{k}=\nabla \theta ^{M}(x_{k+1})-\nabla \theta ^{M}(x_{k})\), and \(\xi _{2}\), \(\xi _{3}\), and \(\xi _{4}\) are positive constants. The step length \(\alpha _{k}\) is determined by
where \(\lambda , \gamma \in (0,1)\), \(\lambda _{1} \in (0,\lambda )\), and \(\alpha _{k}\) is the largest number of \(\{\gamma ^{k}\mid k=0,1,2,\ldots \}\). It is well known from the case of smooth functions that the new search direction has satisfactory descent and trust region properties; therefore, we merely list them without proof. We have
where σ is the same as in (2.7). We now state the specific algorithm steps, show that there exists an \(\alpha '_{k} \in \Re \) satisfying the modified Armijo line search condition, and establish the global convergence of the proposed algorithm.
Algorithm 3.1
 Step 1::

(Initiation) Choose an initial point \(x_{0}\), \(\gamma \in (0,1)\), \(\xi _{2}, \xi _{3}, \xi _{4}> 0\), and a constant \(\varepsilon \in (0,1)\). Let \(k=0\), \(d_{0}=-\nabla \theta ^{M}(x_{0})\).
 Step 2::

If \(\|\nabla \theta ^{M}(x_{k})\| \leq \varepsilon \), then stop.
 Step 3::

Find the step length \(\alpha _{k} = \max \{\gamma ^{k}\mid k=0, 1, 2, \ldots \}\) satisfying (3.6).
 Step 4::

Set the new iteration point \(x_{k+1}=x_{k}+\alpha _{k}d_{k}\).
 Step 5::

Update the search direction by (3.5).
 Step 6::

If \(\|\nabla \theta ^{M}(x_{k+1})\|\leq \varepsilon \) holds, the algorithm stops; otherwise, go to the next step.
 Step 7::

Let \(k:=k+1\) and go to Step 3.
To establish the validity of the step length \(\alpha _{k}\) in (3.6) and the global convergence of Algorithm 3.1, the following assumptions are necessary.
Assumption

(i)
The level set \(\pi =\{x\mid \theta ^{M}(x) \leq \theta ^{M}(x_{0})\}\) is bounded.

(ii)
The function \(\theta ^{M}(x) \in C^{2}\) is bounded from below.
From the ‘Moreau–Yosida’ regularization technique, the gradient of \(\theta ^{M}(x)\) is Lipschitz continuous, i.e., there exists a positive constant κ such that
Theorem 3.1
If Assumptions (i)–(ii) are true, then there exists a constant \(\alpha _{k}\) that satisfies the requirements of (3.6).
Proof
We introduce the following function:
Based on the established theorem, following the sufficient decrease of (3.7), for sufficiently small positive α, we have
where the latter inequality holds since the objective function is continuous. Thus, there exists a constant \(0<\alpha _{0}<1\) such that \(\vartheta (\alpha _{0}) < 0\); on the other hand, \(\vartheta (0)=0\), and based on the function’s continuous property, there exists a constant \(\alpha _{1}\) such that
Thus,
is correct, which means that the modified Armijo line search is well defined. From the above discussion, Algorithm 3.1 has the sufficient descent and trust region properties, and we can now present the global convergence theorem. □
Theorem 3.2
If the above assumptions are satisfied and the relative sequences \(\{x_{k}\}\), \(\{\alpha _{k}\}\), \(\{d_{k}\}\), \(\{\theta ^{M}(x_{k})\}\) are generated by Algorithm 3.1, then we have \(\lim_{k \rightarrow \infty }\|\nabla \theta ^{M}(x_{k})\|=0\).
We omit the proof because it is similar to that of Theorem 2.1.
Nonsmooth numerical experiment
Two algorithms are compared with the proposed algorithm to measure its efficiency on the test problems listed in Table 4. These algorithms differ from Algorithm 3.1 only in the formula for \(d_{k+1}\). The relevant numerical data are listed in Table 5 of Appendix 2, and the corresponding graphs are plotted based on these data, where ‘NI’, ‘NF’, and ‘CPU’ are the number of iterations, the number of objective function evaluations, and the algorithm’s run time (in seconds), respectively. The first comparison direction is determined by
which, following [51] and without loss of generality, we call Algorithm 2; the other comparison direction, from [4], is calculated as
and is denoted Algorithm 3.
Dimension: 150,000, 180,000, 192,000, 210,000, 222,000, 231,000, 240,000, 252,000, 270,000.
Initiation: \(\lambda =0.9\), \(\lambda _{1}=0.4\), \(\xi _{3}= 100\), \(\xi _{2}=\xi _{4}=0.01\), \(\gamma =0.5\).
Stopping rule: If \(|f(x_{k+1})-f(x_{k})| < 10^{-7}\), the algorithm stops; it also stops if NI exceeds 10,000 or if the number of inner iterations for \(\alpha _{k}\) exceeds 5.
Calculation environment: The calculation environment is a computer with 2 GB of memory, a Pentium(R) Dual-Core CPU E5800 @3.20 GHz, and the 64-bit Windows 7 operating system.
From Figs. 4–6, the proposed algorithm is effective and successful to a large extent. First, the algorithm’s computational results fully address complex situations. Second, the design of the search direction carefully considers the corresponding function value, gradient, and current direction. In Figs. 4 and 5, the curve of Algorithm 3.1 lies above the other two curves because its number of iterations is much lower; its value at the initial point is close to 0.75, which is much larger than for the other algorithms. Note that the proposed algorithm’s computation time is the best of the three algorithms, as its curve increases rapidly and is very smooth. In other words, its curve has a favorable initial point, which reflects high efficiency in addressing complex problems.
Applications of Algorithm 3.1 in image restoration
It is well known that many modern applications of optimization lead to large-scale nonsmooth convex optimization problems; the image restoration problem arising in image processing is an illustrative example. The image restoration problem, which is to reconstruct an image of an unknown scene from an observed image, plays an important role in biological engineering, medical sciences, and other areas of science and engineering (see [6, 10, 32], etc.). The most common image degradation model is defined by the following system:
where \(x \in \Re ^{n}\) is the underlying image, \(b \in \Re ^{m}\) is the observed image, A is an \(m \times n\) blurring matrix, and \(\eta \in \Re ^{m}\) denotes the noise. One way to recover the unknown x is to solve the problem \(\min_{x \in \Re ^{n}} \|Ax-b\|^{2}\). This problem does not have a satisfactory solution, since the system is very sensitive to missing information and to the noise. The regularized least squares problem is often used to overcome this shortcoming:
where \(\|\cdot \|_{1}\) is the \(l^{1}\) norm, λ is the regularization parameter controlling the trade-off between the regularization term and the data-fitting term, and D is a linear operator. The above problem is a nonsmooth convex optimization problem, since the \(l^{1}\) norm is nonsmooth, and it is typically of large scale.
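To illustrate the structure of this model, the following sketch applies a standard proximal-gradient (ISTA) iteration to the special case \(D=I\). This is a textbook method shown for illustration only, not the paper's Algorithm 3.1, and the function names are ours:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    """ISTA sketch for  min ||A x - b||^2 + lam * ||x||_1  (D = I).
    Step size 1/L with L = 2 ||A||_2^2, the Lipschitz constant of the
    smooth part's gradient."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)            # gradient of data term
        x = soft_threshold(x - grad / L, lam / L)  # proximal step on l1 term
    return x
```

With \(A=I\) the iteration reduces to soft-thresholding the data, which makes the sparsity-promoting effect of the \(l^{1}\) term visible: small observed entries are set exactly to zero.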
Image restoration problem
The previous section shows that Algorithm 3.1 can be used for large-scale nonsmooth problems. We now use this algorithm to solve the above image denoising problem, where the parameters are the same as those of Sect. 3.2 except that \(\xi _{2}=\xi _{3}=\xi _{4}=1\). All codes are written in MATLAB R2017a and run on a PC with an Intel Pentium(R) Xeon(R) E5507 CPU @2.27 GHz, 6.00 GB of RAM, and the Windows 7 operating system. The stopping condition is
or
where
the noise candidate index set is \(N:=\{(i,j)\in {A} \mid \bar{y} _{i,j} \neq y_{i,j},y_{i,j}=s_{\mathrm{min}} \mbox{ or } s_{ \mathrm{max}}\}\), where \(s_{\mathrm{max}}\) is the maximum and \(s_{\mathrm{min}}\) the minimum of the noisy pixels, \({A}=\{1,2,\ldots , M\}\times \{1,2,3,\ldots ,N\}\), \(V_{i,j}=\{(i,j-1),(i,j+1),(i-1,j),(i+1,j) \}\) is the neighborhood of \((i,j)\), y denotes the observed image of x corrupted by salt-and-pepper noise, ȳ is the image obtained by applying the adaptive median filter to the noisy image y in the first phase, x is the true image with M-by-N pixels, and \(x_{i,j}\) denotes the gray level of x at pixel location \((i,j)\). The regularity of \(\theta _{\alpha }\) depends only on \(\varphi _{\alpha }\), and many properties of \(\theta _{\alpha }\) and \(\varphi _{\alpha }\) have been studied (see [7, 8], etc.). In the experiments, Lena (\(256 \times 256\)), Cameraman (\(256\times 256\)), Barbara (\(512\times 512\)), Baboon (\(512\times 512\)), and Man (\(1024\times 1024\)) are the tested images. To compare Algorithm 3.1 with similar algorithms, we also test the well-known PRP conjugate gradient algorithm, in which Step 5 of Algorithm 3.1 is replaced by the PRP formula. The performance of these two algorithms (Algorithm 3.1 and the PRP algorithm) and the time spent are reported in Table 1.
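The candidate set N can be computed with a few array operations; the following sketch (the function name and the 8-bit range defaults are our assumptions) flags extreme-valued pixels that the adaptive median filter changed:

```python
import numpy as np

def noise_candidates(y, y_filtered, s_min=0, s_max=255):
    """Boolean mask of the candidate set N: pixels of the noisy image y
    whose value is s_min or s_max (salt or pepper) AND which the
    median-filtered image ybar changed."""
    extreme = (y == s_min) | (y == s_max)
    changed = (y_filtered != y)
    return extreme & changed
```

Restricting the restoration to this mask is what makes the two-phase approach efficient: uncorrupted pixels are left untouched, so the optimization acts only on the (usually small) candidate set.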
Results and discussion
Figures 7 and 8 show that Algorithm 3.1 and the PRP algorithm both perform well on image restoration and can successfully solve this problem. The results in Table 1 show that Algorithm 3.1 is competitive with the PRP algorithm, since it needs less CPU time to restore most of the given images.
Conclusion
This paper proposes a new PRP algorithm that combines an innovative formula for the search direction \(d_{k+1}\) with the modified Armijo line search technique: (i) In the design of the proposed algorithm, key information about the objective function, the gradient, and the current direction is collected and applied to complex problems, and the numerical results show that the proposed algorithm is efficient. (ii) For nonsmooth problems, the ‘Moreau–Yosida’ regularization technique succeeds in enhancing the proposed algorithm, and the numerical results confirm the validity and simplicity of the discussed algorithm. (iii) Image restoration problems are solved by Algorithm 3.1, and the test results show that it performs better than the standard PRP algorithm. Nevertheless, some open problems remain, such as how to better leverage the benefits of the steepest descent method while overcoming its shortcomings.
References
Al-Baali, M.: Descent property and global convergence of the Fletcher–Reeves method with inexact line search. IMA J. Numer. Anal. 5(1), 121–124 (1985)
Al-Baali, M., Narushima, Y., Yabe, H.: A family of three-term conjugate gradient methods with sufficient descent property for unconstrained optimization. Comput. Optim. Appl. 60(1), 89–110 (2015)
Andrei, N.: An unconstrained optimization test functions collection. Environ. Sci. Technol. 10(1), 6552–6558 (2008)
Andrei, N.: On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 241(11), 19–29 (2008)
Argyros, I.K., George, S.: Local convergence analysis of Jarratttype schemes for solving equations. Appl. SetValued Anal. Optim. 1, 53–62 (2019)
Banham, M.R., Katsaggelos, A.K.: Digital image restoration. IEEE Signal Process. Mag. 14, 24–41 (1997)
Cai, J.F., Chan, R.H., Fiore, C.D.: Minimization of a detailpreserving regularization functional for impulse noise removal. J. Math. Imaging Vis. 29, 79–91 (2007)
Cai, J.F., Chan, R.H., Morini, B.: Minimization of an edge-preserving regularization functional by conjugate gradient type methods. In: Image Processing Based on Partial Differential Equations: Proceedings of the International Conference on PDE-Based Image Processing and Related Inverse Problems, CMA, Oslo, August 8–12, 2005, pp. 109–122. Springer, Berlin (2007)
Cardenas, S.: Efficient generalized conjugate gradient algorithms. I. Theory. J. Optim. Theory Appl. 69(1), 129–137 (1991)
Chan, C.L., Katsaggelos, A.K., Sahakian, A.V.: Image sequence filtering in quantumlimited noise with applications to lowdose fluoroscopy. IEEE Trans. Med. Imaging 12, 610–621 (1993)
Dai, Z.: A mixed HS–DY conjugate gradient method. Math. Numer. Sin. (2005)
Dai, Z., Wen, F.: A generalized approach to sparse and stable portfolio optimization problem. J. Ind. Manag. Optim. 14, 1651–1666 (2018)
Dai, Z.F.: Two modified HS type conjugate gradient methods for unconstrained optimization problems. Nonlinear Anal., Theory Methods Appl. 74(3), 927–936 (2011)
Deng, S., Wan, Z.: A three-term conjugate gradient algorithm for large-scale unconstrained optimization problems. Appl. Numer. Math. 92, 70–81 (2015)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2001)
Du, X., Liu, J.: Global convergence of a spectral HS conjugate gradient method. Proc. Eng. 15(4), 1487–1492 (2011)
Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2(1), 21–42 (1990)
Haarala, M., Miettinen, K., Mäkelä, M.M.: New limited memory bundle method for large-scale nonsmooth optimization. Optim. Methods Softw. 19(6), 673–692 (2004)
Haarala, N., Miettinen, K., Mäkelä, M.M.: Globally convergent limited memory bundle method for large-scale nonsmooth optimization. Math. Program. 109(1), 181–205 (2007)
Lin, C.J., Weng, R.C., Keerthi, S.S.: Trust region Newton method for logistic regression. J. Mach. Learn. Res. 9(2), 627–650 (2008)
Meng, F., Zhao, G.: On secondorder properties of the Moreau–Yosida regularization for constrained nonsmooth convex programs. Numer. Funct. Anal. Optim. 25(5–6), 515–529 (2004)
Narushima, Y., Yabe, H., Ford, J.A.: A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 21(1), 212–230 (2016)
Nazareth, L.: A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 23(3), 373–387 (1977)
Oustry, F.: A secondorder bundle method to minimize the maximum eigenvalue function. Math. Program. 89(1), 1–33 (2000)
Pearson, J.W., Martin, S., Wathen, A.J.: Preconditioners for stateconstrained optimal control problems with Moreau–Yosida penalty function. Numer. Linear Algebra Appl. 21(1), 81–97 (2013)
Polak, E.: The conjugate gradient method in extreme problems. Comput. Math. Math. Phys. 9, 94–112 (1969)
Polak, E., Ribière, G.: Note sur la convergence de directions conjugees. Rev. Fr. Inform. Rech. Opér. 3, 35–43 (1969)
Schramm, H., Zowe, J.: A version of the bundle method for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results. SIAM J. Optim. 2(1), 121–152 (1992)
Sheng, Z., Yuan, G.: An effective adaptive trust region algorithm for nonsmooth minimization. Comput. Optim. Appl. 71, 251–271 (2018)
Sheng, Z., Yuan, G., et al.: An adaptive trust region algorithm for large-residual nonsmooth least squares problems. J. Ind. Manag. Optim. 14, 707–718 (2018)
Sheng, Z., Yuan, G., Cui, Z.: A new adaptive trust region algorithm for optimization problems. Acta Math. Sci. 38(2), 479–496 (2018)
Slump, C.H.: Real-time image restoration in diagnostic X-ray imaging, the effects on quantum noise. In: Proceedings of the 11th IAPR International Conference on Pattern Recognition, Vol. II, Conference B: Pattern Recognition Methodology and Systems, pp. 693–696 (1992)
Sun, Q., Liu, Q.: Global convergence of modified HS conjugate gradient method. J. Appl. Math. Comput. 22(3), 289–297 (2009)
Touati-Ahmed, D., Storey, C.: Efficient hybrid conjugate gradient techniques. J. Optim. Theory Appl. 64(2), 379–397 (1990)
Wan, Z., Yang, Z.L., Wang, Y.L.: New spectral PRP conjugate gradient method for unconstrained optimization. Appl. Math. Lett. 24(1), 16–22 (2011)
Wang, W., Qiao, X., Han, Y.: A proximal bundle method for nonsmooth and nonconvex constrained optimization. Comput. Stat. Data Anal. 34(34), 3464–3485 (2011)
Wei, Z., Li, G., Qi, L.: New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 175(2), 1156–1188 (2006)
Wei, Z., Yao, S., Liu, L.: The convergence properties of some new conjugate gradient methods. Appl. Math. Comput. 183(2), 1341–1350 (2006)
Wei, Z., Yu, G., Yuan, G., et al.: The superlinear convergence of a modified BFGS-type method for unconstrained optimization. Comput. Optim. Appl. 29(3), 315–332 (2004)
Ying, L., Hai, Z., Fei, L.: A modified proximal gradient method for a family of nonsmooth convex optimization problems. J. Oper. Res. Soc. China 5, 391–403 (2017)
Yuan, G., Hu, W., Wang, B.: A modified Armijo line search technique for largescale nonconvex smooth and convex nonsmooth optimization problems. Preprint (2017)
Yuan, G., Li, Y., Li, Y.: A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations. J. Optim. Theory Appl. 168(1), 129–152 (2016)
Yuan, G., Lu, X.: A modified PRP conjugate gradient method. Ann. Oper. Res. 166(1), 73–90 (2009)
Yuan, G., Lu, X., Wei, Z.: A conjugate gradient method with descent direction for unconstrained optimization. J. Comput. Appl. Math. 233(2), 519–530 (2009)
Yuan, G., Sheng, Z., Wang, P., Hu, W., Li, C.: The global convergence of a modified BFGS method for nonconvex functions. J. Comput. Appl. Math. 327, 274–294 (2018)
Yuan, G., Wei, Z.: Convergence analysis of a modified BFGS method on convex minimizations. Comput. Optim. Appl. 47(2), 237–255 (2010)
Yuan, G., Wei, Z., Li, G.: A modified Polak–Ribière–Polyak conjugate gradient algorithm for nonsmooth convex programs. J. Comput. Appl. Math. 255, 86–96 (2014)
Yuan, G., Wei, Z., Lu, X.: Global convergence of BFGS and PRP methods under a modified weak Wolfe–Powell line search. Appl. Math. Model. 47, 811–825 (2017)
Yuan, G., Zhang, M.: A three-term Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations. J. Comput. Appl. Math. 286, 186–195 (2015)
Zaslavski, A.J.: Three convergence results for continuous descent methods with a convex objective function. J. Appl. Numer. Optim. 1, 53–61 (2019)
Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26(4), 629–640 (2006)
Zhao, X., Ng, K.F., Li, C., Yao, J.C.: Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 78, 613–641 (2018)
Zhou, W.: Some descent threeterm conjugate gradient methods and their global convergence. Optim. Methods Softw. 22(4), 697–711 (2007)
Acknowledgements
The authors would like to thank the funding agencies for their support.
Funding
This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the Guangxi Natural Science Fund for Distinguished Young Scholars (No. 2015GXNSFGA139001), and the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046).
Author information
Contributions
GY analyzed the theoretical results and organized the paper; TL performed the numerical experiments on the smooth problems; and WH focused on the nonsmooth problems and the image restoration problems. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Yuan, G., Li, T. & Hu, W. A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration. J. Inequal. Appl. 2019, 247 (2019). https://doi.org/10.1186/s13660-019-2192-6
MSC
 90C26
Keywords
 Conjugate gradient
 Nonconvex and nonsmooth
 Descent property
 Global convergence