Open Access

Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem

Journal of Inequalities and Applications 2009, 2009:970723

https://doi.org/10.1155/2009/970723

Received: 17 December 2008

Accepted: 7 July 2009

Published: 18 August 2009

Abstract

A new approach is proposed to reformulate the biobjective optimization model of portfolio management as an unconstrained minimization problem, in which the objective function is a piecewise quadratic polynomial. Some properties of this objective function are presented. Then, a class of penalty algorithms based on well-known conjugate gradient methods is developed to find the solution of the portfolio management problem. By implementing the proposed algorithm to solve real problems from the stock market in China, it is shown that this algorithm is promising.

1. Introduction

The portfolio management problem concerns allocating one's assets among alternative securities so as to maximize the return of the assets and minimize the investment risk. The pioneering work on this problem was Markowitz's mean-variance model [1]; the solution of his mean-variance methodology has been the center of subsequent research activity and forms the basis for the development of modern portfolio management theory. Commonly, the portfolio management problem has the following mathematical description.

Assume that there are $n$ kinds of securities. The return rate of the $i$th security is denoted by $r_i$, $i = 1, 2, \dots, n$. Let $x_i$ be the proportion of total assets devoted to the $i$th security; then it is obvious that
$$\sum_{i=1}^{n} x_i = 1. \tag{1.1}$$
In the real setting, due to uncertainty, the return rates $r_i$, $i = 1, 2, \dots, n$, are random parameters. Hence, the total return of the assets
$$R = \sum_{i=1}^{n} r_i x_i \tag{1.2}$$
is also random. In this situation, the risk of investment has to be taken into consideration. In the classical model, this risk is measured by the variance of $R$. If $V$ is the covariance matrix of the vector $r = (r_1, r_2, \dots, r_n)^T$, then the variance of $R$ is
$$\sigma^2(R) = x^T V x. \tag{1.3}$$
Therefore, a portfolio management problem can be formulated as the following biobjective programming problem:
$$\max\ E(R) = E(r)^T x, \qquad \min\ x^T V x, \qquad \text{s.t. } e^T x = 1,\ x \ge 0, \tag{1.4}$$

where $e$ is a vector of all ones. To the best of our knowledge, almost all of the existing models of portfolio management problems evolved from the basic model (1.4).

In summary, past work on the portfolio management problem has concentrated on two major issues. The first is to propose new models. In this connection, notable recent contributions mainly include the following:

(i) mean-absolute deviation model (Simaan [2]);

(ii) maximizing probability model (Williams [3]);

(iii) different types of mean-variance models (Best and Jaroslava [4], Konno and Suzuki [5], and Yoshimoto [6]);

(iv) min-max models (Cai et al. [7], Deng et al. [8]);

(v) interval programming models (Giove et al. [9], Ida [10], Lai et al. [11]);

(vi) fuzzy goal programming model (Parra et al. [12]);

(vii) admissible efficient portfolio selection model (Zhang and Nie [13]);

(viii) possibility approach model with highest utility score (Carlsson et al. [14]);

(ix) upper and lower exponential possibility distribution based model (Tanaka and Guo [15]);

(x) models with fuzzy probabilities (Huang [16, 17], Tanaka and Guo [15]).

The second issue concerns numerical solution algorithms for the various models. One fundamental approach is to reformulate (1.4) as a deterministic single-objective optimization problem. For example, Best [4, 18], Pang [19], Kawadai and Konno [20], Perold [21], Sharpe [22], Szegö [23], and Yoshimoto [6] assumed that the return of each security, the variances, and the covariances among them can be estimated by the investor prior to the decision. Under this assumption, problem (1.4) is deterministic. Furthermore, if an aversion coefficient $\lambda \in [0, 1]$ is introduced, problem (1.4) can be transformed into the following standard quadratic programming problem:
$$\min\ \lambda x^T V x - (1 - \lambda)\mu^T x, \qquad \text{s.t. } e^T x = 1,\ l \le x \le u, \tag{1.5}$$

where $\mu$ is the expected value vector of $r$, and $l$ and $u$ are two given vectors denoting the lower and the upper bounds of the decision vector, respectively.

Obviously, if $\lambda = 0$ in (1.5), then the return is maximized regardless of the investment risk. On the other hand, if $\lambda = 1$, then the risk is minimized without consideration of the investment income. An increasing value of $\lambda$ in the interval $[0, 1]$ places an increasing weight on the investment risk, and vice versa.

For a fixed $\lambda$, (1.5) is a quadratic programming problem. Since the matrix $V$ is positive semidefinite, problem (1.5) is a convex quadratic program (CQP). For a CQP, there exist many efficient methods to find its minimizers. Among them, active-set methods, interior-point methods, and gradient-projection methods have been widely used since the 1970s. For their detailed numerical performance, see [24–30] and the references therein. However, the efficiency of those methods depends heavily on matrix factorization techniques at each iteration, often exploiting sparsity for large-scale quadratic programming. So, from the viewpoint of storage requirements and computational cost, the methods mentioned above may not be the most suitable for solving problem (1.5) when $V$ is a dense matrix.
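For concreteness, the objective of (1.5) can be evaluated on a toy instance. The following sketch is in Python (the paper's experiments use MATLAB); the names `V`, `mu`, and `lam` for the covariance matrix, the expected return vector, and the aversion coefficient are our own choices:

```python
import numpy as np

def markowitz_objective(x, V, mu, lam):
    """Objective of problem (1.5): risk term weighted by the aversion
    coefficient lam, minus the weighted expected return."""
    return lam * (x @ V @ x) - (1.0 - lam) * (mu @ x)

# Toy 3-asset instance: covariance matrix and expected return rates.
V = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
mu = np.array([0.08, 0.12, 0.15])

x = np.array([0.5, 0.3, 0.2])       # a feasible portfolio: sums to 1
assert abs(x.sum() - 1.0) < 1e-12

# V is positive semidefinite, so the objective is convex for any lam in [0, 1].
assert np.all(np.linalg.eigvalsh(V) >= -1e-12)

print(markowitz_objective(x, V, mu, lam=0.5))
```

With `lam = 0` the function reduces to the negated expected return, and with `lam = 1` to the pure risk term, matching the discussion above.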

Fortunately, recent research shows that conjugate gradient methods can avoid factorizing the Hessian matrix in unconstrained minimization: each conjugate-gradient iteration involves only the computation of the gradient of the objective function. For details in this direction, see, for example, [31–34].

Motivated by the advantages of conjugate gradient methods, the first aim of this paper is to reformulate problem (1.5) as an equivalent unconstrained optimization problem. Then, we develop an efficient algorithm based on conjugate gradient methods to find its solution. The effectiveness of such an algorithm is tested by implementing it to solve some real problems from the stock market in China.

The layout of the paper is as follows. Section 2 is devoted to the reformulation of the original constrained problem, and some features of the subproblem are presented. Then, in Section 3, we develop a penalty algorithm based on conjugate gradient methods. Section 4 provides applications of the proposed algorithm. The last section concludes with some final remarks.

2. Reformulation

Firstly, for brevity, denote
$$Q = 2\lambda V, \qquad c = -(1 - \lambda)\mu, \qquad f(x) = \tfrac{1}{2}x^T Q x + c^T x. \tag{2.1}$$
Then, problem (1.5) reads
$$\min\ f(x), \qquad \text{s.t. } e^T x = 1,\ l \le x \le u. \tag{2.2}$$

Since the covariance matrix $V$ is symmetric positive semidefinite, $Q$ also has this property. Thus, $f$ is a convex function.

For the equality constraint and the inequality constraints in (2.2), we define a function $p_\sigma$, which is used to describe the constraint violation:
$$p_\sigma(x) = \sigma\left[(e^T x - 1)^2 + \|\max\{l - x, 0\}\|^2 + \|\max\{x - u, 0\}\|^2\right], \tag{2.3}$$
where $\sigma > 0$ is called the penalty parameter, $\|\cdot\|$ denotes the 2-norm of a vector, and the max is taken componentwise. If $x$ is a feasible point of problem (2.2), then
$$p_\sigma(x) = 0. \tag{2.4}$$

Actually, the larger the value of $p_\sigma(x)$ is, the further $x$ is from the feasible region.

The function $F_\sigma$,
$$F_\sigma(x) = f(x) + p_\sigma(x), \tag{2.5}$$

is said to be a penalty function of problem (2.2). It is noted that $F_\sigma$ has the following features:

(i) $F_\sigma$ is a piecewise quadratic polynomial;

(ii) $F_\sigma$ is piecewise continuously differentiable;

(iii) if $Q$ is positive semidefinite, then $F_\sigma$ is a piecewise convex quadratic function.

For example, let
$$I_l(x) = \{i : x_i < l_i\}, \qquad I_u(x) = \{i : x_i > u_i\}, \tag{2.6}$$
and denote by $E_l(x)$ and $E_u(x)$ the diagonal 0–1 matrices whose $i$th diagonal entries equal 1 exactly when $i \in I_l(x)$ and $i \in I_u(x)$, respectively.

Then, $F_\sigma$ has the following more compact form:

$$F_\sigma(x) = \tfrac{1}{2}x^T\left[Q + 2\sigma\left(ee^T + E_l(x) + E_u(x)\right)\right]x + \left[c - 2\sigma\left(e + E_l(x)l + E_u(x)u\right)\right]^T x + c_0, \tag{2.7}$$

where $c_0 = \sigma\left(1 + \sum_{i \in I_l(x)} l_i^2 + \sum_{i \in I_u(x)} u_i^2\right)$ is a constant scalar on each piece.
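As a quick numerical check, the penalty term can be sketched in Python (rather than the authors' MATLAB); the function name `penalty` is our own, and we fix the bounds to $l = 0$, $u = 1$, the values used later in the experiments:

```python
import numpy as np

def penalty(x, sigma):
    """Quadratic penalty for the constraints e^T x = 1 and 0 <= x <= 1
    (a sketch; sigma is the penalty parameter)."""
    eq = (x.sum() - 1.0) ** 2            # equality-constraint violation
    lower = np.maximum(0.0 - x, 0.0)     # amount below the lower bound l = 0
    upper = np.maximum(x - 1.0, 0.0)     # amount above the upper bound u = 1
    return sigma * (eq + lower @ lower + upper @ upper)

x_feas = np.array([0.5, 0.3, 0.2])       # feasible: sums to 1, in [0, 1]
x_inf  = np.array([0.8, 0.5, -0.1])      # infeasible: sums to 1.2, x_3 < 0

print(penalty(x_feas, 10.0))             # ~0 at a feasible point
print(penalty(x_inf, 10.0))              # strictly positive
```

The penalty is zero precisely on the feasible region and grows quadratically with the distance from it, in line with (2.3)–(2.4).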

Next, we are going to present some properties of .

Proposition 2.1.

Given $n$ and $\sigma$, the piecewise function $F_\sigma$ consists of
$$3^n \tag{2.8}$$

pieces.

Proposition 2.2.

Assume that $Q$ is positive semidefinite. For any index set $J \subseteq \{1, 2, \dots, n\}$ with $|J| = m$, define the matrix
$$A_J = ee^T + E_J, \tag{2.9}$$

where $E_J$ is the diagonal matrix whose $i$th diagonal entry is 1 if $i \in J$ and 0 otherwise, and each of the other rows of $E_J$ is zero. Then, the following results hold.

(i) $\operatorname{rank}(A_J) \le m + 1$, where $\operatorname{rank}(\cdot)$ denotes the rank of a matrix.

(ii) For a fixed $m$, all matrices $A_J$ with $|J| = m$ have the same eigenvalues.

(iii) All matrices $A_J$ have nonnegative eigenvalues, and hence they are positive semidefinite. When $m = n$, they are positive definite matrices.

Proof.

From the construction of the matrices above and linear algebra theory, the above two propositions are not difficult to prove; we omit the details.

In the following, we state the relation between the global minimizers of $F_\sigma$ and those of the original problem (1.5).

Theorem 2.3.

For a given sequence $\{\sigma_k\}$ of penalty parameters, suppose that $\sigma_k \to \infty$ as $k \to \infty$. Let $x^k$ be an exact global minimizer of $F_{\sigma_k}$. Then, every accumulation point of $\{x^k\}$ is a solution of problem (1.5).

Proof.

Let $\bar{x}$ be a global solution of problem (1.5). Then, for any feasible point $x$, we have
$$f(\bar{x}) \le f(x). \tag{2.10}$$
Since $x^k$ is an exact global minimizer of $F_{\sigma_k}$ for the fixed $\sigma_k$, it follows that
$$F_{\sigma_k}(x^k) \le F_{\sigma_k}(\bar{x}). \tag{2.11}$$

By definition, (2.11) is equivalent to

$$f(x^k) + p_{\sigma_k}(x^k) \le f(\bar{x}) + p_{\sigma_k}(\bar{x}) = f(\bar{x}), \tag{2.12}$$
where the last equality is from the feasibility of $\bar{x}$. So, it is obtained that
$$p_{\sigma_k}(x^k) \le f(\bar{x}) - f(x^k). \tag{2.13}$$

Let $x^*$ be an accumulation point of $\{x^k\}$. Without loss of generality, assume that

$$\lim_{k \to \infty} x^k = x^*. \tag{2.14}$$
Then, dividing both sides of (2.13) by $\sigma_k$ and taking the limit as $k \to \infty$, we have
$$(e^T x^* - 1)^2 + \|\max\{l - x^*, 0\}\|^2 + \|\max\{x^* - u, 0\}\|^2 \le \lim_{k \to \infty} \frac{f(\bar{x}) - f(x^k)}{\sigma_k} = 0, \tag{2.15}$$
where the last equality follows from $\sigma_k \to \infty$. It follows that
$$e^T x^* = 1, \qquad l \le x^* \le u. \tag{2.16}$$

Therefore, we have proved that $x^*$ is a feasible point.

In the following, we prove that $x^*$ is a global minimizer of problem (1.5).

Because

$$f(x^*) = \lim_{k \to \infty} f(x^k) \le \lim_{k \to \infty}\left[f(x^k) + p_{\sigma_k}(x^k)\right] \le f(\bar{x}), \tag{2.17}$$
$x^*$ is a global minimizer of problem (1.5).

The desired result has been proved.

Without difficulty, the following result can be proved.

Theorem 2.4.

Suppose that $\bar{x}$ is a solution of problem (1.5). Then, $\bar{x}$ is a global minimizer of $F_\sigma$ for any $\sigma > 0$.

Based on Theorems 2.3 and 2.4, we will develop an algorithm to search for a solution of problem (1.5) by solving a sequence of piecewise quadratic programming problems.

3. Penalty Algorithm Based on Conjugate Gradient Method

Among the methods for unconstrained optimization problems, the conjugate gradient method is regarded as one of the most powerful approaches due to its small storage requirements and computational cost. Its advantages over other methods have been addressed in many papers. For example, in [27, 32, 34–38], the global convergence theory and the detailed numerical performance of conjugate gradient methods have been extensively investigated.

Since the number of securities available for investment is large and the matrix $Q$ may be dense, it is natural to select the conjugate gradient method to find the minimizer of $F_\sigma$ for a given $\sigma$. However, (2.7) is not a classical quadratic function, so the standard procedures for minimizing a quadratic function cannot be directly employed. To develop a new algorithm, we first propose a rule for updating the coefficients in $F_\sigma$.

Regarding the coefficients of the quadratic terms in
$$\tfrac{1}{2}x^T \bar{Q} x, \tag{3.1}$$
we modify $\bar{Q}$ according to the following update rule:
$$\bar{Q} = Q + 2\sigma\left(ee^T + E_l(x) + E_u(x)\right). \tag{3.2}$$
Regarding the coefficients of the linear terms in
$$\bar{c}^T x, \tag{3.3}$$
we modify $\bar{c}$ according to the following update rule:
$$\bar{c} = c - 2\sigma\left(e + E_l(x)l + E_u(x)u\right). \tag{3.4}$$
Define
$$q(x) = \tfrac{1}{2}x^T \bar{Q} x + \bar{c}^T x. \tag{3.5}$$
The conjugate gradient method will be employed for the ordinary minimization of a quadratic function:
$$\min\ q(x), \tag{3.6}$$
where $\sigma$ is a given parameter. It is easy to see that
$$\nabla q(x) = \bar{Q}x + \bar{c}. \tag{3.7}$$

Although there exist several variants of the conjugate gradient method, the fundamental computing procedure for the solution of (3.6) consists of the following two steps.

At the current iterate point $x^k$, determine a search direction:

$$d^k = -g^k + \beta_{k-1} d^{k-1}, \tag{3.8}$$

where $g^k = \nabla q(x^k)$ and $\beta_{k-1}$ is chosen such that $d^k$ is a conjugate direction of $d^{k-1}$ with respect to the matrix $\bar{Q}$.

Along the direction $d^k$, choose a step size $\alpha_k$ such that, at the new iterate point

$$x^{k+1} = x^k + \alpha_k d^k, \tag{3.9}$$

the value of the objective function decreases sufficiently.

The following lemma presents a method to determine the search direction.

Lemma 3.1.

If
$$\beta_{k-1} = \frac{(g^k)^T\left(g^k - g^{k-1}\right)}{(d^{k-1})^T\left(g^k - g^{k-1}\right)}, \tag{3.10}$$
$$d^0 = -g^0, \tag{3.11}$$

then $d^k$ in (3.8) is a conjugate direction of $d^{k-1}$ with respect to $\bar{Q}$.

Proof.

Owing to $g^k - g^{k-1} = \alpha_{k-1}\bar{Q}d^{k-1}$,
$$(d^k)^T\bar{Q}d^{k-1} = \left(-g^k + \beta_{k-1}d^{k-1}\right)^T\bar{Q}d^{k-1} = \frac{1}{\alpha_{k-1}}\left(-g^k + \beta_{k-1}d^{k-1}\right)^T\left(g^k - g^{k-1}\right) = 0, \tag{3.12}$$

where the last equality follows from the choice of $\beta_{k-1}$ in (3.10). The desired result is obtained.

Actually, the formula (3.10) is known as the HS (Hestenes–Stiefel) method.

In the case that the step size $\alpha_k$ is chosen by exact line search along the direction $d^k$, that is,
$$\alpha_k = \arg\min_{\alpha \ge 0} q(x^k + \alpha d^k) = -\frac{(g^k)^T d^k}{(d^k)^T \bar{Q} d^k}, \tag{3.13}$$

we have the following global convergence theorem.

Theorem 3.2.

Let $x^0$ be an arbitrary initial vector, and let $\{x^k\}$ be the sequence generated by the conjugate gradient algorithm defined by (3.8)–(3.13). Then, either
$$g^k = 0 \quad \text{for some finite } k, \tag{3.14}$$
or
$$\lim_{k \to \infty} \|g^k\| = 0. \tag{3.15}$$

In particular, if $x^*$ is an accumulation point of the sequence $\{x^k\}$, then $x^*$ is a global minimizer of $q$.

Remark 3.3.

If $\beta_{k-1}$ is computed by
$$\beta_{k-1} = \frac{\|g^k\|^2}{\|g^{k-1}\|^2}, \tag{3.16}$$

then the results in Theorem 3.2 still hold. Equation (3.16) is known as the FR (Fletcher–Reeves) method.
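The HS and FR update rules can be illustrated with a short conjugate gradient routine for a strictly convex quadratic. This is a sketch in Python rather than the authors' MATLAB code; the function name `cg_quadratic` and its parameters are our own:

```python
import numpy as np

def cg_quadratic(Q, c, x0, beta_rule="HS", tol=1e-10, max_iter=200):
    """Minimize q(x) = 0.5 x^T Q x + c^T x by conjugate gradients with
    exact line search, using the HS rule (3.10) or the FR rule (3.16)
    for beta (a sketch; Q is assumed positive definite)."""
    x = x0.copy()
    g = Q @ x + c                      # gradient of q at x
    d = -g                             # first direction (3.11): steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        Qd = Q @ d
        alpha = -(g @ d) / (d @ Qd)    # exact line search (3.13)
        x = x + alpha * d
        g_new = Q @ x + c
        y = g_new - g                  # equals alpha * Q d
        if beta_rule == "HS":
            beta = (g_new @ y) / (d @ y)        # Hestenes-Stiefel
        else:
            beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves
        d = -g_new + beta * d          # direction update (3.8)
        g = g_new
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
for rule in ("HS", "FR"):
    print(rule, cg_quadratic(Q, c, np.zeros(2), rule))
```

On a quadratic with exact line search the two rules coincide, and both recover the minimizer $-Q^{-1}c$ in at most $n$ iterations.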

Based on the discussion above, we now develop a penalty algorithm based on the conjugate gradient method in the remainder of this section.

Algorithm 1 (Penalty Algorithm Based on Conjugate Gradient Method).

Step 0 (Initialization).

Given constant scalars $\sigma_0 > 0$, $\rho > 1$, $\varepsilon > 0$, and $\lambda \in [0, 1]$, input the expected return vector $\mu$, and compute $Q$ and $c$. Choose an initial solution $x^0$. Set $k := 0$, $j := 0$, and $\sigma := \sigma_0$.

Step 1 (Reformulation).

If
$$\|\nabla q(x^k)\| \le \varepsilon, \tag{3.17}$$
then set
$$\bar{x}^j = x^k, \tag{3.18}$$

and go to Step 4; otherwise, go to Step 2.

Step 2 (Search Direction).

Compute the search direction $d^k$ by (3.8) and (3.10).

Step 3 (Exact Line Search).

Compute $\alpha_k$ by (3.13), and update
$$x^{k+1} = x^k + \alpha_k d^k, \qquad k := k + 1. \tag{3.19}$$

Return to Step 1.

Step 4 (Feasibility Test).

Check the feasibility of $\bar{x}^j$ in problem (2.2). If
$$p_\sigma(\bar{x}^j) \le \varepsilon, \tag{3.20}$$

the algorithm terminates; otherwise, go to Step 5.

Step 5 (Update).

Set $\sigma := \rho\sigma$, $j := j + 1$, and $x^0 := \bar{x}^j$. At the new iterate point, modify the matrix $\bar{Q}$ and the vector $\bar{c}$ by (3.2) and (3.4), respectively. Set $k := 0$, and return to Step 1.

Remark 3.4.

In Algorithm 1, the index $j$ denotes the number of updates of the penalty parameter, and $k$ denotes the number of iterations of the conjugate gradient method for the unconstrained subproblem (3.6).
For a fixed $\sigma$, it is easy to see that the condition
$$p_\sigma(\bar{x}^j) = 0 \tag{3.21}$$

implies that $\bar{x}^j$ is feasible. From Theorem 2.3, it follows that $\bar{x}^j$ is a global minimizer of the original problem (1.5) if $\bar{x}^j$ is a global minimizer of problem (3.6).
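Under the simplifying assumption that the bound constraints stay inactive (so $F_\sigma$ reduces to the single quadratic piece with $\bar{Q} = Q + 2\sigma ee^T$ and $\bar{c} = c - 2\sigma e$), Algorithm 1 can be sketched in Python as an outer penalty loop around an inner conjugate gradient solve. All names and parameter values (`sigma`, `rho`, `tol`) are illustrative choices of ours, not the authors':

```python
import numpy as np

def cg_quadratic(Q, c, x0, tol=1e-9, max_iter=100):
    """Conjugate gradient with exact line search and the HS beta (3.10)."""
    x = x0.copy()
    g = Q @ x + c                           # gradient of 0.5 x'Qx + c'x
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        Qd = Q @ d
        alpha = -(g @ d) / (d @ Qd)         # exact line search (3.13)
        x = x + alpha * d
        g_new = Q @ x + c
        y = g_new - g
        d = -g_new + ((g_new @ y) / (d @ y)) * d
        g = g_new
    return x

def penalty_cg(V, mu, lam=0.5, sigma=1.0, rho=10.0, tol=1e-6, max_outer=6):
    """Outer penalty loop: minimize f(x) + sigma*(e'x - 1)^2 by CG,
    then increase sigma until the budget constraint is nearly met."""
    n = len(mu)
    Q = 2.0 * lam * V                       # so that 0.5 x'Qx = lam x'Vx
    c = -(1.0 - lam) * mu
    e = np.ones(n)
    x = np.full(n, 1.0 / n)                 # feasible starting point
    for _ in range(max_outer):
        Qbar = Q + 2.0 * sigma * np.outer(e, e)   # quadratic coefficients (3.2)
        cbar = c - 2.0 * sigma * e                # linear coefficients (3.4)
        x = cg_quadratic(Qbar, cbar, x)           # inner CG solve of (3.6)
        if abs(e @ x - 1.0) < tol:                # feasibility test (Step 4)
            break
        sigma *= rho                              # penalty update (Step 5)
    return x

V = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
mu = np.array([0.08, 0.12, 0.15])
x = penalty_cg(V, mu)
print(x, x.sum())
```

As $\sigma$ grows, the iterates approach the solution of the equality-constrained problem, in line with Theorem 2.3; in the general piecewise case, $\bar{Q}$ and $\bar{c}$ would also carry the active bound terms $E_l(x)$ and $E_u(x)$.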

4. Numerical Experiments

In this section, we test the effectiveness of Algorithm 1. All the test problems come from the real stock market in China in 2007. The computations are carried out in MATLAB 6.5.

In our numerical experiments, the initial solution $x^0$ is chosen to satisfy
$$e^T x^0 = 1, \tag{4.1}$$
the lower bound vector $l$ is the vector of all zeros, and $u$ is the vector of all ones. We take a fixed value of the initial penalty parameter $\sigma_0$ and of the aversion coefficient $\lambda$. The tolerance of error is taken as in (4.2).

We implement Algorithm 1 to solve ten real problems, whose dimensions range from 10 to 100. In these problems, the expected return rates of each stock are computed from the monthly data of the stock market of China in 2007. In Table 3, we list the data used to form the real problem of dimension 30.

In Table 1, we report the numerical behavior of Algorithm 1 for all ten problems.
Table 1

Numerical performance of Algorithm 1 on Problems 1–10. For each problem, the table reports the dimension $n$, the CPU time when $\beta$ is evaluated by the HS method and by the FR method, the number $j$ of updates of the penalty parameter, the final penalty parameter $\sigma$, and the final value of the penalty term.

In Table 2, we list the obtained optimal solution for each problem.
Table 2

Optimal solutions of the ten problems. For each of Problems 1–10, the nonzero components of the optimal solution $x^*$ are listed; all other components of $x^*$ are zeros.

Table 3

The return rates collected from the stock market in China, 2007. Rows are stocks No. 1–No. 30; columns are the twelve months.

Stock     1       2       3       4       5       6       7       8       9       10      11      12
No.1    0.4600  0.1900  0.1800  0.1130  0.2400  0.4600  0.4200  0.1500  0.1700  0.1140  0.2100  0.4200
No.2    0.6420  0.6560  0.6630  0.6990  0.6080  0.5420  0.6210  0.5550  0.6590  0.5810  0.6850  0.6210
No.3    0.1190  0.0590  0.2100  0.1100  0.1200  0.1190  0.1280  0.0580  0.2100  0.1100  0.1300  0.1280
No.4    0.0800  0.0350  0.2540  0.0830  0.0960  0.0800  0.1000  0.0340  0.2440  0.1100  0.1200  0.1000
No.5    0.7170  0.0940  0.4400  0.1430  0.6880  0.7170  0.7080  0.0190  0.3100  0.1470  0.6810  0.7080
No.6    0.0151  0.0105  0.0749  0.0081  0.0133  0.0151  0.0179  0.0083  0.0309  0.0090  0.1390  0.0179
No.7    0.2530  0.2430  0.3100  0.0480  0.1500  0.2530  0.2470  0.2440  0.3000  0.0480  0.1500  0.2470
No.8    0.3400  0.3006  0.3500  0.2280  0.4800  0.3400  0.3400  0.3026  0.3500  0.2270  0.4800  0.3400
No.9    0.0804  0.0579  0.1190  0.0420  0.0600  0.0804  0.0833  0.0597  0.1070  0.0430  0.0600  0.0833
No.10   0.0360  0.0230  0.0300  0.0140  0.0360  0.0360  0.0740  0.0210  0.0420  0.0150  0.0540  0.0740
No.11   0.0050  0.0130  0.0234  0.0020  0.0020  0.0050  0.0046  0.0130  0.0187  0.0020  0.0014  0.0046
No.12   0.2897  0.3100  0.4303  0.1153  0.1930  0.2897  0.2893  0.3200  0.3893  0.1151  0.1927  0.2893
No.13   0.7690  0.8060  0.9050  0.5340  0.4980  0.7690  0.7670  0.8090  0.8600  0.4350  0.5700  0.7670
No.14   0.0160  0.0110  0.0258  0.0006  0.0050  0.0160  0.0230  0.0170  0.0171  0.0242  0.0220  0.0230
No.15   0.0820  0.0370  0.0640  0.0200  0.0550  0.0820  0.0770  0.0370  0.0690  0.0200  0.0490  0.0770
No.16   0.4714  0.3607  0.6000  0.1275  0.2700  0.4714  0.4295  0.3585  0.5700  0.1275  0.2600  0.4295
No.17   0.2280  0.0950  0.1240  0.0820  0.1650  0.2280  0.2150  0.0970  0.1250  0.0812  0.1520  0.2150
No.18   0.0107  0.0053  0.0120  0.0040  0.0070  0.0107  0.0108  0.0416  0.0400  0.0040  0.0120  0.0108
No.19   0.1400  0.2000  0.2400  0.0518  0.1100  0.1400  0.1400  0.2000  0.2300  0.0512  0.1200  0.1400
No.20   0.1500  0.1600  0.2100  0.0400  0.1100  0.1500  0.1500  0.1500  0.2000  0.0400  0.1100  0.1500
No.21   0.9850  1.3137  1.3200  0.2567  0.6100  0.9850  0.8130  1.3179  1.2900  0.1336  0.4300  0.8130
No.22   0.4717  0.4800  0.5730  0.0150  0.4271  0.4717  0.4285  0.2500  0.2338  0.0130  0.3816  0.4285
No.23   0.2500  0.1250  0.3300  0.0780  0.2000  0.2500  0.2400  0.1260  0.3400  0.0770  0.2000  0.2400
No.24   0.0310  0.0600  0.0880  0.0060  0.0300  0.0310  0.0240  0.0600  0.0950  0.0060  0.0190  0.0240
No.25   0.1190  0.0590  0.2100  0.1100  0.1200  0.1190  0.1280  0.0580  0.2100  0.1100  0.1300  0.1280
No.26   0.0110  0.0139  0.0140  0.0040  0.0090  0.0110  0.0020  0.0137  0.0080  0.0010  0.0010  0.0020
No.27   0.0100  0.0640  0.0515  0.0350  0.0040  0.0100  0.0700  0.0630  0.0061  0.0690  0.0733  0.0700
No.28   0.2680  0.2770  0.4320  0.0230  0.1900  0.2680  0.2680  0.2740  0.4320  0.0220  0.1880  0.2680
No.29   0.0061  0.0060  0.6742  0.0033  0.0050  0.0061  0.0131  0.0220  0.4474  0.0039  0.0099  0.0131
No.30   0.0600  0.2200  2.2300  0.0250  0.0400  0.0600  0.0450  0.0100  1.5700  0.0080  0.0300  0.0450

5. Final Remarks

In this paper, the biobjective optimization model of portfolio management was reformulated as an unconstrained minimization problem, and we presented the properties of the resulting piecewise quadratic objective function.

Regarding the features of the optimization models in portfolio management, a class of penalty algorithms based on the conjugate gradient method was developed. The numerical performance of the proposed algorithm on real problems verifies its effectiveness.

Declarations

Acknowledgments

The authors are grateful to the editors and the three anonymous referees for their suggestions, which have greatly improved the presentation of this paper. This work is supported by the National Natural Science Fund of China (Grant no. 60804037) and the project for Excellent Talent of New Century, Ministry of Education, China (Grant no. NCET-07-0864).

Authors’ Affiliations

(1)
School of Mathematics Sciences and Computing Technology, Central South University
(2)
School of Information Sciences and Engineering, Central South University

References

1. Markowitz H: Portfolio selection. Journal of Finance 1952, 7: 77–91. doi:10.2307/2975974
2. Simaan Y: Estimation risk in portfolio selection: the mean variance model versus the mean absolute deviation model. Management Science 1997, 43: 1437–1446. doi:10.1287/mnsc.43.10.1437
3. Williams JO: Maximizing the probability of achieving investment goals. Journal of Portfolio Management 1997, 24: 77–81. doi:10.3905/jpm.1997.409627
4. Best MJ, Jaroslava H: The efficient frontier for bounded assets. Mathematical Methods of Operations Research 2000, 52(2): 195–212. doi:10.1007/s001860000073
5. Konno H, Suzuki K: A mean-variance-skewness optimization model. Journal of the Operations Research Society of Japan 1995, 38: 137–187.
6. Yoshimoto A: The mean-variance approach to portfolio optimization subject to transaction costs. Journal of the Operations Research Society of Japan 1996, 39(1): 99–117.
7. Cai X, Teo KL, Yang X, Zhou XY: Portfolio optimization under a minimax rule. Management Science 2000, 46(7): 957–972. doi:10.1287/mnsc.46.7.957.12039
8. Deng XT, Li ZF, Wang SY: A minimax portfolio selection strategy with equilibrium. European Journal of Operational Research 2005, 166(1): 278–292. doi:10.1016/j.ejor.2004.01.040
9. Giove S, Funari S, Nardelli C: An interval portfolio selection problem based on regret function. European Journal of Operational Research 2006, 170(1): 253–264. doi:10.1016/j.ejor.2004.05.030
10. Ida M: Solutions for the portfolio selection problem with interval and fuzzy coefficients. Reliable Computing 2004, 10(5): 389–400.
11. Lai KK, Wang SY, Xu JP, Zhu SS, Fang Y: A class of linear interval programming problems and its application to portfolio selection. IEEE Transactions on Fuzzy Systems 2002, 10(6): 698–704. doi:10.1109/TFUZZ.2002.805902
12. Parra MA, Terol AB, Uria MVR: A fuzzy goal programming approach to portfolio selection. European Journal of Operational Research 2001, 133(2): 287–297. doi:10.1016/S0377-2217(00)00298-8
13. Zhang WG, Nie ZK: On admissible efficient portfolio selection problem. Applied Mathematics and Computation 2004, 159: 357–371. doi:10.1016/j.amc.2003.10.019
14. Carlsson C, Fullér R, Majlender P: A possibilistic approach to selecting portfolios with highest utility score. Fuzzy Sets and Systems 2002, 131(1): 13–21. doi:10.1016/S0165-0114(01)00251-2
15. Tanaka H, Guo P: Portfolio selection based on upper and lower exponential possibility distributions. European Journal of Operational Research 1999, 114: 115–126. doi:10.1016/S0377-2217(98)00033-2
16. Huang XX: Fuzzy chance-constrained portfolio selection. Applied Mathematics and Computation 2006, 177(2): 500–507. doi:10.1016/j.amc.2005.11.027
17. Huang XX: Two new models for portfolio selection with stochastic returns taking fuzzy information. European Journal of Operational Research 2007, 180(1): 396–405. doi:10.1016/j.ejor.2006.04.010
18. Best MJ, Grauer RR: The efficient set mathematics when mean-variance problems are subject to general linear constraints. Journal of Economics and Business 1990, 42: 105–120. doi:10.1016/0148-6195(90)90027-A
19. Pang JS: A new and efficient algorithm for a class of portfolio selection problems. Operations Research 1980, 28(3, part 2): 754–767. doi:10.1287/opre.28.3.754
20. Kawadai N, Konno H: Solving large scale mean-variance models with dense non-factorable covariance matrices. Journal of the Operations Research Society of Japan 2001, 44(3): 251–260.
21. Perold AF: Large-scale portfolio optimization. Management Science 1984, 30(10): 1143–1160. doi:10.1287/mnsc.30.10.1143
22. Sharpe WF: Portfolio Theory and Capital Markets. McGraw-Hill, New York, NY, USA; 1970.
23. Szegö GP: Portfolio Theory, Economic Theory, Econometrics, and Mathematical Economics. Academic Press, New York, NY, USA; 1980: xiv+215.
24. Andersen ED, Gondzio J, Meszaros C, Xu X: Implementation of interior-point methods for large scale linear programs. In Interior Point Methods of Mathematical Programming, Applied Optimization, Volume 5. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1996: 189–252. doi:10.1007/978-1-4613-3449-1_6
25. Gondzio J, Grothey A: Parallel interior-point solver for structured quadratic programs: application to financial planning problems. Annals of Operations Research 2007, 152: 319–339. doi:10.1007/s10479-006-0139-z
26. Mehrotra S: On the implementation of a primal-dual interior point method. SIAM Journal on Optimization 1992, 2(4): 575–601. doi:10.1137/0802028
27. Nocedal J, Wright SJ: Numerical Optimization, Springer Series in Operations Research and Financial Engineering. 2nd edition. Springer, New York, NY, USA; 2006: xxii+664.
28. Potra F, Roos C, Terlaky T (Eds): Special Issue on Interior-Point Methods. Optimization Methods and Software 1999, 11–12.
29. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM Journal on Control and Optimization 1996, 34(5): 1814–1830. doi:10.1137/S0363012994268655
30. Coleman TF, Hulbert LA: A direct active set algorithm for large sparse quadratic programs with simple bounds. Mathematical Programming 1989, 45(3): 373–406. doi:10.1007/BF01589112
31. Andrei N: A Dai-Yuan conjugate gradient algorithm with sufficient descent and conjugacy conditions for unconstrained optimization. Applied Mathematics Letters 2008, 21(2): 165–171. doi:10.1016/j.aml.2007.05.002
32. Dai Y, Ni Q: Testing different conjugate gradient methods for large-scale unconstrained optimization. Journal of Computational Mathematics 2003, 21(3): 311–320.
33. Shi Z-J, Shen J: Convergence of Liu-Storey conjugate gradient method. European Journal of Operational Research 2007, 182(2): 552–560. doi:10.1016/j.ejor.2006.09.066
34. Sun J, Yang X, Chen X: Quadratic cost flow and the conjugate gradient method. European Journal of Operational Research 2005, 164(1): 104–114. doi:10.1016/j.ejor.2003.04.003
35. Dai YH, Yuan Y: Convergence properties of the conjugate descent method. Advances in Mathematics 1996, 25(6): 552–562.
36. Liu Y, Storey C: Efficient generalized conjugate gradient algorithms. I. Theory. Journal of Optimization Theory and Applications 1991, 69(1): 129–137. doi:10.1007/BF00940464
37. Nocedal J: Conjugate gradient methods and nonlinear optimization. In Linear and Nonlinear Conjugate Gradient-Related Methods. Edited by: Adams L, Nazareth JL. SIAM, Philadelphia, Pa, USA; 1996: 9–23.
38. Sun J, Zhang JP: Convergence of conjugate gradient methods without line search. Annals of Operations Research 2001, 103: 161–173. doi:10.1023/A:1012903105391

Copyright

© Zhong Wan et al. 2009

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.