
Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem

Abstract

A new approach is proposed to reformulate the biobjective optimization model of portfolio management as an unconstrained minimization problem whose objective function is a piecewise quadratic polynomial. We present some properties of this objective function. Then, a class of penalty algorithms based on the well-known conjugate gradient methods is developed to solve the portfolio management problem. Numerical results on real problems from the stock market in China show that the proposed algorithm is promising.

1. Introduction

The portfolio management problem is concerned with allocating one's assets among alternative securities so as to maximize the return on the assets and to minimize the investment risk. The pioneering work on this problem is Markowitz's mean-variance model [1]; the solution of his mean-variance methodology has been central to subsequent research and forms the basis for the development of modern portfolio management theory. Commonly, the portfolio management problem has the following mathematical description.

Assume that there are $n$ kinds of securities. The return rate of the $i$th security is denoted by $r_i$, $i = 1, 2, \ldots, n$. Let $x_i$ be the proportion of the total assets devoted to the $i$th security; then it is obvious that

$$\sum_{i=1}^{n} x_i = 1. \qquad (1.1)$$

In the real setting, due to uncertainty, the return rates $r_i$, $i = 1, 2, \ldots, n$, are random parameters. Hence, the total return on the assets,

$$R = \sum_{i=1}^{n} r_i x_i = r^{T} x, \qquad r = (r_1, \ldots, r_n)^{T}, \; x = (x_1, \ldots, x_n)^{T}, \qquad (1.2)$$

is also random. In this situation, the risk of the investment has to be taken into consideration. In the classical model, this risk is measured by the variance of $R$. If $V$ denotes the covariance matrix of the vector $r$, then the variance of $R$ is

$$\operatorname{Var}(R) = x^{T} V x. \qquad (1.3)$$

Therefore, the portfolio management problem can be formulated as the following biobjective programming problem:

$$\max \; E(r)^{T} x, \qquad \min \; x^{T} V x, \qquad \text{s.t.} \; e^{T} x = 1, \; x \ge 0, \qquad (1.4)$$

where $e$ is a vector of all ones. To the best of our knowledge, almost all of the existing models of portfolio management evolved from the basic model (1.4).

In summary, past work on portfolio management problems has concentrated on two major issues. The first is to propose new models. In this connection, some notable recent contributions include the following:

(i) mean-absolute deviation model (Simaan [2]),

(ii) maximizing probability model (Williams [3]),

(iii) different types of mean-variance models (Best and Jaroslava [4], Konno and Suzuki [5], and Yoshimoto [6]),

(iv) min-max models (Cai et al. [7], Deng et al. [8]),

(v) interval programming models (Giove et al. [9], Ida [10], Lai et al. [11]),

(vi) fuzzy goal programming model (Parra et al. [12]),

(vii) admissible efficient portfolio selection model (Zhang and Nie [13]),

(viii) possibility approach model with highest utility score (Carlsson et al. [14]),

(ix) upper and lower exponential possibility distribution based model (Tanaka and Guo [15]),

(x) models with fuzzy probabilities (Huang [16, 17], Tanaka and Guo [15]).

The second issue concerns numerical solution algorithms for the various models. One fundamental approach is to reformulate (1.4) as a deterministic single-objective optimization problem. For example, in the research of Best [4, 18], Pang [19], Kawadai and Konno [20], Perold [21], Sharpe [22], Szegö [23], and Yoshimoto [6], it is assumed that the return of each security, its variance, and the covariances among the securities can be estimated by the investor prior to the decision. Under this assumption, problem (1.4) is deterministic. Furthermore, if an aversion coefficient $\lambda \in [0, 1]$ is introduced, problem (1.4) can be transformed into a standard quadratic programming problem of the following form:

$$\min_{x} \; \lambda\, x^{T} V x - (1 - \lambda)\, \mu^{T} x \quad \text{s.t.} \quad e^{T} x = 1, \quad l \le x \le u, \qquad (1.5)$$

where $\mu = E(r)$ is the expected value vector of $r$, and $l$ and $u$ are two given vectors denoting the lower and the upper bounds of the decision vector $x$, respectively.

Obviously, if $\lambda = 0$ in (1.5), then the return is maximized regardless of the investment risk. On the other hand, if $\lambda = 1$, then the risk is minimized without consideration of the investment income. An increasing value of $\lambda$ in the interval $[0, 1]$ places an increasing weight on the investment risk, and vice versa.
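To make the role of the aversion coefficient concrete, the following small sketch evaluates the return, the risk, and the objective of (1.5), in the form displayed above, on synthetic data for three securities; the data and the variable names are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Synthetic data for three securities (illustrative only).
mu = np.array([0.08, 0.12, 0.15])        # expected return rates
V = np.array([[0.10, 0.02, 0.01],        # symmetric positive semidefinite covariance matrix
              [0.02, 0.20, 0.03],
              [0.01, 0.03, 0.30]])
x = np.array([0.5, 0.3, 0.2])            # a feasible portfolio: nonnegative weights summing to one

for lam in (0.0, 0.5, 1.0):
    ret = mu @ x                          # expected return mu^T x, cf. (1.2)
    risk = x @ V @ x                      # variance x^T V x, cf. (1.3)
    obj = lam * risk - (1.0 - lam) * ret  # objective of (1.5) as displayed above
    print(f"lambda={lam:.1f}: return={ret:.4f}, risk={risk:.4f}, objective={obj:.4f}")
```

With $\lambda = 0$ only the (negated) return matters, while with $\lambda = 1$ only the variance term remains, in line with the discussion above.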

For a fixed $\lambda$, (1.5) is a quadratic programming problem. Since the matrix $V$ is positive semidefinite, problem (1.5) is a convex quadratic program (CQP). For a CQP, there exist many efficient methods to find its minimizers. Among them, active-set methods, interior-point methods, and gradient-projection methods have been widely used since the 1970s; for their detailed numerical performance, one can see [24–30] and the references therein. However, the efficiency of these methods depends heavily on factorization techniques for the Hessian matrix at each iteration, often exploiting its sparsity in large-scale quadratic programming. So, from the viewpoint of storage requirements and computational cost, the methods mentioned above may not be the most suitable for solving problem (1.5) if $V$ is a dense matrix.

Fortunately, recent research shows that conjugate gradient methods can remedy the drawback of factorizing the Hessian matrix for an unconstrained minimization problem: each conjugate gradient iteration involves only the computation of the gradient of the objective function. For details in this direction, see, for example, [31–34].

Motivated by this advantage of the conjugate gradient methods, the first aim of this paper is to reformulate problem (1.5) as an equivalent unconstrained optimization problem. Then, we develop an efficient algorithm based on conjugate gradient methods to find its solution. The effectiveness of the algorithm is tested by applying it to some real problems from the stock market in China.

The layout of the paper is as follows. Section 2 is devoted to the reformulation of the original constrained problem, and some features of the resulting subproblem are presented. In Section 3, we develop a penalty algorithm based on conjugate gradient methods. Section 4 provides applications of the proposed algorithm. The last section concludes with some final remarks.

2. Reformulation

Firstly, for brevity, denote

(2.1)

Then, the problem (1.5) reads

(2.2)

Since the covariance matrix $V$ is symmetric positive semidefinite, the matrix of the quadratic objective in (2.2) inherits this property. Thus, the objective function of (2.2) is convex.

For the equality constraint and the inequality constraints in (2.2), we define a function that measures the constraint violation:

(2.3)

where the coefficient appearing in (2.3) is called the penalty parameter, and $\|\cdot\|$ denotes the 2-norm of a vector. If $x$ is a feasible point of problem (2.2), then

(2.4)

Actually, the larger the absolute value of this measure is, the further $x$ lies from the feasible region.

The function defined by

(2.5)

is said to be a penalty function of problem (2.2). It has the following features:

(i) it is a piecewise quadratic polynomial;

(ii) it is piecewise continuously differentiable;

(iii) if the matrix of the quadratic objective in (2.2) is positive semidefinite, then it is a piecewise convex quadratic function.

For example, introduce the notation

(2.6)

Then, the penalty function has the following more compact form:

(2.7)

where the last term in (2.7) is a constant scalar.
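The following sketch gives one plausible instantiation of the penalty construction just described, namely the convex quadratic objective of (2.2) plus a penalty parameter times the squared 2-norm of the violations of the equality and bound constraints; the exact form in (2.3)–(2.7) may differ, and the names `penalty_objective`, `penalty_gradient`, and `sigma` are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def penalty_objective(x, V, mu, lam, l, u, sigma):
    """A quadratic-penalty function for (2.2): the objective of (1.5)
    plus sigma times the squared 2-norm of the constraint violations."""
    e = np.ones_like(x)
    f = lam * (x @ V @ x) - (1.0 - lam) * (mu @ x)   # convex quadratic objective (V PSD)
    viol_eq = e @ x - 1.0                            # violation of e^T x = 1
    viol_lo = np.maximum(l - x, 0.0)                 # violation of x >= l
    viol_up = np.maximum(x - u, 0.0)                 # violation of x <= u
    return f + sigma * (viol_eq**2 + viol_lo @ viol_lo + viol_up @ viol_up)

def penalty_gradient(x, V, mu, lam, l, u, sigma):
    """Gradient of penalty_objective; it is piecewise linear in x, so the
    penalty function is piecewise quadratic and piecewise continuously
    differentiable, as noted above."""
    e = np.ones_like(x)
    g = 2.0 * lam * (V @ x) - (1.0 - lam) * mu       # V assumed symmetric
    g += 2.0 * sigma * (e @ x - 1.0) * e
    g -= 2.0 * sigma * np.maximum(l - x, 0.0)
    g += 2.0 * sigma * np.maximum(x - u, 0.0)
    return g
```

On every region on which the set of violated bounds does not change, such a function reduces to an ordinary convex quadratic, which is precisely the structure exploited by the conjugate gradient subproblems of Section 3.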

Next, we present some properties of the penalty function.

Proposition 2.1.

For a given problem dimension, the piecewise penalty function consists of

(2.8)

pieces.

Proposition 2.2.

Assume that . For any , define a matrix

(2.9)

where each of the other rows is . Then, the following results hold.

, where denotes the rank of a matrix.

For a fixed , all matrices , where , have the same eigenvalues and eigenvectors.

When , all matrices , where , have nonnegative eigenvalues, and hence they are positive semidefinite. When , they are positive definite matrices.

Proof.

From the construction of the matrix in (2.9) and elementary linear algebra, it is not difficult to prove the above two propositions; we omit the details.

In the following, we state the relation between the global minimizers of the penalty function and those of the original problem (1.5).

Theorem 2.3.

For a given sequence of penalty parameters tending to infinity, let the corresponding iterates be exact global minimizers of the associated penalty functions. Then, every accumulation point of this sequence of minimizers is a solution of problem (1.5).

Proof.

Let be a global solution of problem (1.5). Then, for any feasible point , we have

(2.10)

Since is an exact global minimizer of for the fixed , it follows that

(2.11)

By definition, (2.11) is equivalent to

(2.12)

where the last equality follows from the feasibility of the global solution. Hence, it is obtained that

(2.13)

Let be an accumulation point of . Without loss of generality, assume that

(2.14)

Then, by taking the limit on both sides of (2.13), we have

(2.15)

where the last equality follows from . It follows that

(2.16)

Therefore, we have proved that the accumulation point is a feasible point.

Next, we prove that it is also a global minimizer of problem (1.5).

Because

(2.17)

holds, the accumulation point is a global minimizer of problem (1.5).

The desired result has been proved.

Without difficulty, the following result can be proved.

Theorem 2.4.

Suppose that is a solution of problem (1.5). Then, is a global minimizer of for any .

Based on Theorems 2.3 and 2.4, we will develop an algorithm that searches for a solution of problem (1.5) by solving a sequence of piecewise quadratic programming problems.

3. Penalty Algorithm Based on Conjugate Gradient Method

Among all methods for unconstrained optimization problems, the conjugate gradient method is regarded as one of the most powerful approaches due to its small storage requirements and low computational cost. Its advantages over other methods have been addressed in many papers; for example, in [27, 32, 34–38], the global convergence theory and the detailed numerical performance of conjugate gradient methods have been extensively investigated.

Since the number of securities that may be selected in investment management is large and the matrix $V$ may be dense, it is natural to select the conjugate gradient method to find the minimizer of the penalty function for a given penalty parameter. However, (2.7) is not a classical quadratic function, so the standard procedures for minimizing a quadratic function cannot be directly employed. To develop a new algorithm, we first propose a rule for updating the coefficients in (2.7).

Regarding the coefficients of the quadratic terms in

(3.1)

we modify them according to the following update rule:

(3.2)

Regarding the coefficients of the linear terms in

(3.3)

we modify them according to the following update rule:

(3.4)

Define

(3.5)

The conjugate gradient method will then be employed for the ordinary minimization of a quadratic function:

(3.6)

where is a given parameter. It is easy to see that

(3.7)
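For a generic quadratic of the kind minimized in (3.6), written here in standard notation (which may differ from the symbols used in (3.5)–(3.7)), the only quantity each conjugate gradient iteration requires is the gradient:

$$q(x) = \tfrac{1}{2}\, x^{T} A x + b^{T} x + c, \qquad \nabla q(x) = A x + b,$$

so one matrix–vector product with $A$ per iteration suffices and no factorization of $A$ is needed.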

Although there exist several variants of the conjugate gradient method, the fundamental computational procedure for the solution of (3.6) consists of the following two steps.

At the current iterate point, determine a search direction:

(3.8)

where is chosen such that is a conjugate direction of with respect to the matrix .

Along this direction, choose a step size such that, at the new iterate point

(3.9)

the value of the function decreases sufficiently.
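The two steps above can be made concrete for a strictly convex quadratic $q(x) = \frac{1}{2} x^{T} A x + b^{T} x$. The sketch below uses an exact line search and a Hestenes–Stiefel-type direction update, both of which are discussed below; it is a generic illustration under these assumptions rather than the paper's exact procedure.

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-8, max_iter=1000):
    """Minimize q(x) = 0.5 x^T A x + b^T x (A symmetric positive definite)
    by the conjugate gradient method with exact line search."""
    x = x0.copy()
    g = A @ x + b                        # gradient of q at x
    d = -g                               # first search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        Ad = A @ d
        alpha = -(g @ d) / (d @ Ad)      # exact line search along d, cf. (3.13)
        x = x + alpha * d                # new iterate, cf. (3.9)
        g_new = A @ x + b
        y = g_new - g
        beta = (g_new @ y) / (d @ y)     # Hestenes-Stiefel formula, cf. (3.10)
        d = -g_new + beta * d            # next conjugate direction, cf. (3.8)
        g = g_new
    return x
```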

The following lemma presents a method to determine the search direction.

Lemma 3.1.

If

(3.10)
(3.11)

then the direction given by (3.8) is a conjugate direction with respect to the matrix of (3.6).

Proof.

Owing to

(3.12)

the desired result is obtained.

Actually, formula (3.10) is known as the Hestenes–Stiefel (HS) formula.

In the case that the step size is chosen by an exact line search along the search direction, that is,

(3.13)

we have the following global convergence theorem.

Theorem 3.2.

Let the initial point be arbitrary, and let the iterate sequence be generated by the conjugate gradient algorithm defined by (3.8)–(3.13). Then, either

(3.14)

or

(3.15)

In particular, every accumulation point of the iterate sequence is a global minimizer of the quadratic function in (3.6).

Remark 3.3.

If the direction parameter is instead computed by

(3.16)

then the results in Theorem 3.2 still hold. Formula (3.16) is known as the Fletcher–Reeves (FR) formula.
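For reference, the standard Hestenes–Stiefel and Fletcher–Reeves choices of the direction parameter, to which (3.10) and (3.16) presumably correspond, are (with $g_k$ the gradient at the $k$th iterate and $d_k$ the $k$th search direction):

$$\beta_k^{\mathrm{HS}} = \frac{g_{k+1}^{T}\,(g_{k+1} - g_k)}{d_k^{T}\,(g_{k+1} - g_k)}, \qquad \beta_k^{\mathrm{FR}} = \frac{g_{k+1}^{T}\, g_{k+1}}{g_k^{T}\, g_k}.$$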

Based on the discussion above, we close this section by stating a penalty algorithm based on the conjugate gradient method.

Algorithm 1 (Penalty Algorithm Based on Conjugate Gradient Method).

Step 0 (Initialization).

Given constant scalars , , , and . Input the expected return vector , and compute and . Choose an initial solution . Set , , and .

Step 1 (Reformulation).

If

(3.17)

then set

(3.18)

and go to Step 4; otherwise, go to Step 2.

Step 2 (Search Direction).

Compute the search direction by (3.8) and (3.10).

Step 3 (Exact Line Search).

Compute the step size by (3.13), and update

(3.19)

Return to Step 1.

Step 4 (Feasibility Test).

Check the feasibility of the current iterate in problem (2.2). If

(3.20)

holds, then the algorithm terminates; otherwise, go to Step 5.

Step 5 (Update).

Set , , . At the new iterate point , modify the matrix and the vector by (3.2) and (3.4), respectively. Set , and return to Step 1.
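To summarize how the pieces fit together, the following sketch implements the overall scheme of Algorithm 1 under the same illustrative assumptions as the earlier snippets; it reuses `penalty_objective` and `penalty_gradient` from Section 2, and a backtracking line search stands in for the exact line search of Step 3, whose closed form may differ. All parameter names and default values are illustrative rather than those specified in Step 0.

```python
import numpy as np

# Reuses penalty_objective and penalty_gradient from the sketch in Section 2.

def backtracking_step(x, d, g, func, alpha0=1.0, c=1e-4, shrink=0.5):
    """Armijo backtracking line search (a stand-in for the exact line search of (3.13))."""
    alpha, f0, slope = alpha0, func(x), c * (g @ d)
    while func(x + alpha * d) > f0 + alpha * slope and alpha > 1e-12:
        alpha *= shrink
    return alpha

def penalty_cg_solve(V, mu, lam, l, u, sigma0=1.0, rho=10.0,
                     grad_tol=1e-6, feas_tol=1e-8, max_outer=20, max_inner=500):
    """Sketch of Algorithm 1: conjugate gradient inner iterations (Steps 1-3),
    feasibility test (Step 4), and penalty-parameter update (Step 5)."""
    n = len(mu)
    x = np.full(n, 1.0 / n)                    # initial point on the simplex
    sigma = sigma0
    for _ in range(max_outer):
        func = lambda z: penalty_objective(z, V, mu, lam, l, u, sigma)
        grad = lambda z: penalty_gradient(z, V, mu, lam, l, u, sigma)
        g = grad(x)
        d = -g
        for _ in range(max_inner):             # inner conjugate gradient loop
            if np.linalg.norm(g) <= grad_tol:
                break
            alpha = backtracking_step(x, d, g, func)
            x = x + alpha * d
            g_new = grad(x)
            y = g_new - g
            denom = d @ y
            beta = (g_new @ y) / denom if abs(denom) > 1e-12 else 0.0  # HS-type update
            d = -g_new + beta * d
            g = g_new
        # Step 4: feasibility test for problem (2.2).
        infeas = abs(np.sum(x) - 1.0) \
                 + np.linalg.norm(np.maximum(l - x, 0.0)) \
                 + np.linalg.norm(np.maximum(x - u, 0.0))
        if infeas <= feas_tol:
            break
        sigma *= rho                           # Step 5: enlarge the penalty parameter
    return x
```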

Remark 3.4.

In Algorithm 1, one index counts the number of penalty-parameter updates, and the other counts the number of conjugate gradient iterations for the unconstrained subproblem (3.6).

For a fixed penalty parameter, it is easy to see that the condition

(3.21)

implies that the current iterate is feasible. By Theorem 2.3, it follows that this iterate is a global minimizer of the original problem (1.5) if it is a global minimizer of problem (3.6).

4. Numerical Experiments

In this section, we test the effectiveness of Algorithm 1. All the test problems come from the real stock market in China in 2007. The computational procedures are implemented in MATLAB 6.5.

In our numerical experiments, the initial solution is chosen to satisfy

(4.1)

the lower bound vector $l$ is a vector of all zeros, and the upper bound vector $u$ is a vector of all ones. We use fixed values of the initial penalty parameter and the aversion coefficient. The error tolerance is taken as

(4.2)

We implement Algorithm 1 to solve ten real problems, with dimensions ranging from 10 to 100. In these problems, the expected return rates of each stock are computed from monthly data of the Chinese stock market in 2007. In Table 3, we list the data used to form a real problem of dimension 30.
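Since the 2007 market data are given only in Table 3, the following usage example runs the solver sketched above on randomly generated data of the same shape (synthetic expected returns and a synthetic positive semidefinite covariance matrix); the chosen aversion coefficient is likewise illustrative, so the output does not reproduce the reported results.

```python
import numpy as np

np.random.seed(0)
n = 30                                    # dimension, as in the Table 3 example
B = np.random.randn(n, n)
V = B @ B.T / n                           # synthetic positive semidefinite covariance matrix
mu = 0.05 + 0.10 * np.random.rand(n)      # synthetic expected monthly return rates
l, u = np.zeros(n), np.ones(n)            # lower and upper bounds as described above
lam = 0.5                                 # illustrative aversion coefficient

x = penalty_cg_solve(V, mu, lam, l, u)
print("sum of weights :", x.sum())
print("expected return:", mu @ x)
print("portfolio risk :", x @ V @ x)
```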

In Table 1, we report the numerical behavior of Algorithm 1 for all ten problems.

Table 1 Numerical performance of Algorithm 1.

In Table 1, the dimension of each problem is listed; the third and the fourth columns report the CPU time when the direction parameter is evaluated by the HS formula and by the FR formula, respectively; the remaining columns indicate the number of penalty-parameter updates, the final penalty parameter, and the value of the penalty term.

In Table 2, we list the obtained optimal solution for each problem.

Table 2 Optimal solutions of the ten problems.
Table 3 The return rates collected from the stock market in China, 2007.

5. Final Remarks

In this paper, the biobjective optimization model of portfolio management was reformulated as an unconstrained minimization problem, and the properties of the resulting piecewise quadratic objective function were presented.

Taking into account the features of the optimization models arising in portfolio management, a class of penalty algorithms based on the conjugate gradient method was developed. The numerical performance of the proposed algorithm in solving real problems verifies its effectiveness.

References

  1. Markowitz H: Portfolio selection. Journal of Finance 1952, 7: 77–91. 10.2307/2975974


  2. Simaan Y: Estimation risk in portfolio selection: the mean variance model versus the mean absolute deviation model. Management Science 1997, 43: 1437–1446. 10.1287/mnsc.43.10.1437


  3. Williams JO: Maximizing the probability of achieving investment goals. Journal of Portfolio Management 1997, 24: 77–81. 10.3905/jpm.1997.409627


  4. Best MJ, Jaroslava H: The efficient frontier for bounded assets. Mathematical Methods of Operations Research 2000,52(2):195–212. 10.1007/s001860000073


  5. Konno H, Suzuki K: A mean-variance-skewness optimization model. Journal of Operations Research Society of Japan 1995, 38: 137–187.


  6. Yoshimoto A: The mean-variance approach to portfolio optimization subject to transaction costs. Journal of the Operations Research Society of Japan 1996,39(1):99–117.


  7. Cai X, Teo KL, Yang X, Zhou XY: Portfolio optimization under a minimax rule. Management Science 2000,46(7):957–972. 10.1287/mnsc.46.7.957.12039


  8. Deng XT, Li ZF, Wang SY: A minimax portfolio selection strategy with equilibrium. European Journal of Operational Research 2005,166(1):278–292. 10.1016/j.ejor.2004.01.040


  9. Giove S, Funari S, Nardelli C: An interval portfolio selection problem based on regret function. European Journal of Operational Research 2006,170(1):253–264. 10.1016/j.ejor.2004.05.030


  10. Ida M: Solutions for the portfolio selection problem with interval and fuzzy coefficients. Reliable Computing 2004,10(5):389–400.


  11. Lai KK, Wang SY, Xu JP, Zhu SS, Fang Y: A class of linear interval programming problems and its application to portfolio selection. IEEE Transactions on Fuzzy Systems 2002,10(6):698–704. 10.1109/TFUZZ.2002.805902


  12. Parra MA, Terol AB, Uria MVR: A fuzzy goal programming approach to portfolio selection. European Journal of Operational Research 2001,133(2):287–297. 10.1016/S0377-2217(00)00298-8


  13. Zhang WG, Nie ZK: On admissible efficient portfolio selection problem. Applied Mathematics and Computation 2004, 159: 357–371. 10.1016/j.amc.2003.10.019


  14. Carlsson C, Fullér R, Majlender P: A possibilistic approach to selecting portfolios with highest utility score. Fuzzy Sets and Systems 2002,131(1):13–21. 10.1016/S0165-0114(01)00251-2


  15. Tanaka H, Guo P: Portfolio selection based on upper and lower exponential possibility distributions. European Journal of Operational Research 1999, 114: 115–126. 10.1016/S0377-2217(98)00033-2


  16. Huang XX: Fuzzy chance-constrained portfolio selection. Applied Mathematics and Computation 2006,177(2):500–507. 10.1016/j.amc.2005.11.027


  17. Huang XX: Two new models for portfolio selection with stochastic returns taking fuzzy information. European Journal of Operational Research 2007,180(1):396–405. 10.1016/j.ejor.2006.04.010


  18. Best MJ, Grauer RR: The efficient set mathematics when mean-variance problems are subject to general linear constraints. Journal of Economics and Business 1990, 42: 105–120. 10.1016/0148-6195(90)90027-A


  19. Pang JS: A new and efficient algorithm for a class of portfolio selection problems. Operations Research 1980,28(3, part 2):754–767. 10.1287/opre.28.3.754


  20. Kawadai N, Konno H: Solving large scale mean-variance models with dense non-factorable covariance matrices. Journal of the Operations Research Society of Japan 2001,44(3):251–260.


  21. Perold AF: Large-scale portfolio optimization. Management Science 1984,30(10):1143–1160. 10.1287/mnsc.30.10.1143


  22. Sharpe WF: Portfolio Theory and Capital Markets. McGraw-Hill, New York, NY, USA; 1970.


  23. Szegö GP: Portfolio Theory, Economic Theory, Econometrics, and Mathematical Economics. Academic Press, New York, NY, USA; 1980:xiv+215.


  24. Andersen ED, Gondzio J, Meszaros C, Xu X: Implementation of interior-point methods for large scale linear programs. In Interior Point Methods of Mathematical Programming, Applied Optimization. Volume 5. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1996:189–252. 10.1007/978-1-4613-3449-1_6


  25. Gondzio J, Grothey A: Parallel interior-point solver for structured quadratic programs: application to financial planning problems. Annals of Operations Research 2007, 152: 319–339. 10.1007/s10479-006-0139-z


  26. Mehrotra S: On the implementation of a primal-dual interior point method. SIAM Journal on Optimization 1992,2(4):575–601. 10.1137/0802028


  27. Nocedal J, Wright SJ: Numerical Optimization, Springer Series in Operations Research and Financial Engineering. 2nd edition. Springer, New York, NY, USA; 2006:xxii+664.


  28. Potra F, Roos C, Terlaky T (Eds): Special Issue on Interior-Point Methods. Optimization Methods and Software 1999, 11–12.


  29. Solodov MV, Tseng P: Modified projection-type methods for monotone variational inequalities. SIAM Journal on Control and Optimization 1996,34(5):1814–1830. 10.1137/S0363012994268655


  30. Coleman TF, Hulbert LA: A direct active set algorithm for large sparse quadratic programs with simple bounds. Mathematical Programming 1989,45(3):373–406. 10.1007/BF01589112


  31. Andrei N: A Dai-Yuan conjugate gradient algorithm with sufficient descent and conjugacy conditions for unconstrained optimization. Applied Mathematics Letters 2008,21(2):165–171. 10.1016/j.aml.2007.05.002


  32. Dai Y, Ni Q: Testing different conjugate gradient methods for large-scale unconstrained optimization. Journal of Computational Mathematics 2003,21(3):311–320.


  33. Shi Z-J, Shen J: Convergence of Liu-Storey conjugate gradient method. European Journal of Operational Research 2007,182(2):552–560. 10.1016/j.ejor.2006.09.066


  34. Sun J, Yang X, Chen X: Quadratic cost flow and the conjugate gradient method. European Journal of Operational Research 2005,164(1):104–114. 10.1016/j.ejor.2003.04.003


  35. Dai YH, Yuan Y: Convergence properties of the conjugate descent method. Advances in Mathematics 1996,25(6):552–562.


  36. Liu Y, Storey C: Efficient generalized conjugate gradient algorithms. I. Theory. Journal of Optimization Theory and Applications 1991,69(1):129–137. 10.1007/BF00940464


  37. Nocedal J: Conjugate gradient methods and nonlinear optimization. In Linear and Nonlinear Conjugate Gradient-Related Methods. Edited by: Adams L, Nazareth JL. SIAM, Philadelphia, Pa, USA; 1996:9–23.


  38. Sun J, Zhang JP: Convergence of conjugate gradient methods without line search. Annals of Operations Research 2001, 103: 161–173. 10.1023/A:1012903105391



Acknowledgments

The authors are grateful to the editors and the three anonymous referees for their suggestions, which have greatly improved the presentation of this paper. This work is supported by the National Natural Science Fund of China (Grant no. 60804037) and the Project for Excellent Talent of New Century, Ministry of Education, China (Grant no. NCET-07-0864).

Author information

Correspondence to Zhong Wan.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Wan, Z., Zhang, S.J. & Wang, Y.L. Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem. J Inequal Appl 2009, 970723 (2009). https://doi.org/10.1155/2009/970723
