- Research Article
- Open access

# Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem

*Journal of Inequalities and Applications*
**volume 2009**, Article number: 970723 (2009)

## Abstract

A new approach is proposed to reformulate the biobjective optimization model of portfolio management into an unconstrained minimization problem whose objective function is a piecewise quadratic polynomial. We present some properties of this objective function. Then, a class of penalty algorithms based on the well-known conjugate gradient methods is developed to solve the portfolio management problem. By implementing the proposed algorithm on real problems from the stock market in China, it is shown that the algorithm is promising.

## 1. Introduction

The portfolio management problem concerns allocating one's assets among alternative securities so as to maximize the return of the assets and to minimize the investment risk. The pioneering work on this problem is Markowitz's mean-variance model [1], whose mean-variance methodology has been at the center of subsequent research activities and forms the basis for the development of modern portfolio management theory. Commonly, the portfolio management problem has the following mathematical description.

Assume that there are $n$ kinds of securities. The return rate of the $i$th security is denoted by $r_i$, $i = 1, 2, \ldots, n$. Let $x_i$ be the proportion of total assets devoted to the $i$th security; then it is obvious that

$$\sum_{i=1}^{n} x_i = 1. \tag{1.1}$$

In the real setting, due to uncertainty, the return rates $r_i$, $i = 1, 2, \ldots, n$, are random parameters. Hence, the total return of the assets

$$R = \sum_{i=1}^{n} r_i x_i = r^{T} x \tag{1.2}$$

is also random. In this situation, the risk of investment has to be taken into consideration. In the classical model, this risk is measured by the variance of $R$. If $V$ is the covariance matrix of the vector $r = (r_1, \ldots, r_n)^{T}$, then the variance of $R$ is

$$\operatorname{Var}(R) = x^{T} V x. \tag{1.3}$$

Therefore, a portfolio management problem can be formulated into the following biobjective programming problem:

$$\max\ E(r)^{T} x, \qquad \min\ x^{T} V x, \qquad \text{s.t.}\ e^{T} x = 1, \tag{1.4}$$

where $e$ is a vector of all ones. To our knowledge, almost all existing models of portfolio management problems evolved from the basic model (1.4).
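As an illustration of the quantities above, the expected return $E(r)^{T}x$ and the variance $x^{T}Vx$ can be computed directly with NumPy. The return data and the variable names (`returns`, `mu`, `V`, `x`) below are hypothetical, chosen only to show the computation:

```python
import numpy as np

# Hypothetical monthly return samples for n = 3 securities (rows = months).
returns = np.array([
    [0.02, 0.01, 0.03],
    [0.01, 0.02, 0.00],
    [0.03, 0.00, 0.02],
    [0.00, 0.03, 0.01],
])

mu = returns.mean(axis=0)          # estimated expected return rate of each security
V = np.cov(returns, rowvar=False)  # covariance matrix of the return vector r

x = np.array([0.5, 0.3, 0.2])      # portfolio weights, summing to one

expected_return = mu @ x           # E(r)^T x
risk = x @ V @ x                   # Var(R) = x^T V x
print(expected_return, risk)
```

Note that `np.cov(..., rowvar=False)` treats each column as one security's return series, matching the layout of `returns` above.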

In summary, past work on the portfolio management problem has concentrated on two major issues. The first is to propose new models. In this connection, some notable recent contributions include the following:

(i) mean-absolute deviation model (Simaan [2]);

(ii) maximizing probability model (Williams [3]);

(iii) different types of mean-variance models (Best and Jaroslava [4], Konno and Suzuki [5], and Yoshimoto [6]);

(iv) min-max models (Cai et al. [7], Deng et al. [8]);

(v) interval programming models (Giove et al. [9], Ida [10], Lai et al. [11]);

(vi) fuzzy goal programming model (Parra et al. [12]);

(vii) admissible efficient portfolio selection model (Zhang and Nie [13]);

(viii) possibility approach model with highest utility score (Carlsson et al. [14]);

(ix) upper and lower exponential possibility distribution based model (Tanaka and Guo [15]);

(x) models with fuzzy probabilities (Huang [16, 17], Tanaka and Guo [15]).

The second issue concerns numerical solution algorithms for the various models. One fundamental approach is to reformulate (1.4) into a deterministic single-objective optimization problem. For example, in the studies of Best [4, 18], Pang [19], Kawadai and Konno [20], Perold [21], Sharpe [22], Szegö [23], and Yoshimoto [6], it is assumed that the return of each security, the variances, and the covariances among them can be estimated by the investor prior to decision. Under this assumption, the problem (1.4) is deterministic. Furthermore, if an aversion coefficient $\lambda \in [0, 1]$ is introduced, the problem (1.4) can be transformed into the following standard quadratic programming problem:

$$\min_{x}\ \lambda\, x^{T} V x - (1 - \lambda)\, \mu^{T} x \qquad \text{s.t.}\ e^{T} x = 1,\ \ l \le x \le u, \tag{1.5}$$

where $\mu$ is the expected value vector of $r$, and $l$ and $u$ are two given vectors denoting the lower and the upper bounds of the decision vector, respectively.

Obviously, if $\lambda = 0$ in (1.5), the return is maximized regardless of the investment risk. On the other hand, if $\lambda = 1$, the risk is minimized without consideration of the investment income. Increasing $\lambda$ in the interval $[0, 1]$ places increasing weight on the investment risk, and vice versa.
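The role of the aversion coefficient can be seen in a small sketch. The data (`mu`, `V`) below are hypothetical, and the function `objective` simply evaluates the scalarized objective of (1.5) at the two extreme values of $\lambda$:

```python
import numpy as np

# Hypothetical data: expected returns mu and covariance V for two securities.
mu = np.array([0.10, 0.05])
V = np.array([[0.04, 0.01],
              [0.01, 0.02]])

def objective(x, lam):
    """Scalarized objective of (1.5): lam * risk - (1 - lam) * expected return."""
    return lam * (x @ V @ x) - (1.0 - lam) * (mu @ x)

x = np.array([0.6, 0.4])
# lam = 0: pure return maximization (risk ignored);
# lam = 1: pure risk minimization (return ignored).
print(objective(x, 0.0), objective(x, 1.0))
```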

For a fixed $\lambda$, (1.5) is a quadratic programming problem. Since the matrix $V$ is positive semidefinite, the problem (1.5) is a convex quadratic program (CQP). For a CQP, there exist many efficient methods to find minimizers; among them, active-set methods, interior-point methods, and gradient-projection methods have been widely used since the 1970s. For their detailed numerical performance, see [24–30] and the references therein. However, the efficiency of those methods depends heavily on factorization techniques for a matrix at each iteration, often exploiting sparsity for large-scale quadratic programming. So, from the viewpoint of storage requirements and computational cost, the methods mentioned above may not be the most suitable for solving the problem (1.5) if the covariance matrix is dense.

Fortunately, recent research shows that conjugate gradient methods can remedy this drawback: they avoid factorizing the Hessian matrix of an unconstrained minimization problem, and each conjugate gradient iteration involves only computing the gradient of the objective function. For details in this direction, see, for example, [31–34].

Motivated by the advantages of the conjugate gradient methods, the first aim of this paper is to reformulate problem (1.5) as an equivalent unconstrained optimization problem. Then, we develop an efficient algorithm based on conjugate gradient methods to find its solution. The effectiveness of such an algorithm is tested by implementing it to solve some real problems from the stock market in China.

The layout of the paper is as follows. Section 2 is devoted to the reformulation of the original constrained problem, and some features of the resulting subproblem are presented. In Section 3, we develop a penalty algorithm based on conjugate gradient methods. Section 4 provides applications of the proposed algorithm, and the last section concludes with some final remarks.

## 2. Reformulation

Firstly, for brevity, denote

$$f(x) = \frac{1}{2} x^{T} G x + c^{T} x, \qquad G = 2\lambda V, \quad c = -(1 - \lambda)\mu. \tag{2.1}$$

Then, the problem (1.5) reads

$$\min_{x}\ f(x) \qquad \text{s.t.}\ e^{T} x = 1,\ \ l \le x \le u. \tag{2.2}$$

Since the covariance matrix $V$ is symmetric positive semidefinite, $G$ also has this property. Thus, $f$ is a convex function.

For the equality constraint and the inequality constraints in (2.2), we define a function $P(x, \sigma)$, which is used to measure the constraint violation:

$$P(x, \sigma) = \sigma \left( \left(e^{T} x - 1\right)^{2} + \left\|\max(0,\, l - x)\right\|^{2} + \left\|\max(0,\, x - u)\right\|^{2} \right), \tag{2.3}$$

where $\sigma > 0$ is called the penalty parameter, $\|\cdot\|$ denotes the 2-norm of a vector, and the $\max$ is taken componentwise. If $x$ is a feasible point of problem (2.2), then

$$P(x, \sigma) = 0. \tag{2.4}$$

Actually, the larger $P(x, \sigma)$ is, the further $x$ is from the feasible region.

The function $Q(x, \sigma)$,

$$Q(x, \sigma) = f(x) + P(x, \sigma), \tag{2.5}$$

is said to be a penalty function of the problem (2.2). It is noted that $Q(x, \sigma)$ has the following features:

(i) $Q(x, \sigma)$ is a piecewise quadratic polynomial;

(ii) $Q(x, \sigma)$ is piecewise continuously differentiable;

(iii) if $G$ is positive semidefinite, then $Q(x, \sigma)$ is a piecewise convex quadratic function.

For example, let , and denote

Then, $Q(x, \sigma)$ has the following more compact form:

where is a constant scalar.
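To make the construction concrete, here is a sketch of the penalty function in NumPy. The grouping of the violation terms follows the definition above (squared equality residual plus componentwise squared bound violations); the function name and test data are ours, not the paper's:

```python
import numpy as np

def penalty(x, sigma, G, c, l, u):
    """Q(x, sigma) = f(x) + sigma * (constraint violation), with
    f(x) = 0.5 x^T G x + c^T x and componentwise max for the bound terms."""
    e = np.ones_like(x)
    f = 0.5 * x @ G @ x + c @ x
    viol = ((e @ x - 1.0) ** 2
            + np.sum(np.maximum(0.0, l - x) ** 2)
            + np.sum(np.maximum(0.0, x - u) ** 2))
    return f + sigma * viol

# Hypothetical 2-asset data: a simple positive definite G, zero linear term.
G = 2.0 * np.eye(2)
c = np.zeros(2)
l, u = np.zeros(2), np.ones(2)

print(penalty(np.array([0.5, 0.5]), 10.0, G, c, l, u))  # feasible: equals f(x)
print(penalty(np.array([0.7, 0.7]), 10.0, G, c, l, u))  # infeasible: f(x) + 10 * 0.16
```

On the feasible point the violation term vanishes, so the penalty value reduces to $f(x)$; on the infeasible point the equality residual $e^{T}x - 1 = 0.4$ contributes $\sigma \cdot 0.16$.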

Next, we are going to present some properties of .

Proposition 2.1.

Given $\sigma > 0$, the piecewise function $Q(x, \sigma)$ consists of finitely many polynomial pieces.

Proposition 2.2.

Assume that . For any , define a matrix

where each of the other rows is . Then, the following results hold.

, where denotes the rank of a matrix.

For a fixed , all matrices , where , have the same eigenvalues and eigenvectors.

When , all matrices , where , have nonnegative eigenvalues, and hence they are positive semidefinite. When , they are positive definite matrices.

Proof.

From the construction of the matrices and standard linear algebra, it is not difficult to prove the above two propositions; we omit the details.

In the following, we turn to state the relation between the global minimizer of and that of the original problem (1.5).

Theorem 2.3.

For a given sequence $\{\sigma_k\}$, suppose that $\sigma_k \to \infty$ as $k \to \infty$. Let $x_k$ be an exact global minimizer of the penalty function $Q(x, \sigma_k)$. Then, every accumulation point of $\{x_k\}$ is a solution of problem (1.5).

Proof.

Let $\bar{x}$ be a global solution of problem (1.5). Then, for any feasible point $x$, we have

$$f(\bar{x}) \le f(x). \tag{2.10}$$

Since $x_k$ is an exact global minimizer of $Q(x, \sigma_k)$ for the fixed $\sigma_k$, it follows that

$$Q(x_k, \sigma_k) \le Q(\bar{x}, \sigma_k). \tag{2.11}$$

By definition, (2.11) is equivalent to

$$f(x_k) + P(x_k, \sigma_k) \le f(\bar{x}) + P(\bar{x}, \sigma_k) = f(\bar{x}), \tag{2.12}$$

where the last equality is from the feasibility of $\bar{x}$. So, it is obtained that

$$P(x_k, \sigma_k) \le f(\bar{x}) - f(x_k). \tag{2.13}$$

Let $x^{*}$ be an accumulation point of $\{x_k\}$. Without loss of generality, assume that

$$\lim_{k \to \infty} x_k = x^{*}. \tag{2.14}$$

Then, by taking the limit as $k \to \infty$ on both sides of (2.13), we have

$$\lim_{k \to \infty} P(x_k, \sigma_k) \le f(\bar{x}) - f(x^{*}) < +\infty, \tag{2.15}$$

where the last inequality follows from the continuity of $f$. Since $\sigma_k \to \infty$, it follows that

$$\left(e^{T} x^{*} - 1\right)^{2} + \left\|\max(0,\, l - x^{*})\right\|^{2} + \left\|\max(0,\, x^{*} - u)\right\|^{2} = 0. \tag{2.16}$$

Therefore, we have proved that $x^{*}$ is a feasible point.

In the following, we prove that $x^{*}$ is a global minimizer of problem (1.5). Because

$$f(x^{*}) = \lim_{k \to \infty} f(x_k) \le f(\bar{x}), \tag{2.17}$$

$x^{*}$ is a global minimizer of problem (1.5).

The desired result has been proved.

Without difficulty, the following result can be proved.

Theorem 2.4.

Suppose that is a solution of problem (1.5). Then, is a global minimizer of for any .

Based on Theorems 2.3 and 2.4, we will develop an algorithm to search for a solution of problem (1.5) by solving a sequence of piecewise quadratic programming problems.

## 3. Penalty Algorithm Based on Conjugate Gradient Method

Among all methods for unconstrained optimization problems, the conjugate gradient method is regarded as one of the most powerful approaches due to its small storage requirements and low computational cost. Its advantages over other methods have been addressed in many papers; for example, in [27, 32, 34–38], the global convergence theory and detailed numerical performance of conjugate gradient methods have been extensively investigated.

Since the number of securities that may be selected in investment management is large and the matrix may be dense, it is natural to select the conjugate gradient method to find the minimizer of the penalty function for a given penalty parameter. However, it is noted that (2.7) is not a classical quadratic function, so the standard procedures for minimizing a quadratic function cannot be directly employed. To develop a new algorithm, we first propose a rule for updating the coefficients in the penalty function.

Regarding the coefficients of the quadratic terms in

we modify according to the following update rule:

Regarding the coefficients of the linear terms in

we modify according to the following update rule:

Define

The conjugate gradient method will then be applied to an ordinary quadratic minimization problem:

where is a given parameter. It is easy to see that

Although there exist several variants on the conjugate gradient method, the fundamental computing procedures for the solution of (3.6) include the following two steps.

At the current iterate point $x_k$, determine a search direction:

$$d_k = -g_k + \beta_{k-1} d_{k-1}, \tag{3.8}$$

where $g_k$ denotes the gradient of the objective at $x_k$, and $\beta_{k-1}$ is chosen such that $d_k$ is a conjugate direction of $d_{k-1}$ with respect to the matrix $A$ of the quadratic objective in (3.6).

Along the direction $d_k$, choose a step size $\alpha_k$ such that, at the new iterate point

$$x_{k+1} = x_k + \alpha_k d_k, \tag{3.9}$$

the value of the objective function decreases sufficiently.

The following lemma presents a method to determine the search direction.

Lemma 3.1.

If

$$\beta_{k-1} = \frac{g_k^{T}\left(g_k - g_{k-1}\right)}{d_{k-1}^{T}\left(g_k - g_{k-1}\right)}, \tag{3.10}$$

then $d_k$ in (3.8) is a conjugate direction of $d_{k-1}$ with respect to $A$.

Proof.

Owing to $g_k - g_{k-1} = \alpha_{k-1} A d_{k-1}$ and

$$d_k^{T} A d_{k-1} = \left(-g_k + \beta_{k-1} d_{k-1}\right)^{T} A d_{k-1} = \frac{1}{\alpha_{k-1}} \left( -g_k^{T}\left(g_k - g_{k-1}\right) + \beta_{k-1}\, d_{k-1}^{T}\left(g_k - g_{k-1}\right) \right) = 0,$$

the desired result is obtained.

Actually, the formula (3.10) is known as the HS (Hestenes–Stiefel) method.

In the case that the step size $\alpha_k$ is chosen by exact line search along the direction $d_k$, that is,

$$\alpha_k = \arg\min_{\alpha \ge 0}\, q(x_k + \alpha d_k) = -\frac{g_k^{T} d_k}{d_k^{T} A d_k}, \tag{3.13}$$

where $q$ denotes the quadratic objective in (3.6), we have the following global convergence theorem.

Theorem 3.2.

Let $x_0$ be an arbitrary initial vector, and let $\{x_k\}$ be the sequence generated by the conjugate gradient algorithm defined by (3.8)–(3.13). Then, either

$$g_k = 0 \quad \text{for some finite } k, \tag{3.14}$$

or

$$\lim_{k \to \infty} \|g_k\| = 0. \tag{3.15}$$

In particular, if $x^{*}$ is an accumulation point of the sequence $\{x_k\}$, then $x^{*}$ is a global minimizer of the quadratic objective in (3.6).

Remark 3.3.

If $\beta_{k-1}$ is computed by

$$\beta_{k-1} = \frac{\|g_k\|^{2}}{\|g_{k-1}\|^{2}}, \tag{3.16}$$

then the results in Theorem 3.2 still hold. Equation (3.16) is known as the FR (Fletcher–Reeves) method.
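A minimal sketch of the conjugate gradient iteration (3.8) with exact line search on a strictly convex quadratic, supporting both the HS and the FR choice of $\beta$, might look as follows. The function name and signature are ours:

```python
import numpy as np

def cg_quadratic(A, b, x0, variant="HS", tol=1e-10, max_iter=200):
    """Conjugate gradient for min 0.5 x^T A x + b^T x (A symmetric positive
    definite) with exact line search; beta via the HS or FR formula."""
    x = x0.astype(float)
    g = A @ x + b                          # gradient of the quadratic
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search step size (3.13)
        x = x + alpha * d
        g_new = A @ x + b
        y = g_new - g
        if variant == "HS":
            beta = (g_new @ y) / (d @ y)   # Hestenes-Stiefel (3.10)
        else:
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves (3.16)
        d = -g_new + beta * d              # new search direction (3.8)
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
x0 = np.zeros(2)
print(cg_quadratic(A, b, x0, "HS"), cg_quadratic(A, b, x0, "FR"))
```

On a quadratic with exact line search, both variants coincide with linear CG and terminate in at most $n$ iterations; here the minimizer solves $Ax = -b$.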

Based on the discussion above, we now develop a penalty algorithm based on the conjugate gradient method to close this section.

Algorithm 1 (Penalty Algorithm Based on Conjugate Gradient Method).

Step 0 (Initialization).

Given constant scalars , , , and . Input the expected return vector , and compute and . Choose an initial solution . Set , , and .

Step 1 (Reformulation).

If

then set

and go to Step 4; otherwise, go to Step 2.

Step 2 (Search Direction).

Compute the search direction by (3.8) and (3.10).

Step 3 (Exact Line Search).

Compute by (3.13), and update

Return to Step 1.

Step 4 (Feasibility Test).

Check feasibility of in problem (2.2). If

the algorithm terminates; otherwise, go to Step 5.

Step 5 (Update).

Set , , . At the new iterate point , modify the matrix and the vector by (3.2) and (3.4), respectively. Set , and return to Step 1.

Remark 3.4.

In Algorithm 1, the index denotes the number of updating penalty parameter, and denotes the number of iterations of conjugate gradient method for unconstrained subproblem (3.6).

For a fixed penalty parameter, it is easy to see that the termination condition in Step 4 implies that the final iterate is feasible. From Theorem 2.3, it follows that a global minimizer of problem (3.6) satisfying this condition is also a global minimizer of the original problem (1.5).
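Putting the pieces together, a simplified sketch of the overall penalty scheme might look like the following. It replaces the paper's exact line search and the coefficient-update rules (3.2) and (3.4) with a generic nonlinear CG inner solver using an Armijo backtracking line search; all parameter values and names are illustrative, not the paper's:

```python
import numpy as np

def solve_portfolio(G, c, l, u, sigma0=1.0, rho=10.0, tol=1e-8, feas_tol=1e-8):
    """Penalty sketch: minimize f(x) = 0.5 x^T G x + c^T x subject to
    e^T x = 1 and l <= x <= u by minimizing Q(x, sigma) for increasing sigma."""
    n = len(c)
    e = np.ones(n)

    def Q(x, s):
        viol = ((e @ x - 1.0) ** 2 + np.sum(np.maximum(0.0, l - x) ** 2)
                + np.sum(np.maximum(0.0, x - u) ** 2))
        return 0.5 * x @ G @ x + c @ x + s * viol

    def gradQ(x, s):
        return (G @ x + c + 2.0 * s * ((e @ x - 1.0) * e
                - np.maximum(0.0, l - x) + np.maximum(0.0, x - u)))

    x = np.full(n, 1.0 / n)            # start from the equal-weight portfolio
    sigma = sigma0
    for _ in range(30):                # outer loop: increase the penalty
        g = gradQ(x, sigma)
        d = -g
        for _ in range(500):           # inner loop: nonlinear CG on Q(., sigma)
            if np.linalg.norm(g) <= tol:
                break
            t = 1.0                    # Armijo backtracking line search
            while Q(x + t * d, sigma) > Q(x, sigma) + 1e-4 * t * (g @ d):
                t *= 0.5
                if t < 1e-16:
                    break
            x = x + t * d
            g_new = gradQ(x, sigma)
            y = g_new - g
            beta = max(0.0, (g_new @ y) / (d @ y)) if d @ y != 0 else 0.0
            d = -g_new + beta * d
            if g_new @ d >= 0:         # safeguard: restart with steepest descent
                d = -g_new
            g = g_new
        feas = (abs(e @ x - 1.0) + np.sum(np.maximum(0.0, l - x))
                + np.sum(np.maximum(0.0, u - np.maximum(x, u))))
        if feas <= feas_tol:
            break
        sigma *= rho
    return x
```

Note the feasibility test mirrors Step 4: the outer loop stops once the constraint violation of the inner minimizer falls below the tolerance, in line with Theorem 2.3.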

## 4. Numerical Experiments

In this section, we test the effectiveness of Algorithm 1. All the test problems come from the real stock market in China in 2007. The computational procedures are implemented in MATLAB 6.5.

In our numerical experiments, the initial solution is chosen to satisfy

the bound vector is a vector of all zeros, and is a vector of all ones. We take the initial penalty parameter and the aversion coefficient . The tolerance of error is taken as

We implement Algorithm 1 to solve ten real problems, with dimensions ranging from 10 to 100. In these problems, the expected return rates of each stock come from the monthly data of the Chinese stock market in 2007. In Table 3, we list the data used to form a real problem of dimension 30.

In Table 1, we report the numerical behavior of Algorithm 1 for all ten problems.

In Table 1, the second column gives the dimension of each problem; the third and the fourth columns report the CPU time when $\beta$ is evaluated by the HS method and the FR method, respectively. The remaining columns indicate the number of penalty parameter updates, the final penalty parameter, and the value of the penalty term.

In Table 2, we list the obtained optimal solution for each problem.

## 5. Final Remarks

In this paper, the biobjective optimization model of portfolio management was reformulated as an unconstrained minimization problem, and the properties of the resulting piecewise quadratic objective function were presented.

Exploiting the features of the optimization models in portfolio management, a class of penalty algorithms based on the conjugate gradient method was developed. The numerical performance of the proposed algorithm on real problems verifies its effectiveness.

## References

1. Markowitz H: **Portfolio selection.** *Journal of Finance* 1952, **7:** 77–91. 10.2307/2975974
2. Simaan Y: **Estimation risk in portfolio selection: the mean variance model versus the mean absolute deviation model.** *Management Science* 1997, **43:** 1437–1446. 10.1287/mnsc.43.10.1437
3. Williams JO: **Maximizing the probability of achieving investment goals.** *Journal of Portfolio Management* 1997, **24:** 77–81. 10.3905/jpm.1997.409627
4. Best MJ, Jaroslava H: **The efficient frontier for bounded assets.** *Mathematical Methods of Operations Research* 2000, **52**(2): 195–212. 10.1007/s001860000073
5. Konno H, Suzuki K: **A mean-variance-skewness optimization model.** *Journal of the Operations Research Society of Japan* 1995, **38:** 137–187.
6. Yoshimoto A: **The mean-variance approach to portfolio optimization subject to transaction costs.** *Journal of the Operations Research Society of Japan* 1996, **39**(1): 99–117.
7. Cai X, Teo KL, Yang X, Zhou XY: **Portfolio optimization under a minimax rule.** *Management Science* 2000, **46**(7): 957–972. 10.1287/mnsc.46.7.957.12039
8. Deng XT, Li ZF, Wang SY: **A minimax portfolio selection strategy with equilibrium.** *European Journal of Operational Research* 2005, **166**(1): 278–292. 10.1016/j.ejor.2004.01.040
9. Giove S, Funari S, Nardelli C: **An interval portfolio selection problem based on regret function.** *European Journal of Operational Research* 2006, **170**(1): 253–264. 10.1016/j.ejor.2004.05.030
10. Ida M: **Solutions for the portfolio selection problem with interval and fuzzy coefficients.** *Reliable Computing* 2004, **10**(5): 389–400.
11. Lai KK, Wang SY, Xu JP, Zhu SS, Fang Y: **A class of linear interval programming problems and its application to portfolio selection.** *IEEE Transactions on Fuzzy Systems* 2002, **10**(6): 698–704. 10.1109/TFUZZ.2002.805902
12. Parra MA, Terol AB, Uria MVR: **A fuzzy goal programming approach to portfolio selection.** *European Journal of Operational Research* 2001, **133**(2): 287–297. 10.1016/S0377-2217(00)00298-8
13. Zhang WG, Nie ZK: **On admissible efficient portfolio selection problem.** *Applied Mathematics and Computation* 2004, **159:** 357–371. 10.1016/j.amc.2003.10.019
14. Carlsson C, Fullér R, Majlender P: **A possibilistic approach to selecting portfolios with highest utility score.** *Fuzzy Sets and Systems* 2002, **131**(1): 13–21. 10.1016/S0165-0114(01)00251-2
15. Tanaka H, Guo P: **Portfolio selection based on upper and lower exponential possibility distributions.** *European Journal of Operational Research* 1999, **114:** 115–126. 10.1016/S0377-2217(98)00033-2
16. Huang XX: **Fuzzy chance-constrained portfolio selection.** *Applied Mathematics and Computation* 2006, **177**(2): 500–507. 10.1016/j.amc.2005.11.027
17. Huang XX: **Two new models for portfolio selection with stochastic returns taking fuzzy information.** *European Journal of Operational Research* 2007, **180**(1): 396–405. 10.1016/j.ejor.2006.04.010
18. Best MJ, Grauer RR: **The efficient set mathematics when mean-variance problems are subject to general linear constraints.** *Journal of Economics and Business* 1990, **42:** 105–120. 10.1016/0148-6195(90)90027-A
19. Pang JS: **A new and efficient algorithm for a class of portfolio selection problems.** *Operations Research* 1980, **28**(3, part 2): 754–767. 10.1287/opre.28.3.754
20. Kawadai N, Konno H: **Solving large scale mean-variance models with dense non-factorable covariance matrices.** *Journal of the Operations Research Society of Japan* 2001, **44**(3): 251–260.
21. Perold AF: **Large-scale portfolio optimization.** *Management Science* 1984, **30**(10): 1143–1160. 10.1287/mnsc.30.10.1143
22. Sharpe WF: *Portfolio Theory and Capital Markets*. McGraw-Hill, New York, NY, USA; 1970.
23. Szegö GP: *Portfolio Theory, Economic Theory, Econometrics, and Mathematical Economics*. Academic Press, New York, NY, USA; 1980.
24. Andersen ED, Gondzio J, Meszaros C, Xu X: **Implementation of interior-point methods for large scale linear programs.** In *Interior Point Methods of Mathematical Programming, Applied Optimization. Volume 5*. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1996: 189–252. 10.1007/978-1-4613-3449-1_6
25. Gondzio J, Grothey A: **Parallel interior-point solver for structured quadratic programs: application to financial planning problems.** *Annals of Operations Research* 2007, **152:** 319–339. 10.1007/s10479-006-0139-z
26. Mehrotra S: **On the implementation of a primal-dual interior point method.** *SIAM Journal on Optimization* 1992, **2**(4): 575–601. 10.1137/0802028
27. Nocedal J, Wright SJ: *Numerical Optimization*. Springer Series in Operations Research and Financial Engineering. 2nd edition. Springer, New York, NY, USA; 2006.
28. Potra F, Roos C, Terlaky T (Eds): *Special Issue on Interior-Point Methods, Optimization Methods and Software* 1999, **11–12.**
29. Solodov MV, Tseng P: **Modified projection-type methods for monotone variational inequalities.** *SIAM Journal on Control and Optimization* 1996, **34**(5): 1814–1830. 10.1137/S0363012994268655
30. Coleman TF, Hulbert LA: **A direct active set algorithm for large sparse quadratic programs with simple bounds.** *Mathematical Programming* 1989, **45**(3): 373–406. 10.1007/BF01589112
31. Andrei N: **A Dai-Yuan conjugate gradient algorithm with sufficient descent and conjugacy conditions for unconstrained optimization.** *Applied Mathematics Letters* 2008, **21**(2): 165–171. 10.1016/j.aml.2007.05.002
32. Dai Y, Ni Q: **Testing different conjugate gradient methods for large-scale unconstrained optimization.** *Journal of Computational Mathematics* 2003, **21**(3): 311–320.
33. Shi Z-J, Shen J: **Convergence of Liu-Storey conjugate gradient method.** *European Journal of Operational Research* 2007, **182**(2): 552–560. 10.1016/j.ejor.2006.09.066
34. Sun J, Yang X, Chen X: **Quadratic cost flow and the conjugate gradient method.** *European Journal of Operational Research* 2005, **164**(1): 104–114. 10.1016/j.ejor.2003.04.003
35. Dai YH, Yuan Y: **Convergence properties of the conjugate descent method.** *Advances in Mathematics* 1996, **25**(6): 552–562.
36. Liu Y, Storey C: **Efficient generalized conjugate gradient algorithms. I. Theory.** *Journal of Optimization Theory and Applications* 1991, **69**(1): 129–137. 10.1007/BF00940464
37. Nocedal J: **Conjugate gradient methods and nonlinear optimization.** In *Linear and Nonlinear Conjugate Gradient-Related Methods*. Edited by: Adams L, Nazareth JL. SIAM, Philadelphia, Pa, USA; 1996: 9–23.
38. Sun J, Zhang JP: **Convergence of conjugate gradient methods without line search.** *Annals of Operations Research* 2001, **103:** 161–173. 10.1023/A:1012903105391

## Acknowledgments

The authors are grateful to the editors and the three anonymous referees for their suggestions, which have greatly improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (Grant no. 60804037) and the New Century Excellent Talents program, Ministry of Education, China (Grant no. NCET-07-0864).


## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Wan, Z., Zhang, S.J. & Wang, Y.L. Penalty Algorithm Based on Conjugate Gradient Method for Solving Portfolio Management Problem.
*J Inequal Appl* **2009**, 970723 (2009). https://doi.org/10.1155/2009/970723
