
ε-Duality Theorems for Convex Semidefinite Optimization Problems with Conic Constraints

Journal of Inequalities and Applications 2010, 2010:363012

https://doi.org/10.1155/2010/363012

Received: 30 October 2009

Accepted: 10 December 2009

Published: 12 January 2010

Abstract

A convex semidefinite optimization problem with a conic constraint is considered. We formulate a Wolfe-type dual problem for its ε-approximate solutions, and then we prove an ε-weak duality theorem and an ε-strong duality theorem which hold between the problem and its Wolfe-type dual problem. Moreover, we give an example illustrating the duality theorems.

1. Introduction

A convex semidefinite optimization problem is the problem of minimizing a convex objective function subject to a linear matrix inequality. When the objective function is linear and the corresponding matrices are diagonal, the problem reduces to a linear optimization problem.
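This reduction can be checked numerically. The following sketch (our own illustration, with made-up matrices, not from the paper) verifies that for diagonal matrices the linear matrix inequality constraint is equivalent to componentwise linear inequalities:

```python
import numpy as np

# Toy data (ours): diagonal matrices in a constraint F(x) = F0 + x*F1 >= 0
# (Loewner order).
F0 = np.diag([1.0, 2.0])
F1 = np.diag([-1.0, 0.5])

def F(x):
    return F0 + x * F1

def is_psd(A, tol=1e-9):
    # A symmetric matrix is positive semidefinite iff its smallest
    # eigenvalue is nonnegative.
    return np.min(np.linalg.eigvalsh(A)) >= -tol

# For diagonal F(x), positive semidefiniteness is the same as nonnegativity
# of the diagonal entries, i.e., a system of linear inequalities in x --
# which is why the problem collapses to a linear program.
for x in (0.5, 2.0):
    assert is_psd(F(x)) == bool(np.all(np.diag(F(x)) >= 0))
print(is_psd(F(0.5)), is_psd(F(2.0)))   # True False
```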

For convex semidefinite optimization problems, Lagrangian duality without a constraint qualification [1, 2], complete dual characterizations of solutions [1, 3, 4], saddle point theorems [5], and characterizations of optimal solution sets [6, 7] have been investigated.

To obtain ε-approximate solutions, many authors have established ε-optimality conditions, ε-saddle point theorems, and ε-duality theorems for several kinds of optimization problems [1, 8–16].

Recently, Jeyakumar and Glover [11] gave ε-optimality conditions for convex optimization problems, which hold without any constraint qualification. Yokoyama and Shiraishi [16] gave a special case of a convex optimization problem which satisfies ε-optimality conditions. Kim and Lee [12] proved sequential ε-saddle point theorems and ε-duality theorems for convex semidefinite optimization problems without conic constraints.

The purpose of this paper is to extend the ε-duality theorems of Kim and Lee [12] to convex semidefinite optimization problems with conic constraints. We formulate a Wolfe-type dual problem for its ε-approximate solutions, and then prove an ε-weak duality theorem and an ε-strong duality theorem for the problem and its Wolfe-type dual problem, which hold under a weakened constraint qualification. Moreover, we give an example illustrating the duality theorems.

2. Preliminaries

Consider the following convex semidefinite optimization problem:

\[ \text{(SDP)} \quad \min f(x) \quad \text{subject to} \quad x \in C, \; F(x) \succeq 0, \tag{2.1} \]

where \(f : \mathbb{R}^n \to \mathbb{R}\) is a convex function, \(C\) is a closed convex cone of \(\mathbb{R}^n\), and \(F(x) = F_0 + x_1 F_1 + \cdots + x_n F_n\) for \(F_i \in S_m\), \(i = 0, 1, \ldots, n\), where \(S_m\) is the space of \(m \times m\) real symmetric matrices. The space \(S_m\) is partially ordered by the Löwner order, that is, \(M \succeq N\) for \(M, N \in S_m\) if and only if \(M - N\) is positive semidefinite. The inner product in \(S_m\) is defined by \(\langle M, N \rangle = \operatorname{Tr}(MN)\), where \(\operatorname{Tr}\) is the trace operation.
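The Löwner order and the trace inner product can be illustrated with a small numeric check (our own sketch, with made-up matrices):

```python
import numpy as np

# Numeric illustration (ours) of the Loewner order on S_m and the trace
# inner product: A >= B iff A - B is positive semidefinite, <A, B> = Tr(AB).

def loewner_geq(A, B, tol=1e-9):
    # A - B is symmetric, so eigvalsh applies; PSD <=> min eigenvalue >= 0.
    return np.min(np.linalg.eigvalsh(A - B)) >= -tol

def inner(A, B):
    return np.trace(A @ B)

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])

print(loewner_geq(A, B))   # A - B = diag(1, 2) is PSD
print(inner(A, B))         # Tr(AB) = 2 + 3 = 5.0
```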

Let \(S_m^+ = \{ M \in S_m : M \succeq 0 \}\). Then \(S_m^+\) is self-dual, that is,
\[ S_m^+ = \{ Z \in S_m : \operatorname{Tr}(ZM) \ge 0 \ \text{for all } M \in S_m^+ \}. \tag{2.2} \]
Let \(\bar{F}(x) = x_1 F_1 + \cdots + x_n F_n\). Then \(\bar{F}\) is a linear operator from \(\mathbb{R}^n\) to \(S_m\) and its dual \(\bar{F}^*\) is defined by
\[ \bar{F}^*(Z) = \bigl( \operatorname{Tr}(Z F_1), \ldots, \operatorname{Tr}(Z F_n) \bigr) \tag{2.3} \]

for any \(Z \in S_m\). Clearly, \(A := \{ x \in C : F(x) \succeq 0 \}\) is the feasible set of SDP.
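The adjoint relation between this linear operator and its dual can be verified numerically. The sketch below (our own illustration) assumes the standard form of the dual operator, mapping \(Z\) to the vector of traces \(\operatorname{Tr}(Z F_i)\), and checks the identity \(\operatorname{Tr}(Z \bar{F}(x)) = \langle \bar{F}^*(Z), x \rangle\) on random symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch (ours): with Fbar(x) = x1*F1 + ... + xn*Fn, the dual operator maps
# a symmetric matrix Z to (Tr(Z F1), ..., Tr(Z Fn)), so that
# Tr(Z Fbar(x)) = <Fbar*(Z), x> for all x and Z.

n, m = 3, 4
Fs = [rng.standard_normal((m, m)) for _ in range(n)]
Fs = [(M + M.T) / 2 for M in Fs]              # symmetrize the test matrices

def Fbar(x):
    return sum(x[i] * Fs[i] for i in range(n))

def Fbar_adj(Z):
    return np.array([np.trace(Z @ M) for M in Fs])

x = rng.standard_normal(n)
Z = rng.standard_normal((m, m))
Z = (Z + Z.T) / 2

lhs = np.trace(Z @ Fbar(x))                   # <Z, Fbar(x)> in S_m
rhs = Fbar_adj(Z) @ x                         # <Fbar*(Z), x> in R^n
print(np.isclose(lhs, rhs))
```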

Definition 2.1.

Let \(f : \mathbb{R}^n \to \mathbb{R}\) be a convex function.

(1) The subdifferential of \(f\) at \(\bar{x} \in \mathbb{R}^n\) is given by
\[ \partial f(\bar{x}) = \{ v \in \mathbb{R}^n : f(x) \ge f(\bar{x}) + \langle v, x - \bar{x} \rangle \ \text{for all } x \in \mathbb{R}^n \}, \tag{2.4} \]

where \(\langle \cdot, \cdot \rangle\) is the scalar product on \(\mathbb{R}^n\).

(2) Let \(\epsilon \ge 0\). The \(\epsilon\)-subdifferential of \(f\) at \(\bar{x} \in \mathbb{R}^n\) is given by
\[ \partial_\epsilon f(\bar{x}) = \{ v \in \mathbb{R}^n : f(x) \ge f(\bar{x}) + \langle v, x - \bar{x} \rangle - \epsilon \ \text{for all } x \in \mathbb{R}^n \}. \tag{2.5} \]

Definition 2.2.

Let \(\bar{x}\) be a feasible solution of SDP. Then \(\bar{x}\) is called an ε-approximate solution of SDP if, for any feasible solution \(x\) of SDP,
\[ f(\bar{x}) \le f(x) + \epsilon. \tag{2.6} \]
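The definition can be checked numerically on a toy problem (our own illustration, not from the paper): minimize \(f(x) = x^2\) over the feasible half-line \([1, \infty)\), whose exact minimum value is \(1\) at \(x = 1\).

```python
import numpy as np

# Toy check (ours) of the definition of an eps-approximate solution:
# xbar qualifies when f(xbar) <= f(x) + eps for every feasible x.

def f(x):
    return x ** 2

feasible = np.linspace(1.0, 10.0, 10001)   # dense sample of [1, 10]

def is_eps_approx(xbar, eps):
    return bool(np.all(f(xbar) <= f(feasible) + eps))

print(is_eps_approx(1.1, 0.25))   # 1.21 <= 1 + 0.25, so True
print(is_eps_approx(1.1, 0.1))    # 1.21 >  1 + 0.1,  so False
```

Note that an ε-approximate solution need not be optimal; it is only within ε of the optimal value.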

Definition 2.3.

The conjugate function of a function \(f : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}\) is defined by
\[ f^*(v) = \sup \{ \langle v, x \rangle - f(x) : x \in \mathbb{R}^n \}. \tag{2.7} \]
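For example, for \(f(x) = x^2\) the supremum is attained at \(x = v/2\), giving \(f^*(v) = v^2/4\). A grid-based sketch (our own illustration) confirms this:

```python
import numpy as np

# Numeric sketch (ours) of the conjugate: f*(v) = sup_x { v*x - f(x) },
# approximated by a maximum over a dense grid.

xs = np.linspace(-50.0, 50.0, 200001)

def conjugate(f, v):
    return np.max(v * xs - f(xs))

def f(x):
    return x ** 2

for v in (0.0, 1.0, -3.0):
    # closed form: f*(v) = v**2 / 4
    print(round(conjugate(f, v), 4), v ** 2 / 4)
```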

Definition 2.4.

The epigraph of a function \(f : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}\), denoted \(\operatorname{epi} f\), is defined by
\[ \operatorname{epi} f = \{ (x, r) \in \mathbb{R}^n \times \mathbb{R} : f(x) \le r \}. \tag{2.8} \]

If \(f\) is sublinear (i.e., convex and positively homogeneous of degree one), then \(\operatorname{epi}(\lambda f)^* = \lambda \operatorname{epi} f^*\), for all \(\lambda > 0\). If \(\lambda = 0\), then \(\operatorname{epi}(\lambda f)^* = \{0\} \times [0, +\infty)\). It is worth noting that if \(f\) is sublinear, then

\[ \operatorname{epi} f^* = \partial f(0) \times [0, +\infty). \tag{2.9} \]

Moreover, if is sublinear and if , , and , then

(2.10)

Definition 2.5.

Let \(D\) be a closed convex set in \(\mathbb{R}^n\) and \(\bar{x} \in D\).

(1) The set \(N_D(\bar{x}) = \{ v \in \mathbb{R}^n : \langle v, x - \bar{x} \rangle \le 0 \ \text{for all } x \in D \}\) is called the normal cone to \(D\) at \(\bar{x}\).

(2) Let \(\epsilon \ge 0\). The set \(N_D^\epsilon(\bar{x}) = \{ v \in \mathbb{R}^n : \langle v, x - \bar{x} \rangle \le \epsilon \ \text{for all } x \in D \}\) is called the \(\epsilon\)-normal set to \(D\) at \(\bar{x}\).

(3) When \(D\) is a closed convex cone in \(\mathbb{R}^n\), \(N_D(0)\) is denoted by \(D^-\) and called the negative dual cone of \(D\).
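As a concrete sketch (our own illustration): for \(D = [0, 1]\) and \(\bar{x} = 0\), one has \(\sup_{x \in D} v \cdot x = \max(v, 0)\), so the ε-normal set at \(0\) is the half-line \((-\infty, \epsilon]\), which shrinks to the normal cone \((-\infty, 0]\) as \(\epsilon \to 0\).

```python
import numpy as np

# Membership test (ours) for the eps-normal set of D = [0, 1] at xbar = 0:
# v belongs iff v*(x - xbar) <= eps for all x in D.

D = np.linspace(0.0, 1.0, 1001)
xbar = 0.0

def in_eps_normal(v, eps):
    return bool(np.all(v * (D - xbar) <= eps + 1e-12))

eps = 0.5
print(in_eps_normal(0.5, eps))    # v = eps is included
print(in_eps_normal(0.6, eps))    # v > eps is not
print(in_eps_normal(-2.0, eps))   # every v <= 0 lies in the normal cone itself
```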

Proposition 2.6 (see [17, 18]).

Let \(f : \mathbb{R}^n \to \mathbb{R}\) be a convex function and let \(\delta_C\) be the indicator function with respect to a closed convex subset \(C\) of \(\mathbb{R}^n\), that is, \(\delta_C(x) = 0\) if \(x \in C\), and \(\delta_C(x) = +\infty\) if \(x \notin C\). Let \(\epsilon \ge 0\) and \(\bar{x} \in C\). Then
\[ \partial_\epsilon (f + \delta_C)(\bar{x}) = \bigcup_{\substack{\epsilon_0 \ge 0,\ \epsilon_1 \ge 0 \\ \epsilon_0 + \epsilon_1 = \epsilon}} \bigl( \partial_{\epsilon_0} f(\bar{x}) + N_C^{\epsilon_1}(\bar{x}) \bigr). \tag{2.11} \]

Proposition 2.7 (see [7]).

Let \(f : \mathbb{R}^n \to \mathbb{R}\) be a continuous convex function and let \(g : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}\) be a proper lower semicontinuous convex function. Then
\[ \operatorname{epi}(f + g)^* = \operatorname{epi} f^* + \operatorname{epi} g^*. \tag{2.12} \]
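At the level of conjugate values, this epigraph identity says that the conjugate of the sum is the exact infimal convolution of the conjugates. A hedged numeric sketch (our own illustration, with a made-up pair of functions): take \(f(x) = x^2\) (continuous convex) and \(g = \delta_{[0,\infty)}\) (proper lsc convex), for which \((f+g)^*(v) = \sup_{x \ge 0}(vx - x^2) = \max(v,0)^2/4\).

```python
import numpy as np

# Sketch (ours): compare (f+g)* with the infimal convolution of f* and g*
# for f(x) = x**2 and g the indicator of [0, oo).
#   f*(w) = w**2/4;  g*(u) = 0 if u <= 0, else +oo.

xs = np.linspace(0.0, 50.0, 100001)      # grid for the feasible half-line
ws = np.linspace(-50.0, 50.0, 100001)    # grid for the convolution variable

def sum_conjugate(v):
    # (f+g)*(v) = sup_{x >= 0} (v*x - x**2), approximated on a grid
    return np.max(v * xs - xs ** 2)

def inf_convolution(v):
    # min_w f*(w) + g*(v - w); g* is finite only when v - w <= 0, i.e. w >= v
    valid = ws >= v - 1e-9
    return np.min(ws[valid] ** 2 / 4.0)

for v in (-2.0, 0.0, 3.0):
    # both should match the closed form max(v, 0)**2 / 4
    print(round(sum_conjugate(v), 2), round(inf_convolution(v), 2))
```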

Following the proof of the corresponding lemma in [1], we can prove the following lemma.

Lemma 2.8.

Let . Suppose that Let and . Then the following are equivalent:
(2.13)

3. ε-Duality Theorems

Now we give ε-duality theorems for SDP. Using Lemma 2.8, we can obtain the following lemma, which is useful in proving our ε-strong duality theorem for SDP.

Lemma 3.1.

Let . Suppose that
(3.1)
is closed. Then \(\bar{x}\) is an ε-approximate solution of SDP if and only if there exists \(\bar{Z} \in S_m^+\) such that, for any \(x \in C\),
(3.2)

Proof.

(⇒) Let \(\bar{x}\) be an ε-approximate solution of SDP. Then \(f(\bar{x}) \le f(x) + \epsilon\), for any feasible solution \(x\) of SDP. Let . Then , for any . Thus we have, from Proposition 2.7,
(3.3)
and hence, . So there exists such that and hence there exists such that for any . Since , for any ; and hence it follows from Lemma 2.8 that
(3.4)
Thus there exist , and such that
(3.5)
This gives
(3.6)
for any . Thus we have
(3.7)

for any .

(⇐) Suppose that there exists \(\bar{Z} \in S_m^+\) such that

(3.8)
for any . Then we have
(3.9)

for any \(x \in C\). Thus \(f(\bar{x}) \le f(x) + \epsilon\), for any feasible solution \(x\) of SDP. Hence \(\bar{x}\) is an ε-approximate solution of SDP.

Now we formulate the dual problem SDD of SDP as follows:

(3.10)

We prove ε-weak and ε-strong duality theorems which hold between SDP and SDD.

Theorem 3.2 (ε-weak duality).

For any feasible solution of SDP and any feasible solution of SDD,
(3.11)

Proof.

Let and be feasible solutions of SDP and SDD, respectively. Then and there exist and such that . Thus, we have
(3.12)

Hence .

Theorem 3.3 (ε-strong duality).

Suppose that
(3.13)

is closed. If \(\bar{x}\) is an ε-approximate solution of SDP, then there exists \(\bar{Z}\) such that \((\bar{x}, \bar{Z})\) is a 2ε-approximate solution of SDD.

Proof.

Let \(\bar{x}\) be an ε-approximate solution of SDP. Then \(f(\bar{x}) \le f(x) + \epsilon\), for any feasible solution \(x\) of SDP. By Lemma 3.1, there exists \(\bar{Z}\) such that
(3.14)

for any \(x \in C\). Letting \(x = \bar{x}\) in (3.14), . Since and , .

Thus from (3.14),

(3.15)
for any \(x \in C\). Hence \(\bar{x}\) is an ε-approximate solution of the following problem:
(3.16)
and so, , and hence, by Proposition 2.6, there exist , such that and
(3.17)
So, \((\bar{x}, \bar{Z})\) is a feasible solution of SDD. For any feasible solution of SDD,
(3.18)

Thus \((\bar{x}, \bar{Z})\) is a 2ε-approximate solution of SDD.

Now we characterize the ε-normal set.

Proposition 3.4.

Let and Then
(3.19)
where
(3.20)

Proof.

Let and . Then
(3.21)
Let (where is at the th position in )
(3.22)
Thus, we have
(3.23)

From Proposition 3.4, we can calculate the following cases.

Corollary 3.5.

Let and . Then the following hold.

(i)If , then

(ii)If and , then

(iii)If and , then

(iv)If and and , then
(3.24)

Now we give an example illustrating our ε-duality theorems.

Example 3.6.

Consider the following convex semidefinite program:
(3.25)
Let ,
(3.26)
and 0. Let and
(3.27)
Then is the set of all feasible solutions of SDP, and the set of all ε-approximate solutions of SDP is . Let . Then is the set of all feasible solutions of SDD. Now we calculate the set .
(3.28)
Thus . We can check that for any and any ,
(3.29)

that is, ε-weak duality holds.

Let \(\bar{x}\) be an ε-approximate solution of SDP. Then and . So, we can easily check that .

Since , from (3.29),

(3.30)

for any . So it is an ε-approximate solution of SDD. Hence ε-strong duality holds.

Declarations

Acknowledgment

This work was supported by the Korea Science and Engineering Foundation (KOSEF) NRL Program grant funded by the Korean government (MEST)(no. R0A-2008-000-20010-0).

Authors’ Affiliations

(1)
Department of Applied Mathematics, Pukyong National University

References

  1. Jeyakumar V, Dinh N: Avoiding duality gaps in convex semidefinite programming without Slater's condition. Applied Mathematics Report, University of New South Wales, Sydney, Australia; 2004.
  2. Ramana MV, Tunçel L, Wolkowicz H: Strong duality for semidefinite programming. SIAM Journal on Optimization 1997, 7(3):641–662. doi:10.1137/S1052623495288350
  3. Jeyakumar V, Lee GM, Dinh N: New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs. SIAM Journal on Optimization 2003, 14(2):534–547. doi:10.1137/S1052623402417699
  4. Jeyakumar V, Nealon MJ: Complete dual characterizations of optimality for convex semidefinite programming. In Constructive, Experimental, and Nonlinear Analysis (Limoges, 1999), CMS Conference Proceedings, Volume 27. American Mathematical Society, Providence, RI, USA; 2000:165–173.
  5. Dinh N, Jeyakumar V, Lee GM: Sequential Lagrangian conditions for convex programs with applications to semidefinite programming. Journal of Optimization Theory and Applications 2005, 125(1):85–112. doi:10.1007/s10957-004-1712-8
  6. Jeyakumar V, Lee GM, Dinh N: Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs. Journal of Optimization Theory and Applications 2004, 123(1):83–103.
  7. Jeyakumar V, Lee GM, Dinh N: Characterizations of solution sets of convex vector minimization problems. European Journal of Operational Research 2006, 174(3):1380–1395. doi:10.1016/j.ejor.2005.05.007
  8. Govil MG, Mehra A: ε-optimality for multiobjective programming on a Banach space. European Journal of Operational Research 2004, 157(1):106–112. doi:10.1016/S0377-2217(03)00206-6
  9. Gutiérrez C, Jiménez B, Novo V: Multiplier rules and saddle-point theorems for Helbig's approximate solutions in convex Pareto problems. Journal of Global Optimization 2005, 32(3):367–383. doi:10.1007/s10898-004-5904-4
  10. Hamel A: An ε-Lagrange multiplier rule for a mathematical programming problem on Banach spaces. Optimization 2001, 49(1–2):137–149. doi:10.1080/02331930108844524
  11. Jeyakumar V, Glover BM: Characterizing global optimality for DC optimization problems under convex inequality constraints. Journal of Global Optimization 1996, 8(2):171–187. doi:10.1007/BF00138691
  12. Kim GS, Lee GM: On ε-approximate solutions for convex semidefinite optimization problems. Taiwanese Journal of Mathematics 2007, 11(3):765–784.
  13. Liu JC: ε-duality theorem of nondifferentiable nonconvex multiobjective programming. Journal of Optimization Theory and Applications 1991, 69(1):153–167. doi:10.1007/BF00940466
  14. Liu JC: ε-Pareto optimality for nondifferentiable multiobjective programming via penalty function. Journal of Mathematical Analysis and Applications 1996, 198(1):248–261. doi:10.1006/jmaa.1996.0080
  15. Strodiot J-J, Nguyen VH, Heukemes N: ε-optimal solutions in nondifferentiable convex programming and some related questions. Mathematical Programming 1983, 25(3):307–328. doi:10.1007/BF02594782
  16. Yokoyama K, Shiraishi S: An ε-optimal condition for convex programming problems without Slater's constraint qualifications. Preprint.
  17. Hiriart-Urruty J-B, Lemaréchal C: Convex Analysis and Minimization Algorithms I: Fundamentals, Grundlehren der mathematischen Wissenschaften, Volume 305. Springer, Berlin, Germany; 1993.
  18. Hiriart-Urruty J-B, Lemaréchal C: Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods, Grundlehren der mathematischen Wissenschaften, Volume 306. Springer, Berlin, Germany; 1993.

Copyright

© G.M. Lee and J.H. Lee 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.