
Parallel LQP alternating direction method for solving variational inequality problems with separable structure

Abstract

In this paper, we propose a logarithmic-quadratic proximal alternating direction method for structured variational inequalities. The predictor is obtained by solving a series of related systems of nonlinear equations, and the new iterate is obtained as a convex combination of the previous point and the point generated by a projection-type method along a new descent direction. Global convergence of the new method is proved under certain assumptions. Preliminary numerical experiments are included to verify the theoretical assertions of the proposed method.

MSC: 90C33, 49J40.

1 Introduction

The problem we are concerned with in this paper is the following variational inequality: find $u^* \in \Omega$ such that

\[
(u - u^*)^T F(u^*) \ge 0, \quad \forall u \in \Omega,
\]
(1.1)

with

\[
u = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad F(u) = \begin{pmatrix} f(x) \\ g(y) \end{pmatrix} \qquad \text{and} \qquad \Omega = \bigl\{(x, y) \mid x \in \mathbb{R}^n_+,\ y \in \mathbb{R}^m_+,\ Ax + By = b\bigr\},
\]
(1.2)

where $A \in \mathbb{R}^{l \times n}$, $B \in \mathbb{R}^{l \times m}$ are given matrices, $b \in \mathbb{R}^l$ is a given vector, and $f: \mathbb{R}^n_+ \to \mathbb{R}^n$, $g: \mathbb{R}^m_+ \to \mathbb{R}^m$ are given monotone operators. Studies and applications of such problems can be found in [1–11]. By attaching a Lagrange multiplier vector $\lambda \in \mathbb{R}^l$ to the linear constraint $Ax + By = b$, problem (1.1)-(1.2) can be reformulated as finding $w^* \in \mathcal{W}$ such that

\[
(w - w^*)^T Q(w^*) \ge 0, \quad \forall w \in \mathcal{W},
\]
(1.3)

where

\[
w = \begin{pmatrix} x \\ y \\ \lambda \end{pmatrix}, \qquad Q(w) = \begin{pmatrix} f(x) - A^T\lambda \\ g(y) - B^T\lambda \\ Ax + By - b \end{pmatrix}, \qquad \mathcal{W} = \mathbb{R}^n_+ \times \mathbb{R}^m_+ \times \mathbb{R}^l.
\]
(1.4)

Problem (1.3)-(1.4) is referred to as the structured variational inequality (SVI).
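As a small illustration, take $n = m = l = 1$, $f(x) = x - 1$, $g(y) = y - 2$, $A = B = 1$ and $b = 3$. Then $w^* = (x^*, y^*, \lambda^*) = (1, 2, 0)$ solves (1.3)-(1.4): indeed $Q(w^*) = 0$, so $(w - w^*)^T Q(w^*) \ge 0$ holds trivially for all $w \in \mathcal{W}$, and the constraint $x^* + y^* = 3$ is satisfied.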

The alternating direction method (ADM), originally proposed by Gabay and Mercier [4] and Gabay [3], is a powerful method for solving the structured problem (1.3)-(1.4), since it decomposes the original problem into a series of smaller subproblems. The classical proximal alternating direction method (PADM) [12–15] is an effective numerical approach for solving variational inequalities with separable structure. In [16], He proposed splitting augmented Lagrangian methods for structured monotone variational inequalities whose operator is composed of two separable operators. For given $(x^k, y^k, \lambda^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$, the predictor $\tilde w^k = (\tilde x^k, \tilde y^k, \tilde\lambda^k)$ is obtained via the following steps:

Step 1. Solve the following variational inequality to obtain x ˜ k :

\[
(x - \tilde x^k)^T\bigl\{f(\tilde x^k) - A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr]\bigr\} \ge 0, \quad \forall x \in \mathbb{R}^n_{++}.
\]
(1.5)

Step 2. Solve the following variational inequality to obtain y ˜ k :

\[
(y - \tilde y^k)^T\bigl\{g(\tilde y^k) - B^T\bigl[\lambda^k - H(Ax^k + B\tilde y^k - b)\bigr]\bigr\} \ge 0, \quad \forall y \in \mathbb{R}^m_{++}.
\]
(1.6)

Step 3. Update $\tilde\lambda^k$ via

\[
\tilde\lambda^k = \lambda^k - H\bigl(A\tilde x^k + B\tilde y^k - b\bigr).
\]
(1.7)

The main advantage of the method of [16] is that the two subproblems defining the predictor can be solved in parallel. Recently, Yuan and Li [17] developed a logarithmic-quadratic proximal (LQP) decomposition method by applying LQP terms to regularize the ADM subproblems. For a given $w^k = (x^k, y^k, \lambda^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$ and $\mu \in (0,1)$, the new iterate $(x^{k+1}, y^{k+1}, \lambda^{k+1})$ in [17] is obtained by solving the following system:

\[
\begin{aligned}
& f(x) - A^T\bigl[\lambda^k - H(Ax + By^k - b)\bigr] + R\bigl[(x - x^k) + \mu\bigl(x^k - X_k^2 x^{-1}\bigr)\bigr] = 0, \\
& g(y) - B^T\bigl[\lambda^k - H(Ax^{k+1} + By - b)\bigr] + S\bigl[(y - y^k) + \mu\bigl(y^k - Y_k^2 y^{-1}\bigr)\bigr] = 0, \\
& \lambda^{k+1} = \lambda^k - H\bigl(Ax^{k+1} + By^{k+1} - b\bigr),
\end{aligned}
\]

where $X_k = \operatorname{diag}(x_1^k, x_2^k, \ldots, x_n^k)$, $x^{-1}$ is the vector whose $j$th entry is $1/x_j$, and $Y_k$, $y^{-1}$ are defined analogously. Note that the LQP method was presented originally in [18]. Later, Bnouhachem et al. [19–21] proposed a new inexact LQP alternating direction method in which a series of related systems of nonlinear equations is solved. Very recently, Li [22] presented an LQP-based prediction-correction method: the new iterate is obtained by a convex combination of the previous point and the one generated by a projection-type method along a descent direction.

In the present paper, inspired by the works cited above and by recent developments in this direction, we propose a new LQP-based prediction-correction method. The predictor is obtained by using the idea of He [16] to solve a series of related systems of nonlinear equations, and the new iterate is obtained by a convex combination of the previous point and the one generated by a projection-type method along a new descent direction. Under certain assumptions, we prove the global convergence of the proposed algorithm. Preliminary numerical experiments are included to verify the theoretical assertions of the proposed method.

2 The proposed method

In this section, we recall some basic definitions and properties which will be frequently used in our later analysis, together with some useful results already proved in the literature. The first lemma provides some basic properties of the projection onto $\Omega$ under the $G$-norm.

Lemma 2.1 Let $G$ be a symmetric positive definite matrix and let $\Omega$ be a nonempty closed convex subset of $\mathbb{R}^l$. We denote by $P_{\Omega,G}[\cdot]$ the projection onto $\Omega$ under the $G$-norm, i.e.,

\[
P_{\Omega,G}[v] = \operatorname{argmin}\bigl\{\|v - u\|_G \mid u \in \Omega\bigr\}.
\]

Then, we have the following inequalities:

\[
\bigl(z - P_{\Omega,G}[z]\bigr)^T G\bigl(P_{\Omega,G}[z] - v\bigr) \ge 0, \quad \forall z \in \mathbb{R}^l,\ \forall v \in \Omega;
\]
(2.1)
\[
\bigl\|P_{\Omega,G}[u] - P_{\Omega,G}[v]\bigr\|_G \le \|u - v\|_G, \quad \forall u, v \in \mathbb{R}^l;
\]
(2.2)
\[
\bigl\|u - P_{\Omega,G}[z]\bigr\|_G^2 \le \|z - u\|_G^2 - \bigl\|z - P_{\Omega,G}[z]\bigr\|_G^2, \quad \forall z \in \mathbb{R}^l,\ \forall u \in \Omega.
\]
(2.3)
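The three inequalities above are easy to check numerically. The following sketch (ours, not part of the paper) assumes a diagonal positive definite $G$ and $\Omega = \mathbb{R}^l_+$, in which case the $G$-projection decouples componentwise and reduces to clipping at zero:

import numpy as np

# Spot-check of Lemma 2.1 under two simplifying assumptions:
# G diagonal positive definite and Omega the nonnegative orthant,
# so that P_{Omega,G}[z] = max(z, 0) componentwise.
rng = np.random.default_rng(0)
l = 5
G = np.diag(rng.uniform(0.5, 2.0, l))        # diagonal SPD matrix

def proj(z):                                  # argmin ||z - u||_G over u >= 0
    return np.maximum(z, 0.0)

def norm_G(z):
    return np.sqrt(z @ G @ z)

for _ in range(1000):
    z, u2 = rng.normal(size=l), rng.normal(size=l)
    u = np.abs(rng.normal(size=l))            # a point of Omega, for (2.3)
    v = np.abs(rng.normal(size=l))            # a point of Omega, for (2.1)
    Pz = proj(z)
    assert (z - Pz) @ G @ (Pz - v) >= -1e-12                     # (2.1)
    assert norm_G(proj(z) - proj(u2)) <= norm_G(z - u2) + 1e-12  # (2.2)
    assert norm_G(u - Pz)**2 <= norm_G(z - u)**2 - norm_G(z - Pz)**2 + 1e-9  # (2.3)

The same checks go through for a general symmetric positive definite $G$ once the projection is computed by a quadratic-programming solver; the diagonal case is used here only to keep the projection explicit.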

Throughout the paper, we make the following standard assumptions:

Assumption A $f$ is monotone with respect to $\mathbb{R}^n_{++}$ and $g$ is monotone with respect to $\mathbb{R}^m_{++}$.

Assumption B The solution set of SVI, denoted by $\mathcal{W}^*$, is nonempty.

We now state the new LQP alternating direction method (LQP-ADM) for solving SVI.

Prediction step: For a given $w^k = (x^k, y^k, \lambda^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$ and $\mu \in (0,1)$, the predictor $\tilde w^k = (\tilde x^k, \tilde y^k, \tilde\lambda^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$ is obtained by solving the following system, where $\tilde x^k$ and $\tilde y^k$ denote the solutions of (2.4a) and (2.4b), respectively:

\[
f(x) - A^T\bigl[\lambda^k - H(Ax + By^k - b)\bigr] + R\bigl[(x - x^k) + \mu\bigl(x^k - X_k^2 x^{-1}\bigr)\bigr] = 0,
\]
(2.4a)
\[
g(y) - B^T\bigl[\lambda^k - H(Ax^k + By - b)\bigr] + S\bigl[(y - y^k) + \mu\bigl(y^k - Y_k^2 y^{-1}\bigr)\bigr] = 0,
\]
(2.4b)
\[
\tilde\lambda^k = \lambda^k - H\bigl(A\tilde x^k + B\tilde y^k - b\bigr).
\]
(2.4c)

Correction step: The new iterate $w^{k+1}(\alpha_k) = (x^{k+1}, y^{k+1}, \lambda^{k+1})$ is given by

\[
w^{k+1}(\alpha_k) = (1 - \sigma)w^k + \sigma P_{\mathcal{W},G}\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr], \quad \sigma \in (0,1),
\]
(2.5)

where

\[
\alpha_k = \frac{\varphi_k}{\|w^k - \tilde w^k\|_G^2},
\]
(2.6)
\[
\begin{aligned}
\varphi_k &:= \|w^k - \tilde w^k\|_M^2 + (\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr), \\
d(w^k, \tilde w^k) &:= \begin{pmatrix} f(\tilde x^k) - A^T\tilde\lambda^k + A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\ g(\tilde y^k) - B^T\tilde\lambda^k + B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\ A\tilde x^k + B\tilde y^k - b \end{pmatrix}
\end{aligned}
\]
(2.7)

and

\[
G = \begin{pmatrix} (1+\mu)R + A^T H A & 0 & 0 \\ 0 & (1+\mu)S + B^T H B & 0 \\ 0 & 0 & H^{-1} \end{pmatrix}, \qquad
M = \begin{pmatrix} R + A^T H A & 0 & 0 \\ 0 & S + B^T H B & 0 \\ 0 & 0 & H^{-1} \end{pmatrix}.
\]
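To make the correction step concrete, the following sketch (ours, not the authors' code) implements (2.5)-(2.7) for a hypothetical affine instance with $A = I$, $B = -I$, $b = 0$, $H = I$, $f(x) = x - c$, $g(y) = y - c$, $R = rI$ and $S = sI$. With these choices $G$ and $M$ are diagonal, so the projection onto $\mathcal{W} = \mathbb{R}^n_+ \times \mathbb{R}^n_+ \times \mathbb{R}^n$ under the $G$-norm is componentwise clipping of the $x$- and $y$-blocks, and the unconstrained $\lambda$-block passes through unchanged:

import numpy as np

def correction_step(x, y, lam, xt, yt, lamt, c, r, s, mu, sigma):
    # toy affine SVI: A = I, B = -I, b = 0, H = I, f(x) = x - c, g(y) = y - c
    dx, dy, dl = x - xt, y - yt, lam - lamt
    res = dx - dy                      # A(x - xt) + B(y - yt)
    # descent direction d(w^k, w~^k) from (2.7); here A^T = I, B^T = -I
    d_x = (xt - c) - lamt + res
    d_y = (yt - c) + lamt - res
    d_l = xt - yt                      # A xt + B yt - b
    # phi_k and alpha_k from (2.6)-(2.7); assumes w^k != w~^k
    phi = (r + 1) * dx @ dx + (s + 1) * dy @ dy + dl @ dl + dl @ res
    nG2 = ((1 + mu) * r + 1) * dx @ dx + ((1 + mu) * s + 1) * dy @ dy + dl @ dl
    alpha = phi / nG2
    # projected step under the G-norm, then convex combination (2.5)
    gx, gy = (1 + mu) * r + 1, (1 + mu) * s + 1     # diagonal blocks of G
    x_new = (1 - sigma) * x + sigma * np.maximum(x - alpha * d_x / gx, 0.0)
    y_new = (1 - sigma) * y + sigma * np.maximum(y - alpha * d_y / gy, 0.0)
    l_new = (1 - sigma) * lam + sigma * (lam - alpha * d_l)     # H^{-1} = I
    return x_new, y_new, l_new, alpha

By Theorem 2.1 below, the step size alpha computed this way is bounded below by $(2 - \sqrt{2})/2$ whenever the predictor comes from (2.4a)-(2.4c).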

We need the following result in the convergence analysis of the proposed method.

Lemma 2.2 [17]

Let $q(u)$ be a monotone mapping of $u$ with respect to $\mathbb{R}^n_+$ and let $R \in \mathbb{R}^{n \times n}$ be a positive definite diagonal matrix. For given $u^k > 0$, let $U_k := \operatorname{diag}(u_1^k, u_2^k, \ldots, u_n^k)$ and let $u^{-1}$ be the $n$-vector whose $j$th element is $1/u_j$. Then the equation

\[
q(u) + R\bigl[(u - u^k) + \mu\bigl(u^k - U_k^2 u^{-1}\bigr)\bigr] = 0
\]
(2.8)

has a unique positive solution $u$. Moreover, for any $v \ge 0$, we have

\[
(v - u)^T q(u) \ge \frac{1+\mu}{2}\bigl(\|u - v\|_R^2 - \|u^k - v\|_R^2\bigr) + \frac{1-\mu}{2}\|u^k - u\|_R^2.
\]
(2.9)
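In one dimension, with a hypothetical affine monotone $q(u) = au + b$, $a \ge 0$ (chosen only so that (2.8) is explicitly solvable), multiplying (2.8) by $u > 0$ gives the quadratic $(a + r)u^2 + \bigl(b - r(1-\mu)u^k\bigr)u - r\mu(u^k)^2 = 0$, whose roots have the negative product $-r\mu(u^k)^2/(a+r)$; hence exactly one root is positive. A direct computation also shows that in this scalar case the gap in (2.9) equals $\mu r v (u^k - u)^2/u$, so (2.9) is tight at $v = 0$. The sketch below verifies both facts:

import numpy as np

def lqp_solve(a, b, uk, r, mu):
    # unique positive root of (a + r) u^2 + (b - r (1 - mu) uk) u - r mu uk^2 = 0
    A2, B2, C2 = a + r, b - r * (1 - mu) * uk, -r * mu * uk**2
    return (-B2 + np.sqrt(B2**2 - 4 * A2 * C2)) / (2 * A2)

a, b, r, mu, uk = 2.0, -1.0, 1.5, 0.5, 0.8
u = lqp_solve(a, b, uk, r, mu)
residual = a * u + b + r * ((u - uk) + mu * (uk - uk**2 / u))   # equation (2.8)
assert u > 0 and abs(residual) < 1e-12

for v in [0.0, 0.3, 1.0, 2.5]:          # spot-check inequality (2.9)
    lhs = (v - u) * (a * u + b)
    rhs = 0.5 * (1 + mu) * r * ((u - v)**2 - (uk - v)**2) \
        + 0.5 * (1 - mu) * r * (uk - u)**2
    assert lhs >= rhs - 1e-10           # equality at v = 0, strict for v > 0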

In the next theorem, we show that $\alpha_k$ is bounded away from zero; this is one of the keys to proving the global convergence results.

Theorem 2.1 For given $w^k \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$, let $\tilde w^k$ be generated by (2.4a)-(2.4c). Then we have

\[
\varphi_k \ge \frac{2 - \sqrt{2}}{2}\,\|w^k - \tilde w^k\|_G^2
\]
(2.10)

and

\[
\alpha_k \ge \frac{2 - \sqrt{2}}{2}.
\]
(2.11)

Proof It follows from (2.7) that

\[
\begin{aligned}
\varphi_k &= \|w^k - \tilde w^k\|_M^2 + (\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\
&= \|x^k - \tilde x^k\|_R^2 + \|Ax^k - A\tilde x^k\|_H^2 + \|y^k - \tilde y^k\|_S^2 + \|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2 \\
&\quad + (\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr).
\end{aligned}
\]
(2.12)

By using the Cauchy-Schwarz inequality, we have

\[
(\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k)\bigr) \ge -\frac{1}{2}\Bigl(\sqrt{2}\,\|A(x^k - \tilde x^k)\|_H^2 + \frac{1}{\sqrt{2}}\,\|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2\Bigr)
\]
(2.13)

and

\[
(\lambda^k - \tilde\lambda^k)^T\bigl(B(y^k - \tilde y^k)\bigr) \ge -\frac{1}{2}\Bigl(\sqrt{2}\,\|B(y^k - \tilde y^k)\|_H^2 + \frac{1}{\sqrt{2}}\,\|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2\Bigr).
\]
(2.14)

Substituting (2.13) and (2.14) into (2.12), we get

\[
\begin{aligned}
\varphi_k &\ge \frac{2 - \sqrt{2}}{2}\bigl(\|Ax^k - A\tilde x^k\|_H^2 + \|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2\bigr) + \|x^k - \tilde x^k\|_R^2 + \|y^k - \tilde y^k\|_S^2 \\
&\ge \frac{2 - \sqrt{2}}{2}\bigl(\|Ax^k - A\tilde x^k\|_H^2 + \|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2\bigr) + (2 - \sqrt{2})\bigl(\|x^k - \tilde x^k\|_R^2 + \|y^k - \tilde y^k\|_S^2\bigr) \\
&= \frac{2 - \sqrt{2}}{2}\bigl(\|w^k - \tilde w^k\|_G^2 + (1 - \mu)\|x^k - \tilde x^k\|_R^2 + (1 - \mu)\|y^k - \tilde y^k\|_S^2\bigr) \\
&\ge \frac{2 - \sqrt{2}}{2}\,\|w^k - \tilde w^k\|_G^2.
\end{aligned}
\]

Therefore, it follows from (2.6) and (2.10) that

\[
\alpha_k \ge \frac{2 - \sqrt{2}}{2},
\]

and this completes the proof. □
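Note that (2.10) relies only on the Cauchy-Schwarz bounds (2.13)-(2.14) and on $\mu \in (0,1)$, so it holds for arbitrary pairs $(w^k, \tilde w^k)$, not only for iterates of the method. The following random-data check (with hypothetical dimensions and diagonal $R$, $S$, $H$) illustrates this:

import numpy as np

rng = np.random.default_rng(1)
n, m, l, mu = 4, 3, 2, 0.5
A, B = rng.normal(size=(l, n)), rng.normal(size=(l, m))
R = np.diag(rng.uniform(0.5, 2.0, n))
S = np.diag(rng.uniform(0.5, 2.0, m))
H = np.diag(rng.uniform(0.5, 2.0, l)); Hinv = np.linalg.inv(H)
bound = (2 - np.sqrt(2)) / 2

for _ in range(1000):
    dx, dy, dl = rng.normal(size=n), rng.normal(size=m), rng.normal(size=l)
    res = A @ dx + B @ dy                    # A(x^k - x~^k) + B(y^k - y~^k)
    phi = (dx @ (R + A.T @ H @ A) @ dx + dy @ (S + B.T @ H @ B) @ dy
           + dl @ Hinv @ dl + dl @ res)      # phi_k from (2.7)
    nG2 = (dx @ ((1 + mu) * R + A.T @ H @ A) @ dx
           + dy @ ((1 + mu) * S + B.T @ H @ B) @ dy + dl @ Hinv @ dl)
    assert phi >= bound * nG2 - 1e-9         # (2.10); hence alpha_k >= bound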

3 Basic results

In this section, we prove some basic properties which will be used in the convergence analysis of the proposed method. The following results are obtained by applying Lemma 2.2 to the LQP system in the prediction step of the proposed method.

Lemma 3.1 For given $w^k = (x^k, y^k, \lambda^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$, let $\tilde w^k$ be generated by (2.4a)-(2.4c). Then for any $w^* = (x^*, y^*, \lambda^*) \in \mathcal{W}^*$, we have

\[
(\bar x^k - \tilde x^k)^T\bigl\{(1+\mu)R(x^k - \tilde x^k) - f(\tilde x^k) + A^T\tilde\lambda^k + A^T H A(x^k - \tilde x^k) - A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\} \le \mu\|x^k - \tilde x^k\|_R^2
\]
(3.1)

and

\[
(\bar y^k - \tilde y^k)^T\bigl\{(1+\mu)S(y^k - \tilde y^k) - g(\tilde y^k) + B^T\tilde\lambda^k + B^T H B(y^k - \tilde y^k) - B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\} \le \mu\|y^k - \tilde y^k\|_S^2,
\]
(3.2)

where

\[
\bar w^k = (\bar x^k, \bar y^k, \bar\lambda^k) := P_{\mathcal{W},G}\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr].
\]
(3.3)

Proof Applying Lemma 2.2 to (2.4a) by setting $u^k = x^k$, $u = \tilde x^k$, $v = \bar x^k$ in (2.9) and

\[
q(u) = f(\tilde x^k) - A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr],
\]

we get

\[
(\bar x^k - \tilde x^k)^T\bigl\{f(\tilde x^k) - A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr]\bigr\} \ge \frac{1+\mu}{2}\bigl(\|\tilde x^k - \bar x^k\|_R^2 - \|x^k - \bar x^k\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2.
\]
(3.4)

Recall

\[
(\bar x^k - \tilde x^k)^T R(x^k - \tilde x^k) = \frac{1}{2}\bigl(\|\tilde x^k - \bar x^k\|_R^2 - \|x^k - \bar x^k\|_R^2\bigr) + \frac{1}{2}\|x^k - \tilde x^k\|_R^2.
\]
(3.5)

Multiplying (3.5) by $1 + \mu$, subtracting (3.4), and using (2.4c), we obtain

\[
(\bar x^k - \tilde x^k)^T\bigl\{(1+\mu)R(x^k - \tilde x^k) - f(\tilde x^k) + A^T\tilde\lambda^k + A^T H A(x^k - \tilde x^k) - A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\} \le \mu\|x^k - \tilde x^k\|_R^2.
\]

Similarly, applying Lemma 2.2 to (2.4b) by setting $u^k = y^k$, $u = \tilde y^k$, $v = \bar y^k$, and replacing $R$, $n$ with $S$, $m$, respectively, in (2.9) and

\[
q(u) = g(\tilde y^k) - B^T\bigl[\lambda^k - H(Ax^k + B\tilde y^k - b)\bigr],
\]

we get

\[
(\bar y^k - \tilde y^k)^T\bigl\{g(\tilde y^k) - B^T\bigl[\lambda^k - H(Ax^k + B\tilde y^k - b)\bigr]\bigr\} \ge \frac{1+\mu}{2}\bigl(\|\tilde y^k - \bar y^k\|_S^2 - \|y^k - \bar y^k\|_S^2\bigr) + \frac{1-\mu}{2}\|y^k - \tilde y^k\|_S^2.
\]
(3.6)

Recall

\[
(\bar y^k - \tilde y^k)^T S(y^k - \tilde y^k) = \frac{1}{2}\bigl(\|\tilde y^k - \bar y^k\|_S^2 - \|y^k - \bar y^k\|_S^2\bigr) + \frac{1}{2}\|y^k - \tilde y^k\|_S^2.
\]
(3.7)

Multiplying (3.7) by $1 + \mu$, subtracting (3.6), and using (2.4c), we have

\[
(\bar y^k - \tilde y^k)^T\bigl\{(1+\mu)S(y^k - \tilde y^k) - g(\tilde y^k) + B^T\tilde\lambda^k + B^T H B(y^k - \tilde y^k) - B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\} \le \mu\|y^k - \tilde y^k\|_S^2.
\]

The assertion of this lemma is proved. □

Theorem 3.1 Let $w^* \in \mathcal{W}^*$, let $w^{k+1}(\alpha_k)$ be defined by (2.5), and let

\[
\Theta(\alpha_k) := \|w^k - w^*\|_G^2 - \|w^{k+1}(\alpha_k) - w^*\|_G^2;
\]
(3.8)

then we have

\[
\Theta(\alpha_k) \ge \sigma\bigl(\|w^k - \bar w^k - \alpha_k(w^k - \tilde w^k)\|_G^2 + 2\alpha_k\varphi_k - \alpha_k^2\|w^k - \tilde w^k\|_G^2\bigr).
\]
(3.9)

Proof It follows from (3.1) and (3.2) that

\[
\begin{pmatrix} \bar x^k - \tilde x^k \\ \bar y^k - \tilde y^k \\ \bar\lambda^k - \tilde\lambda^k \end{pmatrix}^{\!T}
\begin{pmatrix} (1+\mu)R(x^k - \tilde x^k) - f(\tilde x^k) + A^T\tilde\lambda^k + A^T H A(x^k - \tilde x^k) - A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\ (1+\mu)S(y^k - \tilde y^k) - g(\tilde y^k) + B^T\tilde\lambda^k + B^T H B(y^k - \tilde y^k) - B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\ H^{-1}(\lambda^k - \tilde\lambda^k) - (A\tilde x^k + B\tilde y^k - b) \end{pmatrix}
\le \mu\|x^k - \tilde x^k\|_R^2 + \mu\|y^k - \tilde y^k\|_S^2,
\]

which implies

\[
2\alpha_k(\bar w^k - \tilde w^k)^T\bigl(G(w^k - \tilde w^k) - d(w^k, \tilde w^k)\bigr) - 2\alpha_k\mu\|x^k - \tilde x^k\|_R^2 - 2\alpha_k\mu\|y^k - \tilde y^k\|_S^2 \le 0.
\]
(3.10)

Since $w^* \in \mathcal{W}$ and $\bar w^k = P_{\mathcal{W},G}\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr]$, it follows from (2.3) that

\[
\|\bar w^k - w^*\|_G^2 \le \bigl\|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - w^*\bigr\|_G^2 - \bigl\|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - \bar w^k\bigr\|_G^2.
\]
(3.11)

From (2.5), we get

\[
\begin{aligned}
\|w^{k+1}(\alpha_k) - w^*\|_G^2 &= \bigl\|(1-\sigma)(w^k - w^*) + \sigma(\bar w^k - w^*)\bigr\|_G^2 \\
&= (1-\sigma)^2\|w^k - w^*\|_G^2 + \sigma^2\|\bar w^k - w^*\|_G^2 + 2\sigma(1-\sigma)(w^k - w^*)^T G(\bar w^k - w^*).
\end{aligned}
\]

Using the following identity:

\[
2(a + b)^T G b = \|a + b\|_G^2 - \|a\|_G^2 + \|b\|_G^2
\]

with $a = w^k - \bar w^k$ and $b = \bar w^k - w^*$, together with (3.11), we obtain

\[
\begin{aligned}
\|w^{k+1}(\alpha_k) - w^*\|_G^2 &= (1-\sigma)^2\|w^k - w^*\|_G^2 + \sigma^2\|\bar w^k - w^*\|_G^2 \\
&\quad + \sigma(1-\sigma)\bigl\{\|w^k - w^*\|_G^2 - \|w^k - \bar w^k\|_G^2 + \|\bar w^k - w^*\|_G^2\bigr\} \\
&= (1-\sigma)\|w^k - w^*\|_G^2 + \sigma\|\bar w^k - w^*\|_G^2 - \sigma(1-\sigma)\|w^k - \bar w^k\|_G^2 \\
&\le (1-\sigma)\|w^k - w^*\|_G^2 + \sigma\bigl\|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - w^*\bigr\|_G^2 \\
&\quad - \sigma\bigl\|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - \bar w^k\bigr\|_G^2 - \sigma(1-\sigma)\|w^k - \bar w^k\|_G^2.
\end{aligned}
\]
(3.12)

Using the definition of Θ( α k ) and (3.12), we get

\[
\Theta(\alpha_k) \ge \sigma\|w^k - \bar w^k\|_G^2 - 2\sigma\alpha_k(w^k - \bar w^k)^T d(w^k, \tilde w^k) + 2\sigma\alpha_k(w^k - w^*)^T d(w^k, \tilde w^k).
\]
(3.13)

Using the monotonicity of f and g, we obtain

\[
\begin{pmatrix} \tilde x^k - x^* \\ \tilde y^k - y^* \\ \tilde\lambda^k - \lambda^* \end{pmatrix}^{\!T}
\begin{pmatrix} f(\tilde x^k) - A^T\tilde\lambda^k \\ g(\tilde y^k) - B^T\tilde\lambda^k \\ A\tilde x^k + B\tilde y^k - b \end{pmatrix}
\ge
\begin{pmatrix} \tilde x^k - x^* \\ \tilde y^k - y^* \\ \tilde\lambda^k - \lambda^* \end{pmatrix}^{\!T}
\begin{pmatrix} f(x^*) - A^T\lambda^* \\ g(y^*) - B^T\lambda^* \\ Ax^* + By^* - b \end{pmatrix}
\ge 0
\]

and consequently

\[
\begin{aligned}
(\tilde w^k - w^*)^T d(w^k, \tilde w^k) &\ge (\tilde w^k - w^*)^T
\begin{pmatrix} A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\ B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\ 0 \end{pmatrix} \\
&= (A\tilde x^k + B\tilde y^k - b)^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr) \\
&= (\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)
\end{aligned}
\]
(the first equality uses $Ax^* + By^* = b$ and the second uses (2.4c)),

and it follows that

\[
(w^k - w^*)^T d(w^k, \tilde w^k) \ge (w^k - \tilde w^k)^T d(w^k, \tilde w^k) + (\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr).
\]
(3.14)

Applying (3.14) to the last term on the right side of (3.13), we obtain

\[
\begin{aligned}
\Theta(\alpha_k) &\ge \sigma\|w^k - \bar w^k\|_G^2 - 2\sigma\alpha_k(w^k - \bar w^k)^T d(w^k, \tilde w^k) \\
&\quad + 2\sigma\alpha_k\bigl\{(w^k - \tilde w^k)^T d(w^k, \tilde w^k) + (\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\} \\
&= \sigma\bigl\{\|w^k - \bar w^k\|_G^2 + 2\alpha_k(\bar w^k - \tilde w^k)^T d(w^k, \tilde w^k) + 2\alpha_k(\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\}.
\end{aligned}
\]
(3.15)

Adding (3.10) (multiplied by $\sigma$) to (3.15), we get

\[
\begin{aligned}
\Theta(\alpha_k) &\ge \sigma\bigl\{\|w^k - \bar w^k\|_G^2 + 2\alpha_k(\bar w^k - \tilde w^k)^T G(w^k - \tilde w^k) - 2\alpha_k\mu\|x^k - \tilde x^k\|_R^2 - 2\alpha_k\mu\|y^k - \tilde y^k\|_S^2 \\
&\qquad + 2\alpha_k(\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\} \\
&= \sigma\bigl\{\bigl\|w^k - \bar w^k - \alpha_k(w^k - \tilde w^k)\bigr\|_G^2 - \alpha_k^2\|w^k - \tilde w^k\|_G^2 + 2\alpha_k\|w^k - \tilde w^k\|_G^2 \\
&\qquad - 2\alpha_k\mu\|x^k - \tilde x^k\|_R^2 - 2\alpha_k\mu\|y^k - \tilde y^k\|_S^2 + 2\alpha_k(\lambda^k - \tilde\lambda^k)^T\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\bigr\},
\end{aligned}
\]

and by using the definition of $\varphi_k$ in (2.7), the theorem is proved. □

From the computational point of view, a relaxation factor $\gamma \in (0, 2)$ is preferable in the correction step. We are now in a position to prove the contraction property of the iterative sequence.

Theorem 3.2 Let $w^* \in \mathcal{W}^*$ be a solution of SVI and let $w^{k+1}(\gamma\alpha_k)$ be generated by (2.5) with $\alpha_k$ replaced by $\gamma\alpha_k$, $\gamma \in (0,2)$. Then $\{w^k\}$ and $\{\tilde w^k\}$ are bounded sequences, and

\[
\bigl\|w^{k+1}(\gamma\alpha_k) - w^*\bigr\|_G^2 \le \|w^k - w^*\|_G^2 - c\,\|w^k - \tilde w^k\|_G^2,
\]
(3.16)

where

\[
c := \frac{\sigma\gamma(2 - \sqrt{2})^2(2 - \gamma)}{4} > 0.
\]

Proof It follows from (3.9), (2.10), and (2.11) that

\[
\begin{aligned}
\bigl\|w^{k+1}(\gamma\alpha_k) - w^*\bigr\|_G^2 &\le \|w^k - w^*\|_G^2 - \sigma\bigl(2\gamma\alpha_k\varphi_k - \gamma^2\alpha_k^2\|w^k - \tilde w^k\|_G^2\bigr) \\
&= \|w^k - w^*\|_G^2 - \sigma\gamma(2 - \gamma)\alpha_k\varphi_k \\
&\le \|w^k - w^*\|_G^2 - \frac{\sigma\gamma(2 - \sqrt{2})^2(2 - \gamma)}{4}\,\|w^k - \tilde w^k\|_G^2.
\end{aligned}
\]

Since $\gamma \in (0, 2)$, we have

\[
\|w^{k+1} - w^*\|_G \le \|w^k - w^*\|_G \le \cdots \le \|w^0 - w^*\|_G,
\]

and thus $\{w^k\}$ is a bounded sequence.

It follows from (3.16) that

\[
\sum_{k=0}^{\infty} c\,\|w^k - \tilde w^k\|_G^2 < +\infty,
\]

which means that

\[
\lim_{k\to\infty}\|w^k - \tilde w^k\|_G = 0.
\]
(3.17)

Since $\{w^k\}$ is a bounded sequence, we conclude that $\{\tilde w^k\}$ is also bounded. □

4 Convergence of the proposed method

In this section, we prove the global convergence of the proposed method. The following results can be proved by using the technique of Lemma 5.1 in [19].

Lemma 4.1 For given $w^k = (x^k, y^k, \lambda^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++} \times \mathbb{R}^l$, let $\tilde w^k = (\tilde x^k, \tilde y^k, \tilde\lambda^k)$ be generated by (2.4a)-(2.4c). Then for any $w = (x, y, \lambda) \in \mathcal{W}$, we have

\[
(x - \tilde x^k)^T\Bigl(f(\tilde x^k) - A^T\tilde\lambda^k - A^T H A(x^k - \tilde x^k) + A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\Bigr) \ge (x^k - \tilde x^k)^T R\bigl\{(1+\mu)x - (\mu x^k + \tilde x^k)\bigr\}
\]
(4.1)

and

\[
(y - \tilde y^k)^T\Bigl(g(\tilde y^k) - B^T\tilde\lambda^k - B^T H B(y^k - \tilde y^k) + B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\Bigr) \ge (y^k - \tilde y^k)^T S\bigl\{(1+\mu)y - (\mu y^k + \tilde y^k)\bigr\}.
\]
(4.2)

Proof Similarly to (3.4), we have

\[
(x - \tilde x^k)^T\bigl\{f(\tilde x^k) - A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr]\bigr\} \ge \frac{1+\mu}{2}\bigl(\|\tilde x^k - x\|_R^2 - \|x^k - x\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2,
\]

which implies

\[
(x - \tilde x^k)^T\Bigl(f(\tilde x^k) - A^T\tilde\lambda^k - A^T H A(x^k - \tilde x^k) + A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\Bigr) \ge \frac{1+\mu}{2}\bigl(\|\tilde x^k - x\|_R^2 - \|x^k - x\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2.
\]

By a simple manipulation, we have

\[
\begin{aligned}
&\frac{1+\mu}{2}\bigl(\|\tilde x^k - x\|_R^2 - \|x^k - x\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2 \\
&\quad = (1+\mu)x^T R x^k - (1+\mu)x^T R \tilde x^k - (1-\mu)(\tilde x^k)^T R x^k - \mu\|x^k\|_R^2 + \|\tilde x^k\|_R^2 \\
&\quad = (1+\mu)x^T R(x^k - \tilde x^k) - (x^k - \tilde x^k)^T R(\mu x^k + \tilde x^k) \\
&\quad = (x^k - \tilde x^k)^T R\bigl\{(1+\mu)x - (\mu x^k + \tilde x^k)\bigr\},
\end{aligned}
\]

and the assertion (4.1) is proved. Similarly we can prove the assertion (4.2). □

Now, we are ready to prove the convergence of the proposed method.

Theorem 4.1 The sequence $\{w^k\}$ generated by the proposed method converges to some $w^\infty$ which is a solution of SVI.

Proof It follows from (3.17) that

\[
\lim_{k\to\infty}\|x^k - \tilde x^k\|_R = 0, \qquad \lim_{k\to\infty}\|y^k - \tilde y^k\|_S = 0
\]
(4.3)

and

\[
\lim_{k\to\infty}\|\lambda^k - \tilde\lambda^k\|_{H^{-1}} = \lim_{k\to\infty}\|A\tilde x^k + B\tilde y^k - b\|_H = 0.
\]
(4.4)

Moreover, (4.1) and (4.2) imply that

\[
\begin{aligned}
(x - \tilde x^k)^T\bigl(f(\tilde x^k) - A^T\tilde\lambda^k\bigr) &\ge (x^k - \tilde x^k)^T R\bigl\{(1+\mu)x - (\mu x^k + \tilde x^k)\bigr\} \\
&\quad + (x - \tilde x^k)^T\Bigl(A^T H A(x^k - \tilde x^k) - A^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\Bigr)
\end{aligned}
\]

and

\[
\begin{aligned}
(y - \tilde y^k)^T\bigl(g(\tilde y^k) - B^T\tilde\lambda^k\bigr) &\ge (y^k - \tilde y^k)^T S\bigl\{(1+\mu)y - (\mu y^k + \tilde y^k)\bigr\} \\
&\quad + (y - \tilde y^k)^T\Bigl(B^T H B(y^k - \tilde y^k) - B^T H\bigl(A(x^k - \tilde x^k) + B(y^k - \tilde y^k)\bigr)\Bigr).
\end{aligned}
\]

We deduce from (4.3) that

\[
\begin{cases}
\lim_{k\to\infty}\,(x - \tilde x^k)^T\bigl\{f(\tilde x^k) - A^T\tilde\lambda^k\bigr\} \ge 0, & \forall x \in \mathbb{R}^n_{++}, \\[4pt]
\lim_{k\to\infty}\,(y - \tilde y^k)^T\bigl\{g(\tilde y^k) - B^T\tilde\lambda^k\bigr\} \ge 0, & \forall y \in \mathbb{R}^m_{++}.
\end{cases}
\]
(4.5)

Since $\{w^k\}$ is bounded, it has at least one cluster point. Let $w^\infty = (x^\infty, y^\infty, \lambda^\infty)$ be a cluster point of $\{w^k\}$ and let the subsequence $\{w^{k_j}\}$ converge to $w^\infty$. It follows from (4.4) and (4.5) that

\[
\begin{cases}
\lim_{j\to\infty}\,(x - x^{k_j})^T\bigl\{f(x^{k_j}) - A^T\lambda^{k_j}\bigr\} \ge 0, & \forall x \in \mathbb{R}^n_{++}, \\[4pt]
\lim_{j\to\infty}\,(y - y^{k_j})^T\bigl\{g(y^{k_j}) - B^T\lambda^{k_j}\bigr\} \ge 0, & \forall y \in \mathbb{R}^m_{++}, \\[4pt]
\lim_{j\to\infty}\,\bigl(Ax^{k_j} + By^{k_j} - b\bigr) = 0,
\end{cases}
\]

and consequently

\[
\begin{cases}
(x - x^\infty)^T\bigl\{f(x^\infty) - A^T\lambda^\infty\bigr\} \ge 0, & \forall x \in \mathbb{R}^n_{++}, \\[4pt]
(y - y^\infty)^T\bigl\{g(y^\infty) - B^T\lambda^\infty\bigr\} \ge 0, & \forall y \in \mathbb{R}^m_{++}, \\[4pt]
Ax^\infty + By^\infty - b = 0,
\end{cases}
\]

which means that $w^\infty$ is a solution of SVI.

Now, we prove that the sequence $\{w^k\}$ converges to $w^\infty$. Since

\[
\lim_{k\to\infty}\|w^k - \tilde w^k\|_G = 0 \quad\text{and}\quad \bigl\{\tilde w^{k_j}\bigr\} \to w^\infty,
\]

for any $\epsilon > 0$ there exists an index $k_l$ such that

\[
\bigl\|\tilde w^{k_l} - w^\infty\bigr\|_G < \frac{\epsilon}{2} \quad\text{and}\quad \bigl\|w^{k_l} - \tilde w^{k_l}\bigr\|_G < \frac{\epsilon}{2}.
\]
(4.6)

Therefore, for any $k \ge k_l$, it follows from (3.16) and (4.6) that

\[
\|w^k - w^\infty\|_G \le \|w^{k_l} - w^\infty\|_G \le \|w^{k_l} - \tilde w^{k_l}\|_G + \|\tilde w^{k_l} - w^\infty\|_G < \epsilon.
\]

This implies that the sequence $\{w^k\}$ converges to $w^\infty$, which is a solution of SVI. □

5 Preliminary computational results

In this section, we report some numerical results for the proposed method. We consider the following optimization problem with matrix variables:

\[
\min\Bigl\{\frac{1}{2}\|X - C\|_F^2 \,\Big|\, X \in S^n_+\Bigr\},
\]
(5.1)

where $\|\cdot\|_F$ is the matrix Frobenius norm, i.e., $\|C\|_F = \bigl(\sum_{i=1}^n \sum_{j=1}^n |C_{ij}|^2\bigr)^{1/2}$, and

\[
S^n_+ = \bigl\{H \in \mathbb{R}^{n\times n} \mid H^T = H,\ H \succeq 0\bigr\}.
\]

Note that the matrix Frobenius norm is induced by the inner product

\[
\langle A, B\rangle = \operatorname{Trace}\bigl(A^T B\bigr).
\]

Note that problem (5.1) is equivalent to the following:

\[
\begin{aligned}
\min\ & \frac{1}{2}\|X - C\|_F^2 + \frac{1}{2}\|Y - C\|_F^2 \\
\text{s.t.}\ & X - Y = 0, \\
& X, Y \in S^n_+.
\end{aligned}
\]
(5.2)

By attaching a Lagrange multiplier $Z \in \mathbb{R}^{n\times n}$ to the linear constraint $X - Y = 0$, the Lagrange function of (5.2) is

\[
L(X, Y, Z) = \frac{1}{2}\|X - C\|_F^2 + \frac{1}{2}\|Y - C\|_F^2 - \langle Z, X - Y\rangle,
\]

which is defined on $S^n_+ \times S^n_+ \times \mathbb{R}^{n\times n}$. If $(X^*, Y^*, Z^*) \in S^n_+ \times S^n_+ \times \mathbb{R}^{n\times n}$ is a KKT point of (5.2), then (5.2) can be converted to the following variational inequality: find $u^* = (X^*, Y^*, Z^*) \in \Omega = S^n_+ \times S^n_+ \times \mathbb{R}^{n\times n}$ such that

\[
\begin{cases}
\langle X - X^*, (X^* - C) - Z^*\rangle \ge 0, \\
\langle Y - Y^*, (Y^* - C) + Z^*\rangle \ge 0, \qquad \forall u = (X, Y, Z) \in \Omega, \\
X^* - Y^* = 0.
\end{cases}
\]
(5.3)

Problem (5.3) is a special case of (1.3)-(1.4) with matrix variables, where $A = I_{n\times n}$, $B = -I_{n\times n}$, $b = 0$, $f(X) = X - C$, $g(Y) = Y - C$ and $\mathcal{W} = S^n_+ \times S^n_+ \times \mathbb{R}^{n\times n}$.

For simplicity, we take $R = rI_{n\times n}$, $S = sI_{n\times n}$ and $H = I_{n\times n}$, where $r > 0$ and $s > 0$ are scalars. In all tests we take $\mu = 0.5$, $C = \operatorname{rand}(n)$ and $(X^0, Y^0, Z^0) = (I_{n\times n}, I_{n\times n}, 0_{n\times n})$ as the initial point. The iteration is stopped as soon as

\[
\max\bigl\{\|X^k - \tilde X^k\|,\ \|Y^k - \tilde Y^k\|,\ \|Z^k - \tilde Z^k\|\bigr\} \le 10^{-6}.
\]
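For reference, the sketch below (in Python/NumPy; the experiments reported next were run in Matlab) shows two ingredients any implementation of this test needs: the Frobenius-norm projection onto $S^n_+$, which handles the cone constraints in (5.3), and the stopping test above, where we read the unspecified norm as the Frobenius norm:

import numpy as np

def proj_psd(M):
    # Frobenius-norm projection onto S^n_+: symmetrize, clip eigenvalues
    M = (M + M.T) / 2
    lam, Q = np.linalg.eigh(M)
    return (Q * np.maximum(lam, 0.0)) @ Q.T

def stopped(Xk, Xt, Yk, Yt, Zk, Zt, tol=1e-6):
    # max over the three blocks of the Frobenius prediction gaps
    return max(np.linalg.norm(Xk - Xt), np.linalg.norm(Yk - Yt),
               np.linalg.norm(Zk - Zt)) <= tol

n = 100
C = np.random.rand(n, n)                              # C = rand(n)
X0, Y0, Z0 = np.eye(n), np.eye(n), np.zeros((n, n))   # initial point

Since $R$, $S$ and $H$ are scalar multiples of the identity here, the $G$-norm projection onto each $S^n_+$ block coincides with the Frobenius-norm projection.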

All codes were written in Matlab, and we compare the proposed method with the method of [22]. The iteration numbers, denoted by $k$, and the computational times for problem (5.1) with different dimensions are given in Tables 1 and 2.

Table 1 Numerical results for problem (5.1) with $r = 0.5$, $s = 5$
Table 2 Numerical results for problem (5.1) with $r = 1$, $s = 10$

Tables 1 and 2 show that the proposed method is efficient. Moreover, they demonstrate computationally that the new method is more effective than the method presented in [22], in the sense that the new method needs fewer iterations and less computational time.

6 Conclusions

In this paper, we propose a new logarithmic-quadratic proximal alternating direction method (LQP-ADM) for solving structured variational inequalities. Each iteration of the new LQP-ADM includes a prediction step, where a prediction point is obtained by using the idea of He [16], and a correction step, where the new iterate is generated by a convex combination of the previous iterate and the one generated by a projection-type method along a new descent direction. Global convergence of the proposed method is proved under mild assumptions.

References

  1. Eckstein J, Bertsekas DB: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55: 293-318. 10.1007/BF01581204


  2. Fortin M, Glowinski R: Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems. North-Holland, Amsterdam; 1983.


  3. Gabay D: Applications of the method of multipliers to variational inequalities. In Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems. Edited by: Fortin M, Glowinski R. North-Holland, Amsterdam; 1983:299-331.


  4. Gabay D, Mercier B: A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Comput. Math. Appl. 1976, 2: 17-40. 10.1016/0898-1221(76)90003-1


  5. Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.


  6. Glowinski R, Le Tallec P: Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM Studies in Applied Mathematics. SIAM, Philadelphia; 1989.


  7. Teboulle M: Convergence of proximal-like algorithms. SIAM J. Optim. 1997, 7: 1069-1083. 10.1137/S1052623495292130


  8. He BS, Yang H: Some convergence properties of a method of multipliers for linearly constrained monotone variational inequalities. Oper. Res. Lett. 1998, 23: 151-161. 10.1016/S0167-6377(98)00044-3


  9. Kontogiorgis S, Meyer RR: A variable-penalty alternating directions method for convex optimization. Math. Program. 1998, 83: 29-53.


  10. Jiang ZK, Bnouhachem A: A projection-based prediction-correction method for structured monotone variational inequalities. Appl. Math. Comput. 2008, 202: 747-759. 10.1016/j.amc.2008.03.018


  11. Tao M, Yuan XM: On the O(1/t) convergence rate of alternating direction method with logarithmic-quadratic proximal regularization. SIAM J. Optim. 2012,22(4):1431-1448. 10.1137/110847639


  12. Chen G, Teboulle M: A proximal-based decomposition method for convex minimization problems. Math. Program. 1994, 64: 81-101. 10.1007/BF01582566


  13. Eckstein J: Some saddle-function splitting methods for convex programming. Optim. Methods Softw. 1994, 4: 75-83. 10.1080/10556789408805578


  14. He BS, Liao LZ, Han DR, Yang H: A new inexact alternating directions method for monotone variational inequalities. Math. Program. 2002, 92: 103-118. 10.1007/s101070100280


  15. Hamdi A, Mishra SK: Decomposition methods based on augmented Lagrangians: a survey. In Topics in Nonconvex Optimization: Theory and Applications. Springer, Berlin; 2011:175-204.


  16. He BS: Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities. Comput. Optim. Appl. 2009, 42: 195-212. 10.1007/s10589-007-9109-x


  17. Yuan XM, Li M: An LQP-based decomposition method for solving a class of variational inequalities. SIAM J. Optim. 2011,21(4):1309-1318. 10.1137/070703557


  18. Auslender A, Teboulle M, Ben-Tiba S: A logarithmic-quadratic proximal method for variational inequalities. Comput. Optim. Appl. 1999, 12: 31-40. 10.1023/A:1008607511915


  19. Bnouhachem A, Benazza H, Khalfaoui M: An inexact alternating direction method for solving a class of structured variational inequalities. Appl. Math. Comput. 2013, 219: 7837-7846. 10.1016/j.amc.2013.01.067


  20. Bnouhachem A, Xu MH: An inexact LQP alternating direction method for solving a class of structured variational inequalities. Comput. Math. Appl. 2014, 67: 671-680. 10.1016/j.camwa.2013.12.010


  21. Bnouhachem A: On LQP alternating direction method for solving variational inequality problems. J. Inequal. Appl. 2014, 2014: Article ID 80.


  22. Li M: A hybrid LQP-based method for structured variational inequalities. Int. J. Comput. Math. 2012,89(10):1412-1425. 10.1080/00207160.2012.688822



Acknowledgements

The authors would like to thank Qatar University for providing excellent research facilities under the Start-Up Grant: QUSG-CAS-DMSP-13/14-8.

Author information

Correspondence to Abdelouahed Hamdi.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Bnouhachem, A., Hamdi, A. Parallel LQP alternating direction method for solving variational inequality problems with separable structure. J Inequal Appl 2014, 392 (2014). https://doi.org/10.1186/1029-242X-2014-392
