
A projection method for bilevel variational inequalities

Abstract

A fixed point iteration algorithm is introduced to solve bilevel monotone variational inequalities. The algorithm uses simple projection sequences. Strong convergence of the iteration sequences generated by the algorithm to the solution is guaranteed under some assumptions in a real Hilbert space.

MSC:65K10, 90C25.

1 Introduction

Let C be a nonempty closed convex subset of a real Hilbert space ℋ with the inner product 〈⋅,⋅〉 and the norm ∥⋅∥. We denote weak and strong convergence by ⇀ and →, respectively. The bilevel variational inequality problem, (BVI) for short, is formulated as follows:

Find  x ∗ ∈Sol(G,C) such that  〈 F ( x ∗ ) , x − x ∗ 〉 ≥0,∀x∈Sol(G,C),

where G:H→H and Sol(G,C) denotes the set of all solutions of the variational inequality:

Find  y ∗ ∈C such that  〈 G ( y ∗ ) , y − y ∗ 〉 ≥0,∀y∈C,

and F:C→H. We denote the solution set of problem (BVI) by Ω.

Bilevel variational inequalities are special classes of quasivariational inequalities (see [1–4]) and of equilibrium problems with equilibrium constraints considered in [5]. Moreover, they cover some classes of mathematical programs with equilibrium constraints (see [6]), bilevel minimization problems (see [7]), variational inequalities (see [8–13]), minimum-norm problems over the solution set of variational inequalities (see [14, 15]), bilevel convex programming models (see [16]) and bilevel linear programming (see [17]).

Suppose that f:H→R. It is well known in convex programming that if f is convex and differentiable on Sol(G,C), then x ∗ is a solution to

min { f ( x ) : x ∈ Sol ( G , C ) }

if and only if x ∗ solves the variational inequality VI(∇f,Sol(G,C)), where ∇f is the gradient of f. Thus, the bilevel variational inequality (BVI) can be written as a mathematical program with equilibrium constraints as follows:

{ min f ( x ) , x ∈ { y ∗ : 〈 G ( y ∗ ) , z − y ∗ 〉 ≥ 0 , ∀ z ∈ C } .

If f, g are two convex and differentiable functions, then problem (BVI) (where F:=∇f and G:=∇g) becomes the following bilevel minimization problem (see [7]):

{ min f ( x ) , x ∈ argmin { g ( x ) : x ∈ C } .

In the special case F(x)=x for all x∈C, problem (BVI) becomes the minimum-norm problem over the solution set of the variational inequality:

Find  x ∗ ∈C such that  x ∗ = Pr Sol ( G , C ) (0),

where Pr Sol ( G , C ) (0) is the projection of 0 onto Sol(G,C). A typical example is the least-squares solution to the constrained linear inverse problem in [18]. For solving this problem under the assumption that the subset C⊆H is nonempty closed convex, G:C→H is α-inverse strongly monotone and Sol(G,C)≠∅, Yao et al. in [14] introduced the following extended extragradient method:

{ x 0 ∈ C,
  y k = Pr C ( x k − λ G ( x k ) − α k x k ),
  x k+1 = Pr C [ x k − λ G ( x k ) + μ ( y k − x k ) ], ∀ k ≥ 0.

They showed that, under certain conditions on the parameters, the sequence { x k } converges strongly to x̂ = Pr Sol(G,C) (0).
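The scheme above can be sketched numerically. In the toy instance below, C is the unit ball, G(x)=Ax with A symmetric positive definite, and the step sizes and the sequence α k = 1/(k+2) are my own illustrative choices, not data from [14]; for this G, Sol(G,C)={0}, so the iterates should approach the minimum-norm point 0.

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Euclidean projection onto the ball C = {x : ||x|| <= r}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def yao_extragradient(G, x0, lam=0.1, mu=0.5, iters=500):
    # Extended extragradient scheme (illustrative sketch):
    #   y^k     = Pr_C(x^k - lam*G(x^k) - a_k*x^k)
    #   x^{k+1} = Pr_C[x^k - lam*G(x^k) + mu*(y^k - x^k)]
    x = x0
    for k in range(iters):
        a_k = 1.0 / (k + 2)  # regularizing parameter a_k -> 0 (assumed choice)
        g = G(x)
        y = proj_ball(x - lam * g - a_k * x)
        x = proj_ball(x - lam * g + mu * (y - x))
    return x

# G(x) = A x with A symmetric positive definite is inverse strongly monotone,
# and Sol(G, C) = {0} on the unit ball, so the iterates should tend to 0.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x = yao_extragradient(lambda v: A @ v, np.array([0.9, -0.4]))
```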

Recently, Anh et al. in [19] introduced an extragradient algorithm for solving problem (BVI) in the Euclidean space R n . Roughly speaking, the algorithm consists of two loops. At each iteration k of the outer loop, they apply the extragradient method to the lower variational inequality problem. Then, starting from the iterate obtained in the outer loop, the inner loop computes an ϵ k -solution of problem VI(G,C). The convergence of the algorithm crucially depends on the starting point x 0 and on the parameters chosen in advance. More precisely, they presented the following scheme:

Step 1. Compute y k := Pr C ( x k − α k G ( x k ) ) and z k := Pr C ( x k − α k G ( y k ) ).
Step 2. Inner iterations j = 0, 1, … . Compute
  x k,0 := z k − λ F ( z k ),
  y k,j := Pr C ( x k,j − δ j G ( x k,j ) ),
  x k,j+1 := α j x k,0 + β j x k,j + γ j Pr C ( x k,j − δ j G ( y k,j ) ).
If ∥ x k,j+1 − Pr Sol(G,C) ( x k,0 ) ∥ ≤ ϵ̄ k , then set h k := x k,j+1 and go to Step 3. Otherwise, increase j by 1.
Step 3. Set x k+1 := α k u + β k x k + γ k h k . Increase k by 1 and go to Step 1.

Under the assumptions that F is strongly monotone and Lipschitz continuous and that G is pseudomonotone and Lipschitz continuous on C, with the parameter sequences chosen appropriately, they showed that the two iterative sequences { x k } and { z k } converge to the same point x ∗ , which is a solution of problem (BVI). However, at each iteration of the outer loop, the scheme requires computing an approximate solution to a variational inequality problem.

There exist some other solution methods for bilevel variational inequalities when the cost operator has some monotonicity property (see [16, 19–21]). All of these methods require solving auxiliary variational inequalities. In order to avoid this requirement, we combine the projected gradient method in [10] for solving variational inequalities with the fixed-point property that x ∗ is a solution to problem VI(F,C) if and only if it is a fixed point of the mapping x ↦ Pr C (x−λF(x)), where λ>0. The strong convergence of the proposed sequences is then established in a real Hilbert space.
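The fixed-point property just mentioned is easy to check numerically. In the sketch below, C is a box and F(x) = x − b; both are my own illustrative choices. The residual ∥x − Pr C (x − λF(x))∥ vanishes exactly at solutions of VI(F,C).

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    # Projection onto the box C = [lo, hi]^n
    return np.clip(x, lo, hi)

def vi_residual(F, x, lam=0.5):
    # Fixed-point residual ||x - Pr_C(x - lam*F(x))||; it is zero
    # exactly when x solves VI(F, C).
    return np.linalg.norm(x - proj_box(x - lam * F(x)))

# For F(x) = x - b the solution of VI(F, C) on the box is Pr_C(b).
b = np.array([2.0, 0.3])
x_star = proj_box(b)
print(vi_residual(lambda v: v - b, x_star))  # → 0.0
```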

In this paper, we are interested in finding a solution to bilevel variational inequalities (BVI), where the operators F and G satisfy the following usual conditions:

(A1) G is η-inverse strongly monotone on ℋ and F is β-strongly monotone on C.

(A2) F is L-Lipschitz continuous on C.

(A3) The solution set Ω of problem (BVI) is nonempty.

The purpose of this paper is to propose an algorithm for directly solving bilevel monotone variational inequalities by using the projected gradient method and fixed point techniques.

The rest of this paper is organized as follows. In Section 2, we recall some properties of monotone operators and of the metric projection onto a closed convex set, and introduce in detail a new algorithm for solving problem (BVI). Section 3 is devoted to the convergence analysis of the algorithm.

2 Preliminaries

We list some well-known definitions and properties of the metric projection which will be used in our analysis.

Definition 2.1 Let C be a nonempty closed convex subset of ℋ. We denote the projection onto C by Pr C (⋅), i.e.,

Pr C (x)=argmin { ∥ y − x ∥ : y ∈ C } ,∀x∈H.

The operator φ:C→H is said to be

  1. (i)

    γ-strongly monotone on C if for each x,y∈C,

    〈 φ ( x ) − φ ( y ) , x − y 〉 ≥γ ∥ x − y ∥ 2 ;
  2. (ii)

    η-inverse strongly monotone on C if for each x,y∈C,

    〈 φ ( x ) − φ ( y ) , x − y 〉 ≥η ∥ φ ( x ) − φ ( y ) ∥ 2 ;
  3. (iii)

    Lipschitz continuous with constant L>0 (shortly L-Lipschitz continuous) on C if for each x,y∈C,

    ∥ φ ( x ) − φ ( y ) ∥ ≤L∥x−y∥.

If φ:C→C and L=1, then φ is called nonexpansive on C.

We know that the projection Pr C (⋅) has the following well-known basic properties.

Property 2.2

  1. (a)

    ∥ Pr C (x)− Pr C (y)∥≤∥x−y∥, ∀x,y∈H.

  2. (b)

    〈x− Pr C (x),y− Pr C (x)〉≤0, ∀y∈C,x∈H.

  3. (c)

    ∥ Pr C ( x ) − Pr C ( y ) ∥ 2 ≤ ∥ x − y ∥ 2 − ∥ Pr C ( x ) − x + y − Pr C ( y ) ∥ 2 , ∀x,y∈H.
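Properties 2.2(a) and (b) can be spot-checked numerically for a concrete projection. The check below uses the Euclidean projection onto the unit ball and random points; the set and sample size are my own choices for illustration.

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= r}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    px, py = proj_ball(x), proj_ball(y)
    # (a) nonexpansiveness of the projection
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    # (b) obtuse-angle property, tested with the point py in C
    assert np.dot(x - px, py - px) <= 1e-12
```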

To prove the main theorem of this paper, we need the following lemma.

Lemma 2.3 (see [21])

Let A:H→H be β-strongly monotone and L-Lipschitz continuous, λ∈(0,1] and μ∈(0, 2β/L²). Then the mapping T(x):=x−λμA(x) for all x∈H satisfies the inequality

∥ T ( x ) − T ( y ) ∥ ≤(1−λτ)∥x−y∥,∀x,y∈H,

where τ=1−√(1−μ(2β−μL²)) ∈(0,1].
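Lemma 2.3 can be illustrated on a linear operator. Below, A(x) = Mx with a diagonal M, so β is the smallest eigenvalue and L the largest; the matrix and the parameters μ, λ are my own choices within the ranges of the lemma.

```python
import numpy as np

M = np.diag([2.0, 3.0])      # A(x) = M x: beta = 2 (strong monotonicity), L = 3 (Lipschitz)
beta, L = 2.0, 3.0
mu, lam = 0.3, 1.0           # mu in (0, 2*beta/L**2) = (0, 4/9), lam in (0, 1]
tau = 1 - np.sqrt(1 - mu * (2 * beta - mu * L**2))

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    Tx, Ty = x - lam * mu * M @ x, y - lam * mu * M @ y
    # Lemma 2.3: ||T(x) - T(y)|| <= (1 - lam*tau) * ||x - y||
    assert np.linalg.norm(Tx - Ty) <= (1 - lam * tau) * np.linalg.norm(x - y) + 1e-10
```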

Lemma 2.4 (see [22])

Let ℋ be a real Hilbert space, C be a nonempty closed and convex subset of ℋ and S:C→H be a nonexpansive mapping. Then I−S (I is the identity operator on ℋ) is demiclosed at y∈H, i.e., for any sequence ( x k ) in C such that x k ⇀ x̄ ∈C and (I−S)( x k )→y, we have (I−S)( x̄ )=y.

Lemma 2.5 (see [19])

Let { a n } be a sequence of nonnegative real numbers such that

a n + 1 ≤(1− γ n ) a n + δ n ,∀n≥0,

where { γ n }⊂(0,1) and { δ n } is a sequence in ℝ such that

  1. (a)

    ∑ n = 0 ∞ γ n =∞,

  2. (b)

lim sup n→∞ δ n / γ n ≤0 or ∑ n=0 ∞ | δ n / γ n |<+∞.

Then lim n → ∞ a n =0.
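A quick numerical illustration of Lemma 2.5, with γ n = 1/(n+2) and δ n = γ n ² (my own choices, which satisfy conditions (a) and (b)):

```python
# Iterate a_{n+1} = (1 - gamma_n)*a_n + delta_n with gamma_n = 1/(n+2)
# and delta_n = gamma_n**2, so that delta_n/gamma_n -> 0 and sum gamma_n = inf.
a = 1.0
for n in range(200_000):
    g = 1.0 / (n + 2)
    a = (1 - g) * a + g * g
# a_n should tend to 0, as Lemma 2.5 predicts
```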

Now we are in a position to describe an algorithm for problem (BVI). The proposed algorithm can be considered as a combination of the projected gradient and fixed point methods. Roughly speaking, the algorithm consists of two steps. First, we use the well-known projected gradient method for solving the variational inequality VI(G,C): x k+1 = Pr C ( x k −λG( x k )) (k=0,1,…), where λ>0 and x 0 ∈C. The method generates a sequence ( x k ) converging strongly to the unique solution of problem VI(G,C) under the assumptions that G is L-Lipschitz continuous and α-strongly monotone on C with the step-size λ∈(0, 2α/L²). Next, we use the Banach contraction-mapping fixed-point principle for finding the unique fixed point of the contraction mapping T λ =I−λμF, where F is β-strongly monotone and L-Lipschitz continuous, I is the identity mapping, μ∈(0, 2β/L²) and λ∈(0,1]. The algorithm is presented in detail as follows.

Algorithm 2.6 (Projection algorithm for solving (BVI))

Step 0. Choose x 0 ∈C, k=0, a positive sequence { α k }, λ, μ such that

{ 0 < α k ≤ min { 1, 1/τ }, where τ = 1 − √(1 − μ(2β − μL²)),
  lim k→∞ α k = 0, lim k→∞ | 1/α k+1 − 1/α k | = 0, ∑ k=0 ∞ α k = ∞,
  0 < λ ≤ 2η, 0 < μ < 2β/L².
(2.1)

Step 1. Compute

{ y k : = Pr C ( x k − λ G ( x k ) ) , x k + 1 = y k − μ α k F ( y k ) .

Update k:=k+1, and go to Step 1.
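Algorithm 2.6 translates directly into code. The toy instance below is my own illustrative choice, not an experiment from the paper: C is the unit ball, G(x) = x − c (which is 1-inverse strongly monotone), F(x) = x, and α k = (k+2)^(−1/2), which satisfies (2.1) for β = L = 1. With F(x) = x the algorithm should select the minimum-norm element of Sol(G,C) = {Pr C (c)}.

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Euclidean projection onto the ball C = {x : ||x|| <= r}
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def bvi_projection(F, G, x0, lam, mu, iters=5000):
    # Algorithm 2.6 (sketch): for k = 0, 1, ...
    #   y^k     = Pr_C(x^k - lam*G(x^k))
    #   x^{k+1} = y^k - mu*a_k*F(y^k)
    x = x0
    for k in range(iters):
        # a_k -> 0, sum a_k = inf, |1/a_{k+1} - 1/a_k| -> 0 (assumed choice)
        a_k = 1.0 / np.sqrt(k + 2)
        y = proj_ball(x - lam * G(x))
        x = y - mu * a_k * F(y)
    return x

# G(x) = x - c gives Sol(G, C) = {Pr_C(c)} = {(1, 0)} on the unit ball;
# with F(x) = x the iterates should converge to that point.
c = np.array([2.0, 0.0])
x = bvi_projection(lambda v: v, lambda v: v - c, np.zeros(2), lam=1.0, mu=0.5)
```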

Note that in the case F(x)=0 for all x∈C, Algorithm 2.6 becomes the projected gradient algorithm as follows:

x k + 1 := Pr C ( x k − λ G ( x k ) ) .

3 Convergence results

In this section, we state and prove our main results.

Theorem 3.1 Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let two mappings F:C→H and G:H→H satisfy assumptions (A1)-(A3). Then the sequences ( x k ) and ( y k ) in Algorithm 2.6 converge strongly to the same point x ∗ ∈Ω.

Proof Under conditions (2.1), we consider the mapping S k :H→H defined by

S k (x):= Pr C ( x − λ G ( x ) ) −μ α k F [ Pr C ( x − λ G ( x ) ) ] ,∀x∈H.

Using Property 2.2(a), the η-inverse strong monotonicity of G and conditions (2.1), for each x,y∈H, we have

∥ Pr C ( x − λ G ( x ) ) − Pr C ( y − λ G ( y ) ) ∥ 2 ≤ ∥ x − λ G ( x ) − y + λ G ( y ) ∥ 2 = ∥ x − y ∥ 2 + λ 2 ∥ G ( x ) − G ( y ) ∥ 2 − 2 λ 〈 x − y , G ( x ) − G ( y ) 〉 ≤ ∥ x − y ∥ 2 + λ ( λ − 2 η ) ∥ G ( x ) − G ( y ) ∥ 2 ≤ ∥ x − y ∥ 2 .
(3.1)

Combining this and Lemma 2.3, we get

∥ S k ( x ) − S k ( y ) ∥ = ∥ Pr C ( x − λ G ( x ) ) − μ α k F [ Pr C ( x − λ G ( x ) ) ] − Pr C ( y − λ G ( y ) ) + μ α k F [ Pr C ( y − λ G ( y ) ) ] ∥ ≤ ( 1 − α k τ ) ∥ Pr C ( x − λ G ( x ) ) − Pr C ( y − λ G ( y ) ) ∥ ≤ ( 1 − α k τ ) ∥ x − y ∥ ,
(3.2)

where τ:=1−√(1−μ(2β−μL²)). Thus, S k is a contraction on ℋ. By the Banach contraction principle, there is a unique point ξ k such that S k ( ξ k )= ξ k . For each x̂ ∈Sol(G,C), set

Ĉ := { x ∈ H : ∥ x − x̂ ∥ ≤ μ ∥ F ( x̂ ) ∥ / τ }.

Due to this and Property 2.2(a), we get that the mapping S k Pr C ˆ is contractive on â„‹. Then there exists the unique point z k such that S k [ Pr C ˆ ( z k )]= z k . Set z ¯ k = Pr C ˆ ( z k ). It follows from (3.2) that

∥ z k − x̂ ∥ = ∥ S k ( z̄ k ) − x̂ ∥
≤ ∥ S k ( z̄ k ) − S k ( x̂ ) ∥ + ∥ S k ( x̂ ) − x̂ ∥
= ∥ S k ( z̄ k ) − S k ( x̂ ) ∥ + ∥ S k ( x̂ ) − Pr C ( x̂ − λ G ( x̂ ) ) ∥
≤ ( 1 − α k τ ) ∥ z̄ k − x̂ ∥ + μ α k ∥ F [ Pr C ( x̂ − λ G ( x̂ ) ) ] ∥
≤ ( 1 − α k τ ) μ ∥ F ( x̂ ) ∥ / τ + μ α k ∥ F ( x̂ ) ∥
= μ ∥ F ( x̂ ) ∥ / τ.

Thus, z k ∈ Ĉ, S k [ Pr Ĉ ( z k )]= S k ( z k )= z k and hence ξ k = z k ∈ Ĉ. In particular, the sequence ( ξ k ) is bounded, so there exists a subsequence ( ξ k i ) of ( ξ k ) such that ξ k i ⇀ ξ̄. Combining this with the assumption lim k→∞ α k =0, we get

∥ Pr C ( ξ k i − λ G ( ξ k i ) ) − ξ k i ∥ = ∥ Pr C ( ξ k i − λ G ( ξ k i ) ) − S k i ( ξ k i ) ∥ = μ α k i ∥ F [ Pr C ( ξ k i − λ G ( ξ k i ) ) ] ∥ → 0 as  i → ∞ .
(3.3)

It follows from (3.1) that the mapping Pr C (⋅− λG(⋅)) is nonexpansive on ℋ. Using Lemma 2.4, (3.3) and ξ k i ⇀ ξ̄, we obtain Pr C ( ξ̄ −λG( ξ̄ ))= ξ̄, which implies ξ̄ ∈Sol(G,C). Now we prove that lim j→∞ ξ k j = x ∗ ∈Ω.

Set z ¯ k = Pr C ( ξ k −λG( ξ k )), v ∗ =(μF−I)( x ∗ ) and v k =(μF−I)( z ¯ k ), where I is the identity mapping. Since S k j ( ξ k j )= ξ k j and x ∗ = Pr C ( x ∗ −λG( x ∗ )), we have

(1− α k j ) ( ξ k j − z ¯ k j ) + α k j ( ξ k j + v k j ) =0

and

(1− α k j ) [ I − Pr C ( ⋅ − λ G ( ⋅ ) ) ] ( x ∗ ) + α k j ( x ∗ + v ∗ ) = α k j ( x ∗ + v ∗ ) .

Then

− α k j 〈 x ∗ + v ∗ , ξ k j − x ∗ 〉 = ( 1 − α k j ) 〈 ξ k j − x ∗ − ( z ¯ k j − x ∗ ) , ξ k j − x ∗ 〉 + α k j 〈 ξ k j − x ∗ + v k j − v ∗ , ξ k j − x ∗ 〉 .
(3.4)

By the Cauchy–Schwarz inequality, we have

〈 ξ k j − x ∗ − ( z ¯ k j − x ∗ ) , ξ k j − x ∗ 〉 ≥ ∥ ξ k j − x ∗ ∥ 2 − ∥ z ¯ k j − x ∗ ∥ ∥ ξ k j − x ∗ ∥ ≥ ∥ ξ k j − x ∗ ∥ 2 − ∥ ξ k j − x ∗ ∥ 2 = 0
(3.5)

and

〈 ξ k j − x ∗ + v k j − v ∗ , ξ k j − x ∗ 〉 ≥ ∥ ξ k j − x ∗ ∥ 2 − ∥ v k j − v ∗ ∥ ∥ ξ k j − x ∗ ∥ ≥ ∥ ξ k j − x ∗ ∥ 2 − ( 1 − τ ) ∥ ξ k j − x ∗ ∥ 2 = τ ∥ ξ k j − x ∗ ∥ 2 .
(3.6)

Combining (3.4), (3.5) and (3.6), we get

− τ ∥ ξ k j − x ∗ ∥ 2 ≥ 〈 x ∗ + v ∗ , ξ k j − x ∗ 〉 = μ 〈 F ( x ∗ ) , ξ k j − x ∗ 〉 = μ 〈 F ( x ∗ ) , ξ k j − ξ ¯ 〉 + μ 〈 F ( x ∗ ) , ξ ¯ − x ∗ 〉 ≥ μ 〈 F ( x ∗ ) , ξ k j − ξ ¯ 〉 .

Then we have

τ ∥ ξ k j − x ∗ ∥ 2 ≤μ 〈 F ( x ∗ ) , ξ ¯ − ξ k j 〉 .

Letting j→∞ and using ξ k j ⇀ ξ̄, the right-hand side tends to zero; hence the sequence { ξ k j } converges strongly to x ∗ . Since every weakly convergent subsequence of { ξ k } has the same strong limit, the whole sequence { ξ k } also converges strongly to x ∗ .

On the other hand, using (3.2), we have

∥ x k − ξ k ∥ ≤ ∥ x k − ξ k − 1 ∥ + ∥ ξ k − 1 − ξ k ∥ = ∥ S k − 1 ( x k − 1 ) − S k − 1 ( ξ k − 1 ) ∥ + ∥ ξ k − 1 − ξ k ∥ ≤ ( 1 − α k − 1 τ ) ∥ x k − 1 − ξ k − 1 ∥ + ∥ ξ k − 1 − ξ k ∥ .
(3.7)

Moreover, by Lemma 2.3, we have

∥ ξ k−1 − ξ k ∥ = ∥ S k−1 ( ξ k−1 ) − S k ( ξ k ) ∥
= ∥ ( 1 − α k ) z̄ k − α k v k − ( 1 − α k−1 ) z̄ k−1 + α k−1 v k−1 ∥
= ∥ ( 1 − α k ) ( z̄ k − z̄ k−1 ) − α k ( v k − v k−1 ) + ( α k−1 − α k ) ( z̄ k−1 + v k−1 ) ∥
≤ ( 1 − α k ) ∥ z̄ k − z̄ k−1 ∥ + α k ∥ v k − v k−1 ∥ + | α k−1 − α k | μ ∥ F ( z̄ k−1 ) ∥
≤ ( 1 − α k ) ∥ z̄ k − z̄ k−1 ∥ + α k √(1 − μ(2β − μL²)) ∥ ξ k − ξ k−1 ∥ + | α k−1 − α k | μ ∥ F ( z̄ k−1 ) ∥
≤ ( 1 − α k ) ∥ ξ k − ξ k−1 ∥ + α k √(1 − μ(2β − μL²)) ∥ ξ k − ξ k−1 ∥ + | α k−1 − α k | μ ∥ F ( z̄ k−1 ) ∥.

This implies that

α k τ ∥ ξ k − 1 − ξ k ∥ ≤| α k − 1 − α k |μ ∥ F ( z ¯ k − 1 ) ∥

and hence

∥ ξ k − ξ k−1 ∥ ≤ μ | α k−1 − α k | ∥ F ( z̄ k−1 ) ∥ / ( α k τ ).

So, we have

∥ x k − ξ k ∥ ≤ ( 1 − α k−1 τ ) ∥ x k−1 − ξ k−1 ∥ + μ | α k−1 − α k | ∥ F ( z̄ k−1 ) ∥ / ( α k τ ).

Let

δ k := μ | α k − α k+1 | ∥ F ( z̄ k ) ∥ / ( α k α k+1 τ² ), k≥0.

Then

∥ x k − ξ k ∥ ≤(1− α k − 1 τ) ∥ x k − 1 − ξ k − 1 ∥ + α k − 1 τ δ k − 1 ,∀k≥1.

Since { F ( z̄ k ) } is bounded, say ∥F( z̄ k )∥≤K for all k≥0, we have

lim k→∞ δ k = lim k→∞ μ | α k − α k+1 | ∥ F ( z̄ k ) ∥ / ( α k α k+1 τ² ) ≤ ( μK / τ² ) lim k→∞ | 1/α k+1 − 1/α k | = 0.

Applying Lemma 2.5, we obtain lim k→∞ ∥ x k − ξ k ∥=0. Combining this with the fact that the sequence { ξ k } converges strongly to x ∗ , the sequence { x k } also converges strongly to x ∗ , the unique solution to problem (BVI). □

Now we consider the special case F(x)=x for all x∈H. It is easy to see that F is Lipschitz continuous with constant L=1 and strongly monotone with constant β=1 on ℋ. Problem (BVI) becomes the minimum-norm problems of the solution set of the variational inequalities.

Corollary 3.2 Let C be a nonempty closed convex subset of a real Hilbert space ℋ. Let G:H→H be η-inverse strongly monotone. The iteration sequence ( x k ) is defined by

{ y k : = Pr C ( x k − λ G ( x k ) ) , x k + 1 = ( 1 − μ α k ) y k .

The parameters satisfy the following:

{ 0 < α k ≤ min { 1, 1/τ }, where τ = 1 − | 1 − μ |,
  lim k→∞ α k = 0, lim k→∞ | 1/α k+1 − 1/α k | = 0, ∑ k=0 ∞ α k = ∞,
  0 < λ ≤ 2η, 0 < μ < 2.

Then the sequences { x k } and { y k } converge strongly to the same point x ˆ = Pr Sol ( G , C ) (0).
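Corollary 3.2 can likewise be sketched. In the toy instance below (the box C = [−2,2]², G(x) = (x₁ − 1, 0), which is 1-inverse strongly monotone, and my own parameter choices) the solution set Sol(G,C) is the whole segment {1} × [−2,2], and the iteration should pick out its minimum-norm point (1, 0).

```python
import numpy as np

def proj_box(x, lo=-2.0, hi=2.0):
    # Projection onto the box C = [lo, hi]^2
    return np.clip(x, lo, hi)

def min_norm_iteration(G, x0, lam=1.0, mu=0.5, iters=5000):
    # Corollary 3.2 (sketch): x^{k+1} = (1 - mu*a_k) * Pr_C(x^k - lam*G(x^k))
    x = x0
    for k in range(iters):
        a_k = 1.0 / np.sqrt(k + 2)   # a_k -> 0, sum a_k = inf (assumed choice)
        x = (1 - mu * a_k) * proj_box(x - lam * G(x))
    return x

# G(x) = (x1 - 1, 0): Sol(G, C) = {1} x [-2, 2]; its minimum-norm element is (1, 0).
x = min_norm_iteration(lambda v: np.array([v[0] - 1.0, 0.0]), np.array([1.5, 1.5]))
```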

References

  1. Anh PN, Muu LD, Hien NV, Strodiot JJ: Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J. Optim. Theory Appl. 2005, 124: 285–306. 10.1007/s10957-004-0926-0

  2. Baiocchi C, Capelo A: Variational and Quasivariational Inequalities: Applications to Free Boundary Problems. Wiley, New York; 1984.

  3. Facchinei F, Pang JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York; 2003.

  4. Xu MH, Li M, Yang CC: Neural networks for a class of bi-level variational inequalities. J. Glob. Optim. 2009, 44: 535–552. 10.1007/s10898-008-9355-1

  5. Moudafi A: Proximal methods for a class of bilevel monotone equilibrium problems. J. Glob. Optim. 2010, 47: 287–292. 10.1007/s10898-009-9476-1

  6. Luo ZQ, Pang JS, Ralph D: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge; 1996.

  7. Solodov M: An explicit descent method for bilevel convex optimization. J. Convex Anal. 2007, 14: 227–237.

  8. Anh PN: An interior-quadratic proximal method for solving monotone generalized variational inequalities. East-West J. Math. 2008, 10: 81–100.

  9. Anh PN: An interior proximal method for solving pseudomonotone non-Lipschitzian multivalued variational inequalities. Nonlinear Anal. Forum 2009, 14: 27–42.

  10. Anh PN, Muu LD, Strodiot JJ: Generalized projection method for non-Lipschitz multivalued monotone variational inequalities. Acta Math. Vietnam. 2009, 34: 67–79.

  11. Daniele P, Giannessi F, Maugeri A: Equilibrium Problems and Variational Models. Kluwer Academic, Dordrecht; 2003.

  12. Giannessi F, Maugeri A, Pardalos PM: Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models. Kluwer Academic, Dordrecht; 2004.

  13. Konnov IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin; 2000.

  14. Yao Y, Marino G, Muglia L: A modified Korpelevich's method convergent to the minimum-norm solution of a variational inequality. Optimization 2012, 1–11, iFirst.

  15. Zegeye H, Shahzad N, Yao Y: Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2013. 10.1080/02331934.2013.764522

  16. Trujillo-Cortez R, Zlobec S: Bilevel convex programming models. Optimization 2009, 58: 1009–1028. 10.1080/02331930701763330

  17. Glackin J, Ecker JG, Kupferschmid M: Solving bilevel linear programs using multiple objective linear programming. J. Optim. Theory Appl. 2009, 140: 197–212. 10.1007/s10957-008-9467-2

  18. Sabharwal A, Potter LC: Convexly constrained linear inverse problems: iterative least-squares and regularization. IEEE Trans. Signal Process. 1998, 46: 2345–2352. 10.1109/78.709518

  19. Anh PN, Kim JK, Muu LD: An extragradient method for solving bilevel variational inequalities. J. Glob. Optim. 2012, 52: 627–639. 10.1007/s10898-012-9870-y

  20. Kalashnikov VV, Kalashnikova NI: Solving two-level variational inequality. J. Glob. Optim. 1996, 8: 289–294. 10.1007/BF00121270

  21. Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 2009, 71: e1292–e1297. 10.1016/j.na.2009.01.133

  22. Suzuki T: Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305: 227–239. 10.1016/j.jmaa.2004.11.017


Acknowledgements

The authors are very grateful to the anonymous referees for their really helpful and constructive comments that helped us very much in improving the paper.

Author information

Corresponding author

Correspondence to Tran TH Anh.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The main idea of this paper was proposed by TTHA. The revision is made by LBL and TVA. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Anh, T.T., Long, L.B. & Anh, T.V. A projection method for bilevel variational inequalities. J Inequal Appl 2014, 205 (2014). https://doi.org/10.1186/1029-242X-2014-205
