
Yosida approximation equations technique for system of generalized set-valued variational inclusions

Abstract

In this paper, a new system of generalized variational inclusions in Banach spaces is introduced without imposing any continuity assumption. By using the Yosida approximation operator technique, existence and uniqueness theorems for solutions of this kind of system of variational inclusions are established.

MSC: 47H09, 49J40, 47H10.

1 Introduction

Variational inclusions are useful and important extensions and generalizations of variational inequalities, with a wide range of applications in industry, mathematical finance, economics, decision sciences, ecology, and the mathematical and engineering sciences. In general, methods based on the resolvent operator technique have been widely used to solve variational inclusions.

In this paper, without imposing any continuity assumption, we first introduce a new system of generalized variational inclusions in Banach spaces. By using the Yosida approximation technique for m-accretive operators, we prove some existence and uniqueness theorems for solutions of this kind of system of generalized variational inclusions. Our results generalize and improve the main results in [1–7].

For i=1,2, let E i be a real Banach space, let T i : E i → 2 E i and M i : E i × E i → 2 E i be set-valued mappings, let h i , g i : E i → E i and F i : E 1 × E 2 → E i be single-valued mappings, and let ( f 1 , f 2 )∈ E 1 × E 2 . We consider the following problem: find (x,y)∈ E 1 × E 2 such that

{ f 1 ∈ T 1 x + F 1 ( x , y ) + M 1 ( h 1 ( x ) , g 1 ( x ) ) ; f 2 ∈ T 2 y + F 2 ( x , y ) + M 2 ( h 2 ( y ) , g 2 ( y ) ) .
(1.1)

This problem is called the system of generalized set-valued variational inclusions.

Some special cases from the literature are as follows.

(1) If T 1 =0, T 2 =0, f 1 =0, f 2 =0, then (1.1) reduces to the problem of finding (x,y)∈ E 1 × E 2 such that

{ 0 ∈ F 1 ( x , y ) + M 1 ( h 1 ( x ) , g 1 ( x ) ) ; 0 ∈ F 2 ( x , y ) + M 2 ( h 2 ( y ) , g 2 ( y ) ) .
(1.2)

Problem (1.2) was introduced and studied by Kazmi and Khan [1, 2] ( g 1 = g 2 =I in [2]).

(2) If h i = g i =I (the identity operator), M i (⋅,⋅)=0 and f 1 = f 2 =0, then (1.1) reduces to the problem of finding (x,y)∈ E 1 × E 2 such that

{ 0 ∈ T 1 x + F 1 ( x , y ) ; 0 ∈ T 2 y + F 2 ( x , y ) .
(1.3)

Problem (1.3) was introduced and studied by Verma [3] and by Fang and Huang [5].

(3) If E 1 = E 2 =H is a Hilbert space, T 1 = T 2 =0, x=y, F 1 (x,y)= F 2 (x,y)=F(x), M i ( h i ( x ), g i ( x ))=M(x) and f 1 = f 2 =0, then (1.1) reduces to the problem of finding x∈H such that

0∈F(x)+M(x).
(1.4)

Problem (1.4) was introduced and studied by Zeng et al. [6]. If F(x)=S(x)−T(x)−f with f≠0, then (1.4) becomes f∈S(x)−T(x)+M(x), which was considered by Verma [4].

Let E be a real Banach space with dual E ∗ , and let J:E→ 2 E ∗ be the normalized duality mapping defined by

Jx= { f ∈ E ∗ : 〈 x , f 〉 = ∥ x ∥ 2 = ∥ f ∥ 2 } ,

where 〈⋅,⋅〉 denotes the generalized duality pairing. In the sequel, we denote the single-valued normalized duality mapping by j. It is well known that if E is smooth, then J is single-valued, and that if E ∗ is uniformly convex, then j is uniformly continuous on bounded sets.

We assume that E, E 1 , E 2 are smooth Banach spaces. For convenience, the norms of E, E 1 and E 2 are all denoted by ∥⋅∥. The norm of E 1 × E 2 is defined by ∥⋅∥+∥⋅∥, i.e., if (x,y)∈ E 1 × E 2 , then ∥(x,y)∥=∥x∥+∥y∥.

Definition 1.1 Let T:E→ 2 E be a set-valued mapping.

  (i) T is said to be accretive if ∀x,y∈E, u∈Tx, v∈Ty,

    〈 u − v , j ( x − y ) 〉 ≥0.

  (ii) T is said to be α-strongly-accretive if there exists α>0 such that ∀x,y∈E, u∈Tx, v∈Ty,

    〈 u − v , j ( x − y ) 〉 ≥α ∥ x − y ∥ 2 .

  (iii) T is said to be m-accretive if T is accretive and (I+λT)(E)=E, ∀λ>0.

Definition 1.2 Let N: E 1 × E 2 → 2 E 1 be a set-valued mapping.

  (i) The mapping x↦N(x,y) is said to be accretive if ∀ x 1 , x 2 ∈ E 1 , u∈N( x 1 ,y), v∈N( x 2 ,y), y∈ E 2 ,

    〈 u − v , j ( x 1 − x 2 ) 〉 ≥0.

  (ii) The mapping x↦N(x,y) is said to be α-strongly-accretive if there exists α>0 such that ∀ x 1 , x 2 ∈ E 1 , u 1 ∈N( x 1 ,y), u 2 ∈N( x 2 ,y), y∈ E 2 ,

    〈 u 1 − u 2 , j ( x 1 − x 2 ) 〉 ≥α ∥ x 1 − x 2 ∥ 2 .

  (iii) The mapping x↦N(x,y) is said to be m-α-strongly-accretive if N(⋅,y) is α-strongly-accretive and (I+λN(⋅,y))( E 1 )= E 1 , ∀y∈ E 2 , ∀λ>0.

In a similar way, we can define the strong accretiveness of the mapping N: E 1 × E 2 → 2 E 2 with respect to the second argument.

Definition 1.3 Let T:E→ 2 E be an m-accretive mapping.

  (i) The resolvent operator of T is defined by R λ T x= ( I + λ T ) − 1 x, ∀x∈E, λ>0.

  (ii) The Yosida approximation of T is defined by J λ T x= (1/λ)(I− R λ T )x, ∀x∈E, λ>0.
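To fix ideas, here is a minimal numerical sketch of Definition 1.3 in the simplest Hilbert-space case E=ℝ (so accretive = monotone and j=I), using the m-accretive operator T=∂|⋅|; this toy choice is ours and is not used elsewhere in the paper:

```python
import numpy as np

# Toy illustration of Definition 1.3 with E = R and T = d|.| (the subdifferential
# of the absolute value), which is m-accretive; its resolvent is soft-thresholding.

def resolvent(x, lam):
    """R_lam^T x = (I + lam*T)^{-1} x  (soft-thresholding for T = d|.|)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def yosida(x, lam):
    """J_lam^T x = (1/lam) * (x - R_lam^T x), the Yosida approximation of T."""
    return (x - resolvent(x, lam)) / lam

x, lam = 2.5, 0.5
print(resolvent(x, lam))   # 2.0
print(yosida(x, lam))      # 1.0, which lies in T(R_lam^T x) = d|2.0| = {1}  (cf. Proposition 1.1(4))
# Proposition 1.1(2):  |J_lam^T x| <= inf{|y| : y in Tx} = 1 here.
```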

Definition 1.4 The mapping F: E 1 × E 2 →E is said to be (r,s)-mixed Lipschitz continuous if there exist r>0, s>0 such that ∀( x 1 , y 1 ),( x 2 , y 2 )∈ E 1 × E 2 ,

∥ F ( x 1 , y 1 ) − F ( x 2 , y 2 ) ∥ ≤r∥ x 1 − x 2 ∥+s∥ y 1 − y 2 ∥.

In the sequel, we use the notation → and ⇀ to denote strong and weak convergence, respectively.

Proposition 1.1 [8–10]

If T:E→ 2 E is m-accretive, then

  (1) R λ T is single-valued and ∥ R λ T x− R λ T y∥≤∥x−y∥, ∀x,y∈E;

  (2) ∥ J λ T x∥≤|Tx|=inf{∥y∥:y∈Tx}, ∀x∈D(T);

  (3) J λ T is m-accretive on E, and ∥ J λ T x− J λ T y∥≤ (2/λ) ∥x−y∥, ∀x,y∈E, λ>0;

  (4) J λ T x∈T R λ T x;

  (5) if E ∗ is a uniformly convex Banach space, then T is demiclosed, i.e., [ x n , y n ]∈Graph(T), x n →x, y n ⇀y imply that [x,y]∈Graph(T).

Lemma 1.1 If T:E→ 2 E is m-α-strongly-accretive, then

  (i) R λ T is 1/(1+λα)-Lipschitz continuous;

  (ii) J λ T is α/(1+λα)-strongly-accretive.

Proof (i) Let u= R λ T x, v= R λ T y. Then x−u∈λTu, y−v∈λTv. Since T is α-strongly-accretive, λT is λα-strongly-accretive, so that

λα ∥ u − v ∥ 2 ≤ 〈 x − u − y + v , j ( u − v ) 〉 ≤ ∥x−y∥∥u−v∥− ∥ u − v ∥ 2 .

Therefore, ∥u−v∥≤ ( 1/(1 + λ α) ) ∥x−y∥. This completes the proof of (i).

(ii) By the definition of J λ T and (i), we have

    〈 J λ T x − J λ T y , j ( x − y ) 〉 = (1/λ) 〈 x − y − ( R λ T x − R λ T y ) , j ( x − y ) 〉
    ≥ (1/λ) ( ∥ x − y ∥ 2 − ∥ R λ T x − R λ T y ∥ ∥ x − y ∥ )
    ≥ ( α/(1 + λ α) ) ∥ x − y ∥ 2 .

This completes the proof of (ii). □
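As a quick sanity check of these constants (not part of the paper), consider the simplest Hilbert-space case E=H with the single-valued operator Tx=αx, which is m-α-strongly-accretive:

```latex
% Toy verification of Lemma 1.1 for T x = \alpha x on a Hilbert space H.
\[
  R_\lambda^T x = (I+\lambda T)^{-1}x = \frac{x}{1+\lambda\alpha},
  \qquad
  J_\lambda^T x = \frac{1}{\lambda}\bigl(x - R_\lambda^T x\bigr)
                = \frac{\alpha}{1+\lambda\alpha}\,x,
\]
% so R_\lambda^T is exactly 1/(1+\lambda\alpha)-Lipschitz and J_\lambda^T is
% exactly \alpha/(1+\lambda\alpha)-strongly accretive, i.e. both bounds in
% Lemma 1.1 are attained.
```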

Remark 1.1 Let N i : E 1 × E 2 → 2 E i be set-valued mappings such that x↦ N 1 (x,y) and y↦ N 2 (x,y) are m-accretive. Then the resolvent operators and Yosida approximations of N 1 (⋅,y) and N 2 (x,⋅) can be written as

R λ N 1 ( ⋅ , y ) x = ( I + λ N 1 ( ⋅ , y ) ) − 1 x , J λ N 1 ( ⋅ , y ) x = (1/λ) ( I − R λ N 1 ( ⋅ , y ) ) x ,
R λ N 2 ( x , ⋅ ) y = ( I + λ N 2 ( x , ⋅ ) ) − 1 y , J λ N 2 ( x , ⋅ ) y = (1/λ) ( I − R λ N 2 ( x , ⋅ ) ) y ,

respectively.

Lemma 1.2 Let N 1 (x,y)= T 1 x+ F 1 (x,y) and N 2 (x,y)= T 2 y+ F 2 (x,y). If T i : E i → 2 E i is m-accretive, F i : E 1 × E 2 → E i is α i -strongly-accretive in the ith argument, and ( r i , s i )-mixed Lipschitz continuous, then

  (i) N i is m- α i -strongly-accretive in the ith argument (i=1,2);

  (ii) ∥ R λ N 1 ( ⋅ , y 1 ) x− R λ N 1 ( ⋅ , y 2 ) x∥≤λ s 1 ∥ y 1 − y 2 ∥;

  (iii) ∥ R λ N 2 ( x 1 , ⋅ ) y− R λ N 2 ( x 2 , ⋅ ) y∥≤λ r 2 ∥ x 1 − x 2 ∥.

Proof (i) This follows directly from Kobayashi [11] (Theorem 5.3).

(ii) Let u= R λ N 1 ( ⋅ , y 1 ) x, v= R λ N 1 ( ⋅ , y 2 ) x. Then

x−u−λ F 1 (u, y 1 )∈λ T 1 u,x−v−λ F 1 (v, y 2 )∈λ T 1 v.

By the accretiveness of T 1 , the α 1 -strong accretiveness of F 1 in the first argument and the ( r 1 , s 1 )-mixed Lipschitz continuity of F 1 , we have that

0 ≤ 〈 − u − λ F 1 ( u , y 1 ) + v + λ F 1 ( v , y 2 ) , j ( u − v ) 〉
= − ∥ u − v ∥ 2 + λ 〈 F 1 ( v , y 2 ) − F 1 ( u , y 1 ) , j ( u − v ) 〉
= − ∥ u − v ∥ 2 + λ 〈 F 1 ( v , y 2 ) − F 1 ( u , y 2 ) , j ( u − v ) 〉 + λ 〈 F 1 ( u , y 2 ) − F 1 ( u , y 1 ) , j ( u − v ) 〉
≤ − ∥ u − v ∥ 2 − λ α 1 ∥ u − v ∥ 2 + λ ∥ F 1 ( u , y 2 ) − F 1 ( u , y 1 ) ∥ ∥ u − v ∥
≤ − ( 1 + λ α 1 ) ∥ u − v ∥ 2 + λ s 1 ∥ y 1 − y 2 ∥ ∥ u − v ∥ .

Therefore, ∥u−v∥≤ ( λ s 1 /(1 + λ α 1 ) ) ∥ y 1 − y 2 ∥≤λ s 1 ∥ y 1 − y 2 ∥. This completes the proof of (ii).

(iii) The proof is similar. We omit it. □

2 Main results

We denote by CB(E) the family of all nonempty closed and bounded subsets of E.

Lemma 2.1 [12]

Let T 1 : E 1 × E 2 → E 1 and T 2 : E 1 × E 2 → E 2 be two continuous mappings. If there exist θ 1 , θ 2 with 0< θ 1 , θ 2 <1 such that

∥ T 1 ( x 1 , y 1 ) − T 1 ( x 2 , y 2 ) ∥ + ∥ T 2 ( x 1 , y 1 ) − T 2 ( x 2 , y 2 ) ∥ ≤ θ 1 ∥ x 1 − x 2 ∥ + θ 2 ∥ y 1 − y 2 ∥ ,

then there exists (x,y)∈ E 1 × E 2 such that x= T 1 (x,y), y= T 2 (x,y).
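As an illustration (a minimal sketch with our own toy choices of T 1 , T 2 on E 1 = E 2 =ℝ, not taken from [12]): the condition of Lemma 2.1 makes the pair ( T 1 , T 2 ) a contraction on E 1 × E 2 with the norm ∥(x,y)∥=∥x∥+∥y∥, so the fixed point can be computed by Picard iteration.

```python
# Toy illustration of Lemma 2.1 with E1 = E2 = R (our own example, not from [12]).
# Here |T1(x1,y1)-T1(x2,y2)| + |T2(x1,y1)-T2(x2,y2)|
#        <= 0.4|x1-x2| + 0.6|y1-y2|,  i.e. theta1 = 0.4, theta2 = 0.6.

def T1(x, y):
    return 0.3 * x + 0.2 * y + 1.0

def T2(x, y):
    return 0.1 * x + 0.4 * y - 2.0

x, y = 0.0, 0.0
for _ in range(200):
    x, y = T1(x, y), T2(x, y)     # Picard (simultaneous) update

print(x, y)   # -> (0.5, -3.25), the unique point with x = T1(x,y), y = T2(x,y)
```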

Theorem 2.1 For i=1,2, let E i be a real Banach space with uniformly convex dual E i ∗ , let F i : E 1 × E 2 → E i and h i , g i : E i → E i be single-valued mappings, and let T i : E i → 2 E i , M i : E i × E i → 2 E i be set-valued mappings satisfying the following conditions:

  (1) M i ( h i (⋅), g i (⋅)): E i →CB( E i ) is m-accretive;

  (2) T i is m-accretive;

  (3) F i is α i -strongly-accretive in the ith argument and ( r i , s i )-mixed Lipschitz continuous. Set N 1 (x,y)= T 1 (x)+ F 1 (x,y) and N 2 (x,y)= T 2 (y)+ F 2 (x,y).

If λ satisfies that

0<λ<min { ( α 1 − r 2 )/( r 2 α 1 ) , ( α 2 − s 1 )/( s 1 α 2 ) } , r 2 < α 1 , s 1 < α 2 , s 1 r 2 /( α 1 α 2 ) <1,
(2.1)

and ( f 1 , f 2 )∈ E 1 × E 2 , then

  (i) for any λ in (2.1), there exists ( x λ , y λ )∈ E 1 × E 2 such that

    { f 1 ∈ J λ N 1 ( ⋅ , y λ ) x λ + M 1 ( h 1 ( x λ ) , g 1 ( x λ ) ) , f 2 ∈ J λ N 2 ( x λ , ⋅ ) y λ + M 2 ( h 2 ( y λ ) , g 2 ( y λ ) ) ,
    (2.2)

    and { x λ } λ → 0 and { y λ } λ → 0 are bounded;

  (ii) if { J λ N 1 ( ⋅ , y λ ) x λ } λ → 0 and { J λ N 2 ( x λ , ⋅ ) y λ } λ → 0 are bounded, then there exists a unique (x,y)∈ E 1 × E 2 , which is a solution of Problem (1.1), such that x λ →x and y λ →y as λ→0.

Remark 2.1 Equation (2.2) is called the system of Yosida approximation inclusions (equations).

Proof of Theorem 2.1 (i) By Definition 1.3, one can easily check that ( x λ , y λ ) satisfies (2.2) if and only if it satisfies (2.3) below with (x,y)=( x λ , y λ ), where

x = R λ M 1 ( h 1 ( ⋅ ) , g 1 ( ⋅ ) ) [ λ f 1 + R λ N 1 ( ⋅ , y ) x ] ≜ B 1 ( x , y ) ,
y = R λ M 2 ( h 2 ( ⋅ ) , g 2 ( ⋅ ) ) [ λ f 2 + R λ N 2 ( x , ⋅ ) y ] ≜ B 2 ( x , y ) .
(2.3)
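For completeness, the equivalence of (2.2) and (2.3) in the first component is the standard resolvent manipulation (spelled out here; this step is implicit in the original text):

```latex
% Equivalence of the first lines of (2.2) and (2.3); the second component is analogous.
\begin{align*}
 f_1 \in J_\lambda^{N_1(\cdot,y)}x + M_1(h_1(x),g_1(x))
 &\iff \lambda f_1 - \bigl(x - R_\lambda^{N_1(\cdot,y)}x\bigr) \in \lambda M_1(h_1(x),g_1(x)) \\
 &\iff \lambda f_1 + R_\lambda^{N_1(\cdot,y)}x \in \bigl(I + \lambda M_1(h_1(\cdot),g_1(\cdot))\bigr)x \\
 &\iff x = R_\lambda^{M_1(h_1(\cdot),g_1(\cdot))}\bigl[\lambda f_1 + R_\lambda^{N_1(\cdot,y)}x\bigr] = B_1(x,y).
\end{align*}
```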

Now, we study the mappings B i : E 1 × E 2 → E i (i=1,2) defined by (2.3). By Proposition 1.1(1), Lemma 1.1, Lemma 1.2 and Eq. (2.3), for any x 1 , x 2 ∈ E 1 , y 1 , y 2 ∈ E 2 , we have that

∥ B 1 ( x 1 , y 1 ) − B 1 ( x 2 , y 2 ) ∥ = ∥ R λ M 1 ( h 1 ( ⋅ ) , g 1 ( ⋅ ) ) [ λ f 1 + R λ N 1 ( ⋅ , y 1 ) x 1 ] − R λ M 1 ( h 1 ( ⋅ ) , g 1 ( ⋅ ) ) [ λ f 1 + R λ N 1 ( ⋅ , y 2 ) x 2 ] ∥
≤ ∥ R λ N 1 ( ⋅ , y 1 ) x 1 − R λ N 1 ( ⋅ , y 2 ) x 2 ∥
≤ ∥ R λ N 1 ( ⋅ , y 1 ) x 1 − R λ N 1 ( ⋅ , y 2 ) x 1 ∥ + ∥ R λ N 1 ( ⋅ , y 2 ) x 1 − R λ N 1 ( ⋅ , y 2 ) x 2 ∥
≤ λ s 1 ∥ y 1 − y 2 ∥ + ( 1/(1 + λ α 1 ) ) ∥ x 1 − x 2 ∥ .
(2.4)

Similarly, by Proposition 1.1(1), Lemma 1.1 and Lemma 1.2, we can prove that

∥ B 2 ( x 1 , y 1 ) − B 2 ( x 2 , y 2 ) ∥ ≤λ r 2 ∥ x 1 − x 2 ∥+ ( 1/(1 + λ α 2 ) ) ∥ y 1 − y 2 ∥.
(2.5)

Equations (2.4) and (2.5) imply that

∥ B 1 ( x 1 , y 1 ) − B 1 ( x 2 , y 2 ) ∥ + ∥ B 2 ( x 1 , y 1 ) − B 2 ( x 2 , y 2 ) ∥ ≤ θ 1 ∥ x 1 − x 2 ∥+ θ 2 ∥ y 1 − y 2 ∥,

where θ 1 = 1/(1 + λ α 1 ) +λ r 2 and θ 2 = 1/(1 + λ α 2 ) +λ s 1 . By (2.1), 0< θ 1 , θ 2 <1. Therefore, by Lemma 2.1, for every λ in (2.1), there exists ( x λ , y λ )∈ E 1 × E 2 such that x λ = B 1 ( x λ , y λ ) and y λ = B 2 ( x λ , y λ ), i.e., ( x λ , y λ ) satisfies (2.3), and hence (2.2) holds.

Now, we show that { x λ } λ → 0 and { y λ } λ → 0 are bounded. For ( x 1 , y 1 )∈ E 1 × E 2 , and λ in (2.1), let

z λ ∈ J λ N 1 ( ⋅ , y λ ) x 1 + M 1 ( h 1 ( x 1 ) , g 1 ( x 1 ) ) ;
(2.6)
w λ ∈ J λ N 2 ( x λ , ⋅ ) y 1 + M 2 ( h 2 ( y 1 ) , g 2 ( y 1 ) ) .
(2.7)

Subtracting the first inclusion of (2.2) from (2.6), we obtain that

z λ − J λ N 1 ( ⋅ , y λ ) x 1 − f 1 + J λ N 1 ( ⋅ , y λ ) x λ ∈ M 1 ( h 1 ( x 1 ) , g 1 ( x 1 ) ) − M 1 ( h 1 ( x λ ) , g 1 ( x λ ) ) .

By Lemma 1.1(ii), Lemma 1.2(i) and condition (1) in Theorem 2.1, we obtain that

0 ≤ 〈 z λ − f 1 − J λ N 1 ( ⋅ , y λ ) x 1 + J λ N 1 ( ⋅ , y λ ) x λ , j ( x 1 − x λ ) 〉
= 〈 z λ − f 1 , j ( x 1 − x λ ) 〉 − 〈 J λ N 1 ( ⋅ , y λ ) x 1 − J λ N 1 ( ⋅ , y λ ) x λ , j ( x 1 − x λ ) 〉
≤ ∥ z λ − f 1 ∥ ∥ x 1 − x λ ∥ − ( α 1 /(1 + λ α 1 ) ) ∥ x 1 − x λ ∥ 2 .
(2.8)

By Definition 1.3(ii), Proposition 1.1(2) and Lemma 1.2, we get that

∥ J λ N 1 ( ⋅ , y λ ) x 1 ∥ ≤ ∥ J λ N 1 ( ⋅ , y λ ) x 1 − J λ N 1 ( ⋅ , y 1 ) x 1 ∥ + ∥ J λ N 1 ( ⋅ , y 1 ) x 1 ∥
≤ ∥ (1/λ) ( x 1 − R λ N 1 ( ⋅ , y λ ) x 1 − x 1 + R λ N 1 ( ⋅ , y 1 ) x 1 ) ∥ + | N 1 ( x 1 , y 1 ) |
≤ ( s 1 /(1 + λ α 1 ) ) ∥ y 1 − y λ ∥ + | N 1 ( x 1 , y 1 ) | .
(2.9)

For any λ in (2.1), take u λ ∈ M 1 ( h 1 ( x 1 ), g 1 ( x 1 )), v λ ∈ M 2 ( h 2 ( y 1 ), g 2 ( y 1 )) such that z λ = J λ N 1 ( ⋅ , y λ ) x 1 + u λ , w λ = J λ N 2 ( x λ , ⋅ ) y 1 + v λ . Since { u λ }⊂ M 1 ( h 1 ( x 1 ), g 1 ( x 1 )) and { v λ }⊂ M 2 ( h 2 ( y 1 ), g 2 ( y 1 )), by condition (1), { u λ } and { v λ } are bounded. Combining (2.6), (2.8) and (2.9) yields that

∥ x 1 − x λ ∥ ≤ ( (1 + λ α 1 )/ α 1 ) ∥ z λ − f 1 ∥
≤ ( (1 + λ α 1 )/ α 1 ) ( ∥ z λ ∥ + ∥ f 1 ∥ )
≤ ( (1 + λ α 1 )/ α 1 ) ( ∥ J λ N 1 ( ⋅ , y λ ) x 1 ∥ + ∥ u λ ∥ + ∥ f 1 ∥ )
≤ ( (1 + λ α 1 )/ α 1 ) ( ( s 1 /(1 + λ α 1 ) ) ∥ y 1 − y λ ∥ + | N 1 ( x 1 , y 1 ) | + ∥ u λ ∥ + ∥ f 1 ∥ ) .
(2.10)

By using similar methods, we obtain that

∥ y 1 − y λ ∥≤ ( (1 + λ α 2 )/ α 2 ) ( ( r 2 /(1 + λ α 2 ) ) ∥ x 1 − x λ ∥ + | N 2 ( x 1 , y 1 ) | + ∥ v λ ∥ + ∥ f 2 ∥ ) .
(2.11)

It follows from (2.10) and (2.11) that { x λ } λ → 0 and { y λ } λ → 0 are bounded, since 0< s 1 r 2 /( α 1 α 2 ) <1.
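To spell this step out (it is implicit in the original): writing C 1 , C 2 for constants, independent of λ in the range (2.1), that bound the remaining terms of (2.10) and (2.11) (recall that λ stays in a bounded range and that ∥ u λ ∥, ∥ v λ ∥ are bounded), substitution gives

```latex
\[
  \|x_1-x_\lambda\| \;\le\; \frac{s_1}{\alpha_1}\,\|y_1-y_\lambda\| + C_1
  \;\le\; \frac{s_1 r_2}{\alpha_1\alpha_2}\,\|x_1-x_\lambda\|
          + \frac{s_1}{\alpha_1}\,C_2 + C_1 ,
\]
% and since s_1 r_2/(\alpha_1\alpha_2) < 1 this bounds \|x_1-x_\lambda\|, hence
% also \|y_1-y_\lambda\| by (2.11), uniformly for \lambda in (2.1).
```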

(ii) Note that for λ,μ>0

f 1 − J λ N 1 ( ⋅ , y λ ) x λ ∈ M 1 ( h 1 ( ⋅ ) , g 1 ( ⋅ ) ) x λ and f 1 − J μ N 1 ( ⋅ , y μ ) x μ ∈ M 1 ( h 1 ( ⋅ ) , g 1 ( ⋅ ) ) x μ .

By the accretiveness of M 1 ( h 1 (⋅), g 1 (⋅)), Proposition 1.1(4) and Lemma 1.2(i), we have that

0 ≤ 〈 − J λ N 1 ( ⋅ , y λ ) x λ + J μ N 1 ( ⋅ , y μ ) x μ , j ( x λ − x μ ) 〉
= 〈 J λ N 1 ( ⋅ , y λ ) x λ − J μ N 1 ( ⋅ , y μ ) x μ , j ( R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ) − j ( x λ − x μ ) 〉 − 〈 J λ N 1 ( ⋅ , y λ ) x λ − J μ N 1 ( ⋅ , y μ ) x μ , j ( R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ) 〉
≤ 〈 J λ N 1 ( ⋅ , y λ ) x λ − J μ N 1 ( ⋅ , y μ ) x μ , j ( R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ) − j ( x λ − x μ ) 〉 − α 1 ∥ R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ∥ 2 ,

and hence,

α 1 ∥ R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ∥ 2 ≤ 〈 J λ N 1 ( ⋅ , y λ ) x λ − J μ N 1 ( ⋅ , y μ ) x μ , j ( R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ) − j ( x λ − x μ ) 〉 .
(2.12)

Since { J λ N 1 ( ⋅ , y λ ) x λ } λ → 0 is bounded by assumption, x λ − R λ N 1 ( ⋅ , y λ ) x λ =λ J λ N 1 ( ⋅ , y λ ) x λ →0 as λ→0. Because j is uniformly continuous on bounded sets, (2.12) reduces to

α 1 ∥ R λ N 1 ( ⋅ , y λ ) x λ − R μ N 1 ( ⋅ , y μ ) x μ ∥ 2 ≤ O ( ∥ x λ − x μ − R λ N 1 ( ⋅ , y λ ) x λ + R μ N 1 ( ⋅ , y μ ) x μ ∥ ) ≤ O ( λ + μ ) .

Similarly, we have that

α 2 ∥ R λ N 2 ( x λ , ⋅ ) y λ − R μ N 2 ( x μ , ⋅ ) y μ ∥ ≤O(λ+μ).

Consequently, { R λ N 1 ( ⋅ , y λ ) x λ } λ → 0 and { R λ N 2 ( x λ , ⋅ ) y λ } λ → 0 are Cauchy nets, so there exists (x,y)∈ E 1 × E 2 such that R λ N 1 ( ⋅ , y λ ) x λ →x and R λ N 2 ( x λ , ⋅ ) y λ →y as λ→0. Since x λ − R λ N 1 ( ⋅ , y λ ) x λ =λ J λ N 1 ( ⋅ , y λ ) x λ →0 and y λ − R λ N 2 ( x λ , ⋅ ) y λ =λ J λ N 2 ( x λ , ⋅ ) y λ →0, it follows that x λ →x and y λ →y as λ→0.

Now, we show that (x,y) is a solution of (1.1). Since E i is reflexive and { J λ N 1 ( ⋅ , y λ ) x λ } λ → 0 and { J λ N 2 ( x λ , ⋅ ) y λ } λ → 0 are bounded, there exists a sequence λ n >0 (n=1,2,…) such that λ n →0 and J λ n N 1 ( ⋅ , y λ n ) x λ n ⇀ z 1 , J λ n N 2 ( x λ n , ⋅ ) y λ n ⇀ z 2 for some ( z 1 , z 2 )∈ E 1 × E 2 . Let w λ n ′ = f 1 − J λ n N 1 ( ⋅ , y λ n ) x λ n ∈ M 1 ( h 1 ( x λ n ), g 1 ( x λ n )), w λ n ′′ = f 2 − J λ n N 2 ( x λ n , ⋅ ) y λ n ∈ M 2 ( h 2 ( y λ n ), g 2 ( y λ n )). Then w λ n ′ ⇀ w 1 := f 1 − z 1 and w λ n ′′ ⇀ w 2 := f 2 − z 2 . Since N 1 (⋅,y), N 2 (x,⋅) (∀(x,y)∈ E 1 × E 2 ), T i and M i ( h i (⋅), g i (⋅)) (i=1,2) are demiclosed (see Proposition 1.1(5)), we have that

x ∈ E 1 = D ( N 1 ( ⋅ , y ) ) ∩ D ( M 1 ( h 1 ( ⋅ ) , g 1 ( ⋅ ) ) ) , y ∈ E 2 = D ( N 2 ( x , ⋅ ) ) ∩ D ( M 2 ( h 2 ( ⋅ ) , g 2 ( ⋅ ) ) ) ,
z 1 ∈ N 1 ( ⋅ , y ) x = N 1 ( x , y ) = T 1 x + F 1 ( x , y ) , z 2 ∈ N 2 ( x , ⋅ ) y = N 2 ( x , y ) = T 2 y + F 2 ( x , y ) ,
w 1 ∈ M 1 ( h 1 ( x ) , g 1 ( x ) ) and w 2 ∈ M 2 ( h 2 ( y ) , g 2 ( y ) ) .

Therefore,

f 1 = z 1 + w 1 ∈ T 1 x + F 1 ( x , y ) + M 1 ( h 1 ( x ) , g 1 ( x ) ) ,
f 2 = z 2 + w 2 ∈ T 2 y + F 2 ( x , y ) + M 2 ( h 2 ( y ) , g 2 ( y ) ) .

Finally, we show the uniqueness of solutions. Let (x,y) and ( x 1 , y 1 ) be two solutions of Problem (1.1). Let u∈ T 1 x, u 1 ∈ T 1 x 1 , w∈ M 1 ( h 1 (x), g 1 (x)), w 1 ∈ M 1 ( h 1 ( x 1 ), g 1 ( x 1 )) such that

f 1 =u+ F 1 (x,y)+w, f 1 = u 1 + F 1 ( x 1 , y 1 )+ w 1 .

Then, by the accretiveness of T 1 and M 1 ( h 1 (⋅), g 1 (⋅)) and condition (3), we have that

0 = 〈 f 1 − f 1 , j ( x − x 1 ) 〉 = 〈 u + F 1 ( x , y ) + w − u 1 − F 1 ( x 1 , y 1 ) − w 1 , j ( x − x 1 ) 〉
≥ 〈 F 1 ( x , y ) − F 1 ( x 1 , y 1 ) , j ( x − x 1 ) 〉
= 〈 F 1 ( x , y ) − F 1 ( x 1 , y ) , j ( x − x 1 ) 〉 + 〈 F 1 ( x 1 , y ) − F 1 ( x 1 , y 1 ) , j ( x − x 1 ) 〉
≥ α 1 ∥ x − x 1 ∥ 2 − ∥ F 1 ( x 1 , y ) − F 1 ( x 1 , y 1 ) ∥ ∥ x − x 1 ∥
≥ α 1 ∥ x − x 1 ∥ 2 − s 1 ∥ y − y 1 ∥ ∥ x − x 1 ∥ .

That is,

∥x− x 1 ∥≤ ( s 1 / α 1 )∥y− y 1 ∥.
(2.13)

Let v∈ T 2 y, v 1 ∈ T 2 y 1 , z∈ M 2 ( h 2 (y), g 2 (y)), z 1 ∈ M 2 ( h 2 ( y 1 ), g 2 ( y 1 )) such that

f 2 =v+ F 2 (x,y)+z, f 2 = v 1 + F 2 ( x 1 , y 1 )+ z 1 .

Then, by a similar discussion, we have that

∥y− y 1 ∥≤ ( r 2 / α 2 )∥x− x 1 ∥.
(2.14)

Equations (2.1), (2.13) and (2.14) imply that x= x 1 , y= y 1 . □
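To make the construction concrete, here is a toy numerical sketch of the whole scheme with E 1 = E 2 =ℝ and our own linear choices of T i , F i and M i (so every resolvent in (2.3) is explicit); it is only an illustration under these assumptions, not the general method:

```python
import numpy as np

# A toy numerical sketch of (2.2)-(2.3) with E1 = E2 = R (Hilbert case, j = I).
# All operators below are our own linear choices, not from the paper:
#   T1 x = a1*x,  T2 y = a2*y                      (m-accretive),
#   F1(x,y) = c1*x + d1*y,  F2(x,y) = c2*x + d2*y  (alpha1 = c1, alpha2 = d2),
#   M1(h1(x), g1(x)) = m1*x,  M2(h2(y), g2(y)) = m2*y.
a1, a2 = 1.0, 0.5
c1, d1 = 2.0, 0.3        # alpha1 = 2.0, s1 = 0.3
c2, d2 = 0.4, 1.5        # r2 = 0.4, alpha2 = 1.5  -> (2.1) holds for 0 < lam < 2
m1, m2 = 1.0, 2.0
f1, f2 = 1.0, -1.0

def B(x, y, lam):
    """One application of (B1, B2) from (2.3); every resolvent is explicit here."""
    RN1 = (x - lam * d1 * y) / (1.0 + lam * (a1 + c1))   # R_lam^{N1(.,y)} x
    RN2 = (y - lam * c2 * x) / (1.0 + lam * (a2 + d2))   # R_lam^{N2(x,.)} y
    return ((lam * f1 + RN1) / (1.0 + lam * m1),          # R_lam^{M1}[lam*f1 + RN1]
            (lam * f2 + RN2) / (1.0 + lam * m2))          # R_lam^{M2}[lam*f2 + RN2]

def solve_yosida_system(lam, iters=2000):
    """Picard iteration (Lemma 2.1) for the fixed point (x_lam, y_lam) of (2.3)."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x, y = B(x, y, lam)
    return x, y

# Exact solution of (1.1) in this linear setting: a 2x2 linear system.
A = np.array([[a1 + c1 + m1, d1], [c2, a2 + d2 + m2]])
x_star, y_star = np.linalg.solve(A, np.array([f1, f2]))

for lam in (1.0, 0.1, 0.01):
    x_lam, y_lam = solve_yosida_system(lam)
    print(lam, abs(x_lam - x_star), abs(y_lam - y_star))   # errors shrink as lam -> 0
```

In line with Theorem 2.1, the fixed point ( x λ , y λ ) of (2.3) solves the Yosida approximation system (2.2), and it approaches the solution of (1.1) as λ→0.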

Theorem 2.2 Suppose that E i , T i , M i , F i , f i , h i and g i (i=1,2) are the same as in Theorem  2.1. If for any R i >0, there exist constants a i >0 and 0< L i <1 such that

| T 1 x|≤ L 1 | M 1 ( h 1 ( x ) , g 1 ( x ) ) | + a 1 ,∥x∥≤ R 1 ,
(2.15)
| T 2 y|≤ L 2 | M 2 ( h 2 ( y ) , g 2 ( y ) ) | + a 2 ,∥y∥≤ R 2 ,
(2.16)

then Problem (1.1) has a unique solution.

Proof It suffices to show that { J λ N 1 ( ⋅ , y λ ) x λ } λ → 0 and { J λ N 2 ( x λ , ⋅ ) y λ } λ → 0 in Theorem 2.1 are bounded. Since { x λ } and { y λ } are bounded, there exist R i >0 (i=1,2) such that, for all λ in (2.1), ∥ x λ ∥≤ R 1 and ∥ y λ ∥≤ R 2 . By Proposition 1.1(2) and (2.15),

∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ ≤ | N 1 ( x λ , y λ ) | = | T 1 x λ + F 1 ( x λ , y λ ) |
= inf { ∥ u ∥ : u = u 0 + F 1 ( x λ , y λ ) , u 0 ∈ T 1 x λ } = inf { ∥ u 0 + F 1 ( x λ , y λ ) ∥ : u 0 ∈ T 1 x λ }
≤ inf { ∥ u 0 ∥ : u 0 ∈ T 1 x λ } + ∥ F 1 ( x λ , y λ ) ∥ = | T 1 x λ | + ∥ F 1 ( x λ , y λ ) ∥
≤ L 1 | M 1 ( h 1 ( x λ ) , g 1 ( x λ ) ) | + ∥ F 1 ( x λ , y λ ) ∥ + a 1 .
(2.17)

Similarly, by Proposition 1.1(2) and (2.16), we get that

∥ J λ N 2 ( x λ , ⋅ ) y λ ∥ ≤ L 2 | M 2 ( h 2 ( y λ ) , g 2 ( y λ ) ) | + ∥ F 2 ( x λ , y λ ) ∥ + a 2 .
(2.18)

By (2.2),

| M 1 ( h 1 ( x λ ) , g 1 ( x λ ) ) | ≤∥ f 1 ∥+ ∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ ,
(2.19)
| M 2 ( h 2 ( y λ ) , g 2 ( y λ ) ) | ≤∥ f 2 ∥+ ∥ J λ N 2 ( x λ , ⋅ ) y λ ∥ .
(2.20)

Therefore, from (2.17)-(2.20), it follows that

∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ ≤ ( L 1 /(1 − L 1 ) )∥ f 1 ∥+ ( 1/(1 − L 1 ) ) ∥ F 1 ( x λ , y λ ) ∥ + a 1 /(1 − L 1 ) ,
(2.21)
∥ J λ N 2 ( x λ , ⋅ ) y λ ∥ ≤ ( L 2 /(1 − L 2 ) )∥ f 2 ∥+ ( 1/(1 − L 2 ) ) ∥ F 2 ( x λ , y λ ) ∥ + a 2 /(1 − L 2 ) .
(2.22)
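For completeness, (2.21) is obtained by inserting (2.19) into (2.17) and solving for the Yosida term (this elimination is implicit in the original); (2.22) follows in the same way from (2.18) and (2.20):

```latex
% Abbreviate J := J_\lambda^{N_1(\cdot,y_\lambda)} x_\lambda.  From (2.17) and (2.19):
\[
  \|J\| \le L_1\bigl(\|f_1\| + \|J\|\bigr) + \|F_1(x_\lambda,y_\lambda)\| + a_1
  \;\Longrightarrow\;
  (1-L_1)\|J\| \le L_1\|f_1\| + \|F_1(x_\lambda,y_\lambda)\| + a_1 ,
\]
% and dividing by 1 - L_1 > 0 gives (2.21).
```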

Since F i (i=1,2) is ( r i , s i )-mixed Lipschitz continuous, F i maps bounded sets in E 1 × E 2 to bounded sets. Hence, (2.21) and (2.22) imply that { J λ N 1 ( ⋅ , y λ ) x λ } λ → 0 and { J λ N 2 ( x λ , ⋅ ) y λ } λ → 0 are bounded. □

Theorem 2.3 Suppose that E i , T i , M i , h i , g i , F i and f i (i=1,2) are the same as in Theorem  2.1. If for any R i >0, there exist bounded functionals B i : E 1 × E 2 → ℜ + (i.e., B i maps bounded sets in E 1 × E 2 to bounded sets in ℜ + ) such that, for [x,z]∈Graph( M 1 ( h 1 (⋅), g 1 (⋅))), [y,w]∈Graph( M 2 ( h 2 (⋅), g 2 (⋅))) and λ>0,

〈 z , j J λ N 1 ( ⋅ , y ) x 〉 ≥− B 1 (x,y),
(2.23)
〈 w , j J λ N 2 ( x , ⋅ ) y 〉 ≥− B 2 (x,y)
(2.24)

for all x∈ E 1 with ∥x∥≤ R 1 and all y∈ E 2 with ∥y∥≤ R 2 , then Problem (1.1) has a unique solution.

Proof It suffices to show that { J λ N 1 ( ⋅ , y λ ) x λ } λ → 0 and { J λ N 2 ( x λ , ⋅ ) y λ } λ → 0 are bounded. Since { x λ } λ → 0 and { y λ } λ → 0 are bounded, by (2.2) and (2.23), for u λ = f 1 − J λ N 1 ( ⋅ , y λ ) x λ ∈ M 1 ( h 1 ( x λ ) , g 1 ( x λ ) ) ,

∥ f 1 ∥ ∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ ≥ 〈 f 1 , j J λ N 1 ( ⋅ , y λ ) x λ 〉 = 〈 J λ N 1 ( ⋅ , y λ ) x λ + u λ , j J λ N 1 ( ⋅ , y λ ) x λ 〉
= ∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ 2 + 〈 u λ , j J λ N 1 ( ⋅ , y λ ) x λ 〉
≥ ∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ 2 − B 1 ( x λ , y λ ) ,

which implies that ∥ J λ N 1 ( ⋅ , y λ ) x λ ∥≤ ( B 1 ( x λ , y λ ) + (1/4) ∥ f 1 ∥ 2 ) 1/2 + (1/2) ∥ f 1 ∥. Similarly, ∥ J λ N 2 ( x λ , ⋅ ) y λ ∥≤ ( B 2 ( x λ , y λ ) + (1/4) ∥ f 2 ∥ 2 ) 1/2 + (1/2) ∥ f 2 ∥. This completes the proof of Theorem 2.3. □
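For the record, the first of these bounds is just the quadratic formula applied to the previous display with t=∥ J λ N 1 ( ⋅ , y λ ) x λ ∥ (a spelled-out step, not an extra assumption):

```latex
% From  t^2 - \|f_1\|\,t - B_1(x_\lambda,y_\lambda) \le 0  with  t \ge 0:
\[
  t \;\le\; \frac{\|f_1\| + \sqrt{\|f_1\|^2 + 4B_1(x_\lambda,y_\lambda)}}{2}
    \;=\; \frac{\|f_1\|}{2} + \Bigl(B_1(x_\lambda,y_\lambda) + \tfrac14\|f_1\|^2\Bigr)^{1/2}.
\]
% The bound for J_\lambda^{N_2(x_\lambda,\cdot)} y_\lambda follows in the same way
% from (2.24), and B_i(x_\lambda,y_\lambda) stays bounded because (x_\lambda,y_\lambda)
% stays in a bounded set.
```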

3 Conclusion and future perspective

Two of the most difficult and important problems for variational inclusions are the formulation of systems of variational inclusions and the development of efficient numerical methods. In this paper, a new system of generalized variational inclusions in Banach spaces has been introduced without any continuity assumption, and existence and uniqueness theorems for this kind of system have been proved by using the Yosida approximation technique for m-accretive operators.

Further approaches [13–15], which have been applied to variational inequalities, could be adapted to variational inclusions. In future work, we will study the solution of this kind of system of generalized variational inclusions by extragradient and implicit iterative methods.

References

  1. Kazmi KR, Khan FA, Shahzad M: A system of generalized variational inclusions involving generalized H(⋅,⋅)-accretive mapping in real uniformly smooth Banach spaces. Appl. Math. Comput. 2011, 217: 9679–9688. 10.1016/j.amc.2011.04.052

  2. Kazmi KR, Khan FA: Iterative approximation of a unique solution of a system of variational-like inclusions in real q -uniformly smooth Banach spaces. Nonlinear Anal. 2007, 67: 917–929. 10.1016/j.na.2006.06.049

  3. Verma RU: General system of A -monotone nonlinear variational inclusion problems with applications. J. Optim. Theory Appl. 2006, 131(1):151–157. 10.1007/s10957-006-9133-5

  4. Verma RU: General nonlinear variational inclusions problems involving A -monotone mapping. Appl. Math. Lett. 2006, 19: 960–963. 10.1016/j.aml.2005.11.010

  5. Fang YP, Huang NJ: Iterative algorithm for a system of variational inclusions involving H -accretive operators in Banach spaces. Acta Math. Hung. 2005, 108(3):183–195. 10.1007/s10474-005-0219-6

  6. Zeng LC, Guo SM, Yao JC: Characterization of H -monotone operators with applications to variational inclusions. Comput. Math. Appl. 2005, 50: 329–337. 10.1016/j.camwa.2005.06.001

  7. Noor MA, Huang Z: Some resolvent iterative methods for variational inclusion and nonexpansive mappings. Appl. Math. Comput. 2007, 194: 267–275. 10.1016/j.amc.2007.04.037

  8. Barbu V: Nonlinear Semigroups and Differential Equations in Banach Spaces. Noordhoff International Publishing, Leyden; 1976.

  9. Browder FE: Nonlinear Operators and Nonlinear Equations of Evolution in Banach Spaces. Am. Math. Soc., Providence; 1976.

  10. Lakshmikantham V, Leela S: Nonlinear Differential Equations in Abstract Spaces. Pergamon Press, Oxford; 1981.

  11. Kobayashi Y: Difference approximation of Cauchy problems for quasi-dissipative operators and generation of nonlinear semigroups. J. Math. Soc. Jpn. 1975, 27: 640–665. 10.2969/jmsj/02740640

  12. Cao HW: Sensitivity analysis for a system of generalized nonlinear mixed quasi-variational inclusions with H -monotone operators. J. Appl. Math. 2011. 10.1155/2011/921835

  13. Yao Y, Noor MA, Liou YC: Strong convergence of a modified extra-gradient method to the minimum-norm solution of variational inequalities. Abstr. Appl. Anal. 2012. 10.1155/2012/817436

  14. Yao Y, Liou YC, Li CL, Lin HT: Extended extra-gradient methods for generalized variational inequalities. J. Appl. Math. 2012. 10.1155/2012/237083

  15. Noor MA, Noor KI, Huang Z, Al-said E: Implicit schemes for solving extended general nonconvex variational inequalities. J. Appl. Math. 2012. 10.1155/2012/646259


Acknowledgements

The author thanks Prof. Li-Wei Liu of the Department of Mathematics, Nanchang University, for his guidance and support as supervisor. The author also thanks the anonymous referees for carefully reading this paper and for their valuable suggestions and comments. The work was supported by the National Science Foundation of China (No. 10561007) and the Youth Science Foundation of Jiangxi Provincial Department of Science and Technology (No. 20122BAB211021).

Author information

Corresponding author

Correspondence to Han-Wen Cao.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Cao, HW. Yosida approximation equations technique for system of generalized set-valued variational inclusions. J Inequal Appl 2013, 455 (2013). https://doi.org/10.1186/1029-242X-2013-455
