Open Access

General variational inclusions involving difference of operators

  • Muhammad Aslam Noor1,
  • Khalida Inayat Noor1 and
  • Rabia Kamal1
Journal of Inequalities and Applications 2014, 2014:98

https://doi.org/10.1186/1029-242X-2014-98

Received: 20 August 2013

Accepted: 17 February 2014

Published: 25 February 2014

Abstract

In this paper, we introduce and consider a new class of general variational inclusions involving the difference of operators in a Hilbert space. Using the resolvent operator technique, we establish the equivalence between the general variational inclusions and fixed point problems, as well as with a new class of resolvent equations. We use this alternative formulation to discuss the existence of a solution of the general variational inclusions. We again use this equivalent formulation to suggest and analyze a number of iterative methods for finding a zero of the difference of operators, and we discuss the convergence of these iterative methods under suitable conditions. Our methods of proof are simpler than other techniques. Several special cases of these problems are also considered. The results proved in this paper may be viewed as a refinement and improvement of the known results in this area.

Keywords

monotone operators; iterative methods; resolvent operator; convergence

1 Introduction

Variational inclusions involving the difference of operators provide a unified, natural, novel, and simple framework to study a wide class of problems arising in DC programming, prox-regularity, multicommodity networks, image restoration, tomography, molecular biology, optimization, and the pure and applied sciences; see [1-37] and the references therein. We would like to emphasize that the theory of variational inclusions is a natural development of the variational principles, the origin of which can be traced back to Fermat, Newton, Leibniz, the Bernoulli brothers, Euler, and Lagrange; it has been one of the major branches of the mathematical and engineering sciences for more than two centuries. It can be used to interpret the basic principles of the mathematical and physical sciences in a form characterized by simplicity and elegance. The variational principles have played a fundamental and unifying part in the sciences, including the development of the general theory of relativity, gauge field theory in modern particle physics, and soliton theory; see [17-21, 23, 30-33].

Variational inclusions involving the sum of monotone operators have been studied extensively in recent years. It is known that the sum of two (or more) monotone operators is again a monotone operator, whereas the difference of two (or more) monotone operators is not. Due to this fact, the problem of finding a zero of the difference of monotone operators is much more difficult than finding a zero of the sum of monotone operators. Consequently, there does not exist a unified framework for variational inclusions involving the difference of operators; see [1, 7, 11-13, 32, 33] and the references therein. It is worth mentioning that this type of variational inclusion includes as a special case the problem of finding the critical points of the difference of two convex functions. Our present results are a contribution towards this goal. We also show (see Lemma 2.1) that the minimum of the difference of a nondifferentiable nonconvex function and a differentiable nonconvex function on a nonconvex set is a solution of the general variational inequality, thereby extending the earlier known result for the difference of two differentiable convex functions. In addition, we show that the odd-order and nonsymmetric obstacle problems arising in various branches of the pure and applied sciences are a special case of the general variational inclusions and can be treated in this unified framework. This clearly shows that the field of variational inclusions involving the difference of operators is very rich and offers ample opportunities for further research.

Motivated and inspired by the research activities going on in this field, we introduce and consider a new class of variational inclusions involving the difference of operators, which is called the general variational inclusion. We use the resolvent operator technique to establish the equivalence between the general mixed variational inclusions and fixed point problem, which is Lemma 3.1. The novel feature of the technique is that the resolvent step involves the maximal monotone operator only and the other part facilitates the problem decomposition. This can lead to the development of very efficient methods, since one can treat each part of the original operator independently. We use this alternative formulation to study the existence of a solution of the general mixed variational inequality, which extends the known result.

In recent years, several numerical methods have been developed, including projection methods and their variant forms, resolvent equations, and auxiliary principle techniques. This class of iterative methods has witnessed great progress. Apart from theoretical interest, the main advantage of these methods, which makes them successful in real-world problems, is computational: they have the ability to handle large-scale problems of dimensions beyond which other methods cease to be efficient. In brief, the field of iterative methods is itself vast; see [1-23]. The equivalent fixed point formulation is used to suggest and analyze a new Mann-type iterative method for solving the general variational inclusions, see Algorithm 3.1. In the process of proving the main results (Theorem 3.1 and Theorem 3.2), we use the resolvent operator technique.

Related to the general variational inclusions, we have the problem of solving the resolvent equations, the origin of which can be traced back to Noor [17]. Using again the resolvent operator technique, we establish the equivalence between the general variational inclusions and the general resolvent equations. Here we would like to emphasize that the resolvent equations can be shown to be equivalent to the Wiener-Hopf equations, which were initially introduced by Shi [35]. It has turned out that this approach is more general and flexible. In Section 4, we consider the problem of solving the general resolvent equations and establish that the general variational inclusions are equivalent to the resolvent equations. The resolvent equations approach is used to suggest and analyze a number of new iterative methods for solving the general variational inclusions and related optimization problems. We prove the strong convergence (Theorem 4.1) of the new iterative method under the same conditions as in Theorem 3.2.

In this paper, we have shown that the general variational inclusions provide us a platform to investigate some unrelated problems in a unified manner. These unified frameworks also allow a cross-fertilization among various diverse field areas such as physics, mathematics, engineering, financial mathematics, economics and optimization, where both the theory and the computational techniques have been applied. We would like to emphasize that the problems discussed and results obtained in this paper may motivate and bring a large number of novel, innovative and potential applications, extensions and interesting generalizations in this area.

2 Preliminaries

Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively.

For given operators T, A, g : H → H, we consider the problem of finding u ∈ H such that
0 ∈ A(g(u)) − Tu.
(2.1)

The inclusion (2.1) is called the general variational inclusion involving the difference of operators. Note that the difference of two monotone operators is not a monotone operator, in contrast to the sum of two monotone operators. Due to this fact, the problem of finding a zero of the difference of two monotone operators is much more difficult than finding a zero of the sum of monotone operators; see [1, 12, 32, 33].

We now discuss some applications of the general variational inclusions (2.1).

Applications

(I) If g ≡ I, the identity operator, then problem (2.1) is equivalent to finding u ∈ H such that
0 ∈ A(u) − Tu,
(2.2)
a problem considered by Noor et al. [32, 33] and Moudafi [12] recently using essentially two different techniques.
(II) If A(·) ≡ ∂f(·), the subdifferential of a proper, convex, and lower-semicontinuous function f : H → R ∪ {+∞}, then problem (2.1) is equivalent to finding u ∈ H such that
0 ∈ ∂f(g(u)) − Tu,
(2.3)

a problem considered and studied by Adly and Oettli [1].

We note that problem (2.3) can be written as: find u ∈ H such that
⟨Tu, g(v) − g(u)⟩ + f(g(u)) − f(g(v)) ≤ 0, ∀v ∈ H,
(2.4)

which is known as the general mixed variational inequality, or the variational inequality of the second kind. For the applications, numerical methods, and other aspects of these mixed variational inequalities, see [1-22] and the references therein.

Example 2.1 To convey an idea of the applications of the general mixed variational inequality (2.4), we show that the minimum of the difference of a nondifferentiable nonconvex function and a differentiable nonconvex function on a nonconvex set is a solution of the mixed variational inequality (2.4). For this purpose, we recall the following well-known concepts; see [4].

Definition 2.1 [4]

Let K be any set in H. The set K is said to be a relative convex (g-convex) set if there exists a function g : H → H such that
g(u) + t(g(v) − g(u)) ∈ K, ∀u, v ∈ H : g(u), g(v) ∈ K, t ∈ [0, 1].

Note that every convex set is relative convex, but the converse is not true; see [4]. If g = I, then the relative convex set K is simply a convex set.

Definition 2.2 [14, 15]

The function F : K → H is said to be relative convex (g-convex) if there exists a function g such that
F(g(u) + t(g(v) − g(u))) ≤ (1 − t)F(g(u)) + tF(g(v)), ∀u, v ∈ H : g(u), g(v) ∈ K, t ∈ [0, 1].

Clearly every convex function is relative convex, but the converse is not true. For the properties and various classes of the relative convex functions, see [14, 17].

For a given differentiable relative convex function F and a nondifferentiable relative convex function f, we consider a functional of the type
I[v] = f(g(v)) − F(g(v)), ∀v ∈ H : g(v) ∈ K.
(2.5)

One can prove that the minimum of the functional I [ v ] on the relative convex set K can be characterized by a class of variational inequalities (2.4). For the sake of completeness and to convey an idea of the technique, we include its proof.

Lemma 2.1 Let F be a differentiable relative convex function and f a nondifferentiable relative convex function on the relative convex set K. If u ∈ H : g(u) ∈ K is a minimum of I[v], defined by (2.5), on K, then u satisfies the inequality
⟨F′(g(u)), g(v) − g(u)⟩ ≤ f(g(v)) − f(g(u)), ∀v ∈ H : g(v) ∈ K,
(2.6)

where F′(g(u)) is the differential of the differentiable nonconvex function F at g(u) in the direction g(v) − g(u).

Proof Let u ∈ H : g(u) ∈ K be a minimum of the functional I[v], defined by (2.5). Then
I[u] ≤ I[v], ∀v ∈ H : g(v) ∈ K.
(2.7)
Since K is a relative convex set, for all u, v ∈ H : g(u), g(v) ∈ K and t ∈ [0, 1], we have g(v_t) = g(u) + t(g(v) − g(u)) ∈ K. Setting g(v) = g(v_t) in (2.7), we have
I[u] ≤ I[g(u) + t(g(v) − g(u))],
which implies, using the relative convexity of f, that
f(g(u)) − F(g(u)) ≤ f(g(u) + t(g(v) − g(u))) − F(g(u) + t(g(v) − g(u)))
≤ f(g(u)) + t{f(g(v)) − f(g(u))} − F(g(u) + t(g(v) − g(u))),
from which we have
F(g(u) + t(g(v) − g(u))) − F(g(u)) − t{f(g(v)) − f(g(u))} ≤ 0, ∀v ∈ H : g(v) ∈ K.
Dividing the above inequality by t and letting t → 0, we have
⟨F′(g(u)), g(v) − g(u)⟩ ≤ f(g(v)) − f(g(u)), ∀v ∈ H : g(v) ∈ K,

which is the required result (2.6). □

Lemma 2.1 implies that the minimum of the difference of two nonconvex functions is a solution of the general mixed variational inequality (2.6). However, the converse is not true. It is an open problem to show that a solution of the general mixed variational inequality (2.6) is a minimum of the difference of two relative convex functions. See also Khattri [8] for applications of convex functions.
(III) If f is the indicator function of a closed and convex set K in H, then problem (2.4) is equivalent to finding u ∈ H : g(u) ∈ K such that
⟨Tu, g(v) − g(u)⟩ ≤ 0, ∀v ∈ H : g(v) ∈ K,
(2.8)
which is known as the general variational inequality, introduced and studied by Noor [15] in 1988. The general variational inequalities have been studied extensively in recent years; see [16-21] and the references therein for the formulation, numerical methods, applications, and other aspects of the general variational inequalities (2.8).
(IV) If g ≡ I, the identity operator, then problem (2.8) reduces to finding u ∈ K such that
⟨Tu, v − u⟩ ≤ 0, ∀v ∈ K,
(2.9)
which is known as the classical variational inequality, introduced and studied by Stampacchia [36] in 1964. See also [1-37] for more details.
(V) It is well known that the necessary optimality condition for the problem of finding the minimum of f(x) − g(x), where f(x) and g(x) are differentiable convex functions, is equivalent to finding x ∈ H such that
0 ∈ ∂f(x) − ∂g(x),
(2.10)

under some suitable conditions. Problems of type (2.10) have been considered in [1, 2, 5-7, 11]. It is clear from the above discussion that problem (2.10) is a special case of problem (2.1). In fact, a wide class of problems arising in different branches of the pure and applied sciences can be studied in the unified framework of the general variational inclusion (2.1). For appropriate and suitable choices of the operators and the space, one can obtain several new and known classes of variational inclusions, variational inequalities, and complementarity problems; see [1-37] and the references therein.

We now recall some basic concepts and results.

Definition 2.3 [4]

If A is a maximal monotone operator on H, then, for a constant ρ > 0, the resolvent operator associated with A is defined by
J_A(u) = (I + ρA)^{−1}(u), for all u ∈ H,

where I is the identity operator.

It is well known that a monotone operator is maximal if and only if its resolvent operator is defined everywhere. In addition, the resolvent operator is single-valued and nonexpansive, that is,
‖J_A(u) − J_A(v)‖ ≤ ‖u − v‖, ∀u, v ∈ H.
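For a concrete instance, if A = ∂f with f(u) = λ‖u‖₁ (an illustrative choice, not one used in this paper), the resolvent J_A = (I + ρA)^{−1} is the componentwise soft-thresholding operator, and its nonexpansiveness can be checked numerically. A minimal sketch, assuming H = R⁵:

```python
import numpy as np

def resolvent_l1(u, rho, lam=1.0):
    """Resolvent J_A = (I + rho*A)^(-1) for A = subdifferential of
    f(u) = lam*||u||_1, i.e. componentwise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - rho * lam, 0.0)

rng = np.random.default_rng(0)
u, v = rng.normal(size=5), rng.normal(size=5)
Ju, Jv = resolvent_l1(u, 0.5), resolvent_l1(v, 0.5)
# nonexpansiveness: ||J_A u - J_A v|| <= ||u - v||
print(np.linalg.norm(Ju - Jv) <= np.linalg.norm(u - v))  # True
```

Soft-thresholding is 1-Lipschitz componentwise, so the inequality holds for any pair of points, in agreement with the nonexpansiveness property above.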
Definition 2.4 An operator T : H → H is said to be:
(i) strongly antimonotone, if there exists a constant α > 0 such that
⟨Tu − Tv, u − v⟩ ≤ −α‖u − v‖², ∀u, v ∈ H;
(ii) Lipschitz continuous, if there exists a constant β > 0 such that
‖Tu − Tv‖ ≤ β‖u − v‖, ∀u, v ∈ H;
(iii) strongly monotone, if there exists a constant α₁ > 0 such that
⟨Tu − Tv, u − v⟩ ≥ α₁‖u − v‖², ∀u, v ∈ H.
     

We would like to point out that the differential f′(·) of a strongly concave function satisfies Definition 2.4(i). Consequently, it is a strongly antimonotone operator.

3 Resolvent operator method

In this section, we establish the equivalence between the general variational inclusion (2.1) and the fixed point problem (3.1) using the resolvent operator technique. This alternative formulation is used to discuss the existence of a solution of the problem (2.1) and to suggest and analyze an iterative method for solving the variational inclusions (2.1).

Lemma 3.1 Let A be a maximal monotone operator. Then u ∈ H is a solution of the variational inclusion (2.1) if and only if u ∈ H satisfies the relation
g(u) = J_A[g(u) + ρTu],
(3.1)

where J_A ≡ (I + ρA)^{−1} is the resolvent operator and ρ > 0 is a constant.

Proof Let u ∈ H be a solution of (2.1). Then
0 ∈ A(g(u)) − Tu
⇔ g(u) + ρTu ∈ g(u) + ρA(g(u)) = (I + ρA)(g(u))
⇔ g(u) = (I + ρA)^{−1}[g(u) + ρTu] = J_A[g(u) + ρTu],

the required result. □

Lemma 3.1 implies that the general variational inclusion (2.1) is equivalent to the fixed point problem (3.1). This alternative equivalent formulation is very useful from the numerical and theoretical points of view.

We rewrite the relation (3.1) in the following form:
F(u) = u − g(u) + J_A[g(u) + ρTu],
(3.2)

which is used to study the existence of a solution of the variational inclusion (2.1).

We now study the conditions under which the general variational inclusion (2.1) has a solution and this is the main motivation of our next result.

Theorem 3.1 Let the operator T : H → H be strongly antimonotone with constant α > 0 and Lipschitz continuous with constant β > 0. Let the operator g be strongly monotone with constant σ > 0 and Lipschitz continuous with constant δ > 0. If
|ρ − α/β²| < √(α² − β²k(2 − k))/β², α > β√(k(2 − k)), k = 2√(1 − 2σ + δ²) < 1,
(3.3)

then there exists a solution of problem (2.1).
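Condition (3.3) can be checked numerically for concrete constants. The choices below are illustrative assumptions: g = I gives σ = δ = 1 and hence k = 0, while T = −I is strongly antimonotone and Lipschitz continuous with α = β = 1, so any ρ ∈ (0, 2) is admissible. A minimal sketch:

```python
import math

def contraction_constant(alpha, beta, sigma, delta, rho):
    """theta = k + t(rho) from (3.7)-(3.8):
    k = 2*sqrt(1 - 2*sigma + delta**2),
    t(rho) = sqrt(1 - 2*alpha*rho + rho**2 * beta**2)."""
    k = 2.0 * math.sqrt(1.0 - 2.0 * sigma + delta ** 2)
    t = math.sqrt(1.0 - 2.0 * alpha * rho + (rho * beta) ** 2)
    return k + t

# illustrative data: g = I (sigma = delta = 1, so k = 0), T = -I (alpha = beta = 1)
theta = contraction_constant(alpha=1.0, beta=1.0, sigma=1.0, delta=1.0, rho=0.5)
print(theta)  # 0.5 < 1, so F defined by (3.2) is a contraction
```

Since θ < 1, the map F of (3.2) is a contraction for these constants and problem (2.1) has a solution, as Theorem 3.1 asserts.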

Proof From Lemma 3.1, it follows that problems (2.1) and (3.1) are equivalent. Thus it is enough to show that the map F(u), defined by (3.2), has a fixed point. For all u, v ∈ H, we have
‖F(u) − F(v)‖ = ‖u − v − (g(u) − g(v)) + J_A[g(u) + ρTu] − J_A[g(v) + ρTv]‖
≤ ‖u − v − (g(u) − g(v))‖ + ‖g(u) − g(v) + ρ(Tu − Tv)‖
≤ 2‖u − v − (g(u) − g(v))‖ + ‖u − v + ρ(Tu − Tv)‖,
(3.4)

where we have used the fact that the resolvent operator J A is nonexpansive.

Since the operator T is strongly antimonotone with constant α > 0 and Lipschitz continuous with constant β > 0, it follows that
‖u − v + ρ(Tu − Tv)‖² = ‖u − v‖² + 2ρ⟨Tu − Tv, u − v⟩ + ρ²‖Tu − Tv‖² ≤ (1 − 2ρα + ρ²β²)‖u − v‖².
(3.5)
In a similar way, using the strong monotonicity of g with constant σ > 0 and the Lipschitz continuity of g with constant δ > 0, we have
‖u − v − (g(u) − g(v))‖² ≤ (1 − 2σ + δ²)‖u − v‖².
(3.6)
From (3.4), (3.5), and (3.6), we have
‖F(u) − F(v)‖ ≤ {2√(1 − 2σ + δ²) + √(1 − 2αρ + β²ρ²)}‖u − v‖ = (k + t(ρ))‖u − v‖ = θ‖u − v‖,
where
t(ρ) = √(1 − 2αρ + ρ²β²)
(3.7)
and
θ = k + t(ρ).
(3.8)

From (3.3), it follows that θ < 1 . Thus the mapping F ( u ) , defined by (3.2) is a contraction mapping and consequently has a fixed point belonging to H satisfying the general variational inclusion (2.1). □

Using the fixed point formulation (3.1), we suggest and analyze the following iterative method for solving the variational inclusion (2.1).

Algorithm 3.1 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + J_A[g(u_n) + ρTu_n]}, n = 0, 1, 2, …,
(3.9)

which is known as the Mann iteration process for solving the general variational inclusion (2.1).
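As an illustration, the Mann iteration (3.9) can be run on a small toy instance. The concrete choices below (H = R³, g = I, T = −I, A = ∂(λ‖·‖₁) so that J_A is soft-thresholding, constant α_n = 1/2, ρ = 1/2) are assumptions for this sketch, not data from the paper; with them, u = 0 solves (2.1). A minimal sketch:

```python
import numpy as np

def soft(u, t):
    # resolvent J_A for A = subdifferential of t*||.||_1 (illustrative choice)
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def mann_algorithm_3_1(u0, T, J_A, g, rho=0.5, alpha=0.5, iters=200):
    """Mann iteration (3.9):
    u_{n+1} = (1 - a_n) u_n + a_n {u_n - g(u_n) + J_A[g(u_n) + rho*T u_n]}."""
    u = u0.copy()
    for _ in range(iters):
        u = (1 - alpha) * u + alpha * (u - g(u) + J_A(g(u) + rho * T(u)))
    return u

lam, rho = 1.0, 0.5
u0 = np.array([2.0, -3.0, 1.5])
# toy data: 0 in A(u) - Tu = subdiff(||.||_1)(u) + u holds at u = 0
u = mann_algorithm_3_1(u0, T=lambda x: -x, J_A=lambda x: soft(x, rho * lam),
                       g=lambda x: x, rho=rho)
print(np.linalg.norm(u) < 1e-6)  # True: the iterates approach the zero solution
```

The constant choice α_n = 1/2 satisfies the divergence condition ∑ α_n = ∞ required by Theorem 3.2.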

If g I , the identity operator, then Algorithm 3.1 reduces to the following.

Algorithm 3.2 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_nJ_A[u_n + ρTu_n], n = 0, 1, 2, …,

where α_n ∈ [0, 1] for all n ≥ 0.

Algorithm 3.2 is known as the Mann iteration process for solving the variational inclusion (2.2), which was discussed in [17, 23].

If A ( ) is the indicator function of a closed convex set K in H, then J A = P K , the projection of H onto the closed convex set and consequently Algorithm 3.1 reduces to the following method.

Algorithm 3.3 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + P_K[g(u_n) + ρTu_n]}, n = 0, 1, 2, …,

which is known as the Mann iteration process for solving the general variational inequalities (2.8).

We now consider the convergence analysis of Algorithm 3.1 and this is the main motivation of our next result.

Theorem 3.2 Let the operator T : H → H be strongly antimonotone with constant α > 0 and Lipschitz continuous with constant β > 0. Let the operator g be strongly monotone with constant σ > 0 and Lipschitz continuous with constant δ > 0. If condition (3.3) holds, 0 ≤ α_n ≤ 1 for all n ≥ 0, and ∑_{n=0}^{∞} α_n = ∞, then the approximate solution u_n obtained from Algorithm 3.1 converges to a solution u ∈ H of the variational inclusion (2.1).

Proof Let u ∈ H be a solution of the general variational inclusion (2.1). Then, using Lemma 3.1, we have
u = (1 − α_n)u + α_n{u − g(u) + J_A[g(u) + ρTu]},
(3.10)

where 0 ≤ α_n ≤ 1 for all n ≥ 0.

From (3.9) and (3.10), we have
‖u_{n+1} − u‖ = ‖(1 − α_n)(u_n − u) + α_n{u_n − u − (g(u_n) − g(u))} + α_n{J_A[g(u_n) + ρTu_n] − J_A[g(u) + ρTu]}‖
≤ (1 − α_n)‖u_n − u‖ + 2α_n‖u_n − u − (g(u_n) − g(u))‖ + α_n‖u_n − u + ρ(Tu_n − Tu)‖.
(3.11)
From (3.5), (3.6), (3.7), (3.8), and (3.11), we have
‖u_{n+1} − u‖ ≤ (1 − α_n)‖u_n − u‖ + α_n{2√(1 − 2σ + δ²) + √(1 − 2αρ + β²ρ²)}‖u_n − u‖
= (1 − α_n)‖u_n − u‖ + α_n(k + t(ρ))‖u_n − u‖
= (1 − α_n)‖u_n − u‖ + α_nθ‖u_n − u‖.
From (3.3), it follows that θ < 1. Thus
‖u_{n+1} − u‖ ≤ [1 − (1 − θ)α_n]‖u_n − u‖ ≤ ∏_{i=0}^{n}[1 − (1 − θ)α_i]‖u₀ − u‖.

Since ∑_{n=0}^{∞} α_n diverges and 1 − θ > 0, we have lim_{n→∞} ∏_{i=0}^{n}[1 − (1 − θ)α_i] = 0. Consequently the sequence {u_n} converges strongly to u ∈ H satisfying the general variational inclusion (2.1). This completes the proof. □
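The role of the divergence condition ∑ α_n = ∞ in the last step can be checked numerically. The choices θ = 0.5 and α_n = 1/(n+1) below are illustrative assumptions; the product of error factors then decays to zero even though the individual factors tend to 1:

```python
# With theta < 1 and sum(alpha_n) = inf, the error factor
# prod_i [1 - (1 - theta) * alpha_i] tends to 0; here alpha_n = 1/(n+1).
theta = 0.5
prod = 1.0
for i in range(100000):
    prod *= 1.0 - (1.0 - theta) / (i + 1.0)
print(prod < 1e-2)  # True: the product decays toward zero
```

By contrast, a summable sequence such as α_n = 2^{-n} would leave the product bounded away from zero, which is why the theorem demands ∑ α_n = ∞.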

In recent years, much attention has been given to developing two-step and three-step iterative methods for solving variational inclusions and inequalities using the technique of updating the solution. It has been shown that the three-step iterative methods, which are also called Noor iterations, are versatile and efficient.

We now use the technique of updating the solution to rewrite (3.1) in the following form:
g(y) = J_A[g(u) + ρTu],
g(w) = J_A[g(y) + ρTy],
g(u) = J_A[g(w) + ρTw],
from which we have
y = (1 − γ_n)y + γ_n{y − g(y) + J_A[g(u) + ρTu]},
w = (1 − β_n)w + β_n{w − g(w) + J_A[g(y) + ρTy]},
u = (1 − α_n)u + α_n{u − g(u) + J_A[g(w) + ρTw]},

where 0 ≤ α_n, β_n, γ_n ≤ 1 for all n ≥ 0.

Using this fixed point formulation, we can suggest and investigate the following three-step iterative methods for solving problem (2.1).

Algorithm 3.4 For given u₀, y₀, w₀ ∈ H, find u_{n+1}, y_{n+1}, w_{n+1} by the iterative schemes
y_{n+1} = (1 − γ_n)y_n + γ_n{y_n − g(y_n) + J_A[g(u_n) + ρTu_n]},
w_{n+1} = (1 − β_n)w_n + β_n{w_n − g(w_n) + J_A[g(y_n) + ρTy_n]},
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + J_A[g(w_n) + ρTw_n]},

where 0 ≤ α_n, β_n, γ_n ≤ 1 for all n ≥ 0.

Algorithm 3.4 is called the Noor three-step iterative method for solving the general variational inclusion (2.1). This method can be considered as a Jacobi type iterative method.

We now suggest another iterative method by using the updated value of the solution. This iterative method can be viewed as a Gauss-Seidel method.

Algorithm 3.5 For given u₀, y₀, w₀ ∈ H, find u_{n+1}, y_{n+1}, w_{n+1} by the iterative schemes
y_{n+1} = (1 − γ_n)y_n + γ_n{y_n − g(y_n) + J_A[g(u_n) + ρTu_n]},
w_{n+1} = (1 − β_n)w_n + β_n{w_n − g(w_n) + J_A[g(y_{n+1}) + ρTy_{n+1}]},
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + J_A[g(w_{n+1}) + ρTw_{n+1}]},

where 0 ≤ α_n, β_n, γ_n ≤ 1 for all n ≥ 0.

If γ n = 0 , then Algorithm 3.4 and Algorithm 3.5 reduce to the following two-step iterative schemes for solving (2.1).

Algorithm 3.6 For given u₀, y₀ ∈ H, find u_{n+1}, y_{n+1} by the iterative schemes
y_{n+1} = (1 − β_n)y_n + β_n{y_n − g(y_n) + J_A[g(u_n) + ρTu_n]},
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + J_A[g(y_n) + ρTy_n]},

where 0 ≤ α_n, β_n ≤ 1 for all n ≥ 0.

Algorithm 3.7 For given u₀, y₀ ∈ H, find u_{n+1}, y_{n+1} by the iterative schemes
y_{n+1} = (1 − β_n)y_n + β_n{y_n − g(y_n) + J_A[g(u_n) + ρTu_n]},
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + J_A[g(y_{n+1}) + ρTy_{n+1}]},

where 0 ≤ α_n, β_n ≤ 1 for all n ≥ 0.
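The Gauss-Seidel flavor of Algorithm 3.5, where each step reuses the freshly updated iterate, can be sketched on the same toy instance as before (g = I, T = −I, A = ∂(‖·‖₁), constant α_n = β_n = γ_n = 1/2, ρ = 1/2; all of these are illustrative assumptions, not data from the paper):

```python
import numpy as np

def soft(u, t):
    # resolvent J_A for A = subdifferential of t*||.||_1 (illustrative choice)
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def noor_three_step(u0, T, J_A, g, alpha=0.5, beta=0.5, gamma=0.5,
                    rho=0.5, iters=400):
    """Gauss-Seidel variant (Algorithm 3.5): w uses y_{n+1}, u uses w_{n+1}."""
    u = y = w = u0.copy()
    for _ in range(iters):
        y = (1 - gamma) * y + gamma * (y - g(y) + J_A(g(u) + rho * T(u)))
        w = (1 - beta) * w + beta * (w - g(w) + J_A(g(y) + rho * T(y)))
        u = (1 - alpha) * u + alpha * (u - g(u) + J_A(g(w) + rho * T(w)))
    return u

rho = 0.5
u0 = np.array([2.0, -3.0, 1.5])
u = noor_three_step(u0, T=lambda x: -x, J_A=lambda x: soft(x, rho),
                    g=lambda x: x, rho=rho)
print(np.linalg.norm(u) < 1e-5)  # True: iterates approach the zero solution
```

A Jacobi-style run of Algorithm 3.4 is obtained by evaluating w and u at the previous iterates y_n and w_n instead of the updated ones.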

One can use the fixed point problem (3.10) to suggest the following iterative method for solving (2.1).

Algorithm 3.8 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_n{u_n − g(u_n) + J_A[g(u_n) + ρTu_{n+1}]},

which is called the implicit or proximal point method. Using the technique of Noor [15, 17], one can investigate the convergence analysis of Algorithm 3.8.

In brief, one can obtain a wide class of new iterative methods for solving the general variational inclusions and related problems by selecting suitable and appropriate choices of the operators and space. The interested readers are encouraged to study the convergence analysis of Algorithms 3.4-3.7, which is an interesting and challenging problem for future research. The implementation and comparison of these methods is another direction of future research.

4 Resolvent equations technique

In this section, we consider the problem of solving the resolvent equations. It is shown that the general variational inclusions (2.1) are equivalent to the general resolvent equations. This alternative equivalent formulation is used to suggest and investigate a class of iterative methods for solving the general variational inclusions (2.1).

We now consider the problem of solving the resolvent equations. Let R_A ≡ I − J_A, where J_A is the resolvent operator and I is the identity operator. For given nonlinear operators T, A, g, consider the problem of finding z ∈ H such that
Tg^{−1}J_A z − ρ^{−1}R_A z = 0.
(4.1)
Equations of the type (4.1) are called general resolvent equations; they were introduced and studied by Noor [17]. In particular, if A(·) ≡ ∂f(·), where f is the indicator function of a closed convex set K in H, then J_A = P_K, the projection of H onto K. In this case, the resolvent equations reduce to the general Wiener-Hopf equations: find z ∈ H such that
Tg^{−1}P_K z − ρ^{−1}Q_K z = 0, where Q_K ≡ I − P_K,

which were introduced by Noor [16] in conjunction with the general variational inequalities (2.8). For g ≡ I, the identity operator, we obtain the original Wiener-Hopf equations, which were introduced and studied by Shi [35] in connection with variational inequalities. This shows that the Wiener-Hopf equations are a special case of the general resolvent equations. The resolvent equations technique has been used to study and develop several iterative methods for solving various types of variational inequality and inclusion problems; see [12, 23, 31-33].

Using Lemma 3.1, we show that the general variational inclusions (2.1) are equivalent to the general resolvent equations (4.1).

Lemma 4.1 The general variational inclusion (2.1) has a solution u ∈ H if and only if the general resolvent equations (4.1) have a solution z ∈ H, provided
g(u) = J_A z,
(4.2)
z = g(u) + ρTu,
(4.3)

where ρ > 0 is a constant.

Proof Let u ∈ H be a solution of (2.1). Then, from Lemma 3.1, we have
g(u) = J_A[g(u) + ρTu].
(4.4)
Taking z = g(u) + ρTu in (4.4), we have
g(u) = J_A z.
(4.5)
From (4.5) and (4.4), we have
z = g(u) + ρTu = J_A z + ρTg^{−1}J_A z,

which shows that z H is a solution of the resolvent equations (4.1). This completes the proof. □

From Lemma 4.1, we conclude that the variational inclusion (2.1) and the resolvent equations (4.1) are equivalent. This alternative formulation plays an important and crucial part in suggesting and analyzing various iterative methods for solving variational inclusions and related optimization problems. In this paper, by a suitable and appropriate rearrangement, we suggest a number of new iterative methods for solving the variational inclusions (2.1).
(I) Equation (4.1) can be written as
R_A z = ρTg^{−1}J_A z,

which implies, using (4.2), that
z = J_A z + ρTg^{−1}J_A z = g(u) + ρTu.

This fixed point formulation enables us to suggest the following iterative method for solving the variational inclusion (2.1).

Algorithm 4.1 For a given z₀ ∈ H, compute z_{n+1} by the iterative schemes
g(u_n) = J_A z_n,
(4.6)
z_{n+1} = (1 − α_n)z_n + α_n{g(u_n) + ρTu_n}, n = 0, 1, 2, …,
(4.7)
where 0 ≤ α_n ≤ 1 for all n ≥ 0 and ∑_{n=0}^{∞} α_n = ∞.
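Algorithm 4.1 can also be sketched on the earlier toy instance (g = I so g⁻¹ = I, T = −I, A = ∂(‖·‖₁) so J_A is soft-thresholding, constant α_n = 1/2, ρ = 1/2; all illustrative assumptions, not data from the paper). The update alternates between recovering u_n from (4.6) and applying (4.7):

```python
import numpy as np

def soft(u, t):
    # resolvent J_A for A = subdifferential of t*||.||_1 (illustrative choice)
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def resolvent_equation_method(z0, T, J_A, g_inv, alpha=0.5, rho=0.5, iters=200):
    """Algorithm 4.1: g(u_n) = J_A z_n, then
    z_{n+1} = (1 - a_n) z_n + a_n {g(u_n) + rho * T u_n}."""
    z = z0.copy()
    for _ in range(iters):
        u = g_inv(J_A(z))                       # recover u_n from (4.6)
        z = (1 - alpha) * z + alpha * (J_A(z) + rho * T(u))  # step (4.7)
    return z

rho = 0.5
z0 = np.array([2.0, -3.0, 1.5])
z = resolvent_equation_method(z0, T=lambda x: -x, J_A=lambda x: soft(x, rho),
                              g_inv=lambda x: x)
u = soft(z, rho)   # g(u) = J_A z recovers the solution u of (2.1)
print(np.linalg.norm(u) < 1e-8)  # True: u approaches the zero solution
```

By Lemma 4.1, a fixed point z of (4.7) yields a solution u of the variational inclusion (2.1) through g(u) = J_A z.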
(II) Equation (4.1) may be written as
z = J_A z + ρTg^{−1}J_A z + (1 − ρ^{−1})R_A z = g(u) + ρTu + (1 − ρ^{−1})R_A z.

Using this fixed point formulation, we suggest the following iterative method.

Algorithm 4.2 For a given z₀ ∈ H, compute z_{n+1} by the iterative schemes
g(u_n) = J_A z_n,
z_{n+1} = (1 − α_n)z_n + α_n{g(u_n) + ρTu_n + (1 − ρ^{−1})R_A z_n}, n = 0, 1, 2, …,
where 0 ≤ α_n ≤ 1 for all n ≥ 0 and ∑_{n=0}^{∞} α_n = ∞.
(III) If the operator T is linear and T^{−1} exists, then the resolvent equation (4.1) can be written as
z = (I + ρ^{−1}T^{−1})R_A z,
     

which allows us to suggest the iterative method.

Algorithm 4.3 For a given z₀ ∈ H, compute z_{n+1} by the iterative scheme
z_{n+1} = (1 − α_n)z_n + α_n(I + ρ^{−1}T^{−1})R_A z_n, n = 0, 1, 2, …,

where 0 ≤ α_n ≤ 1 for all n ≥ 0 and ∑_{n=0}^{∞} α_n = ∞.

We would like to point out that one can obtain a number of iterative methods for solving the general variational inclusion (2.1) for suitable and appropriate choices of the operators T, A and the space H. This shows that the iterative methods suggested in this paper are more general and unifying ones.

We now study the convergence analysis of Algorithm 4.1. In a similar way, one can analyze the convergence analysis of other iterative methods.

Theorem 4.1 Let the operators T and g satisfy all the assumptions of Theorem 3.1. If condition (3.3) holds, 0 ≤ α_n ≤ 1 for all n ≥ 0, and ∑_{n=0}^{∞} α_n = ∞, then the approximate solution {z_n} obtained from Algorithm 4.1 converges strongly to a solution z ∈ H of the resolvent equation (4.1).

Proof Let z ∈ H be a solution of (4.1). Then, using Lemma 4.1, we have
z = (1 − α_n)z + α_n{g(u) + ρTu},
(4.8)

where 0 ≤ α_n ≤ 1 and ∑_{n=0}^{∞} α_n = ∞.

From (4.7), (4.8), (3.5), and (3.6), we have
‖z_{n+1} − z‖ ≤ (1 − α_n)‖z_n − z‖ + α_n‖g(u_n) − g(u) + ρ(Tu_n − Tu)‖
≤ (1 − α_n)‖z_n − z‖ + α_n‖u_n − u − (g(u_n) − g(u))‖ + α_n‖u_n − u + ρ(Tu_n − Tu)‖
≤ (1 − α_n)‖z_n − z‖ + α_n{k/2 + √(1 − 2ρα + ρ²β²)}‖u_n − u‖.
(4.9)
Also, from (4.6), (4.2), (3.6), and the nonexpansivity of the resolvent operator J_A, we have
‖u_n − u‖ ≤ ‖u_n − u − (g(u_n) − g(u))‖ + ‖J_A z_n − J_A z‖ ≤ (k/2)‖u_n − u‖ + ‖z_n − z‖,
which implies that
‖u_n − u‖ ≤ (1/(1 − k/2))‖z_n − z‖.
(4.10)
Combining (4.9) and (4.10), we have
‖z_{n+1} − z‖ ≤ (1 − α_n)‖z_n − z‖ + α_nθ₁‖z_n − z‖,
(4.11)
where
θ₁ = (k/2 + √(1 − 2ρα + ρ²β²))/(1 − k/2).
Using (3.3), we see that θ₁ < 1 and consequently
‖z_{n+1} − z‖ ≤ [1 − (1 − θ₁)α_n]‖z_n − z‖ ≤ ∏_{i=0}^{n}[1 − (1 − θ₁)α_i]‖z₀ − z‖.

Since ∑_{n=0}^{∞} α_n diverges and 1 − θ₁ > 0, we have lim_{n→∞} ∏_{i=0}^{n}[1 − (1 − θ₁)α_i] = 0. Consequently the sequence {z_n} converges strongly to z ∈ H, the required result. □

We now suggest another iterative method for solving the general variational inclusions (2.1). From (4.2) and (4.3), we have
g(u) + ρTu = J_A[g(u) + ρTu] + ρTg^{−1}J_A[g(u) + ρTu].
Thus, for a positive parameter γ > 0, we have
u = u − γ{g(u) + ρTu − J_A[g(u) + ρTu] − ρTg^{−1}J_A[g(u) + ρTu]}.

This fixed point formulation enables us to suggest the following iterative method for solving (2.1).

Algorithm 4.4 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = u_n − γ{g(u_n) + ρTu_n − J_A[g(u_n) + ρTu_n] − ρTg^{−1}J_A[g(u_n) + ρTu_n]}.

Using the technique of Noor [20, 23], one can study the convergence criteria of Algorithm 4.4. We leave this to the interested reader.

Conclusion

In this paper, we have shown that the problem of finding a zero of the difference of two (or more) operators is equivalent to a fixed point problem and to the resolvent equations. These alternative formulations have been used to study the existence of a zero of the difference of two (or more) operators, as well as to suggest and analyze some iterative methods for solving the variational inclusions associated with the difference of operators. Our methods and techniques are simpler than other approaches. The ideas and techniques presented in this paper may be used to consider the sensitivity analysis, dynamical systems, and other aspects of these variational inclusions. It is an interesting open problem to compare the techniques for finding the zeros of the difference of monotone operators. The interested reader is advised to explore this area further and discover novel and innovative applications of the variational inclusions. See [38] for a recent development of problem (2.1), where it is shown that problem (2.1) can be seen as a new and significant generalization of the DC programming case.

Declarations

Acknowledgements

The authors would like to thank Dr. S.M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing excellent research facilities. The authors are grateful to the referees for their constructive comments and suggestions. This research is supported by HEC Project NRPU No: 20-1966/R&D/11-2553.

Authors’ Affiliations

(1)
Mathematics Department, COMSATS Institute of Information Technology

References

  1. Adly S, Oettli W: Solvability of generalized nonlinear symmetric variational inequalities. J. Aust. Math. Soc. Ser. B, Appl. Math. 1999, 40: 289-300. doi:10.1017/S0334270000010912
  2. An LTH, Pham DT: The DC programming and DCA revisited of real world nonconvex optimization problems. Ann. Oper. Res. 2005, 133: 25-46.
  3. Bnouhachem A, Noor MA, Khalfssoui M, Benazza H: General system of variational inequalities in Banach spaces. Appl. Math. Inform. Sci. 2014, 8(3): 985-991. doi:10.12785/amis/080307
  4. Brezis H: Operateurs maximaux monotones. Mathematical Studies 5. North-Holland, Amsterdam; 1973.
  5. Cristescu G, Lupsa L: Non-Connected Convexities and Applications. Kluwer Academic, Dordrecht; 2002.
  6. Hamdi A: A Moreau-Yosida regularization of a difference of two convex functions. Appl. Math. E-Notes 2005, 5: 164-170.
  7. Hamdi A: A modified Bregman proximal scheme to minimize the difference of two convex functions. Appl. Math. E-Notes 2006, 6: 132-140.
  8. Khattri SK: Three proofs of the inequality $e < (1 + \frac{1}{n})^{n+0.5}$. Am. Math. Mon. 2010, 117(3): 273-277. doi:10.4169/000298910X480126
  9. Khattri SK, Log T: Construction third-order derivative-free iterative methods. Int. J. Comput. Math. 2011, 88(7): 1509-1518. doi:10.1080/00207160.2010.520705
  10. Khattri SK, Log T: Derivative free algorithm for solving nonlinear equations. Computing 2011, 92: 169-179. doi:10.1007/s00607-010-0135-7
  11. Lions PL, Mercier B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16: 964-979. doi:10.1137/0716071
  12. Moudafi A: On the difference of two maximal monotone operators: regularization and algorithmic approach. Appl. Math. Comput. 2008, 202: 446-452. doi:10.1016/j.amc.2008.01.024
  13. Moudafi A, Mainge PE: On the convergence of an approximate proximal method for DC functions. J. Comput. Math. 2006, 24: 475-480.
  14. Moudafi A, Noor MA: Split algorithms for new implicit feasibility null-point problems. Appl. Math. Inform. Sci. 2014, 8(5).
  15. Noor MA: General variational inequalities. Appl. Math. Lett. 1988, 1: 119-121. doi:10.1016/0893-9659(88)90054-7
  16. Noor MA: Wiener-Hopf equations and variational inequalities. J. Optim. Theory Appl. 1993, 79: 197-206. doi:10.1007/BF00941894
  17. Noor MA: Some recent advances in variational inequalities. Part II. Other concepts. N.Z. J. Math. 1997, 26: 229-255.
  18. Noor MA: Some algorithms for general monotone mixed variational inequalities. Math. Comput. Model. 1999, 29: 1-9.
  19. Noor MA: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251: 217-229. doi:10.1006/jmaa.2000.7042
  20. Noor MA: Some developments in general variational inequalities. Appl. Math. Comput. 2004, 152: 199-277. doi:10.1016/S0096-3003(03)00558-7
  21. Noor MA: Differentiable nonconvex functions and general variational inequalities. Appl. Math. Comput. 2008, 199: 623-630. doi:10.1016/j.amc.2007.10.023
  22. Noor MA: Extended general variational inequalities. Appl. Math. Lett. 2009, 22: 182-186. doi:10.1016/j.aml.2008.03.007
  23. Noor MA: Variational Inequalities and Applications. Lecture Notes. Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan; 2007-2013.
  24. Noor MA, Noor KI: Auxiliary principle technique for solving split feasibility problems. Appl. Math. Inform. Sci. 2013, 7(1): 221-227. doi:10.12785/amis/070127
  25. Noor MA, Noor KI: Sensitivity analysis of some quasi variational inequalities. J. Adv. Math. Stud. 2013, 6(1): 43-52.
  26. Noor MA, Noor KI: Some new classes of quasi split feasibility problems. Appl. Math. Inform. Sci. 2013, 7(4): 1547-1552. doi:10.12785/amis/070439
  27. Noor MA, Noor KI: Some parallel algorithms for a new system of quasi variational inequalities. Appl. Math. Inform. Sci. 2013, 7(6): 2493-2498. doi:10.12785/amis/070643
  28. Noor MA, Noor KI, Khan AG: Some iterative schemes for solving extended general quasi variational inequalities. Appl. Math. Inform. Sci. 2013, 7(3): 917-925. doi:10.12785/amis/070309
  29. Noor MA, Awan MU, Noor KI: On some inequalities for relative semi-convex functions. J. Inequal. Appl. 2013, 2013: Article ID 332.
  30. Noor MA, Noor KI, Rassias TM: Some aspects of variational inequalities. J. Comput. Appl. Math. 1993, 47: 285-312. doi:10.1016/0377-0427(93)90058-J
  31. Noor MA, Noor KI, Rassias TM: Set-valued resolvent equations and mixed variational inequalities. J. Math. Anal. Appl. 1998, 220: 741-759. doi:10.1006/jmaa.1997.5893
  32. Noor MA, Noor KI, Hamdi A, El-Shemas EH: On difference of two monotone operators. Optim. Lett. 2009, 3: 329-335. doi:10.1007/s11590-008-0112-7
  33. Noor MA, Noor KI, El-Shemas EH, Hamdi A: Resolvent iterative methods for difference of two monotone operators. Int. J. Optim.: Theory Methods Appl. 2009, 1: 15-25.
  34. Noor MA, Noor KI, Awan MU: Geometrically relative convex functions. Appl. Math. Inform. Sci. 2014, 8(2): 607-616. doi:10.12785/amis/080218
  35. Shi P: Equivalence of variational inequalities with Wiener-Hopf equations. Proc. Am. Math. Soc. 1991, 111: 339-346. doi:10.1090/S0002-9939-1991-1037224-3
  36. Stampacchia G: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258: 4413-4416.
  37. Tuy H: Global minimization of a difference of two convex functions. Math. Program. Stud. 1987, 30: 150-182. doi:10.1007/BFb0121159
  38. Moudafi A: On critical points of the differences of two maximal monotone operators. Afr. Math. 2013. doi:10.1007/s13370-013-0218-7

Copyright

© Noor et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.