General variational inclusions involving difference of operators
Journal of Inequalities and Applications volume 2014, Article number: 98 (2014)
Abstract
In this paper, we introduce and consider a new class of general variational inclusions involving the difference of operators in a Hilbert space. Using the resolvent operator technique, we establish the equivalence between the general variational inclusions and fixed point problems, as well as with a new class of resolvent equations. We use this alternative formulation to discuss the existence of a solution of the general variational inclusions and to suggest and analyze a number of iterative methods for finding a zero of the difference of operators. We also discuss the convergence of the iterative methods under suitable conditions. Our methods of proof are very simple compared with other techniques. Several special cases of these problems are also considered. The results proved in this paper may be viewed as a refinement and an improvement of the known results in this area.
1 Introduction
Variational inclusions involving the difference of operators provide us with a unified, natural, novel, and simple framework to study a wide class of problems arising in DC programming, prox-regularity, multicommodity networks, image restoration, tomography, molecular biology, optimization, and the pure and applied sciences, see [1–37] and the references therein. We would like to emphasize that variational inclusion theory is a natural development of the variational principles, whose origin can be traced back to Fermat, Newton, Leibniz, the Bernoulli brothers, Euler, and Lagrange, and which have been one of the major branches of the mathematical and engineering sciences for more than two centuries. They can be used to interpret the basic principles of the mathematical and physical sciences in a form characterized by simplicity and elegance. The variational principles have served as a fundamental and unifying influence in the sciences and have played a central role in the development of the general theory of relativity, gauge field theory in modern particle physics, and soliton theory, see [17–21, 23, 30–33].
Variational inclusions involving the sum of monotone operators have been studied extensively in recent years. It is known that the sum of two (or more) monotone operators is again a monotone operator, whereas the difference of two (or more) monotone operators need not be monotone. Due to this fact, the problem of finding a zero of the difference of monotone operators is very difficult compared with finding a zero of the sum of monotone operators. Consequently, there does not exist a unified framework for variational inclusions involving the difference of operators, see [1, 7, 11–13, 32, 33] and the references therein. It is worth mentioning that this type of variational inclusion includes as a special case the problem of finding the critical points of the difference of two convex functions. Our present results are a contribution towards this goal. We also show (see Lemma 2.1) that the minimum of the difference of a differentiable nonconvex function and a nondifferentiable nonconvex function on a nonconvex set is a solution of the general variational inequality, thereby extending the earlier known result for the difference of two differentiable convex functions. In addition, we show that the odd-order and nonsymmetric obstacle problems arising in various branches of the pure and applied sciences are a special case of the general variational inclusions and can be treated in this unified framework. This clearly shows that the field of variational inclusions involving the difference of operators is very rich and offers ample opportunities for further research.
Motivated and inspired by the research activities going on in this field, we introduce and consider a new class of variational inclusions involving the difference of operators, which is called the general variational inclusion. We use the resolvent operator technique to establish the equivalence between the general variational inclusions and a fixed point problem, which is Lemma 3.1. The novel feature of this technique is that the resolvent step involves the maximal monotone operator only, while the other part facilitates the problem decomposition. This can lead to the development of very efficient methods, since one can treat each part of the original operator independently. We use this alternative formulation to study the existence of a solution of the general variational inclusion, which extends the known result.
In recent years, several numerical methods have been developed, including projection methods and their variant forms, resolvent equations, and auxiliary principle techniques. This class of iterative methods has witnessed great progress in recent years. Apart from their theoretical interest, the main advantage of these methods, which makes them successful in real-world problems, is their computational efficiency. These methods have the ability to handle large-size problems of dimensions beyond which other methods cease to be efficient. In brief, the field of iterative methods is itself vast, see [1–23]. This equivalent formulation is used to suggest and analyze a new Mann-type iterative method for solving the general variational inclusions, see Algorithm 3.1. In the process of proving the main results (Theorem 3.1 and Theorem 3.2), we use the resolvent operator technique.
Related to the general variational inclusions, we have the problem of solving the resolvent equations, the origin of which can be traced back to Noor [17]. Using the resolvent operator technique again, we establish the equivalence between the general variational inclusions and the general resolvent equations. Here we would like to emphasize that one can show that the resolvent equations are equivalent to the Wiener-Hopf equations, which were initially introduced by Shi [35]. It has turned out that this approach is more general and flexible. In Section 4, we consider the problem of solving the general resolvent equations. It is established that the general variational inclusions are equivalent to the resolvent equations. The resolvent equations approach is used to suggest and analyze a number of new iterative methods for solving the general variational inclusions and related optimization problems. We prove the strong convergence (Theorem 4.1) of the new iterative method under the same conditions as in Theorem 3.2.
In this paper, we have shown that the general variational inclusions provide us with a platform to investigate several apparently unrelated problems in a unified manner. These unified frameworks also allow a cross-fertilization among diverse fields such as physics, mathematics, engineering, financial mathematics, economics, and optimization, where both the theory and the computational techniques have been applied. We would like to emphasize that the problems discussed and the results obtained in this paper may motivate a large number of novel, innovative, and potential applications, extensions, and interesting generalizations in this area.
2 Preliminaries
Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·,·⟩ and ‖·‖, respectively.
For given monotone operators T, A, g: H → H, we consider the problem of finding u ∈ H such that
0 ∈ A(g(u)) − Tu. (2.1)
An inclusion of the type (2.1) is called a general variational inclusion involving the difference of operators. Note that the difference of two monotone operators is not a monotone operator, in contrast to the sum of two monotone operators. Due to this fact, the problem of finding a zero of the difference of two monotone operators is very difficult compared to finding the zeros of the sum of monotone operators, see [1, 12, 32, 33].
We now discuss some applications of the general variational inclusions (2.1).
Applications

(I)
If g\equiv I, the identity operator, then problem (2.1) is equivalent to finding u\in H such that
0 ∈ A(u) − Tu, (2.2)
a problem recently considered by Noor et al. [32, 33] and Moudafi [12] using essentially different techniques.

(II)
If A(·) ≡ ∂f(·), the subdifferential of a proper, convex, and lower semicontinuous function f: H → ℝ ∪ {+∞}, then problem (2.1) is equivalent to finding u ∈ H such that
0 ∈ ∂f(g(u)) − Tu, (2.3)
a problem considered and studied by Adly and Oettli [1].
We note that problem (2.3) can be written as: find u ∈ H such that
⟨Tu, g(v) − g(u)⟩ ≤ f(g(v)) − f(g(u)), ∀v ∈ H, (2.4)
which is known as the general mixed variational inequality or the variational inequality of the second kind. For the applications, numerical methods and other aspects of these mixed variational inequalities, see [1–22] and the references therein.
Example 2.1 To convey an idea of the applications of the general mixed variational inequality (2.4), we show that the minimum of the difference of a differentiable nonconvex function and a nondifferentiable nonconvex function on a nonconvex set is a solution of the mixed variational inequality (2.4). For this purpose, we recall the following well-known concepts, see [4].
Definition 2.1 [4]
Let K be any set in H. The set K is said to be a relative convex (g-convex) set if there exists a function g: H → H such that
g(u) + t(g(v) − g(u)) ∈ K, ∀u, v ∈ H: g(u), g(v) ∈ K, t ∈ [0, 1].
Note that every convex set is a relative convex set, but the converse is not true, see [4]. If g = I, then the relative convex set K is the classical convex set.
The function F: K → H is said to be relative convex (g-convex) if there exists a function g such that
F(g(u) + t(g(v) − g(u))) ≤ (1 − t)F(g(u)) + tF(g(v)), ∀u, v ∈ H: g(u), g(v) ∈ K, t ∈ [0, 1].
Clearly every convex function is relative convex, but the converse is not true. For the properties and various classes of the relative convex functions, see [14, 17].
For a given differentiable relative convex function F and a nondifferentiable relative convex function f, we consider a functional of the type
I[v] = F(g(v)) − f(g(v)). (2.5)
One can prove that the minimum of the functional I[v] on the relative convex set K can be characterized by a class of variational inequalities (2.4). For the sake of completeness and to convey an idea of the technique, we include its proof.
Lemma 2.1 Let F be a differentiable relative convex function and f be a nondifferentiable relative convex function on the relative convex set K. If u ∈ H: g(u) ∈ K is a minimum of I[v], defined by (2.5), on K ⊂ g(H), then it satisfies the inequality
⟨F′(g(u)), g(v) − g(u)⟩ ≤ f(g(v)) − f(g(u)), ∀v ∈ H: g(v) ∈ K, (2.6)
where F′(g(u)) is the differential of the differentiable nonconvex function F at g(u) in the direction g(v) − g(u).
Proof Let u ∈ H: g(u) ∈ K be a minimum of the functional I[v], defined by (2.5). Then
I[u] ≤ I[v], that is, F(g(u)) − f(g(u)) ≤ F(g(v)) − f(g(v)), ∀v ∈ H: g(v) ∈ K. (2.7)
Since K is a relative convex set, for all u, v ∈ H: g(u), g(v) ∈ K and t ∈ [0, 1], we have g(v_t) = g(u) + t(g(v) − g(u)) ∈ K. Setting g(v) = g(v_t) in (2.7), we have
which implies that
from which we have
Dividing the above inequality by t and letting t → 0, we have
which is the required result (2.6). □
Lemma 2.1 implies that the minimum of the difference of two nonconvex functions is a solution of the general mixed variational inequality (2.6). However, the converse is not true. It is an open problem to show that a solution of the general mixed variational inequality (2.6) is a minimum of the difference of two relative convex functions. See also Khattri [8] for applications of convex functions.

(III)
If f is the indicator function of a closed and convex set K in a real Hilbert space, then problem (2.4) is equivalent to finding u\in H:g(u)\in K such that
⟨Tu, g(v) − g(u)⟩ ≤ 0, ∀v ∈ H: g(v) ∈ K, (2.8)
which is known as the general variational inequality, introduced and studied by Noor [15] in 1988. The general variational inequalities have been studied extensively in recent years, see [16–21] and the references therein for the formulation, numerical methods, applications, and other aspects of the general variational inequalities (2.8).

(IV)
If g\equiv I, the identity operator, then problem (2.8) reduces to: find u\in K such that
⟨Tu, v − u⟩ ≤ 0, ∀v ∈ K, (2.9)
which is known as the classical variational inequality, introduced and studied by Stampacchia [36] in 1964. See also [1–37] for more details.

(V)
It is well known that the necessary optimality condition for the problem of finding the minimum of f(x) − g(x), where f(x) and g(x) are differentiable convex functions, is equivalent to finding x ∈ H such that
0 ∈ ∂f(x) − ∂g(x), (2.10)
under some suitable conditions. Problems of the type (2.10) have been considered in [1, 2, 5–7, 11]. It is clear from the above discussion that problem (2.10) is a special case of problem (2.1). In fact, a wide class of problems arising in different branches of the pure and applied sciences can be studied in the unified framework of the general variational inclusion (2.1). For appropriate and suitable choices of the operators and the space, one can obtain several new and known classes of variational inclusions, variational inequalities, and complementarity problems, see [1–37] and the references therein.
We now recall some basic concepts and results.
Definition 2.3 [4]
If A is a maximal monotone operator on H, then, for a constant ρ > 0, the resolvent operator associated with A is defined by
J_A(u) = (I + ρA)⁻¹(u), ∀u ∈ H,
where I is the identity operator.
It is well known that a monotone operator is maximal if and only if its resolvent operator is defined everywhere. In addition, the resolvent operator is single-valued and nonexpansive, that is,
‖J_A(u) − J_A(v)‖ ≤ ‖u − v‖, ∀u, v ∈ H.
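To make the resolvent concrete, here is a small Python sketch (ours, not from the paper): for A = ∂f with f(x) = λ|x| on H = ℝ, the resolvent J_A = (I + ρA)⁻¹ is the well-known soft-thresholding map, and its nonexpansiveness can be checked numerically.

```python
# Sketch (an illustration, not from the paper): for A = ∂f with
# f(x) = lam*|x| on H = R, the resolvent J_A(x) = (I + rho*A)^(-1)(x)
# is the soft-thresholding map.

def soft_threshold(x, rho, lam=1.0):
    """Resolvent of A = ∂(lam*|.|): argmin_y lam*|y| + (y - x)**2 / (2*rho)."""
    t = rho * lam
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# Nonexpansiveness check: |J_A(u) - J_A(v)| <= |u - v| on a grid of pairs.
pts = [i / 10.0 for i in range(-30, 31)]
assert all(
    abs(soft_threshold(u, 0.5) - soft_threshold(v, 0.5)) <= abs(u - v) + 1e-12
    for u in pts for v in pts
)
```

The map is single-valued everywhere, in line with the maximality of the subdifferential of a convex function.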
Definition 2.4 An operator T: H → H is said to be:

(i)
strongly antimonotone, if there exists a constant \alpha >0 such that
⟨Tu − Tv, u − v⟩ ≤ −α‖u − v‖², ∀u, v ∈ H;
(ii)
Lipschitz continuous, if there exists a constant \beta >0 such that
‖Tu − Tv‖ ≤ β‖u − v‖, ∀u, v ∈ H;
(iii)
strongly monotone, if there exists a constant {\alpha}_{1}>0 such that
⟨Tu − Tv, u − v⟩ ≥ α₁‖u − v‖², ∀u, v ∈ H.
We would like to point out that the differential f′(·) of a strongly concave function satisfies Definition 2.4(i). Consequently, it is a strongly antimonotone operator.
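As a quick numerical illustration of Definition 2.4(i) (our example, not the paper's): the derivative of the strongly concave function f(u) = −u² on ℝ satisfies the strong antimonotonicity inequality with α = 2.

```python
# Sketch: the derivative of the strongly concave function f(u) = -u**2
# on R is fp(u) = -2u; it satisfies the strong antimonotonicity inequality
# <fp(u) - fp(v), u - v> <= -alpha*|u - v|**2 with alpha = 2.

def fp(u):
    return -2.0 * u

alpha = 2.0
pts = [i / 7.0 for i in range(-20, 21)]
assert all(
    (fp(u) - fp(v)) * (u - v) <= -alpha * (u - v) ** 2 + 1e-12
    for u in pts for v in pts
)
```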
3 Resolvent operator method
In this section, we establish the equivalence between the general variational inclusion (2.1) and the fixed point problem (3.1) using the resolvent operator technique. This alternative formulation is used to discuss the existence of a solution of the problem (2.1) and to suggest and analyze an iterative method for solving the variational inclusions (2.1).
Lemma 3.1 Let A be a maximal monotone operator. Then u ∈ H is a solution of the variational inclusion (2.1) if and only if u ∈ H satisfies the relation
g(u) = J_A[g(u) + ρTu], (3.1)
where J_A ≡ (I + ρA)⁻¹ is the resolvent operator and ρ > 0 is a constant.
Proof Let u ∈ H be a solution of (2.1). Then
0 ∈ A(g(u)) − Tu ⟺ g(u) + ρTu ∈ (I + ρA)(g(u)) ⟺ g(u) = J_A[g(u) + ρTu],
the required result. □
Lemma 3.1 implies that the general variational inclusion (2.1) is equivalent to the fixed point problem (3.1). This alternative equivalent formulation is very useful from the numerical and theoretical points of view.
We rewrite the relation (3.1) in the following form:
F(u) = u − g(u) + J_A[g(u) + ρTu], (3.2)
which is used to study the existence of a solution of the variational inclusion (2.1).
We now study the conditions under which the general variational inclusion (2.1) has a solution and this is the main motivation of our next result.
Theorem 3.1 Let the operator T: H → H be strongly antimonotone with constant α > 0 and Lipschitz continuous with constant β > 0. If the operator g is strongly monotone with constant σ > 0 and Lipschitz continuous with constant δ > 0 and
then there exists a solution of problem (2.1).
Proof From Lemma 3.1, it follows that problems (2.1) and (3.1) are equivalent. Thus it is enough to show that the map F(u), defined by (3.2), has a fixed point. For all u ≠ v ∈ H, we have
‖F(u) − F(v)‖ ≤ ‖u − v − (g(u) − g(v))‖ + ‖J_A[g(u) + ρTu] − J_A[g(v) + ρTv]‖ ≤ 2‖u − v − (g(u) − g(v))‖ + ‖u − v + ρ(Tu − Tv)‖, (3.4)
where we have used the fact that the resolvent operator {J}_{A} is nonexpansive.
Since the operator T is strongly antimonotone with constant α > 0 and Lipschitz continuous with constant β > 0, it follows that
‖u − v + ρ(Tu − Tv)‖² ≤ (1 − 2ρα + ρ²β²)‖u − v‖². (3.5)
In a similar way, using the strong monotonicity of g with constant σ > 0 and the Lipschitz continuity of g with constant δ > 0, we have
‖u − v − (g(u) − g(v))‖² ≤ (1 − 2σ + δ²)‖u − v‖². (3.6)
From (3.4), (3.5), and (3.6), we have
where
and
From (3.3), it follows that \theta <1. Thus the mapping F(u), defined by (3.2) is a contraction mapping and consequently has a fixed point belonging to H satisfying the general variational inclusion (2.1). □
Using the fixed point formulation (3.1), we suggest and analyze the following iterative method for solving the variational inclusion (2.1).
Algorithm 3.1 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_n[u_n − g(u_n) + J_A(g(u_n) + ρTu_n)], n = 0, 1, 2, …, (3.9)
where α_n ∈ [0, 1] for all n ≥ 0, which is known as the Mann iteration process for solving the general variational inclusion (2.1).
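A minimal Python sketch of such a Mann-type iteration on H = ℝ (a toy illustration of ours; the operators A(u) = 3u, T(u) = −u, g = I are assumptions, chosen so that T is strongly antimonotone and the unique zero of A(g(u)) − Tu is u* = 0):

```python
# Toy sketch of a Mann-type iteration for 0 in A(g(u)) - Tu on H = R
# (our illustration, not the paper's numerical experiment). We take
# A(u) = 3u (maximal monotone, so J_A(x) = x/(1 + 3*rho)), T(u) = -u
# (strongly antimonotone), g = identity; the unique zero is u* = 0.

rho = 0.5
alpha = 0.9          # constant relaxation, so sum(alpha_n) diverges

def J_A(x):          # resolvent (I + rho*A)^(-1) for A(u) = 3u
    return x / (1.0 + 3.0 * rho)

def T(u):
    return -u

def g(u):
    return u

u = 5.0
for _ in range(100):
    # u_{n+1} = (1-a)u_n + a[u_n - g(u_n) + J_A(g(u_n) + rho*T(u_n))]
    u = (1 - alpha) * u + alpha * (u - g(u) + J_A(g(u) + rho * T(u)))

assert abs(u) < 1e-8     # converges to the zero u* = 0
```

Note the plus sign inside the resolvent step, which reflects the difference (rather than the sum) of operators in the inclusion.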
If g\equiv I, the identity operator, then Algorithm 3.1 reduces to the following.
Algorithm 3.2 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_n J_A(u_n + ρTu_n), n = 0, 1, 2, …,
where α_n ∈ [0, 1] ∀n ≥ 0.
Algorithm 3.2 is known as the Mann iteration process for solving the variational inclusion (2.2), which was discussed in [17, 23].
If A(·) is the indicator function of a closed convex set K in H, then J_A = P_K, the projection of H onto the closed convex set K, and consequently Algorithm 3.1 reduces to the following method.
Algorithm 3.3 For a given u₀ ∈ H, find the approximate solution u_{n+1} by the iterative scheme
u_{n+1} = (1 − α_n)u_n + α_n[u_n − g(u_n) + P_K(g(u_n) + ρTu_n)], n = 0, 1, 2, …,
which is known as the Mann iteration process for solving the general variational inequalities (2.8).
We now consider the convergence analysis of Algorithm 3.1 and this is the main motivation of our next result.
Theorem 3.2 Let the operator T: H → H be strongly antimonotone with constant α > 0 and Lipschitz continuous with constant β > 0. Let the operator g be strongly monotone with constant σ > 0 and Lipschitz continuous with constant δ > 0. If (3.3) holds, 0 ≤ α_n ≤ 1 for all n ≥ 0, and ∑_{n=0}^{∞} α_n = ∞, then the approximate solution u_n obtained from Algorithm 3.1 converges to a solution u ∈ H satisfying the variational inclusion (2.1).
Proof Let u ∈ H be a solution of the general variational inclusion (2.1). Then, using Lemma 3.1, we have
u = (1 − α_n)u + α_n[u − g(u) + J_A(g(u) + ρTu)], (3.10)
where 0 ≤ α_n ≤ 1 is a constant.
From (3.9) and (3.10), we have
From (3.5), (3.6), (3.7), (3.8), and (3.11), we have
From (3.3), it follows that θ < 1. Thus
‖u_{n+1} − u‖ ≤ [1 − (1 − θ)α_n]‖u_n − u‖ ≤ ∏_{i=0}^{n}[1 − (1 − θ)α_i]‖u₀ − u‖.
Since ∑_{n=0}^{∞} α_n diverges and 1 − θ > 0, we have lim_{n→∞} ∏_{i=0}^{n}[1 − (1 − θ)α_i] = 0. Consequently the sequence {u_n} converges strongly to u ∈ H satisfying the general variational inclusion (2.1). This completes the proof. □
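The last step of the proof can be illustrated numerically (our sketch): with a divergent series of parameters α_n, the product ∏[1 − (1 − θ)α_i] collapses to zero, while a summable series of parameters leaves it bounded away from zero.

```python
# Sketch illustrating the key step of the convergence proof: if
# sum(alpha_n) = infinity and 1 - theta > 0, then
# prod_{i=0}^{n} [1 - (1 - theta)*alpha_i] -> 0, whereas a summable
# sequence alpha_n leaves the product bounded away from zero.

theta = 0.5

def product(alphas):
    p = 1.0
    for a in alphas:
        p *= 1.0 - (1.0 - theta) * a
    return p

n = 10**6
divergent = (1.0 / (i + 1) for i in range(n))      # harmonic: sum diverges
summable = (0.5 ** i for i in range(n))            # geometric: sum converges

p_div = product(divergent)
p_sum = product(summable)

assert p_div < 1e-2        # bounded by exp(-0.5 * H_n) -> 0
assert p_sum > 0.25        # stays bounded away from 0
```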
In recent years, much attention has been given to developing a class of two-step and three-step iterative methods for solving variational inclusions and inequalities using the technique of updating the solution. It has been shown that the three-step iterative methods, which are also called Noor iterations, are versatile in nature and efficient.
We now use the updating technique of the solution to rewrite (3.1) in the following form:
from which, we have
where 0\le {\alpha}_{n},{\beta}_{n},{\gamma}_{n}\le 1, for all n\ge 0.
Using this fixed point formulation, we can suggest and investigate the following threestep iterative methods for solving problem (2.1).
Algorithm 3.4 For given {u}_{0}, {y}_{0}, {w}_{0}, find {u}_{n+1}, {y}_{n+1}, {w}_{n+1} by the iterative schemes
where 0\le {\alpha}_{n},{\beta}_{n},{\gamma}_{n}\le 1, for all n\ge 0.
Algorithm 3.4 is called the Noor three-step iterative method for solving the general variational inclusion (2.1). This method can be considered as a Jacobi-type iterative method.
We now suggest another iterative method by using the updated value of the solution. This iterative method can be viewed as a Gauss-Seidel-type method.
Algorithm 3.5 For given {u}_{0}, {y}_{0}, {w}_{0}, find {u}_{n+1}, {y}_{n+1}, {w}_{n+1} by the iterative schemes
where 0\le {\alpha}_{n},{\beta}_{n},{\gamma}_{n}\le 1, for all n\ge 0.
If γ_n = 0, then Algorithm 3.4 and Algorithm 3.5 reduce to the following two-step iterative schemes for solving (2.1).
Algorithm 3.6 For given {u}_{0}, {y}_{0}, find {u}_{n+1}, {y}_{n+1} by the iterative schemes
where 0\le {\alpha}_{n},{\beta}_{n}\le 1, for all n\ge 0.
Algorithm 3.7 For given {u}_{0}, {y}_{0}, find {u}_{n+1}, {y}_{n+1} by the iterative schemes
where 0\le {\alpha}_{n},{\beta}_{n}\le 1, for all n\ge 0.
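Since the displayed schemes are not fully reproduced above, the following Python sketch shows one standard arrangement of a Gauss-Seidel-style three-step iteration built from the fixed point map of Lemma 3.1 (an assumption of ours, with the same toy operators A(u) = 3u, T(u) = −u, g = I as before):

```python
# Hedged sketch of a three-step (Noor-type) iteration built from the
# fixed point map F(u) = u - g(u) + J_A(g(u) + rho*T(u)); this is one
# standard Gauss-Seidel-style arrangement, not the paper's exact scheme.
# Toy data: A(u) = 3u, T(u) = -u, g = identity; the zero is u* = 0.

rho = 0.5

def J_A(x):
    return x / (1.0 + 3.0 * rho)   # resolvent of A(u) = 3u

def T(u):
    return -u

def g(u):
    return u

def F(u):
    return u - g(u) + J_A(g(u) + rho * T(u))

u = 5.0
for n in range(200):
    a = b = c = 0.9                # alpha_n, beta_n, gamma_n in [0, 1]
    w = (1 - c) * u + c * F(u)     # first step
    y = (1 - b) * u + b * F(w)     # second step uses the updated w
    u = (1 - a) * u + a * F(y)     # third step uses the updated y

assert abs(u) < 1e-10
```

Each step reuses the most recently updated iterate, which is what distinguishes the Gauss-Seidel arrangement from the Jacobi-type one.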
One can use the fixed point problem (3.10) to suggest the following iterative method for solving (2.1).
Algorithm 3.8 For a given {u}_{0}, find the approximate solution {u}_{n+1} by the iterative scheme
which is called the implicit or proximal point method. Using the technique of Noor [15, 17], one can investigate the convergence analysis of Algorithm 3.8.
In brief, one can obtain a wide class of new iterative methods for solving the general variational inclusions and related problems by selecting suitable and appropriate choices of the operators and the space. The interested reader is encouraged to study the convergence analysis of Algorithms 3.4-3.7, which is an interesting and challenging problem for future research. The implementation and comparison of these methods is another direction of future research.
4 Resolvent equations technique
In this section, we consider the problem of solving the resolvent equations. It is shown that the general variational inclusions (2.1) are equivalent to the general resolvent equations. This alternative equivalent formulation is used to suggest and investigate a class of iterative methods for solving the general variational inclusions (2.1).
We now consider the problem of solving the resolvent equations. Let R_A = I − J_A, where J_A is the resolvent operator and I is the identity operator. For given nonlinear operators T, A, g, consider the problem of finding z ∈ H such that
T g⁻¹ J_A z − ρ⁻¹ R_A z = 0. (4.1)
Equations of the type (4.1) are called the general resolvent equations; they were introduced and studied by Noor [17]. In particular, if A(·) = ∂f(·), where f is the indicator function of a closed convex set K in H, then it is well known that J_A = P_K, the projection of H onto K. In this case, the resolvent equations are the general Wiener-Hopf equations: find z ∈ H such that
T g⁻¹ P_K z − ρ⁻¹ Q_K z = 0, where Q_K = I − P_K,
which were introduced by Noor [16] in conjunction with the general variational inequalities (2.8). For g ≡ I, the identity operator, we obtain the original Wiener-Hopf equations, which were introduced and studied by Shi [35] in connection with variational inequalities. This shows that the Wiener-Hopf equations are a special case of the general resolvent equations. The resolvent equations technique has been used to study and develop several iterative methods for solving various types of variational inequality and inclusion problems, see [12, 23, 31–33].
Using Lemma 3.1, we show that the general variational inclusions (2.1) are equivalent to the general resolvent equations (4.1).
Lemma 4.1 The general variational inclusion (2.1) has a solution u ∈ H if and only if the general resolvent equations (4.1) have a solution z ∈ H, provided
g(u) = J_A z, (4.2)
z = g(u) + ρTu, (4.3)
where ρ > 0 is a constant.
Proof Let u ∈ H be a solution of (2.1). Then, from Lemma 3.1, we have
g(u) = J_A[g(u) + ρTu]. (4.4)
Taking z = g(u) + ρTu in (4.4), we have
g(u) = J_A z. (4.5)
From (4.5) and (4.4), we have
R_A z = z − J_A z = ρTu = ρ T g⁻¹ J_A z,
which shows that z\in H is a solution of the resolvent equations (4.1). This completes the proof. □
From Lemma 4.1, we conclude that the variational inclusion (2.1) and the resolvent equations (4.1) are equivalent. This alternative formulation plays an important and crucial part in suggesting and analyzing various iterative methods for solving variational inclusions and related optimization problems. In this paper, by a suitable and appropriate rearrangement, we suggest a number of new iterative methods for solving the variational inclusions (2.1).

(I)
Equation (4.1) can be written as
R_A z = ρ T g⁻¹ J_A z,
which implies that, using (4.2),
z = J_A z + ρ T g⁻¹ J_A z = g(u) + ρTu.
This fixed point formulation enables us to suggest the following iterative method for solving the variational inclusion (2.1).
Algorithm 4.1 For a given z₀ ∈ H, compute z_{n+1} by the iterative scheme
g(u_n) = J_A z_n,
z_{n+1} = (1 − α_n)z_n + α_n[g(u_n) + ρTu_n], n = 0, 1, 2, …,
where 0 ≤ α_n ≤ 1, for all n ≥ 0 and with ∑_{n=0}^{∞} α_n = ∞.
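On the same toy data as before (A(u) = 3u, T(u) = −u, g = I; our assumptions), here is a Python sketch of such a resolvent-equations iteration, assuming each step recovers u_n from g(u_n) = J_A z_n and then relaxes z:

```python
# Toy sketch of a resolvent-equations iteration in the style of
# Algorithm 4.1 (our illustration, not the paper's experiment):
# each step solves g(u_n) = J_A(z_n) and then updates
# z_{n+1} = (1 - a_n) z_n + a_n [ g(u_n) + rho*T(u_n) ].
# Toy data: A(u) = 3u, T(u) = -u, g = identity; the solution is z* = 0.

rho = 0.5
alpha = 0.9

def J_A(x):
    return x / (1.0 + 3.0 * rho)

def T(u):
    return -u

z = 4.0
for _ in range(100):
    u = J_A(z)                     # g = identity, so u_n = J_A(z_n)
    z = (1 - alpha) * z + alpha * (u + rho * T(u))

assert abs(z) < 1e-10              # z_n -> z* = 0, hence u_n -> u* = 0
```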

(II)
Equation (4.1) may be written as
z = J_A z + ρ T g⁻¹ J_A z + (1 − ρ⁻¹)R_A z = g(u) + ρTu + (1 − ρ⁻¹)R_A z.
Using this fixed point formulation, we suggest the following iterative method.
Algorithm 4.2 For a given {z}_{0}\in H, compute {u}_{n+1} by the iterative schemes
where 0\le {\alpha}_{n}\le 1, for all n\ge 0 and with {\sum}_{n=0}^{\mathrm{\infty}}{\alpha}_{n}=\mathrm{\infty}.

(III)
If the operator T is linear and T⁻¹ exists, then the resolvent equation (4.1) can be written as
z = (I + ρ⁻¹T⁻¹)R_A z,
which allows us to suggest the iterative method.
Algorithm 4.3 For a given z₀ ∈ H, compute z_{n+1} by the iterative scheme
z_{n+1} = (1 − α_n)z_n + α_n(I + ρ⁻¹T⁻¹)R_A z_n, n = 0, 1, 2, …,
where 0 ≤ α_n ≤ 1, for all n ≥ 0 and with ∑_{n=0}^{∞} α_n = ∞.
We would like to point out that one can obtain a number of iterative methods for solving the general variational inclusion (2.1) for suitable and appropriate choices of the operators T, A and the space H. This shows that the iterative methods suggested in this paper are more general and unifying ones.
We now study the convergence of Algorithm 4.1. In a similar way, one can analyze the convergence of the other iterative methods.
Theorem 4.1 Let the operators T, g satisfy all the assumptions of Theorem 3.1. If condition (3.3) holds, 0 ≤ α_n ≤ 1 for all n ≥ 0, and ∑_{n=0}^{∞} α_n = ∞, then the approximate solution {z_n} obtained from Algorithm 4.1 converges strongly to a solution z ∈ H of the resolvent equation (4.1).
Proof Let z ∈ H be a solution of (4.1). Then, using Lemma 4.1, we have
z = (1 − α_n)z + α_n[g(u) + ρTu], (4.7)
where 0 ≤ α_n ≤ 1, and with ∑_{n=0}^{∞} α_n = ∞.
From (4.7), (4.8), (3.6), and (3.5), we have
Also from (4.6), (4.2), (3.6), and the nonexpansivity of the resolvent operator {J}_{A}, we have
which implies that
Combining (4.11) and (4.10), we have
where
Using (3.3), we see that {\theta}_{1}<1 and consequently
Since ∑_{n=0}^{∞} α_n diverges and 1 − θ₁ > 0, we have lim_{n→∞} ∏_{i=0}^{n}[1 − (1 − θ₁)α_i] = 0. Consequently the sequence {z_n} converges strongly to z ∈ H, the required result. □
We now suggest another iterative method for solving the general variational inclusions (2.1). From (4.2) and (4.3), we have
Thus, for a positive parameter \gamma >0, we have
This fixed point formulation enables us to suggest the following iterative method for solving (2.1).
Algorithm 4.4 For a given u₀, find the approximate solution u_{n+1} by the iterative scheme
Using the technique of Noor [20, 23], one can study the convergence criteria of Algorithm 4.4. We leave this to the interested reader.
Conclusion
In this paper, we have shown that the problem of finding a zero of the difference of two (or more) operators is equivalent to a fixed point problem and to the resolvent equations. These alternative formulations have been used to study the existence of a zero of the difference of two (or more) operators as well as to suggest and analyze some iterative methods for solving the variational inclusions associated with the difference of operators. Our method and technique are very simple compared with other methods. The ideas and techniques presented in this paper may be used to consider the sensitivity analysis, dynamical systems, and other aspects of these variational inclusions. It is an interesting open problem to compare the techniques for finding the zeros of the difference of monotone operators. The interested reader is advised to explore this area further and discover novel and innovative applications of the variational inclusions. See [38] for a recent development of problem (2.1), where it is shown that problem (2.1) can be seen as a new and significant generalization of the DC programming case.
References
Adly S, Oettli W: Solvability of generalized nonlinear symmetric variational inequalities. J. Aust. Math. Soc. Ser. B, Appl. Math. 1999, 40: 289-300. 10.1017/S0334270000010912
An LTH, Pham DT: The DC programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. 2005, 133: 25-46.
Bnouhachem A, Noor MA, Khalfssoui M, Benazza H: General system of variational inequalities in Banach spaces. Appl. Math. Inform. Sci. 2014, 8(3): 985-991. 10.12785/amis/080307
Brezis H: Operateurs maximaux monotones. Mathematical Studies 5. North-Holland, Amsterdam; 1973.
Cristescu G, Lupsa L: Non-Connected Convexities and Applications. Kluwer Academic, Dordrecht; 2002.
Hamdi A: A Moreau-Yosida regularization of a difference of two convex functions. Appl. Math. E-Notes 2005, 5: 164-170.
Hamdi A: A modified Bregman proximal scheme to minimize the difference of two convex functions. Appl. Math. E-Notes 2006, 6: 132-140.
Khattri SK: Three proofs of the inequality e < (1 + 1/n)^{n+0.5}. Am. Math. Mon. 2010, 117(3): 273-277. 10.4169/000298910X480126
Khattri SK, Log T: Constructing third-order derivative-free iterative methods. Int. J. Comput. Math. 2011, 88(7): 1509-1518. 10.1080/00207160.2010.520705
Khattri SK, Log T: Derivative free algorithm for solving nonlinear equations. Computing 2011, 92: 169-179. 10.1007/s00607-010-0135-7
Lions PL, Mercier B: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16: 964-979. 10.1137/0716071
Moudafi A: On the difference of two maximal monotone operators: regularization and algorithmic approach. Appl. Math. Comput. 2008, 202: 446-452. 10.1016/j.amc.2008.01.024
Moudafi A, Mainge PE: On the convergence of an approximate proximal method for DC functions. J. Comput. Math. 2006, 24: 475-480.
Moudafi A, Noor MA: Split algorithms for new implicit feasibility null-point problems. Appl. Math. Inform. Sci. 2014, 8(5).
Noor MA: General variational inequalities. Appl. Math. Lett. 1988, 1: 119-121. 10.1016/0893-9659(88)90054-7
Noor MA: Wiener-Hopf equations and variational inequalities. J. Optim. Theory Appl. 1993, 79: 197-206. 10.1007/BF00941894
Noor MA: Some recent advances in variational inequalities. Part II. Other concepts. N.Z. J. Math. 1997, 26: 229-255.
Noor MA: Some algorithms for general monotone mixed variational inequalities. Math. Comput. Model. 1999, 29: 1-9.
Noor MA: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251: 217-229. 10.1006/jmaa.2000.7042
Noor MA: Some developments in general variational inequalities. Appl. Math. Comput. 2004, 152: 199-277. 10.1016/S0096-3003(03)00558-7
Noor MA: Differentiable nonconvex functions and general variational inequalities. Appl. Math. Comput. 2008, 199: 623-630. 10.1016/j.amc.2007.10.023
Noor MA: Extended general variational inequalities. Appl. Math. Lett. 2009, 22: 182-186. 10.1016/j.aml.2008.03.007
Noor MA: Variational Inequalities and Applications. Lecture Notes. Mathematics Department, COMSATS Institute of Information Technology, Islamabad, Pakistan, 2007-2013.
Noor MA, Noor KI: Auxiliary principle technique for solving split feasibility problems. Appl. Math. Inform. Sci. 2013, 7(1): 221-227. 10.12785/amis/070127
Noor MA, Noor KI: Sensitivity analysis of some quasi variational inequalities. J. Adv. Math. Stud. 2013, 6(1): 43-52.
Noor MA, Noor KI: Some new classes of quasi split feasibility problems. Appl. Math. Inform. Sci. 2013, 7(4): 1547-1552. 10.12785/amis/070439
Noor MA, Noor KI: Some parallel algorithms for a new system of quasi variational inequalities. Appl. Math. Inform. Sci. 2013, 7(6): 2493-2498. 10.12785/amis/070643
Noor MA, Noor KI, Khan AG: Some iterative schemes for solving extended general quasi variational inequalities. Appl. Math. Inform. Sci. 2013, 7(3): 917-925. 10.12785/amis/070309
Noor MA, Awan MU, Noor KI: On some inequalities for relative semi-convex functions. J. Inequal. Appl. 2013, 2013: Article ID 332.
Noor MA, Noor KI, Rassias TM: Some aspects of variational inequalities. J. Comput. Appl. Math. 1993, 47: 285-312. 10.1016/0377-0427(93)90058-J
Noor MA, Noor KI, Rassias TM: Set-valued resolvent equations and mixed variational inequalities. J. Math. Anal. Appl. 1998, 220: 741-759. 10.1006/jmaa.1997.5893
Noor MA, Noor KI, Hamdi A, El-Shemas EH: On difference of two monotone operators. Optim. Lett. 2009, 3: 329-335. 10.1007/s11590-008-0112-7
Noor MA, Noor KI, El-Shemas EH, Hamdi A: Resolvent iterative methods for difference of two monotone operators. Int. J. Optim.: Theory Methods Appl. 2009, 1: 15-25.
Noor MA, Noor KI, Awan MU: Geometrically relative convex functions. Appl. Math. Inform. Sci. 2014, 8(2): 607-616. 10.12785/amis/080218
Shi P: Equivalence of variational inequalities with Wiener-Hopf equations. Proc. Am. Math. Soc. 1991, 111: 339-346. 10.1090/S0002-9939-1991-1037224-3
Stampacchia G: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258: 4413-4416.
Tuy H: Global minimization of a difference of two convex functions. Math. Program. Stud. 1987, 30: 150-182. 10.1007/BFb0121159
Moudafi A: On critical points of the differences of two maximal monotone operators. Afr. Math. 2013. 10.1007/s13370-013-0218-7
Acknowledgements
The authors would like to thank Dr. S.M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing excellent research facilities. The authors are grateful to the referees for their constructive comments and suggestions. This research is supported by HEC Project NRPU No: 201966/R&D/112553.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors worked jointly. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Noor, M.A., Noor, K.I. & Kamal, R. General variational inclusions involving difference of operators. J Inequal Appl 2014, 98 (2014). https://doi.org/10.1186/1029242X201498
Keywords
 monotone operators
 iterative method
 resolvent operator
 convergence