Let Ω be a bounded polygonal domain in {\mathbb{R}}^{2}. We consider a model problem defined as follows.

For given f\in {L}^{2}(\mathrm{\Omega}), find a scalar function *ϕ* on Ω that minimizes the cost functional

J(u):=\frac{1}{2}{\int}_{\mathrm{\Omega}}{(\mathrm{\Delta}u)}^{2}\,d\mathrm{\Omega}+{\int}_{\mathrm{\Omega}}|\mathrm{\nabla}u|\,d\mathrm{\Omega}-{\int}_{\mathrm{\Omega}}fu\,d\mathrm{\Omega}

(1)

over the convex set *K* defined by

K=\{u\in {H}_{0}^{1}(\mathrm{\Omega}):|\mathrm{\nabla}u|\le 1\text{ almost everywhere on }\mathrm{\Omega}\},

where the Sobolev space {H}_{0}^{1}(\mathrm{\Omega}) is the closure of {C}_{c}^{\mathrm{\infty}}(\mathrm{\Omega}) in the {H}^{1}(\mathrm{\Omega}) norm, i.e., the space of {H}^{1}(\mathrm{\Omega}) functions vanishing on \partial \mathrm{\Omega} in the sense of traces.

Variational inequalities can be divided into two main classes: elliptic and parabolic variational inequalities. Glowinski studies both classes in detail in [3]. He distinguishes elliptic variational inequalities (EVI) of the first kind and of the second kind and defines them in a functional setting as follows.

Let *V* denote a real Hilbert space with the inner product (\cdot ,\cdot ) and the associated norm \parallel \cdot \parallel. {V}^{\ast} is the dual space of *V*, a(\cdot ,\cdot ):V\times V\to \mathbb{R} is a bilinear, continuous and *V*-elliptic form on V\times V, L:V\to \mathbb{R} is a continuous linear functional, *K* is a closed, convex, nonempty subset of *V*, and j(\cdot ):V\to \mathbb{R}\cup \{\mathrm{\infty}\} is a convex, lower semi-continuous functional. The first and second kind EVI are typically defined in the following way.

The first kind EVI: find \varphi \in K such that

a(\varphi ,v-\varphi )\ge L(v-\varphi )\quad \text{for every }v\in K.

The second kind EVI: find \varphi \in V such that

a(\varphi ,v-\varphi )+j(v)-j(\varphi )\ge L(v-\varphi )\quad \text{for every }v\in V.
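As a concrete illustration (an example of ours, not quoted from [3]), the classical obstacle problem is an EVI of the first kind:

```latex
\[
  V = H_0^1(\Omega), \quad
  a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, d\Omega, \quad
  L(v) = \int_\Omega f v \, d\Omega, \quad
  K = \{ v \in V : v \ge \psi \text{ a.e. on } \Omega \},
\]
with a given obstacle $\psi$; the corresponding first-kind EVI
\[
  a(\varphi, v - \varphi) \ge L(v - \varphi) \quad \text{for every } v \in K
\]
describes the equilibrium position of a membrane loaded by $f$ and constrained to lie above $\psi$.
```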

The next lemma sets up the connection between optimization and VI problems.

**Lemma 1** [3]

*Let* b:V\times V\to \mathbb{R} *be a symmetric continuous bilinear* *V*-*elliptic form*. *Let* L\in {V}^{\ast} *and* j:V\to \mathbb{R}\cup \{\mathrm{\infty}\} *be a convex lower semi*-*continuous proper functional*. *Let*

J(v)=\frac{1}{2}b(v,v)+j(v)-L(v).

*Then the minimization problem*, *find* *ϕ* *such that*

J(\varphi )\le J(v)\quad \mathit{\text{for every }}v\in V,\ \varphi \in V,

*has a unique solution which is characterized by*

b(\varphi ,v-\varphi )+j(v)-j(\varphi )\ge L(v-\varphi )\quad \mathit{\text{for every }}v\in V,\ \varphi \in V.

*Proof* See p.7 of [3]. □

In the notation of Lemma 1, problem (1) corresponds to the choices

b(u,v)={\int}_{\mathrm{\Omega}}\mathrm{\Delta}u\mathrm{\Delta}v\,d\mathrm{\Omega},\qquad j(v)={\int}_{\mathrm{\Omega}}|\mathrm{\nabla}v|\,d\mathrm{\Omega},\qquad L(v)={\int}_{\mathrm{\Omega}}fv\,d\mathrm{\Omega}.

We will approximate (1) with a finite element method introduced in [3]. Recall that Ω is a polygonal domain of {\mathbb{R}}^{2}. Consider a triangulation {\mathfrak{F}}_{h} of Ω in the following sense: {\mathfrak{F}}_{h} is a finite set of triangles {T}_{i} such that

\bigcup _{{T}_{i}\in {\mathfrak{F}}_{h}}{T}_{i}=\overline{\mathrm{\Omega}}\quad \text{and}\quad {T}_{i}^{\circ}\cap {T}_{j}^{\circ}=\mathrm{\varnothing}\text{ for }i\ne j.

Here {T}_{i}^{\circ} denotes the interior of the corresponding triangle. Furthermore, for all {T}_{1},{T}_{2}\in {\mathfrak{F}}_{h} with {T}_{1}\ne {T}_{2}, exactly one of the following conditions must hold:

(i) {T}_{1}\cap {T}_{2}=\mathrm{\varnothing},

(ii) {T}_{1} and {T}_{2} share exactly one common vertex,

(iii) {T}_{1} and {T}_{2} share exactly one whole common edge.

Here *h* denotes the length of the longest edge among the triangles of the triangulation. Define {P}_{k} as the space of polynomials in {x}_{1} and {x}_{2} of degree less than or equal to *k*, and

{\mathrm{\Sigma}}_{h}:=\{P\in \overline{\mathrm{\Omega}}:P\text{ is a vertex of some }T\in {\mathfrak{F}}_{h}\}.
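The mesh parameter *h* above is straightforward to compute from the triangle list. A minimal sketch (the input format and function name are illustrative, not from the paper):

```python
import math

def mesh_size(triangles):
    """Return h, the length of the longest edge over all triangles.

    `triangles` is a list of 3-tuples of (x, y) vertex coordinates --
    a hypothetical input format; any mesh data structure exposing the
    vertex coordinates of each triangle would do.
    """
    h = 0.0
    for a, b, c in triangles:
        # Visit the three edges (a,b), (b,c), (c,a) of the triangle.
        for p, q in ((a, b), (b, c), (c, a)):
            h = max(h, math.dist(p, q))
    return h

# Two triangles splitting the unit square along its diagonal:
tris = [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
print(mesh_size(tris))  # longest edge is the diagonal, sqrt(2)
```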

The space V={H}_{0}^{1}(\mathrm{\Omega}) is approximated by the family of subspaces {({V}_{h}^{k})}_{h} with k=1 or k=2, where

{V}_{h}^{k}:=\{{v}_{h}\in {C}^{0}(\overline{\mathrm{\Omega}}):{v}_{h}{|}_{\partial \mathrm{\Omega}}=0\text{ and }{v}_{h}{|}_{T}\in {P}_{k}\ \mathrm{\forall}T\in {\mathfrak{F}}_{h}\},\quad k=1,2.
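A function in {V}_{h}^{1} is determined by its values at the vertices; on a single triangle it can be evaluated through barycentric coordinates. A minimal sketch (names are ours, for illustration only):

```python
def eval_p1(tri, vals, p):
    """Evaluate a piecewise-linear (P1) function on one triangle at
    point p, via barycentric coordinates.

    tri  : three (x, y) vertices of the triangle
    vals : nodal values of the function at those vertices
    p    : evaluation point (x, y), assumed inside the triangle
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    # Signed double area of the triangle:
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    # Barycentric coordinates l1, l2, l3 of p (they sum to 1):
    l2 = ((px - x1) * (y3 - y1) - (x3 - x1) * (py - y1)) / det
    l3 = ((x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)) / det
    l1 = 1.0 - l2 - l3
    return l1 * vals[0] + l2 * vals[1] + l3 * vals[2]

# On the unit reference triangle, midpoint of the hypotenuse:
print(eval_p1(((0, 0), (1, 0), (0, 1)), (0, 1, 2), (0.5, 0.5)))  # 1.5
```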

It is obvious that the {V}_{h}^{k} are finite dimensional. Then the set *K* is approximated by

{K}_{h}^{k}=\{{v}_{h}\in {V}_{h}^{k}:{v}_{h}(P)\ge \psi (P)\ \mathrm{\forall}P\in {\mathrm{\Sigma}}_{h}^{k}\},\quad k=1,2.

Notice that {K}_{h}^{k}, k=1,2, is a nonempty, closed and convex subset of {V}_{h}^{k}.
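Membership in {K}_{h}^{1} is a pointwise condition at the vertices, so forcing a nodal vector into {K}_{h}^{1} amounts to a componentwise max with the obstacle values. A minimal sketch (this nodal truncation is a simple feasibility fix, not the {H}^{1}-orthogonal projection; names are ours):

```python
def truncate_to_Kh(v_nodal, psi_nodal):
    """Enforce v_h(P) >= psi(P) at every vertex P by a componentwise
    max, returning a nodal vector that lies in K_h (sketch only)."""
    return [max(v, p) for v, p in zip(v_nodal, psi_nodal)]

# Values below the obstacle (here psi = 0) are lifted up to it:
print(truncate_to_Kh([0.2, -0.5, 1.0], [0.0, 0.0, 0.0]))  # [0.2, 0.0, 1.0]
```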

With these settings, the solution \varphi \in K is approximated by {\varphi}_{h}\in {K}_{h} satisfying

b({\varphi}_{h},u-{\varphi}_{h})+j(u)-j({\varphi}_{h})\ge L(u-{\varphi}_{h})\quad \text{for every }u\in {K}_{h},

(2)

or, equivalently (by Lemma 1), by the minimizer {\varphi}_{h} of *J* over {K}_{h}.

Using the augmented Lagrangian method, we find a discrete solution of (1) as follows. First, we introduce the Lagrangian functional

\hat{\mathcal{L}}(u,\mu )=\frac{1}{2}b(u,u)+{\int}_{\mathrm{\Omega}}|\mathrm{\nabla}u|\,d\mathrm{\Omega}-{\int}_{\mathrm{\Omega}}fu\,d\mathrm{\Omega}+{\int}_{\mathrm{\Omega}}\mu (|\mathrm{\nabla}u|-1)\,d\mathrm{\Omega}.

For r\ge 0, an augmented Lagrangian {\mathcal{L}}_{r} is defined by

{\mathcal{L}}_{r}(u,\mu )=\hat{\mathcal{L}}(u,\mu )+\frac{r}{2}{\int}_{\mathrm{\Omega}}{(|\mathrm{\nabla}u|-1)}^{2}\,d\mathrm{\Omega}.

(3)

Augmented Lagrangian methods for VI problems were introduced by Glowinski and Marrocco (see [8]). Theorem 2.1 on p.168 of [3] guarantees the existence of a solution of this optimization problem.

Note that the first component {\varphi}_{h} of a saddle point of (3) is then the solution of the discrete problem (2), and hence approximates the solution of the original problem (1). Using the techniques of the variational calculus, we can write

((1+{\mu}_{h})\mathrm{\Delta}{\varphi}_{h},\mathrm{\Delta}(v-{\varphi}_{h}))+j(v)-j({\varphi}_{h})-L(v-{\varphi}_{h})=({\mu}_{h},\mathrm{\Delta}(v-{\varphi}_{h}))\quad \mathrm{\forall}v\in {V}_{h}.

The resulting solution algorithm can be described as follows:

(i) Choose an initial iterate {\mu}_{h}^{0}, a step size \lambda \ge 0, and set \alpha =0;

(ii) Solve the linear problem ((1+{\mu}_{h}^{\alpha})\mathrm{\Delta}{\varphi}_{h}^{\alpha},\mathrm{\Delta}(v-{\varphi}_{h}^{\alpha}))+j(v)-j({\varphi}_{h}^{\alpha})-L(v-{\varphi}_{h}^{\alpha})=({\mu}_{h}^{\alpha},\mathrm{\Delta}(v-{\varphi}_{h}^{\alpha})), \mathrm{\forall}v\in {V}_{h};

(iii) Update {\mu}_{h}^{\alpha +1}=max\{0,{\mu}_{h}^{\alpha}+\lambda (|\mathrm{\nabla}{\varphi}_{h}^{\alpha}|-1)\} on each cell;

(iv) Set \alpha =\alpha +1 and go back to (ii).
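The cell-wise multiplier update in step (iii) can be sketched in code. On {P}_{1} elements the gradient of {\varphi}_{h} is constant on each triangle, so |\mathrm{\nabla}{\varphi}_{h}| is a single number per cell. The following is a minimal illustrative sketch (function and variable names are ours, not from the paper):

```python
import math

def p1_gradient(tri, vals):
    """Constant gradient of a piecewise-linear (P1) function on one
    triangle, from its three vertex values.

    tri  : three (x, y) vertices; vals: nodal values at those vertices.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    # Gradients of the three barycentric basis functions:
    g1 = ((y2 - y3) / det, (x3 - x2) / det)
    g2 = ((y3 - y1) / det, (x1 - x3) / det)
    g3 = ((y1 - y2) / det, (x2 - x1) / det)
    gx = vals[0] * g1[0] + vals[1] * g2[0] + vals[2] * g3[0]
    gy = vals[0] * g1[1] + vals[1] * g2[1] + vals[2] * g3[1]
    return gx, gy

def update_multipliers(mu, tris, nodal_vals, step):
    """Step (iii): mu^{a+1} = max(0, mu^a + step * (|grad phi_h| - 1))
    on each cell; mu holds one multiplier value per triangle."""
    new_mu = []
    for m, tri, vals in zip(mu, tris, nodal_vals):
        gx, gy = p1_gradient(tri, vals)
        new_mu.append(max(0.0, m + step * (math.hypot(gx, gy) - 1.0)))
    return new_mu
```

For instance, on the unit reference triangle with nodal values (0, 2, 0) the gradient is (2, 0), so a zero multiplier with step 1 becomes max(0, 0 + 1·(2 − 1)) = 1; where the gradient constraint is already satisfied, the multiplier is driven toward 0.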