Self-adaptive alternating direction method of multiplier for a fourth order variational inequality

Abstract

We propose an alternating direction method of multiplier for the approximate solution of the unilateral obstacle problem with the biharmonic operator. We introduce an auxiliary unknown and an augmented Lagrangian functional to deal with the inequality constraint, and we deduce a constrained minimization problem that is equivalent to a saddle-point problem. The alternating direction method of multiplier is then applied to this problem. Using iterative functions, a self-adaptive rule adjusts the penalty parameter automatically. We show the convergence of the method and describe the choice of the penalty parameter in detail. Finally, numerical results are given to illustrate the efficiency of the proposed method.

1 Introduction

The alternating direction method of multiplier (ADMM), also known as the Uzawa block relaxation method, has many applications in structured optimization problems; see, e.g., variational inequalities [1, 2] and contact problems [3–6]. At each iteration, only a linear problem needs to be solved, while the auxiliary unknown and the Lagrange multiplier are computed explicitly. The ADMM is globally convergent for any positive penalty parameter. However, the convergence speed of the method is very sensitive to this parameter, and it is difficult to choose the optimal parameter in practice [7].

In this paper, we focus on the combination of the ADMM and a self-adaptive rule for a fourth-order variational inequality [8–17]. Since the convergence speed of the ADMM heavily relies on the penalty parameter, we propose a self-adaptive rule that chooses a proper parameter by using iterative functions [2, 18–22]. Then we obtain a self-adaptive alternating direction method of multiplier (SADMM) for the variational inequality.

The paper is organized as follows. In Sect. 2, some basic results are introduced in order to formulate the fourth-order variational inequality, and the ADMM is applied to the problem. In Sect. 3, we propose the SADMM and show the convergence of the method. In Sect. 4, we present the self-adaptive rule in detail and give some numerical results to illustrate the performance of the method. Finally, some conclusions and perspectives are given in Sect. 5.

2 Fourth order variational inequality and the alternating direction method of multiplier

Let Ω be a bounded polygonal domain in \(\mathbb{R}^{2}\) with the boundary \(\Gamma =\partial \Omega \). For given \(f\in L^{2}(\Omega )\) and \(\psi \in C(\bar{\Omega})\cap C^{2}(\Omega )\) with \(\psi <0\) on Ω, we consider the following fourth-order variational inequality: find \(u\in K\) such that

$$ a(u,v-u)\ge (f,v-u),\ \forall v \in K, $$
(2.1)

where

$$ a(u,v)\colon =\int _{\Omega}\sum _{i,j=1}^{2}\big( \frac{\partial ^{2} u}{\partial x_{i}\partial x_{j}}\big) \big( \frac{\partial ^{2} v}{\partial x_{i}\partial x_{j}}\big)dx,\,\, (f,v) \colon =\int _{\Omega}fvdx, $$

and

$$ K:=\{v\in H_{0}^{2}(\Omega ),\ v\ge \psi \ \text{in}\ \Omega \}. $$

The bilinear form \(a(u,v)\) is \(H_{0}^{2}\)-elliptic and Lipschitz continuous, i.e.,

$$ a(u,u)\ge \alpha \|u\|^{2}_{H_{0}^{2}(\Omega )},\, a(v,w)\le \beta \|v \|_{H_{0}^{2}(\Omega )}\|w\|_{H_{0}^{2}(\Omega )} $$

for some \(\alpha >0\) and \(\beta >0\) independent of u, v, w in \(H_{0}^{2}(\Omega )\). The variational inequality (2.1) can be formulated equivalently as the constrained minimization problem

$$ u=\arg \min _{v\in K}E(v):=\frac{1}{2}a(v,v)-(f,v). $$
(2.2)

It is well known that the unique solution \(u\in K\) of (2.1) can be characterized by (2.2) [8, 9, 16].

As in [9], we introduce a Lagrange multiplier \(\lambda \in L^{2}(\Omega ; [0,+\infty ))\) such that

$$ \int _{\Omega}(u-\psi )\lambda dx=0, $$
(2.3)

and

$$ a(u,v)=(f,v)+\int _{\Omega}v\lambda dx,\ \forall v\in H_{0}^{2}( \Omega ). $$
(2.4)

Then problem (2.1) (or (2.2)) and the system (2.3)–(2.4) are equivalent. The system (2.3)–(2.4) admits a unique solution \(u\in H_{0}^{2}(\Omega )\) and an associated Lagrange multiplier \(\lambda \in L^{2}(\Omega )\). Since u and λ satisfy \(u\in K\) and \(\lambda \in L^{2}(\Omega ; [0,+\infty ))\), it follows that (2.3) is equivalent to

$$ \lambda =(\lambda -{\rho}(u-\psi ))^{+}, $$
(2.5)

for any parameter \(\rho >0\), where \((v)^{+}:=\max (0,v)\) [10, 16].
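As a concrete illustration of the equivalence between (2.3) and (2.5), the following small sketch (written in Python with NumPy purely for illustration; the FreeFem++ environment used later in the paper is not involved here) checks the projection identity componentwise on hand-made sample vectors satisfying the complementarity conditions.

```python
import numpy as np

# Pointwise sanity check of (2.5): if u >= psi, lambda >= 0 and
# (u - psi) * lambda = 0 hold componentwise, then
# lambda = max(0, lambda - rho * (u - psi)) for every rho > 0.
# The vectors below are invented sample data, not from the paper.
psi = np.array([-1.0, 0.2, 0.5, -0.3])
u = np.array([0.0, 0.2, 0.5, 1.0])        # u >= psi, contact at indices 1 and 2
lam = np.array([0.0, 1.3, 0.7, 0.0])      # lambda > 0 only where u = psi

for rho in (0.1, 1.0, 1000.0):
    rhs = np.maximum(0.0, lam - rho * (u - psi))
    assert np.allclose(lam, rhs), f"projection identity fails for rho={rho}"
print("lambda = (lambda - rho*(u - psi))^+ holds for all tested rho")
```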

To obtain the solution of the minimization problem (2.2) by using the ADMM [2, 3, 7], we introduce the set

$$ \hat{K}:=\{q\in L^{2}(\Omega ), q\ge \psi \text{ a.e. in } \Omega \}, $$

and its indicator functional

$$ I_{\hat{K}}(q)= \textstyle\begin{cases} 0 &\text{if } q\in \hat{K}, \\ +\infty &\text{otherwise}. \end{cases} $$

Let us introduce an auxiliary unknown \(p\in L^{2}(\Omega )\) with \(p=u\) in Ω; then (2.2) is equivalent to the following constrained minimization problem

$$ \textstyle\begin{cases} \text{Find } (u,p)\in H_{0}^{2}(\Omega )\times L^{2}(\Omega ) \text{ such that} \\ E(u)+I_{\hat{K}}(p)=\min _{(v,q)\in W}\{E(v)+I_{\hat{K}}(q)\}, \end{cases} $$

where \(W=\{(v,q)\in H_{0}^{2}(\Omega )\times L^{2}(\Omega ), v-q=0\}\). For this problem, we define an augmented Lagrangian \(L_{\rho}\) as

$$ L_{\rho}(v,q,\mu )=E(v)+I_{\hat{K}}(q)-\int _{\Omega}\mu (v-q)dx+ \frac{\rho}{2}\parallel v-q\parallel ^{2}_{L^{2}(\Omega )}, $$

where \(\{v, q, \mu \}\in H_{0}^{2}(\Omega )\times L^{2}(\Omega )\times L^{2}( \Omega )\). We consider the following saddle-point problem

$$ L_{\rho}(u,p,\mu )\le L_{\rho}(u,p,\lambda )\le L_{\rho}(v,q,\lambda ), \ \forall \{v,q,\mu \}\in H_{0}^{2}(\Omega )\times L^{2}(\Omega ) \times L^{2}(\Omega ). $$
(2.6)

Then we have the following result for problems (2.2) and (2.6) [1, 7].

Lemma 1

Let \(\{(u,p),\lambda \}\) be the solution of the saddle-point problem (2.6), then u is the solution of (2.2) and \(p=u\).

From Lemma 1 we can apply the ADMM to the saddle-point problem (2.6) and obtain the solution of the minimization problem (2.2) [2]. The method computes \(p^{n+1}\) and \(u^{n+1}\) by successive minimizations and then updates the multiplier as follows

$$\begin{aligned} &p^{n+1}=\arg \min \limits _{q} L_{\rho}(u^{n},q,\lambda ^{n}),\ q \in L^{2}(\Omega ). \end{aligned}$$
(2.7)
$$\begin{aligned} &u^{n+1}=\arg \min \limits _{v}L_{\rho}(v,p^{n+1},\lambda ^{n}),\ v \in H_{0}^{2}(\Omega ). \end{aligned}$$
(2.8)
$$\begin{aligned} &\lambda ^{n+1}=\lambda ^{n}-\rho (u^{n+1}-p^{n+1}). \end{aligned}$$
(2.9)

In this method, \(p^{n+1}\) in minimization (2.7) can be computed explicitly from \(u^{n}\) and \(\lambda ^{n}\). Minimization (2.8) leads to a variational problem that has a unique solution for given \(\lambda ^{n}\), \(p^{n+1}\) and \(\rho >0\) [2, 3]. Then we obtain the following ADMM algorithm.

Algorithm 1

(ADMM)

Step 0: \(\lambda ^{0}\in L^{2}(\Omega )\), \(u^{0}\in H_{0}^{2}(\Omega )\) and \(\rho >0\) are given arbitrarily; set \(n:=0\).

Step 1: Compute \(p^{n+1}\) by

$$ p^{n+1}=u^{n}-\frac{1}{\rho}[\lambda ^{n}-(\lambda ^{n}-{\rho}(u^{n}- \psi ))^{+}]. $$
(2.10)

Step 2: Find \(u^{n+1}\in H_{0}^{2}(\Omega )\) solution to the following variational problem

$$ a(u^{n+1},v)+\rho ( u^{n+1},v)=(f,v)+(\lambda ^{n}+\rho p^{n+1},v),\ \forall v\in H_{0}^{2}(\Omega ). $$
(2.11)

Step 3: Update the multiplier \(\lambda ^{n+1}\) by

$$ \lambda ^{n+1}=\lambda ^{n}-\rho (u^{n+1}-p^{n+1}), $$
(2.12)

and set \(n:=n+1\) and go to Step 1.
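To make the three steps of Algorithm 1 concrete, here is a minimal sketch of the iteration for a discretized problem, written in Python with NumPy as an illustration. The stiffness matrix A, the mass matrix M, the load vector f and the nodal obstacle values psi are assumed to be provided by an external finite element assembly (cf. the discrete notation in Sect. 3); the dense solver and the relative stopping test are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def admm(A, M, f, psi, rho, u0, lam0, tol=1e-4, max_iter=1000):
    """Minimal sketch of Algorithm 1 (ADMM) for a discretized problem.

    A, M : stiffness and mass matrices (N x N, symmetric positive definite)
    f    : discrete load vector (N,)
    psi  : nodal values of the obstacle (N,)
    All inputs are assumed to come from an external FEM assembly.
    """
    u, lam = u0.copy(), lam0.copy()
    for _ in range(max_iter):
        # Step 1: explicit update of the auxiliary unknown, cf. (2.10)
        p = u - (lam - np.maximum(0.0, lam - rho * (u - psi))) / rho
        # Step 2: linear solve for u^{n+1}, cf. (2.11)
        u_new = np.linalg.solve(A + rho * M, f + M @ (lam + rho * p))
        # Step 3: multiplier update, cf. (2.12)
        lam = lam - rho * (u_new - p)
        # relative stopping test (the criterion used in Sect. 4)
        if np.linalg.norm(u_new - u) ** 2 <= tol * np.linalg.norm(u_new) ** 2:
            u = u_new
            break
        u = u_new
    return u, p, lam
```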

3 Self-adaptive alternating direction method of multiplier

As is well known, Algorithm 1 (ADMM) is convergent for any fixed parameter \(\rho >0\). However, the rate of convergence of the method strongly depends on the penalty parameter, and it is difficult to choose the optimal parameter for individual problems. To improve the efficiency of the ADMM, we use iterative functions and propose a self-adaptive rule to adjust the parameter [2, 18–22]. In this paper, we suppose that there is a nonnegative sequence \(\{s_{n}\}\) such that

$$ \sum _{n=0}^{+\infty}s_{n}< +\infty . $$
(3.1)

Then we propose a self-adaptive alternating direction method of multiplier (SADMM), with an automatic penalty parameter selection, as follows.

Algorithm 2

(SADMM)

Step 0: \(\lambda ^{0}\in L^{2}(\Omega )\), \(u^{0}\in H_{0}^{2}(\Omega )\) and \(\rho >0\) are given arbitrarily; set \(\rho _{0}=\rho \) and \(n:=0\).

Step 1: Compute \(p^{n+1}\) by

$$ p^{n+1}=u^{n}-\frac{1}{\rho _{n}}[\lambda ^{n}-(\lambda ^{n}-\rho _{n}(u^{n}- \psi ))^{+}]. $$
(3.2)

Step 2: Find \(u^{n+1}\in H_{0}^{2}(\Omega )\) solution to the following variational problem

$$ a(u^{n+1},v)+\rho _{n}( u^{n+1},v)=(f,v)+( \rho _{n}p^{n+1}+\lambda ^{n},v), \quad \forall v\in H_{0}^{2}(\Omega ). $$
(3.3)

Step 3: Update the Lagrange multiplier

$$ \lambda ^{n+1}=\lambda ^{n}-\rho _{n}(u^{n+1}-p^{n+1}). $$
(3.4)

Step 4: Adjust the penalty parameter \(\rho _{n+1}\) by

$$ \frac{1}{1+s_{n}}\rho _{n}\le \rho _{n+1}\le (1+s_{n})\rho _{n}, $$
(3.5)

set \(n:=n+1\) and go to Step 1.
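A corresponding sketch of Algorithm 2 in the same NumPy setting is given below; the only difference from the ADMM sketch above is Step 4, which is delegated to a user-supplied function update_rho that must return a value of \(\rho _{n+1}\) satisfying (3.5). This callback interface is an assumption made for these sketches; a concrete rule implementing (4.1)–(4.2) is sketched in Sect. 4.

```python
import numpy as np

def sadmm(A, M, f, psi, rho0, u0, lam0, update_rho, tol=1e-4, max_iter=1000):
    """Minimal sketch of Algorithm 2 (SADMM).

    `update_rho(rho, u_new, u, lam_new, lam)` must return rho_{n+1}
    satisfying the bound (3.5); see Sect. 4 for one such rule.
    """
    u, lam, rho = u0.copy(), lam0.copy(), rho0
    for _ in range(max_iter):
        # Step 1: auxiliary unknown, cf. (3.2)
        p = u - (lam - np.maximum(0.0, lam - rho * (u - psi))) / rho
        # Step 2: linear solve, cf. (3.3)
        u_new = np.linalg.solve(A + rho * M, f + M @ (rho * p + lam))
        # Step 3: multiplier update, cf. (3.4)
        lam_new = lam - rho * (u_new - p)
        # Step 4: adjust the penalty parameter within the bound (3.5)
        rho = update_rho(rho, u_new, u, lam_new, lam)
        if np.linalg.norm(u_new - u) ** 2 <= tol * np.linalg.norm(u_new) ** 2:
            u, lam = u_new, lam_new
            break
        u, lam = u_new, lam_new
    return u, p, lam, rho
```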

For the mapping \(v\to (v)^{+}\) in \(L^{2}(\Omega )\), we have some basic results that will be needed in the following analysis.

Lemma 2

For all \(v\in L^{2}(\Omega )\) and \(w\in L^{2}(\Omega ; [0,+\infty ))\), it holds that

$$ (v-(v)^{+},w-(v)^{+})\le 0, $$
(3.6)

and

$$ (v-w,(v)^{+}-(w)^{+})\ge \|(v)^{+}-(w)^{+}\|_{L^{2}(\Omega )}^{2}. $$
(3.7)

For any \(v,\mu \in L^{2}(\Omega )\), we define

$$ B_{\rho}(\mu ,v):=B(\mu ,v;\psi ,\rho ):=\mu -(\mu -\rho (v-\psi ))^{+}, $$
(3.8)

and introduce the following lemma [19].

Lemma 3

Let \(v,\mu ,\psi \in L^{2}(\Omega )\) and \(\tilde{\rho}\ge \rho >0\). Then we have

$$ \|B_{\tilde{\rho}}(\mu ,v)\|_{L^{2}(\Omega )}\ge \|B_{\rho}(\mu ,v)\|_{L^{2}( \Omega )}. $$
(3.9)

Proof

Using inequality (3.6) with \(v :=\mu -\rho (v-\psi )\) and \(w :=(\mu -\tilde{\rho}(v-\psi ))^{+}\), we have

$$ (\mu -\rho (v-\psi )-(\mu -\rho (v-\psi ))^{+},(\mu -\rho (v-\psi ))^{+}-( \mu -\tilde{\rho}(v-\psi ))^{+})\ge 0. $$
(3.10)

Note that \((\mu -\rho (v-\psi ))^{+}-(\mu -\tilde{\rho}(v-\psi ))^{+}=B_{ \tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v)\), it follows from (3.10) that

$$ (B_{\rho}(\mu ,v),B_{\tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v))\ge ( \rho (v-\psi ),B_{\tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v)). $$
(3.11)

Setting \(v :=\mu -\rho (v-\psi )\) and \(w :=\mu -\tilde{\rho}(v-\psi )\) in (3.7), we have

$$ (\tilde{\rho}-\rho )(v-\psi ,B_{\tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v)) \ge \|B_{\tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v)\|^{2}. $$
(3.12)

Combining (3.11) and (3.12), we obtain the inequality

$$ (B_{\rho}(\mu ,v),B_{\tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v))\ge 0, $$
(3.13)

then, using the Schwarz inequality, we have

$$ \|B_{\rho}(\mu ,v)\|_{L^{2}(\Omega )}\|B_{\tilde{\rho}}(\mu ,v)\|_{L^{2}( \Omega )}\ge (B_{\rho}(\mu ,v),B_{\tilde{\rho}}(\mu ,v))\ge \|B_{\rho}( \mu ,v)\|^{2}_{L^{2}(\Omega )}, $$
(3.14)

and the lemma is proved. □
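Lemma 3 can also be checked numerically; the following short sketch (again Python/NumPy, with randomly generated sample data used only for illustration) verifies that \(\rho \mapsto \|B_{\rho}(\mu ,v)\|\) is nondecreasing.

```python
import numpy as np

# Quick random check of the monotonicity (3.9): for fixed (mu, v, psi),
# the map rho -> ||B_rho(mu, v)|| is nondecreasing.  The sample data are
# random and serve only as an illustration.
rng = np.random.default_rng(0)

def B(mu, v, psi, rho):
    # B_rho(mu, v) = mu - (mu - rho * (v - psi))^+, cf. (3.8)
    return mu - np.maximum(0.0, mu - rho * (v - psi))

mu, v, psi = rng.normal(size=(3, 1000))
rhos = [0.1, 1.0, 10.0, 100.0]
norms = [np.linalg.norm(B(mu, v, psi, rho)) for rho in rhos]
assert all(norms[i] <= norms[i + 1] + 1e-12 for i in range(len(norms) - 1))
print(norms)
```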

For the convergence of Algorithm 2, we now give a lemma that will be needed in the following analysis.

Lemma 4

Let the sequence \(\{s_{n}\}\) satisfy \(s_{n}\ge 0\) and \(\sum \limits _{n=0}^{+\infty}s_{n}<+\infty \), then \(\prod \limits _{n=0}^{+\infty}(1+s_{n})<+\infty \).

Proof

Since \(s_{n}\ge 0\) and \(\sum \limits _{n=0}^{+\infty}s_{n}<+\infty \), the partial sums \(\sum \limits _{k=0}^{n}s_{k}\) converge to a finite limit. It follows that

$$ \begin{aligned} \prod _{n=0}^{+\infty}(1+s_{n})=&\lim \limits _{n\to +\infty}\prod _{k=0}^{n}(1+s_{k}) \\ \le &\lim \limits _{n\to +\infty}\left (\frac{1}{n+1}\sum _{k=0}^{n} \left (1+s_{k}\right )\right )^{n+1} \\ =&\lim \limits _{n\to +\infty}\left (\left (1+\frac{1}{n+1}\sum _{k=0}^{n}s_{k} \right )^{\frac{n+1}{\sum \limits _{k=0}^{n}s_{k}}}\right )^{\sum \limits _{k=0}^{n}s_{k}} \\ =&\lim \limits _{n\to +\infty}e^{\sum \limits _{k=0}^{n}s_{k}}< + \infty , \end{aligned} $$

here we use the arithmetic-geometric mean inequality and the limit \(\lim \limits _{x\to 0}(1+x)^{\frac{1}{x}}=e\). □
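As a quick numerical illustration of Lemma 4 (not part of the proof), one may take the summable sequence \(s_{n}=1/(n+1)^{2}\), chosen only for illustration, and check that the partial products stay below \(e^{\sum s_{n}}\), since \(1+x\le e^{x}\):

```python
import math

# Illustration of Lemma 4 with the summable sequence s_n = 1/(n+1)^2:
# since 1 + x <= e^x, the partial products of (1 + s_n) are bounded by
# exp(sum of s_n), hence the infinite product is finite.
s = [1.0 / (n + 1) ** 2 for n in range(10000)]
partial_sum = sum(s)                                # -> pi^2 / 6 ~ 1.6449
partial_prod = math.prod(1.0 + sn for sn in s)      # stays bounded
print(partial_sum, partial_prod, math.exp(partial_sum))
assert partial_prod <= math.exp(partial_sum)
```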

We denote by

$$ C_{s}:=\prod _{n=0}^{+\infty}(1+s_{n}). $$

Since \(\frac{1}{1+s_{n}}\rho _{n}\le \rho _{n+1}\le (1+s_{n})\rho _{n}\) holds for every \(n\ge 0\), we have \(\rho _{n}\in [\frac{1}{C_{s}}\rho _{0},C_{s}\rho _{0}]\), so the sequence \(\{\rho _{n}\}\) is bounded. Setting \(\rho _{L}=\inf \{\rho _{n}\}_{n=0}^{+\infty}\) and \(\rho _{U}=\sup \{\rho _{n}\}_{n=0}^{+\infty}\), we then have \(\rho _{L}\ge \frac{1}{C_{s}}\rho _{0}>0\) and \(\rho _{U}\le C_{s}\rho _{0}< +\infty \).

From Algorithm 2 (SADMM) and (2.3)–(2.4), we have the following convergence result for the algorithm [2, 7].

Theorem 1

Let \(\{(u^{n},p^{n}),\lambda ^{n}\}\) be the sequence generated by Algorithm 2 (SADMM), then

$$ \begin{aligned} &\|\delta \lambda ^{n+1}+\rho _{n}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \le \|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}(\Omega )}^{2} \\ &-\rho _{n}^{2}\| u^{n}-p^{n+1}\|_{L^{2}(\Omega )}^{2}-2\rho _{n}( \delta \lambda ^{n},\delta u^{n}), \end{aligned} $$
(3.15)

where \(\delta u^{n}=u^{n}-u\), \(\delta \lambda ^{n}=\lambda ^{n}-\lambda \), and \(\{u,\lambda \}\) is the solution of (2.4)–(2.5).

Proof

Let us set \(\delta p^{n}=p^{n}-p\). According to (2.4) and (3.3), we have

$$ \textstyle\begin{cases} a(u,v)=(f,v)+\int _{\Omega}v\lambda dx,& \forall v\in H_{0}^{2}( \Omega ), \\ a(u^{n+1},v)+\rho _{n}( u^{n+1},v)=(f,v)+( \rho _{n}p^{n+1}+\lambda ^{n},v), & \forall v\in H_{0}^{2}(\Omega ). \end{cases} $$

Taking \(v=\delta u^{n+1}\) in both equations and subtracting, we get

$$ a(\delta u^{n+1},\delta u^{n+1})=(\delta \lambda ^{n}-\rho _{n}(u^{n+1}-p^{n+1}), \delta u^{n+1}). $$
(3.16)

From (2.6), we have \(L_{\rho}(u,p,\mu )\le L_{\rho}(u,p,\lambda )\) and \(p=u\) [7]. Using \(p=u\) in (3.16), we obtain

$$ a(\delta u^{n+1},\delta u^{n+1})=(\delta \lambda ^{n}-\rho _{n}( \delta u^{n+1}-\delta p^{n+1}),\delta u^{n+1}), $$
(3.17)

and from (3.4) we also have

$$ \delta \lambda ^{n+1}=\delta \lambda ^{n}-\rho _{n}(\delta u^{n+1}- \delta p^{n+1}). $$

Since \(a(\cdot ,\cdot )\) is \(H_{0}^{2}\)-elliptic, it follows from (3.17) that

$$ a(\delta u^{n+1},\delta u^{n+1})=(\delta \lambda ^{n+1},\delta u^{n+1}) \ge \alpha \|\delta u^{n+1}\|^{2}_{H_{0}^{2}(\Omega )}\ge 0. $$
(3.18)

From (2.3) we have \(\lambda \ge 0\), \(u-\psi \ge 0\) and \((u-\psi ,\lambda )=0\) in Ω; hence we obtain

$$ \rho _{n}( u-\psi ,(\lambda ^{n}-\rho _{n}(u^{n}-\psi ))^{+}-\lambda ) \ge 0, $$
(3.19)

and

$$ ( \lambda ^{n}-\rho _{n}(u^{n}-\psi )-(\lambda ^{n}-\rho _{n}(u^{n}- \psi ))^{+},(\lambda ^{n}-\rho _{n}(u^{n}-\psi ))^{+}-\lambda )\ge 0. $$
(3.20)

Adding (3.19) and (3.20), we have

$$ ( \lambda ^{n}-\rho _{n}\delta u^{n}-(\lambda ^{n}-\rho _{n}(u^{n}- \psi ))^{+},(\lambda ^{n}-\rho _{n}(u^{n}-\psi ))^{+}-\lambda )\ge 0. $$

Then it follows from (3.2) that

$$ (\rho _{n}(u^{n}-p^{n+1})-\rho _{n}\delta u^{n},\rho _{n}(p^{n+1}-u^{n})+ \delta \lambda ^{n})\ge 0. $$

Therefore

$$ \rho _{n}(\delta \lambda ^{n},\delta u^{n})+\rho _{n}^{2}\| u^{n}-p^{n+1} \|_{L^{2}(\Omega )}^{2} \le \rho _{n}(\delta \lambda ^{n}+\rho _{n} \delta u^{n},u^{n}-p^{n+1}). $$
(3.21)

From (3.4), it follows that

$$ \delta \lambda ^{n+1}+\rho _{n}\delta u^{n+1}=\delta \lambda ^{n}+ \rho _{n}\delta u^{n}-\rho _{n}(u^{n}-p^{n+1}). $$

Consequently, from (3.21) we have

$$ \begin{aligned} &\|\delta \lambda ^{n+1}+\rho _{n}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \\ =&\|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}(\Omega )}^{2}+ \rho _{n}^{2}\| u^{n}-p^{n+1}\|_{L^{2}(\Omega )}^{2}-2\rho _{n}( \delta \lambda ^{n}+\rho _{n}\delta u^{n},u^{n}-p^{n+1}) \\ \le &\|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}(\Omega )}^{2}- \rho _{n}^{2}\| u^{n}-p^{n+1}\|_{L^{2}(\Omega )}^{2}-2\rho _{n}( \delta \lambda ^{n},\delta u^{n}), \end{aligned} $$

and this completes the proof. □

Theorem 2

The sequence \(\{u^{n},p^{n}\}\) generated by Algorithm 2 (SADMM) is such that

$$ u^{n}\to u\quad \textit{in}\, H_{0}^{2}(\Omega ),\quad p^{n}\to u\quad \textit{in}\, L^{2}(\Omega ), $$

where u is the solution of (2.4)–(2.5).

Proof

From \(s_{n}\ge 0\), \(0<\rho _{n+1}\le (1+s_{n})\rho _{n}\) and (3.18), we have

$$ \begin{aligned} &\|\delta \lambda ^{n+1}+\rho _{n+1}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \\ =&\|\delta \lambda ^{n+1}\|_{L^{2}(\Omega )}^{2}+\rho _{n+1}^{2}\| \delta u^{n+1}\|_{L^{2}(\Omega )}^{2}+2\rho _{n+1}(\delta \lambda ^{n+1}, \delta u^{n+1}) \\ \le &(1+s_{n})^{2}(\|\delta \lambda ^{n+1}\|_{L^{2}(\Omega )}^{2}+ \rho _{n}^{2}\|\delta u^{n+1}\|_{L^{2}(\Omega )}^{2}+2\rho _{n}( \delta \lambda ^{n+1},\delta u^{n+1})) \\ =&(1+s_{n})^{2}\|\delta \lambda ^{n+1}+\rho _{n}\delta u^{n+1}\|_{L^{2}( \Omega )}^{2}. \end{aligned} $$
(3.22)

Combining (3.22) with (3.15), we have

$$ \begin{aligned} &\|\delta \lambda ^{n+1}+\rho _{n+1}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \\ \le &(1+s_{n})^{2}\|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}( \Omega )}^{2}-2\rho _{n}(1+s_{n})^{2}(\delta \lambda ^{n},\delta u^{n}) \\ &-\rho _{n}^{2}(1+s_{n})^{2}\| u^{n}-p^{n+1}\|_{L^{2}(\Omega )}^{2}. \end{aligned} $$
(3.23)

so that

$$ \begin{aligned} \|\delta \lambda ^{n+1}+\rho _{n+1}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \le (1+s_{n})^{2}\|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}( \Omega )}^{2}-2\rho _{n}(1+s_{n})^{2}(\delta \lambda ^{n},\delta u^{n}). \end{aligned} $$

From (3.18) and \(\rho _{n}>0\), we have

$$ \rho _{n}(1+s_{n})^{2}(\delta \lambda ^{n},\delta u^{n})\ge 0. $$

It follows that

$$ \|\delta \lambda ^{n+1}+\rho _{n+1}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \le (1+s_{n})^{2}\|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}( \Omega )}^{2}. $$
(3.24)

Let \(\xi _{n}:=2s_{n}+s_{n}^{2}\), so that \((1+s_{n})^{2}=1+\xi _{n}\), and define \(C_{0}\), \(C_{1}\) by

$$ C_{0}:=\sum _{n=0}^{+\infty}\xi _{n},\quad C_{1}:=\prod _{n=0}^{+ \infty}(1+\xi _{n}), $$

then \(C_{0}<+\infty \) by (3.1) and \(C_{1}<+\infty \) by Lemma 4. From (3.24) it then follows that

$$ \begin{aligned} &\|\delta \lambda ^{n+1}+\rho _{n+1}\delta u^{n+1}\|_{L^{2}(\Omega )}^{2} \\ \le &(1+\xi _{n})\|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}( \Omega )}^{2} \\ \le &\prod _{i=0}^{n}(1+\xi _{i})\|\delta \lambda ^{0}+\rho _{0} \delta u^{0}\|_{L^{2}(\Omega )}^{2} \\ \le & C_{1}\|\delta \lambda ^{0}+\rho _{0}\delta u^{0}\|_{L^{2}( \Omega )}^{2},\ \forall n\ge 1, \end{aligned} $$
(3.25)

this implies that there exists a constant \(C>0\) such that

$$ \|\delta \lambda ^{n}+\rho _{n}\delta u^{n}\|_{L^{2}(\Omega )}^{2} \le C,\ \forall n\ge 0. $$
(3.26)

Then, from (3.23), we also have

$$ \begin{aligned} &2\sum _{n=0}^{+\infty}\rho _{n}(1+\xi _{n})(\delta \lambda ^{n}, \delta u^{n})+\sum _{n=0}^{+\infty}\rho _{n}^{2}(1+\xi _{n})\| u^{n}-p^{n+1} \|_{L^{2}(\Omega )}^{2} \\ \le &\sum _{n=0}^{+\infty}(1+\xi _{n})\|\delta \lambda ^{n}+\rho _{n} \delta u^{n}\|_{L^{2}(\Omega )}^{2} \\ &-\sum _{n=0}^{+\infty}\|\delta \lambda ^{n+1}+\rho _{n+1}\delta u^{n+1} \|_{L^{2}(\Omega )}^{2} \\ \le &\|\delta \lambda ^{0}+\rho _{0}\delta u^{0}\|_{L^{2}(\Omega )}^{2}+ \sum _{n=0}^{+\infty}\xi _{n}\|\delta \lambda ^{n}+\rho _{n}\delta u^{n} \|_{L^{2}(\Omega )}^{2} \\ \le & C\Bigl(1+\sum _{n=0}^{+\infty}\xi _{n}\Bigr)=C(1+C_{0}), \end{aligned} $$

that is

$$ 2\sum _{n=0}^{+\infty}\rho _{n}(1+\xi _{n})(\delta \lambda ^{n}, \delta u^{n})+\sum _{n=0}^{+\infty}\rho _{n}^{2}(1+\xi _{n})\| u^{n}-p^{n+1} \|_{L^{2}(\Omega )}^{2} \le C(1+C_{0}). $$
(3.27)

It follows from (3.18) and (3.27) that

$$ \begin{aligned} &\sum _{n=0}^{\infty}2\alpha \rho _{L}\|\delta u^{n}\|^{2}_{H_{0}^{2}( \Omega )}+\sum _{n=0}^{+\infty}\rho _{L}^{2}\| u^{n}-p^{n+1}\|_{L^{2}( \Omega )}^{2} \\ \le &2\sum _{n=0}^{+\infty}\rho _{n}(1+\xi _{n})( \delta \lambda ^{n}, \delta u^{n})+\sum _{n=0}^{+\infty}\rho _{n}^{2}(1+\xi _{n})\| u^{n}-p^{n+1} \|_{L^{2}(\Omega )}^{2} \\ \le &C(1+C_{0}) \\ < &+\infty , \end{aligned} $$

where \(\rho _{L}=\inf \{\rho _{n}\}_{n=0}^{+\infty}>0\). Then, we have

$$ \lim _{n\to \infty}\rho _{L}\|\delta u^{n}\|^{2}_{H_{0}^{2}(\Omega )}=0, $$
(3.28)

and

$$ \lim _{n\to \infty}\rho _{L}\|u^{n}-p^{n+1}\|_{L^{2}(\Omega )}=0. $$
(3.29)

Therefore, \(u^{n}\) converges to u in \(H_{0}^{2}(\Omega )\). It follows that

$$ \lim _{n\to \infty}\|p^{n+1}-u\|_{L^{2}(\Omega )}\le \lim _{n\to \infty}\|p^{n+1}-u^{n}\|_{L^{2}(\Omega )}+\lim _{n\to \infty}\|u^{n}-u \|_{L^{2}(\Omega )}=0, $$

that is, \(p^{n}\) converges to u in \(L^{2}(\Omega )\). □

In order to obtain the convergence of the sequence \(\{\lambda ^{n}\}\), we consider the discrete version of Algorithm 2 (SADMM) obtained by a linear finite element discretization on a triangulation of Ω. Let N be the number of nodes of the triangulation. We will use the following notation:

(a) \(\mathbf{A}\in \mathbb{R}^{N\times N}\) is the stiffness matrix defined by the bilinear form \(a(u,v)\);

(b) \(\mathbf{M}\in \mathbb{R}^{N\times N}\) is the mass matrix;

(c) \(\mathbf{f}\in \mathbb{R}^{N}\) is the vector of discrete external forces;

(d) \(\mathbf{p}\in \mathbb{R}^{N}\) is the discrete auxiliary unknown;

(e) \(\boldsymbol{\lambda}\in \mathbb{R}^{N}\) is the discrete Lagrange multiplier.

Both matrices A and M are symmetric and positive definite. For a vector \(\mathbf{x}\in \mathbb{R}^{N}\), \(\|\mathbf{x}\|\) denotes the norm induced by the inner product \((\mathbf{x},\mathbf{x})=\mathbf{x}^{\mathsf{T}}\mathbf{x}\), where \(\mathbf{x}^{\mathsf{T}}\) denotes the transpose of x. We have the following convergence theorem [19].

Theorem 3

The sequence \(\{(\mathbf{u}^{n}, \mathbf{p}^{n}), \boldsymbol{\lambda}^{n}\}\) generated by the discrete version of Algorithm 2 (SADMM) is such that

$$ \lim _{n\to \infty}\mathbf{u}^{n}=\mathbf{u}, \lim _{n\to \infty} \mathbf{p}^{n}=\mathbf{u}, \lim _{n\to \infty}\boldsymbol{\lambda}^{n}= \boldsymbol{\lambda}. $$

Proof

As in the proof of (3.15), from the discrete version of Algorithm 2 we can easily obtain

$$ \rho _{n}^{2}\|\mathbf{u}^{n}-\mathbf{p}^{n+1}\|^{2}+2\rho _{n}( \delta{\boldsymbol{\lambda}}^{n},\delta{\mathbf{u}}^{n})\le \|\delta{ \boldsymbol{\lambda}}^{n}+\rho _{n}\delta{\mathbf{u}}^{n}\|^{2}-\|\delta{ \boldsymbol{\lambda}}^{n+1}+\rho _{n}\delta{\mathbf{u}}^{n+1}\|^{2}, $$
(3.30)

where \(\delta{\mathbf{u}}^{n}=\mathbf{A}^{-1}\mathbf{M}\delta{\boldsymbol{\lambda}}^{n}\). Then we can easily prove the convergence of the sequence \(\{\mathbf{u}^{n}, \mathbf{p}^{n}\}\). It is well known that a bounded sequence must have a convergent subsequence. It follows from (3.30) and the positive definiteness of A and M that the sequence \(\{\boldsymbol{\lambda} ^{n}\}\) is bounded, so there exist \(\hat{\boldsymbol{\lambda}}\) and a subsequence \(\{\boldsymbol{\lambda} ^{n_{i}}\}\) such that \(\boldsymbol{\lambda} ^{n_{i}}\) converges to \(\hat{\boldsymbol{\lambda}}\). Since the sequence \((\boldsymbol{\lambda} ^{n_{i}},\mathbf{u}^{n_{i}})\) satisfies (3.3)–(3.4), we have

$$ \mathbf{A}\mathbf{u}^{n_{i}+1}+\rho _{n_{i}}\mathbf{M}\mathbf{u}^{n_{i}+1} =\mathbf{f}+\mathbf{M}(\rho _{n_{i}}\mathbf{p}^{n_{i}+1}+\boldsymbol{\lambda}^{n_{i}}), $$

and

$$ \boldsymbol{\lambda} ^{n_{i}+1}=\boldsymbol{\lambda} ^{n_{i}}-\rho _{n_{i}}(\mathbf{u}^{n_{i}+1}- \mathbf{p}^{n_{i}+1}). $$

Eliminating the auxiliary unknown \(\mathbf{p}^{n_{i}+1}\), we obtain

$$ \mathbf{A}\mathbf{u}^{n_{i}+1}=\mathbf{f}+\mathbf{M}{\boldsymbol{\lambda}}^{n_{i}+1}. $$

Taking the limit in the above equation, we arrive at

$$ \mathbf{A}\mathbf{u}=\mathbf{f}+\mathbf{M}\hat{\boldsymbol{\lambda}}. $$
(3.31)

Moreover, from (3.2), (3.8) and (3.30), we have

$$ \lim _{{n_{i}}\to \infty}\|B_{\rho _{n_{i}}}(\boldsymbol{\lambda}^{n_{i}}, \mathbf{u}^{n_{i}})\|=0. $$

It follows from \(0<\rho _{L}=\inf \{\rho _{n}\}_{n=0}^{+\infty}\le \rho _{n_{i}}\) and Lemma 3 that

$$ \lim _{{n_{i}}\to \infty}\|B_{\rho _{L}}(\boldsymbol{\lambda}^{n_{i}}, \mathbf{u}^{n_{i}})\|\le \lim _{{n_{i}}\to \infty}\|B_{\rho _{n_{i}}}( \boldsymbol{\lambda}^{n_{i}},\mathbf{u}^{n_{i}})\|=0. $$

This implies that

$$ \lim _{{n_{i}}\to \infty}B_{\rho _{L}}(\boldsymbol{\lambda}^{n_{i}}, \mathbf{u}^{n_{i}})=0, $$

Because the mapping \(B_{\rho _{L}}(\cdot ,\cdot )\) is continuous, from \(\lim \limits _{{n_{i}}\to \infty}\boldsymbol{\lambda}^{n_{i}}= \hat{\boldsymbol{\lambda}}\) and \(\lim \limits _{{n_{i}}\to \infty}\mathbf{u}^{n_{i}}=\mathbf{u}\), we have

$$ \lim _{{n_{i}}\to \infty}B_{\rho _{L}}(\boldsymbol{\lambda}^{n_{i}}, \mathbf{u}^{n_{i}})=B_{\rho _{L}}(\hat{\boldsymbol{\lambda}},\mathbf{u})=0, $$

that is

$$ \hat{\boldsymbol{\lambda}}=\bigl(\hat{\boldsymbol{\lambda}}-\rho _{L}(\mathbf{u}-\psi )\bigr)^{+}. $$
(3.32)

Then, from (3.31) and (3.32), we see that \((\hat{\boldsymbol{\lambda}},\mathbf{u})\) satisfies the discrete formulation of (2.4)–(2.5). Since the solution of the system (2.4)–(2.5) is unique, we have \(\hat{\boldsymbol{\lambda}}=\boldsymbol{\lambda}\). That is, the subsequence \((\boldsymbol{\lambda}^{n_{i}},\mathbf{u}^{n_{i}})\) converges to \((\boldsymbol{\lambda},\mathbf{u})\).

Now we prove that the entire sequence \(\{\boldsymbol{\lambda}^{n}\}\) converges to λ. Assume that there exists \(\bar{\boldsymbol{\lambda}}\in \mathbb{R}^{N}\) such that a subsequence \((\boldsymbol{\lambda}^{n_{j}},\mathbf{u}^{n_{j}})\) converges to \((\bar{\boldsymbol{\lambda}},\mathbf{u})\) with \(\bar{\boldsymbol{\lambda}}\ne \boldsymbol{\lambda}\). Since the subsequences \((\boldsymbol{\lambda}^{n_{i}},\mathbf{u}^{n_{i}})\) and \((\boldsymbol{\lambda}^{n_{j}},\mathbf{u}^{n_{j}})\) converge to \((\boldsymbol{\lambda},\mathbf{u})\) and \((\bar{\boldsymbol{\lambda}},\mathbf{u})\), respectively, there exists \(n_{0}\ge 0\) such that for all \(n_{i}\ge n_{0}\) and \(n_{j}\ge n_{0}\)

$$ \|\boldsymbol{\lambda}^{n_{i}}-\boldsymbol{\lambda}+\rho _{n_{i}}(\mathbf{u}^{n_{i}}- \mathbf{u})\|\le \frac{1}{2\sqrt{C_{1}}}\|\bar{\boldsymbol{\lambda}}- \boldsymbol{\lambda}\|\le \frac{1}{2}\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|, $$
(3.33)

and

$$ \|\boldsymbol{\lambda}^{n_{j}}-\bar{\boldsymbol{\lambda}}+\rho _{n_{j}}(\mathbf{u}^{n_{j}}- \mathbf{u})\|\le \frac{1}{2\sqrt{C_{1}}}\|\bar{\boldsymbol{\lambda}}- \boldsymbol{\lambda}\|\le \frac{1}{2}\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|, $$
(3.34)

where \(C_{1}:=\prod \limits _{n=0}^{+\infty}(1+\xi _{n})<\infty \). For a fixed \(n_{i}\ge n_{0}\) and any \(n_{j}\ge n_{i}\), it follows from (3.24) and (3.33) that

$$ \begin{aligned} &\|\boldsymbol{\lambda}^{n_{j}}-\boldsymbol{\lambda}+\rho _{n_{j}}(\mathbf{u}^{n_{j}}- \mathbf{u})\|^{2} \\ \le &(1+\xi _{{n_{j}}-1})\|\boldsymbol{\lambda}^{{n_{j}}-1}-\boldsymbol{\lambda}+ \rho _{{n_{j}}-1}(\mathbf{u}^{{n_{j}}-1}-\mathbf{u})\|^{2} \\ \le &(1+\xi _{{n_{j}}-1})(1+\xi _{{n_{j}}-2})\|\boldsymbol{\lambda}^{{n_{j}}-2}- \boldsymbol{\lambda}+\rho _{{n_{j}}-2}(\mathbf{u}^{{n_{j}}-2}-\mathbf{u})\|^{2} \\ &\cdots \\ \le &\prod _{j=n_{i}}^{{n_{j}}-1}(1+\xi _{j})\|\boldsymbol{\lambda}^{n_{i}}- \boldsymbol{\lambda}+\rho _{n_{i}}(\mathbf{u}^{n_{i}}-\mathbf{u})\|^{2} \\ \le &C_{1}\|\boldsymbol{\lambda}^{n_{i}}-\boldsymbol{\lambda}+\rho _{n_{i}}( \mathbf{u}^{n_{i}}-\mathbf{u})\|^{2} \\ \le &\frac{1}{4}\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|^{2}, \end{aligned} $$

so that

$$ \|\boldsymbol{\lambda}^{n_{j}}-\boldsymbol{\lambda}+\rho _{n_{j}}(\mathbf{u}^{n_{j}}- \mathbf{u})\|\le \frac{1}{2}\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|. $$
(3.35)

For any \(n_{j}\ge n_{i}\), it then follows from (3.35) that

$$ \begin{aligned} \|\boldsymbol{\lambda}^{n_{j}}-\bar{\boldsymbol{\lambda}}+\rho _{n_{j}}(\mathbf{u}^{n_{j}}- \mathbf{u})\| \ge &\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|-\|\boldsymbol{\lambda}^{n_{j}}- \boldsymbol{\lambda}+\rho _{n_{j}}(\mathbf{u}^{n_{j}}-\mathbf{u})\| \\ \ge &\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|-\frac{1}{2}\| \bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\| \\ =&\frac{1}{2}\|\bar{\boldsymbol{\lambda}}-\boldsymbol{\lambda}\|>0. \end{aligned} $$

This contradicts (3.34) and the assumption that the subsequence \(\{\boldsymbol{\lambda}^{n_{j}}\}\) converges to \(\bar{\boldsymbol{\lambda}}\). It follows that the entire sequence \(\{\boldsymbol{\lambda}^{n}\}\) has exactly one limit, i.e., \(\lim \limits _{n \to \infty}\boldsymbol{\lambda}^{n}=\boldsymbol{\lambda}\). □

Remark 3.1

If we take \(s_{n}=0\) in Algorithm 2 (so that (3.5) gives \(\rho _{n+1}=\rho _{n}=\cdots =\rho _{0}\)), the algorithm is the same as Algorithm 1 with the fixed parameter ρ.

4 Numerical experiments

For Algorithm 2, we now propose a procedure to compute the penalty parameter automatically using iterative functions. In the proof of Theorem 3, it follows from (3.30) that the sequence \(\{(\mathbf{u}^{n},\mathbf{p}^{n}),\boldsymbol{\lambda}^{n}\}\) generated by Algorithm 2 satisfies the following inequality [2, 19]:

$$ \|\delta \boldsymbol{\lambda}^{n+1}+\rho _{n}\delta \mathbf{u}^{n+1}\|^{2} \le \|\delta \boldsymbol{\lambda}^{n}+\rho _{n}\delta \mathbf{u}^{n}\|^{2}. $$

To accelerate the convergence speed, we would like to have

$$ \|\delta \boldsymbol{\lambda}^{n}\|\approx \rho _{n}\|\delta \mathbf{u}^{n}\|. $$

Using \(\boldsymbol{\lambda}^{n+1}\) and \(\mathbf{u}^{n+1}\) to replace λ and u respectively, we have

$$ \|\boldsymbol{\lambda}^{n} -\boldsymbol{\lambda}^{n+1}\|\approx \rho _{n}\|\mathbf{u}^{n}- \mathbf{u}^{n+1}\|. $$

For a given \(\tau >0\), we obtain a self-adaptive rule, which adjusts the parameter \(\rho _{n+1}\) by

$$ \rho _{n+1}= \textstyle\begin{cases} (1+s_{n})\rho _{n} &\text{if } w_{n}>(1+\tau )\rho _{n}, \\ \frac{1}{1+s_{n}}\rho _{n} &\text{if } w_{n}< \frac{\rho _{n}}{1+\tau}, \\ w_{n} &\text{otherwise}, \end{cases} $$
(4.1)

where \(w_{n}= \frac{\|\boldsymbol{\lambda}^{n+1} -\boldsymbol{\lambda}^{n}\|}{\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|}\) and the sequence \(\{s_{n}\}\) is generated as follows,

$$ s_{n}= \textstyle\begin{cases} \tau &\text{if } c_{n}< c_{n+1} \text{ and } c_{n+1}\le c_{\max}, \\ \frac{1}{(c_{n+1}-c_{\max})^{2}}&\text{if } c_{n}< c_{n+1} \text{ and } c_{n+1}> c_{\max}, \\ 0 &\text{otherwise}. \end{cases} $$
(4.2)

Here \(c_{n}\) denotes the number of times the parameter \(\rho _{n}\) has been changed up to iteration n, i.e.,

$$ c_{0}=0,\qquad c_{n+1}= \textstyle\begin{cases} c_{n}&\text{if } \frac{\rho _{n}}{1+\tau}\le w_{n}\le (1+\tau ) \rho _{n}, \\ c_{n}+1 &\text{otherwise}. \end{cases} $$

For a given constant integer \(c_{\max}>0\), it is easy to see that the sequences \(\{s_{n}\}\) and \(\{\rho _{n}\}\) satisfy the conditions (3.1) and (3.5), respectively.
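For reference, the rule (4.1)–(4.2) can be implemented, for instance, as the following closure, which matches the update_rho interface assumed in the SADMM sketch of Sect. 3; that interface and the safeguard for a vanishing denominator in \(w_{n}\) are assumptions of this sketch, not prescribed by the paper.

```python
import numpy as np

def make_update_rho(tau=10.0, c_max=200):
    """Sketch of the self-adaptive rule (4.1)-(4.2).

    Returns a function compatible with the `update_rho` argument of the
    SADMM sketch in Sect. 3 (that interface is an assumption of these
    sketches, not prescribed by the paper)."""
    state = {"c": 0}  # c_n: number of times rho has been changed

    def update_rho(rho, u_new, u, lam_new, lam):
        du = np.linalg.norm(u_new - u)
        # safeguard against a vanishing denominator (iteration has converged)
        w = np.linalg.norm(lam_new - lam) / du if du > 0.0 else rho
        in_window = rho / (1.0 + tau) <= w <= (1.0 + tau) * rho
        c_old = state["c"]
        c_new = c_old if in_window else c_old + 1   # update c_{n+1}
        state["c"] = c_new
        # sequence s_n from (4.2); it is summable, so (3.1) holds
        if c_new > c_old and c_new <= c_max:
            s = tau
        elif c_new > c_old and c_new > c_max:
            s = 1.0 / (c_new - c_max) ** 2
        else:
            s = 0.0
        # rule (4.1)
        if w > (1.0 + tau) * rho:
            return (1.0 + s) * rho
        if w < rho / (1.0 + tau):
            return rho / (1.0 + s)
        return w

    return update_rho
```

The default values \(\tau =10\) and \(c_{\max}=200\) match the settings used in the experiments below.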

In this section, we have implemented the proposed methods in FreeFem++ with quadratic finite elements, on an HP Z1G6 desktop computer. In the following tests, we choose \(\tau =10\) in (4.1), \(c_{\max}=200\) in (4.2) and the stopping criterion \(\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|^{2}\le 10^{-4}\|\mathbf{u}^{n+1} \|^{2}\) for all computations.

4.1 Example 1

We first consider a fourth-order variational inequality on the pentagon \(\Omega =\{x\in (-0.5, 0.5)^{2}: x_{1}+x_{2}<0.5\}\) with \(f=0\) and \(\psi (x)=1-9|x|^{2}\). We choose the step size \(h=\frac{1}{80}\) and apply the proposed method to this problem with the penalty parameter \(\rho =1000\). We present the graphs of the numerical solution \(u_{h}\) and the contact zone \(u=\psi \) in Figs. 1 and 2, respectively. Our numerical results appear to be in very good agreement with the results in [9].

Figure 1. The numerical solution of Example 1

Figure 2. The contact zone of Example 1

To compare the ADMM and the SADMM, we give the number of iterations and the corresponding CPU time (in seconds) in Table 1 and Table 2, respectively, with respect to the step size h and the initial parameter ρ. One can see that the SADMM outperforms the ADMM with fixed parameter ρ, because it uses the self-adaptive rule to adjust the parameters \(\rho _{n}\). In addition, the number of iterations for the SADMM is almost the same for different h and ρ.

Table 1 Number of iterations for each method
Table 2 CPU time for each method

4.2 Example 2

In this example, we consider the obstacle problem on the L-shaped domain \(\Omega =(-0.5, 0.5)^{2}\setminus [0,0.5]^{2}\) with \(f=0\) and

$$ \psi (x)=1-\biggl[\frac{(x_{1}+0.25)^{2}}{0.2^{2}}+ \frac{x_{2}^{2}}{0.35^{2}}\biggr]. $$

As in the previous example, we first consider a mesh of size \(h=\frac{1}{80}\) with \(\rho =1000\). Figures 3 and 4 plot the numerical solution \(u_{h}\) and the contact zone \(u=\psi \), respectively. We observe that our numerical results agree very well with the corresponding results in [9, 11, 14]. We also investigate the convergence behavior of the proposed methods for this example. Tables 3 and 4 display the number of iterations and the CPU time. We notice that the SADMM also shows good performance for all initial penalty parameters ρ.

Figure 3. The numerical solution of Example 2

Figure 4. The contact zone of Example 2

Table 3 Number of iterations for each method
Table 4 CPU time for each method

5 Conclusion

In this paper, we extend the ADMM to the numerical solution of fourth-order variational inequalities. To improve the performance of the method, we propose the SADMM, which uses a self-adaptive rule to adjust the penalty parameter automatically. The advantage of the method is that each iteration only requires the solution of a linear problem, and the penalty parameter is chosen easily by using iterative functions. The numerical results show that the self-adaptive rule is effective and makes the method robust with respect to the initial penalty parameter ρ. Our analysis can also be applied to fourth-order variational inequalities with a curvature obstacle, which we will consider in a forthcoming study.

Data Availability

No datasets were generated or analysed during the current study.

References

  1. Glowinski, R., Marini, L.D., Vidrascu, M.: Finite-element approximations and iterative solutions of a fourth-order elliptic variational inequality. IMA J. Numer. Anal. 4, 127–167 (1984)

  2. Zhang, S.G., Guo, N.X.: Uzawa block relaxation method for free boundary problem with unilateral obstacle. Int. J. Comput. Math. 98, 671–689 (2021)

  3. Koko, J.: Uzawa block relaxation domain decomposition method for a two-body frictionless contact problem. Appl. Math. Lett. 22, 1534–1538 (2009)

  4. Essoufi, E.-H., Koko, J., Zafrar, A.: Alternating direction method of multiplier for a unilateral contact problem in electro-elastostatics. Comput. Math. Appl. 73, 1789–1802 (2017)

  5. Benkhira, E.-H., Fakhar, R., Mandyly, Y.: Analysis and numerical approximation of a contact problem involving nonlinear Hencky-type materials with nonlocal Coulomb’s friction law. Numer. Funct. Anal. Optim. 40, 1291–1314 (2019)

  6. Benkhira, E.-H., Fakhar, R., Mandyly, Y.: Analysis and numerical approach for a nonlinear contact problem with non-local friction in piezoelectricity. Acta Mech. 232, 4273–4288 (2021)

  7. Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer, Berlin (2008)

  8. An, R.: Discontinuous Galerkin finite element method for the fourth-order obstacle problem. Appl. Math. Comput. 209, 351–355 (2009)

  9. Brenner, S.C., Gedicke, J., Sung, L.-Y., Zhang, Y.: An a posteriori analysis of \(C^{0}\) interior penalty methods for the obstacle problem of clamped Kirchhoff plates. SIAM J. Numer. Anal. 55, 87–108 (2017)

  10. Brenner, S.C., Sung, L.-Y., Wang, K.: Additive Schwarz preconditioners for \(C^{0}\) interior penalty methods for the obstacle problem of clamped Kirchhoff plates. Numer. Methods Partial Differ. Equ. 38, 102–117 (2022)

  11. Brenner, S.C., Sung, L.-Y., Zhang, H.C., Zhang, Y.: A Morley finite element method for the displacement obstacle problem of clamped Kirchhoff plates. J. Comput. Appl. Math. 254, 31–42 (2013)

  12. Brenner, S.C., Davis, C.B., Sung, L.-Y.: A partition of unity method for a class of fourth order elliptic variational inequalities. Comput. Methods Appl. Mech. Eng. 276, 612–626 (2014)

  13. Cao, W., Yang, D.: Adaptive optimal control approximation for solving a fourth-order elliptic variational inequality. Comput. Math. Appl. 66, 2517–2531 (2014)

  14. Cui, J., Zhang, Y.: A new analysis of discontinuous Galerkin methods for a fourth order variational inequality. Comput. Methods Appl. Mech. Eng. 351, 531–547 (2019)

  15. Deng, Q.P., Shen, S.M.: A nonconforming finite element method for a fourth-order elliptic variational inequality. Numer. Funct. Anal. Optim. 15, 55–64 (1994)

  16. Feng, F., Han, W.M., Huang, J.G.: The virtual element method for an obstacle problem of a Kirchhoff-Love plate. Commun. Nonlinear Sci. Numer. Simul. 103, 106008 (2021)

  17. Pei, L.F., Shi, D.Y.: A new error analysis of Bergan’s energy-orthogonal element for a plate contact problem. Appl. Math. Lett. 69, 67–74 (2017)

  18. Han, D.R.: Solving linear variational inequality problems by a self-adaptive projection method. Appl. Math. Comput. 182, 1765–1771 (2006)

  19. He, B.S., Liao, L.Z., Wang, S.L.: Self-adaptive operator splitting methods for monotone variational inequalities. Numer. Math. 94, 715–737 (2003)

  20. Zhang, S.G., Li, X.L., Ran, R.S.: Self-adaptive projection and boundary element methods for contact problems with Tresca friction. Commun. Nonlinear Sci. Numer. Simul. 68, 72–85 (2019)

  21. Zhang, S.G., Li, X.L.: A self-adaptive projection method for contact problems with the BEM. Appl. Math. Model. 55, 145–159 (2018)

  22. Zhang, S.G.: Projection and self-adaptive projection methods for the Signorini problem with the BEM. Comput. Math. Appl. 74, 1262–1273 (2017)

Acknowledgements

Not applicable.

Funding

This work was funded by the Chongqing Natural Science Foundation of China (Grant no. cstc2020jcyj-msxmX0066) and the Scientific and Technological Research Program of Chongqing Municipal Education Commission of China (Grant Nos. KJZD-K202100503 and KJZD-K202300506).

Author information

Contributions

All authors have contributed equally.

Corresponding author

Correspondence to Shougui Zhang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wu, J., Zhang, S. Self-adaptive alternating direction method of multiplier for a fourth order variational inequality. J Inequal Appl 2024, 92 (2024). https://doi.org/10.1186/s13660-024-03163-9
