Self-adaptive alternating direction method of multiplier for a fourth order variational inequality
Journal of Inequalities and Applications volume 2024, Article number: 92 (2024)
Abstract
We propose an alternating direction method of multiplier (ADMM) to approximate the solution of the unilateral obstacle problem with the biharmonic operator. We introduce an auxiliary unknown and an augmented Lagrangian functional to deal with the inequality constraint, and we derive a constrained minimization problem that is equivalent to a saddle-point problem. The alternating direction method of multiplier is then applied to the corresponding problem. Using iterative functions, a self-adaptive rule adjusts the penalty parameter automatically. We show the convergence of the method and describe the penalty parameter selection in detail. Finally, numerical results are given to illustrate the efficiency of the proposed method.
1 Introduction
The alternating direction method of multiplier (ADMM), also known as the Uzawa block relaxation method, has many applications in structured optimization problems; see, e.g., variational inequalities [1, 2] and contact problems [3–6]. At each iteration, only a linear problem needs to be solved, while the auxiliary unknown and the Lagrange multiplier are computed explicitly. The ADMM is globally convergent for any positive penalty parameter. However, the convergence speed of the method is very sensitive to this parameter, and it is difficult to choose the optimal parameter in practice [7].
In this paper, we focus on the combination of the ADMM and a self-adaptive rule for a fourth-order variational inequality [8–17]. Since the convergence speed of the ADMM heavily relies on the penalty parameter, we propose a self-adaptive rule that chooses a proper parameter by using iterative functions [2, 18–22]. We thus obtain a self-adaptive alternating direction method of multiplier (SADMM) for the variational inequality.
The paper is organized as follows. In Sect. 2, some basic results are introduced in order to formulate the fourth-order variational inequality, and the ADMM is applied to the problem. In Sect. 3, we propose the SADMM and show the convergence of the method. In Sect. 4, we present the self-adaptive rule in detail and give some numerical results to illustrate the performance of the method. Finally, some conclusions and perspectives are given in Sect. 5.
2 Fourth order variational inequality and the alternating direction method of multiplier
Let Ω be a bounded polygonal domain in \(\mathbb{R}^{2}\) with the boundary \(\Gamma =\partial \Omega \). For given \(f\in L^{2}(\Omega )\) and \(\psi \in C(\bar{\Omega})\cap C^{2}(\Omega )\) with \(\psi <0\) on ∂Ω, we consider the following fourth-order variational inequality: find \(u\in K\) such that
where
and
The bilinear form \(a(u,v)\) is \(H_{0}^{2}\)-elliptic and Lipschitz continuous, i.e.,
for some \(\alpha >0\) and \(\beta >0\) independent of u, v, w in \(H_{0}^{2}(\Omega )\). The variational inequality (2.1) can be formulated equivalently as the constrained minimization problem
It is well known that the unique solution \(u\in K\) of (2.1) can be characterized by (2.2) [8, 9, 16].
As in [9], we introduce a Lagrange multiplier \(\lambda \in L^{2}(\Omega ; [0,+\infty ))\) such that
and
Then problem (2.1) (equivalently (2.2)) and the system (2.3)–(2.4) are equivalent. The system (2.3)–(2.4) admits a unique solution \(u\in H_{0}^{2}(\Omega )\) and an associated Lagrange multiplier \(\lambda \in L^{2}(\Omega )\). Since u and λ satisfy \(u\in K\) and \(\lambda \in L^{2}(\Omega ; [0,+\infty ))\), it follows that (2.3) is equivalent to
for any parameter \(\rho >0\) and \((v)^{+}:=\max (0,v)\) [10, 16].
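The identity referred to here has the standard projection form; assuming it reads \(\lambda =(\lambda -\rho (u-\psi ))^{+}\) (an assumption consistent with the surrounding text, since the displayed equation itself is not reproduced above), a short pointwise verification shows why it encodes exactly the constraints \(\lambda \ge 0\), \(u\ge \psi \), \(\lambda (u-\psi )=0\) of (2.3)–(2.4):

```latex
% Sketch, assuming the identity above reads
%   \lambda = \bigl(\lambda - \rho\,(u - \psi)\bigr)^{+}, \qquad \rho > 0.
%
% (=>) The right-hand side is a positive part, so \lambda \ge 0 a.e.
%      Where \lambda > 0: \lambda = \lambda - \rho(u - \psi), hence u = \psi.
%      Where \lambda = 0: 0 = \bigl(-\rho(u - \psi)\bigr)^{+}, hence u \ge \psi.
%      Together: \lambda \ge 0,\; u - \psi \ge 0,\; \lambda(u - \psi) = 0.
%
% (<=) Conversely, complementarity gives pointwise either u = \psi, where
%      (\lambda - 0)^{+} = \lambda because \lambda \ge 0, or \lambda = 0 with
%      u > \psi, where \bigl(-\rho(u - \psi)\bigr)^{+} = 0 = \lambda.
%      Hence the identity holds for any parameter \rho > 0.
```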
To obtain the solution of the minimization problem (2.2) by using the ADMM [2, 3, 7], we introduce the set
and its indicator functional
Let us introduce an auxiliary unknown \(p\in L^{2}(\Omega )\) with \(p=u\) in Ω; then (2.2) is equivalent to the following constrained minimization problem
where \(W=\{(v,q)\in H_{0}^{2}(\Omega )\times L^{2}(\Omega ), v-q=0\}\). For this problem, we define an augmented Lagrangian \(L_{\rho}\) as
where \(\{v, q, \mu \}\in H_{0}^{2}(\Omega )\times L^{2}(\Omega )\times L^{2}( \Omega )\). We consider the following saddle-point problem
Then we have the following result for problems (2.2) and (2.6) [1, 7].
Lemma 1
Let \(\{(u,p),\lambda \}\) be the solution of the saddle-point problem (2.6), then u is the solution of (2.2) and \(p=u\).
From Lemma 1 we can apply the ADMM to the saddle-point problem (2.6) and obtain the solution of the minimization problem (2.2) [2]. The method consists of successively computing \(p^{n+1}\) and \(u^{n+1}\) by minimization and then updating the multiplier, as follows
In this method, \(p^{n+1}\) in minimization (2.7) can be computed explicitly from \(u^{n}\) and \(\lambda ^{n}\). Minimization (2.8) leads to a variational problem that has a unique solution for given \(\lambda ^{n}\), \(p^{n+1}\), and \(\rho >0\) [2, 3]. We thus obtain the following ADMM algorithm.
Algorithm 1
(ADMM)
Step 0: \(\lambda ^{0}\in L^{2}(\Omega )\), \(u^{0}\in H_{0}^{2}(\Omega )\) and \(\rho >0\) are given arbitrarily; set \(n:=0\).
Step 1: Compute \(p^{n+1}\) by
Step 2: Find \(u^{n+1}\in H_{0}^{2}(\Omega )\) solution to the following variational problem
Step 3: Update the multiplier \(\lambda ^{n+1}\) by
and set \(n:=n+1\) and go to Step 1.
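In a discrete (matrix) setting, the three steps above admit a compact sketch. The following Python snippet is an illustration, not the authors' FreeFem++ implementation: the sign conventions for the augmented Lagrangian (which make Step 1 the pointwise projection \(p=\max (\psi ,u-\lambda /\rho )\) and Step 3 the update \(\lambda \leftarrow \lambda -\rho (u-p)\)) and the small 1D test problem at the end are our own assumptions, chosen so that the limit multiplier is nonnegative, consistent with (2.5).

```python
import numpy as np

def admm_obstacle(A, M, f, psi, rho, tol=1e-8, max_iter=50000):
    """Sketch of Algorithm 1 (ADMM) for the discrete obstacle problem
        min 1/2 u^T A u - f^T u   subject to   u >= psi,
    via the splitting p = u.  Sign conventions are assumptions (see text)."""
    u = np.zeros_like(f)
    lam = np.zeros_like(f)                 # discrete Lagrange multiplier
    K = A + rho * M                        # rho is fixed: assemble (factor) once
    for _ in range(max_iter):
        # Step 1: auxiliary unknown, explicit pointwise projection onto
        # {q >= psi} (exact when M is a (lumped) diagonal mass matrix, as here)
        p = np.maximum(psi, u - lam / rho)
        # Step 2: linear variational problem (A + rho*M) u = f + M(lam + rho*p)
        u_new = np.linalg.solve(K, f + M @ (lam + rho * p))
        # Step 3: multiplier update
        lam = lam - rho * (u_new - p)
        done = (np.linalg.norm(u_new - p) <= tol
                and np.linalg.norm(u_new - u) <= tol)
        u = u_new
        if done:
            break
    return u, lam

# Illustrative 1D stand-in (not the paper's FE discretization): A = T @ T with
# T the second-difference matrix, so A is symmetric positive definite; M = I.
N = 15
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
T = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A, M = T @ T, np.eye(N)
f = np.zeros(N)
psi = 0.5 - 8.0 * (x - 0.5) ** 2           # obstacle, negative near the boundary
u, lam = admm_obstacle(A, M, f, psi, rho=0.2)
print("feasible:", bool(np.all(u >= psi - 1e-6)))
```

Because ρ is fixed, the matrix \(A+\rho M\) can be assembled (and, in a serious implementation, factored) once outside the loop; only the right-hand side changes between iterations, which is the cost advantage stated in the introduction.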
3 Self-adaptive alternating direction method of multiplier
As noted above, Algorithm 1 (ADMM) is convergent for any fixed parameter \(\rho >0\). However, the rate of convergence of the method strongly depends on the penalty parameter, and it is difficult to choose the optimal parameter for an individual problem. To improve the efficiency of the ADMM, we use iterative functions and propose a self-adaptive rule to adjust the parameter [2, 18–22]. In this paper, we suppose that there is a nonnegative sequence \(\{s_{n}\}\) such that
We then propose a self-adaptive alternating direction method of multiplier (SADMM) with automatic penalty parameter selection, as follows.
Algorithm 2
(SADMM)
Step 0: \(\lambda ^{0}\in L^{2}(\Omega )\), \(u^{0}\in H_{0}^{2}(\Omega )\) and \(\rho >0\) are given arbitrarily; set \(\rho _{0}=\rho \) and \(n:=0\).
Step 1: Compute \(p^{n+1}\) by
Step 2: Find \(u^{n+1}\in H_{0}^{2}(\Omega )\) solution to the following variational problem
Step 3: Update the Lagrange multiplier
Step 4: Adjust the penalty parameter \(\rho _{n+1}\) by
set \(n:=n+1\) and go to Step 1.
For the mapping \(v\to (v)^{+}\) in \(L^{2}(\Omega )\), we have some basic results that will be needed in the following analysis.
Lemma 2
For all \(v\in L^{2}(\Omega )\) and \(w\in L^{2}(\Omega ; [0,+\infty ))\), it holds that
and
For any \(v,\mu \in L^{2}(\Omega )\), we define
and introduce the following lemma [19].
Lemma 3
Let \(v,\mu ,\psi \in L^{2}(\Omega )\) and \(\tilde{\rho}\ge \rho >0\). Then we have
Proof
Using inequality (3.6) with \(v :=\mu -\rho (v-\psi )\) and \(w :=(\mu -\tilde{\rho}(v-\psi ))^{+}\), we have
Noting that \((\mu -\rho (v-\psi ))^{+}-(\mu -\tilde{\rho}(v-\psi ))^{+}=B_{ \tilde{\rho}}(\mu ,v)-B_{\rho}(\mu ,v)\), it follows from (3.10) that
Setting \(v :=\mu -\rho (v-\psi )\) and \(w :=\mu -\tilde{\rho}(v-\psi )\) in (3.7), we have
Combining (3.11) and (3.12), we obtain the inequality
then the Cauchy–Schwarz inequality gives
and the lemma is proved. □
For the convergence of Algorithm 2, we now give a lemma that will be needed in the following analysis.
Lemma 4
Let the sequence \(\{s_{n}\}\) satisfy \(s_{n}\ge 0\) and \(\sum \limits _{n=0}^{+\infty}s_{n}<+\infty \), then \(\prod \limits _{n=0}^{+\infty}(1+s_{n})<+\infty \).
Proof
Since \(s_{n}\ge 0\) and \(\sum \limits _{n=0}^{+\infty}s_{n}<+\infty \), we have \(0\le \lim \limits _{n\to +\infty}\sum \limits _{k=0}^{n}s_{k}<+ \infty \). It follows that
here we use the arithmetic–geometric mean inequality and the limit \(\lim \limits _{x\to 0}(1+x)^{\frac{1}{x}}=e\). □
We denote by
Since \(\frac{1}{1+s_{n}}\rho _{n}\le \rho _{n+1}\le (1+s_{n})\rho _{n}\) holds for every \(n\ge 0\), we have \(\rho _{n}\in [\frac{1}{C_{s}}\rho _{0},C_{s}\rho _{0}]\) which is bounded. Let us set \(\rho _{L}=\inf \{\rho _{n}\}_{n=0}^{+\infty}>0\) and \(\rho _{U}=\sup \{\rho _{n}\}_{n=0}^{+\infty}>0\), then \(\rho _{L}\ge \frac{1}{C_{s}}\rho _{0}>0\) and \(\rho _{U}\le C_{s}\rho _{0}< +\infty \).
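The two facts just established, the product bound of Lemma 4 and the resulting interval \([\frac{1}{C_{s}}\rho _{0},C_{s}\rho _{0}]\), are easy to check numerically. In the sketch below, the summable sequence \(s_{n}=1/(n+1)^{2}\) is an illustrative assumption (not the paper's choice), and each \(\rho _{n+1}\) is drawn at random inside the safeguard interval:

```python
import math
import random

random.seed(0)

# Illustrative summable sequence (an assumption, not the paper's choice):
# s_n = 1/(n+1)^2, so sum s_n = pi^2/6 < +infinity.
s = [1.0 / (n + 1) ** 2 for n in range(10000)]

# Lemma 4: sum s_n < +infinity implies prod (1 + s_n) < +infinity,
# via 1 + x <= e^x, i.e. prod(1 + s_n) <= exp(sum s_n).
C_s = 1.0
for sn in s:
    C_s *= 1.0 + sn
assert C_s <= math.exp(sum(s))

# Safeguard: rho_{n+1} in [rho_n/(1+s_n), (1+s_n)*rho_n] keeps every rho_n
# inside the fixed interval [rho_0/C_s, C_s*rho_0], however rho is adjusted.
rho0 = 1000.0
rho = rho0
for sn in s:
    rho = random.uniform(rho / (1.0 + sn), (1.0 + sn) * rho)
    assert rho0 / C_s <= rho <= C_s * rho0

print("C_s =", round(C_s, 4))
```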
It follows from Algorithm 2 (SADMM) and (2.3)–(2.4) that we have the following convergence result of the algorithm [2, 7].
Theorem 1
Let \(\{(u^{n},p^{n}),\lambda ^{n}\}\) be the sequence generated by Algorithm 2 (SADMM), then
where \(\delta u^{n}=u^{n}-u\), \(\delta \lambda ^{n}=\lambda ^{n}-\lambda \), and \(\{u,\lambda \}\) is the solution of (2.4)–(2.5).
Proof
Let us set \(\delta p^{n}=p^{n}-p\). According to (2.4) and (3.3), we have
Taking \(v=\delta u^{n+1}\) in both systems and subtracting, we get
From (2.6), we have \(L_{\rho}(u,p,\mu )\le L_{\rho}(u,p,\lambda )\) and \(p=u\) [7]. Using \(p=u\) in (3.16), we obtain
and from (3.4) we also have
Since \(a(\cdot ,\cdot )\) is \(H_{0}^{2}\)-elliptic, it follows from (3.17) that
From (2.3) we have \(\lambda \ge 0\), \(u-\psi \ge 0\), and \((u-\psi ,\lambda )=0\) on Ω; then we obtain
and
Adding (3.19) and (3.20), we have
Then it follows from (3.2) that
Therefore
From (3.4), it follows that
Consequently, from (3.21) we have
and this completes the proof. □
Theorem 2
The sequence \(\{u^{n},p^{n}\}\) generated by Algorithm 2 (SADMM) is such that
where u is the solution of (2.4)–(2.5).
Proof
From \(s_{n}\ge 0\), \(0<\rho _{n+1}\le (1+s_{n})\rho _{n}\) and (3.18), we have
We substitute (3.22) into (3.15) and have
so that
From (3.18) and \(\rho _{n}>0\), we have
It follows that
Let \(\xi _{n}:=2s_{n}+s_{n}^{2}\), and define \(C_{0}\), \(C_{1}\) by
then from Lemma 4 we have \(C_{0}<+\infty \) and \(C_{1}<+\infty \). From (3.24) it then follows that
this implies that there exists a constant \(C>0\) such that
Then, from (3.23), we also have
that is
It follows from (3.18) and (3.27) that
where \(\rho _{L}=\inf \{\rho _{n}\}_{n=0}^{+\infty}>0\). Then, we have
and
Therefore, \(u^{n}\) converges to u in \(H_{0}^{2}(\Omega )\). It follows that
and hence \(p^{n}\) converges to p in \(L^{2}(\Omega )\). □
In order to obtain the convergence of the sequence \(\{\lambda ^{n}\}\), we consider the discrete version of Algorithm 2 (SADMM) using a linear finite element triangulation. Let N be the number of nodes of the triangulation. We will use the following notation:
(a) \(\mathbf{A}\in \mathbb{R}^{N\times N}\) is the stiffness matrix defined by the bilinear form \(a(u,v)\);
(b) \(\mathbf{M}\in \mathbb{R}^{N\times N}\) is the mass matrix;
(c) \(\mathbf{f}\in \mathbb{R}^{N}\) is the vector of discrete external forces;
(d) \(\mathbf{p}\in \mathbb{R}^{N}\) is the discrete auxiliary unknown;
(e) \(\boldsymbol{\lambda}\in \mathbb{R}^{N}\) is the discrete Lagrange multiplier.
Both matrices A and M are symmetric and positive definite. For a vector \(\mathbf{x}\in \mathbb{R}^{N}\), \(\|\mathbf{x}\|\) denotes the norm induced by the inner product \((\mathbf{x},\mathbf{x})=\mathbf{x}^{\mathsf{T}}\mathbf{x}\), where \(\mathbf{x}^{\mathsf{T}}\) is the transpose of x. We have the following convergence theorem [19].
Theorem 3
The sequence \(\{(\mathbf{u}^{n}, \mathbf{p}^{n}), \boldsymbol{\lambda}^{n}\}\) generated by the discrete version of Algorithm 2 (SADMM) is such that
Proof
As in the proof of (3.15), from the discrete version of Algorithm 2 we can easily obtain
where \(\delta{\mathbf{u}}^{n}=\mathbf{A}^{-1}\mathbf{M}\delta{\boldsymbol{\lambda}}^{n}\). Then we can easily prove the convergence of the sequence \(\{\mathbf{u}^{n}, \mathbf{p}^{n}\}\). It is well known that a bounded sequence must have a convergent subsequence. It follows from (3.30) and the positive definiteness of A and M that the sequence \(\{\boldsymbol{\lambda} ^{n}\}\) is bounded, so there exist \(\hat{\boldsymbol{\lambda}}\) and a subsequence \(\{\boldsymbol{\lambda} ^{n_{i}}\}\) such that \(\boldsymbol{\lambda} ^{n_{i}}\) converges to \(\hat{\boldsymbol{\lambda}}\). Since the sequence \((\boldsymbol{\lambda} ^{n_{i}},\mathbf{u}^{n_{i}})\) satisfies (3.3)–(3.4), we have
and
Eliminating the auxiliary unknown \(\mathbf{p}^{n_{i}+1}\), we obtain
Taking the limit in the above equation, we arrive at
Moreover, from (3.2), (3.8) and (3.30), we have
It follows from \(0<\rho _{L}=\inf \{\rho _{n}\}_{n=0}^{+\infty}\le \rho _{n_{i}}\) and Lemma 3 that
This implies that
Because \(B_{\rho _{L}}(\boldsymbol{\lambda}^{n_{i}},\mathbf{u}^{n_{i}})\) is continuous, from \(\lim \limits _{{n_{i}}\to \infty}\boldsymbol{\lambda}^{n_{i}}= \hat{\boldsymbol{\lambda}}\) and \(\lim \limits _{{n_{i}}\to \infty}\mathbf{u}^{n_{i}}=\mathbf{u}\) we have
that is
Then, from (3.31) and (3.32), we conclude that \((\hat{\boldsymbol{\lambda}},\mathbf{u})\) satisfies the discrete formulation of (2.4)–(2.5). Since the solution of the system (2.4)–(2.5) is unique, we have \(\hat{\boldsymbol{\lambda}}=\boldsymbol{\lambda}\). That is, the subsequence \((\boldsymbol{\lambda}^{n_{i}},\mathbf{u}^{n_{i}})\) converges to \((\boldsymbol{\lambda},\mathbf{u})\).
Now we prove that the entire sequence \(\boldsymbol{\lambda}^{n}\) converges to λ. Assume that there exists \(\bar{\boldsymbol{\lambda}}\in \mathbb{R}^{N}\) with \(\bar{\boldsymbol{\lambda}}\ne \boldsymbol{\lambda}\) such that a subsequence \((\boldsymbol{\lambda}^{n_{j}},\mathbf{u}^{n_{j}})\) converges to \((\bar{\boldsymbol{\lambda}},\mathbf{u})\). Since the subsequences \((\boldsymbol{\lambda}^{n_{i}},\mathbf{u}^{n_{i}})\) and \((\boldsymbol{\lambda}^{n_{j}},\mathbf{u}^{n_{j}})\) converge to \((\boldsymbol{\lambda},\mathbf{u})\) and \((\bar{\boldsymbol{\lambda}},\mathbf{u})\), respectively, there exists \(n_{0}\ge 0\) such that for all \(n_{i}\ge n_{0}\) and \(n_{j}\ge n_{0}\) we have
and
where \(C_{1}:=\prod \limits _{n=0}^{+\infty}(1+\xi _{n})<\infty \). For a fixed \(n_{i}\ge n_{0}\) and any \(n_{j}\ge n_{i}\), it follows from (3.24) and (3.33) that
so that
Then, for any \(n_{j}\ge n_{i}\), from (3.35) we have
This contradicts (3.34) and the assumption that the subsequence \(\{\boldsymbol{\lambda}^{n_{j}}\}\) converges to \(\bar{\boldsymbol{\lambda}}\). It follows that the entire sequence \(\{\boldsymbol{\lambda}^{n}\}\) has exactly one limit, i.e., \(\lim \limits _{n \to \infty}\boldsymbol{\lambda}^{n}=\boldsymbol{\lambda}\). □
Remark 3.1
If we take \(s_{n}=0\) in Algorithm 2 (so that \(\frac{1}{1+s_{n}}\rho _{n}\le \rho _{n+1}\le (1+s_{n})\rho _{n}\) forces \(\rho _{n+1}=\rho _{n}=\cdots =\rho _{0}\)), the algorithm is the same as Algorithm 1 with the fixed parameter ρ.
4 Numerical experiments
For Algorithm 2, we now propose a procedure to compute the penalty parameter automatically using iterative functions. In the proof of Theorem 3, it follows from (3.30) that the sequence \(\{(\mathbf{u}^{n},\mathbf{p}^{n}),\boldsymbol{\lambda}^{n}\}\) generated by Algorithm 2 satisfies the following inequality [2, 19]:
To accelerate convergence, we would like
Replacing λ and u by \(\boldsymbol{\lambda}^{n+1}\) and \(\mathbf{u}^{n+1}\), respectively, we have
For a given \(\tau >0\), we obtain a self-adaptive rule, which adjusts the parameter \(\rho _{n+1}\) by
where \(w_{n}= \frac{\|\boldsymbol{\lambda}^{n+1} -\boldsymbol{\lambda}^{n}\|}{\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|}\) and the sequence \(\{s_{n}\}\) is generated as follows,
Here \(c_{n}\) denotes the number of times that \(\{\rho _{n}\}\) has been changed, i.e.,
For a given constant integer \(c_{\max}>0\), it is easy to see that the sequences \(\{s_{n}\}\) and \(\{\rho _{n}\}\) satisfy the conditions (3.1) and (3.5), respectively.
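Putting this rule together with Algorithm 2 gives the following sketch. This is a schematic reading of (4.1)–(4.2), not the authors' FreeFem++ code: the precise comparison thresholds, the illustrative choice \(s_{n}=1/(c_{n}+1)\), the extra clamp on ρ, and the small 1D test problem are all our own assumptions.

```python
import numpy as np

def sadmm_obstacle(A, M, f, psi, rho0, tau=10.0, c_max=200,
                   tol=1e-8, max_iter=100000):
    """Sketch of Algorithm 2 (SADMM): ADMM plus a self-adaptive penalty rule.
    rho is moved toward w_n = ||dlam||/||du|| only when they differ by more
    than the factor tau; s_n = 1/(c_n+1) with the change counter c_n capped at
    c_max keeps {s_n} summable (illustrative assumptions throughout)."""
    u = np.zeros_like(f)
    lam = np.zeros_like(f)
    rho, c = float(rho0), 0
    for _ in range(max_iter):
        # Steps 1-3 are as in Algorithm 1, with the current rho_n
        p = np.maximum(psi, u - lam / rho)
        u_new = np.linalg.solve(A + rho * M, f + M @ (lam + rho * p))
        lam_new = lam - rho * (u_new - p)
        du = np.linalg.norm(u_new - u)
        dlam = np.linalg.norm(lam_new - lam)
        # Step 4: adjust rho within the safeguard [rho/(1+s_n), (1+s_n)*rho]
        if c < c_max and du > 0.0:
            w, s_n = dlam / du, 1.0 / (c + 1.0)
            if w > tau * rho:
                rho, c = (1.0 + s_n) * rho, c + 1
            elif w < rho / tau:
                rho, c = rho / (1.0 + s_n), c + 1
        rho = min(max(rho, 0.1), 10.0)     # illustration-only clamp on rho
        done = np.linalg.norm(u_new - p) <= tol and du <= tol
        u, lam = u_new, lam_new
        if done:
            break
    return u, lam

# Illustrative 1D stand-in problem (not the paper's FE discretization):
N = 9
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
T = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A, M = T @ T, np.eye(N)                    # SPD "stiffness" and "mass"
f = np.zeros(N)
psi = 0.5 - 8.0 * (x - 0.5) ** 2           # obstacle, negative near boundary
u, lam = sadmm_obstacle(A, M, f, psi, rho0=5.0)
print("feasible:", bool(np.all(u >= psi - 1e-6)))
```

Note that, unlike the fixed-ρ case, the matrix \(A+\rho _{n}M\) changes whenever ρ is adjusted; since the number of adjustments is capped at \(c_{\max}\), only finitely many refactorizations are needed.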
In this section, we have implemented the proposed methods in FreeFem++ with quadratic finite elements on an HP Z1G6 desktop computer. In the following tests, we choose \(\tau =10\) in (4.1), \(c_{\max}=200\) in (4.2), and the stopping criterion \(\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|^{2}\le 10^{-4}\|\mathbf{u}^{n+1} \|^{2}\) for all computations.
4.1 Example 1
We first consider a fourth-order variational inequality on the pentagon \(\Omega =\{x\in (-0.5, 0.5)^{2}: x_{1}+x_{2}<0.5\}\) with \(f=0\) and \(\psi (x)=1-9|x|^{2}\). We choose the mesh size \(h=\frac{1}{80}\) and apply the proposed methods to this problem with the penalty parameter \(\rho =1000\). We present the graphs of the numerical solution \(u_{h}\) and the contact zone \(u=\psi \) in Figs. 1 and 2, respectively. Our numerical results appear to be in very good agreement with the results in [9].
To compare the ADMM and the SADMM, we give the number of iterations and the corresponding CPU time (seconds) in Tables 1 and 2, respectively, with respect to the mesh size h and the initial parameter ρ. One can see that the SADMM performs better than the ADMM with a fixed parameter ρ, because it uses the self-adaptive rule to adjust the parameters \(\rho _{n}\). In addition, the number of iterations for the SADMM is almost the same for different h and ρ.
4.2 Example 2
In this example, we consider the obstacle problem on the L-shaped domain \(\Omega =(-0.5, 0.5)^{2}\setminus [0,0.5]^{2}\) with \(f=0\) and
As in the previous example, we first consider a mesh of size \(h=\frac{1}{80}\) with \(\rho =1000\). Figures 3 and 4 plot the numerical solution \(u_{h}\) and the contact zone \(u=\psi \), respectively. We observe that our numerical results agree very well with the corresponding results in [9, 11, 14]. We also investigate the convergence behavior of the proposed methods for this example. Tables 3 and 4 display the numerical results for the number of iterations and the CPU time. We notice that the SADMM also shows good performance for all initial penalty parameters ρ.
5 Conclusion
In this paper, we extend the ADMM to the numerical solution of fourth-order variational inequalities. To improve the performance of the method, we propose the SADMM, which uses a self-adaptive rule to adjust the penalty parameter automatically. The advantage of the methods is that each iteration requires only the solution of a linear problem, and the penalty parameter is chosen easily by using iterative functions. The numerical results show that the self-adaptive rule is attractive and necessary for different initial penalty parameters ρ. Our analysis can also be applied to fourth-order variational inequalities with a curvature obstacle, which we will consider in a forthcoming study.
Data Availability
No datasets were generated or analysed during the current study.
References
Glowinski, R., Marini, L.D., Vidrascu, M.: Finite-element approximations and iterative solutions of a fourth-order elliptic variational inequality. IMA J. Numer. Anal. 4, 127–167 (1984)
Zhang, S.G., Guo, N.X.: Uzawa block relaxation method for free boundary problem with unilateral obstacle. Int. J. Comput. Math. 98, 671–689 (2021)
Koko, J.: Uzawa block relaxation domain decomposition method for a two-body frictionless contact problem. Appl. Math. Lett. 22, 1534–1538 (2009)
Essoufi, E.-H., Koko, J., Zafrar, A.: Alternating direction method of multiplier for a unilateral contact problem in electro-elastostatics. Comput. Math. Appl. 73, 1789–1802 (2017)
Benkhira, E.-H., Fakhar, R., Mandyly, Y.: Analysis and numerical approximation of a contact problem involving nonlinear Hencky-type materials with nonlocal Coulomb’s friction law. Numer. Funct. Anal. Optim. 40, 1291–1314 (2019)
Benkhira, E.-H., Fakhar, R., Mandyly, Y.: Analysis and numerical approach for a nonlinear contact problem with non-local friction in piezoelectricity. Acta Mech. 232, 4273–4288 (2021)
Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer, Berlin (2008)
An, R.: Discontinuous Galerkin finite element method for the fourth-order obstacle problem. Appl. Math. Comput. 209, 351–355 (2009)
Brenner, S.C., Gedicke, J., Sung, L.-Y., Zhang, Y.: An a posteriori analysis of \(C^{0}\) interior penalty methods for the obstacle problem of clamped Kirchhoff plates. SIAM J. Numer. Anal. 55, 87–108 (2017)
Brenner, S.C., Sung, L.-Y., Wang, K.: Additive Schwarz preconditioners for \(C^{0}\) interior penalty methods for the obstacle problem of clamped Kirchhoff plates. Numer. Methods Partial Differ. Equ. 38, 102–117 (2022)
Brenner, S.C., Sung, L.-Y., Zhang, H.C., Zhang, Y.: A Morley finite element method for the displacement obstacle problem of clamped Kirchhoff plates. J. Comput. Appl. Math. 254, 31–42 (2013)
Brenner, S.C., Davis, C.B., Sung, L.-Y.: A partition of unity method for a class of fourth order elliptic variational inequalities. Comput. Methods Appl. Mech. Eng. 276, 612–626 (2014)
Cao, W., Yang, D.: Adaptive optimal control approximation for solving a fourth-order elliptic variational inequality. Comput. Math. Appl. 66, 2517–2531 (2014)
Cui, J., Zhang, Y.: A new analysis of discontinuous Galerkin methods for a fourth order variational inequality. Comput. Methods Appl. Mech. Eng. 351, 531–547 (2019)
Deng, Q.P., Shen, S.M.: A nonconforming finite element method for a fourth-order elliptic variational inequality. Numer. Funct. Anal. Optim. 15, 55–64 (1994)
Feng, F., Han, W.M., Huang, J.G.: The virtual element method for an obstacle problem of a Kirchhoff-Love plate. Commun. Nonlinear Sci. Numer. Simul. 103, 106008 (2021)
Pei, L.F., Shi, D.Y.: A new error analysis of Bergan’s energy-orthogonal element for a plate contact problem. Appl. Math. Lett. 69, 67–74 (2017)
Han, D.R.: Solving linear variational inequality problems by a self-adaptive projection method. Appl. Math. Comput. 182, 1765–1771 (2006)
He, B.S., Liao, L.Z., Wang, S.L.: Self-adaptive operator splitting methods for monotone variational inequalities. Numer. Math. 94, 715–737 (2003)
Zhang, S.G., Li, X.L., Ran, R.S.: Self-adaptive projection and boundary element methods for contact problems with Tresca friction. Commun. Nonlinear Sci. Numer. Simul. 68, 72–85 (2019)
Zhang, S.G., Li, X.L.: A self-adaptive projection method for contact problems with the BEM. Appl. Math. Model. 55, 145–159 (2018)
Zhang, S.G.: Projection and self-adaptive projection methods for the Signorini problem with the BEM. Comput. Math. Appl. 74, 1262–1273 (2017)
Acknowledgements
Not applicable.
Funding
This work was funded by the Chongqing Natural Science Foundation of China (Grant no. cstc2020jcyj-msxmX0066) and the Scientific and Technological Research Program of Chongqing Municipal Education Commission of China (Grant Nos. KJZD-K202100503 and KJZD-K202300506).
Author information
Contributions
All authors contributed equally.
Ethics declarations
Competing interests
The authors declare no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Wu, J., Zhang, S. Self-adaptive alternating direction method of multiplier for a fourth order variational inequality. J Inequal Appl 2024, 92 (2024). https://doi.org/10.1186/s13660-024-03163-9