Open Access

Two new regularization methods for solving sideways heat equation

Journal of Inequalities and Applications 2015, 2015:65

Received: 22 September 2014

Accepted: 14 January 2015

Published: 20 February 2015


We consider a non-standard inverse heat conduction problem in a bounded domain which arises in several applied subjects. We seek the surface temperature of a body from a measured temperature history at a fixed location inside the body. This is an exponentially ill-posed problem in the sense that the solution (if it exists) does not depend continuously on the data. In this paper, we introduce two new classes of methods, of quasi-boundary value type and of iteration type, to solve the problem, and we prove that our methods are stable under both a priori and a posteriori parameter choice rules. An appropriate selection of the parameter in the scheme yields a satisfactory approximate solution. Furthermore, if we use the discrepancy principle, we can avoid the selection of the a priori bound.


Keywords: Cauchy problem; sideways heat equation; ill-posed problem; error estimates


MSC: 35K05; 35K99; 47J06; 47H10

1 Introduction

In this paper, we want to determine the surface temperature \(u(x,t)\) for \(0 < x<L\) from the measured temperature \(u(L,t)=g(t) \), where \(u(x,t)\) satisfies the following system:
$$ \left \{ \begin{array}{@{}l} u_{t}=u_{xx},\quad 0 < x<L, 0 \le t \le2\pi,\\ u(L,t)=g(t),\quad 0 \le t \le2\pi,\\ u(x,0)=0,\quad 0 < x<L. \end{array} \right . $$
In order to guarantee the uniqueness of the solution, we assume here that u is bounded as \(x \to+\infty\). The function g, given in \(L^{2}(0,2\pi)\), is contaminated by noise: physically, we can only obtain measured Cauchy data with measurement errors. This is a severely ill-posed problem: any small perturbation in the observation data can cause large errors in the solution \(u(x, t)\) for \(x \in[0, L)\). Therefore, most classical numerical methods fail to give an acceptable approximation of the solution, and regularization techniques are required to stabilize the numerical computations [1].
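The severity of the ill-posedness can be quantified: as the solution formula derived in Section 2 shows, a data error in the k-th Fourier mode is amplified by the factor \(e^{\sqrt{|k|/2}\,(L-x)}\) when the temperature is recovered at depth x. The following Python sketch only illustrates this amplification; the values of L, the frequency, and the noise size are illustrative choices.

```python
import math

# Amplification of the k-th Fourier mode of the data when the solution is
# evaluated at depth x:  |exp(sqrt(ik)(L - x))| = exp(sqrt(|k|/2) (L - x)).
L = 1.0  # illustrative slab thickness

def amplification(k, x, L=L):
    """Modulus of the exact-solution multiplier at frequency k and depth x."""
    return math.exp(math.sqrt(abs(k) / 2.0) * (L - x))

# A measurement error of size 1e-3 in the mode k = 200 becomes an error of
# order 20 in the recovered surface temperature at x = 0:
err = 1e-3 * amplification(200, x=0.0)
assert err > 20
```

At x = L the factor is 1 (the data are observed there); the blow-up is confined to the interior, which is exactly where regularization must act.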

The problem has a long history and has attracted many authors. In several engineering contexts, it is sometimes necessary to determine the surface temperature and heat flux in a body from a measured temperature history at a fixed location inside the body [2, 3]. The physical situation at the surface may be unsuitable for attaching a sensor, or the accuracy of a surface measurement may be seriously impaired by the presence of the sensor. The special case of estimating a surface condition from interior measurements has come to be known as the inverse heat conduction problem (IHCP).

In recent years, the IHCP has been studied by many authors and several valuable methods have been proposed, such as the difference regularization method [4], the Fourier method [5], the quasi-reversibility method [2, 6], wavelet and wavelet-Galerkin methods [7], a spectral regularization method [7, 8], etc.

Most of the papers on regularization methods for the sideways heat equation give a convergence analysis with the regularization parameter chosen by an a priori rule, which relies on a priori information about the unknown exact solution. However, the a priori bound M is generally not known exactly in practice, and working with a wrong constant M may lead to a poor regularized solution. In the present paper, we propose two new regularization methods: a modified quasi-boundary value method and an iteration method for solving Problem (1). The quasi-boundary value idea has been used by Deng and Liu to deal with the sideways parabolic equation [9], but here we give another way to choose the regularization parameter, by an a posteriori rule. Moreover, we shall give a criterion for making an a posteriori choice of the regularization parameter. To the authors’ knowledge, there are very few papers choosing the regularization parameter by an a posteriori rule for the sideways heat equation.

This paper is organized as follows: in Section 2 we propose the two new regularization methods. Convergence estimates under a priori and a posteriori parameter choice rules are given in Sections 3 and 4. Finally, we draw a conclusion about our methods.

2 Formulation of the problem and a quasi-boundary value regularization method

Suppose the measured data \(g^{\epsilon } \in L^{2}(0,2\pi)\) satisfies
$$\begin{aligned} \bigl\| g^{\epsilon }-g\bigr\| _{L^{2}(0,2\pi)} \le \epsilon , \end{aligned}$$
where \(\epsilon >0\) represents the noise level. For \(f \in L^{2}(0,2\pi)\), we have the Fourier series
$$\begin{aligned} f(t)= \sum_{k \in\mathbb{Z} } f_{k} e^{ikt}, \end{aligned}$$
where the Fourier coefficient is
$$\begin{aligned} f_{k}= \frac{1}{2\pi} \int_{0}^{2\pi} f(t) e^{-ikt}\,dt. \end{aligned}$$
The norm of f in \(L^{2}\) is
$$\begin{aligned} \|f\|_{L^{2}(0,2\pi)}^{2}= 2\pi\sum_{k \in\mathbb{Z} } |f_{k}|^{2}. \end{aligned}$$
The principal value of \(\sqrt{ik}\) is
$$\begin{aligned} \sqrt{ik}= \left \{ \begin{array}{@{}l@{\quad}l} (1+i)\sqrt{\frac{|k|}{2}}, & k \ge0,\\ (1-i)\sqrt{\frac{|k|}{2}}, & k < 0. \end{array} \right . \end{aligned}$$
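As a quick sanity check of this branch choice (purely a verification aid; the helper name `sqrt_ik` is ours), the piecewise formula can be compared with the library principal square root:

```python
import cmath
import math

def sqrt_ik(k):
    """Principal value of sqrt(ik) for an integer frequency k, as above."""
    r = math.sqrt(abs(k) / 2.0)
    return complex(r, r) if k >= 0 else complex(r, -r)

# The formula agrees with the principal branch of the complex square root
# for every integer frequency:
for k in range(-50, 51):
    assert abs(sqrt_ik(k) - cmath.sqrt(1j * k)) < 1e-12
```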
Suppose that the solution of the problem is represented as a Fourier series
$$\begin{aligned} u(x,t)= \sum_{k \in\mathbb{Z} } u_{k}(x) e^{ikt}, \end{aligned}$$
$$\begin{aligned} u_{k}(x)= \frac{1}{2\pi} \int_{0}^{2\pi} u(x,t) e^{-ikt}\,dt. \end{aligned}$$
Let \(F(x,t)= \sum_{k \in\mathbb{Z} } F_{k}(x) e^{ikt} \) with
$$\begin{aligned} F_{k}(x)= \frac{1}{2\pi} \int_{0}^{2\pi} F(x,t) e^{-ikt}\,dt. \end{aligned}$$
Then by taking the partial derivative with respect to t and x, we obtain
$$ \begin{aligned} &u_{t}(x,t)= \sum_{k \in\mathbb{Z} } u_{k}(x) ik e^{ikt},\\ &u_{xx}(x,t)= \sum_{k \in\mathbb{Z} } \frac{d^{2}}{dx^{2}}u_{k}(x) e^{ikt}. \end{aligned} $$
The main equation \(u_{t}= u_{xx}\) leads to
$$\begin{aligned} \sum_{k \in\mathbb{Z} } u_{k}(x) ik e^{ikt}= \sum_{k \in\mathbb{Z} } \frac{d^{2}}{dx^{2}}u_{k}(x) e^{ikt}. \end{aligned}$$
Using the Cauchy data g, we obtain the following system of second-order ordinary differential equations:
$$\begin{aligned} \frac{d^{2}}{dx^{2}}u_{k}(x)- ik u_{k}(x) = 0,\qquad u_{k}(L)= g_{k}, \end{aligned}$$
$$\begin{aligned} g_{k}=\bigl\langle g(t), e^{-ikt}\bigr\rangle = \frac{1}{2\pi} \int_{0}^{2\pi} g(t) e^{-ikt}\,dt. \end{aligned}$$
The solution of (11) is
$$\begin{aligned} u_{k}(x)= C_{k} \exp (\sqrt{ik} x )+D_{k} \exp (- \sqrt{ik} x ). \end{aligned}$$
Since \(u(x,t)\) is bounded when \(x \to+\infty\), we conclude that \(C_{k}=0\). It follows from \(u_{k}(L)=D_{k} \exp (-\sqrt{ik} L )=g_{k} \) that
$$\begin{aligned} D_{k}= g_{k} \exp (\sqrt{ik} L ). \end{aligned}$$
This leads to the formula of \(u_{k}\):
$$\begin{aligned} u_{k}(x)= \exp \bigl(\sqrt{ik} (L-x) \bigr) g_{k}. \end{aligned}$$
The exact form of u is
$$\begin{aligned} u(x,t)= \sum_{k \in\mathbb{Z} } \exp \bigl(\sqrt{ik} (L-x) \bigr) \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt},\quad 0\le x \le L. \end{aligned}$$
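When g has only finitely many Fourier modes, this series can be evaluated directly. The sketch below (the helper names `u_exact` and `sqrt_ik` are ours) evaluates u and confirms that the data are reproduced at \(x=L\) while an interior mode is exponentially amplified:

```python
import cmath
import math

def sqrt_ik(k):
    """Principal value of sqrt(ik)."""
    r = math.sqrt(abs(k) / 2.0)
    return complex(r, r) if k >= 0 else complex(r, -r)

def u_exact(x, t, g_hat, L=1.0):
    """u(x,t) = sum_k exp(sqrt(ik)(L - x)) g_k e^{ikt} for data with
    finitely many Fourier coefficients g_hat = {k: g_k}."""
    return sum(cmath.exp(sqrt_ik(k) * (L - x)) * gk * cmath.exp(1j * k * t)
               for k, gk in g_hat.items())

# One-mode data g(t) = e^{3it}: the boundary value at x = L is g itself,
# while at x = 0 the mode is amplified by exp(sqrt(3/2) * L).
g_hat = {3: 1.0}
assert abs(u_exact(1.0, 0.7, g_hat) - cmath.exp(3j * 0.7)) < 1e-12
assert abs(abs(u_exact(0.0, 0.0, g_hat)) - math.exp(math.sqrt(1.5))) < 1e-9
```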

Lemma 1

For \(y \in\mathbb{R} \), we have
$$\begin{aligned} \bigl| e^{\sqrt{ik} y} \bigr| = e^{ \sqrt{\frac{|k|}{2}} y} . \end{aligned}$$


Proof. We can rewrite the term \(| e^{\sqrt{ik} y} | \) as
$$\begin{aligned} \bigl| e^{\sqrt{ik} y} \bigr| = \bigl| e^{ (1\pm i)\sqrt{\frac{|k|}{2}} y} \bigr| = e^{ \sqrt{\frac{|k|}{2}} y} \sqrt{ \cos ^{2} \biggl( \sqrt{\frac{|k|}{2}}\, y \biggr)+ \sin^{2} \biggl( \sqrt{\frac{|k|}{2}}\, y \biggr) }= e^{ \sqrt{\frac{|k|}{2}} y} . \end{aligned}$$

3 Quasi-boundary value method for a sideways heat equation

Define the Sobolev space \(H^{s} (0,2\pi)\) as
$$\begin{aligned} \bigl\| u(0,t)\bigr\| ^{2}_{H^{s}(0,2\pi)}= \sum_{k \in\mathbb{Z} } \bigl(1+k^{2}\bigr)^{s} \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2}. \end{aligned}$$
In this section, we present a QBV regularization problem as follows:
$$ \left \{ \begin{array}{@{}l} u^{\beta(\epsilon )}_{t}= u^{\beta(\epsilon )}_{xx},\quad 0 < x<L, 0 \le t \le2\pi ,\\ u^{\beta(\epsilon )}(L,t)=B_{\epsilon }g^{\epsilon }(t),\quad 0 \le t \le2\pi,\\ u^{\beta(\epsilon )}(x,0)=0,\quad 0 < x<L, \end{array} \right . $$
with the additional condition that \(u^{\beta(\epsilon )}(x,t) \) is bounded as \(x \to +\infty\). Here \(B_{\epsilon }\) is defined by
$$\begin{aligned} B_{\epsilon }g^{\epsilon }(t)= \sum_{k \in\mathbb{Z} } \frac{ \exp (- \sqrt {\frac{|k|}{2}} L ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac {|k|}{2}} L ) } \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt}. \end{aligned}$$
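In computations, \(B_{\epsilon}\) acts mode by mode on the Fourier coefficients of the data. A minimal sketch of the resulting multiplier for the regularized solution at depth x (the function names and test values are illustrative):

```python
import cmath
import math

def sqrt_ik(k):
    r = math.sqrt(abs(k) / 2.0)
    return complex(r, r) if k >= 0 else complex(r, -r)

def qbv_coefficients(g_hat, beta, x, L=1.0):
    """Fourier coefficients of the quasi-boundary value solution u^{beta}(x,.):
    each data coefficient g_k is multiplied by
    exp(-sqrt(|k|/2) L) exp(sqrt(ik)(L - x)) / (beta + exp(-sqrt(|k|/2) L))."""
    out = {}
    for k, gk in g_hat.items():
        e = math.exp(-math.sqrt(abs(k) / 2.0) * L)
        out[k] = e * cmath.exp(sqrt_ik(k) * (L - x)) / (beta + e) * gk
    return out

# At x = 0 the multiplier modulus is e_k/(beta + e_k) * e_k^{-1} <= 1/beta,
# so noise amplification is capped at 1/beta instead of exp(sqrt(|k|/2) L):
coef = qbv_coefficients({400: 1.0}, beta=1e-2, x=0.0)
assert abs(coef[400]) <= 1.0 / 1e-2 + 1e-9
assert abs(coef[400]) < math.exp(math.sqrt(200.0))
```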

3.1 A priori parameter choice rule

Theorem 1

Let u be the exact solution to Problem (1) and let \(g^{\epsilon }\) be as in (2).
  1. (1)
    If \(\|u(0,t)\|_{L^{2}(0,2\pi)} \le M \) then by \(\beta(\epsilon )= \frac{\epsilon }{M} \), we have
    $$\begin{aligned} \bigl\| u^{\beta(\epsilon )}(x,\cdot)-u(x,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le 2 M^{1-\frac {x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$
  2. (2)
    If \(\|u(0,t)\|_{H^{s}(0,2\pi)} \le M_{1} \) then by \(\beta(\epsilon )= \frac{\epsilon ^{p}}{M_{1}} \), \(0< p<1\), we have
    $$\begin{aligned} \bigl\| u^{\beta(\epsilon )}(x,\cdot)-u(x,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le M_{1}^{1-\frac{x}{L}} \epsilon ^{\frac{px}{L}} \epsilon ^{1-p} + M_{1} A(s) \biggl(\frac{L}{\ln(M_{1}/\epsilon ^{p})} \biggr)^{2s}, \end{aligned}$$
    $$A(s)=2^{s} (2s)^{2s} e^{1-2s} \bigl(1+L^{-2s} \bigr). $$


Proof of Part 1. The solution of this regularized problem is
$$\begin{aligned} u^{\beta(\epsilon )}(x,t)= \sum_{k \in\mathbb{Z} } \frac{ \exp (- \sqrt{\frac{|k|}{2}} L ) \exp (\sqrt{ik} (L-x) ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac{|k|}{2}} L ) } \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt}. \end{aligned}$$
Let \(v^{\beta(\epsilon )}\) be defined by
$$\begin{aligned} v^{\beta(\epsilon )}(x,t)= \sum_{k \in\mathbb{Z} } \frac{ \exp (- \sqrt{\frac{|k|}{2}} L ) \exp (\sqrt{ik} (L-x) ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac{|k|}{2}} L ) } \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt}. \end{aligned}$$

Step 1. Estimate \(\|u^{\beta(\epsilon )}-v^{\beta(\epsilon )}\| _{L^{2}(0,2\pi)}\).

We have
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}-v^{\beta(\epsilon )}\bigr\| _{L^{2}(0,2\pi)}^{2} =& 2\pi\sum _{k \in\mathbb{Z} } \biggl| \frac{ \exp (- \sqrt{\frac {|k|}{2}} L ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac{|k|}{2}} L ) } \exp \bigl(\sqrt{ik} (L-x) \bigr) \biggr|^{2} \\ &{}\times \bigl|\bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ =& 2\pi\sum_{k \in\mathbb{Z} } \frac{ \exp (-\sqrt{2|k|} x ) }{ [ \beta(\epsilon )+ \exp ( -\sqrt{\frac{|k|}{2}} L ) ]^{2} } \bigl|\bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt}\bigr\rangle \bigr|^{2} \\ \le& \beta(\epsilon )^{\frac{2x}{L}-2} \bigl\| g^{\epsilon }(t)-g(t) \bigr\| ^{2}_{L^{2}(0,2\pi)} \le \beta(\epsilon )^{\frac{2x}{L}-2} \epsilon ^{2}. \end{aligned}$$
This implies that
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}-v^{\beta(\epsilon )}\bigr\| _{L^{2}(0,2\pi)} \le \beta(\epsilon )^{\frac{x}{L}-1} \epsilon = M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$
Step 2. Estimate \(\| v^{\beta(\epsilon )}-u\|_{L^{2}(0,2\pi)}\). Using \(\langle u(0,t),e^{-ikt} \rangle = \exp (\sqrt{ik} L ) \langle g(t),e^{-ikt} \rangle \), we have
$$\begin{aligned} \bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)}^{2} =& 2\pi\sum _{k \in\mathbb {Z} } \biggl| \frac{ \beta(\epsilon ) \exp ( \sqrt{\frac{|k|}{2}} L ) }{ 1+ \beta\exp ( \sqrt{\frac{|k|}{2}} L ) } \biggr|^{2} \exp \bigl(2 \sqrt{ik} (L-x) \bigr) \bigl|\bigl\langle g(t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ =& 2\pi\sum_{k \in\mathbb{Z} } \frac{\beta^{2} | \exp (-2\sqrt{ik} x ) | }{ [ \beta(\epsilon )+ \exp ( -\sqrt{\frac{|k|}{2}} L ) ]^{2} } \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ =& \beta(\epsilon )^{2} 2\pi\sum_{k \in\mathbb{Z} } \biggl[ \frac{\exp ( -\sqrt{\frac{|k|}{2}} x ) }{ \beta(\epsilon )+ \exp ( -\sqrt{\frac{|k|}{2}} L ) } \biggr]^{2} \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ \le& \beta(\epsilon )^{2} \beta(\epsilon )^{\frac{2x}{L}-2} \bigl\| u(0,t) \bigr\| ^{2}_{L^{2}(0,2\pi)} \le \beta(\epsilon )^{\frac{2x}{L}} M^{2}. \end{aligned}$$
From \(\beta(\epsilon ) =\frac{\epsilon }{M} \), we obtain
$$\begin{aligned} \bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)}^{2} \le \epsilon ^{\frac{2x}{L}} M^{2-\frac{2x}{L}}. \end{aligned}$$
This implies that
$$\begin{aligned} \bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)} \le M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$
Combining (22) and (23), we obtain
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)} \le\bigl\| u^{\beta(\epsilon )}-v^{\beta (\epsilon )} \bigr\| _{L^{2}(0,2\pi)}+ \bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)} \le2 M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$
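The kernel bound used in Step 1, \(\exp (-\sqrt{|k|/2}\, x )/ (\beta+\exp (-\sqrt{|k|/2}\, L ) ) \le \beta^{x/L-1}\), can be spot-checked numerically. The grid below is only a sanity check of the inequality (with illustrative values of L and β), not part of the proof:

```python
import math

# Spot-check of the Step 1 kernel bound
#   exp(-sqrt(|k|/2) x) / (beta + exp(-sqrt(|k|/2) L)) <= beta^(x/L - 1)
# on a grid of frequencies k and depths x.
L, beta = 1.0, 1e-3
for k in range(0, 2001, 50):
    a = math.exp(-math.sqrt(k / 2.0) * L)  # exp(-sqrt(|k|/2) L)
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        lhs = math.exp(-math.sqrt(k / 2.0) * x) / (beta + a)
        assert lhs <= beta ** (x / L - 1.0) * (1 + 1e-12)
```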

Proof of Part 2.

Lemma 2

Let \(s>0\) and \(k \in\mathbb{Z}\). Then for \(0<\beta<1\), we have
$$\begin{aligned} \frac{\beta}{ (1+\sqrt{\frac{|k|}{2}} )^{s} (\beta +\exp ( -\sqrt{\frac{|k|}{2}} L ) )} \le s^{s} e^{1-s} \bigl(1+L^{-s} \bigr) \biggl(\frac{L}{\ln(1/\beta)} \biggr)^{s}. \end{aligned}$$


Case 1. \(\sqrt{\frac{|k|}{2}} \in[0, \frac{1}{L}]\). It is clear that
$$\begin{aligned} \frac{\beta}{ (1+\sqrt{\frac{|k|}{2}} )^{s} (\beta +\exp ( -\sqrt{\frac{|k|}{2}} L ) )} \le\frac{\beta }{ (1+\sqrt{\frac{|k|}{2}} )^{s} \exp ( -\sqrt{\frac {|k|}{2}} L ) } \le\beta \exp \biggl( \sqrt{ \frac{|k|}{2}} L \biggr) \le e\beta. \end{aligned}$$
From the inequality \(\beta \le(\frac{s}{e})^{s} (\frac{1}{\ln (1/\beta)} )^{s}\), we get
$$\begin{aligned} \frac{\beta}{ (1+\sqrt{\frac{|k|}{2}} )^{s} (\beta +\exp ( -\sqrt{\frac{|k|}{2}} L ) )} \le s^{s} e^{1-s} \bigl(1+L^{-s} \bigr) \biggl(\frac{L}{\ln(1/\beta )} \biggr)^{s} . \end{aligned}$$
Case 2. \(\sqrt{\frac{|k|}{2}} >\frac{1}{L}\). Set \(Y =\frac{ \exp (-\sqrt{\frac{|k|}{2}}L )}{\beta} \). Then we obtain
$$\begin{aligned} \frac{\beta}{ (1+\sqrt{\frac{|k|}{2}} )^{s} (\beta +\exp ( -\sqrt{\frac{|k|}{2}} L ) )} =&\frac{\beta }{\beta+\beta Y} \biggl(\frac{L}{L-\ln(\beta Y)} \biggr)^{s} \\ =&\frac{1}{1+Y} \biggl(\frac{L}{L-\ln(\beta Y)} \biggr)^{s} \\ =&\frac{1}{1+Y} \biggl(\frac{L}{\ln(1/\beta)} \biggr)^{s} \biggl( \frac{-\ln(\beta)}{L-\ln(\beta Y)} \biggr)^{s} \\ =& \biggl(\frac{L}{\ln(1/\beta)} \biggr)^{s}\frac{1}{1+Y} \biggl( \frac{-\ln(\beta)}{L-\ln(\beta Y)} \biggr)^{s}. \end{aligned}$$
We continue to estimate the term \(\frac{1}{1+Y} (\frac{-\ln (\beta)}{L-\ln(\beta Y)} )^{s}\).
If \(0< Y \le1\) then \(0<-\ln(\beta) <-\ln(\beta Y)\), thus
$$\begin{aligned} \frac{1}{1+Y} \biggl(\frac{-\ln(\beta)}{L-\ln(\beta Y)} \biggr)^{s}< 1, \end{aligned}$$
else if \(Y > 1\) then \(\ln Y>0\) and \(\ln(\beta Y)=-L\sqrt{\frac {|k|}{2}}<-1\) due to the assumption \(\sqrt{\frac{|k|}{2}} \in(\frac {1}{L},\infty)\). Therefore \(\ln Y (1+\ln(\beta Y) )\le0\). This implies that
$$\begin{aligned} 0< \frac{-\ln\beta}{L-\ln(\beta Y)}<\frac{-\ln\beta}{-\ln(\beta Y)}<1+\ln Y. \end{aligned}$$
Hence, in this case, we get
$$\begin{aligned} \frac{1}{1+Y} \biggl(\frac{-\ln(\beta)}{L-\ln(\beta Y)} \biggr)^{s}< \frac{ (1+\ln Y )^{s}}{Y}= (1+\ln Y )^{s} {Y^{-1}}. \end{aligned}$$
Set \(g(Y)= (1+\ln Y )^{s} {Y^{-1}}\) for \(Y>e^{-1}\). Taking the derivative of this function, we get
$$\begin{aligned} g'(Y)= (1+\ln Y )^{s-1} {Y^{-2}} (s-1-\ln Y ). \end{aligned}$$
The function g has a maximum at the point \(Y_{0}\) such that \(g'(Y_{0})=0\). This implies that \(Y_{0}=e^{s-1}\). And therefore
$$\begin{aligned} \sup_{Y\ge1} (1+\ln Y )^{s} {Y^{-1}} \le g(Y_{0})=s^{s} e^{1-s}. \end{aligned}$$
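This maximization is easy to confirm numerically; the brute-force grid below is just a sanity check of the bound \(s^{s}e^{1-s}\):

```python
import math

# Check that g(Y) = (1 + ln Y)^s / Y attains the value s^s e^{1-s} at
# Y0 = e^{s-1}, and stays below that bound on a grid of Y >= 1.
for s in (0.5, 1.0, 2.0, 3.5):
    bound = s ** s * math.exp(1 - s)
    Y0 = math.exp(s - 1)
    assert abs((1 + math.log(Y0)) ** s / Y0 - bound) < 1e-12
    for i in range(1, 5001):
        Y = 1.0 + 0.01 * i
        assert (1 + math.log(Y)) ** s / Y <= bound + 1e-9
```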
Since (26) and (27) hold, we have
$$\begin{aligned} \frac{1}{1+Y} \biggl(\frac{-\ln(\beta)}{L-\ln(\beta Y)} \biggr)^{s} \le s^{s} e^{1-s} . \end{aligned}$$
From (25), we get
$$\begin{aligned} \frac{\beta}{ (1+\sqrt{\frac{|k|}{2}} )^{s} (\beta +\exp ( -\sqrt{\frac{|k|}{2}} L ) )} \le s^{s} e^{1-s} \biggl( \frac{L}{\ln(1/\beta)} \biggr)^{s} \le s^{s} e^{1-s} \bigl(1+L^{-s}\bigr) \biggl( \frac{L}{\ln(1/\beta)} \biggr)^{s} . \end{aligned}$$
We have
$$\begin{aligned} &\bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)}^{2} \\ &\quad= 2\pi\sum _{k \in\mathbb{Z} } \biggl[ \frac{\beta\exp ( -\sqrt{\frac{|k|}{2}} x ) }{ \beta+ \exp ( -\sqrt{\frac{|k|}{2}} L ) } \biggr]^{2} \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ &\quad\le 2\pi\sum_{k \in\mathbb{Z} } \frac{\beta^{2} }{ (1+\sqrt {\frac{|k|}{2}} )^{4s} (\beta+\exp ( -\sqrt{\frac {|k|}{2}} L ) )^{2}} \biggl(1+ \sqrt{\frac{|k|}{2}} \biggr)^{4s} \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ &\quad\le (2s)^{4s} e^{2-4s} \bigl(1+L^{-2s} \bigr)^{2} \biggl(\frac{L}{\ln(1/\beta )} \biggr)^{4s} 2\pi\sum _{k \in\mathbb{Z} } \biggl(1+\sqrt{\frac {|k|}{2}} \biggr)^{4s} \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ &\quad\le 4^{s} (2s)^{4s} e^{2-4s} \bigl(1+L^{-2s} \bigr)^{2} \biggl(\frac{L}{\ln (1/\beta)} \biggr)^{4s} 2\pi\sum _{k \in\mathbb{Z} }\bigl(1+k^{2}\bigr)^{s} \bigl| \bigl\langle u(0,t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ &\quad\le A(s)^{2} \biggl(\frac{L}{\ln(1/\beta)} \biggr)^{4s} \bigl\| u(0,t)\bigr\| ^{2}_{H^{s}(0,2\pi)}, \end{aligned}$$
$$A(s)=2^{s} (2s)^{2s} e^{1-2s} \bigl(1+L^{-2s} \bigr). $$
This leads to
$$\begin{aligned} \bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)} \le& A(s) \biggl(\frac{L}{\ln (1/\beta)} \biggr)^{2s} \bigl\| u(0,t)\bigr\| _{H^{s}(0,2\pi)} \\ \le& M_{1} A(s) \biggl(\frac{L}{\ln(1/\beta)} \biggr)^{2s}. \end{aligned}$$
Combining (20) and (30), we obtain
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi)} \le& \bigl\| u^{\beta(\epsilon )}- v^{\beta(\epsilon )}\bigr\| _{L^{2}(0,2\pi)}+ \bigl\| v^{\beta(\epsilon )}-u\bigr\| _{L^{2}(0,2\pi )} \\ \le& \beta(\epsilon )^{\frac{x}{L}-1} \epsilon + M_{1} A(s) \biggl( \frac {L}{\ln(1/\beta)} \biggr)^{2s}. \end{aligned}$$

3.2 A posteriori parameter choice rule

Theorem 2

Suppose that \(0<\epsilon < \|g^{\epsilon }\|_{L^{2}(0,2\pi)}\). Choose \(m>1\) such that
$$\begin{aligned} 0 < m\epsilon < \bigl\| g^{\epsilon }\bigr\| _{L^{2}(0,2\pi)}. \end{aligned}$$
Then there exists a unique number \(\beta(\epsilon )\) such that
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}(L,t)-g^{\epsilon }\bigr\| _{L^{2}(0,2\pi)} = m\epsilon . \end{aligned}$$
Furthermore, if \(\|u(0,t)\|_{L^{2}(0,2\pi)} \le M \), then with this choice of \(\beta(\epsilon )\) we have the following estimate:
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}(x,\cdot)-u(x,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le \biggl[ \frac {2m-1}{m-1} M \biggr]^{1-\frac{x}{L}} \bigl[(m+1)\epsilon \bigr]^{\frac {x}{L}} . \end{aligned}$$


The following results are straightforward.

Lemma 3

Set \(\rho(\beta)=\|u^{\beta(\epsilon )}(L,t)-g^{\epsilon }\|_{L^{2}(0,2\pi)} \). If \(0<\epsilon < \|g^{\epsilon }\|_{L^{2}(0,2\pi)}\) then ρ is a continuous strictly increasing function and satisfies
  1. (a)

    \(\lim_{\beta\to0} \rho(\beta) =0\),

  2. (b)

    \(\lim_{\beta\to+\infty} \rho(\beta) =\|g^{\epsilon }\|_{L^{2}(0,2\pi)}\).


From this lemma, it follows that there exists a unique number \(\beta (\epsilon )\) satisfying (31).
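Since ρ is continuous and strictly increasing, the parameter β(ε) can be computed by a simple bisection, here on a logarithmic scale. The sketch below assumes the data are represented by finitely many Fourier coefficients; all names and test values are illustrative:

```python
import math

L = 1.0  # illustrative

def rho(beta, g_hat):
    """rho(beta) = ||u^{beta}(L,.) - g_eps||, computed mode by mode:
    the k-th residual coefficient is beta/(beta + exp(-sqrt(|k|/2) L)) * g_k."""
    s = 0.0
    for k, gk in g_hat.items():
        e = math.exp(-math.sqrt(abs(k) / 2.0) * L)
        s += (beta / (beta + e)) ** 2 * abs(gk) ** 2
    return math.sqrt(2 * math.pi * s)

def choose_beta(g_hat, eps, m, lo=1e-16, hi=1e16, tol=1e-12):
    """Solve rho(beta) = m*eps by log-scale bisection; rho is strictly
    increasing (Lemma 3), so the root is unique."""
    target = m * eps  # requires m*eps < ||g_eps||, cf. Theorem 2
    while hi - lo > tol * hi:
        mid = math.sqrt(lo * hi)
        if rho(mid, g_hat) < target:
            lo = mid
        else:
            hi = mid
    return hi

g_hat = {0: 1.0, 5: 0.5, 40: 0.25}  # illustrative noisy data
beta = choose_beta(g_hat, eps=1e-3, m=2.0)
assert abs(rho(beta, g_hat) - 2e-3) < 1e-6
```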

Set \(z^{\beta(\epsilon )}(x,t)= u^{\beta(\epsilon )}(x,t)- u(x,t) \). Then \(z^{\beta(\epsilon )}\) satisfies the heat equation
$$\begin{aligned} z^{\beta(\epsilon )}_{t}=z^{\beta(\epsilon )}_{xx}. \end{aligned}$$
From the formula of u and \(u^{\beta(\epsilon )}\), we get
$$\begin{aligned} z^{\beta(\epsilon )}(x,t)= u^{\beta(\epsilon )} (x,t) -u(x,t)= \sum _{k \in \mathbb{Z} } \exp \bigl(\sqrt{ik} (L-x) \bigr) R_{k}\bigl( \beta,g,g^{\epsilon }\bigr) e^{ikt}, \end{aligned}$$
$$\begin{aligned} R_{k}\bigl(\beta,g,g^{\epsilon }\bigr) = \frac{ \exp (- \sqrt{\frac{|k|}{2}} L ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac{|k|}{2}} L ) } \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle - \bigl\langle g(t),e^{-ikt} \bigr\rangle . \end{aligned}$$
Then using the Hölder inequality, we get
$$\begin{aligned} \bigl\| z^{\beta(\epsilon )}(x,t)\bigr\| ^{2}_{L^{2}(0,2\pi)} =& 2\pi\sum _{k \in\mathbb {Z} }\exp \biggl({\sqrt{2|k|}} L\biggl(1-\frac{x}{L} \biggr) \biggr) \bigl| R_{k}\bigl(\beta,g,g^{\epsilon }\bigr) \bigr|^{2-\frac{2x}{L}} \bigl| R_{k}\bigl(\beta ,g,g^{\epsilon }\bigr) \bigr|^{\frac{2x}{L}} \\ \le& \biggl[ 2\pi\sum_{k \in\mathbb{Z} } \biggl(\exp \biggl({\sqrt {2|k|}} L\biggl(1-\frac{x}{L}\biggr) \biggr) \bigl| R_{k}\bigl( \beta,g,g^{\epsilon }\bigr) \bigr|^{2-\frac{2x}{L}} \biggr)^{\frac {1}{1-\frac{x}{L}}} \biggr]^{1-\frac{x}{L}} \\ &{}\times\biggl[ 2\pi\sum_{k \in \mathbb{Z} } \bigl( \bigl| R_{k}\bigl(\beta,g,g^{\epsilon }\bigr) \bigr|^{\frac{2x}{L}} \bigr)^{\frac {1}{\frac{x}{L}}} \biggr]^{\frac{x}{L}} \\ =& \biggl[2\pi\sum_{k \in\mathbb{Z} } \exp \bigl({\sqrt{2|k|}} L \bigr) \bigl| R_{k}\bigl(\beta,g,g^{\epsilon }\bigr) \bigr|^{2} \biggr]^{1-\frac{x}{L}} \biggl[2\pi\sum_{k \in\mathbb{Z} } \bigl| R_{k}\bigl(\beta,g,g^{\epsilon }\bigr) \bigr|^{2} \biggr]^{\frac{x}{L}} \\ =&\bigl\| z^{\beta(\epsilon )}(0,t)\bigr\| ^{2-\frac{2x}{L}}_{L^{2}(0,2\pi)} \bigl\| z^{\beta(\epsilon )}(L,t)\bigr\| ^{\frac{2x}{L}}_{L^{2}(0,2\pi)}. \end{aligned}$$
This implies that
$$\begin{aligned} \bigl\| z^{\beta(\epsilon )}(x,t)\bigr\| _{L^{2}(0,2\pi)} \le\bigl\| z^{\beta(\epsilon )}(0,t)\bigr\| ^{1-\frac{x}{L}}_{L^{2}(0,2\pi)} \bigl\| z^{\beta(\epsilon )}(L,t)\bigr\| ^{\frac {x}{L}}_{L^{2}(0,2\pi)}. \end{aligned}$$
We now prove the error estimate. It is easy to see that
$$\begin{aligned} \bigl\| z^{\beta(\epsilon )} (L,t)\bigr\| _{L^{2}(0,2\pi)} =& \bigl\| u^{\beta(\epsilon )}(L,t)-u(L,t) \bigr\| _{L^{2}(0,2\pi)} \\ \le& \bigl\| u^{\beta(\epsilon )}(L,t)-g^{\epsilon }\bigr\| _{L^{2}(0,2\pi)}+ \bigl\| g-g^{\epsilon }\bigr\| _{L^{2}(0,2\pi)} \le(m+1)\epsilon \end{aligned}$$
$$\begin{aligned} \bigl\| z^{\beta(\epsilon )}(0,t)\bigr\| _{L^{2}(0,2\pi)} \le\bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi )}+ \bigl\| u^{\beta(\epsilon )}(0,t)\bigr\| _{L^{2}(0,2\pi)}. \end{aligned}$$
We have
$$\begin{aligned} m \epsilon =&\bigl\| u^{\beta(\epsilon )}(L,t)-g^{\epsilon }(t)\bigr\| _{L^{2}(0,2\pi)} \\ =& \biggl\| \sum_{k \in\mathbb{Z} } \frac{\beta(\epsilon ) }{ \beta(\epsilon )+ \exp ( -\sqrt {\frac{|k|}{2}} L ) } \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \biggl\| \sum_{k \in\mathbb{Z} } \frac{\beta(\epsilon ) }{ \beta(\epsilon )+ \exp ( -\sqrt{\frac{|k|}{2}} L ) } \bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &{} + \biggl\| \sum_{k \in\mathbb{Z} } \frac{\beta(\epsilon ) }{ \beta(\epsilon )+ \exp ( -\sqrt{\frac{|k|}{2}} L ) } \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \biggl\| \sum_{k \in\mathbb{Z} } \bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} + \biggl\| \sum _{k \in\mathbb{Z} } { \beta(\epsilon ) \exp \biggl( \sqrt{ \frac{|k|}{2}} L \biggr) } \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi )} \\ =& \bigl\| g^{\epsilon }-g\bigr\| _{L^{2}(0,2\pi)} + \beta(\epsilon ) \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi )} \\ \le& \epsilon + \beta(\epsilon ) \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}. \end{aligned}$$
This implies that
$$\begin{aligned} \frac{\epsilon }{\beta(\epsilon ) } \le \frac{1}{m-1} \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi )}. \end{aligned}$$
It follows from (19) that
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}(0,t)\bigr\| _{L^{2}(0,2\pi)} = & \biggl\| \sum _{k \in \mathbb{Z} } \frac{ \exp (- \sqrt{\frac{|k|}{2}} L ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac{|k|}{2}} L ) } \exp (\sqrt{ik} L ) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \biggl\| \sum_{k \in\mathbb{Z} } \frac{ \exp (- \sqrt {\frac{|k|}{2}} L ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac {|k|}{2}} L ) } \exp ( \sqrt{ik} L ) \bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &{} + \biggl\| \sum_{k \in\mathbb{Z} } \frac{ \exp (- \sqrt {\frac{|k|}{2}} L ) }{ \beta(\epsilon ) +\exp ( -\sqrt{\frac {|k|}{2}} L ) } \exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \biggl\| \sum_{k \in\mathbb{Z} } \frac{1}{\beta(\epsilon )} \bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &{}+ \biggl\| \sum_{k \in\mathbb{Z} } \exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \frac{1}{ \beta(\epsilon )} \bigl\| g^{\epsilon }-g\bigr\| _{L^{2}(0,2\pi)}+ \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)} \\ \le& \frac{\epsilon }{\beta(\epsilon ) } + \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}. \end{aligned}$$
Combining (36) and (37), we obtain
$$\begin{aligned} \bigl\| u^{\beta(\epsilon )}(0,t)\bigr\| _{L^{2}(0,2\pi)} \le\biggl( \frac{1}{m-1} +1\biggr) \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}= \frac{m}{m-1} \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}. \end{aligned}$$
From (35) and (38), we obtain
$$\begin{aligned} \bigl\| z^{\beta(\epsilon )}(0,t)\bigr\| _{L^{2}(0,2\pi)} \le \biggl(1+ \frac{m}{m-1} \biggr) \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}= \frac{2m-1}{m-1} \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}. \end{aligned}$$
This together with (33) and (34) leads to
$$\begin{aligned} \bigl\| z^{\beta(\epsilon )}(x,t)\bigr\| _{L^{2}(0,2\pi)} \le& \bigl\| z^{\beta(\epsilon )}(0,t) \bigr\| ^{1-\frac{x}{L}}_{L^{2}(0,2\pi)} \bigl\| z^{\beta(\epsilon )}(L,t)\bigr\| ^{\frac{x}{L}}_{L^{2}(0,2\pi)} \\ \le& \biggl[ \frac{2m-1}{m-1} \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)} \biggr]^{1-\frac{x}{L}} \bigl[(m+1)\epsilon \bigr]^{\frac{x}{L}}. \end{aligned}$$
Since \(\|u(0,t)\|_{L^{2}(0,2\pi)} \le M\), this gives
$$\begin{aligned} \bigl\| u(x,t)- u^{\beta(\epsilon )}(x,t)\bigr\| _{L^{2}(0,2\pi)} \le \biggl[ \frac {(2m-1)M}{m-1} \biggr]^{1-\frac{x}{L}} (m+1)^{\frac{x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$

4 An iteration regularization method

4.1 A priori parameter choice

To the authors’ knowledge, there are few papers choosing the regularization parameter by an a posteriori rule for the sideways heat equation. In this section, a new regularization method of iteration type for solving this problem is given. We use this method to solve Problem (1); an a priori and an a posteriori rule for choosing the regularization parameter and Hölder-type error estimates are given. To solve the problem, we introduce the following iteration scheme:
$$\begin{aligned} \bigl\langle u_{n}^{\epsilon }(x,t),e^{-ikt} \bigr\rangle = (1-h) \bigl\langle u_{n-1}^{\epsilon }(x,t),e^{-ikt} \bigr\rangle + h \exp \bigl(\sqrt{ik} (L-x) \bigr) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \end{aligned}$$
with initial guess \(\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \rangle \) and \(0< h=e^{-L\sqrt{\frac{|k|}{2}}} \le 1 \), which plays an important role in the convergence proof. From this formula, it is easy to see that
$$\begin{aligned} \bigl\langle u_{n}^{\epsilon }(x,t),e^{-ikt} \bigr\rangle =& (1-h)^{n} \bigl\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \bigr\rangle + \sum_{i=0}^{n-1} (1-h)^{i} h \exp \bigl(\sqrt {ik} (L-x) \bigr) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \\ =& (1-h)^{n} \bigl\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \bigr\rangle + \bigl[ 1-(1-h)^{n} \bigr]\exp \bigl(\sqrt{ik} (L-x) \bigr) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle . \end{aligned}$$
Therefore, the approximate solution of Problem (1) has the following form:
$$\begin{aligned} u^{\epsilon }_{n}(x,t)=\sum_{k \in\mathbb{Z}} \bigl\langle u_{n}^{\epsilon }(x,t),e^{-ikt} \bigr\rangle e^{ikt}. \end{aligned}$$
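The closed form above is easy to verify against the recursion itself. A per-mode Python sketch with zero initial guess (all names are illustrative):

```python
import cmath
import math

def sqrt_ik(k):
    r = math.sqrt(abs(k) / 2.0)
    return complex(r, r) if k >= 0 else complex(r, -r)

def iterate_mode(k, g_k, n, x, L=1.0):
    """Run the iteration for the k-th mode n times with zero initial guess:
    u_m = (1-h) u_{m-1} + h exp(sqrt(ik)(L - x)) g_k,  h = exp(-L sqrt(|k|/2))."""
    h = math.exp(-L * math.sqrt(abs(k) / 2.0))
    u = 0j
    for _ in range(n):
        u = (1 - h) * u + h * cmath.exp(sqrt_ik(k) * (L - x)) * g_k
    return u

def closed_form_mode(k, g_k, n, x, L=1.0):
    """The geometric-sum closed form [1 - (1-h)^n] exp(sqrt(ik)(L - x)) g_k."""
    h = math.exp(-L * math.sqrt(abs(k) / 2.0))
    return (1 - (1 - h) ** n) * cmath.exp(sqrt_ik(k) * (L - x)) * g_k

# The recursion and the closed form agree:
for k in (0, 3, -7, 25):
    a = iterate_mode(k, 1.0 + 0.5j, n=30, x=0.4)
    b = closed_form_mode(k, 1.0 + 0.5j, n=30, x=0.4)
    assert abs(a - b) < 1e-9
```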

We have the main theorem as follows.

Theorem 3

Let \(u(x, t)\) be the exact solution of Problem (1), and \(u_{n}^{\epsilon }\) be its regularization approximation defined by (42) with \(\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \rangle =0\). Assume that \(\|u(0,t)\|_{L^{2}(0,2\pi )} \le M \) for \(M>0\) and take \(n= [\frac{M}{\epsilon }]\), where \([a]\) denotes the largest integer not exceeding a; then we have the estimate
$$\begin{aligned} \bigl\| u^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \le2M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$

Lemma 4

For \(0 \le h \le1\) and \(n \ge1\), the following inequalities hold:
$$\begin{aligned}& (1-h)^{n} h \le \frac{1}{n+1} ,\\& \frac{1-(1-h)^{n}}{h} \le n. \end{aligned}$$
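Both inequalities are elementary (the first by maximizing over h, the second from the telescoping sum \(1-(1-h)^{n}=h\sum_{i=0}^{n-1}(1-h)^{i}\le nh\)); a brute-force numerical check:

```python
# Grid check of Lemma 4: for 0 <= h <= 1 and n >= 1,
#   (1-h)^n h <= 1/(n+1)   and   (1 - (1-h)^n) / h <= n.
for n in range(1, 200):
    for j in range(0, 101):
        h = j / 100.0
        assert (1 - h) ** n * h <= 1.0 / (n + 1) + 1e-12
        if h > 0:
            assert (1 - (1 - h) ** n) / h <= n + 1e-12
```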

Lemma 5

For \(0 \le h \le1\), \(0 \le\alpha\le1\), and \(n \ge1\), the following inequalities hold:
$$\begin{aligned}& (1-h)^{n} h^{\alpha}\le \frac{1}{(n+1)^{\alpha}},\\& \frac{1-(1-h)^{n}}{h^{\alpha}} \le n^{\alpha}. \end{aligned}$$
Proof of Theorem 3. From \(\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \rangle =0\), we see that
$$\begin{aligned} u^{\epsilon }_{n}(x,t)= \sum_{k \in\mathbb{Z} } \bigl[ 1-(1-h)^{n} \bigr] \exp \bigl(\sqrt{ik} (L-x) \bigr) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt}. \end{aligned}$$
Let \(v^{\epsilon }_{n}(x,t)\) be given by
$$\begin{aligned} v^{\epsilon }_{n}(x,t)= \sum_{k \in\mathbb{Z} } \bigl[ 1-(1-h)^{n} \bigr] \exp \bigl(\sqrt{ik} (L-x) \bigr) \bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt}. \end{aligned}$$
We divide the argument into two steps.
Step 1. Estimate \(\|u^{\epsilon }_{n}(x,\cdot)-v^{\epsilon }_{n}(x,\cdot) \|_{L^{2}(0,2\pi)}\). We have
$$\begin{aligned} &\bigl\| u^{\epsilon }_{n}(x,\cdot)-v^{\epsilon }_{n}(x,\cdot) \bigr\| _{L^{2}(0,2\pi)}^{2} \\ &\quad=2\pi\sum_{k \in \mathbb{Z} } \bigl[ 1-(1-h)^{n} \bigr]^{2} \bigl|\exp \bigl(2\sqrt{ik} (L-x) \bigr) \bigr| \bigl| \bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ &\quad\le 2\pi\sup_{k \in\mathbb{Z} } \bigl[ 1-(1-h)^{n} \bigr]^{2} \bigl|\exp \bigl(2\sqrt{ik} (L-x) \bigr) \bigr| \sum _{k \in\mathbb{Z} } \bigl| \bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ &\quad= \sup_{k \in\mathbb{Z} } \bigl[ 1-(1-h)^{n} \bigr]^{2} \exp \bigl({\sqrt{2|k|}} (L-x) \bigr) \bigl\| g^{\epsilon }-g \bigr\| _{L^{2}(0,2\pi)}^{2}. \end{aligned}$$
It follows from \(\exp ({\sqrt{2|k|}} (L-x) ) = h^{\frac {2x}{L}-2} \) and \(\|g^{\epsilon }-g\|_{L^{2}(0,2\pi)} \le \epsilon \) that
$$\begin{aligned} \bigl\| u^{\epsilon }_{n}(x,\cdot)-v^{\epsilon }_{n}(x,\cdot) \bigr\| _{L^{2}(0,2\pi)}^{2} \le \sup_{0< h<1 } \bigl[ 1-(1-h)^{n} \bigr]^{2} h^{\frac{2x}{L}-2} \epsilon ^{2}. \end{aligned}$$
Because of Lemma 5, we have
$$\begin{aligned} \bigl\| u^{\epsilon }_{n}(x,\cdot)-v^{\epsilon }_{n}(x,\cdot) \bigr\| _{L^{2}(0,2\pi)}^{2} \le n^{2-\frac {2x}{L}} \epsilon ^{2}. \end{aligned}$$
$$\begin{aligned} \bigl\| u^{\epsilon }_{n}(x,\cdot)-v^{\epsilon }_{n}(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \le n^{1-\frac{x}{L}} \epsilon . \end{aligned}$$
Step 2. Estimate \(\|v^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \|_{L^{2}(0,2\pi)}\). We have
$$\begin{aligned} \bigl\| v^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)}^{2} =& 2\pi\sum_{k \in\mathbb {Z} } (1-h)^{2n} \exp \bigl({ \sqrt{2|k|}} (L-x) \bigr) \bigl| \bigl\langle g(t),e^{-ikt} \bigr\rangle \bigr|^{2} \\ \le& \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)}^{2} \sup_{0< h<1} (1-h)^{2n} h^{\frac {2x}{L}} \\ \le& M^{2} (n+1)^{-\frac{2x}{L}}. \end{aligned}$$
This implies that
$$\begin{aligned} \bigl\| v^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \le M (n+1)^{-\frac{x}{L}} . \end{aligned}$$
Combining (43) and (44), we obtain
$$\begin{aligned} \bigl\| u^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \le& \bigl\| u^{\epsilon }_{n}(x,\cdot)-v^{\epsilon }_{n}(x,\cdot) \bigr\| _{L^{2}(0,2\pi)}+ \bigl\| v^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi )} \\ \le& n^{1-\frac{x}{L}} \epsilon + M (n+1)^{-\frac{x}{L}}. \end{aligned}$$
Due to \(n= [ \frac{M}{\epsilon } ]\), we have \(n \le\frac{M}{\epsilon }\) and \(n+1 \ge\frac{M}{\epsilon }\), and therefore
$$\begin{aligned} \bigl\| u^{\epsilon }_{n}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \le M \biggl( \frac{M}{\epsilon } \biggr)^{-\frac{x}{L}} +\epsilon \biggl( \frac{M}{\epsilon } \biggr)^{1-\frac {x}{L}}=2M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}. \end{aligned}$$

4.2 The discrepancy principle

In this section, we discuss the a posteriori stopping rule for the iteration scheme (40), based on the discrepancy principle of Morozov in the following form:
$$\begin{aligned} \bigl\| g^{\epsilon }(t)-u_{\beta}^{\epsilon }(L,t) \bigr\| _{L^{2}(0,2\pi)}=m \epsilon , \end{aligned}$$
where \(m>1\) is a constant and β denotes the regularization parameter.
If we take \(\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \rangle =0\), then (45) can be simplified to the form
$$\begin{aligned} \bigl\| g^{\epsilon }(t)-u_{\beta}^{\epsilon }(L,t) \bigr\| _{L^{2}(0,2\pi)} =& \biggl\| \sum_{k \in \mathbb{Z} } \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt}-\sum_{k \in \mathbb{Z} } \bigl[ 1-(1-h)^{\beta}\bigr] \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ =& \biggl\| \sum_{k \in\mathbb{Z} } (1-h)^{\beta}\bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)}=m \epsilon . \end{aligned}$$
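A minimal numerical sketch of this stopping rule follows. It is our own illustration, not the authors' code: the names `residual` and `stop_beta` and the synthetic Fourier coefficients are hypothetical. The scheme increases β until the residual \(\| (1-h)^{\beta}g^{\epsilon}\|\) first falls below mε.

```python
import math

def residual(coeffs, beta, L=1.0):
    """P(beta) = || sum_k (1-h)^beta <g_eps, e^{-ikt}> e^{ikt} ||_{L^2(0,2pi)},
    with h = exp(-L*sqrt(|k|/2)), computed via Parseval's identity."""
    s = sum(((1.0 - math.exp(-L * math.sqrt(abs(k) / 2.0))) ** beta * abs(gk)) ** 2
            for k, gk in coeffs.items())
    return math.sqrt(2.0 * math.pi * s)

def stop_beta(coeffs, m, eps, L=1.0, beta_max=10**6):
    """Smallest integer beta >= 1 with residual <= m*eps (discrepancy principle)."""
    beta = 1
    while residual(coeffs, beta, L) > m * eps and beta < beta_max:
        beta += 1
    return beta

# synthetic noisy Fourier coefficients, for illustration only
coeffs = {k: 1.0 / (1 + k * k) for k in range(-20, 21)}
beta = stop_beta(coeffs, m=2.0, eps=1e-2)
```

Since \(0<1-h<1\) for \(k \ne0\), the residual decreases monotonically in β, so the loop terminates at the first β satisfying the discrepancy equation approximately.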
We have the main theorem as follows.

Theorem 4

Let \(u(x, t)\) be the exact solution of Problem (1), and let \(u_{\beta}^{\epsilon }\) be its regularized approximation defined by (42) with \(\langle u_{0}^{\epsilon }(x,t),e^{-ikt} \rangle =0\). If the a priori bound \(\|u(0,t)\| _{L^{2}(0,2\pi)} \le M\) holds and the iteration (40) is stopped by the discrepancy principle (45), then
$$\begin{aligned} \bigl\| u^{\epsilon }_{\beta}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \le E M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}, \end{aligned}$$
where
$$\begin{aligned} E= (m+1)^{\frac{x}{L}} \biggl(\frac{m}{m-1} \biggr)^{1-\frac{x}{L}} . \end{aligned}$$

The following results are obvious.

Lemma 6

Set \(P(\beta)= \|g^{\epsilon }(t)-u_{\beta}^{\epsilon }(L,t) \|_{L^{2}(0,2\pi)} =\| (1-h)^{\beta}g^{\epsilon }(t) \|_{L^{2}(0,2\pi)} \).

Lemma 7

For
$$\begin{aligned} w^{\epsilon }_{\beta}(x,t) =& \sum_{k \in\mathbb{Z} } \bigl[\exp \bigl(\sqrt {ik} (L-x) \bigr) \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \\ &{}\times\exp \bigl(\sqrt{ik} (L-x) \bigr) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt}, \end{aligned}$$
the following inequality holds:
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(x,t)\bigr\| _{L^{2}(0,2\pi)} \le\bigl\| w^{\epsilon }_{\beta}(0,t)\bigr\| ^{1-\frac{x}{L}}_{L^{2}(0,2\pi)} \bigl\| w^{\epsilon }_{\beta}(L,t)\bigr\| ^{\frac {x}{L}}_{L^{2}(0,2\pi)} . \end{aligned}$$


We have
$$\begin{aligned} w^{\epsilon }_{\beta}(0,t) = \sum_{k \in\mathbb{Z} } \bigl[\exp (\sqrt {ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \exp (\sqrt{ik} L ) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \end{aligned}$$
and
$$\begin{aligned} w^{\epsilon }_{\beta}(L,t) = \sum_{k \in\mathbb{Z} } \bigl[ \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt}. \end{aligned}$$
Hence
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(x,t)\bigr\| ^{2}_{L^{2}(0,2\pi)} = \sum _{k \in\mathbb{Z} }\exp \bigl({\sqrt{2|k|}} (L-x) \bigr) \bigl| \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr|^{2}. \end{aligned}$$
For brevity, we set \(Q_{k}(h,\beta)= \langle g(t),e^{-ikt} \rangle - [ 1-(1-h)^{\beta}] \langle g^{\epsilon }(t),e^{-ikt} \rangle \). Then, using the Hölder inequality with the conjugate exponents \(\frac{1}{1-\frac{x}{L}}\) and \(\frac{L}{x}\), we get
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(x,t)\bigr\| ^{2}_{L^{2}(0,2\pi)} =& \sum _{k \in\mathbb{Z} }\exp \biggl({\sqrt{2|k|}} L\biggl(1- \frac{x}{L}\biggr) \biggr) \bigl| Q_{k}(h,\beta) \bigr|^{2-\frac{2x}{L}} \bigl| Q_{k}(h,\beta) \bigr|^{\frac {2x}{L}} \\ \le& \biggl[ \sum_{k \in\mathbb{Z} } \biggl(\exp \biggl({\sqrt {2|k|}} L\biggl(1-\frac{x}{L}\biggr) \biggr) \bigl| Q_{k}(h,\beta) \bigr|^{2-\frac{2x}{L}} \biggr)^{\frac{1}{1-\frac {x}{L}}} \biggr]^{1-\frac{x}{L}} \\ &{}\times\biggl[ \sum _{k \in\mathbb{Z} } \bigl( \bigl| Q_{k}(h,\beta) \bigr|^{\frac{2x}{L}} \bigr)^{\frac{L}{x}} \biggr]^{\frac{x}{L}} \\ =& \biggl[\sum_{k \in\mathbb{Z} } \exp \bigl({\sqrt{2|k|}} L \bigr) \bigl| Q_{k}(h,\beta) \bigr|^{2} \biggr]^{1-\frac{x}{L}} \biggl[\sum _{k \in \mathbb{Z} } \bigl| Q_{k}(h,\beta) \bigr|^{2} \biggr]^{\frac{x}{L}} \\ =&\bigl\| w^{\epsilon }_{\beta}(0,t)\bigr\| ^{2-\frac{2x}{L}}_{L^{2}(0,2\pi)} \bigl\| w^{\epsilon }_{\beta}(L,t)\bigr\| ^{\frac{2x}{L}}_{L^{2}(0,2\pi)} . \end{aligned}$$
This completes the proof of the lemma. □
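The interpolation inequality of Lemma 7 relies only on Hölder's inequality, so it can be checked on arbitrary coefficients. The snippet below is our own sanity check (the name `weighted_norms` and the synthetic coefficients \(Q_k\) are not from the paper): it verifies \(\|w(x)\|^{2} \le\|w(0)\|^{2-\frac{2x}{L}} \|w(L)\|^{\frac{2x}{L}}\) for random data.

```python
import math
import random

def weighted_norms(Q, x, L=1.0):
    """Return the three weighted sums playing the roles of ||w(x)||^2,
    ||w(0)||^2 and ||w(L)||^2, with weight exp(sqrt(2|k|)*(L - x))."""
    nx = sum(math.exp(math.sqrt(2 * abs(k)) * (L - x)) * q * q for k, q in Q.items())
    n0 = sum(math.exp(math.sqrt(2 * abs(k)) * L) * q * q for k, q in Q.items())
    nL = sum(q * q for q in Q.values())
    return nx, n0, nL

random.seed(0)
# synthetic coefficients Q_k, decaying in |k|, for illustration only
Q = {k: random.uniform(-1.0, 1.0) / (1 + abs(k)) for k in range(-15, 16)}
x, L = 0.4, 1.0
nx, n0, nL = weighted_norms(Q, x, L)
# Hoelder interpolation: nx <= n0^(1-x/L) * nL^(x/L), up to rounding
assert nx <= n0 ** (1 - x / L) * nL ** (x / L) * (1.0 + 1e-12)
```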

Lemma 8

The following inequality holds:
$$\begin{aligned} \beta \epsilon \le\frac{M}{m-1}. \end{aligned}$$


It follows from (46) and the triangle inequality that
$$\begin{aligned} m \epsilon =& \biggl\| \sum_{k \in\mathbb{Z} } (1-h)^{\beta}\bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \biggl\| \sum_{k \in\mathbb{Z} } (1-h)^{\beta}\bigl\langle g^{\epsilon }(t)-g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} + \biggl\| \sum_{k \in\mathbb{Z} } (1-h)^{\beta}\bigl\langle g(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \sup_{0< h<1} (1-h)^{\beta}\bigl\| g^{\epsilon }-g \bigr\| _{L^{2}(0,2\pi)} +\sup_{0<h<1} h (1-h)^{\beta}\biggl\| \sum _{k \in\mathbb{Z} } \bigl\langle u(0,t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)}, \end{aligned}$$
where the last step uses \(\langle g(t),e^{-ikt} \rangle = \exp (-\sqrt{ik} L ) \langle u(0,t),e^{-ikt} \rangle \) and \(\vert \exp (-\sqrt{ik} L ) \vert =e^{-L\sqrt{\frac{|k|}{2}}}=h\).
Moreover, since \(0<h<1\), we have \((1-h)^{\beta}\le1\), and the function \(h \mapsto h(1-h)^{\beta}\) attains its maximum on \((0,1)\) at \(h=\frac{1}{\beta+1}\), so that
$$\begin{aligned} h(1-h)^{\beta}\le\frac{1}{\beta+1} \biggl(\frac{\beta}{\beta+1} \biggr)^{\beta}\le\frac{1}{\beta}. \end{aligned}$$
Together with \(\|u(0,t)\|_{L^{2}(0,2\pi)} \le M\), this leads to
$$\begin{aligned} m \epsilon \le \epsilon + \frac{M}{\beta}, \end{aligned}$$
that is, \((m-1)\epsilon \le\frac{M}{\beta}\), and therefore \(\beta \epsilon \le\frac{M}{m-1}\).
This completes the proof of the lemma. □

Lemma 9

The following inequality holds:
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(L,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le(m+1)\epsilon . \end{aligned}$$


Due to the triangle inequality, we have
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(L,\cdot)\bigr\| _{L^{2}(0,2\pi)} =& \biggl\| \sum _{k \in\mathbb {Z} } \bigl[ \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \biggl\| \sum_{k \in\mathbb{Z} } \bigl\langle g(t)-g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &{}+ \biggl\| \sum_{k \in\mathbb {Z} } (1-h)^{\beta}\bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \bigl\| g^{\epsilon }-g\bigr\| _{L^{2}(0,2\pi)}+ \biggl\| \sum _{k \in\mathbb{Z} } (1-h)^{\beta}\bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi )} \\ \le& \epsilon + m \epsilon =(m+1)\epsilon . \end{aligned}$$
This completes the proof of the lemma. □

Lemma 10

The following inequality holds:
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(0,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le\frac{m M}{m-1}. \end{aligned}$$


Due to the triangle inequality and \(h=e^{-L\sqrt{\frac{|k|}{2}}}\), we have
$$\begin{aligned} &\bigl\| w^{\epsilon }_{\beta}(0,\cdot)\bigr\| _{L^{2}(0,2\pi)} \\ &\quad= \biggl\| \sum _{k \in\mathbb {Z} } \bigl[\exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \exp ( \sqrt{ik} L ) \bigl\langle g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &\quad\le \biggl\| \sum_{k \in\mathbb{Z} } \bigl[\exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle - \bigl[ 1-(1-h)^{\beta}\bigr] \exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &\qquad{} + \biggl\| \sum_{k \in\mathbb{Z} } \bigl[ \bigl[ 1-(1-h)^{\beta}\bigr] \exp (\sqrt{ik} L ) \bigl\langle g(t)-g^{\epsilon }(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &\quad\le \biggl\| \sum_{k \in\mathbb{Z} } \bigl[ (1-h)^{\beta}\exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &\qquad{} + \biggl\| \sum_{k \in\mathbb{Z} } \frac{1-(1-h)^{\beta}}{h} \bigl\langle g(t)-g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)}, \end{aligned}$$
where in the last step we used \(\vert \exp (\sqrt{ik} L ) \vert =e^{L\sqrt{\frac{|k|}{2}}}=h^{-1}\).
Moreover, we have
$$\begin{aligned} &\biggl\| \sum_{k \in\mathbb{Z} } \bigl[ (1-h)^{\beta}\exp ( \sqrt {ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &\quad\le \biggl\| \sum_{k \in\mathbb{Z} } \bigl[ \exp (\sqrt{ik} L ) \bigl\langle g(t),e^{-ikt} \bigr\rangle \bigr] e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ &\quad= \bigl\| u(0,t)\bigr\| _{L^{2}(0,2\pi)} \le M, \end{aligned}$$
and using \(\frac{1-(1-h)^{\beta}}{h} \le\beta\), we have the estimate
$$\begin{aligned} \biggl\| \sum_{k \in\mathbb{Z} } \frac{1-(1-h)^{\beta}}{h} \bigl\langle g(t)-g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \le& \beta \biggl\| \sum_{k \in\mathbb{Z} } \bigl\langle g(t)-g^{\epsilon }(t),e^{-ikt} \bigr\rangle e^{ikt} \biggr\| _{L^{2}(0,2\pi)} \\ \le& \beta\bigl\| g^{\epsilon }-g\bigr\| _{L^{2}(0,2\pi)} \le\beta \epsilon . \end{aligned}$$
Combining (47), (48), and (49), we obtain
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(0,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le M+ \beta \epsilon . \end{aligned}$$
Lemma 8 leads to
$$\begin{aligned} \bigl\| w^{\epsilon }_{\beta}(0,\cdot)\bigr\| _{L^{2}(0,2\pi)} \le M+ \frac{M}{m-1}= \frac{Mm}{m-1}. \end{aligned}$$
This completes the proof of the lemma. □
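The elementary bound \(\frac{1-(1-h)^{\beta}}{h} \le\beta\) used above follows from Bernoulli's inequality \((1-h)^{\beta}\ge1-\beta h\). As a quick grid check (our own illustration; the name `bernoulli_ratio` is hypothetical):

```python
def bernoulli_ratio(h, beta):
    """(1 - (1-h)**beta) / h, which Bernoulli's inequality bounds by beta."""
    return (1.0 - (1.0 - h) ** beta) / h

# scan h over a grid in (0,1) for several integer beta
for beta in (1, 2, 5, 50, 1000):
    for i in range(1, 1000):
        h = i / 1000.0
        assert bernoulli_ratio(h, beta) <= beta + 1e-9
```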

We now return to the proof of Theorem 4.

Combining Lemmas 7, 9, and 10, we obtain
$$\begin{aligned} \bigl\| u^{\epsilon }_{\beta}(x,\cdot)-u(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} =& \bigl\| w^{\epsilon }_{\beta}(x,\cdot) \bigr\| _{L^{2}(0,2\pi)} \\ \le& \bigl\| w^{\epsilon }_{\beta}(0,t) \bigr\| ^{1-\frac {x}{L}}_{L^{2}(0,2\pi)} \bigl\| w^{\epsilon }_{\beta}(L,t) \bigr\| ^{\frac {x}{L}}_{L^{2}(0,2\pi)} \\ \le& \biggl(\frac{Mm}{m-1} \biggr)^{1-\frac{x}{L}} \bigl((m+1)\epsilon \bigr)^{\frac{x}{L}} \\ =&E M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}, \end{aligned}$$
where
$$\begin{aligned} E= (m+1)^{\frac{x}{L}} \biggl(\frac{m}{m-1} \biggr)^{1-\frac{x}{L}}. \end{aligned}$$
This completes the proof of Theorem 4. □
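The last equality is pure algebra: \((\frac{Mm}{m-1} )^{1-\frac{x}{L}} ((m+1)\epsilon )^{\frac{x}{L}}\) regroups into \(E M^{1-\frac{x}{L}} \epsilon ^{\frac{x}{L}}\). A numerical spot check (illustrative values only; the names `lhs` and `rhs` are ours):

```python
import math

def lhs(M, m, eps, x, L=1.0):
    """(M*m/(m-1))^(1-x/L) * ((m+1)*eps)^(x/L)."""
    return (M * m / (m - 1)) ** (1 - x / L) * ((m + 1) * eps) ** (x / L)

def rhs(M, m, eps, x, L=1.0):
    """E * M^(1-x/L) * eps^(x/L) with E = (m+1)^(x/L) * (m/(m-1))^(1-x/L)."""
    E = (m + 1) ** (x / L) * (m / (m - 1)) ** (1 - x / L)
    return E * M ** (1 - x / L) * eps ** (x / L)

assert math.isclose(lhs(5.0, 3.0, 1e-4, 0.7), rhs(5.0, 3.0, 1e-4, 0.7), rel_tol=1e-12)
```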



The authors wish to express their sincere thanks to the anonymous referees and the handling editor for many constructive comments, which led to an improved version of this paper. The second author is supported by Posts and Telecommunications Institute of Technology, Ho Chi Minh City, Vietnam.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University
Faculty of Basic Science, Posts and Telecommunications Institute of Technology


  1. Kirsch, A: An Introduction to the Mathematical Theory of Inverse Problems. Applied Mathematical Sciences, vol. 120. Springer, New York (1996)
  2. Eldén, L: Approximations for a Cauchy problem for the heat equation. Inverse Probl. 3(2), 263-273 (1987)
  3. Qian, Z, Fu, C-L, Xiong, X-T: A modified method for determining the surface heat flux of IHCP. Inverse Probl. Sci. Eng. 15(3), 249-265 (2007)
  4. Xiong, X-T, Fu, C-L, Li, H-F: Central difference method of a non-standard inverse heat conduction problem for determining surface heat flux from interior observations. Appl. Math. Comput. 173(2), 1265-1287 (2006)
  5. Xiong, X-T, Fu, C-L, Li, H-F: Fourier regularization method of a sideways heat equation for determining surface heat flux. J. Math. Anal. Appl. 317(1), 331-348 (2006)
  6. Liu, JC, Wei, T: A quasi-reversibility regularization method for an inverse heat conduction problem without initial data. Appl. Math. Comput. 219(23), 10866-10881 (2013)
  7. Reginska, T, Eldén, L: Solving the sideways heat equation by a wavelet-Galerkin method. Inverse Probl. 13(4), 1093-1106 (1997)
  8. Eldén, L, Berntsson, F, Reginska, T: Wavelet and Fourier methods for solving the sideways heat equation. SIAM J. Sci. Comput. 21(6), 2187-2205 (2000)
  9. Deng, YJ, Liu, ZH: Iteration methods on sideways parabolic equations. Inverse Probl. 25, 095004 (2009)


© Nguyen and Luu; licensee Springer. 2015