Alternating Direction Implicit Method for Poisson Equation with Integral Conditions

In this paper, we investigate the convergence of the Peaceman-Rachford Alternating Direction Implicit (ADI) method for the system of difference equations approximating the two-dimensional elliptic equation in a rectangular domain with nonlocal integral conditions. The main goal of the paper is the analysis of the spectrum structure of the difference eigenvalue problem with nonlocal conditions. The convergence of the iterative method is proved in the case when the system of eigenvectors is complete. The main results are generalized to the system of difference equations approximating the differential problem with truncation error O(h^4).


Introduction
Boundary value problems for differential equations with various types of nonlocal conditions are currently being studied quite intensively in the theory of differential equations and numerical analysis.
The study of numerical methods for elliptic equations with nonlocal conditions is strongly influenced by two causes. Firstly, over the past few decades, new mathematical models with nonlocal conditions have been developed for applications in physics, thermoelasticity, ecology, biotechnology, etc. Secondly, in the investigation of problems of pure mathematics, several scientific articles have been published on the generalization of classical boundary conditions for elliptic equations [6,10].
The first results on the solution of a two-dimensional elliptic equation with a nonlocal condition were obtained in [14,15,23,25]. This condition was later named the Bitsadze-Samarskii nonlocal condition. These papers began the investigation of iterative methods for systems of difference equations with nonlocal conditions. We note one characteristic feature of such systems. Due to the nonlocal condition, the matrix of the system of difference equations is not symmetric. However, quite often it has some nice properties; for example, all the eigenvalues of the matrix are positive.
Many articles are devoted to the estimation of the error of the finite difference method and its convergence for elliptic equations with various types of nonlocal conditions [2,7,15,28,29,30]. The alternating direction method for systems of difference equations with nonlocal conditions is examined in the papers [20,21,27]. In many cases, the matrix of the system of difference equations has properties typical of M-matrices. Therefore, the theory of M-matrices can be applied to the study and solution of problems with nonlocal conditions [11,19,22,27]. The works [1,12,13,24] are devoted to high-precision finite difference methods for the simplest elliptic equations with nonlocal conditions.
To examine the convergence conditions for the ADI method, we analyze in sufficient detail the structure of the spectrum of the corresponding difference problem. The structure of the spectrum for a differential problem with other types of nonlocal conditions is considered in many papers (see, for example, [16,17,20,21,23,26]). As in our previous papers [20,21,27], we prove the convergence of the ADI method in the case when the system of eigenvectors is complete. In the present paper, we also make some comments and give examples concerning the convergence of the ADI method without this condition.
The further structure of this paper is as follows. The difference problem corresponding to the differential problem is formulated in Section 2, where the ADI method is also introduced. The structure of the spectrum of difference problems is discussed in Section 3. The convergence of the ADI method is demonstrated in Section 4. In Section 5, a higher-order finite difference method is considered. In Section 6, numerical results are provided to verify the accuracy and efficiency of the proposed algorithms. The last Section 7 presents comments and conclusions.

Difference problem
Consider a uniform mesh in x and y with step size h = 1/N (1 < N ∈ N). We use the following notation. Let us replace the differential problem (1.1)-(1.3) with the following difference problem on the mesh. The integral conditions (1.2) are approximated by the trapezoidal rule. For simplicity, we assume that the values ξ1 and ξ2 are such that ξ1 = rh and ξ2 = sh, r, s ∈ N, 0 < r < N, 0 < s < N. Assume that N, r and s are even numbers. Note that this is not a strong restriction, as we can always halve the step size h.
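Since the integral conditions (1.2) are approximated by the trapezoidal rule on the uniform mesh, with ξ1 = rh and ξ2 = sh falling on mesh nodes, the quadrature weights are h/2 at the endpoint nodes and h at the interior nodes. A minimal sketch of this quadrature (the helper name `trapezoid_on_mesh` and the test data are illustrative, not taken from the paper):

```python
import numpy as np

def trapezoid_on_mesh(u, r, s, h):
    """Composite trapezoidal rule for the integral of u over [xi1, xi2]
    on a uniform mesh, where xi1 = r*h and xi2 = s*h are mesh nodes
    (as assumed in the paper).  u holds the nodal values u[0..N]."""
    w = np.full(s - r + 1, h)   # interior weights h
    w[0] = w[-1] = h / 2        # endpoint weights h/2
    return w @ u[r:s + 1]

# sanity check on u(x) = 2x + 1: the trapezoidal rule is exact for
# linear integrands, so the quadrature reproduces the exact integral
N, r, s = 8, 2, 6
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
approx = trapezoid_on_mesh(2 * x + 1, r, s, h)
exact = (s * h) ** 2 - (r * h) ** 2 + (s - r) * h   # integral of 2x+1
print(abs(approx - exact))  # ~0 (round-off only)
```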
The existence and uniqueness of the solution of the differential problem (1.1)-(1.3) are investigated in [4,5]. The error estimate and convergence of the solution of the finite difference method are presented in [8,9].
The corresponding difference scheme for this problem, under the condition that the desired solution belongs to the Sobolev space W_2^s (1 < s ≤ 3), has been investigated in [9].
Next, we will consider the system (2.8)-(2.9), (2.4). We will write the system (2.1)-(2.4) in the matrix form (2.5). For this purpose, let us define matrices of order (N − 1). Note that only the first and the last row of the matrix C are non-zero; here we indicate the column numbers on top of the matrix. Let us introduce the corresponding notation. The system (2.1)-(2.4) can be written in the matrix form (2.5) using these matrices, where I_x and I_y are the identity matrices of order (N − 1) (in our case of a square domain). Our main goal is to study the ADI method for solving the system of difference equations. We write the ADI method for system (2.5), where τ1_n, τ2_n, n = 0, 1, ..., are iteration parameters. We give an explicit formula for determining the iteration parameters in Section 4.
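Because the system matrices and right-hand side appear here only symbolically, the Peaceman-Rachford splitting for (A1 + A2)U = F can be sketched with generic matrices. The demo below uses a single parameter sequence (the paper allows separate sequences τ1_n, τ2_n) and the symmetric Dirichlet test matrices are assumptions chosen only so that convergence is guaranteed:

```python
import numpy as np

def peaceman_rachford(A1, A2, F, tau, n_iter=50):
    """Sketch of the Peaceman-Rachford ADI iteration for (A1 + A2) U = F:
        (I + tau_n A1) U^{n+1/2} = (I - tau_n A2) U^n       + tau_n F
        (I + tau_n A2) U^{n+1}   = (I - tau_n A1) U^{n+1/2} + tau_n F
    Here tau is a list of parameters cycled over the iterations."""
    I = np.eye(A1.shape[0])
    U = np.zeros_like(F)
    for n in range(n_iter):
        t = tau[n % len(tau)]
        U_half = np.linalg.solve(I + t * A1, (I - t * A2) @ U + t * F)
        U = np.linalg.solve(I + t * A2, (I - t * A1) @ U_half + t * F)
    return U

# usage on the Dirichlet model problem: A1 = I (x) Lam, A2 = Lam (x) I
N = 8
h = 1.0 / N
Lam = (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
I1 = np.eye(N - 1)
A1, A2 = np.kron(I1, Lam), np.kron(Lam, I1)
F = np.ones((N - 1) ** 2)
lam_min = 4 / h**2 * np.sin(np.pi * h / 2) ** 2
lam_max = 4 / h**2 * np.sin((N - 1) * np.pi * h / 2) ** 2
U = peaceman_rachford(A1, A2, F, tau=[1 / np.sqrt(lam_min * lam_max)])
print(np.linalg.norm((A1 + A2) @ U - F))  # small residual after 50 sweeps
```

The stationary parameter τ = 1/√(λmin λmax) equalizes the damping factors at both ends of the spectrum; the nonstationary parameter sets of Section 4 accelerate this further.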

The structure of the spectrum of the difference problem
The proof of the convergence of the method (2.12) is based on the structure of the spectrum of one-dimensional eigenvalue problems. We consider two difference eigenvalue problems. The first of these is the problem with nonlocal boundary conditions. The second problem is the classical one. In the problem with nonlocal conditions, we express v0 and vN from (3.2)-(3.3) and substitute them into (3.1). Thus we get the problem (3.4)-(3.5). We can write the problem (3.4)-(3.5) in the matrix form (3.6). The problem (3.7) has a known solution. We will find the eigenvalues and eigenvectors of the problem (3.6). As far as we know, this problem has not been investigated.
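For comparison, the classical problem (3.7) has the well-known eigenpairs η_k = (4/h²) sin²(kπh/2) with eigenvectors v_k,i = sin(kπih), k = 1, ..., N − 1. A quick numerical sanity check of this standard fact (the mesh size N = 16 is an arbitrary choice for the demo):

```python
import numpy as np

N = 16
h = 1.0 / N
# classical 1-D difference eigenvalue problem with Dirichlet conditions:
# -(v_{i-1} - 2 v_i + v_{i+1}) / h^2 = eta v_i,  v_0 = v_N = 0,
# written as a symmetric tridiagonal matrix of order N - 1
Lam = (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h**2
eta = np.sort(np.linalg.eigvalsh(Lam))
eta_exact = 4 / h**2 * np.sin(np.arange(1, N) * np.pi * h / 2) ** 2
print(np.max(np.abs(eta - eta_exact)))  # agrees to round-off
```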
Lemma 1. The eigenvalues of the problem (3.6) which satisfy the stated conditions can be expressed in the form (3.9), where α_k is a root of any of the equations (3.12)-(3.14). The eigenvectors v_k can be expressed in terms of (c1, c2), a nontrivial solution of the system (3.11) in the case α = α_k.
Proof. The inequality is rewritten in terms of a new parameter α instead of η. So, the statement of the lemma follows from here. ⊓⊔
For even numbers r and s, there are N − 3 roots in total. In each of the formulas (3.12)-(3.14), the roots are different. However, the eigenvalue (3.9) can be multiple if some of the roots of different formulas coincide. For example, in the case ξ1 = 1 − ξ2, the equations (3.12) and (3.13) have the same roots. In this case, the eigenvalues are multiple, and the system of eigenvectors may not be complete.
Lemma 2. In the case of even numbers r and s, the value η = 4/h² (3.15) is an eigenvalue of the problem (3.6) of multiplicity 2 with two linearly independent corresponding eigenvectors v1 and v2 given by (3.16). Proof. The general solution of Equation (3.1) in the case η = 4/h² is (3.17).

The convergence of the ADI method
Let us write the iterative method (2.12) as a matrix equation (4.1), where the iteration matrix S_n is given by (4.2) and I = I_y ⊗ I_x is the identity matrix of order (N − 1)². We will prove that the spectral radius satisfies ϱ(S_n) < 1.
Lemma 3. A1 and A2 are commuting matrices. Proof. It is easy to check this directly. Proof (of Lemma 4). It follows from the definition of A1 and U_kl by formulas (2.11) and (4.3). Furthermore, η and v are an eigenvalue and an eigenvector of the matrix Λx; μ and w are an eigenvalue and an eigenvector of the matrix Λy. So, from the properties of the tensor product we obtain (4.4). According to Lemma 1, (4.5) holds. The claim follows from (4.4) and (4.5). We note that the system of eigenvectors w_l, l = 1, ..., N − 1, is always complete. So, if the system of eigenvectors v_k, k = 1, ..., N − 1, is complete, then the system of eigenvectors U_kl = w_l ⊗ v_k is complete, too. This follows from the properties of the tensor product. Now, we can prove the statement on the convergence of the iterative method.
Proof. Let U* be the solution of the difference problem (2.1)-(2.4). We denote the error of the iterative method by Z_n = U* − U_n. From (4.1) and Remark 2 it follows that, for any vector norm, the error is controlled by the spectral radius. Let us estimate the factor ϱ(∏_{j=0}^{n} S_{n−j}) and prove that it tends to 0 as n → ∞. This means that the ADI method converges.
Since A1 and A2 commute and have the same system of eigenvectors, we can work with the eigenvalues directly. From Lemma 4 and (4.2), it follows that an eigenvalue of the matrix S_n has the corresponding product form. Taking concrete values of τ_n and λ(A1), the factor is bounded by ϱ1 < 1, where ϱ1 depends only on β1 and λ(A1) but does not depend on n. Analogously, if 1 − τ_n λ(A1) ≤ 0, then the factor is bounded by ϱ2 < 1, where ϱ2 depends only on β2 and λ(A2) but does not depend on n.
The second factor in the formula (4.7) is estimated similarly by ϱ0, which depends on β1, β2 and λ(A2) but does not depend on n. Finally, we get from (4.7) an estimate with ρ, which depends on β1, β2, λ(A1) and λ(A2) but does not depend on n. Because ρ < 1, the error tends to 0 as n → ∞. If the parameters of the ADI method τ1 and τ2 do not depend on n, then Theorem 1 is correct even without the assumption of completeness of the system of eigenvectors. Indeed, when τ1_n = τ2_n ≡ τ > 0, the iterative method is a stationary process (4.8). A necessary and sufficient condition for the convergence of the stationary process (4.8) with any initial data U0 is ϱ(S) < 1.
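The tensor-product structure A1 = I ⊗ Λx, A2 = Λy ⊗ I underlying Lemmas 3 and 4 can be illustrated numerically. In the sketch below, generic random matrices stand in for Λx and Λy (an assumption for the demo), since both the commutation and the eigenvector relation U_kl = w_l ⊗ v_k depend only on the Kronecker structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Lx, Ly = rng.random((n, n)), rng.random((n, n))  # stand-ins for Lambda_x, Lambda_y
I = np.eye(n)
A1, A2 = np.kron(I, Lx), np.kron(Ly, I)

# Lemma 3: A1 and A2 commute, since A1 A2 = Ly (x) Lx = A2 A1
print(np.max(np.abs(A1 @ A2 - A2 @ A1)))  # ~0

# Lemma 4: if Lx v = eta v and Ly w = mu w, then
# (A1 + A2)(w (x) v) = (eta + mu)(w (x) v)
eta_all, V = np.linalg.eig(Lx)
mu_all, W = np.linalg.eig(Ly)
v, eta = V[:, 0], eta_all[0]
w, mu = W[:, 0], mu_all[0]
U = np.kron(w, v)
print(np.max(np.abs((A1 + A2) @ U - (eta + mu) * U)))  # ~0
```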
It is known that when applying the ADI method, optimal parameters τ1_n and τ2_n are usually used. To obtain the optimal set of iteration parameters in the case of commuting operators, we suppose that the eigenvalues of A1 and A2 satisfy the corresponding inequalities. Following [18, Ch. X, §4], the set of ADI parameters due to Jordan is then given explicitly, where m is the number of iterations and we use the corresponding notation. Such an algorithm for the optimal choice of the parameters τ1_n and τ2_n is proposed for the case of symmetric matrices A1 and A2, that is, when the system of eigenvectors of the finite difference scheme is complete.
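The Jordan set from [18] is built from elliptic functions. As a simpler illustration of how a set of m parameters can cover an eigenvalue interval [a, b], here is the classical geometric set going back to Peaceman and Rachford; this is a stand-in for exposition, not the Jordan set used in the paper:

```python
def geometric_adi_parameters(a, b, m):
    """Geometric ADI parameter set for eigenvalues in [a, b]:
    tau_j = b * (a/b)**((2j - 1)/(2m)), j = 1, ..., m.
    The parameters decrease from near b toward a, so each sweep
    efficiently damps a different part of the spectrum."""
    return [b * (a / b) ** ((2 * j - 1) / (2 * m)) for j in range(1, m + 1)]

# four parameters covering the (illustrative) interval [1, 100]
print(geometric_adi_parameters(1.0, 100.0, 4))
```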
Let us briefly discuss what such a choice of parameters means in the case when the system of eigenvectors is incomplete (for problem (2.1)-(2.4)).
From the definition of the ADI method (2.12), it follows that (4.9) holds, where Z_n = U* − U_n and U* is the exact solution of the finite difference problem.
Without going into details, we note that the choice of the optimal parameters τ1_n and τ2_n is based on the solution of a minimax problem, namely, finding the minimum value of the spectral radius of the matrix ∏_{j=0}^{n−1} S_{n−1−j}. Since for a symmetric matrix the spectral radius can be taken as the norm of the matrix, in the case of symmetric matrices A1 and A2 the estimate (4.10) follows from (4.9). However, if the system of eigenvectors of the finite difference problem (2.1)-(2.4) is incomplete, then (4.10) does not follow from (4.9). In this case, we can use the following statement from linear algebra [3].
Let A be an arbitrary square matrix. If ε > 0 is given, then there is a matrix norm ∥A∥* such that ∥A∥* ≤ ϱ(A) + ε.
This means, in particular, that the inequality ϱ(A) + ε < 1 implies ∥A∥* < 1. These arguments, in our opinion, do not constitute a proof of the convergence of the ADI method with optimal parameters. But, at least, they provide strong motivation for the corresponding numerical experiment with optimal parameters in the case when the system of eigenvectors is incomplete.

The higher-order method
Now, we will consider the difference problem approximating the differential problem (1.1)-(1.3) with truncation error O(h^4). So, let us replace the differential problem with the following difference problem. In this case, the one-dimensional eigenvalue problem with nonlocal boundary conditions is (5.5)-(5.7). We rewrite the eigenvalue problem (5.5)-(5.7) in an equivalent matrix form. For this we express the values v0 and vN from the conditions (5.6)-(5.7). Putting these expressions into Equation (5.5), we can rewrite the problem (5.5)-(5.7) as (5.8). Similarly as in Section 2, we have Λx = Λ + C/h², where Λ is defined in (2.10) and C is given accordingly. We note that the two eigenvalue problems (5.5)-(5.7) and (5.8) are equivalent.
Lemma 5. The eigenvalues of the problem (5.5)-(5.7) which satisfy the inequality 0 < η_k < 4/h² can be expressed in the same form, where α_k is a root of any of the corresponding equations. Proof. The proof is analogous to the proof of Lemma 1. Putting the general solution of Equation (5.5) into the conditions (5.6)-(5.7), we get the system of equations (5.9). The system (5.9) is analogous to (3.11); only instead of the multiplier (1/2) tan(αh/2) ≠ 0 there is another multiplier (2 + cos(αh))/sin(αh) ≠ 0. The condition D = 0 in both cases leads to the same equations for α. So, the statement of the lemma follows from here. ⊓⊔
Lemma 6. The corresponding value η, where α is the root of the equation cosh(αh) = 2, is an eigenvalue of the problem (5.5)-(5.7) of multiplicity 2 with two linearly independent corresponding eigenvectors v1 and v2. Proof. In the case η > 0, the general solution of Equation (5.5) is known. After substituting this expression into the conditions (5.6) and (5.7), we get a system of equations. So, D = 0 if cosh(αh) = 2, that is, α = arccosh(2)/h. Then (5.10) follows. Now, we can conclude that there are two linearly independent vectors v1, v2.
Now, we can formulate the alternating direction method for the system (5.1)-(5.4). We rewrite this system in the matrix form (5.11), where A1 = I ⊗ Λx, A2 = Λy ⊗ I and Φ = Φ(F, μa, μb, μc, μd, h). The newly defined matrices A1 and A2 have all the same properties as the matrices defined by the system (2.1)-(2.3). In other words, Lemmas 3 and 4 are true for the matrices of the system of Equations (5.1)-(5.4). Since the matrices A1 and A2 commute, the system (5.11) can be written in another form (5.12), where κ = h²/12. Further, we rewrite the system (5.12) in the equivalent form (5.13) (see [21] for details). Now, we write the Peaceman-Rachford alternating direction method for the system (5.13), obtaining the iteration (5.14). We note that if all eigenvalues of the matrices A1 and A2 are positive, then all eigenvalues of the matrices Ā1 and Ā2 are also positive for sufficiently small values of h. Thus, it follows from Lemmas 5 and 6 that for the method (5.14) the statements proven in Theorem 1 and Remark 3 are valid.

Numerical experiments
In this section, some numerical examples are computed to verify the numerical accuracy and efficiency of the difference schemes presented in this work. Truncation error analysis provides a widely applicable framework for analyzing the accuracy of finite difference schemes. We consider three illustrative examples. The first example deals with the second-order difference scheme. The second and third examples demonstrate empirical verification of the truncation error for the higher-order difference scheme with and without a known exact solution, respectively.
We consider a model problem. We consider uniform grids with different mesh sizes h = 1/2^k, k = 3, ..., 7, and analyze the convergence and the accuracy of the computed solution of the second- and fourth-order difference schemes. Test problems were solved with different values of the parameters ξ1, ξ2. We compute the maximum norm of the error of the numerical solution with respect to the exact solution. We define the ratio p = ε_2h/ε_h, which theoretically must be approximately p ≈ 4 for the second-order difference scheme and p ≈ 16 for the fourth-order difference scheme.
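The ratio p = ε_2h/ε_h translates into a convergence order via the base-2 logarithm: if ε ≈ C h^q, then ε_2h/ε_h ≈ 2^q. A small helper illustrating the computation (the sample error values are made up for illustration):

```python
import math

def observed_order(err_2h, err_h):
    """Empirical convergence order from the errors on two nested grids:
    if err ~ C * h**q, then err_2h / err_h ~ 2**q."""
    return math.log2(err_2h / err_h)

# a ratio of ~4 indicates second order (q ~ 2); ~16 indicates fourth order
print(observed_order(1.6e-3, 4.0e-4))   # -> 2.0
print(observed_order(1.6e-3, 1.0e-4))   # -> 4.0
```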
Example 1. The computational results for the ADI method (2.12) are reported in Table 1. We can clearly observe second-order convergence in the maximum norm for all presented choices of ξ1 and ξ2. From the last two columns of Table 1 it is clear that the number of iterations of the ADI method is quite accurately proportional to the value of log(1/h). This is exactly what we wanted to check when planning the numerical experiment (see the last paragraph of Section 4). Note that ξ1 and ξ2 are chosen so that ξ1 = 1 − ξ2, that is, the difference problem has multiple eigenvalues (see Corollary 1).
Example 2. The outputs for different values of the pairs of parameters (ξ1, ξ2), together with the experimental convergence order for the higher-order difference scheme (5.1)-(5.4), are shown in Table 2. We observe fourth-order convergence in the maximum norm for all choices of ξ1 and ξ2.
In the next example, we check that the convergence rates observed empirically in the experiments coincide with the theoretical value found from the truncation error analysis. Example 3. We solve this problem as a test problem without a known solution.
We use Runge's rule for a practical error estimate for the higher-order method. We performed numerical experiments with the scheme and compared the results with those demonstrated in Table 2. As expected, there is fourth-order convergence in the maximum norm for all choices of ξ1 and ξ2. The results are recorded in Table 3.
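Runge's rule estimates the error without an exact solution: for a method of order q, comparing the solutions on grids h and h/2 at shared nodes gives err(h/2) ≈ |u_h − u_{h/2}|/(2^q − 1). A minimal sketch (the numerical values are illustrative):

```python
def runge_error_estimate(u_h, u_h2, q):
    """Runge's rule: for a scheme of order q, the error of the fine-grid
    value u_{h/2} is approximately |u_h - u_{h/2}| / (2**q - 1).
    For the fourth-order scheme (q = 4) the divisor is 15."""
    return abs(u_h - u_h2) / (2 ** q - 1)

# two nodal values from nested grids of a fourth-order scheme (illustrative)
print(runge_error_estimate(1.2345786, 1.2345771, 4))
```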

Conclusions
We have proven that, for the second-order and the fourth-order difference schemes, the spectrum of the difference problem consists only of positive eigenvalues. The convergence of the ADI method is proven under the additional assumption that the system of eigenvectors of the difference problem is complete. A numerical experiment showed that the ADI method with optimal parameters converges in practice in all studied cases. In addition, it converges as quickly as in the case with Dirichlet conditions.
(3.17) Putting this expression into the conditions (3.2) and (3.3), we find that these conditions are satisfied for all values of c1 and c2 if N, r and s are even. So, it is possible to choose constants c1 and c2 such that two linearly independent eigenvectors corresponding to eigenvalue (3.15) are defined by formula (3.17). In particular, these are the vectors (3.16). ⊓⊔ Corollary 1. It follows from Lemmas 1 and 2 that all N − 1 eigenvalues of the difference eigenvalue problem (3.6) are positive. Depending on ξ1 and ξ2, the system of eigenvectors may or may not be complete.

Table 1 .
Accuracy of the solution and the number of the iterations for the ADI method (2.12).

Table 2 .
Accuracy of the solution and the number of the iterations for the ADI method (5.14).

Table 3 .
Accuracy of the solution and the number of the iterations for the ADI method (5.14) using Runge's rule for the error estimate.