Simple and Efficient Fifth Order Solvers for Systems of Nonlinear Problems

Abstract. In this study, two multi-step iterative techniques of fifth-order convergence are explored for solving nonlinear equations. The techniques are designed with the prime objective of keeping the computational cost as low as possible. To support this objective, the efficiency indices are determined and compared with those of existing techniques of the same order. The outcome of the comparison is remarkable in view of the high computational efficiency of the new methods. Performance and stability are illustrated by executing numerical tests on nonlinear problems of diverse nature. The entire analysis significantly favors the new techniques over their existing counterparts, especially for large-dimensional systems.


Introduction
The mathematical models representing vital aspects of physical phenomena in the fields of science, engineering, etc., are inherently nonlinear in nature. A particular model, so constructed, usually describes a phenomenon by a set of equations involving relationships among a set of variables. Mathematically, a nonlinear model is generally expressed as

F(x) = O,   (1.1)

where O ∈ R^m represents the zero vector, F : Ω ⊆ R^m → R^m is a nonlinear mapping, commonly represented as (f_1(x), f_2(x), . . . , f_m(x))^T, f_i : R^m → R (i = 1, . . . , m) are nonlinear scalar functions, and x = (x_1, . . . , x_m)^T ∈ Ω.
To analyze the behavior of a physical phenomenon under diverse conditions, the fundamental requirement is to obtain the solution of the corresponding nonlinear model. In general, obtaining the analytical or closed-form solution of nonlinear models is not feasible, but in such cases iterative techniques can provide the solution [6,11] in numerical form up to the desired accuracy. The working mechanism of iterative techniques is based on the theory of fixed point iterations. Under some assumptions, the solution x* ∈ Ω of Equation (1.1) can be obtained as a fixed point of some suitable function ψ : R^m → R^m, such that x^(n+1) = ψ(x^(n)), n = 0, 1, 2, . . . , where x^(0) is the initial approximation to the solution. Newton's technique is the most widely used iterative procedure to find the solution of nonlinear models, and it approximates the solution of (1.1) iteratively as

x^(n+1) = ψ(x^(n)) = x^(n) − F'(x^(n))^{-1} F(x^(n)), n = 0, 1, 2, . . . ,   (1.2)

where F'(x) ∈ L(R^m, R^m) is a linear operator, generally represented as the Jacobian matrix (∂f_i/∂x_j)_{m×m}. Under the hypotheses that F(x) is continuous and differentiable in some neighborhood of its simple solution and that the initial approximation is chosen sufficiently close to that solution, Newton's technique shows quadratic convergence, which simply means that the number of significant digits roughly doubles with each iteration. Numerous iterative techniques have been developed (see, for example, [3,4,7,8,10,13,14,15,16,17], and references therein) to improve the convergence rate of Newton's technique by introducing additional functional or Jacobian evaluations. Evidently, Newton's technique utilizes one function evaluation (F), one Jacobian matrix (F'), and one matrix inversion (F'^{-1}) per iteration.
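To make the iteration (1.2) concrete, the following minimal sketch implements Newton's technique for a small system. The 2×2 test system (x_1² + x_2² = 4, x_1x_2 = 1), the helper names, and the tolerances are our own illustrative choices, not taken from the paper; each step solves the linear system F'(x)Δx = −F(x) rather than forming the inverse explicitly.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting: solves A x = b.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton(F, J, x, tol=1e-12, max_iter=50):
    # x^(n+1) = x^(n) - F'(x^(n))^{-1} F(x^(n)), realized as one
    # linear solve per iteration instead of an explicit matrix inverse.
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        dx = solve_linear(J(x), [-v for v in Fx])
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Illustrative 2x2 system: x1^2 + x2^2 = 4, x1 * x2 = 1.
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
J = lambda v: [[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]]
root = newton(F, J, [2.0, 0.5])
```

Starting from (2, 0.5), the iteration settles on the root near (1.932, 0.518) in a handful of steps, reflecting the quadratic convergence described above.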
At the cost of one additional function evaluation, the third order Potra–Pták [7] technique is the simplest improvement over Newton's technique:

x^(n+1) = x^(n) − F'(x^(n))^{-1} [F(x^(n)) + F(y^(n))],   (1.3)

where y^(n) is the Newton iterate given by (1.2). In addition, the modified techniques developed in [8,13] require two evaluations each of F, F', and F'^{-1}; the techniques in [3,10] require two evaluations each of F and F', and three of F'^{-1}; the techniques in [4,15] involve two evaluations of F, four of F', and three of F'^{-1}; the technique in [14] requires three evaluations of F and two each of F' and F'^{-1}; whereas the technique in [16] requires three evaluations of F, two of F', and one of F'^{-1}. Apart from these, some hybrid algorithms have also been developed that merge iterative techniques with optimization algorithms to generate new algorithms with improved convergence rates (see, for example, [9], and references therein).
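The Potra–Pták step can likewise be sketched in a few lines. The 2×2 test system, helper names, and tolerances below are our own illustrative choices, not the paper's code; note that both substeps reuse the Jacobian F'(x), so a single factorization per iteration suffices in a tuned implementation.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting: solves A x = b.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def potra_ptak(F, J, x, tol=1e-12, max_iter=50):
    # y = x - F'(x)^{-1} F(x);  x_new = x - F'(x)^{-1} [F(x) + F(y)].
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        A = J(x)                      # one Jacobian per iteration
        dx = solve_linear(A, [-v for v in Fx])
        y = [xi + di for xi, di in zip(x, dx)]
        Fy = F(y)
        dz = solve_linear(A, [-(p + q) for p, q in zip(Fx, Fy)])
        x = [xi + di for xi, di in zip(x, dz)]
    return x

# Illustrative 2x2 system: x1^2 + x2^2 = 4, x1 * x2 = 1.
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
J = lambda v: [[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]]
root = potra_ptak(F, J, [2.0, 0.5])
```

The second solve reuses the factorization data of the first, which is exactly why the extra function evaluation raises the order to three at almost no additional linear-algebra cost.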
Undoubtedly, these additional evaluations significantly affect the cost of computation in terms of mathematical operations. Therefore, the development and utilization of computationally efficient techniques have gained much importance in recent times. The concept of the computational efficiency index has been established (see [6,11]) to analyze and compare the efficiencies of iterative techniques, and the necessary parameters for a rigorous investigation of this concept were introduced in [8]. Keeping in mind the challenge of optimizing the computational cost while increasing the order of convergence, we present here two simple yet highly efficient iterative techniques with fifth order of convergence.
We summarize here the contents of the rest of the paper. Section 2 includes the development and convergence analysis of the new iterative techniques. The computational efficiency of the new techniques is determined in Section 3, where it is also analyzed and compared with that of the existing fifth order techniques. Numerical testing is executed in Section 4 to certify the theoretical deductions. Concluding remarks are given in Section 5.

Development of methods
In what follows, we propose a three-step iterative scheme involving some undetermined parameters, whose values are to be chosen optimally so as to maximize the order of convergence. We shall show that the proposed scheme converges with order five. The intention here is to develop a technique which accelerates the convergence rate of the Potra–Pták method (1.3) without requiring any additional inverse operators, so that the computational cost remains as small as possible. In view of this, it is judicious to consider an iterative technique of the type, where a, b, c, and d are parameters to be determined in the sequel. In order to analyze the convergence rate, we first state a preliminary result (see [5]) as a lemma, followed by the main theorem proving the fifth order of convergence of the aforementioned technique.
Lemma 1. Let F : Ω ⊆ R^m → R^m be a p-times Fréchet differentiable function in an open convex set Ω ⊆ R^m. Then, for any x, t ∈ Ω, the following result holds:

F(x + t) = F(x) + F'(x)t + (1/2!) F''(x)t^2 + · · · + (1/(p−1)!) F^(p−1)(x)t^(p−1) + R_p,

where t^i = (t, . . . , t) (i times), F^(i)(x) ∈ L(R^m × . . . × R^m, R^m) (i times) for each i = 1, 2, . . ., and ∥R_p∥ ≤ (1/p!) sup_{0≤s≤1} ∥F^(p)(x + st)∥ ∥t∥^p.

As a consequence of the above lemma, the Taylor expansion of F(x), in the neighborhood of its zero x*, can be written as

F(x) = F'(x*)[e + A_2 e^2 + · · · + A_{p−1} e^{p−1}] + R_p, with e = x − x* and A_q = (1/q!) F'(x*)^{-1} F^(q)(x*).

Theorem 1. Suppose that a nonlinear mapping F : Ω ⊆ R^m → R^m is differentiable a sufficient number of times in some neighborhood of its simple zero x*, contained in an open convex set Ω. Further, assume that F'(x) is nonsingular and continuous in that neighborhood, and that the initial estimate x^(0) is sufficiently close to x*. Then, the sequence of iterates generated by the iterative scheme (2.1) converges to x* with fifth order of convergence, provided a = 1, c = 2, d = −1, and either b = 1 or b = −5.
Proof. Let e^(n) = x^(n) − x* be the local error generated at the n-th iteration of (2.1). Using the fact that F(x*) = O, as a consequence of Lemma 1 the Taylor expansions of F(x^(n)) and F'(x^(n)) about x* can be established as

F(x^(n)) = F'(x*)[e^(n) + A_2 (e^(n))^2 + A_3 (e^(n))^3 + A_4 (e^(n))^4 + A_5 (e^(n))^5] + O((e^(n))^6),   (2.2)

F'(x^(n)) = F'(x*)[I + 2A_2 e^(n) + 3A_3 (e^(n))^2 + 4A_4 (e^(n))^3 + 5A_5 (e^(n))^4] + O((e^(n))^5),   (2.3)

where A_q = (1/q!) F'(x*)^{-1} F^(q)(x*) and (e^(n))^i = (e^(n), . . . , e^(n)) (i times). Let e_y^(n) = y^(n) − x* be the local error at the first step of (2.1). Then, using the expressions (2.2)–(2.3) in the first step of (2.1) and simplifying, we obtain

e_y^(n) = C_1 e^(n) + C_2 (e^(n))^2 + C_3 (e^(n))^3 + C_4 (e^(n))^4 + C_5 (e^(n))^5 + O((e^(n))^6),   (2.4)

where the coefficients C_i are obtained in terms of the scheme parameters and the constants A_q. Using Equation (2.4), the Taylor developments of F(y^(n)) and F'(y^(n)) about x* are given by (2.5) and (2.6), respectively. Denoting e_z^(n) = z^(n) − x* as the local error at the second step of (2.1), and using Equations (2.3) and (2.5), the second step of (2.1) yields the error expression (2.7); the expression of M_5, being lengthy, is not shown explicitly. The Taylor expansion of F(z^(n)), developed using Equation (2.7), gives (2.8), where the explicit forms of P_4 and P_5 are likewise avoided owing to their lengthy expressions.
Consequently, the error equation at the (n + 1)-th iteration is obtained by substituting the expressions (2.3), (2.6), and (2.8) into the final step of (2.1), which gives

e^(n+1) = Q_1 e^(n) + Q_2 (e^(n))^2 + Q_3 (e^(n))^3 + Q_4 (e^(n))^4 + Q_5 (e^(n))^5 + O((e^(n))^6),   (2.9)

where the expressions of Q_4 and Q_5 are not shown explicitly here.
Finally, the values of the parameters should be selected so that the scheme (2.1) attains the maximum possible order of convergence. Accordingly, the coefficients Q_1, Q_2, Q_3, and Q_4 in Equation (2.9) vanish for two sets of parametric values: for a = b = 1, c = 2, and d = −1, the error equation reduces to e^(n+1) = Q_5 (e^(n))^5 + O((e^(n))^6), whereas for a = 1, b = −5, c = 2, and d = −1, the error equation takes the same fifth order form with a different coefficient Q_5. Hence, the fifth order of convergence is proved for the two sets of parametric values. ⊓⊔ The proposed fifth order iterative techniques are finally presented below.
Technique-1: For a = b = 1, c = 2, and d = −1, the scheme (2.1) is expressed as (2.10); Technique-2: for a = 1, b = −5, c = 2, and d = −1, it is expressed as (2.11). Clearly, both techniques utilize three function evaluations, two Jacobian matrices, and one matrix inversion per iteration. The techniques (2.10) and (2.11) are denoted by T_1 and T_2, respectively, for further reference.
Remark 1. It can be observed that the general parametric family of three-step iterative scheme developed by Zhanlav et al. in [16] includes (2.10) as a particular case.

Computational complexity
The term computational complexity refers to the analysis of how much computational resource an algorithm utilizes during its course of action. Thus, in developing an algorithm, achieving a higher rate of convergence should not be the only target; efficiency must also be accounted as a crucial factor. In practice, solving systems of nonlinear equations involves a large number of mathematical operations, and therefore an efficient algorithm is the ideal choice for this process. The objective here is to thoroughly investigate the developed iterative algorithms on the subject of computational efficiency. For that purpose, a number of definitions are available in the literature. Defined in any way, efficiency always shows a positive relation with the convergence order (r) and a negative relation with the number of mathematical operations per iteration, i.e., the cost of computation (C). Ostrowski [6] proposed the measures of efficiency E_O1 = log r/C and E_O2 = r^{1/C}, whereas Traub [11] considered it straightforwardly as E_T = r/C.
In order to utilize the concept of efficiency in a rigorous way, and further to compare the efficiencies of iterative techniques, we describe here a convenient approach for the estimation of computational efficiency. To locate the solution x* of a system of nonlinear equations, the initial approximation x^(0) is selected in some neighborhood of the solution. The stopping criterion for the iterations is generally prescribed as ∥x^(n) − x*∥ < δ = 10^{−d}, where n is the iteration index, δ is the desired precision for the approximate solution x^(n), and d is the number of significant decimal digits of that approximation. Conveniently, we assume that ∥x^(0) − x*∥ ≈ 10^{−1}. Then, the number of steps required by an iterative procedure to reach the prescribed accuracy can be estimated analytically from the approximation 10^{−d} ≈ 10^{−r^n}, i.e., n ≈ log d / log r, where r is the convergence order. Since, as already discussed, the computational efficiency is reciprocally related to the total computational cost nC of the completed iterative process comprising n iterative steps, the computational efficiency, or more conventionally the efficiency index, of an iterative scheme can be estimated as

E = r^{1/C}.   (3.1)

If the evaluation costs are expressed in units of products, then the estimate of the computational cost (see [8]) per iteration for an iterative technique can be obtained by

C(m, ν_0, ν_1, l) = ν_0 N_0(m) + ν_1 N_1(m) + N(m, l),   (3.2)

where N_0(m) and N_1(m) represent the numbers of evaluations of scalar functions in the computation of F and F', respectively, and N(m, l) corresponds to the number of product or quotient operations per iteration. In order to estimate C(m, ν_0, ν_1, l) in units of products, it is essential to evaluate the ratios ν_0 > 0 and ν_1 > 0, which express the relation between products and evaluations, and a ratio l > 1, relating products and quotients.
For the suitable application of definition (3.1) along with (3.2), it is necessary to assess all the relevant elements which constitute the total numerical operational cost of the entire process. In particular, the computation of the function F involves the evaluation of m scalar functions in any iteration, while m^2 scalar functions are required to compute the derivative F'. Additionally, the technique of LU decomposition is employed to compute an inverse linear operator, followed by the resolution of two triangular linear systems to finally work out F'^{-1}F. Note that LU decomposition involves m(m − 1)(2m − 1)/6 products and m(m − 1)/2 quotients, whereas m(m − 1) products and m quotients are required for the resolution of the two triangular systems. Moreover, m products for each scalar-vector multiplication and m^2 products for each matrix-vector multiplication must be taken into account.
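Assuming the cost model of (3.2) with N_0(m) = m and N_1(m) = m^2, the bookkeeping above can be tallied mechanically; the sketch below (our own helper functions, not the paper's code) does so for Newton's technique and converts the result into the efficiency index r^(1/C).

```python
import math

def newton_cost(m, nu0, nu1, ell):
    """Per-iteration cost of Newton's technique in units of products:
    one F (m scalar functions), one F' (m^2 scalar functions),
    one LU decomposition, and two triangular solves for F'^{-1} F."""
    evals = nu0 * m + nu1 * m * m
    lu = m * (m - 1) * (2 * m - 1) // 6 + ell * (m * (m - 1) // 2)
    tri = m * (m - 1) + ell * m
    return evals + lu + tri

def efficiency_index(r, cost):
    # E = r^(1/C): grows with the order r, shrinks with the cost C.
    return r ** (1.0 / cost)

def iterations_needed(r, d):
    # n ~ log d / log r iterations to gain ~d significant digits,
    # assuming an initial error of about 10^(-1).
    return math.ceil(math.log(d) / math.log(r))
```

For instance, with m = 5, ν_0 = ν_1 = 1, and l = 3, the Newton cost comes to 30 + 60 + 35 = 125 product units per iteration.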

Comparison of efficiencies
To compare the efficiencies of two iterative techniques, say T_i versus T_j, consider the ratio

R_{i,j} = log E_i / log E_j = (C_j log r_i) / (C_i log r_j),

where r_i and r_j are the convergence orders, and C_i and C_j the computational costs, of the techniques T_i and T_j, respectively. Apparently, the technique T_i is computationally more efficient than T_j only if R_{i,j} > 1, which we symbolize mathematically as T_i ⋗ T_j. The proposed techniques T_1 and T_2 shall be analyzed and compared, both analytically and visually, with the existing techniques T_i, i = 3, 4, . . ., already expressed in this section. The efficiency indices are compared analytically by resolving the inequality R_{i,j} > 1, but we omit the details of verifying the results, as they are fairly straightforward. The results so obtained are projected geometrically by presenting the boundary lines R_{i,j} = 1 in the (ν_0, ν_1)-plane, corresponding to the particular cases m = 5, 10, 25, and 50, taking l = 3 in each case. Observe that each boundary line divides the region into two parts, with T_i ⋗ T_j on one side and T_j ⋗ T_i on the other.
It can be easily verified that the inequality R_{1,2} > 1 holds for all m ∈ N, and consequently T_1 ⋗ T_2 for all m ∈ N. T_1 versus T_3 case: In this case, the ratio is .
The resolution of the inequality R_{1,6} > 1 results in ν_0 < (1/3)(2m^2 − 6m + 4 + 3l(m − 2)), and this proves (v). The boundary lines for comparison are depicted in Figure 4, where T_1 ⋗ T_6 holds in the lower region of the line for each particular case. T_1 versus T_7 case: In this case, the ratio is .
T_1 versus T_8 case: In this case, the ratio is .
T_1 versus T_9 case: In this case, the ratio is .
The resolution of the inequality R_{1,9} > 1 results in ν_0 < 2mν_1 + (1/3)(2m^2 − 6m + 16 + 3l(m − 2)), and this proves (viii). The boundary lines for the comparison are shown in Figure 6, with T_1 ⋗ T_9 on the lower (right) side of the line for each case. T_1 versus T_10 case: It is clear that for this case the ratio is R_{1,10} = 1, which immediately implies that E_1 = E_10 for all m ∈ N. T_2 versus T_3 case: In this case, the ratio is .
T_2 versus T_10 case: In this case, the ratio is

R_{2,10} = [3mν_0 + 2m^2ν_1 + (m/6)(2m^2 + 27m − 17 + 3l(7 + m))] / [3mν_0 + 2m^2ν_1 + (m/6)(2m^2 + 27m − 11 + 3l(7 + m))].

Figure 11. Boundary lines for comparison of T_2 and T_7.

It can be clearly observed that the inequality R_{2,10} < 1 holds for all m ∈ N, and consequently T_10 ⋗ T_2 for all m ∈ N. ⊓⊔ It can be deduced from the above analysis that both of the proposed techniques display better efficiency than the existing techniques as m increases. We conclude this section with the remark that the larger the system, the more the proposed techniques, in general, outperform their existing counterparts with respect to computational complexity.
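As a quick numerical check of the last ratio (our own sketch in exact rational arithmetic; the sample values of ν_0, ν_1, and l are illustrative), note that the numerator falls short of the denominator by exactly m, so R_{2,10} stays below 1 for every m while tending to 1 as the system grows:

```python
from fractions import Fraction

def ratio_2_10(m, nu0, nu1, ell):
    # The ratio R_{2,10} stated above; R < 1 means T10 beats T2 in efficiency.
    num = 3*m*nu0 + 2*m*m*nu1 + Fraction(m, 6) * (2*m*m + 27*m - 17 + 3*ell*(7 + m))
    den = 3*m*nu0 + 2*m*m*nu1 + Fraction(m, 6) * (2*m*m + 27*m - 11 + 3*ell*(7 + m))
    return num / den

# num - den = -m exactly, so the ratio is below 1 for all m and
# approaches 1 (cost parity) as the system size grows.
samples = {m: float(ratio_2_10(m, 1, 1, 3)) for m in (2, 5, 10, 25, 50)}
```

Exact rationals avoid any floating-point doubt when the two costs are this close together.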

Numerical experimentation
The numerical experimentation shall be carried out to assess the performance and stability of the proposed techniques by executing their algorithms on a digital platform. The outcome of the testing is to be compared with the corresponding outcome of the existing techniques to arrive at valid and logical conclusions. Some nonlinear problems, emerging from different practical situations, have been selected for this purpose. The performance of an iterative technique is generally evaluated on the basis of two factors: (i) the number of iterations required to converge, and (ii) the CPU time elapsed during the entire course of action. Both of these factors tend to vary with the convergence behavior of the technique as well as with the proximity of the initial approximation. Let us note that the elapsed CPU time also varies with the characteristics of the digital platform. In our case, the numerical experimentation is carried out using the multi-precision arithmetic software Mathematica [12], installed on a machine with an Intel(R) Core(TM) i5-9300H processor and the Windows 10 operating system.
Numerical performance of an iterative technique is considerably governed by the number of evaluations and mathematical operations involved per iteration. This indicates a correlation between performance and efficiency. To build this relation, the evaluation cost of each mathematical operation and function needs to be expressed in terms of product units. As discussed in Section 3, numerical estimates of the parameters ν_0, ν_1, and l are required for that purpose; Table 1 provides these estimates. To demonstrate the performance of the proposed techniques, and to further compare them with the existing ones, their algorithms are executed using the software Mathematica under the same framework. The following stopping criterion is used to terminate the iterations: Further, the approximated computational order of convergence (ACOC) is computed to authenticate the theoretically established convergence order, and is given by the expression (see [2])

ACOC ≈ ln(∥x^(n+1) − x^(n)∥ / ∥x^(n) − x^(n−1)∥) / ln(∥x^(n) − x^(n−1)∥ / ∥x^(n−1) − x^(n−2)∥).

Considering the following nonlinear problems for the performance analysis, we display the outcomes in respect of: (i) number of iterations (n), (ii) ACOC, (iii) computational cost (C_i), (iv) efficiency index (E_i), and (v) elapsed CPU time (in seconds). Note that, to illustrate the efficiency index of the techniques, we conveniently choose D = 10^{−5} for each problem. Problem 1. We begin with a three-dimensional nonlinear problem. To evaluate the computational cost and efficiency index for this problem, the parameters used in Equation (3.2) are estimated as (m, ν_0, ν_1, l) = (3, 2.33, 0.67, 2.81). Numerical results for the performance of the methods are displayed in Table 2.
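The ACOC can be evaluated with a short helper (a sketch of our own, using the Euclidean norm; the sample iterates below mimic a quadratically convergent scalar sequence, so the estimate should come out near 2):

```python
import math

def acoc(iterates):
    """Approximated computational order of convergence (ACOC) from the
    last four iterates x^(n-2), x^(n-1), x^(n), x^(n+1)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    x0, x1, x2, x3 = iterates[-4:]
    return (math.log(dist(x3, x2) / dist(x2, x1))
            / math.log(dist(x2, x1) / dist(x1, x0)))

# Errors 1e-1, 1e-2, 1e-4, 1e-8 toward the root 0: quadratic decay.
order = acoc([[0.1], [0.01], [1e-4], [1e-8]])
```

In practice the estimate is taken from the last few iterates of the run, since the asymptotic order only emerges close to the solution.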
In particular, for a = 2 and setting k = 16, the given system reduces to a system of 15 nonlinear equations. The initial estimate to the given solution is chosen as (−3/2, . . . , −3/2)^T ∈ R^15. The approximate numerical solution, so obtained, is compared with the exact solution of the given problem in Figure 13.
In this problem, the estimated values of the parameters used in Equation (3.2) are given by (m, ν_0, ν_1, l) = (15, 2, 0.067, 2.81). Table 3 exhibits the comparison of the performance of the techniques. To transform Equation (4.2) into a finite-dimensional problem, the given interval [0, 1] is partitioned into subintervals of equal length h = 1/k. Denoting u(t_i) = u_i for each i = 1, 2, . . . , k, and approximating the integral in Equation (4.2) by the trapezoidal rule of integration, we obtain the system of k nonlinear equations (4.3), where s_i = t_i = i/k for each i. Setting k = 25 in particular, we take the initial approximation (−1, . . . , −1)^T ∈ R^25 for the solution of the system (4.3). The numerical results so obtained are displayed in Table 4, and the exact solution of the given integral equation is compared with the obtained numerical solution in Figure 14. For the next problem, in which m = 200, we set the initial estimate as (3/2, . . . , 3/2)^T to obtain the solution x* = (0.0050 . . . , . . . , 0.0050 . . .)^T.
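Since Equation (4.2) itself is not reproduced in this excerpt, the sketch below applies the same trapezoidal discretization to an assumed stand-in Hammerstein equation, u(t) = 1 + (t/2)∫₀¹ s u(s)² ds (a kernel of our own choosing, used only to illustrate the technique), and solves the resulting nonlinear system with Newton's method:

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting: solves A x = b.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k_ in range(n):
        p = max(range(k_, n), key=lambda i: abs(M[i][k_]))
        M[k_], M[p] = M[p], M[k_]
        for i in range(k_ + 1, n):
            f = M[i][k_] / M[k_][k_]
            for j in range(k_, n + 1):
                M[i][j] -= f * M[k_][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

k = 8                                   # number of subintervals
h = 1.0 / k
t = [i * h for i in range(k + 1)]       # grid nodes t_i = i * h
w = [h] * (k + 1)
w[0] = w[k] = h / 2.0                   # trapezoidal weights

def F(u):
    # Residual of u(t_i) = 1 + (t_i / 2) * integral_0^1 s u(s)^2 ds,
    # with the integral replaced by the trapezoidal rule.
    quad = sum(w[j] * t[j] * u[j] ** 2 for j in range(k + 1))
    return [u[i] - 1.0 - 0.5 * t[i] * quad for i in range(k + 1)]

def J(u):
    # Jacobian: dF_i/du_j = delta_ij - t_i * w_j * t_j * u_j.
    return [[(1.0 if i == j else 0.0) - t[i] * w[j] * t[j] * u[j]
             for j in range(k + 1)] for i in range(k + 1)]

u = [1.0] * (k + 1)                     # initial approximation
for _ in range(10):                     # plain Newton iterations
    du = solve_linear(J(u), [-v for v in F(u)])
    u = [ui + di for ui, di in zip(u, du)]
```

For this stand-in kernel the continuous solution is linear, u(t) = 1 + ct with c = (16 − √184)/6 ≈ 0.406, and the discrete solution reproduces it to within the O(h²) accuracy of the trapezoidal rule.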
It can be inferred from the performance outcomes displayed in Tables 2-6 that the proposed fifth order techniques are computationally more efficient than their existing counterparts, as both exhibit superiority in terms of elapsed CPU time as well as efficiency index. The computation of the ACOC further authenticates the theoretically proven fifth order of convergence. Similar numerical tests, conducted for a variety of problems, largely confirm these conclusions.

Conclusions
Two multi-step iterative techniques for solving systems of nonlinear equations have been proposed in the present study. The techniques are found to possess fifth order of convergence under some prescribed assumptions.
Given that only a single matrix inversion is required per iteration, both of the developed techniques are highly economical from the viewpoint of computational complexity, particularly for large-scale systems. Iterative techniques with these characteristics are rarely found in the literature. The performance and stability of the proposed techniques have been assessed by numerical experimentation on selected nonlinear problems arising in different practical situations. The outcomes of the testing show that the proposed techniques dominate the majority of existing ones when examined in terms of computational cost, efficiency index, convergence behavior, and elapsed CPU time.