Intelligent Automation & Soft Computing, Tech Science Press, USA. DOI: 10.32604/iasc.2021.015285

Article: Optimal Eighth-Order Solver for Nonlinear Equations with Applications in Chemical Engineering

Obadah Said Solaiman and Ishak Hashim, Department of Mathematical Sciences, Universiti Kebangsaan Malaysia, Bangi Selangor, 43600, Malaysia. *Corresponding Author: Ishak Hashim. Email: ishak_h@ukm.edu.my

Vol. 27, no. 2, pp. 379-390. Received 13 November 2020; accepted 1 December 2020; published 15 January 2021.

© 2021 Solaiman and Hashim. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new iterative technique for nonlinear equations is proposed in this work. The new scheme has three steps: the first two are based on the sixth-order modified Halley's method presented by the authors, and the third is a Newton step with suitable approximations for the first derivatives that appear in the scheme. The eighth-order convergence of the new method is proved via Mathematica code. Every iteration of the presented scheme needs the evaluation of three functions and one first derivative. Therefore, the scheme is optimal in the sense of the Kung-Traub conjecture. Several nonlinear test problems are considered to compare the performance of the proposed method against other optimal methods of the same order. As an application, we apply the new scheme to some nonlinear problems from the field of chemical engineering, such as a chemical equilibrium problem, conversion in a chemical reactor, the azeotropic point of a binary solution, and the volume from the van der Waals equation. Comparisons and examples show that the presented method is efficient and comparable to existing techniques of the same order.

Keywords: nonlinear equations; root-finding methods; iterative methods; Halley's method; optimal order of convergence
Introduction

Searching for a solution of g(x) = 0 when g(x) is nonlinear is highly significant in mathematics. Newton's iterative technique for solving such equations is defined as

z_{n+1} = z_n − g(z_n)/g′(z_n). (1)

It was shown by Traub  that the scheme given by (1) has second-order convergence. Many researchers have improved Newton's method to attain better results and to increase the convergence order; for instance, see  and the references therein. Petković  presented a general class of multipoint root-finding methods of arbitrary order 2^n. Because of the huge number of iterative techniques that appear in the literature, Petković et al.  presented a review of the most efficient iterative methods and developed techniques in a general sense. Also, Cordero et al.  presented a general survey on optimal iterative schemes and how to design optimal methods of different orders.
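As a concrete illustration (ours, not part of the paper), the scheme given by (1) takes only a few lines of Python; the function name, tolerance, and test equation z^2 − 2 = 0 are illustrative choices:

```python
def newton(g, dg, z0, tol=1e-12, max_iter=50):
    """Newton's method: z_{n+1} = z_n - g(z_n)/g'(z_n)."""
    z = z0
    for _ in range(max_iter):
        step = g(z) / dg(z)
        z -= step
        if abs(step) < tol:          # stop once successive iterates agree
            break
    return z

# Solve z^2 - 2 = 0 starting from z0 = 1; the root is sqrt(2).
print(newton(lambda z: z * z - 2.0, lambda z: 2.0 * z, 1.0))
```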

One of the most famous improvements of Newton's scheme is the third-order technique given by Halley :

z_{n+1} = z_n − 2g(z_n)g′(z_n) / (2(g′(z_n))^2 − g(z_n)g″(z_n)). (2)

Halley's method has been studied widely and improved in different ways. For example, a two-step Halley scheme with sixth-order convergence was implemented by Noor et al.  using a predictor-corrector technique. But finding the second derivative is not always an easy task. For this reason, Noor et al.  improved the previous technique with the help of finite differences and implemented a new second-derivative-free scheme of order five. Very recently, Said Solaiman et al.  established two sixth-order modifications of Halley's method, one of them free of the second derivative.

One of the most common ways to compare the efficiency of iterative methods is the efficiency index, defined as q^{1/r}, where q is the convergence order of the iterative scheme and r is the number of functional evaluations needed at each iteration. Kung and Traub  conjectured that an iterative scheme requiring r functional evaluations per iteration is optimal if its order of convergence equals 2^{r−1}. Many authors have constructed optimal iterative methods of different orders. The default way of constructing an optimal method is the composition technique, together with interpolations and approximations that minimize the number of functional evaluations. Different optimal fourth-order iterative methods have been constructed; see, for example, . Optimal eighth-order methods have been presented by many authors; see [2,17,18-22]. A comparison using the dynamics of different families of optimal eighth-order methods was proposed by Chun et al. .
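To make these quantities concrete, the following sketch (ours) tabulates the efficiency index q^{1/r} and the Kung-Traub optimal order 2^{r−1}:

```python
def efficiency_index(q, r):
    """Efficiency index q^(1/r): order q with r functional evaluations per iteration."""
    return q ** (1.0 / r)

def optimal_order(r):
    """Kung-Traub conjecture: an r-evaluation scheme is optimal if its order is 2^(r-1)."""
    return 2 ** (r - 1)

# Newton: order 2 with 2 evaluations; an optimal 4-evaluation scheme has order 2^3 = 8.
print(efficiency_index(2, 2), efficiency_index(optimal_order(4), 4))  # ≈ 1.414 and ≈ 1.682
```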

We propose in this work a new optimal eighth-order iterative technique for nonlinear equations. The new method is a modification of the modified Halley method (MH2) introduced by Said Solaiman et al. . We use the composition technique together with Hermite interpolation for the first derivative to reach eighth-order convergence with optimality, which can be considered the major motivation of this research. The paper is organized as follows. In Section 2, the new scheme is derived. In Section 3, the order of convergence of the new scheme is established. In Section 4, four chemical engineering problems and six additional nonlinear examples are used to demonstrate the efficiency of the proposed scheme, with tables comparing our optimal method against other techniques of the same order. Lastly, in Section 5 the conclusion is given.

The New Method

Let g(x) = 0 be an equation such that g(x) is a nonlinear, sufficiently differentiable function defined on some open interval A. Let α ∈ A be a simple root of g(x), and consider x_0 an initial guess sufficiently close to α. Said Solaiman and Hashim  obtained the following iterative scheme using the Taylor expansion of g(x) together with Newton's and Halley's methods.

Algorithm 1. Let x_0 be an initial guess of the solution of g(x) = 0. Then we can approximate x_{n+1} by the iterative method defined by:
(3) y_n = x_n − g(x_n)/g′(x_n),
x_{n+1} = y_n − g(y_n)/g′(y_n) − 2(g(y_n))^2 g′(y_n) g″(y_n) / [4(g′(y_n))^4 − 4g(y_n)(g′(y_n))^2 g″(y_n) + (g(y_n))^2 (g″(y_n))^2].

Said Solaiman et al.  named Algorithm 1 MH1 and proved that it has sixth-order convergence. Each iteration of MH1 needs two function evaluations, two first-derivative evaluations, and one second-derivative evaluation. So, the efficiency index of MH1 is 6^{1/5} ≈ 1.431, which is not as good as that of Halley's method.

To improve the efficiency index of Algorithm 1, Hermite's approximation of the second derivative is used to produce a second-derivative-free method given by:

Algorithm 2. Let x_0 be an initial guess of the solution of g(x) = 0. Then we can approximate x_{n+1} by the iterative method defined by:
(4) y_n = x_n − g(x_n)/g′(x_n),
x_{n+1} = y_n − g(y_n)/g′(y_n) − 2(g(y_n))^2 g′(y_n) R(x_n, y_n) / [4(g′(y_n))^4 − 4g(y_n)(g′(y_n))^2 R(x_n, y_n) + (g(y_n))^2 (R(x_n, y_n))^2],

where the second derivative g″(y_n) is approximated by

R(x_n, y_n) = [3(g(y_n) − g(x_n))/(y_n − x_n) − 2g′(y_n) − g′(x_n)] · 2/(x_n − y_n). (5)

Algorithm 2 is called MH2. Said Solaiman et al.  proved that it is of order six. MH2 needs at each iteration the computation of two functions and two first derivatives only. So, MH2 has efficiency index 6^{1/4} ≈ 1.565, which is better than the 6^{1/5} ≈ 1.431 of MH1 and the 3^{1/3} ≈ 1.442 of Halley's method.
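A direct double-precision transcription of Algorithm 2 might look as follows (our sketch; the guard against coincident iterates and the test equation x^3 − 10 = 0 are our additions):

```python
def mh2(g, dg, x, iters=4):
    """Sketch of Algorithm 2 (MH2): two g- and two g'-evaluations per iteration."""
    for _ in range(iters):
        y = x - g(x) / dg(x)                       # Newton predictor
        if y == x:                                  # already converged to machine precision
            return x
        # Hermite approximation R(x, y) of g''(y)
        R = (3 * (g(y) - g(x)) / (y - x) - 2 * dg(y) - dg(x)) * 2 / (x - y)
        gy, dgy = g(y), dg(y)
        x = (y - gy / dgy
             - 2 * gy**2 * dgy * R
               / (4 * dgy**4 - 4 * gy * dgy**2 * R + gy**2 * R**2))
    return x

print(mh2(lambda t: t**3 - 10, lambda t: 3 * t**2, 2.0))  # ≈ cube root of 10
```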

In order to reach optimality, we reduce the number of functional evaluations per iteration by using divided differences, Hermite interpolation, and the composition of Algorithm 2 with Newton's method. Using Algorithm 2 as a predictor and Newton's technique as a corrector, one obtains the following algorithm:

Algorithm 3. Let x_0 be an initial guess of the solution of g(x) = 0. Then we can approximate x_{n+1} by the iterative method defined by:
(6) y_n = x_n − g(x_n)/g′(x_n),
w_n = y_n − g(y_n)/g′(y_n) − 2(g(y_n))^2 g′(y_n) R(x_n, y_n) / [4(g′(y_n))^4 − 4g(y_n)(g′(y_n))^2 R(x_n, y_n) + (g(y_n))^2 (R(x_n, y_n))^2],
x_{n+1} = w_n − g(w_n)/g′(w_n).

Algorithm 3 has order of convergence twelve, with error term e_{n+1} = c_2^5 (c_4 − c_2 c_3)^2 e_n^{12} + O(e_n^{13}). At each iteration, Algorithm 3 requires three function evaluations and three first-derivative evaluations. Our goal is to rewrite g′(y_n) and g′(w_n) as combinations of already-evaluated quantities.

Using second-order polynomial interpolation for g′(y_n), one simply obtains

g′(y_n) ≈ g[y_n, x_n] + (y_n − x_n) g[y_n, x_n, x_n], (7)

where g[y_n, x_n] = (g(y_n) − g(x_n))/(y_n − x_n) and g[y_n, x_n, x_n] = (g[y_n, x_n] − g′(x_n))/(y_n − x_n). Simplifying (7) and writing q(y_n) for the resulting approximation of g′(y_n) gives

q(y_n) = 2g[y_n, x_n] − g′(x_n). (8)
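As a quick sanity check (ours), the approximation q(y_n) = 2g[y_n, x_n] − g′(x_n) matches g′(y_n) to O((y_n − x_n)^2), noticeably better than the plain divided difference:

```python
import math

g, dg = math.sin, math.cos
x, y = 1.0, 1.001                             # y plays the role of the Newton predictor
q = 2 * (g(y) - g(x)) / (y - x) - dg(x)       # q(y) = 2 g[y, x] - g'(x)
print(abs(q - dg(y)))                          # error is O((y - x)^2)
print(abs((g(y) - g(x)) / (y - x) - dg(y)))    # plain divided difference: O(y - x) error
```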

Now, to approximate g′(w_n), we use the technique proposed earlier by Petković  and Petković et al. . Consider the Hermite interpolating polynomial of degree 3,

k(t) = c_1 + c_2(t − x_n) + c_3(t − x_n)^2 + c_4(t − x_n)^3, (9)

where c1,c2,c3, and c4 need to be found. With the conditions

g(x_n) = k(x_n), g(y_n) = k(y_n), g(w_n) = k(w_n), g′(x_n) = k′(x_n). By solving the system of linear equations resulting from the above conditions, we get

c_1 = g(x_n), c_2 = g′(x_n),
c_4 = g[x_n, w_n]/((x_n − w_n)(y_n − w_n)) − g[y_n, x_n]/((x_n − y_n)(y_n − w_n)) + g′(x_n)/((x_n − w_n)(x_n − y_n)),
c_3 = (g[x_n, y_n] − g′(x_n))/(y_n − x_n) − (y_n − x_n) c_4.
Substituting these into Eq. (9) and using the approximation g′(w_n) ≈ k′(w_n), one can write

k′(w_n) = g[w_n, x_n](2 + (x_n − w_n)/(y_n − w_n)) − g[x_n, y_n](x_n − w_n)^2/((x_n − y_n)(y_n − w_n)) + g′(x_n)(y_n − w_n)/(x_n − y_n) ≈ g′(w_n). (10)
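Because the Hermite cubic reproduces any cubic polynomial exactly, the k′(w_n) expression above must return g′(w_n) exactly whenever g is a cubic; this makes a convenient unit test (our sketch, with arbitrary sample points):

```python
g  = lambda t: t**3 + 2 * t - 5          # any cubic: the Hermite interpolant k equals g
dg = lambda t: 3 * t**2 + 2
x, y, w = 0.5, 0.8, 0.9                   # arbitrary distinct sample points
d = lambda a, b: (g(a) - g(b)) / (a - b)  # divided difference g[a, b]
kp = (d(w, x) * (2 + (x - w) / (y - w))           # k'(w), the g'(w) approximation
      - d(x, y) * (x - w)**2 / ((x - y) * (y - w))
      + dg(x) * (y - w) / (x - y))
print(kp, dg(w))   # both equal 4.43 for this cubic
```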

Replacing g′(y_n) and g′(w_n) in Algorithm 3 and in Eq. (5) with the approximations (8) and (10), respectively, the following algorithm is obtained.

Algorithm 4. Let x_0 be an initial guess of the solution of g(x) = 0. Then we can approximate x_{n+1} by the iterative method defined by:
(11) y_n = x_n − g(x_n)/g′(x_n),
w_n = y_n − g(y_n)/q(y_n) − 2(g(y_n))^2 q(y_n) R(x_n, y_n) / [4(q(y_n))^4 − 4g(y_n)(q(y_n))^2 R(x_n, y_n) + (g(y_n))^2 (R(x_n, y_n))^2],
x_{n+1} = w_n − g(w_n)/k′(w_n).

We call the above scheme the third modified Halley method, MH3; it has convergence order eight, as shown in the next section. Each iteration of Algorithm 4 requires three function evaluations and only one first-derivative evaluation. Based on the conjecture of Kung and Traub , MH3 attains optimality and has efficiency index 8^{1/4} ≈ 1.682.
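Putting the pieces together, Algorithm 4 can be transcribed as the following sketch (ours; the early-exit guards, which keep double-precision arithmetic safe once iterates coincide, and the test equation x^3 − 10 = 0 are our additions):

```python
def mh3(g, dg, x, iters=3):
    """Sketch of Algorithm 4 (MH3): three g-evaluations and one g'-evaluation per step."""
    for _ in range(iters):
        gx, dgx = g(x), dg(x)
        y = x - gx / dgx                              # first step (Newton)
        if y == x:                                    # converged to machine precision
            return x
        gy = g(y)
        gd_xy = (gy - gx) / (y - x)                   # divided difference g[y, x]
        q = 2 * gd_xy - dgx                           # approximates g'(y)
        R = (3 * gd_xy - 2 * q - dgx) * 2 / (x - y)   # approximates g''(y), q in place of g'(y)
        w = (y - gy / q
             - 2 * gy**2 * q * R
               / (4 * q**4 - 4 * gy * q**2 * R + gy**2 * R**2))
        if w in (x, y):                               # degenerate step: stop early
            return w
        gw = g(w)
        kp = ((gw - gx) / (w - x) * (2 + (x - w) / (y - w))   # k'(w) ≈ g'(w)
              - gd_xy * (x - w)**2 / ((x - y) * (y - w))
              + dgx * (y - w) / (x - y))
        x = w - gw / kp                               # third step (Newton with k'(w))
    return x

print(mh3(lambda t: t**3 - 10, lambda t: 3 * t**2, 2.0))  # ≈ cube root of 10
```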

Order of Convergence

In this section we establish the order of convergence of the presented method MH3 given by Algorithm 4, using Mathematica to prove the result. For the next theorem, let α be a root of g(x) and let e_n = x_n − α be the error at the n-th iteration. Using the Taylor series expansion of g(x) about x = α, g(x_n) = g′(α)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + …], where c_k = (1/k!) g^{(k)}(α)/g′(α), k = 2, 3, …, we have the following:

Theorem 1. Let α ∈ A be a simple root of the function g: A ⊆ ℝ → ℝ, where g(x) is sufficiently differentiable in an open interval A. Let x_0 be an initial guess close enough to the root α. Then the proposed scheme given by (11) has at least eighth-order convergence.

Proof. The following Mathematica code proves the theorem.

In := g[e_] := dg[α] (e + c2 e^2 + c3 e^3 + c4 e^4); (* Taylor series of g, with dg[α] = g′(α) *)

In := g[x_, y_] := (g[x] - g[y])/(x - y); (* the finite (divided) difference *)

In := q[x_, y_] := 2 g[x, y] - g'[x]; (* g′(y) approximation *)

In := R[x_, y_] := (3 g[x, y] - 2 q[x, y] - g'[x]) 2/(x - y); (* second-derivative approximation *)

In := k[x_, y_, w_] := g[w, x] (2 + (x - w)/(y - w)) - g[x, y] (x - w)^2/((x - y) (y - w)) + g'[x] (y - w)/(x - y); (* g′(w) approximation *)

In := y = e - Series[g[e]/g'[e], {e, 0, 8}]; (* first step of Alg. 4 *)

In := w = y - g[y]/q[e, y] - 2 (g[y])^2 q[e, y] R[e, y]/(4 (q[e, y])^4 - 4 g[y] (q[e, y])^2 R[e, y] + (g[y])^2 (R[e, y])^2); (* second step of Alg. 4 *)

In := en1 = w - g[w]/k[e, y, w] // FullSimplify (* third step of Alg. 4 *)

Out := c2^2 c3 (c2 c3 - c4) e^8 + O[e]^9

Hence, the MH3 technique given by Algorithm 4 has eighth-order convergence.

Applications and Numerical Examples

To show the efficiency of the new optimal eighth-order method MH3, several examples are tested, including some chemical engineering problems. Comparisons are made against the following optimal eighth-order schemes: the method proposed by Kung and Traub , the method presented by Cordero et al.  with β = 1, the second case of the first family with β = 1 proposed by Sharma et al. , the method presented by Behl et al.  with β = 1, and special case 2 with β = 1 of the method presented by Behl et al. . We denote these methods by KT, CLMT, SA, BGMM, and BAM, respectively.

We use |x_n − x_{n−1}| < 10^{−30} and |f(x_n) − f(x_{n−1})| < 10^{−30}, imposed simultaneously, as the stopping criterion of the computer programs. Mathematica 9 was used to carry out all computations with 10000 significant digits.

Tabs. 1-5 illustrate the comparisons between the iterative methods, where n is the number of iterations needed to satisfy the stopping criterion, x_n is the approximate root, |x_n − x_{n−1}| is the absolute difference between two successive approximations of the root, f(x_n) is the value of the function at the approximate root, and ACOC is the approximated computational order of convergence of Cordero et al. , which can be estimated as follows:

ACOC ≈ ln|(x_{n+1} − x_n)/(x_n − x_{n−1})| / ln|(x_n − x_{n−1})/(x_{n−1} − x_{n−2})|.

Finally, the CPU time in seconds required to satisfy the stopping criterion is measured with the built-in function "TimeUsed" of Mathematica 9. All calculations were performed under the same conditions on an Intel Core i7-3770 CPU @ 3.40 GHz with 4 GB RAM, running 64-bit Microsoft Windows 10.
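The ACOC estimate is easy to compute from four consecutive iterates; in this sketch (ours), the iterates come from plain Newton iteration on x^2 − 2 = 0, whose ACOC should be close to the theoretical order 2:

```python
import math

def acoc(xs):
    """ACOC from the last four iterates xs[-4:] (Cordero-Torregrosa estimate)."""
    x3, x2, x1, x0 = xs[-4:]              # oldest ... newest
    return (math.log(abs((x0 - x1) / (x1 - x2)))
            / math.log(abs((x1 - x2) / (x2 - x3))))

xs = [1.0]                                # Newton iterates for x^2 - 2 = 0
for _ in range(4):
    xs.append(0.5 * (xs[-1] + 2.0 / xs[-1]))
print(acoc(xs))                           # close to 2, Newton's order
```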

Consider the following test examples:

Example 1 (A chemical equilibrium problem). Consider the equation from  that describes the fraction of the nitrogen-hydrogen feed converted to ammonia (this fraction is called the fractional conversion). At a pressure of 250 atm and a temperature of 500°C, the original problem consists of finding the root of the function

f_1(x) = 8(4 − x)^2 x^2 / ((6 − 3x)^2 (2 − x)) − 0.186,

which can be reduced to the polynomial form

f_1(x) = x^4 − 7.79075x^3 + 14.7445x^2 + 2.511x − 1.674.

The four roots of this polynomial are x_1 = 0.27776, x_2 = −0.384094, and x_{3,4} = 3.94854 ± 0.316124i. By definition, the fractional conversion must lie between 0 and 1, so only the first root x_1 = 0.27776 is acceptable and physically meaningful. We started from the initial guess x_0 = 0.3. The results are summarized in Tab. 1.
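As a quick cross-check (ours, using plain Newton rather than MH3), double-precision iteration on the reduced quartic from x_0 = 0.3 lands on the root reported in Tab. 1:

```python
f  = lambda x: x**4 - 7.79075 * x**3 + 14.7445 * x**2 + 2.511 * x - 1.674
df = lambda x: 4 * x**3 - 3 * 7.79075 * x**2 + 2 * 14.7445 * x + 2.511
x = 0.3                       # initial guess used in the paper
for _ in range(10):
    x -= f(x) / df(x)
print(x)                      # ≈ 0.2777595...
```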

Table 1: Comparisons between different methods on test function f_1(x)
Method n x_n |x_n − x_{n−1}| |f(x_n)| ACOC CPU time
f_1(x); x_0 = 0.3
KT 3 0.27775954284172066 5.82E-96 2.41E-760 8 0.171
CLMT 3 0.27775954284172066 1.72E-103 1.87E-821 8 0.203
SA 3 0.27775954284172066 6.38E-67 2.21E-524 8 0.203
BGMM 3 0.27775954284172066 9.36E-70 2.17E-547 8 0.422
BAM 3 0.27775954284172066 3.81E-96 7.95E-762 8 0.203
MH3 3 0.27775954284172066 3.41E-109 9.49E-868 8 0.188

Example 2 (Azeotropic point of a binary solution). Consider the problem from Shacham et al.  of determining the azeotropic point of a binary solution:

f_2(x) = AB[B(1 − x)^2 − Ax^2] / [x(A − B) + B]^2 + 0.14845,

where A and B are coefficients of the Van Laar equation, which describes the phase equilibria of liquid solutions. For this problem, A = 0.38969 and B = 0.55954.

The root of this equation is x = 0.6914737357. We took the initial approximation x_0 = 1. See Tab. 2 for the results and comparisons.
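Since f_2 changes sign on [0.5, 0.9] for these coefficients, even plain bisection (our check, not one of the compared methods) brackets the reported azeotropic point:

```python
A, B = 0.38969, 0.55954
f2 = lambda x: A * B * (B * (1 - x)**2 - A * x**2) / (x * (A - B) + B)**2 + 0.14845

lo, hi = 0.5, 0.9             # f2(lo) > 0 > f2(hi): a sign change brackets the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f2(lo) * f2(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))        # ≈ 0.6914737357
```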

Table 2: Comparisons between different methods on test function f_2(x)
Method n x_n |x_n − x_{n−1}| |f(x_n)| ACOC CPU time
f_2(x); x_0 = 1
KT 3 0.69147373574714142 7.41E-48 3.05E-379 8 0.187
CLMT 3 0.69147373574714142 1.74E-56 1.87E-449 8 0.234
SA 3 0.69147373574714142 7.73E-83 6.19E-665 8 0.172
BGMM 3 0.69147373574714142 4.61E-76 1.77E-609 8 0.453
BAM 3 0.69147373574714142 1.38E-44 1.36E-352 8 0.188
MH3 3 0.69147373574714142 8.37E-54 7.36E-428 8 0.171

Example 3 (Conversion in a chemical reactor). In this example from , the nonlinear equation to be solved is

f_3(x) = x/(1 − x) − 5 ln[0.4(1 − x)/(0.4 − 0.5x)] + 4.45977,

where x is the fractional conversion of species in a chemical reactor; therefore, x must be bounded between 0 and 1.

The solution of this equation is x = 0.7573962463. As the initial approximation, we selected x_0 = 0.77. The results are given in Tab. 3.
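A derivative-free cross-check (ours, using the secant method rather than the compared schemes), started inside the admissible range 0 < x < 0.8, reproduces the reported conversion:

```python
import math

f3 = lambda x: x / (1 - x) - 5 * math.log(0.4 * (1 - x) / (0.4 - 0.5 * x)) + 4.45977

x0, x1 = 0.76, 0.77           # both inside the valid range (0.4 - 0.5x must stay positive)
for _ in range(15):
    f0, f1 = f3(x0), f3(x1)
    if f1 == f0:              # secant step undefined; iterates have converged
        break
    x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
print(x1)                     # ≈ 0.7573962463
```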

Table 3: Comparisons between different methods on test function f_3(x)
Method n x_n |x_n − x_{n−1}| |f(x_n)| ACOC CPU time
f_3(x); x_0 = 0.77
KT 3 0.75739624625375388 6.61E-47 6.39E-360 8 0.312
CLMT 3 0.75739624625375388 6.24E-53 2.86E-409 8 0.765
SA 4 0.75739624625375388 1.75E-106 1.08E-831 8 0.280
BGMM 4 0.75739624625375388 5.99E-173 6.79E-1363 8 7.484
BAM 3 0.75739624625375388 4.82E-42 2.29E-320 8 0.312
MH3 3 0.75739624625375388 2.37E-48 2.79E-372 8 0.264

Example 4 (Volume from the Van der Waals equation). The Van der Waals equation is given by

(p + n^2 a/V^2)(V − nb) = nRT,

where p, V, T, and n are the pressure, volume, temperature in Kelvin, and number of moles of the gas, R = 0.0820578 is the gas constant, and a and b are the Van der Waals constants, which depend on the type of gas. The above equation is clearly nonlinear in V and can be reduced to the following polynomial in V:

f(V) = pV^3 − n(RT + bp)V^2 + n^2 aV − n^3 ab.

For instance, to find the volume of 1.4 moles of benzene vapor under a pressure of 40 atm at a temperature of 500°C, given that the Van der Waals constants for benzene are a = 18 and b = 0.1154, the problem becomes that of finding the roots of the polynomial

f_4(x) = 40x^3 − 95.26535116x^2 + 35.28x − 5.6998368.

This equation has three roots: x = 1.97078 and x = 0.205425 ± 0.173507i. Since V is a volume, only the positive real root is physically meaningful, that is, the first root. We considered the initial approximation x_0 = 2 for this problem. The results and comparisons are summarized in Tab. 4.
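One can verify (our sketch) that the coefficients of f_4 follow from p = 40, T = 773 K (i.e., 500°C as used for the paper's coefficients), n = 1.4, R = 0.0820578, a = 18, b = 0.1154, and that Newton iteration from x_0 = 2 lands on the reported volume:

```python
p, T, n = 40.0, 773.0, 1.4            # pressure (atm), temperature (K), moles
Rg, a, b = 0.0820578, 18.0, 0.1154    # gas constant and benzene constants
c3, c2, c1, c0 = p, -n * (Rg * T + b * p), n**2 * a, -n**3 * a * b
print(c3, c2, c1, c0)                  # matches the coefficients of f4

f  = lambda V: ((c3 * V + c2) * V + c1) * V + c0
df = lambda V: (3 * c3 * V + 2 * c2) * V + c1
V = 2.0                                # initial guess used in the paper
for _ in range(10):
    V -= f(V) / df(V)
print(V)                               # ≈ 1.9707842...
```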

Table 4: Comparisons between different methods on test function f_4(x)
Method n x_n |x_n − x_{n−1}| |f(x_n)| ACOC CPU time
f_4(x); x_0 = 2
KT 3 1.9707842194070294 2.41E-89 2.11E-706 8 0.171
CLMT 3 1.9707842194070294 1.10E-98 3.31E-782 8 0.171
SA 3 1.9707842194070294 3.43E-40 2.07E-305 8 0.187
BGMM 3 1.9707842194070294 3.86E-44 1.35E-337 8 0.312
BAM 3 1.9707842194070294 2.83E-88 1.04E-697 8 0.187
MH3 3 1.9707842194070294 7.22E-107 1.32E-848 8 0.156

Example 5. To study the proposed method on further nonlinear functions, consider the following six test functions:

f_5(x) = (x − 1)^3 − 1, f_6(x) = x^3 − 10, f_7(x) = cos(x) − x, f_8(x) = 1 − x^2 + sin^2(x), f_9(x) = (2 + x)e^x − 1, f_10(x) = ln(x^2 − x + 1) − 4 sin(x − 1).

Comparisons’ results of Example 5 are presented in Tab. 5.

Table 5: Comparisons between different methods on test functions f_5(x)-f_10(x)
Method n x_n |x_n − x_{n−1}| |f(x_n)| ACOC CPU time
f_5(x); x_0 = 2.5
KT 4 2 6.26E-188 3.94E-1497 8 0.187
CLMT 4 2 2.50E-241 2.06E-1925 8 0.156
SA 4 2 9.54E-105 5.85E-830 8 0.203
BGMM 4 2 3.50E-131 5.28E-1042 8 0.203
BAM 4 2 2.16E-185 1.13E-1476 8 0.172
MH3 3 2 4.68E-32 7.73E-252 8 0.156
f_6(x); x_0 = 2
KT 3 2.1544346900318837 3.63E-65 1.09E-516 8 0.171
CLMT 3 2.1544346900318837 1.21E-75 1.30E-601 8 0.188
SA 4 2.1544346900318837 5.84E-175 4.83E-1391 8 0.188
BGMM 4 2.1544346900318837 3.10E-236 6.70E-1882 8 0.218
BAM 3 2.1544346900318837 2.62E-63 1.16E-501 8 0.203
MH3 3 2.1544346900318837 1.56E-81 2.55E-649 8 0.156
f_7(x); x_0 = 1.7
KT 3 0.7390851332151606 4.05E-46 1.06E-366 8 0.281
CLMT 3 0.7390851332151606 2.55E-52 1.21E-417 8 0.281
SA 3 0.7390851332151606 2.94E-34 7.85E-273 8 0.281
BGMM 3 0.7390851332151606 4.47E-42 2.58E-363 8 1.313
BAM 3 0.7390851332151606 1.22E-47 5.96E-379 8 0.329
MH3 3 0.7390851332151606 4.13E-53 2.35E-424 8 0.249
f_8(x); x_0 = 1
KT 4 1.4044916482153412 2.97E-123 2.04E-980 8 0.281
CLMT 4 1.4044916482153412 2.34E-226 2.05E-1806 8 0.390
SA 3 1.4044916482153412 2.21E-32 2.30E-253 8 0.265
BGMM 3 1.4044916482153412 8.57E-40 5.22E-314 8 1.657
BAM 4 1.4044916482153412 8.75E-92 1.46E-728 8 0.374
MH3 3 1.4044916482153412 6.83E-38 1.23E-299 8 0.250
f_9(x); x_0 = −0.5
KT 3 −0.4428544010023886 2.74E-85 1.24E-677 8 0.313
CLMT 3 −0.4428544010023886 2.30E-95 2.55E-759 8 0.328
SA 3 −0.4428544010023886 2.47E-79 2.76E-629 8 0.328
BGMM 3 −0.4428544010023886 1.83E-88 2.48E-703 8 5.126
BAM 3 −0.4428544010023886 1.59E-82 3.06E-655 8 0.454
MH3 3 −0.4428544010023886 2.57E-96 5.13E-767 8 0.313
f_10(x); x_0 = 1.5
KT 3 1 1.78E-42 5.04E-338 8 0.282
CLMT 3 1 2.53E-47 3.47E-377 8 0.282
SA 3 1 5.25E-48 2.07E-381 8 0.344
BGMM 3 1 6.53E-51 5.40E-405 8 4.359
BAM 3 1 5.28E-42 4.30E-334 8 0.328
MH3 3 1 1.80E-54 2.40E-487 9 0.281

It is clear from Tabs. 1-5 that MH3 needs fewer iterations than the other tested methods to satisfy the stopping criterion, or in some cases the same number. Based on the numerical experiments, the iterative scheme MH3 is comparable to the tested schemes of equal order. Note that even when MH3 needs the same number of iterations to satisfy the convergence criterion, it is still superior to the other schemes considered in this study, since |x_n − x_{n−1}| and |f(x_n)| are smaller for MH3 than for the other tested methods of the same order. Also, in the last column of Tabs. 1-5, the CPU time required by MH3 to satisfy the convergence condition is smaller for nine of the ten test functions. Overall, based on either the number of iterations or the CPU time needed to satisfy the convergence criterion, the new method would be preferable to the tested methods.

For the test functions in Example 5, we also test another convergence condition: the number of iterations required so that |x_n − x_{n−1}| < 10^{−200}. It is clear from Tab. 6 that MH3 requires a number of iterations that is fewer than or equal to that needed by the tested methods of equal order of convergence. Overall, MH3 is comparable to the other tested methods when taking into account both the accuracy of the approximate zero and the CPU time needed to satisfy the stopping criterion.

Table 6: Comparisons between different methods such that |x_n − x_{n−1}| < 10^{−200}
Method f5(x) f6(x) f7(x) f8(x) f9(x) f10(x)
x0=2.5 x0=2 x0=1.7 x0=1 x0=0.5 x0=1.5
KT 5 4 4 5 4 4
CLMT 4 4 4 4 4 4
SA 5 5 4 4 4 4
BGMM 4 4 4 4 4 4
BAM 5 4 4 5 4 4
MH3 4 4 4 4 4 4
Conclusion

A new optimal root-finding scheme for nonlinear equations has been established in this work. The optimality of the proposed method was reached by using the composition technique together with Hermite's polynomial and finite differences. Mathematica was used to show that the optimal technique converges with order eight. Several numerical examples, including four real-life problems from the field of chemical engineering, were examined, demonstrating the strength of the proposed method. Overall, the implemented method is comparable to the tested iterative schemes of equal order of convergence.

Funding Statement: We are grateful for the financial support from UKM’s research Grant GUP-2019-033.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References
[1] J. F. Traub, Iterative Methods for the Solution of Equations. Englewood Cliffs, NJ, USA: Prentice-Hall, 1964.
[2] A. Cordero, T. Lotfi, K. Mahdiani and J. R. Torregrosa, "A stable family with high order of convergence for solving nonlinear equations," Applied Mathematics and Computation, vol. 254, pp. 240-251, 2015.
[3] A. Kumar, P. Maroju, R. Behl, D. K. Gupta and S. S. Motsa, "A family of higher order iterations free from second derivative for nonlinear equations in R," Journal of Computational and Applied Mathematics, vol. 330, pp. 676-694, 2018.
[4] O. S. Solaiman and I. Hashim, "Efficacy of optimal methods for nonlinear equations with chemical engineering applications," Mathematical Problems in Engineering, vol. 2019, 2019.
[5] S. Sharifi, M. Salimi, S. Siegmund and T. Lotfi, "A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations," Mathematics and Computers in Simulation, vol. 119, pp. 69-90, 2016.
[6] M. S. Petković, "On a general class of multipoint root-finding methods of high computational efficiency," SIAM Journal on Numerical Analysis, vol. 47, no. 6, pp. 4402-4414, 2010.
[7] M. S. Petković, B. Neta, L. D. Petković and J. Džunić, "Multipoint methods for solving nonlinear equations: A survey," Applied Mathematics and Computation, vol. 226, pp. 635-660, 2014.
[8] A. Cordero and J. R. Torregrosa, "On the design of optimal iterative methods for solving nonlinear equations," in Advances in Iterative Methods for Nonlinear Equations, SEMA SIMAI Springer Series, vol. 10, S. Amat and S. Busquier, Eds. New York, NY: Springer International Publishing, pp. 79-111, 2016.
[9] E. Halley, "A new, exact and easy method of finding the roots of equations generally and that without any previous reduction," Philosophical Transactions of the Royal Society, vol. 18, no. 210, pp. 136-148, 1694.
[10] K. I. Noor and M. A. Noor, "Predictor-corrector Halley method for nonlinear equations," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1587-1591, 2007.
[11] M. A. Noor, W. A. Khan and A. Hussain, "A new modified Halley method without second derivatives for nonlinear equation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1268-1273, 2007.
[12] O. S. Solaiman and I. Hashim, "Two new efficient sixth order iterative methods for solving nonlinear equations," Journal of King Saud University - Science, vol. 31, no. 4, pp. 701-705, 2019.
[13] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the ACM, vol. 21, no. 4, pp. 643-651, 1974.
[14] C. Chun, "Some fourth-order iterative methods for solving nonlinear equations," Applied Mathematics and Computation, vol. 195, no. 2, pp. 454-459, 2008.
[15] R. Sharma and A. Bahl, "An optimal fourth order iterative method for solving nonlinear equations and its dynamics," Journal of Complex Analysis, vol. 2015, pp. 1-9, 2015.
[16] R. Behl, P. Maroju and S. S. Motsa, "A family of second derivative free fourth order continuation method for solving nonlinear equations," Journal of Computational and Applied Mathematics, vol. 318, pp. 38-46, 2017.
[17] O. S. Solaiman, S. A. A. Karim and I. Hashim, "Optimal fourth- and eighth-order of convergence derivative-free modifications of King's method," Journal of King Saud University - Science, vol. 31, no. 4, pp. 1499-1504, 2019.
[18] R. Behl, I. K. Argyros and S. S. Motsa, "A new highly efficient and optimal family of eighth-order methods for solving nonlinear equations," Applied Mathematics and Computation, vol. 282, pp. 175-186, 2016.
[19] J. R. Sharma and H. Arora, "Some novel optimal eighth order derivative-free root solvers and their basins of attraction," Applied Mathematics and Computation, vol. 284, pp. 149-161, 2016.
[20] G. Matthies, M. Salimi, S. Sharifi and J. L. Varona, "An optimal three-point eighth-order iterative method without memory for solving nonlinear equations with its dynamics," Japan Journal of Industrial and Applied Mathematics, vol. 33, no. 3, pp. 751-766, 2016.
[21] R. Behl, D. González, P. Maroju and S. S. Motsa, "An optimal and efficient general eighth-order derivative free scheme for simple roots," Journal of Computational and Applied Mathematics, vol. 330, pp. 666-675, 2018.
[22] Y. H. Geum, Y. I. Kim and B. Neta, "Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points," Journal of Computational and Applied Mathematics, vol. 333, pp. 131-156, 2018.
[23] C. Chun and B. Neta, "Comparison of several families of optimal eighth order methods," Applied Mathematics and Computation, vol. 274, pp. 762-773, 2016.
[24] A. Cordero and J. R. Torregrosa, "Variants of Newton's method using fifth-order quadrature formulas," Applied Mathematics and Computation, vol. 190, no. 1, pp. 686-698, 2007.
[25] G. V. Balaji and J. D. Seader, "Application of interval Newton's method to chemical engineering problems," Reliable Computing, vol. 1, no. 3, pp. 215-223, 1995.
[26] M. Shacham and E. Kehat, "An iteration method with memory for the solution of a non-linear equation," Chemical Engineering Science, vol. 27, no. 11, pp. 2099-2101, 1972.
[27] M. Shacham, "Numerical solution of constrained non-linear algebraic equations," International Journal for Numerical Methods in Engineering, vol. 23, no. 8, pp. 1455-1481, 1986.