

Roots of Equations

Bracketing Methods

Graphical Methods

A simple method for obtaining an estimate of the root of the equation $ f(x)=0$ is to make a plot of the function and observe where it crosses the x axis.

The Bisection Method

In general, if $ f(x)$ is real and continuous in the interval from $ x_l$ to $ x_u$ and $ f(x_l)$ and $ f(x_u)$ have opposite signs, that is

$\displaystyle f(x_l)f(x_u) < 0$ (2.1)

then there is at least one real root between $ x_l$ and $ x_u$.
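
The bisection algorithm repeatedly halves the bracketing interval and retains the half over which the sign change persists. The following is a minimal sketch; the test function $ f(x)=x^3-10$, the starting bracket $ [1,3]$, and the fixed iteration count are assumptions chosen only for illustration.

! Minimal sketch of the bisection method.
! The test function f(x) = x**3 - 10 and the bracket [1,3] are
! illustrative assumptions.
program bisection
  implicit none
  real :: xl, xu, xr
  integer :: i

  xl = 1.0
  xu = 3.0
  do i = 1, 40
     xr = 0.5*(xl + xu)              ! midpoint of the current bracket
     if (f(xl)*f(xr) < 0.0) then     ! sign change lies in the lower half
        xu = xr
     else                            ! sign change lies in the upper half
        xl = xr
     end if
  end do
  print *, 'root estimate:', xr      ! cube root of 10 = 2.15443...

contains

  real function f(x)
    real, intent(in) :: x
    f = x**3 - 10.0
  end function f

end program bisection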

The False-Position Method

A shortcoming of the bisection method is that, in dividing the interval from $ x_l$ to $ x_u$ into equal halves, no account is taken of the magnitudes of $ f(x_l)$ and $ f(x_u)$. An alternative is to join $ f(x_l)$ and $ f(x_u)$ by a straight line; the intersection of this line with the x axis represents an improved estimate of the root. This method is called the method of false position, regula falsi, or linear interpolation.

The false-position formula is

$\displaystyle x_r = x_u - \frac{f(x_u)(x_l - x_u)}{f(x_l)-f(x_u)}$ (2.2)

See Figure 5.14 in the textbook.
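
The false-position iteration differs from bisection only in how the new estimate is computed: the chord intersection of Eq. (2.2) replaces the midpoint. The sketch below reuses the same illustrative function and bracket as the bisection sketch above.

! Minimal sketch of the false-position (regula falsi) method.
! Same illustrative f(x) and bracket as in the bisection sketch.
program false_position
  implicit none
  real :: xl, xu, xr
  integer :: i

  xl = 1.0
  xu = 3.0
  do i = 1, 40
     xr = xu - f(xu)*(xl - xu)/(f(xl) - f(xu))   ! Eq. (2.2)
     if (f(xl)*f(xr) < 0.0) then
        xu = xr
     else
        xl = xr
     end if
  end do
  print *, 'root estimate:', xr

contains

  real function f(x)
    real, intent(in) :: x
    f = x**3 - 10.0
  end function f

end program false_position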

Open Methods

Simple Fixed-point Iteration

Open methods employ a formula to predict the root. Such a formula can be developed for simple fixed-point iteration by rearranging the equation $ f(x)=0$ so that $ x$ is on the left-hand side:

$\displaystyle x=g(x)$ (2.3)

Figure 2.1: Graphical depiction of simple fixed-point method.
\includegraphics[scale=.4]{pic/fixed-point}
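
A minimal sketch of the iteration $ x_{i+1}=g(x_i)$ is given below. The rearrangement $ x=g(x)=e^{-x}$ (that is, $ f(x)=e^{-x}-x=0$) and the starting guess $ x=0$ are illustrative assumptions; this particular $ g(x)$ converges to the root near 0.567.

! Minimal sketch of simple fixed-point iteration.
! The rearrangement x = g(x) = exp(-x) and the starting guess x = 0
! are illustrative assumptions.
program fixed_point
  implicit none
  real :: x
  integer :: i

  x = 0.0
  do i = 1, 30
     x = exp(-x)                   ! Eq. (2.3): x_(i+1) = g(x_i)
  end do
  print *, 'root estimate:', x     ! converges to about 0.56714

end program fixed_point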

The Newton-Raphson Method

If the initial guess at the root is $ x_i$, a tangent can be extended from the point $ [x_i, f(x_i)]$. The point where this tangent crosses the x axis usually represents an improved estimate of the root.

The Newton-Raphson formula is

$\displaystyle x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$ (2.4)

Pitfalls of the Newton-Raphson method are shown in Figure 6.6 of the textbook.
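
A minimal Newton-Raphson sketch follows. The test function $ f(x)=x^2-2$, its derivative $ f'(x)=2x$, and the starting guess $ x=1$ are illustrative assumptions.

! Minimal sketch of the Newton-Raphson method, Eq. (2.4).
! The test function f(x) = x**2 - 2 and the starting guess x = 1 are
! illustrative assumptions.
program newton_raphson
  implicit none
  real :: x
  integer :: i

  x = 1.0
  do i = 1, 10
     x = x - f(x)/fprime(x)        ! Eq. (2.4)
  end do
  print *, 'root estimate:', x     ! sqrt(2) = 1.41421...

contains

  real function f(x)
    real, intent(in) :: x
    f = x**2 - 2.0
  end function f

  real function fprime(x)
    real, intent(in) :: x
    fprime = 2.0*x
  end function fprime

end program newton_raphson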

The Secant Method

A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative. In the secant method, the derivative is approximated by a backward finite divided difference:

$\displaystyle f'(x_i) \simeq \frac{f(x_{i-1}) - f(x_i)}{x_{i-1} - x_i}$ (2.5)

The Secant formula is

$\displaystyle x_{i+1} = x_i - \frac{f(x_i)(x_{i-1}-x_i)}{f(x_{i-1})-f(x_i)}$ (2.6)

Although the secant formula (2.6) is identical in form to the false-position formula (2.2), the two methods differ in how one of the previous values is replaced by the new estimate: false position always retains the two values that bracket the root, whereas the secant method replaces the values in strict sequence ($ x_{i-1}$ by $ x_i$ and $ x_i$ by $ x_{i+1}$), so the root is not necessarily bracketed.
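
The sketch below shows this strict-sequence replacement. The test function $ f(x)=x^2-2$, the two starting values, and the stopping tests are illustrative assumptions.

! Minimal sketch of the secant method, Eq. (2.6).
! The starting values 1 and 2 for f(x) = x**2 - 2 are illustrative
! assumptions.
program secant
  implicit none
  real :: xold, x, xnew
  integer :: i

  xold = 1.0      ! x_(i-1)
  x    = 2.0      ! x_i
  do i = 1, 20
     if (f(xold) == f(x)) exit                      ! guard: zero denominator
     xnew = x - f(x)*(xold - x)/(f(xold) - f(x))    ! Eq. (2.6)
     xold = x                                       ! replace in strict sequence
     x    = xnew
     if (abs(x - xold) < 1.0e-6) exit               ! estimate has settled
  end do
  print *, 'root estimate:', x

contains

  real function f(x)
    real, intent(in) :: x
    f = x**2 - 2.0
  end function f

end program secant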

Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate $ f'(x)$,

$\displaystyle f'(x_i) \simeq \frac{f(x_i + \delta x_i) - f(x_i)}{\delta x_i}$ (2.7)

where $ \delta$ is a small perturbation fraction. This approximation gives the following iterative equation:

$\displaystyle x_{i+1} = x_i - \frac{\delta x_i f(x_i)}{f(x_i+\delta x_i) - f(x_i)}$ (2.8)
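
A minimal sketch of this modified secant iteration is given below. The perturbation fraction $ \delta=0.01$, the test function $ f(x)=x^2-2$, and the starting guess $ x=1.5$ are illustrative assumptions.

! Minimal sketch of the modified secant method, Eq. (2.8).
! delta, f(x) and the starting guess are illustrative assumptions.
program modified_secant
  implicit none
  real, parameter :: delta = 0.01
  real :: x, dx
  integer :: i

  x = 1.5
  do i = 1, 15
     dx = delta*x                              ! perturbation delta*x_i
     x  = x - dx*f(x)/(f(x + dx) - f(x))       ! Eq. (2.8)
  end do
  print *, 'root estimate:', x                 ! sqrt(2) = 1.41421...

contains

  real function f(x)
    real, intent(in) :: x
    f = x**2 - 2.0
  end function f

end program modified_secant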

Multiple Roots

Multiple roots pose several difficulties: the function does not change sign at a root of even multiplicity, so bracketing methods cannot be used there; and both $ f(x)$ and $ f'(x)$ approach zero at the root, which slows the convergence of the Newton-Raphson and secant methods and raises the possibility of division by zero.

One remedy is to define a new function $ u(x)$, the ratio of the function to its derivative,

$\displaystyle u(x) = \frac{f(x)}{f'(x)}$ (2.9)

Because $ u(x)$ has the same roots as $ f(x)$ but with multiplicity one, the standard Newton-Raphson formula can be applied to it, giving an alternative form of the Newton-Raphson method:

$\displaystyle x_{i+1} = x_i - \frac{u(x_i)}{u'(x_i)}$ (2.10)

where $ u'(x)$ is

$\displaystyle u'(x)=\frac{f'(x)f'(x)-f(x)f''(x)}{\left[ f'(x) \right]^2}$ (2.11)

Substituting Eqs. (2.9) and (2.11) into Eq. (2.10) and simplifying gives

$\displaystyle x_{i+1} = x_i - \frac{f(x_i)f'(x_i)}{\left[f'(x_i)\right]^2 - f(x_i)f''(x_i)}$ (2.12)
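
The sketch below applies Eq. (2.12) to a polynomial with a double root. The test polynomial $ f(x)=(x-3)(x-1)^2=x^3-5x^2+7x-3$, the starting guess, and the stopping test are illustrative assumptions.

! Minimal sketch of the modified Newton-Raphson method for multiple
! roots, Eq. (2.12).  The polynomial has a double root at x = 1.
program modified_newton
  implicit none
  real :: x
  integer :: i

  x = 0.0
  do i = 1, 10
     if (abs(f(x)) < 1.0e-6) exit                    ! already close enough
     x = x - f(x)*fp(x)/(fp(x)**2 - f(x)*fpp(x))     ! Eq. (2.12)
  end do
  print *, 'root estimate:', x     ! approaches the double root x = 1

contains

  real function f(x)
    real, intent(in) :: x
    f = x**3 - 5.0*x**2 + 7.0*x - 3.0
  end function f

  real function fp(x)              ! first derivative
    real, intent(in) :: x
    fp = 3.0*x**2 - 10.0*x + 7.0
  end function fp

  real function fpp(x)             ! second derivative
    real, intent(in) :: x
    fpp = 6.0*x - 10.0
  end function fpp

end program modified_newton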

Systems of Nonlinear Equations

The Newton-Raphson method can also be used to solve a set of nonlinear equations. It employs the derivative of a function to estimate its intercept with the axis of the independent variable, and this estimate is based on a first-order Taylor series expansion. As an example, consider the two-variable case,

$\displaystyle u(x,y)$ $\displaystyle = 0$ (2.13)
$\displaystyle v(x,y)$ $\displaystyle = 0$ (2.14)

A first-order Taylor series expansion can be written as

$\displaystyle u_{i+1}= u_i + (x_{i+1}-x_i) \frac{\partial u_i}{\partial x} + (y_{i+1}-y_i) \frac{\partial u_i}{\partial y}$ (2.15)
$\displaystyle v_{i+1}= v_i + (x_{i+1}-x_i) \frac{\partial v_i}{\partial x} + (y_{i+1}-y_i) \frac{\partial v_i}{\partial y}$ (2.16)

Because the new estimates $ x_{i+1}$ and $ y_{i+1}$ are taken as the root, $ u_{i+1}$ and $ v_{i+1}$ are set equal to zero, and the above equations can be rearranged to give

$\displaystyle \frac{\partial u_i}{\partial x}x_{i+1} + \frac{\partial u_i}{\partial y}y_{i+1}$ $\displaystyle = -u_i + x_i \frac{\partial u_i}{\partial x} + y_i \frac{\partial u_i}{\partial y}$ (2.17)
$\displaystyle \frac{\partial v_i}{\partial x}x_{i+1} + \frac{\partial v_i}{\partial y}y_{i+1}$ $\displaystyle = -v_i + x_i \frac{\partial v_i}{\partial x} + y_i \frac{\partial v_i}{\partial y}$ (2.18)
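
For compactness, this pair of linear equations can also be written in matrix form; the coefficient matrix is the Jacobian of the system (this matrix view is an added illustration):

\begin{displaymath} \left[ \begin{array}{cc} \frac{\partial u_i}{\partial x} & \frac{\partial u_i}{\partial y} \\ \frac{\partial v_i}{\partial x} & \frac{\partial v_i}{\partial y} \end{array} \right] \left[ \begin{array}{c} x_{i+1} \\ y_{i+1} \end{array} \right] = \left[ \begin{array}{c} -u_i + x_i \frac{\partial u_i}{\partial x} + y_i \frac{\partial u_i}{\partial y} \\ -v_i + x_i \frac{\partial v_i}{\partial x} + y_i \frac{\partial v_i}{\partial y} \end{array} \right] \end{displaymath}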

This is a pair of linear equations in the two unknowns $ x_{i+1}$ and $ y_{i+1}$; solving it, for example by Cramer's rule, gives

$\displaystyle x_{i+1}$ $\displaystyle = x_i - \frac{u_i \frac{\partial v_i}{\partial y} - v_i \frac{\partial u_i}{\partial y}}{\frac{\partial u_i}{\partial x} \frac{\partial v_i}{\partial y} - \frac{\partial u_i}{\partial y} \frac{\partial v_i}{\partial x}}$ (2.19)
$\displaystyle y_{i+1}$ $\displaystyle = y_i - \frac{v_i \frac{\partial u_i}{\partial x} - u_i \frac{\partial v_i}{\partial x}}{\frac{\partial u_i}{\partial x} \frac{\partial v_i}{\partial y} - \frac{\partial u_i}{\partial y} \frac{\partial v_i}{\partial x}}$ (2.20)
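
A minimal sketch of Eqs. (2.19) and (2.20) for a specific system is given below. The system $ u=x^2+xy-10=0$, $ v=y+3xy^2-57=0$ (which has the solution $ x=2$, $ y=3$) and the starting guess $ (1.5, 3.5)$ are illustrative assumptions.

! Minimal sketch of the two-equation Newton-Raphson method,
! Eqs. (2.19)-(2.20), for an illustrative system with solution (2, 3).
program newton_system
  implicit none
  real :: x, y, u, v, ux, uy, vx, vy, det
  integer :: i

  x = 1.5
  y = 3.5
  do i = 1, 10
     u  = x**2 + x*y - 10.0
     v  = y + 3.0*x*y**2 - 57.0
     ux = 2.0*x + y             ! partial u / partial x
     uy = x                     ! partial u / partial y
     vx = 3.0*y**2              ! partial v / partial x
     vy = 1.0 + 6.0*x*y         ! partial v / partial y
     det = ux*vy - uy*vx        ! determinant of the Jacobian
     x = x - (u*vy - v*uy)/det  ! Eq. (2.19)
     y = y - (v*ux - u*vx)/det  ! Eq. (2.20)
  end do
  print *, 'x =', x, '  y =', y ! approaches x = 2, y = 3

end program newton_system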

Roots of Polynomials

The roots of polynomials have the following properties: an nth-order polynomial has exactly n roots, counted with multiplicity, which may be real or complex; if the coefficients are all real, any complex roots occur in conjugate pairs; and if n is odd, there is at least one real root.

Polynomials in Engineering and Science

Polynomials are used extensively in curve fitting. However, another important application is in characterizing dynamic systems and, in particular, linear systems.

As an example, consider the following simple second-order ordinary differential equation:

$\displaystyle a_2 \frac{d^2y}{dt^2} + a_1\frac{dy}{dt} + a_0 y = F(t)$ (2.21)

where $ F(t)$ is the forcing function. The above ODE can be expressed as a system of two first-order ODEs by defining a new variable $ z$,

$\displaystyle z=\frac{dy}{dt}$ (2.22)

This reduces the problem to solving

$\displaystyle \frac{dz}{dt}$ $\displaystyle =\frac{F(t)-a_1z-a_0y}{a_2}$ (2.23)
$\displaystyle \frac{dy}{dt}$ $\displaystyle =z$ (2.24)

In a similar fashion, an nth-order linear ODE can always be expressed as a system of n first-order ODEs.

The general solution deals with the case in which the forcing function is set to zero:

$\displaystyle a_2 \frac{d^2y}{dt^2} + a_1\frac{dy}{dt} + a_0 y = 0$ (2.25)

This equation says something very fundamental about the system being simulated: how the system responds in the absence of external stimuli. The general solution of every unforced linear system is of the form $ y=e^{rt}$. Substituting this form and its derivatives into Eq. (2.25) gives

$\displaystyle a_2 r^2 e^{rt} + a_1 r e^{rt} + a_0 e^{rt} = 0$ (2.26)

or cancelling the exponential terms,

$\displaystyle a_2 r^2 + a_1 r + a_0 = 0$ (2.27)

This polynomial is called the characteristic equation, and its roots $ r$ are referred to as eigenvalues. For the second-order case they follow from the quadratic formula,

$\displaystyle r_1, r_2 = \frac{-a_1 \pm \sqrt{a^2_1 - 4 a_2 a_0}}{2 a_2}$ (2.28)
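
As a small worked illustration (the coefficient values are chosen here for the example and are not from the text), take $ a_2=1$, $ a_1=3$, $ a_0=2$. The characteristic equation is $ r^2+3r+2=0$, so

\begin{displaymath} r_1, r_2 = \frac{-3 \pm \sqrt{9-8}}{2} = -1, -2 \end{displaymath}

and the unforced response is $ y=c_1 e^{-t}+c_2 e^{-2t}$, a sum of decaying exponentials.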

Computing with Polynomials

To evaluate an nth-order polynomial, the straightforward approach requires $ n(n+1)/2$ multiplications and $ n$ additions. If a nested (Horner) format is used instead, only $ n$ multiplications and $ n$ additions are required, as in the following loop:
p = 0.
DO j = n, 0, -1          ! descending loop over coefficients a(n) ... a(0)
   p = p * x + a(j)      ! nested (Horner) evaluation
END DO

To find all the roots of a polynomial, each root that has been located must be removed before the search continues; otherwise the methods tend to converge to the same root again. This removal process is referred to as polynomial deflation and is carried out by synthetic division.
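
The sketch below performs one deflation step by synthetic division. The cubic $ x^3-6x^2+11x-6$ and the previously located root $ r=1$ are illustrative assumptions; the deflated quotient is $ x^2-5x+6$, whose roots are 2 and 3.

! Minimal sketch of polynomial deflation by synthetic division:
! dividing a(n)*x**n + ... + a(0) by the factor (x - r).
program deflation
  implicit none
  integer, parameter :: n = 3
  real :: a(0:n), b(0:n), r
  integer :: j

  a = (/ -6.0, 11.0, -6.0, 1.0 /)   ! coefficients a(0), ..., a(n)
  r = 1.0                           ! previously located root

  b(n) = a(n)
  do j = n-1, 0, -1
     b(j) = a(j) + r*b(j+1)         ! synthetic-division recurrence
  end do
  ! b(1:n) holds the quotient coefficients (low order first),
  ! b(0) is the remainder (zero when r is an exact root).
  print *, 'quotient coefficients:', b(1:n)   ! 6, -5, 1  ->  x**2 - 5x + 6
  print *, 'remainder            :', b(0)

end program deflation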

Conventional Methods

Root Location with Libraries and Packages

Engineering Applications: Roots of Equations

See the textbook
Taechul Lee
2001-11-29