Brain Dump

Numerical Methods

Tags
math

Refers to several methods for determining whether a curve has a root in the interval [a, b], and for approximating that root numerically.

To do this you must show that there is a change in sign between f(a) and f(b), and that the function is continuous between the points x = a and x = b.

Warning: without continuity, a change of sign no longer guarantees the presence of a root. Consider the hyperbola \( y = \frac{1}{x} \). Between f(-1) and f(1) there is a definite change in sign but no root, because of the discontinuity at x = 0.
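A minimal sketch of this sign-change check in Python, assuming continuity on [a, b] is checked separately (the helper name has_sign_change is illustrative):

def has_sign_change(f, a, b):
    """True when f(a) and f(b) have opposite signs; together with continuity
    on [a, b] this guarantees a root somewhere in the interval."""
    return f(a) * f(b) < 0

# x^2 - 2 is continuous and changes sign on [1, 2], so it has a root there.
print(has_sign_change(lambda x: x**2 - 2, 1, 2))   # True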

The Need for Numerical Methods

These approaches show their value when you realise that not all equations have roots that can be expressed exactly. Recall that for several types of equation there are formulae for the roots (see root-formulae). But there isn't such a formula for the general quintic, meaning there are some equations that cannot be solved algebraically. In such cases numerical methods can still yield a root.

Table 1: Algebraic functions and the sort of terms found in the formulae for their roots. (root-formulae)

Function  | Terms in the Root Formula
----------|--------------------------
Quadratic | square roots, \( \sqrt{\;} \)
Cubic     | cube roots, \( \sqrt[3]{\;} \)
Quartic   | fourth roots, \( \sqrt[4]{\;} \)
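
For instance, the quadratic \( ax^2 + bx + c = 0 \) has a root formula built around a square root:

\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]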

Methods

Each of the methods described below tries to narrow down from a wide range or starting position to an x-value at which f(x) is sufficiently close to 0 (or exactly 0 if possible).

They all have the possibility of diverging and not finding a root, but under some conditions they're guaranteed to converge to one.

Interval Bisection

Is an approach for narrowing down to the root. It's essentially a binary search.

Given that f(a) is positive and f(b) is negative, the algorithm is:

def bisect(f, a, b, tol=1e-9):
    """Interval bisection on [a, b], assuming f(a) > 0 and f(b) < 0."""
    while (b - a) > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm > 0:
            a = m        # root lies in [m, b]
        elif fm < 0:
            b = m        # root lies in [a, m]
        else:            # fm == 0: found the root exactly
            return m
    return (a + b) / 2   # midpoint of the final, sufficiently small interval

Essentially we continually cut off half of the search space, moving closer to the root as we go, until we've found it to the desired accuracy.
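
A minimal usage sketch, assuming the bisect helper above and the example equation \( 2 - x^3 = 0 \) (chosen purely for illustration; its root is \( \sqrt[3]{2} \approx 1.26 \)):

# f(1) = 1 > 0 and f(2) = -6 < 0, so a root lies in [1, 2]
root = bisect(lambda x: 2 - x**3, a=1, b=2)
print(root)  # ~1.2599210498948732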

Linear Interpolation

Is a variant that, instead of taking the midpoint, joins the two ends of the current interval with a straight line and jumps to the point where that line crosses the x-axis. This works by using the ratio between the two similar triangles formed on either side of the line's x-intercept.

\begin{figure}[ht]
  \centering
  \begin{tikzpicture}[scale=2]
    % Definitions
    \def\xa{1cm} \def\a{3cm} \def\xb{4cm}
    \def\yb{1cm} \def\ya{-2cm}

    % Axis and Line
    \draw[very thick,->]  (0,0) -- (5,0) node[right] {$x$};
    \draw (\xa,0) -- ++(0, 4pt) node[above] {$x_1$}
	  (\a,0) -- ++(0, 4pt) node[above] {$\alpha$}
	  (\xb,0) -- ++(0, -4pt) node[below] {$x_2$};
    \draw[thick] (\xa,\ya) -- (\xb,\yb);
    \fill (\xa, \ya) circle (2pt) node[left] {$y_1$}
	  (\xb, \yb) circle (2pt) node[right] {$y_2$};

    % Width Indicators
    \draw[dotted] (\xb, \yb) -- (\xb,0) (\a, 0) -- (\a, \ya);
    \draw[dotted,<->] ({\xa + 2pt}, \ya) -- (\a,\ya) node[below,midway] {$\alpha - x_1$};
    \draw[dotted,<->] (\a,-3pt) -- (\xb, -3pt) node[below,midway] {$x_2 - \alpha$};
  \end{tikzpicture}
  \caption{A demonstration of the ratio between the triangles formed by the
x-intercept of a straight-line.}
  \label{fig:larp-demo}
\end{figure}

In fig:larp-demo the ratio \( \frac{\Delta x}{\Delta y} \) is the same for both triangles. If we equate the two ratios we can get an equation for \( \alpha \) in terms of the points.

\begin{align}
  \frac{\alpha - x_1}{-y_1} &= \frac{x_2 - \alpha}{y_2} \nonumber \\
  (\alpha - x_1) y_2 &= (x_2 - \alpha)(-y_1) \nonumber \\
  \alpha y_2 - x_1 y_2 &= -x_2 y_1 + \alpha y_1 \nonumber \\
  \alpha (y_2 - y_1) &= x_1 y_2 - x_2 y_1 \nonumber \\
  \alpha &= \frac{x_1 y_2 - x_2 y_1}{y_2 - y_1} \label{eq:larp}
\end{align}

Now, for example, if we had \( x_1 = -1, f(x_1) = -4 \) and \( x_2 = 0, f(x_2) = 1 \), we can substitute this into eq:larp to get
\[ \alpha = \frac{(-1)(1) - (0)(-4)}{1 - (-4)} = -\frac{1}{5} \]
Of course this only gets us the exact root when the curve is linear between \( y_1 \) and \( y_2 \); however, we can follow the same process as interval bisection, using linear interpolation to jump to an approximate root and then repeating until we approach the true root to the desired degree of accuracy.
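
A minimal sketch of this repeated-interpolation idea, in the same style as the bisection code above; the name false_position, the tolerance and the iteration cap are illustrative assumptions, and \( f(x_1) \), \( f(x_2) \) are assumed to have opposite signs:

def false_position(f, x1, x2, tol=1e-9, max_iter=100):
    """Repeatedly jump to the x-intercept of the chord through the endpoints."""
    y1, y2 = f(x1), f(x2)
    alpha = x1
    for _ in range(max_iter):
        # eq:larp - x-intercept of the straight line through (x1, y1), (x2, y2)
        alpha = (x1 * y2 - x2 * y1) / (y2 - y1)
        fa = f(alpha)
        if abs(fa) < tol:       # close enough to a root
            break
        if fa * y1 < 0:         # sign change between x1 and alpha
            x2, y2 = alpha, fa
        else:                   # sign change between alpha and x2
            x1, y1 = alpha, fa
    return alpha

# With f(x) = 5x + 1, matching the worked example (f(-1) = -4, f(0) = 1),
# the very first interpolation lands exactly on the root, alpha = -1/5.
root = false_position(lambda x: 5 * x + 1, -1, 0)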

Newton-Raphson Method

This approach uses the gradient (tangent) of the curve at each step to home in on a root of the equation. It works by:

  1. Having an existing x-value, \( x_n \),
  2. Finding the tangent to the curve at that point, and then
  3. Setting \( x_{n+1} \) to the x-intercept of that tangent.
\begin{figure}[ht]
  \centering
  \incfig{20210618042247}
  \caption{Visualisation of the Newton Raphson method, starting with \( x_n \) we
find the root \( x_{n+3} \) in 3 steps.}
  \label{fig:newton-raphson}
\end{figure}

We define the tangent to a curve at a point \( (x_1, y_1) \) as

\begin{align}
  y - y_1 &= m(x - x_1) \nonumber \\
  y - f(x_n) &= f'(x_n)(x - x_n) \label{eq:curve-tangent}
\end{align}

Furthermore, because each subsequent point in the sequence of the Newton-Raphson method is an x-intercept, \( x = x_{n+1} \) when \( y = 0 \), meaning, following from eq:curve-tangent:

\begin{align}
  0 - f(x_n) &= f'(x_n)(x_{n+1} - x_n) \nonumber \\
  -\frac{f(x_n)}{f'(x_n)} &= x_{n+1} - x_n \nonumber \\
  x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_n)} \label{eq:newton-raphson}
\end{align}

eq:newton-raphson defines the algorithm of the Newton-Raphson method. For each iteration we take the current x-position and subtract from it the height of the curve divided by the gradient at that point. This can be visualised in fig:newton-raphson.
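
A minimal sketch of eq:newton-raphson as code, in the same style as the earlier snippets; the name newton_raphson, the tolerance and the requirement that the caller supplies the derivative fprime are illustrative assumptions:

def newton_raphson(f, fprime, x, tol=1e-9, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n) / f'(x_n) until the step becomes tiny."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # height of the curve divided by its gradient
        x -= step                 # eq:newton-raphson
        if abs(step) < tol:       # successive estimates have stopped moving
            break
    return x

# Illustrative use: x^2 - 2 = 0 starting from x = 1 converges towards sqrt(2).
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x=1.0)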

This method is only guaranteed to converge to a root if \( |g'(x)| < 1 \) in some neighbourhood of the root, where \( g(x) = x - \frac{f(x)}{f'(x)} \) is the iteration function and the starting point lies in that neighbourhood.
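
For reference, a short expansion of \( g'(x) \), assuming \( g \) here denotes the Newton-Raphson iteration function:

\begin{align*}
  g(x)  &= x - \frac{f(x)}{f'(x)} \\
  g'(x) &= 1 - \frac{f'(x)^2 - f(x) f''(x)}{f'(x)^2} = \frac{f(x) f''(x)}{f'(x)^2}
\end{align*}

so the condition is satisfied close to a simple root, where \( f(x) \to 0 \) while \( f'(x) \neq 0 \).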