Brain Dump

Numerical Methods

Tags
math

Refers to several methods for determining whether a curve has a root in the interval \( [a,b] \), and for locating that root numerically.

To show that a root exists you must show that there is a change of sign between \( f(a) \) and \( f(b) \), and that \( f \) is continuous between \( x=a \) and \( x=b \).

Warning: without continuity, a change of sign no longer guarantees the presence of a root. Consider the hyperbola \( y = \frac{1}{x} \): between \( f(-1) \) and \( f(1) \) there is a definite change of sign, but no root, because the function jumps across zero at the discontinuity at \( x = 0 \).
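As a quick sketch of the sign-change test (the function name `sign_change` and the example curves are illustrative, not from the source):

```python
def sign_change(f, a, b):
    # True when f(a) and f(b) have opposite signs. Together with
    # continuity of f on [a, b], this guarantees a root in the interval.
    return f(a) * f(b) < 0

f = lambda x: x**3 - 2       # continuous everywhere
print(sign_change(f, 1, 2))  # True: a root lies in [1, 2]

g = lambda x: 1 / x          # discontinuous at x = 0
print(sign_change(g, -1, 1)) # True, yet g has no root: continuity fails
```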

The Need for Numerical Methods

These approaches show their value when you realise that not all equations have algebraically solvable roots. Recall that for several types of equation there is a formula for the roots (see Table 1), but there is no such formula for the general quintic. This means some equations cannot be solved algebraically; in such cases numerical methods can still yield a root.

Table 1: List of algebraic functions and the sort of terms found in the formulae for their roots. root-formulae

  Function  | Terms in the Root Formula
  Quadratic | \( a,b,c,+,-,\times,\div,\sqrt{\,} \)
  Cubic     | \( a,b,c,d,+,-,\times,\div,\sqrt{\,},\sqrt[3]{\,} \)
  Quartic   | \( a,b,c,d,e,+,-,\times,\div,\sqrt{\,},\sqrt[3]{\,},\sqrt[4]{\,} \)

Methods

Each of the methods described below tries to narrow down from a wide range or starting position to an \( x \)-value at which \( f(x) \) is sufficiently close to \( 0 \) (or exactly \( 0 \) if possible).

They can all diverge and fail to find a root, but under certain conditions they're guaranteed to converge to one.

Interval Bisection

Is an approach for narrowing down to the root by repeatedly halving the interval. It's essentially a binary search.

Given that \( f(a) \) is positive and \( f(b) \) is negative, and a desired accuracy tol, the algorithm is:

while b - a > tol:
    m = (a + b) / 2
    fm = f(m)
    if fm > 0:
        a = m          # root lies in [m, b]
    elif fm < 0:
        b = m          # root lies in [a, m]
    else:              # fm == 0: m is exactly the root
        a = b = m
root = (a + b) / 2

Essentially we repeatedly cut the search space in half, moving closer to the root as we go, until we've bracketed the root to the desired accuracy.
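The loop above can be wrapped into a reusable function. A minimal sketch, where the function name `bisect`, the tolerance default, and the test curve are assumptions for illustration:

```python
def bisect(f, a, b, tol=1e-9):
    # Assumes f(a) > 0 > f(b), so a root lies somewhere in [a, b].
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm > 0:
            a = m          # root lies in [m, b]
        elif fm < 0:
            b = m          # root lies in [a, m]
        else:              # fm == 0: m is exactly the root
            a = b = m
    return (a + b) / 2

# Root of f(x) = 2 - x^2 in [0, 2]: f(0) = 2 > 0 > f(2) = -2
root = bisect(lambda x: 2 - x**2, 0.0, 2.0)  # approximately sqrt(2)
```

Each pass halves the bracket, so reaching a tolerance of 1e-9 from a unit-width interval takes about 30 iterations.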

Linear Interpolation

Is a variant that uses the straight line joining the endpoints of the current interval to jump to the approximate location of the root. This works by using the ratio of the two similar triangles formed on either side of the line's \( x \)-intercept.

\begin{figure}[ht]
  \centering
  \begin{tikzpicture}[scale=2]
    % Definitions
    \def\xa{1cm} \def\a{3cm} \def\xb{4cm}
    \def\yb{1cm} \def\ya{-2cm}

    % Axis and Line
    \draw[very thick,->]  (0,0) -- (5,0) node[right] {$x$};
    \draw (\xa,0) -- ++(0, 4pt) node[above] {$x_1$}
	  (\a,0) -- ++(0, 4pt) node[above] {$\alpha$}
	  (\xb,0) -- ++(0, -4pt) node[below] {$x_2$};
    \draw[thick] (\xa,\ya) -- (\xb,\yb);
    \fill (\xa, \ya) circle (2pt) node[left] {$y_1$}
	  (\xb, \yb) circle (2pt) node[right] {$y_2$};

    % Width Indicators
    \draw[dotted] (\xb, \yb) -- (\xb,0) (\a, 0) -- (\a, \ya);
    \draw[dotted,<->] ({\xa + 2pt}, \ya) -- (\a,\ya) node[below,midway] {$\alpha - x_1$};
    \draw[dotted,<->] (\a,-3pt) -- (\xb, -3pt) node[below,midway] {$x_2 - \alpha$};
  \end{tikzpicture}
  \caption{A demonstration of the ratio between the triangles formed by the
\( x \)-intercept of a straight line.}
  \label{fig:larp-demo}
\end{figure}

In fig:larp-demo the two dotted triangles are similar, so the ratio \( \frac{\Delta{x}}{\Delta{y}} \) is the same for both. Note that \( y_1 < 0 \), so the height of the lower triangle is \( -y_1 \). If we equate the two ratios we can get an equation for \( \alpha \) in terms of the points.

\begin{align} \frac{\alpha - x_1}{-y_1} &= \frac{x_2 - \alpha}{y_2} \\
(\alpha - x_1) y_2 &= -(x_2 - \alpha) y_1 \\
\alpha y_2 - x_1 y_2 &= \alpha y_1 - x_2 y_1 \\
\alpha (y_2 - y_1) &= x_1 y_2 - x_2 y_1 \\
\alpha &= \frac{x_1 y_2 - x_2 y_1}{y_2 - y_1} \label{eq:larp} \end{align}

Now for example if we had \( x_1 = -1, f(x_1) = -4 \) and \( x_2 = 0, f(x_2) = 1 \), we can substitute this into eq:larp to get \[ \alpha = \frac{(-1) \times 1 - 0 \times (-4)}{1 - (-4)} = -\frac{1}{5} \] Of course this only gets us the exact root when the curve is linear between \( (x_1, y_1) \) and \( (x_2, y_2) \). However, we can repeat the same process as interval bisection: use linear interpolation to jump to an approximate root, keep the sub-interval that still contains a sign change, and repeat until we approach the correct root to the desired degree of accuracy.
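Repeating this jump gives an iterative method. A minimal sketch, assuming \( f(x_1) \) and \( f(x_2) \) have opposite signs (the function name, tolerance, and iteration cap are illustrative assumptions):

```python
def false_position(f, x1, x2, tol=1e-9, max_iter=100):
    # Assumes f(x1) and f(x2) have opposite signs.
    y1, y2 = f(x1), f(x2)
    for _ in range(max_iter):
        # x-intercept of the straight line through (x1, y1) and (x2, y2)
        alpha = (x1 * y2 - x2 * y1) / (y2 - y1)
        fa = f(alpha)
        if abs(fa) < tol:
            return alpha
        if y1 * fa < 0:        # sign change now lies in [x1, alpha]
            x2, y2 = alpha, fa
        else:                  # sign change now lies in [alpha, x2]
            x1, y1 = alpha, fa
    return alpha

# Root of f(x) = x^2 - 2 in [1, 2]
root = false_position(lambda x: x**2 - 2, 1.0, 2.0)  # approximately sqrt(2)
```

Unlike bisection, each jump lands where the chord crosses the axis, so the interval usually shrinks much faster when the curve is close to linear.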

Newton-Raphson Method

This approach is a sort of gradient descent towards the root of an equation. It works by:

  1. Having an existing \( x \)-value, \( x_n \),
  2. Finding the tangent of the curve at that point, and then
  3. Setting \( x_{n+1} \) to the x-intercept of the tangent.
\begin{figure}[ht]
  \centering
  \incfig{20210618042247}
  \caption{Visualisation of the Newton-Raphson method: starting with \( x_n \), we
find the root \( x_{n+3} \) in 3 steps.}
  \label{fig:newton-raphson}
\end{figure}

We define the tangent to a 2D curve at a point \( (x_1, y_1) \) using the point-slope form of a line; at \( (x_n, f(x_n)) \) the gradient is \( f'(x_n) \), giving

\begin{align} y - y_1 &= m (x - x_1) \\
y - f(x_n) &= f'(x_n) (x - x_n) \label{eq:curve-tangent} \end{align}

Furthermore, because each subsequent point in the sequence of the Newton-Raphson method is an \( x \)-intercept, we have \( x = x_{n+1} \) when \( y = 0 \). Substituting into eq:curve-tangent:

\begin{align} 0 - f(x_n) &= f'(x_n) (x_{n+1} - x_n) \\
\frac{-f(x_n)}{f'(x_n)} &= x_{n+1} - x_n \\
x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_{n})} \label{eq:newton-raphson} \end{align}

eq:newton-raphson defines the algorithm of the Newton-Raphson method. For each iteration we take the current \( x \) position and subtract from it the height of the curve divided by the gradient at that point. This can be visualised in fig:newton-raphson.
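eq:newton-raphson translates directly into code. A minimal sketch, where the derivative is passed in explicitly and the function name, tolerance, and iteration cap are assumptions for illustration:

```python
def newton_raphson(f, f_prime, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = f(x) / f_prime(x)  # height divided by gradient at x_n
        x -= step                 # x_{n+1} = x_n - f(x_n) / f'(x_n)
        if abs(step) < tol:       # step size ~ remaining error
            break
    return x

# Root of f(x) = x^2 - 2, starting from x = 1
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)  # approximately sqrt(2)
```

When it converges, each iteration roughly doubles the number of correct digits, which is why it typically needs far fewer steps than bisection.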

Writing the iteration as \( x_{n+1} = g(x_n) \) with \( g(x) = x - \frac{f(x)}{f'(x)} \), this method is only guaranteed to converge to a root if \( \lvert g'(x) \rvert < 1 \) in some neighbourhood of the root containing the starting point.