Friday, January 22, 2010

Newton & Secant Methods

Been a while since I added new methods, but here come two.

The first one is called Newton's method. Basically it "draws" the tangent to the curve at the initial guess. In the next iteration it draws a second tangent at the x value where the first tangent crossed the x-axis, and so on. You need to supply it an initial guess.

The formula is x = x0 - f(x0)/f'(x0).

type F = Float

newton :: (F -> F) -> (F -> F) -> F -> Integer -> [F]
newton _ _ _ 0 = []
newton f f' xi k = xc : newton f f' xc (k - 1)
  where xc = xi - f xi / f' xi

func :: F -> F
func x = x^3+x-1

func' :: F -> F
func' x = 3*x^2+1

main :: IO ()
main = do
  let answer = newton func func' 1 4
  print answer
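In practice you rarely know the right iteration count in advance. A small variant that stops when two successive iterates are closer than a tolerance (the name newtonTol and the step-size stopping criterion are my choices, a sketch rather than part of the method above):

```haskell
type F = Float

-- Newton's method that iterates until two successive
-- approximations differ by less than tol.
newtonTol :: (F -> F) -> (F -> F) -> F -> F -> F
newtonTol f f' x0 tol
  | abs (xc - x0) < tol = xc
  | otherwise           = newtonTol f f' xc tol
  where xc = x0 - f x0 / f' x0
```

For f(x)=x^3+x-1 this gives newtonTol func func' 1 0.000001, a value near 0.6823278. Note that the step size is only a heuristic for the true error.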

The second method is called the Secant method. Instead of using a tangent it uses a secant line, which is a line intersecting two points on the function curve. In each new iteration the secant line goes through the last two x values. This method needs two initial guesses x0, x1. Unlike the Bisection Method it does not keep the true solution bracketed, so convergence is not guaranteed; picking the two guesses close to the true solution (for instance so that f(x0)*f(x1)<0 for a continuous function) helps a lot.

The formula is x = x1 - f(x1)*(x1-x0)/(f(x1)-f(x0)).


type F = Float

secant :: (F -> F) -> F -> F -> Integer -> [F]
secant _ _ _ 0 = []
secant f x0 x1 k = xc : secant f x1 xc (k - 1)
  where xc = x1 - f x1 * (x1 - x0) / (f x1 - f x0)

func :: F -> F
func x = x^3+x-1

main :: IO ()
main = do
  let answer = secant func 0.5 1 4
  print answer
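Since the secant method keeps no interval around the root, a quick sanity check is to plug the final iterate back into f and look at the residual. A self-contained sketch (the residual check is my addition, not part of the method):

```haskell
type F = Float

secant :: (F -> F) -> F -> F -> Integer -> [F]
secant _ _ _ 0 = []
secant f x0 x1 k = xc : secant f x1 xc (k - 1)
  where xc = x1 - f x1 * (x1 - x0) / (f x1 - f x0)

func :: F -> F
func x = x^3 + x - 1

main :: IO ()
main = do
  let xs = secant func 0.5 1 6
  print (last xs)               -- approximate root, near 0.6823278
  print (abs (func (last xs)))  -- residual |f(x)|, should be tiny
```

A small residual does not prove the iterate is close to the root, but a large one is a clear warning sign.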

Saturday, January 9, 2010

Bisection Method vs Fixed-Point Iteration

How much faster is FPI compared to the Bisection Method? To answer that we need to think about accuracy, i.e. the error compared to the true solution. Now I know we don't know the true solution, otherwise we would not need these numeric methods! But we can enclose the true solution inside an interval where we know that it lies (well, for a continuous function on the interval of interest).

Starting with the Bisection Method, we know that each step cuts the interval [a,b] in half! After n steps our interval [an, bn] has length (b-a)/2^n. Choosing the solution to be the midpoint of the n-th interval, the true solution (if it is not the midpoint itself) is at most (b-a)/2^(n+1) away.

A solution is correct within p decimal places if the error |xc-r|<0.5*10^-p.

How many times do we need to cut the interval [0,1] in half to get an approximation with 6 correct digits for the problem f(x)=x^3+x-1 using the Bisection Method?

Solving (b-a)/2^(n+1) < 0.5*10^(-6) gives n > 19.9, which means the Bisection Method needs 20 steps for 6 correct digits. The approximated solution with 6 correct digits is xc=0.682327.
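The same calculation can be done mechanically. A sketch (bisecSteps is my own name; it just solves the inequality above for n):

```haskell
-- Smallest n such that (b-a)/2^(n+1) < 0.5*10^(-p), i.e. the number
-- of bisection steps needed for p correct decimal places.
bisecSteps :: Double -> Double -> Int -> Int
bisecSteps a b p =
  ceiling (logBase 2 ((b - a) / (0.5 * 10 ** fromIntegral (negate p))) - 1)
```

Here bisecSteps 0 1 6 evaluates to 20, matching the hand calculation above.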

Using FPI with g(x)=(1+2x^3)/(1+3x^2) and an initial guess x0=1, we get the same solution after just 4 steps. A major improvement in the number of steps by using FPI with a "nice" g(x)!

Fixed-Point Iteration

Fixed-Point Iteration, FPI for short, is another equation solving method. But! Unlike The Bisection Method you have to exercise more caution.

First we need to rewrite the initial problem from f(x)=0 to g(x)=x and iterate until the solution stabilizes. Here is the code for FPI:

type F = Float

fpi :: (F -> F) -> F -> Integer -> [F]
fpi _ _ 0 = []
fpi g x0 k = xc : fpi g xc (k - 1)
  where xc = g x0

The code is simple and the method returns a list with the intermediate approximated solutions. The best approximation is at the end of the list.

Let us try to solve the same equation as the one presented in The Bisection Method, f(x)=x^3+x-1. Re-writing it we would get the equation: x=1-x^3 thus g(x)=1-x^3 (remember it should be of the form g(x)=x). But FPI will fail and the list returned would look like:

[0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,...]

That is no good! But if we happen to add 2x^3 to both sides of the equation, and rearrange things we get: x = (1+2x^3)/(1+3x^2). So using g(x)=(1+2x^3)/(1+3x^2) we will have a good approximation in just 6 steps!

[0.75,0.68604654,0.68233955,0.68232775,0.6823278,
0.6823278]
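Both runs can be reproduced with fpi from above. Here is a self-contained version with the two choices of g (g1 and g2 are my names for the two attempts):

```haskell
type F = Float

fpi :: (F -> F) -> F -> Integer -> [F]
fpi _ _ 0 = []
fpi g x0 k = xc : fpi g xc (k - 1)
  where xc = g x0

g1, g2 :: F -> F
g1 x = 1 - x^3                    -- first attempt
g2 x = (1 + 2*x^3) / (1 + 3*x^2)  -- second attempt

main :: IO ()
main = do
  print (take 6 (fpi g1 1 20))  -- oscillates: [0.0,1.0,0.0,1.0,0.0,1.0]
  print (fpi g2 1 6)            -- settles near 0.6823278
```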

Why? Well, because there is a theorem that says: Assume g(x) is continuously differentiable, g(r)=r, and |g'(r)|<1. Then FPI converges to the fixed point r for initial guesses sufficiently close to r.

Checking |g'(r)| (r≈0.68) for the first attempt we see that it is larger than 1, but for the second attempt it is not.
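You can check the theorem's condition without differentiating by hand, using a central-difference approximation of g' (numDeriv and the step h are my choices; I use Double here for a cleaner difference quotient):

```haskell
-- Central-difference approximation of the derivative of g at x.
numDeriv :: (Double -> Double) -> Double -> Double
numDeriv g x = (g (x + h) - g (x - h)) / (2 * h)
  where h = 1e-6

main :: IO ()
main = do
  let r = 0.6823278  -- approximate fixed point
  print (abs (numDeriv (\x -> 1 - x^3) r))                    -- about 1.4, > 1
  print (abs (numDeriv (\x -> (1 + 2*x^3) / (1 + 3*x^2)) r))  -- near 0, < 1
```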

FPI can converge to a solution faster than the Bisection Method, but we need to re-write the original equation. Because we don't know the solution r beforehand, we can try re-writing it in different ways until FPI succeeds.

Bisection Method

Hi fellow internet user,

I am a computer engineering student in my last year (not quite true but it's a long story!). I have recently learned Haskell, which I quite fancy at the moment. I was wondering how I can find the motivation to delve into this particular programming language beyond just passing my course assignments. Fortunately or not, I have not passed my Numerical Analysis course. So while waiting for the re-exam in late August I will rewrite some of the numerical methods presented in the course in Haskell instead of as Matlab scripts.

And I start with an easy one, a bracketing method called The Bisection Method. Without further ado here is my code:

type F = Float

bisec :: (F -> F) -> (F, F) -> F -> F
bisec f (a, b) tol
  | (b - a) / 2 < tol = (a + b) / 2
  | f c == 0          = c
  | f a * f c < 0     = bisec f (a, c) tol
  | otherwise         = bisec f (c, b) tol
  where c = (a + b) / 2

func :: F -> F
func x = x^3+x-1

main :: IO ()
main = do
  let answer = bisec func (0, 2) 0.0000005
  print answer
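It can be interesting to see how many halvings a given tolerance costs. Here is a sketch of a variant that also counts them (bisecCount is my own extension of the code above):

```haskell
type F = Float

-- Bisection that also counts how many times the interval is halved.
bisecCount :: (F -> F) -> (F, F) -> F -> (F, Int)
bisecCount f (a0, b0) tol = go (a0, b0) 0
  where
    go (a, b) n
      | (b - a) / 2 < tol = ((a + b) / 2, n)
      | f c == 0          = (c, n)
      | f a * f c < 0     = go (a, c) (n + 1)
      | otherwise         = go (c, b) (n + 1)
      where c = (a + b) / 2
```

With the same arguments as in main, bisecCount (\x -> x^3 + x - 1) (0, 2) 0.0000005 returns the same root together with 21 halvings.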