Tuesday, February 16, 2010

Code rewrite!!

Hi again! I was searching the web for other people writing numerical solvers in Haskell and found a nice post by a guy called James. His idea is to extract the tolerance test into a separate function. This function takes an infinite list of approximate x values and returns the first value whose relative error is less than the specified tolerance.

The relative error is calculated as the absolute value of (x(n-1) - x(n)) / x(n-1). I changed his convergence function and corrected a mistake in it: he forgot to take the absolute value of the whole relative-error expression. Here is my code:

module HaskNum where

converge :: (Ord a, Fractional a) => a -> [a] -> a
converge tol (old:new:xs) = if reldiff old new <= tol
                            then new
                            else converge tol (new:xs)
  where reldiff old new = abs $ (old - new) / old
converge tol (x:_) = x  -- to handle when some functions succeed from the first time
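
Just to sanity-check converge on its own (a little example of mine, not from James's post), you can feed it any infinite list of approximations, for instance the Babylonian iteration for the square root of 2:

-- quick check of converge alone (my own example):
-- the iteration x -> (x + 2/x) / 2 converges to sqrt 2
sqrtTwo :: Double
sqrtTwo = converge 0.000001 (iterate (\x -> (x + 2/x) / 2) 1.0)
-- evaluates to roughly 1.4142136, i.e. sqrt 2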

I put it in a module that I call HaskNum, which will grow with time. And here are the new versions of my previous four functions:

type F = Double  -- F is an alias for the working precision (assumed to be Double here)

bisec :: (F -> F) -> (F, F) -> [F]
bisec f (a, b)
    | f(c) == 0      = [c]
    | f(a)*f(c) < 0  = c : bisec f (a, c)
    | otherwise      = c : bisec f (c, b)
  where c = (a + b) / 2

fpi :: (F -> F) -> F -> [F]
fpi = iterate

newton :: (F -> F) -> (F -> F) -> F -> [F]
newton f f' = iterate xc
  where xc xi = xi - f(xi) / f'(xi)

secant :: (F -> F) -> (F, F) -> [F]
secant f (x0, x1) = xc : secant f (x1, xc)
  where xc = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
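
Since all four methods now return lazy lists of iterates, you can also peek at the raw approximations with take before piping them into converge. A small check of my own (not from James's post), using the fixed-point iteration x -> cos x:

-- my own quick check of the lazy lists, using x -> cos x
cosIterates :: [F]
cosIterates = take 4 (fpi cos 1.0)
-- roughly [1.0, 0.540, 0.858, 0.654]

cosFixedPoint :: F
cosFixedPoint = converge 0.000001 (fpi cos 1.0)
-- settles near 0.739085, the fixed point of cos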

Now things look really nice! And here is how you can call one of them with a function that you define:

main = do
    let answer = (converge 0.0000005 . bisec func) (0, 1)
    print answer
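
If you want to run that snippet as-is, func is whatever function you are solving. As a purely assumed example, f(x) = x^3 + x - 1 has a single root in (0,1) close to the values I mention next:

-- an assumed example definition for func (not part of the snippet above):
-- f(x) = x^3 + x - 1 has one root in (0,1), near x = 0.68233
func :: F -> F
func x = x^3 + x - 1

With that definition, the main above prints the bisection answer discussed below.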

An interesting note is that all the methods except the Bisection Method give the answer xc = 0.6823278, while the Bisection Method gives 0.682328, i.e. one digit fewer. That is still a mystery to me, but I would guess it's something to do with computer arithmetic and the intermediate operations.