In this post I will show a simple way of finding the inverse of a function on the real line in Haskell.

Let *f* be a continuous (real) function defined on a closed interval [*a*,*b*]. The inverse of *f* is well-defined if and only if *f* is injective (i.e. no value can be assumed at two distinct points, i.e. *f*(*x*)=*f*(*y*) implies *x*=*y*). This is not a property that Haskell can easily check for us, so it is up to us to make sure that *f* is injective. From now on, we assume that *f* is injective.

The problem at hand is this: for each *y* in the range of *f* find *x* such that *f*(*x*)=*y* (the *range* of *f* in our case is simply the interval [*f*(*a*),*f*(*b*)], or [*f*(*b*),*f*(*a*)] if *f*(*b*)<*f*(*a*)). The function which maps *y* to *x* is called the *inverse* of *f*. This problem can be solved by introducing a new function *F*(*x*)=*f*(*x*)-*y* and then finding the zero of *F* for each *y* in the range of *f*. Note that I say *the* zero here, since *f* is assumed to be injective and hence the zero is unique. In general the function *F* need not be linear, so we’ll need a non-linear equation solver to tackle this problem.

To find zeros of a non-linear equation I’ll use the *bisection method*. Given *f* and two points *l*<*r* such that *f*(*l*) and *f*(*r*) have opposite signs, the bisection method halves the interval, picks a subinterval on which *f* changes sign, and repeats. If the function is zero at the midpoint the method stops; otherwise *f* must change sign on at least one of the two subintervals, since *f* is continuous and hence the intermediate value theorem applies.

When implementing the bisection method we will eventually run out of precision and the interval can no longer be halved. Our implementation checks for this condition first, and bisects only if it is possible. (For simplicity, we always bisect to maximum precision in this implementation instead of allowing an arbitrary precision to be specified. This is fine as long as we’re using fixed precision arithmetic and the function *f* does not take too long to evaluate.)
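The running-out-of-precision condition is easy to witness directly: once *l* and *r* are adjacent doubles, the midpoint rounds back to one of the endpoints. A tiny self-contained check (`adjacentMidpoint` is just an illustrative name; the `2 ** (-52)` literal is one unit in the last place at 1.0):

```haskell
-- Once l and r are adjacent doubles, (l + r) / 2 rounds back to one of
-- the endpoints, so the interval cannot be halved any further.
adjacentMidpoint :: Bool
adjacentMidpoint = m == l || m == r
  where
    l = 1.0 :: Double
    r = l + 2 ** (-52)  -- the next double after 1.0
    m = (l + r) / 2
```

This is exactly the situation the first two guards of the implementation below detect.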

```
> bisect' f l r
>   | m <= l        = l
>   | m >= r        = r
>   | f l * f m < 0 = bisect' f l m
>   | f m * f r < 0 = bisect' f m r
>   | otherwise     = m
>   where m = (l + r) / 2
```

Note that *f*(*l*)·*f*(*m*)<0 implies that *f* changes sign on the open interval (*l*,*m*), and *f*(*m*)·*f*(*r*)<0 implies that *f* changes sign on (*m*,*r*).

The function `bisect'` is guaranteed to find a zero if *f* changes sign on (*l*,*r*). If *f* does not change sign it will still return a value even though there may be no zero at all. To ensure our bisection method only returns a zero when one exists, we call this function from a “wrapper” which checks that *f* changes sign. We also make some sanity checks on the endpoints *a* and *b*.

```
> bisect f a b
>   | a > b         = bisect f b a
>   | f a * f b < 0 = bisect' f a b
>   | f a == 0      = a
>   | f b == 0      = b
>   | otherwise     = error "bisect: failed"
```

Normally the return value should be wrapped in a `Maybe`, returning `Nothing` instead of raising an exception when the method fails, but for my purposes I really want the program to halt if there is no solution, hence the use of `error`.
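For completeness, here is a sketch of such a total variant. `bisectMaybe` is a hypothetical name, and its local `go` repeats the same halving loop as `bisect'` so the snippet stands alone:

```haskell
-- A total variant of bisect: returns Nothing instead of calling error
-- when f does not change sign on the interval.
bisectMaybe :: (Double -> Double) -> Double -> Double -> Maybe Double
bisectMaybe f a b
  | a > b         = bisectMaybe f b a
  | f a * f b < 0 = Just (go a b)
  | f a == 0      = Just a
  | f b == 0      = Just b
  | otherwise     = Nothing
  where
    -- same halving loop as bisect'
    go l r
      | m <= l        = l
      | m >= r        = r
      | f l * f m < 0 = go l m
      | f m * f r < 0 = go m r
      | otherwise     = m
      where m = (l + r) / 2
```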

With `bisect` in hand we can now easily find the inverse of a continuous and injective function *f* defined on the closed interval [*a*,*b*]:

```
> inverse f a b y = bisect (\x -> f x - y) a b
```

Let’s use this to find the inverse of a function *f* on [0,1] which cannot be inverted “by hand”. Note that the *f* below *is* injective on this interval, but not on any interval which strictly contains [0,1]. In GHCi:

```
*Main> let f x = x^2 * abs (tan x)
*Main> let g = inverse f 0 1
*Main> g 1
0.8952060453842319
*Main> f it
1.0
*Main> f (g 0.3)
0.30000000000000004
*Main> it - 0.3
5.551115123125783e-17
```

That looks good! (Haskell uses double precision floating point, so the last result is zero up to machine epsilon [approx. 10^{-16} for double precision].) Note that you have to be careful with the inverse *g* since it is only defined on the range of *f*, namely [*f*(0),*f*(1)] in this case. At this point it would probably make sense to write some QuickCheck tests, but I’ll stop now anyway.
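The QuickCheck idea can be sketched as a round-trip property: applying *f* after its inverse should recover the input, up to floating-point error. Here I check the property over a fixed list of sample points from the range of *f* rather than random ones (a real QuickCheck test would generate them); the definitions are repeated from the post so the snippet stands alone:

```haskell
-- Definitions from the post, repeated so this snippet is self-contained.
bisect' :: (Double -> Double) -> Double -> Double -> Double
bisect' f l r
  | m <= l        = l
  | m >= r        = r
  | f l * f m < 0 = bisect' f l m
  | f m * f r < 0 = bisect' f m r
  | otherwise     = m
  where m = (l + r) / 2

bisect :: (Double -> Double) -> Double -> Double -> Double
bisect f a b
  | a > b         = bisect f b a
  | f a * f b < 0 = bisect' f a b
  | f a == 0      = a
  | f b == 0      = b
  | otherwise     = error "bisect: failed"

inverse :: (Double -> Double) -> Double -> Double -> Double -> Double
inverse f a b y = bisect (\x -> f x - y) a b

-- Round-trip property: f (g y) should recover y for y in the range of f.
prop_roundTrip :: Double -> Bool
prop_roundTrip y = abs (f (g y) - y) < 1e-9
  where
    f x = x * x * abs (tan x)
    g = inverse f 0 1

-- Hand-picked points inside [f 0, f 1]; with QuickCheck these would be
-- drawn randomly from that interval instead.
checkAll :: Bool
checkAll = all prop_roundTrip [0.01, 0.1, 0.3, 0.5, 1.0, 1.5]
```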
