Taylor Polynomial Approximation

$$ % Define colors used throughout LaTeX explanation \require{color} \definecolor{error}{RGB}{ 255, 0, 0 } \definecolor{taylor}{RGB}{ 0, 0, 255 } \definecolor{estimate}{RGB}{ 160, 80, 0 } \definecolor{normal}{RGB}{ 0, 0, 0 } % i.e. black \definecolor{builtin}{RGB}{ 0, 180, 0 } $$

Note (Spring 2025)

I have updated my old code for drawing on the HTML canvas. (The old code, used in the TIME III Conference Presentation, is still available at graph.js.) If you have saved the old version of this web page, you can discard it.

Now the code is refactored and split among several files:

The LaTeX is now rendered by the most recent (and externally loaded) version of MathJax. The code we worked on in class is located in the files with self-descriptive names: cos.js, exp.js, tan.js, ln.js.

The general case

Suppose $n$ is a non-negative integer, $U$ is an open interval of the real number line, and $x_0 \in U$. Whenever a function $f$ is defined and $n+1$ times continuously differentiable on $U$, we can write the following identity for any other $x \in U$:

$$ \color{builtin} f( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ i = 0 }^{ n } \frac{ 1 }{ i \ ! } \cdot \left( { \left( \frac{ d }{ d \ t } \right)^{ i } \ {\rule[-25px]{1px}{60px}} }_{ \ t = x_0 } f( t ) \right) \cdot ( x - x_0 )^i \color{normal} + \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = x_0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } f( t ) \right) \cdot ( x - t )^n \ d \ t \color{normal} .$$

The sum is called the $ \color{taylor} \text{Taylor polynomial} $ of $f$ at $x_0$. We view it as the approximation of $ \color{builtin} f( x ) \color{normal} $, so that the integral is the $ \color{error} \text{error term} $ of that approximation.
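For instance, taking $n = 1$ in the identity above recovers the familiar tangent-line approximation, with an integral remainder:

$$ \color{builtin} f( x ) \color{normal} = \color{taylor} f( x_0 ) + f'( x_0 ) \cdot ( x - x_0 ) \color{normal} + \color{error} \displaystyle \int_{ t = x_0 }^{x} f''( t ) \cdot ( x - t ) \ d \ t \color{normal} .$$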

It turns out that in many cases the $ \color{error} \text{error term} $ converges to zero as $n \to \infty$.

In those cases, the $ \color{taylor} \text{Taylor polynomial} $ can be used for approximating $ \color{builtin} f( x ) \color{normal} $ — perhaps only for the values of $x$ sufficiently close to $x_0$ — with an arbitrary, and guaranteed, precision $\varepsilon > 0$.

$\cos(x)$

When used for $f( x ) = \cos( x )$ and $x_0 = 0$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into

$$ \color{builtin} \cos( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ j = 0 }^{ k } \frac{ ( -1 )^j }{ ( 2j ) \ ! } \cdot x^{2j} \color{normal} + \color{error} \frac{ 1 }{ ( 2k ) \ ! } \displaystyle \int_{ t = 0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ 2k + 1 } \cos( t ) \right) \cdot ( x - t )^{ 2k } \ d \ t \color{normal} $$
for an even $n = 2k$, with the substitution $i = 2j$ (the odd-order derivatives of $\cos$ vanish at $t = 0$, so only the even-degree terms of the sum survive).

The size of the $ \color{error} \text{integral error term} $ can be estimated from above:

$$ \left| \ \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \cos( t ) \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} \ \right| \le \color{estimate} \frac{ \left| x \right|^{ n + 1 } }{ ( n + 1 ) \ ! } \color{normal} .$$

The $ \color{estimate} \text{estimate of the error term} $ (and thus the $ \color{error} \text{error term} $ itself) converges to zero for any real $x$:

$$ \lim_{ n \rightarrow \infty } \color{estimate} \frac{ \left| x \right|^{n + 1} }{ ( n + 1 ) \ ! } \color{normal} = 0 .$$

These facts provide the basis for computing $ \color{builtin} \cos( x ) \color{normal} $ with an arbitrary guaranteed precision* $\varepsilon > 0$, as done in the source code of this HTML page. To see a demonstration of this computation, press the "Show" button below. To start from scratch, press the "Reset" button or refresh the page.

* Our computation uses floating point computer representation of real numbers, and thus suffers from all the usual limitations of floating point arithmetic. This precision can be guaranteed only modulo floating point errors.
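To make the stopping rule concrete, here is a minimal JavaScript sketch of the computation (the name `taylorCos` and the structure are illustrative, not the page's actual cos.js): it adds even-degree terms while tracking the $ \color{estimate} \text{estimate of the error term} $, and stops once that bound falls below $\varepsilon$.

```javascript
// Illustrative sketch (not the page's cos.js): Taylor polynomial of cos at 0.
// Stops when the error estimate |x|^(n+1) / (n+1)! drops below eps.
function taylorCos(x, eps) {
  let sum = 0;
  let term = 1;            // current term (-1)^j * x^(2j) / (2j)!
  let bound = Math.abs(x); // |x|^(n+1) / (n+1)! for the current degree n = 2j
  for (let j = 0; ; j++) {
    sum += term;
    const n = 2 * j;
    if (bound < eps) return sum;
    term *= -x * x / ((n + 1) * (n + 2));  // next even-degree term
    bound *= x * x / ((n + 2) * (n + 3));  // next bound |x|^(n+3) / (n+3)!
  }
}
```

Because the estimate, not the size of the last term, controls the loop, the returned value is within $\varepsilon$ of $\cos( x )$, modulo the floating point caveat above.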

$e^x$

When used for $f( x ) = e^x$ and $x_0 = 0$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into

$$ \color{builtin} e^x \color{normal} = \color{taylor} \displaystyle \sum_{ i = 0 }^{ n } \frac{ 1 }{ i \ ! } \cdot x^{i} \color{normal} + \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 0 }^{x} e^t \cdot ( x - t )^{ n } \ d \ t \color{normal} .$$

The size of the $ \color{error} \text{integral error term} $ can be estimated from above:

$$ \left| \ \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 0 }^{x} e^t \cdot ( x - t )^{ n } \ d \ t \color{normal} \ \right| \le \color{estimate} \frac{ \max \left({\rule[-5px]{0px}{25px}} 1, 3^{ \left\lceil \left| x \right| \right\rceil } \right) \cdot \left| x \right|^{ n + 1 } }{ ( n + 1 ) \ ! } \color{normal} .$$

The $ \color{estimate} \text{estimate of the error term} $ (and thus the $ \color{error} \text{error term} $ itself) converges to zero for any real $x$:

$$ \lim_{ n \rightarrow \infty } \color{estimate} \frac{ \max \left({\rule[-5px]{0px}{25px}} 1, 3^{ \left\lceil \left| x \right| \right\rceil } \right) \cdot \left| x \right|^{n + 1} }{ ( n + 1 ) \ ! } \color{normal} = 0 .$$

These facts provide the basis for computing $ \color{builtin} e^x \color{normal} $ with an arbitrary guaranteed precision* $\varepsilon > 0$, as done in the source code of this HTML page. To see a demonstration of this computation, press the "Show" button below. To start from scratch, press the "Reset" button or refresh the page.

* Our computation uses floating point computer representation of real numbers, and thus suffers from all the usual limitations of floating point arithmetic. This precision can be guaranteed only modulo floating point errors.
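As in the cosine case, the estimate of the error term gives a stopping rule. A minimal JavaScript sketch (illustrative, not the page's actual exp.js):

```javascript
// Illustrative sketch (not the page's exp.js): Taylor polynomial of e^x at 0.
// Stops when max(1, 3^ceil(|x|)) * |x|^(n+1) / (n+1)! drops below eps.
function taylorExp(x, eps) {
  const M = Math.max(1, Math.pow(3, Math.ceil(Math.abs(x))));
  let sum = 0;
  let term = 1;            // current term x^n / n!
  let bound = Math.abs(x); // |x|^(n+1) / (n+1)!
  for (let n = 0; ; n++) {
    sum += term;
    if (M * bound < eps) return sum;
    term *= x / (n + 1);
    bound *= Math.abs(x) / (n + 2);
  }
}
```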

$\tan( x )$

When used for $f( x ) = \tan( x )$ and $x_0 = 0$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into something rather complicated:

$$ \color{builtin} \tan( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ j = 0 }^{ k } \frac{ ( -1 )^{ j + 1 } \ 4^{ j + 1 } \left( 1 - 4^{ j + 1 } \right) \ B_{ 2j + 2 } }{ ( 2 j + 2 ) \ ! } \cdot x^{ 2j + 1 } \color{normal} + \color{error} \frac{ 1 }{ ( 2k + 1 ) \ ! } \displaystyle \int_{ t = 0 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ 2 ( k + 1 ) } \tan( t ) \right) \cdot ( x - t )^{ 2k + 1 } \ d \ t \color{normal} .$$

In the above, the terms $B_i$ are the so-called Bernoulli numbers. Even though it is done in the source code of this page, the computation of the Bernoulli numbers is a bit too far from the main subject at hand — namely the Taylor polynomials — to discuss here in full detail.

The size of the $ \color{error} \text{integral error term} $ can be estimated from above, but — as you can probably guess from the behavior of the Taylor polynomials below — that estimate can be guaranteed to converge to zero only for $x \in \left( - \frac{\pi}{2}, \frac{\pi}{2} \right)$. Having omitted a discussion of Bernoulli numbers, we are likewise leaving off the explicit estimation of the error term.
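Even without the general error analysis, the first few terms are easy to write down. The following JavaScript sketch (illustrative, not the page's actual tan.js) evaluates the degree-7 Taylor polynomial; its coefficients $1, \frac{1}{3}, \frac{2}{15}, \frac{17}{315}$ follow from the Bernoulli numbers $B_2 = \frac{1}{6}$, $B_4 = -\frac{1}{30}$, $B_6 = \frac{1}{42}$, $B_8 = -\frac{1}{30}$.

```javascript
// Illustrative sketch (not the page's tan.js): degree-7 Taylor polynomial
// of tan at 0, with coefficients derived from the Bernoulli numbers
// B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30. No error control here.
function taylorTan7(x) {
  const coefs = [1, 1 / 3, 2 / 15, 17 / 315]; // of x, x^3, x^5, x^7
  let sum = 0;
  let p = x; // current power x^(2j+1)
  for (const c of coefs) {
    sum += c * p;
    p *= x * x;
  }
  return sum;
}
```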

$\ln( x )$

When used for $f( x ) = \ln( x )$ and $x_0 = 1$, the general $ \color{taylor} \text{Taylor polynomial} $ approximation turns into

$$ \color{builtin} \ln( x ) \color{normal} = \color{taylor} \displaystyle \sum_{ i = 1 }^{ n } \frac{ ( -1 )^{ i + 1 } }{ i } \cdot ( x - 1 )^{ i } \color{normal} + \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 1 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \ln( t ) \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} .$$

For any $n = 0, 1, \ldots$ we have: $ \displaystyle \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \ln( t ) \right) = ( -1 )^n \frac{ n \ ! }{ t^{ n + 1 } } $ and thus the $ \color{error} \text{integral error term} $ is: $$ \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 1 }^{x} \left( \left( \frac{ d }{ d \ t } \right)^{ n + 1 } \ln( t ) \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} = \color{error} \frac{ 1 }{ n \ ! } \displaystyle \int_{ t = 1 }^{x} \left( ( -1 )^n \frac{ n \ ! }{ t^{ n + 1 } } \right) \cdot ( x - t )^{ n } \ d \ t \color{normal} = \color{error} \displaystyle \int_{ t = 1 }^{x} ( -1 )^n \frac{ ( x - t )^{ n } }{ t^{ n + 1 } } \ d \ t \color{normal} = \color{error} \displaystyle \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} .$$

The size of the $ \color{error} \text{error term} $ can be estimated from above, but that estimate can be guaranteed to converge to zero only for $x \in \left( 0, 2 \right]$. Indeed, the expression $ \displaystyle 1 - \frac{ x } { t } ,$ considered as a function of $t$, has a hyperbola for its graph (with the vertical asymptote $t = 0$, the horizontal asymptote $y = 1$, and a crossing of the $t$-axis at $t = x$). Thus this function of $t$ is monotonic on the interval between $t = 1$ and $t = x$, reaching its maximum deviation from zero, $ 1 - x $ (negative if $x > 1$ and positive if $x < 1$), at the point $t = 1$.

Therefore we can estimate the size of the error term for any $x > 0$ as follows: $$ \left| \ \color{error} \displaystyle \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} \ \right| \le \color{estimate} \left| \ \displaystyle \int_{ t = 1 }^{x} \displaystyle \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \left| 1 - \frac{ x } { t } \right| \right)^n \ \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \frac{ 1 }{ t } \right) \ d \ t \ \right| \color{normal} = \color{estimate} \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \left| 1 - \frac{ x } { t } \right| \right)^n \cdot \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} \frac{ 1 }{ t } \right) \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} \displaystyle \ d \ t \ \right| \color{normal} = \color{estimate} \left| x - 1 \right|^n \cdot \max\left( 1, \frac{1}{x} \right) \cdot \left| x - 1 \right| \color{normal} = \color{estimate} \left| x - 1 \right|^{ n + 1 } \cdot \max\left( 1, \frac{1}{x} \right) \color{normal} .$$

The last expression $ \color{estimate} E_{n} \color{normal} = \color{estimate} \left| x - 1 \right|^{ n + 1 } \cdot \max\left( 1, \frac{1}{x} \right) \color{normal} $ is directly computable, and for any $x \in \left( 0, 2 \right)$, we have that $$ \lim_{ n \rightarrow \infty} \color{estimate} E_{n} \color{normal} = 0 .$$
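The directly computable estimate $ \color{estimate} E_{n} \color{normal} $ yields a stopping rule just as in the previous sections. A minimal JavaScript sketch (illustrative, not the page's actual ln.js), valid for $x \in \left( 0, 2 \right)$:

```javascript
// Illustrative sketch (not the page's ln.js): Taylor polynomial of ln at x0 = 1,
// valid on (0, 2). Stops when E_n = |x - 1|^(n+1) * max(1, 1/x) drops below eps.
function taylorLn(x, eps) {
  const M = Math.max(1, 1 / x);
  const d = x - 1;
  let sum = 0;
  let pow = d; // current power (x - 1)^i
  for (let i = 1; ; i++) {
    sum += (i % 2 === 1 ? pow : -pow) / i;       // (-1)^(i+1) (x - 1)^i / i
    if (M * Math.abs(pow * d) < eps) return sum; // E_i = |x - 1|^(i+1) * M
    pow *= d;
  }
}
```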

Using a slightly different estimation, we can demonstrate that the error term converges to zero even for $x = 2$. Indeed, for any $x \in [ 1, 2 ]$ and $n = 1, 2, \ldots \ $, we have that $$ \left| \ \color{error} \displaystyle \int_{ t = 1 }^{x} \left( 1 - \frac{ x } { t } \right)^n \ \frac{ 1 }{ t } \ d \ t \color{normal} \ \right| = \left| \ \color{error} \displaystyle \int_{ t = 1 }^{x} \frac{ ( x - t )^{ n } }{ t^{ n + 1 } } \ d \ t \color{normal} \ \right| \le \color{estimate} \left| \ \displaystyle \int_{ t = 1 }^{x} \frac{ \displaystyle \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} | x - t |^{ n } \right) }{ t^{ n + 1 } } \ d \ t \ \right| \color{normal} = \color{estimate} \displaystyle \max_{ t \in [ 1, x ] } \left( \rule[-5px]{0px}{30px} | x - t |^{ n } \right) \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} \frac{ 1 }{ t^{ n + 1 } } \ d \ t \ \right| \color{normal} = \color{estimate} | x - 1 |^{ n } \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} \frac{ 1 }{ t^{ n + 1 } } \ d \ t \ \right| \color{normal} $$ $$ = \color{estimate} | x - 1 |^{ n } \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} t^{ -n - 1 } \ d \ t \ \right| \color{normal} = \color{estimate} | x - 1 |^{ n } \cdot \left| \ \displaystyle \int_{ t = 1 }^{x} d \ \frac{ t^{ -n } }{ -n } \ \right| \color{normal} = \color{estimate} | x - 1 |^{ n } \cdot \left| \rule[-25px]{0px}{70px} \ { \frac{ t^{ -n } }{ -n } \ {\rule[-25px]{1px}{60px}} }_{ \ t = 1 }^{x} \ \right| \color{normal} = \color{estimate} | x - 1 |^{ n } \cdot \frac{ \left| \frac{ 1 }{ x^n } - 1 \ \right| }{ n } \color{normal} \displaystyle \xrightarrow[ n \to +\infty ]{} 0 .$$

The Taylor polynomial of $\ln( x )$ is implemented in the source code of this page.

As the graphs of the above Taylor polynomials suggest, the error term does not converge to zero outside the interval $ x \in ( 0, 2] $. The proof of this fact is left as an exercise for the reader.

This non-convergence can be remedied for $x > 2$ with the help of algebraic properties of logarithms. Indeed, $$ \ln( x ) = \ln\left(\rule[-5px]{0px}{30px} \frac{ x }{ 2^n } \right) - n \cdot \ln\left( \frac{1}{2} \right) ,$$ and for any $x > 0$, we can find an integer $n \ge 0$ that places $ \displaystyle \frac{ x }{ 2^n } $ into the interval of convergence $\left( 0, 2 \right)$, where the Taylor polynomial of the logarithm can be used to compute both $ \displaystyle \ln\left(\rule[-5px]{0px}{30px} \frac{ x }{ 2^n } \right) $ and $ \ln\left( \frac{1}{2} \right) .$
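A JavaScript sketch of this reduction (illustrative; the parameter `lnCore` stands for any routine, such as a Taylor-polynomial one, that is accurate on $\left( 0, 2 \right)$):

```javascript
// Illustrative sketch: range reduction ln(x) = ln(x / 2^n) - n * ln(1/2),
// where lnCore is assumed accurate on the interval (0, 2).
function lnReduced(x, lnCore) {
  let y = x;
  let n = 0;
  while (y >= 2) { // halve until y lands in [1, 2)
    y /= 2;
    n += 1;
  }
  return lnCore(y) - n * lnCore(1 / 2);
}
```

Since the identity is exact, the reduction adds no error beyond that of `lnCore` itself (and the usual floating point rounding).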