# Calculating a Maclaurin series

Categories: maclaurin series taylor series

A Maclaurin series allows us to calculate the approximate value of a function *f(x)* as a polynomial:

$$f(x) \approx a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots$$

The values *a₀*, *a₁*, *a₂* ... can be calculated in terms of the derivatives of the function at *x = 0*:

$$f(x) \approx f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \dots$$

In this formula:

- *f(0)* is the value of the function for *x = 0*.
- *f'(0)* is the value of the first derivative function for *x = 0*.
- *f''(0)* is the value of the second derivative function for *x = 0*.
- *f'''(0)* is the value of the third derivative function for *x = 0*.
- And so on.

The method can be applied to many common functions, for example the exponential function *eˣ*, the natural logarithm *ln(x)*, the sine and cosine functions, the hyperbolic sine and cosine functions, and many others.

The result is often a polynomial with an infinite number of terms. In many common cases, the terms get smaller very quickly for higher powers of *x*, so it is possible to calculate a value to any required accuracy by calculating the values of a sufficient number of terms.

The Maclaurin series approximates the value of the function at *x = 0*. It also works well for values of *x* that are close to zero. For values of *x* that are far away from zero, it is generally necessary to calculate more terms to obtain the required accuracy.

A Taylor series, which is a generalisation of the Maclaurin series, can be used to calculate accurate values of *f(x)* when *x* has a value other than zero. That will be covered in a later article.
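The behaviour described above can be sketched in a few lines of Python. This is an illustrative example, not part of the original article: it approximates *eˣ* (whose derivatives at zero are all 1) with a truncated Maclaurin series, showing that a few terms suffice near zero but more terms are needed further away:

```python
import math

def maclaurin_exp(x, n_terms):
    """Approximate e^x by summing the first n_terms of its Maclaurin series.

    For f(x) = e^x, every derivative at x = 0 equals 1, so the nth
    coefficient is simply 1/n!.
    """
    return sum(x**n / math.factorial(n) for n in range(n_terms))

# Close to zero, a few terms give good accuracy.
print(maclaurin_exp(0.5, 5), math.exp(0.5))

# Further from zero, 5 terms is a poor fit, but 15 terms recovers accuracy.
print(maclaurin_exp(3.0, 5), math.exp(3.0))
print(maclaurin_exp(3.0, 15), math.exp(3.0))
```

Comparing the 5-term and 15-term results at *x = 3* shows directly why more terms are needed as *x* moves away from zero.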

## How the Maclaurin series works

We are trying to express some function *f(x)* as a polynomial of the form given above. We aim to make this approximation as accurate as possible for values of *x* that are close to 0.

We will start by assuming that it is possible to find such a series.

We can then apply a process to find the values of *a₀*, *a₁*, etc.

Finally, we can verify that the method works for a particular *f(x)* by creating a graph of the polynomial and comparing it to a graph of *f(x)*.

## Step 1 - finding the value of a₀

Given the approximation we defined above:

$$f(x) \approx a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots$$

We can determine the value of *a₀* quite easily. If we set *x* to zero, all the terms in *x* go to zero, so we are left with:

$$f(0) = a_0$$

So the value of *a₀* is simply *f(0)*.

## Step 2 - finding the value of a₁

Looking again at the original formula:

$$f(x) \approx a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \dots$$

How can we find the value of *a₁*? We can do this by differentiating each side. Since *f(x)* and the infinite polynomial are supposed to be the same function, it follows that the first derivatives should also be equal:

$$f'(x) \approx a_1 + 2 a_2 x + 3 a_3 x^2 + \dots$$

We have just used the power rule to differentiate the RHS of the equation:

- The derivative of *a₀* is zero, so the term in *a₀* disappears.
- The derivative of *a₁x* is *a₁*.
- The derivative of *a₂x²* is *2a₂x*.
- The derivative of *a₃x³* is *3a₃x²*.
- And so on.

As before, we set *x* to zero. All the terms in *x* go to zero, so we are left with:

$$f'(0) = a_1$$

This tells us that *a₁* is equal to the first derivative of *f(x)* for *x = 0*.
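We can check these first two results numerically. The snippet below is an illustrative sketch (not from the original article): it uses a central-difference estimate as a stand-in for differentiating analytically, applied to *f(x) = sin(x)*:

```python
import math

def derivative_at_zero(f, h=1e-6):
    """Estimate f'(0) with a central difference, as a numerical stand-in
    for differentiating the function analytically."""
    return (f(h) - f(-h)) / (2 * h)

# For f(x) = sin(x): a0 = f(0) = 0 and a1 = f'(0) = cos(0) = 1,
# matching the first two Maclaurin coefficients of sine.
a0 = math.sin(0)
a1 = derivative_at_zero(math.sin)
print(a0, a1)
```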

## Step 3 - finding the value of a₂

Here is the previous formula, where we differentiated each side:

$$f'(x) \approx a_1 + 2 a_2 x + 3 a_3 x^2 + \dots$$

Now we would like to find the value of *a₂*. We can try differentiating each side again, since it worked quite well last time. The same logic applies - since *f(x)* and the infinite polynomial are the same function, it follows that the second derivatives should also be equal:

$$f''(x) \approx 2 a_2 + (2 \times 3) a_3 x + (3 \times 4) a_4 x^2 + \dots$$

We use the power rule to differentiate the RHS of the equation, but this time we need to take into account the factors from the previous step:

- The derivative of *a₁* is zero, so the term in *a₁* disappears.
- The derivative of *2a₂x* is *2a₂*.
- The derivative of *3a₃x²* is *(2 × 3)a₃x*.
- And so on.

Now we can set *x* to zero again, and once again all the terms in *x* will be zero, so we have:

$$f''(0) = 2 a_2$$

Which gives us this value for *a₂*:

$$a_2 = \frac{f''(0)}{2}$$

## Step 4 - finding the value of a₃

To find the value of *a₃* we just need to repeat the previous procedure.

Taking the previous equation:

$$f''(x) \approx 2 a_2 + (2 \times 3) a_3 x + (3 \times 4) a_4 x^2 + \dots$$

Differentiating once more gives:

$$f'''(x) \approx (2 \times 3) a_3 + (2 \times 3 \times 4) a_4 x + \dots$$

When *x = 0* we have:

$$f'''(0) = (2 \times 3) a_3 = 6 a_3$$

Which gives us this value for *a₃*:

$$a_3 = \frac{f'''(0)}{6}$$
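As a quick sanity check of steps 1 to 4, here is an illustrative sketch (not from the original article) that plugs the coefficients for *f(x) = cos(x)* into the truncated polynomial and compares it to the library cosine:

```python
import math

# Coefficients found by the four steps above, applied to f(x) = cos(x):
# f(0) = 1, f'(0) = 0, f''(0) = -1, f'''(0) = 0.
a0, a1, a2, a3 = 1.0, 0.0, -1.0 / 2, 0.0 / 6

def cubic_approx(x):
    """Maclaurin polynomial of cos(x), truncated after the x^3 term."""
    return a0 + a1 * x + a2 * x**2 + a3 * x**3

# Near zero the cubic tracks cos(x) closely; the error grows with |x|.
for x in (0.1, 0.5, 1.0):
    print(x, cubic_approx(x), math.cos(x))
```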

## General formula for a Maclaurin series

There is a pattern here that we can use to extend the series to any number of terms. The nth coefficient is equal to the nth derivative of *f(x)* at *x = 0*, divided by a factor. We need to know how to calculate the factor.

*a₃* has a divisor of 6 because we differentiated the term 3 times: first as a cube (giving a factor of 3), then as a square (giving an extra factor of 2), then as a term in *x* (giving an extra factor of 1, which has no effect).

The divisor is therefore (3 × 2 × 1), which is 3 factorial.

For *a₄*, there is an extra step of differentiating the power of 4, which gives an extra factor of 4. So the divisor for *a₄* is 4 factorial. For *a₅* the divisor is 5 factorial, and so on. So we can write the general series as:

$$f(x) \approx f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \frac{f''''(0)}{4!} x^4 + \dots$$

This can be written as a sum, using sigma notation:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n$$

There are a couple of things to note about this equation:

- *f⁽ⁿ⁾* means the *n*th derivative. *f⁽⁰⁾* therefore means the *zeroth* derivative, which is simply the function itself, *f*.
- 0 factorial is defined to have a value of 1.
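The sigma-notation formula translates directly into code. This is an illustrative sketch, not part of the original article: given a list of derivative values at *x = 0*, it evaluates the partial sum, using the fact that the derivatives of *sin(x)* cycle through the values 0, 1, 0, −1 at zero:

```python
import math

def maclaurin(derivs_at_zero, x):
    """Evaluate the Maclaurin sum  f(x) ≈ Σ f⁽ⁿ⁾(0)/n! · xⁿ  given a list
    of the derivative values f(0), f'(0), f''(0), ... at x = 0."""
    return sum(d * x**n / math.factorial(n)
               for n, d in enumerate(derivs_at_zero))

# The derivatives of sin(x) cycle through sin, cos, -sin, -cos, so their
# values at x = 0 cycle through 0, 1, 0, -1.
sin_derivs = [(0, 1, 0, -1)[n % 4] for n in range(20)]
print(maclaurin(sin_derivs, 1.0), math.sin(1.0))
```

With 20 terms the partial sum agrees with `math.sin(1.0)` to well beyond ordinary floating-point display precision, because the neglected terms are tiny.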
