Squeeze theorem
Categories: limits, calculus
The squeeze theorem is a useful way to find a limit in certain situations where direct evaluation fails. In this article, we will use a simple example to explain how the squeeze theorem works, and then go on to prove the theorem.
Example: $x^2 \sin(1/x)$
As a first example, we will use the squeeze theorem to find:

$$\lim_{x \to 0} x^2 \sin(1/x)$$
The function is shown here:
The problem here is that we cannot simply evaluate the limit of sin (1/x) at zero: as x approaches 0, the argument 1/x grows without bound, so the function oscillates infinitely many times.
What can we do? Well, we can observe that the value of sin (1/x) is always in the range [-1, 1] for any value of x. Even though its value oscillates infinitely many times as we move towards zero, it can never go outside that range. In other words:

$$-1 \le \sin(1/x) \le 1$$
This alone doesn't help us find the limit, because although the function is bounded, it is the oscillations that cause the problem. For some function u(x) to have a limit L as x approaches some value a, we require that u(x) gets very close to L whenever x gets very close to a.
But in the case of sin (1/x), as x approaches 0, this condition is not true. If we choose an x value of 0.1 (for example), then sin (1/x) will oscillate infinitely many times as x moves from 0.1 to 0. So we cannot say that sin (1/x) stays close to any single value in this region, because it continually varies between -1 and +1.

The same is true if we start at 0.01, or 0.001, or any other very small value of x. The function never settles towards any single value, so sin (1/x) has no limit as x approaches 0.
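We can see this concretely with a short numerical check (an illustrative sketch, not part of the original argument). Sampling sin (1/x) along the points x = 1/((n + 0.5)π), which approach 0, the value alternates between +1 and -1, so it cannot settle on a limit:

```python
import math

# Sample sin(1/x) along x_n = 1/((n + 0.5) * pi), a sequence approaching 0.
# sin(1/x_n) = sin((n + 0.5) * pi) = +1 or -1, alternating with n,
# so sin(1/x) cannot approach any single value as x -> 0.
for n in range(1, 6):
    x = 1 / ((n + 0.5) * math.pi)
    print(f"x = {x:.6f}  sin(1/x) = {math.sin(1 / x):+.1f}")
```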
But our function f(x) has an extra factor of x squared. If we multiply each part of the inequality above by x squared we get:

$$-x^2 \le x^2 \sin(1/x) \le x^2$$
The central term is now our original function, so we have:

$$-x^2 \le f(x) \le x^2$$
We are allowed to perform this multiplication because x squared is positive for any nonzero $x$ (and x = 0 is excluded anyway, since sin (1/x) is undefined there). If we multiply an inequality by a positive value, the inequality still holds (multiplying by a negative value would reverse the sense of the ≤ signs).
So as x tends to 0, the sin (1/x) factor might oscillate wildly, but it always stays between -1 and 1. And the x squared factor tends to 0. So the value of $f(x)$ must tend to 0, because a bounded factor multiplied by a factor that tends to zero must itself tend to zero. So we have:

$$\lim_{x \to 0} x^2 \sin(1/x) = 0$$
This situation is shown here, where f(x) is shown along with the positive and negative x squared functions:
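As a quick numerical sanity check (a sketch added here, not part of the original article), we can tabulate f(x) together with its bounds ±x² for a few values of x approaching 0:

```python
import math

# Tabulate f(x) = x^2 * sin(1/x) along with its bounds -x^2 and x^2.
# The squeeze inequality holds at every sample, and both bounds shrink to 0,
# forcing f(x) towards 0 as well.
for x in [0.1, 0.01, 0.001, 0.0001]:
    f = x**2 * math.sin(1 / x)
    assert -x**2 <= f <= x**2  # the squeeze inequality
    print(f"x = {x:<8} -x^2 = {-x**2:.2e}  f(x) = {f:+.2e}  x^2 = {x**2:.2e}")
```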
The squeeze theorem
The squeeze theorem is a generalisation of this example. Let f(x), g(x) and h(x) be three functions such that:

$$g(x) \le f(x) \le h(x) \tag{1}$$
Now suppose that g(x) and h(x) both approach the same limit L at some point a:

$$\lim_{x \to a} g(x) = \lim_{x \to a} h(x) = L$$
Under those conditions, the squeeze theorem tells us that:

$$\lim_{x \to a} f(x) = L$$
As an illustration, here is an example of three functions meeting those conditions:
It is easy to see from this why the theorem might be true. g and h both approach the same value L as x approaches a. And since we know that f is always trapped somewhere between g and h, its limit as x approaches a must also be L - how else could it satisfy the inequality?
Finally, it is useful to know that the conditions for the squeeze theorem to apply can be relaxed in a couple of ways.
Firstly, we don't require equation (1) to be true for all values of x. We only need it to be true over some interval in the neighbourhood of a (the point where g and h have the same limit L). So, for example, if f(x) became greater than h(x) for some value of x that is distant from a, we could still apply the squeeze theorem at a.
Secondly, and quite crucially, we can still apply the squeeze theorem even if any of f, g or h are not defined at a. If they are defined at every point that is close to a, but not necessarily at a itself, then the theorem still applies.
This is important in our example. The function sin (1/x) is undefined at 0 and has no limit at 0. But f(x), while still undefined at 0, does have a limit at 0.
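To make the connection with the opening example explicit, the three functions there were:

$$g(x) = -x^2, \qquad f(x) = x^2 \sin(1/x), \qquad h(x) = x^2$$

with a = 0 and L = 0. Inequality (1) holds for every nonzero x, and f is undefined at 0 itself, so both of the relaxations described above are used.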
Proof of squeeze theorem
The formal definition of a limit is:

$$\lim_{x \to a} g(x) = L \iff \forall \varepsilon > 0 \;\exists \delta_1 > 0 : 0 < |x - a| < \delta_1 \implies |g(x) - L| < \varepsilon$$
What this is saying is that g(x) has a limit L as x approaches a if and only if we can make g(x) as close as we like to L (that is, to within some arbitrarily small distance ε) simply by choosing a value of x that is sufficiently close to $a$ (within some sufficiently small distance δ1).
In other words, if δ1 is small enough, g(x) will meet the condition that:

$$L - \varepsilon < g(x) < L + \varepsilon$$
Now if h(x) also has a limit L as x approaches a, a similar condition will also apply:

$$L - \varepsilon < h(x) < L + \varepsilon \quad \text{whenever } 0 < |x - a| < \delta_2$$
Notice that we have used the value δ2 in this case. Since g(x) and h(x) are different functions, we might need different delta values to ensure that each function is within the same distance ε of L.
However, if we choose a value δ that is the smaller of δ1 and δ2, that is δ = min(δ1, δ2), then the two inequalities above will both be true simultaneously:

$$L - \varepsilon < g(x) < L + \varepsilon \tag{2}$$

$$L - \varepsilon < h(x) < L + \varepsilon \tag{3}$$
To avoid repetition, from here on we will assume that the condition 0 < |x - a| < δ is met. The first part of equation (2) gives:

$$L - \varepsilon < g(x)$$
The first part of equation (1) gives:

$$g(x) \le f(x)$$
Combining the two gives:

$$L - \varepsilon < f(x) \tag{4}$$
We can do a similar thing with h(x): the second part of equation (1) gives f(x) ≤ h(x), and the second part of equation (3) gives h(x) < L + ε, so:

$$f(x) < L + \varepsilon \tag{5}$$
Combining equations (4) and (5) gives the following result:

$$L - \varepsilon < f(x) < L + \varepsilon$$
This proves that f(x) also has a limit L as x approaches a.
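As a concrete instance of how this machinery plays out (a worked example added here, not part of the original proof), take the opening example with a = 0 and L = 0. Given any ε > 0 we can choose δ = √ε, because:

$$0 < |x| < \sqrt{\varepsilon} \implies |f(x) - 0| = x^2\,|\sin(1/x)| \le x^2 < \varepsilon$$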