Review of Taylor’s Formula for Functions of a Single Variable
Let’s review the Taylor series for single variable functions. Suppose $f$ is differentiable at $x = a$; then it has a linear approximation at $a$, and we have:
$$f(x) = P_{1,a}(x) + E_{1,a}(x),$$
with
$$P_{1,a}(x) = f(a) + f'(a)(x - a).$$
Therefore, $P_{1,a}(x)$ is the linear approximation and the error in this approximation is:
$$E_{1,a}(x) = f(x) - P_{1,a}(x) = f(x) - f(a) - f'(a)(x - a).$$
The subscripts $1$ and $a$ in $P_{1,a}$ and $E_{1,a}$ show the maximum power of $x - a$ that appears in the polynomial and the point about which we approximate the function $f$.
Note that as $x \to a$, the error $E_{1,a}(x)$ tends to $0$ faster than $x - a$ because
$$\lim_{x \to a} \frac{E_{1,a}(x)}{x - a} = \lim_{x \to a}\left[\frac{f(x) - f(a)}{x - a} - f'(a)\right] = f'(a) - f'(a) = 0.$$
Mathematically we write $E_{1,a}(x) = o(x - a)$ as $x \to a$.
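To see this behavior numerically, here is a minimal Python sketch comparing $f$ with $P_{1,a}$; the choices $f(x) = e^x$ and $a = 0$ (so $P_{1,0}(x) = 1 + x$) are arbitrary illustrations, not taken from the text above.

```python
import math

# Illustrative check (f and a are arbitrary choices, not from the text):
# for f(x) = exp(x) and a = 0, the linear approximation is P1(x) = 1 + x.
f = math.exp
a = 0.0
P1 = lambda x: 1.0 + x          # f(a) + f'(a)(x - a), with f'(0) = 1

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = a + h
    E1 = f(x) - P1(x)           # error of the linear approximation
    print(f"h = {h:.0e}   E1 = {E1:.3e}   E1/(x - a) = {E1 / h:.3e}")
# E1/(x - a) shrinks roughly like h/2, consistent with E1 = o(x - a).
```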
If we want a better approximation of $f$, instead of a linear function we may use a quadratic function
$$Q(x) = c_0 + c_1(x - a) + c_2(x - a)^2.$$
To determine the coefficients $c_0$, $c_1$, and $c_2$, we match the values of the functions and their first and second derivatives at $x = a$. This means the graph of $Q$ has the same value, the same slope, and the same concavity as the graph of $f$ at $x = a$. The quadratic polynomial reads:
$$P_{2,a}(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2}(x - a)^2,$$
and the error is
$$E_{2,a}(x) = f(x) - P_{2,a}(x).$$
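The coefficient matching described above can be carried out symbolically. The sketch below uses SymPy with the illustrative (not prescribed) choice $f(x) = \sin x$ and a symbolic base point $a$; it recovers $c_0 = f(a)$, $c_1 = f'(a)$, and $c_2 = f''(a)/2$.

```python
import sympy as sp

x, a, c0, c1, c2 = sp.symbols('x a c0 c1 c2')
f = sp.sin(x)                          # arbitrary illustrative choice of f
Q = c0 + c1*(x - a) + c2*(x - a)**2

# Match the value, first derivative, and second derivative of Q and f at x = a.
eqs = [sp.Eq(Q.subs(x, a), f.subs(x, a)),
       sp.Eq(sp.diff(Q, x).subs(x, a), sp.diff(f, x).subs(x, a)),
       sp.Eq(sp.diff(Q, x, 2).subs(x, a), sp.diff(f, x, 2).subs(x, a))]
print(sp.solve(eqs, [c0, c1, c2]))
# {c0: sin(a), c1: cos(a), c2: -sin(a)/2}, i.e. f(a), f'(a), and f''(a)/2
```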
We can still improve the approximation of $f$ at $a$ by using higher order polynomials and matching more derivatives at the selected base point. If we use a polynomial of order $n$, we can prove (see the following theorem) that the error in the approximation goes to zero faster than $(x - a)^n$ as $x \to a$. Mathematically we write $E_{n,a}(x) = o\!\left((x - a)^n\right)$, which means $\dfrac{E_{n,a}(x)}{(x - a)^n} \to 0$ as $x \to a$. In general, we have the following theorem:
Theorem 1. Suppose $f$ is a function that is $n$ times differentiable at $x = a$; that is, $f'(a), f''(a), \ldots, f^{(n)}(a)$ all exist. Let
$$f(x) = P_{n,a}(x) + E_{n,a}(x),$$
where
$$P_{n,a}(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x - a)^k$$
and
$$E_{n,a}(x) = f(x) - P_{n,a}(x).$$
Then $\displaystyle\lim_{x \to a} \frac{E_{n,a}(x)}{(x - a)^n} = 0$.
- Remark that $0! = 1$, $1! = 1$, $f^{(0)} = f$, $f^{(1)} = f'$, and $f^{(2)} = f''$.
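As a concrete check of Theorem 1, one can build $P_{n,a}$ directly from the formula above and verify that the error vanishes faster than $(x - a)^n$. The SymPy sketch below uses the arbitrary choices $f(x) = e^x \cos x$, $a = 0$, and $n = 4$.

```python
import sympy as sp

x = sp.Symbol('x')
f = sp.exp(x)*sp.cos(x)            # arbitrary illustration, not from the text
a, n = 0, 4

# P_{n,a}(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k
P = sum(sp.diff(f, x, k).subs(x, a)/sp.factorial(k) * (x - a)**k
        for k in range(n + 1))
print(sp.expand(P))                # 1 + x - x**3/3 - x**4/6

# The error E_{n,a}(x) = f(x) - P_{n,a}(x) vanishes faster than (x - a)^n:
print(sp.limit((f - P)/(x - a)**n, x, a))   # 0
```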
Proof: Notice that
$$\lim_{x \to a} \frac{E_{n,a}(x)}{(x - a)^n} = \lim_{x \to a} \frac{f(x) - P_{n,a}(x)}{(x - a)^n}$$
is an indeterminate form of type $\frac{0}{0}$.
If we apply l’Hôpital’s rule $n - 1$ times, we obtain:
$$\lim_{x \to a} \frac{E_{n,a}(x)}{(x - a)^n} = \lim_{x \to a} \frac{f^{(n-1)}(x) - f^{(n-1)}(a) - f^{(n)}(a)(x - a)}{n!\,(x - a)} = \frac{1}{n!}\left[\lim_{x \to a} \frac{f^{(n-1)}(x) - f^{(n-1)}(a)}{x - a} - f^{(n)}(a)\right] = 0. \qquad \blacksquare$$
Definitions of the “Taylor polynomial” and “remainder”
$P_{n,a}(x)$ is called the Taylor polynomial of degree $n$ for $f$ at $a$. The error $E_{n,a}(x) = f(x) - P_{n,a}(x)$ is also called the remainder term. You should verify that
$$P_{n,a}(a) = f(a), \quad P_{n,a}'(a) = f'(a), \quad \ldots, \quad P_{n,a}^{(n)}(a) = f^{(n)}(a).$$
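The claim that $P_{n,a}$ matches $f$ and its first $n$ derivatives at $a$ can also be verified symbolically; in the sketch below the function $f(x) = \ln(1 + x)$ and the degree $n = 3$ are arbitrary illustrative choices.

```python
import sympy as sp

x, a = sp.symbols('x a')
f = sp.log(1 + x)                      # arbitrary illustrative choice of f
n = 3
P = sum(sp.diff(f, x, k).subs(x, a)/sp.factorial(k) * (x - a)**k
        for k in range(n + 1))

# P and f should agree, together with their first n derivatives, at x = a.
for k in range(n + 1):
    diff_at_a = sp.diff(P - f, x, k).subs(x, a)
    print(k, sp.simplify(diff_at_a))   # prints 0 for k = 0, 1, 2, 3
```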
To estimate the error of this approximation, we would like to have an expression for the remainder $E_{n,a}(x)$. Various expressions, under stronger regularity assumptions on $f$, exist in the literature. We mention one of them, which is called the Lagrange form of the remainder term, after the great mathematician Joseph-Louis Lagrange.
Theorem 2. If $f^{(n+1)}$ is continuous on an open interval that contains $a$ and $x$, then there exists a number $\xi$ between $a$ and $x$ such that
$$E_{n,a}(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - a)^{n+1}.$$
Proof: For clarity we replace $x$ by $b$ and show that there is some $\xi$ between $a$ and $b$ such that
$$E_{n,a}(b) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(b - a)^{n+1}.$$
We choose $M$ such that
$$f(b) = P_{n,a}(b) + M(b - a)^{n+1} \tag{*}$$
and define
$$g(x) = f(x) - P_{n,a}(x) - M(x - a)^{n+1}.$$
Notice that
$$g(a) = g'(a) = g''(a) = \cdots = g^{(n)}(a) = 0,$$
because we constructed $P_{n,a}$ by matching the first $n$ derivatives of $f$ and $P_{n,a}$ at $a$, and the first $n$ derivatives of $(x - a)^{n+1}$ all vanish at $x = a$. Also note that we chose $M$ such that $g(b) = 0$.
According to Rolle’s theorem, there is $x_1$ between $a$ and $b$ such that $g'(x_1) = 0$.
Again, because $g'(a) = 0$, by Rolle’s theorem there is a number $x_2$ between $a$ and $x_1$ such that $g''(x_2) = 0$. We can repeat this argument, finding $x_3$ between $a$ and $x_2$ such that $g'''(x_3) = 0$. If we keep using this argument, there is eventually a number $x_{n+1}$ between $a$ and $x_n$ such that $g^{(n+1)}(x_{n+1}) = 0$. Let’s evaluate $g^{(n+1)}(x)$: since $P_{n,a}$ is a polynomial of degree $n$, its $(n+1)$st derivative is zero, so
$$g^{(n+1)}(x) = f^{(n+1)}(x) - (n + 1)!\,M.$$
Thus:
$$0 = g^{(n+1)}(x_{n+1}) = f^{(n+1)}(x_{n+1}) - (n + 1)!\,M \quad\Longrightarrow\quad M = \frac{f^{(n+1)}(x_{n+1})}{(n+1)!}.$$
Let’s put $\xi = x_{n+1}$. If we use the definition of $M$ in (*), we have:
$$f(b) = P_{n,a}(b) + \frac{f^{(n+1)}(\xi)}{(n+1)!}(b - a)^{n+1},$$
that is, $E_{n,a}(b) = f(b) - P_{n,a}(b) = \dfrac{f^{(n+1)}(\xi)}{(n+1)!}(b - a)^{n+1}$. $\blacksquare$
Rolle’s theorem says that if $g(a) = g(b)$, and $g$ is differentiable between $a$ and $b$ and continuous on $[a, b]$, then there is at least one number $c$ between $a$ and $b$ such that $g'(c) = 0$. This theorem is very intuitive just by looking at the following figure.
[Figure: graph of a function $g$ with $g(a) = g(b)$; at some point $c$ between $a$ and $b$ the tangent line is horizontal, i.e. $g'(c) = 0$.]
We don’t know anything about $\xi$ except that $\xi$ is between $a$ and $b$.
If we replace $b$ by $x$, we have:
$$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x - a)^n + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - a)^{n+1}$$
for some $\xi$ between $a$ and $x$.
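The Lagrange form is most often used to bound the error. As a numerical illustration (the function $f(x) = \sin x$, the point $x = 0.5$, and the degree $n = 3$ are arbitrary choices, not taken from the text), every derivative of $\sin$ is bounded by $1$, so $|E_{n,0}(x)| \le |x|^{n+1}/(n+1)!$:

```python
import math

# Illustration with chosen numbers: for f(x) = sin(x), every derivative is
# bounded by 1, so the Lagrange form gives |E_n(x)| <= |x - a|**(n+1) / (n+1)!.
a, x, n = 0.0, 0.5, 3
P3 = x - x**3/6                        # Taylor polynomial of degree 3 at a = 0
actual_error = abs(math.sin(x) - P3)
bound = abs(x - a)**(n + 1) / math.factorial(n + 1)
print(actual_error, bound)             # actual error ~2.6e-4, bound ~2.6e-3
```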
Taylor’s Formula for Functions of Several Variables
Now we wish to extend the polynomial expansion to functions of several variables. We learned that if $f(x, y)$ is differentiable at $(a, b)$, we can approximate it with a linear function (or, more accurately, an affine function) $L(x, y) = c_0 + c_1(x - a) + c_2(y - b)$. Matching the value and first partial derivatives, and placing $h = x - a$ and $k = y - b$, result in
$$f(a + h,\, b + k) \approx f(a, b) + f_x(a, b)\,h + f_y(a, b)\,k.$$
For a better approximation we consider
$$Q(x, y) = c_0 + c_1(x - a) + c_2(y - b) + c_3(x - a)^2 + c_4(x - a)(y - b) + c_5(y - b)^2.$$
Matching the zeroth, first, and second partial derivatives results in
$$f(a + h,\, b + k) \approx f(a, b) + f_x\,h + f_y\,k + \frac{1}{2}\left(f_{xx}\,h^2 + 2 f_{xy}\,h k + f_{yy}\,k^2\right),$$
where the partial derivatives are evaluated at $(a, b)$. The above expression can also be written as:
$$f(a + h,\, b + k) \approx f(a, b) + \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right) f(a, b) + \frac{1}{2!}\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^2 f(a, b),$$
where
$$\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^2 f = h^2\frac{\partial^2 f}{\partial x^2} + 2hk\,\frac{\partial^2 f}{\partial x\, \partial y} + k^2\frac{\partial^2 f}{\partial y^2}.$$
Another form of writing this approximation is:
$$f(a + h,\, b + k) \approx f(a, b) + \begin{bmatrix} f_x & f_y \end{bmatrix}\begin{bmatrix} h \\ k \end{bmatrix} + \frac{1}{2}\begin{bmatrix} h & k \end{bmatrix}\begin{bmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{bmatrix}\begin{bmatrix} h \\ k \end{bmatrix},$$
where again the partial derivatives are evaluated at $(a, b)$. The matrix of second partial derivatives in the above expression is called the Hessian matrix and is denoted by $\mathbf{H}$. We will talk about it later in this section.
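The gradient/Hessian form of the quadratic approximation is easy to generate symbolically. In the sketch below, the function $f(x, y) = e^x \sin y$ and the base point $(a, b) = (0, \pi/2)$ are arbitrary illustrative choices, not values from the text.

```python
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = sp.exp(x)*sp.sin(y)                # arbitrary illustrative function
a, b = 0, sp.pi/2                      # arbitrary base point

grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)]).subs({x: a, y: b})
H = sp.hessian(f, (x, y)).subs({x: a, y: b})
hk = sp.Matrix([h, k])

# Quadratic approximation: f(a,b) + grad . (h,k) + (1/2) (h,k)^T H (h,k)
Q = f.subs({x: a, y: b}) + (grad.T*hk)[0, 0] + sp.Rational(1, 2)*(hk.T*H*hk)[0, 0]
print(sp.expand(Q))                    # -> 1 + h + h**2/2 - k**2/2
```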
We can use the results for functions of a single variable to derive formulas for the Taylor polynomial and the remainder of a function of two or more variables. Assume $f(x, y)$ is continuous and has continuous partial derivatives, up (at least) to order $n + 1$, at and near $(a, b)$. Let
$$\phi(t) = f(a + th,\, b + tk),$$
where $a$, $b$, $h$, and $k$ are treated as constants and $t$ is a variable. Then $\phi(0) = f(a, b)$ and $\phi(1) = f(a + h,\, b + k)$. By Taylor’s formula (applied to the single-variable function $\phi$), we have:
$$\phi(1) = \phi(0) + \phi'(0) + \frac{\phi''(0)}{2!} + \cdots + \frac{\phi^{(n)}(0)}{n!} + \frac{\phi^{(n+1)}(c)}{(n+1)!}, \tag{***}$$
where $c$ is a number between $0$ and $1$. Using the chain rule, as we saw before, we have:
$$\phi'(t) = h\,f_x(a + th,\, b + tk) + k\,f_y(a + th,\, b + tk) = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right) f\,(a + th,\, b + tk).$$
We can show that the second derivative is
$$\phi''(t) = h^2 f_{xx} + 2hk\,f_{xy} + k^2 f_{yy} = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^2 f\,(a + th,\, b + tk),$$
the third derivative is
$$\phi'''(t) = h^3 f_{xxx} + 3h^2 k\,f_{xxy} + 3hk^2\,f_{xyy} + k^3 f_{yyy} = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^3 f\,(a + th,\, b + tk),$$
and in general:
$$\phi^{(m)}(t) = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^m f\,(a + th,\, b + tk) = \sum_{j=0}^{m}\binom{m}{j} h^{m-j} k^{j}\, \frac{\partial^m f}{\partial x^{m-j}\,\partial y^{j}}(a + th,\, b + tk). \tag{**}$$
This may be proved by induction, which has two steps: (1) we prove that (**) holds true for $m = 1$; (2) we prove that if (**) is true for some value of $m$, then it is also true for $m + 1$.
Therefore,
$$\phi^{(m)}(0) = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^m f\,(a, b), \qquad m = 0, 1, \ldots, n,$$
and
$$\phi^{(n+1)}(c) = \left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^{n+1} f\,(a + ch,\, b + ck).$$
Substituting in (***), we have
$$f(a + h,\, b + k) = \sum_{m=0}^{n} \frac{1}{m!}\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^m f\,(a, b) + \frac{1}{(n+1)!}\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^{n+1} f\,(a + ch,\, b + ck)$$
for some $c$ between $0$ and $1$. Because this is true for all admissible values of $h$ and $k$, we can plug in $h = x - a$ and $k = y - b$ and find Taylor’s formula for functions of two variables. Here we proved the following theorem for two variables, but the generalization to more variables is easy.
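Formula (**) can be spot-checked with a computer algebra system. The sketch below compares $\phi''(t)$, obtained by differentiating $\phi(t) = f(a + th,\, b + tk)$ directly, with the expanded operator expression; the function $f(x, y) = x^2 \sin y$ is an arbitrary illustrative choice.

```python
import sympy as sp

x, y, a, b, h, k, t = sp.symbols('x y a b h k t')
f = x**2*sp.sin(y)                     # arbitrary illustrative choice of f
phi = f.subs({x: a + t*h, y: b + t*k}, simultaneous=True)

m = 2
lhs = sp.diff(phi, t, m)               # phi''(t) via direct differentiation
# Expanded operator (h d/dx + k d/dy)^m f, then evaluated along the segment.
op = sum(sp.binomial(m, j) * h**(m - j) * k**j * sp.diff(sp.diff(f, x, m - j), y, j)
         for j in range(m + 1))
rhs = op.subs({x: a + t*h, y: b + t*k}, simultaneous=True)
print(sp.simplify(lhs - rhs))          # 0
```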
Theorem 3. Let $f: U \subseteq \mathbb{R}^N \to \mathbb{R}$, where $U$ is an open set. Suppose $f$ has continuous partial derivatives up (at least) to order $n + 1$, and consider $\mathbf{a} = (a_1, \ldots, a_N)$ and $\mathbf{h} = (h_1, \ldots, h_N)$ such that $\mathbf{a} + t\mathbf{h} \in U$ for $0 \le t \le 1$. Then there is a number $c \in (0, 1)$ such that
$$f(\mathbf{a} + \mathbf{h}) = \sum_{m=0}^{n} \frac{1}{m!}\left(h_1\frac{\partial}{\partial x_1} + \cdots + h_N\frac{\partial}{\partial x_N}\right)^m f\,(\mathbf{a}) + \frac{1}{(n+1)!}\left(h_1\frac{\partial}{\partial x_1} + \cdots + h_N\frac{\partial}{\partial x_N}\right)^{n+1} f\,(\mathbf{a} + c\,\mathbf{h}).$$
Another form of writing Taylor’s formula is obtained by substituting $\mathbf{h} = \mathbf{x} - \mathbf{a}$:
$$f(\mathbf{x}) = \sum_{m=0}^{n} \frac{1}{m!}\left(\sum_{i=1}^{N}(x_i - a_i)\frac{\partial}{\partial x_i}\right)^m f\,(\mathbf{a}) + \frac{1}{(n+1)!}\left(\sum_{i=1}^{N}(x_i - a_i)\frac{\partial}{\partial x_i}\right)^{n+1} f\,(\mathbf{z})$$
for some point $\mathbf{z}$ on the line segment joining $\mathbf{a}$ and $\mathbf{x}$.
If we drop the remainder term in the above formula, the polynomial that we obtain is called the polynomial approximation of $f$ of degree $n$ at $\mathbf{a}$.
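In practice, the degree-$n$ polynomial approximation can be generated from a single-variable expansion of $\phi(t) = f(\mathbf{a} + t(\mathbf{x} - \mathbf{a}))$, exactly as in the derivation above. The sketch below does this with SymPy for the arbitrary illustrative choices $f(x, y) = e^{x + y^2}$, $\mathbf{a} = (0, 0)$, and $n = 2$.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.exp(x + y**2)                   # arbitrary illustrative choice of f
n = 2                                  # degree of the polynomial approximation

# phi(t) = f along the segment from (0, 0) to (x, y); its single-variable Taylor
# polynomial in t about 0, evaluated at t = 1, is the degree-n polynomial of f at (0, 0).
phi = f.subs({x: t*x, y: t*y}, simultaneous=True)
P = sp.series(phi, t, 0, n + 1).removeO().subs(t, 1)
print(sp.expand(P))                    # x**2/2 + x + y**2 + 1
```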
Example 1
Given a function $f(x, y)$, find a second degree polynomial approximation to $f$ near a point $(a, b)$ and use it to estimate the value of $f$ at a given nearby point.
Solution
For the polynomial approximation of degree 2, we need to find the first and second partial derivatives of $f$ and evaluate them at the base point $(a, b)$. Substituting these values, together with $h = x - a$ and $k = y - b$, into
$$f(x, y) \approx f(a, b) + f_x(a, b)\,h + f_y(a, b)\,k + \frac{1}{2}\left(f_{xx}(a, b)\,h^2 + 2 f_{xy}(a, b)\,h k + f_{yy}(a, b)\,k^2\right),$$
we obtain the second degree polynomial approximation of $f$ near $(a, b)$. Therefore, evaluating this polynomial at the given nearby point yields the desired estimate.
The exact value is 1.59704. The error of this quadratic approximation is smaller than the error we would make if we used only the linear approximation $f(a, b) + f_x(a, b)\,h + f_y(a, b)\,k$.
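A computation of this kind is easy to automate. The sketch below carries out the same procedure for a hypothetical function $f(x, y) = \sqrt{x}\,e^{y}$ near the base point $(1, 0)$ (illustrative choices, not the data of Example 1) and compares the linear and quadratic estimates with the exact value.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Hypothetical stand-in function; not the function used in Example 1.
f = sp.sqrt(x)*sp.exp(y)
a, b = 1, 0                            # base point for the approximation
x0, y0 = 1.1, 0.2                      # nearby point at which we estimate f

fab = f.subs({x: a, y: b})
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)]).subs({x: a, y: b})
H = sp.hessian(f, (x, y)).subs({x: a, y: b})
d = sp.Matrix([x0 - a, y0 - b])

L = fab + (grad.T*d)[0, 0]                         # linear approximation
Q = L + sp.Rational(1, 2)*(d.T*H*d)[0, 0]          # quadratic approximation
exact = f.subs({x: x0, y: y0})
print([sp.N(v, 6) for v in (exact, L, Q)])
# The quadratic estimate is noticeably closer to the exact value than the linear one.
```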
The quadratic term can be written as
$$\frac{1}{2}\left(f_{xx}\,h^2 + 2 f_{xy}\,h k + f_{yy}\,k^2\right) = \frac{1}{2}\begin{bmatrix} h & k \end{bmatrix}\begin{bmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{bmatrix}\begin{bmatrix} h \\ k \end{bmatrix}.$$
Similar to the case of two variables, for a function of $N$ variables the matrix of the second order derivatives is called the Hessian matrix and is denoted by $\mathbf{H}$. Therefore, we can write the quadratic term as:
$$\frac{1}{2}\,\mathbf{h}^{T}\,\mathbf{H}(\mathbf{a})\,\mathbf{h},$$
where $\mathbf{h}$ is considered an $N \times 1$ column matrix. Using this notation, the linear expansion of $f$ can be written as:
$$f(\mathbf{a} + \mathbf{h}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a}) \cdot \mathbf{h}.$$
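The matrix notation generalizes directly to $N$ variables: the gradient is a column of first partials and the Hessian a square matrix of second partials. The sketch below builds both with SymPy for an arbitrary illustrative function of three variables and forms the linear term $\nabla f \cdot \mathbf{h}$ and the quadratic term $\tfrac{1}{2}\,\mathbf{h}^{T}\mathbf{H}\,\mathbf{h}$.

```python
import sympy as sp

x1, x2, x3, h1, h2, h3 = sp.symbols('x1 x2 x3 h1 h2 h3')
f = x1*x2 + x1*sp.exp(x3)              # arbitrary illustrative function of three variables
X = (x1, x2, x3)
h = sp.Matrix([h1, h2, h3])            # the displacement, as a 3 x 1 column matrix

grad = sp.Matrix([sp.diff(f, v) for v in X])    # column matrix of first partials
H = sp.hessian(f, X)                            # Hessian: matrix of second partials

linear_term = (grad.T*h)[0, 0]                  # grad(f) . h
quadratic_term = sp.Rational(1, 2)*(h.T*H*h)[0, 0]
print(sp.expand(linear_term))
print(sp.expand(quadratic_term))
# The quadratic term expands to (1/2) * sum over i, j of f_{x_i x_j} * h_i * h_j.
```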