152. Differentiation of functions of several variables.
So far we have been concerned exclusively with functions of a single variable $x$, but there is nothing to prevent us applying the notion of differentiation to functions of several variables $x$, $y$, ….
Suppose then that $f(x, y)$ is a function of two real variables $x$ and $y$, and that the limits
$$\lim_{h \to 0} \frac{f(x + h, y) - f(x, y)}{h}, \qquad \lim_{k \to 0} \frac{f(x, y + k) - f(x, y)}{k}$$
exist for all values of $x$ and $y$ in question, that is to say that $f(x, y)$ possesses a derivative $\partial f/\partial x$ or $D_x f(x, y)$ with respect to $x$ and a derivative $\partial f/\partial y$ or $D_y f(x, y)$ with respect to $y$. It is usual to call these derivatives the partial differential coefficients of $f$, and to denote them by
$$\frac{\partial f}{\partial x}, \quad \frac{\partial f}{\partial y}$$
or $f_x'(x, y)$, $f_y'(x, y)$, or simply $f_x'$, $f_y'$ or $f_x$, $f_y$. The reader must not suppose, however, that these new notations imply any essential novelty of idea: ‘partial differentiation’ with respect to $x$ is exactly the same process as ordinary differentiation, the only novelty lying in the presence in $f$ of a second variable $y$ independent of $x$.
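Thus, to take a simple instance (the particular function is chosen merely for illustration), if $f(x, y) = x^2 y + y^3$, then, $y$ being treated as a constant, $f_x' = 2xy$; and, $x$ being treated as a constant, $f_y' = x^2 + 3y^2$.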
In what precedes we have supposed $x$ and $y$ to be two real variables entirely independent of one another. If $x$ and $y$ were connected by a relation the state of affairs would be very different. In this case our definition of $f_x'$ would fail entirely, as we could not change $x$ into $x + h$ without at the same time changing $y$. But then $f(x, y)$ would not really be a function of two variables at all. A function of two variables, as we defined it in Ch. II, is essentially a function of two independent variables. If $y$ depends on $x$, $y$ is a function of $x$, say $y = \phi(x)$; and then $f(x, y) = f\{x, \phi(x)\}$ is really a function of the single variable $x$. Of course we may also represent it as a function of the single variable $y$. Or, as is often most convenient, we may regard $x$ and $y$ as functions of a third variable $t$, and then $f(x, y)$, which is of the form $f\{\phi(t), \psi(t)\}$, is a function of the single variable $t$.
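For instance (the function and relation here being chosen merely by way of illustration), if $f(x, y) = x^2 + y^2$ and $y = \phi(x) = 1 - x$, then $f\{x, \phi(x)\} = x^2 + (1 - x)^2 = 2x^2 - 2x + 1$, a function of the single variable $x$, whose derivative $4x - 2$ is quite different from the partial derivative $f_x' = 2x$.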
Examples LX.
1. Prove that if $x = r\cos\theta$, $y = r\sin\theta$, so that $r = \sqrt{x^2 + y^2}$, $\theta = \arctan(y/x)$, then
$$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2 + y^2}}, \quad \frac{\partial r}{\partial y} = \frac{y}{\sqrt{x^2 + y^2}}, \quad \frac{\partial \theta}{\partial x} = -\frac{y}{x^2 + y^2}, \quad \frac{\partial \theta}{\partial y} = \frac{x}{x^2 + y^2},$$
$$\frac{\partial x}{\partial r} = \cos\theta, \quad \frac{\partial y}{\partial r} = \sin\theta, \quad \frac{\partial x}{\partial \theta} = -r\sin\theta, \quad \frac{\partial y}{\partial \theta} = r\cos\theta.$$
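[To verify the first four, for example, we differentiate directly: $\partial r/\partial x = x/\sqrt{x^2 + y^2} = \cos\theta$, and $\partial\theta/\partial x = \dfrac{-y/x^2}{1 + (y/x)^2} = -\dfrac{y}{x^2 + y^2} = -\dfrac{\sin\theta}{r}$; the last four follow at once from differentiating $x = r\cos\theta$, $y = r\sin\theta$ with respect to $r$ and to $\theta$.]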
2. Account for the fact that $\dfrac{\partial r}{\partial x} \neq 1\Big/\dfrac{\partial x}{\partial r}$ and $\dfrac{\partial \theta}{\partial x} \neq 1\Big/\dfrac{\partial x}{\partial \theta}$. [When we were considering a function $y$ of one variable $x$ it followed from the definitions that $dy/dx$ and $dx/dy$ were reciprocals. This is no longer the case when we are dealing with functions of two variables. Let $P$ (Fig. 46) be the point $(x, y)$ or $(r, \theta)$. To find $\partial r/\partial x$ we must increase $x$, say by an increment $MM_1 = \delta x$, while keeping $y$ constant. This brings $P$ to $P_1$. If along $OP_1$ we take $OP' = OP$, the increment of $r$ is $P'P_1 = \delta r$, say; and $\partial r/\partial x = \lim(\delta r/\delta x)$. If on the other hand we want to calculate $\partial x/\partial r$, $x$ and $y$ being now regarded as functions of $r$ and $\theta$, we must increase $r$ by $\Delta r$, say, keeping $\theta$ constant. This brings $P$ to $P_2$, where $PP_2 = \Delta r$: the corresponding increment of $x$ is $MM_1 = \Delta x$, say; and $\partial x/\partial r = \lim(\Delta x/\Delta r)$. Now $\Delta x = \delta x$: but $\Delta r \neq \delta r$. Indeed it is easy to see from the figure that
$$\lim(\delta r/\delta x) = \lim(P'P_1/PP_1) = \cos\theta,$$
but
$$\lim(\Delta r/\Delta x) = \lim(PP_2/MM_1) = \sec\theta,$$
so that
$$\lim(\delta r/\delta x) \neq \lim(\Delta r/\Delta x).$$

The fact is of course that $\partial r/\partial x$ and $\partial x/\partial r$ are not formed upon the same hypothesis as to the variation of $P$.]
3. Prove that if $z = f(ax + by)$ then $b\dfrac{\partial z}{\partial x} = a\dfrac{\partial z}{\partial y}$.
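[A direct verification: writing, say, $u = ax + by$, the rule for differentiating a function of a function gives $\partial z/\partial x = a f'(u)$ and $\partial z/\partial y = b f'(u)$, so that $b\,\partial z/\partial x = ab\,f'(u) = a\,\partial z/\partial y$.]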
4. Find $\dfrac{\partial X}{\partial x}$, $\dfrac{\partial X}{\partial y}$, … when $X = ax + by$, $Y = cx + dy$. Express $x$, $y$ as functions of $X$, $Y$, and find $\dfrac{\partial x}{\partial X}$, $\dfrac{\partial x}{\partial Y}$, ….
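[Here, supposing $ad - bc \neq 0$ so that the equations can be solved, $\partial X/\partial x = a$, $\partial X/\partial y = b$, $\partial Y/\partial x = c$, $\partial Y/\partial y = d$; and $x = (dX - bY)/(ad - bc)$, $y = (aY - cX)/(ad - bc)$, so that $\partial x/\partial X = d/(ad - bc)$, and so on. Observe that $\partial x/\partial X$ is not in general the reciprocal of $\partial X/\partial x$: cf. Ex. 2.]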
5. Find $\dfrac{\partial X}{\partial x}$, … when $X = ax + by + cz$, $Y = a'x + b'y + c'z$, $Z = a''x + b''y + c''z$; express $x$, $y$, $z$ in terms of $X$, $Y$, $Z$, and find $\dfrac{\partial x}{\partial X}$, ….
[There is of course no difficulty in extending the ideas of the last section to functions of any number of variables. But the reader must be careful to impress on his mind that the notion of the partial derivative of a function of several variables is only determinate when all the independent variables are specified. Thus if $u = x + y + z$, $x$, $y$, and $z$ being the independent variables, then $\partial u/\partial x = 1$. But if we regard $u$ as a function of the variables $x$, $x + y = v$, and $x + y + z = w$, so that $u = w$, then $\partial u/\partial x = 0$.]
153. Differentiation of a function of two functions.
There is a theorem concerning the differentiation of a function of one variable, known generally as the Theorem of the Total Differential Coefficient, which is of very great importance and depends on the notions explained in the preceding section regarding functions of two variables. This theorem gives us a rule for differentiating
$$f\{\phi(t), \psi(t)\}$$
with respect to $t$.
Let us suppose, in the first instance, that $f(x, y)$ is a function of the two variables $x$ and $y$, and that $f_x'$, $f_y'$ are continuous functions of both variables (§ 107) for all of their values which come in question. And now let us suppose that the variation of $x$ and $y$ is restricted in that $(x, y)$ lies on a curve
$$x = \phi(t), \qquad y = \psi(t),$$
where $\phi$ and $\psi$ are functions of $t$ with continuous differential coefficients $\phi'(t)$, $\psi'(t)$. Then $f(x, y)$ reduces to a function of the single variable $t$, say $F(t)$. The problem is to determine $F'(t)$.
Suppose that, when $t$ changes to $t + \tau$, $x$ and $y$ change to $x + \xi$ and $y + \eta$. Then by definition
$$F'(t) = \lim_{\tau \to 0} \frac{f\{\phi(t + \tau), \psi(t + \tau)\} - f\{\phi(t), \psi(t)\}}{\tau} = \lim_{\tau \to 0}\left\{\frac{f(x + \xi, y + \eta) - f(x, y + \eta)}{\xi}\,\frac{\xi}{\tau} + \frac{f(x, y + \eta) - f(x, y)}{\eta}\,\frac{\eta}{\tau}\right\}.$$
But, by the Mean Value Theorem,
$$\frac{f(x + \xi, y + \eta) - f(x, y + \eta)}{\xi} = f_x'(x + \theta\xi, y + \eta), \qquad \frac{f(x, y + \eta) - f(x, y)}{\eta} = f_y'(x, y + \theta'\eta),$$
where $\theta$ and $\theta'$ each lie between $0$ and $1$. As $\tau \to 0$, $\xi \to 0$ and $\eta \to 0$, and $\xi/\tau \to \phi'(t)$, $\eta/\tau \to \psi'(t)$: also
$$f_x'(x + \theta\xi, y + \eta) \to f_x'(x, y), \qquad f_y'(x, y + \theta'\eta) \to f_y'(x, y).$$
Hence
$$F'(t) = D_t\, f\{\phi(t), \psi(t)\} = f_x'(x, y)\,\phi'(t) + f_y'(x, y)\,\psi'(t),$$
where we are to put $x = \phi(t)$, $y = \psi(t)$ after carrying out the differentiations with respect to $x$ and $y$. This result may also be expressed in the form
$$\frac{df}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}.$$
Examples LXI.
1. Suppose that $x = \phi(t) = \dfrac{1 - t^2}{1 + t^2}$, $y = \psi(t) = \dfrac{2t}{1 + t^2}$, so that the locus of $(x, y)$ is the circle $x^2 + y^2 = 1$. Then
$$\phi'(t) = \frac{-4t}{(1 + t^2)^2}, \qquad \psi'(t) = \frac{2(1 - t^2)}{(1 + t^2)^2},$$
$$F'(t) = \frac{-4t}{(1 + t^2)^2}\,f_x' + \frac{2(1 - t^2)}{(1 + t^2)^2}\,f_y',$$
where $x$ and $y$ are to be put equal to $\dfrac{1 - t^2}{1 + t^2}$ and $\dfrac{2t}{1 + t^2}$ after carrying out the differentiations.
We can easily verify this formula in particular cases. Suppose, e.g., that $f(x, y) = x^2 + y^2$. Then $f_x' = 2x$, $f_y' = 2y$, and it is easily verified that $F'(t) = 2x\,\phi'(t) + 2y\,\psi'(t) = 0$, which is obviously correct, since $F(t) = 1$.
2. Verify the theorem in the same way when (a) $x = t^m$, $y = 1 - t^m$, $f(x, y) = x + y$; (b) $x = a\cos t$, $y = a\sin t$, $f(x, y) = x^2 + y^2$.
3. One of the most important cases is that in which $t$ is $x$ itself. We then obtain
$$D_x\, f\{x, \psi(x)\} = D_x\, f(x, y) + D_y\, f(x, y)\,\psi'(x),$$
where $y$ is to be replaced by $\psi(x)$ after differentiation.
It was this case which led to the introduction of the notation $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$. For it would seem natural to use the notation $\dfrac{df}{dx}$ for either of the functions $D_x f\{x, \psi(x)\}$ and $D_x f(x, y)$, in one of which $y$ is put equal to $\psi(x)$ before, and in the other after, differentiation. Suppose for example that $y = \psi(x) = 1 - x$ and $f(x, y) = x + y$. Then $D_x f\{x, \psi(x)\} = D_x 1 = 0$, but $D_x f(x, y) = 1$.
The distinction between the two functions is adequately shown by denoting the first by $\dfrac{df}{dx}$ and the second by $\dfrac{\partial f}{\partial x}$, in which case the theorem takes the form
$$\frac{df}{dx} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\frac{dy}{dx};$$
though this notation is also open to objection, in that it is a little misleading to denote the functions $f\{x, \psi(x)\}$ and $f(x, y)$, whose forms as functions of $x$ are quite different from one another, by the same letter $f$ in $\dfrac{df}{dx}$ and $\dfrac{\partial f}{\partial x}$.
4. If the result of eliminating $t$ between $x = \phi(t)$, $y = \psi(t)$ is $f(x, y) = 0$, then
$$\frac{dy}{dx} = -\frac{\partial f}{\partial x}\bigg/\frac{\partial f}{\partial y}.$$
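[Thus if, to take a particular case, $x = a\cos t$, $y = a\sin t$, elimination of $t$ gives $f(x, y) = x^2 + y^2 - a^2 = 0$, and the formula gives $dy/dx = -2x/2y = -x/y$; which agrees with the direct calculation $dy/dx = \psi'(t)/\phi'(t) = a\cos t/(-a\sin t) = -x/y$.]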
5. If $x$ and $y$ are functions of $t$, and $r$ and $\theta$ are the polar coordinates of $(x, y)$, then $r' = \dfrac{xx' + yy'}{r}$, $\theta' = \dfrac{xy' - yx'}{r^2}$, dashes denoting differentiations with respect to $t$.
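[These follow from the theorem of § 153: $r' = \dfrac{\partial r}{\partial x}x' + \dfrac{\partial r}{\partial y}y' = \dfrac{xx' + yy'}{r}$ and $\theta' = \dfrac{\partial \theta}{\partial x}x' + \dfrac{\partial \theta}{\partial y}y' = \dfrac{xy' - yx'}{r^2}$, the values of the partial derivatives being those found in Ex. LX. 1.]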
154. The Mean Value Theorem for functions of two variables.
Many of the results of the last chapter depended upon the Mean Value Theorem, expressed by the equation
$$\phi(x + h) - \phi(x) = h\,\phi'(x + \theta h),$$
or, as it may be written, if $y = \phi(x)$,
$$\delta y = \phi'(x + \theta\,\delta x)\,\delta x.$$
Now suppose that $z = f(x, y)$ is a function of the two independent variables $x$ and $y$, and that $x$ and $y$ receive increments $h$, $k$ or $\delta x$, $\delta y$ respectively: and let us attempt to express the corresponding increment of $z$, viz.
$$\delta z = f(x + h, y + k) - f(x, y),$$
in terms of $h$, $k$ and the derivatives of $z$ with respect to $x$ and $y$.
Let $f(x + ht, y + kt) = F(t)$. Then
$$f(x + h, y + k) - f(x, y) = F(1) - F(0) = F'(\theta),$$
where $0 < \theta < 1$. But, by § 153,
$$F'(t) = D_t\, f(x + ht, y + kt) = h f_x'(x + ht, y + kt) + k f_y'(x + ht, y + kt).$$
Hence finally
$$\delta z = f(x + h, y + k) - f(x, y) = h f_x'(x + \theta h, y + \theta k) + k f_y'(x + \theta h, y + \theta k),$$
which is the formula desired. Since $f_x'$, $f_y'$ are supposed to be continuous functions of $x$ and $y$, we have
$$f_x'(x + \theta h, y + \theta k) = f_x'(x, y) + \epsilon_{h, k}, \qquad f_y'(x + \theta h, y + \theta k) = f_y'(x, y) + \eta_{h, k},$$
where $\epsilon_{h, k}$ and $\eta_{h, k}$ tend to zero as $h$ and $k$ tend to zero. Hence the theorem may be written in the form
$$\delta z = (f_x' + \epsilon)\,\delta x + (f_y' + \eta)\,\delta y, \qquad\qquad (1)$$
where $\epsilon$ and $\eta$ are small when $\delta x$ and $\delta y$ are small.
The result embodied in (1) may be expressed by saying that the equation
$$\delta z = f_x'\,\delta x + f_y'\,\delta y$$
is approximately true; i.e. that the difference between the two sides of the equation is small in comparison with the larger of $|\delta x|$ and $|\delta y|$. We must say ‘the larger of $|\delta x|$ and $|\delta y|$’ because one of them might be small in comparison with the other; we might indeed have $\delta x = 0$ or $\delta y = 0$.
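A simple case (the function being chosen merely by way of illustration) may make the meaning of (1) clearer. If $z = xy$, so that $f_x' = y$ and $f_y' = x$, then
$$\delta z = (x + \delta x)(y + \delta y) - xy = y\,\delta x + x\,\delta y + \delta x\,\delta y = f_x'\,\delta x + (f_y' + \delta x)\,\delta y,$$
which is of the form (1) with $\epsilon = 0$ and $\eta = \delta x$; and the term $\delta x\,\delta y$, by which $\delta z$ differs from $f_x'\,\delta x + f_y'\,\delta y$, is evidently small in comparison with the larger of $|\delta x|$ and $|\delta y|$.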
It should be observed that if any equation of the form
$$\delta z = \lambda\,\delta x + \mu\,\delta y$$
is ‘approximately true’ in this sense, we must have $\lambda = f_x'$, $\mu = f_y'$. For we have
$$\delta z - f_x'\,\delta x - f_y'\,\delta y = \epsilon\,\delta x + \eta\,\delta y, \qquad \delta z - \lambda\,\delta x - \mu\,\delta y = \epsilon'\,\delta x + \eta'\,\delta y,$$
where $\epsilon$, $\eta$, $\epsilon'$, $\eta'$ all tend to zero as $\delta x$ and $\delta y$ tend to zero; and so
$$(\lambda - f_x')\,\delta x + (\mu - f_y')\,\delta y = \rho\,\delta x + \sigma\,\delta y,$$
where $\rho$ and $\sigma$ tend to zero. Hence, if $\zeta$ is any assigned positive number, we can choose $\delta_0$ so that
$$|(\lambda - f_x')\,\delta x + (\mu - f_y')\,\delta y| < \zeta(|\delta x| + |\delta y|)$$
for all values of $\delta x$ and $\delta y$ numerically less than $\delta_0$. Taking $\delta y = 0$ we obtain $|(\lambda - f_x')\,\delta x| < \zeta|\delta x|$, or $|\lambda - f_x'| < \zeta$, and, as $\zeta$ may be as small as we please, this can only be the case if $\lambda = f_x'$. Similarly $\mu = f_y'$.