## 127. Integration.

We have in this chapter seen how we can find the derivative of a given function \(\phi(x)\) in a variety of cases, including all those of the commonest occurrence. It is natural to consider the converse question, that of *determining a function whose derivative is a given function*.

Suppose that \(\psi(x)\) is the given function. Then we wish to determine a function such that \(\phi'(x) = \psi(x)\). A little reflection shows us that this question may really be analysed into three parts.

(1) In the first place we want to know whether such a function as \(\phi(x)\) *actually exists*. This question must be carefully distinguished from the question as to whether (supposing that there is such a function) we can find any simple formula to express it.

(2) We want to know whether it is possible that more than one such function should exist, *i.e.* we want to know whether our problem is one which admits of a *unique* solution or not; and if not, we want to know whether there is any simple relation between the different solutions which will enable us to express all of them in terms of any particular one.

(3) If there is a solution, we want to know *how to find an actual expression for it*.

It will throw light on the nature of these three distinct questions if we compare them with the three corresponding questions which arise with regard to the differentiation of functions.

(1) A function \(\phi(x)\) may have a derivative for all values of \(x\), like \(x^{m}\), where \(m\) is a positive integer, or \(\sin x\). It may generally, but not always, have one, like \(\sqrt[3]{x}\) or \(\tan x\) or \(\sec x\). Or again it may never have one: for example, the function considered in Ex. XXXVII. 20, which is nowhere continuous, has obviously no derivative for any value of \(x\). Of course during this chapter we have confined ourselves to functions which are continuous except for some special values of \(x\). The example of the function \(\sqrt[3]{x}\), however, shows that a continuous function may not have a derivative for some special value of \(x\), in this case \(x = 0\). Whether there are continuous functions which *never* have derivatives, or continuous curves which never have tangents, is a further question which is at present beyond us. Common-sense says *No*: but, as we have already stated in § 111, this is one of the cases in which higher mathematics has proved common-sense to be mistaken.

But at any rate it is clear enough that the question ‘has \(\phi(x)\) a derivative \(\phi'(x)\)?’ is one which has to be answered differently in different circumstances. And we may expect that the converse question ‘is there a function \(\phi(x)\) of which \(\psi(x)\) is the derivative?’ will have different answers too. We have already seen that there are cases in which the answer is *No*: thus if \(\psi(x)\) is the function which is equal to \(a\), \(b\), or \(c\) according as \(x\) is less than, equal to, or greater than \(0\), then the answer is *No* (Ex. XLVII. 3), unless \(a = b = c\).

This is a case in which the given function is discontinuous. In what follows, however, we shall always suppose \(\psi(x)\) continuous. And then the answer is *Yes*: *if \(\psi(x)\) is continuous then there is always a function \(\phi(x)\) such that \(\phi'(x) = \psi(x)\)*. The proof of this will be given in Ch. VII.

(2) The second question presents no difficulties. In the case of differentiation we have a direct definition of the derivative which makes it clear from the beginning that there cannot possibly be more than one. In the case of the converse problem the answer is almost equally simple. It is that if \(\phi(x)\) is one solution of the problem then \(\phi(x) + C\) is another, for any value of the constant \(C\), and that all possible solutions are comprised in the form \(\phi(x) + C\). This follows at once from § 126.
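The substance of the argument from § 126 may be set out in a line or two. If \(\phi_{1}(x)\) and \(\phi_{2}(x)\) are two solutions, write \(\chi(x) = \phi_{1}(x) - \phi_{2}(x)\). Then \[\chi'(x) = \phi_{1}'(x) - \phi_{2}'(x) = \psi(x) - \psi(x) = 0,\] and so, by the Mean Value Theorem, \[\chi(b) - \chi(a) = (b - a)\,\chi'(\xi) = 0\] for every pair of values \(a\), \(b\). That is to say \(\chi(x)\) is a constant \(C\), and \(\phi_{1}(x) = \phi_{2}(x) + C\).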

(3) The practical problem of actually finding \(\phi'(x)\) is a fairly simple one in the case of any function defined by some finite combination of the ordinary functional symbols. The converse problem is much more difficult. The nature of the difficulties will appear more clearly later on.

**Definitions.** If \(\psi(x)\) is the derivative of \(\phi(x)\), then we call \(\phi(x)\) an **integral** or **integral function** of \(\psi(x)\). The operation of forming \(\phi(x)\) from \(\psi(x)\) we call **integration**. We shall use the notation \[\phi(x) = \int \psi(x)\, dx.\] It is hardly necessary to point out that \(\int\dots dx\) like \(d/dx\) must, at present at any rate, be regarded purely as a symbol of operation: the \(\int\) and the \(dx\) no more mean anything when taken by themselves than do the \(d\) and \(dx\) of the other operative symbol \(d/dx\).

## 128. The practical problem of integration.

The results of the earlier part of this chapter enable us to write down at once the integrals of some of the commonest functions. Thus \[\begin{equation*} \int x^{m}\, dx = \frac{x^{m+1}}{m + 1},\quad \int \cos x\, dx = \sin x,\quad \int \sin x\, dx = -\cos x. \tag{1} \end{equation*}\]

These formulae must be understood as meaning that the function on the right-hand side is *one* integral of that under the sign of integration. The *most general* integral is of course obtained by adding to the former a constant \(C\), known as the **arbitrary constant** of integration.
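The formulae (1) may be verified numerically, each integral being an assertion about a derivative. The following sketch (not part of the text; the step size and test points are arbitrary choices) approximates the derivative of each right-hand side by a central difference and compares it with the function under the sign of integration.

```python
import math

def dcentral(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

m = 3      # any positive integer will do
x = 1.7    # arbitrary test point

# d/dx [x^{m+1}/(m+1)] should reproduce x^m
assert abs(dcentral(lambda t: t**(m + 1) / (m + 1), x) - x**m) < 1e-6

# d/dx [sin x] should reproduce cos x
assert abs(dcentral(math.sin, x) - math.cos(x)) < 1e-6

# d/dx [-cos x] should reproduce sin x
assert abs(dcentral(lambda t: -math.cos(t), x) - math.sin(x)) < 1e-6
```

Such a check of course proves nothing; it merely illustrates that integration is, by definition, the inverse of differentiation.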

There is however one case of exception to the first formula, that in which \(m = -1\). In this case the formula becomes meaningless, as is only to be expected, since we have seen already (Ex. XLII. 4) that \(1/x\) cannot be the derivative of any polynomial or rational fraction.

That there really is a function \(F(x)\) such that \(D_{x}F(x) = 1/x\) will be proved in the next chapter. For the present we shall be content to assume its existence. This function \(F(x)\) is certainly not a polynomial or rational function; and it can be proved that it is not an algebraical function. It can indeed be proved that \(F(x)\) is an essentially new function, independent of any of the classes of functions which we have considered yet, that is to say incapable of expression by means of any finite combination of the functional symbols corresponding to them. The proof of this is unfortunately too detailed and tedious to be inserted in this book; but some further discussion of the subject will be found in Ch. IX, where the properties of \(F(x)\) are investigated systematically.

Suppose first that \(x\) is positive. Then we shall write \[\begin{equation*} \int \frac{dx}{x} = \log x, \tag{2} \end{equation*}\] and we shall call the function on the right-hand side of this equation the **logarithmic function**: it is defined so far only for positive values of \(x\).

Next suppose \(x\) negative. Then \(-x\) is positive, and so \(\log(-x)\) is defined by what precedes. Also \[\frac{d}{dx} \log(-x) = \frac{-1}{-x} = \frac{1}{x},\] so that, when \(x\) is negative, \[\begin{equation*} \int \frac{dx}{x} = \log(-x). \tag{3} \end{equation*}\]

The formulae (2) and (3) may be united in the formulae \[\begin{equation*} \int \frac{dx}{x} = \log(\pm x) = \log|x|, \tag{4} \end{equation*}\] where the ambiguous sign is to be chosen so that \(\pm x\) is positive: these formulae hold for all real values of \(x\) other than \(x = 0\).

The most fundamental of the properties of \(\log x\) which will be proved in Ch. IX are expressed by the equations \[\log 1 = 0,\quad \log (1/x) = -\log x,\quad \log xy = \log x + \log y,\] of which the second is an obvious deduction from the first and third. It is not really necessary, for the purposes of this chapter, to assume the truth of any of these formulae; but they sometimes enable us to write our formulae in a more compact form than would otherwise be possible.
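The deduction in question is immediate: putting \(y = 1/x\) in the third equation and using the first, \[\log x + \log(1/x) = \log\left(x \cdot \frac{1}{x}\right) = \log 1 = 0,\] so that \(\log(1/x) = -\log x\).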

It follows from the last of the formulae that \(\log x^{2}\) is equal to \(2\log x\) if \(x > 0\) and to \(2\log(-x)\) if \(x < 0\), and in either case to \(2\log |x|\). Either of the formulae (4) is therefore equivalent to the formula \[\begin{equation*} \int \frac{dx}{x} = \tfrac{1}{2}\log x^{2}. \tag{5} \end{equation*}\]
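Indeed, since \(x^{2}\) is positive for every \(x \neq 0\), \(\log x^{2}\) is defined by (2), and differentiation by the rule for a function of a function gives \[\frac{d}{dx}\left(\tfrac{1}{2}\log x^{2}\right) = \frac{1}{2} \cdot \frac{2x}{x^{2}} = \frac{1}{x}\] for every value of \(x\) other than \(x = 0\).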

The five formulae (1)-(3) are the five most fundamental *standard forms* of the Integral Calculus. To them should be added two more, viz. \[\begin{equation*} \int \frac{dx}{1 + x^{2}} = \arctan x,\quad \int \frac{dx}{\sqrt{1 - x^{2}}} = \pm\arcsin x. \tag{6} \end{equation*}\]
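These two forms may be checked numerically in the same spirit as before (again a rough sketch, with an arbitrarily chosen test point and step size, and taking the \(+\) sign of the ambiguous pair).

```python
import math

def dcentral(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3  # arbitrary test point in (-1, 1)

# d/dx [arctan x] = 1/(1 + x^2)
assert abs(dcentral(math.atan, x) - 1 / (1 + x**2)) < 1e-6

# d/dx [arcsin x] = 1/sqrt(1 - x^2)   (the + sign of (6))
assert abs(dcentral(math.asin, x) - 1 / math.sqrt(1 - x**2)) < 1e-6
```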
