
Section 5.4 Convergence Tests for Series

Given the importance of series, there are a number of tests for their convergence. We discuss only a few basic ones in this section. Check out this article for more information.
As we have seen, it is often easier to determine the convergence of a sequence than to find its limit (meaning to recognize a nice expression for it). Likewise, it is often easier to determine the convergence of a series than to find its sum.
The first series of results is based on a simple observation: for a series \(\sum a_n\) of non-negative real numbers, the sequence of partial sums is clearly increasing. Hence it follows from the Monotonic Convergence Theorem 5.16 that such a series converges if and only if its sequence of partial sums is bounded above.
By interpreting the sum of a series with non-negative terms as area, we can compare it with an improper integral.

Proof.

Since \(f\) is decreasing and is positive on \([1,\infty)\text{,}\) for \(k \ge 1\text{,}\)
\begin{equation} \int_{k}^{k+1} f(x) dx \le f(k) \le \int_{k-1}^k f(x) dx\tag{5.1} \end{equation}
Summing over \(k\) from \(2\) through \(n\) yields
\begin{equation} \int_{2}^{n+1} f(x) dx \le \sum_{k=2}^n f(k) \le \int_{1}^{n} f(x) dx. \tag{5.2} \end{equation}
Adding \(f(1)\) to each term above, we get
\begin{gather*} f(1) + \int_{2}^{n+1} f(x) dx \le \sum_{k=1}^n f(k) \le f(1) + \int_{1}^{n} f(x) dx. \\ \int_1^{n+1} f(x) dx \le s_n \le f(1) + \int_1^{n} f(x) dx \end{gather*}
where \(s_n\) is the \(n\)th partial sum of the series \(\sum f(k)\text{.}\) Thus, if the improper integral \(\int_1^\infty f(x) dx\) is convergent, then since \(f\) is positive on \([1,\infty)\text{,}\) \(\int_1^n f(x) dx\) is bounded above by this value for each \(n\text{.}\) By the second inequality above, \(s_n \le f(1) + \int_1^{\infty} f(x) dx\) for each \(n\text{.}\) Therefore, the series \(\sum_{k=1}^\infty f(k)\) is convergent (Proposition 5.35); moreover, its sum cannot exceed \(f(1)+\int_1^{\infty} f(x)dx \text{.}\)
On the other hand, if the improper integral is divergent, then \(\int_1^{n+1} f(x) dx\) diverges to \(+\infty\text{,}\) and so must the series \(\sum_{k=1}^\infty f(k)\) because of the first inequality above.
As a bonus of the proof of the integral test, we get a bound on the error when the sum of a series is approximated by a partial sum. Since
\begin{equation*} \sum_{n=m+1}^k f(n) \le \int_m^k f(x) dx, \end{equation*}
letting \(k \to \infty\text{,}\) we have
\begin{equation*} R_m: = \sum_{n=m+1}^\infty f(n) \le \int_m^\infty f(x) dx \end{equation*}
in case the improper integral \(\int_1^\infty f(x) dx\) converges. The sum \(R_m\) is called the \(m\)th tail (or \(m\)th remainder) of the series \(\sum f(n)\text{.}\) It is the error when estimating a convergent series by its \(m\)th partial sum.
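Although the text contains no code, a quick numerical sketch makes this tail bound concrete. Take \(f(x) = 1/x^2\text{;}\) then \(\int_m^\infty f(x)\,dx = 1/m\text{,}\) so the \(m\)th tail of \(\sum 1/n^2\) should not exceed \(1/m\text{.}\) (The helper name below is ours, not the text's.)

```python
def tail_and_bound(m, N=200000):
    """Approximate the m-th tail of sum(1/n^2) and the integral bound 1/m."""
    # R_m = sum_{n=m+1}^infty 1/n^2, approximated by a long partial sum
    tail = sum(1.0 / (n * n) for n in range(m + 1, N + 1))
    bound = 1.0 / m  # the integral of x^(-2) from m to infinity
    return tail, bound

tail, bound = tail_and_bound(10)
# tail is about 0.095, comfortably below the bound 0.1
```

So stopping at the 10th partial sum of \(\sum 1/n^2\) incurs an error of less than \(0.1\text{.}\)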

Example 5.37.

It follows from the integral test and Proposition 4.3 that the Harmonic series, \(\sum_{n=1}^{\infty} \frac{1}{n}\text{,}\) is divergent and the series \(\sum_{n=1}^{\infty} \frac{1}{n^2}\) is convergent. It sums to \(\frac{\pi^2}{6}\text{,}\) and this realization has an interesting history. See this page for the story and see this video for an interesting explanation.
Taking absolute value of the terms of a series results in a series of non-negative numbers. This leads to the concept of absolute convergence. A series \(\sum a_n\) is absolutely convergent (or converges absolutely) if the series \(\sum |a_n|\) is convergent.
For series of non-negative numbers, there is no difference between convergence and absolute convergence. In general, however, a convergent series need not be absolutely convergent. We say that a series is conditionally convergent if it is convergent but not absolutely convergent.

Example 5.39.

The series \(\sum \frac{(-1)^n}{n^2}\) is absolutely convergent and hence is convergent.
Continuing the theme of comparison, we have the following two results on convergence for series with positive terms.

Proof.

Let \((s_k)\) and \((u_k)\) be the sequences of partial sums of \(\sum x_n\) and \(\sum y_n\) respectively. The condition \(0 \le x_n \le y_n\) for all \(n\) implies \(0 \le s_k \le u_k\) for all \(k\text{.}\) The proposition now follows from Proposition 5.35.

Example 5.41.

The series \(\sum \frac{1}{n^2 + n+1}\) converges. This follows from the comparison test since \(0 \le \frac{1}{n^2+n+1} \le \frac{1}{n^2}\) and \(\sum \frac{1}{n^2}\) is convergent.
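A short numerical sketch of this comparison: every partial sum of \(\sum 1/(n^2+n+1)\) is dominated by the corresponding partial sum of \(\sum 1/n^2\text{,}\) so the increasing partial sums stay bounded.

```python
s = u = 0.0  # partial sums of 1/(n^2+n+1) and 1/n^2 respectively
for n in range(1, 10001):
    s += 1.0 / (n * n + n + 1)
    u += 1.0 / (n * n)
    # the termwise comparison 0 <= 1/(n^2+n+1) <= 1/n^2 persists in the partial sums
    assert s <= u
```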
The following test makes the comparison easier to apply in many cases.
Before giving the proof, let us argue why the limit comparison test is intuitively clear. If the limit \(L\) of \((a_n/b_n)\) is non-zero, that means \(a_n\) is more or less \(L b_n\) and \(b_n\) is more or less \(a_n/L\text{.}\) Thus, the series \(\sum a_n\) and \(\sum b_n\) have the same convergence. If \(L=0\text{,}\) that means eventually \(b_n\) is much bigger than \(a_n\text{,}\) and so \(\sum a_n\) converges if \(\sum b_n\) does.

Proof.

Since \(a_n/b_n \to L \gt 0\text{,}\) for all \(n\) sufficiently large,
\begin{equation*} \frac{L}{2} \lt \frac{a_n}{b_n} \lt \frac{3L}{2}. \end{equation*}
Since \(b_n \gt 0\text{,}\) that means for all \(n\) sufficiently large
\begin{equation*} \frac{L}{2}b_n \lt a_n \lt \frac{3L}{2}b_n. \end{equation*}
So the \(L \gt 0\) case follows from Proposition 5.40 and the fact that the convergence of a series of positive numbers is unaffected by multiplying the series by a positive constant. If \(L=0\text{,}\) then \(0 \lt a_n/b_n \lt 1\) and so \(0 \lt a_n \lt b_n\) for all \(n\) sufficiently large. Thus, the convergence of \(\sum a_n\) follows from the convergence of \(\sum b_n\) by the comparison test as well.

Example 5.43.

The series \(\sum \frac{1}{n^2 -n+1}\) is convergent. This follows from the limit comparison test since its terms are positive,
\begin{equation*} \frac{\frac{1}{n^2}}{\frac{1}{n^2-n+1}} = \frac{n^2-n+1}{n^2} = 1 - \frac{1}{n}+\frac{1}{n^2} \to 1 \end{equation*}
and \(\sum \frac{1}{n^2}\) is convergent.
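Numerically, the ratio in the display above settles near \(1\) quickly (the helper name is ours):

```python
def ratio(n):
    # (1/n^2) / (1/(n^2 - n + 1)) = 1 - 1/n + 1/n^2
    return (n * n - n + 1) / (n * n)

# ratio(10) = 0.91 and ratio(1000) = 0.999001: the limit is 1,
# so the two series converge or diverge together.
```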

Proof.

Let \(r = (1+L)/2\text{.}\) If \(L \lt 1\text{,}\) then \(r \lt 1\text{.}\) Also, eventually,
\begin{equation*} 0 \lt \frac{|a_{n+1}|}{|a_n|} \lt r \lt 1. \end{equation*}
That is \(|a_{n+1}| \lt |a_n|r\text{.}\) Since convergence is the only concern here, we can assume the inequality holds for all \(n \ge 1\text{.}\) Consequently,
\begin{equation*} |a_2| \lt |a_1|r, |a_3| \lt |a_2|r \lt |a_1|r^2, \ldots \end{equation*}
Since \(0 \lt r \lt 1\text{,}\) the geometric series \(|a_1|\sum r^n\) converges and so must \(\sum |a_n|\) by the Comparison Test.
On the other hand, if \(L \gt 1\text{,}\) then so is \(r\text{.}\) Eventually \(|a_{n+1}| \gt r|a_n|\text{,}\) and as before we may assume this holds for all \(n \ge 1\text{,}\) so
\begin{equation*} |a_{n}| \gt |a_1|r^{n-1}. \end{equation*}
Therefore, \((a_n)\) is not a null sequence and so \(\sum a_n\) diverges by the Divergence Test.

Example 5.45.

The series \(\sum a_n = \sum \frac{-7^n}{n!}\) is absolutely convergent by the Ratio Test because
\begin{equation*} \frac{|a_{n+1}|}{|a_n|} = \frac{\frac{7^{n+1}}{(n+1)!}}{\frac{7^n}{n!}} = \frac{7^{n+1}}{(n+1)!}\frac{n!}{7^n}= \frac{7}{n+1} \to 0. \end{equation*}
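A numerical sketch of these ratios (the helper name is ours, not the text's):

```python
import math

def abs_ratio(n):
    # |a_{n+1}| / |a_n| for a_n = -7^n / n!, which simplifies to 7/(n+1)
    num = 7.0 ** (n + 1) / math.factorial(n + 1)
    den = 7.0 ** n / math.factorial(n)
    return num / den

# The ratios fall below 1 once n >= 7 and tend to 0, so the series
# converges absolutely by the ratio test.
```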

Proof.

Let \(r = (1+L)/2\text{.}\) If \(L \lt 1\text{,}\) then so is \(r\text{.}\) By an argument similar to the one in the proof of the Ratio Test, we can assume that for all \(n \ge 1\text{,}\)
\begin{equation*} |a_n|^{1/n} \lt r \end{equation*}
That is \(|a_n| \lt r^n\text{.}\) So we know that \(\sum |a_n|\) converges by comparing it with the geometric series \(\sum r^n\text{.}\)
On the other hand, if \(L \gt 1\text{,}\) then so is \(r\text{.}\) Again, by an argument similar to the one in the proof of the Ratio Test, we can assume
\begin{equation*} |a_n|^{1/n} \gt r \end{equation*}
and hence \(|a_n| > r^n\) for all \(n \ge 1\text{.}\) That means \((a_n)\) is not null and hence \(\sum a_n\) diverges.

Example 5.47.

The series \(\sum \frac{2^n}{n^4 5^{n+1}}\) converges absolutely. To see this, note that the series is \(\frac{1}{5}\sum a_n\) where \(a_n = \frac{2^n}{n^4 5^n}\text{.}\) So the original series and the series \(\sum a_n\) have the same convergence. Note that
\begin{equation*} \left|\frac{2^n}{n^4 5^n}\right|^{1/n} = \frac{2}{n^{4/n}5} \to \frac{2}{5} \lt 1 \end{equation*}
because \(n^{1/n} \to 1\) (Example 5.23). Thus, \(\sum a_n\text{,}\) and hence the original series, converges absolutely according to the root test.
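Numerically, using the simplification \(|a_n|^{1/n} = 2/(n^{4/n}\, 5)\) from the display above to avoid huge powers (the helper name is ours):

```python
def nth_root(n):
    # |a_n|^{1/n} for a_n = 2^n / (n^4 * 5^n), in simplified form
    return 2.0 / (n ** (4.0 / n) * 5.0)

# nth_root(100) is about 0.33 and nth_root(10**6) is about 0.39998:
# the limit is 2/5 < 1, so the root test gives absolute convergence.
```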

Remark 5.48.

The Ratio and the Root Tests are both inconclusive when the corresponding limit \(L\) is \(1\text{.}\) For example, both tests give \(L=1\) for the series \(\sum 1/n\) and \(\sum 1/n^2\text{,}\) yet \(\sum 1/n^2\) converges absolutely while \(\sum 1/n\) diverges.
There are finer versions of the ratio and the root tests involving the limit superior.
The root test is stronger than the ratio test in the sense that whenever the ratio test can decide the convergence of a series, so can the root test, but not the other way around.
A series is alternating if the sign of its terms changes from one term to the next. That means it is a series of the form \(\sum (-1)^{n+1} a_n\) or \(\sum (-1)^n a_n\) where \((a_n)\) is a sequence of positive numbers. Clearly, these two forms have the same convergence behaviour, so when discussing the convergence of an alternating series, we can assume its first term is positive.

Proof.

Let \((s_k)\) be the sequence of partial sums of \(\sum (-1)^{n+1} a_n\text{.}\) Since \((a_n)\) is a strictly decreasing sequence of positive numbers, it follows that
\begin{equation*} s_2 \lt s_4 \lt s_6 \lt \cdots \lt s_5 \lt s_3 \lt s_1 \end{equation*}
Thus, \((s_{2m})\) is a strictly increasing sequence bounded above by \(s_1\) and \((s_{2m-1})\) is a strictly decreasing sequence bounded below by \(s_2\text{.}\) So, both of them converge, say \(s_{2m} \to L_0\) and \(s_{2m-1} \to L_1\text{.}\) As the limit of a strictly decreasing sequence \(L_1 \lt s_{2m-1}\) for all \(m\text{.}\) Likewise, \(L_0 \gt s_{2m}\) for all \(m\text{.}\) Moreover,
\begin{equation*} |L_1-L_0| \lt |s_{2m-1}-s_{2m}| = a_{2m}. \end{equation*}
By the assumption \(a_n \to 0\text{,}\) \(a_{2m} \to 0\) as well. Therefore, \(L_0 = L_1\text{.}\) Denote this common value by \(L\text{.}\) Then \(L\) lies between any two consecutive partial sums, and so for any \(k\text{,}\) \(|L-s_k| \lt |s_k-s_{k+1}| = a_{k+1}\text{.}\)

Example 5.50.

The alternating series
\begin{equation*} \sum \frac{(-1)^{n+1}}{n} = 1-\frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots \end{equation*}
satisfies the criteria in the alternating series test (with \(a_n = 1/n\)) and so is convergent. Since \(\sum |(-1)^{n+1}/n| = \sum 1/n\) is the Harmonic series, \(\sum (-1)^{n+1}/n\) is a series that converges conditionally, i.e., it converges but does not converge absolutely.
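A numerical sketch of this conditional convergence (using the well-known fact, not proved here, that the alternating harmonic series sums to \(\ln 2\)):

```python
import math

s = 0.0  # partial sums of the alternating harmonic series
h = 0.0  # partial sums of the harmonic series (the absolute values)
for k in range(1, 1001):
    s += (-1) ** (k + 1) / k
    h += 1.0 / k
    # error bound from the alternating series test: |ln 2 - s_k| < a_{k+1}
    assert abs(math.log(2) - s) < 1.0 / (k + 1)

# s is near ln 2 = 0.6931..., while h keeps growing (about 7.49 here):
# the series converges, but not absolutely.
```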
Suppose \((a_n)\) is a sequence satisfying the criteria in the alternating series test. Then the sum \(L\) of the series and the partial sums \(s_k\) satisfy \(|L-s_k| \le a_{k+1}\) for all \(k\text{.}\) We can use this information to find how large \(k\) must be so that the \(k\)th partial sum \(s_k\) is within a given distance of \(L\text{.}\) Let us see an example.

Example 5.51.

We want to find a \(k\) so that \(s_k\text{,}\) the \(k\)th partial sum of the alternating series \(\sum_{n=1}^{\infty} (-1)^{n-1} \frac{4}{n^4}\text{,}\) is within a given distance \(E \gt 0\) from the sum \(L\) of the series. Since the sequence \(a_n = 4/n^4\) satisfies the criteria in the alternating series test, the series converges. Moreover, we know that \(|L-s_k|\le a_{k+1}\) for each \(k\text{.}\) So all we need is to find a \(k\) such that \(a_{k+1} =\frac{4}{(k+1)^4} \le E\text{.}\) Solving for \(k\text{,}\) we find that we can take \(k\) to be the least integer so that
\begin{equation*} k \ge \left( \frac{4}{E} \right)^{1/4} -1. \end{equation*}
If a specific \(E\) is given, say \(E = 0.0006\) then we need
\begin{equation*} k \ge \left( \frac{4}{0.0006} \right)^{1/4} -1 \approx 8.03. \end{equation*}
So \(k=9\) will do. That is, \(s_9 \approx 3.788370\text{,}\) the \(9\)th partial sum of the series, approximates the actual sum to within \(0.0006\text{.}\)
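This estimate can be checked numerically; in the sketch below a long partial sum stands in for the exact value of \(L\text{.}\)

```python
def term(n):
    # nth term of the series: (-1)^(n-1) * 4 / n^4
    return (-1) ** (n - 1) * 4.0 / n ** 4

s9 = sum(term(n) for n in range(1, 10))       # the 9th partial sum
L = sum(term(n) for n in range(1, 100001))    # stand-in for the exact sum

# s9 is about 3.788370, and |L - s9| is about 0.00024,
# within both the bound a_10 = 0.0004 and the requested E = 0.0006
```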