Homework 1 Solution¶

Due: Monday, 1/26/26, 23:59:59 \begin{equation*} \newcommand{\mat}[1]{\mathbb{#1}} \newcommand{\vb}[1]{\mathbf{#1}} \newcommand{\dd}[1]{\,\mathrm{d}#1} \end{equation*}

Problem 1 — Summing a series
Consider the infinite series \begin{equation} S = \sum_{n=1}^{\infty} \frac{1}{n(n+1)} \end{equation}

(a) Without explicitly summing the series, use an integral test to determine whether it converges.

(b) Sum the series.

Solution¶

(a) Since $\displaystyle S < \sum_{n=1}^\infty \frac{1}{n^2}$ (term by term, $\frac{1}{n(n+1)} < \frac{1}{n^2}$), it suffices to find an integral that bounds the simpler sum from above; that will show both sums are finite. Consider the figure generated below.

In [1]:
import matplotlib.pyplot as plt
%matplotlib widget
import numpy as np

fig, ax = plt.subplots()
ax.set_xlim(0,10)
n = np.arange(1, 10)
ax.bar(n, 1 / n**2, alpha=0.5, width=1, align='edge', edgecolor="#000088", label="$1/n^2$")
ytrue = 1 / (n * (n+1))
ax.bar(n, ytrue, alpha=0.5, width=1, align='edge', edgecolor="k", facecolor='#dddddd', label="$1/n(n+1)$")
x = np.linspace(2, 10, 81)
y = 1 / (x-1)**2
ax.plot(x, y, 'r--')
ax.set_xlabel("$n$")
ax.legend()
ax.annotate(r"$\frac{1}{(n-1)^2}$", (3, 0.4), fontsize='x-large', color="r");
[Figure: bar chart comparing the $1/n^2$ and $1/n(n+1)$ rectangles with the bounding curve $1/(n-1)^2$]

The area of the shaded blue rectangles is the sum of $1/n^2$. (The lighter portions are $1/n(n+1)$.) The rectangles with $n \ge 2$ clearly lie below the red dashed curve, so their combined area is smaller than \begin{equation*} A = \int_2^\infty \frac{1}{(n-1)^2} \dd{n} = \int_1^\infty \frac{1}{x^2} \dd{x} = 1 \end{equation*} Adding the $n = 1$ rectangle, which has area 1, gives $\sum_{n=1}^\infty 1/n^2 < 2$. The sum is therefore bounded, and so is $S$.
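As a quick numerical sanity check (not required by the problem), the partial sums of $1/n^2$ never exceed the bound of 2:

```python
import numpy as np

# Partial sums of 1/n^2: the first rectangle has area 1 and the integral
# bounds the rest by 1, so every partial sum should stay below 2.
n = np.arange(1, 100001)
partial = np.cumsum(1 / n**2)
print(partial[-1])            # approaches pi^2/6 ≈ 1.6449, well below 2
print(np.all(partial < 2))    # True
```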

(b) By partial fractions, $\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}$, so the sum telescopes: \begin{equation*} S = \sum_{n=1}^{\infty} \frac{1}{n(n+1)} = \sum_{n=1}^{\infty} \left( \frac{1}{n} - \frac{1}{n+1} \right) = \left(1 - \frac12\right) + \left(\frac12 - \frac13\right) + \left(\frac13 - \frac14\right) + \cdots = 1 \end{equation*}
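A quick numerical check of the telescoping result: the $N$-term partial sum collapses to exactly $1 - \frac{1}{N+1}$, which tends to 1.

```python
import numpy as np

# The N-term partial sum of 1/(n(n+1)) telescopes to 1 - 1/(N+1).
N = 1000
n = np.arange(1, N + 1)
s = np.sum(1 / (n * (n + 1)))
print(s)                               # 0.999000999..., i.e. 1 - 1/1001
print(np.isclose(s, 1 - 1 / (N + 1)))  # True
```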

Problem 2 — Paramagnetism

In the Langevin model of paramagnetic behavior, the magnetization takes the form $$ M(x) = M_0 \left[ \frac{\cosh x}{\sinh x} - \frac{1}{x} \right] $$ where $M_0$ is a constant and $x$ is proportional to the applied magnetic field.

(a) What is the limiting value of the magnetization as $x \to \infty$?

(b) How does the magnetization depend on $x$ as $x \to 0$? Note: I'm not looking for the value of $M(0)$ but the way $M(x)$ depends on $x$ for small values of $|x|$.

Solution¶

(a) Recall that $\cosh x = \frac{e^x + e^{- x}}{2}$ and $\sinh x = \frac{e^x - e^{-x}}{2}$. As $x \to \infty$, each goes to $e^x / 2$, so their ratio goes to one. Hence, \begin{equation} \lim_{x \to \infty} M(x) = M_0 \end{equation}
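In the spirit of the quick numerical checks elsewhere in this notebook, we can watch the limit being approached:

```python
import numpy as np

# As x grows, cosh(x)/sinh(x) -> 1 and 1/x -> 0, so M/M0 -> 1
# (from below, roughly like 1 - 1/x).
for x in (5.0, 10.0, 20.0, 100.0):
    m = np.cosh(x) / np.sinh(x) - 1 / x
    print(x, m)
```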

(b) The Taylor series for $\cosh x$ is $1 + x^2/2! + x^4/4! + \cdots$ and that for $\sinh x$ is $x + x^3/3! + x^5/5! + \cdots = x(1 + x^2/3! + x^4/5! + \cdots)$. So, $$ \frac{M(x)}{M_0} = \frac{1}{x} \left[ \frac{1 + x^2/2 + x^4/24 + \cdots}{1 + x^2/6 + x^4/120 + \cdots} - 1 \right] $$ Using the binomial expansion $\frac{1}{1+u} \approx 1 - u + u^2 - \cdots$, we can invert the series in the denominator to get \begin{align*} \frac{M(x)}{M_0} &= \frac{1}{x} \left[ \left(1 + \frac{x^2}{2} + \frac{x^4}{24} + \cdots\right)\left(1 - \frac{x^2}{6} + O(x^4)\right) - 1 \right] \\ &= \frac{1}{x} \left[ 1 + x^2 \left(\frac{1}{2}-\frac{1}{6} \right) + O(x^4) - 1\right] \\ &= \frac{x}{3} + O(x^3) \end{align*} Therefore, for small values of $x$, $M \approx M_0 \frac{x}{3}$.
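A quick numerical check of the small-$x$ slope before plotting:

```python
import numpy as np

# The leading small-x behavior is M/M0 ≈ x/3, so (coth x - 1/x)/x
# should approach 1/3 as x -> 0.
for x in (0.1, 0.01, 0.001):
    m = np.cosh(x) / np.sinh(x) - 1 / x
    print(x, m / x)                     # tends to 0.3333...
```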

In [2]:
plt.close('all')
fig, ax = plt.subplots()
x = np.linspace(-8, 8, 120)
m = np.cosh(x) / np.sinh(x) - 1 / x
ax.plot(x, m)
# Now add a line through the origin with slope 1/3
xx = np.array([-3, 3])
yy = xx/3
ax.plot(xx, yy, 'r--', label="$x/3$")
ax.set_xlabel("$x$")
ax.set_ylabel("$M(x) / M_0$")
ax.legend();
[Figure: $M(x)/M_0$ versus $x$, with the small-$x$ asymptote $x/3$ shown dashed]
Problem 3 — Limits
Find the following limits:

(a) $\displaystyle \lim_{x\to0} \left(\frac{1}{\sin^2 x} - \frac{1}{x^2} \right)$

(b) $\displaystyle \lim_{x\to0} \left(\frac{2}{x} + \frac{1}{1 - \sqrt{1+x}} \right)$

(c) $\displaystyle \lim_{x\to0} \left(\frac{1 - \cos kx}{1 - \cosh kx} \right)$

Solution¶

(a) For small $x$, $\sin x \approx x$, which means that the term in parentheses tends to $\frac{1}{x^2} - \frac{1}{x^2}$ and we need to expand the sine term more carefully: \begin{align} \frac{1}{\sin^2 x} - \frac{1}{x^2} & \approx \frac{1}{\left(x - \frac{x^3}{3!} + \cdots\right)^2} - \frac{1}{x^2} \\ & \approx \frac{1}{x^2} \bigg\{ \bigg(1 - \frac{x^2}{6} + \cdots\bigg)^{-2} - 1 \bigg\} \end{align} Now, we can use the binomial expansion, $(1 + \epsilon)^n \approx 1 + n \epsilon + \frac{n(n-1)}{2!} \epsilon^2 + \cdots$, to invert the first term: \begin{align*} \frac{1}{\sin^2 x} - \frac{1}{x^2} & \approx \frac{1}{x^2} \bigg\{1 + \frac{x^2}{3} + \cdots - 1 \bigg\} = \frac{1}{3} \end{align*} We can run a quick-and-dirty numerical check:

In [6]:
ε = 0.001
1 / np.sin(ε)**2 - 1/ε**2
Out[6]:
np.float64(0.33333340007811785)

(b) Let’s attack this one starting with the denominator of the second fraction: $1 - \sqrt{1 + x} \approx 1 - \{1 + \frac12 x - \frac18 x^2 + \cdots \} \approx -\frac{x}{2}(1 - x/4)$, by the binomial expansion. Combining with the first term, we get \begin{equation*} \frac{2}{x} + \frac{1}{1-\sqrt{1+x}} \approx \frac{2}{x} - \frac{1}{\frac{x}{2}(1 - x/4)} \approx \frac{2}{x} - \frac{2}{x} \left(1 + \frac{x}{4} \right) = -\frac12 \end{equation*} Another quick check:

In [7]:
2/ε + 1 / (1 - np.sqrt(1 + ε))
Out[7]:
np.float64(-0.49987506246498015)

(c) Let’s just expand numerator and denominator through quadratic order: \begin{equation*} \frac{1 - \cos kx}{1 - \cosh kx} \approx \frac{1 - (1 - k^2 x^2 / 2 + \cdots)}{1 - (1 + k^2 x^2 / 2 + \cdots)} \approx \frac{k^2 x^2 / 2 + \cdots}{-k^2 x^2/2 + \cdots} = -1 \end{equation*} This should work provided that $k \ne 0$. Quick check:

In [8]:
k = -0.234
(1 - np.cos(k * ε)) / (1 - np.cosh(k * ε))
Out[8]:
np.float64(-0.9999999918896704)
Problem 4 — Numerically summing a series

The Riemann zeta function is defined by \begin{equation} \zeta(\nu) = \sum_{n=1}^{\infty} \frac{1}{n^\nu} \end{equation} When $\nu = 1$, it is equal to the harmonic series, which we showed does not converge. For $\nu > 1$, the series does converge, although convergence can be slow for values of $\nu$ that are not large.

(a) For $\nu = 2$, the series converges to $\pi^2/6 \approx 1.64493$. About how many terms do you need to sum to achieve an accuracy of 0.01%? (Use Python and NumPy; include your commented code in your solution.)

(b) Now consider a way of estimating the series as \begin{equation} S = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots + \frac{1}{(n-1)^2} + \sum_{j=n}^{\infty} \frac{1}{j^2} \end{equation} where we explicitly sum the first $n-1$ terms and then approximate the remaining infinite sum via an integral. About how many terms do you need to sum explicitly to achieve the same 0.01% accuracy using this method? Comment.

Solution¶

In [3]:
zeta2 = np.pi**2 / 6             # exact value: zeta(2) = pi^2/6
n, s = 1, 1.0                    # running index and partial sum (n = 1 term included)
while (1 - s / zeta2) > 0.0001:  # loop until relative error drops below 0.01%
    n += 1
    s += 1 / n**2                # add the next term
print(f"That took {n} terms")
That took 6079 terms

Now we sum through $(n-1)$ and then add $\int_n^{\infty} \frac{1}{x^2}\dd{x} = \frac{1}{n}$.
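Before running the code, a quick sanity check (not part of the assignment) that the integral really does bracket the tail sum:

```python
import numpy as np

# Integral bounds on the tail: 1/(n+1) <= sum_{j=n+1}^inf 1/j^2 <= 1/n,
# so replacing the tail by the integral is accurate to O(1/n^2).
n = 50
j = np.arange(n + 1, 200001)
tail = np.sum(1.0 / j**2)        # truncated stand-in for the infinite tail
print(1 / (n + 1), tail, 1 / n)  # tail lies between the two bounds
```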

In [6]:
n, s = 1, 1.0
while True:
    n += 1
    s += 1 / n**2                      # explicit partial sum through n
    if n > 10:
        # estimate the remaining tail sum_{j>n} 1/j^2 by the integral 1/(n+1)
        δ = 1 - (s + 1/(n+1)) / zeta2  # relative error of the corrected estimate
        if 0 < δ < 1e-4:
            break
print(f"That took {n} terms")
That took 55 terms

As the comparison shows, correcting the partial sum with the tail integral cuts the work from roughly 6000 terms to a few dozen: the residual error of the corrected estimate falls off like $1/n^2$ rather than the $1/n$ of the raw partial sums, so only about the square root of the number of terms is needed.
Problem 5 — Division of series

One way to develop the Taylor series expansion of $\tan x$ about $x = 0$ is by taking derivatives. An alternative is to divide the series for $\sin x$ by the series for $\cos x$ and to use the binomial expansion to “bring the denominator to the numerator.” That is, the denominator will have the form \begin{equation*} 1 - \frac{1}{2!}x^{2} + \frac{1}{4!} x^{4} - \cdots = 1 - q \end{equation*} where the term $-q$ is the sum of all but the first term. Therefore, \begin{equation*} \tan x = \frac{\sin x}{\cos x} = \left(x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \cdots \right)\left(1 + q + q^{2} + \cdots \right) \end{equation*} since $1/(1-q) = 1 + q + q^2 + q^3 + \cdots$.

Use this fact to develop the Taylor series for $\tan x$ through at least $x^{5}$ and compare your result to \begin{equation*} \tan x = x + \frac{x^{3}}{3} + \frac{2 x^{5}}{15} + \frac{17 x^{7}}{315} + \frac{62 x^{9}}{2835} +\cdots \end{equation*} Use matplotlib to prepare a plot comparing your approximation to $\tan x$, and estimate the range over which your expression agrees with the true value within 0.03%.

Solution¶

If we follow the hint and factor out $x$ from the series for $\sin x$, we have through $x^7$ \begin{equation*} \tan x \approx x \bigg( 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots \bigg) \bigg\{ 1 + \left(\frac{x^2}{2!} - \frac{x^4}{4!} + \frac{x^6}{6!} \right) + \left(\frac{x^2}{2!} - \frac{x^4}{4!} + \cdots \right)^2 + \left(\frac{x^2}{2} - \cdots \right)^3 \bigg\} \end{equation*} valid for $|x| \ll 1$. Let’s collect terms with like powers of $x$: \begin{align*} x^1 &: 1 \times 1 = 1 \\ x^3 &: \frac{1}{2} - \frac16 = \frac13 \\ x^5 &: \frac{1}{120} - \frac{1}{6\times 2} - \frac{1}{24} + \frac14 = \frac{16}{120} = \frac{2}{15} \\ x^7 &: \frac{-1+21 + 35 - 210 + 7 - 7\cdot6\cdot5 + 3\cdot 5 \cdot 6 \cdot 7}{7!} \\ &\qquad = \frac{55 - 210 + 7(1 + 60)}{7!} = \frac{272}{7!} = \frac{16 \cdot 17}{7!} = \frac{17}{3\cdot5 \cdot 3 \cdot 7} = \frac{17}{315} \end{align*}
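We can verify the hand-collected coefficients symbolically with sympy (an extra dependency, assumed available; it is not used elsewhere in this notebook):

```python
import sympy as sp

# Check the hand-computed tan-series coefficients against sympy's expansion.
x = sp.symbols('x')
tan7 = sp.series(sp.tan(x), x, 0, 8).removeO()
expected = x + x**3/3 + sp.Rational(2, 15)*x**5 + sp.Rational(17, 315)*x**7
print(sp.expand(tan7 - expected))  # 0 if the coefficients agree
```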

In [7]:
def tanseries(x, nterms: int = 3):
    """Approximate tan(x) by its Taylor series, keeping `nterms` odd powers."""
    coeffs = (1, 1/3, 2/15, 17/315, 62/2835)
    ts = x * coeffs[0]
    xp = x.copy()        # running odd power of x (expects a NumPy array)
    for j in range(1, nterms):
        xp *= x**2       # advance from x^(2j-1) to x^(2j+1)
        ts += coeffs[j] * xp
    return ts

fig, ax = plt.subplots()
x = np.logspace(-1, 0, 40)
ax.loglog(x, 1 - tanseries(x) / np.tan(x), 'r.', label=r"$x^5$")
ax.loglog(x, 1 - tanseries(x, 4) / np.tan(x), 'g.', label=r"$x^7$")
ax.loglog(x, 1 - tanseries(x, 5) / np.tan(x), 'b.', label=r"$x^9$")
ax.legend()
ax.grid(axis='both', which='both')
ax.set_xlabel("$x$")
ax.set_ylabel("Relative Error");
[Figure: log-log plot of the relative error of the $x^5$, $x^7$, and $x^9$ truncations]

I used the np.logspace function to get equally spaced points on a logarithmic axis. Reading off the plot, the $x^5$ approximation stays within the requested 0.03% out to roughly $x = 0.4$ and reaches 0.1% error at about $x = 0.5$. At the 0.1% level, the $x^7$ truncation makes it almost to $x = 0.7$, and $x^9$ to about $x = 0.8$. Having chosen a range from 0.1 to 1, over which $x$ increases by one order of magnitude, we see that the relative error of the $x^5$ curve increases by 6 orders of magnitude. Can you explain that value of 6?