Homework 8 Solution¶

Due: Wednesday, 3/25/26, 23:59:59 \begin{equation*} \newcommand{\mat}[1]{\mathbb{#1}} \newcommand{\vb}[1]{\mathbf{#1}} \newcommand{\dd}[1]{\,\mathrm{d}#1} \end{equation*}

In [1]:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.fft import fft, ifft, fftfreq
%matplotlib widget
Problem 1 — Sampling Rates

Suppose that we have sampled a repetitive signal at the sampling rate $\nu_0 = 1/\tau$ for a total of $N$ samples. We take its fast Fourier transform (FFT), calling the output FFT1.

  1. We now sample the same signal at the same rate for twice as long and compute the FFT of the $2N$ points, calling the output FFT2. How would FFT2 differ from FFT1?

  2. How would their power spectra differ?

  3. Suppose we now sample for the same time as in the first case, but at twice the frequency, obtaining $2N$ points as in the second example. How would the resulting transform (FFT3) and its power spectrum compare to the first one?

Solution¶

  1. Because we have not changed the sampling rate, the Nyquist frequency is unchanged and the transform will span the range $- \frac{1}{2\tau} < \nu \le \frac{1}{2\tau}$ ($-\frac{\nu_0}{2} < \nu \le \frac{\nu_0}{2}$). However, there will be $2N$ distinct frequencies in the transform instead of $N$. So, the interval between successive frequencies will be halved and the amplitude of the peaks will be doubled.

  2. Assuming that we compute the power spectrum by taking the magnitude squared of the FFT, we may expect the peaks to increase by a factor of 4, at least for sinusoidal signals that fit (that is, signals that go through an integral number of cycles during the sampling interval), because these correspond to summing $N$ (or $2N$) terms of unit magnitude.

  3. Because we sample twice as fast, the Nyquist frequency is doubled. The spectrum we compute has the same frequency steps as before but spans $-\nu_0 < \nu \le \nu_0$. The amplitude of peaks that fit should increase by a factor of two and the power spectrum by a factor of 4.

Illustration¶

I decided to use the function $$ s = \cos(6\omega_0 t) + \frac12 \sin(4\omega_0 t) $$ where $\omega_0 = 2\pi \times$ (1 Hz) with $N = 128$. As the plots below reveal, the real part of the FFT has amplitude 64, which is $128/2$. Why divided by 2? Because $\cos(6\omega_0 t) = \frac12 e^{i 6\omega_0 t} + \frac12 e^{-i 6\omega_0 t}$, so each frequency component has amplitude $\frac12$ and the FFT sums $N$ factors of 1 times the amplitude of a frequency component. Since the power spectrum is the magnitude squared of the Fourier transform, the peak for the component at 6 Hz is four times as high as the peak at 4 Hz. Notice, as well, that the negative and positive frequency components of the real part are the same, but the imaginary components at negative frequency have opposite sign from those at positive frequency.

In [5]:
plt.close('all')
t1 = np.linspace(0, 1, 128, endpoint=False)
s1 = np.cos(2 * np.pi * 6 * t1) + 0.5 * np.sin(2 * np.pi * 4 * t1)
fft1 = fft(s1)
freq1 = fftfreq(len(t1), 1/len(t1))
def do_plot(t, s, f, freq, title):
    fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(9,6))
    ax = axs[0][0]; ax.plot(t, s); ax.set_xlabel('$t$ (s)'); ax.set_ylabel('$s(t)$')
    ax = axs[0][1]; ax.plot(freq, np.real(f), 'o')
    ax.set_xlabel(r'$\nu$ (Hz)')
    ax.set_ylabel(r'$\mathrm{Re\ }\tilde{s}(\nu)$')
    
    ax = axs[1][1]; ax.plot(freq, np.imag(f), 'o')
    ax.set_xlabel(r'$\nu$ (Hz)')
    ax.set_ylabel(r'$\mathrm{Im\ }\tilde{s}(\nu)$')
    
    ax = axs[1][0]; ax.plot(freq, np.abs(f)**2, 'o')
    ax.set_xlabel(r'$\nu$ (Hz)')
    ax.set_ylabel(r'Power');
    axs[0][1].set_xlim(-10, 10)
    axs[1][1].set_xlim(-10, 10)
    axs[1][0].set_xlim(-10, 10)
    fig.align_ylabels(axs)
    fig.suptitle(title)
do_plot(t1, s1, fft1, freq1, r"$N = 128$ and $\nu_{\rm Ny} = 64$ Hz")
Figure
In [6]:
t2 = np.linspace(0, 2, 256, endpoint=False)
s2 = np.cos(2 * np.pi * 6 * t2) + 0.5 * np.sin(2 * np.pi * 4 * t2)
fft2 = fft(s2)
freq2 = fftfreq(len(t2), 2/len(t2))
do_plot(t2, s2, fft2, freq2, r"$N = 256$ and $\nu_{\rm Ny} = 64$ Hz")
Figure

Having doubled the time over which the signal is sampled, we find that the amplitude of the peaks has doubled and the peaks in the power spectrum have increased fourfold.

In [7]:
t3 = np.linspace(0, 1, 256, endpoint=False)
s3 = np.cos(2 * np.pi * 6 * t3) + 0.5 * np.sin(2 * np.pi * 4 * t3)
fft3 = fft(s3)
freq3 = fftfreq(len(t3), 1/len(t3))
do_plot(t3, s3, fft3, freq3, r"$N = 256$ and $\nu_{\rm Ny} = 128$ Hz")
Figure

When we sample twice as fast, we double the Nyquist frequency, so the output spectrum covers twice the range. For signals that align with the sampling period, having doubled the number of samples we double the amplitude in the transform and quadruple the amplitude in the power spectrum.
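To put numbers on all three cases at once, here is a compact, self-contained check (my own addition, not part of the assignment), using a 4 Hz cosine, which goes through a whole number of cycles in each of the three sampling windows:

```python
import numpy as np
from scipy.fft import fft

N = 128
cases = {
    "FFT1 (N points over 1 s)":  np.linspace(0, 1, N, endpoint=False),
    "FFT2 (2N points over 2 s)": np.linspace(0, 2, 2 * N, endpoint=False),
    "FFT3 (2N points over 1 s)": np.linspace(0, 1, 2 * N, endpoint=False),
}
peaks = {}
for name, t in cases.items():
    s = np.cos(2 * np.pi * 4 * t)        # 4 Hz fits all three sampling windows
    peaks[name] = np.max(np.abs(fft(s)))
    print(f"{name}: peak amplitude = {peaks[name]:.1f}")
# Doubling either the sampling time or the sampling rate doubles the peak
# amplitude (64 -> 128), and hence quadruples the peak in the power spectrum.
```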

Problem 2 — FFT Sleuthing

Data from sampling a periodic signal at 1024 samples per second are stored in a text file Prob2.txt, available on the course page. The values in the file are measured potentials (in volts). To load text data into a numpy array, you can use the function np.loadtxt(filename).

  1. Load the data and prepare a properly labeled plot of the potential as a function of time.

  2. Use the FFT to deduce the frequency composition of this signal, which is the superposition of a fundamental and several harmonics with varying amplitudes. That is, determine the amplitudes and basis functions that went into generating the “measured” potentials (okay, I admit it, I didn’t measure the voltages; I computed them).

Solution¶

In [9]:
v = np.loadtxt('Prob2.txt')
fig, ax = plt.subplots()
t = np.arange(0, len(v) / 1024, 1/1024)
ax.plot(t, v, 'o', ms=1)
ax.set_xlabel(r'$t$ (s)')
ax.set_ylabel(r'$V$ (V)');
Figure

The plot shows that the function goes through four periods in 1 second, so the fundamental frequency should be 4 Hz. It is clearly not a pure sinusoid, so there are undoubtedly harmonics of the fundamental. To figure them out, I’ll take the Fourier transform, send small values arising from round-off error to zero, and list out the nonzero amplitudes.

In [10]:
ft = fft(v)                               # compute the FFT
freq = fftfreq(len(v), 1/1024)            # and corresponding frequency comb
ft[np.abs(ft) < 1e-6] = 0                 # set round-off error to zero
for n in np.argwhere(ft):                 # only report the nonzero amplitudes
    m = n[0]                              # argwhere returns an (M, 1) array of indices
    r, i = np.real(ft[m]), np.imag(ft[m]) 
    if m > 512:                           # center frequencies around zero
        m -= 1024
    if abs(r) < 1e-8: r = 0               # round off
    if abs(i) < 1e-8: i = 0
    print(f"{m:> 6d}  {r:> 8.5g}  {i:> 8.5g}") 
     4      2048         0
     8         0     -1024
    12      -512         0
    16         0       256
    20       128         0
    24         0       -64
    28       -32         0
    32         0        16
    36         8         0
    40         0        -4
    44        -2         0
    48         0         1
   -48         0        -1
   -44        -2         0
   -40         0         4
   -36         8         0
   -32         0       -16
   -28       -32         0
   -24         0        64
   -20       128         0
   -16         0      -256
   -12      -512         0
    -8         0      1024
    -4      2048         0

I see. Every fourth coefficient is nonzero, and each successive nonzero coefficient is the previous one halved in magnitude and rotated by $-90°$ (a factor of $-i$). So, it looks like \begin{equation} v = \sum_{n = 1}^{12} 2^{2-n} \mathrm{Re} \left[ (-i)^{n-1} e^{i n \omega_0 t} + i^{n-1} e^{-i n \omega_0 t} \right] \end{equation} where $\omega_0 = 2\pi\nu_0$ and $\nu_0 = 4$ Hz.
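As a coefficient-level sanity check (synthetic, so it runs without Prob2.txt): twelve harmonics whose coefficients shrink by half and rotate by $-i$ at each step should reproduce the table above.

```python
import numpy as np
from scipy.fft import fft

t = np.arange(1024) / 1024                  # 1 s at 1024 samples/s
omega = 2 * np.pi * 4                       # omega_0 for nu_0 = 4 Hz
v = np.zeros(1024)
for n in range(1, 13):                      # twelve harmonics, down to the +/-1 coefficients
    v += 2**(2 - n) * np.real((-1j)**(n - 1) * np.exp(1j * n * omega * t)
                              + (1j)**(n - 1) * np.exp(-1j * n * omega * t))
ft = fft(v)
print(ft[4].real, ft[8].imag, ft[48].imag)  # 2048, -1024, 1, up to round-off
```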

In [11]:
# Let's see if my guess is right
myv = np.zeros(1024)
omega = 2 * np.pi * 4
for n in range(1, 13):                   # twelve harmonics, matching the table
    myv += 2**(2-n) * np.real( (-1j)**(n-1) * np.exp(1j * n * omega * t) + \
               (1j)**(n-1) * np.exp(-1j * n * omega * t) )
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(t, myv, 'r.', alpha=0.25)
ax.plot(t, v, c='k', lw=1.5)
ax.set_xlabel("$t$");
Figure
Problem 3 — Deconvolution

The file Prob3.txt contains a mystery signal that was recorded after convolution with an instrument whose response function has the form \begin{equation} g(t) = \begin{cases} 0 & t < 0 \\ \alpha e^{-\alpha t} & t \ge 0 \end{cases} \end{equation}

The sample rate was $10^3$ samples/s and the instrument response time constant, $\tau = \alpha^{-1}$ was 0.1 s. Use the convolution theorem to deduce the original signal. That theorem holds that \begin{equation} \mathscr{F}[f * g] = \mathscr{F}[f] \times \mathscr{F}[g] \end{equation} where $\mathscr{F}[f]$ represents the Fourier transform of $f$. Prepare a plot of the original signal with a properly labeled time axis.
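As an aside, the discrete version of the theorem (which is what the FFT computes) involves circular convolution. Here is a quick numerical check of that identity, with made-up random data:

```python
import numpy as np
from scipy.fft import fft, ifft

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Fourier route: multiply the transforms and invert
via_fft = np.real(ifft(fft(f) * fft(g)))

# direct circular convolution: (f*g)[k] = sum_m f[m] g[(k-m) mod N]
direct = np.array([sum(f[m] * g[(k - m) % 64] for m in range(64))
                   for k in range(64)])

print(np.max(np.abs(via_fft - direct)))   # agreement to round-off
```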

Solution¶

Let’s first plot the measured signal

In [12]:
f3 = np.loadtxt('Prob3.txt')
t3 = np.arange(len(f3)) / 1000  # sample times at 1 ms spacing (integer arange avoids float-step length surprises)
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(t3, f3, lw=0.75)
ax.set_title("Measured Signal");
Figure

To remove the influence of the (known) instrument response function, we need to take its Fourier transform, divide the Fourier transform of the measured signal by the transform of the response function, and then take the inverse Fourier transform.
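Before touching the data, a self-contained round trip shows the procedure works (the pulse shape here is invented, and I use circular convolution so the round trip is exact):

```python
import numpy as np
from scipy.fft import fft, ifft

dt = 0.001
t = np.arange(1024) * dt                       # 1 ms sampling, as in the problem
alpha = 10.0                                   # 1/tau for tau = 0.1 s
g = alpha * np.exp(-alpha * t)                 # instrument response function
s_true = np.exp(-200 * (t - 0.3)**2) * np.sin(2 * np.pi * 40 * t)  # made-up pulse

measured = np.real(ifft(fft(s_true) * fft(g)))     # convolve with the response
recovered = np.real(ifft(fft(measured) / fft(g)))  # ...and deconvolve

print(np.max(np.abs(recovered - s_true)))      # recovered to round-off
```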

In [13]:
g3 = 10*np.exp(-10*t3)   # here's the instrument response function
# to deconvolve, we divide the transform of the measured signal by the
# transform of g3 and then take the inverse transform.
s3 = ifft(fft(f3) / fft(g3))
s3
Out[13]:
array([ 2.67062513e-10+1.92914513e-19j, -3.66864226e-07+9.25346080e-20j,
       -7.31289529e-07-1.79833753e-20j, ...,
       -6.33610901e-12-5.43969645e-20j,  2.72345154e-10-9.43802521e-20j,
        3.86015387e-10+9.40516060e-21j], shape=(1151,))
In [14]:
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(t3, np.real(s3), lw=0.75)
ax.set_xlabel(r"$t$ (s)")
ax.set_ylabel("Signal")
plt.subplots_adjust(left=0.15)
ax.set_title("Deconvolved Signal");
Figure

The result of deconvolving seems to be “peaks” of equal amplitude, but with steadily increasing frequency. That is, it looks like a chirped Gaussian pulse. I created the source file a long time ago and don’t remember what parameters I used. Let’s see if we can reconstruct it.

In [15]:
tt = np.linspace(0, 1.024, 1024, endpoint=False)  # 1 ms spacing, matching the sample rate
dt = tt - 0.5
s = np.exp(-25 * dt**2) * np.sin(2 * np.pi * 100 * dt * (1 + 1 * dt)) * dt
plt.close('all')
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(tt, s, lw=0.75);
Figure

The reconstruction is clearly not perfect, but I think it captures the essence.

Problem 4 — Windows

Because a discrete Fourier transform is necessarily taken over a finite sampling time, it cannot capture an integral number of cycles of all frequencies below the Nyquist frequency, which means that the periodic continuation of the sampling interval implies a discontinuity. As we have seen, this discontinuity leads to significant power leaking into nearby frequencies (see Fig. 8 in the notes). A way to suppress this problem to a significant degree is to multiply the raw data by a window function that goes smoothly to zero at the edges of the sampled interval. A quick search will bring up a bunch of window functions that folks use (cosine, gaussian, Hamming, Hanning, Welch, Parzen, Tukey, etc.). A boatload of these are defined in scipy.signal.windows.

Use the following snippet to generate a source signal and compare the power spectrum computed from the raw signal with that obtained using two different window functions (of your own choosing). Plot the power on a logarithmic scale, use a legend to distinguish the three curves, and write a short summary of your conclusions.

In [16]:
t4 = np.linspace(0, 1, 2048, endpoint=False)
freq = 16.47
s4 = np.cos(2 * np.pi * freq * t4)
fig, ax = plt.subplots()
ax.plot(t4, s4)
ax.set_xlabel('$t$');
Figure

First, let's see what a transform yields without applying a window function. Rather than plotting the real and imaginary parts, I will plot the power and the phase angle of the transform ($\tan \phi_k = \mathrm{Im} f_k / \mathrm{Re} f_k$).

In [17]:
def pxform(freq, ft, title="", frac=0.05):
    fig, axs = plt.subplots(nrows=2, sharex=True)
    axs[0].set_ylabel("Power")
    axs[1].set_ylabel("Phase (°)")
    axs[1].set_xlabel(r"$\nu$ (Hz)")
    axs[1].plot(freq, np.angle(ft, True), '.-')
    axs[1].set_xlim((frac*np.min(freq), frac*np.max(freq)))
    axs[0].semilogy(freq, np.abs(ft)**2, '-')
    axs[0].grid(axis='y')
    axs[0].set_ylim(1e-4,1e6)
    axs[1].set_ylim(-180,180)
    axs[1].set_yticks(np.arange(-180,181, 90))
    axs[1].grid()
    fig.align_ylabels(axs)
    axs[0].set_title(title)
    return fig, axs
In [18]:
from scipy.fft import fftshift
f = fftshift(fftfreq(len(t4), 1/len(t4)))
pxform(f, fftshift(fft(s4)), "No Window");
Figure

The window function page lists a whole bunch of window functions. I'll explore a few.

In [19]:
from scipy.signal.windows import *

def show_window_effects(freq, f_of_t, windows):
    ncols = 1+len(windows)
    if freq[-1] < freq[0]:
        freq = fftshift(freq)
    fig, axs = plt.subplots(nrows=2, ncols=ncols, sharex=True, sharey='row', figsize=(10,7))
    for n in range(ncols):
        axs[1][n].set_xlabel(r"$\nu$ (Hz)")
        axs[1][n].tick_params(axis='x', which='both', pad=8);
        axs[1][n].set_xlim(1,100)
        axs[1][n].set_ylim(-180,180)
        axs[1][n].set_yticks(np.arange(-180,181,90))
        axs[1][n].grid()
        axs[0][n].set_ylim(1e-4, 1e6)
    axs[0][0].set_ylabel("Power")
    axs[0][0].set_title("No Window")
    axs[1][0].set_ylabel("Phase (°)")
    nowindow = fftshift(fft(f_of_t))
    axs[0][0].loglog(freq, np.abs(nowindow)**2)
    axs[1][0].semilogx(freq, np.angle(nowindow, True), ".-")
    col = 0
    for wind, name in windows:
        col += 1
        ft = fftshift(fft(wind * f_of_t))
        axs[1][col].semilogx(freq, np.angle(ft, True), '.-')
        axs[0][col].loglog(freq, np.abs(ft)**2)
        axs[0][col].set_title(name)
In [20]:
N = len(t4)
windows = [
    (blackman(N), "blackman"),
    (blackmanharris(N), "blackmanharris"),
    (kaiser(N, beta=14), "kaiser")
]
freq = fftshift(fftfreq(len(t4), 1/len(t4)))
show_window_effects(freq, s4, windows)
Figure

What can we learn here?

  • With no window function, we see a narrow peak, but wings that die off very slowly. The phase angle varies in a smooth and sensible manner.
  • With the Blackman window, the wings die off much more rapidly, but the phase angle is significantly shifted. Furthermore, it oscillates violently around the peak, smoothing out on either side.
  • The Blackman-Harris window is even cleaner, at the expense of even funkier phase behavior.
  • The Kaiser window (with $\beta = 14$) performs much like the Blackman-Harris window, both in wing suppression and in phase weirdness.
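To put rough numbers on these impressions, here is a small addition measuring the fraction of power that leaks far from the 16.47 Hz peaks for each window (the ±8-bin cutoff is my arbitrary choice of "far"):

```python
import numpy as np
from scipy.fft import fft
from scipy.signal.windows import blackman, blackmanharris, kaiser

N = 2048
t4 = np.linspace(0, 1, N, endpoint=False)
s4 = np.cos(2 * np.pi * 16.47 * t4)

def far_leakage(x, halfwidth=8):
    """Fraction of total power more than halfwidth bins from the +/-16 Hz peaks."""
    p = np.abs(fft(x))**2
    near = p[16 - halfwidth:16 + halfwidth + 1].sum() \
         + p[N - 16 - halfwidth:N - 16 + halfwidth + 1].sum()
    return 1 - near / p.sum()

leak = {}
for w, name in [(np.ones(N), "none"), (blackman(N), "blackman"),
                (blackmanharris(N), "blackmanharris"), (kaiser(N, beta=14), "kaiser")]:
    leak[name] = far_leakage(w * s4)
    print(f"{name:>15s}: {leak[name]:.2e}")
```

Without a window a few percent of the power ends up far from the peak; all three windows cut that by several orders of magnitude, consistent with the plots above.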