Physics 064 — Mathematics and Computation for Physicists

Office Hours

  • Mondays from 13:00 to 14:30
  • Tuesdays from 15:00 to 16:00
  • Wednesdays from 13:00 to 15:00
  • Thursdays from 15:00 to 16:00
  • most Fridays from 13:15 to 15:00

  Date Meeting Resources Key Concepts
Week 1
Tue, 1/20 Lecture 1: Are you series?
Reading: Infinite Series
  • Overview of the course: “everything” is a vector space.
  • We will use Python/Jupyter Lab for computation in this course. Please check your installation at Configuration

  • An infinite series is the limit of a finite series \( S_N = \sum_{n=1}^{N} a_n \) as \( N \to \infty \).

  • For such a series to converge, we must have \( |a_n| \to 0 \) as \( n\to\infty \). This is a necessary but not sufficient condition.
  • As an example of a series that doesn't converge, consider the harmonic series: \( H = 1 + \frac12 + \frac13 + \frac14 + \cdots \)
  • Geometric series: \( S_N = a_0(1 + r + \cdots + r^N) = a_0\frac{1 - r^{N+1}}{1-r} \)
  • Convergence criteria
  • Taylor and Maclaurin series (power series)
  • Binomial series: \( (1 + x)^n = 1 + n x + \frac{n(n-1)}{2!} x^2 + \frac{n(n-1)(n-2)}{3!} x^3 + \cdots \), which also holds for non-integer \( n \) when \( |x| < 1 \)
  • Manipulation of series: when expanding in powers of a small quantity \( x \) through a given order (\( x^n \)), you must keep all terms proportional to \( x^p \) for \( p \le n \)
  • Common Taylor series, including \( e^x, \cos x, \sin x, \ln(1+x), \cosh(x), \sinh(x) \)
  • Series inversion as a way to compute series for \( \mathrm{sech} x = \frac{1}{\cosh x} \)
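  A quick numerical check of the geometric-series formula above; a minimal Python sketch (the values of \( a_0 \) and \( r \) are arbitrary choices, not from the lecture):

    # Compare partial sums of the geometric series with the closed form
    # a0*(1 - r**(N+1))/(1 - r), and with the N -> infinity limit a0/(1 - r).
    a0, r = 2.0, 0.5
    for N in (5, 10, 20):
        partial = a0 * sum(r**n for n in range(N + 1))
        closed = a0 * (1 - r**(N + 1)) / (1 - r)
        print(N, partial, closed)

    print("limit:", a0 / (1 - r))   # the series converges here because |r| < 1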
Thu, 1/22 Lecture 2
Reading: Euler’s Gamma Function
  • Uniqueness of the power series representation of functions
  • Ways of estimating \( n! \):
    • via \( \ln n! \approx n \ln n - n \), so \( n! \approx n^n e^{-n} \)
    • using \( \Gamma(n+1) = n! = \int_0^\infty x^n e^{-x} \, dx \)
  • Derivation of Stirling's formula: \( n! \approx n^n e^{-n} \sqrt{2\pi n} \left(1 + \frac{1}{12 n} + \cdots \right) \)
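  The quality of Stirling's formula is easy to check numerically; a short sketch (the choice of \( n \) values is arbitrary):

    import math

    # Ratio of the Stirling approximation (with and without the 1/(12n)
    # correction) to the exact factorial.
    for n in (5, 10, 20):
        exact = math.factorial(n)
        stirling = n**n * math.exp(-n) * math.sqrt(2 * math.pi * n)
        corrected = stirling * (1 + 1 / (12 * n))
        print(n, stirling / exact, corrected / exact)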
Week 2
Tue, 1/27 Lecture 3: Linear Algebra I
Homework: HW01 due Monday @ 23:59; Reading: Linear Algebra
Thu, 1/29 Lecture 4: Linear Algebra II
Reading: Gauss Jordan
  • Hilbert spaces
  • commuting matrices and matrix diagonalization
  • functions of matrices
  • Kronecker delta: \( \delta_{ij} \) is 1 when \( i=j \) and zero otherwise
  • The Levi-Civita antisymmetric unit tensor \( \varepsilon_{ijk} \) is 1 when \( ijk \) is a cyclic permutation of \( 123 = xyz \) (123, 231, 312); is \( -1 \) when \( ijk \) is a cyclic permutation of \( 321 = zyx \) (321, 213, 132); and is zero otherwise (whenever any index repeats).
  • The cross product may be written \( \vb{a} \times \vb{b} = \sum_{i,j,k} \varepsilon_{ijk} \vu{e}_i a_j b_k \)
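  The Levi-Civita expression for the cross product can be checked directly against numpy.cross; a minimal sketch (the vectors are arbitrary):

    import numpy as np

    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0      # even (cyclic) permutations of 123
        eps[k, j, i] = -1.0     # odd permutations (indices reversed)

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([-4.0, 0.5, 2.0])

    cross_eps = np.einsum('ijk,j,k->i', eps, a, b)   # sum_{j,k} eps_ijk a_j b_k
    print(cross_eps)
    print(np.cross(a, b))       # should agree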
Week 3
Tue, 2/3 Lecture 5: Eigenvalues and Eigenvectors
Homework: HW02 due Mon. 2/3 @ 23:59; Reading: Eigenvalue problems
  • Confirmation that Gauss-Jordan without pivoting is numerically unstable
  • Eigenvalue problem: \( \mat{A} \cdot \vb{x} = \lambda \vb{x} \)
  • Characteristic (secular) equation for the eigenvalues: \( \mathrm{det} ( \mat{A} - \lambda \mat{I} ) = 0 \)
  • A real matrix that is not symmetric generally has complex eigenvalues, which occur in conjugate pairs.
  • A Hermitian matrix is equal to its conjugate transpose. Hermitian matrices have real eigenvalues and eigenvectors corresponding to different eigenvalues are orthogonal.
  • The inertia tensor \( \mat{J} \) allows us to express the kinetic energy of a rigid body for given angular velocity vector \( \vb*{\omega} \). See Diagonalization. It is an example of a positive definite matrix, which means that it has positive real eigenvalues.
  • The inertia tensor also allows us to express the angular momentum of a rotating rigid body from its angular velocity: \( \vb{L} = \mat{J} \cdot \vb*{\omega} \). Because \( \mat{J} \) is real and symmetric, we can always find a basis in which it is diagonal, and because it is positive definite the coefficients (eigenvalues) along the main diagonal are positive. If an object spins about one of these eigenvectors (a principal axis), the angular momentum is parallel to the angular velocity and things are simple. As a poorly thrown football shows, however, the angular velocity and angular momentum don't have to be parallel, so the spin axis precesses around the fixed angular momentum vector (the ball wobbles).
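  A short numpy illustration of these ideas: diagonalizing a symmetric, positive-definite matrix (the structure an inertia tensor has). The matrix below is made up for illustration, not one from the lecture.

    import numpy as np

    J = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])

    evals, evecs = np.linalg.eigh(J)   # eigh is for symmetric/Hermitian matrices
    print(evals)                       # real and positive: J is positive definite

    # Columns of evecs are orthonormal eigenvectors; in that basis J is diagonal.
    print(np.round(evecs.T @ J @ evecs, 10))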
Thu, 2/5 Lecture 6: Algebra of Complex Numbers
Reading: Fourier Series
Week 4
Tue, 2/10 Lecture 7: Fourier Series
Reading: Fourier Series
  • Orthogonality of the trigonometric functions: \( \int_0^L e^{i 2\pi n x/L} \, e^{-i 2\pi m x/L} \dd{x} = L \, \delta_{nm} \), and for the real forms \( \int_0^L \cos(2\pi n x/L) \cos(2\pi m x/L) \dd{x} = \frac{L}{2} \delta_{nm} \) for \( n, m \ge 1 \)
  • Periodic boundary conditions: the function \( u_n(x) \) and its derivative have the same value at the lower and upper limit of the range
  • Example Fourier series
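  As a concrete example of the last bullet, partial sums of the Fourier series of a square wave (a generic sketch, not necessarily the example used in class; the square wave is +1 on the first half period and -1 on the second, with series \( \frac{4}{\pi}\sum_{\text{odd } n} \frac{\sin(2\pi n x/L)}{n} \)):

    import numpy as np

    L = 1.0
    x = np.linspace(0, L, 1001)

    def partial_sum(x, n_max):
        """Sum the square-wave series (4/pi) sum over odd n of sin(2 pi n x/L)/n."""
        s = np.zeros_like(x)
        for n in range(1, n_max + 1, 2):      # odd n only
            s += np.sin(2 * np.pi * n * x / L) / n
        return 4 / np.pi * s

    for n_max in (1, 5, 51):
        print(n_max, partial_sum(x, n_max)[250])   # x = L/4, where the wave is +1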
Thu, 2/12 Lecture 8: Fourier Series
Reading: Fourier Series
Week 5
Tue, 2/17 Lecture 9: Fourier Series
Reading: Fourier Series
  • Bilinear concomitant: \( ( u_1^{\prime \ast} u_2 - u_1^* u_2^{\prime} )_a^b \)
  • If the bilinear concomitant vanishes (for example, when either \( u \) or \( u' \) is zero at both \( a \) and \( b \)), then
    • the eigenvalues are real
    • eigenfunctions \( u_n(x) \) corresponding to different eigenvalues \( \lambda \) are orthogonal in the sense that \( \int_a^b u^*_n(x) u_m(x) \dd{x} = 0 \) unless \( m = n \).
  • Convergence: the Fourier series of \( f(x) \) converges at a point \( x' \) to the average of the limits of \( f(x) \) as \( x \to x' \) from the left and from the right
  • Boundary conditions: fixed and periodic
Thu, 2/19 Lecture 10: Functions of a Complex Variable
Reading: Complex Variables
  • Algebra of complex variables \( z = x + iy \), for \( x,y \in \mathbb{R} \)
  • Euler relation, \( e^{i\phi} = \cos\phi + i \sin\phi \); \( \cos\phi = (e^{i\phi}+e^{-i\phi})/2 \) and \( \sin\phi = (e^{i\phi} - e^{-i\phi})/(2i) \)
  • Cauchy-Riemann conditions: for the derivative of a function of a complex variable \( f(z) = u(x, y) + i v(x,y) \) to exist, it must satisfy \( \partial u/\partial x = \partial v/\partial y \) and \( \partial u/\partial y = -\partial v/\partial x \)
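  A finite-difference check of the Cauchy-Riemann conditions for the analytic function \( f(z) = z^2 \), for which \( u = x^2 - y^2 \) and \( v = 2xy \) (a sketch; any analytic function would work, and the sample point is arbitrary):

    def u(x, y): return x**2 - y**2     # real part of z**2
    def v(x, y): return 2 * x * y       # imaginary part of z**2

    x0, y0, h = 0.7, -1.3, 1e-6
    du_dx = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
    du_dy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
    dv_dx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
    dv_dy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

    print(du_dx, dv_dy)     # du/dx should equal  dv/dy
    print(du_dy, -dv_dx)    # du/dy should equal -dv/dx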
Week 6
Tue, 2/24 Lecture 11
  • Cauchy’s integral theorem: \( \oint f(z) \, dz = 0 \), provided that the derivative exists on the contour and at all interior points.
  • Laurent expansion: near a singularity at \( z_0 \), we can often expand \( f(z) = \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n \), called a Laurent series
  • An \( n \)th-order pole at \( z_0 \) has a Laurent series whose most negative power of \( (z - z_0) \) is \( -n \).
  • Integrating a Laurent series on a closed path around \( z_0 \) yields \( 2 \pi i a_{-1} \), where \( a_{-1} \) is the coefficient of the term with \( n = -1 \) and is called the residue of the pole at \( z_0 \)
  • We evaluated \( \int_{-\infty}^{\infty} \frac{dx}{1+x^2} \) using Cauchy's integral theorem, finding \( \pi \), as expected from \( \tan^{-1}(\infty) - \tan^{-1}(-\infty) = \pi \)
Thu, 2/26 Lecture 12: Contour integration
  • We evaluated \( \int_{-\infty}^{\infty} \frac{\dd{x}}{1 + x^4} \) using the residue theorem, using a contour we closed in a giant semicircle at \( R \) in the upper half-plane.
  • Residue theorem: the integral around a closed contour on the complex plane of a function \( f(z) \) is equal to \( 2\pi i \) times the sum of the residues at the poles of the integrand in the interior of the contour.
  • Methods of computing residues:
    • Expand the integrand in a Laurent series around the pole at \( z_0 \) and identify the coefficient of the \( (z - z_0)^{-1} \) term.
    • Take the limit as \( z \to z_0 \) of \( (z - z_0) f(z) \) for a first-order pole at \( z_0 \).
    • Use l'Hopital's rule to evaluate this limit.
    • For an \( n \)th-order pole, take \( \lim_{z \to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z-z_0)^n f(z) \right] \).
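  A numerical cross-check of the contour-integration result for \( \int_{-\infty}^{\infty} \frac{\dd{x}}{1+x^4} \), comparing the residue sum to direct quadrature (a sketch; scipy is assumed available, as elsewhere in the course):

    import numpy as np
    from scipy.integrate import quad

    # Poles of 1/(1 + z^4) in the upper half-plane
    poles = [np.exp(1j * np.pi / 4), np.exp(3j * np.pi / 4)]

    # At a simple pole, the residue of 1/(1 + z^4) is 1/(4 z^3), i.e. -z/4 here
    residues = [-z / 4 for z in poles]

    print((2j * np.pi * sum(residues)).real)      # pi / sqrt(2)
    print(np.pi / np.sqrt(2))

    value, error = quad(lambda x: 1 / (1 + x**4), -np.inf, np.inf)
    print(value)                                  # should agree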
Week 7
Tue, 3/3 Lecture 13: Fourier Transforms
Reading: Fourier Transforms
  • the Fourier transform is the limiting case of a Fourier series for which the period tends to infinity
  • the Fourier transform converts a function of space (time) into a corresponding function of spatial frequency (frequency); the transform is invertible
  • \( \tilde{f}(\omega) = \int_{-\infty}^{\infty} f(t) e^{-i\omega t} \, dt \) and \( f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \tilde{f}(\omega) e^{i\omega t} \, d\omega \)
  • the Fourier representation of the Dirac delta function is \( \delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ikx}\,dk \), where \( \delta(x) \) has the property that \( \displaystyle \int_{a}^{b} f(x) \delta(x-x_0) \, dx = \begin{cases} f(x_0) & a < x_0 < b \\ -f(x_0) & a > x_0 > b \\ 0 & \text{otherwise}\end{cases} \)
  • it is often convenient to evaluate Fourier transforms using contour integration
  • if a simple pole lies on the path of integration, its contribution to the integral is \( i \pi \times \text{its residue} \), a prescription known as the Cauchy principal value
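  A numerical check of the transform convention above, using the fact that the Fourier transform of \( e^{-t^2/2} \) is \( \sqrt{2\pi}\, e^{-\omega^2/2} \); a sketch using a simple Riemann sum (the sampling range and frequencies are arbitrary):

    import numpy as np

    # Approximate f-tilde(omega) = integral of f(t) exp(-i omega t) dt for a
    # Gaussian f(t) = exp(-t**2/2) and compare with sqrt(2 pi) exp(-omega**2/2).
    t = np.linspace(-20, 20, 200001)
    dt = t[1] - t[0]
    f = np.exp(-t**2 / 2)

    for omega in (0.0, 1.0, 2.0):
        ft = np.sum(f * np.exp(-1j * omega * t)) * dt
        exact = np.sqrt(2 * np.pi) * np.exp(-omega**2 / 2)
        print(omega, ft.real, exact)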
Thu, 3/5 Lecture 14: Practice with Contour Integration and Fourier Transforms
Week 8
Tue, 3/10 Lecture 15: Midterm Review
Thu, 3/12 Exam 1: Midterm
Week 9
Tue, 3/17 Holiday 1: Spring Break
Thu, 3/19 Holiday 2: Spring Break
Week 10
Tue, 3/24 Lecture 16: Fast Fourier transforms
Reading: Numerical-FFT
  • Midterm redemption
  • Examples of the fast Fourier transform
  • The Cooley-Tukey algorithm (originally described by Gauss)
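  A minimal example of using numpy's FFT routines (which use a Cooley-Tukey approach for power-of-two lengths) to pick the frequency out of a sampled sine wave; the signal parameters are arbitrary:

    import numpy as np

    N = 1024                                 # number of samples (a power of two)
    dt = 0.01                                # sample spacing in seconds
    t = np.arange(N) * dt
    signal = np.sin(2 * np.pi * 7.0 * t)     # a 7 Hz sine wave

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(N, d=dt)

    peak = freqs[np.argmax(np.abs(spectrum))]
    print(peak)    # near 7 Hz, limited by the frequency resolution 1/(N*dt)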
Thu, 3/26 Lecture 17: Random Numbers, Stochastic Processes, and Monte Carlo Simulations
Reading: Random, Stochastic
  • If you need a refresher on the binomial distribution, or on differentiating inside a summation or integral, see Binomial.
  • Linear congruential generators, once a standard sort of random number generator, are efficient but generate strongly correlated values.
  • Xorshift algorithms do significantly better in many tests of randomness
  • We looked at cumulative sums to test visually for obvious repeating patterns
  • The \( \chi^2 \) test applied to a binned histogram of the output provides a way to compare observed deviations from mean values to expectation
  • Numpy's random number generator: from numpy.random import default_rng; rng = default_rng()
  • Transforming a probability distribution. If \( y = f(x) \) and \( x \) is distributed such that \( P(x)\,\mathrm{d}x \) is the probability that the random value lies between \( x \) and \( x + \mathrm{d}x \), then \( p(y) = P(x) / |f'(x)| \), evaluated at \( x = f^{-1}(y) \).
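  A sketch that checks the change-of-variables rule in the last bullet: transform uniform samples with \( y = x^2 \), for which \( p(y) = 1/(2\sqrt{y}) \), and compare a histogram to the analytic density (bin-averaged, since \( p(y) \) diverges at \( y = 0 \)); the seed and bin count are arbitrary:

    import numpy as np
    from numpy.random import default_rng

    rng = default_rng(42)
    x = rng.random(1_000_000)      # uniform on (0, 1), so P(x) = 1
    y = x**2                       # p(y) = P(x)/|dy/dx| = 1/(2*sqrt(y))

    counts, edges = np.histogram(y, bins=20, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Analytic density averaged over each bin: (sqrt(b) - sqrt(a)) / (b - a)
    expected = (np.sqrt(edges[1:]) - np.sqrt(edges[:-1])) / np.diff(edges)

    for c, obs, exp in zip(centers, counts, expected):
        print(f"{c:.3f}  observed {obs:.3f}  expected {exp:.3f}")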
Week 11
Tue, 3/31 Lecture 18: Review of Ordinary Differential Equations
Reading: DE1, DE2
Thu, 4/2 Lecture 19: Sturm-Liouville
Reading: Sturm Liouville
  • The Frobenius method for solving a second-order linear differential equation: substitute a power-series ansatz and shift summation indices so that all terms proportional to a given power of the independent variable can be collected and set to zero, yielding a recursion relation for the series coefficients.
  • Sturm-Liouville theory characterizes the eigenfunctions of self-adjoint operators \( L \), which satisfy \( \langle Lf, g\rangle = \langle f, Lg \rangle \) when the bilinear concomitant vanishes at the limits of the range of integration.
  • We will use scipy's solve_ivp to calculate numerical solutions to sets of ordinary differential equations.
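  A minimal solve_ivp example, solving the simple harmonic oscillator \( \ddot{x} = -\omega^2 x \) as a first-order system (a sketch; the equations treated in class may differ):

    import numpy as np
    from scipy.integrate import solve_ivp

    omega = 2.0

    def rhs(t, y):
        x, v = y                       # y = (position, velocity)
        return [v, -omega**2 * x]

    sol = solve_ivp(rhs, t_span=(0, 10), y0=[1.0, 0.0],
                    dense_output=True, rtol=1e-8, atol=1e-10)

    t = np.linspace(0, 10, 5)
    print(sol.sol(t)[0])               # numerical x(t)
    print(np.cos(omega * t))           # analytic solution cos(omega t)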
Week 12
Tue, 4/7 Lecture 20: The Diffusion Equation
Reading: PD1
  • Project check-in
  • Derivation of the heat equation, \( u_t = D u_{xx} \)
  • Gibbs phenomenon again
  • Seasonal variation of underground temperature: worked at the boards
Thu, 4/9 Lecture 21: Laplace’s Equation
Reading: PD2
  • If a function \( V \) satisfies Laplace’s equation in a region, the minimum (maximum) of \( V \) occurs on the boundary; Sara offered the proof.
  • A harmonic function satisfies Laplace’s equation. Because its second derivatives sum to zero, at any point it is either locally flat or has positive curvature in some directions and negative curvature in others, so it can have no interior bumps or dips.
  • Solving Laplace's equation in a circular region gives the cylindrical harmonics: \( V(r, \theta) = a_0 + a'_0 \ln r + \sum_{n=1}^{\infty} (r/a)^n (a_n \cos n\theta + b_n \sin n\theta) \).
Week 13
Tue, 4/14 Lecture 22: Numerical Approaches to Solving Partial Differential Equations
Reading: PD5
  • Approximating derivatives with finite differences
  • In the finite-difference approximation to Laplace's equation on a square grid, each interior point must equal the average of its four nearest neighbors.
  • Successive over-relaxation accelerates convergence by pushing each updated value past the average of its neighbors, overshooting in the same direction as the ordinary relaxation step.
  • Naive finite differencing of the diffusion equation in both \( x \) and \( t \) leads to the FTCS (forward in time, centered in space) method, which is numerically unstable unless the time step satisfies \( D \, \Delta t / \Delta x^2 \le \tfrac{1}{2} \).
  • Crank-Nicolson is the average of the FTCS and fully implicit methods, and is unconditionally stable. It involves solving a tridiagonal matrix equation, which is very efficient. See the Wikipedia page for a discussion of the Thomas algorithm.
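  A minimal relaxation sketch for Laplace's equation on a square grid, using plain Jacobi iteration (each interior point replaced by the average of its four neighbors); successive over-relaxation, as described above, would converge faster by overshooting each update. The grid size, boundary values, and tolerance are arbitrary choices.

    import numpy as np

    N = 50
    V = np.zeros((N, N))
    V[0, :] = 1.0                   # one side of the box held at V = 1, others at 0

    for iteration in range(20000):
        V_new = V.copy()
        # each interior point becomes the average of its four nearest neighbors
        V_new[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                                    V[1:-1, :-2] + V[1:-1, 2:])
        change = np.max(np.abs(V_new - V))
        V = V_new
        if change < 1e-5:
            break

    print(iteration, change)        # iterations used and the final maximum update
    print(V[N // 2, N // 2])        # close to 0.25 at the center, by symmetry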
Thu, 4/16 Lecture 23: Project Work Day
Week 14
Tue, 4/21 Lecture 24: The Wave Equation
Reading: PD3
Thu, 4/23 Lecture 25: Using Fourier Transforms to Solve PDEs
Reading: PD4
Week 15
Tue, 4/28 Lecture 26: Green's function for the one-dimensional diffusion equation
Reading: PD4
Thu, 4/30 Lecture 27
Week 16
Wed, 5/6 Exam 2: Presentations
Reading: Project Tips