Scientific Computing

Author

Aditya Dendukuri

1 Preface

Every time I encountered a concept I could not explain from scratch, I wrote it down. Richard Feynman held that the surest test of understanding is whether you can teach something in plain terms. These notes exist because that test kept exposing gaps. They started as a study guide for CS 111 at UC Santa Barbara and grew as topics came up in research: eigenvalue algorithms, iterative solvers, spectral graph methods, Koopman theory. The result is a reference guide written by a graduate student for himself, not a textbook. Treat it accordingly.

For a rigorous treatment of this material, use the lecture notes of Professor David Bindel (Cornell, CS 6210: Matrix Computations), publicly available on his course website.

Computers were built to do numerical computation. The word ``computer'' originally meant a person whose job was to evaluate functions by hand. The U.S. Army employed hundreds of them during the Second World War to compute artillery firing tables, a task so large it directly motivated the construction of ENIAC (1945). The machine's first classified application was thermonuclear weapon simulation, directed by John von Neumann, whose 1947 paper with Goldstine laid the foundations of numerical stability analysis. In 1950, Jule Charney used ENIAC to run the first machine weather forecast: Lewis Fry Richardson had estimated the same computation would require 64,000 human computers working in parallel. The Apollo guidance computer navigated to the Moon in real time with 4 kilobytes of memory. The Cooley-Tukey FFT (1965) reduced a core signal processing computation from \(O(n^2)\) to \(O(n \log n)\). The demand for numerical computation at scale was the original motivation for programmable machines.

Almost every numerical algorithm reduces a nonlinear problem to a sequence of linear ones, because linear systems are the only class of problems we understand completely: their solution structure, their sensitivity to perturbations, their reliable algorithms. Newton's method, finite element discretization, PCA, gradient-based optimization: each is, at its core, a repeated linearization. Linear algebra is the substrate of scientific computing.
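To make the claim concrete, here is a minimal sketch (not from the notes, written for illustration) of Newton's method for a system of equations. The function names and the example system are my own choices; the point is only that each nonlinear iteration reduces to solving one linear system \(J(x)\,\Delta x = -f(x)\).

```python
# Newton's method as repeated linearization: every step is one linear solve.
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by solving the linearized system at each iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # The nonlinear problem collapses to a linear one:
        # J(x) dx = -f(x), solved by standard dense linear algebra.
        dx = np.linalg.solve(jac(x), -fx)
        x = x + dx
    return x

# Illustrative system: intersect the circle x^2 + y^2 = 4 with the line y = x.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])
root = newton(f, jac, [1.0, 2.0])  # converges to (sqrt(2), sqrt(2))
```

The inner call to `np.linalg.solve` is the whole story: the quality of the nonlinear solver rests entirely on the quality of the linear one.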

Much of the perspective here came from CS 210A: Matrix Analysis, taught by Professor Shivkumar Chandrasekaran in the ECE department at UC Santa Barbara. If you have the chance to take that course, take it.

Aditya Dendukuri \\ UC Santa Barbara, 2025