9  Linearity

Linear algebra is built around the idea of linearity: a property that lets us decompose complex problems into independent parts and recombine their solutions. Before studying specific algorithms, we establish the core vocabulary — linear systems, rank, and subspaces — that underpins everything in this course.

A mapping \(L\) is linear if it satisfies two properties for all inputs \(\bx, \by\) and scalars \(\alpha, \beta\):

  1. Additivity: \(L(\bx + \by) = L(\bx) + L(\by)\),

  2. Homogeneity: \(L(\alpha\bx) = \alpha L(\bx)\).

Together these give the superposition principle: \(L(\alpha\bx + \beta\by) = \alpha L(\bx) + \beta L(\by)\).

9.1 Linear Systems

A linear system packages finitely many linear equations into a single matrix equation. The central question is when such a system has solutions and how many.

Definition: Linear System

A system of linear equations in matrix-vector form is \[\bA\bx = \bb,\] where \(\bA \in \fR^{m \times n}\) is the coefficient matrix, \(\bx \in \fR^n\) the unknown vector, and \(\bb \in \fR^m\) the right-hand side: \[\bA = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, \quad \bx = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad \bb = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}.\]

Exercise

Refer to the definition above.

  1. Write the system \(\{x + 2y = 3,\; 3x - y = 1\}\) in matrix-vector form \(\bA\bx = \bb\).

  2. In NumPy, solve it using np.linalg.solve(A, b) and verify by computing the residual np.linalg.norm(b - A @ x).

Theorem: Matrix-Vector Multiplication is Linear

The map \(T: \fR^n \to \fR^m\) defined by \(T(\bx) = \bA\bx\) is a linear transformation: for all \(\bx, \by \in \fR^n\) and \(\alpha, \beta \in \fR\), \[T(\alpha\bx + \beta\by) = \alpha T(\bx) + \beta T(\by).\]

By distributivity: \(\bA(\bx + \by) = \bA\bx + \bA\by = T(\bx) + T(\by)\). By scalar compatibility: \(\bA(\alpha\bx) = \alpha(\bA\bx) = \alpha T(\bx)\). Combining gives \(T(\alpha\bx + \beta\by) = \alpha T(\bx) + \beta T(\by)\).
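This theorem is easy to sanity-check numerically; the matrix, vectors, and scalars below are arbitrary choices for illustration:

```python
import numpy as np

# Check T(alpha*x + beta*y) == alpha*T(x) + beta*T(y) for T(x) = A @ x.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(4)
alpha, beta = 2.5, -1.3

lhs = A @ (alpha * x + beta * y)        # T(alpha*x + beta*y)
rhs = alpha * (A @ x) + beta * (A @ y)  # alpha*T(x) + beta*T(y)
assert np.allclose(lhs, rhs)
```

The two sides agree to floating-point precision, which is all a numerical check can promise.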

Corollary: Superposition Principle

If \(\bA\bx_1 = \bb_1\) and \(\bA\bx_2 = \bb_2\), then for any \(\alpha, \beta \in \fR\), \[\bA(\alpha\bx_1 + \beta\bx_2) = \alpha\bb_1 + \beta\bb_2.\]

Exercise

In this exercise, we will prove the corollary above.

  1. Suppose \(\bx_1\) solves \(\bA\bx = \bb_1\) and \(\bx_2\) solves \(\bA\bx = \bb_2\). Apply \(T\) to \(\alpha\bx_1 + \beta\bx_2\) and use linearity to conclude \(\bA(\alpha\bx_1 + \beta\bx_2) = \alpha\bb_1 + \beta\bb_2\).

  2. Use this result to solve \(\bA\bx = 3\bb_1 - 2\bb_2\) without any new computation.

  3. Why is the result above computationally valuable when solving \(\bA\bX = \bB\) for a matrix right-hand side \(\bB\)? How does np.linalg.solve(A, B) exploit this?
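The superposition corollary can also be illustrated directly in NumPy; the matrix, right-hand sides, and coefficients below are arbitrary examples:

```python
import numpy as np

# If A x1 = b1 and A x2 = b2, then alpha*x1 + beta*x2 solves
# A x = alpha*b1 + beta*b2 -- no new solve is required.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])

x1 = np.linalg.solve(A, b1)
x2 = np.linalg.solve(A, b2)

alpha, beta = 3.0, -2.0
x = alpha * x1 + beta * x2
assert np.allclose(A @ x, alpha * b1 + beta * b2)
```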

Theorem: Existence and Uniqueness

For a square system \(\bA\bx = \bb\) with \(\bA \in \fR^{n \times n}\), the system has a unique solution if and only if \(\bA\) is invertible (\(\det(\bA) \neq 0\)). If \(\bA\) is singular, the system either has no solution or infinitely many.
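A minimal sketch of both cases, with illustrative matrices: np.linalg.solve succeeds for an invertible coefficient matrix and raises LinAlgError when the matrix is exactly singular.

```python
import numpy as np

# Invertible coefficient matrix: a unique solution exists.
A = np.array([[1.0, 2.0], [3.0, -1.0]])   # det(A) = -7 != 0
b = np.array([3.0, 1.0])
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# Exactly singular coefficient matrix: solve reports failure.
S = np.array([[1.0, 2.0], [2.0, 4.0]])    # det(S) = 0 (row 2 = 2 * row 1)
singular = False
try:
    np.linalg.solve(S, b)
except np.linalg.LinAlgError:
    singular = True
assert singular
```

Note that whether the singular system has no solution or infinitely many depends on \(\bb\); solve only reports that no unique solution exists.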

Theorem: Equivalences for Invertibility

Let \(\bA \in \fR^{n \times n}\). The following are equivalent:

  1. \(\bA\) is invertible.

  2. \(\bA\bx = \bb\) has a unique solution for every \(\bb \in \fR^n\).

  3. \(\bA\bx = \bzero\) has only the trivial solution \(\bx = \bzero\).

  4. \(\text{rank}(\bA) = n\).

  5. The columns of \(\bA\) are linearly independent.

\((1)\Rightarrow(2)\): Set \(\bx = \bA^{-1}\bb\). If \(\bA\bx_1 = \bA\bx_2\), then \(\bA(\bx_1 - \bx_2) = \bzero\), so \(\bx_1 = \bx_2\).

\((2)\Rightarrow(3)\): Apply (2) with \(\bb = \bzero\); the unique solution is \(\bx = \bzero\).

\((3)\Rightarrow(4)\): The null space is \(\{\bzero\}\), so nullity\((\bA) = 0\). By the Rank-Nullity Theorem, rank\((\bA) = n\).

\((4)\Rightarrow(1)\): A square matrix with rank \(n\) has \(n\) linearly independent columns spanning \(\fR^n\), so the map \(\bx \mapsto \bA\bx\) is both onto and one-to-one, which is exactly invertibility.

\((3)\Leftrightarrow(5)\): Since \(\bA\bx\) is a linear combination of the columns of \(\bA\), the equation \(\bA\bx = \bzero\) having only the trivial solution says precisely that no nontrivial combination of the columns equals \(\bzero\), i.e., the columns are linearly independent.

Exercise

Refer to the theorem above.

  1. For each matrix, determine invertibility and give a one-sentence justification: \[\bA = \begin{pmatrix} 2 & -1 \\ 4 & -2 \end{pmatrix}, \qquad \bB = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}.\]

  2. For \(\bA = \begin{pmatrix}1&2\\3&6\end{pmatrix}\) and \(\bb = \begin{pmatrix}1\\4\end{pmatrix}\): is the system consistent, and if so, how many solutions? Verify with np.linalg.matrix_rank.

Definition: Linear Combination

A linear combination of vectors \(\{\bx_1, ..., \bx_k\}\) is any vector of the form \[\alpha_1\bx_1 + \alpha_2\bx_2 + \cdots + \alpha_k\bx_k, \quad \alpha_i \in \fR.\]

Tip

Matrix-vector multiplication \(\bA\bx\) is a linear combination of the columns of \(\bA\) with coefficients given by the entries of \(\bx\): \[\bA\bx = x_1\ba_1 + x_2\ba_2 + \cdots + x_n\ba_n,\] where \(\ba_i\) is the \(i\)-th column of \(\bA\). The system \(\bA\bx = \bb\) asks: is \(\bb\) a linear combination of the columns of \(\bA\)?
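The column view is one line of NumPy; the matrix and vector below are arbitrary examples:

```python
import numpy as np

# A @ x equals x1 * (column 1) + x2 * (column 2) of A.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
x = np.array([5.0, -1.0])

by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, by_columns)   # both equal (9, -3)
```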

Exercise

Refer to the definition and tip above.

  1. Express \(\bb = (7, -1)^T\) as a linear combination of \(\bv_1 = (1,1)^T\) and \(\bv_2 = (2,-1)^T\) by solving a \(2 \times 2\) system for the coefficients.

  2. For \(\bA = \begin{pmatrix}1&2\\3&4\end{pmatrix}\), write \(\bA\bx\) explicitly as a linear combination of the columns of \(\bA\).

Definition: Linear Dependence and Independence

A set of vectors \(\{\bx_1, ..., \bx_k\}\) is linearly dependent if there exist scalars \(\alpha_1, ..., \alpha_k\), not all zero, such that \[\alpha_1\bx_1 + \cdots + \alpha_k\bx_k = \bzero.\] If the only solution is \(\alpha_1 = \cdots = \alpha_k = 0\), the vectors are linearly independent.

Tip

Linear dependence means redundancy: some vector can be written as a combination of the others. If the columns of \(\bA\) are linearly dependent, then \(\bA\bx = \bzero\) has a nontrivial solution, so \(\bA\) is not invertible (equivalence (3) in the invertibility theorem above).
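To see the redundancy concretely, here is a made-up example whose third column is the sum of the first two:

```python
import numpy as np

# Columns: (1,4), (2,5), (3,9).  Column 3 = column 1 + column 2,
# so z = (1, 1, -1) is a nontrivial null vector: A @ z = 0.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0]])
z = np.array([1.0, 1.0, -1.0])

assert np.allclose(A @ z, 0)
assert np.linalg.matrix_rank(A) < A.shape[1]   # rank 2 < 3 columns
```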

Exercise

Refer to the definition above.

  1. Determine whether \(\{(1,2,3)^T,\,(4,5,6)^T,\,(7,8,9)^T\}\) is linearly dependent or independent. (Hint: stack as columns, find the rank.)

  2. If \(\bA \in \fR^{m \times n}\) with \(m < n\), must the columns be linearly dependent? Justify.

Definition: Basis

A basis for a vector space \(\mathcal{V}\) is a linearly independent spanning set. Every vector in \(\mathcal{V}\) can be uniquely written as a linear combination of the basis vectors. The number of basis vectors is the dimension of \(\mathcal{V}\).
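The unique coordinates of a vector with respect to a basis can be computed by solving a linear system; the basis below is an illustrative choice:

```python
import numpy as np

# Coordinates of b in the basis {v1, v2} solve B c = b,
# where B has the basis vectors as columns.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
B = np.column_stack([v1, v2])
b = np.array([3.0, 1.0])

c = np.linalg.solve(B, b)               # unique because B is invertible
assert np.allclose(c, [2.0, 1.0])       # b = 2*v1 + 1*v2
assert np.allclose(c[0] * v1 + c[1] * v2, b)
```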

Exercise

Refer to the definitions of basis and linear independence above.

  1. Show that \(\be_1 = (1,0,0)^T\), \(\be_2 = (0,1,0)^T\), \(\be_3 = (0,0,1)^T\) form a basis for \(\fR^3\).

  2. Can a set of 4 vectors form a basis for \(\fR^3\)? Why or why not?

9.2 Rank

Rank measures how many truly independent directions a matrix can produce. It determines whether a linear system has solutions and, if so, how many.

Definition: Rank

The rank of \(\bA \in \fR^{m \times n}\) is the maximum number of linearly independent rows (equivalently, columns) of \(\bA\). It equals the dimension of the column space and satisfies \(\text{rank}(\bA) \leq \min(m,n)\).

Tip

Rank measures the effective information content of \(\bA\): the number of independent directions it can produce. A matrix with low rank relative to its size has redundant rows or columns. In NumPy, np.linalg.matrix_rank(A) computes rank via SVD. In many applications such as data compression and machine learning, low-rank structure is actively exploited to reduce dimensionality.

Definition: Full Rank

A matrix \(\bA \in \fR^{m \times n}\) is full rank if \(\text{rank}(\bA) = \min(m, n)\). For a square matrix, full rank is equivalent to invertibility.

Example

Consider \[\bA = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{pmatrix}.\] Row 2 is twice row 1. Row-reducing: \[\xrightarrow{R_2 - 2R_1,\; R_3 - R_1} \begin{pmatrix}1&2&3\\0&0&0\\0&-1&-2\end{pmatrix} \xrightarrow{\text{swap } R_2, R_3} \begin{pmatrix}1&2&3\\0&-1&-2\\0&0&0\end{pmatrix}.\] Two pivot rows, so \(\text{rank}(\bA) = 2\).
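The hand computation can be confirmed with NumPy (np.linalg.matrix_rank computes the rank from the SVD):

```python
import numpy as np

# The matrix from the example: row 2 is twice row 1, so rank is 2, not 3.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2
```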

Exercise

Refer to the definitions of rank and full rank and the example above.

  1. Determine the rank by row reduction: \[\bA = \begin{pmatrix} 1 & 3 \\ 2 & 6 \end{pmatrix}, \qquad \bB = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 2 & -1 & 5 \end{pmatrix}.\]

  2. For a \(5 \times 3\) matrix \(\bA\) with \(\text{rank}(\bA) = 3\), what can you say about solutions to \(\bA\bx = \bb\)?

Theorem: Rank-Nullity Theorem

For any \(\bA \in \fR^{m \times n}\), \[\text{rank}(\bA) + \text{nullity}(\bA) = n,\] where \(\text{nullity}(\bA) = \dim(\ker(\bA))\) and \(\ker(\bA) = \{\bx \in \fR^n : \bA\bx = \bzero\}\).

Tip

For a square \(n \times n\) system \(\bA\bx = \bb\):

  • \(\text{rank}(\bA) = n\): the null space is \(\{\bzero\}\), so \(\bA\bx = \bb\) has a unique solution.

  • \(\text{rank}(\bA) = r < n\): if a solution exists, there are infinitely many; the solution set is an affine subspace of dimension \(n - r\).
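A sketch of the Rank-Nullity check in NumPy, reusing the rank-2 matrix from the example above; the zero-singular-value threshold of 1e-10 is an arbitrary tolerance choice:

```python
import numpy as np

# Rank-nullity for a singular 3x3 matrix: rank + nullity = n = 3.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
n = A.shape[1]
r = np.linalg.matrix_rank(A)

# Nullity = number of (numerically) zero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))
assert r + nullity == n              # 2 + 1 == 3
```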

Exercise

Refer to the Rank-Nullity Theorem above. Let \[\bA = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 5 & 0 \\ -1 & 1 & 6 \end{pmatrix}, \quad \bb = \begin{pmatrix} 1 \\ 4 \\ 3 \end{pmatrix}.\]

  1. Determine \(\text{rank}(\bA)\) by row-reducing \([\bA \mid \bb]\).

  2. How many solutions does this system have? Justify using the Rank-Nullity Theorem.

  3. Find the solution(s).

  4. In NumPy, solve using np.linalg.solve(A, b) and check the residual np.linalg.norm(b - A @ x). What residual do you expect for a consistent system?

Exercise

(Linear Independence and the Null Space) Refer to the Rank-Nullity Theorem and the definition of linear dependence above. Let \(\bA \in \fR^{m \times n}\) with \(m < n\).

  1. Argue that the columns of \(\bA\) must be linearly dependent.

  2. What does this imply about solutions to \(\bA\bx = \bb\)?

  3. Give a \(2 \times 3\) example where \(\bA\bx = \bb\) has infinitely many solutions. Find the general solution as a particular solution plus the null space.

  4. Can a wide system ever have no solution? Explain.

9.3 Subspaces and the Four Fundamental Subspaces

Every matrix naturally defines four subspaces that completely characterize the solution structure of \(\bA\bx = \bb\). These subspaces reappear in least squares, SVD, and eigenvalue problems throughout the notes.

Definition: Subspace

A nonempty subset \(\mathcal{S} \subseteq \fR^n\) is a subspace if it is closed under linear combinations:

  1. \(\bzero \in \mathcal{S}\),

  2. \(\bu, \bv \in \mathcal{S} \Rightarrow \bu + \bv \in \mathcal{S}\),

  3. \(\bu \in \mathcal{S},\; \alpha \in \fR \Rightarrow \alpha\bu \in \mathcal{S}\).

Tip

Every subspace must contain \(\bzero\) (set \(\alpha = 0\) in condition 3). Geometrically, subspaces are flat objects passing through the origin. Lines and planes through the origin in \(\fR^3\) are subspaces; those not passing through the origin are affine subspaces.

Exercise

Refer to the definition above.

  1. Determine whether each set is a subspace of \(\fR^2\): \[\mathcal{S}_1 = \{(x,\, 2x)^T : x \in \fR\}, \qquad \mathcal{S}_2 = \{(x,\, y)^T : x + y = 1\}.\]

  2. Show that \(\ker(\bA) = \{\bx : \bA\bx = \bzero\}\) is always a subspace of \(\fR^n\).

Definition: The Four Fundamental Subspaces

For \(\bA \in \fR^{m \times n}\) of rank \(r\), four subspaces arise naturally:

  1. Column space: \(\mathcal{R}(\bA) \subseteq \fR^m\), \(\dim = r\).

  2. Null space: \(\mathcal{N}(\bA) \subseteq \fR^n\), \(\dim = n - r\).

  3. Row space: \(\mathcal{R}(\bA^T) \subseteq \fR^n\), \(\dim = r\).

  4. Left null space: \(\mathcal{N}(\bA^T) \subseteq \fR^m\), \(\dim = m - r\).

The dimensions follow from the Rank-Nullity Theorem applied to \(\bA\) and \(\bA^T\).
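These dimension formulas are quick to check numerically; the \(3 \times 2\) matrix below is an arbitrary full-column-rank example:

```python
import numpy as np

# Dimensions of the four fundamental subspaces from m, n, and r.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
m, n = A.shape                       # m = 3, n = 2
r = np.linalg.matrix_rank(A)         # r = 2 (independent columns)

dims = {
    "column space (in R^m)": r,          # 2
    "null space (in R^n)": n - r,        # 0
    "row space (in R^n)": r,             # 2
    "left null space (in R^m)": m - r,   # 1
}
assert dims["null space (in R^n)"] == 0
assert dims["left null space (in R^m)"] == 1
```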

Exercise

Refer to the definition of the four fundamental subspaces and the Rank-Nullity Theorem above. Let \(\bA = \begin{pmatrix}1&2&3\\2&4&6\end{pmatrix}\).

  1. State the dimension of each of the four fundamental subspaces.

  2. Find explicit bases for \(\mathcal{R}(\bA)\) and \(\mathcal{N}(\bA)\).

  3. Verify the dimensions sum correctly as required by the Rank-Nullity Theorem.

Theorem: Fundamental Theorem of Linear Algebra

The four fundamental subspaces of \(\bA \in \fR^{m \times n}\) satisfy: \[\mathcal{R}(\bA^T) \perp \mathcal{N}(\bA) \;\text{ in } \fR^n, \qquad \mathcal{R}(\bA) \perp \mathcal{N}(\bA^T) \;\text{ in } \fR^m.\] Moreover, these are orthogonal complements: \(\mathcal{R}(\bA^T) \oplus \mathcal{N}(\bA) = \fR^n\) and \(\mathcal{R}(\bA) \oplus \mathcal{N}(\bA^T) = \fR^m\).

If \(\bv \in \mathcal{R}(\bA^T)\), then \(\bv = \bA^T\by\) for some \(\by\). For any \(\bx \in \mathcal{N}(\bA)\): \[\bv^T\bx = (\bA^T\by)^T\bx = \by^T(\bA\bx) = \by^T\bzero = 0.\] Complementarity follows from the Rank-Nullity Theorem: dimensions sum to \(n\) and the intersection is \(\{\bzero\}\). The second relation follows by applying the first to \(\bA^T\).
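The orthogonality relations can be verified numerically for the rank-1 matrix analyzed in the example below; the null-space basis vectors here are computed by hand:

```python
import numpy as np

# Row space perpendicular to null space: every row of A is orthogonal
# to every null-space vector, so A @ N = 0 for a null-space basis N.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Hand-computed null space basis of A (as columns): (-2,1,0) and (-3,0,1).
N = np.array([[-2.0, 1.0, 0.0],
              [-3.0, 0.0, 1.0]]).T
assert np.allclose(A @ N, 0)

# Column space perpendicular to left null space: (2,-1) spans N(A^T).
assert np.allclose(A.T @ np.array([2.0, -1.0]), 0)
```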

Tip

The orthogonal complement picture has immediate payoffs for \(\bA\bx = \bb\):

  • Solvability: \(\bA\bx = \bb\) has a solution iff \(\bb \in \mathcal{R}(\bA)\), equivalently \(\bb \perp \mathcal{N}(\bA^T)\).

  • Solution structure: If \(\hat{\bx}\) is any particular solution, the full set is \(\hat{\bx} + \mathcal{N}(\bA)\).

  • Uniqueness: Unique iff \(\mathcal{N}(\bA) = \{\bzero\}\), i.e., full column rank.

Example

Let \(\bA = \begin{pmatrix}1&2&3\\2&4&6\end{pmatrix}\), \(\text{rank}(\bA) = 1\).

  • \(\mathcal{R}(\bA) = \text{span}\{(1,2)^T\}\); \(\mathcal{N}(\bA^T) = \text{span}\{(2,-1)^T\}\). Check: \((1,2)\cdot(2,-1) = 0\). \(✓\)

  • \(\mathcal{R}(\bA^T) = \text{span}\{(1,2,3)^T\}\); \(\mathcal{N}(\bA)\) is a plane spanned by \((-2,1,0)^T\) and \((-3,0,1)^T\).

\(\bA\bx = \bb\) has a solution only when \(\bb = \alpha(1,2)^T\).

These four subspaces reappear in every major topic of these notes. In least squares, the residual \(\bb - \bA\hat{\bx}\) lies in \(\mathcal{N}(\bA^T)\). In SVD, the right singular vectors split into bases for \(\mathcal{R}(\bA^T)\) and \(\mathcal{N}(\bA)\). In eigenvalue problems, the concept generalizes to invariant subspaces.

Definition: Linear Span

The span of \(\{\bx_1, ..., \bx_k\}\) is the set of all their linear combinations: \[\text{span}\{\bx_1, ..., \bx_k\} = \{\alpha_1\bx_1 + \cdots + \alpha_k\bx_k : \alpha_i \in \fR\}.\] This is the smallest subspace of \(\fR^n\) containing the given vectors.

Example

For \(\bv_1 = (1,2)^T\) and \(\bv_2 = (3,4)^T\): assume \(\alpha(1,2) + \beta(3,4) = (0,0)\). Then \(\alpha + 3\beta = 0\) and \(2\alpha + 4\beta = 0\), forcing \(\beta = 0\) then \(\alpha = 0\). So \(\{\bv_1, \bv_2\}\) is independent and \(\text{span}\{\bv_1, \bv_2\} = \fR^2\).

Exercise

Refer to the definition of span and the example above.

  1. What is \(\text{span}\{(1,0,0)^T,\,(0,1,0)^T\}\) in \(\fR^3\)? What is its dimension?

  2. For \(\bA = \begin{pmatrix}1&2&3\\2&4&6\end{pmatrix}\), verify that \(\mathcal{R}(\bA) = \text{span}\{\text{columns of }\bA\}\) and find its dimension.

  3. In NumPy, confirm that adding the column \((6,12)^T\) to \(\bA\) does not increase np.linalg.matrix_rank(A).