
The Dimension of a Vector Space: A Measure of Degrees of Freedom
In the previous article, we established that a basis for a vector space \( V \) is a set of vectors that is both linearly independent and spans \( V \). A natural question arises: for a given vector space, do all bases have the same number of elements? The answer, for a vast and important class of spaces, is a resounding yes. This fundamental number, the number of vectors in any basis, is called the dimension of the vector space. It is the single most important numerical invariant associated with a vector space, providing a precise measure of its "degrees of freedom" or "complexity."
2. Foundational Theorems: The Road to Dimension
The entire theory of dimension rests on two pivotal theorems concerning the relationship between spanning sets and linearly independent sets.
Theorem 2.1 (The Replacement Theorem): Let \( V \) be a vector space, let \( S = \{\mathbf{v}_1, \dots, \mathbf{v}_p\} \) be a set that spans \( V \), and let \( T = \{\mathbf{w}_1, \dots, \mathbf{w}_m\} \) be a linearly independent set in \( V \). Then \( m \leq p \).
*Informal Proof Sketch:* The core idea is one of "replacement." Since \( S \) spans \( V \), we can express each vector in \( T \) as a linear combination of vectors in \( S \). The linear independence of \( T \) forces a kind of economy: you cannot have more independent vectors \( \mathbf{w}_i \) than you have spanning vectors \( \mathbf{v}_j \) to build them from. A rigorous proof often proceeds by assuming \( m > p \) and showing this leads to a contradiction of the linear independence of \( T \).
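For readers who like to experiment, here is a minimal numerical illustration (not a proof), assuming NumPy is available and using arbitrary random vectors: since \( \mathbb{R}^2 \) is spanned by two vectors, Theorem 2.1 predicts that no three vectors in \( \mathbb{R}^2 \) can be linearly independent.

```python
import numpy as np

# A numerical illustration of Theorem 2.1 (not a proof): R^2 is spanned
# by two vectors, so no set of three vectors in R^2 can be linearly
# independent. The three vectors here are arbitrary random choices.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3))  # three vectors in R^2, stored as columns

# The rank of a matrix is the size of the largest linearly independent
# subset of its columns; with only 2 rows it can never reach 3.
print(np.linalg.matrix_rank(T))  # at most 2, so the columns are dependent
```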
This theorem has an immediate and profound corollary.
Theorem 2.2 (Invariance of Basis Size): If a vector space \( V \) has a basis \( \mathcal{B} \) containing \( n \) vectors and another basis \( \mathcal{C} \) containing \( m \) vectors, then \( m = n \).
*Proof:* Apply Theorem 2.1 twice.
- Since \( \mathcal{B} \) spans \( V \) and \( \mathcal{C} \) is linearly independent in \( V \), we have \( m \leq n \).
- Since \( \mathcal{C} \) spans \( V \) and \( \mathcal{B} \) is linearly independent in \( V \), we have \( n \leq m \).
The only conclusion is \( m = n \). ∎
This theorem is the bedrock. It guarantees that the following definition is unambiguous.
3. The Formal Definition of Dimension
A vector space \( V \) is called finite-dimensional if it has a basis consisting of a finite number of vectors.
The dimension of a finite-dimensional vector space \( V \), denoted \( \dim(V) \), is the number of vectors in any basis for \( V \).
The dimension of the zero vector space \( \{\mathbf{0}\} \) is defined to be 0.
If a space does not have a finite basis, it is called infinite-dimensional.
Example 3.1 (Canonical Examples):
* \( \dim(\mathbb{R}^n) = n \). The standard basis \( \{\mathbf{e}_1, \dots, \mathbf{e}_n\} \) has \( n \) vectors.
* \( \dim(\mathbb{C}^n) = n \), when considered as a complex vector space.
* \( \dim(P_n) = n+1 \), where \( P_n \) is the space of polynomials of degree at most \( n \). The standard basis \( \{1, x, x^2, \dots, x^n\} \) has \( n+1 \) vectors.
* The space of all \( m \times n \) matrices with real entries, \( M_{m \times n} \), has dimension \( m \cdot n \). A standard basis is the set of matrices \( \{E_{ij}\} \) where \( E_{ij} \) has a 1 in the \( (i,j) \)-th position and 0 elsewhere.
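As a quick sanity check on the last item, the following sketch (assuming NumPy) builds the standard basis \( \{E_{ij}\} \) for \( M_{2 \times 3} \), flattens each matrix into a vector, and confirms that the \( m \cdot n = 6 \) matrices are linearly independent.

```python
import numpy as np

# A sanity check on dim(M_{m x n}) = m * n: build the standard basis
# {E_ij}, flatten each matrix into a vector, and verify that the
# resulting m*n vectors are linearly independent.
m, n = 2, 3
basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0          # E_ij: a 1 in position (i, j), 0 elsewhere
        basis.append(E)

stacked = np.column_stack([E.ravel() for E in basis])
print(np.linalg.matrix_rank(stacked) == m * n)  # True: dim(M_{2x3}) = 6
```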
Example 3.2 (An Infinite-Dimensional Space):
The space \( P \) of *all* polynomials (with no upper bound on the degree) is infinite-dimensional. The infinite set \( \{1, x, x^2, x^3, \dots\} \) is linearly independent and spans \( P \), but no finite subset of it can span \( P \). Any finite set of polynomials has a maximum degree, say \( N \), and thus cannot generate a polynomial of degree \( N+1 \).
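The degree argument can be made concrete. The sketch below (a Python illustration with arbitrarily chosen sample polynomials, encoded as coefficient vectors) shows that appending \( x^{N+1} \) to a finite set strictly increases the rank, so \( x^{N+1} \) is not in the span.

```python
import numpy as np

# Illustrating why no finite set of polynomials spans P. Sample
# polynomials are encoded as coefficient vectors (index k = coeff of x^k).
polys = [np.array([1.0]),             # 1
         np.array([0.0, 2.0]),        # 2x
         np.array([1.0, 0.0, 3.0])]   # 1 + 3x^2

N = max(len(p) for p in polys) - 1    # maximum degree in the set
size = N + 2                          # room for coefficients up to x^(N+1)

A = np.zeros((size, len(polys)))      # coefficient vectors as columns
for j, p in enumerate(polys):
    A[:len(p), j] = p

target = np.zeros(size)
target[N + 1] = 1.0                   # the polynomial x^(N+1)

# x^(N+1) lies in the span iff appending it leaves the rank unchanged.
augmented = np.column_stack([A, target])
print(np.linalg.matrix_rank(augmented) == np.linalg.matrix_rank(A))  # False
```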
4. Consequences and Applications of Dimension
The concept of dimension is not merely a label; it has powerful theoretical and practical consequences.
4.1 Characterizing Bases
Dimension provides a simple test for a set to be a basis.
Theorem 4.1: Let \( V \) be an \( n \)-dimensional vector space, and let \( S = \{\mathbf{v}_1, \dots, \mathbf{v}_n\} \) be a set of exactly \( n \) vectors in \( V \). Then the following are equivalent:
1. \( S \) is a basis for \( V \).
2. \( S \) is linearly independent.
3. \( S \) spans \( V \).
*Proof:* The implications (1) ⇒ (2) and (1) ⇒ (3) are immediate from the definition of a basis. We prove (2) ⇒ (1) and (3) ⇒ (1).
(2) ⇒ (1): Suppose \( S \) is linearly independent but not a basis. Then it does not span \( V \), so there exists a vector \( \mathbf{v} \in V \) not in \( \operatorname{span}(S) \). The larger set \( S \cup \{\mathbf{v}\} \) is linearly independent (as \( \mathbf{v} \) is not a linear combination of the others). But this is a linearly independent set of size \( n+1 \) in a space of dimension \( n \), which contradicts Theorem 2.1. Therefore, \( S \) must also span \( V \), making it a basis.
(3) ⇒ (1): Suppose \( S \) spans \( V \) but is not a basis. Then it is linearly dependent, so some vector in \( S \) is a linear combination of the others. Removing that vector yields a smaller set \( S' \), of size \( n-1 \), that still spans \( V \). But a basis for \( V \) is a linearly independent set of \( n \) vectors, and by Theorem 2.1 a spanning set must have at least as many vectors as any linearly independent set. This contradiction shows that \( S \) must also be linearly independent, making it a basis. ∎
Example 4.2: We know \( \dim(\mathbb{R}^3) = 3 \). Consider the set \( S = \left\{ \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix} \right\} \). To check if it's a basis, we don't need to verify both spanning and independence. We can check just one property. Let's check linear independence by forming a matrix and computing its determinant:
\[A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 2 & 1 & 3 \end{bmatrix}.\]
We find \( \det(A) = 1(1\cdot3 - 1\cdot1) - 0 + 1(0\cdot1 - 1\cdot2) = 1(2) + 1(-2) = 0 \). Since the determinant is zero, the columns are linearly dependent. Thus, by Theorem 4.1, \( S \) is not a basis.
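The same conclusion is easy to check numerically; a small NumPy sketch:

```python
import numpy as np

# Checking Example 4.2 numerically: a zero determinant (equivalently,
# rank below 3) shows the three vectors cannot form a basis of R^3.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [2, 1, 3]], dtype=float)

print(np.isclose(np.linalg.det(A), 0.0))  # True: columns are dependent
print(np.linalg.matrix_rank(A))           # 2: S spans only a plane in R^3
```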
4.2 Dimensions of Subspaces
Dimension gives a very clean way to understand the structure of subspaces.
Theorem 4.3: Let \( V \) be an \( n \)-dimensional vector space, and let \( W \) be a subspace of \( V \). Then:
- \( \dim(W) \leq \dim(V) \).
- If \( \dim(W) = \dim(V) \), then \( W = V \).
*Proof:*
- Any basis for \( W \) is a linearly independent set in \( V \). By Theorem 2.1, the size of this basis cannot exceed \( n \).
- If \( \dim(W) = n \), then a basis for \( W \) is a set of \( n \) linearly independent vectors in \( V \). By Theorem 4.1, this set is also a basis for \( V \). Hence, \( W = \operatorname{span}(\text{basis}) = V \). ∎
This theorem is incredibly useful. For example, to prove that a subspace \( W \) of \( \mathbb{R}^3 \) is actually all of \( \mathbb{R}^3 \), it suffices to find a basis for \( W \) with 3 vectors. You don't need to check that every vector in \( \mathbb{R}^3 \) is in \( W \).
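A minimal sketch of this shortcut in NumPy, using three arbitrarily chosen vectors that span a subspace \( W \) of \( \mathbb{R}^3 \):

```python
import numpy as np

# The shortcut from Theorem 4.3, with arbitrarily chosen vectors: if a
# subspace W of R^3 contains 3 linearly independent vectors, then
# dim(W) = 3 = dim(R^3), so W must be all of R^3.
vectors = np.array([[1.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0],
                    [0.0, 1.0, 2.0]]).T  # the three vectors, as columns

print(np.linalg.matrank := np.linalg.matrix_rank(vectors))  # 3
print(np.linalg.matrix_rank(vectors) == 3)                  # True: W = R^3
```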
Example 4.4: Let \( W \) be the subspace of \( \mathbb{R}^4 \) spanned by:
\[\mathbf{v}_1 = \begin{bmatrix} 1 \\ 2 \\ -1 \\ 3 \end{bmatrix}, \quad\mathbf{v}_2 = \begin{bmatrix} 2 \\ 4 \\ -2 \\ 6 \end{bmatrix},\quad\mathbf{v}_3 = \begin{bmatrix} 1 \\ 3 \\ 0 \\ 4\end{bmatrix}, \quad\mathbf{v}_4 = \begin{bmatrix} 2 \\ 5 \\ -1 \\ 7\end{bmatrix}.\]
Find \( \dim(W) \).
*Solution:* We find a basis for \( W \) by placing the vectors as columns in a matrix \( A \) and row reducing to find the pivot columns.
\[A = \begin{bmatrix}1 & 2 & 1 & 2 \\2 & 4 & 3 & 5 \\-1 & -2 & 0 & -1 \\3 & 6 & 4 & 7\end{bmatrix} \sim\begin{bmatrix}1 & 2 & 1 & 2 \\0 & 0 & 1 & 1\\0 & 0 & 1 & 1 \\0 & 0 & 1 & 1\end{bmatrix} \sim\begin{bmatrix}1 & 2 & 0 & 1 \\0 & 0 & 1 & 1 \\0 & 0 & 0 & 0 \\0 & 0 & 0 & 0\end{bmatrix}.\]
The pivot columns are columns 1 and 3. Therefore, \( \{\mathbf{v}_1, \mathbf{v}_3\} \) is a basis for \( W \), and \( \dim(W) = 2 \).
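For comparison, the same computation can be done symbolically. The sketch below assumes SymPy, whose `Matrix.rref()` returns the reduced row echelon form together with the pivot-column indices:

```python
import sympy as sp

# Reproducing Example 4.4 symbolically: rref() returns the reduced
# matrix and a tuple of pivot-column indices (0-based).
A = sp.Matrix([[ 1,  2, 1,  2],
               [ 2,  4, 3,  5],
               [-1, -2, 0, -1],
               [ 3,  6, 4,  7]])

rref_form, pivots = A.rref()
print(pivots)       # (0, 2): columns 1 and 3 in 1-based terms, i.e. v1, v3
print(len(pivots))  # 2 = dim(W)
```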
4.3 The Rank-Nullity Theorem
Perhaps the most profound application of dimension is the Fundamental Theorem of Linear Algebra, Part I, also known as the Rank-Nullity Theorem.
Theorem 4.5 (Rank-Nullity): Let \( T: V \to W \) be a linear transformation between finite-dimensional vector spaces. Then:
\[\dim(V) = \dim(\operatorname{null}(T)) + \dim(\operatorname{range}(T)) = \operatorname{nullity}(T) + \operatorname{rank}(T).\]
*Interpretation:* For a linear transformation, the dimension of the domain \( V \) is "partitioned" into two parts: the dimension of the kernel (or null space), which measures how much information is "lost" or "collapsed to zero" by \( T \), and the dimension of the image (or range), which measures how much of the target space \( W \) is actually "hit" by \( T \).
Example 4.6: Let \( T: \mathbb{R}^5 \to \mathbb{R}^4 \) be the linear transformation given by \( T(\mathbf{x}) = A\mathbf{x} \), where
\[A = \begin{bmatrix}1 & 2 & 0 & -1 & 3 \\0 & 0 & 1 & 2 & -1 \\2 & 4 & 0 & -2 & 6 \\1 & 2 & 1 & 1 & 2\end{bmatrix}.\]
Find the rank and nullity of \( T \).
*Solution:* We row reduce \( A \):
\[A \sim \begin{bmatrix}1 & 2 & 0 & -1 & 3 \\0 & 0 & 1 & 2 & -1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
There are 2 pivot columns. Therefore:
* \( \operatorname{rank}(T) = \dim(\operatorname{range}(T)) = \dim(\operatorname{Col}(A)) = 2 \).
* \( \operatorname{nullity}(T) = \dim(\operatorname{null}(T)) = \dim(V) - \operatorname{rank}(T) = 5 - 2 = 3 \).
This tells us that the transformation \( T \) maps a 5-dimensional space onto a 2-dimensional plane within \( \mathbb{R}^4 \), and the set of vectors that get mapped to zero is a 3-dimensional subspace of \( \mathbb{R}^5 \).
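The bookkeeping in this example is easy to verify numerically; a minimal NumPy sketch:

```python
import numpy as np

# Verifying the Rank-Nullity bookkeeping of Example 4.6 with NumPy.
A = np.array([[1, 2, 0, -1,  3],
              [0, 0, 1,  2, -1],
              [2, 4, 0, -2,  6],
              [1, 2, 1,  1,  2]], dtype=float)

rank = np.linalg.matrix_rank(A)      # dim(range(T))
nullity = A.shape[1] - rank          # dim(null(T)) = dim(V) - rank
print(rank, nullity)                 # 2 3
print(rank + nullity == A.shape[1])  # True: rank + nullity = dim(V) = 5
```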
5. Conclusion: Dimension as a Unifying Concept
The dimension of a vector space is far more than a mere count. It is a fundamental invariant that:
* Classifies vector spaces up to isomorphism. (Any two \( n \)-dimensional vector spaces over the same field are isomorphic).
* Governs the possible sizes of linearly independent and spanning sets.
* Quantifies the structure of subspaces and the behavior of linear transformations via the Rank-Nullity Theorem.
* Provides a powerful computational shortcut for verifying bases.
In essence, dimension is the lens through which we perceive the size and structure of linear spaces, translating geometric intuition into precise algebraic language. It is the bridge between the abstract world of vector spaces and the concrete world of coordinate geometry and matrix algebra.