Many difficult problems can be handled easily once relevant information is organized in a certain way. This text aims to teach you how to organize information in cases where certain mathematical structures are present. Linear algebra is, in general, the study of those structures: vector spaces and the linear functions between them.

In broad terms, vectors are things you can add and linear functions are functions of vectors that respect vector addition. The goal of this text is to teach you to organize information about vector spaces in a way that makes problems involving linear functions of many variables easy. (Or at least tractable.)

Example 1: Organizing and Reorganizing Information

You own stock in 3 companies: Google, Netflix, and Apple. The value \( V \) of your stock portfolio as a function of the number of shares you own \( s_G, s_N, s_A \) of these companies is:

\(V(s_G, s_N, s_A) = 24s_G + 80s_A + 35s_N\)

Here is an ill-posed question: What is \( V \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \)?

The column of three numbers is ambiguous! Is it meant to denote:

* 1 share of \( G \), 2 shares of \( N \), and 3 shares of \( A \)?

* 1 share of \( N \), 2 shares of \( G \), and 3 shares of \( A \)?

Do we multiply the first number of the input by 24 or by 35? No one has specified an order for the variables, so we do not know how to calculate an output associated with a particular input.

A different notation for \( V \) can clear this up; we can denote \( V \) itself as an ordered triple of numbers that reminds us what to do to each number from the input.


The solution is to pair the function with a specific order for the variables. We denote this organized function as a row vector.

Using order \(B = [G, A, N]\):

The function is represented as:

\(\begin{bmatrix} 24 & 80 & 35 \end{bmatrix}_B\)

The calculation is then defined as:

\(V \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}_B = \begin{bmatrix} 24 & 80 & 35 \end{bmatrix} \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = 24(1) + 80(2) + 35(3) = 289\)

Here, the input \(\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}_B\) means \(\begin{pmatrix} s_G \\ s_A \\ s_N \end{pmatrix}\).

Using a different order \(B' = [N, A, G]\):

The same function is now represented as:

\(\begin{bmatrix} 35 & 80 & 24 \end{bmatrix}_{B'}\)

The calculation becomes:

\(V \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}_{B'} = \begin{bmatrix} 35 & 80 & 24 \end{bmatrix} \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = 35(1) + 80(2) + 24(3) = 267\)

Here, the input \(\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}_{B'}\) means \(\begin{pmatrix} s_N \\ s_A \\ s_G \end{pmatrix}\).
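The two evaluations above can be mimicked in code. A minimal sketch (the function name `evaluate` and the variable names are invented for illustration): a representation of \( V \) is just a row of coefficients paired with a chosen order of the variables.

```python
# A minimal sketch (names invented): the "representation" of V is a row of
# coefficients paired with a chosen order of the variables.

def evaluate(coeffs, column):
    """Dot a row of coefficients with a column of share counts."""
    return sum(c * x for c, x in zip(coeffs, column))

V_B = (24, 80, 35)   # order B  = [G, A, N]
V_Bp = (35, 80, 24)  # order B' = [N, A, G]

column = (1, 2, 3)   # the same column of numbers, read against each order
print(evaluate(V_B, column))   # 24*1 + 80*2 + 35*3 = 289
print(evaluate(V_Bp, column))  # 35*1 + 80*2 + 24*3 = 267
```

The same column of numbers produces different values because the coefficients are read against different orderings.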

This demonstrates a key idea: the same function \(V\) assigns different numerical results to the same column of numbers, depending on the chosen order (the basis). Organizing this information, that is, specifying the order of the variables, is crucial for clear communication and calculation. This freedom to organize information is a central theme of linear algebra.


What are Vectors?

Vectors are things you can add together and multiply by scalars (numbers). Here are some examples of different kinds of vectors:

Example 2 (Vector Addition)

(A) Numbers:

Both 3 and 5 are numbers, and so is their sum:

\(3 + 5 = 8\)

(B) 3-vectors:

\[\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}\]

(C) Polynomials:

If \( p(x) = 1 + x - 2x^2 + 3x^3 \) and \( q(x) = x + 3x^2 - 3x^3 + x^4 \), then their sum is:

\[p(x) + q(x) = 1 + 2x + x^2 + 0x^3 + x^4 = 1 + 2x + x^2 + x^4\]

(D) Power Series:

If \( f(x) = 1 + x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \cdots \) and \( g(x) = 1 - x + \frac{1}{2!}x^2 - \frac{1}{3!}x^3 + \cdots \), then:

\[f(x) + g(x) = 2 + \frac{2}{2!}x^2 + \frac{2}{4!}x^4 + \cdots\]

(E) Functions:

If \( f(x) = e^x \) and \( g(x) = e^{-x} \), then:

\[f(x) + g(x) = e^x + e^{-x} = 2\cosh x\]

Important Observations:

  1. Different kinds of vectors cannot be added; it makes no sense to add a column vector to a function:

\[\begin{pmatrix} 9 \\ 3 \end{pmatrix} + e^x \quad \text{is meaningless}\]

  2. Scalar multiplication works naturally for each type:

\[4\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \\ 0 \end{pmatrix}\]

\[\frac{1}{3}\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} \\ \frac{1}{3} \\ 0 \end{pmatrix}\]

  3. Every vector type has a zero vector:

- Numbers: \( 0 \)

- 3-vectors: \( \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \)

- Polynomials: \( 0 \) (the zero polynomial)

- Power series: \( 0 + 0x + 0x^2 + \cdots \)

- Functions: \( 0 \) (the zero function)
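These componentwise operations can be sketched in code; the helper names below (`add_tuples`, `add_polys`) are invented, and polynomials are assumed to be stored as coefficient lists \([c_0, c_1, c_2, \dots]\):

```python
# A sketch (names invented): "addition" looks different for each kind of
# vector, but follows the same componentwise rule.

def add_tuples(u, v):
    """Add two n-vectors stored as tuples."""
    return tuple(a + b for a, b in zip(u, v))

def add_polys(p, q):
    """Add two polynomials stored as coefficient lists [c0, c1, c2, ...]."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))   # pad the shorter list with zeros
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# (B) 3-vectors: (1,1,0) + (0,1,1) = (1,2,1)
print(add_tuples((1, 1, 0), (0, 1, 1)))
# (C) p(x) = 1 + x - 2x^2 + 3x^3 and q(x) = x + 3x^2 - 3x^3 + x^4
print(add_polys([1, 1, -2, 3], [0, 1, 3, -3, 1]))  # [1, 2, 1, 0, 1]
```

The polynomial result matches Example 2(C): \(1 + 2x + x^2 + 0x^3 + x^4\).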

Summary

Vectors are things you can add and scalar multiply.

Examples of vector types:

  • Numbers
  • n-vectors
  • Polynomials
  • Power series
  • Functions with a certain domain

In any situation using vectors, you need to define how to add vectors and multiply them by scalars. The key insight is that many different mathematical objects can be treated as vectors if they follow these two operations.


What are Linear Functions?

In linear algebra, we study functions that take vectors as inputs and produce vectors as outputs. Unlike calculus where we often work with functions from real numbers to real numbers, linear algebra deals with functions between vector spaces.

Examples of Problems as Vector Functions

Example 3 (Questions involving Functions of Vectors)

(A) What number \( x \) satisfies \( 10x = 3 \)?

→ Function: \( f(x) = 10x \)

(B) What 3-vector \( \mathbf{u} \) satisfies:

\[\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \times \mathbf{u} = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}\]

→ Function: \( f(\mathbf{u}) = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \times \mathbf{u} \)

(C) What polynomial \( p \) satisfies:

\[\int_{-1}^{1} p(y)dy = 0 \quad \text{and} \quad \int_{-1}^{1} yp(y)dy = 1\]

→ Function: \( f(p) = \begin{pmatrix} \int_{-1}^{1} p(y)dy \\ \int_{-1}^{1} yp(y)dy \end{pmatrix} \)

(D) What power series \( f(x) \) satisfies:

\[x\frac{d}{dx}f(x) - 2f(x) = 0\]

→ Function: \( L(f) = x\frac{df}{dx} - 2f \)

(E) What number \( x \) satisfies \( 4x^2 = 1 \)?

→ Function: \( f(x) = 4x^2 \)

All these questions have the form: What vector \( X \) satisfies \( f(X) = B \)?

The Essential Property: Linearity

A function \( L \) is linear if it satisfies two key properties:

  1. Additivity:

\[L(\mathbf{u} + \mathbf{v}) = L(\mathbf{u}) + L(\mathbf{v}) \]

  2. Homogeneity:

\[L(c\mathbf{u}) = cL(\mathbf{u})\]

for any scalar \( c \).

Combined Property: For any vectors \( \mathbf{u}, \mathbf{v} \) and scalars \( c, d \):

\[L(c\mathbf{u} + d\mathbf{v}) = cL(\mathbf{u}) + dL(\mathbf{v})\]
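The combined property can be spot-checked numerically. The sketch below (helper names invented) does this for a map given by a \(2 \times 2\) matrix:

```python
# A sketch (names invented) spot-checking L(cu + dv) = cL(u) + dL(v)
# for a map given by a 2x2 matrix.

def L(v):
    """The linear map (x, y) -> (2x + 6y, 4x + 8y)."""
    x, y = v
    return (2 * x + 6 * y, 4 * x + 8 * y)

def combine(c, u, d, v):
    """Form the linear combination c*u + d*v of two 2-vectors."""
    return (c * u[0] + d * v[0], c * u[1] + d * v[1])

u, v, c, d = (1, 2), (3, -1), 5, -2
left = L(combine(c, u, d, v))
right = combine(c, L(u), d, L(v))
print(left == right)  # True for any choice of u, v, c, d
```

A single numerical check does not prove linearity, but a failed check would disprove it, as it does for \( f(x) = x^2 \).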

Key Example: The Derivative Operator

The derivative operator \( \frac{d}{dx} \) is linear because:

  1. \( \frac{d}{dx}(cf) = c\frac{df}{dx} \)
  2. \( \frac{d}{dx}(f + g) = \frac{df}{dx} + \frac{dg}{dx} \)

If we view functions as vectors, this shows the derivative is a linear transformation.

Terminology

Linear functions are also called:

- Linear transformations

- Linear operators

- Linear maps

Important Note:

Most functions are not linear. For example, \( f(x) = x^2 \) is not linear because:

\[f(1 + 1) = 4 \neq f(1) + f(1) = 2\]

The power of linear algebra comes from studying systems that can be expressed as:

\[L\mathbf{v} = \mathbf{w}\]

where \( L \) is a linear transformation, \( \mathbf{w} \) is known, and \( \mathbf{v} \) is unknown.


So, What is a Matrix?

Matrices are concrete representations of linear functions. They organize information about how linear transformations work.

The Fruity Example: A System of Linear Equations

A room contains \(x\) bags and \(y\) boxes of fruit:

- Each bag: 2 apples + 4 bananas

- Each box: 6 apples + 8 bananas

- Total: 20 apples + 28 bananas

This gives us the system:

\[\begin{aligned}2x + 6y &= 20 \\4x + 8y &= 28\end{aligned}\]

Matrix Representation

We can rewrite this using vectors and matrices:

\[x\begin{pmatrix}2\\4\end{pmatrix} +y\begin{pmatrix}6\\8\end{pmatrix} =\begin{pmatrix}20\\28\end{pmatrix}\]

This becomes:

\[\begin{pmatrix}2 & 6\\4 &8\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}20\\28\end{pmatrix}\]

Where the matrix is defined by:

\[\begin{pmatrix}2 & 6\\4 &8\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} :=x\begin{pmatrix}2\\4\end{pmatrix} +y\begin{pmatrix}6\\8\end{pmatrix} = \begin{pmatrix}2x + 6y\\4x +8y\end{pmatrix}\]

General Matrix-Vector Multiplication

For a general \(2 \times 2\) matrix:

\[\begin{pmatrix}p & q\\r &s\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}px+ qy\\rx + sy\end{pmatrix} =x\begin{pmatrix}p\\r\end{pmatrix} +y\begin{pmatrix}q\\s\end{pmatrix}\]

The set of all possible outputs is called the column space: the set of all linear combinations of the matrix's columns.
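The column-combination view of matrix-vector multiplication translates directly into code. This sketch (helper name invented) applies it to the fruity matrix:

```python
# A sketch (names invented): matrix-vector multiplication computed as a
# linear combination of the matrix's columns.

def matvec(columns, coords):
    """columns: list of column tuples; coords: the weights (x, y, ...)."""
    n = len(columns[0])
    out = [0] * n
    for weight, col in zip(coords, columns):
        for i in range(n):
            out[i] += weight * col[i]
    return tuple(out)

# The fruity matrix has columns (2, 4) and (6, 8).
print(matvec([(2, 4), (6, 8)], (1, 3)))  # x=1, y=3 gives (20, 28)
```

Note that \(x = 1\), \(y = 3\) reproduces exactly the right-hand side of the fruity system.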

Matrices as Linear Functions

Matrix multiplication is linear because it satisfies:

  1. Homogeneity:

\[\begin{pmatrix}2 & 6\\4 &8\end{pmatrix}\left(\lambda\begin{pmatrix}x\\y\end{pmatrix}\right)= \lambda\left(\begin{pmatrix}2 & 6\\4 &8\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\right)\]

  2. Additivity:

\[\begin{pmatrix}2 & 6\\4 &8\end{pmatrix}\left(\begin{pmatrix}x\\y\end{pmatrix} +\begin{pmatrix}x'\\y'\end{pmatrix}\right) = \begin{pmatrix}2 & 6\\4 &8\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} + \begin{pmatrix}2& 6\\4 & 8\end{pmatrix}\begin{pmatrix}x'\\y'\end{pmatrix}\]

Matrix Multiplication as Function Composition

If we chain linear transformations:

\[\begin{pmatrix}1 & 2\\0 & 1\end{pmatrix}\left(\begin{pmatrix}2 &6\\4 & 8\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\right) =\begin{pmatrix}10 & 22\\4 &8\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\]

This shows that:

\[\begin{pmatrix}1 & 2\\0 & 1\end{pmatrix}\begin{pmatrix}2 & 6\\4 &8\end{pmatrix} = \begin{pmatrix}10 & 22\\4 & 8\end{pmatrix}\]
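A quick numerical check of this composition claim (a sketch; helper names invented):

```python
# A sketch (names invented) checking that applying B then A agrees with
# applying the single product matrix AB computed above.

def matvec2(M, v):
    """Apply a 2x2 matrix, stored as a pair of rows, to a 2-vector."""
    (a, b), (c, d) = M
    x, y = v
    return (a * x + b * y, c * x + d * y)

A = ((1, 2), (0, 1))
B = ((2, 6), (4, 8))
AB = ((10, 22), (4, 8))

v = (3, -5)
print(matvec2(A, matvec2(B, v)) == matvec2(AB, v))  # True
```

Because both sides are linear and agree on a basis, checking two independent inputs would already pin down the product.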

The Matrix Detour: Different Notations, Same Function

The same linear function can be represented by different matrices depending on how we organize the information.

Example: The differential operator \(\frac{d}{dx} + 2\) acting on quadratic polynomials:

- Convention B: Represent \(ax^2 + bx + c\) as \(\begin{pmatrix}a\\b\\c\end{pmatrix}_B\)

\[\left(\frac{d}{dx} + 2\right)\begin{pmatrix}a\\b\\c\end{pmatrix}_B =\begin{pmatrix}2 & 0 & 0\\2 & 2 & 0\\0 & 1 &2\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}\]

- Convention B': Represent \(a + bx + cx^2\) as \(\begin{pmatrix}a\\b\\c\end{pmatrix}_{B'}\)

\[\left(\frac{d}{dx} +2\right)\begin{pmatrix}a\\b\\c\end{pmatrix}_{B'}= \begin{pmatrix}2 & 1 & 0\\0 & 2 & 2\\0 & 0 &2\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}\]

Same linear function, different matrix representations!
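Both matrix representations can be checked against a direct application of \(\frac{d}{dx} + 2\). The sketch below hard-codes each convention (helper names invented):

```python
# A sketch (names invented) checking both matrices against a direct
# application of d/dx + 2 in each coefficient convention.

def matvec3(M, v):
    """Apply a 3x3 matrix (tuple of rows) to a 3-vector."""
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def op_B(v):
    """Apply d/dx + 2 to ax^2 + bx + c; result in convention B (a, b, c)."""
    a, b, c = v
    return (2 * a, 2 * a + 2 * b, b + 2 * c)

def op_Bp(v):
    """Apply d/dx + 2 to a + bx + cx^2; result in convention B' (a, b, c)."""
    a, b, c = v
    return (2 * a + b, 2 * b + 2 * c, 2 * c)

M_B = ((2, 0, 0), (2, 2, 0), (0, 1, 2))
M_Bp = ((2, 1, 0), (0, 2, 2), (0, 0, 2))

v = (1, -4, 7)
print(matvec3(M_B, v) == op_B(v))    # True
print(matvec3(M_Bp, v) == op_Bp(v))  # True
```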

Key Insight

Linear algebra is about linear functions, not matrices. Matrices are just one way to represent linear functions. The same linear transformation can be represented by different matrices depending on the choice of basis (organizing principle).

The power of matrices is that they allow us to:

- Systematically solve linear equations

- Organize complex information

- Compute compositions of linear transformations efficiently


Gaussian Elimination: A Step-by-Step Guide

The goal of Gaussian elimination is to transform a system of linear equations (represented by an augmented matrix) into Row Echelon Form (REF) and then into Reduced Row Echelon Form (RREF). From RREF, the solution can be read directly.

We use Elementary Row Operations:

  1. Swap two rows.
  2. Multiply a row by a nonzero constant.
  3. Add a multiple of one row to another row.

The symbol ~ means the matrices are row-equivalent (they represent systems with the same solution set).

Example 1: Solving the System

We start with the augmented matrix derived from the system:

\[\begin{array}{rcl}x + y & = & 27 \\2x - y & = & 0\end{array}\quad\rightarrow \quad\left(\begin{array}{cc|c}1 & 1 & 27 \\2 & -1 & 0\end{array}\right)\]

Step 1: Eliminate the `x`-term from the second equation.

The pivot (first nonzero entry) in Row 1 is 1. To create a 0 below it, we perform the row operation:

\[R_2 \leftarrow R_2 - 2R_1\]

Let's compute this:

- New \( R_2 \): \( [2 \ \ -1 \ \ | \ \ 0] - 2 \times [1 \ \ 1 \ \ | \ \ 27] \)

- \( 2 - 2(1) = 0 \)

- \( -1 - 2(1) = -3 \)

- \( 0 - 2(27) = -54 \)

So the new matrix is:

\[\left(\begin{array}{cc|c}1 & 1 & 27 \\0 & -3 & -54\end{array}\right)\]

Interpretation: The system is now:

\[\begin{array}{rcl}x + y & = & 27 \\-3y & = & -54\end{array}\]

This is Row Echelon Form (REF). The diagonal from the top-left now contains 1 and -3, which are our pivots.

Step 2: Make the pivot in the second row equal to 1.

We scale Row 2 by multiplying by \( -\frac{1}{3} \):

\[R_2 \leftarrow \left(-\frac{1}{3}\right) R_2\]

Let's compute this:

- New \( R_2 \): \( -\frac{1}{3} \times [0 \ \ -3 \ \ | \ \ -54] = [0 \ \ 1 \ \ | \ \ 18] \)

The matrix becomes:

\[\left(\begin{array}{cc|c}1 & 1 & 27 \\0 & 1 & 18\end{array}\right)\]

Interpretation: The system is now:

\[\begin{array}{rcl}x + y & = & 27 \\y & = & 18\end{array}\]

Step 3: Eliminate the y-term from the first equation.

Now that we know \( y = 18 \), we substitute it back conceptually by eliminating the `1` above our pivot in column 2. We perform:

\[R_1 \leftarrow R_1 - R_2\]

Let's compute this:

- New \( R_1 \): \( [1 \ \ 1 \ \ | \ \ 27] - [0 \ \ 1 \ \ | \ \ 18] \)

- \( 1 - 0 = 1 \)

- \( 1 - 1 = 0 \)

- \( 27 - 18 = 9 \)

The final matrix is:

\[\left(\begin{array}{cc|c}1 & 0 & 9 \\0 & 1 & 18\end{array}\right)\]

This is Reduced Row Echelon Form (RREF).

Solution:

From the final matrix, we read:

\[x = 9, \quad y = 18\]

Complete Elimination Sequence:

\[\left(\begin{array}{cc|c}1 & 1 & 27 \\2 & -1 & 0\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 1 & 27 \\0 & -3 & -54\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 1 & 27 \\0 & 1 & 18\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 0 & 9 \\0 & 1 & 18\end{array}\right)\]
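The same sequence of row operations can be replayed with exact rational arithmetic, a short sketch using Python's `fractions` module:

```python
from fractions import Fraction as F

# A sketch replaying the elimination sequence above with exact arithmetic.
aug = [[F(1), F(1), F(27)],
       [F(2), F(-1), F(0)]]

aug[1] = [a - 2 * b for a, b in zip(aug[1], aug[0])]   # R2 <- R2 - 2 R1
aug[1] = [F(-1, 3) * a for a in aug[1]]                # R2 <- (-1/3) R2
aug[0] = [a - b for a, b in zip(aug[0], aug[1])]       # R1 <- R1 - R2

print(aug == [[1, 0, 9], [0, 1, 18]])  # True: matches the RREF above
```

Exact fractions avoid the rounding issues that floating-point row reduction can introduce.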

Pivot Example: Solving the System

We start with the augmented matrix:

\[\begin{array}{rcl}x + y & = & 5 \\x + 2y & = & 8\end{array}\quad \rightarrow \quad\left(\begin{array}{cc|c}1 & 1 & 5 \\1 & 2 & 8\end{array}\right)\]

Step 1: Eliminate the `x`-term from the second equation.

The pivot in Row 1 is `1`. To create a `0` below it:

\[R_2 \leftarrow R_2 - R_1\]

Let's compute this:

- New \( R_2 \): \( [1 \ \ 2 \ \ | \ \ 8] - [1 \ \ 1 \ \ | \ \ 5] \)

- \( 1 - 1 = 0 \)

- \( 2 - 1 = 1 \)

- \( 8 - 5 = 3 \)

The new matrix is:

\[\left(\begin{array}{cc|c}1 & 1 & 5 \\0 & 1 & 3\end{array}\right)\]

Interpretation: The system is now:

\[\begin{array}{rcl}x + y & = & 5 \\y & = & 3\end{array}\]

Step 2: Eliminate the y-term from the first equation.

We use the pivot `1` in Row 2 to eliminate the entry above it.

\[R_1 \leftarrow R_1 - R_2\]

Let's compute this:

- New \( R_1 \): \( [1 \ \ 1 \ \ | \ \ 5] - [0 \ \ 1 \ \ | \ \ 3] \)

- \( 1 - 0 = 1 \)

- \( 1 - 1 = 0 \)

- \( 5 - 3 = 2 \)

The final matrix is:

\[\left(\begin{array}{cc|c}1 & 0 & 2 \\0 & 1 & 3\end{array}\right)\]

This is Reduced Row Echelon Form (RREF).

Solution:

From the final matrix, we read:

\[x = 2, \quad y = 3\]

Complete Elimination Sequence:

\[\left(\begin{array}{cc|c}1 & 1 & 5 \\1 & 2 & 8\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 1 & 5 \\0 & 1 & 3\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 0 & 2 \\0 & 1 & 3\end{array}\right)\]

Summary and Takeaway

* Gaussian Elimination is a systematic method of solving systems of equations by using elementary row operations to simplify the augmented matrix.

* The process has two main phases:

  1. Forward Elimination: Get the matrix into Row Echelon Form (REF), where all entries below pivots are zero.
  2. Back Substitution: Get the matrix into Reduced Row Echelon Form (RREF), where each pivot is `1` and is the only nonzero entry in its column.

* The pivot in a row is its first nonzero element. The goal is to use pivots to create zeros below and above them.

* The final RREF matrix allows you to read the solution directly. A matrix of the form \(\left(\begin{array}{cc|c}1 & 0 & a \\ 0 & 1 & b\end{array}\right)\) gives the unique solution \(x = a\), \(y = b\).
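The two phases can be combined into one general routine. The following is a sketch (not the text's exact sequence of steps) that row-reduces any matrix to RREF using the three elementary row operations and exact arithmetic:

```python
from fractions import Fraction

# A general RREF routine (a sketch): forward elimination plus back
# substitution, using swap, scale, and row-addition operations.

def rref(rows):
    rows = [[Fraction(x) for x in r] for r in rows]
    m, n = len(rows), len(rows[0])
    pivot_row = 0
    for col in range(n):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, m) if rows[r][col] != 0), None)
        if pr is None:
            continue
        rows[pivot_row], rows[pr] = rows[pr], rows[pivot_row]   # swap
        piv = rows[pivot_row][col]
        rows[pivot_row] = [x / piv for x in rows[pivot_row]]    # scale pivot to 1
        for r in range(m):
            if r != pivot_row and rows[r][col] != 0:            # eliminate
                factor = rows[r][col]
                rows[r] = [x - factor * y
                           for x, y in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return rows

print(rref([[1, 1, 27], [2, -1, 0]]) == [[1, 0, 9], [0, 1, 18]])  # True
print(rref([[1, 1, 5], [1, 2, 8]]) == [[1, 0, 2], [0, 1, 3]])     # True
```

Both worked examples from this section reduce to the RREF matrices found by hand.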

🧠 Quiz Gaussian elimination:

Solve the system step-by-step using Gaussian elimination.

\[\begin{cases}2x_1 + 5x_2 - 8x_3 + 2x_4 + 2x_5 = 0 \\6x_1 + 2x_2 -10x_3 + 6x_4 + 8x_5 = 6 \\3x_1 + 6x_2 + 2x_3 + 3x_4 + 5x_5 = 6 \\3x_1 + 1x_2 - 5x_3 + 3x_4 + 4x_5 = 3 \\6x_1 + 7x_2 - 3x_3 + 6x_4 + 9x_5 = 9\end{cases}\]

Step 1: Write the augmented matrix

\[\left(\begin{array}{ccccc|c}2 & 5 & -8 & 2 & 2 & 0 \\6 & 2 & -10 & 6 & 8 &6 \\3 & 6 & 2 & 3 & 5 & 6 \\3 & 1 & -5 & 3 & 4 & 3 \\6 & 7 & -3 & 6 & 9 & 9\end{array}\right)\]

Step 2: Make first column below (1,1) zero

We eliminate the entries below the pivot using \(R_1\). For \(R_2\) and \(R_5\) the multipliers are integers; for \(R_3\) and \(R_4\) we use the combinations \(2R_3 - 3R_1\) and \(2R_4 - 3R_1\) to avoid introducing fractions.

First, \(R_2 \leftarrow R_2 - 3R_1\):

\(R_2: [6, 2, -10, 6, 8 | 6] - 3 \times [2, 5, -8, 2, 2 | 0] = [6-6, 2-15, -10+24, 6-6, 8-6 | 6-0]\)

\[R_2: [0, -13, 14, 0, 2 | 6]\]

Next, \(R_3 \leftarrow 2R_3 - 3R_1\), which keeps the entries integer:

\(2R_3: [6, 12, 4, 6, 10 | 12]\)

Subtract \(3R_1: [6, 15, -24, 6, 6 | 0]\):

\[[0, -3, 28, 0, 4 | 12] \quad \text{New } R_3\]

\(R_4 \leftarrow 2R_4 - 3R_1\):

\(2R_4: [6, 2, -10, 6, 8 | 6]\)

Subtract \(3R_1: [6, 15, -24, 6, 6 | 0]\):

\[[0, -13, 14, 0, 2 | 6] \quad \text{New } R_4\]

The new \(R_4\) is identical to the new \(R_2\). Comparing the original rows explains why:

\(R_2: 6, 2, -10, 6, 8 \mid 6\)

\(R_4: 3, 1, -5, 3, 4 \mid 3\)

Indeed \(R_4 = \frac12 R_2\): the fourth equation carries no new information, so \(R_4\) can be removed later.

\(R_5 \leftarrow R_5 - 3R_1\):

\(R_5: [6, 7, -3, 6, 9 | 9] - 3 \times [2, 5, -8, 2, 2 | 0] = [0, 7-15, -3+24, 6-6, 9-6 | 9]\)

\[[0, -8, 21, 0, 3 | 9]\]

So matrix now:

\[\left(\begin{array}{ccccc|c}2 & 5 & -8 & 2 & 2 & 0 \\0 & -13 & 14 & 0 & 2& 6 \\0 & -3 & 28 & 0 & 4 & 12 \\0 & -13 & 14 & 0 & 2 & 6 \\0 & -8 & 21 &0 & 3 & 9\end{array}\right)\]

Rows 2 and 4 are identical ⇒ remove \(R_4\):

\[\left(\begin{array}{ccccc|c}2 & 5 & -8 & 2 & 2 & 0 \\0 & -13 & 14 & 0 & 2& 6 \\0 & -3 & 28 & 0 & 4 & 12 \\0 & -8 & 21 & 0 & 3 & 9\end{array}\right)\]

Step 3: Simplify \(R_1\)

Scale \(R_1 \leftarrow \frac12 R_1\), keeping the entries as exact fractions:

\[[1, \tfrac{5}{2}, -4, 1, 1 \mid 0]\]

Step 4: Eliminate second column below \(R_2\)

\(R_2: [0, -13, 14, 0, 2 | 6]\)

To eliminate the entries below the (2,2) pivot, first scale \(R_2\) so that its pivot equals 1.

\(R_2 \leftarrow -\frac{1}{13} R_2\):

\[[0, 1, -\frac{14}{13}, 0, -\frac{2}{13} | -\frac{6}{13}]\]

Now \(R_3 \leftarrow R_3 + 3R_2\):

\(R_3: [0, -3, 28, 0, 4 | 12] + [0, 3, -\frac{42}{13}, 0, -\frac{6}{13} | -\frac{18}{13}]\)

= \([0, 0, 28 - \frac{42}{13}, 0, 4 - \frac{6}{13} | 12 - \frac{18}{13}]\)

\(28 - \frac{42}{13} = \frac{364 - 42}{13} = \frac{322}{13}\)

\(4 - \frac{6}{13} = \frac{52 - 6}{13} = \frac{46}{13}\)

\(12 - \frac{18}{13} = \frac{156 - 18}{13} = \frac{138}{13}\)

So \(R_3: [0, 0, \frac{322}{13}, 0, \frac{46}{13} | \frac{138}{13}]\)

Multiply by 13: \( [0, 0, 322, 0, 46 | 138] \)

Divide by 2: \( [0, 0, 161, 0, 23 | 69] \)

\(R_5 \leftarrow R_5 + 8R_2\):

\(R_5: [0, -8, 21, 0, 3 | 9] + [0, 8, -\frac{112}{13}, 0, -\frac{16}{13} | -\frac{48}{13}]\)

= \([0, 0, 21 - \frac{112}{13}, 0, 3 - \frac{16}{13} | 9 - \frac{48}{13}]\)

\(21 - \frac{112}{13} = \frac{273 - 112}{13} = \frac{161}{13}\)

\(3 - \frac{16}{13} = \frac{39 - 16}{13} = \frac{23}{13}\)

\(9 - \frac{48}{13} = \frac{117 - 48}{13} = \frac{69}{13}\)

So \(R_5: [0, 0, \frac{161}{13}, 0, \frac{23}{13} | \frac{69}{13}]\)

Multiply by 13: \( [0, 0, 161, 0, 23 | 69] \) — same as \(R_3\)! So \(R_5\) is redundant.

So now matrix:

\[\left(\begin{array}{ccccc|c}1 & \frac{5}{2} & -4 & 1 & 1 & 0 \\0 & 1 & -\frac{14}{13} & 0 & -\frac{2}{13} & -\frac{6}{13} \\0 & 0 & 161 & 0 & 23 &69\end{array}\right)\]

Step 5: Simplify \(R_3\)

Divide \(R_3\) by 23: \(161/23 = 7\), \(23/23 = 1\), \(69/23 = 3\):

\[[0, 0, 7, 0, 1 | 3]\]

Step 6: Back substitution

From \(R_3\): \(7x_3 + x_5 = 3\) ⇒ \(x_5 = 3 - 7x_3\).

From \(R_2\): \(x_2 - \frac{14}{13}x_3 - \frac{2}{13}x_5 = -\frac{6}{13}\).

Substitute \(x_5\):

\(x_2 - \frac{14}{13}x_3 - \frac{2}{13}(3 - 7x_3) = -\frac{6}{13}\)

\(x_2 - \frac{14}{13}x_3 - \frac{6}{13} + \frac{14}{13}x_3 = -\frac{6}{13}\)

The \(x_3\) terms cancel! ⇒ \(x_2 - \frac{6}{13} = -\frac{6}{13}\) ⇒ \(x_2 = 0\).

From \(R_1\): \(x_1 + \frac{5}{2}x_2 - 4x_3 + x_4 + x_5 = 0\)

\(x_1 + 0 - 4x_3 + x_4 + (3 - 7x_3) = 0\)

\(x_1 - 11x_3 + x_4 + 3 = 0\) ⇒ \(x_1 + x_4 = 11x_3 - 3\).

Step 7: Free variables

Let \(x_3 = t\), \(x_4 = s\). Then:

\(x_1 = 11t - 3 - s\)

\(x_2 = 0\)

\(x_3 = t\)

\(x_4 = s\)

\(x_5 = 3 - 7t\)

Final solution:

\[\boxed{\begin{cases}x_1 = -3 - s + 11t \\x_2 = 0 \\x_3 = t \\x_4 = s \\x_5 = 3 - 7t\end{cases}}\]

where \(s, t \in \mathbb{R}\).
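A good habit is to verify a parametric solution by substituting it back into the original system. This sketch (helper names invented) checks several values of \(s\) and \(t\):

```python
# A sketch (names invented): substitute the parametric solution back into
# the original 5x5 system for several values of s and t.

A = [[2, 5, -8, 2, 2],
     [6, 2, -10, 6, 8],
     [3, 6, 2, 3, 5],
     [3, 1, -5, 3, 4],
     [6, 7, -3, 6, 9]]
b = [0, 6, 6, 3, 9]

def solution(s, t):
    return [-3 - s + 11 * t, 0, t, s, 3 - 7 * t]

for s in (-2, 0, 1):
    for t in (-1, 0, 3):
        x = solution(s, t)
        ok = all(sum(a * xi for a, xi in zip(row, x)) == bi
                 for row, bi in zip(A, b))
        print(ok)  # True for every s, t
```

Since both the solution formula and the residuals are affine in \(s\) and \(t\), agreement at a few independent points confirms agreement everywhere.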


🧠 General Quiz

Solve The System Below:

\[\begin{cases}0x_1 + 0x_2 - 8x_3 + 2x_4 + 2x_5 = 0 \\6x_1 + 2x_2 -10x_3 + 6x_4 + 8x_5 = 6 \\3x_1 + 6x_2 + 2x_3 + 3x_4 + 5x_5 = 6 \\3x_1+ 1x_2 - 5x_3 + 3x_4 + 4x_5 = 3 \\6x_1 + 7x_2 - 3x_3 + 6x_4+9x_5= 9\end{cases}\]

Step 1: Augmented matrix

\[\left[\begin{array}{ccccc|c}0 & 0 & -8 & 2 & 2 & 0 \\6 & 2 & -10 & 6 & 8 &6 \\3 & 6 & 2 & 3 & 5 & 6 \\3 & 1 & -5 & 3 & 4 & 3 \\6 & 7 & -3 & 6 & 9 & 9\end{array}\right]\]

Step 2: Swap rows to bring a nonzero to top-left

Swap \(R_1 \leftrightarrow R_2\):

\[\left[\begin{array}{ccccc|c}6 & 2 & -10 & 6 & 8 & 6 \\0 & 0 & -8 & 2 & 2 & 0 \\3 & 6 & 2 & 3 & 5 & 6 \\3 & 1 & -5 & 3 & 4 & 3 \\6 & 7 & -3 & 6 & 9 & 9\end{array}\right]\]

Step 3: Make first column below (1,1) zero

\(R_3 \leftarrow R_3 - \frac12 R_1\), where \(\frac12 R_1 = [3, 1, -5, 3, 4 \mid 3]\):

\(R_3: [3, 6, 2, 3, 5 | 6] - [3, 1, -5, 3, 4 | 3] = [0, 5, 7, 0, 1 | 3]\)

\(R_4 \leftarrow R_4 - \frac12 R_1\):

\(R_4: [3, 1, -5, 3, 4 | 3] - [3, 1, -5, 3, 4 | 3] = [0, 0, 0, 0, 0 | 0]\) → Row 4 is all zeros (so \(R_4\) is redundant).

\(R_5 \leftarrow R_5 - R_1\):

\(R_5: [6, 7, -3, 6, 9 | 9] - [6, 2, -10, 6, 8 | 6] = [0, 5, 7, 0, 1 | 3]\)

So \(R_3\) and \(R_5\) are identical.

Step 4: Remove redundant rows

Matrix now:

\[\left[\begin{array}{ccccc|c}6 & 2 & -10 & 6 & 8 & 6 \\0 & 0 & -8 & 2 & 2 & 0 \\0 & 5 & 7 & 0 & 1 & 3 \\0 & 0 & 0 & 0 & 0 & 0 \\0 & 5 & 7 & 0 & 1 & 3\end{array}\right]\]

Remove \(R_4\) and \(R_5\) (duplicate of \(R_3\)):

\[\left[\begin{array}{ccccc|c}6 & 2 & -10 & 6 & 8 & 6 \\0 & 0 & -8 & 2 & 2 & 0 \\0 & 5 & 7 & 0 & 1 & 3\end{array}\right]\]

Step 5: Swap \(R_2\) and \(R_3\) to get a pivot in column 2:

\[\left[\begin{array}{ccccc|c}6 & 2 & -10 & 6 & 8 & 6 \\0 & 5 & 7 & 0 & 1 & 3 \\0 & 0 & -8 & 2 & 2 & 0\end{array}\right]\]

Step 6: Row reduce

Leave \(R_1\) as it is for now; scale \(R_2\) and \(R_3\) so that their pivots equal 1:

\(R_2 \leftarrow \frac15 R_2\):

\([0, 1, \frac75, 0, \frac15 \mid \frac35]\)

\(R_3 \leftarrow -\frac18 R_3\):

\([0, 0, 1, -\frac14, -\frac14 \mid 0]\)

Step 7: Back substitute

From \(R_3\): \(x_3 - \frac14 x_4 - \frac14 x_5 = 0 \Rightarrow x_3 = \frac14 (x_4 + x_5)\).

From \(R_2\): \(x_2 + \frac75 x_3 + \frac15 x_5 = \frac35\).

Substitute \(x_3\):

\(x_2 + \frac75 \cdot \frac14 (x_4 + x_5) + \frac15 x_5 = \frac35\)

\(x_2 + \frac{7}{20} x_4 + \frac{7}{20} x_5 + \frac15 x_5 = \frac35\)

\(x_2 + \frac{7}{20} x_4 + \left(\frac{7}{20} + \frac{4}{20}\right) x_5 = \frac35\)

\(x_2 + \frac{7}{20} x_4 + \frac{11}{20} x_5 = \frac35\)

Multiply by 20: \(20x_2 + 7x_4 + 11x_5 = 12\) … (1)

From \(R_1\): \(6x_1 + 2x_2 - 10x_3 + 6x_4 + 8x_5 = 6\).

Substitute \(x_3 = \frac14 (x_4 + x_5)\):

\(6x_1 + 2x_2 - 10\cdot \frac14 (x_4 + x_5) + 6x_4 + 8x_5 = 6\)

\(6x_1 + 2x_2 - \frac{10}{4}x_4 - \frac{10}{4}x_5 + 6x_4 + 8x_5 = 6\)

\(6x_1 + 2x_2 + \left(6 - 2.5\right)x_4 + \left(8 - 2.5\right)x_5 = 6\)

\(6x_1 + 2x_2 + 3.5 x_4 + 5.5 x_5 = 6\)

Multiply by 2: \(12x_1 + 4x_2 + 7x_4 + 11x_5 = 12\) … (2)

From (1): \(20x_2 = 12 - 7x_4 - 11x_5\), so \(4x_2 = \frac{12 - 7x_4 - 11x_5}{5}\).

Substitute into (2):

\(12x_1 + \frac{12 - 7x_4 - 11x_5}{5} + 7x_4 + 11x_5 = 12\)

Multiply by 5: \(60x_1 + 12 - 7x_4 - 11x_5 + 35x_4 + 55x_5 = 60\)

\(60x_1 + 28x_4 + 44x_5 = 48 \Rightarrow 15x_1 + 7x_4 + 11x_5 = 12 \Rightarrow x_1 = \frac{12 - 7x_4 - 11x_5}{15}\).

Then \(20x_2 = 12 - 7x_4 - 11x_5 \Rightarrow x_2 = \frac{12 - 7x_4 - 11x_5}{20}\).

Step 8: General solution

Let \(x_4 = s\), \(x_5 = t\) be free. Then:

\[\begin{cases}x_1 = \frac{12 - 7s - 11t}{15} \\x_2 = \frac{12 - 7s - 11t}{20} \\x_3 = \frac14 (s + t) \\x_4 = s \\x_5 = t\end{cases}\quad s, t \in \mathbb{R}\]

\[\boxed{x_1 = \frac{12 - 7s - 11t}{15},\quad x_2 = \frac{12 - 7s - 11t}{20},\quad x_3 = \frac{s+t}{4},\quad x_4 = s,\quad x_5 = t}\]

Sanity check at \(s = t = 0\): \(x = (\tfrac45, \tfrac35, 0, 0, 0)\), and indeed \(6 \cdot \tfrac45 + 2 \cdot \tfrac35 = 6\), \(3 \cdot \tfrac45 + 6 \cdot \tfrac35 = 6\), \(3 \cdot \tfrac45 + \tfrac35 = 3\), and \(6 \cdot \tfrac45 + 7 \cdot \tfrac35 = 9\).


Understanding Vector Spaces

Welcome back. So far, we have become comfortable with \( \mathbb{R}^n \), the world of n-vectors. But what makes \( \mathbb{R}^n \) so useful? Is it the fact that they are arrows? Or lists of numbers? The true answer is more profound: it's their structure . Today, we will distill this structure into a general definition that applies to sets of functions, matrices, sequences, and even stranger objects. This definition is that of a Vector Space .

The Core Idea: Closure Under Two Operations

Before the rigorous axioms, grasp the central theme:

NB: A vector space is a set where you can add any two of its members and multiply any one of them by a number (a scalar), and the result will always still be in the set.

This is the essence of "closure." If you perform these natural operations and your result "leaves" the set, you are not in a vector space. Let's now formalize this.

The Formal Definition: Axioms of a Vector Space

A vector space \( (V, +, \cdot, \mathbb{R}) \) is a set \( V \) (whose elements we call "vectors"), together with two operations:

* Vector Addition (\(+\)): \( V \times V \to V \)

* Scalar Multiplication (\(\cdot\)): \( \mathbb{R} \times V \to V \)

These operations must satisfy the following ten axioms for all \( \mathbf{u}, \mathbf{v}, \mathbf{w} \in V \) and all \( c, d \in \mathbb{R} \):

Axioms for Vector Addition

(+i) Additive Closure: \( \mathbf{u} + \mathbf{v} \in V \).

*(The sum of two vectors is another vector in the same space.)*

(+ii) Additive Commutativity: \( \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \).

*(Order of addition doesn't matter.)*

(+iii) Additive Associativity: \( (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) \).

*(Grouping of additions doesn't matter.)*

(+iv) Existence of Zero Vector: There exists a vector \( \mathbf{0}_V \in V \) such that \( \mathbf{u} + \mathbf{0}_V = \mathbf{u} \).

*(There is a unique "origin" that does nothing when added.)*

(+v) Existence of Additive Inverses: For every \( \mathbf{u} \in V \), there exists a vector \( \mathbf{w} \in V \) such that \( \mathbf{u} + \mathbf{w} = \mathbf{0}_V \).

*(Every vector has a "negative" that cancels it out. We denote this inverse as \( -\mathbf{u} \).)*

Axioms for Scalar Multiplication

(· i) Multiplicative Closure: \( c \cdot \mathbf{v} \in V \).

*(Scaling a vector produces another vector in the same space.)*

(· ii) Distributivity over Scalar Addition: \( (c + d) \cdot \mathbf{v} = c \cdot \mathbf{v} + d \cdot \mathbf{v} \).

*(Scalar multiplication distributes over the addition of scalars.)*

(· iii) Distributivity over Vector Addition: \( c \cdot (\mathbf{u} + \mathbf{v}) = c \cdot \mathbf{u} + c \cdot \mathbf{v} \).

*(Scalar multiplication distributes over the addition of vectors.)*

(· iv) Associativity of Scalar Multiplication: \( (cd) \cdot \mathbf{v} = c \cdot (d \cdot \mathbf{v}) \).

*(Grouping of scalar multiplications doesn't matter.)*

(· v) Unity Law: \( 1 \cdot \mathbf{v} = \mathbf{v} \).

*(Multiplying by the scalar 1 leaves the vector unchanged.)*

Notational Convenience: Once these axioms are established, we often drop the "\(\cdot\)" and write scalar multiplication as \( c\mathbf{v} \).

A Universe of Examples: It's Not Just Lists of Numbers

The power of this definition is that any set of objects satisfying these ten axioms is a vector space and inherits all the theorems of linear algebra. Let's explore this universe.

Example 1: The Space of Real-Valued Functions (\( \mathbb{R}^{\mathbb{R}} \))

Let \( V = \mathbb{R}^{\mathbb{R}} \), the set of all functions \( f: \mathbb{R} \to \mathbb{R} \).

* Addition: For \( f, g \in V \), define \( (f + g)(x) = f(x) + g(x) \).

* Scalar Multiplication: For \( c \in \mathbb{R}, f \in V \), define \( (c \cdot f)(x) = c f(x) \).

Let's verify a few key axioms:

* (+i) Additive Closure: Is \( f+g \) a function from \( \mathbb{R} \) to \( \mathbb{R} \)? Yes, because the sum of two real numbers is a real number. So for every input \( x \), the output \( (f+g)(x) \) is a real number. Closure holds.

* (+iv) Zero Vector: The zero vector is the zero function , \( \mathbf{0}(x) = 0 \) for all \( x \). Check: \( (f + \mathbf{0})(x) = f(x) + 0 = f(x) \). Axiom holds.

* (· i) Multiplicative Closure: Is \( c \cdot f \) a function from \( \mathbb{R} \) to \( \mathbb{R} \)? Yes, because \( c f(x) \) is a real number. Closure holds.

* (· v) Unity Law: \( (1 \cdot f)(x) = 1 \cdot f(x) = f(x) \). This is true for all \( x \), so \( 1 \cdot f = f \). Axiom holds.

The other axioms follow directly from the properties of real numbers. Thus, \( \mathbb{R}^{\mathbb{R}} \) is a vector space. Its "vectors" are functions, and the space is infinite-dimensional.
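These pointwise operations can be modeled in code; the wrapper class `Fn` below is invented for illustration, overloading `+` and scalar `*` pointwise:

```python
import math

# A sketch (class name Fn invented): functions treated as vectors via
# pointwise addition and pointwise scalar multiplication.

class Fn:
    def __init__(self, f):
        self.f = f
    def __call__(self, x):
        return self.f(x)
    def __add__(self, other):
        return Fn(lambda x: self.f(x) + other.f(x))   # (f + g)(x) = f(x) + g(x)
    def __rmul__(self, c):
        return Fn(lambda x: c * self.f(x))            # (c f)(x) = c f(x)

exp_plus = Fn(math.exp)
exp_minus = Fn(lambda x: math.exp(-x))
zero = Fn(lambda x: 0.0)          # the zero vector of this space

h = exp_plus + exp_minus          # should agree with 2 cosh
print(abs(h(1.3) - 2 * math.cosh(1.3)) < 1e-9)     # True
print((exp_plus + zero)(0.5) == exp_plus(0.5))     # True: adding 0 does nothing
```

The first check mirrors Example 2(E); the second checks axiom (+iv) at a sample point.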

Example 2: The Space of Sequences (\( \mathbb{R}^{\mathbb{N}} \))

Let \( V = \mathbb{R}^{\mathbb{N}} \), the set of all infinite sequences of real numbers. A vector here looks like:

\[\mathbf{a} = (a_1, a_2, a_3, \dots)\]

This is a special case of a function space \( \mathbb{R}^S \) where \( S = \mathbb{N} \). Addition and scalar multiplication are done component-wise:

\[(a_1, a_2, \dots) + (b_1, b_2, \dots) = (a_1+b_1, a_2+b_2, \dots)\]

\[c \cdot (a_1, a_2, \dots) = (c a_1, c a_2, \dots)\]

The zero vector is the sequence of all zeros \( (0, 0, 0, \dots) \). All axioms are verified similarly to the function space example. This shows how vector spaces can handle infinitely long lists.

Example 3: The Space of Differentiable Functions

Let \( V = \{ f: \mathbb{R} \to \mathbb{R} \ | \ f \text{ is differentiable} \} \).

* Closure under Addition: We know from calculus that the sum of two differentiable functions is differentiable. So if \( f, g \in V \), then \( f+g \in V \).

* Closure under Scalar Multiplication: A constant multiple of a differentiable function is differentiable. So if \( f \in V \), then \( c f \in V \).

The zero function is differentiable, and the other axioms are inherited from \( \mathbb{R}^{\mathbb{R}} \). Therefore, the set of differentiable functions is itself a vector space *inside* the larger space \( \mathbb{R}^{\mathbb{R}} \). This is our first example of a subspace .

Example 4: Solution Set to a Homogeneous Linear System

This is a profoundly important example. Consider the homogeneous matrix equation:

\[M\mathbf{x} = \mathbf{0}, \quad \text{where} \quad M =\begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{pmatrix}.\]

The solution set is:

\[V = \left\{ c_1 \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \ \middle|\ c_1, c_2 \in\mathbb{R} \right\}.\]

This is a plane through the origin in \( \mathbb{R}^3 \). Let's see why it's a vector space.

* Closure under Addition: Take any two solutions \( \mathbf{x}_1 \) and \( \mathbf{x}_2 \). Then \( M\mathbf{x}_1 = \mathbf{0} \) and \( M\mathbf{x}_2 = \mathbf{0} \). Their sum is \( \mathbf{x}_1 + \mathbf{x}_2 \). Check:

\[M(\mathbf{x}_1 + \mathbf{x}_2) = M\mathbf{x}_1 + M\mathbf{x}_2= \mathbf{0} + \mathbf{0} = \mathbf{0}.\]

So \( \mathbf{x}_1 + \mathbf{x}_2 \) is also a solution and lies in \( V \).

* Closure under Scalar Multiplication: Take a solution \( \mathbf{x} \) and a scalar \( c \). Then \( M(c\mathbf{x}) = c(M\mathbf{x}) = c\mathbf{0} = \mathbf{0} \). So \( c\mathbf{x} \) is also a solution and lies in \( V \).

The zero vector \( \mathbf{0} \) is in \( V \) because \( M\mathbf{0} = \mathbf{0} \). The other axioms hold because the operations are the standard ones from \( \mathbb{R}^3 \). This is a general truth: The solution set to any homogeneous linear system \( A\mathbf{x}=\mathbf{0} \) is a vector space.
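This closure property is easy to spot-check numerically. A minimal NumPy sketch (the variable names `M`, `v1`, `v2` are ours):

```python
import numpy as np

# Coefficient matrix of the homogeneous system M x = 0 from the example.
M = np.array([[1, 1, 1],
              [2, 2, 2],
              [3, 3, 3]])

# The two solutions that span the solution plane.
v1 = np.array([-1, 1, 0])
v2 = np.array([-1, 0, 1])

# Closure: any combination c1*v1 + c2*v2 must again satisfy M x = 0.
c1, c2 = 4.0, -2.5          # arbitrary scalars
x = c1 * v1 + c2 * v2
residual = M @ x            # should be the zero vector
```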

Non-Examples: When the Structure Breaks

To understand the definition, it's crucial to see where it fails.

Non-Example 1: A Line Not Through the Origin

The set \( S = \left\{ \begin{pmatrix}1 \\ 0\end{pmatrix} + c\begin{pmatrix}-1 \\ 1\end{pmatrix} \ \middle|\ c \in \mathbb{R} \right\} \) is a line in \( \mathbb{R}^2 \). Is it a vector space?

* Fails Axiom (+iv): The zero vector \( \begin{pmatrix}0 \\ 0\end{pmatrix} \) is not on this line. Therefore, it fails to have a zero vector. Breaking just one axiom is enough to disqualify it.

Non-Example 2: The First Quadrant

Let \( P = \left\{ \begin{pmatrix}a \\ b\end{pmatrix} \ \middle|\ a \geq 0, b \geq 0 \right\} \).

* Fails Axiom (· i): The vector \( \begin{pmatrix}1 \\ 1\end{pmatrix} \in P \), but its scalar multiple \( -2 \cdot \begin{pmatrix}1 \\ 1\end{pmatrix} = \begin{pmatrix}-2 \\ -2\end{pmatrix} \notin P \). It is not closed under scalar multiplication.

Non-Example 3: Nowhere-Zero Functions

Let \( V = \{ f: \mathbb{R} \to \mathbb{R} \ | \ f(x) \neq 0 \ \forall x \in \mathbb{R} \} \).

* Fails Axiom (+i): Take \( f(x) = x^2 + 1 \) and \( g(x) = -1 \). Both are nowhere zero, so both are in \( V \), but their sum \( (f+g)(x) = x^2 \) vanishes at \( x = 0 \), so \( f+g \notin V \). Closure under addition fails. (An even quicker counterexample: \( f(x) = 1 \) and \( g(x) = -1 \) sum to the zero function.)

Beyond Real Numbers: Vector Spaces Over Other Fields

The base field (the set of "scalars") doesn't have to be \( \mathbb{R} \). It can be any mathematical structure called a field (a set with its own well-behaved addition, subtraction, multiplication, and division).

* Complex Numbers (\( \mathbb{C} \)): Crucial in quantum mechanics and electrical engineering. A state of an electron's spin can be \( \begin{pmatrix} i \\ 1 \end{pmatrix} \), a vector in \( \mathbb{C}^2 \).

* Binary Numbers (\( \mathbb{Z}_2 = \{0, 1\} \)): The field of bits, with the rule \( 1+1=0 \). This is the foundation of coding theory and computer science. Here, \( -1 = 1 \), which has profound consequences.
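The arithmetic of \( \mathbb{Z}_2 \) takes only a couple of lines to implement. A small sketch (the function names `add2` and `mul2` are ours):

```python
# Arithmetic in the two-element field Z_2 = {0, 1}: add and multiply mod 2.
def add2(a, b):
    return (a + b) % 2

def mul2(a, b):
    return (a * b) % 2

# 1 + 1 = 0, so every element is its own additive inverse: -1 = 1.
one_plus_one = add2(1, 1)

# Vector addition in Z_2^3 is component-wise, i.e. bitwise XOR.
u = [1, 0, 1]
v = [1, 1, 0]
u_plus_v = [add2(a, b) for a, b in zip(u, v)]
```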

Conclusion: The Power of Abstraction

The concept of a vector space separates the structure of linearity from any specific incarnation. Whether we are dealing with:

* Points in space (\( \mathbb{R}^n \)),

* Infinite sequences (\( \mathbb{R}^{\mathbb{N}} \)),

* Functions (\( \mathbb{R}^{\mathbb{R}} \)),

* Solutions to differential equations,

* or States of a quantum system (\( \mathbb{C}^n \)),

...if it can be added and scaled meaningfully, the entire edifice of linear algebra (linear combinations, span, linear independence, basis, dimension) applies universally. This is not just mathematical elegance; it is a tremendous economy of thought, allowing us to solve problems in one domain by recognizing their structural similarity to problems in another. This is the true power of the vector space.


Quiz Recall:

🧮 Example: Solving by Row Reduction (RREF)

Problem Statement

We are given the vector equation:

\[x_1 \begin{pmatrix}1 \\[4pt] 2\end{pmatrix} + x_2 \begin{pmatrix}3 \\[4pt] 1\end{pmatrix} = \begin{pmatrix}7 \\[4pt] 8\end{pmatrix}\]

This represents the system of linear equations:

\[\begin{cases}x_1 + 3x_2 = 7 \\[6pt]2x_1 + x_2 = 8\end{cases}\]

Step-by-Step Solution

Step 1️⃣: Write the Augmented Matrix

\[\left[\begin{array}{cc|c}1 & 3 & 7 \\[4pt]2 & 1 & 8\end{array}\right]\]

Step 2️⃣: Eliminate Below the First Pivot

\[R_2 \leftarrow R_2 - 2R_1\]

\[\left[\begin{array}{cc|c}1 & 3 & 7 \\[4pt]0 & -5 & -6\end{array}\right]\]

Step 3️⃣: Make the Second Pivot Equal to 1

\[R_2 \leftarrow -\frac{1}{5}R_2\]

\[\left[\begin{array}{cc|c}1 & 3 & 7 \\[4pt]0 & 1 & \tfrac{6}{5}\end{array}\right]\]

Step 4️⃣: Eliminate Above the Second Pivot

\[R_1 \leftarrow R_1 - 3R_2\]

\[\left[\begin{array}{cc|c}1 & 0 & \tfrac{17}{5} \\[4pt]0 & 1 & \tfrac{6}{5}\end{array}\right]\]

✅ Final Result

Reduced Row Echelon Form (RREF):

\[\boxed{\left[\begin{array}{cc|c}1 & 0 & \tfrac{17}{5} \\[4pt]0 & 1 &\tfrac{6}{5}\end{array}\right]}\]

✨ Solution:

\[x_1 = \frac{17}{5} = 3.4, \qquad x_2 = \frac{6}{5} = 1.2\]

✅ Verification

Let's verify our solution in the original equations:

First equation:

\[1(3.4) + 3(1.2) = 3.4 + 3.6 = 7 \quad \text{✓}\]

Second equation:

\[2(3.4) + 1(1.2) = 6.8 + 1.2 = 8 \quad \text{✓}\]


The Foundations of Linear Algebra: A Step-by-Step Exploration

Welcome back. We've defined vector spaces abstractly. Now, let's see these definitions in action and connect them to the core computational ideas of linear algebra: subspaces, kernels, linear combinations, and basis. Understanding these connections is the key to mastering the subject.

  1. Proving a Vector Space: The Space of 2x2 Matrices

Claim: The set of all 2x2 matrices with real entries, \( M_{22} \), is a vector space over \( \mathbb{R} \).

Step-by-Step Proof:

We define addition and scalar multiplication in the standard way:

If \( A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \) and \( B = \begin{pmatrix} e & f \\ g & h \end{pmatrix} \), then

\( A + B = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix} \).

If \( \lambda \in \mathbb{R} \), then \( \lambda A = \begin{pmatrix} \lambda a & \lambda b \\ \lambda c & \lambda d \end{pmatrix} \).

Now, we verify the ten axioms. The closure axioms ((+i) and (·i)) are satisfied by the very definitions above—the result of addition or scalar multiplication is always another 2x2 matrix.

The other axioms hold because the operations are performed entry-wise, and real numbers obey these rules.

* (+iv) Zero Vector: The zero vector is the zero matrix: \( \mathbf{0} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \).

* Check: \( A + \mathbf{0} = \begin{pmatrix} a+0 & b+0 \\ c+0 & d+0 \end{pmatrix} = A \). ✅

* (+v) Additive Inverse: For \( A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \), the additive inverse is \( -A = \begin{pmatrix} -a & -b \\ -c & -d \end{pmatrix} \).

* Check: \( A + (-A) = \begin{pmatrix} a-a & b-b \\ c-c & d-d \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = \mathbf{0} \). ✅

* (· v) Unity Law: \( 1 \cdot A = \begin{pmatrix} 1\cdot a & 1\cdot b \\ 1\cdot c & 1\cdot d \end{pmatrix} = A \). ✅

Since all axioms are satisfied, \( M_{22} \) is a vector space.

  2. Identifying and Proving a Subspace

A subspace is a subset of a vector space that is itself a vector space under the same operations. The powerful Subspace Theorem states:

> A subset \( W \) of a vector space \( V \) is a subspace if and only if:

> 1. \( \mathbf{0} \in W \). (Contains the zero vector)

> 2. \( W \) is closed under addition: If \( \mathbf{u}, \mathbf{v} \in W \), then \( \mathbf{u} + \mathbf{v} \in W \).

> 3. \( W \) is closed under scalar multiplication: If \( \mathbf{u} \in W \) and \( c \in \mathbb{R} \), then \( c\mathbf{u} \in W \).

Example: Let \( V = \mathbb{R}^3 \). Let \( W \) be the set of all vectors of the form \( (x, y, 0) \), where \( x, y \in \mathbb{R} \). Show that \( W \) is a subspace of \( V \).

Step-by-Step Proof:

  1. Check the Zero Vector: The zero vector in \( \mathbb{R}^3 \) is \( (0, 0, 0) \). This is of the form \( (x, y, 0) \) with \( x=0, y=0 \). So, \( \mathbf{0} \in W \). ✅

  2. Check Closure under Addition: Take two arbitrary vectors in \( W \): \( \mathbf{u} = (u_1, u_2, 0) \) and \( \mathbf{v} = (v_1, v_2, 0) \).

Their sum is \( \mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, 0 + 0) = (u_1 + v_1, u_2 + v_2, 0) \).

This result is again of the form \( (x, y, 0) \). So, \( \mathbf{u} + \mathbf{v} \in W \). ✅

  3. Check Closure under Scalar Multiplication: Take an arbitrary vector \( \mathbf{u} = (u_1, u_2, 0) \in W \) and a scalar \( c \in \mathbb{R} \).

The scalar multiple is \( c\mathbf{u} = (c u_1, c u_2, c \cdot 0) = (c u_1, c u_2, 0) \).

This result is also of the form \( (x, y, 0) \). So, \( c\mathbf{u} \in W \). ✅

Since \( W \) satisfies all three conditions, it is a subspace of \( \mathbb{R}^3 \). (Geometrically, it's the xy-plane).

  3. The Kernel (or Null Space) of a Matrix

The kernel (or null space) of a matrix \( A \) is the set of all vectors \( \mathbf{x} \) such that \( A\mathbf{x} = \mathbf{0} \). It is denoted \( \ker(A) \) or \( \text{Null}(A) \).

Key Theorem: For any \( m \times n \) matrix \( A \), its kernel is a subspace of \( \mathbb{R}^n \).

Example: Find the kernel of \( A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 4 & -2 \end{pmatrix} \).

Step-by-Step Solution:

  1. Set up the equation: We want all vectors \( \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \) such that:

\[\begin{pmatrix} 1 & 2 & -1 \\ 2 & 4 & -2\end{pmatrix}\begin{pmatrix}x_1 \\ x_2 \\ x_3 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \end{pmatrix}\]

  2. Solve the homogeneous system: This corresponds to the system:

\[\begin{aligned}x_1 + 2x_2 - x_3 &= 0 \\2x_1 + 4x_2 - 2x_3 &= 0\end{aligned}\]

Notice the second equation is just a multiple of the first. So, we only have one independent equation: \( x_1 + 2x_2 - x_3 = 0 \).

  3. Express the solution in parametric vector form: Solve for the leading variable \( x_1 \) in terms of the free variables \( x_2 \) and \( x_3 \):

\[x_1 = -2x_2 + x_3\]

Let \( x_2 = s \) and \( x_3 = t \), where \( s, t \in \mathbb{R} \). Then \(x_1 = -2s + t \).

\[\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =\begin{pmatrix} -2s + t \\ s \\ t \end{pmatrix} = s \begin{pmatrix} -2 \\1\\ 0 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}\]

  4. Conclusion: The kernel is the set of all linear combinations of these two vectors:

\[\ker(A) = \left\{ s \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} + t\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \ \middle|\ s, t \in \mathbb{R}\right\}\]

This is a plane (a 2-dimensional subspace) in \( \mathbb{R}^3 \).
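A quick numerical sanity check that both spanning vectors (and hence any combination of them) are sent to zero by \( A \). NumPy sketch; the names `k1`, `k2` are ours:

```python
import numpy as np

A = np.array([[1, 2, -1],
              [2, 4, -2]])

# The two spanning vectors of ker(A) found above.
k1 = np.array([-2, 1, 0])
k2 = np.array([1, 0, 1])

# An arbitrary linear combination also lies in the kernel.
combo = 3 * k1 - 7 * k2
```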

  4. Linear Combinations and Spanning Sets

A vector \( \mathbf{b} \) is a linear combination of vectors \( \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k \) if there exist scalars \( c_1, c_2, \dots, c_k \) such that:

\[\mathbf{b} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \dots +c_k\mathbf{v}_k\]

The span of a set of vectors is the set of all possible linear combinations of those vectors.

Example: Let \( \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \) and \( \mathbf{v}_2 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} \). Is \( \mathbf{b} = \begin{pmatrix} 7 \\ 8 \end{pmatrix} \) in the span of \( \{\mathbf{v}_1, \mathbf{v}_2\} \)?

Step-by-Step Solution:

  1. Set up the vector equation: We want to know if there exist scalars \( x_1, x_2 \) such that:

\[x_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + x_2 \begin{pmatrix} 3 \\ 1\end{pmatrix} = \begin{pmatrix} 7 \\ 8 \end{pmatrix}\]

  2. Convert to a matrix equation: This is equivalent to solving \( A\mathbf{x} = \mathbf{b} \), where \( A = \begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix} \).

\[\begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\x_2 \end{pmatrix} = \begin{pmatrix} 7 \\ 8 \end{pmatrix} \]

  3. Solve the system: Use Gaussian elimination on the augmented matrix:

\[\left(\begin{array}{cc|c}1 & 3 & 7 \\2 & 1 & 8\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 3 & 7 \\0 & -5 & -6\end{array}\right)\sim\left(\begin{array}{cc|c}1 & 3 & 7 \\0 & 1 & 6/5\end{array}\right)\]

Back substitute: From row 2, \( x_2 = 6/5 \). From row 1, \( x_1 + 3(6/5) = 7 \Rightarrow x_1 = 7 - 18/5 = 17/5 \).

  4. Conclusion: Since a solution exists \( (x_1 = 17/5, x_2 = 6/5) \), the vector \( \mathbf{b} \) is a linear combination of \( \mathbf{v}_1 \) and \( \mathbf{v}_2 \). We can write:

\[\begin{pmatrix} 7 \\ 8 \end{pmatrix} = \frac{17}{5} \begin{pmatrix}1 \\ 2 \end{pmatrix} + \frac{6}{5} \begin{pmatrix} 3 \\ 1 \end{pmatrix}\]

Therefore, \( \mathbf{b} \in \text{span}\{\mathbf{v}_1, \mathbf{v}_2\} \).
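The same computation can be handed to a linear solver: putting \( \mathbf{v}_1, \mathbf{v}_2 \) as the columns of \( A \) and solving \( A\mathbf{x} = \mathbf{b} \) recovers the coefficients. A NumPy sketch:

```python
import numpy as np

# Columns of A are v1 and v2; solving A x = b finds the combination coefficients.
A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
b = np.array([7.0, 8.0])

coeffs = np.linalg.solve(A, b)                # expect [17/5, 6/5]
recombined = coeffs[0] * A[:, 0] + coeffs[1] * A[:, 1]
```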

  5. Basis and Nullity

A basis for a vector space \( V \) is a set of vectors that is:

  1. Linearly Independent (no vector in the set is a linear combination of the others).
  2. Spans \( V \) (every vector in \( V \) can be written as a linear combination of the basis vectors).

The dimension of \( V \) is the number of vectors in a basis.

The nullity of a matrix \( A \) is the dimension of its kernel.

Example (Continuing the Kernel): Find a basis for \( \ker(A) \) and the nullity of \( A \), where \( A = \begin{pmatrix} 1 & 2 & -1 \\ 2 & 4 & -2 \end{pmatrix} \).

Step-by-Step Solution:

  1. Recall the kernel: From our previous work:

\[\ker(A) = \text{span} \left\{ \begin{pmatrix} -2 \\ 1 \\ 0\end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}\]

  2. Check for Linear Independence: Are these two vectors linearly independent? The only solution to

\[c_1 \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \]

is \( c_1 = 0 \) and \( c_2 = 0 \). (You can see this by looking at the 2nd and 3rd rows: the 2nd row gives \( c_1 = 0 \), the 3rd row gives \( c_2 = 0 \)). So, yes, they are linearly independent.

  3. Conclusion: The set \( \left\{ \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\} \) is a spanning set for \( \ker(A) \) and is linearly independent. Therefore, it is a basis for \( \ker(A) \).

  4. Find the Nullity: The nullity is the dimension of the kernel, which is the number of vectors in a basis.

\[ \text{nullity}(A) = 2 \]

The Rank-Nullity Theorem: For an \( m \times n \) matrix \( A \),

\[\text{rank}(A) + \text{nullity}(A) = n\]

Let's verify this. The rank is the dimension of the column space. Our matrix \( A \) has columns \( \begin{pmatrix}1\\2\end{pmatrix}, \begin{pmatrix}2\\4\end{pmatrix}, \begin{pmatrix}-1\\-2\end{pmatrix} \). Notice columns 2 and 3 are just multiples of column 1. So the column space is 1-dimensional: \( \text{rank}(A) = 1 \).

Indeed, \( \text{rank}(A) + \text{nullity}(A) = 1 + 2 = 3 \), which equals \( n \), the number of columns. ✅
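The rank and nullity can be checked mechanically. A NumPy sketch using `matrix_rank` for the rank:

```python
import numpy as np

A = np.array([[1, 2, -1],
              [2, 4, -2]])

rank = np.linalg.matrix_rank(A)   # dimension of the column space
n = A.shape[1]                    # number of columns
nullity = n - rank                # by the rank-nullity theorem
```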

Summary: The Beautiful Interconnections

This journey shows how these concepts are deeply intertwined:

* We started with a vector space (\( \mathbb{R}^3 \)).

* We found a subspace (the kernel of \( A \)).

* We described this subspace as the span of a set of vectors.

* We verified that this spanning set was linearly independent, making it a basis.

* The size of this basis gave us the nullity, which is connected to the rank by a fundamental theorem.

This logical chain is the essence of linear algebra. Mastering these connections allows you to navigate any problem in the subject with confidence and clarity.


💡 Can You Help Your Grandfather Solve These Linear Combination Challenges?

Here are comprehensive examples of linear combinations in MathJax format, covering vectors, matrices, polynomials, and functions:

📚 Examples of Linear Combinations

  1. Vector Linear Combination in ℝ²

Example: Express \( \mathbf{b} = \begin{pmatrix} 7 \\ 4 \end{pmatrix} \) as a linear combination of \( \mathbf{v}_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix} \) and \( \mathbf{v}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \)

Solution:

We seek scalars \( c_1, c_2 \) such that:

\[c_1 \begin{pmatrix} 2 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 2\end{pmatrix} = \begin{pmatrix} 7 \\ 4 \end{pmatrix}\]

This gives the system:

\[\begin{cases}2c_1 + c_2 = 7 \\c_1 + 2c_2 = 4\end{cases}\]

Solving:

\[\begin{aligned}&\text{From second equation: } c_1 = 4 - 2c_2 \\&\text{Substitute: } 2(4 - 2c_2) + c_2 = 7 \\&8 - 4c_2 + c_2 = 7\\&-3c_2= -1 \Rightarrow c_2 = \frac{1}{3} \\&c_1 = 4 - 2(\frac{1}{3})=\frac{10}{3}\end{aligned}\]

Result:

\[\boxed{\frac{10}{3} \begin{pmatrix} 2 \\ 1 \end{pmatrix} + \frac{1}{3}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 7 \\ 4\end{pmatrix}}\]
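A quick check of these coefficients with a linear solver. NumPy sketch; the matrix `V` with \( \mathbf{v}_1, \mathbf{v}_2 \) as columns is our construction:

```python
import numpy as np

V = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # columns are v1 and v2
b = np.array([7.0, 4.0])

c = np.linalg.solve(V, b)           # expect [10/3, 1/3]
```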

  2. Vector Linear Combination in ℝ³

Example: Show that \( \mathbf{w} = \begin{pmatrix} 5 \\ 8 \\ 1 \end{pmatrix} \) is a linear combination of:

\[\mathbf{u}_1 = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \quad\mathbf{u}_2 = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}, \quad \mathbf{u}_3 = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}\]

Solution:

We solve \( c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + c_3\mathbf{u}_3 = \mathbf{w} \):

\[c_1 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 2\\ 1 \\ 1 \end{pmatrix} + c_3 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} =\begin{pmatrix} 5 \\ 8 \\ 1 \end{pmatrix}\]

System:

\[\begin{cases}c_1 + 2c_2 = 5 \\2c_1 + c_2 + c_3 = 8 \\c_2 - c_3 = 1\end{cases}\]

Solving:

\[
\begin{aligned}
&\text{From (3): } c_3 = c_2 - 1 \\
&\text{Substitute into (2): } 2c_1 + c_2 + (c_2 - 1) = 8 \Rightarrow 2c_1 + 2c_2 = 9 \\
&\text{From (1): } c_1 = 5 - 2c_2 \\
&\text{Substitute: } 2(5 - 2c_2) + 2c_2 = 9 \Rightarrow 10 - 4c_2 + 2c_2 = 9 \\
&-2c_2 = -1 \Rightarrow c_2 = \frac{1}{2} \\
&c_1 = 5 - 2(\frac{1}{2}) = 4 \\
&c_3 = \frac{1}{2} - 1 = -\frac{1}{2}
\end{aligned}
\]

Result:

\[\boxed{4\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + \frac{1}{2}\begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 5 \\ 8 \\ 1 \end{pmatrix}}\]
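The same solve-for-coefficients check works in \( \mathbb{R}^3 \). NumPy sketch with \( \mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3 \) as the columns of `U`:

```python
import numpy as np

U = np.array([[1.0, 2.0,  0.0],
              [2.0, 1.0,  1.0],
              [0.0, 1.0, -1.0]])    # columns are u1, u2, u3
w = np.array([5.0, 8.0, 1.0])

c = np.linalg.solve(U, w)           # expect [4, 1/2, -1/2]
```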

  3. Matrix Linear Combination

Example: Express \( A = \begin{pmatrix} 7 & 4 \\ 2 & 5 \end{pmatrix} \) as a linear combination of:

\[E_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad E_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad E_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad E_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\]

Solution:

We find \( a,b,c,d \) such that:

\[aE_1 + bE_2 + cE_3 + dE_4 = A\]

\[a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}+ d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 7 & 4 \\2 & 5 \end{pmatrix}\]

This gives:

\[\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 7 & 4 \\ 2 & 5 \end{pmatrix}\]

Result:

\[\boxed{7\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} +4\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + 2\begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix} + 5\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}= \begin{pmatrix} 7 & 4 \\ 2 & 5 \end{pmatrix}}\]

  4. Polynomial Linear Combination

Example: Express \( p(x) = 3x^2 + 2x - 1 \) as a linear combination of:

\[q_1(x) = x^2 + 1, \quad q_2(x) = x - 1, \quad q_3(x) = x^2 + x\]

Solution:

We solve \( a\cdot q_1(x) + b\cdot q_2(x) + c\cdot q_3(x) = p(x) \):

\[a(x^2 + 1) + b(x - 1) + c(x^2 + x) = 3x^2 + 2x - 1\]

Expanding:

\[(a + c)x^2 + (b + c)x + (a - b) = 3x^2 + 2x - 1\]

System:

\[\begin{cases}a + c = 3 \\b + c = 2 \\a - b = -1\end{cases}\]

Solving:

\[\begin{aligned}&\text{From (1): } c = 3 - a \\&\text{From (2): } b = 2 - c= 2 - (3 - a) = a - 1 \\&\text{Substitute into (3): } a - (a - 1) = -1 \Rightarrow1 = -1 \quad \text{❌ Contradiction!}\end{aligned}\]

Result: \( p(x) \) is NOT a linear combination of \( q_1, q_2, q_3 \).
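The contradiction shows up numerically as a rank jump: the coefficient matrix has rank 2, but appending the right-hand side raises the rank to 3, so no solution exists. A NumPy sketch (the matrix layout is ours):

```python
import numpy as np

# Coefficient matrix of the system a+c=3, b+c=2, a-b=-1 (columns: a, b, c).
A = np.array([[1.0,  0.0, 1.0],
              [0.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])
rhs = np.array([3.0, 2.0, -1.0])

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.column_stack([A, rhs]))
# rank_A < rank_aug means the system is inconsistent: no solution.
```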

  5. Function Linear Combination

Example: Show that \( f(x) = 3\sin x + 2\cos x \) is a linear combination of \( \sin x \) and \( \cos x \)

Solution:

This is trivial by inspection:

\[f(x) = 3\cdot \sin x + 2\cdot \cos x\]

Result:

\[\boxed{f(x) = 3\sin x + 2\cos x = 3(\sin x) + 2(\cos x)}\]

  6. Standard Basis Combination in ℝ³

Example: Express \( \mathbf{v} = \begin{pmatrix} 4 \\ -2 \\ 7 \end{pmatrix} \) using standard basis vectors

Solution:

Using standard basis \( \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \mathbf{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \mathbf{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \):

\[\mathbf{v} = 4\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} -2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + 7\begin{pmatrix} 0 \\ 0 \\ 1\end{pmatrix}\]

Result:

\[\boxed{\begin{pmatrix} 4 \\ -2 \\ 7 \end{pmatrix} = 4\mathbf{e}_1 -2\mathbf{e}_2 + 7\mathbf{e}_3}\]

  7. Zero Vector as Trivial Linear Combination

Example: Show the zero vector can be written as a linear combination of any vectors

Solution:

For any vectors \( \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k \):

\[0\cdot \mathbf{v}_1 + 0\cdot \mathbf{v}_2 + \cdots + 0\cdot\mathbf{v}_k = \mathbf{0}\]

Example:

\[\boxed{0\begin{pmatrix} 1 \\ 2 \end{pmatrix} + 0\begin{pmatrix} 3 \\ -1 \end{pmatrix} + 0\begin{pmatrix} 0 \\ 5 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \end{pmatrix}}\]

🔑 Key Insights

  1. A linear combination has the form: \( c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k \)
  2. Span = all possible linear combinations of a set of vectors
  3. Linear dependence = when one vector can be written as a linear combination of the others
  4. Basis = a linearly independent set that spans the entire space

The ability to express vectors as linear combinations is fundamental to understanding vector spaces, subspaces, and coordinate systems!


📚 Linear Independence vs. Dependence: Complete Examples

  1. Basic 2-Vector Case in ℝ²

Example 1.1: Linearly Independent Vectors

Test: \( \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} \)

Step-by-Step:

We solve \( c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0} \):

\[c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 3 \\ 1\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}\]

System:

\[\begin{cases}c_1 + 3c_2 = 0 \\2c_1 + c_2 = 0\end{cases}\]

From first equation: \( c_1 = -3c_2 \)

Substitute into second: \( 2(-3c_2) + c_2 = 0 \Rightarrow -6c_2 + c_2 = 0 \Rightarrow -5c_2 = 0 \Rightarrow c_2 = 0 \)

Then \( c_1 = -3(0) = 0 \)

✅ Result: Only trivial solution \( c_1 = c_2 = 0 \)

\[\boxed{\text{LINEARLY INDEPENDENT}}\]

Example 1.2: Linearly Dependent Vectors

Test: \( \mathbf{u}_1 = \begin{pmatrix} 2 \\ 4 \end{pmatrix}, \quad \mathbf{u}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \)

Step-by-Step:

\[c_1\begin{pmatrix} 2 \\ 4 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ 2\end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}\]

System:

\[\begin{cases}2c_1 + c_2 = 0 \\4c_1 + 2c_2 = 0\end{cases}\]

From first equation: \( c_2 = -2c_1 \)

Substitute into second: \( 4c_1 + 2(-2c_1) = 4c_1 - 4c_1 = 0 \) ✓ (always true)

❌ Result: Non-trivial solution exists, e.g., \( c_1 = 1, c_2 = -2 \)

\[\boxed{1\cdot\begin{pmatrix} 2 \\ 4 \end{pmatrix} -2\cdot\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0\end{pmatrix}}\]

\[\boxed{\text{LINEARLY DEPENDENT}}\]

  2. Three Vectors in ℝ²

Example 2: Test: \( \mathbf{a} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \mathbf{c} = \begin{pmatrix} 2 \\ 3 \end{pmatrix} \)

Step-by-Step:

\[c_1\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} 0 \\ 1\end{pmatrix} + c_3\begin{pmatrix} 2 \\ 3 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \end{pmatrix}\]

System:

\[\begin{cases}c_1 + 2c_3 = 0 \\c_2 + 3c_3 = 0\end{cases}\]

We have 2 equations but 3 unknowns → infinitely many solutions!

Let \( c_3 = t \), then \( c_1 = -2t \), \( c_2 = -3t \)

❌ Result: Non-trivial solution: \( c_1 = -2, c_2 = -3, c_3 = 1 \)

\[\boxed{-2\begin{pmatrix} 1 \\ 0 \end{pmatrix} - 3\begin{pmatrix} 0 \\1 \end{pmatrix} + 1\begin{pmatrix} 2 \\ 3 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \end{pmatrix}}\]

\[\boxed{\text{LINEARLY DEPENDENT}}\]

Key Insight: Any 3 vectors in ℝ² are always linearly dependent.
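Equivalently, stacking the three vectors as columns gives a matrix whose rank (at most 2) is smaller than the number of vectors. A NumPy sketch:

```python
import numpy as np

# Stack a, b, c as the columns of a 2x3 matrix.
A = np.array([[1, 0, 2],
              [0, 1, 3]])

rank = np.linalg.matrix_rank(A)   # at most 2 for a 2-row matrix
num_vectors = A.shape[1]          # 3 vectors -> must be dependent

# The explicit dependence found above: -2a - 3b + c = 0.
dependence = -2 * A[:, 0] - 3 * A[:, 1] + A[:, 2]
```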

  3. Three Vectors in ℝ³

Example 3.1: Independent Set

Test: \( \mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad \mathbf{v}_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \)

Step-by-Step:

\[c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c_3\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} =\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}\]

System:

\[\begin{cases}c_1 + c_2 + c_3 = 0 \\c_2 + c_3 = 0 \\c_3 = 0\end{cases}\]

Back substitution:

- From (3): \( c_3 = 0 \)

- From (2): \( c_2 + 0 = 0 \Rightarrow c_2 = 0 \)

- From (1): \( c_1 + 0 + 0 = 0 \Rightarrow c_1 = 0 \)

✅ Result: Only trivial solution

\[\boxed{\text{LINEARLY INDEPENDENT}}\]

Example 3.2: Dependent Set

Test: \( \mathbf{u}_1 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \quad \mathbf{u}_2 = \begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix}, \quad \mathbf{u}_3 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \)

Observation: \( \mathbf{u}_2 = 2\mathbf{u}_1 \), so they're clearly dependent.

Step-by-Step:

\[c_1\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + c_2\begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix} + c_3\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}\]

Non-trivial solution: \( c_1 = 2, c_2 = -1, c_3 = 0 \)

\[\boxed{2\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} - 1\begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix} + 0\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}}\]

\[\boxed{\text{LINEARLY DEPENDENT}}\]

  4. Using Determinants for Square Matrices

Example 4.1: Independent Vectors (Non-zero Determinant)

Test: \( \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 3 \\ 1 \end{pmatrix} \)

Form matrix: \( A = \begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix} \)

Determinant: \( \det(A) = (1)(1) - (3)(2) = 1 - 6 = -5 \neq 0 \)

✅ Result: \( \det \neq 0 \Rightarrow \boxed{\text{LINEARLY INDEPENDENT}} \)

Example 4.2: Dependent Vectors (Zero Determinant)

Test: \( \mathbf{u}_1 = \begin{pmatrix} 2 \\ 4 \end{pmatrix}, \quad \mathbf{u}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \)

Form matrix: \( B = \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix} \)

Determinant: \( \det(B) = (2)(2) - (1)(4) = 4 - 4 = 0 \)

❌ Result: \( \det = 0 \Rightarrow \boxed{\text{LINEARLY DEPENDENT}} \)
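Both determinant tests are one-liners in NumPy (matrix names `A`, `B` as above):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])   # columns v1, v2: independent
B = np.array([[2.0, 1.0],
              [4.0, 2.0]])   # columns u1, u2: dependent

det_A = np.linalg.det(A)     # expect -5 (nonzero -> independent)
det_B = np.linalg.det(B)     # expect 0 (zero -> dependent)
```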

  5. Polynomial Examples

Example 5.1: Independent Polynomials

Test: \( p_1(x) = 1, \quad p_2(x) = x, \quad p_3(x) = x^2 \)

Step-by-Step:

Solve \( c_1\cdot 1 + c_2\cdot x + c_3\cdot x^2 = 0 \) for all \( x \)

This means: \( c_3x^2 + c_2x + c_1 = 0 \) (zero polynomial)

For a polynomial to be identically zero, all coefficients must be zero:

\[c_3 = 0, \quad c_2 = 0, \quad c_1 = 0\]

✅ Result: Only trivial solution ⇒ \( \boxed{\text{LINEARLY INDEPENDENT}} \)

Example 5.2: Polynomials That Look Dependent

Test: \( q_1(x) = x^2 + 1, \quad q_2(x) = x^2 + 2x + 1, \quad q_3(x) = 2x + 1 \)

Step-by-Step:

We find \( c_1, c_2, c_3 \) such that:

\[c_1(x^2 + 1) + c_2(x^2 + 2x + 1) + c_3(2x + 1) = 0\]

Group terms:

\[(c_1 + c_2)x^2 + (2c_2 + 2c_3)x + (c_1 + c_2 + c_3) = 0\]

System:

\[\begin{cases}c_1 + c_2 = 0 \\2c_2 + 2c_3 = 0 \\c_1 + c_2 + c_3 = 0\end{cases}\]

Subtracting (1) from (3) gives \( c_3 = 0 \).

From (2): \( c_2 = -c_3 = 0 \), and then from (1): \( c_1 = -c_2 = 0 \)

✅ Result: Only the trivial solution exists, so despite their shared building blocks these polynomials are actually

\[\boxed{\text{LINEARLY INDEPENDENT}}\]

Moral: superficial resemblance between vectors does not imply dependence; only the homogeneous system decides.

Example 5.3: Clearly Dependent Polynomials

Test: \( r_1(x) = x^2 + 2x + 1, \quad r_2(x) = 2x^2 + 4x + 2, \quad r_3(x) = x + 1 \)

Observation: \( r_2(x) = 2r_1(x) \), so clearly dependent.

Non-trivial combination: \( 2r_1(x) - r_2(x) + 0\cdot r_3(x) = 0 \)

\[\boxed{\text{LINEARLY DEPENDENT}}\]

  6. Function Examples

Example 6.1: Independent Functions

Test: \( f_1(x) = e^x, \quad f_2(x) = e^{2x} \)

The Wronskian:

\[W = \begin{vmatrix}e^x & e^{2x} \\e^x & 2e^{2x}\end{vmatrix} = e^x \cdot 2e^{2x} - e^{2x} \cdot e^x = 2e^{3x} - e^{3x} = e^{3x} \neq 0\]

✅ Result: \( \boxed{\text{LINEARLY INDEPENDENT}} \)

Example 6.2: Dependent Functions

Test: \( g_1(x) = \sin^2 x, \quad g_2(x) = \cos^2 x, \quad g_3(x) = 1 \)

Observation: \( \sin^2 x + \cos^2 x = 1 \), so:

\[1\cdot g_1(x) + 1\cdot g_2(x) - 1\cdot g_3(x) = 0\]

❌ Result: \( \boxed{\text{LINEARLY DEPENDENT}} \)
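The dependence relation can be confirmed numerically by evaluating the combination on a grid of sample points. A NumPy sketch:

```python
import numpy as np

# Evaluate 1*sin^2(x) + 1*cos^2(x) - 1*1 at many points; it should vanish everywhere.
xs = np.linspace(-3.0, 3.0, 50)
combo = 1.0 * np.sin(xs)**2 + 1.0 * np.cos(xs)**2 - 1.0
max_dev = np.max(np.abs(combo))
```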

🔑 Summary of Key Tests

For Linear Independence:

  1. Homogeneous system has only trivial solution
  2. Determinant of matrix formed by vectors ≠ 0 (square case)
  3. Rank of matrix = number of vectors
  4. Wronskian ≠ 0 for functions

For Linear Dependence:

  1. Non-trivial combination gives zero vector
  2. Determinant = 0 (square case)
  3. Rank < number of vectors
  4. One vector is a linear combination of others
  5. More vectors than dimension of space

Important Theorems:

- In \( \mathbb{R}^n \), any set of \( m > n \) vectors is always dependent

- A set containing the zero vector is always dependent

- If one vector is a scalar multiple of another, they are dependent

These concepts are fundamental for understanding basis, dimension, and solving systems of linear equations!


Here is a comparison of Gaussian elimination, Gauss-Jordan elimination, and reduced row echelon form (RREF), with detailed examples:

🔄 Comparison of Elimination Methods

  1. Gaussian Elimination (Row Echelon Form - REF)

Goal: Transform matrix to upper triangular form to easily solve system via back substitution.

Characteristics:

- Zeros below pivots

- Pivots are 1 (optional)

- Staircase pattern

- Stops when matrix is in REF

Example: Solve the system:

\[\begin{cases}2x + 4y + 2z = 8 \\x + 2y + 3z = 5 \\3x + y - z = 4\end{cases}\]

Step-by-Step Gaussian Elimination:

Step 1: Augmented Matrix

\[\left[\begin{array}{ccc|c}2 & 4 & 2 & 8 \\1 & 2 & 3 & 5 \\3 & 1 & -1 & 4\end{array}\right]\]

Step 2: Make first column below pivot = 0

\[R_2 \leftarrow R_2 - \frac{1}{2}R_1,\quad R_3 \leftarrow R_3 - \frac{3}{2}R_1\]

\[\left[\begin{array}{ccc|c}2 & 4 & 2 & 8 \\0 & 0 & 2 & 1 \\0 & -5 & -4 & -8\end{array}\right]\]

Step 3: Swap rows to get pivot in 2nd column

\[R_2 \leftrightarrow R_3\]

\[\left[\begin{array}{ccc|c}2 & 4 & 2 & 8 \\0 & -5 & -4 & -8 \\0 & 0 & 2 & 1\end{array}\right]\]

✅ Final REF:

\[\boxed{\left[\begin{array}{ccc|c}2 & 4 & 2 & 8 \\0 & -5 & -4 & -8 \\0 & 0 & 2 & 1\end{array}\right]}\]

Back Substitution:

- From row 3: \( 2z = 1 \Rightarrow z = 0.5 \)

- From row 2: \( -5y - 4(0.5) = -8 \Rightarrow -5y - 2 = -8 \Rightarrow y = 1.2 \)

- From row 1: \( 2x + 4(1.2) + 2(0.5) = 8 \Rightarrow 2x + 4.8 + 1 = 8 \Rightarrow x = 1.1 \)

Solution: \( \boxed{x = 1.1,\ y = 1.2,\ z = 0.5} \)
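The forward elimination and back substitution above can be sketched in pure Python. `Fraction` keeps the arithmetic exact (0.5, 1.2, and 1.1 become 1/2, 6/5, and 11/10); the function name `gauss_solve` is our own, and the sketch assumes a square system with a unique solution:

```python
from fractions import Fraction

def gauss_solve(A, b):
    # Forward elimination to REF, then back substitution.
    # Assumes a square, full-rank system (unique solution).
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        # swap in a row with a nonzero pivot if needed (as in Step 3 above)
        pivot = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[pivot] = M[pivot], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * p for a, p in zip(M[i], M[c])]
    # back substitution from the last row upward
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# The worked example: 2x+4y+2z=8, x+2y+3z=5, 3x+y-z=4
x, y, z = gauss_solve([[2, 4, 2], [1, 2, 3], [3, 1, -1]], [8, 5, 4])
```

Running this reproduces x = 11/10, y = 6/5, z = 1/2, i.e. the decimal solution above.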

  2. Gauss-Jordan Elimination (Reduced Row Echelon Form - RREF)

Goal: Transform matrix to RREF where solution can be read directly.

Characteristics:

- Zeros above AND below pivots

- Pivots = 1

- Each pivot column contains only the pivot

Example: Same system as above

Step-by-Step Gauss-Jordan:

Step 1: Start from REF (from Gaussian Elimination)

\[\left[\begin{array}{ccc|c}2 & 4 & 2 & 8 \\0 & -5 & -4 & -8 \\0 & 0 & 2 & 1\end{array}\right]\]

Step 2: Make all pivots = 1

\[R_1 \leftarrow \frac{1}{2}R_1,\quad R_2 \leftarrow -\frac{1}{5}R_2,\quad R_3 \leftarrow \frac{1}{2}R_3\]

\[\left[\begin{array}{ccc|c}1 & 2 & 1 & 4 \\0 & 1 & 0.8 & 1.6 \\0 & 0 & 1 & 0.5\end{array}\right]\]

Step 3: Eliminate above pivots

\[R_2 \leftarrow R_2 - 0.8R_3\]

\[\left[\begin{array}{ccc|c}1 & 2 & 1 & 4 \\0 & 1 & 0 & 1.2 \\0 & 0 & 1 & 0.5\end{array}\right]\]

\[R_1 \leftarrow R_1 - R_3\]

\[\left[\begin{array}{ccc|c}1 & 2 & 0 & 3.5 \\0 & 1 & 0 & 1.2 \\0 & 0 & 1 & 0.5\end{array}\right]\]

\[R_1 \leftarrow R_1 - 2R_2\]

\[\left[\begin{array}{ccc|c}1 & 0 & 0 & 1.1 \\0 & 1 & 0 & 1.2 \\0 & 0 & 1 & 0.5\end{array}\right]\]

✅ Final RREF:

\[\boxed{\left[\begin{array}{ccc|c}1 & 0 & 0 & 1.1 \\0 & 1 & 0 & 1.2 \\0 & 0 & 1 & 0.5\end{array}\right]}\]

Solution (read directly): \( \boxed{x = 1.1,\ y = 1.2,\ z = 0.5} \)
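The same idea, continued to RREF: scale each pivot to 1 and clear entries above as well as below it, so the solution column can be read off directly with no back substitution. A pure-Python sketch (the function name `gauss_jordan` is ours; a square system with a unique solution is assumed):

```python
from fractions import Fraction

def gauss_jordan(A, b):
    # Reduce the augmented matrix all the way to RREF:
    # each pivot scaled to 1, zeros above AND below every pivot.
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        pivot = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [a / M[c][c] for a in M[c]]          # make the pivot 1
        for i in range(n):
            if i != c and M[i][c] != 0:             # clear above and below
                f = M[i][c]
                M[i] = [a - f * p for a, p in zip(M[i], M[c])]
    return [M[i][n] for i in range(n)]  # solution column, read directly

# Same system: 2x+4y+2z=8, x+2y+3z=5, 3x+y-z=4
x, y, z = gauss_jordan([[2, 4, 2], [1, 2, 3], [3, 1, -1]], [8, 5, 4])
```

As expected, this returns 11/10, 6/5, 1/2, matching the boxed RREF's last column.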

  3. Reduced Row Echelon Form (RREF) - The Standard

Definition: A matrix is in RREF if:

  1. All zero rows are at bottom
  2. Each pivot = 1
  3. Pivots move strictly right in successive rows
  4. Each pivot column contains ONLY the pivot

Example: Find RREF of:

\[A = \begin{pmatrix}1 & 2 & -1 & 4 \\2 & 4 & 1 & 5 \\3 & 6 & 0 & 9\end{pmatrix}\]

Step-by-Step to RREF:

Step 1: Eliminate below first pivot

\[R_2 \leftarrow R_2 - 2R_1,\quad R_3 \leftarrow R_3 - 3R_1\]

\[\begin{pmatrix}1 & 2 & -1 & 4 \\0 & 0 & 3 & -3 \\0 & 0 & 3 & -3\end{pmatrix}\]

Step 2: Eliminate below second pivot

\[R_3 \leftarrow R_3 - R_2\]

\[\begin{pmatrix}1 & 2 & -1 & 4 \\0 & 0 & 3 & -3 \\0 & 0 & 0 & 0\end{pmatrix}\]

Step 3: Make pivot = 1

\[R_2 \leftarrow \frac{1}{3}R_2\]

\[\begin{pmatrix}1 & 2 & -1 & 4 \\0 & 0 & 1 & -1 \\0 & 0 & 0 & 0\end{pmatrix}\]

Step 4: Eliminate above pivot

\[R_1 \leftarrow R_1 + R_2\]

\[\begin{pmatrix}1 & 2 & 0 & 3 \\0 & 0 & 1 & -1 \\0 & 0 & 0 & 0\end{pmatrix}\]

✅ Final RREF:

\[\boxed{\begin{pmatrix}1 & 2 & 0 & 3 \\0 & 0 & 1 & -1 \\0 & 0 & 0 & 0\end{pmatrix}}\]

Interpretation:

- Pivot columns: 1 and 3

- Free variable: \( x_2 \)

- Solution: \( x_1 = 3 - 2x_2,\ x_3 = -1 \)
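The four RREF conditions can be checked, and the pivot columns recovered, mechanically, even for a rectangular matrix like this one. A sketch in pure Python with exact arithmetic (the helper name `rref` is our own):

```python
from fractions import Fraction

def rref(rows):
    # General RREF for any (possibly rectangular) matrix. Also returns
    # the pivot-column indices, which mark the basic variables;
    # the remaining columns correspond to free variables.
    M = [[Fraction(x) for x in r] for r in rows]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        if r == m:
            break  # no rows left to place a pivot in
        pivot = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column -> free variable
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [a / M[r][c] for a in M[r]]          # make the pivot 1
        for i in range(m):
            if i != r and M[i][c] != 0:             # clear above and below
                f = M[i][c]
                M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[1, 2, -1, 4], [2, 4, 1, 5], [3, 6, 0, 9]]
R, pivots = rref(A)
```

On this matrix it reproduces the boxed RREF, and `pivots` comes back as `[0, 2]` (columns 1 and 3 in the 1-based numbering used above), confirming that \( x_2 \) is the free variable.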

🔍 Detailed Example Showing All Methods

System:

\[\begin{cases}x + 2y + 3z = 9 \\2x - y + z = 8 \\3x - z = 3\end{cases}\]

A. Gaussian Elimination to REF

Augmented Matrix:

\[\left[\begin{array}{ccc|c}1 & 2 & 3 & 9 \\2 & -1 & 1 & 8 \\3 & 0 & -1 & 3\end{array}\right]\]

Elimination:

\[R_2 \leftarrow R_2 - 2R_1,\ R_3 \leftarrow R_3 - 3R_1\Rightarrow\left[\begin{array}{ccc|c}1 & 2 & 3 & 9 \\0 & -5 & -5 & -10 \\0 & -6 & -10 & -24\end{array}\right]\]

\[R_3 \leftarrow R_3 - \frac{6}{5}R_2\Rightarrow\left[\begin{array}{ccc|c}1 & 2 & 3 & 9 \\0 & -5 & -5 & -10 \\0 & 0 & -4 & -12\end{array}\right]\]

✅ REF:

\[\boxed{\left[\begin{array}{ccc|c}1 & 2 & 3 & 9 \\0 & -5 & -5 & -10 \\0 & 0 & -4 & -12\end{array}\right]}\]

B. Gauss-Jordan to RREF

Continue from REF:

\[R_2 \leftarrow -\frac{1}{5}R_2,\ R_3 \leftarrow -\frac{1}{4}R_3\Rightarrow\left[\begin{array}{ccc|c}1 & 2 & 3 & 9 \\0 & 1 & 1 & 2 \\0 & 0 & 1 & 3\end{array}\right]\]

\[R_2 \leftarrow R_2 - R_3\Rightarrow\left[\begin{array}{ccc|c}1 & 2 & 3 & 9 \\0 & 1 & 0 & -1 \\0 & 0 & 1 & 3\end{array}\right]\]

\[R_1 \leftarrow R_1 - 3R_3\Rightarrow\left[\begin{array}{ccc|c}1 & 2 & 0 & 0 \\0 & 1 & 0 & -1 \\0 & 0 & 1 & 3\end{array}\right]\]

\[R_1 \leftarrow R_1 - 2R_2\Rightarrow\left[\begin{array}{ccc|c}1 & 0 & 0 & 2 \\0 & 1 & 0 & -1 \\0 & 0 & 1 & 3\end{array}\right]\]

✅ RREF:

\[\boxed{\left[\begin{array}{ccc|c}1 & 0 & 0 & 2 \\0 & 1 & 0 & -1 \\0 & 0 & 1 & 3\end{array}\right]}\]

Solution: \( \boxed{x = 2,\ y = -1,\ z = 3} \)
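A quick sanity check on the result: substitute \( x = 2,\ y = -1,\ z = 3 \) back into the original three equations.

```python
# Plug the computed solution back into the original system as a check.
x, y, z = 2, -1, 3
assert x + 2*y + 3*z == 9   # first equation
assert 2*x - y + z == 8     # second equation
assert 3*x - z == 3         # third equation
```

All three equations are satisfied, so no arithmetic slipped during the row operations.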

🎯 Key Insights

  1. Gaussian Elimination is computationally more efficient for large systems
  2. Gauss-Jordan gives the solution directly but requires more operations
  3. RREF is unique and reveals the complete structure of the solution space
  4. Pivot columns in RREF correspond to basic variables
  5. Free variables correspond to non-pivot columns

When to Use Each:

- Gaussian Elimination: Numerical computations, computer algorithms

- Gauss-Jordan/RREF: Theoretical analysis, finding inverses, understanding solution structure

- REF: Quick solution via back substitution