MAT 247 Review

$\newcommand{\la}{\langle}$ $\newcommand{\ra}{\rangle}$ $\newcommand{\vu}{\mathbf{u}}$ $\newcommand{\vv}{\mathbf{v}}$ $\newcommand{\vw}{\mathbf{w}}$ $\newcommand{\vzz}{\mathbf{z}}$ $\newcommand{\nc}{\newcommand}$ $\nc{\Cc}{\mathbb{C}}$ $\nc{\Rr}{\mathbb{R}}$ $\nc{\Qq}{\mathbb{Q}}$ $\nc{\Nn}{\mathbb{N}}$ $\nc{\cB}{\mathcal{B}}$ $\nc{\cE}{\mathcal{E}}$ $\nc{\cC}{\mathcal{C}}$ $\nc{\cD}{\mathcal{D}}$ $\nc{\mi}{\mathbf{i}}$ $\nc{\sp}[1]{\langle #1 \rangle}$ $\nc{\ol}[1]{\overline{#1} }$ $\nc{\norm}[1]{\left\| #1 \right\|}$ $\nc{\abs}[1]{\left| #1 \right|}$ $\nc{\vz}{\mathbf{0}}$ $\nc{\vo}{\mathbf{1}}$ $\nc{\DMO}{\DeclareMathOperator}$ $\DMO{\tr}{tr}$ $\DMO{\nullsp}{nullsp}$ $\nc{\va}{\mathbf{a}}$ $\nc{\vb}{\mathbf{b}}$ $\nc{\vx}{\mathbf{x}}$ $\nc{\ve}{\mathbf{e}}$ $\nc{\vd}{\mathbf{d}}$ $\nc{\vh}{\mathbf{h}}$ $\nc{\ds}{\displaystyle}$ $\nc{\bm}[1]{\begin{bmatrix} #1 \end{bmatrix}}$ $\nc{\gm}[2]{\bm{\mid & \cdots & \mid \\ #1 & \cdots & #2 \\ \mid & \cdots & \mid}}$ $\nc{\MN}{M_{m \times n}(K)}$ $\nc{\NM}{M_{n \times m}(K)}$ $\nc{\NP}{M_{n \times p}(K)}$ $\nc{\MP}{M_{m \times p}(K)}$ $\nc{\PN}{M_{p \times n}(K)}$ $\nc{\NN}{M_n(K)}$ $\nc{\im}{\mathrm{Im\ }}$ $\nc{\ev}{\mathrm{ev}}$ $\nc{\Hom}{\mathrm{Hom}}$ $\nc{\com}[1]{[\phantom{a}]^{#1}}$ $\nc{\rBD}[1]{ [#1]_{\cB}^{\cD}}$ $\DMO{\id}{id}$ $\DMO{\rk}{rk}$ $\DMO{\nullity}{nullity}$ $\DMO{\End}{End}$ $\DMO{\proj}{proj}$ $\nc{\GL}{\mathrm{GL}}$

1 Matrices and Operations on Matrices

$M_{n \times m}(K)$ is the set of $n \times m$ matrices over a field $K$.

It is a vector space over $K$ under matrix addition and scalar multiplication. As a vector space, it is isomorphic to $K^{nm}$.

Given a matrix $A \in M_{n \times p}(K)$ and a matrix $B \in M_{p \times m}(K)$, the product $AB$ is in $M_{n \times m}(K)$.

Matrix multiplication is not commutative, i.e. in general $AB \neq BA$ even if both are defined.

In MAT 331, we will learn that matrix multiplication represents composition of linear maps.

The structure $(M_n(K) = M_{n \times n}(K),+, *)$ ($n > 1$) is an example of a (non-commutative) ring. Together with scalar multiplication, $M_n(K)$ is a $K$-algebra.

The invertible $n \times n$ matrices form a group, denoted by $\GL_n(K)$, under matrix multiplication. Its identity element, denoted by $I_n$, is the ($n \times n$) identity matrix.

The transpose of an $n \times m$ matrix $A$, denoted by $A^T$, is the $m \times n$ matrix whose $i$-th column is the $i$-th row of $A$.

$(AB)^T = B^TA^T$ and $(AB)^{-1} = B^{-1}A^{-1}$ whenever both sides are defined.

2 Systems of Linear Equations

A system of $m$ linear equations in $n$ unknowns can be expressed via matrix multiplication as $A\vx = \vb$, where $A \in M_{m \times n}(K)$, $\vx \in K^n$ and $\vb \in K^m$.

The system is homogeneous if $\vb = \vz$. The solution set of the homogeneous system $A\vx = \vz$ is a vector subspace of $K^n$; in particular, it contains the trivial solution $\vz \in K^n$.

Proposition. The solution set of a consistent system $A\vx = \vb$ is of the form $\vv_0 + H$ where $\vv_0$ is a (any) particular solution of $A\vx = \vb$ and $H$ is the solution space of the homogeneous system $A\vx = \vz$.

There are 3 types of elementary row operations (EROs). An ERO $\rho$ on a matrix $A$ can be carried out by multiplying $A$ from the left by the corresponding elementary matrix $E_{\rho}$ (the matrix obtained by applying $\rho$ to the identity matrix). In other words, $E_{\rho}A$ is the matrix obtained by applying $\rho$ to $A$.
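For example, here is a small sanity check in Python with SymPy (the matrices are made-up examples, not from the notes): each $E_\rho$ below is the identity matrix with the ERO applied to it, and left-multiplying by it performs that ERO on $A$.

```python
from sympy import Matrix, Rational

A = Matrix([[1, 3, 5],
            [2, 4, 6]])

# Type I (swap rows 1 and 2): apply the swap to I_2 to get E1.
E1 = Matrix([[0, 1],
             [1, 0]])
assert E1 * A == Matrix([[2, 4, 6], [1, 3, 5]])

# Type III (add -2 * row 1 to row 2): apply the operation to I_2.
E3 = Matrix([[1, 0],
             [-2, 1]])
assert E3 * A == Matrix([[1, 3, 5], [0, -2, -4]])

# Type II (scale row 2 by -1/2): apply the scaling to I_2.
E2 = Matrix([[1, 0],
             [0, Rational(-1, 2)]])
assert E2 * (E3 * A) == Matrix([[1, 3, 5], [0, 1, 2]])
```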

Proposition. EROs are reversible; equivalently, elementary matrices are invertible.

Two matrices are row equivalent if one can be transformed into the other by EROs.

Theorem. Each matrix $A$ is row equivalent to a unique reduced row echelon matrix $R_A$. In other words, there is one and only one reduced row echelon matrix in each row equivalence class.

$R_A$ is called the reduced row echelon form (rref) of $A$.

Gauss-Jordan elimination (GJE) is an algorithm that computes the rref of a given matrix. It is the most fundamental algorithm in Linear Algebra.

For example, to solve $A \vx = \vb$, we can first form its augmented matrix $B = [A|\vb]$ and then compute $R_B$ by GJE. Note that $R_B$ is of the form $[R_A| \vb']$ and that $A\vx = \vb$ and $R_A \vx = \vb'$ have the same solution set. Since $R_A$ is in rref, the solutions of $R_A\vx = \vb'$ can be read off (or more precisely, parametrized) easily, and hence so can the solutions of the original system.
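As an illustration, assuming SymPy is available, its `rref` method carries out GJE; on a made-up system, consistency and the solutions can be read off from the pivot positions.

```python
from sympy import Matrix

# A made-up system A x = b over Q.
A = Matrix([[1, 2, -1],
            [2, 4, 1]])
b = Matrix([1, 5])

# Form the augmented matrix B = [A | b] and row reduce it by GJE.
B = A.row_join(b)
R_B, pivots = B.rref()          # rref() performs Gauss-Jordan elimination

# The system is consistent iff no pivot lands in the augmented column.
assert B.cols - 1 not in pivots
print(R_B)   # Matrix([[1, 2, 0, 2], [0, 0, 1, 1]]): x1 = 2 - 2*x2, x3 = 1, x2 free
```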

Proposition. Let $A \in M_{n \times m}(K)$ and $B = [A|I_n]$. Then $R_B = [R_A|M]$ where $M$ is a product of elementary matrices (hence invertible) and $MA = R_A$.

We get two bonuses from this observation.

Proposition. An $n \times n$ square matrix $A$ is invertible if and only if $R_A = I_n$.

Moreover, if $A$ is invertible, then $A^{-1}$ is the matrix $M$ in the previous proposition.
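A quick SymPy sketch of this inversion procedure, on a made-up invertible $2 \times 2$ matrix:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [5, 3]])            # made-up invertible matrix (det = 1)

# Row reduce [A | I_n]; when A is invertible this yields [I_n | A^{-1}].
R, _ = A.row_join(eye(2)).rref()
R_A, M = R[:, :2], R[:, 2:]

assert R_A == eye(2)            # R_A = I_n, so A is invertible
assert M * A == eye(2)          # and M is A^{-1}
assert M == A.inv()
```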

Proposition. Every invertible matrix is a product of elementary matrices.

In practice, if we want to prove a statement about invertible matrices, it often suffices (and is easier) to prove it for elementary matrices first.

3 Determinant

The determinant $\det$ is the unique function on $M_{n}(K)$ that satisfies the 4 properties described in the notes. They can be summarized as: (1, 2) linear in each row (or column), (3) alternating, and (4) $\det(I_n) = 1$.

Proposition. For square matrices, $\det(A^T) = \det A$.

The determinant of a type I elementary matrix (corresponding to swapping two rows) is $-1$; of a type II elementary matrix (corresponding to multiplying a row by a non-zero scalar $\lambda$) is $\lambda$; and of a type III elementary matrix (corresponding to adding a multiple of one row to another) is $1$.

Proposition. For square matrices, $\det(AB) = \det(A)\det(B)$. Consequently $A$ is invertible if and only if $\det(A) \neq 0$.
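A small SymPy check of multiplicativity on made-up matrices (an illustration, not a proof):

```python
from sympy import Matrix, Rational

A = Matrix([[1, 2], [3, 4]])    # made-up; det(A) = -2
B = Matrix([[0, 1], [5, 2]])    # made-up; det(B) = -5

# det is multiplicative.
assert (A * B).det() == A.det() * B.det() == 10

# det(A) != 0, so A is invertible; multiplicativity then forces
# det(A^{-1}) = 1/det(A).
assert A.inv().det() == Rational(-1, 2)
```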

We worked out several formulas for computing determinants and introduced the terminology of minors and cofactors.
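As an illustration of these terms, here is a naive recursive implementation of cofactor expansion along the first row (a Python/SymPy sketch; the example matrix is made up), checked against SymPy's built-in `det`:

```python
from sympy import Matrix

def det_cofactor(A):
    """Determinant via cofactor expansion along the first row.

    Exponential time -- for illustrating minors and cofactors only."""
    n = A.rows
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        # The (0, j) minor: delete row 0 and column j of A.
        minor = A.minor_submatrix(0, j)
        # The cofactor attaches the sign (-1)^(0 + j).
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = Matrix([[1, 2, 3],
            [0, 4, 5],
            [1, 0, 6]])
assert det_cofactor(A) == A.det() == 22
```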

4 Vector Spaces and Subspaces

A vector space over $K$ is a set $V$ together with two operations called addition and scalar multiplication that satisfy several axioms (see notes for the link to these axioms).

Examples are $M_{n \times m}(K)$ with matrix addition and scalar multiplication. When $m=1$, $M_{n \times 1}(K) \cong K^n$ and its elements are called column vectors. When $n=1$, $M_{1 \times m}(K) \cong K^m$ and its elements are called row vectors.

A subset $W$ of a vector space $V$ is a subspace of $V$, written as $W \le V$, if

  1. $\vz_V \in W$.
  2. $\vv + \vv' \in W$ whenever both $\vv$ and $\vv'$ are in $W$.
  3. $\lambda \vv \in W$ for any $\lambda \in K$ whenever $\vv \in W$.

Example: the nullspace (or kernel) of $A \in \NM$, i.e. the solution space of $A\vx = \vz$, is a subspace of $M_{m \times 1}(K) \cong K^m$.

To show that a subset $W$ of $V$ is not a subspace, it suffices to demonstrate that any one of conditions 1, 2, 3 fails for $W$.

Proposition. A non-empty subset $W$ of $V$ is a subspace of $V$ if and only if for any $\vv, \vv' \in W$ and $\lambda \in K$, $\lambda \vv + \vv' \in W$.

5 Linear Combinations and Linear Dependence

Let $X$ be a subset of a vector space $V$. A linear combination of elements of $X$ is a sum of the form $$ c_1 \vv_1 + \ldots +c_k \vv_k$$ where $\vv_i$'s are elements of $X$ and $c_i$'s are scalars.

Note that two different linear combinations of elements of $X$ may sum to the same vector.

Denote by $\sp{X}$ or $\mathrm{span}(X)$ the set of $\vv \in V$ that are linear combinations of elements of $X$.

Proposition. $\sp{X}$ is the smallest (with respect to subset inclusion) subspace of $V$ that contains $X$. Consequently, $\sp{\sp{X}} = \sp{X}$.

Examples: $\sp{\emptyset} = \{\vz_{V}\}$. The span of the columns (resp. rows) of a matrix $A$ is called the column space (resp. row space) of $A$.

Q. Given a finite set of vectors $A$ in $K^n$ and a vector $\vb \in K^n$, can we decide whether $\vb \in \sp{A}$, and if so, how?

A. Identify the elements of $A$ with the columns (in some order) of a matrix, still called $A$. Then $\vb \in \sp{A}$ if and only if $\vb$ is in the column space of $A$, if and only if $A\vx = \vb$ is consistent (i.e. has a solution).

Since we know how to solve $A\vx = \vb$ by GJE, we can answer the question algorithmically.
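Concretely, in SymPy (with made-up data), the membership test can be phrased as a rank comparison, which is equivalent to checking that GJE puts no pivot in the augmented column:

```python
from sympy import Matrix

# Is b in the span of the columns of A? (made-up data)
A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])
b = Matrix([2, 3, 5])

# A x = b is consistent iff appending b does not raise the rank,
# i.e. iff rref([A | b]) has no pivot in the last column.
print(A.rank() == A.row_join(b).rank())   # True: b = 2*col1 + 3*col2
```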

A subset $X$ of $V$ is a spanning set if $\sp{X} = V$.

Q. Given a finite subset $A$ of $K^n$, can we tell whether $A$ is a spanning set? If so, how?

A. This is equivalent to determining whether the column space of $A$ is $K^n$, which is the case if and only if all rows of $R_A$ are nonzero.
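A SymPy sketch of this test, on a made-up set of columns:

```python
from sympy import Matrix

# Do the columns of A span K^3? (made-up example)
A = Matrix([[1, 0, 2, 1],
            [0, 1, 3, 0],
            [0, 0, 0, 1]])

R_A, pivots = A.rref()
# Spanning iff every row of R_A is nonzero, i.e. the rank equals n.
print(len(pivots) == A.rows)   # True
```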

Consequently,

Proposition. A spanning set of $K^n$ must have at least $n$ vectors.

A subset $X$ of $V$ is linearly dependent if some vector in $X$ is a linear combination of the other vectors in $X$; otherwise, $X$ is linearly independent.

Proposition. Any set that contains a linearly dependent set is itself linearly dependent. Equivalently, every subset of a linearly independent set is linearly independent.

The empty set $\emptyset$ is linearly independent. Any set that contains $\vz_V$ is linearly dependent.

Proposition. $X$ is linearly independent if and only if there is no non-trivial relation among its elements, i.e. the only way for a linear combination $c_1\vv_1 + \cdots + c_k \vv_k$ of distinct elements of $X$ to equal $\vz$ is when $c_1 = \cdots = c_k = 0$.

Q. Given a finite subset $A$ of $K^n$, can we tell whether it is linearly independent or not?

A. It follows from the previous proposition that $A$ is linearly independent if and only if $A\vx = \vz$ has only the trivial solution. This condition is equivalent to $R_A$ having no free columns. In other words, $A$ is (or the columns of $A$ are) linearly independent if and only if all columns of $R_A$ are pivotal.
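Again a SymPy sketch on made-up columns:

```python
from sympy import Matrix

# Are the columns of A linearly independent? (made-up example)
A = Matrix([[1, 1],
            [0, 1],
            [2, 0]])

R_A, pivots = A.rref()
# Independent iff every column of R_A is pivotal.
print(len(pivots) == A.cols)   # True
```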

Consequently,

Proposition. A linearly independent subset of $K^n$ has at most $n$ vectors.

6 Basis and Dimension

A subset $\cB$ of $V$ is a basis (over $K$) if $\cB$ is a linearly independent spanning set.

Theorem. Every vector space has a basis.

We already know that every basis of $K^n$ has exactly $n$ vectors. In fact,

Theorem. Any two bases of a vector space have the same cardinality.

This common cardinality is called the dimension of the vector space.

The row rank (resp. column rank) of a matrix is the dimension of its row space (resp. column space).

It is clear that the nonzero rows of an rre matrix form a basis of its row space and that the pivotal columns form a basis of its column space.

Consequently row rank = column rank (=number of nonzero rows = number of pivotal columns) for any rre matrix. In fact,

Theorem. Row rank = Column rank for any matrix.

This common value is called the rank of a matrix.

Q. How to find a basis of the nullspace of a given matrix $A$?

A. Solve $A\vx = \vz$. If the nullspace is $\{\vz\}$, then $\emptyset$ is a basis. Otherwise the system has nontrivial solutions, i.e. free variables. Setting one free variable to 1 and the rest to 0 gives a solution, and these solutions (one per free variable) form a basis of the nullspace of $A$. In particular, the nullity (i.e. the dimension of the nullspace) of $A$ is the number of free columns in $R_A$.
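SymPy's `nullspace` method implements this recipe; a sketch on a made-up matrix that is already in rref:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])     # already in rref; columns 2 and 4 are free

# nullspace() builds the basis as described above: one basis vector per
# free variable (that variable set to 1, the other free variables to 0).
basis = A.nullspace()
assert all(A * v == Matrix([0, 0]) for v in basis)
print(len(basis))              # nullity = number of free columns = 2
```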

Q. Given a finite subset $A$ of $K^n$, how can we find a subset $B$ of $A$ that is a basis of $\sp{A}$?

A. The columns of $A$ in the same positions as the pivotal columns of $R_A$ form a basis of the column space of $A$ (i.e. of $\sp{A}$, if we identify the elements of $A$ with the columns of a matrix, also called $A$).
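A SymPy sketch on a made-up matrix whose third column is redundant:

```python
from sympy import Matrix

# Made-up example: the third column is the sum of the first two.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])

_, pivots = A.rref()
basis = [A.col(j) for j in pivots]   # columns of A at the pivotal positions
print(pivots)                        # (0, 1): the first two columns suffice
```

(SymPy's `columnspace` method performs the same selection directly.)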

In particular, the rank of $A$ equals the number of pivotal columns of $R_A$. Thus, we have

Theorem. For any matrix $A$, nullity + rank = number of columns.
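A quick check of this identity on a made-up matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 10],
            [0, 0, 1, 4]])     # a made-up 3x4 example of rank 2

# nullity + rank = number of columns
assert len(A.nullspace()) + A.rank() == A.cols == 4
```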