Additional Definitions and Theorems

  1. Let S = {v1,...,vn} and T = {w1,...,wn} be bases for the n-dimensional vector space V.  Let P_{S<-T} be the transition matrix from the T-basis to the S-basis.  Then P_{S<-T} is nonsingular and P_{S<-T}^{-1} is the transition matrix from the S-basis to the T-basis.
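
  A quick numerical check of 1 (a NumPy sketch; the two bases for R^2 and the names S, T, P_ST are made up for illustration):

  ```python
  import numpy as np

  # Columns of S and T are two hypothetical bases for R^2.
  S = np.array([[1.0, 1.0],
                [0.0, 1.0]])
  T = np.array([[2.0, 0.0],
                [1.0, 1.0]])

  # P_ST = transition matrix from the T-basis to the S-basis:
  # it holds the S-coordinates of each T-basis vector, so S @ P_ST = T.
  P_ST = np.linalg.solve(S, T)

  # Going the other way: T @ P_TS = S.
  P_TS = np.linalg.solve(T, S)

  # P_ST is nonsingular, and P_TS is its inverse.
  assert np.allclose(P_TS, np.linalg.inv(P_ST))
  ```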

  2. S = {u1,...,un} is called orthogonal if the dot product of any two distinct vectors from the set is 0.  S is called orthonormal if it is orthogonal and each of its vectors has length 1.

  3. Any orthonormal set of vectors is linearly independent.
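
  For example (a sketch; the two vectors in R^3 below are one hypothetical orthonormal set):

  ```python
  import numpy as np

  # Columns of Q: a hypothetical orthonormal set of two vectors in R^3.
  Q = np.array([[1.0,  1.0],
                [1.0, -1.0],
                [0.0,  0.0]]) / np.sqrt(2)

  # Orthonormal: pairwise dot products are 0 and each length is 1,
  # which is exactly the statement Q^T Q = I.
  assert np.allclose(Q.T @ Q, np.eye(2))

  # Hence the set is linearly independent: the columns have full rank.
  assert np.linalg.matrix_rank(Q) == 2
  ```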

  4. If W is a subspace of V, then a vector u is called orthogonal to W if it is orthogonal to every vector in W.  The set of all such u is called the orthogonal complement of W.

  5. If W is a subspace of V, then its orthogonal complement is also a subspace of V, and the intersection of W with its orthogonal complement is {0}.

  6. Every vector in R^n can be written uniquely as w + u, where w is in W and u is in the orthogonal complement of W.
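
  Concretely, the two pieces can be computed with an orthogonal projection onto W (a sketch; the subspace W and the vector v below are made up for illustration):

  ```python
  import numpy as np

  # W = span of the columns of B, a hypothetical subspace of R^3.
  B = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0]])
  v = np.array([1.0, 2.0, 3.0])

  # Orthogonal projection onto W: P = B (B^T B)^{-1} B^T.
  P = B @ np.linalg.solve(B.T @ B, B.T)
  w = P @ v          # component in W
  u = v - w          # component in the orthogonal complement of W

  assert np.allclose(w + u, v)     # v decomposes as w + u
  assert np.allclose(B.T @ u, 0)   # u is orthogonal to every vector in W
  ```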

  7. The orthogonal complement of the orthogonal complement of W is W.

  8. The null space of a matrix A is the orthogonal complement of the row space of A.   

  9. The null space of A^T is the orthogonal complement of the column space of A.
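
  Both 8 and 9 can be checked numerically; here the null space bases come from the SVD (a small made-up rank-1 example):

  ```python
  import numpy as np

  A = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0]])   # rank 1

  # Rows of Vt beyond rank(A) form a basis for the null space of A.
  U, s, Vt = np.linalg.svd(A)
  rank = int(np.sum(s > 1e-10))
  null_A = Vt[rank:]

  # The null space of A is orthogonal to every row of A.
  assert np.allclose(A @ null_A.T, 0)

  # Likewise, the null space of A^T is orthogonal to every column of A.
  U2, s2, Vt2 = np.linalg.svd(A.T)
  rank2 = int(np.sum(s2 > 1e-10))
  null_AT = Vt2[rank2:]
  assert np.allclose(A.T @ null_AT.T, 0)
  ```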

  10. If Av = λv for some nonzero vector v, then λ is called an eigenvalue of A and v is a corresponding eigenvector of A.

  11. det(A - λI) is called the characteristic polynomial of A, and its roots are the eigenvalues of A.

  12. A is nonsingular if and only if 0 is not an eigenvalue of A.
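
  Items 11 and 12 together, in NumPy (np.poly returns the coefficients of the characteristic polynomial of a square matrix; the matrix below is chosen arbitrarily):

  ```python
  import numpy as np

  A = np.array([[2.0, 1.0],
                [1.0, 2.0]])

  # Characteristic polynomial: lambda^2 - 4*lambda + 3, coefficients [1, -4, 3].
  coeffs = np.poly(A)

  # Its roots are exactly the eigenvalues of A (here 1 and 3).
  roots = np.sort(np.roots(coeffs))
  eigs = np.sort(np.linalg.eigvals(A))
  assert np.allclose(roots, eigs)

  # 0 is not an eigenvalue, so A is nonsingular.
  assert not np.any(np.isclose(eigs, 0.0))
  assert abs(np.linalg.det(A)) > 1e-12
  ```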

  13. A is similar to B if there is a nonsingular matrix P with B = P^{-1}AP.

  14. A is called diagonalizable if it is similar to a diagonal matrix.

  15. An n x n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.  In particular, if all the roots of its characteristic polynomial are real and distinct, then it is diagonalizable; the converse is false.
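
  For instance (a sketch; the matrix below has the distinct real eigenvalues 2 and 5, so it is diagonalizable):

  ```python
  import numpy as np

  A = np.array([[4.0, 1.0],
                [2.0, 3.0]])   # characteristic polynomial l^2 - 7l + 10

  vals, P = np.linalg.eig(A)   # columns of P: n linearly independent eigenvectors
  D = np.linalg.inv(P) @ A @ P # A is similar to a diagonal matrix via P
  assert np.allclose(D, np.diag(vals))
  ```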

  16. The eigenvalues of a symmetric matrix are all real.

  17. If A is symmetric, then the eigenvectors that belong to distinct eigenvalues are orthogonal.
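
  Items 16 and 17 on a small symmetric example (np.linalg.eigh is NumPy's symmetric eigensolver and returns real eigenvalues):

  ```python
  import numpy as np

  A = np.array([[2.0, 1.0],
                [1.0, 2.0]])       # symmetric; eigenvalues 1 and 3

  vals, vecs = np.linalg.eigh(A)   # eigh assumes a symmetric matrix
  assert np.isrealobj(vals)        # the eigenvalues are all real

  # Eigenvectors belonging to the distinct eigenvalues 1 and 3 are orthogonal.
  assert np.isclose(vecs[:, 0] @ vecs[:, 1], 0.0)
  ```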

  18. A matrix A is called orthogonal if A^{-1} = A^T.

  19. A matrix is orthogonal if and only if its rows (columns) form an orthonormal set of vectors.

  20. A is symmetric if and only if  it is orthogonally diagonalizable.
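
  Items 18-20 together (a sketch; the rotation matrix and the symmetric matrix below are arbitrary examples):

  ```python
  import numpy as np

  # A rotation matrix is orthogonal: Q^{-1} = Q^T, and its columns are orthonormal.
  t = 0.3
  Q = np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])
  assert np.allclose(Q.T, np.linalg.inv(Q))
  assert np.allclose(Q.T @ Q, np.eye(2))

  # A symmetric matrix is orthogonally diagonalizable: A = V D V^T, V orthogonal.
  A = np.array([[2.0, 1.0],
                [1.0, 2.0]])
  vals, V = np.linalg.eigh(A)   # for symmetric A, eigh returns an orthogonal V
  assert np.allclose(V @ np.diag(vals) @ V.T, A)
  ```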

  21. If L is a linear transformation, then L(c1v1 + ... + cnvn) = c1L(v1) + ... + cnL(vn)

  22. If L is a linear transformation, then L(0) = 0 and L(u - v) = L(u) - L(v).

  23. A linear transformation is completely determined by its values on a basis.

  24. A linear transformation is called 1-1 if L(u) = L(v) implies that u = v.

  25. The kernel of a linear transformation is the set of v with L(v) = 0.

  26. The kernel of a linear transformation is a subspace of its domain.

  27. A linear transformation is 1-1 if and only if its kernel is {0}.

  28. The set of all L(v) is called the range of L. 

  29. The range of a linear transformation is a subspace.

  30. If the domain of a linear transformation L is the finite-dimensional vector space V, then
    dim(ker L) + dim(range L) = dim V.
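
  For L(x) = Ax, dim(range L) is the rank of A, and a kernel basis can be read off the SVD (a made-up 2 x 3 example, so the domain is R^3):

  ```python
  import numpy as np

  A = np.array([[1.0, 2.0, 0.0],
                [0.0, 0.0, 1.0]])   # L: R^3 -> R^2

  U, s, Vt = np.linalg.svd(A)
  rank = int(np.sum(s > 1e-10))     # dim(range L)
  kernel_basis = Vt[rank:]          # rows span ker L, so dim(ker L) = len(kernel_basis)

  # dim(ker L) + dim(range L) = dim of the domain.
  assert rank + len(kernel_basis) == A.shape[1]
  ```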

  31. If L: V -> W is a linear transformation and V and W are finite-dimensional with dim V = dim W, then L is 1-1 if and only if L is onto.

  32. If L(x) = Ax, then A is diagonalizable if and only if A has n linearly independent eigenvectors, in which case the matrix of L with respect to a basis of eigenvectors is diagonal.