The title of the book sounds a bit mysterious. Why should anyone read this book if it presents the subject in the wrong way? What, exactly, is done "wrong" in the book?
Before answering these questions, let me first describe the target audience of this text. This book appeared as lecture notes for the course "Honors Linear Algebra". It is supposed to be a first linear algebra course for mathematically advanced students. It is intended for a student who, while not yet very familiar with abstract reasoning, is willing to study more rigorous mathematics than is presented in a "cookbook-style" calculus-type course. Besides being a first course in linear algebra, it is also supposed to be a first course introducing the student to rigorous proofs and formal definitions; in short, to the style of modern theoretical (abstract) mathematics.
The target audience explains the book's very specific blend of elementary ideas and concrete examples, as usually presented in introductory linear algebra texts, with more abstract definitions and constructions typical of advanced books.
Another distinctive feature of the book is that it is not written by or for an algebraist. So I tried to emphasize the topics that are important for analysis, geometry, probability, etc., and did not include some traditional topics. For example, I consider only vector spaces over the fields of real or complex numbers. Linear spaces over other fields are not considered at all, since I feel the time required to introduce and explain abstract fields would be better spent on more classical topics, which will be required in other disciplines. Later, when the students study general fields in an abstract algebra course, they will understand that many of the constructions studied in this book also work for general fields.
Also, I treat only finite-dimensional spaces in this book, and a basis always means a finite basis. The reason is that it is impossible to say anything non-trivial about infinite-dimensional spaces without introducing convergence, norms, completeness, etc., i.e., the basics of functional analysis; and that is definitely a subject for a separate course (text). So I do not consider infinite Hamel bases here: they are not needed in most applications to analysis and geometry, and I feel they belong in an abstract algebra course.
Conditions of Use
This book is licensed under a Creative Commons License (CC BY-NC-SA). You can download the ebook Linear Algebra Done Wrong for free.
- Title: Linear Algebra Done Wrong
- Author(s): Sergei Treil
- Published: 2021-01-11
- Edition: 1
- Format: eBook (PDF, ePub, Mobi)
- Pages: 286
- Language: English
- ISBN-13: 9781312407183
- License: CC BY-NC-SA
- Book Homepage: Free eBook, Errata, Code, Solutions, etc.
Table of Contents

Preface
Notes for the instructor

Chapter 1. Basic Notions
- 1. Vector spaces. 1.1. Examples. 1.2. Matrix notation. Exercises.
- 2. Linear combinations, bases. 2.1. Generating and linearly independent systems. Exercises.
- 3. Linear transformations. Matrix–vector multiplication. 3.1. Examples. 3.2. Linear transformations F^n -> F^m. Matrix–column multiplication. 3.3. Linear transformations and generating sets. 3.4. Conclusions. Exercises.
- 4. Linear transformations as a vector space.
- 5. Composition of linear transformations and matrix multiplication. 5.1. Definition of the matrix multiplication. 5.2. Motivation: composition of linear transformations. 5.3. Properties of matrix multiplication. 5.4. Transposed matrices and multiplication. 5.5. Trace and matrix multiplication. Exercises.
- 6. Invertible transformations and matrices. Isomorphisms. 6.1. Identity transformation and identity matrix. 6.2. Invertible transformations. Examples. 6.2.1. Properties of the inverse transformation. 6.3. Isomorphism. Isomorphic spaces. Examples. 6.4. Invertibility and equations. Exercises.
- 7. Subspaces. Exercises.
- 8. Application to computer graphics. 8.1. 2-dimensional manipulation. 8.2. 3-dimensional graphics. Exercises.

Chapter 2. Systems of linear equations
- 1. Different faces of linear systems.
- 2. Solution of a linear system. Echelon and reduced echelon forms. 2.1. Row operations. 2.1.1. Row operations and multiplication by elementary matrices. 2.2. Row reduction. 2.2.1. An example of row reduction. 2.3. Echelon form. Exercises.
- 3. Analyzing the pivots. 3.1. Corollaries about linear independence and bases. Dimension. 3.2. Corollaries about invertible matrices. Exercises.
- 4. Finding the inverse of A by row reduction. An example. Exercises.
- 5. Dimension. Finite-dimensional spaces. 5.1. Completing a linearly independent system to a basis. 5.2. Subspaces of finite-dimensional spaces. Exercises.
- 6. General solution of a linear system. Exercises.
- 7. Fundamental subspaces of a matrix. Rank. 7.1. Computing fundamental subspaces and rank. 7.2. Explanation of computing bases in the fundamental subspaces. 7.2.1. The null space Ker A. 7.2.2. The column space Ran A. 7.2.3. The row space Ran A^T. 7.3. The Rank Theorem. Dimensions of fundamental subspaces. 7.4. Completion of a linearly independent system to a basis. Exercises.
- 8. Representation of a linear transformation in arbitrary bases. Change of coordinates formula. 8.1. Coordinate vector. 8.2. Matrix of a linear transformation. 8.3. Change of coordinate matrix. 8.3.1. An example: change of coordinates from the standard basis. 8.3.2. An example: going through the standard basis. 8.4. Matrix of a transformation and change of coordinates. 8.5. Case of one basis: similar matrices. Exercises.

Chapter 3. Determinants
- 1. Introduction.
- 2. What properties the determinant should have. 2.1. Linearity in each argument. 2.2. Preservation under "column replacement". 2.3. Antisymmetry. 2.4. Normalization.
- 3. Constructing the determinant. 3.1. Basic properties. 3.2. Properties of the determinant deduced from the basic properties. 3.3. Determinants of diagonal and triangular matrices. 3.4. Computing the determinant. 3.5. Determinants of a transpose and of a product. Determinants of elementary matrices. 3.6. Summary of properties of the determinant. Exercises.
- 4. Formal definition. Existence and uniqueness of the determinant. Exercises.
- 5. Cofactor expansion. 5.1. Cofactor formula for the inverse matrix. 5.2. Some applications of the cofactor formula for the inverse. Exercises.
- 6. Minors and rank.
- 7. Review exercises for Chapter 3.

Chapter 4. Introduction to spectral theory (eigenvalues and eigenvectors)
- 1. Main definitions. 1.1. Eigenvalues, eigenvectors, spectrum. 1.2. Finding eigenvalues: characteristic polynomials. 1.3. Finding the characteristic polynomial and eigenvalues of an abstract operator. 1.4. Complex vs. real spaces. 1.5. Multiplicities of eigenvalues. 1.6. Trace and determinant. 1.7. Eigenvalues of a triangular matrix. Exercises.
- 2. Diagonalization. 2.1. Preliminaries. 2.2. Some motivations: functions of operators. 2.3. The case of n distinct eigenvalues. 2.4. Bases of subspaces (AKA direct sums of subspaces). 2.5. Criterion of diagonalizability. 2.6. Real factorization. 2.7. Some examples. 2.7.1. Real eigenvalues. 2.7.2. Complex eigenvalues. 2.7.3. A non-diagonalizable matrix. Exercises.

Chapter 5. Inner product spaces
- 1. Inner product in R^n and C^n. Inner product spaces. 1.1. Inner product and norm in R^n. 1.2. Inner product and norm in C^n. 1.3. Inner product spaces. 1.3.1. Examples. 1.4. Properties of inner product. 1.5. Norm. Normed spaces. Exercises.
- 2. Orthogonality. Orthogonal and orthonormal bases. 2.1. Orthogonal and orthonormal bases. Exercises.
- 3. Orthogonal projection and Gram–Schmidt orthogonalization. 3.1. Gram–Schmidt orthogonalization algorithm. 3.2. An example. 3.3. Orthogonal complement. Decomposition X = E ⊕ E^⊥. Exercises.
- 4. Least square solution. Formula for the orthogonal projection. 4.1. Least square solution. 4.1.1. Geometric approach. 4.1.2. Normal equation. 4.2. Formula for the orthogonal projection. 4.3. An example: line fitting. 4.3.1. An example. 4.4. Other examples: curves and planes. 4.4.1. An example: curve fitting. 4.4.2. Plane fitting. Exercises.
- 5. Adjoint of a linear transformation. Fundamental subspaces revisited. 5.1. Adjoint matrices and adjoint operators. 5.1.1. Uniqueness of the adjoint. 5.1.2. Adjoint transformation in abstract setting. 5.1.3. Useful formulas. 5.2. Relation between fundamental subspaces. 5.3. The "essential" part of a linear transformation. Exercises.
- 6. Isometries and unitary operators. Unitary and orthogonal matrices. 6.1. Main definitions. 6.2. Examples. 6.3. Properties of unitary operators. 6.4. Unitarily equivalent operators. Exercises.
- 7. Rigid motions in R^n. Exercises.
- 8. Complexification and decomplexification. 8.1. Decomplexification. 8.1.1. Decomplexification of a vector space. 8.1.2. Decomplexification of an inner product. 8.2. Complexification. 8.3. Introducing complex structure to a real space. 8.3.1. An elementary way to introduce a complex structure. 8.3.2. From elementary to abstract construction of complex structure. 8.3.3. An abstract construction of complex structure. 8.3.4. The abstract construction via the elementary one. Exercises.

Chapter 6. Structure of operators in inner product spaces
- 1. Upper triangular (Schur) representation of an operator. Exercises.
- 2. Spectral theorem for self-adjoint and normal operators. Exercises.
- 3. Polar and singular value decompositions. 3.1. Positive definite operators. Square roots. 3.2. Modulus of an operator. Singular values. 3.3. Singular values. Schmidt decomposition. 3.4. Matrix representation of the Schmidt decomposition. Singular value decomposition. 3.4.1. From singular value decomposition to the polar decomposition. Exercises.
- 4. Applications of the singular value decomposition. 4.1. Image of the unit ball. 4.2. Operator norm of a linear transformation. 4.3. Condition number of a matrix. 4.4. Effective rank of a matrix. 4.5. Moore–Penrose (pseudo)inverse. Exercises.
- 5. Structure of orthogonal matrices.
- 6. Orientation. 6.1. Motivation. 6.2. Formal definition. 6.3. Continuous transformations of bases and orientation. Exercises.

Chapter 7. Bilinear and quadratic forms
- 1. Main definition. 1.1. Bilinear forms on R^n. 1.2. Quadratic forms on R^n. 1.3. Quadratic forms on C^n. Exercises.
- 2. Diagonalization of quadratic forms. 2.1. Orthogonal diagonalization. 2.2. Non-orthogonal diagonalization. 2.2.1. Diagonalization by completion of squares. 2.2.2. Diagonalization using row/column operations. Exercises.
- 3. Sylvester's Law of Inertia.
- 4. Positive definite forms. Minimax characterization of eigenvalues and Sylvester's criterion of positivity. 4.1. Sylvester's criterion of positivity. 4.2. Minimax characterization of eigenvalues. 4.3. Some remarks. Exercises.
- 5. Positive definite forms and inner products.

Chapter 8. Dual spaces and tensors
- 1. Dual spaces. 1.1. Linear functionals and the dual space. Change of coordinates in the dual space. 1.1.1. Change of coordinates formula. 1.1.2. A uniqueness theorem. 1.2. Second dual. 1.3. Dual, a.k.a. biorthogonal, bases. 1.3.1. Abstract non-orthogonal Fourier decomposition. 1.4. Examples of dual systems. 1.4.1. Taylor formula. 1.4.2. Lagrange interpolation. Exercises.
- 2. Dual of an inner product space. 2.1. Riesz representation theorem. 2.2. Is an inner product space a dual to itself? 2.3. Biorthogonal systems and orthonormal bases.
- 3. Adjoint (dual) transformations and transpose. Fundamental subspaces revisited (once more). 3.1. Dual (adjoint) transformation. 3.1.1. Dual transformation for the case A: F^n -> F^m. 3.1.2. Dual transformation in the abstract setting. 3.1.3. A coordinate-free way to define the dual transformation. 3.2. Annihilators and relations between fundamental subspaces. Exercises.
- 4. What is the difference between a space and its dual? 4.1. Isomorphisms between X and X'. 4.2. An example: velocities (differential operators) and differential forms as vectors and linear functionals. 4.2.1. Velocities as vectors. 4.2.2. Differential forms as linear functionals (covectors). 4.2.3. Differential operators as vectors. 4.3. The case of a real inner product space. 4.3.1. Einstein notation, metric tensor. 4.3.2. Covariant and contravariant coordinates. Lowering and raising the indices. 4.4. Conclusions. Exercises.
- 5. Multilinear functions. Tensors. 5.1. Multilinear functions. 5.1.1. Multilinear functions form a vector space. 5.1.2. Dimension of L(V1, V2, ..., Vp; V). 5.2. Tensor products. 5.2.1. Lifting a multilinear function to a linear transformation on the tensor product. 5.2.2. Dual of a tensor product. 5.3. Covariant and contravariant tensors. 5.3.1. Linear transformations as tensors. 5.3.2. Polylinear transformations as tensors. Exercises.
- 6. Change of coordinates formula for tensors. 6.1. Coordinate representation of a tensor. 6.2. Change of coordinates formulas in Einstein notation. 6.3. Change of coordinates formula for tensors.

Chapter 9. Advanced spectral theory
- 1. Cayley–Hamilton Theorem. Exercises.
- 2. Spectral Mapping Theorem. 2.1. Polynomials of operators. 2.2. Spectral Mapping Theorem. Exercises.
- 3. Generalized eigenspaces. Geometric meaning of algebraic multiplicity. 3.1. Invariant subspaces. 3.2. Generalized eigenspaces. 3.3. Geometric meaning of algebraic multiplicity. 3.4. An important application.
- 4. Structure of nilpotent operators. 4.1. Cycles of generalized eigenvectors. 4.2. Jordan canonical form of a nilpotent operator. 4.3. Dot diagrams. Uniqueness of the Jordan canonical form. 4.4. Computing a Jordan canonical basis.
- 5. Jordan decomposition theorem. 5.1. Remarks about computing Jordan canonical basis.

Index