545f21topics
============
These are topics, notes and problems for Rob Kusner's Fall 2021 course
on (Advanced) Linear Algebra for (Pure and) Applied Math (Math 545 at
UMass Amherst, Copyleft* 2021-?? by Rob Kusner), and are always under
revision. [Rob may eventually prepare some .tex/.pdf Problem Sets, and
revise a .tex/.pdf/.html version of this page. Since we began on a
Wednesday (and since Wöden is the greatest! :-), I'll use the Wednesday
date to specify the week.]
Homework is assigned every other week and is due in class on the
Monday a dozen days following (unless otherwise noted). Take-home
quiz problems will be suggested in class, polished up in email
messages to you, and turned in at class the next Wednesday.
Also, I may update older problems and notes, so you are welcome to
look backward as well as forward!
Our texts are Strang's _Linear Algebra and its Applications_ (4e) and
Axler's _Linear Algebra Done Right_ (3e) – the letters S and A will be
used to reference them (e.g. as in the first hw assignment).
01 September
==
Hello world!
Get copies of the texts by whatever (legitimate) means you can, and
review what you may have learned in Math 235 (or another linear
algebra course) by reading (or even studying ;-) the first chapters of
Strang (S) and Axler (A)
You may also enjoy watching videos we made for Math 235 last year:
http://www.gang.umass.edu/~kusner/class/235videolinks.html
whose topics follow Lay's _Linear Algebra..._ text – the videos for
Lay's chapters 1, 2, 3 & 4 are the most relevant....
08 September
==
Solving systems, row operations, and the LU/LDU/PLDU factorizations!
Study S chapter 1 and keep watching those Math 235 videos!
Quiz #1 (the take-home variety)
====
1) Write up carefully the LDU factorization for my favorite matrix
        [ 1 2 3 ]
    A = [ 4 5 6 ]
        [ 7 8 9 ]
where the pivots in L and U are leading 1s, and where D is a diagonal
matrix; compare this to the factorization A=ER where R=rref(A).
2) Try doing the same thing for my matrix
        [ 0 1 2 ]
    B = [ 3 4 5 ]
        [ 6 7 8 ]
– is there an LDU factorization, and if not, why?
As we discussed in class, B has a PLDU factorization, where P is
a permutation matrix: find P, as well as this factorization!
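If you'd like to check your hand computations for 1) and 2), here is a small pure-Python sketch (my own helper, not from the texts) of the elimination that produces L, D and U; it assumes no row exchanges are needed, and raises an error when one is, signaling the P in PLDU:

```python
# A pure-Python check (my own helper, not from the texts) of LDU via
# elimination without row exchanges; it raises when a row exchange
# (i.e. a PLDU factorization) is required.

def ldu(A):
    """Factor A = L D U: unit lower-triangular L, diagonal D, unit upper U."""
    n = len(A)
    U = [row[:] for row in A]                 # working copy -> echelon form
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            if U[k][k] != 0:
                m = U[i][k] / U[k][k]         # multiplier goes into L
            elif U[i][k] == 0:
                m = 0.0                       # nothing below a zero pivot
            else:
                raise ValueError("row exchange needed: use PLDU instead")
            L[i][k] = m
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
    D = [[U[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):                        # make the pivots of U leading 1s
        U[i] = [U[i][j] / U[i][i] if U[i][i] != 0 else float(i == j)
                for j in range(n)]
    return L, D, U

L, D, U = ldu([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# L = [[1,0,0],[4,1,0],[7,2,1]], D = diag(1,-3,0), U = [[1,2,3],[0,1,2],[0,0,1]]
```

Running it on B above raises the ValueError, since the 0 in the pivot position has a nonzero entry below it.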
Challenge Problem: Some of you may remember the Gram-Schmidt process
and the way it leads to a unique factorization of any invertible real
matrix M into a product KAN, where K is an orthogonal matrix (KK* = I,
with K* the transpose of K), where A is a diagonal matrix with
positive entries, and where N is upper triangular with 1s on its
diagonal (please see the last of my YouTube videos
http://www.gang.umass.edu/~kusner/class/235videolinks.html
on the QR and KAN factorizations). Can you reconcile the PLDU and the
KAN factorizations? [I'm still trying to reconcile these in the 2x2
case; the 3x3 and higher cases are even harder! ;-]
15 September
==
Vector spaces: axioms, "sorites" and examples.
Study A chapter 1 and S chapter 2.
Quiz #2 (to carefully write up and turn in next Wednesday)
====
These general basic facts about vector spaces follow immediately from
the 8 axioms. (I order them differently from Axler and Strang, labeling
them A1-4 and S1-4: the first 4 are about vector addition (monoid >
semigroup > group > abelian group) and the next 4 about scalar
multiplication; the last of these, S4, is that 1v=v for any vector v,
and without it the remaining axioms would still hold if we set sv=O
for every scalar s in F.)
(1) Prove that 0v=O where v is any vector in a vector space V over a
field F (here 0 is the number zero in F, and O is the zero vector in
V). [Indicate which axioms you use.]
(2) Prove that (-1)v = –v where v is any vector in a vector space V
over a field F (here -1 is a number in F, and –v is the additive
inverse or "opposite" vector of v in V). [Indicate which axioms....]
(Extra Credit) Let F=\F_2={0,1} be the field with two elements, and
let V be a vector space over F. What vector is v+v for any v in V?
(Extra Extra Credit) Besides what's noted above, is there any other
way axiom S4 can fail (assuming the rest of the axioms still hold)?
[And any other E...EC problems I mentioned in class! I only recall
them both to be "geometric" in some way: one involved fixed points
under translation (adding a given vector to an arbitrary vector), and
the other involved rescaling.... Only try them if you can remember
them, or figure out on your own what made them interesting – that's
known as "the Moore method" of learning mathematics: you not only
solve problems, you figure out what the problems are that are worth
solving – that's how mathematics (and most of science and other
intellectual fields) evolve! There are no "assigned problems"...]
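As a model of the axiom-by-axiom style these write-ups call for, here is how the related fact sO = O (a scalar times the zero vector) follows; this one is not among the assigned problems, and the axiom labels follow the A1-4/S1-4 scheme above:

```latex
\begin{align*}
sO &= s(O + O) && \text{(A3: $O$ is the additive identity)}\\
   &= sO + sO  && \text{(distributivity of scalar mult.\ over vector addition)}\\
\intertext{now add $-(sO)$ to both sides, using A2 (associativity) and A4 (inverses):}
O &= (sO + sO) + (-(sO)) = sO + \bigl(sO + (-(sO))\bigr) = sO + O = sO.
\end{align*}
```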
22 September
==
Vector spaces: more examples; subspaces and their sums.
Study A chapters 1 & 2 and S chapter 2.
Now that you all have access to Strang's text, please try these dozen
problems from the various sections of S chapter 1 for your first HW
(these should be "review" for most of you, so I expect you to turn
them in next week):
S 1.4#19 (but *prove* the rule, using the distributive law for matrix
multiplication – the 2x2 example is answered at the back of the text)
S 1.4#38
S 1.4#42
S 1.5#14
S 1.5#28
S 1.5#32
S 1.5#40 (but you should write down some of these 4x4 matrices and
multiply them – in the correct order! – to see that their product
represents composition of the corresponding permutations)
S 1.6#2
S 1.6#28
S 1.6#40
S 1.6#52
S 1.6#58
29 September
==
Vector spaces: more subspaces, combos, spans, (in)dependence, bases.
Study A chapters 2 & 3 and S chapter 2.
In class we argued that a maximal independent set B in a vector space
V also spans V (that is, span(B)=V), and so B is a basis for V.
Extra credit quiz problem (think about this for Quiz#3 next week – the
answer is unclear when V is not finitely generated): Can you prove the
"dual statement" that a minimal spanning set is independent...?
06 October
==
Bases and coordinates; linear maps; the matrix for a linear map.
Study A chapters 2 & 3 and S chapter 2.
Quiz #3 (to carefully write up and turn in next Wednesday)
====
On Monday we remarked on this "geometric" fact: a map
L: V –> W
between vector spaces V and W (over a field F) is linear iff graph(L)
is a subspace of the product vector space V\timesW. (Recall that
graph(L):={(v,L(v))\in V\timesW}, i.e. all input-output pairs.)
(1) Prove this "geometric fact" (using formal definitions: a subspace
is a nonempty set that is closed under vector addition and scalar
multiplication; a linear map respects vector addition and scalar...)!
We drew a picture on the wall to convince ourselves of this when
V=W=F=\R: in that case, L is just multiplication by a scalar (the
entry in the 1\times1 [L], if you pick the standard basis vector 1 for
both V=\R and W=\R), and this scalar is just the "slope" of the line
graph(L) in \R^2).
You were then challenged to imagine what happens in higher dimensions.
If for example V=W=F^2=\R^2, what's meant by the "slope" of graph(L)?
In some sense, the "slope" is the linear map L itself; but if we pick
bases {v_1, v_2} for V, and {w_1, w_2} for W, then the entries in
the 2\times2 matrix [L] give the slopes in the basic directions....
(2) Explore this "2\times2 case" for some simple examples, like
    L = sI = [ s 0 ]
             [ 0 s ]
for s=0 or s=1, and perhaps for
    L = J = [ 0 -1 ]  = [π/2 rotation]
            [ 1  0 ]
using the standard basis {e_1, e_2} for both domain V and co-domain W.
By splitting W=\Re_1\osum\Re_2, corresponding to the rows of the matrix
[L], sketch a pair of pictures in \R^3 for graph(L) – one is a plane
in V\times\Re_1=\R^3, the other in V\times\Re_2=\R^3 – and this is how
to "see" the plane graph(L) in \R^4=V\timesW=V\times(\Re_1\osum\Re_2).
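Here is a small numeric illustration (my own sketch, using the π/2-rotation J from (2)) of the "geometric fact": combinations of input-output pairs of a linear map are again input-output pairs.

```python
# A numeric check (my own sketch) that graph(L) is closed under linear
# combinations when L is linear, using the pi/2-rotation J on R^2.

def J(v):                      # rotation by pi/2 on R^2
    x, y = v
    return (-y, x)

def graph_point(v):            # the point (v, J(v)) of graph(J) in R^4
    return v + J(v)            # tuple concatenation

p = graph_point((1, 0))        # (1, 0, 0, 1)
q = graph_point((0, 1))        # (0, 1, -1, 0)
combo = tuple(2 * a + 3 * b for a, b in zip(p, q))   # 2p + 3q in R^4
assert combo == graph_point((2, 3))   # still an input-output pair of J
```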
(EC) [Also try the problem from last week!]
13 October
==
The vector space Hom(V,W) of all linear maps L: V \to W; bases B and C
for V and W (almost) give a basis CB for Hom(V,W); and coordinates
with respect to CB give the matrix [L]_CB for L.
Special case: the dual space V*:=Hom(V,F) of V; a basis B for V yields
a "dual basis" B* for V* (defined by b*(b)=1 or 0); B* is a linearly
independent set, but B* might not span V* (so the dual space V* is
"bigger" than V in general); e.g. if V=F[t], the space of polynomials,
then the evaluation map E_1: V \to F summing the coefficients of p is
in the dual space V* but E_1 is not a finite combination of duals to
the standard basis {1,t,...,t^d,...}. (Note: E_1(p_0+p_1t+...+p_dt^d)
= p_0+p_1+...+p_d, so its matrix [E_1]=[1 1 1 1 1 ... 1 ...].)
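A concrete rendering of this example (my own sketch), storing a polynomial in V=F[t] as its finite coefficient list:

```python
# The evaluation functional E_1 and the dual-basis functionals on F[t],
# with a polynomial stored as its finite coefficient list (my own sketch).

def E1(p):
    """E_1(p_0 + p_1 t + ... + p_d t^d) = p_0 + p_1 + ... + p_d."""
    return sum(p)

def dual_basis(k):
    """(t^k)*: the dual-basis functional picking out the coefficient of t^k."""
    return lambda p: p[k] if k < len(p) else 0

assert E1([1, 2, 3]) == 6            # p(t) = 1 + 2t + 3t^2 at t = 1
assert dual_basis(2)([1, 2, 3]) == 3
# E1 agrees with 1* + t* + (t^2)* on this particular p, but no *finite*
# sum of dual-basis functionals agrees with E1 on all of F[t].
```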
Study A chapters 2 & 3 and S chapter 2.
Some textbook problems for next-next week (before the midterm exam):
A 1.B #6
A 1.C #10, 20, 24
A 2.A #10, 16
A 2.B #6, 8
A 2.C #4, 12, 15
[I DON'T EXPECT YOU TO DO ALL OF THESE, BUT PLEASE THINK ABOUT THEM!]
20 October
==
Subspaces associated to a linear map L: V \to W; the nullspace or
kernel ker(L), and the range or image im(L). L is injective (1-to-1,
left-invertible) iff ker(L)=0_V, surjective (onto, right-invertible)
iff im(L)=W, and bijective (invertible) iff both. [Axler uses the
notation null(L) and range(L), but ker(L) and im(L) are standard in
algebra, where they are also used for homomorphisms between groups.]
Going the other way is the dual map L*: W* \to V* – how to define it,
and how does it relate to the transpose of a matrix representing L?
[More on dual spaces in Axler 3.F – we may return to this later!]
Study A chapters 2 & 3 and S chapter 2.
More textbook problems for next week (before the midterm exam):
S 2.1 #7, 26, 28
S 2.1 #24, 28
S 2.3 #2, 8, 20, 22, 26, 32
A 3.A #7, 10, 11, 12
A 3.B #2, 3, 5, 6 [hint: 3.22], 12 [hint: AX=AX'=B then A(X-X')=0]
[AGAIN, I DON'T EXPECT YOU TO DO ALL OF THESE, BUT PLEASE THINK...!]
27 October
==
Midterm Exam (in class: 2:30PM, Goessmann 152)
======= ====
Topics from S chapters 1 & 2 and A chapters 1, 2 & 3 – I may suggest
some particular things in advance, so if you think about them enough
and work out their features, you'll be ready for the exam itself!
03 November
==
Students present their solutions to the 7 Midterm problems and we
critically discuss them (purposely ambiguous pronoun antecedent here).
We implicitly encountered projections in one of the problems (about
symmetric and anti-symmetric matrices) and how these lead to a direct
sum decomposition of a vector space. That will be a recurring theme
for the rest of the semester....
10 November
==
Study A chapters 3, 4 & 5 and S chapter 2, 3 & 5.
More on the dual space V* of a vector space V. An inner-product <.,.>
on V gives a natural linear map V \to V* via v \to <v,.>; when V is
finite dimensional (more generally, for Hilbert spaces) it's an
isomorphism.
In concrete cases like V=\R^n and \R^{m\times n}, the dual space is
obtained by transpose – taking columns to rows (and for the field \C
instead of \R, also complex-conjugating each entry) – followed by
taking the trace (A \to A* = conjugate-transpose of A), corresponding
to the inner-product <A,B> = trace(A*B).
Discuss connection to "least squares solutions" of the linear system
AX = B when B is NOT in im(A) – so there's no actual solution! – by
solving the equation A*AX = A*B.
Quiz #4 (to carefully write up and submit in class next Wed 17 Nov):
====
Explore this for the 2 \times 2 example we did in class:
    A = [ 1 3 ]  and  B = [ 1 ]
        [ 2 6 ]           [ 1 ]
How do the least squares solutions compare geometrically with the
solution to AX=0, i.e. with ker(A)?
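For setting up (not solving!) the normal equations numerically, here is a pure-Python sketch with my own small helpers; comparing the solution set with ker(A) geometrically is the quiz.

```python
# Setting up the normal equations A*AX = A*B for the in-class example,
# using plain lists (my own helpers); in the real case A* is the transpose.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 3], [2, 6]]
B = [[1], [1]]
AtA = matmul(transpose(A), A)     # [[5, 15], [15, 45]]
AtB = matmul(transpose(A), B)     # [[3], [9]]
print(AtA, AtB)                   # the system A*AX = A*B to analyze
```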
17 November
==
Study A chapters 5 & 6 and S chapters 2, 3 & 5.
We discussed the quadratic form Q associated to an inner-product <.,.>
defined by Q(v):=<v,v>, and how to recover <.,.> via polarization
<v,w> + <w,v> = (Q(v+w) – Q(v-w))/2
using a difference of squares to express a product – somebody tried to
patent this, even though it goes back roughly 5000 years to Babylon!
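A quick numeric sanity check (my own) of this polarization identity, using the standard dot product on \R^2:

```python
# Checking <v,w> + <w,v> = (Q(v+w) - Q(v-w))/2 for the standard dot
# product on R^2, with Q(v) = <v,v> (my own numeric sanity check).

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def Q(v):                          # quadratic form Q(v) = <v,v>
    return dot(v, v)

v, w = (1, 2), (3, 4)
plus = tuple(a + b for a, b in zip(v, w))    # v + w
minus = tuple(a - b for a, b in zip(v, w))   # v - w
lhs = dot(v, w) + dot(w, v)        # <v,w> + <w,v>
rhs = (Q(plus) - Q(minus)) / 2     # (Q(v+w) - Q(v-w))/2
assert lhs == rhs == 22
```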
Quiz #5 (to carefully write up and submit in class next Mon 22 Nov):
====
For a Hermitean inner-product on a \C-vector space, <v,w> = <w,v>*
(instead of <v,w> = <w,v>), and so in the polarization formula
<v,w> + <w,v> = 2\Re<v,w>
(twice the "real" part instead of 2<v,w>); can you find a way to express
the "imaginary" part (and thus "all of" <.,.>) using Q?
I wrote this to a student who couldn't make Wednesday's class:
Class was cut a bit short (a late start due to emergency exam
logistics for my other class): we discussed a bit more about dual
space V* of a vector space V, how a basis for V gives the dual basis
for V*,
how an inner product on V gives (at least for dim(V)<∞) an isomorphism
V~V* between those, how this is related to transpose of matrices and
so forth.
We also discussed the orthogonal complement W^\perp of a subspace W in V
(in V* this would correspond to the "annihilator" subspace, the linear
maps V \to F which vanish on W), and the direct sum decomposition
V=W\osum W^\perp.
You'll find this in Axler... [and I'll post some related problems from
there (and likely from Strang) to do after Thanksgiving break].
24 November
==
We'll have class Monday (please turn in Quiz #5), but not Wednesday!
Study A chapters 6 & 7 and S chapters 3 & 6.
[I'm loosely following Axler, so if you miss class, please read and
think about the sections from Axler that I've asked you to study! ;-]
01 December
==
Study A chapters 6 & 7 and S chapter 6.
In a finite dimensional inner product space V over F=\C (or over \R,
which is the subfield of \C fixed by complex conjugation), an ordered
orthonormal basis \B={u_1,...,u_n} gives an isomorphism V \to F^n via
coordinates (i.e. taking u_k in V to e_k in F^n) and turns the (Hermitean)
inner product on V into the standard one on F^n
<v,w> = [v]*[w] (technically it's trace([v]*[w]), but F^{1\by1}=F)
where [w] is the coordinates column vector and [v]* is the conjugate
transpose row vector – we'll use this isomorphism often from now on!
(Note that <v,w> = [v]*[w] = ([w]*[v])* = <w,v>* as in Quiz #5!)
The Hermitean conjugate A* of A: V \to V is (abstractly) defined to satisfy
<Av,w> = <v,A*w> (for any v,w in V)
and (concretely) their matrices with respect to the basis \B satisfy
[A*]=[A]*
(in other words, the matrix for the Hermitean conjugate is the
transpose matrix with all entries complex conjugated), so we'll abuse
notation and drop the square brackets [.] when the o.n. basis \B is given.
With all this in mind, define A in End(V)=Hom(V,V)=F^{n\by n} with
A=A* to be Hermitean (symmetric), and prove the basic theorem:
1) eigenvalues of A are all real numbers (\lambda \in \R)
2) eigenspaces of A belonging to distinct e'values are orthogonal.
This gives an (orthogonal) direct sum decomposition
V = \Operpsum E_k(A)
where E_k(A) = ker(A – \lambda_k I) is the \lambda_k-eigenspace.
Similarly, define
B is skew-Hermitian if B* = -B
U is unitary if U* = U^{-1} (i.e. U*U = UU* = I)
and more generally (all of the above fall into this case)
N is normal if N*N = NN*
and prove the analogous theorem that
V = \Operpsum E_k
but modified as follows in each case:
3) e'values of B are pure imaginary (\lambda \in i\R);
4) e'values of U are unitary (\lambda=e^{it} \in S^1);
any complex number can be an e'value for a normal N.
Properties 1,2,3,4 and the orthogonality follow directly from
the definitions, but to get the direct sum decomposition of V
we need to show that an eigenvector exists (just split off its
e'space from V and use induction on dimension) – how do we show
this?!
For Hermitean case we used the quadratic form Q_A defined by
Q_A(v) := <Av,v>
restricted to the sphere S of unit vectors {v : <v,v>=1} in V;
because S is closed and bounded in V(=F^n), the continuous Q_A
has a minimum; since Q_A is quadratic (and thus smooth), the
minimum is a critical point; and any critical point of Q_A on S
is a (unit-length) e'vector for A (the "Rayleigh-Ritz" method).
For general case, apply the Fundamental Theorem of Algebra –
any (nonconstant) complex polynomial has a complex root – to
the characteristic polynomial c_A(t) = det(A – tI) whose roots
are eigenvalues of A.
Note that in the 2\times2 case one could use the explicit formula for
the roots of quadratic polynomials, but for n≥5 there is in general
no such formula!
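Here is one worked 2\times2 instance (my own example, so not a substitute for your own quiz examples!) of parts 1) and 2): the Hermitean matrix below has characteristic polynomial det(A – tI) = (2-t)^2 - 1, hence real eigenvalues 1 and 3, and the eigenvectors belonging to these distinct eigenvalues are orthogonal.

```python
# One worked 2x2 Hermitean example (my own): real eigenvalues and
# orthogonal eigenvectors, checked directly with complex arithmetic.

A = [[2, 1j], [-1j, 2]]                 # Hermitean: A equals conj-transpose(A)

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def herm(u, v):                         # Hermitean inner product <u,v>
    return sum(a.conjugate() * b for a, b in zip(u, v))

v1, lam1 = [-1j, 1], 1                  # eigenpair found from det(A - tI)
v2, lam2 = [1j, 1], 3                   # eigenpair for the other root
assert matvec(A, v1) == [lam1 * x for x in v1]
assert matvec(A, v2) == [lam2 * x for x in v2]
assert herm(v1, v2) == 0                # distinct e'values -> orthogonal
```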
Quiz #6 (due in class Wed 8 Dec)
====
Give explicit (nontrivial!) 2\times2 examples for each of the four
types of matrices (above) and work out their corresponding e'stuff
(their e'values and e'vectors) to illustrate theorems above.
[Hint: for the normal case, work backward to find an example that's
not unitary, nor skew-Hermitean, nor Hermitean.]
08 December
==
Study A chapter 7 and S chapter 6.
In the case of 1\by1 matrices, we can identify the Hermitean matrices
with \R, the skew-Hermitians with i\R, the unitary matrices with S^1,
and the general ones with \C; we can think of their n\by n analogues: in
particular, the analogue of splitting a complex number into real and
imaginary parts (\C=\R+i\R) is the (orthogonal!) direct sum splitting
\C^{n\by n} = Herm(n) \osum Skew(n)
via M = (M+M*)/2 + (M-M*)/2
= H + S
if we regard these as \R-subspaces (they are NOT \C-subspaces).
Note that writing M=H+S means M is normal (MM*=M*M) iff H commutes
with S: this reflects an even more general fact that matrices are
"simultaneously" diagonalizable iff they commute with each other.
Note also that M*M is not merely Hermitean, but its eigenvalues are
all non-negative (here it's useful to avoid matrices and think instead
of maps from the inner-product space V=\C^n to itself; hint: if v≠0 &
M*Mv=tv, then 0 ≤ <Mv,Mv> = <v,M*Mv> = <v,tv> = t<v,v>, i.e. 0≤t since
<v,v> is positive).
Any non-negative Hermitean matrix H has a unique non-negative square-
root √H as follows: use the spectral theorem to write H=UDU* where D
is a non-negative diagonal matrix, and let √D be the diagonal matrix
with entries the non-negative square-roots of those in D; now define
√H:=U√DU* which satisfies √H√H=U√DU*U√DU*=U√D√DU*=UDU*=H as promised!
[Note: Axler calls such H and √H "positive (semi-definite)"!]
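A numeric check of this recipe (my own example): for H = [[2,1],[1,2]] the spectral theorem gives D = diag(1,3) with orthonormal eigenvectors (1,-1)/√2 and (1,1)/√2, and multiplying out U√DU* by hand gives the entries below.

```python
# Checking the square-root recipe on H = [[2,1],[1,2]] (my own example):
# H = UDU* with D = diag(1,3), so sqrt(H) = U diag(1, sqrt(3)) U*,
# whose entries, worked out by hand, are ((1±sqrt3)/2)-combinations.
import math

r = math.sqrt(3)
sqrtH = [[(1 + r) / 2, (r - 1) / 2],
         [(r - 1) / 2, (1 + r) / 2]]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

square = matmul(sqrtH, sqrtH)
H = [[2, 1], [1, 2]]
assert all(abs(square[i][j] - H[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```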
The singular value decomposition generalizes the spectral theorem to
the "rectangular" case of maps A:V\to X (both inner-product spaces),
first observing that A*A:V\to V and AA*:X\to X are each a non-negative
Hermitean matrix, and thus each has a "square-root" defined as above
(if V=\C^n and X=\C^m then A\in\C^{m\by n}, A*A\in\C^{n\by n} and
AA*\in\C^{m\by m}).
Then the singular value decomposition is the factorization A=USW*
where the unitary U\in\C^{m\by m} has columns the eigenvectors of AA*
(ordered with all of the positive eigenvalues first), where the
unitary W\in\C^{n\by n} has the eigenvectors of A*A as its columns
(same eigenvalue order, using the fact – proved using hint above again
– that AA* and A*A have the same positive eigenvalues!), and where the
rectangular S\in\R^{m\by n} is all 0 except for upper-left "diagonal"
entries which are the "singular values" of A: the positive square-root
of each positive eigenvalue of A*A (or AA*), again ordered the same!
[See Strang 331-332; curiously, Axler considers only the case V=X.]
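A tiny worked instance of this recipe (my own example, small enough to do by hand): for the 2\by1 matrix A = [[3],[4]], A*A = [25] (1\by1) and AA* = [[9,12],[12,16]], so the one singular value is 5 and W = [1], while the columns of U are unit eigenvectors of AA* (positive eigenvalue 25 first, then 0).

```python
# A tiny worked SVD A = U S W* (my own example): A is 2x1, A*A = [25],
# so the singular value is sqrt(25) = 5; eigenvectors worked out by hand.

A = [[3], [4]]
s = 25 ** 0.5                     # positive square-root of the eigenvalue 25
U = [[3/5, -4/5], [4/5, 3/5]]     # orthonormal eigenvectors of AA*
S = [[s], [0]]                    # 2x1 "diagonal" of singular values
W = [[1]]                         # 1x1 unitary (here W* = W)

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

USW = matmul(matmul(U, S), W)     # should recover A
assert all(abs(USW[i][0] - A[i][0]) < 1e-12 for i in range(2))
```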
Thanks for your attention, good-bye for now, and see you next Wed!!!
============================Pre-Final HW=========================
Here are some problems from Axler (A) and Strang (S) that may help
you prepare for the Final Exam [you do NOT need to do them all!]:
A: 6A #4*, #8, #12, #16, #24*, #25
6B #2*, #4, #9, #11, #15
6C #5*, #6*, #7, #8
7A #1, #3, #4*, #7*, #20, #21 [recall: self-adjoint = Hermitean]
7B #1, #2, #6, #9*, #11*, #15
7C #1, #4*, #7, #8*
7D #4, #6 [printing error: \P_2(\R) not \P(\R^2)], #11*, #14, #17
S: 3.1 #11*, #13, #17*, #19, #20, #32, #34, #45
3.2 #9*, #13, #21, #22, #23*
3.3 #11*, #15
3.4 #14*
6.1 #11
6.2 #2, #5*, #7*
6.3 #2, #5*
6.4 #4, #7, #8*, #12, #13
[Would you like any more?! Or many fewer?!?! Since the 63 problems
above may appear overwhelming, I've marked 22 "recommended" problems
with a "*" for you to focus on. You should look at (and think a bit
about) all 63 problems, but I do NOT even expect all 22 "recommended"
problems to be submitted – yet I DO WANT YOU TO THINK!!!!!!!!!!!! :-]
Please write up and submit as many of these as you can at the Final
Exam, which will follow a format similar to the Midterm.
========================Due 5:30PM next Wed======================
Final Exam (3:30-5:30PM Wednesday 15 December, in our classroom)
===== ==== [but double-check SPIRE for day, time and location]
Topics mostly from S chapters 3 & 6 and A chapters 6 & 7.
[This page is under (re)construction – may want to compare with a
version of Math 545 that I taught a while ago
http://www.gang.umass.edu/~kusner/class/545hw2002
using the classic Curtis text published by Springer.]