Advanced Linear Algebra Homework for Spring 2006 (revised 19 May 2006)
[these problems are being modified, so check frequently; the dates refer
to when the topic was covered in class; please submit your work, and
particularly problems from Curtis, on the Thursday of the following week]
1/31 Resolve the parallelogram conjecture, and do the other exercises
suggested in class, including: the cross product on R^3 is a
non-associative binary operation.
[Read Chapter 1 of Curtis]
2/2 Do the exercises suggested in class about abelian groups, rings and
fields (division rings), especially:
In any ring, r0 = 0r = 0; in a ring with 1, r(1) = r = (1)r,
where in each case, r is any element of the ring.
Also try this: for what n is the ring Z/nZ a field? Can you
describe any other finite fields?
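For intuition on the Z/nZ question, here is a quick brute-force sketch (an
illustration, not part of the assigned proof) that tests, for small n, whether
every nonzero residue mod n has a multiplicative inverse; the helper name
is_field is of course hypothetical:

```python
def is_field(n):
    """Return True if every nonzero residue mod n has a multiplicative
    inverse, i.e. if the ring Z/nZ is a field (checked by brute force)."""
    return all(any((r * s) % n == 1 for s in range(1, n))
               for r in range(1, n))

# The ring Z/nZ turns out to be a field exactly when n is prime:
print([n for n in range(2, 20) if is_field(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19]
```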
Also, assuming the rational numbers Q and the real numbers R
are fields, verify that the complex numbers C = R + i R also form
a field, and that the quaternions H = R + i R + j R + k R form
a (noncommutative) field, where i^2 = j^2 = k^2 = ijk = -1;
similarly for Q in place of R.
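One concrete way to check the quaternion relations (note the signs:
i^2 = j^2 = k^2 = ijk = -1) is the classical representation of 1, i, j, k
as 2 x 2 complex matrices; the sketch below is only an illustration, with
matrices written as pairs of row tuples:

```python
# Model the quaternion units as 2 x 2 complex matrices (a standard
# representation), each matrix a pair of row tuples.
def mul(A, B):
    """Multiply two 2 x 2 matrices given as ((a, b), (c, d))."""
    return tuple(tuple(sum(A[r][t] * B[t][c] for t in range(2))
                       for c in range(2)) for r in range(2))

def neg(A):
    return tuple(tuple(-x for x in row) for row in A)

ONE = ((1, 0), (0, 1))
I_ = ((1j, 0), (0, -1j))
J_ = ((0, 1), (-1, 0))
K_ = ((0, 1j), (1j, 0))

# Check the defining relations i^2 = j^2 = k^2 = ijk = -1:
assert mul(I_, I_) == mul(J_, J_) == mul(K_, K_) == neg(ONE)
assert mul(mul(I_, J_), K_) == neg(ONE)
```

Noncommutativity is visible here too: mul(I_, J_) gives K_, while
mul(J_, I_) gives its negative.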
[Read Chapter 2 of Curtis]
2/7 Do the module exercises suggested in class, as well as: v0 = O and
v = v(1) for any vector v in a module; Ok = O for any scalar k in
the ring (note the difference between "O" and "0"); also, show that
{moon} is a module over any ring.
Check the details that any abelian group A is naturally a
(right or left) Z-module (as we began to verify in class).
For those who know what ideals are: any (right or left) ideal S
in a ring T is an example of a (right or left) T-module; explore the
possibilities when T = Z (the integers) or T = Z/nZ.
Let Z[i] be the ring of Gaussian integers in C, and let M =
Z[i,j,k] be the analogous quaternion integers. Show that M
is both a (right) Z[i]-module and a ring in its own right.
Let R[x^2] be the ring of real-valued polynomial functions in x^2,
that is, all those polynomials whose terms are only of even
degree. Let M_0 and M_1 be the sets of all real even and odd functions,
respectively. Verify that each M_i is an R[x^2]-module.
Can you generalize this?
2/9 Try the submodule and subspace problems suggested in class.
Also try these:
Let S be any subset of V. Check that span(S) is a subspace.
Show that S={1, x, x^2, ... , x^n, ...} is a linearly independent
set in the vector space F[x] of all polynomial functions of x,
and thus a basis for F[x], since clearly span(S)=F[x]. Also try
the related problem, Curtis 2.5#3.
Show that the sets S={e_1, e_2, ... , e_n} and
T={e_1, e_1+e_2, ... e_1+...+e_n} are bases for K^n.
2/14 Curtis 2.4#3cefg,4cdeg,7,8,10
2/16 Curtis 2.5#1,5
2/21 [No class because of Washington and Lincoln -- read the rest of
Chapter 2 of Curtis]
2/23 In the proof of the "spanning sets are at least as big as linearly
independent sets" theorem for vector spaces (over possibly noncommutative
fields) we noticed Curtis' proof needed a fix at the first step --
provide the fix! Also, the proof used the existence of inverses
for nonzero elements in the field, but the result makes sense in
any R-module -- is the result *true* in any R-module?
2/28 Curtis 2.7#1,3,4 [Hint: use dimension theorem and previous problem]
[Read Chapter 3 of Curtis]
3/2 Let L = [.]_B be the linear map V -> F^n which assigns to a vector
in V its coordinates with respect to B. Let S be the standard
basis of F^n. What is the matrix [L]_SB?
Show that Hom(V,W) = {linear maps V -> W} is a vector space of
dimension dim(Hom(V,W)) = dim(V) dim(W).
Curtis 3.11#3,4,5,7,9,10
Curtis 3.12#4,5,8
Curtis 3.13#1,3,7,12
3/7 Let V -L-> W -M-> X be linear maps. Suppose B, C, and D are bases
for V, W and X, respectively. Verify that the matrix for the
composition ML (w.r.t. bases B and D) is the matrix product of
the matrices for M and L (w.r.t....). (Try the case where V, W and
X are each 2-dimensional first!)
Try to work through the details of the rank+nullity theorem proof
of the dimension theorem: use the linear map from the direct
sum of the subspaces V and W onto the sum V+W defined by (v,w) -> v-w,
and show that the kernel is isomorphic to the intersection of
V and W.
3/9 Let V = F^n where F = Z/2Z is the field of two elements. How
many ordered bases are there for V? (This is a tricky counting
problem: try the n=2 case first!) How many unordered bases are there?
Try the same problem with any prime p instead of p=2.
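For intuition on the n=2 case, here is a brute-force sketch (an illustration
only, with a hypothetical helper name) using the observation that an ordered
pair of vectors is a basis of F_p^2 exactly when the 2 x 2 matrix they form
has determinant nonzero mod p:

```python
from itertools import product

def ordered_bases_f_p_squared(p):
    """Count ordered bases of (Z/pZ)^2 by brute force: a pair of columns
    (a,c) and (b,d) is a basis iff the determinant ad - bc is nonzero mod p."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

# p = 2: six ordered bases, hence 6/2! = 3 unordered ones.
print(ordered_bases_f_p_squared(2))  # -> 6
# The general pattern is (p^2 - 1)(p^2 - p): e.g. 48 for p = 3.
```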
3/14 Hamming code problems [I'll distribute these in class].
3/16 Think of the following problems as a take-home midterm:
As a warm-up, try the problem I adapted from the "car guys":
Consider the Roman alphabet as a basis for a vector space (or
a free module over the integers Z), where any word is regarded
as the vector sum of its letters. If we define a linear map
so that ONE -> 2, TWO -> 3 and ELEVEN -> 5, then where
does TWELVE -> ?? Is this enough information to determine
where TEN -> ?? What about TWENTY -> ?? (Give details!)
Let L: V -> W be a linear map between finite dimensional
vector spaces. Let k = dim(im(L)) = rank(L), m = dim(W) and
n = dim(V). Then there are bases B for V and C for W such
that the m x n matrix for L w.r.t these bases is
[L]_CB = | I_k  0 |
         |  0   0 |
where I_k is the k x k identity matrix and where 0 denotes
a matrix of all 0's of the appropriate size. (Prove this!)
In other words, any m x n matrix A can be put in the above
form by multiplying it on the left and right by invertible
m x m and n x n matrices X and Y:
XAY = | I_k  0 |
      |  0   0 | .
This means that the only invariant of a linear map when we
are free to choose bases for range and domain independently
is rank(L), which is simply an integer between 0 and m.
(Also prove this!)
Try carrying this out explicitly for the matrix
A = | 1  2  3 |
    | 4  5  6 |
    | 7  8  9 | .
(We sketched this last part in class -- write up the details!)
[You should also read Chapters 5, 6 and 7 of Curtis over the
break (we'll return to ideas from 4 later).]
3/28 Curtis 7.22#6,7,8,9,10
3/30 Curtis 7.22#2(cf. hw from 3/2 above),4,5,11,12
4/4 Show that the space of k-forms on F^n is a vector space over
F. Try to compute the dimension of this vector space. This
is useful for the following:
Verify that my definition of det (an n-form \omega with
\omega(e_1,...,e_n) = 1) agrees with Curtis's. Also verify that this
gives an algorithm for computing det (Theorem 16.9 in Curtis 5.16).
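Expanding the alternating n-form over products of basis vectors gives the
familiar signed sum over permutations (the Leibniz formula), which is one
such algorithm. A minimal sketch, illustrative only and not in Curtis's
notation:

```python
from itertools import permutations
from math import prod

def det(A):
    """Determinant as the alternating n-form normalized by det(I) = 1:
    the signed sum over permutations (Leibniz formula)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # parity of the permutation, via its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        sign = -1 if inv % 2 else 1
        total += sign * prod(A[i][perm[i]] for i in range(n))
    return total

print(det([[1, 0], [0, 1]]))  # -> 1 (the normalization det(I) = 1)
print(det([[1, 2], [3, 4]]))  # -> -2
```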
Curtis 5.16#4 and 5.18#4
4/6 Curtis 5.19#7,10
4/11 Curtis 7.23#3,4,6
4/13 Curtis 7.24#4,10
4/18 Curtis 7.24#1,2[skip part g in each],3
Also show that if N: W -> W is nilpotent of order e (N^e = 0, but
N^{e-1} != 0) and w is a nonzero vector in W which is not in
ker(N^{e-1}), then {w, N(w), N^2(w),...,N^{e-1}(w)} is linearly
independent in W.
4/20 Curtis 7.25#2,3,4,8[read/think these through, at least]
In class we defined the exponential of an n x n matrix (over a
subfield of the complex numbers for the series to converge)
exp(A) = I + A + ... + A^k/k! + ....
Use the triangular form theorem A = D + N, where D is
diagonalizable and N is nilpotent of some order e at most
n (and, as is clear once in triangular form, DN = ND)
to get a simpler formula for exp(A). [Hint: in general,
exp(A+B) = exp(A)exp(B) *only* when A and B commute.]
Carry this out to compute exp(A) for the 2 x 2 matrix
A = | -14  25 |
    |  -9  16 | ,
which is similar (conjugate) to the matrix
B = | 1  1 |
    | 0  1 | .
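As a numeric cross-check of the D + N idea: here B = I + N with N^2 = 0 and
the summands commute, so exp(B) = e(I + N). The sketch below (an illustration
only, with hypothetical helper names) compares that against a truncated power
series:

```python
from math import e, factorial

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=20):
    """Truncated series exp(A) = I + A + A^2/2! + ... ; twenty terms is
    plenty for small matrices like these."""
    n = len(A)
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    total = [row[:] for row in power]
    for k in range(1, terms):
        power = mat_mul(power, A)
        for i in range(n):
            for j in range(n):
                total[i][j] += power[i][j] / factorial(k)
    return total

# For B = I + N with N = [[0,1],[0,0]] nilpotent, exp(B) = e*(I + N),
# i.e. every entry of [[e, e], [0, e]]:
E = mat_exp([[1, 1], [0, 1]])
```

For A itself one would conjugate: if A = P B P^{-1}, then
exp(A) = P exp(B) P^{-1}.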
=====================================================================
Let's summarize our "streamlined" alternative approach to the
canonical form theorems, avoiding some of the ring theory in Curtis,
and using instead "Fitting's Lemma":
First we discussed finding a basis in which a nilpotent linear map
N:V > V has d x d matrix (d = dim V) in the canonical form
| 0 0 0 ... 0 0         |
| 1 0 0 ... 0 0         |
| 0 1 0 ... 0 0      0  |
| 0 0 1 ... 0 0         |
| : : :     : :         |
| 0 0 0 ... 1 0         |
|                       |
|       0            0  |
where the e x e block in the upper left corresponds to the order of
nilpotency e of N. This
is what I call Fitting's Lemma. We proved that if v is a nonzero
vector in V which is not in ker(N^{e-1}), then
{v, N(v), N^2(v),..., N^{e-1}(v)}
is linearly independent in V, and that it can be extended to a basis
with the properties desired above. (This is essentially the problem
that I suggested for 4/18.) Consequently, e is at most d. (Note: the
argument in class actually only dealt with the case rank(N) = e-1; the
general case is a little more complicated, since the rank of N can be
as large as d-1, and the canonical form of N can in fact have several
blocks of the above form; the idea is that there could be a larger
linearly independent set {v1,...,vs} not in ker(N^{e-1}) which would give
rise to s blocks of size e; and it's even more complicated, because
there could be additional vectors, independent of N(v1),...,N(vs),
and these will give rise to more blocks of size e-1; so the ultimate
canonical form of a nilpotent N may have several blocks of size e,
several more of size e-1, and so forth....)
Note that this works over any field F, not necessarily algebraically
closed. I emphasized this result because it is also at the heart of
rational canonical form, also valid over any F.
Next step: we applied this to an arbitrary linear map L: V -> V under
the assumption that its minimal polynomial m(x)
factors into powers of linear factors
m(x) = (x - a_1)^{e_1} ... (x - a_k)^{e_k}
(which will be true automatically when F is algebraically closed).
A key point is that the map N_i: V_i -> V_i, defined as (L - a_i)
restricted to V_i = ker((L - a_i)^{e_i}), is nilpotent of order e_i
(by minimality of m). Thus L restricted to V_i has the block form
indicated above, except with a_i along the main diagonal.
A lemma which I referred to in class verifies that V_i is in fact an
L-invariant subspace (try as an exercise):
Suppose L, M are commuting linear maps V -> V, that is LM = ML.
Then ker(M) and im(M) are L-invariant subspaces (similarly for
the roles of M and L reversed).
We applied this with M = (L - a_i)^{e_i}, which clearly commutes with L.
Finally, we deduce the Hamilton-Cayley theorem from this:
1) m(x) divides the characteristic polynomial p(x) = det(xI - L)
2) L satisfies p(x), i.e. p(L) = 0
at least when F is algebraically closed (since then clearly the factors
of m divide those of p); but a little thought shows that this must
really be true for arbitrary F as well (exercise, using the fact that
we can extend to the algebraic closure K of F to get all the roots,
while the coefficients of the polynomials still remain in F)!
One final note: problem 23#6 is not correct if the characteristic of F
is 2, since then the matrix
| 1  0 |
| 1  1 |
satisfies the hypotheses, but not the conclusion -- you might think
about this in terms of the canonical form theorem above (the square of
this matrix is I, so it has minimal polynomial x^2 - 1 = (x-1)^2
in characteristic 2)!
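The characteristic-2 example can be checked by direct computation mod 2;
here is a small sketch (illustration only, helper name hypothetical):

```python
def mul_mod2(A, B):
    """Multiply 2 x 2 matrices, reducing entries mod 2 (i.e. over Z/2Z)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % 2 for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]
M = [[1, 0], [1, 1]]

# M is not the identity, yet M^2 = I mod 2, so its minimal polynomial
# is (x - 1)^2 = x^2 - 1 in characteristic 2.
assert M != I2
assert mul_mod2(M, M) == I2
```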
=====================================================================
4/25 Curtis 4.15#2,9[this relies on the exercise suggested in class,
i.e. V = W \directsum W^\perp for any subspace W of V],12,13
4/27 Carry out the KAN factorization of your favorite 2 x 2 matrices,
including my favorite:
C = | 1  2 |
    | 3  4 | .
Determine the symmetries of the capital letters of the Roman
alphabet, assuming these are written in the maximally symmetric
form (in particular, L and Q have only trivial symmetries, and O
has an infinite family), and express them as matrices with respect
to the standard basis of R^2.
Find the QR factorization of the (singular) 3 x 3 matrix
A = | 1  2  3 |
    | 4  5  6 |
    | 7  8  9 | .
5/2 Back to KAN factorization. For small matrices, this isn't very
practical, but the factorization C = KAN gives a nice way to
compute the inverse of C: in fact, the inverses of K and A are very
easy, being K* and 1/A (meaning the diagonal matrix with reciprocal
entries); the inverse of N is also easy: since N = I - P where P
is nilpotent, its inverse is I + P + P^2 + ... + P^k where k+1 is
the order of nilpotency (verify this). Carry this out for the
2 x 2 example from 4/27. [Here and forever more, A* denotes the
(conjugate, if complex entries) transpose of a matrix A.]
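The geometric-series inverse is easy to verify numerically. Below is a
sketch with a hypothetical strictly-lower-triangular P of nilpotency
order 3 (an illustration, not one of the assigned matrices):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
P = [[0, 0, 0], [2, 0, 0], [3, 4, 0]]   # strictly lower triangular: P^3 = 0
N = [[i - p for i, p in zip(ri, rp)] for ri, rp in zip(I3, P)]  # N = I - P

# Geometric series: (I - P)^{-1} = I + P + P^2, since P^3 = 0.
N_inv = mat_add(mat_add(I3, P), mat_mul(P, P))
assert mat_mul(N, N_inv) == I3
```

The same two-term or three-term sum inverts the unipotent factor N in any
KAN factorization, since its strictly triangular part is nilpotent.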
5/4 Let W be the subspace of R^4 perpendicular to [1 1 1 1]*. Compute
matrices for orthogonal projections onto W^\perp and onto W. Find
the closest points on W and W^\perp to the vector [1 2 3 4]*.
Curtis 9.30#1,2,3
5/9 Curtis 9.31#1,3,5-8[really one extended problem]
THE FOLLOWING MATERIAL WILL BE ADAPTED INTO YOUR TAKE-HOME FINAL EXAM:
Suppose we have an endomorphism L: V > V of a real vector space V,
and we hope to find eigenspaces and eigenvalues, or at least invariant
subspaces. Today we discussed how to do this using complex
eigenspaces and eigenvalues for L, by regarding V as a real subspace
of a complex vector space V_C. To make sense of this in general
requires the notion of tensor product, but we can be rather concrete:
if V has basis B (over R) then V_C also has basis B (over C); in other
words, we simply extend the coefficient field from R to C. (Note that
the same trick works for any field extension F to K: a vector space
V_F over F can be viewed as an F-subspace of V_K). Equivalently (and
even more concretely) we know that we may simply regard V as R^n and
V_C as C^n, by choosing some basis B as above.
Note that complex conjugation defines a real (but not complex) linear
mapping #: C^n -> C^n which fixes R^n, that is, R^n is the 1-eigenspace
of # (in class I used a "bar" instead of #, but can't here without
TeX). In fact, if U is any subspace of C^n which is fixed by # then
the vectors in U have all entries real. Consider any complex subspace
W of C^n and its image #W under complex conjugation. Inside W+#W is
the (real) subspace U of vectors of the form u=w+#w for some w in W;
again, these are just the vectors fixed by #.
Return now to our R-linear map L: R^n -> R^n, which we may extend to a
C-linear map C^n -> C^n. Suppose w is an eigenvector of L with
eigenvalue m (both may be complex), and let W be the 1-dimensional
complex subspace spanned by w. Then W+#W is an L-invariant subspace,
and because L is real, its real subspace U is also L-invariant. There
are two possibilities:
1) If the eigenvalue m is real, then L(w+#w) = mw+#(mw) = m(w+#w), so
u = w+#w is a real eigenvector, and W = #W. Conversely, if W = #W, then U
is 1-dimensional over R, spanned by a real eigenvector u, and in this
case the eigenvalue m must be real.
2) When m is not real, W and #W are distinct, U is 2-dimensional over
R, and L restricted to U can be put in the form
|  (m+#m)/2    (m-#m)/2i |
| -(m-#m)/2i   (m+#m)/2  |
in a suitable basis for U (and conversely).
Note that in the special case m is a unit complex number (as in
class), 2) is just the corresponding rotation matrix with cosines and
sines of the argument (angle) of m; and 1) occurs for a reflection
with m = 1 or -1 (cf. Curtis Theorems 30.2 and 30.3).
5/11 The following is part of your final exam:
Let V be a finite dimensional vector space over C, with hermitian
inner product (think of C^n with (x,x)=x*x if you wish).
The key lemma for a linear map A: V -> V which lets us show that
it can be diagonalized with respect to an orthonormal
(unitary) basis when self-adjoint (symmetric or hermitian) is
this: if a subspace W is A-invariant, then its orthogonal
complement W^\perp is A*-invariant. Prove this lemma, and apply
it to show we can diagonalize skew-adjoint, unitary, or, more
generally, normal (A*A = AA*) maps in the same way.
To do this you should prove and use the lemma (cf. Curtis 32.13):
Suppose L, M are commuting linear maps V -> V, that is LM = ML.
Then L and M have a common 1-dimensional invariant subspace W.
[Compare to the lemma from 4/25, and contrast with the following:
commuting linear maps need not have the same invariant subspaces,
since obviously a scalar multiple of the identity (for which any
subspace of V is invariant) commutes with any L (which may have only
particular invariant subspaces).]
To show that a normal linear map can be diagonalized, let L = A and
M = A*, and observe that A** = A. Thus the (n-1)-dimensional subspace
W^\perp is also invariant under both A and A*, so we can use
the dimension reduction argument as before (induction on n if you like).
In applications, one of the most useful ways to compute eigenvectors
and eigenvalues of a symmetric matrix A is to find the maximum of the
associated quadratic form Q(x) = x*Ax among unit vectors x. Show that
the maximum point v is an eigenvector of A belonging to the largest
eigenvalue, the maximum value Q(v). More generally, the critical
points and values of Q correspond to the eigenvectors and eigenvalues
of A.
To find this maximum value, please justify the following algorithm:
pick a random unit vector (say, v_0 = e_1) to begin, then apply A to
get v_1 = Av_0, then rescale to get a unit vector u_1; we can iterate
this to get a sequence of unit vectors u_n (in practice, simply use
v_n = Av_{n-1} and rescale only at the end to get u_n) which converge
to the eigenvector v. (In general, the eigenvectors of A are fixed
points [up to "+/-" sign] of this rescaled action of A on the set
of unit vectors.)
Carry out this algorithm for the 2 x 2 symmetric matrix
A = | 1  2 |
    | 2  3 | .
Fibonacci numbers will appear, and the u_n should converge to a
multiple of [1 T]* where T is the "golden mean" (1 + \sqrt5)/2.
By this method, you should find the largest eigenvalue of A is T^3.
Check algebraically that this is a root of the characteristic polynomial.
What happens when you begin the iteration at v_0 = e_2?
Actually, a better algorithm for the largest eigenvalue/vector
of A is to iterate
v -> (1-h)v + hAv ,
rescaling to the sphere of unit vectors as appropriate. For h
small this is a discrete approximation to the gradient flow of Q,
which will generically converge to the largest eigenvalue/vector.
(When h=1, this coincides with the above algorithm.)
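The basic h = 1 iteration is easy to simulate numerically; the sketch below
(an illustration only, with a hypothetical helper name) rescales at each
step and estimates the eigenvalue by the quotient v*Av:

```python
import math

def power_iteration(A, v, steps=60):
    """Iterate v -> Av with rescaling to unit length; return the unit
    vector and the quotient v*Av, which estimates the largest eigenvalue."""
    for _ in range(steps):
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        norm = math.hypot(w[0], w[1])
        v = [w[0] / norm, w[1] / norm]
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    return v, v[0] * Av[0] + v[1] * Av[1]

v, lam = power_iteration([[1, 2], [2, 3]], [1.0, 0.0])
T = (1 + math.sqrt(5)) / 2          # the golden mean
assert abs(lam - T**3) < 1e-9       # largest eigenvalue is 2 + sqrt(5) = T^3
assert abs(v[1] / v[0] - T) < 1e-9  # eigenvector is a multiple of [1 T]*
```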
PLEASE NOTE: I EXPECT TO BE AROUND GANG (LGRT 1535) IN THE AFTERNOONS
NEXT WEEK (MTW, 5/22-24) -- IF YOU COMPLETE YOUR EXAM AND PASS IT TO ME
BEFORE NOON ON THURSDAY 5/25, I WILL BE GRATEFUL -- I'LL THEN BE OUT OF
TOWN TILL TUESDAY 5/30 AND WILL ONLY ACCEPT EXAMS TILL NOON ON THAT DAY
WITH ADVANCE (BEFORE 5/24) EMAIL NOTICE FROM YOU....
HAVE A MATHEMATICALLY WONDERFUL SUMMER!!!!!!!!!
======================================================================
==========================TAKE-HOME FINAL=============================
======================================================================
PART A.
Suppose we have an endomorphism L: V -> V of a real vector space V,
and we hope to find eigenspaces and eigenvalues, or at least invariant
subspaces. Here's a variation of what we did in class, using complex
eigenspaces and eigenvalues for L acting on a complex vector space V_C,
regarding V as a real subspace of V_C. To make sense of this in general
requires the notion of tensor product, but we can be rather concrete:
if V has basis B (over R) then V_C also has basis B (over C); in other
words, we simply extend the coefficient field from R to C. (Note that
the same trick works for any field extension F to K: a vector space
V_F over F can be viewed as an F-subspace of V_K).
PROBLEM A.1. In the situation above, show that V_K = span_K(B) is
indeed a vector space over F, that V=V_F is a subspace of V_K, and
that dim_F(V_K) = dim_F(K) dim_F(V), where we regard K itself as a
vector space over F. Check the case F=R and K=C=R+iR: if B is a basis
for V over R, then the union of B and iB is a basis for V_C over R,
so dim_R(V_C) = 2 dim_R(V).
Equivalently (and even more concretely) we know that we may simply
regard V as R^n and V_C as C^n, by choosing some basis B as above.
We will assume this perspective for the rest of the problem.
Note that complex conjugation defines a real (but not complex) linear
mapping #: C^n -> C^n which fixes R^n, that is, R^n is the 1-eigenspace
of #. In fact, if U is any subspace of C^n which is fixed by # then
the vectors in U have all entries real. Consider any complex subspace
W of C^n and its image #W under complex conjugation. Inside W+#W is
the (real) subspace U of vectors of the form u=w+#w for some w in W;
again, these are just the vectors fixed by #.
PROBLEM A.2. What is the matrix for # in the standard R basis {e1, ie1,
e2, ie2, ... , en, ien} for C^n?
Return now to our R-linear map L: R^n -> R^n, which we may extend to a
C-linear map C^n -> C^n. Suppose w is an eigenvector of L with
eigenvalue m (both may be complex), and let W be the 1-dimensional
complex subspace spanned by w.
PROBLEM A.3. Show that W+#W is an L-invariant subspace, and that because L
is real, its real subspace U is also L-invariant.
There are two possibilities for U:
1) If the eigenvalue m is real, then L(w+#w) = mw+#(mw) = m(w+#w), so
u = w+#w is a real eigenvector, and W = #W. Conversely, if W = #W, then U
is 1-dimensional over R, spanned by a real eigenvector u, and in this
case the eigenvalue m must be real.
2) When m is not real, W and #W are distinct, U is 2-dimensional over
R, and L restricted to U can be put in the form
|  (m+#m)/2    (m-#m)/2i |
| -(m-#m)/2i   (m+#m)/2  |
in a suitable basis for U (and conversely).
PROBLEM A.4. Verify statements 1) and 2) above, and check that in the
special case m is a unit complex number (as mentioned in class), 2) is
just the corresponding rotation matrix with cosines and sines of the
argument (angle) of m; and 1) occurs for a reflection with m = 1 or -1
(cf. Curtis Theorems 30.2 and 30.3).
PART B.
Let V be a finite dimensional vector space over C, with hermitian
inner product (,) -- think of C^n with (x,y) = x*y if you wish.
The key lemma for any linear map A: V -> V which lets us show that it
can be diagonalized with respect to an orthonormal (or unitary) basis
when A is symmetric (or hermitian) is this:
If a subspace W of V is Ainvariant, then its orthogonal
complement W^\perp is A*-invariant, where A*: V -> V is defined
by the property (A*x,y) = (x,Ay).
PROBLEM B.2. Prove this lemma.
In the following, we will apply this lemma to show we can diagonalize
not only symmetric (or hermitian) maps, but also skew-symmetric (or
skew-hermitian), unitary, or, more generally, normal (A*A = AA*) maps in
the same way.
PROBLEM B.3. Check that hermitian (A* = A), skew-hermitian (A* = -A), and
unitary (A*A = I = AA*) maps are all normal.
To do this you will need another lemma (cf. the first part
of Curtis 32.13, but this is more general):
Suppose L, M are commuting linear maps V -> V, that is LM = ML.
Then L and M have a common 1-dimensional invariant subspace W,
i.e. a common eigenvector w with span({w}) = W.
PROBLEM B.4. Prove this lemma. [Compare to the lemma from 4/20, and
contrast with the following: commuting linear maps need not have the
same invariant subspaces, since obviously a scalar multiple of the
identity (for which any subspace of V is invariant) commutes with any
L (which may have only particular invariant subspaces). You might also
think about whether the following is true for a vector space V over an
arbitrary field K:
Suppose L, M are commuting linear maps V -> V, that is LM = ML.
Then L and M have a common irreducible invariant subspace W.
In particular, when K is algebraically closed, W is 1-dimensional,
and when K is a real field, 1- or 2-dimensional; for example any
pair of real 2-by-2 rotation matrices commutes, but generally R^2
is the only invariant subspace.]
Now, to show that a normal linear map can be diagonalized, let L = A and
M = A*, and observe that A** = A. Thus the (n-1)-dimensional subspace
W^\perp is also invariant under both A and A*, so we can use
the dimension reduction argument as before (induction on n if you like).
PROBLEM B.5. Carry out the details of the above paragraph.
PART C.
In applications, one of the most useful ways to compute eigenvectors
and eigenvalues of an n-by-n symmetric (or Hermitian) matrix A is to
find the maximum of the associated quadratic form Q(x) = x*Ax among
unit vectors x; the unit vectors form a closed, bounded set -- indeed, a
sphere of real dimension n-1 (or 2n-1) -- so Q must attain a maximum there
by a calculus argument. This is known as the Rayleigh-Ritz method.
Problem C.1. Show that the maximum point v is an eigenvector of A
belonging to the largest eigenvalue, the maximum value Q(v). More
generally, the critical points and values of Q correspond to the
eigenvectors and eigenvalues of A. (Hint: Use calculus to verify the
gradient vector grad Q satisfies grad Q(v) = 2Av, so that at any critical
point v, the method of Lagrange multipliers forces grad Q to be a
multiple of the gradient of (x,x) at v, which is 2v; that is, Av = av
for some a in R. Try the 2-by-2 case first to see how the partial
derivatives work before you do the n-by-n case.)
To find this maximum value, pick a random unit vector (say, v_0 = e_1)
to begin, then apply A to get v_1 = Av_0, then rescale to get a unit
vector u_1; we can iterate this to get a sequence of unit vectors u_n
(in practice, simply use v_n = Av_{n-1} and rescale only at the end to
get u_n) which converge to the eigenvector v. (In general, the
eigenvectors of A are fixed points [up to "+/-" sign] of this rescaled
action of A on the set of unit vectors, and thus you want to be sure your
random vector is not already an eigenvector of A.)
PROBLEM C.2. Explain why the above algorithm should work in general,
and carry it out explicitly for the 2 x 2 symmetric matrix
A = | 1  2 |
    | 2  3 | .
What happens when you begin the iteration at v_0 = e_2?
In the example, Fibonacci numbers will appear, and the u_n should
converge to a multiple of the column vector [1 T]* where T is the
"golden mean" (1 + \sqrt5)/2. By this method, you should find the
largest eigenvalue of A is T^3.
PROBLEM C.3. Check algebraically that this is a root of the
characteristic polynomial. What is the other root?
[Actually, a better algorithm for the largest eigenvalue/vector
of A is to iterate
v -> (1-h)v + hAv ,
rescaling to the sphere of unit vectors as appropriate. For h
small this is a discrete approximation to the gradient flow of Q,
which will generically converge to the largest eigenvalue/vector.
When h=1, this coincides with the above algorithm.]
PROBLEM C.4. In the symmetric 2-by-2 case, how might you find the
smallest eigenvalue and its eigenvector? (Compare with C.3. Hint:
The eigenspaces are perpendicular, so it's easy to find that other
eigenvector in this case. In the n-by-n case there are eigenvalues
intermediate to the largest and smallest, so these are obtained by a
similar method: after finding the largest eigenvalue and its
eigenvector w, restrict to the subsphere of unit vectors perpendicular
to w and repeat to find the next largest eigenvalue and its
eigenvector....)
PLEASE NOTE: I EXPECT TO BE AROUND GANG (LGRT 1535) IN THE AFTERNOONS
NEXT WEEK (MTW, 5/22-24) -- IF YOU COMPLETE YOUR EXAM AND PASS IT TO ME
(IN PERSON, OR UNDER MY OFFICE DOOR, IN AN ENVELOPE IF POSSIBLE)
BEFORE NOON ON THURSDAY 5/25, I WILL BE GRATEFUL -- AND YOU'LL GET A
BONUS POINT!
I AM OUT OF TOWN THEN TILL TUESDAY 5/30: ONLY WITH ADVANCE (BEFORE
5/24) EMAIL NOTICE FROM YOU, WILL I ACCEPT EXAMS TILL NOON ON 5/30....
======================================================================
==========================TAKE-HOME FINAL=============================
======================================================================