Linear Algebra (part 3): Eigenvalues and Eigenvectors (by Evan Dummit, 2012, v. 1.00)

Contents

1 Eigenvalues and Eigenvectors
    1.1 The Basic Setup
    1.2 Some Slightly More Advanced Results About Eigenvalues
    1.3 Theory of Similarity and Diagonalization
    1.4 How To Diagonalize A Matrix (if possible)

1 Eigenvalues and Eigenvectors

We have discussed (perhaps excessively) the correspondence between solving a system of homogeneous linear equations and solving the matrix equation $A\vec{x} = \vec{0}$, for $A$ an $n \times n$ matrix and $\vec{x}$ and $\vec{0}$ each column vectors. For reasons that will become more apparent soon, a more general version of this question which is also of interest is to solve the matrix equation $A\vec{x} = \lambda\vec{x}$, where $\lambda$ is a scalar. (The original homogeneous system problem corresponds to $\lambda = 0$.) In the language of linear transformations, this asks: given a linear transformation $T$ from a vector space $V$ to itself, on which vectors $\vec{x}$ does $T$ act as multiplication by a constant $\lambda$?

1.1 The Basic Setup

Definition: For an $n \times n$ matrix $A$, a nonzero vector $\vec{x}$ with $A\vec{x} = \lambda\vec{x}$ is called an eigenvector of $A$, and the corresponding scalar $\lambda$ is called an eigenvalue of $A$.

Important note: We do not consider the zero vector an eigenvector.

For a fixed value of $\lambda$, the set whose elements are the eigenvectors $\vec{x}$ with $A\vec{x} = \lambda\vec{x}$, together with the zero vector, is a subspace of $\mathbb{R}^n$. (This set is called the eigenspace associated to the eigenvalue $\lambda$.)

[S1]: It contains the zero vector.

[S2]: It is closed under addition, because if $A\vec{x}_1 = \lambda\vec{x}_1$ and $A\vec{x}_2 = \lambda\vec{x}_2$, then $A(\vec{x}_1 + \vec{x}_2) = \lambda(\vec{x}_1 + \vec{x}_2)$.

[S3]: It is closed under scalar multiplication, because for any scalar $\beta$, $A(\beta\vec{x}) = \beta(A\vec{x}) = \beta(\lambda\vec{x}) = \lambda(\beta\vec{x})$.

It turns out that it is fairly straightforward to find all of the eigenvalues: because $\lambda\vec{x} = (\lambda I)\vec{x}$, where $I$ is the identity matrix, we can rewrite the eigenvalue equation $A\vec{x} = \lambda\vec{x}$ as $(\lambda I - A)\vec{x} = \vec{0}$. But we know precisely when there will be a nonzero vector $\vec{x}$ with $(\lambda I - A)\vec{x} = \vec{0}$: it is when the matrix $\lambda I - A$ is not invertible, or, in other words, when $\det(\lambda I - A) = 0$.

Definition: When we expand the determinant $\det(tI - A)$, we will obtain a polynomial $p(t)$ of degree $n$ in the variable $t$. This polynomial is called the characteristic polynomial of the matrix $A$, and its roots are precisely the eigenvalues of $A$.

Notation 1: Some authors instead define the characteristic polynomial as the determinant of the matrix $A - tI$ rather than $tI - A$. I define it this way because then the coefficient of $t^n$ will always be 1, rather than $(-1)^n$.
Notation 2: It is often customary, when referring to the eigenvalues of a matrix, to list an eigenvalue the appropriate number of extra times if it is a multiple root of the characteristic polynomial. Thus, for the characteristic polynomial $t^2(t-1)$, we could say the eigenvalues are $\lambda = 0, 0, 1$ if we wanted to emphasize that the eigenvalue 0 occurs more than once.

Remark: The characteristic polynomial may have non-real numbers as roots. Non-real eigenvalues are absolutely acceptable; the only wrinkle is that the eigenvectors for these eigenvalues will also necessarily contain non-real entries. (If $A$ has real number entries, then any non-real roots of the characteristic polynomial will come in complex conjugate pairs. The eigenvectors for one root will be complex conjugates of the eigenvectors for the other root.)

Proposition: The eigenvalues of an upper-triangular matrix are the diagonal entries. This statement follows from the observation that the determinant of an upper-triangular matrix is the product of the diagonal entries, combined with the observation that if $A$ is upper-triangular, then $tI - A$ is also upper-triangular. (If diagonal entries are repeated, the eigenvalues are repeated the same number of times.)

Example: The eigenvalues of an upper-triangular matrix with diagonal entries 1, 3, and $c$ are 1, 3, and $c$, and the eigenvalues of $\begin{pmatrix} 2 & 0 & 1 \\ 0 & 3 & 2 \\ 0 & 0 & 2 \end{pmatrix}$ are 2, 2, and 3.

To find all the eigenvalues (and eigenvectors) of a matrix $A$, follow these steps (a computational sketch follows the list):

Step 1: Write down the matrix $tI - A$ and compute its determinant (using any method) to obtain the characteristic polynomial $p(t)$.

Step 2: Set $p(t)$ equal to zero and solve. The roots are precisely the eigenvalues $\lambda$ of $A$.

Step 3: For each eigenvalue $\lambda$, solve for all vectors $\vec{x}$ satisfying $A\vec{x} = \lambda\vec{x}$. (Either do this directly, or by solving the homogeneous system $(\lambda I - A)\vec{x} = \vec{0}$ via row-reduction.) The resulting solution vectors $\vec{x}$ form the eigenspace associated to $\lambda$, and the nonzero vectors in the space are the eigenvectors corresponding to $\lambda$.
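The three steps can be mirrored symbolically on a computer. Here is a minimal sketch, assuming the sympy library is available (the matrix is the $2 \times 2$ example worked later in this section):

```python
import sympy as sp

# the matrix A (the 2x2 example matrix used later in this section)
A = sp.Matrix([[2, 2], [3, 1]])
t = sp.symbols('t')

# Step 1: characteristic polynomial p(t) = det(tI - A)
p = (t * sp.eye(2) - A).det()
print(sp.factor(p))          # (t - 4)*(t + 1)

# Step 2: the roots of p(t) are the eigenvalues
eigenvalues = sp.solve(sp.Eq(p, 0), t)
print(eigenvalues)           # [-1, 4]

# Step 3: for each eigenvalue, the nullspace of (lambda*I - A) is the eigenspace
for lam in eigenvalues:
    basis = (lam * sp.eye(2) - A).nullspace()
    print(lam, [v.T for v in basis])
```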

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.

Step 1: We have $tI - A = \begin{pmatrix} t-1 & 0 \\ 0 & t-1 \end{pmatrix}$, so $p(t) = \det(tI - A) = (t-1)^2$.

Step 2: The characteristic equation $(t-1)^2 = 0$ has a double root $t = 1$. So the eigenvalues are $\lambda = 1, 1$. (Alternatively, we could have used the fact that the matrix is upper-triangular.)

Step 3: We want to find the vectors $\vec{x}$ with $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\vec{x} = \vec{x}$. Clearly, all vectors have this property. Therefore, a basis for the eigenspace with $\lambda = 1$ is given by $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$.

Step 1: We have $tI - A = \begin{pmatrix} t-1 & -1 \\ 0 & t-1 \end{pmatrix}$, so $p(t) = \det(tI - A) = (t-1)^2$.

Step 2: The characteristic equation $(t-1)^2 = 0$ has a double root $t = 1$. So the eigenvalues are $\lambda = 1, 1$. (Alternatively, we could have used the fact that the matrix is upper-triangular.)

Step 3: We want to find the vectors $\begin{pmatrix} a \\ b \end{pmatrix}$ with $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$. This requires $a + b = a$, which means $a$ can be arbitrary and $b = 0$. So the vectors we want are those of the form $\begin{pmatrix} a \\ 0 \end{pmatrix}$, and so a basis for the eigenspace with $\lambda = 1$ is given by $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$.
Remark: Note that the matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and the identity matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ have the same characteristic polynomial and eigenvalues, but do not have the same eigenvectors. In fact, for $\lambda = 1$, the eigenspace for $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is 1-dimensional, while the eigenspace for $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is 2-dimensional.

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}$.

Step 1: We have $tI - A = \begin{pmatrix} t-2 & -2 \\ -3 & t-1 \end{pmatrix}$, so $p(t) = \det(tI - A) = (t-2)(t-1) - 6 = t^2 - 3t - 4$.

Step 2: Since $p(t) = t^2 - 3t - 4 = (t-4)(t+1)$, the eigenvalues are $\lambda = 4, -1$.

Step 3: For $\lambda = 4$ we want $\begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 4\begin{pmatrix} a \\ b \end{pmatrix}$, so we need $2a + 2b = 4a$ and $3a + b = 4b$, both of which reduce to $a = b$. So the vectors we want are those of the form $\begin{pmatrix} a \\ a \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

For $\lambda = -1$ we want $\begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = -\begin{pmatrix} a \\ b \end{pmatrix}$, so we need $2a + 2b = -a$ and $3a + b = -b$, both of which reduce to $3a = -2b$. So the vectors we want are those of the form $\begin{pmatrix} 2a \\ -3a \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 2 \\ -3 \end{pmatrix}$.
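As a quick numerical sanity check on this example, here is a sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[2.0, 2.0], [3.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)  # expect 4 and -1 (possibly in the other order)

# each column of vecs is a (normalized) eigenvector: A v should equal lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```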

Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}$.

Step 1: We have $tI - A = \begin{pmatrix} t & 0 & 0 \\ -1 & t & 1 \\ 0 & -1 & t \end{pmatrix}$, so $p(t) = \det(tI - A) = t(t^2 + 1)$.

Step 2: Since $p(t) = t(t^2 + 1) = t(t-i)(t+i)$, the eigenvalues are $\lambda = 0, i, -i$.

Step 3: For $\lambda = 0$ we want $\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \vec{0}$, so we need $a - c = 0$ and $b = 0$, so $b = 0$ and $a = c$. So the vectors we want are those of the form $\begin{pmatrix} a \\ 0 \\ a \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.

For $\lambda = i$ we want $A\begin{pmatrix} a \\ b \\ c \end{pmatrix} = i\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $0 = ia$, $a - c = ib$, and $b = ic$, so $a = 0$ and $b = ic$. So the vectors we want are those of the form $\begin{pmatrix} 0 \\ ic \\ c \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix}$.

For $\lambda = -i$ we want $A\begin{pmatrix} a \\ b \\ c \end{pmatrix} = -i\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $0 = -ia$, $a - c = -ib$, and $b = -ic$, so $a = 0$ and $b = -ic$. So the vectors we want are those of the form $\begin{pmatrix} 0 \\ -ic \\ c \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix}$.
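To see the conjugate-pair behavior concretely, here is a short check, as a sketch assuming numpy (the matrix is the one from the example above):

```python
import numpy as np

A = np.array([[0.0, 0.0,  0.0],
              [1.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
vals, vecs = np.linalg.eig(A)
print(np.round(vals, 8))  # expect 0, i, and -i in some order

# the non-real eigenvalues come in a complex conjugate pair
assert any(np.isclose(v, 1j) for v in vals) and any(np.isclose(v, -1j) for v in vals)
```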
Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 1 & 0 & 1 \\ -1 & 1 & 3 \\ -1 & 0 & 3 \end{pmatrix}$.

Step 1: We have $tI - A = \begin{pmatrix} t-1 & 0 & -1 \\ 1 & t-1 & -3 \\ 1 & 0 & t-3 \end{pmatrix}$, so expanding along the second column gives $p(t) = (t-1)\det\begin{pmatrix} t-1 & -1 \\ 1 & t-3 \end{pmatrix} = (t-1)[(t-1)(t-3) + 1]$.

Step 2: Since $p(t) = (t-1)[(t-1)(t-3) + 1] = (t-1)(t-2)^2$, the eigenvalues are $\lambda = 1, 2, 2$.

Step 3: For $\lambda = 1$ we want $A\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $a + c = a$, $-a + b + 3c = b$, and $-a + 3c = c$, so $a = 0$ and $c = 0$. So the vectors we want are those of the form $\begin{pmatrix} 0 \\ b \\ 0 \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$.

For $\lambda = 2$ we want $A\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 2\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $a + c = 2a$, $-a + b + 3c = 2b$, and $-a + 3c = 2c$, so $c = a$ and $b = 2a$. So the vectors we want are those of the form $\begin{pmatrix} a \\ 2a \\ a \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$.

1.2 Some Slightly More Advanced Results About Eigenvalues

Theorem: If $\lambda$ is an eigenvalue of the matrix $A$ which appears exactly $k$ times as a root of the characteristic polynomial, then the dimension of the eigenspace corresponding to $\lambda$ is at least 1 and at most $k$.

Remark: The number of times that $\lambda$ appears as a root of the characteristic polynomial is called the algebraic multiplicity of $\lambda$, and the dimension of the eigenspace corresponding to $\lambda$ is called the geometric multiplicity of $\lambda$. So what the theorem says is that the geometric multiplicity is at most the algebraic multiplicity.

Example: If the characteristic polynomial is $(t-1)^3(t-3)^2$, then the eigenspace for $\lambda = 1$ is at most 3-dimensional, and the eigenspace for $\lambda = 3$ is at most 2-dimensional.

Proof: The statement that the eigenspace has dimension at least 1 is immediate, because (by assumption) $\lambda$ is a root of the characteristic polynomial and therefore has at least one nonzero eigenvector associated to it. For the statement that the dimension is at most $k$, the idea is to look at the homogeneous system $(\lambda I - A)\vec{x} = \vec{0}$. If $\lambda$ appears $k$ times as a root of the characteristic polynomial, then when we put the matrix $\lambda I - A$ into its reduced row-echelon form $E$, $E$ must have at most $k$ rows of all zeroes. Otherwise, the matrix $E$ (and hence $\lambda I - A$ too, although this requires a check) would have 0 as an eigenvalue more than $k$ times, because $E$ is in echelon form and therefore upper-triangular. But the number of rows of all zeroes in a square matrix in reduced row-echelon form is the same as the number of nonpivotal columns, which is the number of free variables, which is the dimension of the solution space. So, putting all the statements together, we see that the dimension of the eigenspace is at most $k$.
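The distinction between the two multiplicities is easy to observe computationally. A minimal sketch, assuming sympy (whose eigenvects method returns each eigenvalue with its algebraic multiplicity and an eigenspace basis):

```python
import sympy as sp

# the earlier 2x2 examples: the identity matrix and the "shear" matrix
for A in (sp.eye(2), sp.Matrix([[1, 1], [0, 1]])):
    for eigenvalue, alg_mult, basis in A.eigenvects():
        geo_mult = len(basis)  # dimension of the eigenspace
        print(eigenvalue, alg_mult, geo_mult)
# identity: eigenvalue 1 has algebraic and geometric multiplicity 2
# shear:    eigenvalue 1 has algebraic multiplicity 2, geometric multiplicity 1
```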

Theorem: If $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are eigenvectors of $A$ associated to distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent.
Proof: Suppose we had a nontrivial dependence relation between $\vec{v}_1, \ldots, \vec{v}_n$, say $a_1\vec{v}_1 + \cdots + a_n\vec{v}_n = \vec{0}$. (Note that at least two coefficients have to be nonzero, because none of $\vec{v}_1, \ldots, \vec{v}_n$ is the zero vector.) Multiply both sides by the matrix $A$: this gives $A(a_1\vec{v}_1 + \cdots + a_n\vec{v}_n) = A\vec{0} = \vec{0}$. Now since $\vec{v}_1, \ldots, \vec{v}_n$ are eigenvectors, this says $a_1\lambda_1\vec{v}_1 + \cdots + a_n\lambda_n\vec{v}_n = \vec{0}$. But now if we scale the original equation by $\lambda_1$ and subtract (to eliminate $\vec{v}_1$), we obtain $a_2(\lambda_2 - \lambda_1)\vec{v}_2 + \cdots + a_n(\lambda_n - \lambda_1)\vec{v}_n = \vec{0}$. Since by assumption all of the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ were different, this dependence is still nontrivial, since each of $\lambda_2 - \lambda_1, \ldots, \lambda_n - \lambda_1$ is nonzero, and at least one of $a_2, \ldots, a_n$ is nonzero. But now we can repeat the process to eliminate each of $\vec{v}_2, \vec{v}_3, \ldots, \vec{v}_{n-1}$ in turn. Eventually we are left with the equation $c\,\vec{v}_n = \vec{0}$ for some nonzero $c$. But this is impossible, because it would say that $\vec{v}_n = \vec{0}$, contradicting our definition saying that the zero vector is not an eigenvector. So there cannot be a nontrivial dependence relation, meaning that $\vec{v}_1, \ldots, \vec{v}_n$ are linearly independent.

Corollary: If $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are (any) eigenvectors associated to those eigenvalues, then $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are a basis for $\mathbb{R}^n$. This result follows from the previous theorem: it guarantees that $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent, so since they are $n$ vectors in the $n$-dimensional vector space $\mathbb{R}^n$, they are a basis.

Theorem: The product of the eigenvalues of $A$ is the determinant of $A$.

Proof: If we expand out the product $p(t) = (t - \lambda_1)\cdots(t - \lambda_n)$, we see that the constant term is equal to $(-1)^n \lambda_1 \cdots \lambda_n$. But the constant term is also just $p(0)$, and since $p(t) = \det(tI - A)$ we have $p(0) = \det(-A) = (-1)^n \det(A)$. Thus, setting the two expressions equal shows that the product of the eigenvalues equals the determinant of $A$.

Theorem: The sum of the eigenvalues of $A$ equals the trace of $A$.

Note: The trace of a matrix is defined to be the sum of its diagonal entries.

Proof: If we expand out the product $p(t) = (t - \lambda_1)\cdots(t - \lambda_n)$, we see that the coefficient of $t^{n-1}$ is equal to $-(\lambda_1 + \cdots + \lambda_n)$. If we expand out the determinant $\det(tI - A)$ to find the coefficient of $t^{n-1}$, we can show (with a little bit of effort) that the coefficient is the negative of the sum of the diagonal entries of $A$. Therefore, setting the two expressions equal shows that the sum of the eigenvalues equals the trace of $A$.
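Both facts are easy to spot-check numerically; a minimal sketch assuming numpy:

```python
import numpy as np

A = np.array([[2.0, 2.0], [3.0, 1.0]])   # eigenvalues 4 and -1
vals = np.linalg.eigvals(A)

assert np.isclose(vals.prod(), np.linalg.det(A))   # product = determinant = -4
assert np.isclose(vals.sum(), np.trace(A))         # sum = trace = 3
```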

1.3 Theory of Similarity and Diagonalization

Definition: We say two matrices $A$ and $B$ are similar (or conjugate) if there exists an invertible matrix $P$ such that $B = P^{-1}AP$. (We refer to $P^{-1}AP$ as the conjugation of $A$ by $P$.)

Example: The matrices $A = \begin{pmatrix} 3 & -1 \\ 2 & -1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}$ are similar: with $P = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix}$, so that $P^{-1} = \begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}$, we can verify that $\begin{pmatrix} 2 & -3 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}$, so that $B = P^{-1}AP$.

Remark: In general, if two matrices $A$ and $B$ are similar, then there can be many different matrices $Q$ with $B = Q^{-1}AQ$; for example, any nonzero scalar multiple $Q = cP$ of the matrix $P$ above works equally well.

Similar matrices have quite a few useful algebraic properties (which justify the name "similar"). If $B = P^{-1}AP$ and $D = P^{-1}CP$, then we have the following:

The sum of the conjugates is the conjugate of the sum: $B + D = P^{-1}AP + P^{-1}CP = P^{-1}(A + C)P$.

The product of the conjugates is the conjugate of the product: $BD = (P^{-1}AP)(P^{-1}CP) = P^{-1}(AC)P$.

The inverse of the conjugate is the conjugate of the inverse: $B^{-1}$ exists if and only if $A^{-1}$ exists, and $B^{-1} = P^{-1}A^{-1}P$.

The determinant of the conjugate is equal to the original determinant: $\det(B) = \det(P^{-1}AP) = \det(P^{-1})\det(A)\det(P) = \det(A)\det(P^{-1}P) = \det(A)$.

The conjugate has the same characteristic polynomial as the original matrix: $\det(tI - B) = \det(P^{-1}(tI)P - P^{-1}AP) = \det(P^{-1}(tI - A)P) = \det(tI - A)$. In particular, a matrix and its conjugate have the same eigenvalues (with the same multiplicities). Also, by using the fact that the trace is equal both to the sum of the diagonal elements and to a coefficient in the characteristic polynomial, we see that a matrix and its conjugate have the same trace.

If $\vec{x}$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $P^{-1}\vec{x}$ is an eigenvector of $B$ with eigenvalue $\lambda$: if $A\vec{x} = \lambda\vec{x}$, then $B(P^{-1}\vec{x}) = P^{-1}APP^{-1}\vec{x} = P^{-1}A\vec{x} = P^{-1}(\lambda\vec{x}) = \lambda(P^{-1}\vec{x})$. This is also true in reverse: if $\vec{y}$ is an eigenvector of $B$, then $P\vec{y}$ is an eigenvector of $A$ (with the same eigenvalue). In particular, the eigenspaces for $B$ have the same dimensions as the eigenspaces for $A$.
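These invariants are easy to confirm for the example above; a sketch assuming sympy, and using the matrices $A$ and $P$ from that example:

```python
import sympy as sp

A = sp.Matrix([[3, -1], [2, -1]])
P = sp.Matrix([[2, 3], [1, 2]])
B = P.inv() * A * P

t = sp.symbols('t')
# same characteristic polynomial, hence same eigenvalues, determinant, and trace
assert (t * sp.eye(2) - A).det().expand() == (t * sp.eye(2) - B).det().expand()
assert A.det() == B.det() and A.trace() == B.trace()
print(B)  # Matrix([[1, 2], [1, 1]])
```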

One question we might have about similarity is: given a matrix $A$, what is the simplest matrix that $A$ is similar to? As observed above, any matrix similar to $A$ has the same eigenvalues as $A$. So, if the eigenvalues are $\lambda_1, \ldots, \lambda_n$, the simplest form we could plausibly hope for would be a diagonal matrix whose diagonal elements are the eigenvalues of $A$.

Definition: We say that a matrix $A$ is diagonalizable if it is similar to a diagonal matrix $D$; that is, if there exists an invertible matrix $P$ with $D = P^{-1}AP$.

Example: The matrix $A = \begin{pmatrix} -2 & 6 \\ -3 & 7 \end{pmatrix}$ is diagonalizable. We can check that for $P = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}$ and $P^{-1} = \begin{pmatrix} -1 & 2 \\ 1 & -1 \end{pmatrix}$, we have $P^{-1}AP = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}$.

If we know that $A$ is diagonalizable and have $D = P^{-1}AP$, then it is very easy to compute any power of $A$: since $D$ is diagonal, $D^n$ is the diagonal matrix whose diagonal entries are the $n$th powers of those of $D$. Then $A = PDP^{-1}$, so $A^n = (PDP^{-1})^n = PD^nP^{-1}$.

Example: With $A = \begin{pmatrix} -2 & 6 \\ -3 & 7 \end{pmatrix}$ as above, we have $D^n = \begin{pmatrix} 4^n & 0 \\ 0 & 1 \end{pmatrix}$, so that $A^n = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 4^n & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} -1 & 2 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 2 - 4^n & 2 \cdot 4^n - 2 \\ 1 - 4^n & 2 \cdot 4^n - 1 \end{pmatrix}$.

Observation: This formula also makes sense for values of $n$ which are not positive integers. For example, if we set $n = -1$ we get the matrix $\begin{pmatrix} 7/4 & -3/2 \\ 3/4 & -1/2 \end{pmatrix}$, which is actually the inverse matrix $A^{-1}$. And if we set $n = 1/2$ we get the matrix $B = \begin{pmatrix} 0 & 2 \\ -1 & 3 \end{pmatrix}$, whose square satisfies $B^2 = \begin{pmatrix} -2 & 6 \\ -3 & 7 \end{pmatrix} = A$.
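Here is the same power computation done programmatically, including the fractional case; a minimal sketch assuming numpy (the eigenvectors are computed numerically rather than taken from the example):

```python
import numpy as np

A = np.array([[-2.0, 6.0], [-3.0, 7.0]])
vals, P = np.linalg.eig(A)            # columns of P are eigenvectors
Pinv = np.linalg.inv(P)

def matrix_power(n):
    # A^n = P D^n P^{-1}, where D^n just raises the eigenvalues to the n
    return P @ np.diag(vals ** n) @ Pinv

assert np.allclose(matrix_power(3), np.linalg.matrix_power(A, 3))
assert np.allclose(matrix_power(-1), np.linalg.inv(A))
B = matrix_power(0.5)                  # a square root of A
assert np.allclose(B @ B, A)
print(np.round(B, 8))                  # [[0, 2], [-1, 3]]
```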

Theorem: An $n \times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. In particular, every matrix whose eigenvalues are distinct is diagonalizable.

Proof: If $A$ has $n$ linearly independent eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$ with respective eigenvalues $\lambda_1, \ldots, \lambda_n$, then consider the matrix $P = \begin{pmatrix} | & & | \\ \vec{v}_1 & \cdots & \vec{v}_n \\ | & & | \end{pmatrix}$ whose columns are the eigenvectors of $A$. Because $\vec{v}_1, \ldots, \vec{v}_n$ are eigenvectors, we have $AP = \begin{pmatrix} | & & | \\ A\vec{v}_1 & \cdots & A\vec{v}_n \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ \lambda_1\vec{v}_1 & \cdots & \lambda_n\vec{v}_n \\ | & & | \end{pmatrix}$. But, with $D$ the diagonal matrix of eigenvalues, we also have $PD = \begin{pmatrix} | & & | \\ \vec{v}_1 & \cdots & \vec{v}_n \\ | & & | \end{pmatrix}\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} = \begin{pmatrix} | & & | \\ \lambda_1\vec{v}_1 & \cdots & \lambda_n\vec{v}_n \\ | & & | \end{pmatrix}$. Therefore, $AP = PD$. Now since the eigenvectors are linearly independent, $P$ is invertible, and we can therefore write $D = P^{-1}AP$, as desired.

For the other direction, if $D = P^{-1}AP$, then (like above) we can rewrite this to say $AP = PD$. If $P = \begin{pmatrix} | & & | \\ \vec{v}_1 & \cdots & \vec{v}_n \\ | & & | \end{pmatrix}$, then $AP = PD$ says $\begin{pmatrix} | & & | \\ A\vec{v}_1 & \cdots & A\vec{v}_n \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ \lambda_1\vec{v}_1 & \cdots & \lambda_n\vec{v}_n \\ | & & | \end{pmatrix}$, which (by comparing columns) says that $A\vec{v}_1 = \lambda_1\vec{v}_1, \ldots, A\vec{v}_n = \lambda_n\vec{v}_n$. Thus the columns $\vec{v}_1, \ldots, \vec{v}_n$ of $P$ are eigenvectors, and (because $P$ is invertible) they are linearly independent.

Finally, the last statement in the theorem follows because (as shown earlier) a matrix with $n$ distinct eigenvalues has $n$ linearly independent eigenvectors.
Advanced Remark: As the theorem demonstrates, if we are trying to diagonalize a matrix, we can run into trouble if the matrix has repeated eigenvalues. However, we might still like to know the simplest form that a non-diagonalizable matrix is similar to. The answer is given by what is called the Jordan Canonical Form (of a matrix): every matrix is similar to a matrix of the form $\begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_k \end{pmatrix}$, where each of $J_1, \ldots, J_k$ is a square Jordan block matrix of the form $J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$, with $\lambda$s on the diagonal and 1s directly above the diagonal (where blank entries are zeroes).

Example: The non-diagonalizable matrix $\begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$ is in Jordan Canonical Form, with $J_1 = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$ and $J_2 = \begin{pmatrix} 3 \end{pmatrix}$.

The existence and uniqueness of the Jordan Canonical Form can be proven using a careful analysis of generalized eigenvectors: vectors $\vec{x}$ satisfying $(A - \lambda I)^k\vec{x} = \vec{0}$ for some positive integer $k$. (Regular eigenvectors would correspond to $k = 1$.) Roughly speaking, the idea is to use certain carefully-chosen generalized eigenvectors to fill in for the missing eigenvectors; doing this causes the appearance of the extra 1s above the diagonal in the Jordan blocks.
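Computer algebra systems can produce this form directly; a small sketch assuming sympy (whose jordan_form method returns the change-of-basis matrix together with the Jordan form):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
P, J = A.jordan_form()
print(J)   # the matrix is already in Jordan form, so J equals A up to block order
assert P.inv() * A * P == J
```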

Theorem (Cayley-Hamilton): If $p(t)$ is the characteristic polynomial of a matrix $A$, then $p(A)$ is the zero matrix (where, in applying a polynomial to a matrix, we replace the constant term with that constant times the identity matrix).

Example: For the matrix $A = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}$, we have $p(t) = \det(tI - A) = (t-2)(t-1) - 6 = t^2 - 3t - 4$. We can compute $A^2 = \begin{pmatrix} 10 & 6 \\ 9 & 7 \end{pmatrix}$, and then indeed we have $A^2 - 3A - 4I = \begin{pmatrix} 10 & 6 \\ 9 & 7 \end{pmatrix} - \begin{pmatrix} 6 & 6 \\ 9 & 3 \end{pmatrix} - \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$.
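The same verification in code, as a sketch assuming numpy:

```python
import numpy as np

A = np.array([[2, 2], [3, 1]])
I = np.eye(2)

# p(t) = t^2 - 3t - 4, so p(A) = A^2 - 3A - 4I should be the zero matrix
p_of_A = A @ A - 3 * A - 4 * I
assert np.allclose(p_of_A, np.zeros((2, 2)))
```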
Proof (if $A$ is diagonalizable): If $A$ is diagonalizable, then let $D = P^{-1}AP$ with $D$ diagonal, and let $p(t)$ be the characteristic polynomial of $A$. The diagonal entries of $D$ are the eigenvalues $\lambda_1, \ldots, \lambda_n$ of $A$, hence are roots of the characteristic polynomial of $A$. So $p(\lambda_1) = \cdots = p(\lambda_n) = 0$. Then, because raising $D$ to a power just raises all of its diagonal entries to that power, we can see that $p(D)$ is the diagonal matrix with diagonal entries $p(\lambda_1), \ldots, p(\lambda_n)$, so $p(D) = 0$. Now by conjugating each term and adding the results, we see that $p(D) = p(P^{-1}AP) = P^{-1}[p(A)]P$. So by conjugating back, we see that $p(A) = P[p(D)]P^{-1} = 0$.

In the case where $A$ is not diagonalizable, the proof is more difficult. One way is to use the Jordan Canonical Form of $A$ in place of the diagonal matrix $D$; then (one can verify) $p(J) = 0$, and then the remainder of the argument is the same.

1.4 How To Diagonalize A Matrix (if possible)

In order to determine whether a matrix $A$ is diagonalizable (and if it is, how to find a diagonalization $D = P^{-1}AP$), follow these steps (a computational sketch follows the list):

Step 1: Find the characteristic polynomial and eigenvalues of $A$.

Step 2: Find a basis for each eigenspace of $A$.

Step 3a: Determine whether $A$ is diagonalizable. If each eigenspace has the proper dimension (namely, the number of times the corresponding eigenvalue appears as a root of the characteristic polynomial), then the matrix is diagonalizable. Otherwise, the matrix is not diagonalizable.

Step 3b: If the matrix is diagonalizable, then $D$ is the diagonal matrix whose diagonal entries are the eigenvalues of $A$ (with appropriate multiplicities), and $P$ can be taken to be the matrix whose columns are linearly independent eigenvectors of $A$, in the same order as the eigenvalues appear in $D$.
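The whole recipe is automated by computer algebra; a minimal sketch assuming sympy (whose diagonalize method returns $P$ and $D$, and whose is_diagonalizable method performs the Step 3a check):

```python
import sympy as sp

A = sp.Matrix([[0, -2], [3, 5]])   # the first example matrix below
if A.is_diagonalizable():
    P, D = A.diagonalize()         # returns P and D with D = P^{-1} A P
    print(P, D)
    assert P.inv() * A * P == D
```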

Example: For $A = \begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix}$, determine whether there exists a diagonal matrix $D$ and an invertible matrix $P$ with $D = P^{-1}AP$, and if so, find them.

Step 1: We have $tI - A = \begin{pmatrix} t & 2 \\ -3 & t-5 \end{pmatrix}$, so $\det(tI - A) = t(t-5) + 6 = t^2 - 5t + 6 = (t-2)(t-3)$. The eigenvalues are therefore $\lambda = 2, 3$.

Step 2: For $\lambda = 2$ we need to solve $\begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 2\begin{pmatrix} a \\ b \end{pmatrix}$, so $-2b = 2a$ and $3a + 5b = 2b$, and thus $a = -b$. The eigenvectors are of the form $\begin{pmatrix} -b \\ b \end{pmatrix}$, so a basis for the $\lambda = 2$ eigenspace is $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

For $\lambda = 3$ we need to solve $\begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 3\begin{pmatrix} a \\ b \end{pmatrix}$, so $-2b = 3a$ and $3a + 5b = 3b$, and thus $3a = -2b$. The eigenvectors are of the form $\begin{pmatrix} 2a \\ -3a \end{pmatrix}$, so a basis for the $\lambda = 3$ eigenspace is $\begin{pmatrix} 2 \\ -3 \end{pmatrix}$.

Step 3: Since the eigenvalues are distinct, we know that $A$ is diagonalizable, and $D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. We have two linearly independent eigenvectors, and so we can take $P = \begin{pmatrix} 1 & 2 \\ -1 & -3 \end{pmatrix}$.

To check: we have $P^{-1} = \begin{pmatrix} 3 & 2 \\ -1 & -1 \end{pmatrix}$, so $P^{-1}AP = \begin{pmatrix} 3 & 2 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} 0 & -2 \\ 3 & 5 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ -1 & -3 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$.
Note: We could also take $D = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}$ if we wanted (with the columns of $P$ swapped accordingly). There is no particular reason to care much about which diagonal matrix we get, as long as we make sure to arrange the eigenvectors in the correct order.

Example: For $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}$, determine whether there exists a diagonal matrix $D$ and an invertible matrix $P$ with $D = P^{-1}AP$, and if so, find them.

Step 1: We have $tI - A = \begin{pmatrix} t-1 & 0 & 0 \\ 0 & t-2 & 0 \\ 0 & -2 & t-1 \end{pmatrix}$, so $\det(tI - A) = (t-1)\det\begin{pmatrix} t-2 & 0 \\ -2 & t-1 \end{pmatrix} = (t-1)^2(t-2)$. The eigenvalues are therefore $\lambda = 1, 1, 2$.

Step 2: For $\lambda = 1$ we need to solve $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $2b = b$ and $2b + c = c$, and thus $b = 0$. The eigenvectors are of the form $\begin{pmatrix} a \\ 0 \\ c \end{pmatrix}$, so a basis for the $\lambda = 1$ eigenspace is $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$.

For $\lambda = 2$ we need to solve $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 2\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $a = 2a$ and $2b + c = 2c$, and thus $a = 0$ and $c = 2b$. The eigenvectors are of the form $\begin{pmatrix} 0 \\ b \\ 2b \end{pmatrix}$, so a basis for the $\lambda = 2$ eigenspace is $\begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}$.

Step 3: Since the eigenspace for $\lambda = 1$ is 2-dimensional, the matrix is diagonalizable, and $D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$. We have three linearly independent eigenvectors, so we can take $P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}$.

To check: we have $P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}$, so $P^{-1}AP = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$.
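A quick check of this diagonalization, as a sketch assuming sympy:

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0], [0, 2, 0], [0, 2, 1]])
P = sp.Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 2]])
D = sp.diag(1, 1, 2)

# confirm that P^{-1} A P is the expected diagonal matrix
assert P.inv() * A * P == D
```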
Example: For $A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$, determine whether there exists a diagonal matrix $D$ and an invertible matrix $P$ with $D = P^{-1}AP$, and if so, find them.

Step 1: We have $tI - A = \begin{pmatrix} t-1 & -1 & -1 \\ 0 & t-1 & -1 \\ 0 & 0 & t-1 \end{pmatrix}$, so $\det(tI - A) = (t-1)^3$ since $tI - A$ is upper-triangular. The eigenvalues are therefore $\lambda = 1, 1, 1$.

Step 2: For $\lambda = 1$ we need to solve $\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $b + c = 0$ and $c = 0$, and thus $b = c = 0$. The eigenvectors are of the form $\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}$, so a basis for the $\lambda = 1$ eigenspace is $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$.

Step 3: Since the eigenspace for $\lambda = 1$ is 1-dimensional, but the eigenvalue appears 3 times as a root of the characteristic polynomial, the matrix is not diagonalizable.

Well, you're at the end of my handout. Hope it was helpful.

Copyright notice: This material is copyright Evan Dummit, 2012. You may not reproduce or distribute this material without my express permission.