Monday, July 12, 2010

Determining the Eigenvalues of a Matrix




Since every linear operator on a finite-dimensional vector space is given, once a basis is chosen, by left multiplication by some square matrix, finding the eigenvalues and eigenvectors of a linear operator is equivalent to finding the eigenvalues and eigenvectors of its associated square matrix; this is the terminology that will be followed here. Furthermore, since eigenvalues and eigenvectors make sense only for square matrices, throughout this section all matrices are assumed to be square.


Given a square matrix A, the condition that characterizes an eigenvalue, λ, is the existence of a nonzero vector x such that A x = λ x; this equation can be rewritten as follows:

A x = λ x
A x − λ x = 0
A x − λ I x = 0
(A − λ I) x = 0
This final form of the equation makes it clear that x is the solution of a square, homogeneous system. If nonzero solutions are desired, then the determinant of the coefficient matrix, which in this case is A − λ I, must be zero; if not, then the system possesses only the trivial solution x = 0. Since eigenvectors are, by definition, nonzero, in order for x to be an eigenvector of a matrix A, λ must be chosen so that

det(A − λ I) = 0
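As a quick illustration of this procedure (a sketch using NumPy; the matrix and all names below are chosen here for the example and are not from the post), the sketch below solves the characteristic equation det(A − λ I) = 0 for a 2 × 2 matrix and then checks that each root really does make A − λ I singular:

```python
import numpy as np

# A sample 2x2 matrix, chosen for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# For a 2x2 matrix [[a, b], [c, d]], det(A - lambda*I) = 0 expands to
# the quadratic  lambda^2 - (a + d)*lambda + (a*d - b*c) = 0.
trace = A[0, 0] + A[1, 1]                     # a + d
det_A = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0] # a*d - b*c
eigenvalues = np.roots([1.0, -trace, det_A])  # roots of the quadratic

# Each root lambda makes A - lambda*I singular, so its determinant
# should be zero up to floating-point error.
for lam in eigenvalues:
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9

print(np.sort(eigenvalues))
```

For larger matrices one would not expand the determinant by hand; a library routine such as numpy.linalg.eig computes the eigenvalues (and eigenvectors) directly.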
I hope the above explanation was useful.
