Hello, I'm Sheldon Axler, the author of Linear Algebra Done Right. This video discusses part one of the section of the book titled Eigenvectors and Upper-Triangular Matrices.
This video focuses on the existence of eigenvalues. Let's quickly review our notation and terminology. F denotes either the scalar field R of real numbers or the scalar field C of complex numbers. We let V denote a vector space over F. The word operator means a linear map from a vector space to itself. L(V) means L(V, V); in other words, L(V) is the set of operators from V to V. We now need to define powers of an operator T. If m is a positive integer, then T^m is defined to be T multiplied by itself m times.
Recall that multiplication of operators is really just composition of linear maps. Thus T squared, which equals T times T, is equal to T composed with T. The zeroth power T^0 is defined to be the identity operator I on V. And if T is invertible with inverse T^(-1), then T^(-m) is defined to be the inverse of T^m, as you see here. As an example of our definition, T cubed means T times T times T. Suppose T is a linear operator on V. Then T^m times T^n equals T^(m+n), and (T^m)^n equals T^(mn), where m and n are arbitrary integers if T is invertible, and arbitrary nonnegative integers if T is not invertible.
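As a quick aside (not in the book), these power identities are easy to check numerically if we represent an operator by a matrix. Here is a small sketch using NumPy, with an arbitrary invertible 2-by-2 matrix standing in for T:

```python
import numpy as np

# A hypothetical invertible operator on R^2, represented as a matrix.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def power(T, m):
    """T^m for an integer m; T^0 is the identity, negative m uses the inverse."""
    return np.linalg.matrix_power(T, m)

# T^m T^n = T^(m+n)
assert np.allclose(power(T, 2) @ power(T, 3), power(T, 5))
# (T^m)^n = T^(mn)
assert np.allclose(np.linalg.matrix_power(power(T, 2), 3), power(T, 6))
# For invertible T, T^(-m) is the inverse of T^m.
assert np.allclose(power(T, -2), np.linalg.inv(power(T, 2)))
```

The particular matrix is made up purely for illustration; any invertible matrix would do.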
You should be sure to verify these easy equalities yourself; they follow immediately from the definitions. Now we need to define what it means to apply a polynomial to an operator.
So suppose T is an operator on V and p is a polynomial with coefficients in our scalar field F. If p(z) is given by the formula shown here for each number z, then p(T) is defined to be the operator we obtain by simply replacing z with T. We also replace the constant term a_0 in the polynomial with a_0 times I, getting the equation shown here. For example, if p(z) equals z cubed, then p(T) is the operator T cubed. Let's look at another example. For this example, our vector space will be the vector space of polynomials with real coefficients. Let D be the differentiation operator on that vector space, meaning D of a polynomial equals the derivative of that polynomial. Finally, let's take the polynomial p defined by p(x) = 7 - 3x + 5x^2. According to our definition, p(D) is then 7 times the identity operator, minus 3D, plus 5D^2. Now, p(D) is supposed to be an operator on P(R). That means if we apply p(D) to a polynomial q with real coefficients, we should get another polynomial with real coefficients; that's shown by the last equation here. The polynomial we get when we apply p(D) to q is 7q, minus 3 times the derivative of q, plus 5 times the second derivative of q. That second-derivative term, of course, comes from applying D^2 to q.
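As a side note, this p(D) example can be sketched in code by representing a polynomial as its coefficient array; the helper below is just an illustration built on NumPy's polynomial utilities, not anything from the book:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def p_of_D(q):
    """Apply p(D) = 7I - 3D + 5D^2 to a polynomial q given by its
    coefficient array [a0, a1, a2, ...] for a0 + a1 x + a2 x^2 + ..."""
    out = 7 * np.asarray(q, dtype=float)          # 7 q
    out = P.polyadd(out, -3 * P.polyder(q))       # - 3 q'
    out = P.polyadd(out, 5 * P.polyder(q, 2))     # + 5 q''
    return out

# Example: q(x) = x^2, so q' = 2x, q'' = 2, and p(D)q = 10 - 6x + 7x^2.
result = p_of_D([0, 0, 1])
assert np.allclose(result, [10, -6, 7])
```

Applying p(D) to a polynomial with real coefficients indeed returns another such polynomial, as the definition requires.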
Since D is the differentiation operator, D composed with D is the operator of taking the second derivative. Now I would like to discuss some of the algebraic properties of the map that takes a polynomial p to p(T). Here is the first of those properties. Fix a linear operator T on V. Then the function from the vector space of polynomials with coefficients in F to L(V), given by p goes to p(T), is a linear map from P(F) into L(V). This result follows easily from the definitions, but make sure you take a minute to verify it yourself.
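If you'd like a numerical sanity check of this linearity, here is a sketch with an arbitrary matrix standing in for an operator; the matrix and polynomials are made up for illustration:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [0.0, -1.0]])

def apply_poly(coeffs, T):
    """p(T) for p given by coefficients [a0, a1, ..., am]."""
    result = np.zeros_like(T)
    for k, a in enumerate(coeffs):
        result += a * np.linalg.matrix_power(T, k)
    return result

p = [1.0, 2.0]        # p(z) = 1 + 2z
q = [0.0, 0.0, 3.0]   # q(z) = 3z^2
# Additivity: (p + q)(T) = p(T) + q(T)
p_plus_q = [1.0, 2.0, 3.0]
assert np.allclose(apply_poly(p_plus_q, T), apply_poly(p, T) + apply_poly(q, T))
# Homogeneity: (4p)(T) = 4 p(T)
assert np.allclose(apply_poly([4 * a for a in p], T), 4 * apply_poly(p, T))
```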
Our next result is similar, but it focuses on multiplicative properties. This result says that if T is a linear operator on V, and p and q are polynomials with coefficients in the scalar field F, then pq applied to T is the same as p(T) times q(T). Let's work through an example to help understand why this important result is true. Suppose p(z) = z + 2 and q(z) = z + 3. Then, just by using the ordinary multiplication of polynomials, we see that p times q evaluated at z is z^2 + 5z + 6. We have that p(T) = T + 2I,
and q(T) = T + 3I. Using the formula above for the polynomial pq, we see that (pq)(T) is equal to T^2 + 5T + 6 times the identity operator I. Now let's look at p(T) times q(T), which is (T + 2I)(T + 3I). Multiplying that out, we get T^2 + 5T + 6I. And then, looking at the equation in the left column,
the last equation, we see that this is equal to (pq)(T). Thus we have verified that p(T) times q(T) equals (pq)(T) in this particular case. But this case lets you see why it's true in general: when we multiply the polynomial p times the polynomial q, we're just using the distributive property, and we do the same thing when we multiply p(T) times q(T). In this case, (T + 2I)(T + 3I), the procedure for finding that product is just the distributive property, the same as with the polynomials. That's the reason (pq)(T) is equal to p(T) times q(T). We now have this corollary to the previous result. This corollary states that any two polynomials of T commute with each other. In general, multiplication on L(V) is not commutative; thus it is often useful to know that in this particular case we do have commutativity.
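The worked example above can be checked numerically with a matrix in place of T; the particular matrix below is arbitrary:

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
I = np.eye(2)

# p(T) = T + 2I and q(T) = T + 3I, as in the example above.
pT = T + 2 * I
qT = T + 3 * I
# (pq)(z) = z^2 + 5z + 6, so (pq)(T) = T^2 + 5T + 6I.
pqT = T @ T + 5 * T + 6 * I
assert np.allclose(pT @ qT, pqT)
```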
Let's look at the easy proof of this result. We have, from the previous result, that p(T) times q(T) is equal to the polynomial pq applied to T. However, the usual multiplication of polynomials is commutative,
so this is equal to qp applied to T. Now apply the previous result once again, this time to q times p, concluding that (qp)(T) is equal to q(T) times p(T). This completes the proof of the corollary.
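Here is a quick numerical illustration of the contrast: two arbitrary operators need not commute, but two polynomials of the same operator always do. The random matrices are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Operator multiplication is generally not commutative:
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert not np.allclose(A @ B, B @ A)

# But two polynomials of the same operator commute.
T = rng.standard_normal((3, 3))
I = np.eye(3)
pT = T @ T - 4 * T + I          # p(z) = z^2 - 4z + 1
qT = 2 * T @ T @ T + 3 * I      # q(z) = 2z^3 + 3
assert np.allclose(pT @ qT, qT @ pT)
```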
Now we come to one of the truly crucial results in linear algebra. This result states that every operator on a finite-dimensional nonzero complex vector space has an eigenvalue. Before we get to the proof, let's note that this result is false on real vector spaces; we've seen an example previously. Specifically, if T is the linear operator on R^2 defined by T(x, y) = (-y, x), then T has no eigenvalues, because this T operates on a real vector space and its eigenvalues by definition must be real.
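We can see this example numerically: the matrix of T with respect to the standard basis has only non-real eigenvalues, so as an operator on the real vector space R^2 it has none at all:

```python
import numpy as np

# Matrix of T(x, y) = (-y, x) with respect to the standard basis of R^2.
T = np.array([[0.0, -1.0],
              [1.0, 0.0]])

eigenvalues = np.linalg.eigvals(T)
# The eigenvalues over C are +i and -i; none are real, so T has no
# eigenvalues when viewed as an operator on the real vector space R^2.
assert np.all(np.abs(eigenvalues.imag) > 0.5)
```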
This result is also false on infinite-dimensional complex vector spaces. For example, define T to be the linear operator on the vector space of polynomials with complex coefficients given by (Tp)(z) = z times p(z). In other words, T is the operator of multiplication by z. For example, if p is the polynomial z^2,
then Tp is the polynomial z^3. Because Tp has degree one larger than the degree of p, it's clear that Tp cannot be a scalar multiple of p. Thus T has no eigenvalues. Let's get rid of these examples and move to the proof of this theorem. We want to prove that every operator on a finite-dimensional nonzero complex vector space has an eigenvalue. Thus, let V be a complex vector space with positive dimension n, and let T be an operator on this vector space. Choose a vector v in our vector space V with v not equal to zero. Thus we have just used our hypothesis that V is not the zero vector space. Now look at the list v, Tv, T^2 v, up to T^n v. This list has length n + 1, and we are in a vector space of dimension n. Thus this list cannot be linearly independent: some nonzero linear combination of the vectors in this list equals zero.
In other words, there exist complex numbers a_0 up to a_n, not all zero, such that we have the top equation here in the right-hand column. Note that a_1 up to a_n cannot all be 0, because otherwise we would be left just with the equation 0 = a_0 v; but v is not 0 by choice, so that would force a_0 to be 0, and then if a_1 up to a_n were also 0, all the a's would be 0, which is not the case here. Now make the a's the coefficients of a polynomial.
In other words, consider the polynomial a_0 + a_1 z + ... + a_n z^n. By the fundamental theorem of algebra, we can factor this polynomial as some constant c times (z - lambda_1) up to (z - lambda_m), where c is a nonzero complex number and each of the lambda_j's is in C. Let's look at this carefully. c is a nonzero number, because if c were 0, that would imply that all the a's are 0, which we know is not the case. Note, however, that it is possible that a_n, the coefficient of z^n, is 0.
Thus the polynomial on the left-hand side might have degree less than n; in other words, we do not necessarily have that m is equal to n here. Notice how we are using the hypothesis that we're working with complex numbers, because polynomials with real coefficients cannot necessarily be factored in this form using real numbers. Now we have the equation shown here in red, which is the same as the equation at the top of this column.
We can rewrite that equation as shown in the second line here. Finally, using the factorization above, this is equal to what is now shown in red. Look at this last equation carefully: we're applying some operator to the nonzero vector v, and we're ending up with zero on the left-hand side. This means that at some point in
evaluating the right-hand side, we're applying T minus lambda_j times the identity operator to a nonzero vector and getting 0. In other words, T - lambda_j I is not injective for at least one of the j's. Thus T has an eigenvalue, completing the proof. Before concluding this video, I would like to compare the proof we have just seen to the proof of the same theorem that is found in most linear algebra books. Most linear algebra books prove this theorem by looking at the polynomial det(lambda I - T). This polynomial is called the characteristic polynomial of T. One can prove that det(lambda I - T) equals 0 if and only if lambda is an eigenvalue of T. Then, using the fundamental theorem of algebra, every nonconstant polynomial has a root; thus the characteristic polynomial has a root, and so T has an eigenvalue. That proof is mathematically correct,
but it does have some problems. One needs to define the determinant first, which is a complicated object. We will eventually define the determinant in these videos, but we have no need to do so yet. Then one needs to prove that the determinant being 0 is equivalent to lambda being an eigenvalue. I think by then most of the intuition about why the result is true is gone, especially because the definition of the determinant is somewhat complicated. In contrast, the proof shown here uses basic notions of linear algebra that are crucial to understanding linear algebra, mainly linear independence.
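The linear-independence proof above can even be carried out numerically for a concrete matrix. The following sketch, with an arbitrary random complex T and v chosen purely for illustration, mirrors each step: form the dependent list, extract coefficients of a vanishing linear combination, factor the polynomial, and find a root lambda_j that makes T - lambda_j I non-injective:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The n+1 vectors v, Tv, ..., T^n v cannot be linearly independent in C^n.
cols = np.column_stack([np.linalg.matrix_power(T, k) @ v for k in range(n + 1)])

# A nonzero a = (a_0, ..., a_n) with a_0 v + a_1 Tv + ... + a_n T^n v = 0
# spans the null space of this n x (n+1) matrix (last right singular vector).
a = np.linalg.svd(cols)[2][-1].conj()
assert np.allclose(cols @ a, 0)

# Factor a_0 + a_1 z + ... + a_n z^n; np.roots expects the highest-degree
# coefficient first, so reverse the array.
roots = np.roots(a[::-1])

# At least one root lambda_j must make T - lambda_j I non-injective, i.e.
# singular: its smallest singular value is numerically zero.
smallest = [np.linalg.svd(T - lam * np.eye(n), compute_uv=False)[-1]
            for lam in roots]
assert min(smallest) < 1e-6
```

This is only a numerical illustration of the argument, of course; the proof itself needs no computation.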
This proof is perhaps the main justification for the audacious title of the book. This concludes part 1 of the video on eigenvectors and upper-triangular matrices.