For the 3x3 matrix A = [(3, 2, 4), (2, 0, 2), (4, 2, 3)], write A = Q Λ Q^T, where the columns of Q are orthonormal eigenvectors of A.
Since A is symmetric, you can show that it has three independent eigenvectors. Can you do that?
The eigenvalues are 8, -1, -1, and the corresponding eigenvectors are {{2, 1, 2}, {-1, 0, 1}, {-1, 2, 0}}. Notice that these vectors are perpendicular.
@benfraser1012
You there?
I am really at my limits here, but I can point out a couple of things. One is that for a symmetric matrix (according to my copy of Strang), the factorization into S Lambda S^-1 generates orthogonal eigenvectors. These are usually scaled to unit length, and S is renamed Q. Furthermore, Q^-1 = Q^T. Glancing through the answer given above, the eigenvalues look right, but there is a problem with the eigenvectors: the second and third ones have a non-zero dot product.
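To make those dot products concrete, here is a quick check (a sketch in Python with numpy, as an alternative to the R session in the next post):

import numpy as np

# The eigenvectors as posted above
v1 = np.array([2, 1, 2])
v2 = np.array([-1, 0, 1])
v3 = np.array([-1, 2, 0])

print(np.dot(v1, v2))  # 0: perpendicular
print(np.dot(v1, v3))  # 0: perpendicular
print(np.dot(v2, v3))  # 1: not perpendicular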
The other thing I did is to run your matrix through R. I checked the results by computing S Lambda S^-1, and I also checked for orthogonality. I am not so good with arithmetic, so I didn't calculate the eigenvectors by hand. I find it useful to check calculations with R (or Python). HTH.

> A = c(3,2,4,2,0,2,4,2,3)
> dim(A) = c(3,3)
> A
     [,1] [,2] [,3]
[1,]    3    2    4
[2,]    2    0    2
[3,]    4    2    3
> eigen(A)
$values
[1]  8 -1 -1

$vectors
          [,1]       [,2]       [,3]
[1,] 0.6666667  0.7453560  0.0000000
[2,] 0.3333333 -0.2981424 -0.8944272
[3,] 0.6666667 -0.5962848  0.4472136

> result = eigen(A)
> S = result$vectors
> Si = solve(S)
> Si
          [,1]       [,2]       [,3]
[1,] 0.6666667  0.3333333  0.6666667
[2,] 0.7453560 -0.2981424 -0.5962848
[3,] 0.0000000 -0.8944272  0.4472136
> L = diag(result$values)
> L
     [,1] [,2] [,3]
[1,]    8    0    0
[2,]    0   -1    0
[3,]    0    0   -1
> S %*% L %*% Si
     [,1]         [,2] [,3]
[1,]    3 2.000000e+00    4
[2,]    2 1.776357e-15    2
[3,]    4 2.000000e+00    3
> u = S[,1]
> v = S[,2]
> w = S[,3]
> u %*% v
             [,1]
[1,] 2.775558e-16
> v %*% w
              [,1]
[1,] -1.110223e-16
> u %*% w
             [,1]
[1,] 5.551115e-17
These vectors are scaled to unit length, so I should've called the matrices Q and Q^-1 rather than S and Si.
I am a bit mystified comparing my R results with @eliassaab's. On the one hand, I verify that \[A \mathbf{v} = \lambda \mathbf{v}\] for his eigenvectors and eigenvalues. On the other hand, these are supposed to be perpendicular, since A is symmetric, but <-1, 0, 1> • <-1, 2, 0> is non-zero. The vector that R gave that is different is (when flipped to match his orientation) <-4,-2,5>. This does not satisfy the equation. Nevertheless, when I multiply \[Q \Lambda Q^{-1}\] in R, I get back A. Hmm? Any thoughts on what I've done wrong, Dr. Saab?
I was a bit sloppy; one has to take the eigenvectors and construct an orthogonal system out of them. Doing that, we obtain the following orthonormal eigenvectors corresponding to the eigenvalues 8, -1, -1: \[ aa=\left\{\frac{2}{3},\frac{1}{3},\frac{2}{3}\right\}\\ \text{bb}=\left\{-\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}} \right\}\\\text{cc}=\left\{-\frac{1}{3 \sqrt{2}},\frac{2 \sqrt{2}}{3},-\frac{1}{3 \sqrt{2}}\right\} \] If we take \[ M=\left( \begin{array}{ccc} \frac{2}{3} & -\frac{1}{\sqrt{2}} & -\frac{1}{3 \sqrt{2}} \\ \frac{1}{3} & 0 & \frac{2 \sqrt{2}}{3} \\ \frac{2}{3} & \frac{1}{\sqrt{2}} & -\frac{1}{3 \sqrt{2}} \\ \end{array} \right) \] then \[ M^T A M = \left( \begin{array}{ccc} 8 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \\ \end{array} \right)=D\\ A= M D M^T \]
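For anyone who wants to check this numerically, here is a minimal sketch in Python (assuming numpy is available; this is just an independent check, not part of the original calculation):

import numpy as np

r2 = np.sqrt(2)
A = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])
# Columns are aa, bb, cc from the post above
M = np.array([[2/3, -1/r2, -1/(3*r2)],
              [1/3,     0,  2*r2/3],
              [2/3,  1/r2, -1/(3*r2)]])

print(np.round(M.T @ M, 12))      # identity matrix: the columns are orthonormal
print(np.round(M.T @ A @ M, 12))  # diag(8, -1, -1), i.e. D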
So orthogonal eigenvectors don't just come from the symmetric matrix. You have to do Gram-Schmidt or whatever it's called? I will study and try to learn this. Thank you.
You have to do Gram-Schmidt within each eigenspace. If the matrix has distinct eigenvalues, the eigenvectors will be perpendicular to each other, and all you have to do is normalize them.
Thanks to @eliassaab I was finally able to solve this problem, which was posted by @benfraser1012. The given matrix is \[A = \begin{bmatrix} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{bmatrix}\] To find the characteristic equation, we compute the determinant of \(A - \lambda I\), set it equal to zero \[\det\ (A - \lambda I) = 0\] and then solve. That is, we want the determinant of \[\begin{bmatrix} 3-\lambda & 2 & 4 \\ 2 & -\lambda & 2 \\ 4 & 2 & 3-\lambda \end{bmatrix}\] This is a bit of a mess, but eventually I got \[-\lambda^3 + 6\lambda^2 + 15 \lambda + 8 = 0\] which I was able to factor into \[-(\lambda + 1)(\lambda + 1)(\lambda - 8) = 0\] giving eigenvalues 8, -1, -1. Taking -1 as the first eigenvalue, to find the corresponding eigenvector I set up this equation: \[\begin{bmatrix} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = - \begin{bmatrix} x \\ y \\ z \end{bmatrix}\] This is easily solved to give u = <0,-2,1>. Similarly, we find v = <1,-2,0> and w = <2,1,2> for the other eigenvectors. Note that although u • w = v • w = 0, u • v is not zero. If I am understanding correctly, for a symmetric matrix (like A), if the three eigenvalues were all different, then the eigenvectors would all be orthogonal. Since they aren't all different here, in order to produce orthogonal vectors we need to do the Gram-Schmidt procedure. I'll go through that in another box.
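As a cross-check on the algebra, here is a sketch using Python's sympy for exact arithmetic (not part of the original hand calculation):

from sympy import Matrix, symbols, factor, eye

lam = symbols('lambda')
A = Matrix([[3, 2, 4], [2, 0, 2], [4, 2, 3]])

# Characteristic polynomial det(A - lambda*I), then factor it
p = (A - lam * eye(3)).det()
print(factor(p))  # -(lambda - 8)*(lambda + 1)**2, matching the hand calculation

# eigenvects() returns (eigenvalue, multiplicity, basis vectors) tuples;
# the lambda = -1 eigenspace is two-dimensional
for val, mult, vecs in A.eigenvects():
    print(val, mult, [list(v) for v in vecs])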
Now, we have u = <0,-2,1>, v = <1,-2,0> and w = <2,1,2>. We take w and v as already orthogonal, and we need to derive an orthogonal vector from u. The formula I got from Strang is \[C = c - \frac {A^T c}{A^T A} A - \frac {B^T c}{B^T B} B\] where A^T A is just A • A, and so on. He used A, B, C for the orthogonal vectors and a, b, c for the starting vectors. If we set this up with u for c, w for A and v for B, then we have: \[U = u - \frac {w^T u}{w^T w} w - \frac {v^T u}{v^T v} v\] with

w • w = 9
w • u = 0
v • v = 5
v • u = 4

So the first term to subtract from u works out to be 0, and the second one is (4/5) v, giving the new u = <-4/5, -2/5, 1>. I check that A multiplied by this new u gives -u. So the part of u that is orthogonal to both v and w is also an eigenvector of A for eigenvalue -1. This was a surprise to me. The last step is to scale these to be unit vectors. For the unit vectors I get:

u = <4/(3 sqrt(5)), 2/(3 sqrt(5)), -sqrt(5)/3>
v = <1/sqrt(5), -2/sqrt(5), 0>
w = <2/3, 1/3, 2/3>

One can then show that if Q is composed of these eigenvectors, and D is the diagonal matrix with the eigenvalues, then Q D Q^-1 works as it should. In fact, if you go back to the R code and output that I posted above, we have already done this. Using Python as our calculator:

>>> import math
>>> f = math.sqrt(5)
>>> 1.0/f
0.4472135954999579
>>> 2.0/f
0.8944271909999159
>>> 4.0/(3*f)
0.5962847939999439
>>> 2.0/(3*f)
0.29814239699997197
>>> f/3.0
0.7453559924999299

Compare with the R output:

$vectors
          [,1]       [,2]       [,3]
[1,] 0.6666667  0.7453560  0.0000000
[2,] 0.3333333 -0.2981424 -0.8944272
[3,] 0.6666667 -0.5962848  0.4472136

To summarize: because we had duplicate eigenvalues, the eigenvectors for eigenvalue -1 were not automatically orthogonal. However, Gram-Schmidt can produce orthogonal vectors, and after normalization, these can be placed in a matrix Q. I have to work on visualizing why subtracting the projection of u on v should give us another perfectly good eigenvector of A. I had the idea that there would only be the three original eigenvectors. Clearly something more to learn here. Thanks, Professor Saab!
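To tie the whole calculation together, here is a sketch in Python with numpy that performs the Gram-Schmidt step and verifies the factorization (the column order w, v, u in Q is a choice made for this sketch; any order works as long as D is ordered to match):

import numpy as np

A = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])

# Raw eigenvectors found by hand: w for eigenvalue 8; u, v for eigenvalue -1
w = np.array([2.0, 1.0, 2.0])
u = np.array([0.0, -2.0, 1.0])
v = np.array([1.0, -2.0, 0.0])

# Gram-Schmidt within the lambda = -1 eigenspace:
# subtract from u its projection onto v (the w term is zero since w . u = 0)
u = u - (v @ u) / (v @ v) * v   # gives <-4/5, -2/5, 1>

# Normalize all three and assemble Q with columns w, v, u
Q = np.column_stack([w / np.linalg.norm(w),
                     v / np.linalg.norm(v),
                     u / np.linalg.norm(u)])
D = np.diag([8.0, -1.0, -1.0])

print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q is orthogonal, Q^-1 = Q^T
print(np.allclose(Q @ D @ Q.T, A))      # True: A = Q D Q^T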
Thank you @telliott99
If u and v are two eigenvectors for an eigenvalue \(\lambda\), then \( w = a u + b v \) (when non-zero) is also an eigenvector for \(\lambda\): \[ A w = a Au + b A v = a \lambda u + b \lambda v = \lambda(a u + b v) = \lambda w \]
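A quick numerical illustration of this fact, using the two λ = -1 eigenvectors from earlier in the thread (a Python sketch; the choice a = 2, b = 3 is arbitrary):

import numpy as np

A = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])
u = np.array([0.0, -2.0, 1.0])   # eigenvector for lambda = -1
v = np.array([1.0, -2.0, 0.0])   # eigenvector for lambda = -1

w = 2.0 * u + 3.0 * v            # a*u + b*v with a = 2, b = 3
print(np.allclose(A @ w, -w))    # True: w is again an eigenvector for -1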
Wow. Really? That just blows me away. Beautifully simple proof. I have to work on my geometric intuition about eigenvectors. I think it may be because I don't have a fundamental understanding of linear transformations.