Alright, let's see. http://prntscr.com/cz93g6
eigenvectors satisfy \(A{\rm x}_i = \lambda_i {\rm x}_i\). let the given eigenvectors be \(\rm x_1, ~~x_2\), put them as columns of a matrix, and call it \(\rm X\). \[\begin{align}AX &= A [\rm x_1~~~x_2] ~\\~\\&=~ [A{\rm x_1}~~A{\rm x_2}] \\~\\&= [\lambda_1 {\rm x_1}~~\lambda_2 {\rm x_2}] \\~\\&=[\rm x_1~~~x_2] \begin{bmatrix} \lambda_1&0\\0&\lambda_2\end{bmatrix}\\~\\& =X\Lambda\end{align}\]
\[AX = X\Lambda \\~\\\implies A = X\Lambda X^{-1}\] (assuming the eigenvectors are independent, so \(X\) is invertible) Notice that \(A^2 = A\cdot A =(X\Lambda X^{-1})(X\Lambda X^{-1}) =X\Lambda^2 X^{-1} \) since the inner \(X^{-1}X\) cancels. It's easy to see by induction that \(A^n = X\Lambda^n X^{-1}\), where \(X\) is the matrix with the eigenvectors as its columns.
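A quick numerical check of the identity above. The matrix here is a made-up example (the actual matrix from the screenshot isn't in the chat); everything else is just numpy's standard eigendecomposition.

```python
import numpy as np

# Hypothetical example matrix, just for illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of X are the eigenvectors, lam holds the eigenvalues
lam, X = np.linalg.eig(A)
Lam = np.diag(lam)

# A^5 computed directly vs. via X Lambda^5 X^{-1}
direct = np.linalg.matrix_power(A, 5)
via_decomp = X @ np.linalg.matrix_power(Lam, 5) @ np.linalg.inv(X)

assert np.allclose(direct, via_decomp)
```

The win is that \(\Lambda^n\) is just each diagonal entry raised to the \(n\)-th power, so one decomposition replaces \(n-1\) matrix multiplications.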
Yes, great. I was looking for this.
I've got a question about the logic here. Or maybe not a problem exactly; I just wanted to clarify what's happening here.
Yeah sure ask. I'll try to help as much as I can. If not @Kainui would rescue us :)
So we're taking a linear operation given by the matrix \(A\), and we're going to change our basis for this operation. The operation takes \((0, 1)\) to \((1, 0)\) and \((1, 0)\) to \((1, 1)\) (these are column vectors btw). By definition, if we apply this linear operation to an eigenvector, we get a scalar multiple of that eigenvector.
Now basically if we change our basis vectors to the two eigenvectors, then in the new coordinate system our operation would be given by the diagonal matrix \(\Lambda = \begin{bmatrix} \lambda_1&0\\0&\lambda_2\end{bmatrix}\)
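We can check this change-of-basis claim numerically. The matrix below is built from the mappings described above (its columns are the images of \((1,0)\) and \((0,1)\)); the rest is a sketch using numpy's eigendecomposition.

```python
import numpy as np

# The operation described in the chat: (1,0) -> (1,1) and (0,1) -> (1,0),
# so those images are the columns of A
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Columns of X are the eigenvectors
lam, X = np.linalg.eig(A)

# Expressing the operation in the eigenvector basis: X^{-1} A X
in_new_basis = np.linalg.inv(X) @ A @ X

# The off-diagonal entries vanish (up to floating-point error):
# the operation is diagonal in this basis
assert np.allclose(in_new_basis, np.diag(lam))
```

In other words, \(X^{-1}AX = \Lambda\) is the same statement as \(A = X\Lambda X^{-1}\), just read as "convert into eigen-coordinates, scale, convert back."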