OpenStudy (loser66):

On the attachment. Please, help

OpenStudy (loser66):

OpenStudy (loser66):

I would like to know where we get "Hence \(x(t) = e^{tA}x_0\)" at 1.4, 1.5.

OpenStudy (freckles):

what are you asking?

OpenStudy (freckles):

like how did they get that equation?

OpenStudy (loser66):

It didn't mention \(x(t)\) before. It worked on \(e^{tA}\) and its derivative. Suddenly, it says "Hence \(x(t) = \dots\)". Why?

ganeshie8 (ganeshie8):

plug x(t) into the IVP \[\dfrac{dx}{dt} = Ax~;~x(0)=x_0\] and see if it is really a solution

OpenStudy (loser66):

If "plug x(t) into .." \(\dfrac{d}{dt}e^{tA}= Ae^{tA}\) then \(\dfrac{d}{dt}x(t)= Ax(t)\), right?

OpenStudy (anonymous):

since we have that \(\dfrac{d}{dt}e^{At}=Ae^{At}\), it follows that \(x=e^{At}\) is clearly a solution to \(\dfrac{dx}{dt}=Ax\) (this is just a restatement of the previous fact)

ganeshie8 (ganeshie8):

\(x(t) = e^{tA}x_0\) right ?

OpenStudy (loser66):

@ganeshie8 That is what I asked!! How and why do we have that equation?

OpenStudy (loser66):

and @oldrin.bataku gave the answer, but I didn't get how "it follows that \(x = e^{At}\)" :(

ganeshie8 (ganeshie8):

they are saying that function is a solution to the given IVP. i was asking you to check whether that is really the case by plugging x(t) into the differential eqn

ganeshie8 (ganeshie8):

btw, \(x_0\) is just a constant. recall that if \(f(x)\) is a solution, then a constant multiple of it is also a solution

OpenStudy (anonymous):

in fact, since the derivative is linear, we actually have \(\dfrac{d}{dt}\left(ke^{At}\right)=kAe^{At}\) so the general solution is actually a whole parameterized family of solutions \(x(t)=kAe^{At}\) for an \(n\)-by-\(1\) constant matrix \(k\)

OpenStudy (anonymous):

oops, i meant \(x(t)=ke^{At}\)

OpenStudy (anonymous):

so if \(x(0)=x_0\) then \(e^{A\cdot0}k=x_0\implies Ik=x_0\implies k=x_0\), so \(x(t)=e^{At}x_0\) is the unique solution to \(\dot x=Ax\) subject to \(x(0)=x_0\)
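As a quick numerical sanity check, the claim that \(x(t)=e^{tA}x_0\) solves the IVP can be verified with SciPy's matrix exponential; this sketch uses an arbitrary example matrix and initial condition (not ones from the thread):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example matrix and initial condition, chosen just for illustration
A = np.array([[0.0, 1.0], [1.0, 0.0]])
x0 = np.array([2.0, -1.0])

def x(t):
    """Candidate solution x(t) = e^{tA} x0."""
    return expm(t * A) @ x0

# Initial condition: e^{0*A} = I, so x(0) = x0
assert np.allclose(x(0.0), x0)

# Differential equation: approximate dx/dt by a central finite difference
t, h = 0.7, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dxdt, A @ x(t), atol=1e-6)
print("x(t) = e^{tA} x0 satisfies the IVP (numerically)")
```

The finite-difference check is only approximate, but its \(O(h^2)\) error is far below the tolerance used here.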

OpenStudy (loser66):

I got it. Thank you very much. I was confused by \(x_0\), but now I am OK with it.

ganeshie8 (ganeshie8):

In the textbook, \(x_0\) is a specific constant for the IVP. Maybe think of it like the usual \(c_0\), if past experience with simple ODEs helps..

OpenStudy (empty):

I think it's cute how you can kinda pretend these aren't vectors and matrices and just kinda like \[\frac{dx}{dt} = Ax \] "separate" \[\frac{dx}{x} = Adt\] \[ \ln x = At +c\] \[x=ke^{At}\]

OpenStudy (anonymous):

note the idea here is actually very general and underlies what we call flows of vector fields -- \(d/dt\) is an infinitesimal generator of the evolution of \(x\) through time, i.e. it gives rise to flows

OpenStudy (loser66):

I have another question, please explain Ex 1 to me

OpenStudy (anonymous):

which part is giving you trouble?

OpenStudy (loser66):

1.31

OpenStudy (anonymous):

recall that for a matrix \(A\) with columns \(A_1,A_2,A_3,\dots,A_n\) the image of a column vector \(u=(u_1,\dots,u_n)^T\) is \(Au=u_1A_1+\dots+u_nA_n\)

OpenStudy (anonymous):

i.e. the \(n\)-th column of the matrix is the action of the matrix on the \(n\)-th basis vector \(e_n\), represented by a column vector of zeros with a \(1\) in row \(n\): $$e_1=\begin{bmatrix}1\\0\end{bmatrix},e_2=\begin{bmatrix}0\\1\end{bmatrix}$$so the matrix $$\begin{bmatrix}a&b\\c&d\end{bmatrix}$$represents the transformation that takes \(e_1\to \begin{bmatrix}a\\c\end{bmatrix}\) and \(e_2\to \begin{bmatrix}b\\d\end{bmatrix}\)
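The columns-as-images-of-basis-vectors fact is easy to see in NumPy; a tiny illustration with arbitrary entries:

```python
import numpy as np

# Any matrix will do; entries chosen arbitrarily for the demonstration
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# M e1 is exactly the first column of M, and M e2 the second
assert np.allclose(M @ e1, M[:, 0])
assert np.allclose(M @ e2, M[:, 1])
```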

OpenStudy (anonymous):

so to figure out the first column of the matrix representing \(e^{At}\), we just need to see how it behaves on \(\begin{bmatrix}1\\0\end{bmatrix}\), and similarly for the second column

OpenStudy (loser66):

I got what you meant . How about 1.30?

OpenStudy (loser66):

1.33 is for \(e^{tA}(1,0)^T\), that is, the first column of \(e^{tA}\). What is 1.34? Why not \(e^{tA}(0,1)^T\), which is the second column of \(e^{tA}\)? Why does it calculate \(e^{tA}(1,1)^T\)??

OpenStudy (anonymous):

consider that if \(v\ne 0\) is an eigenvector of \(A\), with \(Av=\lambda v\), it follows that \(A^2v=\lambda^2 v\) and more generally \(A^nv=\lambda^n v\). so we have: $$e^{A}=\sum_{n=0}^\infty\frac1{n!} A^n\\e^Av=\left(\sum_{n=0}^\infty\frac1{n!} A^n\right)v=\sum_{n=0}^\infty\frac1{n!}\left(A^nv\right)=\sum_{n=0}^\infty\frac{\lambda^n}{n!}v=\left(\sum_{n=0}^\infty\frac{\lambda^n}{n!}\right)v=e^{\lambda}v$$
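The fact \(e^Av=e^\lambda v\) can be checked numerically. This sketch assumes \(A=\begin{bmatrix}0&1\\1&0\end{bmatrix}\), a matrix consistent with the eigendata (\(\lambda=\pm1\), \(v=(1,\pm1)^T\)) used later in the worked example; the thread itself does not reproduce \(A\):

```python
import numpy as np
from scipy.linalg import expm

# Assumed matrix with eigenpair lambda = 1, v = (1, 1)^T
A = np.array([[0.0, 1.0], [1.0, 0.0]])
v = np.array([1.0, 1.0])
lam = 1.0

# v is an eigenvector of A with eigenvalue lam ...
assert np.allclose(A @ v, lam * v)
# ... and e^A v = e^lam v, matching the series argument above
assert np.allclose(expm(A) @ v, np.exp(lam) * v)
```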

OpenStudy (anonymous):

if \(v\) is an eigenvector of \(A\), it's also an eigenvector of \(tA\), as \((tA)v=tAv=t\lambda v=(t\lambda )v\), so:$$e^{tA}v=e^{t\lambda}v$$

OpenStudy (anonymous):

so now we know the eigenvectors of \(e^{tA}\) are those of \(A\) and the eigenvalues \(\lambda\) become \(e^{t\lambda }\), which is what happened there

OpenStudy (anonymous):

now, since we know the eigenvectors \(v_1,\dots,v_n\) of \(e^{tA}\), given a vector \(v\in\operatorname{span}\{v_1,\dots,v_n\}\) we can decompose \(v=c_1v_1+\dots+c_nv_n\), and it follows $$e^{tA}v=c_1 e^{tA}v_1+\dots+c_n e^{tA}v_n=c_1 e^{t\lambda_1}v_1+\dots+c_n e^{t\lambda_n} v_n$$ this gives us a way to determine how \(e^{tA}\) behaves on linear combinations of the eigenvectors

OpenStudy (anonymous):

so to figure out how \(e^{tA}\) acts on \(e_1=\begin{bmatrix}1\\0\end{bmatrix}\) to figure out its first column, we can decompose \(e_1=\dfrac12v_1+\dfrac12v_2\) and \(e^{tA}e_1=\dfrac12 e^{tA}v_1+\dfrac12 e^{tA}v_2=\dfrac12e^{t\lambda_1}v_1+\dfrac12e^{t\lambda_2}v_2\)

OpenStudy (loser66):

I follow it.

OpenStudy (anonymous):

since we have \(\lambda_1=1,\lambda_2=-1\) and \(v_1=\begin{bmatrix}1\\1\end{bmatrix},v_2=\begin{bmatrix}1\\-1\end{bmatrix}\) this says: $$e^{tA}\begin{bmatrix}1\\0\end{bmatrix}=\frac12e^t\begin{bmatrix}1\\1\end{bmatrix}+\frac12 e^{-t}\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}\frac12 e^t+\frac12 e^{-t}\\\frac12 e^t-\frac12 e^{-t}\end{bmatrix}$$ is our first column of \(e^{tA}\)

OpenStudy (loser66):

Yes, that is 1.33

OpenStudy (anonymous):

in 1.34 they made a typo, they actually decomposed \(e_2=\frac12 v_1-\frac12 v_2\) and then compute similarly to the above $$e^{tA}\begin{bmatrix}0\\1\end{bmatrix}=\frac12 e^t\begin{bmatrix}1\\1\end{bmatrix}-\frac12e^{-t}\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}\frac12 e^t-\frac12e^{-t}\\\frac12e^t+\frac12e^{-t}\end{bmatrix}$$

OpenStudy (anonymous):

so we've figured out the columns of \(e^{tA}\) and can write it now as $$e^{tA}=e^{tA}\begin{bmatrix}1&0\\0&1\end{bmatrix}=\begin{bmatrix}\frac12(e^t+e^{-t})&\frac12(e^t-e^{-t})\\\frac12(e^t-e^{-t})&\frac12(e^t+e^{-t})\end{bmatrix}$$
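The assembled matrix can be checked against a library matrix exponential. The entries \(\frac12(e^t\pm e^{-t})\) are just \(\cosh t\) and \(\sinh t\); as before, \(A=\begin{bmatrix}0&1\\1&0\end{bmatrix}\) is assumed here, since it matches the eigendata in the example but isn't shown in the thread:

```python
import numpy as np
from scipy.linalg import expm

# Assumed A, consistent with eigenvalues +/-1 and eigenvectors (1,1), (1,-1)
A = np.array([[0.0, 1.0], [1.0, 0.0]])

t = 0.8
# The closed form assembled column by column above
closed_form = np.array([[np.cosh(t), np.sinh(t)],
                        [np.sinh(t), np.cosh(t)]])
assert np.allclose(expm(t * A), closed_form)
```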

OpenStudy (loser66):

Yes, I guessed it was a typo, but it tortured me a lot since I didn't understand it. Now I get it. Thank you so so so much.

OpenStudy (loser66):

Can I ask about another problem? Same topic, same kind of problem; I just want to know the trick. Please.

OpenStudy (loser66):

1.41. Is there any way to quickly figure out the \(-\dfrac{i+1}{4}\) term? And \(-\dfrac{i}{2}\)?

OpenStudy (anonymous):

the easy way is to compute the inner products, since the eigenvectors \(\{v_j\}\) are orthogonal we have \(\langle v_i,v_j\rangle=0\text{ where }i\ne j,\text{ or }|v_i|^2\text{ where }i=j\)

OpenStudy (anonymous):

so: $$\begin{bmatrix}1\\0\end{bmatrix}=c_1\begin{bmatrix}-2\\1+i\end{bmatrix}+c_2\begin{bmatrix}-2\\1-i\end{bmatrix}\\\begin{bmatrix}-2&1+i\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=c_1\begin{bmatrix}-2&1+i\end{bmatrix}\begin{bmatrix}-2\\1+i\end{bmatrix}+c_2\begin{bmatrix}-2&1+i\end{bmatrix}\begin{bmatrix}-2\\1-i\end{bmatrix}\\-2\cdot1+(1+i)\cdot0=c_1((-2)^2+(1+i)^2)+c_2((-2)^2+(1+i)(1-i))\\-2=c_1(4+1+2i-1)+c_2(4+1+1)\\-2=(4+2i)c_1+6c_2$$similarly we get$$\begin{bmatrix}-2&1-i\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=c_1\begin{bmatrix}-2&1-i\end{bmatrix}\begin{bmatrix}-2\\1+i\end{bmatrix}+c_2\begin{bmatrix}-2&1-i\end{bmatrix}\begin{bmatrix}-2\\1-i\end{bmatrix}\\-2\cdot1+(1-i)\cdot0=c_1((-2)^2+(1-i)(1+i))+c_2((-2)^2+(1-i)^2)\\-2=c_1(4+1+1)+c_2(4+1-2i-1)\\-2=6c_1+(4-2i)c_2$$now you can do substitution to solve \(c_1,c_2\)
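Rather than doing the substitution by hand, the \(2\times2\) system derived above can be handed to a linear solver; this NumPy sketch recovers the \(-\frac{1+i}{4}\) coefficient asked about:

```python
import numpy as np

# The system derived above:
#   (4+2i) c1 + 6 c2     = -2
#   6 c1     + (4-2i) c2 = -2
M = np.array([[4 + 2j, 6],
              [6, 4 - 2j]])
rhs = np.array([-2, -2])
c1, c2 = np.linalg.solve(M, rhs)

# Matches the -(1+i)/4 coefficient from 1.41; c2 is its conjugate
assert np.isclose(c1, -(1 + 1j) / 4)
assert np.isclose(c2, np.conj(c1))
```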

OpenStudy (loser66):

I got it. Again, thank you so much.

OpenStudy (anonymous):

oops, they aren't orthonormal or even orthogonal here, but they are linearly independent, so we can still solve for unique \(c_1,c_2\)

OpenStudy (loser66):

I did it as usual and got a different solution from the paper. I am checking.

OpenStudy (loser66):

Wow!! it takes a long time to work with :)
