Stochastic Processes
I hope someone can help me with this problem. Consider a Markov chain with state space {1, 2, 3} and transition matrix \[P=\left[\begin{matrix}0.4 & 0.2 & 0.4 \\ 0.6 & 0 & 0.4 \\ 0.2 & 0.5 & 0.3\end{matrix}\right]\] What is the probability in the long run that the chain is in state 1? Solve this problem two different ways: 1) by raising the matrix to a high power; and 2) by directly computing the invariant probability vector as a left eigenvector.
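A quick numerical sketch of method 1 (this assumes numpy, and uses the row-stochastic convention, i.e. each row of P sums to 1):

```python
import numpy as np

# Transition matrix from the problem (rows sum to 1).
P = np.array([[0.4, 0.2, 0.4],
              [0.6, 0.0, 0.4],
              [0.2, 0.5, 0.3]])

# Method 1: raise P to a high power. Every row of P^n converges
# to the same invariant probability vector.
Pn = np.linalg.matrix_power(P, 100)

# Long-run probability of being in state 1, regardless of start state.
long_run_state1 = Pn[0, 0]
print(Pn)
print(long_run_state1)
```

All three rows of `Pn` agree to machine precision, which is exactly why the starting state stops mattering in the long run.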
Shouldn't all the columns add up to 1?
No, each row must add up to 1...
This says the columns should add up to 1: http://en.wikipedia.org/wiki/Stochastic_matrix
Also, what's up with the state vector? Shouldn't it sum to one, since it represents a probability vector? http://en.wikipedia.org/wiki/Probability_vector Also note that I don't have experience with stochastic processes.
If you look at the example on that link, the rows sum to 1.
I think people more often use the convention that rows sum to 1, and then you post-multiply by the matrix P: xP.
For "2) by directly computing the invariant probability vector as a left eigenvector": you want the eigenvector that corresponds to \(\lambda = 1\).
So with \(\lambda = 1\) that is \[\left[\begin{matrix}0.4 & 0.2 & 0.4 \\ 0.6 & 0 & 0.4 \\ 0.2 & 0.5 & 0.3\end{matrix}\right]X = 1 \cdot X\]
To find the eigenvector, form the matrix P − λI, which here is just P − I, and use elimination to find the null space (the eigenvector).
This works because Px = λx, or Px − λx = 0; factor out x to get (P − λI)x = 0, which says the vector x (the eigenvector) is in the null space of the matrix (P − λI).
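A sketch of that null-space idea (assuming numpy; here I let `np.linalg.eig` find the λ = 1 eigenvectors instead of doing the elimination by hand):

```python
import numpy as np

P = np.array([[0.4, 0.2, 0.4],
              [0.6, 0.0, 0.4],
              [0.2, 0.5, 0.3]])

# Right eigenvector for lambda = 1: a vector in the null space of P - I.
w, V = np.linalg.eig(P)
k = np.argmin(np.abs(w - 1.0))   # index of the eigenvalue closest to 1
x = np.real(V[:, k])
x = x / x[0]                     # scale so the first entry is 1
print(x)                         # proportional to [1, 1, 1]

# Left eigenvector for lambda = 1 (the invariant distribution):
# solve pi P = pi, i.e. find the null space of (P - I) transposed.
w2, U = np.linalg.eig(P.T)
k2 = np.argmin(np.abs(w2 - 1.0))
pi = np.real(U[:, k2])
pi = pi / pi.sum()               # normalize to a probability vector
print(pi)
```

Note the distinction: the right eigenvector of a row-stochastic matrix for λ = 1 is always proportional to [1, 1, 1], while the left eigenvector is the invariant distribution the problem asks for.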
To get the next state, do you operate with that matrix on the state vector?
Yes: \(v_1 P = v_2\).
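For example, starting the chain in state 1 (a minimal sketch assuming numpy, with \(v_1\) taken as the row vector (1, 0, 0)):

```python
import numpy as np

P = np.array([[0.4, 0.2, 0.4],
              [0.6, 0.0, 0.4],
              [0.2, 0.5, 0.3]])

v1 = np.array([1.0, 0.0, 0.0])  # start in state 1 with probability 1
v2 = v1 @ P                     # one step: row vector times P
print(v2)                       # [0.4, 0.2, 0.4] -- the first row of P
```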
lol, that was a "no" to my statement... the eigenvector seems to be operating on the matrix.
For 1), the n-th state is given by \(v_1 P^{n-1} = v_n\). Just diagonalize the matrix P using an eigenvalue decomposition; that gives you the probability after the n-th step. http://www.wolframalpha.com/input/?i=diagonalize+1%2F10%7B%7B4%2C+2%2C+4%7D%2C+%7B6%2C+0%2C+4%7D%2C+%7B2%2C+5%2C+3%7D%7D
If we let the \(x_i\) stand for the eigenvectors and expand the state vector as \(u_0 = c_1 x_1 + c_2 x_2 + c_3 x_3\), we can say \[ u_k = P^k u_0 = c_1 1^k x_1 + c_2 \lambda_2^k x_2 + c_3 \lambda_3^k x_3 \] We know one of the eigenvalues is 1, and the other two satisfy \( |\lambda_i| < 1 \), so if k is big enough, only \( c_1 x_1 \) survives.
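The decay of the non-unit eigenvalues is easy to check numerically (again assuming numpy; for this particular P the eigenvalues come out to 1, −0.1 and −0.2):

```python
import numpy as np

P = np.array([[0.4, 0.2, 0.4],
              [0.6, 0.0, 0.4],
              [0.2, 0.5, 0.3]])

w = np.sort(np.real(np.linalg.eigvals(P)))
print(w)          # [-0.2, -0.1, 1.0]

# The two eigenvalues with |lambda| < 1 die off geometrically,
# so the c_2 and c_3 terms in u_k vanish quickly as k grows.
print(0.2 ** 20)  # already ~1e-14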
If it helps, the right eigenvector for \(\lambda = 1\) is [1 1 1] (because every row of P sums to 1).
I think I set P up with columns adding to 1. Amend that to \(u_0 P^k\) for a row-oriented P.
Taking the transpose should fix that.
Woops!! Careful, though: the transpose has the same eigenvalues but not the same eigenvectors — the \(\lambda = 1\) eigenvector of \(P^T\) is the invariant distribution: http://www.wolframalpha.com/input/?i=eigenvalue+Transpose%5B1%2F10%7B%7B4%2C+2%2C+4%7D%2C+%7B6%2C+0%2C+4%7D%2C+%7B2%2C+5%2C+3%7D%7D%5D
Btw, here is as much as I know about this: http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-24-markov-matrices-fourier-series/
I read the same stuff, but not in detail.
So we can answer the question: x = [1 1 1] is the *right* eigenvector for \(\lambda = 1\), but the long-run probabilities come from the *left* eigenvector, i.e. from solving \(\pi P = \pi\) with the entries of \(\pi\) summing to 1. Writing \(\pi = (\pi_1, \pi_2, \pi_3)\), that system is \[0.4\pi_1 + 0.6\pi_2 + 0.2\pi_3 = \pi_1, \quad 0.2\pi_1 + 0.5\pi_3 = \pi_2, \quad 0.4\pi_1 + 0.4\pi_2 + 0.3\pi_3 = \pi_3\] which, together with \(\pi_1 + \pi_2 + \pi_3 = 1\), gives \(\pi = (25/66,\ 17/66,\ 24/66)\). What is the probability in the long run that the chain is in state 1? 25/66 ≈ 0.379.