Summary
Algorithm to find the equilibrium vector, i.e. the eigenvector associated with the eigenvalue 1
the standard algorithm for finding eigenvectors works as well
Adjacency matrix
convert it into a probability transition matrix by scaling each column so that its entries sum to 1
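The column scaling above can be sketched as follows; the 3-node adjacency matrix is a made-up example, not from the notes.

```python
def to_column_stochastic(A):
    """Scale each column of A so its entries sum to 1."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Adjacency matrix: A[i][j] = 1 if there is an edge from node j to node i
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

P = to_column_stochastic(A)
# Each column of P now sums to 1, so P is a (left) stochastic matrix.
```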
Concept
Probability vector
- all entries are nonnegative and sum to 1
different from unit vectors: a probability vector's entries sum to 1, whereas a unit vector has length (norm) 1
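A quick sketch of the distinction; the vector `v` is an arbitrary example.

```python
import math

v = [0.2, 0.5, 0.3]

# Probability vector: nonnegative entries that sum to 1
is_probability_vector = all(e >= 0 for e in v) and abs(sum(v) - 1.0) < 1e-12

# Euclidean norm: about 0.616 here, so v is NOT a unit vector
norm = math.sqrt(sum(e * e for e in v))
```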
(Left) stochastic matrix
- columns are probability vectors
Markov chain, cumulative probability
- a sequence of probability vectors (state vectors) that represents the probabilities of the different outcomes at each step
- together with a stochastic matrix, which transforms the current state vector into the next state vector
1 is always an eigenvalue of a stochastic matrix
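A sketch of why this holds: every column of a stochastic matrix P sums to 1, so the all-ones row vector satisfies 1ᵀP = 1ᵀ, meaning 1 is an eigenvalue of Pᵀ and hence of P (a matrix and its transpose share eigenvalues). The matrix below is an arbitrary column-stochastic example.

```python
P = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.3],
     [0.2, 0.2, 0.4]]

n = len(P)
# Multiplying the all-ones row vector by P just sums each column,
# so every entry of the result is 1 (up to rounding).
ones_times_P = [sum(P[i][j] for i in range(n)) for j in range(n)]
```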
Steady-state vector/equilibrium vector
- a state vector that remains the same no matter how many further steps of the chain are applied
- it is an eigenvector associated with the eigenvalue 1
- if the Markov chain converges, it will converge to this equilibrium vector
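The convergence described above can be sketched by repeatedly applying P to a starting state vector; P and the starting vector are made-up examples.

```python
def step(P, x):
    """One Markov step: x_next = P x."""
    n = len(P)
    return [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]

P = [[0.5, 0.2, 0.3],
     [0.3, 0.6, 0.3],
     [0.2, 0.2, 0.4]]

x = [1.0, 0.0, 0.0]   # start in state 1 with certainty
for _ in range(100):
    x = step(P, x)
# x has now (numerically) reached the equilibrium vector: P x ≈ x
```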
Regular stochastic matrix
- some power of the matrix has all entries strictly greater than 0
- every state can reach every state
recall that the product of triangular matrices (of the same kind) is also triangular, so every power of a triangular stochastic matrix keeps its zero entries; i.e. a triangular stochastic matrix (larger than 1×1) is never regular
Markov chain with regular stochastic matrix
- will converge to the unique equilibrium vector
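Regularity can be checked by brute force on small matrices: test whether any of the first few powers is entirely positive. The example matrix is made up; its diagonal is zero, but its square is all-positive, so it is regular.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_regular(P, max_power=10):
    """Return True if some power of P (up to max_power) is all-positive."""
    Q = P
    for _ in range(max_power):
        if all(entry > 0 for row in Q for entry in row):
            return True
        Q = matmul(Q, P)
    return False

# Zero diagonal (no self-loops), but the square has all positive entries:
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
```

By contrast, a triangular stochastic matrix such as `[[1.0, 0.5], [0.0, 0.5]]` fails this check: its powers stay triangular, so the zero entry never fills in.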
Octave
```octave
# Finding the equilibrium vector of a converging Markov chain:
# solve (P - I)x = 0 together with the constraint x1 + x2 + x3 = 1,
# appended as the extra augmented row [1 1 1 | 1]
rref([P - eye(3), [0; 0; 0]; 1 1 1 1])
```
Application
Probability
- Sheldon patronizes only three stalls in the school canteen for lunch every day: the mixed rice, noodle, and mala hotpot stalls. He never buys from the same stall two days in a row. If he buys from the mixed rice stall on a certain day, there is a 40% chance he will patronize the noodle stall the next day. If he buys from the noodle stall on a certain day, there is a 50% chance he will eat mala hotpot the next day. If he eats mala hotpot on a certain day, there is a 60% chance he will patronize the mixed rice stall the next day.
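A sketch of a solution: order the states (mixed rice, noodle, mala hotpot), with columns for today's stall and rows for tomorrow's. Because he never repeats a stall, the diagonal is 0, and each column's remaining probability follows from the stated percentage. Power iteration then finds the equilibrium; since the square of this matrix is all-positive, the chain converges.

```python
P = [[0.0, 0.5, 0.6],   # -> mixed rice
     [0.4, 0.0, 0.4],   # -> noodle
     [0.6, 0.5, 0.0]]   # -> mala hotpot

# Power iteration: repeatedly apply P until the state vector stabilizes
x = [1.0, 0.0, 0.0]     # assume he starts at the mixed rice stall
for _ in range(200):
    x = [sum(P[i][j] * x[j] for j in range(3)) for i in range(3)]
# x converges to the equilibrium vector (5/14, 2/7, 5/14)
```

Solving (P − I)x = 0 with the entries summing to 1 by hand gives the same answer: in the long run he eats mixed rice and mala hotpot 5/14 of the time each, and noodles 2/7 of the time.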
On the adjacency matrix of a graph (with the column convention used here), each column records the outgoing connections of that node