### Markov Chains and Expected Value

A few weeks ago, I was using a Markov chain as a model for a Project Euler problem, and I learned how to use the transition matrix to find the expected number of steps to reach a certain state. In this post, I will derive the linear system that answers that question, and then work through a specific example using a one-dimensional random walk as the model.

### Terminology

Before I begin the derivation, let me define some terms. The transition matrix is an $n$ by $n$ matrix that describes how you can move from one state to the next; each of its rows must sum to $1$. States are either transient or absorbing. An absorbing state is one that cannot be left once reached (it transitions to itself with probability $1$). A transient state is any state that is not absorbing. In many problems, we are interested in computing the expected number of steps to reach an absorbing state from some start state.
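Both properties are easy to check mechanically. Here is a minimal Python sketch (the 3-state matrix is a made-up example, not one from this post): every row must sum to $1$, and a state is absorbing exactly when it transitions to itself with probability $1$.

```python
# Hypothetical 3-state transition matrix: P[i][j] is the probability
# of moving from state i to state j in one step.
P = [
    [0.5,  0.5,  0.0],   # state 0: transient
    [0.25, 0.5,  0.25],  # state 1: transient
    [0.0,  0.0,  1.0],   # state 2: absorbing
]

# Every row of a transition matrix must sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

# A state is absorbing iff it transitions to itself with probability 1.
absorbing = [i for i, row in enumerate(P) if row[i] == 1.0]
print(absorbing)  # [2]
```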

### Derivation

Let $p_{i,j}$ be the probability of transitioning from state $i$ to state $j$. Note that the $(i,j)^{th}$ element of the transition matrix is just $p_{i,j}$. Let $E_i$ be the expected number of steps to reach an absorbing state from state $i$. For any transient state $i$, $E_i$ must satisfy the equation: $$E_i = 1 + p_{i,1} E_1 + \dots + p_{i,j} E_j + \dots + p_{i,n} E_n = 1 + \sum_{j=1}^n p_{i,j} E_j$$ The $1$ is the cost of making a single transition, and each term of the sum is the probability of transitioning to state $j$ times the expected number of steps needed to reach an absorbing state from state $j$. Trivially, if state $i$ is absorbing, then $E_i = 0$.

If you look at the above expression carefully, you will notice that $E_i$ is just one plus the dot product between the $i^{th}$ row of the transition matrix and the vector of expected values. Another important observation is that in order to calculate $E_i$ using this relation, we must first know $E_j$ for all $j$ (including $j = i$!). At first sight this might seem like an issue, but this is exactly the type of problem that linear algebra is used to solve. If we rearrange the above formula to move all unknowns ($E_j$) to one side, we get: $$E_i - \sum_{j=1}^n p_{i,j} E_j = 1$$ If we let $E$ be the vector of expected values and let $P$ be the transition matrix of the Markov chain, then $$(I - P) E = \mathbf{1}$$ where $I$ is the identity matrix and $\mathbf{1}$ is the column vector of all ones. We can find $E$ by solving the linear system (restricted to the transient states, as we will see in the example below). Any method you'd like is acceptable: matrix inversion, Gaussian elimination, LU factorization, etc.
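To make the derivation concrete, here is a small Python sketch (the function name `expected_steps` is my own) that builds $I - P$ restricted to the transient states, since $E_i = 0$ is already known for absorbing states, and solves $(I - P)E = \mathbf{1}$ by Gaussian elimination:

```python
def expected_steps(P, absorbing):
    """Solve (I - P)E = 1 over the transient states; absorbing
    states have E = 0 by definition."""
    n = len(P)
    transient = [i for i in range(n) if i not in absorbing]
    # Reduced system A x = b with absorbing rows/columns removed.
    A = [[(1.0 if i == j else 0.0) - P[i][j] for j in transient]
         for i in transient]
    b = [1.0] * len(transient)
    m = len(A)
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, m))
        x[r] = (b[r] - s) / A[r][r]
    # Scatter the solution back into a full-length vector.
    E = [0.0] * n
    for idx, i in enumerate(transient):
        E[i] = x[idx]
    return E

# Tiny sanity check: a 3-state chain where state 2 is absorbing.
P = [
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0],
]
print(expected_steps(P, absorbing={2}))  # [4.0, 3.0, 0.0]
```

The tiny chain here happens to be the $n = 2$ version of the random walk discussed below, so $E_0 = 4 = 2^2$, in agreement with the closed form derived in the next section.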

### A Simple Example

Consider the following problem:
You are randomly walking on the number line starting at position $0$ and at every step you either move forward $1$ unit or backward $1$ unit. You may stop walking when you reach position $-n$ or position $n$. What is the expected number of steps you will need to take before you can stop walking?
Before going through the Markov chain solution, I will derive a closed-form answer to this question and use the result to verify the validity of the Markov chain solution.

We can simplify the problem by taking into account symmetry: at position $0$, we will move to position $1$ with probability $1$, and at any other position $i$, we will move to position $i-1$ with probability $\frac{1}{2}$ and to position $i+1$ with probability $\frac{1}{2}$. This eliminates the need to consider two goal states and cuts our state space in half.

Let $R_i$ be the expected number of steps required to move from position $i$ to position $i+1$. Using the linearity of expectation, we can see that the expected number of steps to reach the goal state is simply: $$E = \sum_{i=0}^{n-1} R_i$$ $R_i$ is nice because it satisfies a simple recurrence relation: $$R_0 = 1$$ $$R_i = 1 + \frac{1}{2} (R_{i-1} + R_i) + \frac{1}{2} (0)$$ (From position $i$ we always take one step; with probability $\frac{1}{2}$ it lands on $i+1$ and we are done, and with probability $\frac{1}{2}$ it lands on $i-1$, from which we expect $R_{i-1}$ steps to return to $i$ and $R_i$ more to reach $i+1$.) Solving the equation for $R_i$, we get: $$R_i = 2 + R_{i-1}$$ Thus, the values of $R_i$ are just the odd numbers $1, 3, 5, \dots$, and the sum of the first $n$ odd numbers is $n^2$. This gives the expected number of steps required to reach position $n$: $$E = \sum_{i=0}^{n-1} R_i = n^2$$
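The closed form is easy to sanity-check numerically. A tiny Python sketch (the function name is mine) that sums the recurrence directly and compares it against $n^2$:

```python
def walk_steps_closed_form(n):
    """Sum the recurrence R_0 = 1, R_i = 2 + R_{i-1} for i < n."""
    R, total = 1, 0
    for _ in range(n):
        total += R  # add R_i, the i-th odd number
        R += 2      # R_{i+1} = 2 + R_i
    return total

# The sum of the first n odd numbers should be n^2.
for n in range(1, 20):
    assert walk_steps_closed_form(n) == n * n
print(walk_steps_closed_form(8))  # 64
```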

### Using the Markov Chain

Markov chains are not designed to handle problems of infinite size, so I can't use one to recover the nice elegant solution from the previous section, but in a finite state space we can always find the expected number of steps required to reach an absorbing state. Let's solve the previous problem using $n = 8$. If the method is correct, we should find $E_0 = 8^2 = 64$. If you want to work out an example by hand, you should use $n = 3$ or $n = 4$, but I will use a computer algebra system to make solving the system feasible.

Using the symmetric model explained in the previous section, the transition matrix is: $$\begin{bmatrix} 0 & 1 &&&&&&& \\ \frac{1}{2} & 0 & \frac{1}{2} &&&&&& \\ & \frac{1}{2} & 0 & \frac{1}{2} &&&&& \\ && \frac{1}{2} & 0 & \frac{1}{2} &&&& \\ &&& \frac{1}{2} & 0 & \frac{1}{2} &&& \\ &&&& \frac{1}{2} & 0 & \frac{1}{2} && \\ &&&&& \frac{1}{2} & 0 & \frac{1}{2} & \\ &&&&&& \frac{1}{2} & 0 & \frac{1}{2} \\ &&&&&&& 0 & 1 \end{bmatrix}$$ We know $E_8 = 0$, and we want to solve for $E_0$ through $E_7$. If you look carefully at $I - P$, you'll notice that its last row is all zeros. This makes the matrix singular, and since the corresponding equation of the system reads $0 = 1$, the system as written has no solution. We can remedy this by removing the last row and last column of the matrix (the column can go because it multiplies $E_8 = 0$). In the general case, you'll want to remove the row and column of every absorbing state. The corresponding system we want to solve is then: $$\begin{bmatrix} 1 & -1 &&&&&& \\ -.5 & 1 & -.5 &&&&& \\ & -.5 & 1 & -.5 &&&& \\ && -.5 & 1 & -.5 &&& \\ &&& -.5 & 1 & -.5 && \\ &&&& -.5 & 1 & -.5 & \\ &&&&& -.5 & 1 & -.5 \\ &&&&&& -.5 & 1 \end{bmatrix} \begin{bmatrix} E_0\\ E_1\\ E_2\\ E_3\\ E_4\\ E_5\\ E_6\\ E_7 \end{bmatrix} = \begin{bmatrix} 1\\1\\1\\1\\1\\1\\1\\1 \end{bmatrix}$$ Using a computer algebra system, the solution to this linear system is: $$E = \begin{bmatrix} 64\\ 63\\ 60\\ 55\\ 48\\ 39\\ 28\\ 15 \end{bmatrix}$$ which matches the expected result: $E_0 = 64$.
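Since the reduced matrix is tridiagonal, a full computer algebra system isn't strictly necessary. Here is a sketch that solves the same $8 \times 8$ system with the Thomas algorithm, a standard $O(n)$ tridiagonal elimination (variable names are my own):

```python
# Reduced system (I - P) restricted to transient states 0..7:
# main diagonal 1, sub/superdiagonal -1/2, except row 0 where the
# superdiagonal entry is -1 (state 0 moves to state 1 with prob. 1).
n = 8
sub  = [-0.5] * (n - 1)            # subdiagonal
diag = [1.0] * n                   # main diagonal
sup  = [-1.0] + [-0.5] * (n - 2)   # superdiagonal
rhs  = [1.0] * n                   # right-hand side of all ones

# Forward sweep: eliminate the subdiagonal.
for i in range(1, n):
    w = sub[i - 1] / diag[i - 1]
    diag[i] -= w * sup[i - 1]
    rhs[i] -= w * rhs[i - 1]

# Back substitution.
E = [0.0] * n
E[n - 1] = rhs[n - 1] / diag[n - 1]
for i in range(n - 2, -1, -1):
    E[i] = (rhs[i] - sup[i] * E[i + 1]) / diag[i]

print([round(e) for e in E])  # [64, 63, 60, 55, 48, 39, 28, 15]
```

This reproduces the solution vector above, including $E_0 = 64$.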
Notice that in solving for $E_0$, we also found the expected number of steps to reach the goal from every other state! This type of analysis can be useful in many different situations: how long you expect an inning of baseball to last, how long you'd expect to wait for the stock market to go from bull to bear, how many days you expect to wait before it rains again, and so on. Markov chains aren't always the right tool, as there was a simple and elegant solution to this particular problem without them. But they are a powerful statistical tool and a fascinating mathematical object, so knowing when it's appropriate to use them and how to use them effectively can really help in solving probability and statistics problems.