### Markov Chains and Expected Value

A few weeks ago, I was using a Markov chain as a model for a Project Euler problem, and I learned
how to use the transition matrix to find the expected number of steps to reach a certain state.
In this post, I will derive the linear system that answers that question, and then work
out a specific example using a one-dimensional random walk as the model.

### Terminology

Before I begin the derivation, let me define some terms. The transition matrix is an \( n \) by \( n \) matrix that describes how you can move from one state to the next. The rows of the transition matrix must always sum to 1. States are either *transient* or *absorbing*. An absorbing state is one that cannot be left once reached (it transitions to itself with probability 1). A transient state is any state that is not absorbing. In many problems, the quantity of interest is the expected number of steps to reach an absorbing state from some start state.

### Derivation
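As a small illustration of these terms (the matrix below is a made-up example, not one from a specific problem), here is a three-state chain where the last state is absorbing:

```python
import numpy as np

# A hypothetical 3-state chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([
    [0.5, 0.4, 0.1],   # from state 0
    [0.3, 0.5, 0.2],   # from state 1
    [0.0, 0.0, 1.0],   # state 2 transitions to itself with probability 1
])

# Every row of a transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# A state i is absorbing exactly when P[i, i] == 1.
absorbing = [i for i in range(len(P)) if P[i, i] == 1.0]
print(absorbing)  # [2]
```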

Let \( p_{i,j} \) be the probability of transitioning from state \( i \) to state \( j \). Note that the \( (i,j)^{th} \) element of the transition matrix is just \( p_{i,j} \). Let \( E_i \) be the expected number of steps to reach an absorbing state from state \( i \). Then \( E_i \) must satisfy the equation:

$$ E_i = 1 + p_{i,1} E_1 + \dots + p_{i,j} E_j + \dots + p_{i,n} E_n = 1 + \sum_{j=1}^n p_{i,j} E_j $$

The \( 1 \) is the cost of making a single transition, and the sum corresponds to the probability of transitioning to some state \( j \) times the expected number of steps needed to reach an absorbing state from state \( j \). Trivially, if state \( i \) is absorbing, then \( E_i = 0 \).

If you look at the above expression carefully, you will notice that \( E_i \) is just one plus the dot product between the \( i^{th} \) row of the transition matrix and the vector of expected values. Another important thing to notice is that in order to calculate \( E_i \) using this relation, we must first know \( E_j \) for all \( j \) (including \( j = i \)!). At first sight this might seem like an issue, but this is exactly the type of problem that linear algebra is used to solve. If we rearrange the above formula to move all unknowns (\( E_j \)) to one side, we get:

$$ E_i - \sum_{j=1}^n p_{i,j} E_j = 1 $$

If we let \( E \) be the vector of expected values and let \( P \) be the transition matrix of the Markov chain, then

$$ (I - P) E = \mathbf{1} $$

where \( I \) is the identity matrix and \( \mathbf{1} \) is the column vector of all \( 1 \)'s. One caveat: this equation holds only for the transient states. For each absorbing state \( i \), the correct equation is \( E_i = 0 \), so we replace the corresponding row accordingly (equivalently, we restrict the system to the transient states); without this step, \( I - P \) is singular. We can then find \( E \) by solving the linear system. Any method you'd like is acceptable (matrix inversion, Gaussian elimination, LU factorization, etc.).

### A Simple Example
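A minimal sketch of solving this system with NumPy, using a made-up three-state chain (states 0 and 1 transient, state 2 absorbing), and restricting to the transient states as described above:

```python
import numpy as np

# Hypothetical 3-state chain; the last row is an absorbing state.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.0, 1.0],
])

# Restrict the system to the transient states: solve (I - Q) E = 1,
# where Q is the transient-to-transient block of P.
transient = [0, 1]
Q = P[np.ix_(transient, transient)]
E = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
print(E)  # expected number of steps to absorption from states 0 and 1
```

Each entry of `E` is the expected number of steps to reach the absorbing state starting from the corresponding transient state.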

Consider the following problem:

> You are randomly walking on the number line starting at position \( 0 \), and at every step you either move forward \( 1 \) unit or backward \( 1 \) unit. You may stop walking when you reach position \( -n \) or position \( n \). What is the expected number of steps you will need to take before you can stop walking?

Before going through the Markov chain solution, I will come up with a closed form answer to this question and use the result to verify the validity of the Markov chain solution.

We can simplify the problem by taking advantage of symmetry: since the walk is symmetric about \( 0 \), we can track the distance from the origin rather than the position itself. At distance \( 0 \), we move to distance \( 1 \) with probability \( 1 \), and at any other distance \( i \), we move to distance \( i-1 \) with probability \( \frac{1}{2} \) and to distance \( i+1 \) with probability \( \frac{1}{2} \). This eliminates the need to consider two goal states and cuts our state space in half.

Let \( R_i \) be the expected number of steps required to move from state \( i \) to state \( i+1 \). Using the linearity of expectation, we can see that the expected number of steps to reach the goal state is simply:

$$ E = \sum_{i=0}^{n-1} R_i $$

\( R_i \) is nice because it satisfies a simple recurrence relation:

$$ R_0 = 1 $$

$$ R_i = 1 + \frac{1}{2} (R_{i-1} + R_i) + \frac{1}{2} (0) $$

Solving the equation for \( R_i \), we get:

$$ R_i = 2 + R_{i-1} $$

Thus, the values of \( R \) are just the odd numbers \( 1, 3, 5, \dots \), and the sum of the first \( n \) odd numbers is \( n^2 \). This gives the expected number of steps required to reach position \( \pm n \):

$$ E = \sum_{i=0}^{n-1} R_i = n^2 $$
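We can now check the closed form against the Markov chain machinery. The sketch below builds the reduced chain on distances \( 0, 1, \dots, n \) (with state \( n \) absorbing), solves the linear system restricted to the transient states, and compares the result to \( n^2 \):

```python
import numpy as np

def expected_steps(n):
    """Expected steps for the symmetric walk to reach distance n from the
    origin, using the reduced chain on states 0..n (state n absorbing)."""
    P = np.zeros((n + 1, n + 1))
    P[0, 1] = 1.0                      # from distance 0, always step to 1
    for i in range(1, n):
        P[i, i - 1] = P[i, i + 1] = 0.5
    P[n, n] = 1.0                      # absorbing goal state

    Q = P[:n, :n]                      # transient-to-transient block
    E = np.linalg.solve(np.eye(n) - Q, np.ones(n))
    return E[0]                        # expected steps starting at distance 0

for n in (1, 2, 5, 10):
    print(n, expected_steps(n))       # agrees with the closed form n**2
```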
