Markov Chains and Expected Value

A few weeks ago, I was using a Markov chain as a model for a Project Euler problem, and I learned how to use the transition matrix to find the expected number of steps to reach a certain state. In this post, I will derive the linear system that answers that question, and will work out a specific example using a 1-dimensional random walk as the model.

Terminology

Before I begin the derivation, let me define some terms. The transition matrix is an \( n \) by \( n \) matrix that describes how you can move from one state to the next: entry \( (i, j) \) is the probability of moving from state \( i \) to state \( j \) in one step, so the rows of the transition matrix must always sum to 1. States are either transient or absorbing. An absorbing state is one that cannot be left once reached (it transitions to itself with probability 1); a transient state is any state that is not absorbing. In many problems, the quantity of interest is the expected number of steps to reach an absorbing state from some start state.

Derivation

Let \( p_{ij} \) denote the probability of moving from state \( i \) to state \( j \) in one step, and let \( E_i \) denote the expected number of steps to reach an absorbing state starting from state \( i \). If \( i \) is absorbing, then \( E_i = 0 \). If \( i \) is transient, conditioning on the first step gives \( E_i = 1 + \sum_j p_{ij} E_j \), since one step is always taken and the walk then continues from wherever it lands. These equations form a linear system in the unknowns \( E_i \).
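To make this concrete, here is a minimal sketch in Python (assuming NumPy is available; the helper name expected_steps is my own) for the 1-dimensional random walk with absorbing boundaries. Writing \( Q \) for the transient-to-transient block of the transition matrix and \( E \) for the vector of unknowns, the system above rearranges to \( (I - Q) E = \mathbf{1} \), which one matrix solve handles directly.

```python
# A minimal sketch (assumes NumPy; the function name is hypothetical):
# expected steps to absorption for a 1-D random walk on states 0..n,
# where 0 and n are absorbing.
import numpy as np

def expected_steps(n, p=0.5):
    """Expected steps to absorption from each transient state 1..n-1."""
    size = n + 1
    P = np.zeros((size, size))
    P[0, 0] = 1.0              # left boundary is absorbing
    P[n, n] = 1.0              # right boundary is absorbing
    for i in range(1, n):
        P[i, i + 1] = p        # step right with probability p
        P[i, i - 1] = 1 - p    # step left with probability 1 - p
    Q = P[1:n, 1:n]            # transient-to-transient block
    # Solve (I - Q) E = 1 for the vector of expected step counts.
    return np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))

# For the symmetric walk on states 0..10, this matches the known
# closed form E_i = i * (n - i):
print(expected_steps(10))  # [ 9. 16. 21. 24. 25. 24. 21. 16.  9.]
```

For the symmetric case \( p = 1/2 \) on states \( 0, \dots, n \), the solver reproduces the classical gambler's-ruin result \( E_i = i (n - i) \), which is a useful sanity check on the constructed matrix.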