A place where I share my thoughts and ideas with the world.

A Neat Random Walk Problem


As many readers of this blog probably know, I enjoy inventing math problems, especially problems that are probabilistic in nature, and then working out the math to solve them. In this blog post, I will introduce one problem I solved that I think is particularly elegant:

You are randomly walking along the x-axis starting at \( x=0 \). When you are at \( x = n \), you move to \( x = n-1 \) with probability \( p = \frac{n}{n+1} \) and to \( x = n+1 \) with probability \( p = \frac{1}{n+1} \). What is the expected number of steps until you return to \( x = 0 \) for the first time?

This problem differs from standard random walk problems because the probability of moving left or right depends on your current location. There is a force pulling you toward \( x = 0 \), and the further away you get, the stronger its effect.
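Before doing any math, we can sanity-check the setup with a quick Monte Carlo simulation. This is just a sketch; the trial count and random seed are arbitrary choices:

```python
import random

def return_time(rng):
    # From x = 0 the walk always steps to x = 1 (probability 1/(0+1) = 1),
    # so we start at x = 1 with one step already taken.
    x, steps = 1, 1
    while x != 0:
        # At x = n: step left with probability n/(n+1), else step right.
        if rng.random() < x / (x + 1):
            x -= 1
        else:
            x += 1
        steps += 1
    return steps

rng = random.Random(0)
trials = 200_000
estimate = sum(return_time(rng) for _ in range(trials)) / trials
print(estimate)
```

Running this gives an average a little under five and a half steps, which we will now derive exactly.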

So how do we go about solving this problem? When I first attempted it, I tried to set up an equation describing the situation. In particular, I defined a function \( f(n) \) to be the expected number of steps to reach \( x = 0 \) from \( x = n \). This was ultimately a dead end, but the key insight turned out to be fairly similar.
Let's define a function \( f(n) \) to be the expected number of steps to reach \( x = n-1 \) from \( x = n \). Then the answer to the original problem is simply \( 1 + f(1) \), since from \( x = 0 \) you always step to \( x = 1 \), and \( f(1) \) is the expected number of steps to get back. To get some intuition about what this function needs to look like, think about what can happen when you are at \( x = n \). No matter what, you have to take one step. With probability \( p = \frac{n}{n+1} \), that is the only step you need to take, because that is the probability of moving to \( x = n-1 \) immediately. However, with probability \( p = \frac{1}{n+1} \) you need to take more steps: you end up at \( x = n+1 \), but you ultimately want to get back to \( x = n-1 \). To do that, you must first get back to \( x = n \), and then from there to \( x = n-1 \). By the definition of \( f \), the expected number of additional steps needed to complete this sequence of events is exactly \( f(n+1) + f(n) \). This gives rise to the following equation:
$$ f(n) = 1 + \frac{1}{n+1} [ f(n+1) + f(n) ] $$
Solving this equation for \( f(n) \) yields:
$$ f(n) = \frac{n + 1}{n} + \frac{f(n+1)}{n} $$
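At \( n = 1 \) this recurrence says \( f(1) = 2 + f(2) \), which we can check empirically with a quick Monte Carlo sketch (the trial count and seed are arbitrary choices):

```python
import random

# Estimate f(n), the expected number of steps to reach x = n-1
# starting from x = n, and check the recurrence f(1) = 2 + f(2).
def steps_to_left(n, rng):
    x, steps = n, 0
    while x != n - 1:
        # At x: step left with probability x/(x+1), else step right.
        if rng.random() < x / (x + 1):
            x -= 1
        else:
            x += 1
        steps += 1
    return steps

rng = random.Random(0)
trials = 100_000
f1 = sum(steps_to_left(1, rng) for _ in range(trials)) / trials
f2 = sum(steps_to_left(2, rng) for _ in range(trials)) / trials
print(f1, 2 + f2)  # the two estimates should nearly agree
```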
We've got an interesting situation here: a recurrence relation with no base case. In principle, if we knew \( f(n) \) for any single \( n \geq 1 \), we would know \( f(n) \) for all \( n \) by repeatedly applying the formula. After thinking carefully about the problem, you will probably realize that as we get further from \( x = 0 \), the leftward pull becomes so strong that the expected number of steps to move one unit left approaches \( 1 \). Formally, \( \lim_{n \rightarrow \infty} f(n) = 1 \). Let's take a deeper look at \( f(1) \), which is ultimately what we are interested in finding:
\begin{align*}
f(1) &= \frac{2}{1} + \frac{f(2)}{1} \\
&= \frac{2}{1} + \frac{3}{2 \cdot 1} + \frac{f(3)}{2 \cdot 1} \\
&= \frac{2}{1} + \frac{3}{2 \cdot 1} + \frac{4}{3 \cdot 2 \cdot 1} + \frac{f(4)}{3 \cdot 2 \cdot 1} \\
&= \frac{2}{1!} + \frac{3}{2!} + \frac{4}{3!} + \frac{5}{4!} + \frac{f(5)}{4!} \\
&= \frac{2}{1!} + \frac{3}{2!} + \frac{4}{3!} + \dots + \frac{n+1}{n!} + \dots
\end{align*}
Since the remainder term after \( n \) terms is \( \frac{f(n+1)}{n!} \) and \( \lim_{n \rightarrow \infty} \frac{f(n+1)}{n!} = 0 \), we have:
$$ f(1) = \sum_{n=1}^\infty \frac{n+1}{n!} $$
$$ 1 + f(1) = \sum_{n=0}^\infty \frac{n+1}{n!} $$
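Before identifying the closed form, we can see numerically that this series converges very quickly, thanks to the factorial in the denominator. A quick sketch (the 20-term cutoff is arbitrary):

```python
import math

# Partial sums of sum_{n>=0} (n+1)/n!; the factorial denominator
# makes the series converge to double precision within ~20 terms.
partial = 0.0
for n in range(20):
    partial += (n + 1) / math.factorial(n)
print(partial)
```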

Now, you might recognize this infinite series already, but don't worry if you don't. The sum evaluates to exactly \( 2e \approx 5.437 \). Here's why: define \( g(x) = e^x = \sum_{k=0}^\infty \frac{x^k}{k!} \). Then consider \( x g'(x) = x e^x = \sum_{k=1}^\infty \frac{k x^k}{k!} \). Now define \( h(x) = g(x) + x g'(x) = 1 + \sum_{k=1}^\infty \frac{(k+1) x^k}{k!} \). This looks familiar! In fact, \( 1 + f(1) = h(1) = e^1 + 1 \cdot e^1 = 2e \). There you have it! The expected number of steps to return to the origin is \( 2e \).
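The identity \( h(x) = (1+x)e^x \) is easy to verify numerically by comparing a truncated series against the closed form at a few points (the 40-term cutoff and the sample points are arbitrary choices):

```python
import math

# Check sum_{k>=0} (k+1) x^k / k! = (1 + x) e^x by comparing a
# truncated series against the closed form at a few sample points.
def h_series(x, terms=40):
    return sum((k + 1) * x**k / math.factorial(k) for k in range(terms))

for x in (0.5, 1.0, 2.0):
    assert abs(h_series(x) - (1 + x) * math.exp(x)) < 1e-9
print(h_series(1.0))  # h(1) = 2e, the answer to the original problem
```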
This was a beautiful problem, and it has one of the most elegant solutions I have ever come up with. When I initially thought of the problem, I had no idea \( e \) would pop up in the answer, so that was somewhat of a surprise. If you liked this problem, consider a new but very similar one in which the force is reversed:

You are randomly walking along the x-axis, starting at \( x = 1 \). When you are at \( x = n \), you move to \( x = n-1 \) with probability \( p = \frac{1}{n+1} \) and to \( x = n+1 \) with probability \( p = \frac{n}{n+1} \). What is the probability that you eventually reach \( x = 0 \)?
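If you want to check your answer empirically, here is a Monte Carlo sketch. The cutoff `CAP` and the trial count are arbitrary choices, and treating any walk that drifts past `CAP` as never returning is an approximation (the rightward drift makes a return from that far out vanishingly unlikely):

```python
import random

# Estimate the probability of ever reaching x = 0 from x = 1 when
# the drift pushes the walk to the right.
CAP = 30  # assumption: a walk that reaches CAP is counted as escaping

def reaches_zero(rng):
    x = 1
    while 0 < x < CAP:
        # At x = n: step left with probability 1/(n+1), else step right.
        if rng.random() < 1 / (x + 1):
            x -= 1
        else:
            x += 1
    return x == 0

rng = random.Random(1)
trials = 100_000
estimate = sum(reaches_zero(rng) for _ in range(trials)) / trials
print(estimate)
```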

