Analyzing Randomly Descending Sequences
Finals week just ended for me, so I'm hoping to post some new cool math problems in the next few days. In this blog post, I present a math problem that is very easy to state but not so easy to solve. Regardless, I will present my solution, which yielded a very interesting result.
You construct a sequence \( k_1, k_2, \ldots, k_m \) by picking an integer \( k_1 \) uniformly at random with \( 0 \leq k_1 < n \). After picking \( k_1 \), you pick \( k_2 \) uniformly at random with \( 0 \leq k_2 < k_1 \), and you continue in this way until you can no longer pick any numbers (that is, until \( k_m = 0 \)). Find \( f(n) \), the expected length of this sequence.
If \( n = 3 \), then the possible sequences are \( \{ [0], [1,0], [2,0], [2,1,0] \} \), but these don't all occur with equal probability: \( P([0]) = \frac{1}{3}, P([1,0]) = \frac{1}{3}, P([2,0]) = \frac{1}{6}, P([2,1,0]) = \frac{1}{6} \). Using these probabilities, we can easily compute \( f(3) \):
$$
f(3) = \frac{1}{3} \cdot 1 + \Big(\frac{1}{3} + \frac{1}{6}\Big) \cdot 2 + \frac{1}{6} \cdot 3 = \frac{11}{6}
$$ By similar analysis, we can see \( f(0) = 0, f(1) = 1, \) and \( f(2) = \frac{3}{2} \). Do you recognize a pattern? I didn't see it right away, but let's keep going and see what we can come up with.
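To sanity-check these hand computations, here is a minimal Monte Carlo sketch in Python (the function names `sequence_length` and `estimate_f` are mine, not from the original problem) that estimates \( f(n) \) by averaging the lengths of many randomly generated sequences.

```python
import random

def sequence_length(n):
    """Generate one random descending sequence starting below n; return its length."""
    length = 0
    while n > 0:
        k = random.randrange(n)  # pick uniformly from {0, 1, ..., n-1}
        length += 1
        n = k                    # the next pick must be strictly smaller
    return length

def estimate_f(n, trials=100_000):
    """Monte Carlo estimate of the expected sequence length f(n)."""
    return sum(sequence_length(n) for _ in range(trials)) / trials

print(estimate_f(3))  # should be close to 11/6 ≈ 1.8333
```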
This problem is inherently recursive, and we can write a recurrence relation for \( f(n) \). The possible values for \( k_1 \) are \( k_1 \in \{ 0, 1, ..., n-1 \} \). In each case the problem reduces to \( f(k_1) \), but we must average the results over all the possibilities. Here is a recurrence relation for \( f(n) \):
$$
f(n) = 1 + \frac{1}{n} \sum_{k=0}^{n-1} f(k)
$$ In this equation, the \( +1 \) represents the cost of reducing the problem in size (when we choose \( k_1 \), that adds one item to our sequence). The summation is the average of the expected values of all the possible subproblems that can arise from the current position. Let's rearrange the recurrence relation and try to make some simplifications.
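As a quick check of this recurrence, here is a short sketch (the helper name `f_recurrence` is my own) that evaluates the formula above exactly using rational arithmetic:

```python
from fractions import Fraction

def f_recurrence(n):
    """Compute f(n) exactly using f(m) = 1 + (1/m) * sum(f(k) for k < m)."""
    f = [Fraction(0)]  # f(0) = 0: no numbers can be picked
    for m in range(1, n + 1):
        f.append(1 + sum(f) / m)
    return f[n]

print([f_recurrence(i) for i in range(4)])  # 0, 1, 3/2, 11/6 as Fractions
```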
$$
n (f(n) - 1) = \sum_{k=0}^{n-1} f(k) = f(n-1) + \sum_{k=0}^{n-2} f(k)
$$ Notice that we can also write \( f(n-1) \) in a similar manner.
$$
(n-1) (f(n-1) - 1) = \sum_{k=0}^{n-2} f(k)
We can substitute the bottom formula into the summation in the top formula to get a nice recurrence relation for \( f(n) \) in terms of \( f(n-1) \).
$$
n (f(n) - 1) = f(n-1) + (n-1) (f(n-1) - 1)
$$ Expanding both sides gives \( n f(n) - n = n f(n-1) - n + 1 \), and dividing through by \( n \) shows that
$$
f(n) = f(n-1) + \frac{1}{n}
$$ Additionally, we have the initial condition that \( f(0) = 0 \), so we have
$$
f(n) = \sum_{i=1}^n \frac{1}{i} = H_n
$$ This is exactly the definition of the harmonic numbers! Although I was unable to answer the original question with a closed-form expression for \( f(n) \), this is an equally exciting result.
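As a final sanity check, here is a small sketch (helper names are my own) comparing the recurrence values against the harmonic numbers \( H_n = \sum_{i=1}^{n} \frac{1}{i} \):

```python
from fractions import Fraction

def harmonic(n):
    """Exact n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

def f_recurrence(n):
    """f(n) computed from the recurrence f(m) = 1 + (1/m) * sum(f(k) for k < m)."""
    f = [Fraction(0)]
    for m in range(1, n + 1):
        f.append(1 + sum(f) / m)
    return f[n]

# The two agree for every n tested, for example:
for n in range(10):
    assert f_recurrence(n) == harmonic(n)
print(harmonic(3))  # 11/6, matching the hand computation above
```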