A place where I share my thoughts and ideas with the world.
An Interesting Game

Today I'm going to talk about an interesting game that I thought of, and in particular I'll show the probability that each player will win. While the rules of the game are not fair, the skill levels of each player are not necessarily the same either, so I formally define what I mean and find the conditions under which these factors balance out. Without further ado, here is the problem statement:
Alice and Bob play a game consisting of an indeterminate number of rounds. In each round, Alice and Bob compete to win the round, and thus score a point. Alice is more skilled at this game, so she wins each round with probability $ p > 0.5 $ (independent of the score and other context), meaning Bob wins each round with probability $ 1 - p $. The score starts at 0–0, and Bob wins if he ever has more points than Alice; otherwise, Alice wins. What is the probability that Bob wins this game? For what value of $ p $ is this game fair?
While the game never ends unless Bob wins, we can still reason about it in a theoretical setting. The rest of this blog post is devoted to answering this question, so if you'd like to work out a solution yourself first, pause here before reading on.
A natural place to start is to think about the probability of Bob winning when the current score is $ (a, b) $, then connect the probabilities between adjacent scores by defining some kind of recurrence relation. The first key observation to make is that the probability that Bob wins only depends on the score differential $ a - b $: the number of points Alice has minus the number of points Bob has. This allows us to define a recurrence relation with respect to a single parameter, $ d = a - b $, rather than two, and one-parameter recurrences are much easier to reason about analytically.
If $ d = a - b = -1 $, then Bob has won by definition. Letting $ f(d) $ denote the function mapping score differentials to probabilities of Bob winning, we have $ f(-1) = 1 $. This is our base case, and now we can define a more general formula for $ f(d) $ as follows:
$$ f(d) = (1-p) \cdot f(d-1) + p \cdot f(d+1) $$
If you are familiar with recurrence relations, this formula should be pretty intuitive. If not, I encourage you to think carefully about how $ f $ is defined in English, then convince yourself that the mathematical definition is consistent with it. This formula holds because Bob scores the next point with probability $ 1 - p $, which brings the score differential down by one; similarly, Alice scores the next point with probability $ p $, which brings the score differential up by one. We are interested in $ f(0) $, the probability that Bob wins when the score differential is $ 0 $, because that is the score differential when the game first starts.
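Before deriving anything further, the setup can be sanity-checked by simulation. Below is a minimal Monte Carlo sketch (the helper names `simulate_game` and `estimate_f0` are my own, and the `max_rounds` cap is an artificial assumption, needed only because a single game can in principle run forever):

```python
import random

def simulate_game(p, max_rounds=500):
    """Play one game from score differential 0; return True if Bob wins.

    The game is truncated after max_rounds rounds: if Bob has not yet
    taken the lead by then, the game is counted as a win for Alice.
    For p well above 0.5, the resulting bias is negligible.
    """
    d = 0  # Alice's points minus Bob's points
    for _ in range(max_rounds):
        if random.random() < p:
            d += 1  # Alice scores a point
        else:
            d -= 1  # Bob scores a point
        if d == -1:
            return True  # Bob now has more points than Alice
    return False

def estimate_f0(p, trials=20_000):
    """Estimate f(0), the probability that Bob wins, by simulation."""
    return sum(simulate_game(p) for _ in range(trials)) / trials
```

For example, `estimate_f0(0.6)` hovers around $ 0.667 $, which foreshadows the closed form derived later in the post.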
Now that we have a recurrence relation for $ f(d) $, we'd like to find a closed-form expression for it in terms of $ p $ and $ d $ so we can plug in $ d = 0 $ to get our answer. However, one more clever insight lets us sidestep solving the full recurrence (although, as I'll show, it's easy to recover $ f(d) $ from $ f(0) $ anyway). Using the recurrence relation defined above, we know:
$$ f(0) = (1-p) \cdot f(-1) + p \cdot f(1) = (1-p) + p \cdot f(1) $$
The second key insight is that $ f(1) = f(0)^2 $. Again, I encourage you to think about this carefully to convince yourself that it is true. It holds essentially because $ p $ doesn't change based on the score differential. We can think of $ f(1) $ in two different ways: (1) by definition, it is the probability that Bob wins when the current score differential is $ 1 $, and (2) it is the probability that Bob "wins" two copies of the original game in a row, each starting at score differential $ 0 $. (2) holds because winning the first copy brings the score differential down from $ 1 $ to $ 0 $, and winning the second brings it down from $ 0 $ to $ -1 $. Plugging this newly derived fact into the formula above, we find:
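This identity can also be checked numerically without knowing the closed form, by solving a truncated version of the recurrence directly. The sketch below (`solve_f` is a hypothetical helper name of my own) runs a simple fixed-point iteration on the recurrence with boundary condition $ f(-1) = 1 $, pinning $ f $ to $ 0 $ at a large differential where it is essentially zero anyway:

```python
def solve_f(p, max_d=60, sweeps=4000):
    """Approximate f(d) for d = 0, ..., max_d by iterating the recurrence
    f(d) = (1 - p) * f(d - 1) + p * f(d + 1), with f(-1) = 1 and the
    approximation f(max_d) = 0 (f decays geometrically in d, so the
    truncation error is tiny for max_d this large)."""
    f = [0.0] * (max_d + 1)  # f[d] approximates f(d); f[max_d] stays 0
    for _ in range(sweeps):
        prev = f[:]  # read from the previous sweep, write into f
        for d in range(max_d):
            left = 1.0 if d == 0 else prev[d - 1]  # f(-1) = 1 base case
            f[d] = (1 - p) * left + p * prev[d + 1]
    return f
```

Running `solve_f(0.6)` gives an `f[1]` that agrees with `f[0] ** 2` to many decimal places, as the identity predicts.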
$$ f(0) = (1-p) + p \cdot f(0)^2 $$
This is just a quadratic equation that we need to solve. To make that clearer, let $ f(0) = x $, so we want to find the value of $ x $ such that
$$ x = (1-p) + p x^2 $$ $$ p x^2 - x + (1-p) = 0 $$
Using the quadratic formula, we find that $ x = \frac{1-p}{p} $ or $ x = 1 $. When $ p \leq 0.5 $, the score differential does not drift away from Bob, so he eventually takes the lead with probability $ 1 $ and $ x = 1 $ is the correct solution. When $ p > 0.5 $, however, Bob's win probability must be strictly less than $ 1 $, so the relevant root is $ \frac{1-p}{p} $. Thus the answer to the first question is $ f(0) = \frac{1-p}{p} $, and we can answer the second question by setting $ f(0) = \frac{1}{2} $: solving $ \frac{1-p}{p} = \frac{1}{2} $ gives $ p = \frac{2}{3} $, the value for which the game is fair for both players.
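To tie it together, the quadratic can also be solved mechanically. Here is a small sketch (the function name `bob_win_probability` is mine) that applies the quadratic formula to $ p x^2 - x + (1-p) = 0 $ and returns the root lying in $ [0, 1) $:

```python
import math

def bob_win_probability(p):
    """Return the root of p*x^2 - x + (1 - p) = 0 that lies in [0, 1].

    The discriminant is 1 - 4p(1 - p) = (2p - 1)^2, so the two roots
    are x = 1 and x = (1 - p)/p; for p > 0.5, the smaller root is the
    probability that Bob wins.
    """
    disc = math.sqrt(1 - 4 * p * (1 - p))  # equals |2p - 1|
    return (1 - disc) / (2 * p)
```

`bob_win_probability(2 / 3)` returns $ 0.5 $ (up to floating-point error), confirming the fairness condition.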
I hope you enjoyed this problem, and if you have other solutions, feel free to post them in the comments section below!