Showing posts from March, 2016

Beat the Streak 2016: Looking for Help

For anybody who is interested in developing strategies to beat the streak using statistics or machine learning for the 2016 season and beyond, get in touch with me at RMcKenna21@gmail.com. For the past six months or so I have been developing statistical models and designing algorithms to automate the pick-selection process, and now I am looking for like-minded people to help me improve my methods. If you are interested in working together on this problem, let me know and we can start sharing ideas. I have a repository and a fairly nice Python framework for predicting the most likely players to get a hit every day. However, the accuracy of my models is still ~10% lower than my target. I think that if we can develop a model that correctly picks a player 83-85% of the time, then we have a pretty good shot at winning this thing (by "pretty good" I mean roughly 1000 to 1 odds). To get a sense of what I've been doing to solve this problem so far, check out this paper and this
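As a rough sanity check on that odds estimate, here is a back-of-the-envelope calculation (not part of my actual framework): it assumes one pick per day over a ~180-day season and ignores skipped days and the contest's double-down rule, so it is only a simplified sketch of how per-pick accuracy translates into streak odds.

```python
def streak_win_prob(p, days=180, target=57):
    """Probability of completing a `target`-game hit streak within `days`
    picks, given per-pick success probability `p`.

    Simplified model: one pick every day, each pick independent.
    """
    dist = [0.0] * target  # dist[k] = P(current streak length == k)
    dist[0] = 1.0
    won = 0.0
    for _ in range(days):
        new = [0.0] * target
        for k, mass in enumerate(dist):
            if mass == 0.0:
                continue
            if k + 1 == target:
                won += mass * p         # streak completed (absorbing)
            else:
                new[k + 1] += mass * p  # correct pick extends the streak
            new[0] += mass * (1 - p)    # a miss resets the streak to 0
        dist = new
    return won

print(streak_win_prob(0.84))  # on the order of 1e-3, i.e. roughly 1000 to 1
```

At 83% accuracy the same calculation lands closer to 1 in 2000, and at 85% closer to 1 in 500, which is where the "83-85%" target range comes from.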

An Alternate Formulation of Markov Decision Processes for Solving Probabilistic Games

Today I'm going to talk about one of the coolest things I know about: Markov Decision Processes (MDPs). I was formally introduced to MDPs in my undergraduate AI course, though I had actually independently discovered something very similar when I analyzed an unfair game. In this blog post, I will generalize my formulation of MDPs, which has many similarities to the traditional setup, but I believe my formulation is better suited for finding theoretically optimal strategies to games with uncertainty.

The Traditional Setup

In the traditional setup, there is some state space, \( S \), and an agent who moves around the state space with some degree of randomness and some degree of choice (actions). Different actions lead to different transition probabilities for the agent. These probabilities are denoted by \( P_a(s, s') \), where \( a \) is the chosen action, \( s \) is the starting state, and \( s' \) is the state transitioned to. As a side note and to solidify your unde
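To make the \( P_a(s, s') \) notation concrete, here is a minimal value-iteration sketch over a toy two-state MDP. The states, actions, rewards, and discount factor here are all made up for illustration; the transition model is stored as a nested dict keyed action-first, mirroring the subscript in \( P_a(s, s') \).

```python
# P[a][s][s2] = P_a(s, s'): probability of moving from s to s2 under action a.
# This toy MDP (two states, two actions) is purely hypothetical.
P = {
    'stay': {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}},
    'move': {0: {0: 0.2, 1: 0.8}, 1: {0: 0.8, 1: 0.2}},
}
R = {0: 0.0, 1: 1.0}  # reward for occupying state s (also hypothetical)
gamma = 0.9           # discount factor

# Value iteration: repeatedly apply the Bellman optimality backup.
V = {s: 0.0 for s in R}
for _ in range(100):
    V = {s: R[s] + gamma * max(
            sum(prob * V[s2] for s2, prob in P[a][s].items()) for a in P)
         for s in V}

print(V)  # state 1 (the rewarding state) ends up with the higher value
```

Since the rewarding state 1 is "sticky" under 'stay', the optimal policy takes 'move' from state 0 and 'stay' in state 1, and the converged values reflect that.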