Posts

Showing posts with the label easy

Evaluating the Sum of a Geometric Sequence

This is a short blog post that shows you an easy and intuitive way to derive the formula for the sum of an infinite geometric series. Let \( 0 \leq p < 1 \), and let \( a \) be some constant; then we wish to find the value of \( x \) such that $$ x = \sum_{k=0}^\infty a p^k $$ Writing out the first few terms of the summation, we get: $$ x = a + a p + a p^2 + a p^3 + \dots $$ Factoring a \( p \) out of every term except the first, we get: $$ x = a + p (a + a p + a p^2 + \dots) $$ Notice that the expression in parentheses is exactly how \( x \) is defined. Replacing that expression with \( x \) leaves us with: $$ x = a + p x $$ Solving this equation for \( x \) yields $$ x = \frac{a}{1-p} $$ Just remember this simple derivation and you will never have to look up the formula for the sum of an infinite geometric series again!
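The closed form above is easy to sanity-check numerically. Here is a minimal sketch comparing a large partial sum against \( a/(1-p) \); the values \( a = 3 \) and \( p = 0.5 \) are arbitrary choices for illustration:

```python
# Compare a partial sum of the geometric series against the closed form a / (1 - p).
a = 3.0
p = 0.5

# Partial sum of the first 60 terms; for p = 0.5 the tail beyond this is negligible.
partial = sum(a * p**k for k in range(60))

# The closed form derived above.
closed_form = a / (1 - p)

print(partial, closed_form)  # both essentially 6.0
```

Running this shows the two values agreeing to within floating-point precision, as the derivation predicts.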

Triangles Containing the Origin

You may have found this site by searching for information regarding Project Euler Problem 184. This is not a solution to that problem. In fact, it really bothers me when I find people who post their solutions to these problems online, especially the higher-level ones. In this blog post, I talk about a simple problem from probability that was motivated by this Project Euler problem, but the solution to this problem is not likely to help you solve that one. Pick three points uniformly at random along the circumference of a unit circle centered at the origin. What is the probability that the triangle connecting these three points contains the origin? Like I said, this problem is not as difficult as the problems that I usually write about, but I decided to write a blog post about it for two main reasons: I wanted to create the animation/simulation for this problem I eventually want to extend this problem to explore all convex polygons instead of just triangles (which is a more diff...
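The post's own simulation isn't shown in this excerpt, but here is a minimal Monte Carlo sketch of the problem. It relies on a standard geometric fact: an inscribed triangle contains the circle's center exactly when no arc gap between consecutive vertices is at least \( \pi \) (i.e., the three points do not all fit in one semicircle). The function name and trial count are illustrative choices:

```python
import math
import random

def triangle_contains_origin(thetas):
    """True when the triangle with vertices at angles `thetas` on the unit
    circle contains the origin: no arc gap between consecutive vertices
    may be pi or larger (the points must not fit in one semicircle)."""
    t = sorted(thetas)
    gaps = [t[1] - t[0], t[2] - t[1], 2 * math.pi - t[2] + t[0]]
    return max(gaps) < math.pi

random.seed(0)
trials = 100_000
hits = sum(
    triangle_contains_origin([random.uniform(0, 2 * math.pi) for _ in range(3)])
    for _ in range(trials)
)
print(hits / trials)  # empirically close to 1/4
```

The estimate lands near 1/4, the well-known answer for three uniform points on a circle.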

Simplifying the Expected Value Formula

Probability theory is by far my favorite field in mathematics. I am particularly interested in expected value problems over a discretized domain. I often find myself thinking up and solving expected value problems that may or may not have any actual relevance. After working on one such problem, I derived a very interesting and useful identity for computing the answer to expected value problems more easily in certain situations. My derivation is easiest to understand when the sample space is the set of natural numbers (i.e., $1, 2, 3, \dots$). In situations where the sample space of a random variable is the set of natural numbers, the expected value is defined by this formula: $$ \mathbf{E}(X) = \sum_{k=1}^{\infty} k \cdot P(X = k) = P(X = 1) + 2 \cdot P(X = 2) + 3 \cdot P(X = 3) + \dots $$ In some cases, this summation may be difficult to evaluate directly. However, if you represent the sum in a special way, a simple but powerful simp...
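The excerpt cuts off before stating the identity, but for natural-number-valued random variables the usual simplification of this kind is the tail-sum formula $\mathbf{E}(X) = \sum_{k=1}^{\infty} P(X \geq k)$ — I'm assuming that is what the post derives. A small sketch checking both forms agree on a fair six-sided die, using exact fractions:

```python
from fractions import Fraction

# P(X = k) for a fair six-sided die, as exact fractions.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

# Direct definition: E[X] = sum over k of k * P(X = k).
direct = sum(k * p for k, p in pmf.items())

# Tail-sum form: E[X] = sum over k >= 1 of P(X >= k).
tail = sum(sum(p for j, p in pmf.items() if j >= k) for k in range(1, 7))

print(direct, tail)  # both 7/2
```

Both computations give exactly 7/2, matching the familiar average of a die roll.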

A Simple Derivation of the Quadratic Formula

In this blog post, I will talk about the famous quadratic formula - a formula for finding the zero(s) of a polynomial equation of degree 2. I first learned this formula in algebra class, but I had always thought of it as a mathematical truth, without actually knowing how it came to be or why it's justified. Now, I have the tools to show that it's correct and to actually derive it using simple algebra. The quadratic formula is used to solve for the zero(s) of an arbitrary polynomial in the form $y = ax^2+bx+c$.  Here it is in its standard form: $$ x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a} $$ My proof of this formula relies on an algebraic trick known as completing the square, which is another useful technique for solving equations of this form. While I personally prefer the completing-the-square method because it's faster, many people prefer using the quadratic formula because it doesn't require much thought or intuition; it's straight plug-and-c...
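The formula translates directly into code. Here is a minimal sketch of a root-finding function (the name `quadratic_roots` is my own choice); using `cmath` means a negative discriminant simply yields complex roots rather than an error:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula (assumes a != 0).
    cmath.sqrt handles a negative discriminant by returning a complex value."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0 has roots 2 and 1
```

For example, $x^2 - 3x + 2$ factors as $(x-1)(x-2)$, and the function returns exactly those two roots.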