### log-dot-exp: generalizing the log-sum-exp trick

In scientific computing applications we often have to work with very small positive numbers. With finite-precision representations (like 32- or 64-bit floats), this can be a problem when precision matters even at the smallest scales. In these cases, it is useful to represent numbers in log space and perform computations directly in that space. Working in log space can make a numerically unstable algorithm stable.

Suppose for example that we want to know the sum of $n$ positive real numbers: $Z = \phi_1 + \dots + \phi_n$. In log space, we would instead work with $\theta_i = \log{(\phi_i)}$. To perform this summation in log space, we need to evaluate the following expression: $$ \log Z = \log \sum_{i} \exp{(\theta_i)} $$ When done naively, this computation can be numerically unstable: if some $\theta_i$ is large, then $\exp{(\theta_i)}$ may overflow. The log-sum-exp trick provides a way to do this computation in a numerically stable manner. In particular, letting $\theta^* = \max_i \theta_i$, we can use the identity $$ \log \sum_{i} \exp{(\theta_i)} = \theta^* + \log \sum_{i} \exp{(\theta_i - \theta^*)} $$ Since every exponent $\theta_i - \theta^*$ is at most zero, no term can overflow, and at least one term ($\exp(0) = 1$) keeps the sum away from zero.
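As a minimal sketch, the log-sum-exp trick can be implemented in a few lines of plain Python (function name and edge-case handling are my own choices, not from the original text):

```python
import math

def logsumexp(thetas):
    """Numerically stable computation of log(sum_i exp(theta_i)).

    Subtracting the maximum before exponentiating keeps every
    exponent <= 0, so math.exp cannot overflow.
    """
    theta_max = max(thetas)
    # If all inputs are -inf, the sum is 0 and its log is -inf;
    # return early to avoid computing (-inf) - (-inf) = nan below.
    if math.isinf(theta_max):
        return theta_max
    return theta_max + math.log(sum(math.exp(t - theta_max) for t in thetas))
```

For instance, a naive `math.log(sum(math.exp(t) for t in [1000.0, 1000.0]))` overflows a 64-bit float, while `logsumexp([1000.0, 1000.0])` correctly returns $1000 + \log 2$.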