It's fun to play with the (max, +) algebra of random variables and work out the resulting distributions.
This turns out to be quite useful for estimating the completion time of dependent parallel jobs.
One example is the straggler problem in MapReduce/Hadoop. In the naive case, the overall completion time is the max over the completion times of all the parallel subtasks.
If the task times have a heavy tail, which they sometimes do, stragglers can make the overall completion time really bad. This can be mitigated by a k-out-of-n setup, where you encode the problem in such a way that only k out of n jobs need to finish to obtain the final result. One can then trade off potentially wasted computation against expected completion time.
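A quick simulation makes the effect concrete. This is a minimal sketch, assuming Pareto-distributed task runtimes (the shape parameter, the number of tasks, and the k value are arbitrary choices for illustration): waiting for all n tasks means paying for the max, while a k-out-of-n setup only pays for the k-th order statistic.

```python
import random

random.seed(0)

def task_time(alpha=1.5):
    # Hypothetical heavy-tailed task runtime: Pareto with shape alpha
    return random.paretovariate(alpha)

def completion_time(n, k):
    # With n parallel tasks and only k needed, the job finishes
    # when the k-th fastest task finishes (k-th order statistic).
    times = sorted(task_time() for _ in range(n))
    return times[k - 1]

trials = 10_000
n = 10
naive = sum(completion_time(n, n) for _ in range(trials)) / trials  # wait for all 10
kofn = sum(completion_time(n, 8) for _ in range(trials)) / trials   # need only 8 of 10
print(f"wait for all {n}: {naive:.2f}   need 8 of {n}: {kofn:.2f}")
```

Dropping just the two slowest tasks cuts the expected completion time dramatically, because the max of heavy-tailed draws is dominated by the occasional huge straggler.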
For heavy-tailed distributions another simplification is possible: the tails of max and + become the same order, so in the tail one can switch between convolutions (for sums) and products of CDFs (for maxima).
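This is the subexponential property: for large t, P(X + Y > t) ≈ P(max(X, Y) > t), since a heavy-tailed sum exceeds a high threshold almost always because a single summand does. A minimal Monte Carlo check, again assuming Pareto variables with an arbitrary shape and threshold:

```python
import random

random.seed(1)

alpha, t, N = 2.0, 30.0, 200_000  # arbitrary illustrative parameters
sum_exceeds = 0
max_exceeds = 0
for _ in range(N):
    x = random.paretovariate(alpha)
    y = random.paretovariate(alpha)
    if x + y > t:
        sum_exceeds += 1
    if max(x, y) > t:
        max_exceeds += 1

# The two tail probabilities should be close for large t
print(sum_exceeds / N, max_exceeds / N)
```

The two empirical tail probabilities come out within a few tens of percent of each other, and the ratio tends to 1 as t grows; for a light-tailed distribution like the exponential, the sum's tail would instead be much fatter than the max's.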
Nowadays I think this is solved in an entirely different way, though.