It's fun to play with the (Max, +) algebra of random variables and infer the resulting distributions.
This turns out to be quite useful in estimating the completion time of dependent parallel jobs.
Spawning multiple parallel jobs becomes a Max operation on the completion times, and chaining sequential jobs becomes a '+' operation. This expression tree of Maxes and Pluses can be processed algebraically to obtain bounds on the completion time distribution.
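For concreteness, here is a minimal Monte Carlo sketch of such a tree (the exponential task times and the particular tree shape are my own toy assumptions, not from the discussion): two parallel jobs A and B feed a sequential job C, so the completion time is max(A, B) + C.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical task times: A and B run in parallel, C runs after both finish.
    A = rng.exponential(scale=1.0, size=n)
    B = rng.exponential(scale=1.0, size=n)
    C = rng.exponential(scale=0.5, size=n)

    # (Max, +) composition of the expression tree: max(A, B) + C
    total = np.maximum(A, B) + C

    print("mean completion time:", total.mean())
    print("99th percentile     :", np.percentile(total, 99))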
One example is the straggler problem in MapReduce/Hadoop. In the naive case, the completion time is the max of the completion times of the parallel subtasks. (*)
If the tasks have a heavy tail, which they sometimes do, the straggler's completion time can be really bad. This can be mitigated by a k-out-of-n setup, where you encode the problem in such a way that only k out of the n jobs need to finish to obtain the final result. One can play with this trade-off between potentially wasted computation and expected completion time.
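A rough sketch of that trade-off, assuming Pareto (heavy-tailed) task times and an arbitrary n = 10, k = 8: waiting only for the k fastest of n redundant jobs clips the straggler tail, at the price of the n - k extra jobs.

    import numpy as np

    rng = np.random.default_rng(1)
    n_jobs, k, trials = 10, 8, 100_000

    # Heavy-tailed (Pareto) subtask completion times, one row per trial.
    times = rng.pareto(a=1.5, size=(trials, n_jobs)) + 1.0

    wait_all = times.max(axis=1)                 # must wait for all n jobs
    wait_k = np.sort(times, axis=1)[:, k - 1]    # only need the k fastest to finish

    print("p99 waiting for all n  :", np.percentile(wait_all, 99))
    print("p99 waiting for k of n :", np.percentile(wait_k, 99))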
For heavy-tailed distributions another simplification is possible: the tails of Max and + become of the same order, so one can switch between convolutions and products.
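A quick numerical check of that tail equivalence, with arbitrarily chosen Pareto tails of index 1.5: for large thresholds, the sum and the max exceed them about equally often.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.pareto(a=1.5, size=1_000_000) + 1.0
    Y = rng.pareto(a=1.5, size=1_000_000) + 1.0

    for t in [10, 50, 200]:
        p_sum = np.mean(X + Y > t)
        p_max = np.mean(np.maximum(X, Y) > t)
        print(f"t={t:4d}  P(X+Y>t)={p_sum:.5f}  P(max>t)={p_max:.5f}  ratio={p_sum/p_max:.2f}")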
(*) This shows up in microservices architectures as well. The owner/maintainer of a microservice endpoint might be very happy with its tail latency. However, an end user who consumes and composes the results of several endpoints can experience really bad tail latencies for the final output.
Nowadays I think this is solved in an entirely different way, though.
It's common to wrap API calls with:
retry on failure, or
spawn an identical request if taking longer than x, or
recursively spawn an identical request if taking longer than x, or
retry on failure, but no more than k times.
All of these and similar patterns/decorators can be analysed using the same idea (see the sketch below).
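For instance, the "spawn an identical request if taking longer than x" decorator can be simulated directly; the latency distribution and the cutoff below are just my illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    x = 2.0  # hedge cutoff: launch a duplicate if the call is still running after x

    # Hypothetical heavy-tailed call latencies for the original and the backup request.
    first = rng.pareto(a=2.0, size=n) + 0.5
    backup = rng.pareto(a=2.0, size=n) + 0.5

    # Either the first call beats the cutoff, or we race it against a duplicate started at time x.
    hedged = np.where(first <= x, first, np.minimum(first, x + backup))

    print("p99 without hedging:", np.percentile(first, 99))
    print("p99 with hedging   :", np.percentile(hedged, 99))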
That said, silloncito's warning does need to be heeded.
While independence is essential for the proof to go through, the relationships need not break catastrophically when independence is violated; usually the degradation is graceful in the degree of dependence. There are, however, specific, often degenerate, theoretical edge cases where the degradation is rapid.
Max and Plus in the space of random variables become product and convolution in the space of their distribution functions.
Distr(X + Y) = Distr X ° Distr Y
Distr(X ^ Y) = Distr X * Distr Y
Where '^' denotes Max and '°' denotes convolution.
Note that *, +, ° and ^ are commutative and associative, so they can be chained. One can also use their distributive properties. This is really the math of groups and rings. However, one can, and one does, resort to approximations to compute the desired end results.
More specifically, people are often interested not in the full distribution but in some statistics: the mean, the standard deviation, some tail percentile, etc. To compute those statistics from the exact distributions, approximations can be employed.
Max of variables = product of cumulative distribution functions.
Sum of variables = convolution of probability density functions.
So both of the equations you write down are correct, but only if you interpret "Distr" as meaning different things in the two cases.
[EDITED to add:] Provided the random variables in question are independent, as mentioned elsewhere in the discussion; if they aren't then none of this works.
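A small Monte Carlo sanity check of both statements for two independent Exp(1) variables (that particular choice is mine, because the closed forms are easy): the max matches the product of the CDFs, and the sum matches the Gamma(2,1) law obtained by convolving the densities.

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.exponential(size=1_000_000)
    Y = rng.exponential(size=1_000_000)

    F = lambda t: 1 - np.exp(-t)  # Exp(1) CDF
    for t in [0.5, 1.0, 2.0]:
        # Max of variables  <->  product of CDFs
        print(t, np.mean(np.maximum(X, Y) <= t), F(t) * F(t))
        # Sum of variables  <->  convolution of densities, i.e. the Gamma(2,1) CDF
        print(t, np.mean(X + Y <= t), 1 - np.exp(-t) * (1 + t))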
I just carried through that assumption of independence in my own comment, thinking it was obvious to do that (carry over the assumptions).
Both ideas work for the cumulative distribution function, which in math is often called just the distribution function. I think he got confused by the fact that the convolution relation also works with densities (so he might have assumed it works with densities only and not with distribution functions).
Let's take the simplest possible case: a "random" variable that's always equal to 0. Its cdf is a step function: 0 for negative values, 1 for positive values. (Use whatever convention you prefer for the value at 0.)
The sum of two such random variables is another with the same distribution, of course. So, what's the convolution of the cdfs?
Answer: it's not even well defined.
The convolution of functions f and g is the function h such that h(x) = integral over t of f(t) g(x-t) dt. The integral is over the whole of (in this case) the real numbers.
In this case f and g are both step functions as described above, so (using the convenient Iverson bracket notation for indicator functions) this is the integral over t of [t>0] [x-t>0] dt, i.e., of [0<t<x] dt, whose value is 0 for negative x and x for positive x.
This is not the cdf of any probability distribution since it doesn't -> 1 as x -> oo. In particular, it isn't the cdf of the right probability distribution which, as mentioned above, would be the same step function as f and g.
If X,Y are independent with pdfs f,g and cdfs F,G then the cdf of X+Y is (not F conv G but) f conv G = F conv g.
It is not at all true that "it's not even well defined", and indeed the following couple of paragraphs determine exactly what the thing in question is. It's not an actual cdf, but the problem isn't that it's ill-defined but that it's well-defined but has the wrong shape to be a cdf.
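A discretised illustration of that distinction for two independent Exp(1) variables (grid and step size are arbitrary choices of mine): F conv G keeps growing well past 1, so it cannot be a CDF, while f conv G reproduces the true CDF of X + Y.

    import numpy as np

    dt = 0.01
    t = np.arange(0, 20, dt)

    f = np.exp(-t)        # Exp(1) density
    F = 1 - np.exp(-t)    # Exp(1) CDF

    FconvG = np.convolve(F, F)[:len(t)] * dt  # F conv G: grows without bound
    fconvG = np.convolve(f, F)[:len(t)] * dt  # f conv G: the CDF of X + Y
    true_cdf = 1 - np.exp(-t) * (1 + t)       # Gamma(2,1) CDF for comparison

    print("at t = 10:", FconvG[1000], fconvG[1000], true_cdf[1000])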
For M = max(X, Y) with X, Y independent: $P(M \leq K) = P(X \leq K) \times P(Y \leq K) = F_X(K) \times F_Y(K)$.
So $F_M = F_X \times F_Y$.

In another post there are some comments on the connection between topology and deep learning. I wonder if there is a definition similar to dimension in topology which would allow you to estimate the minimal size (number of parameters) of a neural network so that it is able to achieve a certain state (for example, attaining the capacity for one-shot learning with high probability).

We share an interest in AWK (*) then :) I don't know OS at all. Did you imply I know Lisp? I enjoy Scheme, but have never used it in anger. Big fan of The Little Schemer series of books.
(*) Have to find that Weinberger face Google-NY t-shirt. Little treasures.
Regarding your dimensions comment, this is well understood for a single layer, that is, for logistic regression. Lehmann's book will have the necessary material. With multiple layers it gets complicated real fast.
The best performance estimates, as in within the realm of being practically useful, largely come from two approaches: PAC-Bayesian bounds and statistical physics (but these bounds are data-distribution dependent). The intrinsic dimension of the data plays a fundamental role there.
The recommended place to dig around is JMLR (journal of machine learning research).
I meant Lehmann's Theory of Point Estimation, but his Large-Sample Theory is a good book too. The newer editions of TPE are a tad hefty in the number of pages; the earlier editions would serve you fine.
The generic idea is that the smaller these dimensions, the easier the prediction problem. Intrinsic dimension is the one that comes closest to topology. VC is very combinatorial and gives the worst of worst-case bounds; for a typically sized dataset one ends up with an error probability estimate of less than 420. With PAC-Bayes the bounds are at least less than 1.0.
> Did you get it from LLM
LOL. There must be a fun and guilty story lurking inside the accusation.
On a more serious note, I would love it if LLMs could do such simplifications and estimations on their own.
https://en.m.wikipedia.org/wiki/Distribution_function
It's right in the title.
In probability theory, integration theory, as well as electrical engineering, "distribution function", unless further clarified, means that cumulative thing.
In math, nomenclature overloading can be a problem, so context matters. In the context of the Dirac delta, "distribution" means something else entirely -- generalized functions.
Oh! So sad you deleted your point about densities. One can only laugh at and enjoy these idiosyncrasies of nomenclature.
In Electrical Engineering one uses j for imaginary numbers because i is taken (by current).
The degree of dependence matters though. Mutual information is one way to measure that.
Thankfully, some theorems remain valid even when independence is violated. The next stop after independence is martingale criteria: martingale difference sequences can be quite strongly dependent yet still allow some of the usual theorems to go through, albeit with worse convergence rates.
Sometimes it makes things simpler (quite a lot of things in combinatorics); other times it is a tool for nice tricks (I have no idea how I would have solved these equations if it were not for generating functions; see the appendix of a Mafia game paper, https://arxiv.org/abs/1009.1031).
Generating functions and Z-transforms are indispensable in probability theory, physics, signal processing, and now, it seems, for a good round of Mafia while camping with friends.
                   3   1   2   1
        ×              2   0   6
        ------------------------
                  18   6  12   6
               0   0   0   0
           6   2   4   2
        ------------------------
           6   2  22   8  12   6

    = 6x^5 + 2x^4 + 22x^3 + 8x^2 + 12x^1 + 6x^0
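That multiplication is exactly a convolution of the coefficient lists; a quick illustrative check with numpy:

    import numpy as np

    # Coefficients of 3x^3 + x^2 + 2x + 1 and 2x^2 + 0x + 6, highest degree first.
    print(np.convolve([3, 1, 2, 1], [2, 0, 6]))
    # -> [ 6  2 22  8 12  6], i.e. 6x^5 + 2x^4 + 22x^3 + 8x^2 + 12x + 6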
Now divide 2x^2 + 5x - 3 by x + 2:

                       2x + 1
               ______________
        x + 2 | 2x^2 + 5x - 3
                2x^2 + 4x
                ---------
                        x - 3
                        x + 2
                        -----
                          - 5

The remainder is -5, which gives us a -5/(x + 2) term. Thus

    (2x^2 + 5x - 3)/(x + 2) = 2x + 1 - 5/(x + 2).
How about something we know divides:

                       x + 1
               _____________
        x + 1 | x^2 + 2x + 1
                x^2 +  x
                --------
                       x + 1
                       x + 1
                       -----
                           0
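The same two divisions can be checked with numpy's polynomial division (coefficients listed highest degree first; purely illustrative):

    import numpy as np

    q, r = np.polydiv([2, 5, -3], [1, 2])  # (2x^2 + 5x - 3) / (x + 2)
    print(q, r)                            # [2. 1.] [-5.]  -> quotient 2x + 1, remainder -5

    q, r = np.polydiv([1, 2, 1], [1, 1])   # (x^2 + 2x + 1) / (x + 1)
    print(q, r)                            # [1. 1.] [0.]   -> quotient x + 1, remainder 0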