While by no means logically incorrect, it feels inelegant to set up a problem using variables A and B in the first paragraph and solve for X and Y in the second (compounded by the implicit X == B and Y == A).
1. How to Write Mathematics — Paul Halmos
2. Mathematical Writing — Donald Knuth, Tracy Larrabee, and Paul Roberts
3. Handbook of Writing for the Mathematical Sciences — Nicholas J. Higham
4. Writing Mathematics Well — Steven Gill Williamson
My complaint stems more from the general observation that readability is prized in math and programming, but not emphasized in traditional education curricula to the degree it is in writing.
Bad style is seldom commented on in our profession.
----
Write v_i = Var[X_i]. John writes
t_i = \frac{\prod_{j\ne i} v_j}{\sum_{k=1}^n \prod_{j\ne k} v_j}.
But if you multiply top and bottom by (1 / \prod_{m=1}^n v_m), you just get t_i = \frac{1/v_i}{\sum_{k=1}^n 1/v_k}.
No need to compute elementary symmetric polynomials. If you plug those optimal (t_i) back into the variance, you get
\min Var[\sum t_i X_i] = 1/(\sum_{k=1}^n 1/v_k) = H/n,
where `H = n / (\sum_{k=1}^n 1/v_k)` is the harmonic mean of the variances.
----
ADDED. Because the new functionality will be used to create cutesy effects for reasons that have nothing to do with communicating math, increasing the demand for moderation work.
edit: Nobody is going to use maths for cutesy effects. Where have you ever seen that happen? Downvote them if they do. It is not going to be a big deal.
Also, note that the "precision" τ is defined as 1/σ².
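To make the formulas concrete, here is a small numerical sketch (the variances are made-up example numbers, not from the post) computing the inverse-variance weights and checking that the minimized variance equals 1/(\sum_k 1/v_k):

```python
# Sketch: inverse-variance weighting for minimizing Var[sum_i t_i X_i]
# subject to sum_i t_i = 1, assuming independent X_i with Var[X_i] = v_i.
# The example variances below are made up.

v = [1.0, 4.0, 9.0]                      # v_i = Var[X_i]
precisions = [1.0 / vi for vi in v]      # tau_i = 1 / v_i
total_precision = sum(precisions)

# Optimal weights: t_i = (1/v_i) / sum_k (1/v_k)
t = [p / total_precision for p in precisions]

# Variance of the weighted sum: sum_i t_i^2 v_i (independence assumed)
min_var = sum(ti**2 * vi for ti, vi in zip(t, v))

# The closed form says this equals 1 / sum_k (1/v_k)
assert abs(min_var - 1.0 / total_precision) < 1e-12
assert abs(sum(t) - 1.0) < 1e-12
print(t, min_var)
```

Note the weights come out proportional to precision, so the lowest-variance variable gets the biggest (but not all the) weight.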
t_i Var[X_i] = t_j Var[X_j]
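That balance condition drops out of a short Lagrange-multiplier argument (a sketch, using v_i = Var[X_i] from above):

```latex
% Minimize Var[\sum_i t_i X_i] = \sum_i t_i^2 v_i subject to \sum_i t_i = 1.
% Setting the gradient of the Lagrangian to zero:
\frac{\partial}{\partial t_i}\Big(\sum_j t_j^2 v_j - \lambda \sum_j t_j\Big)
  = 2\, t_i v_i - \lambda = 0
\quad\Rightarrow\quad
t_i v_i = \frac{\lambda}{2} \ \text{ for all } i
\quad\Rightarrow\quad
t_i \propto \frac{1}{v_i}.
```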
It's a real-life problem that you can solve directly in the simple case, and handle by invoking a theorem in the general case.
Sure, it's unintuitive that I shouldn't go all in on the smallest variance choice. That's a great start. But, learning the formula and a proof doesn't update that bad intuition. How can I get a generalizable feel for these types of problems? Is there a more satisfying "why" than "because the math works out"? Does anyone else find it much easier to criticize others than themselves and wants to proofread my next blog post?
Once you have that intuition, the math just tells you what the optimal mix is, if you want to minimize the variance.
Is it?
You have ten estimates of some distance, each with similar accuracy on the order of 10 m: you take the average (and reduce the error by more than half).
If you increase the precision of one measure by 1% you will disregard all the others?
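The averaging claim can be checked numerically; a minimal sketch, assuming ten independent measurements each with a 10 m standard error:

```python
import random
import statistics

# Sketch: averaging ten independent measurements, each with ~10 m error,
# should cut the error to 10/sqrt(10), i.e. roughly a third (independence
# of the measurement errors is assumed).
random.seed(0)
sigma = 10.0
n_measurements, n_trials = 10, 20000

errors = []
for _ in range(n_trials):
    measurements = [random.gauss(0.0, sigma) for _ in range(n_measurements)]
    errors.append(statistics.fmean(measurements))

empirical = statistics.pstdev(errors)
print(empirical)  # close to 10 / sqrt(10)
```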
Quantitatively speaking, for 0 < t < 1 both t^2 and (1-t)^2 are strictly less than 1. As such, the standard deviation of a convex combination of independent variables is *always strictly smaller* than the same convex combination of the standard deviations of the variables. In other words, stddev(sum_i t_i X_i) < sum_i t_i stddev(X_i) whenever the t_i are positive, sum to 1, and at least two of them are nonzero.
What this means in practice is that the standard deviation of a convex combination (that is, with positive coeffs < 1) of any number of independent random variables is always strictly smaller than the weighted average of their standard deviations, and in particular smaller than the largest of them.
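A quick numerical check of that inequality (the example standard deviations are made up; independence is assumed):

```python
import math

# Sketch: for independent X, Y with weights t and 1-t (0 < t < 1),
# stddev(t*X + (1-t)*Y) < t*stddev(X) + (1-t)*stddev(Y).
# Example standard deviations are made up.
sd_x, sd_y = 2.0, 5.0

for t in (0.1, 0.5, 0.9):
    combined_sd = math.sqrt(t**2 * sd_x**2 + (1 - t)**2 * sd_y**2)
    weighted_avg_sd = t * sd_x + (1 - t) * sd_y
    # The combination beats the weighted average of the stddevs...
    assert combined_sd < weighted_avg_sd
    # ...but is not necessarily below min(sd_x, sd_y): at t = 0.5 the
    # combined stddev is about 2.69, which exceeds sd_x = 2.0.
    print(t, combined_sd, weighted_avg_sd)
```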
He also frames it with a different goal: normally when we (as physicists) talk about combining random variables, we think of them as different measurements of the same thing. But he didn’t even assume that: he’s saying that if you want a weighted sum of random variables, not necessarily measurements of the same quantity (e.g. sharing the same mean), this is still the optimal solution if all you care about is minimal variance. His example is stocks, where if all you care about is your “index” being less volatile, inverse-variance weighting is also optimal.
As I’m not a finance person, this is new to me (the math is exactly the same, just different conceptually in what you think the X_i s are).
I wish he had mentioned inverse-variance weighting by name, though, just to draw the connection. Many comments here would be unnecessary if he had.
whatever1•2mo ago
Don’t make decisions for evolving systems based on statistics.
Insider info on the other hand works much better.
JohnCClarke•2mo ago
energy123•2mo ago
pinkmuffinere•2mo ago
energy123•2mo ago
You'll also see more ad hoc approaches, such as simulating hypothetical scenarios to determine worst case scenarios.
It's not math heavy. Math heavy is a smell. Expect to see fairly simple monte carlo simulations, but with significant thought put into the assumptions.
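A toy version of what such a simulation might look like (the two-asset portfolio, normal-return assumption, and parameters are entirely hypothetical; the point is that the code is simple and the assumptions carry the weight):

```python
import random

# Toy Monte Carlo: simulate one-year portfolio returns and look at an
# empirical worst-case tail. The normal-return model and all numbers
# here are illustrative only.
random.seed(1)
weights = [0.6, 0.4]        # hypothetical two-asset portfolio
means = [0.07, 0.02]        # assumed annual mean returns
stdevs = [0.15, 0.05]       # assumed annual return stdevs

def simulate_once():
    # One simulated annual portfolio return (assets drawn independently).
    return sum(w * random.gauss(m, s)
               for w, m, s in zip(weights, means, stdevs))

returns = sorted(simulate_once() for _ in range(10000))
worst_5pct = returns[len(returns) // 20]   # empirical 5th percentile
print(worst_5pct)
```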
pinkmuffinere•2mo ago
mhh__•2mo ago
Markowitz isn't really used at all, but Markowitz-like reasoning is used extremely heavily in finance, by which I basically mean factor modelling of various kinds: effectively the result of taking mean-variance as a concept and applying some fairly aggressive dimensionality reduction to cope with the problems of financial data, and with the fact that one has proprietary views about things ("alpha" and so on).
pinkmuffinere•2mo ago
kgwgk•2mo ago
This may be one reason but the return part is much more problematic than the risk part.
energy123•2mo ago
ijidak•2mo ago
Price variance is a noisy statistic not based on any underlying data about a company, especially if we believe that stock prices are truly random.
mhh__•2mo ago
CGMthrowaway•2mo ago