Lattice is really low-level. It's like doing vis with matplotlib (requires a lot of time and hair-pulling). Higher level interfaces boost productivity.
OTOH in a pipeline, you're mutating/summarising/joining a data frame, and it's really difficult to look at it and keep track of what state the data is in. I try my best to write in a way that you understand the state of the data (hence the tables I spread throughout the post), but I do acknowledge it can be inscrutable.
set.seed(10)
n <- 10000; samp_size <- 60

## Population: one column per distribution
df <- data.frame(
  uniform     = runif(n, min = -20, max = 20),
  normal      = rnorm(n, mean = 0, sd = 4),
  binomial    = rbinom(n, size = 1, prob = .5),
  beta        = rbeta(n, shape1 = .9, shape2 = .5),
  exponential = rexp(n, .4),
  chisquare   = rchisq(n, df = 2)
)

## Draw one sample of size samp_size and return the column means
sf <- function(df, samp_size) {
  sdf <- df[sample.int(nrow(df), samp_size), ]
  colMeans(sdf)
}

## Repeat 20,000 times: one row of sample means per replication
sim <- t(replicate(20000, sf(df, samp_size)))
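If you want to eyeball the result, here is a minimal base-graphics sketch (nothing assumed beyond the code above; sim keeps the distribution names as column names):

par(mfrow = c(2, 3))
for (d in colnames(sim)) hist(sim[, d], breaks = 50, main = d, xlab = "sample mean")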
I am old, so I do not like the tidyverse either -- I concede it is a matter of personal preference, though. (Personally, I do not agree with the lattice vs ggplot comment, for example.)

library(lattice)

## Population: one column per distribution
population_data <- data.frame(
  uniform     = runif(10000, min = -20, max = 20),
  normal      = rnorm(10000, mean = 0, sd = 4),
  binomial    = rbinom(10000, size = 1, prob = .5),
  beta        = rbeta(10000, shape1 = .9, shape2 = .5),
  exponential = rexp(10000, .4),
  chisquare   = rchisq(10000, df = 2)
)

## One histogram panel per distribution, free x scales
histogram(~ values | ind, stack(population_data),
          layout = c(6, 1),
          scales = list(x = list(relation = "free")),
          breaks = NULL)
## Mean and sd of one random sample from a vector
take_random_sample_mean <- function(data, sample_size) {
  x <- sample(data, sample_size)
  c(mean = mean(x), sd = sd(x))
}

## 2 x 6 x 20000 array: (statistic, distribution, replication)
sample_statistics <- replicate(20000, sapply(population_data, take_random_sample_mean, 60))
sample_mean <- as.data.frame(t(sample_statistics["mean", , ]))
sample_sd   <- as.data.frame(t(sample_statistics["sd", , ]))

histogram(sample_mean[["uniform"]])
histogram(sample_mean[["binomial"]])
histogram(~ values | ind, stack(sample_mean),
          layout = c(6, 1),
          scales = list(x = list(relation = "free")),
          breaks = NULL)
sample_size <- 60

## Base-R version; population_dataB mirrors the population_data frame built above
## (the original definition isn't shown in this thread, so the same populations are assumed)
population_dataB <- population_data

## For each distribution: 20000 samples of size 60, keeping each sample's mean and sd
sample_meansB <- lapply(population_dataB, function(x) {
  t(apply(replicate(20000, sample(x, sample_size)), 2,
          function(s) c(sample_mean = mean(s), sample_sd = sd(s))))
})
lapply(sample_meansB, head) ## check first rows

## Population mean, sd and size for each distribution
population_data_statsB <- lapply(population_dataB, function(x) c(population_mean = mean(x),
                                                                 population_sd = sd(x),
                                                                 n = length(x)))
do.call(rbind, population_data_statsB) ## stats table

## Standardise the sample means: (x-bar - mu) / (sigma / sqrt(n))
cltB <- mapply(function(s, p) (s[, "sample_mean"] - p["population_mean"]) / (p["population_sd"] / sqrt(sample_size)),
               sample_meansB, population_data_statsB)
head(cltB) ## check first rows

## Repeat with a small sample size and check confidence-interval coverage
small_sample_size <- 6
repeated_samplesB <- lapply(population_dataB, function(x) {
  t(apply(replicate(10000, sample(x, small_sample_size)), 2,
          function(s) c(sample_mean = mean(s), sample_sd = sd(s))))
})

## Normal-approximation 95% interval for each small sample
conf_intervalsB <- lapply(repeated_samplesB, function(x) {
  sapply(c(lower = 0.025, upper = 0.975), function(q) {
    x[, "sample_mean"] + qnorm(q) * x[, "sample_sd"] / sqrt(small_sample_size)
  })
})

## Does each interval contain the true population mean?
within_ci <- mapply(function(ci, p) (p["population_mean"] > ci[, "lower"] & p["population_mean"] < ci[, "upper"]),
                    conf_intervalsB, population_data_statsB)
apply(within_ci, 2, mean) ## coverage
One can do simple plots, similar to the ones on that page, as follows:

par(mfrow = c(2, 3), mex = 0.8)
## population densities
for (d in colnames(population_dataB)) plot(density(population_dataB[, d], bw = "SJ"), main = d, ylab = "", xlab = "", las = 1, bty = "n")
## densities of the standardised sample means
for (d in colnames(cltB)) plot(density(cltB[, d], bw = "SJ"), main = d, ylab = "", xlab = "", las = 1, bty = "n")
## normal Q-Q plots of the standardised sample means
for (d in colnames(cltB)) { qqnorm(cltB[, d], main = d, ylab = "", xlab = "", las = 1, bty = "n"); qqline(cltB[, d], col = "red") }

Tidyverse is imperfect, and it feels heavy-handed and awkward to replace all the major standard library functions, but the Tidyverse stuff is way more ergonomic.
> I think the problem is the lack of tutorials that explain how to use all the data manipulation tools effectively, because there are quite a lot of functions and it isn't easy to figure out how to use them together to accomplish practical things.
Most languages solve this problem by not cramming so many functions into one package, and by using shared design concepts to make it easier to fit them together. I don't think tutorials would solve these problems effectively, but I guess it makes sense that they affect newer users the most.
> Tidyverse may be consistent with itself, but it's inconsistent with everything else.
Yeah, totally agree and I really dislike this part.
(yes, the same Galton who founded eugenics)
The code style - and in particular the *comments - indicates most of the code was written by AI. My apologies if you are not trying to hide this fact, but it seems like common decency to label it when you're heavily using AI?
*Comments like this: "# Anonymous function"
Is there a threshold? I assume spell checkers, linters and formatters are fair game. The other extreme is full-on AI slop. Where should we, as a society, start to feel the need to police this (better)?
the only exception being contexts that explicitly prohibit it.
Edit: just found this disclaimer in the article:
> I’ll show the generating R code, with a liberal sprinkling of comments so it’s hopefully not too inscrutable.
It doesn't come out of the gate and say who wrote the comments, but OP is ostensibly a new grad / junior, and the commenting style is on-brand.
I use Rmarkdown, so the code that's presented is also the same code that 'generates' the data/tables/graphs (source: https://github.com/gregfoletta/articles.foletta.org/blob/pro...).
I had read that line before I commented, it was partly what sparked me to comment as it was a clear place for a disclaimer.
In other words, it is possible (given sufficiently weird distributions) that not a single sample lands inside one standard deviation, but 75% of them must be inside two standard deviations, 88% inside three standard deviations, and so on.
There's also a one-sided version of it (Cantelli's inequality) which bounds the probability of any sample by 1/(1+k)², meaning at least 75 % of samples must be less than one standard deviation, 88% less than two standard deviations, etc.
Think of this during the next financial crisis when bank people no doubt will say they encountered "six sigma daily movements which should happen only once every hundred million years!!" or whatever. According to the CLT, sure, but for sufficiently odd distributions the Cantelli bound might be a more useful guide, and it says six sigma daily movements could happen as often as every fifty days.
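A quick base-R illustration of the first point above, using a symmetric two-point distribution (values -1 and +1, so the population mean is 0 and the standard deviation is 1): no draw lands strictly inside one standard deviation, yet every draw is inside two.

x <- sample(c(-1, 1), 1e5, replace = TRUE)
mean(abs(x - 0) < 1)   ## 0: nothing strictly within one sd of the mean
mean(abs(x - 0) < 2)   ## 1: everything within two sds (Chebyshev only demands >= 75%)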
This means as little as 50% can be less than one standard deviation, as little as 80% below two standard deviations, etc.
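Tabulating the corrected one-sided bound, P(X - mu >= k*sigma) <= 1/(1 + k^2), gives those guaranteed minimums (and revises the six-sigma arithmetic above to roughly once every 37 days):

k <- 1:6
data.frame(k = k, at_least_this_fraction_below = 1 - 1/(1 + k^2))
## k = 1 -> 50%, k = 2 -> 80%, ..., k = 6 -> ~97.3% (i.e. exceeded as often as ~1 day in 37)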
There are methods to calculate how many estimated samples you need. It’s not in the 20k unless your population is extremely high
https://en.m.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem
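For a sense of what the Berry-Esseen bound gives in practice, here is a rough sketch assuming an Exponential(1) population (the constant 0.4748 is one published upper bound for C, not part of the statement at the link; classical proofs give larger values):

## sup_x |F_n(x) - Phi(x)| <= C * E|X - mu|^3 / (sigma^3 * sqrt(n))
rho <- integrate(function(x) abs(x - 1)^3 * dexp(x, rate = 1), 0, Inf)$value  ## third absolute central moment
C <- 0.4748
n <- c(30, 60, 1000, 20000)
data.frame(n = n, worst_case_cdf_error = C * rho / (1^3 * sqrt(n)))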
Common misconception. Population size has almost nothing to do with the necessary sample size. (It does enter into the finite population correction factor, but that's only really relevant if you have a small population, not a large one.)
...actually, come to think of it, you meant to write "unless your population variance is extremely high", right?
Note that this generalization of the classical CLT relaxes the requirement of finite mean and variance but still requires that the summed random variables are iid. There are further generalizations to sums of dependent random variables. John D. Cook has a good blog post that gives a quick overview of these generalizations [1].
0. https://edspace.american.edu/jpnolan/wp-content/uploads/site... [PDF]
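A minimal illustration of why the finite-variance condition matters, using a Cauchy population (the alpha = 1 stable law): the mean of n iid Cauchy draws is itself standard Cauchy, so averaging never produces a normal.

means <- replicate(20000, mean(rcauchy(1000)))
quantile(means, c(0.01, 0.5, 0.99))        ## heavy tails persist, no matter how large n is
qqnorm(means); qqline(means, col = "red")  ## dramatically non-normal Q-Q plot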
The CLT gives a result about a recentered and rescaled version of the sum of iid variates. CLT does not give a result about the sum itself, and the article is invoking such a result in the “files” and “lakes” examples.
I’m aware that it can appear that CLT does say something about the sum itself. The normal distribution of the recentered/rescaled sum can be translated into a distribution pertaining to the sum itself, due to the closure of Normals under linear transformation. But the limiting arguments don’t work any more.
What I mean by that statement: in the CLT, the errors of the distributional approximation go to zero as N gets large. For the sum, of course the error will not go to zero - the sum itself is diverging as N grows, and so is its distribution. (The point of centering and rescaling is to establish a non-diverging limit distribution.)
So for instance, the third central moment of the Gaussian is zero. But the third central moment of a sum of N iid exponentials will diverge quickly with N (it’s a gamma with shape parameter N). This third-moment divergence will happen for any base distribution with non-zero skew.
The above points out another fact about the CLT: it does not say anything about the tails of the limit distribution. Just about the core. So CLT does not help with large deviations or very low-probability events. This is another reason the post is mistaken, which you can see in the “files” example where it talks about the upper tail of the sum. The CLT does not apply there.
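A quick numerical check of both halves of that statement (illustrative sketch, 20,000 simulated sums per N, Exponential(1) summands: the third central moment of the sum is 2N, while the skewness of the standardised sum is 2/sqrt(N)):

third_central <- function(x) mean((x - mean(x))^3)
for (N in c(4, 16, 64, 256)) {
  sums <- colSums(matrix(rexp(N * 20000, rate = 1), nrow = N))  ## 20000 sums of N draws
  cat(sprintf("N = %3d   third central moment ~ %7.1f   skewness ~ %.3f\n",
              N, third_central(sums), third_central(sums) / sd(sums)^3))
}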
Here is a notebook with some more graphs and visualizations of the CLT: https://nobsstats.com/site/notebooks/28_random_samples/#samp...
runnable link: https://mybinder.org/v2/gh/minireference/noBSstats/main?labp...
That's a good observation. The main idea behind the proof of the Central Limit Theorem is to take the Fourier transform, operate there, and then transform back. After normalization, the distribution of the (standardized) sum of N variables comes out as something like

Normal(X) + 1/sqrt(N) * "Skewness" * Something(X) + 1/N * (terms I don't remember)(X) + ...

where "Skewness" is the number defined in https://en.wikipedia.org/wiki/Skewness

The uniform distribution is symmetric, so skewness = 0 and the leading correction decreases like 1/N. The exponential distribution is very asymmetrical, with skewness != 0, so the main correction is of order 1/sqrt(N) and takes longer to disappear.
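A rough simulation along those lines (sketch only): compare how far the standardised sample mean sits from N(0,1), via the Kolmogorov-Smirnov distance, for a symmetric (uniform) versus a skewed (exponential) population.

std_means <- function(rdist, mu, sigma, n, reps = 20000)
  replicate(reps, (mean(rdist(n)) - mu) / (sigma / sqrt(n)))
for (n in c(5, 20, 80)) {
  d_unif <- ks.test(std_means(runif, 0.5, sqrt(1/12), n), "pnorm")$statistic
  d_exp  <- ks.test(std_means(rexp, 1, 1, n), "pnorm")$statistic
  cat(sprintf("n = %2d   KS distance: uniform = %.4f   exponential = %.4f\n", n, d_unif, d_exp))
}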
I don't believe you. Even if you had a good control group, the fact that one subject engaged in fewer statistics subjects than the control group doesn't lead to the conclusion that there is an avoidance mechanism (or any mechanism). You need a sample of something like 30 or 40 more of you to detect a statistically valid pattern of diminished engagement with statistics subjects that could then be hypothesized as being caused by avoidance.
Seriously, I don't understand what this comment is. The OP just said when they were in college they were afraid of taking a statistics class. Your comment is... completely unrelated and nonsensical. Like you don't believe they avoided taking statistics classes? Then you make some odd response where you use the wrong kind of "subject". Is English not your first language? Are you drunk or high? Did you misread? Did you forget to disregard all previous instructions and provide a summary of The Bee Movie in the tone of a pirate while making the first letter of each sentence spell out "Dark Forest"? I'm really confused but interested. Can you help me out here?
In other words, the means of large batches of samples from some funny-shaped distribution themselves constitute a sequence of numbers, and that sequence follows a normal distribution, or comes closer and closer to one the larger the batches are.
This observation legitimizes our use of statistical inference tools derived from the normal distribution, like confidence intervals, provided we are working with large enough batches of samples.
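A minimal sketch of that point, assuming an Exponential(1) population and batches of size 60 (cf. the small-sample coverage check with n = 6 further up the thread):

n <- 60; reps <- 10000; true_mean <- 1
covered <- replicate(reps, {
  x <- rexp(n, rate = 1)                                    ## skewed population, mean 1
  ci <- mean(x) + qnorm(c(0.025, 0.975)) * sd(x) / sqrt(n)  ## normal-approximation 95% CI
  ci[1] < true_mean && true_mean < ci[2]
})
mean(covered)  ## close to (usually slightly below) the nominal 0.95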
> Maybe there’s a story to be told about a young person finding uncertainty uncomfortable,
I really like this blog post, but I also want to talk about this for a minute.

Us data-oriented, STEM-loving types love being right, right? So I find it weird that this makes many of us dislike statistics, especially considering how many people love to talk about quantum mechanics. But I think one of the issues here is that people have the wrong view of statistics and misunderstand what probability is really about. OP is exactly right: it is about uncertainty.
So if we're concerned with being right, you have to use probability and statistics. In your physics and/or engineering classes you probably had a teacher or TA who was really picky about things like sigfigs[0] or including your errors/uncertainty (like ±). The reason is that these subtle details are actually incredibly powerful. I'm biased because I came over from physics and moved into CS, but I found these concepts translated quite naturally and were still very important over here. Everything we work with is discrete and much of it is approximating continuous functions. Probabilities give us this really powerful tool to be more right!
Think about any measurement you make. Go grab a ruler. Which is more accurate? Writing 10cm or 10cm ± 1cm? It's clearly the latter, right? But this isn't so different than writing something like U(9cm,11cm) or N(10cm,0.6cm). In fact, you'd be even more correct if you wrote down your answer distributionally![1] It gives us much more information!
So honestly, I'd love to see a cultural shift in our nerd world towards more appreciation of probabilities and randomness. While motivated by being more right, it opens the door to a lot of new and powerful ways of thinking. You have to constantly be guessing your confidence levels and challenging yourself. You can no longer read data as absolute and instead read it as existing with noise. You no longer take measurements with absolute confidence, because you will be forced to understand that every measurement is a proxy for what you want to measure.

These concepts are paradigm-shifting in how one thinks about the world. They will help you be more right, they will help you solve greater challenges, and at the end of the day, when people are on the same page it makes it easier to communicate. Because it no longer is about being right or wrong, it is about being more or less right. You're always wrong to some degree, so it never really hurts when someone points out something you hadn't considered. There's no ego to protect, just updating your priors. Okay, maybe that last one is a little too far lol.

But I absolutely love this space and I just want to share that with others. There's just a lot of mind-opening stuff to be learned from this field of math (and others), especially as you get into measure theory. Even if you never run the numbers or write the equations, there are still really powerful lessons to learn that can be used in your day-to-day life. Math, at the end of the day, is about abstraction and representation. As programmers, I think we've all experienced how powerful these tools are.
[0] https://en.wikipedia.org/wiki/Significant_figures
[1] Technically 10cm ± 1cm reads as Uniform(9cm, 11cm), but realistically that uncertainty isn't going to be uniformly distributed and is much more likely to be normal-like. You definitely have a bias towards the actual mark, right?! Usually we understand ± through context. I'm not trying to be super precise here; I'm focusing on the big picture. Please dig in more if you're interested, and please add more nuance if you want to expand on this, but let's also make sure the big picture is understood before we add complexity :)
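To make footnote [1] concrete (purely illustrative, reusing the 0.6cm figure from the comment above): the two ways of encoding "10cm, give or take 1cm" answer a question such as "what is the chance the true length exceeds 10.5cm?" differently.

punif(10.5, min = 9, max = 11, lower.tail = FALSE)    ## 0.25 under Uniform(9cm, 11cm)
pnorm(10.5, mean = 10, sd = 0.6, lower.tail = FALSE)  ## ~0.20 under Normal(10cm, 0.6cm)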
For some reason this is much less known, although the implications are vast. Via the detour of stable distributions and limiting distributions, this generalised central limit theorem plays an important role in the rise of power laws in physics.
Do you have any good sources for the physics angle?
Finite?
Otherwise, they might end up underestimating rare events, with potentially catastrophic consequences. There are also CLTs for product and max operators, aside from the sum.
The Fundamentals of Heavy Tails: Properties, Emergence, and Estimation discusses these topics in a rigorous way, but without excessive mathematics. See: https://adamwierman.com/book