> Many Computer Science (CS) programs still center around problems that AI can now solve competently.
Yeah. No, they do not. Competent CS programs focus on fundamentals, not on your ability to invert a binary tree on a whiteboard. [1]
Replacing linear algebra and discrete mathematics with courses called "Baby's First LLM" and "Prompt Engineering for Hipster Doofuses" is as vapid as proposing that CS should include an entire course on how to use git.
I'm all for criticizing a lack of scientific rigor, but this bit pretty clearly shows that the author knows even less about sample sizes than the GitHub guy, so it seems a bit like the pot calling the kettle black. You certainly don't need to sample more than 25% of a population in order to draw statistical information from it.
The bit about running the study multiple times also seems kinda random.
I'm sure this study of 22 people has a lot of room for criticism, but this criticism seems more ranty than 'proper analysis' to me.
Certainly? Now, who is ranting?
It's interesting to note that for a population of a billion people the required sample size changes to a whopping ... 385. It doesn't change much.
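For reference, a back-of-envelope calculation behind that 385. This is a sketch, assuming 95% confidence, a ±5% margin of error, and worst-case p = 0.5; the variables z, p, e, N here are just illustrative, not anything taken from the study.

```python
import math

# Required sample size for estimating a proportion at 95% confidence
# with a +/-5% margin of error, worst case p = 0.5 (a sketch, not the
# study's actual design).
z = 1.96   # z-score for 95% confidence
p = 0.5    # worst-case proportion
e = 0.05   # desired margin of error (+/-5%)

n0 = z**2 * p * (1 - p) / e**2            # infinite-population formula
print(math.ceil(n0))                      # 385

# Finite population correction for a population of one billion barely moves it:
N = 1_000_000_000
n = n0 / (1 + (n0 - 1) / N)
print(math.ceil(n))                       # still 385
```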
I was curious: with a sample size of 22 (assuming an unbiased sample, yada yada), the margin of error when estimating the proportion of people satisfying a criterion is about 22%.
That's a wide margin, but if the study is done properly it may still be insightful.
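A quick sanity check of that figure, as a sketch using the standard normal approximation with 95% confidence and worst-case p = 0.5 (it lands around 21%, the same ballpark):

```python
import math

# Margin of error for a proportion estimate with n = 22, 95% confidence,
# worst-case p = 0.5 (normal approximation; a sketch, not the study's design).
z = 1.96
p = 0.5
n = 22

e = z * math.sqrt(p * (1 - p) / n)
print(f"{e:.1%}")   # about 20.9%, i.e. roughly a +/-21% margin of error
```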