frontpage.

The Visual World of 'Samurai Jack'

https://animationobsessive.substack.com/p/the-visual-world-of-samurai-jack
146•ani_obsessive•3h ago•29 comments

Root shell on a credit card terminal

https://stefan-gloor.ch/yomani-hack
538•stgl•11h ago•149 comments

LibriVox

https://librivox.org/
75•bookofjoe•4h ago•23 comments

Cinematography of “Andor”

https://www.pushing-pixels.org/2025/05/20/cinematography-of-andor-interview-with-christophe-nuyens.html
314•rcarmo•15h ago•308 comments

How Generative Engine Optimization (GEO) rewrites the rules of search

https://a16z.com/geo-over-seo/
21•eutropheon•2d ago•10 comments

Estimating Logarithms

https://obrhubr.org/logarithm-estimation
62•surprisetalk•1d ago•13 comments

Nitrogen Triiodide (2016)

https://www.fourmilab.ch/documents/chemistry/NI3/
50•keepamovin•3d ago•28 comments

A new generation of Tailscale access controls

https://tailscale.com/blog/grants-ga
156•ingve•3d ago•33 comments

Progressive JSON

https://overreacted.io/progressive-json/
458•kacesensitive•1d ago•195 comments

When Fine-Tuning Makes Sense: A Developer's Guide

https://getkiln.ai/blog/why_fine_tune_LLM_models_and_how_to_get_started
106•scosman•3d ago•44 comments

Atari Means Business with the Mega ST

https://www.goto10retro.com/p/atari-means-business-with-the-mega
133•rbanffy•14h ago•101 comments

What works (and doesn't) selling formal methods

https://www.galois.com/articles/what-works-and-doesnt-selling-formal-methods
18•azhenley•3d ago•1 comment

Making maps with noise functions (2022)

https://www.redblobgames.com/maps/terrain-from-noise/
20•benbreen•3d ago•2 comments

M8.2 solar flare, Strong G4 geomagnetic storm watch

https://www.spaceweatherlive.com/en/news/view/581/20250531-m8-2-solar-flare-strong-g4-geomagnetic-storm-watch.html
163•sva_•8h ago•40 comments

Google AI Edge – On-device cross-platform AI deployment

https://ai.google.dev/edge
169•nreece•18h ago•29 comments

How I like to install NixOS (declaratively)

https://michael.stapelberg.ch/posts/2025-06-01-nixos-installation-declarative/
103•secure•19h ago•109 comments

RenderFormer: Neural rendering of triangle meshes with global illumination

https://microsoft.github.io/renderformer/
245•klavinski•21h ago•49 comments

HeidiSQL Also Available for Linux

https://www.heidisql.com/forum.php?t=44068
8•Daril•3d ago•2 comments

“Bugs are 100x more expensive to fix in production” study might not exist (2021)

https://www.theregister.com/2021/07/22/bugs_expense_bs/
40•rafaepta•4h ago•32 comments

Figma Slides Is a Beautiful Disaster

https://allenpike.com/2025/figma-slides-beautiful-disaster
355•tobr•19h ago•209 comments

Ukraine destroys more than 40 military aircraft in drone attack deep in Russia

https://www.npr.org/2025/06/01/nx-s1-5419509/ukraine-destroys-military-aircraft-attack-inside-russia-planes
200•consumer451•11h ago•209 comments

Toying with the Lambda Calculus

https://github.com/WinVector/Examples/blob/main/lambda_calculus/toying_with_the_lambda_calculus.ipynb
10•jmount•3d ago•1 comment

Why DeepSeek is cheap at scale but expensive to run locally

https://www.seangoedecke.com/inference-batching-and-deepseek/
276•ingve•17h ago•162 comments

Structured Errors in Go (2022)

https://southcla.ws/structured-errors-in-go
119•todsacerdoti•20h ago•42 comments

Codex CLI is going native

https://github.com/openai/codex/discussions/1174
84•bundie•14h ago•76 comments

Why Blender Changing to Vulkan Is Groundbreaking [video]

https://www.youtube.com/watch?v=7cta91Y53gs
27•mdtrooper•4h ago•24 comments

Father Ted Kilnettle Shrine Tape Dispenser

https://stephencoyle.net/kilnettle
200•indiantinker•19h ago•51 comments

Show HN: Patio – Rent tools, learn DIY, reduce waste

https://patio.so
211•GouacheApp•1d ago•137 comments

New adaptive optics shows details of our star's atmosphere

https://nso.edu/press-release/new-adaptive-optics-shows-stunning-details-of-our-stars-atmosphere/
156•sohkamyung•1d ago•25 comments

Ru and W isotope systematics in ocean island basalts reveals core leakage

https://www.nature.com/articles/s41586-025-09003-0
4•temporalparts•1h ago•0 comments

The Illusion of Causality in Charts

https://filwd.substack.com/p/the-illusion-of-causality-in-charts
48•skadamat•4d ago

Comments

NoTranslationL•1d ago
This is a tough problem. I’m working on an app called Reflect [1] that lets you analyze your life’s data, and the temptation to draw conclusions from charts and correlations is strong. We added an experiments feature that lets you form hypotheses, and it will even flag confounding variables if you track other metrics during your experiments. We’re still trying to improve it so users avoid drawing false conclusions.

[1] https://apps.apple.com/us/app/reflect-track-anything/id64638...

gcanyon•1d ago
The article seems more about the underlying causality, and less about the charts' specific role in misleading. To pick one example, the scatterplot chart isn't misleading: it's just a humble chart doing exactly what it's supposed to do: present some data in a way that makes clear the relationship (not necessarily causality!) between saturated fat consumption and heart disease.

The underlying issue (which the article discusses to some extent) is how confounding factors can make the data misleading/allow the data to be misinterpreted.

To discuss "The Illusion of Causality in Charts" I'd want to consider how one chart type is more susceptible to misinterpretation, or more misleading, than another. I don't know if that's actually true (I haven't worked up examples to check), but that's what I was hoping for here.

melagonster•1d ago
A famous example is that a bar chart is always better than a pie chart (see the advice on the pie chart page of the ggplot2 website).
hammock•1d ago
> the scatterplot chart isn't misleading

Even leaving out the data (which you rightly point out), you are forced to choose what to plot on x and y, which by convention will communicate IV and DV (independent and dependent variable), respectively, whether you like it or not.

rcxdude•15h ago
True. Arguably this is a harmful convention: with any scatter plot, you should consider that the axes could be flipped.
the-mitr•1d ago
You can check out some of Howard Wainer's work in this regard.

Graphic Discovery, Visual Revelations, etc.

https://en.m.wikipedia.org/wiki/Howard_Wainer

justonceokay•1d ago
A pet issue I have that is in line with the “illusions” in the article is what I might call the “bound by statistics” fallacy.

The shape of it is that there is a statistic about population and then that statistic is used to describe a member of that population. For example, a news story that starts with “70% of restaurants fail in their first year, so it’s surprising that new restaurant Pete’s Pizza is opening their third location!”

But it’s only surprising if you know absolutely nothing about Pete and his business. Pete’s a smart guy. He’s running without debt and has community and government ties. His aunt ran a pizza business and gave him her recipes.

In a Bayesian way of thinking, the newscaster’s statement only makes sense if the only prior they have is the average success rate of restaurants. But that is an admission that they know nothing about the actual specifics of the current situation, or the person they are talking about. Additionally, there is zero causal relationship between group statistics and individual outcomes; the causal relationship goes the other way. Pete’s success will slightly change that 70% metric, but the 70% metric never bound Pete to be “likely to fail”.
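A minimal sketch of this in Python, with entirely made-up numbers: the 70% base rate binds only when it is the only thing you know, and evidence about Pete updates it away.

    # Hypothetical numbers, purely to illustrate Bayes' rule in odds form.
    base_rate_fail = 0.70                     # population prior: P(fail)

    # Assumed likelihood ratio P(evidence | fail) / P(evidence | survive);
    # < 1 means the evidence (no debt, community ties, family recipes) is
    # more typical of survivors.
    likelihood_ratio = 0.25

    prior_odds = base_rate_fail / (1 - base_rate_fail)   # 7:3 against Pete
    posterior_odds = prior_odds * likelihood_ratio       # Bayes' rule, odds form
    p_fail = posterior_odds / (1 + posterior_odds)
    print(f"P(fail | what we know about Pete) = {p_fail:.2f}")  # ~0.37, not 0.70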

Other places I see the “bound by statistics” problem is in healthcare, criminal proceedings, racist rhetoric, and identity politics.

zmgsabst•1d ago
It’s not even surprising without knowing about Pete: the newspaper isn’t going to publish the many that failed, so their own selection bias is the dominant effect. That holds even if we take the probability of the group to be the probability of individuals, e.g., rolling dice (“1 in 6 people rolls a 4!”).

Lots of them opened, of them 70% failed, and one who didn’t happened to be named Pete.

No more interesting than “Pete rolled a 4!” even though 83% of people don’t.
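A rough simulation of the selection-bias point (the 70% failure rate is the only given; everything else is assumed): with enough restaurants, survivors are guaranteed, and the newspaper samples only from them.

    import random

    random.seed(0)
    n = 10_000
    survived = [random.random() > 0.70 for _ in range(n)]  # 70% fail, by assumption
    print(sum(survived), "of", n, "survived")              # ~3,000, every time
    # A story about any one survivor ("Pete!") is no more surprising than
    # "someone rolled a 4" after 10,000 dice rolls.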

skybrian•1d ago
If a newspaper only publishes surprising results, but it’s unsurprising when they appear in the newspaper, then you’ve set up a paradox: a set that only contains nonmembers.

I don’t think it’s valid to define “surprising” in such a self-referential way. When something unusual appears in the news, that doesn’t make it common. The probabilities are different before and after applying a filter.

nemomarx•1d ago
It doesn't make it a common outcome overall, but it makes it a common outcome for it to appear in the newspaper, right? It's just different meanings of "common".
skybrian•1d ago
Yes, that's what I meant.
zmgsabst•1d ago
I didn’t say that, nor use any self-reference.

> When something unusual appears in the news, that doesn’t make it common.

This in particular isn’t even close to what I said, which was: rare events can be unsurprising in large datasets — as is the case with both dice rolls and restaurants succeeding.

skybrian•1d ago
If rare events are unsurprising, then I think you've defined surprising events out of existence? I mean, sure, someone will win the lottery, but it would be very surprising if it happened to you.

I guess this is just point of view. "Someone won the lottery" and "I won the lottery" describe the same event from very different perspectives.

yusina•1d ago
If 99.9% fail and you see one that didn't, then would it be surprising that it didn't? No! It's not 100%, so there must be some example. For that one it's not surprising.

More precisely, it's not surprising that one exists. It may be surprising that this particular one survived, just as it would be surprising if my neighbor won the lottery next week. But it's likely that somebody will, so if somebody has won, it won't be a surprise that somebody did.

skybrian•1d ago
Yes, it seems like surprise has to depend on your point of view. There’s an enormous difference between “someone somewhere won the lottery” and “I won the lottery,” even if it’s describing the same event.

It’s a question of how many other possibilities are considered similar to the one that happened. From a zoomed-out perspective, one win is as good as any other.

steveBK123•1d ago
People are also even worse with conditional probabilities.
skybrian•1d ago
People sometimes talk about this as “taking the outside view” or “reference class forecasting.” [1] It doesn’t work when there are important differences between the case being considered and other members of the reference class. Nationwide statistics are especially zoomed-out and there are going to be a lot of people in the same country who are quite different from you. Worldwide statistics are even worse.

It doesn’t mean the statistics are wrong, though. If there is a 70% chance of failure, there’s also a 30% chance of success. But it’s subjective: use a different reference class and you’ll get a different number.

The opposite problem is also common: assuming that “this time it’s different” without considering the reasons why others have failed.

The general issue is overconfidence and failure to consider alternative scenarios. The future isn’t known to us and it’s easy to fool yourself into thinking it is known.

[1] https://en.m.wikipedia.org/wiki/Reference_class_forecasting

yusina•1d ago
I agree with your description, but the pizza place case is even simpler: statistics don't guarantee the properties of a single future sample. 70% fail in the first year, so 30% don't. Why would it be surprising to see one that didn't? It would be surprising to see none that didn't fail. So it's expected to see lots that don't fail, Pete's being one of them.
nwlotz•1d ago
One of the best things I was forced to do in high school was read "How to Lie with Statistics" by Darrell Huff. The book's a bit dated and oversimplified in parts, but it gave me a healthy skepticism that served me well in college and beyond.

I think the issues described in this piece, and by other comments, are going to get much worse with the (dis)information overload AI can provide. "Hey AI, plot thing I don't like A with bad outcome B, and scale the axes so they look heavily correlated". Then it's picked up on social media, a clout-chasing public official sees it, and now it's used to make policy.
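The axis trick is easy to demonstrate. A toy matplotlib sketch with made-up data: the same barely-moving series looks flat on a full axis and dramatic on a truncated one.

    import matplotlib.pyplot as plt

    years = list(range(2015, 2025))
    a = [50.0 + 0.2 * i for i in range(10)]   # made-up, nearly flat series

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.plot(years, a)
    ax1.set_ylim(0, 100)                      # full axis: nothing happening
    ax1.set_title("Honest axis")
    ax2.plot(years, a)
    ax2.set_ylim(49.9, 52.1)                  # truncated axis: a "soaring" trend
    ax2.set_title("Truncated axis")
    plt.tight_layout()
    plt.show()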

hammock•1d ago
It helps to internalize the concept that all statistics (visualizations, but also literally any statistic with an element of organization) are narrative, in a “the medium is the message” kind of way.

Sometimes you are choosing the narrative consciously (I created this chart to tell a story), and sometimes you are choosing it unconsciously (I just want to scatter plot and see what it shows - but you chose the x and y to plot, and you chose the scatter plot vs some other framework), and sometimes it is chosen for you (chart defaults for example, or north is up on a map).

And it’s not just charts. Statistics on the whole exist to organize raw data. The very act of introducing organization means you have a scheme, framework, or lens with which to do so. You have to accept that and become conscious of it.

You cannot do anything as simple as report an average without choosing which data to include and which type of average to use. Or a histogram without choosing the bin sizes, and again, the data to include.
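For instance, a tiny Python illustration with made-up salary numbers: the choice of average alone changes the story the statistic tells.

    import statistics

    salaries = [30, 32, 35, 38, 40, 41, 45, 250]   # in $k; one executive outlier
    print("mean:  ", statistics.mean(salaries))    # 63.875 -> "typical pay is $64k"
    print("median:", statistics.median(salaries))  # 39.0   -> "typical pay is $39k"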

This is all to say nothing of the way the data was produced in the first place. (Separate topic)

djoldman•1d ago
This is not a problem with charts, it is a problem with the interpretation of charts.

1. In general, humans are not trained to be skeptical of data visualizations.

2. Humans are hard-wired to find and act on patterns, illusory or not, at great expense.

Incidentally, I've found that avoiding the words "causes," "causality," and "causation" is almost always the right path, or at least should be the rule rather than the exception. In my experience, they rarely clarify and almost always overreach.

ninetyninenine•1d ago
It's not a problem of interpretation or visualization or charts. People are talking about it as if it's deception or interpretation, but the problem is deeper than that.

It's a fundamental problem of reality.

The nature of reality itself prevents us from determining causality from observation, this includes looking at a chart.

If you observe two variables, whether those random variables correlate or not, there is NO way to determine if one variable is causative of the other through observation alone. Any causal conclusion drawn from observation alone is in actuality only assumed. Note the key phrase here: "through observation alone."

In order to determine if one thing "causes" another thing, you have to insert yourself into the experiment. It needs to go beyond observation.

The experimenter needs to turn off the cause and turn on the cause in a random pattern and see whether that changes the correlation. Only through this can one determine causation. If you don't agree with this, think about it a bit.

Also note that this is how medicine is approved and validated: trials have to prove that the medicine/procedure "causes" a better outcome, and the only way to do this is to make the giving and withholding of the medicine part of the trial itself.
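A small Python simulation of the point, under an assumed hidden common cause Z: observation alone shows a strong correlation between X and Y even though X does not cause Y, and randomly setting X yourself exposes that.

    import random

    random.seed(1)

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    def world(x_forced=None):
        z = random.gauss(0, 1)                 # hidden common cause
        x = z + random.gauss(0, 0.3) if x_forced is None else x_forced
        y = z + random.gauss(0, 0.3)           # Y depends on Z only, never on X
        return x, y

    obs = [world() for _ in range(5000)]       # observation alone
    exp = [world(x_forced=random.choice([0, 1])) for _ in range(5000)]  # we set X

    print("corr(X, Y), observed:     ", round(corr(*zip(*obs)), 2))  # ~0.9
    print("corr(X, Y), intervening:  ", round(corr(*zip(*exp)), 2))  # ~0.0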

djoldman•1d ago
I find the definition of causality that places it squarely in the realm of philosophy to be a dead end or perhaps a circle with no end or objective or goal.

"What does it mean that something is caused by something else?" At the end of it all, what matters is how it's used in the real world. Personally I find the philosophical discussion to be tiresome.

In law, "to cause" is pretty strict: "but for" A, B would not exist or have happened. Therefore A caused B. That's one version. Other people and regimes have theirs.

This is why it's something I try to avoid.

In any case, descriptions of distributions are more comprehensive and avoid conclusions.

ninetyninenine•19h ago
I'm not talking about philosophy. Clinical trials for medicine use this technique to determine causality. I'm talking about something very practical and well known.

It is literally the basis for medicine. We literally have to have a "hand in the experiment": clinical trials withhold medicine and give medicine in order to establish that the medicine "causes" a "cure". Clinical trials are by design not about observation alone.

Likely, you just don't understand what I was saying.

djoldman•12h ago
I believe I understood what you were saying.

The criteria or definition for " A causes B" that you alluded to is a useful one in the context of medicine:

> The experimenter needs to turn off the cause and turn on the cause in a random pattern and see whether that changes the correlation. Only through this can one determine causation. If you don't agree with this, think about it a bit.

It's useful because it establishes a threshold we can use and act on in the real world.

I think there is more nuance and context here, though. In clinical trials, minimum cohort sizes are required, possibly related or proportional to a power analysis (turning the cause on and off for one person doesn't give us much confidence, but doing it for 1000 people gives much more).

So the definition of "causes" for clinical trials and medicine hinges on more than just turning things on and off; it relies on effect size and population size in the experiment.
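A back-of-envelope Python version of that cohort-size point, using the standard two-sample normal approximation; the alpha and power values here are conventional assumptions, not anything from the thread.

    from math import ceil
    from statistics import NormalDist

    def n_per_arm(effect_size, alpha=0.05, power=0.80):
        # Two-sample normal approximation for sample size per arm.
        z = NormalDist().inv_cdf
        return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

    print(n_per_arm(0.5))   # medium effect: ~63 per arm
    print(n_per_arm(0.2))   # small effect:  ~393 per arm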

Going back to TFA, this is the problem when we bring "cause" into the discussion: the definition of it varies depending on the context.

ninetyninenine•7h ago
> So the definition of causes for clinical trials and medicine hinges on more than just turning on and off, it relies on effect size and population size in the experiment.

Of course. Because the clinical trial is statistical, the basis of the trial is trying to come to a conclusion about a population of people via a sample. That fact applies to both correlation and causation. Statistics is like an extension from a person to people: rather than coming to a conclusion about something for one person, you can do it for a sample of the population.

Causality is independent of the extension. You can measure causality against a sample of a population or even a single thing. The need to insert yourself into the experiment exists in both cases. This is basic, and a simple thought experiment can show it.

You have two switches, two lights, and two people. Both people turn their respective switches on and off, and each light turns on and off in the expected pattern, exactly matching the state of its switch.

You know one of the switches is hooked to its light and “causes” the light to turn on and off. The other switch is BS: its light is turning on and off on some predetermined pattern, and the person flipping that switch has memorized the pattern and is making it look like the switch is causative.

How do you determine which one is the switch that is causative to the light turning on and off and which switch isn’t?

Can you do it through observation alone? Can you just watch the people flip the switch? Or do you have to insert yourself into the experiment and flip both switches randomly yourself to see which one is causal to the light turning on or off?

The answer is obvious. I’m sort of anticipating a pedantic response where you just “observe” the wiring of the switch and the light; to that I would say I’m obviously not talking about that. You can assume all the wiring is identical and the actual mechanism is a perfect black box. We never actually try to determine causality or correlation unless we are dealing with a black or semi-black box, so please do not go down that pedantic road.
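A sketch of the thought experiment in Python (the wiring is, as stipulated, an assumed black box): under passive observation both switches track their lights perfectly; only flipping them yourself separates the two.

    import random

    random.seed(2)
    pattern = [random.choice([0, 1]) for _ in range(20)]  # predetermined pattern

    # Observation alone: the real switch drives its light; the fake switch's
    # operator has memorized the pattern and flips in sync with it.
    real_switch, fake_switch, light = pattern, pattern, pattern
    print(real_switch == light, fake_switch == light)  # True True: can't tell apart

    # Intervention: we flip each switch randomly ourselves.
    my_flips = [random.choice([0, 1]) for _ in range(20)]
    light_behind_real = my_flips   # the wired light follows whatever we do
    light_behind_fake = pattern    # the fake light keeps its predetermined pattern
    print(my_flips == light_behind_real)   # True: this switch is causal
    print(my_flips == light_behind_fake)   # almost surely False: this one is not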

You should walk away from this conversation with new intuition on how reality works.

You shouldn’t be getting too involved in mathematical definitions, the details of what a clinical trial involves, or pedantic formal definitions.

Gain a deep understanding of why causality must be determined this way, and that helps you see why the specific, detailed protocols of clinical trials were designed that way in the first place.

rcxdude•15h ago
I'd say this is generally true, but in practice there are a decent number of cases where some reasoning can give you pretty good confidence one way or the other, mainly by considering what other correlations exist and which causal relationships are plausible (because not all of them are).

(I say this coming from an engineering context, where e.g. you can pretty confidently say that your sensor isn't affecting the weather but vice-versa is plausible)

ninetyninenine•6h ago
This is true fundamentally, not just generally. It is a fundamental facet of reality.

In practice it’s hard to determine causality, so people make assumptions. Most conclusions are like that. I said this in the original post: conclusions from observation alone must rest on assumptions, which is fine given available resources. If you find that people who smoke weed have lower IQs, you can conclude that weed lowers IQ, assuming all weed smokers had average IQs before smoking, and this is fine.

I’m sure you’ve seen many causative conclusions retracted because of incorrect assumptions, so it is in general a very unreliable method.

And that’s why in medicine they strictly have to do causation-based testing: they can’t afford a conclusion based on an incorrect assumption.

rcxdude•1h ago
Sorry, I meant "in general" in the broader sense that you were intending (i.e., I agree). And if you really get down to brass tacks, it's not obvious you can actually do truly causation-based testing. (See e.g. superdeterminism, which posits that you fundamentally can't, as a way to explain quantum weirdness in physics.)
qixv•1d ago
You know, everyone that confuses correlation with causation ends up dying.
singularity2001•1d ago
What's very fascinating in general is that causality is a difficult mathematical concept which only a tiny fraction of the population learns, yet everyone talks about it and "uses it".

We do have a pretty good intuition for it, but if you look at the details and ask people what the difference between correlation and causality is, and how to distinguish them, things get rabbit-holey pretty quickly.

JackSlateur•3h ago
http://www.tylervigen.com/spurious-correlations