It underscores a timeless lesson: no matter how much data or logic we have, we're still wired to fall for well-crafted optimism, and that means skepticism remains the best defense.
Your comment would have been better if you'd chosen an example that did not create hundreds of thousands of millionaires.
Lotteries have also produced lots of millionaires. Crypto could produce lots of winners just from wealth transfer even if it were a zero-sum or net-negative game in terms of wealth creation.
Similarly, we are bad at estimating small proportions ("easily shave 2%"). What is being claimed in the parentheses here is that there's a probability distribution of "how much costs are shaved" and that we can estimate where the bulk of its support is.
But we're not really good at making such estimates. Maybe there is some probability mass around 2%, but the bulk is around 0.5%. That seems like a small difference (just 1.5 percentage points!), but it's a factor of 4 in terms of savings.
So now we have a large number (annual spend), multiplied by a very uncertain number (cost shave, with poor experimental support), leading to a very uncertain outcome in terms of savings.
And it can turn out that, in reality, the costs of changing service overwhelm whatever is saved.
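A quick Monte Carlo sketch makes the uncertainty concrete (all numbers here are invented for illustration, not taken from the parent comment):

    import random

    ANNUAL_SPEND = 10_000_000  # hypothetical annual spend, dollars

    def simulate(n=100_000):
        outcomes = []
        for _ in range(n):
            # the "cost shave" is the very uncertain number:
            # roughly log-uniform between ~0.25% and ~3%
            shave = 10 ** random.uniform(-2.6, -1.5)
            # one-off cost of changing service, also uncertain
            switching_cost = random.uniform(0, 300_000)
            outcomes.append(ANNUAL_SPEND * shave - switching_cost)
        outcomes.sort()
        return outcomes[n // 10], outcomes[n // 2], outcomes[9 * n // 10]

    p10, median, p90 = simulate()
    print(f"net savings: 10th pct {p10:,.0f}, median {median:,.0f}, 90th pct {p90:,.0f}")

With these made-up inputs the 10th percentile comes out negative, which is the point: a big number times a poorly-estimated small proportion is itself poorly estimated.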
https://www.effectivealtruism.org/articles/cause-profile-lon...
Although we are at a peak population of a bit over 8B people at the moment, it is estimated that more than 100B people we would classify as human have ever lived. The population long ago was much smaller than 1B, but thousands of generations have lived and died.
1. Chart the number of humans born over all of time, from the distant past into the far future.
2. Cover up the chart so only the data from the past to present day is visible.
3. Note that most humans in that subset exist near or at the present. You are one of these people today; it makes sense for you to have been born in one of the densest parts of the graph.
4. Now uncover the graph. If there are trillions of humans in the future, it seems almost impossibly unlikely that you would be born in a part of the graph with "so few" humans as today, and not in the far future.
Therefore, you must conclude that the actual graph rapidly drops to zero in the near future. QED.
This "doomsday argument" is a pretty shit one, but not worse than others I've seen arguing the opposite.
I think the core idea is simply: since resources for helping the poor/sick are not unlimited, we should try to allocate those resources in the most effective way. Before EA charity evaluation came along, the only metric for most people was looking at charity overhead via Charity Navigator. But that isn't a great metric. A charity with only 1% overhead whose mission is to make balloon animals for children dying in a famine will score well on Charity Navigator but do nothing to help the problem.
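A toy comparison (hypothetical charities, hypothetical numbers) of why cost-per-outcome beats the overhead ratio as a metric:

    # Overhead alone says the first charity "wins"; cost per outcome
    # says the opposite. All figures invented for illustration.
    charities = [
        # (name, annual budget, overhead fraction, lives saved per year)
        ("Balloon Animals for Famine Victims", 1_000_000, 0.01, 0),
        ("Bednet Distribution",                1_000_000, 0.15, 170),
    ]

    for name, budget, overhead, lives in charities:
        cost_per_life = budget / lives if lives else float("inf")
        print(f"{name}: overhead {overhead:.0%}, cost per life saved ${cost_per_life:,.0f}")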
To be honest I haven't looked deeply into long-termism, but from what I've heard (e.g., hearing Will MacAskill on a few podcasts) it seems to ignore a few things. Just as a bird in the hand is worth two in the bush, long-termers have no good way to estimate the likelihood of future events, and discounting needs to increase greatly the further out one looks. At best many of these estimates are like the Drake Equation: better than nothing, but with multiple orders of magnitude of error bars.
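On the discounting point, even a modest annual discount rate drives far-future value toward zero (a sketch; the 2% rate is arbitrary):

    # Present value of one unit of good delivered t years from now.
    RATE = 0.02
    for years in (10, 100, 1000):
        pv = 1.0 / (1.0 + RATE) ** years
        print(f"{years:>4} years out: present value = {pv:.2e}")

At 1000 years the present value is on the order of 1e-9, so even enormous far-future payoffs struggle to compete with modest near-term ones once any discounting is applied.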
There are other second-order factors which don't seem to be considered, or at least haven't come across in the few hours I've spent listening to long-termers talk about the issue. One is that working to make a better world now affects the trajectory of future events much more directly than the low-probability guesswork they think may have an impact in the distant future.
"The great subverter of Pyrrhonism [radical skepticism] is action, and employment, and the occupations of common life. [...] I dine, I play a game of back-gammon, I converse, and am merry with my friends; and when after three or four hour's amusement, I wou'd return to these speculations, they appear so cold, and strain'd, and ridiculous, that I cannot find in my heart to enter into them any farther."
Hume
The biggest positive change you can make, even for future generations, is to uplevel the people who are alive today.
Or to put it another way, everything fuzzes out into noise for me much sooner than humanity will have trillions of new members. There's no way for me to predict whatsoever what effect any action I take today will have a thousand years from now. Even in extreme cases, like, I push a magic button that instantly copies whatever you, the reader, believe is the optimal distribution of ideological beliefs out into the world (ignoring for the moment the possibility that your ideology might consider that unethical, this is just a thought experiment anyhow so no need to go that meta), you really don't know what that would do 1000 years from now, what the seventeenth-order effects of such a thing would be. I'm not even saying that it might not be as good as you think or something; I'm saying you just have no idea what it would be at all. So there's no way to hold people responsible for that, and no way to build plans based on it.
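The "fuzzes out into noise" intuition is the same one chaotic systems give you. A toy illustration (the logistic map; nothing to do with humanity per se, just compounding sensitivity to initial conditions):

    # Two trajectories starting one part in a trillion apart become
    # completely uncorrelated after a few dozen steps.
    def logistic(x, steps, r=3.9):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    for steps in (10, 30, 60):
        a = logistic(0.5, steps)
        b = logistic(0.5 + 1e-12, steps)
        print(f"after {steps:>2} steps: {a:.6f} vs {b:.6f}")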
I had one friend who would leave his bike chained partially blocking a fire exit, because "what are the odds the fire inspector will come today?" But the fire inspector comes once a year, and if your bike is chained there 99% of the time, odds are you're going to get a fine. He couldn't see the logic. He got fined.
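The arithmetic with those numbers (a quick sketch):

    # Inspector shows up once a year; the bike blocks the exit 99% of the time.
    p_caught_per_visit = 0.99
    for years in (1, 2, 3):
        p_fined = 1 - (1 - p_caught_per_visit) ** years
        print(f"P(at least one fine within {years} year(s)) = {p_fined:.6f}")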
I honk at them, and then they often get aggressive that I dared to react to their perfectly cool maneuver that gained them those precious extra 5 seconds. Bloody a-holes. I've had a few near-collisions just this year due to overly aggressive drivers riding too close; some were literally the car in front of us or the one right behind. Keep your distance, I can't emphasize this enough.
"The odds of X happening are so low that what's the point?", to which I respond "It only needs to happen once for me to be dead, so, the stakes are way too high for me to risk the odds".
People often equate “risk” with “likelihood”, when it would be more effective to view risk = impact * likelihood.
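A minimal sketch of that framing, with invented numbers:

    # risk = impact * likelihood: a rare catastrophic event can
    # dominate a frequent nuisance. All figures hypothetical.
    events = [
        # (name, impact in dollars, annual likelihood)
        ("parking ticket",  75,         0.5),
        ("fatal collision", 10_000_000, 0.0001),
    ]
    for name, impact, likelihood in events:
        print(f"{name}: expected annual loss = ${impact * likelihood:,.2f}")

The unlikely event carries more than an order of magnitude more expected loss, which is the commenter's point about stakes.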
He tells me later that it didn't quite work out in terms of saving money, but because he sometimes parked in spots that he could not get permits for, it actually saved time.
Between around mid-2006 and the end of 2008 I rode the train to work downtown every day. The trains were so crowded during rush hour that it was impossible for Transit police to board trains to check fares, and even outside rush hour, fare checks were very occasional. A monthly pass at the time was around $75 and a fine for fare evasion was around $200 (the first violation was less than $200, and I think it increased until a cap of something like $250 for repeat offenders). I'd worked it out that if I was caught without paying a fare less than once every three months, it would be cheaper to just pay the fine if/when I got caught rather than buy a pass. So I didn't buy a pass and decided to see how long it would take to actually get caught.
The answer was about 18 months. Got a $170 fine. Which I then forgot about and never actually paid. The statute of limitations on that fine has long since expired.
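Recreating the break-even math from the numbers in the comment:

    PASS_COST = 75   # monthly pass, dollars
    FINE = 200       # approximate fine per catch
    print(f"break even if caught less than once every {FINE / PASS_COST:.1f} months")

    months = 18                       # how long it actually took to get caught
    net = months * PASS_COST - 170    # the one fine actually issued
    print(f"over {months} months: net savings ${net}")

One catch every ~2.7 months is the break-even point; at one catch in 18 months (and an unpaid fine at that), the gamble paid off handily.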
> Hitchens's razor is an epistemological razor that serves as a general rule for rejecting certain knowledge claims. It states:
> > What can be asserted without evidence can also be dismissed without evidence.
They assign infinite negative or positive values to outcomes, and then it doesn't matter what the likelihood is or how much uncertainty they have everywhere else; they insist that they need to do everything possible to cause or prevent that outcome.
Aside from other problems with it, there are a vast number of highly improbable and near-infinitely bad or good outcomes that might possibly occur, each of which would require completely different actions if you're concerned about them.
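The failure mode is easy to see in expected-value terms (a toy sketch; the probabilities and payoffs are stand-ins):

    # Pascal's mugging: an absurd payoff swamps any sane probability estimate.
    options = [
        # (name, probability, payoff in "utils")
        ("donate to a proven charity",      0.9,   100),
        ("avert a hypothetical apocalypse", 1e-15, 1e30),
    ]
    for name, p, payoff in options:
        print(f"{name}: expected value = {p * payoff:.3g}")

Once one payoff is allowed to be (near-)infinite, the expected-value calculation stops being informative: any other near-infinite outcome you dream up would flip the answer.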
One of my main gripes with AI doomerism is that it is downstream of being Pascal's-mugged into being a doomer.