Given they're all for this sort of horrible calculation, if someone doesn't include this in their thinking - and as far as I read, this potential is not taken into consideration - they might have just condemned billions to die over a wrong calculation. Gah
> This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive.
I do not believe that this is obvious, an accurate generalization of longtermism, or backed by references (did I miss one?)
EDIT: Did miss the "noted elsewhere" link (pdf): https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ug...
It fills in a lot of the blanks. I should have read it before engaging.
I am also particularly amused by the worship of Bayesian statistics without serious reflection on the fact that it is premised on subjective belief in the prior.
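To make the prior point concrete, here's a toy Beta-Binomial sketch in Python (my own numbers and setup, nothing from the article): two people see the same 3 successes in 10 trials and still report different answers, purely because of the prior they started from.

    # Prior sensitivity in a Beta-Binomial model: same data, different priors.
    successes, trials = 3, 10

    # Analyst A starts from a uniform Beta(1,1) prior;
    # Analyst B starts from an optimistic Beta(8,2) prior.
    priors = {"uniform Beta(1,1)": (1, 1), "optimistic Beta(8,2)": (8, 2)}

    for name, (a, b) in priors.items():
        post_a = a + successes
        post_b = b + (trials - successes)
        mean = post_a / (post_a + post_b)
        print(f"{name}: posterior Beta({post_a},{post_b}), mean ~{mean:.2f}")

    # uniform Beta(1,1): posterior Beta(4,8), mean ~0.33
    # optimistic Beta(8,2): posterior Beta(11,9), mean ~0.55

Same evidence, roughly a factor-of-two difference in the estimate, and the gap comes entirely from the subjective choice of prior.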
He scrapped the long-term planning system and shrank the planning window shorter and shorter. They ran the company on a 30-day window for a year or 18 months; only then did they get the numbers right. After that, they extended the planning window, but never back to five years.
So, those who are doing longtermism: How accurate are your short-term forecasts? Can you accurately predict a year from now? If not, how can you predict the long term future with enough accuracy to act on?
Worse: For every year further in the future, you should probably multiply your certainty in your prediction by 0.8. (This depends on the nature of the prediction, of course, and is a made-up number. Still, the point is valid - the longer-term the prediction, the more probable that it is not only wrong, but wildly wrong.) That means (using that number) that predictions 10 years out only have about 10% accuracy, and predictions 20 years out only about 1%. How do you think you know enough today about what will happen then in order to make decisions now on the basis of what will happen then?
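To make the compounding concrete (using that same made-up 0.8, and Python purely as a sketch):

    # Compounding a per-year confidence discount; 0.8 is the made-up
    # number from above, not an established figure.
    def forecast_confidence(years_out, per_year_factor=0.8):
        return per_year_factor ** years_out

    for years in (1, 5, 10, 20, 50):
        print(f"{years:>2} years out: ~{forecast_confidence(years):.1%} confidence")

    #  1 years out: ~80.0% confidence
    #  5 years out: ~32.8% confidence
    # 10 years out: ~10.7% confidence
    # 20 years out: ~1.2% confidence
    # 50 years out: ~0.0% confidence

Swap in a more forgiving factor if you like; the shape of the curve is the point, not the exact numbers.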
Putting the extremes forward as the two options frames the discussion as having only two unreasonable alternatives. Worrying about and taking action to avoid the worst of climate change won't ensure our very long-term survival, but we are expected to survive at least a few more centuries. What civilization is doing is like reaching 40 and then doing everything in our power to kill ourselves.
tonyarkles•3h ago
On the surface, morally, I agree with you.
But when it comes to practice, things get tougher. Whether capitalist or communist or random utopia, ultimately most of it comes down to: how do we decide, individually and collectively, how each person spends their finite time on Earth? While imperfect, in most places we use money as a way to compensate individuals for the time they’ve spent performing an activity that they wouldn’t have spent time doing on their own. They can then trade that money for the product of other people’s labour (things that those people wouldn’t have done on their own).
Distilling it down to a dollar value sucks, but it essentially acts as a proxy for “how many hours of how many of the right people’s lives get spent on solving problem X?”. Problem X could be an individual problem: how many hours of how many oncologists’ lives should be spent trying to cure this specific person’s cancer? And given a finite supply of oncologists and a finite number of hours each one can work in a day, how do we divide their time between different patients? This scales up to national and international levels; people work, governments take some fraction of that compensation and redistribute it to others in order to take on tasks that people and companies don’t want to do on their own for free. But there’s a finite amount of that money too, stemming from there being a finite number of humans qualified to solve specific problems and finite time from each of them.
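As a toy version of the oncologist example (the numbers and the proportional rule are mine, purely illustrative), here's what dividing a finite supply of expert hours looks like once you write it down:

    # Split a fixed budget of specialist hours across patients in
    # proportion to estimated need. All figures are invented.
    available_hours = 120  # e.g. 3 oncologists x 40 hours this week
    estimated_need = {"patient A": 60, "patient B": 90, "patient C": 30}
    total_need = sum(estimated_need.values())  # 180 hours wanted, 120 available

    for patient, need in estimated_need.items():
        share = available_hours * need / total_need
        print(f"{patient}: needs {need}h, gets {share:.0f}h")

    # patient A: needs 60h, gets 40h
    # patient B: needs 90h, gets 60h
    # patient C: needs 30h, gets 20h

The scarcity doesn't go away; some rule still has to decide who gets less, and money is just the rule most places have settled on.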
amarcheschi•3h ago
Yes, allocating funds to research is something that has to be done to distribute resources, given they're not infinite. But that's a real scenario, not people pulling Bayesian BS in a bad way with random numbers they happen to agree with. It's a completely different scenario, even if resource allocation is necessary in our lives.