The arguments always feel to me too similar to "it was good that Carnegie called in the Pinkertons to suppress labor, because it allowed him to build libraries." Yes, what Carnegie did later was good, but it doesn't completely paper over what he did earlier.
Is that an actual EA argument?
The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he could probably still have afforded every philanthropic thing he did, so it doesn't justify it.
I don't really follow the EA space, but the actual arguments I've heard are largely about working in FAANG to make 3x the money you'd make outside of FAANG, which lets you donate 1x-1.5x of it. Which to me is very justifiable.
But to stick with the article: I don't think taking in billions via fraud to donate some of it to charity is a net positive for society.
A janitor at the CIA in the 1960s is certainly working at an organization that is disrupting a peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're a net negative for society, because the janitor's marginal contribution toward that objective is zero.
It might not be the best thing the janitor could do for society (as compared to running a soup kitchen).
you missed this part: "The arguments always feel to me too similar"
> The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he could probably still have afforded every philanthropic thing he did, so it doesn't justify it.
That isn't what OP was engaging with though, they aren't asking for you to answer the question 'what could Carnegie have done better' they are saying 'the philosophy seems to be arguing this particular thing'.
it could be though, if by first centralizing those billions, you could donate more effectively than the previous holders of that money could. the fraud victims may have never donated in the first place, or have donated to the wrong thing, or not enough to make the right difference.
The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.
Alas.
(Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)
Guessing by the contents of this comment section, many people seem to believe that EA was invented by SBF, so it can be quite a shock for them to learn otherwise.
(That of course assumes that they would read the article...)
I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.
> A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.
Yes, this just about sums it up. As a movement they seem to be attracting listless contrarians who are entirely too willing to dig up old demons of the past.
When they write "rationalism" you should read "rationalization".
It's at least 50% right in my experience.
It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.
That's not to say that all the work they're doing/have done is bad, but it's not really surprising why bad actors attached themselves to the movement.
I don't think this is a very accurate interpretation of the idea, even with how flawed the movement is. EA is about donating your money effectively, i.e., ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing you being a bad person, and the donation target is definitely NOT unimportant.
Given that contrast, I'd ask what evidence do you have for why OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?
I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.
I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.
If I wished to mirror your comment style with its performative weight and implied authority, then I would adopt a tone closer to this.
I said:
"I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract"
"I think social dynamics are real and must be answered for"
"I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements"
In the face of that, you're trying to claim that I'm ignoring "social indicators as a valid heuristic."
That's not true and no amount of projection or character attacks can make it true. These are verbatim quotes from both of us. You're attempting to present a point I agree with as if it's a new unacknowledged critique.
Meanwhile, when I say the subject matter of a belief system matters for its content, you don't engage with it but reply to me by re-asserting the point I agree with as if it does the work of responding to me. No amount of social signalling takes the place of evaluating intellectual content on its merits and saying "intellectual content matters" is not a denial of the importance of social signalling.
And I said that people tend not to associate themselves with labels that have connotations that they don't like.
These two statements are not the same.
> "I think social dynamics are real and must be answered for"
Yet you completely dismiss my point.
> "I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements"
What does this have to do with group associations?
> In the face of that, you're trying to claim that I'm ignoring "social indicators as a valid heuristic."
Because you never acknowledged my point.
> That's not true and no amount of projection or character attacks can make it true.
You are the one who started with the insults; I was following suit.
> Meanwhile, when I say the subject matter of a belief system matters for its content, you don't engage with it but reply to me by re-asserting the point I agree with as if it does the work of responding to me.
Because I wasn't contesting that. I was adding something to it.
Most tellingly: you dismiss my direct, repeated acknowledgments as 'not counting' while claiming credit for 'adding' a point you never actually verbalized until this moment. Your standard for what constitutes 'acknowledgment' shifts based entirely on whether you're demanding it or taking credit for it.
And the Bell Curve example directly illustrates holding proponents responsible for social entanglements, the exact thing you claim I never addressed.
Your entire tone is disdainful and dismissive, and your constant need to insist that I am missing something when you refuse to acknowledge my basic point is tedious.
Yes, you acknowledged 'social indicators'. No, you did not acknowledge that 'people who stick around in clubs filled with other people they vehemently disagree with about core issues tend to be rare'.
Coincidentally, libertarian socialism is also a thing.
The fact they're notorious makes them a biased sample.
My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.
> they could have been 13% more effective
If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors or EA to have an informed opinion.
The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).
The sheer scale of the spread is the impetus behind the entire train of thought.
Rich people's vanity foundations, especially, are mostly a vehicle for dodging taxes and channeling corruption.
I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.
And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?
But ultimately, any charity that's actually done, is going to be more effective than holding off because you're not sure how optimal this is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.
For sure this is the case. But just knowing what you are donating to doesn't need some sort of special designation. Like yes, A is in fact much better than B, so I'll donate to A instead of B; that's no different than any other decision where you'd weigh options. It's like inventing 'effective shopping'. How is it different from regular shopping? Well, with ES, you evaluate the value and quality of the thing you are buying against its price; you might also read reviews or talk to people who have used the different products before. It's a new philosophy of shopping that no one has ever thought of before, and it's called 'effective shopping'. Only smart people are doing it.
Nobody said or suggested only smart people can or should or are “doing EA.” What people observe is these knee jerk reactions against what is, as you say, a fairly obvious idea once stated.
However, the idea being obvious once stated does not mean people intuitively enact it, especially before hearing it. Thus the need to label the approach.
This has some truth to it and if EA were primarily about reminding people that not all donations to charitable causes pack the same punch and that some might even be deleterious, then I wouldn't have any issues with it at all. But that's not what it is anymore, at least not the most notable version of it. My knee jerk reaction to it comes from this version. The one where narcissistic tech bros posture moral and intellectual superiority not only because they give, but because they give better than you.
Subtract billionaire activity from your perception of EA attitude: is this critique still true? Who specifically makes it so?
But that's the problem, that is my entire perception of EA. I see regular altruism where, like in the shopping example I gave above, wanting to be effective is already intrinsic. Doing things like giving people information that some forms of giving are better than others is just great. No issues there at all, but again I see that as a part of plain old regular altruism.
Then there is Effective Altruism (tm), which is the billionaire version that I see as performative and corrupt. Even when it helps people, this seems to be incidental rather than the main goal, which appears to be marketing the EA premise for self-promotion and back-patting.
"A lot of people think that EA is some hifalutin, condescending endeavor and billionaire utilitarians hijack its ideology to justify extreme greed (and sometimes fraud!), but in reality, EA is simply the imperative (accessible to anyone) to direct their altruistic efforts toward what will actually do the most good for the causes they care about. This is in contrast to the most people's default mode of relying on marketing, locality, vibes, or personal emotional satisfaction to guide their generosity."
See? Fair and accurate, and without propagating things I know or suspect to be untrue!
Why so?
“Well Kathy Griffin and Carrot Top fit the bill”
Do you think that is a fair characterization of red headed people in general?
“Not really, but I’m allowed to say so anyway.”
…
Sure I guess?
They'll insist on propagating it anyway, but they will actually admit they don't believe it themselves!
And no, it's not really. This negative branding mostly persists among people who will admit they know it isn't even accurate.
The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you for the bad things you do with the rest, or the bad things you did to get rich in the first place.
Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike breaking, and deaths.
Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.
An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.
But history shows that imagined utopias seldom materialize. By contrast the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.
EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example, look at how influential someone like Sam Bankman-Fried was able to be before his fraud imploded.
So basically everyone who has a lot of money to donate has questionable morals already.
The question is, are the large donators to EA groups more or less 'morally suspect' than large donors to other charity types?
In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.
You say this like it's fact beyond dispute, but I for one strongly disagree.
Not a fan of EA at all though!
You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
A person is not capable of creating that wealth. A group of people have created that wealth, and the 1000x individual has hoarded it to themselves instead of sharing it with the people who contributed.
If you are a billionaire, you own at least 5000x the median (about $200k in the US). If you're a big tech CEO, you own somewhere around 50,000-100,000x the median. These are the biggest proponents of EA.
The bottom 50% now own only about 2% of the wealth, the top 10% own two thirds of it, the top 1% own a whole third, and it's only getting worse. Who is responsible for the wealth inequality? The people at the right edge of the Lorenz curve. They could fix it, but don't; in fact they benefit more from their workers being poorer and more desperate for a job. I hope that explains the exploitation.
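For what it's worth, the multiples above are easy to sanity-check. A quick sketch (the median figure is an approximation, and the CEO net worth is a made-up order of magnitude, not a real statistic):

    # Rough arithmetic behind the multiples above. The median is an
    # approximation; the CEO net worth is purely illustrative.
    median_net_worth = 200_000           # ~US median household net worth, USD
    billionaire_floor = 1_000_000_000
    ceo_net_worth = 15_000_000_000       # hypothetical big-tech CEO

    print(billionaire_floor // median_net_worth)   # 5000 -> at least 5000x
    print(ceo_net_worth // median_net_worth)       # 75000 -> within 50,000-100,000x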
The risk profile of early startup founders looks a lot like "winning the lottery", except that the initial investment (in terms of time, effort and lost opportunities elsewhere as well as pure monetary ones) is orders of magnitude higher than the cost of a lottery ticket. There's only a handful of successful unicorns vs. a whole lot of failed startups. Other contributors generally have a choice of sharing into the risk vs. playing it safe, and they usually pick the safe option because they know what the odds are. Nothing has been taken away from them.
For Google and Facebook, users' data was sold to advertisers, and their behaviour is manipulated to benefit the company and its advertising clients. For Amazon, the workers are squeezed for all the contribution they can give and let go once they burn out, and they manipulate the marketplace that they govern to benefit them. If you make multiple hundreds of millions, you are either exploiting someone in the above way, or you are extracting rent from them.
Just looking at the wealth distribution is a good way to see how unicorns are immoral. If you suddenly shoot up into the billionaire class, you are making the wealth distribution worse, because your money is accruing from the less wealthy proportion of society.
That unicorns propagate this inequality is harmful in itself. The entire startup scene is also a fishing pond for existing monopolies. The unicorns are sold to the big immoral actors, making them more powerful.
What is taken away when inequality becomes worse is political power and agency. Maybe other contributors close to the founders are better off, but society as a whole is worse off.
That's quite a claim, as there's a higher probability of unicorns screwing people over. If a unicorn lives long enough it ends up at the top of the wealth pyramid. As far as I can tell, all of the _big_ anti-social actors were once unicorns.
That most organizations engaging in bad behavior aren't unicorns says nothing, because by definition most companies aren't unicorns. If unicorns are less than 0.1% of all companies, then for almost any property B, Pr(not a unicorn | B) > Pr(unicorn | B) is almost guaranteed to hold, purely from base rates.
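To make the base-rate point concrete, here's a toy Bayes calculation with invented numbers:

    # Invented numbers: even if unicorns misbehave 10x more often than
    # other companies, most bad actors are still non-unicorns, simply
    # because unicorns are so rare.
    p_unicorn = 0.001            # assumed: unicorns are 0.1% of companies
    p_bad_given_unicorn = 0.50   # assumed misbehavior rate for unicorns
    p_bad_given_other = 0.05     # assumed misbehavior rate for everyone else

    p_bad = p_unicorn * p_bad_given_unicorn + (1 - p_unicorn) * p_bad_given_other
    p_unicorn_given_bad = p_unicorn * p_bad_given_unicorn / p_bad
    print(f"{p_unicorn_given_bad:.1%}")  # ~1.0%: bad actors are still ~99% non-unicorns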
He's far from the only example.
I understand the distribution of wealth. I agree that in the US in particular it is set up to exploit poor people.
I don't think being rich is immoral.
That's an interesting position. I would guess that in order to square these two beliefs you either have to think exploiting the poor is moral (unlikely) or that individuals are not responsible for their personal contributions to the wealth inequality.
I'm interested to hear how you argue for this position. It's one I rarely see.
To quote[1]:
> In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations.
[1] https://blog.givewell.org/2014/07/03/the-moral-value-of-the-...
This is an interesting take. So if we found out for certain that an action we are taking today is going to kill 100% of humans in 200 years, it would be immoral to consider that as a factor in making decisions? None of those people are living today, obviously, so that means we should not worry about their lives at all?
But to put future lives on the same scale as current lives (as in, allowing the possibility of measuring one against the other) is immoral.
Future lives are important, but balancing them against current lives is immoral
Just wait until you find out about vegetarianism's most notorious supporter.
For most, it seems, EA is an argument that even if no charitable donations are made at all, and even if the wealth was gained through questionable means, it's still all ethical, because it's theoretically "just more effective" for the person to keep claiming that they will, in the far future, put some money toward hypothetical "very effective" charitable causes, which never seem to materialize and which of course shouldn't be pursued "until you've built your fortune".
Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath and as long as the 10% goes to charity (effectively) they are considered morally righteous.
It's the 21st century version of Papal indulgences.
I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.
A friend of mine used to "gotcha" any use of the expression "X is about Y", which was annoying but trained a useful intellectual habit. That may have been what EA's original stated intent was, but then you have to look at what people actually say and do under the name of EA.
But I want to take another tack. I never see anybody make the following argument. Probably that's because other people wisely understand how repulsive people find it, but I want to try anyway, possibly because I have undiagnosed autism.
EA-style donations have saved hundreds of thousands of lives. I know there are people who will quibble about the numbers, but I don't think you can sensibly dispute that EA has saved a lot of lives. This never seems to appear in people's moral calculus, like at all. Most of those are people who are poor, distant, powerless and effectively invisible to you but nevertheless, do they not count for something?
I know I'm doing utilitarianism and people hate it, but I just don't get how these lives don't count for something. Can you sell me on the idea that we should let more poor people die of preventable diseases in exchange for a more morally unimpeachable policy to donations?
Whether you agree that someone can put money into saving lives to make up for other moral faults or issues or so on is the core issue. And even from a utilitarian view we'd have to say that more of these donations happened than would have without the movement or with a different movement, which is difficult to measure. Consider the USAID thing: Elon Musk may have wiped out most of the EA community's gains by causing that defunding, and was probably supported by the community in some sense. How to weigh all these factors?
For me the core issue is why people are so happy to advocate for the deaths of the poor because of things like "the community has issues". Of course the withdrawal of EA donations is going to cause poor people to die. I mean yes, some funding will go elsewhere, but a lot of it's just going to go away. Sorry to vent but people are so endlessly disappointing.
> Elon Musk may have wiped out most of the EA community's gains by causing that defunding
For sure!
> and was probably supported by the community in some sense
You sound fairly under-confident about that, presumably because you're guessing. It's wildly untrue.
And the rationalist community writ large is very much part of that. The whole idea that private individuals should get to decide whether or not to do charity, or can casually stop giving funds, or that so much money needs to be tied up in speculative investments and so on, I find all pretty distasteful. Should life or death matters be up to whims like this?
I apologize though, I've gotten kinda bitter about a lot of these things over the last year. It's certainly a well intentioned philosophy and it did produce results for a time - there's many worse communities than that.
For sure, not quibbling with any of that. The part I don't get is why it's EA's fault, at least more than it's many, many other people and organizations' fault. EA gets the flak because it wants to take money from rich people and use it to save poor people's lives. Not because it built the Silicon Valley environment / tech culture / investing bubble.
> Should life or death matters be up to whims like this?
Referring back to my earlier comment, can you sell me on the idea that they shouldn't? If you think aid should all come from taxes, sell me on the idea that USAID is less subject to the whims of the powerful than individual donations. Also sell me on the idea that overseas aid will naturally increase if individual donations fall. Or, sell me on the idea that the lives of the poor don't matter.
None of this will happen naturally though. We need to make it happen. So ultimately my position is that we need to aim efforts at making these changes, possibly at a higher priority than individual giving - if you can swing elections or change systems of government the potential impact is very high in terms of policy change and amount of total aid, and also in terms of how much money we allow the rich to play and gamble with. None of these are natural states of affairs.
None of this is new. What may be new is branding those traditional claims as a unique insight.
Even the terrible behavior and frightening sophistry of some high-profile proponents is really nothing groundbreaking. We've seen it before in other movements.
And the core idea of Effective Altruism is to actually verify those claims.
They donate a significant percentage of their income to the global poor, and save thousands of lives every year (see e.g. https://www.astralcodexten.com/p/in-continued-defense-of-eff... )
This is like saying "the master is good because he clothed his slaves".
For instance -
If I find some sort of fraud that will harm X number of users, but make me Y dollars - if Y > (harm caused), not doing (fraud making me Y dollars) could be interpreted as being "inefficient" with your resources or causing more harm. It's very easy to use the philosophy in this manner, and of course many see it as a huge perk. The types of people drawn to it are all much the same.
Just because the market pays for one activity does not mean its externalities are equally solved by the market's valuation.
From basic physics, it's akin to saying you can drop a vase and return it to its pre-dropped state with equal effort.
Entropy alone prevents EA.
So I’d argue on OPs side, I don’t care what EA stated intent is, it works pretty well as a smokescreen for the types who want to get really fucking rich by any means necessary. Even better if the donation target is a North Star they never actually reach.
Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. Like, it’s so consistent that anyone thinking they’ve figured it out needs to have a good hard think about it for a several years before tentatively attempting action based on it, I’d say.
https://en.wikipedia.org/wiki/Virtue_ethics
EA being a prime example of consequentialism.
Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.
The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.
And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.
> Utility has the advantage of sustaining moral care toward people far away from you
Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.
I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.
You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.
Virtue ethics is open-loop: the actions and virtues get considered without checking if reality has veered off course.
Consequentialism is closed-loop, but you have to watch out for people lying to themselves and others about the future.
The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.
If I want to give $100 to charity, some of the places that I can donate it to will do less good for the world. For example Make a Wish and Kids Wish Foundation sound very similar. But a significantly higher portion of money donated to the former goes to kids, than does money donated to the latter.
If I'm donating to that cause, I want to know this. After evaluating those two charities, I would prefer to donate to the former.
Sure, this may offend the other one. But I'm absolutely OK with that. Their ability to be offended does not excuse their poor results.
The conclusion that many EA people seemed to reach is that keeping your high-paying job and hiring 10 people to do good deeds is more ethically laudable than doing the thing yourself, even though it may be inefficient. Which really rubs a lot of people the wrong way, as it should.
The argument of EA is that feelings can be manipulated (and often are) by the marketing work done by charities and their proponents. If we want to actually be effective we have to cut past the pathos and look at real data.
Secondly, you're missing the point I'm making, which is why many people find EA distasteful: it completely focuses on outcomes and not internal character, and it arrives at these outcomes by abstract formulae. This is how we ended up with increasingly absurd claims like "I'm a better person because I work at BigCo and make $250k a year, then donate 10% of it, than the person that donates their time toward helping their community directly." Or "AGI will lead to widespread utopia in the future, therefore I'm ethically superior because I'm working at an AI company today."
I really don't think anyone is critical of EA because they think being inefficient with charity dollars is a good thing, so that is a strawman. People are critical of the smarmy attitude, the implication that other altruism is ineffective, and the general detached, anti-humanistic approach that the people in that movement portray.
The problems with it are not much different from utilitarianism itself, which EA is just a half-baked shadow of. As someone else in this comment section said, unless you have a sense of virtue ethics underlying your calculations, you end up with absurd, anti-human conclusions that don't make much sense to anyone with common sense.
There's also the very basic argument that maybe directly helping other people leads to a better world overall, and serves as a better example than just spending money abstractly. That counterargument never occurs to the EA/rationalist crowd, because they're too obsessed with some master rational formula for success.
No, I did not miss that point at all. I think it is WRONG to focus on character. That leads us down the dark path of tribalism and character assassination and culture war.
If we're going to talk about a philosophy and an ethics of behaviour, we have to talk about ACTIONS. That's the only way we'll ever accomplish any good.
"But putting any probability on any event more than 1,000 years in the future is absurd. MacAskill claims, for example, that there is a 10 percent chance that human civilization will last for longer than a million years."
Sam Bankman-Fried was all in with EA, but instead of putting his own money in, he put everybody else's in.
Also his choice of "good causes" was somewhat myopic.
To an EA, what you said is as laughable a strawman as if someone summarized your beliefs as "it makes no difference if you donate to starving children in Africa or if you do nothing, because it's your decision and neither is immoral".
The popularity of EA is even more obvious than what you described. Here's why it's popular. A lot of people are interested in doing good, but have limited resources. EAs tried to figure out how to do a lot of good given limited resources.
You might think this sounds too obvious to be true, but no one before EAs was doing this. The closest thing was charity rankings that just measured what percent of the money was spent on administration. (A charity that spends 100% of its donations on back massages for baby seals would be the #1 charity on that ranking.) Finding ways to do a lot of good given your budget is a pretty intuitively attractive idea.
And they're really all about this too. Go read the EA forum. They're not talking about how their hands are clean now because they donated. They're talking about how to do good. They're arguing about whether malaria nets or malaria chemotreatments are more effective at stopping the spread of the disease. They're arguing about how to best mitigate the suffering of factory farmed animals (or how to convince people to go vegan). And so on. EA is just people trying to do good. Yeah, SBF was a bad actor, but how were EA charities supposed to know that when the investors that gave him millions couldn't even do that?
I hope SBF doesn’t buy a pardon from our corrupt president, but I hope for a lot of things that don’t turn out the way I’d like. Apologies for USA-centric framing. I’m tired.
https://www.mcsweeneys.net/articles/i-work-for-an-evil-compa...
It's really amazing when reading this kind of stuff how many people don't appear to realize others don't buy into their cult. The way I see it: "I work for a company that the intellectual descendants of the 2nd (or the 1st) most evil ideology invented by man consider evil".
EA-the-brand turned into a speed run of the failure cases of utilitarianism. Because it was simply too easy to make up projections for how your spending was going to be effective in the future, without ever looking back at how your earning was damaging in the past. It was also a good lesson in how allowing thought experiments to run wild would end up distracting everyone from very real problems.
In the end an agency devoted to spending money to save lives of poor people globally (USAID) got shut down by the world's richest man, and I can't remember whether EA ever had anything to say about that.
But again, I recognize the appeal of your narrative so you're on safer ground than I am as far as HN popularity goes.
I have a lot of sympathy for the ideas of EA, but I do think a lot of this is down to EA-as-brand rather than whatever is happening at grassroots level. Perhaps it's in the same place as Communism; just as advocates need a good answer to "how did this go from a worker's rights movement to Stalin", EA needs an answer to "how did EA become most publicly associated with a famous fraudster".
EA had a fairly easy time in the media for a while which probably made its "leadership" a bit careless. The EA foundation didn't start to seriously disassociate itself from SBF until the collapse of FTX made his fraudulent activity publicly apparent.
But mostly, people (especially rich people) fucking hate it when you tell them they could be saving lives instead of buying a slightly nicer house. That (it seems to me) is why e.g. MOMA / Harvard / The British Museum etc. get to accept millions of dollars of drug dealer money and come out unscathed, whereas "EA took money from somebody who was subsequently convicted of fraud" gets presented as a decisive indicator of EA's moral character. It's also, I think, the reason you seem to have ended up thinking EA is anti-tax and anti-USAID.
I feel like I need to say, there's also a whole thing about EA leadership being obsessed with AI risk, which (at least at the time) most people thought was nuts. I wasn't really happy with the amount of money (especially SBF money) that went into that, but a large majority of EA money was still going into very defensible life-saving causes.
Edit: I made a few edits, sorry
I am not impressed with billionaires who dodge taxes and then give a few pennies to charity.
The government is quite literally all of us. Do better.
Doing that doesn’t buy you personal virtue. It doesn’t excuse heinous acts. But within the bounds of ordinary standards of good behavior, try to do the most good you can with the talents and resources at your disposal.
Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.
The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.
Just try to help the people around you a bit more. If everyone did that, we'd be good.
Which obviously has great appeal.
This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.
It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.
Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
That's why I was replying too. Obviously, if you are willing to "do more", then you can potentially get more done.
Tourism does redistribute money, but a lot of resources go to taking care of the tourists.
Utilitarianism suffers from the same problems it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.
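A toy illustration of that, with invented numbers (two interventions swap rank purely based on the horizon you evaluate them over):

    # Invented numbers: intervention A yields 10 units of good per year
    # starting now; B yields nothing for 5 years, then 25 units per year.
    def cumulative_good(annual, delay, horizon):
        return max(0, horizon - delay) * annual

    for horizon in (1, 5, 10, 40):
        a = cumulative_good(10, 0, horizon)
        b = cumulative_good(25, 5, horizon)
        print(f"{horizon} yr: A={a}, B={b} -> {'A' if a >= b else 'B'}")
    # 1 and 5 year horizons favor A; 10 and 40 year horizons favor B.
    # The "best" choice is an artifact of the time frame you picked.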
"What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.
I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.
I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50USD/hour, rather than doing a volunteering job that I could pay 25USD/hour to do, I simply do my job and pay for 2 people to volunteer.
Effective altruism is a political movement, with all the baggage implicit in that.
An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.
Define "needs". Some overheads are part of the costs of delivering the effective part, sure. But a lot of them are costs of fundraising, or entirely unnecessary costs.
Based on this, charity navigator says charity A is lower-ranked than charity B.
Now imagine that charity A and B can each absorb up to $1 billion in additional funding to work on their respective missions. Charity A saves one life for every $1,000 it gets, while B saves one life for every $10,000 it gets.
Charity navigator wouldn’t even attempt to consider this difference in its evals. EA does.
These evals get complex, and the EA organizations focused on charity evals like this have sophisticated methods for trying to do this well.
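A minimal sketch of the arithmetic, using the hypothetical $1,000/$10,000 figures above:

    # Hypothetical figures from the example above: each charity can
    # absorb $1B, at very different costs per life saved.
    donation = 1_000_000_000
    cost_per_life = {"charity_a": 1_000, "charity_b": 10_000}

    for name, cost in cost_per_life.items():
        print(f"{name}: {donation // cost:,} lives saved")
    # charity_a: 1,000,000 vs charity_b: 100,000 -- a 10x gap that an
    # overhead-ratio ranking never sees.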
How does a charity spend money unless people give it money?
They need to fund raise. There's only so far you can get with volunteers shaking tins on streets.
If a TV adverts costs £X but raises 2X, is that a sensible cost?
Here's a random UK charity which spent £15m on fund raising.
https://register-of-charities.charitycommission.gov.uk/en/ch...
That allowed them to raise 3X the amount they spent. Tell me if you think that was unnecessary?
Sure, buying the CEO a jet should start ringing alarm bells, but most charities have costs. If you want a charity to be well managed, it needs to pay for staff, audits, training, etc.
Maybe, but quite possibly not, because that 2X didn't magically appear, it came out of other people's pockets, and you've got to properly account for that as a negative impact you're having.
>Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence and careful reasoning, and personal values.
What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.
Anyone who has to call themselves altruistic simply isn't lol
Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.
ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.
Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… "There is an optimum way, hence if you are not doing it, you must be corrected".
The real world has optimums, and there's not a single best thing to do, but some charities are just obviously closer to being one of those optimums. Donating to an art museum is probably not one of the optimal things for the world, for example.
Why? The alternative is to donate to sexy causes that make you feel good:
- disaster relief and then forget about once it's not in the news anymore
- school uniforms for children when they can't even do their homework because they can't afford lighting at home
- literal team of full time body guards for the last member of some species
The problem with "helping the most people in the most effective way" is these two goals are often at odds with each other.
If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.
And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.
I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking I think the real-world dynamics you mentioned don't map on to that. What's abstract in your examples is our grasp of the meaning of the impact on the people being helped. But it doesn't follow that the causes are fractional changes to large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look it up to recall what UNICEF even does), but still critically important to those who benefit from it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of people who get them.
So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that their stock recommendations are not that different from Carter Foundation or UN style initiatives.
The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map on to your criticism.
I already specified why.
>It's basically just spending money to change the world into what you want.
Change isn't necessarily good. I think we can all rattle off a ton of missions to change throughout human history that were very bad, and did not even have good intentions. On top of that, even in less extreme cases, people have competing conceptions of the good. Resolving that is always going to include some messiness.
It's not just about donating. Modern day EA is focused on impactful jobs, like working in research, policy, etc., more than it is focused on donating money.
See for example: https://80000hours.org/2015/07/80000-hours-thinks-that-only-...
Instead, the definition of EA given on their own site is
> Effective altruism is the project of trying to find the best ways of helping others, and putting them into practice.
> Effective altruism breaks down into a philosophy that aims to identify the most effective ways of helping others, and a practical community of people who aim to use the results of that research to make the world better.
Oh, god forbid people try to change the world, especially when the change they want to see is fewer drowned children. Or eliminating malaria.
If you want to form a movement, you now have a movement, with all that entails: leaders, policies, politics, contradictions, internecine struggles, money, money, more money, goals, success at your goals, failure at your goals, etc.
Congratulations you rediscovered tithing.
Most comments read like a version of "Who do you think you are?". Apparently it is very bad to try to think rationally about how and where to give out your money.
I mean, if rich people want to give their money away for good causes and are actually trying to do the work of researching whether it has an impact, instead of just enjoying the high-status feeling of the optics of giving to a good cause (see The Anonymous Donor episode of Curb Your Enthusiasm), what is it to you all?
It feels to me like some parents wanting to plan the birth of their children and all the people around are like "Nooo, you have to let Nature decide, don't try to calculate where you are in your cycle!!!"
Apparently this is "authoritarian" and "can be used to justify anything" like eugenics, but also will end up "similar to communism", but also leads to "hyperindividualism"?
The only way I can explain it is that no one wants to give even 1% of their money away, and they hate the people who make them feel guilty by doing so and saying it would be a good thing, so everyone is lashing out.
Also I don't see Elon Musk giving out his money to save non-white people's lives anytime soon
So who are we talking about here ?
I don't think much of Christians but I love the Salvation Army. They patrol the streets picking up whoever they find and help them. Regardless of background, nationality, religion or IQ. It goes against everything tech bros believe in.
Don't you have other things to do than give flak to people who helped a population on the other side of the globe not die of malaria?
In the meantime, Christians did not give us vaccines and antibiotics, without which you might not even be alive today. Also, charity has a bad track record of being more about making the donors feel superior/good about themselves than actually making a change. Maybe you'd like to read "Down and Out in Paris and London".
Don't get me wrong, the Salvation Army is great and everyone who wishes to make a difference is welcome to do so.
I, myself, am not even donating to EA causes, and what I have done is much closer to Salvation Army stuff (a hot soup and a place to rest), but I don't see how the Salvation Army can be weaponized against EA; that's insane.
There are loads of charities that are basically scams that give very little to the cause they claim to support and reserve most of the money for the high salaries of their board members. The EA argument, at its core, is to do some research before you give and try to avoid these scams.
I remember reading that the original founder of Mothers Against Drunk Driving (MADD) left because of this kind of thing.
"Lightner stated that MADD "has become far more neo-prohibitionist than I had ever wanted or envisioned … I didn't start MADD to deal with alcohol. I started MADD to deal with the issue of drunk driving".
https://en.wikipedia.org/wiki/Mothers_Against_Drunk_Driving#...
TBH I am not like, 100% involved, but my first exposure to EA was a blog post from a notorious rich person, describing how he chose to drop a big chunk of his wealth on a particular charity because it could realistically claim to save more lives per dollar than any other.
Now, that might seem like a perfect ahole excuse. But having done time in the NFP/Charity trenches, it immediately made a heap of sense to me. I worked for one that saved 0 lives per dollar, refused to agitate for political change that might save people time and money, and spent an inordinate amount of money on lavish gifts for its own board members.
While EA might stink of capitalism, to me, it always seemed obvious. Charities that waste money should be overlooked in favor of ones that help the most people. It seems to me that EA has a bad rap because of the people who champion it, but criticism of EA as a whole seems like cover for extremely shitty charities that should absolutely be starved of money.
YMMV
The way I first heard of Effective Altruism, I think before it was called that, took a rather different approach. It was from a talk given by the founders of GiveWell at Google. (This is going off of memory so this is approximate.)
Their background was people working for a hedge fund who were interested in charity. They had formed a committee to decide where best to donate their money.
The way they explained it was that there are lots of rigorous approaches to finding and evaluating for-profit investments. At least in hindsight, you can say which investments earned the most. But there's very little for charities, so they wanted to figure out a rigorous way to evaluate charities so they could pick the best ones to donate to. And unlike what most charitable foundations do, they wanted to publish their recommendations and reasoning.
There are philosophical issues involved, but they are inherent in the problem. You have some money and you want to donate it, but don't know which charity to give it to. What do you mean by the best charity? What's a good metric for that?
"Lives saved" is a pretty crude metric, but it's better than nothing. "Quality-adjusted life years" is another common one.
Unfortunately, when you make a spreadsheet to try to determine these things, there are a lot of uncertain inputs, so doing numeric calculations only provides rough estimates. GiveWell readily admits that, but they still do a lot of research along these lines to determine which charities are the best.
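To see why, here's a minimal Monte Carlo sketch (not GiveWell's actual model; every input is invented) of how uncertain inputs turn the output into a range rather than a number:

    import random

    # Invented inputs -- not GiveWell's model. Each parameter is uncertain,
    # so "cost per life saved" comes out as a wide range, not a point estimate.
    def cost_per_life_saved(rng):
        cost_per_net = rng.uniform(4, 8)         # USD per bednet, assumed
        nets_per_life = rng.uniform(500, 2000)   # assumed effectiveness range
        overhead = rng.uniform(1.05, 1.25)       # assumed delivery overhead
        return cost_per_net * nets_per_life * overhead

    rng = random.Random(0)
    samples = sorted(cost_per_life_saved(rng) for _ in range(10_000))
    print(samples[500], samples[5000], samples[9500])  # ~5th/50th/95th percentiles
    # The plausible range spans severalfold -- useful for ranking charities,
    # but clearly a rough estimate, exactly as GiveWell admits.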
There's been a lot of philosophical nonsense associated with Effective Altruism since then, but I think the basic approach still makes sense. Deciding where to donate money is a decision many people have! It doesn't require much in the way of philosophical commitments to decide that it's helpful to do what you can to optimize it. Why wouldn't you want to do a better job of it?
GiveWell's approach has evolved quite a bit since then, but it's still about optimizing charitable donations. Here's a recent blog post that goes into their decision-making:
https://blog.givewell.org/2025/07/17/apples-oranges-and-outc...
Origins of some movement or school of thought or whatever will have many threads. I worked in charity fundraising over 20 years ago as one of the first things I did after first getting out of college, and the first organization I am aware of that did public publishing of charity evaluations was GuideStar, founded in 1994. This is the kind of thing that had always been happening in public foundations and government funding agencies, but they tended not to publish or well organize the results such that any individual donor could query. GuideStar largely collected and published data that was legally required to be public but not easy to collate and query, allowing donors to see what proportion of a donation went to programs versus overhead and how effective each charity was at producing the outcomes it was designed to produce. GiveWell went beyond that to making explicit attempts at ranking impact across possible outcomes, judging some to be more important than others.
As I recall from the times, taking this idea to places like Google and hedge funds came from the observation that rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understanding. Think of Phil Knight almost single-handedly turning the University of Oregon into a national football power, places like the Mozilla Foundation or New York Met having chairpersons earning 7 or 8 figure salaries, or the ever popular "give money to get your name on a hospital wing," which usually involves giving money to hospitals that already had a lot of money.
Parallel to that is guys like Singer trying to make a more rationally coherent form of consequentialism that doesn't bias the proximate over the distant.
Eventually, LessWrong latches onto it, it merges with the "earn to give" folks, and decades later you end up with SBF and that becomes the public view of EA.
Fair enough and understandable, but it doesn't mean there were never any good ideas there, and even among rich people, whatever you think of them, I'd say Bill and Melinda Gates helped more with their charity than Phil Knight and the Koch brothers.
To me, the basic problem is people, no matter how otherwise rational they may be, don't deal well with being able to grok directionality without being able to precisely quantify, and morality involves a lot of that. We also don't do well with incommensurate goods. Saving the life of a starving child is probably almost always better than making more art, but that doesn't reduce to us wanting, or having to want, a world with no art, and GiveWell's attempts at measuring impact in dollars clearly don't mean we can just spend $5000 x <number of people who die in an average year> and achieve zero deaths, or even just zero from malaria and parasitic worms. These are fuzzy categories that involve uncertain value judgments and moving targets with both diminishing marginal utility and diminishing marginal effectiveness. Likewise, earning to give clearly breaks down if you imagine a world with nothing but hedge fund managers and no nurses. Funding is important, but someone still has to actually do the work, and they're "good" people, too, maybe even better.
In any case, I at least feel confident in stating that becoming a deca-billionaire at all costs, including fraud and crime, so you can helicopter cash onto poor people later in life, is not the morally optimal human pursuit. But I don't know what actually is.
How do you figure out which causes need the most money (have "more room for funding", in EA terms) or are "really" charitable by most understanding? You need to rank impact across possible outcomes and judge some as more important than others, which is what GiveWell and the Open Philanthropy Project do.
But hoping I'm misreading and engaging anyway: "room for funding" varies in its specifics across domains, but involves some combination of unmet need plus organizational capacity to meet that need. Try not to get hung up on the object-level examples, because I have no idea whether they're true now or were true in the past, but I think they're close to real examples from 15 years or so ago, the last time I cared about this. Imagine you've got 50,000 people in some equatorial country living in places scourged by malaria, and 5,000 of them have nets. Some charity exists with the supply chains, connections to manufacturers, and local distributors such that it could easily give 20,000 additional people nets, but it simply doesn't have the money to buy them. Conversely, imagine pancreatic cancer research is in a state where there may be plenty of fruitful areas of research not currently being explored, but every person on the planet qualified to conduct such research is 100% booked with whatever they're currently doing for at least the next five years. Then it is more effective to donate to the former than the latter, at least for the next five years, and at least up to the point where the former still has sufficient unmet need and capacity. Again, emphasizing that these are not static conditions.
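To put the arithmetic behind that in one place, here's a minimal sketch in Python; the cost per net is an assumption of mine, and as noted above the other figures are illustrative rather than current:

    # "Room for more funding" from the hypothetical net example above.
    # All numbers are illustrative, including the assumed cost per net.
    population_at_risk = 50_000
    already_covered = 5_000
    distribution_capacity = 20_000  # extra nets the charity could deliver
    cost_per_net = 5.00             # assumed unit cost, in dollars

    unmet_need = population_at_risk - already_covered  # 45,000 people
    fundable = min(unmet_need, distribution_capacity)  # capacity binds: 20,000
    room_for_funding = fundable * cost_per_net         # $100,000

    print(f"Room for more funding: ${room_for_funding:,.0f}")

The point of taking the min is exactly the comment's point: money stops being effective once either unmet need or delivery capacity runs out, whichever comes first.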
As for "really" charitable, as always, it's a judgment call. I assume most people would find poverty assistance and medical aid to be charitable, but funding college sports not as much, in spite of both qualifying for tax deductions under US tax law. I can't guarantee all people will agree, but something like GiveWell is nonetheless premised on the assumption that some outcomes are more morally valuable than others. Curing children of parasites that might kill them or severely impede their mental development is more morally valuable than the civic pride and bragging rights of Oregon alumni and the local fan base.
But at the same time, just as I said above that I don't think we want a world with no art, I also don't think we want a world with no sports. I can't speak for GiveWell, but I certainly don't think the correct amount of money to donate to amateur sports or art museums is zero. In line with the movement's hedge fund origins, instead of all-or-nothing thinking like that, we should think in terms of portfolio allocations: overweight high-QALY early-childhood health interventions and underweight adding $10 million to Harvard's $50 billion endowment. Exactly how much? I have no idea. Each person should decide that for themselves, but I still think there's value in bringing up the topic and starting the discussion.
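As a minimal sketch of that portfolio framing, with causes and weights that are entirely made up rather than recommendations:

    # Hypothetical portfolio-style giving: overweight the high-impact
    # cause without zeroing out the others. Weights are assumptions.
    budget = 1_000.00  # annual giving, in dollars
    allocation = {
        "early-childhood health (high QALY/$)": 0.70,
        "local art museum": 0.20,
        "amateur sports": 0.10,
    }
    for cause, weight in allocation.items():
        print(f"{cause}: ${budget * weight:,.2f}")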
DDT is also a "very bad, evil, and wicked" thing - for anyone educated between 1970 and 2010.
I remember cartoons having villains who were seeking to make DDT legal again - that's how stigmatized it is.
The EA people did a pretty good job rehabilitating DDT. Good for them.
But the problem is they're still asking, "What's the cheapest way to save a human life?"
The GiveWell objective is lives saved, or QALYs, or whatever. Others want qualia maximized or whatever. But the idea is entirely logical.
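Concretely, the "entirely logical" part is just: pick an objective, then rank by it. A minimal sketch, with every number invented for illustration:

    # Rank hypothetical interventions by dollars per QALY (lower is
    # better). All figures here are made up.
    interventions = {
        "bednets":     (5.00, 0.050),     # (cost per unit $, QALYs per unit)
        "deworming":   (1.00, 0.008),
        "museum wing": (10_000.00, 0.100),
    }
    ranked = sorted(interventions.items(),
                    key=lambda kv: kv[1][0] / kv[1][1])
    for name, (cost, qalys) in ranked:
        print(f"{name}: ${cost / qalys:,.0f} per QALY")

Swap in a different objective function and the ranking machinery stays the same, which is part of why arguments end up being about the objective rather than the math.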
I think part of the problem with popularization is that many people have complex objective functions, not all of which are socially acceptable to say. As an example, I want to be charitable in a way that grants me status in my social circle, where spending on guinea worm is less impressive than, say, buying ingredients for cookies, baking them, and giving the cookies to the poor.
Personally I think that’s fine too. I know that some aspect of the charity I do (which is not effective, I must admit) is driven by a desire for recognition, and I think it’s good to encourage this because it leads to more charity.
But for many people, encouraging others to state their objective function is seen as a way to “unearth the objective functions of the ones with lesser motives”, and some number of EA people do that.
To say nothing of the fact that lots of people get very upset about the idea that “you think you’re so much better than me?” and so on. It’s an uphill climb, and I wouldn’t do it, but I do enjoy watching them do it because I get the appeal.
https://www.givewell.org/how-we-work/our-criteria/cost-effec...
Maybe a book clarifying what it really is would be a good idea.
For example, the most prominent scandal in the U.S. right now is the Epstein saga. A massive scandal that likely involves the President, a former President, one of the richest men in the world, and a member of the UK royal family.
And in a nutshell, Epstein’s job and source of power was his role as a philanthropist.
No one is using that example to say that regular philanthropy and charity have something wrong with them (even though there are a lot of issues with them…).
Bingo card (and rebuttals):
– Effective altruists donate money and think it’s the most effective way to do good. [1][2]
– They think that exploiting people is fine if money is given to a good cause. [3][4][5]
– They think they are so much morally-superior/better than us. [3]
– Sam Bankman-Fried is a thief and he self-identified as an EA, so EA must be bad as a whole. [4][6]
– It’s dangerous because it’s an “end justifies the means” philosophy. [4][5]
– If it’s not perfect then it’s terrible and has no merit whatsoever. [7][8][9]
– They think they are so smart but they just stole the idea of donating part of the income from Christians. [10][11]
——————————
[1] https://www.effectivealtruism.org/faqs#objectionsto-effectiv...
[2] “80,000 Hours thinks that only a small proportion of people should earn to give long term”: https://80000hours.org/2015/07/80000-hours-thinks-that-only-...
[3] What We Owe The Future (EA book): “naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.” and “it's wrong to do harm even when doing so will bring about the best outcome.”
[4] https://threadreaderapp.com/thread/1591218028381102081.html / https://xcancel.com/willmacaskill/status/1591218028381102081
[5] The Precipice (EA book): “Don't act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.”
[6] “Bankman-Fried agreed his ethically driven approach was "mostly a front".”: https://www.bbc.com/worklife/article/20231009-ftxs-sam-bankm...
[7] “It’s perfectly okay to be an imperfect effective altruist”: https://www.givingwhatwecan.org/blog/its-perfectly-okay-to-b...
[8] “Mistakes we’ve made”: https://www.centreforeffectivealtruism.org/our-mistakes
[9] “GiveWell's Impact”: https://www.givewell.org/about/impact
[10] There is a large Christian community within EA. “We are Christians excited about doing the most good possible.”: https://www.eaforchristians.org/
[11] Many EAs consider Christian charity to be one of the seeds of EA. “A potential criticism or weakness of effective altruism is that it appeals only to a narrow spectrum of society, and exhibits a ‘monoculture’ of ideas. I introduce Dorothea Brooke, a literary character who I argue was an advocate for the principles of effective altruism -- as early as 1871 -- in a Christian ethical tradition”: https://forum.effectivealtruism.org/posts/TsbLgD4HHpT5vrFQC/...
While greater efficiencies are always welcome, it seems immature or unwise to bring the “Well I tell ya what I’d do…” attitude to incredibly complex, messy human endeavors like philanthropy. Ditto for politics. Rather, get in there and learn why these systems are so messy…that’s life, really.
I think people fall into that trap because our economic programming suggests that money has something to do with merit. A mind that took that programming well will have already made whatever sacrifices are necessary to also see altruism as an optimization problem.
philipallstar•2mo ago
This is sadly still true, given how much money goes toward the process of getting someone help versus how much is dedicated to actually helping.
weepinbell•2mo ago
givewell.org is probably the most prominent org recommended by EAs; it conducts and aggregates research on charitable interventions and shows, with strong RCT evidence, that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money given to good charities like the ones GiveWell recommends massively improves people's lives.
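Taking that range at face value, the back-of-envelope math is straightforward (a sketch; the donation amount is arbitrary):

    # Expected lives saved for a given donation, using the cited
    # $3,000-$5,500 per-life range. The donation figure is arbitrary.
    donation = 11_000  # dollars
    for cost_per_life in (3_000, 5_500):
        print(f"at ${cost_per_life:,} per life: ~{donation / cost_per_life:.1f} lives")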
GiveDirectly is another org that's much more straightforward - giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong (https://www.givedirectly.org/gdresearch/).
It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".
gopher_space•2mo ago
When you see the return on money spent this way, other forms of aid start looking like gatekeeping and rent-seeking.
weepinbell•2mo ago
That said, I also think longer-term research and investment in things like infrastructure matter too and can't easily be measured with an RCT. GiveWell-style giving is great, and it's awesome that the evidence is so strong (it's most of my charitable giving), but that doesn't mean charities with less easily researched goals are necessarily bad.
rincebrain•2mo ago
As the numbers get larger, it becomes easier and easier to argue that the organization's continued existence is still a net positive, even as more and more is wasted on organizational bloat.
It's also surprisingly hard to avoid - consider how the ACA required that 85% of premiums go to care, and how that created an incentive for the premiums themselves to become enormous.
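To spell out that mechanism with a toy example (numbers invented): if retained revenue is capped at a fixed percentage of premiums, the only way to grow it in absolute terms is to grow the premiums themselves.

    # A fixed percentage cap rewards a bigger total: retained revenue
    # (overhead + profit) capped at 15% of premiums grows only when
    # premiums do. All figures are made up.
    for premiums in (1_000, 2_000, 4_000):  # annual premium per member, dollars
        retained_cap = 0.15 * premiums
        print(f"premiums ${premiums:,} -> up to ${retained_cap:,.0f} retained")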
potato3732842•2mo ago
To be fair, that particular example was obvious from day 1.
rincebrain•2mo ago
But it's an excellent example of how something that naive people could, in good faith, claim is a good thing can turn pathological.
philipallstar•2mo ago
Perhaps, but it's exactly the type of thinking the article is describing.