frontpage.

Show HN: I built a synth for my daughter

https://bitsnpieces.dev/posts/a-synth-for-my-daughter/
697•random_moonwalk•5d ago•138 comments

An official atlas of North Korea

https://www.cartographerstale.com/p/an-official-atlas-of-north-korea
89•speckx•1h ago•38 comments

FreeMDU: Open-source Miele appliance diagnostic tools

https://github.com/medusalix/FreeMDU
180•Medusalix•6h ago•39 comments

Project Gemini

https://geminiprotocol.net/
121•andsoitis•4h ago•82 comments

Things I don't like in configuration languages

https://medv.io/blog/things-i-dont-like-in-configuration-languages
31•birdculture•8h ago•27 comments

My stages of learning to be a socially normal person

https://sashachapin.substack.com/p/my-six-stages-of-learning-to-be-a
86•eatitraw•2d ago•28 comments

WeatherNext 2: Our most advanced weather forecasting model

https://blog.google/technology/google-deepmind/weathernext-2/
97•meetpateltech•5h ago•41 comments

Show HN: ESPectre – Motion detection based on Wi-Fi spectre analysis

https://github.com/francescopace/espectre
25•francescopace•5h ago•2 comments

Israeli-founded app preloaded on Samsung phones is attracting controversy

https://www.sammobile.com/news/israeli-app-app-cloud-samsung-phones-controversy/
147•croes•3h ago•81 comments

Our dogs' diversity can be traced back to the Stone Age

https://www.bbc.com/news/articles/ce9d7j89ykro
11•1659447091•3d ago•2 comments

Giving C a superpower: custom header file (safe_c.h)

https://hwisnu.bearblog.dev/giving-c-a-superpower-custom-header-file-safe_ch/
202•mithcs•9h ago•164 comments

Aldous Huxley predicts Adderall and champions alternative therapies

https://angadh.com/inkhaven-7
10•surprisetalk•5h ago•1 comment

How to escape the Linux networking stack

https://blog.cloudflare.com/so-long-and-thanks-for-all-the-fish-how-to-escape-the-linux-networkin...
32•meysamazad•4h ago•2 comments

Astrophotographer snaps skydiver falling in front of the sun

https://www.iflscience.com/the-fall-of-icarus-you-have-never-seen-an-astrophotography-picture-lik...
88•doener•1d ago•22 comments

How when AWS was down, we were not

https://authress.io/knowledge-base/articles/2025/11/01/how-we-prevent-aws-downtime-impacts
26•mooreds•2h ago•11 comments

EEG-based neurofeedback in athletes and non-athletes

https://www.mdpi.com/2306-5354/12/11/1202
10•PaulHoule•2h ago•1 comment

DESI's Dizzying Results

https://www.universetoday.com/articles/desis-dizzying-results
9•belter•2h ago•0 comments

Raccoons are showing early signs of domestication

https://www.scientificamerican.com/article/raccoons-are-showing-early-signs-of-domestication/
56•pavel_lishin•3d ago•36 comments

The time has finally come for geothermal energy

https://www.newyorker.com/magazine/2025/11/24/why-the-time-has-finally-come-for-geothermal-energy
51•riordan•6h ago•80 comments

Implementing Rust newtype for errors in axum

https://rup12.net/posts/learning-rust-custom-errors/
4•ruptwelve•44m ago•0 comments

A graph explorer of the Epstein emails

https://epstein-doc-explorer-1.onrender.com/
86•cratermoon•2d ago•5 comments

Show HN: Bsub.io – zero-setup batch execution for command-line tools

9•wkoszek•4h ago•4 comments

Azure hit by 15 Tbps DDoS attack using 500k IP addresses

https://techcommunity.microsoft.com/blog/azureinfrastructureblog/defending-the-cloud-azure-neutra...
77•speckx•2h ago•71 comments

Google is killing the open web, part 2

https://wok.oblomov.eu/tecnologia/google-killing-open-web-2/
254•akagusu•4h ago•196 comments

Where do the children play?

https://unpublishablepapers.substack.com/p/where-do-the-children-play
232•casca•1d ago•185 comments

Replicate is joining Cloudflare

https://replicate.com/blog/replicate-cloudflare
227•bfirsh•5h ago•52 comments

Show HN: Continuous Claude – run Claude Code in a loop

https://github.com/AnandChowdhary/continuous-claude
8•anandchowdhary•2d ago•1 comment

Show HN: Building WebSocket in Apache Iggy with Io_uring and Completion Based IO

https://iggy.apache.org/blogs/2025/11/17/websocket-io-uring/
7•spetz•2h ago•0 comments

Are you stuck in movie logic?

https://usefulfictions.substack.com/p/are-you-stuck-in-movie-logic
111•eatitraw•7h ago•103 comments

People are using iPad OS features on their iPhones

https://idevicecentral.com/ios-customization/how-to-enable-ipad-features-like-multitasking-stage-...
86•K0IN•17h ago•95 comments

A new book about the origins of Effective Altruism

https://newrepublic.com/article/202433/happened-effective-altruism
43•Thevet•2h ago

Comments

philipallstar•1h ago
> In the past, there was nothing we could do about people in another country. Peter Singer says that’s just an evolutionary hangover, a moral error.

This is sadly still true, given the percentage of money that goes to getting someone some help vs the amount dedicated to actually helping.

weepinbell•1h ago
Certainly charities exist that are ineffective, but there is very strong evidence that there exist charities that do enormous amounts of direct, targeted good.

givewell.org is probably the most prominent organization recommended by many EAs; it conducts and aggregates research on charitable interventions and shows, with strong RCT evidence, that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money to good charities like the ones GiveWell recommends massively improves people's lives.

GiveDirectly is another org that's much more straightforward - giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong (https://www.givedirectly.org/gdresearch/).

It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".

gopher_space•42m ago
> giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong

When you see the return on money spent this way other forms of aid start looking like gatekeeping and rent-seeking.

cm2012•1h ago
You can pretty reliably save a life in a 3rd world country for about $5k each right now.
tavavex•48m ago
How? I'm curious because the numbers are so specific ($5000 = 1 human life), unclouded by the usual variances of getting the money to people at a macro scale and having it go through many hands and across borders. Is it related to treating a specific illness that just objectively costs that much to treat?
cm2012•42m ago
Here is a detailed methodology: https://www.givewell.org/impact-estimates. It convinced me that $5k is a reasonable estimate.
jimbokun•45m ago
Peter Singer is the LAST person I would go to for advice on morality or ethics.
jmount•1h ago
Effective Altruism and Utilitarianism are just a couple of the presentations authoritarians sometimes make for convenience. To me they decode simply as "if I had everything now, that would eventually be good for everybody."

The arguments always feel to me too similar to "it is good that Carnegie called in the Pinkertons to suppress labor, as it allowed him to build libraries." Yes, what Carnegie did later was good, but it doesn't completely paper over what he did earlier.

lesuorac•1h ago
> The arguments always feel to me too similar to "it is good that Carnegie called in the Pinkertons to suppress labor

Is that an actual EA argument?

The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So even without the Pinkertons he could still have afforded probably every philanthropic thing he did, which means the philanthropy doesn't justify it.

I don't really follow the EA space, but the actual arguments I've heard are largely about working at FAANG to make 3x the money you'd earn outside of FAANG, which lets you donate 1x to 1.5x that money. Which to me is very justifiable.

But to stick with the article: I don't think taking in billions via fraud in order to donate some of it to charity is a net positive for society.

hobs•1h ago
When you work for something that directly contradicts peaceful civil society, you are basically saying the mass murder of today is OK because it lets you assuage your guilt by giving to your local charity. It's only effective if altruism is not your goal.
lesuorac•6m ago
It still depends on the marginal contribution.

A janitor at the CIA in the 1960s is certainly working at an organization that is disrupting the peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're doing a net-negative for society because the janitor's marginal contribution towards that objective is 0.

It might not be the best thing the janitor could do for society (as compared to running a soup kitchen).

Eisenstein•1h ago
> Is that an actual EA argument?

you missed this part: "The arguments always feel to me too similar"

> The value is all at the margins. Like Carnegie had legitimate functional businesses that would be profitable without Pinkerton's. So without Pinkerton's he'd still be able to afford probably every philanthropic thing he did so it doesn't justify it.

That isn't what OP was engaging with, though. They aren't asking you to answer the question 'what could Carnegie have done better'; they are saying 'the philosophy seems to be arguing this particular thing'.

TimorousBestie•1h ago
> . . . but also what’s called long-termism, which is worrying about the future of the planet and existential risks like pandemics, nuclear war, AI, or being hit by comets. When it made that shift, it began to attract a lot of Silicon Valley types, who may not have been so dedicated to the development part of the effective altruism program.

The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.

Alas.

libraryofbabel•1h ago
I expect the book itself (Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need, by David Edmonds) is good, as the author has written a lot of other solid books making philosophy accessible. The title of the article though, is rather clickbaity: it’s hardly “recovering” the origins of EA to say that it owes a huge debt to Peter Singer, who is only the most famous utilitarian philosopher of the late 20th century!

(Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)

dang•1h ago
Ok, we've de-recovered the origins in the title above.
CactusBlue•1h ago
> I think they’re recovering. They’ve learned a few lessons, including not to be too in hock to a few powerful and wealthy individuals.

I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.

hexator•1h ago
I find it to be a dangerous ideology since it can effectively be used to justify anything. I joined an EA group online (from a popular YouTube channel) and the first conversation I saw was a thread by someone advocating for eugenics. And it only got worse from there.

> A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.

Yes, this just about sums it up. As a movement they seem to be attracting listless contrarians who are entirely too willing to dig up old demons of the past.

nullc•1h ago
> through rationalism,

When they write "rationalism" you should read "rationalization".

chrisweekly•49m ago
Yes! It's a crucial distinction. Rationalism is about being rational / logical -- moving closer to neutrality and "truth". Whereas to rationalize something is often about masking selfish motives, making excuses, or (self-)deception -- moving away from "truth".
XorNot•45m ago
It's a variant of how you instantly know what a government will be like depending on how much "democracy" they put in their name.
mikkupikku•56m ago
Agreed. It's firmly an "ends justify the means" ideology, reliant on accurately predicting future outcomes to justify present actions. This sort of thing gives free license to any sociopath with enough creativity to spin some yarn with handwavy math about the bad outcome their malicious actions are meant to be preventing.
keiferski•1h ago
The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause." There is really no element of personal virtue in the way that virtue ethics has; it's just pure calculation.

It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.

That's not to say that all the work they're doing/have done is bad, but it's not really surprising why bad actors attached themselves to the movement.

nonethewiser•1h ago
>The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause."

I don't think this is a very accurate interpretation of the idea, even with how flawed the movement is. EA is about donating your money effectively, i.e. ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme, you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing your being a bad person, and the donation target is definitely NOT unimportant.

some_guy_nobel•57m ago
You claim OP's interpretation is inaccurate, yet it tracks perfectly with many of EA's most notorious supporters.

Given that contrast, I'd ask what evidence do you have for why OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?

jandrese•52m ago
It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.
glenstein•32m ago
The problem is that this creates a kind of epistemic closure around yourself, where you can't encounter such a thing as a sincere expression of it. I actually think your charge against Libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions, for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on things like justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with Twitter-style ridicule.

I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.

Eisenstein•6m ago
When a term becomes loaded enough then people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X' it is actually 'people who are X will think I am one of them'.
RobinL•36m ago
> many of EA's most notorious supporters.

The fact they're notorious makes them a biased sample.

My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:

- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's

- We should carefully decide which charities to give to, because some are far more effective than others.

That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.

I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.

klustregrif•52m ago
> EA is about donating your money effectively

For most, it seems, EA is an argument that despite no charitable donations being made at all, and despite wealth gained through questionable means, it's still all ethical, because it's theoretically "just more effective" if the person keeps claiming they will, in the far future, put some money toward hypothetical "very effective" charitable causes: causes that never seem to materialize, and that of course shouldn't be pursued "until you've built your fortune".

Aunche•19m ago
If you're going to assign a discount rate for cash, you also need to assign a similar "discount rate" for future lives saved. Just like investments compound, giving malaria medicine and vitamins to kids who need them should produce at least as much positive compounding return.
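
The compounding comparison above can be made concrete in a toy sketch. All numbers here are illustrative assumptions (not GiveWell figures): a $5,000 donation, a 5% market return, and an assumed 5% annual compounding of the donation's downstream benefits.

```python
def future_value(amount: float, rate: float, years: int) -> float:
    """Compound `amount` at annual `rate` for `years` years."""
    return amount * (1 + rate) ** years

donation = 5_000        # illustrative donation size
investment_rate = 0.05  # assumed annual market return
benefit_rate = 0.05     # assumed compounding of health/economic benefits
years = 20

# Option A: donate now; the benefits (healthier kids, higher future
# earnings, knock-on help to others) compound on their own.
donate_now = future_value(donation, benefit_rate, years)

# Option B: invest for 20 years, then donate the larger lump sum,
# which has no time left to compound as benefit.
donate_later = future_value(donation, investment_rate, years)

# When the benefit "discount rate" matches the market rate,
# delaying buys nothing: both paths end at the same value.
print(round(donate_now, 2), round(donate_later, 2))
```

Under these assumptions the two paths tie; if the benefits compound faster than the market, donating now wins, and vice versa.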
ghurtado•48m ago
I don't see anything in your comment that directly disagrees with the one that you've replied to.

Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath and, as long as the 10% goes to charity (effectively), they are considered morally righteous.

It's the 21st century version of Papal indulgences.

glenstein•40m ago
I actually think I agree with this, but nevertheless people can refer to EA and mean by it the totality of sociological dynamics surrounding it, including its population of proponents and their histories.

I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.

nxor•58m ago
SBF has entered the chat
AgentME•47m ago
I'm tired of every other discussion about EA online assuming that SBF is representative of the average EA member, instead of being an infamous outlier.
phantasmish•54m ago
I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.

Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. Like, it's so consistent that anyone thinking they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.

jrochkind1•52m ago
What does "virtue ethics" mean?
keiferski•50m ago
One of the three traditional European philosophy approaches to ethics:

https://en.wikipedia.org/wiki/Virtue_ethics

EA being a prime example of consequentialism.

phantasmish•41m ago
… and I tend to think of it as the safest route to doing OK at consequentialism, too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts)

Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.

TimorousBestie•42m ago
The best statement of virtue ethics is contained in Alasdair MacIntyre's _After Virtue_. It's a metaethical foundation that argues that both deontology and utilitarianism are incoherent and have failed to explain what some unitary "the good" is, and that ancient notions of "virtues" (some of which have filtered down to the present day) can capture facets of that good better.

The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.

glenstein•3m ago
Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. For me, my concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not convey an obvious evolutionary advantage.

And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.

glenstein•7m ago
I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.

I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.

You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.

downrightmike•45m ago
It's basically the same thing as the church selling indulgences. It didn't matter if you stole the money; pay the church and go to heaven.
Aunche•45m ago
> It's the perfect philosophy for morally questionable people with a lot of money.

The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.

chaseadam17•1h ago
Man, EA is so close to getting it. They are right that we have a moral obligation to help those in need but they are wrong about how to do it.

Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.

The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.

Just try to help the people around you a bit more. If everyone did that, we'd be good.

keiferski•1h ago
That's the thing though: if EA had said "find 10 people in your life and help them directly," it wouldn't have appealed to the well-off white-collar workers who want to spend money but not actually do anything. The movement became popular because it didn't require one to do anything other than spend money in order to be lauded.
phantasmish•49m ago
Better, it’s a small step to “being a small part of something that’s doing a little evil to a shitload of people (say, working on Google ~scams targeting the vulnerable and spying on everybody~ Ads) is not just OK, but good, as long as I spend a few grand a year buying mosquito nets to prevent malaria, saving a bunch of lives!”

Which obviously has great appeal.

PaulDavisThe1st•46m ago
> Just try to help the people around you a bit more. If everyone did that, we'd be good.

This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.

It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.

Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.

chaseadam17•18m ago
If you live in a wealthy society it's possible to travel or move or get to know people in a different society and offer to help them.
mk12•45m ago
If everyone did that, lots of people would still die of preventable causes in poor countries. I think GiveWell does a good job of identifying areas of greatest need in public health around the world. I would stop trusting them if they turned out to be corrupt or started misdirecting funds to pet projects. I don't think everyone has to donate this way, as it's a very personal decision, nor does it automatically make someone a good person or justify immoral ways of earning money, but I think it's a good thing to help the less fortunate who are far away and speak a different language.
jimbokun•42m ago
What studies can you point to demonstrating your approach is more effective than donating to a GiveWell recommended non profit?
jmyeet•1h ago
I'm leery of any philosophy that is popular in tech circles because they all seem to lead to eugenics, hyperindividualism, ignoring systemic issues, deregulation and whatever the latest incarnation of prosperity gospel is.

Utilitarianism suffers from the same problems it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.

"What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.

I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.

omnimus•48m ago
Effective altruism and transhumanism are kinda the same thing, along with other stuff like longtermism. There is even a name for the whole thing: TESCREAL. Very slightly different positions, invented, I guess, for branding.
matt3D•1h ago
Is there a term for what I had previously understood Effective Altruism to be? I don't want to reference EA in a conversation and have the other person think I'm associated with these sorts of people.

I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50USD/hour, rather than doing a volunteering job that I could pay 25USD/hour to do, I simply do my job and pay for 2 people to volunteer.
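
The arithmetic in the comment above can be sketched as follows; the wage figures come from the comment, while the 10-hour block of time is an arbitrary assumption for illustration.

```python
my_wage = 50         # USD/hour I can earn at my own job
volunteer_wage = 25  # USD/hour to hire someone for the volunteer role

hours_worked = 10    # arbitrary example block of time

# Earn at my job instead of volunteering those hours myself...
earnings = my_wage * hours_worked              # 500 USD

# ...and convert the earnings into paid volunteer labor.
volunteer_hours_funded = earnings / volunteer_wage   # 20 hours

# Each hour I work funds this many hours of volunteer work:
leverage = volunteer_hours_funded / hours_worked
print(leverage)  # 2.0 -- i.e. "pay for 2 people to volunteer"
```

The leverage is simply the ratio of the two wages, which is why the comment's conclusion holds for any number of hours.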

throw4847285•58m ago
That's just called utilitarianism/consequentialism. It's a perfectly respectable ethical framework: not the most popular in academic philosophy, but prominent enough that you have to at least engage with it.

Effective altruism is a political movement, with all the baggage implicit in that.

Vinnl•43m ago
Is there a term for looking at the impact of your donations, rather than process (like percentage spent on "overhead")? I like discussing that, but have the same problem as GP.
nonethewiser•59m ago
Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:

>Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence, careful reasoning, and personal values.

What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations, except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.

Lammy•50m ago
It's a layer above even that: it's a way to justify doing unethical shit to earn obscene amounts of money by convincing themselves (and attempting to convince others) that the ends justify the means, because the entire world will somehow be a better place "if I'm allowed to become Very Rich."

Anyone who has to call themselves altruistic simply isn't lol

pfortuny•43m ago
On one hand, it is an example of the total-order mentality which permeates society, and businesses in general: "there exists a single optimum". That is wrong on so many levels, especially with regard to charities. ETA: the real world has many optima, not a single optimum.

Then it easily becomes a slippery slope of "you are wrong if you are not optimizing".

ETA: it is very harmful to oneself and to society to think that one is obliged to "do the best". The ethical rule is "do good and not bad", no more than that.

Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… "There is an optimum way, hence if you are not doing it, you must be corrected".

ngruhn•41m ago
> I think the initial assumption that that definition is good and harmless is just wrong.

Why? The alternative is to donate to sexy causes that make you feel good:

- disaster relief that gets forgotten once it's not in the news anymore

- school uniforms for children who can't even do their homework because they can't afford lighting at home

- a literal team of full-time bodyguards for the last member of some species

chemotaxis•31m ago
That's a strawman alternative.

The problem with "helping the most people in the most effective way" is these two goals are often at odds with each other.

If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.

The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits, and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.

And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.

throw4847285•54m ago
The fundamental problem is that Effective Altruism is a political movement that spun out of a philosophical one. If you want to talk about the relative strengths and weaknesses of consequentialism, go right ahead. If you want to assume consequentialism is true and discuss specific ethical questions via that framing, power to you.

If you want to form a movement, you now have a movement, with all that entails: leaders, policies, politics, contradictions, internecine struggles, money, money, more money, goals, success at your goals, failure at your goals, etc.

jimbokun•37m ago
> Inspired by Singer, Oxford philosophers Toby Ord and Will MacAskill launched Giving What We Can in 2009, which encouraged members to pledge 10 percent of their incomes to charity.

Congratulations, you rediscovered tithing.