
Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•29s ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•1m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•2m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•2m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•3m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•5m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•5m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•8m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•8m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•9m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•9m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•9m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•10m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•10m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•11m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•12m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•18m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•19m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•19m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
23•bookofjoe•19m ago•8 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•20m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•21m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•22m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•22m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•23m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•23m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•23m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•24m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•25m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•25m ago•0 comments

Effective Altruists Use Threats and Harassment to Silence Their Critics

https://www.realtimetechpocalypse.com/p/how-effective-altruists-use-threats
31•konmok•2mo ago

Comments

Trasmatta•2mo ago
Turns out you can justify all sorts of reprehensible behavior when you convince yourself it's for "the greater good"

They learned the wrong lesson from Death Note

konmok•2mo ago
I find this really frustrating because I like the idea of "make a lot of money, then give most of it away to make the world better for everyone". But it seems like most of the people who proudly call themselves "effective altruists" are just heartless tech bros that toss their money into useless AGI cults.
plastic-enjoyer•2mo ago
EA is a neat philosophy to make greed and fraud seem principled.
themafia•2mo ago
How about just "build a good company and give most of the profits to the workers."

I just saved you several steps and opportunities for graft and corruption. Let's call it "immediate altruism."

konmok•2mo ago
Well, that doesn't really align with my interests, education, personality, or skills[1]. I do appreciate that criticism, but I'm looking for ways to give back that don't require abandoning my chosen career. I think there's a middle ground, basically.

[1]: What I mean is, I don't want to build my own company, and if I did, it would be in a very niche area that wouldn't directly benefit the people that most need help.

themafia•2mo ago
> Well, that doesn't really align with my interests, education, personality, or skills

Ah, well for you, we have "regular altruism." Just pick a charity and send them money or donate your time to volunteer efforts in your community.

> What I mean is

Completely understandable. My point was that being a cut-throat capitalist who treads on your customers and workers to make a pile of money, and then exports some fraction of it into "effective altruism," is probably missing the point of altruism entirely. I think it creates more suffering than it solves.

listenallyall•2mo ago
Why the workers and not, say, the customers? Workers have little risk; they get paid a salary regardless of the company's fortunes (unless the company is so awful it goes out of business). The customers who believed in the company enough to give it money seem more worthy of future compensation (via profit-sharing, per your example).
themafia•2mo ago
> Why the workers and not the customers, let's say?

Workers represent more of an investment in time and training. Therefore they represent long term value. Customers are fickle, as they should be, but if I get beat on prices today they're gone tomorrow.

> customers who believed in the company enough to give them money

You seem to be describing a donor, or possibly a member of a co-op. A customer simply receives an object of value in exchange for their money. As long as they're getting good value on a quality product, their belief in the company is not material.

listenallyall•2mo ago
> Workers represent more of an investment in time and training

Not really. Post a job opening and you'll likely get plenty of applicants, many of them qualified. Taking the time to vet them and choose one is a benefit of having too many options. Getting customers is harder: you have to advertise and market your product; "acquisition cost" is a real thing.

> As long as they're getting a good value on a quality product

But especially early on, how do new customers know the product is quality? Someone has to be the first to eat at a restaurant, or to hire you to paint their house. The same goes for established companies: ordering clothes online when you can't feel the material, picking a dentist when you don't know how they will treat you, letting Uber decide who drives you to the airport, judging how a pair of skis will perform by looking at them on a carpeted floor. Most customer purchases and decisions are made with far-from-perfect information; customers just have to put faith in the seller or service provider. That faith is what I'm suggesting is worth future compensation.

> if I get beat on prices today they're gone tomorrow

If this is the case, you really haven't built much of a business; you're just selling commodities, and your employees have failed at differentiating your company from your competitors.

dfe•2mo ago
This is a time-tested winning strategy that too few corporate owners embrace.

When you look at some of the most well-known industrial companies, their founders basically did this.

The difficulty: if you give away too much of the company while trying to raise capital, most investors won't let you do this. Of course, at that point you aren't really the owner anymore, are you?

I think that's the allure of effective altruism. You founded a company or were early enough in a company to have enough shares to sell to investors. Those investors want big returns. The company is now at their mercy, but hey, they gave you a pile of cash so you can spend it on feeling good.

Arnt•2mo ago
Didn't that book suggest that a single building used 20% of the water in South America? Amazingly sloppy.

I really do think that people should be careful about what they say in public and measure their words. And further, I think that the author of that book ought to be silent on that particular subject.

rendx•2mo ago
Interesting how you seem to see nothing inherently wrong in the provided quotes that call for violence against people of a different opinion, but chose to critique only the person who admitted a mistake without aggression toward anyone else, and to demand they be (forever?) silent about a topic they seem interested in.

Why would you ever want to demand that someone "stay silent" about anything? Taking away somebody's voice is the lowest of the low. You don't have to read it or interact with it if you don't like it. And how would you want to be treated when you make a mistake? Can't you see how that leads straight to a world of zero progress, where people are afraid to do anything because it could turn out to be a mistake and they will be shunned for it by those who happen to have the most power? Are you not aware of the research into how bad punishment is for learning and the advancement of society?

Williams, K. D., & Nida, S. A. (2022). Ostracism and social exclusion: Implications for separation, social isolation, and loss. Current Opinion in Psychology, 47, 101353. https://doi.org/10.1016/j.copsyc.2022.101353

Knapton, H. M. (2014). The recruitment and radicalisation of Western citizens: Does ostracism have a role in homegrown terrorism? Journal of European Psychology Students, 5(1), 38-48. https://doi.org/10.5334/jeps.bo

konmok•2mo ago
Your comment kinda proves the article's point, don't you think? I mean, obviously your comment doesn't constitute a threat or harassment, but it does demonstrate the weird double standard and unbalanced scrutiny that the article describes.
Arnt•2mo ago
No double standard.

On one hand, I think that people should check before publication and not publish shit. That goes for posting on the internet, and also about publishing books.

Separately and orthogonally, I think that someone who doesn't check before publication and publishes shit should refrain from complaining about other people's shit, even though other people's shit really is shit.

konmok•2mo ago
Sure, fine. I'm just highlighting that you chose to call out Karen Hao for her mistake (which she admitted and corrected), but not Will MacAskill or any of the other big EA names that have made egregious and dishonest claims. If it's not a double standard, Will also ought to be silent on this subject, right?

That's what I mean by unbalanced scrutiny.

Arnt•2mo ago
IMNSHO she didn't admit and correct the mistake — yet. She admitted one mistake and has corrected none, and the one she admitted was IMNSHO not the severe one.

She made several mistakes, of which I'll describe two. One (modestly serious) was to confuse units and compute the wrong number. The second (against my religion) was to publish without sanity-checking. You and I both know she didn't check, because her estimate for the average water use of one building was 20% of the water use of the continent. Any sort of check would uncover that mistake.

We in the rational camp are supposed to behave differently from Alex Jones, and part of that is to check before we publish.

She's "making arrangements with her editor to rectify the situation". If she fixes every reported error, not just one of them, I'll have a lot of respect for her.

konmok•2mo ago
You dodged the question. That's not very rational of you :)
Arnt•2mo ago
Was the question about Will whatshisname? I'd rather not mention his possible transgressions, since I hadn't even heard the name until this thread.

Post a story about him to HN and I'll either comment or miss the thread, both are possible.

imtringued•2mo ago
> I’m referring to his claim that “it’s hard to see how” 7 to 10 degrees C of global warming “could lead directly to civilisational collapse.” He proceeds to assert that, while “climatic instability is generally bad for agriculture,” his “best guess” is that “even with fifteen degrees of warming, the heat would not pass lethal limits for crops in most regions.”

Do these people not understand that crops need water? Higher temperatures mean higher evaporation rates. Vast swathes of Iran have become inhospitable due to water mismanagement. That will lead to millions of refugees fleeing the country. Climate change is like poverty in this respect. If you're poor in water, you can't afford to make any mistakes.

Longtermism is a curse to long term thinking. You're not allowed to think about the next ten thousand years of humanity, because apparently that's too short of a window.

Not just that. This type of thinking contradicts optimal control theory. Your model needs to produce an uninterrupted chain from the present to the future. Longtermism chops off the present, which means the initial state lies in the future. You end up with an unknown initial state, to which the Longtermists respond with hacks: they add a minimal set of constraints back. That minimal set is the avoidance of extinction, which is to say they are fine with almost everything else.

Based on that logic, you'd think that Longtermists would be primarily concerned with colonizing planets in the solar system and building resilient ecosystems on earth so that they can be replicated on other planets or in space colonies, but you see no such thing. Instead they got their brains fried by the possibility of runaway AI [0] and the earth is treated as a disposable consumable to be thrown away.

[0] The AI they worry about is extremely narrow. Tesla doors that can't be opened in an emergency due to battery loss don't count as runaway AI, but if you had to beg the Tesla car AI to open the door and the AI refused, that would be worthy of AI safety research. However, they wouldn't see the problem in the inappropriate use of AI where it shouldn't be used in the first place.

yongjik•2mo ago
This blog post would have been better without the long intro featuring Timnit Gebru. It reads like a boring "someone finds a mistake in a book, someone else quotes it sarcastically, a bunch of others call it out with 'bro why so butthurt'" story. You'll find better stories in r/subredditdrama.

As it reads now, I'm not sure whether this is an objective critique of EA or the gripes of someone who orbited the same social space and had a public falling-out.