frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
258•theblazehen•2d ago•86 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
27•AlexeyBrin•1h ago•3 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
707•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
969•xnx•21h ago•558 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
70•jesperordrup•6h ago•31 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•48m ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
135•matheusalmeida•2d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
45•speckx•4d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
68•videotopia•4d ago•7 comments

Welcome to the Room – A lesson in leadership by Satya Nadella

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
39•kaonwarb•3d ago•30 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
13•matt_d•3d ago•2 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
45•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
240•isitcontent•16h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
238•dmpetrov•16h ago•127 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
340•vecti•18h ago•150 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
506•todsacerdoti•23h ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
390•ostacke•22h ago•98 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
304•eljojo•18h ago•188 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•186 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
428•lstoll•22h ago•284 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
3•andmarios•4d ago•1 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
71•kmm•5d ago•10 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
24•bikenaga•3d ago•11 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
26•1vuio0pswjnm7•2h ago•16 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
271•i5heu•18h ago•219 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
34•romes•4d ago•3 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1079•cdrnsf•1d ago•462 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•30 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
306•surprisetalk•3d ago•44 comments

AI is Anti-Human (and assorted qualifications)

https://njump.me/naddr1qqxnzde58yerxv3exycrsdpjqgsf03c2gsmx5ef4c9zmxvlew04gdh7u94afnknp33qvv3c94kvwxgsrqsqqqa28nmz2vk
34•fiatjaf•7mo ago

Comments

tines•7mo ago
Brilliantly written. Another of my favorite works on these ideas is Technopoly by Neil Postman. Absolutely a must-read.
AlexeyBrin•7mo ago

     AI work is the same kind of thing as an AI girlfriend, because work is not only for the creation of value (although that's an essential part of it), but also for the exercise of human agency in the world. In other words, tools must be tools, not masters.
Whether or not you agree with the author, the article is worth your time.
somewhereoutth•7mo ago
> not doing [AI], either on an individual or a collective level, is just not an option.

I would have liked a deeper exploration of this point, since not doing it would indeed fix all the issues raised.

delichon•7mo ago
It's already too entrenched to revert without a broad anti-AI consensus that doesn't exist. I can imagine the body politic suddenly seeing the light, but that's fantasy. You could as easily have rolled back smartphones in 2009 based on vague warnings of future social damage.
somewhereoutth•7mo ago
I don't see that - I do see failed projects, negative returns, and general disillusionment.

The only thing propping it up is the sunk-cost fallacy, now on a biblical scale, requiring ever more hype to cover the rising disappointment (ChatGPT 5 when??). The blowback from the inevitable collapse will be similarly biblical in proportion; a wise strategy would be to avoid it all (as an individual and/or collectively at some level).

In fairness, there are, or will be, some areas where LLMs have real business benefit, but those areas will likely be limited, and the 'AI stuff' will be hidden from the user.

robbiewxyz•7mo ago
Ultimately, the attitude of self-restraint called for in the article seems near-impossible: modern capitalism puts companies in a desperate race for dominance, and modern foreign policy puts countries in the same race. From the penultimate paragraph:

"I think in all of this is implicit the idea of technological determinism, that productivity is power, and if you don't adapt you die. I reject this as an artifact of darwinism and materialism. The world is far more complex and full of grace than we think."

This argument is the one that either makes or breaks the article's feasibility, and I fear the author is too optimistic.

What force of nature is it that can possibly hold its own against darwinism?

seabombs•7mo ago
Great article.

> Don't underestimate the value to your soul of good work.

This in particular resonated with me.

My concern is not really that AI will take over my job and replace me (if it is genuinely better at my job than I am, I think I would quite happily give it up and find something else to do). My concern is that AI will partially take over my job, doing the parts that I enjoy (creativity, thinking, learning) and leaving me the mundane aspects.

solid_fuel•7mo ago
I keep finding myself contemplating the proscription of AI in the universe of Dune. While the later prequels by Brian Herbert tell a fairly typical backstory of an AI rebellion and war, the original novels hint at something - IMO - far more interesting: AI was used by other people to manipulate and control; it didn't take control directly. People rebelled over the sheer amount of power that AI and computers enabled a few people to wield over society, and over the way it turned human life into that of a cog in a machine, leading regimented and structured lives.

To wit: while ChatGPT and Gemini and most of these current models are fairly well behaved, when you use a model even for something seemingly innocuous like summarizing an article or an email, you are indirectly allowing another person to decide what is important to you. Consider the power that gives other people over you. It pays to put on the devil's cap sometimes and imagine a future where the LLMs that power our tools are controlled by people who don't exercise any restraint.

We have already seen shades of this with the amount of influence Facebook and TikTok and Twitter can wield over political discourse, picking which issues are winners and which are losers by choosing (even indirectly, by simply reacting to engagement metrics) what to emphasize and what to suppress. LLMs unlock another level entirely. An LLM can easily summarize an article while conveniently leaving out any negative mentions of certain politicians or parties. They can summarize emails and texts from family and friends while eliding any section asking for help or action.

While most people are somewhat distrustful of obviously biased sources, they don't regard LLMs with the same suspicion. An LLM could easily write a summary of Alan Turing's life while dropping all the bits about his sexuality and the way he was persecuted and castrated by the British government simply for trying to love.

I am not alleging that any of these things have happened, yet. But it is best to think of LLMs and generative AI in general as a tool that works _for someone else_. They can be very useful tools, but they can also be subverted and manipulated in subtle ways, and should not automatically be regarded as unbiased.

tim333•7mo ago
In general they seem less biased than human-produced web information, because the LLMs so far basically download the whole internet and literature, while human writers have selective biases. I find it quite funny that if you asked Grok who the biggest spreader of misinformation on X was, it would say Musk. When people try to bias them, for instance by inserting instructions that all art must depict mixed-race people, you get ridiculous results like Gemini's Black Nazis. Not sure it will continue like that, but so far not too bad.
solid_fuel•7mo ago
Biases inserted into the prompts are a very crude way to bias an LLM, and indeed don't work well. They result in very visible deviations, like when Grok decided to inject "white genocide" into all sorts of unrelated topics.

The real danger of model biasing, though, lies in the training stages: being selective in the source material you train on. With carefully constructed training data, I am certain you could even bias an LLM to actively steer conversations away from topics you want to avoid.
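To make the mechanism concrete, here is a toy sketch of corpus curation. Everything in it (the documents, the topic labels, the blocked-topic set) is invented for illustration; real training pipelines are vastly larger, but the principle is the same: a model trained on the filtered output simply never sees the excluded subject.

```python
# Toy sketch of biasing-by-curation: drop every document that touches
# a topic the curator wants the model to avoid. All data is hypothetical.

BLOCKED_TOPICS = {"topic_x"}  # subjects the curator wants erased

corpus = [
    {"text": "neutral article about cooking", "topics": {"cooking"}},
    {"text": "critical report on topic_x", "topics": {"topic_x", "politics"}},
    {"text": "favorable piece on topic_x", "topics": {"topic_x"}},
]

def curate(docs, blocked):
    """Keep only documents whose topic set is disjoint from the blocked set."""
    return [d for d in docs if not (d["topics"] & blocked)]

filtered = curate(corpus, BLOCKED_TOPICS)
print(len(filtered))  # only the cooking article survives
```

Note that nothing here looks like censorship at inference time: there is no system prompt to leak and no refusal to observe, which is what makes this form of bias so hard to detect from the outside.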