frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
58•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
637•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
935•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•31 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
479•todsacerdoti•21h ago•237 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
279•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

AI is Anti-Human (and assorted qualifications)

https://njump.me/naddr1qqxnzde58yerxv3exycrsdpjqgsf03c2gsmx5ef4c9zmxvlew04gdh7u94afnknp33qvv3c94kvwxgsrqsqqqa28nmz2vk
34•fiatjaf•7mo ago

Comments

tines•7mo ago
Brilliantly written. Another of my favorite works on these ideas is Technopoly by Neil Postman. Absolutely a must-read.
AlexeyBrin•7mo ago

     AI work is the same kind of thing as an AI girlfriend, because work is not only for the creation of value (although that's an essential part of it), but also for the exercise of human agency in the world. In other words, tools must be tools, not masters.
Whether or not you agree with the author, the article is worth your time.
somewhereoutth•7mo ago
> not doing [AI], either on an individual or a collective level, is just not an option.

I would have liked a deeper exploration of this point, since not doing it does indeed fix all the issues raised.

delichon•7mo ago
It's already too entrenched to revert without a broad anti-AI consensus that doesn't exist. I can imagine the body politic suddenly seeing the light, but that's fantasy. You could as easily have rolled back smartphones in 2009 based on vague warnings of future social damage.
somewhereoutth•7mo ago
I don't see that - I do see failed projects, negative returns, and general disillusionment.

The only thing propping it up is the sunk cost fallacy, now on a biblical scale, requiring ever more hype to cover the rising disappointment (ChatGPT 5 when??). The blowback from the inevitable collapse will be similarly biblical in proportion; a wise strategy would be to avoid it all (as an individual and/or collectively at some level).

In fairness, there are (or will be) some areas where LLMs have business benefit, but those areas will likely be limited, and the 'AI stuff' will be hidden from the user.

robbiewxyz•7mo ago
Ultimately the attitude of self-restraint called for in the article seems near-impossible: modern capitalism puts companies in a desperate race for dominance, and modern foreign policy puts countries in the same. From the penultimate paragraph:

"I think in all of this is implicit the idea of technological determinism, that productivity is power, and if you don't adapt you die. I reject this as an artifact of darwinism and materialism. The world is far more complex and full of grace than we think."

This argument is the one that either makes or breaks the article's feasibility, and I fear the author is too optimistic.

What force of nature is it that can possibly hold its own against darwinism?

seabombs•7mo ago
Great article.

> Don't underestimate the value to your soul of good work.

This in particular resonated with me.

My concern is not really that AI will take over my job and replace me (if it is genuinely better at my job than I am, I think I would quite happily give it up and find something else to do). My concern is that AI will partially take over my job, doing the parts that I enjoy (creativity, thinking, learning) and leave to me the mundane aspects.

solid_fuel•7mo ago
I keep finding myself contemplating the proscription of AI in the universe of Dune. While the later prequels by Brian Herbert tell a fairly typical backstory of an AI rebellion and war, the original novels hint at something - IMO - far more interesting: AI was used by some people to manipulate and control others; it didn't take control directly. People rebelled over the sheer power that AI and computers enabled a few people to wield over society, and over the way it turned human life into that of a cog in a machine, regimented and structured.

To wit: while ChatGPT and Gemini and most of these current models are fairly well behaved, when you use a model even for something seemingly innocuous, like summarizing an article or an email, you are indirectly allowing another person to decide what is important to you. Consider the power that gives other people over you. It pays to put on the devil's cap sometimes and imagine a future where the LLMs that power our tools are controlled by people who don't exercise any restraint.

We have already seen shades of this with the amount of influence Facebook and TikTok and Twitter can wield over political discourse, picking which issues are winners and which are losers by choosing (even indirectly, by simply reacting to engagement metrics) what to emphasize and what to suppress. LLMs unlock another level entirely. An LLM can easily summarize an article while conveniently leaving out any negative mentions of certain politicians or parties. They can summarize emails and texts from family and friends while eliding any section asking for help or action.

While most people are somewhat distrustful of obviously biased sources, they don't regard LLMs with the same suspicion. An LLM could easily write a summary of Alan Turing's life while dropping all the bits about his sexuality and the way he was persecuted and castrated by the British government simply for trying to love.

I am not alleging that any of these things have happened, yet. But it is best to think of LLMs and generative AI in general as a tool that works _for someone else_. They can be very useful tools, but they can also be subverted and manipulated in subtle ways, and should not automatically be regarded as unbiased.

tim333•7mo ago
In general they seem less biased than human-produced web information, because LLMs so far basically download the whole internet and literature, while human writers have selective biases. I find it quite funny that if you asked Grok who the biggest spreader of misinformation on X was, it would say Musk. When people try to bias them, like adding instructions that all art must depict mixed-race people, you get ridiculous results like Gemini's black Nazis. Not sure it will continue like that, but so far not too bad.
solid_fuel•7mo ago
Biases inserted into the prompts are a very crude way to bias an LLM, and indeed don't work well. They result in very visible deviations, like when Grok decided to inject "white genocide" into all sorts of unrelated topics.

The real danger of model biasing, though, is in the training stages: being selective in the source material you train against. With carefully constructed training data, I am certain you could even bias an LLM to actively steer conversations away from topics you want to avoid.
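To make the point concrete, here is a minimal, purely illustrative sketch of that curation step. All names and documents are hypothetical; the point is only that dropping documents before training leaves no trace in the prompt, so no amount of prompt inspection would reveal the bias.

```python
# Hypothetical sketch: selective curation of a training corpus.
# A model trained only on the surviving documents never "sees"
# the excluded perspective at all.

def curate(corpus, blocked_phrases):
    """Drop any document containing a blocked phrase (case-insensitive)."""
    return [
        doc for doc in corpus
        if not any(p in doc.lower() for p in blocked_phrases)
    ]

corpus = [
    "Candidate A praised for infrastructure plan.",
    "Candidate A criticized over budget shortfall.",
    "Candidate B criticized over budget shortfall.",
]

# Silently remove all critical coverage of Candidate A.
filtered = curate(corpus, blocked_phrases=["candidate a criticized"])
for doc in filtered:
    print(doc)
```

The filter itself is trivial; the danger the comment describes is that, unlike a prompt instruction, this kind of upstream selection is invisible to users of the finished model.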