frontpage.

Terence Tao, at 8 years old (1984) [pdf]

https://gwern.net/doc/iq/high/smpy/1984-clements.pdf
253•gurjeet•18h ago•114 comments

Firefox 148 Launches with AI Kill Switch Feature and More Enhancements

https://serverhost.com/blog/firefox-148-launches-with-exciting-ai-kill-switch-feature-and-more-en...
232•shaunpud•4h ago•187 comments

Show HN: enveil – hide your .env secrets from prAIng eyes

https://github.com/GreatScott/enveil
74•parkaboy•4h ago•38 comments

Diode – Build, program, and simulate hardware

https://www.withdiode.com/
43•rossant•3d ago•8 comments

I Ported Coreboot to the ThinkPad X270

https://dork.dev/posts/2026-02-20-ported-coreboot/
191•todsacerdoti•10h ago•30 comments

Blood test boosts Alzheimer's diagnosis accuracy to 94.5%, clinical study shows

https://medicalxpress.com/news/2026-02-blood-boosts-alzheimer-diagnosis-accuracy.html
251•wglb•6h ago•94 comments

The Age Verification Trap: Verifying age undermines everyone's data protection

https://spectrum.ieee.org/age-verification
1448•oldnetguy•19h ago•1108 comments

Show HN: X86CSS – An x86 CPU emulator written in CSS

https://lyra.horse/x86css/
121•rebane2001•7h ago•46 comments

Show HN: Steerling-8B, a language model that can explain any token it generates

https://www.guidelabs.ai/post/steerling-8b-base-model-release/
159•adebayoj•9h ago•39 comments

Baby chicks pass the bouba-kiki test, challenging a theory of language evolution

https://www.scientificamerican.com/article/baby-chicks-pass-the-bouba-kiki-test-challenging-a-the...
76•beardyw•4d ago•18 comments

Making Wolfram Tech Available as a Foundation Tool for LLM Systems

https://writings.stephenwolfram.com/2026/02/making-wolfram-tech-available-as-a-foundation-tool-fo...
168•surprisetalk•11h ago•86 comments

Unsung heroes: Flickr's URLs scheme

https://unsung.aresluna.org/unsung-heroes-flickrs-urls-scheme/
64•onli•2d ago•20 comments

UNIX99, a UNIX-like OS for the TI-99/4A (2025)

https://forums.atariage.com/topic/380883-unix99-a-unix-like-os-for-the-ti-994a/
175•marcodiego•13h ago•53 comments

“Car Wash” test with 53 models

https://opper.ai/blog/car-wash-test
225•felix089•13h ago•265 comments

Intel XeSS 3: expanded support for Core Ultra/Core Ultra 2 and Arc A, B series

https://www.intel.com/content/www/us/en/download/785597/intel-arc-graphics-windows.html
30•nateb2022•5h ago•19 comments

A simple web we own

https://rsdoiel.github.io/blog/2026/02/21/a_simple_web_we_own.html
238•speckx•18h ago•156 comments

Genetic underpinnings of chills from art and music

https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1012002
26•coloneltcb•1d ago•6 comments

Show HN: PgDog – Scale Postgres without changing the app

https://github.com/pgdogdev/pgdog
261•levkk•18h ago•51 comments

Ladybird adopts Rust, with help from AI

https://ladybird.org/posts/adopting-rust/
1172•adius•22h ago•647 comments

What it means that Ubuntu is using Rust

https://smallcultfollowing.com/babysteps/blog/2026/02/23/ubuntu-rustnation/
143•zdw•16h ago•162 comments

Typed Assembly Language (2000)

https://www.cs.cornell.edu/talc/
36•luu•3d ago•14 comments

FreeBSD doesn't have Wi-Fi driver for my old MacBook, so AI built one for me

https://vladimir.varank.in/notes/2026/02/freebsd-brcmfmac/
363•varankinv•12h ago•295 comments

Show HN: Cellarium: A Playground for Cellular Automata

https://github.com/andrewosh/cellarium
12•andrewosh•3d ago•0 comments

AI-generated replies really are a scourge these days

https://twitter.com/simonw/status/2025909963445707171
7•da_grift_shift•24m ago•4 comments

Show HN: Babyshark – Wireshark made easy (terminal UI for PCAPs)

https://github.com/vignesh07/babyshark
117•eigen-vector•13h ago•43 comments

Writing code is cheap now

https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/
176•swolpers•16h ago•238 comments

Shatner is making an album with 35 metal icons

https://www.guitarworld.com/artists/guitarists/william-shatner-announces-all-star-metal-album
183•mhb•9h ago•81 comments

Graph Topology and Battle Royale Mechanics

https://blog.lukesalamone.com/posts/beam-search-graph-pruning/
4•salamo•2d ago•0 comments

SIM (YC X25) Is Hiring the Best Engineers in San Francisco

https://www.ycombinator.com/companies/sim/jobs/Rj8TVRM-software-engineer-platform
1•waleedlatif1•13h ago

Iowa farmers are leading the fight for repair

https://www.ifixit.com/News/115722/iowa-farmers-are-leading-the-fight-for-repair
128•gnabgib•8h ago•35 comments

The Engine Behind the Hype

https://www.onuruzunismail.com/blog/the-engine-behind-the-hype
29•tosh•2h ago

Comments

gas9S9zw3P9c•1h ago
How is this highly upvoted and on the front page? It's so clearly at least 50% AI written slop, probably closer to 95%. Wow HN these days... this site is dying, completely overrun by bots.
7777332215•1h ago
Unfortunately it does seem to be getting overrun by bots. What's the solution? Just read curated lists of blogs directly?
gas9S9zw3P9c•1h ago
I don't know either what the solution is other than human verification, but nobody wants that. Perhaps the times of semi-anonymous online communities are over and the best you can do now is follow real people you trust that can filter content for you.
7777332215•1h ago
Even with human verification, people are going to verify and then conduct bot activity. Or worse, use other people's identities to verify, then spam.
linkregister•1h ago
I acknowledge it is a skill issue, but I can barely tell. Do you use a checklist to determine if something has been written by an LLM?
gas9S9zw3P9c•1h ago
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing is a good starting point. You start seeing patterns at some point.

But more important than the writing style, there is no interesting content here. It's all generic statements and platitudes with a bunch of generated links.

swordsith•1h ago
I use an ML model I trained on AI and human text; it flags this mostly as Claude. The article even references Claude.
H8crilA•30m ago
Because it contains information. Content > form; that's actually one of the cornerstones of hacker culture.
grey-area•1h ago
"The lobster gets the attention. The engine gets quietly forked into production."

Is there any way to stop LLMs generating text like this?

Is it really better than just writing it yourself? I guess generating blog posts is lower-effort and thus wins in this attention economy people think they are competing in.

The actual engine is here: https://github.com/badlogic/pi-mono

An interesting idea to have a bit more control over what your 'agent' is doing and keep it simple. Some of the prompts do give me pause though, why do we talk to text generators as if they are people, have we found this works best, or is it a sort of cargo-cult?

https://github.com/badlogic/pi-mono/blob/main/.pi/prompts/is...

I love that he's telling his tools not to trust people in his comments here!

swordsith•1h ago
It looks like Claude signed this at the end of the article for you: 'ai, tools, opinion' lol
7777777phil•1h ago
The token comparison is what jumped out at me. Half the context window for the same work means mainstream tools are burning your tokens on system prompts and MCP plumbing before you even start.

I wrote earlier why the agent stack is splitting into specialized layers, and this is a good example of what drives it. Monolithic tools waste the most on their own overhead. https://philippdubach.com/posts/dont-go-monolithic-the-agent...

linkregister•1h ago
Having only heard of Pi in passing, I derived value from this article. I plan to experiment with it as a replacement for claude-code and gemini-cli.

[meta] I frequently see criticism about an article having been obviously written by an LLM. Often the author apologizes for it in the HN comments. I wonder what is wrong with me that I am totally unaware of this LLM stench.

I have gotten a lot of value from hearing people criticize candidates' LLM usage in technical interviews and conversations. I adjusted my style from talking about axioms and best practices. Instead I always relate a personal anecdote to explain a technical decision. This has been universally well-received.

So I am hoping that someone can respond with some helpful holistic answers beyond a checklist of "uses em-dashes" and "says 'not X, but Y'". I suspect my writing style could be easily declared as having been written by an LLM.

grey-area•56m ago
I think the main objections are the effort mismatch between writing and reading, and the likely low informational content and errors in the text, which the author may or may not have read. Some of this like the bit I quoted below is pretty nonsensical, comparing lobsters and quiet engines.

The writing definitely has a stench and is full of breathless comparisons which pretend some very minor thing is a breakthrough. This is annoying and trite and people dislike it for that alone but also for the more important reasons above.

This blog post could have been a lot shorter. I’d honestly rather just read the prompt with a link to pi. People like this author should just publish their prompt IMO and they will continue to be called out on it till this bubble pops.

abhikul0•17m ago
Pi works great for local models in my short testing. I wanted to try out Skills and see how small models handle these agentic tools for small jobs, mainly [browser-use](https://github.com/browser-use/browser-use).

I tried Mistral Vibe, Codex, Opencode, and Claude with gpt-oss:20b, ministral 3b/8b, Nemotron3 nano 30b, and GLM 4.6V, finally settling on gpt for its impressive pass rates. All the other tools inject up to around 7-10k tokens with the initial prompt, while pi takes up ~1.5k. This works out to be quite usable on my M3 Pro machine, which can take a while when processing the huge initial prompts from other CLIs.

While I'm not doing any serious work, and the other tools could probably be tweaked to use a simpler system prompt, pi felt quick, and the LLMs used the tool calls correctly without being confused by the huge prompts being dumped on them.
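The overhead gap described in this thread can be illustrated with a toy sketch. This is not a measurement of the actual tools: the prompt strings are made-up stand-ins, and the ~4-characters-per-token rule is only a rough heuristic for English text. It just shows the kind of back-of-the-envelope comparison being made (a ~1.5k-token initial prompt vs. a 7-10k-token one).

```python
# Toy comparison of initial-prompt token overhead between a minimal
# agent CLI and a "monolithic" one. The prompt texts are placeholders,
# not the real system prompts of any tool mentioned above.

def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English."""
    return max(1, len(text) // 4)

# Hypothetical initial payloads (stand-ins only).
minimal_prompt = "You are a coding agent. Tools: read, write, bash. " * 30
monolithic_prompt = minimal_prompt * 6  # plus tool schemas, MCP plumbing, etc.

minimal = approx_tokens(minimal_prompt)
monolithic = approx_tokens(monolithic_prompt)

# Every token of initial overhead is context window and prefill time
# spent before the model sees your first instruction.
print(f"minimal: ~{minimal} tokens, monolithic: ~{monolithic} tokens "
      f"({monolithic / minimal:.1f}x overhead)")
```

On a local model, that multiplier shows up directly as prefill latency, which matches the observation that pi "felt quick" on an M3 Pro while the bigger CLIs stalled on their initial prompts.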