
Claude Code: Channels

https://code.claude.com/docs/en/channels
99•jasonjmcghee•1h ago•42 comments

Wayland set the Linux Desktop back by 10 years

https://omar.yt/posts/wayland-set-the-linux-desktop-back-by-10-years
90•omarroth•1h ago•46 comments

Astral to Join OpenAI

https://astral.sh/blog/openai
1208•ibraheemdev•12h ago•749 comments

Google details new 24-hour process to sideload unverified Android apps

https://arstechnica.com/gadgets/2026/03/google-details-new-24-hour-process-to-sideload-unverified...
486•0xedb•8h ago•569 comments

Cockpit is a web-based graphical interface for servers

https://github.com/cockpit-project/cockpit
164•modinfo•4h ago•102 comments

How the Turner twins are mythbusting modern technical apparel

https://www.carryology.com/insights/how-the-turner-twins-are-mythbusting-modern-gear/
113•greedo•2d ago•58 comments

Return of the Obra Dinn: spherical mapped dithering for a 1bpp first-person game

https://forums.tigsource.com/index.php?topic=40832.msg1363742#msg1363742
236•PaulHoule•3d ago•34 comments

Bombarding gamblers with offers greatly increases betting and gambling harm

https://www.bristol.ac.uk/news/2026/march/bombarding-gamblers-with-offers-greatly-increases-betti...
61•hhs•2h ago•60 comments

Show HN: Three new Kitten TTS models – smallest less than 25MB

https://github.com/KittenML/KittenTTS
313•rohan_joshi•9h ago•106 comments

Noq: n0's new QUIC implementation in Rust

https://www.iroh.computer/blog/noq-announcement
142•od0•7h ago•19 comments

Drugwars for the TI-82/83/83 Calculators

https://gist.github.com/mattmanning/1002653/b7a1e88479a10eaae3bd5298b8b2c86e16fb4404
6•robotnikman•57m ago•2 comments

The Day I Discovered Type Design

https://www.marksimonson.com/notebook/view/the-day-i-discovered-type-design/
25•ingve•2h ago•2 comments

4Chan mocks £520k fine for UK online safety breaches

https://www.bbc.com/news/articles/c624330lg1ko
262•mosura•10h ago•415 comments

EsoLang-Bench: Evaluating Genuine Reasoning in LLMs via Esoteric Languages

https://esolang-bench.vercel.app/
56•matt_d•4h ago•27 comments

How many branches can your CPU predict?

https://lemire.me/blog/2026/03/18/how-many-branches-can-your-cpu-predict/
15•chmaynard•1d ago•28 comments

Waymo Safety Impact

https://waymo.com/safety/impact/
211•xnx•5h ago•208 comments

Be intentional about how AI changes your codebase

https://aicode.swerdlow.dev
64•benswerd•4h ago•26 comments

“Your frustration is the product”

https://daringfireball.net/2026/03/your_frustration_is_the_product
426•llm_nerd•13h ago•248 comments

NanoGPT Slowrun: 10x Data Efficiency with Infinite Compute

https://qlabs.sh/10x
98•sdpmas•6h ago•16 comments

From Oscilloscope to Wireshark: A UDP Story (2022)

https://www.mattkeeter.com/blog/2022-08-11-udp/
76•ofrzeta•6h ago•17 comments

Juggalo makeup blocks facial recognition technology (2019)

https://consequence.net/2019/07/juggalo-makeup-facial-recognition/
229•speckx•12h ago•141 comments

Clockwise acquired by Salesforce and shutting down next week

https://www.getclockwise.com
68•nigelgutzmann•5h ago•45 comments

Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster

https://blog.skypilot.co/scaling-autoresearch/
120•hopechong•8h ago•58 comments

OpenBSD: PF queues break the 4 Gbps barrier

https://undeadly.org/cgi?action=article;sid=20260319125859
177•defrost•11h ago•55 comments

Launch HN: Voltair (YC W26) – Drone and charging network for power utilities

48•wweissbluth•8h ago•23 comments

My Random Forest Was Mostly Learning Time-to-Expiry Noise

https://illya.sh/threads/out-of-sample-permutation-feature-importance-for-random
13•iluxonchik•3d ago•3 comments

An update on Steam / GOG changes for OpenTTD

https://www.openttd.org/news/2026/03/19/steam-changes-update
267•jandeboevrie•7h ago•185 comments

I turned Markdown into a protocol for generative UI

https://fabian-kuebler.com/posts/markdown-agentic-ui/
79•FabianCarbonara•11h ago•38 comments

The Shape of Inequalities

https://www.andreinc.net/2026/03/16/the-shape-of-inequalities/
95•nomemory•10h ago•14 comments

macOS 26 breaks custom DNS settings including .internal

https://gist.github.com/adamamyl/81b78eced40feae50eae7c4f3bec1f5a
319•adamamyl•10h ago•165 comments

Minecraft Source Code Is Interesting

https://www.karanjanthe.me/posts/minecraft-source/
25•KMJ-007•2h ago

Comments

slopinthebag•1h ago
Once again, a promising article is completely ruined by blatant AI-isms. I could only make it to the end of the pointer section before I couldn't take it anymore.

There is a real crisis of AI slop getting posted to this forum. I don't even bother reading posted articles related to AI anymore, but now it's seemingly extending to everything.

wvenable•1h ago
I didn't notice until "this turns a lighting update from “noticeable stutter” into “instant.”"

slopinthebag•1h ago
"This means reading light data requires zero locks. No mutex, no spinlock, nothing." threw up red flags, and by the time I got to "But here’s the insight" I couldn't go any further.

user3939382•1h ago
I’ve been trying to put my finger on what gives it away. It’s that there are boolean trees underneath each text decision it makes. While humans are obviously capable of that, our conclusions and framing are more continuous. This is why, for example, you see LLMs constantly defining things by what they’re not.

dvt•1h ago
LLMs are trained to be precise (and more specifically: semantically precise), especially in the fine-tuning phase. An LLM trained just on the full corpus of human production would surely sound more "human," but it would also probably be pretty useless. So that's why idioms like "it's not X, it's Y" are a dead giveaway; but really, any structure that tries to "guide" our salience is a dead giveaway. Here's a random paragraph from Knuth's Literate Programming†[1]:

> For example, a system program is often designed to gather statistics about its own operation, but such statistics-gathering is pointless unless someone is actually going to use the results. In order to make the instrumentation code optional, I include the word ‘stat’ just before any special code for statistics, and ‘tats’ just after such code; and I tell WEAVE to regard stat and tats as if they were begin and end. But stat and tats are actually simple macros.

I encourage you to read that paragraph a few times. Even if you have no idea what the context is, you get that there's a point, that there's something else to dig into, that the author might be being a bit cheeky. In other words, you can feel Knuth behind the ink. Philosophers would call this intentionality[2]. LLMs produce the polar opposite of garden path sentences[3] (and, imo, that's why they're so easy to spot).

† I specifically picked something technical to illustrate that even in domains where semantic precision is of utmost importance, human expression is still just that: human.

[1] https://www.cs.tufts.edu/~nr/cs257/archive/literate-programm...

[2] https://plato.stanford.edu/archives/fall2014/entries/intenti...

[3] https://www.sjsu.edu/writingcenter/docs/handouts/Garden%20Pa...

slopinthebag•11m ago
I fed ChatGPT part of the introductory paragraph of that paper and instructed it to continue. It gave me this:

> In this light, the activity of programming becomes less an act of issuing commands and more an act of communication. The computer is, after all, an obedient but uncomprehending servant; it will execute whatever precise instructions we provide. But our colleagues, our future selves, and the broader community of readers are not so easily satisfied. They demand clarity, intention, and narrative. A program, then, should be structured not merely for execution, but for reading—its logic unfolding in a manner that mirrors the way one might naturally explain the solution to another person.

> This shift in perspective has practical consequences. When we write with exposition in mind, we are compelled to confront ambiguities that might otherwise remain hidden. Vague assumptions must be made explicit; convoluted steps must be reorganized into simpler, more digestible ideas. The discipline of explaining a program often leads to improvements in the program itself, since confusion in the prose is frequently a symptom of confusion in the underlying design.

Fascinating technology. I would not be able to immediately tell this was AI generated. So these models can in some cases produce text that doesn't immediately set off alarm bells. As an avid reader and writer I'm not really sure what to make of it. I don't want to consume AI-generated art or literature because it's completely beside the point, but in the future will we even be able to tell? How do we even know if anyone around us is real? Could they just be sufficiently advanced LLMs, fooling us? Am I the only human in the matrix?

softskunk•1h ago
i would genuinely rather read the rough draft before it got turned into this slop. it would be messier, maybe, but it’d have actual human insight and direction.

tills13•59m ago
such a shame too because I'm genuinely interested but like I cannot bring myself to care about AI-generated content slop

gurkin•1h ago
> Its shit too, but our kind of shit.

Unfathomably based.