frontpage.

Beginning January 2026, all ACM publications will be made open access

https://dl.acm.org/openaccess
1604•Kerrick•15h ago•175 comments

Getting bitten by Intel's poor naming schemes

https://lorendb.dev/posts/getting-bitten-by-poor-naming-schemes/
42•LorenDB•2h ago•19 comments

1.5 TB of VRAM on Mac Studio – RDMA over Thunderbolt 5

https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-studio-rdma-over-thunderbolt-5
337•rbanffy•9h ago•109 comments

History LLMs: Models trained exclusively on pre-1913 texts

https://github.com/DGoettlich/history-llms
410•iamwil•8h ago•154 comments

We pwned X, Vercel, Cursor, and Discord through a supply-chain attack

https://gist.github.com/hackermondev/5e2cdc32849405fff6b46957747a2d28
772•hackermondev•12h ago•305 comments

Texas is suing all of the big TV makers for spying on what you watch

https://www.theverge.com/news/845400/texas-tv-makers-lawsuit-samsung-sony-lg-hisense-tcl-spying
719•tortilla•2d ago•350 comments

Noclip.website – A digital museum of video game levels

https://noclip.website/
129•ivmoreau•5h ago•17 comments

In 2026, Apple is introducing more ads to increase opportunity in search results

https://ads.apple.com/app-store/help/ad-placements/0082-search-results
106•punnerud•1h ago•92 comments

The state of the kernel Rust experiment

https://lwn.net/SubscriberLink/1050174/63aa7da43214c3ce/
61•dochtman•6d ago•8 comments

GPT-5.2-Codex

https://openai.com/index/introducing-gpt-5-2-codex/
456•meetpateltech•13h ago•237 comments

Reconstructed Commander Keen 1-3 Source Code

https://pckf.com/viewtopic.php?t=18248
47•deevus•4h ago•2 comments

From Zero to QED: An informal introduction to formality with Lean 4

https://sdiehl.github.io/zero-to-qed/01_introduction.html
7•rwosync•5d ago•0 comments

How China built its ‘Manhattan Project’ to rival the West in AI chips

https://www.japantimes.co.jp/business/2025/12/18/tech/china-west-ai-chips/
316•artninja1988•12h ago•327 comments

Making Google Sans Flex

https://design.google/library/google-sans-flex-font
8•meetpateltech•1h ago•2 comments

SMB Direct – SMB3 over RDMA – The Linux Kernel Documentation

https://docs.kernel.org/filesystems/smb/smbdirect.html
19•tambourine_man•5h ago•4 comments

Property-Based Testing Caught a Security Bug I Never Would Have Found

https://kiro.dev/blog/property-based-testing-fixed-security-bug/
6•nslog•7h ago•0 comments

Show HN: Picknplace.js, an alternative to drag-and-drop

https://jgthms.com/picknplace.js/
253•bbx•2d ago•101 comments

Telegraph chess: A 19th century tech marvel

https://spectrum.ieee.org/telegraph-chess
28•sohkamyung•6d ago•4 comments

Skills for organizations, partners, the ecosystem

https://claude.com/blog/organization-skills-and-directory
258•adocomplete•14h ago•143 comments

Lite^3, a JSON-Compatible Zero-Copy Serialization Format

https://github.com/fastserial/lite3
52•cryptonector•6d ago•11 comments

Great ideas in theoretical computer science

https://www.cs251.com/
89•sebg•8h ago•16 comments

Show HN: Stop AI scrapers from hammering your self-hosted blog (using porn)

https://github.com/vivienhenz24/fuzzy-canary
209•misterchocolat•2d ago•144 comments

My First Impression on HP Zbook Ultra G1a: Ryzen AI Max+ 395, Strix Halo 128GB

https://forum.level1techs.com/t/my-first-impression-on-hp-zbook-ultra-g1a-ryzen-ai-max-395-strix-...
4•teleforce•8h ago•1 comment

The Code That Revolutionized Orbital Simulation [video]

https://www.youtube.com/watch?v=nCg3aXn5F3M
40•surprisetalk•4d ago•3 comments

Prompt caching: 10x cheaper LLM tokens, but how?

https://ngrok.com/blog/prompt-caching/
57•samwho•2d ago•5 comments

Firefox will have an option to disable all AI features

https://mastodon.social/@firefoxwebdevs/115740500373677782
384•twapi•13h ago•326 comments

Delty (YC X25) Is Hiring an ML Engineer

https://www.ycombinator.com/companies/delty/jobs/MDeC49o-machine-learning-engineer
1•lalitkundu•10h ago

Two kinds of vibe coding

https://davidbau.com/archives/2025/12/16/vibe_coding.html
75•jxmorris12•10h ago•58 comments

T5Gemma 2: The next generation of encoder-decoder models

https://blog.google/technology/developers/t5gemma-2/
127•milomg•11h ago•22 comments

Oliver Sacks put himself into his case studies – what was the cost?

https://www.newyorker.com/magazine/2025/12/15/oliver-sacks-put-himself-into-his-case-studies-what...
39•barry-cotter•10h ago•73 comments

Prompt caching: 10x cheaper LLM tokens, but how?

https://ngrok.com/blog/prompt-caching/
57•samwho•2d ago

Comments

simedw•2d ago
Thanks for sharing; you clearly spent a lot of time making this easy to digest. I especially like the tokens-to-embedding visualisation.

I recently had some trouble converting a HF transformer I trained with PyTorch to Core ML. I just couldn’t get the KV cache to work, which made it unusably slow after 50 tokens…
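
A minimal, hypothetical sketch of the pattern being described (not the commenter's actual code): a greedy decode loop with Hugging Face transformers where past_key_values carries the KV cache between steps. "gpt2" is just a stand-in model.

    # Sketch: decoding with an explicit KV cache via past_key_values.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    input_ids = tok("Prompt caching works because", return_tensors="pt").input_ids

    with torch.no_grad():
        out = model(input_ids, use_cache=True)   # prefill: run the full prompt once
        past = out.past_key_values               # keys/values for every prompt token
        next_id = out.logits[:, -1:].argmax(-1)
        generated = [next_id]

        for _ in range(49):                      # decode: feed only the newest token
            out = model(next_id, past_key_values=past, use_cache=True)
            past = out.past_key_values           # cache grows by one position per step
            next_id = out.logits[:, -1:].argmax(-1)
            generated.append(next_id)

    print(tok.decode(torch.cat(generated, dim=-1)[0]))

Without the cache, each step has to re-run the full forward pass over every token so far, so decoding gets progressively slower: fine for the first few tokens, unusably slow by token 50, much as described above.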

samwho•1d ago
Thank you so much <3

Yes, I recently wrote https://github.com/samwho/llmwalk and had a similar experience with cache vs no cache. It’s so impactful.
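
For context on why the cache is so impactful, here is a simplified local sketch of the idea behind prompt caching: compute the KV cache for a long shared prefix once, then reuse it for every request that starts with that prefix. Illustrative only ("gpt2" and the prompts are placeholders); hosted APIs implement the same idea server-side, which is roughly where the cheaper cached-token pricing comes from.

    # Sketch: reuse the KV cache computed for a shared prompt prefix.
    import copy
    import time
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    prefix = "You are a careful assistant. Follow these rules. " * 40  # shared system prompt
    prefix_ids = tok(prefix, return_tensors="pt").input_ids

    with torch.no_grad():
        prefix_cache = model(prefix_ids, use_cache=True).past_key_values  # pay for the prefix once

    def prefill(question, cache=None):
        """Run the prefill for one request, optionally starting from a cached prefix."""
        ids = tok(question, return_tensors="pt").input_ids
        if cache is None:
            ids = torch.cat([prefix_ids, ids], dim=-1)  # cold: re-encode the prefix too
            past = None
        else:
            past = copy.deepcopy(cache)                 # don't mutate the shared cache
        with torch.no_grad():
            return model(ids, past_key_values=past, use_cache=True)

    for label, cache in [("cold", None), ("cached prefix", prefix_cache)]:
        t0 = time.perf_counter()
        prefill("What time is it?", cache)
        print(f"{label}: {time.perf_counter() - t0:.3f}s")

The cached run skips re-encoding the prefix entirely: only the new tokens go through the model, attending against the stored keys and values.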

mrgaro•35m ago
Hopefully you can write the teased next article about how the feedforward and output layers work. The article was super helpful for me to get a better understanding of how GPT-style LLMs work!
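
Since that follow-up is only teased so far, here is a generic sketch of the two pieces the comment asks about, using GPT-style shapes (hidden size d_model, vocabulary size V); it is not taken from the article.

    # Sketch: the position-wise feedforward block and the output (lm_head) projection.
    import torch
    import torch.nn as nn

    class FeedForward(nn.Module):
        """MLP applied independently to each token's hidden state."""
        def __init__(self, d_model=768, d_ff=3072):
            super().__init__()
            self.up = nn.Linear(d_model, d_ff)    # expand
            self.act = nn.GELU()
            self.down = nn.Linear(d_ff, d_model)  # project back down

        def forward(self, x):
            return self.down(self.act(self.up(x)))

    d_model, vocab = 768, 50257
    ffn = FeedForward(d_model)
    lm_head = nn.Linear(d_model, vocab, bias=False)  # output layer: hidden state -> logits

    hidden = torch.randn(1, 10, d_model)             # (batch, seq, d_model)
    logits = lm_head(ffn(hidden))                    # (batch, seq, vocab)
    probs = logits[:, -1].softmax(-1)                # next-token distribution
    print(probs.shape)                               # torch.Size([1, 50257])

(Real transformer blocks wrap the feedforward in residual connections and layer norm; this only shows the shapes involved.)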

Youden•1d ago
Link seems to be broken: content briefly loads then is replaced with "Something Went Wrong" then "D is not a function". Stays broken with adblock disabled.

est•1h ago
This is a surprisingly good read on how LLMs work in general.