frontpage.

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•2m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•2m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•3m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•6m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•6m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•6m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•6m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•7m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•10m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•10m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
2•jerpint•10m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•12m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•15m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•15m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•17m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•19m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•19m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•19m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•20m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•20m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•21m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•23m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•26m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•27m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•28m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•28m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•32m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•35m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•36m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•37m ago•0 comments

OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)

https://github.com/UCSBarchlab/OpenTPU
166•walterbell•8mo ago

Comments

mdaniel•8mo ago
Yeowzers, that FAQ is filled with watch-outs.

The /forks page contains https://github.com/csirlin/OpenTGPTPU, which had a commit 3 hours ago, though it seems they have not yet updated the FAQ for their version. Anyway, the fact that it has commits more recent than 8 years ago makes it seem like a more reasonable submission.

walterbell•8mo ago
Google TPU engineers used open-source Chisel for ASIC design (2018), https://youtube.com/watch?v=x85342Cny8c

"Google Edge TPU devices", 100 comments (2019), https://news.ycombinator.com/item?id=19130896 & https://news.ycombinator.com/item?id=19313813

"Coral Edge TPU review", 100 comments (2020), https://news.ycombinator.com/item?id=24808755

"TPU transformation: 10 years of our AI-specialized chips", 60 comments (2024), https://news.ycombinator.com/item?id=41148532

dekhn•8mo ago
The site confuses the inference engine in the Edge TPU with the datacenter TPU. They are two unrelated projects. Based on the paper they're borrowing from, I think they are trying to go for a much older datacenter inference-only TPU, or only implementing the inference capabilities of the datacenter TPU.
walterbell•8mo ago
Are there recent papers on datacenter TPU?
dekhn•8mo ago
Yes.
walterbell•8mo ago
David Patterson overview (2023), https://www.cs.ucla.edu/wp-content/uploads/cs/PATTERSON-10-L...

TPU v4 (2023), https://arxiv.org/abs/2304.01433

flakiness•8mo ago
[2017] (https://arxiv.org/abs/1704.04760)
walterbell•8mo ago
[May 2025] (https://github.com/csirlin/OpenTGPTPU/commits/master)
flakiness•8mo ago
Wow, they have kept working on this! Thanks for pointing this out! Very impressive.
whimsicalism•8mo ago
> The TPU is Google's custom ASIC for accelerating the inference phase of neural network computations.

this seems hopelessly out of date/confused

walterbell•8mo ago
Additional text from Google's 2017 paper abstract says:

  This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. 

  The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. 

  The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters.
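
A quick back-of-the-envelope check of that 92 TOPS figure (a sketch only; the 700 MHz clock rate is also reported in the 2017 paper):

  macs = 256 * 256         # 65,536 8-bit MACs in the matrix multiply unit
  ops_per_mac = 2          # one multiply plus one add per cycle
  clock_hz = 700e6         # TPUv1 clock rate from the same paper
  peak_tops = macs * ops_per_mac * clock_hz / 1e12
  print(peak_tops)         # ~91.8, matching the quoted 92 TOPS peak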
whimsicalism•8mo ago
hence the out-of-date part of my comment
walterbell•8mo ago
Recent (2024) description by Google, https://cloud.google.com/blog/transform/ai-specialized-chips...

  TPUs were purpose-built specifically for AI. TPUs are an application-specific integrated circuit (ASIC), a chip designed for a single, specific purpose: running the unique matrix and vector-based mathematics that’s needed for building and running AI models..

  TPU v2.. built an interconnected machine — our first TPU pod — with 256 TPU chips connected with a very high-bandwidth, custom interconnect.. liquid cooling was added with TPU v3 to help address efficiency needs, while TPU v4 introduced optical circuit switches to allow the chips in pods to communicate even faster and more reliably. 

  TPUs also underpin Google DeepMind’s cutting-edge foundation models, including the newly unveiled Gemini 1.5 Flash, Imagen 3, and Gemma 2, propelling advancements in AI.. Forget about a single chip, or a single TPU pod — we’re building a global network of data centers filled with TPUs.
throwawaymaths•8mo ago
what's the memory bandwidth? IIRC that is the limiting factor in LLM hardware today
walterbell•8mo ago
Slide 21, https://files.futurememorystorage.com/proceedings/2024/20240...

            TPUv3     TPUv4
  HBM2 BW   900 GB/s  1200 GB/s
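
As a rough illustration of why that bandwidth number bounds LLM serving (a sketch with a hypothetical model size, not a measured figure):

  # Decode is roughly memory-bound: each generated token streams the model
  # weights from HBM once (ignoring KV-cache traffic and batching).
  hbm_bw = 1200e9                  # TPUv4 HBM2 bandwidth from the slide, bytes/s
  weight_bytes = 70e9 * 2          # hypothetical 70B-parameter model in bf16
  tokens_per_s = hbm_bw / weight_bytes
  print(tokens_per_s)              # ~8.6 tokens/s per chip at batch size 1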
surfmike•8mo ago
How would you describe it instead? Curious and learning
imtringued•8mo ago
Google does everything, both inference and training, on their TPUs.

Inference is easier, since the person deploying a model knows the architecture ahead of time and therefore can write custom code for their particular model.

When training, you want to be as flexible as possible. The framework and hardware should not impose any particular architecture. This means lots of kernels and combinations of kernels. Miss one and you're out.
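
A toy sketch of that asymmetry (hypothetical shapes, not TPU or OpenTPU code): an inference deployment can freeze shapes and specialize one kernel, while a training framework has to stay general and also provide a backward pass for every op.

  import numpy as np

  W = np.random.randn(4096, 4096).astype(np.float32)

  def matmul_inference(x):
      # Shapes are known ahead of time, so exactly this one case
      # can be hand-tuned (tiling, fusion, quantization, ...).
      assert x.shape == (1, 4096)
      return x @ W

  def matmul_training(a, b):
      # Arbitrary shapes, dtypes and layouts must all work, and each such
      # kernel also needs a matching gradient kernel.
      return a @ b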

throwawaymaths•8mo ago
> Miss one and you're out.

Well, these days, since everything is a transformer, your pool of choices is less daunting and there's only about four or five places where someone might get clever.

dgacmu•8mo ago
They're not confused at all; this is just a (correct) description of TPU v1. The repository is 8 years old.
andutu•8mo ago
There is an excellent paper and talk on how Google's TPU cluster is managed: https://www.usenix.org/conference/nsdi24/presentation/zu.
westurner•8mo ago
Can [OpenTPU] TPUs be fabricated out of graphene, with nanoimprinting or a more efficient approach?

From https://news.ycombinator.com/item?id=42314333 :

>> From "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :

>>> Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.

westurner•8mo ago
What about QPUs though?

Can QPUs (Quantum Processing Units) built with electrons in superconducting graphene ever be faster than photons in integrated nanophotonics?

There are integrated parametric single-photon emitters and detectors.

Is there a lower cost integrated nanophotonic coherent light source for [quantum] computing than a thin metal wire?

"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885