frontpage.

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•28s ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•34s ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•1m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•2m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

https://twitter.com/alansass/status/2019904035982307406
1•alan_sass•3m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•3m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•3m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•4m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•6m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
3•codexon•7m ago•1 comment

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•8m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•11m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•12m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•12m ago•0 comments

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•13m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•13m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•16m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•16m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•18m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•20m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•21m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•21m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•21m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•23m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•24m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•26m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•31m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•32m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•32m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•36m ago•0 comments

OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)

https://github.com/UCSBarchlab/OpenTPU
166•walterbell•8mo ago

Comments

mdaniel•8mo ago
Yeowzers, that FAQ is filled with watch-outs.

The /forks page contains https://github.com/csirlin/OpenTGPTPU, which had a commit 3 hours ago, though it seems they have not yet updated the FAQ for their version. Anyway, the fact that it has commits more recent than 8 years ago makes it seem like a more reasonable submission.

walterbell•8mo ago
Google TPU engineers used open-source Chisel for ASIC design (2018), https://youtube.com/watch?v=x85342Cny8c

"Google Edge TPU devices", 100 comments (2019), https://news.ycombinator.com/item?id=19130896 & https://news.ycombinator.com/item?id=19313813

"Coral Edge TPU review", 100 comments (2020), https://news.ycombinator.com/item?id=24808755

"TPU transformation: 10 years of our AI-specialized chips", 60 comments (2024), https://news.ycombinator.com/item?id=41148532

dekhn•8mo ago
The site confuses the inference engine in the Edge TPU with the datacenter TPU. They are two unrelated projects. Based on the paper they're borrowing from, I think they are trying to go for a much older datacenter inference-only TPU, or only implementing the inference capabilities of the datacenter TPU.
walterbell•8mo ago
Are there recent papers on datacenter TPU?
dekhn•8mo ago
Yes.
walterbell•8mo ago
David Patterson overview (2023), https://www.cs.ucla.edu/wp-content/uploads/cs/PATTERSON-10-L...

TPU v4 (2023), https://arxiv.org/abs/2304.01433

flakiness•8mo ago
[2017] (https://arxiv.org/abs/1704.04760)
walterbell•8mo ago
[May 2025] (https://github.com/csirlin/OpenTGPTPU/commits/master)
flakiness•8mo ago
Wow, they have kept working on this! Thanks for pointing this out! Very impressive.
whimsicalism•8mo ago
> The TPU is Google's custom ASIC for accelerating the inference phase of neural network computations.

this seems hopelessly out of date/confused

walterbell•8mo ago
Additional text from Google's 2017 paper abstract says:

  This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. 

  The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. 

  The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters.
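
A quick sanity check of the headline number (a rough sketch in Python; the 256x256 array size and ~700 MHz clock are figures from the body of the paper, not the abstract quoted above):

  # Peak throughput of the TPU v1 matrix unit:
  # a 256 x 256 systolic array = 65,536 8-bit MACs,
  # each MAC counted as 2 ops (multiply + add), clocked at ~700 MHz.
  macs = 256 * 256              # 65,536
  ops_per_mac = 2               # multiply-accumulate
  clock_hz = 700e6              # TPU v1 clock, per the paper
  peak_tops = macs * ops_per_mac * clock_hz / 1e12
  print(peak_tops)              # ~91.8, i.e. the quoted 92 TOPS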
whimsicalism•8mo ago
hence the out of date part of my comment
walterbell•8mo ago
Recent (2024) description by Google, https://cloud.google.com/blog/transform/ai-specialized-chips...

  TPUs were purpose-built specifically for AI. TPUs are an application-specific integrated circuit (ASIC), a chip designed for a single, specific purpose: running the unique matrix and vector-based mathematics that’s needed for building and running AI models..

  TPU v2.. built an interconnected machine — our first TPU pod — with 256 TPU chips connected with a very high-bandwidth, custom interconnect.. liquid cooling was added with TPU v3 to help address efficiency needs, while TPU v4 introduced optical circuit switches to allow the chips in pods to communicate even faster and more reliably. 

  TPUs also underpin Google DeepMind’s cutting-edge foundation models, including the newly unveiled Gemini 1.5 Flash, Imagen 3, and Gemma 2, propelling advancements in AI.. Forget about a single chip, or a single TPU pod — we’re building a global network of data centers filled with TPUs.
throwawaymaths•8mo ago
what's the memory bandwidth? IIRC that is the limiting factor in LLM hardware today
walterbell•8mo ago
Slide 21, https://files.futurememorystorage.com/proceedings/2024/20240...

            TPUv3     TPUv4
  HBM2 BW   900 GB/s  1200 GB/s
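
For a sense of why that number dominates LLM serving, here is a rough memory-bound estimate (the 7B-parameter, 8-bit model is purely an illustrative assumption):

  # Lower bound on per-token decode latency when generation is memory-bound:
  # every generated token has to stream the full set of weights from HBM at least once.
  weight_bytes = 7e9 * 1      # hypothetical 7B-parameter model at 1 byte per weight
  hbm_bw = 1200e9             # TPUv4 HBM2 bandwidth from the table above, in bytes/s
  latency_s = weight_bytes / hbm_bw
  print(latency_s * 1e3)      # ~5.8 ms per token at best
  print(1 / latency_s)        # ~171 tokens/s per chip, ignoring compute and the KV cache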
surfmike•8mo ago
How would you describe it instead? Curious and learning
imtringued•8mo ago
Google does everything, both inference and training, on their TPUs.

Inference is easier, since the person deploying a model knows the architecture ahead of time and therefore can write custom code for their particular model.

When training you want to be as flexible as possible. The framework and hardware should not impose any particular architecture. This means lots of kernels and combinations of kernels. Miss one and you're out.

throwawaymaths•8mo ago
> Miss one and you're out.

Well, these days, since everything is a transformer, your pool of choices is less daunting and there's only about four or five places where someone might get clever.

dgacmu•8mo ago
They're not confused at all; this is just a (correct) description of TPU v1. The repository is 8 years old.
andutu•8mo ago
There is an excellent paper and talk on how Google's TPU cluster is managed: https://www.usenix.org/conference/nsdi24/presentation/zu.
westurner•8mo ago
Can [OpenTPU] TPUs be fabricated out of graphene, with nanoimprinting or a more efficient approach?

From https://news.ycombinator.com/item?id=42314333 :

>> From "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :

>>> Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.

westurner•8mo ago
What about QPUs though?

Can QPUs (Quantum Processing Units) built with electrons in superconducting graphene ever be faster than photons in integrated nanophotonics?

There are integrated parametric single-photon emitters and detectors.

Is there a lower cost integrated nanophotonic coherent light source for [quantum] computing than a thin metal wire?

"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885