
We built another object storage

https://fractalbits.com/blog/why-we-built-another-object-storage/
60•fractalbits•2h ago•9 comments

Java FFM zero-copy transport using io_uring

https://www.mvp.express/
25•mands•5d ago•6 comments

How exchanges turn order books into distributed logs

https://quant.engineering/exchange-order-book-distributed-logs.html
49•rundef•5d ago•17 comments

macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt

https://developer.apple.com/documentation/macos-release-notes/macos-26_2-release-notes#RDMA-over-...
467•guiand•18h ago•237 comments

AI is bringing old nuclear plants out of retirement

https://www.wbur.org/hereandnow/2025/12/09/nuclear-power-ai
33•geox•1h ago•25 comments

Sick of smart TVs? Here are your best options

https://arstechnica.com/gadgets/2025/12/the-ars-technica-guide-to-dumb-tvs/
433•fleahunter•1d ago•362 comments

Photographer built a medium-format rangefinder, and so can you

https://petapixel.com/2025/12/06/this-photographer-built-an-awesome-medium-format-rangefinder-and...
78•shinryuu•6d ago•9 comments

Apple has locked my Apple ID, and I have no recourse. A plea for help

https://hey.paris/posts/appleid/
865•parisidau•10h ago•445 comments

GNU Unifont

https://unifoundry.com/unifont/index.html
287•remywang•18h ago•68 comments

A 'toaster with a lens': The story behind the first handheld digital camera

https://www.bbc.com/future/article/20251205-how-the-handheld-digital-camera-was-born
42•selvan•5d ago•18 comments

Beautiful Abelian Sandpiles

https://eavan.blog/posts/beautiful-sandpiles.html
83•eavan0•3d ago•16 comments

Rats Play DOOM

https://ratsplaydoom.com/
332•ano-ther•18h ago•123 comments

Show HN: Tiny VM sandbox in C with apps in Rust, C and Zig

https://github.com/ringtailsoftware/uvm32
167•trj•17h ago•11 comments

OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI

https://simonwillison.net/2025/Dec/12/openai-skills/
481•simonw•15h ago•271 comments

Computer Animator and Amiga fanatic Dick Van Dyke turns 100

109•ggm•6h ago•23 comments

Will West Coast Jazz Get Some Respect?

https://www.honest-broker.com/p/will-west-coast-jazz-finally-get
10•paulpauper•6d ago•2 comments

Formula One Handovers and Handovers From Surgery to Intensive Care (2008) [pdf]

https://gwern.net/doc/technology/2008-sower.pdf
82•bookofjoe•6d ago•33 comments

Show HN: I made a spreadsheet where formulas also update backwards

https://victorpoughon.github.io/bidicalc/
179•fouronnes3•1d ago•85 comments

Freeing a Xiaomi humidifier from the cloud

https://0l.de/blog/2025/11/xiaomi-humidifier/
126•stv0g•1d ago•51 comments

Obscuring P2P Nodes with Dandelion

https://www.johndcook.com/blog/2025/12/08/dandelion/
57•ColinWright•4d ago•1 comment

Go is portable, until it isn't

https://simpleobservability.com/blog/go-portable-until-isnt
119•khazit•6d ago•101 comments

Ensuring a National Policy Framework for Artificial Intelligence

https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-nati...
169•andsoitis•1d ago•217 comments

Poor Johnny still won't encrypt

https://bfswa.substack.com/p/poor-johnny-still-wont-encrypt
52•zdw•10h ago•64 comments

YouTube's CEO limits his kids' social media use – other tech bosses do the same

https://www.cnbc.com/2025/12/13/youtubes-ceo-is-latest-tech-boss-limiting-his-kids-social-media-u...
84•pseudolus•3h ago•67 comments

Slax: Live Pocket Linux

https://www.slax.org/
41•Ulf950•5d ago•5 comments

50 years of proof assistants

https://lawrencecpaulson.github.io//2025/12/05/History_of_Proof_Assistants.html
107•baruchel•15h ago•17 comments

Gild Just One Lily

https://www.smashingmagazine.com/2025/04/gild-just-one-lily/
29•serialx•5d ago•5 comments

Capsudo: Rethinking sudo with object capabilities

https://ariadne.space/2025/12/12/rethinking-sudo-with-object-capabilities.html
75•fanf2•17h ago•44 comments

Google removes Sci-Hub domains from U.S. search results due to dated court order

https://torrentfreak.com/google-removes-sci-hub-domains-from-u-s-search-results-due-to-dated-cour...
193•t-3•11h ago•34 comments

String theory inspires a brilliant, baffling new math proof

https://www.quantamagazine.org/string-theory-inspires-a-brilliant-baffling-new-math-proof-20251212/
167•ArmageddonIt•22h ago•154 comments

Tensor Manipulation Unit (TMU): Reconfigurable, Near-Memory, High-Throughput AI

https://arxiv.org/abs/2506.14364
58•transpute•5mo ago

Comments

KnuthIsGod•5mo ago
Cutting-edge and innovative AI hardware research from China.

Looks like American sanctions are driving a new wave of innovation in China.

" This work addresses that gap by introducing the Ten- sor Manipulation Unit (TMU): a reconfigurable, near-memory hardware block designed to execute data-movement-intensive (DMI) operators efficiently. TMU manipulates long datastreams in a memory-to-memory fashion using a RISC-inspired execution model and a unified addressing abstraction, enabling broad support for both coarse- and fine-grained tensor transformations.

The proposed architecture integrates TMU alongside a TPU within a high-throughput AI SoC, leveraging double buffering and output forwarding to improve pipeline utilization. Fab- ricated in SMIC 40 nm technology, the TMU occupies only 0.019 mm2 while supporting over 10 representative TM operators. Benchmarking shows that TMU alone achieves up to 1413.43× and 8.54× operator-level latency reduction over ARM A72 and NVIDIA Jetson TX2, respectively.

When integrated with the in- house TPU, the complete system achieves a 34.6% reduction in end-to-end inference latency, demonstrating the effectiveness and scalability of reconfigurable tensor manipulation in modern AI SoCs."
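
To make "data-movement-intensive operator" concrete, here is a minimal Python sketch (an illustration, not the paper's design) of a memory-to-memory transpose expressed as a flat-address remap, the kind of pure data movement, with no arithmetic, that the TMU's unified addressing abstraction generalizes:

    import numpy as np

    def transpose_via_address_remap(src, shape):
        # Treat src as a flat buffer: for every element, compute the
        # destination flat address from its (row, col) index and copy.
        rows, cols = shape
        dst = np.empty(rows * cols, dtype=src.dtype)
        for flat in range(rows * cols):
            r, c = divmod(flat, cols)       # source index (r, c)
            dst[c * rows + r] = src[flat]   # destination index (c, r)
        return dst.reshape(cols, rows)

    x = np.arange(6, dtype=np.int32)        # a 2x3 tensor, flattened
    assert (transpose_via_address_remap(x, (2, 3)) == x.reshape(2, 3).T).all()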

yorwba•5mo ago
It's not like AI hardware acceleration is some niche field that nobody would be researching if there were no sanctions. Academics started flocking towards hardware for AI workloads as soon as it became a trendy topic to be working on (of course back then it was mostly convnets). Maybe recent sanctions have increased the total funding pool, but that's not something you can infer by just gesturing at a single paper.
WithinReason•5mo ago
Isn't this a software problem being solved in hardware? Ideally you would try to avoid going to memory in the first place by fusing the operations, which should be much faster than speeding up memory ops. E.g. you should never do an explicit im2col before a convolution; it should be fused (see the sketch below). However, it's hard to argue with a 0.019 mm² area increase.
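
A rough NumPy sketch (for illustration, not from the paper) of why the explicit version hurts: im2col materializes a buffer roughly kh*kw times the size of the input, and a fused kernel avoids ever writing that buffer to memory:

    import numpy as np

    def im2col(x, kh, kw):
        # x: (H, W) single-channel image -> (out_h*out_w, kh*kw) matrix
        H, W = x.shape
        out_h, out_w = H - kh + 1, W - kw + 1
        cols = np.empty((out_h * out_w, kh * kw), dtype=x.dtype)
        for i in range(out_h):
            for j in range(out_w):
                cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
        return cols

    x = np.random.rand(64, 64).astype(np.float32)
    k = np.random.rand(3, 3).astype(np.float32)
    cols = im2col(x, 3, 3)      # materialized buffer, ~9x the input
    y = cols @ k.ravel()        # convolution expressed as a matmul
    # A fused conv computes y without ever writing `cols` to memory.
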
imtringued•5mo ago
"Fusing im2col with matrix multiplication" is a confused way of saying that the convolution operation should be implemented directly in hardware.

There are two arguments in favor of im2col.

1. "I don't want to implement a dedicated software kernel just for convolutions" aka laziness

2. "I don't want to implement dedicated hardware just for convolution"

The former is a sham, the latter is motivated by silicon area constraints. Implementing convolutions requires exactly the same number of FMAs, so you would end up doubling your chip size and automatically be cursed with 50% utilization from the start unless you do both matrix multiplication and convolutions simultaneously.

When you read answers like this: https://stackoverflow.com/a/47422548, they are subtly wrong.

"Element wise convolution performs badly because of the irregular memory accesses involved in it." at a first glance sounds like a reasonable argument, but all you're doing with im2col is shifting the "irregular memory accesses" into a separate kernel. It doesn't fundamentally get rid of the "irregular memory accesses".

The problem with the answer is that the irregularity is purely a result of one's perspective. Assuming you implement im2col in hardware, there is in fact nothing difficult about the irregularity. In fact, what is considered irregular here is perfectly predictable from the perspective of the hardware.

All you do is load x pixels from y rows simultaneously, which is extremely data-parallel and SIMD-friendly. Once the data is in local registers, you can access it any way you want (each register is effectively its own bank), which lets you easily produce the im2col output stream and feed it straight to your matrix multiplication unit, as in the sketch below. You could have implemented the convolution directly, but then again you'd only get 50% utilization due to inflexibility.
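
A Python generator version of that streaming view (a sketch with made-up names, not anyone's actual kernel): it loads a few rows at a time, emits the im2col stream patch by patch, and feeds it to the matmul step without ever materializing the full im2col buffer:

    import numpy as np

    def im2col_stream(x, kh, kw):
        # Yield one im2col row at a time: each outer step loads kh rows
        # of x (regular, predictable accesses) and slides the window.
        H, W = x.shape
        for i in range(H - kh + 1):
            rows = x[i:i + kh]              # kh rows held "in registers"
            for j in range(W - kw + 1):
                yield rows[:, j:j + kw].ravel()

    x = np.arange(25, dtype=np.float32).reshape(5, 5)
    k = np.ones(9, dtype=np.float32)        # 3x3 box filter, flattened
    y = np.array([patch @ k for patch in im2col_stream(x, 3, 3)])
    assert y.shape == (9,)                  # 3x3 output, flattened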

WithinReason•5mo ago
They compare im2col performance with a GPU, but you don't need explicit im2col on a GPU.
shihab•5mo ago
In one view, the fact that it's a software problem is actually a weakness of (GPU) hardware design.

In the olden, serial-computing days, our algorithms were standard, and CPU designers did all sorts of behind-the-scenes tricks to improve performance without burdening software developers. It wasn't perfect abstraction, but they tried. Algorithms led the way; hardware had to follow.

CUDA threw that all away and exposed lots of ugly details of GPU hardware design that developers _had to_ take into account. This is why, for a long time, CUDA's primary target customers (the HPC community and national labs) refused to adopt it.

It's interesting how much our view on this has shifted now that CUDA has become a legitimate, widely adopted computing paradigm.

djmips•5mo ago
You can still live in your abstract, imperfect universe; there's nothing stopping you.
shihab•5mo ago
I don't believe you really can in the GPU world. With CPUs, if you ignore something important like the cache hierarchy, the performance penalty is likely in the double-digit percentage range, something people can and often do ignore. With GPUs, there are many, many things (memory coalescing, warps, SRAM) that can have a triple-digit percentage impact, hell, maybe even more than that.
WithinReason•5mo ago
Ignoring the cache hierarchy on a CPU for matrix multiplication gets you a 100x performance drop, just like on a GPU. A toy sketch of the cache-aware version follows.
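
As a toy illustration of what respecting the cache hierarchy means here (a sketch, not a tuned kernel), the classic fix is loop tiling, so each block is reused while it is still resident in cache:

    import numpy as np

    def matmul_tiled(A, B, T=64):
        # Accumulate C block by block so each T x T tile of A, B and C
        # stays cache-resident while it is reused.
        n = A.shape[0]                      # assumes square, n % T == 0
        C = np.zeros((n, n), dtype=A.dtype)
        for i in range(0, n, T):
            for j in range(0, n, T):
                for k in range(0, n, T):
                    C[i:i+T, j:j+T] += A[i:i+T, k:k+T] @ B[k:k+T, j:j+T]
        return C

    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    assert np.allclose(matmul_tiled(A, B), A @ B)
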
mikewarot•5mo ago
The only memory involved should be at the input and output of a pipeline stage that does an entire layer of an LLM. I'm of the opinion that we'll end up with effectively massive FPGAs with some stages of pipelining that have NO memory access internally, so that you get one token per clock cycle.

100 million tokens per second is currently worth about $130,000,000/day. (Or so ChatGPT 4.1 told me a few days ago)

I'd like to drop that by a factor of at least 1000:1
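
For what it's worth, the figure checks out arithmetically under a price of about $15 per million tokens (the price is an assumption chosen to match, not from the comment):

    tokens_per_second = 100e6
    seconds_per_day = 86_400
    price_per_million_tokens = 15.0   # assumed $/Mtok, chosen to match

    tokens_per_day = tokens_per_second * seconds_per_day       # 8.64e12
    dollars_per_day = tokens_per_day / 1e6 * price_per_million_tokens
    print(f"${dollars_per_day:,.0f}/day")                      # ~$129,600,000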

thijson•5mo ago
In theory that would be ideal, but I feel like FPGAs haven't kept up compared to GPUs. The latest GPUs will be at 4 nm, while FPGAs are still at 28 nm. The pipelines are huge; it would take many FPGAs to fit one LLM if everything is kept on-die. Cerebras is attempting this, but has to use a whole silicon wafer:

https://www.cerebras.ai/

We need FPGAs at the latest process node, with many GB of HBM in the package. Fast reconfigurability would also be nice to have.

I feel like FPGAs have stagnated over the last decade as the two largest companies in the space were acquired by Intel and AMD. Those companies haven't kept up the pace of innovation, as it isn't their core business.

addaon•5mo ago
> The latest GPUs will be at 4 nm, while FPGAs are still at 28 nm.

16 nm (or “14 nm”) for UltraScale+.

craigjb•5mo ago
7 nm for Achronix

https://www.achronix.com/product/speedster7t-fpgas