frontpage.

Extropic is building thermodynamic computing hardware

https://extropic.ai/
97•vyrotek•8h ago

Comments

quantumHazer•8h ago
there is also Normal Computing[0] that are trying different approaches to chips like that. Anyway these are very difficult problems and Extropic already abandoned some of their initial claims about superconductors to pivot to more classical CMOS circuits[1]

[0]: https://www.normalcomputing.com

[1]: https://www.zach.be/p/making-unconventional-computing-practi...

nfw2•8h ago
I don't really understand the purpose of hyping up a launch announcement and then not making any effort whatsoever to make the progress comprehensible to anyone without advanced expertise in the field.
ipsum2•8h ago
That's the intention. Fill it up with enough jargon and gobbledegook that it looks impressive to investors, while hiding the fact that there's no real technology underneath.
frozenseven•5h ago
>jargon and gobbledegook

>no real technology underneath

They're literally shipping real hardware. They also put out a paper + posted their code too.

Flippant insults will not cut it.

ipsum2•5h ago
Nice try. It's smoke and mirrors. Tell me one thing it does better than a 20 year old CPU.
frozenseven•5h ago
More insults and a blanket refusal to engage with the material. Ok.
ipsum2•5h ago
If you think comparing hardware performance is an insult, then you have some emotional issues or are a troll.
frozenseven•4h ago
Ah, more insults. This will be my final reply to you.

I'll say it again. The hardware exists. The paper and code are there. If someone wants to insist that it's fake or whatever, they need to come up with something better than permutations of "u r stoopid" (your response to their paper: https://news.ycombinator.com/item?id=45753471). Just engage with the actual material. If there's a solid criticism, I'd like to hear it too.

maradan•5h ago
This hardware is an analog simulator for Gibbs sampling, which is an idealized physical process that describes random systems with large-scale structure. The energy-efficiency gains come from the fact that it's analog. It may seem like jargon, but Gibbs sampling is an extremely well-known concept with decades of work connecting it to many areas of statistics, probability theory, and machine learning. The algorithmic problem they need to solve is how to harness Gibbs sampling for large-scale ML tasks, but arguably this isn't a huge leap: it's very similar to EBM learning/sampling, with the advantage of being able to sample larger systems for the same energy.
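For readers who haven't met it, Gibbs sampling is easy to sketch in a few lines of Python. This toy Ising model (couplings, temperature, and sweep count all invented for illustration, nothing to do with Extropic's actual circuits) resamples each spin from its conditional distribution given the others, which is the primitive the hardware is claimed to implement in analog:

```python
import math
import random

def gibbs_ising(coupling, n_spins, n_sweeps, beta=1.0, seed=0):
    """Gibbs sampling on a tiny fully connected Ising model.

    Each sweep resamples every spin from its conditional distribution
    given all the others -- repeated biased coin flips.
    """
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n_spins)]
    for _ in range(n_sweeps):
        for i in range(n_spins):
            # Local field felt by spin i from every other spin.
            field = sum(coupling[i][j] * spins[j]
                        for j in range(n_spins) if j != i)
            # Conditional P(spin_i = +1 | rest) is a sigmoid of the field.
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
            spins[i] = 1 if rng.random() < p_up else -1
    return spins

# Ferromagnetic couplings: spins prefer to align with each other.
J = [[0.0 if i == j else 1.0 for j in range(4)] for i in range(4)]
sample = gibbs_ising(J, n_spins=4, n_sweeps=100, beta=2.0)
```

Every iteration of the inner loop is one biased coin flip; the pitch, as far as it goes, is that analog hardware can do that flip for far less energy than a digital core.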
rcxdude•2h ago
The fact that there's real hardware and a paper doesn't mean the product is actually worth anything. It's very possible to make something (especially an extremely simplified 'proof of concept' which is not actually useful at all) and massively oversell it. Looking at the paper, it may have some very niche applications, but it's really not obvious that they would justify the investment needed to make it better than existing general-purpose hardware, and the amount of effort that's been put into 'sizzle' aimed at investors makes it look disingenuous.
frozenseven•1h ago
>The fact that there's real hardware and a paper doesn't mean the product is actually worth anything.

I said you can't dismiss someone's hardware + paper + code solely based on insults. That's what I said. That was my argument. Speaking of which:

>disingenuous

>sizzle

>oversell

>dubious niche value

>window dressing

>suspicious

For the life of me I can't understand how any of this is an appropriate response when the other guy is showing you math and circuits.

maradan•5h ago
"no real technology underneath" zzzzzzzzzzz
fastball•4h ago
You not comprehending a technology does not automatically make it vaporware.
lacy_tinpot•7h ago
What's not comprehensible?

It's just miniaturized lava lamps.

nfw2•6h ago
A lava lamp that just produces randomness, e.g. for cryptography purposes, is different from the benefit here, which is producing specific randomness at low energy cost
d_silin•8h ago
It is a hardware RNG they are building. The claim is that their solution is going to be more computationally efficient for a narrow class of problems (de-noising step for diffusion AI models) vs current state of the art. Maybe.

This is what they are trying to create, more specifically:

https://pubs.aip.org/aip/apl/article/119/15/150503/40486/Pro...

A_D_E_P_T•8h ago
An old concept indeed! I think about this Ed Fredkin story a lot... In his words:

"Just a funny story about random numbers: in the early days of computers people wanted to have random numbers for Monte Carlo simulations and stuff like that and so a great big wonderful computer was being designed at MIT’s Lincoln laboratory. It was the largest fastest computer in the world called TX2 and was to have every bell and whistle possible: a display screen that was very fancy and stuff like that. And they decided they were going to solve the random number problem, so they included a register that always yielded a random number; this was really done carefully with radioactive material and Geiger counters, and so on. And so whenever you read this register you got a truly random number, and they thought: “This is a great advance in random numbers for computers!” But the experience was contrary to their expectations! Which was that it turned into a great disaster and everyone ended up hating it: no one writing a program could debug it, because it never ran the same way twice, so ... This was a bit of an exaggeration, but as a result everybody decided that the random number generators of the traditional kind, i.e., shift register sequence generated type and so on, were much better. So that idea got abandoned, and I don’t think it has ever reappeared."

RIP Ed. https://en.wikipedia.org/wiki/Edward_Fredkin

rcxdude•2h ago
It's funny because that did actually reappear at some point with rdrand. But still, it's only really used for cryptography; if you just need a random distribution, almost everyone uses a PRNG (a non-cryptographic one is a lot faster still, apart from being deterministic).
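The debuggability trade-off in Fredkin's story is easy to demonstrate; a minimal Python sketch, with `os.urandom` standing in for a true hardware source:

```python
import os
import random

# A seeded PRNG replays the exact same "random" sequence, so a failing
# Monte Carlo run can be reproduced and debugged -- the property the
# TX2's truly random register destroyed.
a = random.Random(42)
b = random.Random(42)
seq1 = [a.random() for _ in range(5)]
seq2 = [b.random() for _ in range(5)]
assert seq1 == seq2  # deterministic: every run is replayable

# A hardware entropy source offers no replay: two reads are independent,
# so a bug triggered by one particular draw may never show up again.
assert os.urandom(16) != os.urandom(16)
```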
Imnimo•1h ago
And still today we spend a great deal of effort trying to make our randomly-sampled LLM outputs reproducibly deterministic:

https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

vlovich123•7h ago
Generating randomness is not a bottleneck and modern SIMD CPUs should be more than fast enough. I thought they’re building approximate computation where a*b is computed within some error threshold p.
TYPE_FASTER•6h ago
https://en.wikipedia.org/wiki/Lavarand
jazzyjackson•6h ago
I think that's underselling it a bit, since there are lots of existing ways to have a hardware RNG. They're trying to use lots and lots of hardware RNGs to solve probabilistic problems a little more probabilistically.
pclmulqdq•5h ago
I tried this, but not with the "AI magic" angle. It turns out nobody cares because CSPRNGs are random enough and really fast.
rvz•8h ago
This looks really amazing, if not unbelievable, to the point where it is almost too good to be real.

I have not seen benchmarks on Extropic's new computing hardware yet but need to know from experts who are in the field of AI infrastructure at the semiconductor level if this is legit.

I'm 75% convinced that this is real, but I hold 25% skepticism and will reserve judgement until others have tried the hardware.

So my only question for the remaining 25%:

Is this a scam?

arisAlexis•8h ago
Too good to be true, incomprehensible jargon to go along...
KaiserPro•7h ago
I mean it sure looks like a scam.

I really like the design of it though.

unsupp0rted•7h ago
I doubt it’s a scam. Beff might be on to something or completely delusional, but not actively scamming.
vzcx•7h ago
The best conmen have an abundance of confidence in themselves.
delichon•7h ago
This one just released a prototype platform, "XTR-0", so if it's a fraud the jig will shortly be up.

https://extropic.ai/writing/inside-x0-and-xtr-0

natosaichek•4h ago
I think it's more a concern that the hardware isn't useful in the real world, rather than that the hardware doesn't meet the description they provide of it.
behnamoh•8h ago
Usually there's a negative correlation between the fanciness of a startup webpage and the actual value/product they'll deliver.

This gives "hype" vibes.

noir_lord•6h ago
I'm more impressed that my laptop fans came on when I loaded the page.

It's the one attached to my TV that just runs movies/YT - I don't recall the last time I heard the fans.

ecshafer•6h ago
They did say thermodynamic computing.
imploded_sub•46m ago
Same on a serious dev machine. That page just pegs a core at max, it's sort of impressive.
wfurney•2h ago
Interesting you say that; I had an instinctual reaction in that vein as well. I chalked it up to bias since I couldn't think of any concrete examples. Something about the webpage being so nice made me think they've spent a lot of time on it (relative to their product?). Admittedly I'm nowhere close to even trying to understand their paper, but I'm interested in seeing what others think about it.
rcxdude•2h ago
I've seen it as well. One thing that's universally true about potential competitor startups in the field I work in is that the ones who don't actually have anything concrete to show have way nicer websites than ours (some have significantly more funding and still nothing to show).

I have a passing familiarity with the areas they talk about in the paper, and it feels... dubious. Mainly because of the dedicated accelerator problem. Even dedicated neural net accelerators are having difficulty gaining traction against general purpose compute units in a market that is ludicrously hot for neural net processing, and this is talking about accelerating Monte-Carlo processes which are pretty damn niche in application nowadays (especially in situations where you're compute-limited). So even if they succeed in speeding up that application, it's hard to see how worthwhile it would be. And it's not obvious from the publicly available information whether they're close to even beating the FPGA emulation of the concept which was used in the paper.

vlovich123•7h ago
I’ve been wondering how long it would take for someone to try probabilistic computing for AI workloads; the imprecision inherent in the workload makes it ideally suited for AI matrix math with a significant power reduction. My professor in university was researching this space and it seemed very interesting. I never thought it could supplant CPUs necessarily, but massive compute applications that don’t require precise math, like 3D rendering (and now AI), always seemed like a natural fit.
Imustaskforhelp•7h ago
I don't think it does AI matrix math with a significant power reduction; it just seems to provide RNG? I may be wrong given my limited knowledge, so maybe someone can say what the reality is: whether it can do AI matrix math with a significant power reduction, or whether that's even their goal right now. To me it currently feels like a lava-lamp-equivalent thing, as another commenter said.
rcxdude•2h ago
The paper talks about some quite old-school AI techniques (the kind of thing I learned about in university a decade ago, when it was already on its way out). It's not anything to do with matrix multiplications (well, nothing to do with computing them faster directly), but instead being able to sample from a complex distribution more efficiently by having dedicated circuits that simulate elements of that distribution in hardware. So it won't make your neural nets any faster.
6510•6h ago
I'm still waiting for my memristors.
docandrew•7h ago
Hype aside, if you can get an answer to a computing problem with error bars in significantly less time, where precision just isn’t that important (such as with LLMs), this could be a game changer.
alyxya•6h ago
Precision actually matters a decent amount in LLMs. Quantization is used strategically in places that’ll minimize performance degradation, and models are robust enough that some loss in performance still gives a good model. I’m skeptical how well this would turn out, but it’s probably always possible to remedy precision loss with a sufficiently larger model.
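As a concrete illustration of what quantization loss looks like, here is a minimal symmetric int8 round-trip in plain Python (the values and the bare-bones scheme are invented for illustration; real LLM quantizers are per-channel and far more careful):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats in [-m, m] to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.8113, -0.253, 0.0041, -0.997, 0.42]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step, so the coarser
# the scale (i.e. the wider the weight range), the more is lost.
worst = max(abs(w - r) for w, r in zip(weights, restored))
assert worst <= scale / 2 + 1e-12
```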
fastball•4h ago
LLMs are inherently probabilistic. Things like ReLU throw out a ton of data deliberately.
alyxya•4h ago
No that isn’t throwing out data. Activation functions perform a nonlinear transformation to increase the expressivity of a function. If you did two matrix multiplications without ReLU in between, your function contains less information than with a ReLU in between.
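The expressivity point can be checked directly: without a nonlinearity, two matrix multiplications collapse into a single one, so the two-layer "network" computes nothing a single matrix couldn't. A small sketch in plain Python, with toy 2x2 weights chosen arbitrarily:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

def relu(v):
    return [max(0.0, x) for x in v]

W1 = [[1.0, -2.0], [0.5, 1.0]]
W2 = [[2.0, 0.0], [-1.0, 3.0]]
x = [1.0, -1.0]

# Without a nonlinearity, the two layers collapse to the single matrix W2 @ W1:
collapsed = matvec(matmul(W2, W1), x)
stacked = matvec(W2, matvec(W1, x))
assert all(abs(a - b) < 1e-9 for a, b in zip(collapsed, stacked))

# With ReLU in between, the composed map is no longer that (or any) single matrix:
nonlinear = matvec(W2, relu(matvec(W1, x)))
assert nonlinear != stacked
```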
fastball•19m ago
How are you calculating "less information"?
trevor_extropic•7h ago
If you want to understand exactly what we are building, read our blogs and then our paper

https://extropic.ai/writing https://arxiv.org/abs/2510.23972

throwaway_7274•5h ago
I was hoping the preprint would explain the mysterious ancient runes on the device chassis :(
ipsum2•5h ago
The answer is that they're cosplaying sci-fi movies, in attempt to woo investors.
simonerlic•5h ago
What, is a bit of whimsy illegal?
rcxdude•2h ago
A product of dubious niche value that has this much effort put into window dressing is suspicious.
dmos62•4h ago
Why are you replying under every other comment here in this low effort, negative manner?
moralestapia•7h ago
Nice!

This is "quantum" computing, btw.

trevormccrt•4h ago
Actually it's not. Here's some stuff to read to get a clearer picture! https://extropic.ai/writing
moralestapia•2h ago
It strictly is not, as no quantum phenomena are being measured (hence why I used the quotes); but if all goes well with Extropic, you'll most likely end up doing quantum again.
fidotron•7h ago
Is this the new term for analog VLSI?

Or if we call it analog is it too obvious what the problems are going to be?

alyxya•7h ago
This seems to be the page that describes the low level details of what the hardware aims to do. https://extropic.ai/writing/tsu-101-an-entirely-new-type-of-...

To me, the biggest limitation is that you’d need an entirely new stack to support a new paradigm. It doesn’t seem compatible with using existing pretrained models. There’s plenty of ways to have much more efficient paradigms of computation, but it’ll be a long while before any are mature enough to show substantial value.

sashank_1509•6h ago
The cool thing about Silicon Valley is serious people try stuff that may seem wild and unlikely and in the off chance it works, entire humanity benefits. This looks like Atomic Semi, Joby Aviation, maybe even OpenAI in its early days.

The bad thing about Silicon Valley is that charlatans abuse this openness and friendly spirit, and swindle investors of millions with pipe dreams and worthless technology. I think the second is inevitable as Silicon Valley becomes more famous and higher-status without a strong gatekeeping mechanism, which is also anathema to its open ethos. Unfortunately this company is firmly in the second category: a performative startup, “changing the world” to satisfy the neurosis of its founders, who desperately want to be seen as taking risks to change the world. In reality it will change nothing and end up in the dustbin of history. I hope he enjoys his 15 minutes of fame.

nfw2•6h ago
What makes you so sure that extropic is the second and not the first?
sashank_1509•6h ago
Fundamentally, gut feel from following the founder on Twitter. But if I had to explain: I don’t understand the point of speeding up or getting true RNG; even for diffusion models this is not a big bottleneck, so it sounds more like a buzzword than actual practical technology.
jazzyjackson•6h ago
Having a TRNG is easy: you just reverse-bias a zener diode, or use any number of other strategies that rely on physics for noise. Hype is a strategy they're clearly taking, and people in this thread are very dismissive (and I get why; Extropic has been vague-posting for years and makes it sound like vaporware), but what does everyone think they're actually doing with the money? It's not a better dice roller...
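On the physical-noise point: raw diode noise is typically biased, so some whitening step sits between the diode and usable bits. One classic option is von Neumann debiasing, sketched here with a simulated 80%-biased source standing in for raw diode samples:

```python
import random

def von_neumann_extract(raw_bits):
    """Von Neumann debiasing: read bits in pairs; 01 -> 0, 10 -> 1, discard 00/11.

    If the raw bits are independent, the output is exactly unbiased even
    when each raw bit is heavily biased, at the cost of throwing most
    input bits away.
    """
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulate a heavily biased noise source (80% ones) as a stand-in for raw
# physical samples, then debias it.
rng = random.Random(1)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
clean = von_neumann_extract(raw)
ones = sum(clean) / len(clean)
assert 0.45 < ones < 0.55  # debiased stream is close to 50/50
```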
nfw2•5h ago
What is it if not a better dice roller though? Isn't that what they are claiming it is? And also that this better dice rolling is very important (and I admittedly am not someone who can evaluate this)
Void_•6h ago
This gives me Devs vibe (2020 TV Series) - https://www.indiewire.com/awards/industry/devs-cinematograph...
tcdent•6h ago
Such an underrated TV show.
arjie•5h ago
Has anyone received the dev board? What did you do with it? Curious what this can do.
frozenseven•4h ago
It's finally here! Extropic has been working on this since 2022. I'm really excited to see how this performs in the real world.
lordofgibbons•4h ago
How should we think about how much effective compute is being done with these devices compared to classical (GPU) computing? Obviously FLOPs doesn't make sense, so what does?
hereme888•3h ago
Looks like an artifact from Assassin's Creed or Halo.
motohagiography•3h ago
i've followed them for a while and as just a general technologist and not a scientist, i have a probably wrong idea of what they do, but perhaps correcting it will let others write about it more accurately.

my handwavy analogy interpretation was they were in-effect building an analog computer for AI model training, using some ideas that originated in quantum computing. their insight is that since model training is itself probabilistic, you don't need discrete binary computation to do it, you just need something that implements the sigmoid function for training a NN.

they had some physics to show they could cause a bunch of atoms to polarize (conceptually) instantaneously using the thermodynamic properties of a material, and the result would be mostly deterministic over large samples. the result is what they are calling a "probabilistic bit" or pbit, which is an inferred state over a probability distribution, and where the inference is incorrect, they just "get it in post," because the speed of the training data through a network of these pbits is so much more efficient that it's faster to just augment and correct the result in the model afterwards than to use classical clock cycles to directly compute it.
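Taking the comment's sigmoid framing at face value (the name and numbers here are illustrative, not Extropic's actual design), a "pbit" can be sketched as a bit that resolves to 1 with probability sigmoid(input), and whose average over many draws is nearly deterministic:

```python
import math
import random

def pbit(activation, rng):
    """A 'probabilistic bit' as the comment frames it: resolves to 1 with
    probability sigmoid(activation). In hardware the draw would come from
    thermal noise; here a PRNG stands in."""
    p = 1.0 / (1.0 + math.exp(-activation))
    return 1 if rng.random() < p else 0

# Averaged over many draws, the pbit recovers the sigmoid value, which is
# the "mostly deterministic over large samples" part of the description.
rng = random.Random(0)
mean = sum(pbit(1.5, rng) for _ in range(50_000)) / 50_000
assert abs(mean - 1.0 / (1.0 + math.exp(-1.5))) < 0.02
```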

jabedude•3h ago
Question for the experts in the field: why does this need to be a CPU and not a dongle you plug into a server and query?
antics•2h ago
I like this but based on what I am seeing here and the THRML readme, I would describe this as "an ML stack that is fully prepared for the Bayesian revolution of 2003-2015." A kind of AI equivalent of, like, post-9/11 airport security. I mean this in a value-neutral way, as personally I think that era of models was very beautiful.

The core idea of THRML, as I understand it, is to present a nice programming interface to hardware where coin-flipping is vanishingly cheap. This is moderately useful to deep learning, but the artisanally hand-crafted models of the mid-2000s did essentially nothing at all except flip coins, and it would have been enormously helpful to have something like this in the wild at that time.

The core "trick" of the era was to make certain very useful but intractable distributions built on something called "infinitely exchangeable sequences" merely almost intractable. The trick, roughly, was that conditioning on some measure space makes those sequences plain-old iid, which (via a small amount of graduate-level math) implies that a collection of "outcomes" can be thought of as a random sample of the underlying distribution. And that, in turn, meant that the model training regimens of the time did a lot of sampling, or coin-flipping, as we have said here.

Peruse the THRML README[1] and you'll see the who's who of techniques and modeling procedures of the time: "Gibbs sampling", "probabilistic graphical models", "energy-based models", and so on. All of these are weaponized coin flipping.

I imagine the terminus of this school of thought is basically a natively-probabilistic programming environment. Garden-variety deterministic computing is essentially probabilistic computing where every statement returns a value with probability 1. So in that sense, probabilistic computing is a full generalization of deterministic computing, since an `if` might return a value with some probability other than 1. There was an entire genre of languages like this, e.g., Church. And now, 22 years later, we have our own hardware for it. (Incidentally this line of inquiry is also how we know that conditional joint distributions are Turing complete.)
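The "an `if` that returns a value with probability other than 1" idea can be sketched with a Church-style `flip` primitive (names and probabilities here are invented for illustration, not THRML's API):

```python
import random

rng = random.Random(0)

def flip(p=0.5):
    """The basic primitive of Church-style languages: a biased coin."""
    return rng.random() < p

def noisy_or():
    """A tiny probabilistic program: each branch of the `if` is taken with
    some probability, so the program denotes a distribution, not a value."""
    if flip(0.3):
        return 1
    elif flip(0.5):
        return 1
    return 0

# Running the program many times approximates the distribution it denotes:
# P(1) = 0.3 + 0.7 * 0.5 = 0.65.
freq = sum(noisy_or() for _ in range(100_000)) / 100_000
assert abs(freq - 0.65) < 0.02
```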

Tragically, I think, this may have arrived too late. This is not nearly as helpful in the world of deep learning, with its large, ugly, and relatively sample-free models. Everyone hates to hear that you're cheering from the sidelines, but this time I really am. I think it's a great idea, just too late.

[1]: https://github.com/extropic-ai/thrml/blob/7f40e5cbc460a4e2e9...

Uv is the best thing to happen to the Python ecosystem in a decade

https://emily.space/posts/251023-uv
1200•todsacerdoti•8h ago•687 comments

China has added forest the size of Texas since 1990

https://e360.yale.edu/digest/china-new-forest-report
384•Brajeshwar•1d ago•236 comments

Tell HN: Azure outage

662•tartieret•10h ago•633 comments

IRCd service written in awk

https://example.fi/blog/ircd.html
14•pabs3•29m ago•2 comments

Minecraft removing obfuscation in Java Edition

https://www.minecraft.net/en-us/article/removing-obfuscation-in-java-edition
575•SteveHawk27•10h ago•197 comments

Raspberry Pi Pico Bit-Bangs 100 Mbit/s Ethernet

https://www.elektormagazine.com/news/rp2350-bit-bangs-100-mbit-ethernet
70•chaosprint•3h ago•14 comments

OS/2 Warp, PowerPC Edition

https://www.os2museum.com/wp/os2-history/os2-warp-powerpc-edition/
29•TMWNN•3h ago•11 comments

Dithering – Part 1

https://visualrambling.space/dithering-part-1/
223•Bogdanp•8h ago•48 comments

AWS to bare metal two years later: Answering your questions about leaving AWS

https://oneuptime.com/blog/post/2025-10-29-aws-to-bare-metal-two-years-later/view
626•ndhandala•15h ago•430 comments

How the U.S. National Science Foundation Enabled Software-Defined Networking

https://cacm.acm.org/federal-funding-of-academic-research/how-the-u-s-national-science-foundation...
57•zdw•5h ago•15 comments

AOL to be sold to Bending Spoons for $1.5B

https://www.axios.com/2025/10/29/aol-bending-spoons-deal
192•jmsflknr•10h ago•170 comments

Kafka is Fast – I'll use Postgres

https://topicpartition.io/blog/postgres-pubsub-queue-benchmarks
311•enether•12h ago•248 comments

A century of reforestation helped keep the eastern US cool

https://news.agu.org/press-release/a-century-of-reforestation-helped-keep-the-eastern-us-cool/
89•softwaredoug•3h ago•10 comments

Tailscale Peer Relays

https://tailscale.com/blog/peer-relays-beta
258•seemaze•10h ago•71 comments

Crunchyroll is destroying its subtitles

https://daiz.moe/crunchyroll-is-destroying-its-subtitles-for-no-good-reason/
174•Daiz•3h ago•58 comments

OpenAI’s promise to stay in California helped clear the path for its IPO

https://www.wsj.com/tech/ai/openais-promise-to-stay-in-california-helped-clear-the-path-for-its-i...
155•badprobe•9h ago•210 comments

Board: New game console recognizes physical pieces, with an open SDK

https://board.fun/
147•nicoles•23h ago•56 comments

The Internet runs on free and open source software and so does the DNS

https://www.icann.org/en/blogs/details/the-internet-runs-on-free-and-open-source-softwareand-so-d...
111•ChrisArchitect•8h ago•7 comments

GLP-1 therapeutics: Their emerging role in alcohol and substance use disorders

https://academic.oup.com/jes/article/9/11/bvaf141/8277723?login=false
156•PaulHoule•2d ago•67 comments

How to Obsessively Tune WezTerm

https://rashil2000.me/blogs/tune-wezterm
79•todsacerdoti•7h ago•47 comments

Keep Android Open

http://keepandroidopen.org/
2342•LorenDB•22h ago•748 comments

Meta and TikTok are obstructing researchers' access to data, EU commission rules

https://www.science.org/content/article/meta-and-tiktok-are-obstructing-researchers-access-data-e...
147•anigbrowl•4h ago•67 comments

Responses from LLMs are not facts

https://stopcitingai.com/
148•xd1936•5h ago•100 comments

Using Atomic State to Improve React Performance in Deeply Nested Component Trees

https://runharbor.com/blog/2025-10-26-improving-deeply-nested-react-render-performance-with-jotai...
4•18nleung•3d ago•0 comments

More than DNS: Learnings from the 14 hour AWS outage

https://thundergolfer.com/blog/aws-us-east-1-outage-oct20
79•birdculture•2d ago•25 comments

Upwave (YC S12) is hiring software engineers

https://www.upwave.com/job/8228849002/
1•ckelly•10h ago

Composer: Building a fast frontier model with RL

https://cursor.com/blog/composer
179•leerob•10h ago•133 comments

How blocks are chained in a blockchain

https://www.johndcook.com/blog/2025/10/27/blockchain/
50•tapanjk•2d ago•21 comments

Tailscale Services

https://tailscale.com/blog/services-beta
126•xd1936•1d ago•28 comments