frontpage.

The struggle of resizing windows on macOS Tahoe

https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/
526•happosai•2h ago•250 comments

2026 is the year of self-hosting

https://fulghum.io/self-hosting
109•websku•2h ago•70 comments

This game is a single 13 KiB file that runs on Windows, Linux and in the Browser

https://iczelia.net/posts/snake-polyglot/
39•snoofydude•1h ago•13 comments

I'd tell you a UDP joke…

https://www.codepuns.com/post/805294580859879424/i-would-tell-you-a-udp-joke-but-you-might-not-get
47•redmattred•1h ago•16 comments

iCloud Photos Downloader

https://github.com/icloud-photos-downloader/icloud_photos_downloader
249•reconnecting•4h ago•132 comments

I Cannot SSH into My Server Anymore (and That's Fine)

https://soap.coffee/~lthms/posts/i-cannot-ssh-into-my-server-anymore.html
42•TheWiggles•4d ago•5 comments

Sampling at negative temperature

https://cavendishlabs.org/blog/negative-temperature/
95•ag8•3h ago•35 comments

FUSE is All You Need – Giving agents access to anything via filesystems

https://jakobemmerling.de/posts/fuse-is-all-you-need/
35•jakobem•2h ago•14 comments

I'm making a game engine based on dynamic signed distance fields (SDFs) [video]

https://www.youtube.com/watch?v=il-TXbn5iMA
137•imagiro•3d ago•17 comments

A 2026 look at three bio-ML opinions I had in 2024

https://www.owlposting.com/p/a-2026-look-at-three-bio-ml-opinions
14•abhishaike•2h ago•0 comments

Elo – A data expression language which compiles to JavaScript, Ruby, and SQL

https://elo-lang.org/
29•ravenical•4d ago•4 comments

Don't fall into the anti-AI hype

https://antirez.com/news/158
485•todsacerdoti•13h ago•649 comments

The Next Two Years of Software Engineering

https://addyosmani.com/blog/next-two-years/
24•napolux•1h ago•9 comments

A set of Idiomatic prod-grade katas for experienced devs transitioning to Go

https://github.com/MedUnes/go-kata
91•medunes•4d ago•11 comments

Gentoo Linux 2025 Review

https://www.gentoo.org/news/2026/01/05/new-year.html
282•akhuettel•12h ago•136 comments

Ask HN: What are you working on? (January 2026)

128•david927•6h ago•438 comments

Perfectly Replicating Coca Cola [video]

https://www.youtube.com/watch?v=TDkH3EbWTYc
107•HansVanEijsden•3d ago•50 comments

iMessage-kit is an iMessage SDK for macOS

https://github.com/photon-hq/imessage-kit
12•rsync•1h ago•2 comments

Erich von Däniken has died

https://daniken.com/en/startseite-english/
16•Kaibeezy•4h ago•44 comments

BYD's cheapest electric cars to have Lidar self-driving tech

https://thedriven.io/2026/01/11/byds-cheapest-electric-cars-to-have-lidar-self-driving-tech/
77•senti_sentient•2h ago•84 comments

Poison Fountain

https://rnsaffn.com/poison3/
154•atomic128•6h ago•100 comments

Anthropic: Developing a Claude Code competitor using Claude Code is banned

https://twitter.com/SIGKITTEN/status/2009697031422652461
195•behnamoh•4h ago•128 comments

"Scholars Will Call It Nonsense": The Structure of von Däniken's Argument (1987)

https://www.penn.museum/sites/expedition/scholars-will-call-it-nonsense/
46•Kaibeezy•4h ago•5 comments

Insights into Claude Opus 4.5 from Pokémon

https://www.lesswrong.com/posts/u6Lacc7wx4yYkBQ3r/insights-into-claude-opus-4-5-from-pokemon
5•surprisetalk•5d ago•0 comments

"Food JPEGs" in Super Smash Bros. & Kirby Air Riders

https://sethmlarson.dev/food-jpegs-in-super-smash-bros-and-kirby-air-riders
250•SethMLarson•5d ago•60 comments

Show HN: Engineering Schizophrenia: Trusting Yourself Through Byzantine Faults

23•rescrv•1h ago•6 comments

Meta announces nuclear energy projects

https://about.fb.com/news/2026/01/meta-nuclear-energy-projects-power-american-ai-leadership/
221•ChrisArchitect•4h ago•240 comments

I dumped Windows 11 for Linux, and you should too

https://www.notebookcheck.net/I-dumped-Windows-11-for-Linux-and-you-should-too.1190961.0.html
688•smurda•12h ago•676 comments

C++ std::move doesn't move anything: A deep dive into Value Categories

https://0xghost.dev/blog/std-move-deep-dive/
223•signa11•2d ago•180 comments

Quake 1 Single-Player Map Design Theories (2001)

https://www.quaddicted.com/webarchive//teamshambler.planetquake.gamespy.com/theories1.html
30•Lammy•18h ago•1 comment

Sampling at negative temperature

https://cavendishlabs.org/blog/negative-temperature/
95•ag8•3h ago

Comments

drdeca•3h ago
Hm, why T=-0.0001 instead of T=-1?

Also, I wonder: if you sampled a lot of text at temperature -1, then trained a new model on that text, and then sampled the resulting model at T=-1, would you get anything meaningful?

pelario•2h ago
From the article:

"As temperature approaches zero from the negative side, the model output will again be deterministic — but this time, the least likely tokens will be output."

I understand this as: a negative temperature far from zero is also quite random (just with a distribution that will produce unlikely tokens).

-_-•40m ago
Yep! Very large negative temperatures and very large positive temperatures have essentially the same distribution. This is clearer if you consider thermodynamic beta, where T = ±∞ corresponds to β = 0.
the__alchemist•3h ago
This is so cool! I just learned about this last week. For reference, I do molecular dynamics (my own engine, in Rust), and measuring temperature is an important part of the simulation (so you can nudge it toward a target temperature, for example). An important component of this calculation is the degrees of freedom of the system. Calculating this depends on your model: for example, are you representing atoms that can each move on their own? Rigid molecules of multiple atoms that can rotate? Are you removing center-of-mass velocity from the system?

This DOF component is also why the general, measurable concept of temperature can apply both to our real systems and to simple point-atom models (or coarser ones). It is, not surprisingly, at the heart of why negative temperature exists!

dnautics•2h ago
Negative temperature in this case is a sampling thing. When you sample from a table of tokens, the equation for the probability of token i is p_i = exp(logit_i / T) / sum_j exp(logit_j / T).

Not really related to molecular dynamics temperature except superficially in terms of phenomenology (higher temperature crosses activation barriers in the joint probability landscape). Negative temperature makes no sense in MD.
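
A minimal sketch of that formula (with made-up logits, not tied to any particular engine), showing how the sign of T decides which tokens get the weight:

```python
import numpy as np

def temperature_softmax(logits, T):
    # p_i = exp(logit_i / T) / sum_j exp(logit_j / T), for any nonzero T
    z = np.asarray(logits, dtype=np.float64) / T
    z -= z.max()                   # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([5.0, 2.0, 0.5, -1.0])   # hypothetical token logits
for T in (0.7, 100.0, -100.0, -0.7):
    print(T, temperature_softmax(logits, T).round(3))

# Small positive T concentrates on the highest logit, large |T| of either sign
# is close to uniform, and small negative T concentrates on the lowest logit.
```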

the__alchemist•2h ago
Yea.... after a reread, I think this article may be getting at something else. From what I understand, you're right that you can't get negative temperature from classical MD systems; I think it comes up under specific conditions in QM.
dgoldstein0•1h ago
Negative temperature happens in physical systems when there's a constrained state space and the energy in the system comes near its maximum, since adding energy then reduces the number of possible states the molecules can be in. IIRC the math works because temperature is the inverse of the derivative of entropy as a function of energy, so you need a system where entropy (the number of possible states) decreases with more energy.

It's pretty rare to have such a system, though.
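
In symbols, the relation being recalled here (a standard definition, restated for clarity):

```latex
\frac{1}{T} = \frac{\partial S}{\partial E}
\qquad\Longrightarrow\qquad
T < 0 \iff \frac{\partial S}{\partial E} < 0
```

That is, temperature is negative exactly when adding energy decreases entropy.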

amluto•1h ago
You generally don’t get negative temperature in any system at equilibrium, but you can prepare classical and quantum systems at negative temperature.

Classical: put 100 balls in a box and shake the box continuously. The balls will be distributed through the box with more balls toward the bottom than the top, and the distribution will have some temperature. Now magically freeze all the balls (keep their velocities but pause time for a bit) and turn the box upside down. When you resume the system, the temperature will be (briefly) negative.

Quantum: take a bunch of atoms with two electronic states each. Put 75% in the higher energy state and 25% in the lower energy state. Now the temperature is negative. Most lasers actually work this way, and the classic way to make them is to have more than two states and to carefully excite atoms via the third state. The math is surprisingly straightforward.

There’s a nuclear analogue. If you could manage to prepare a sample of something like Technetium-99 plus Technetium-99m with more of the (higher energy) 99m than the (lower energy) 99, then the effective temperature of the nuclear state would be negative. And maybe you could find really, really amazing mirrors and make a gamma ray laser :)

zozbot234•2h ago
In a way, negative temperature is higher than the highest positive temperature. A very high positive temperature just gives you a uniform distribution over all possible tokens, and a very negative temperature gives the same behavior. As the temperature approaches zero from below, you place more and more weight on unlikely tokens.

This makes more intuitive sense if inverse temperature is the physically relevant quantity, since you then have a smooth change as you cross from positive inverse temperature into negative, with zero standing for a uniform distribution and high positive (resp. negative) inverse temperatures just placing more and more weight on likely (resp. unlikely) tokens.
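
A small sketch of that inverse-temperature view (my own illustration, again with made-up logits): parameterizing by beta = 1/T turns the "crossing" at infinite |T| into a smooth pass through beta = 0.

```python
import numpy as np

def beta_softmax(logits, beta):
    # softmax of beta * logits, where beta = 1/T is the inverse temperature
    z = beta * np.asarray(logits, dtype=np.float64)
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

logits = np.array([5.0, 2.0, 0.5, -1.0])
for beta in (2.0, 0.5, 0.0, -0.5, -2.0):
    print(beta, beta_softmax(logits, beta).round(3))

# beta > 0 favors likely tokens, beta = 0 is exactly uniform, and beta < 0
# favors unlikely tokens, with no discontinuity as beta passes through zero.
```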

ggggffggggg•1h ago
This was super clear and interesting, thanks!
pama•2h ago
The simplest physical model that can exhibit negative temperatures is a spin lattice in a state that has more energy than a state at infinite temperature. Adding more energy to such a system reduces the entropy.
everlier•2h ago
Хронологија
Der_Einzige•2h ago
Min_p author here: I’m convinced that the whole field critically misunderstands temperature (i.e., limiting temperature to at most 2 is very harmful for diverse generation). Articles like this are excellent and very cool.

Hacking your LLM inference engine to enable cool sampling tricks is the definition of AI research/engineering. We need more of this and less prompt grifting.

bjourne•2h ago
Correct me if I'm wrong, but the problem is that it is almost impossible to evaluate sampling methods. You can't just look at perplexity and conclude that A is better than B. So you need large-scale expensive human evaluations. Even if you have those it is difficult to extrapolate results since what sampling method works best depends on the dataset(s).
wolttam•1h ago
Okay, something just tweaked in my brain. Do higher temperatures essentially unlock additional paths for a model to go down when solving a particular problem? Therefore, for some particularly tricky problems, you could perform many evaluations at a high temperature in hopes that the model happens to take the correct approach in one of those evaluations.

Edit: What seems to break this is that high temperature acts /continuously/ to make the model's output less stable. It seems like it could be useful to use a high temperature until it's evident the model has started a new approach, and then start sampling at a lower temperature from there.

wongarsu•4m ago
Decaying temperature might be a good approach. Generate the first token at a high temperature (like 20), then for each next token multiply temperature by 0.9 (or some other scaling factor) until you reach your steady-state target temperature
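
A quick sketch of that schedule (the generator and numbers are mine, just to make the idea concrete):

```python
def temperature_schedule(t_start=20.0, t_target=0.8, decay=0.9):
    # yield a geometrically decaying temperature, floored at the target
    T = t_start
    while True:
        yield T
        T = max(t_target, T * decay)

schedule = temperature_schedule()
temps = [next(schedule) for _ in range(40)]
print(temps[:4], "...", round(temps[-1], 2))   # 20.0, 18.0, 16.2, 14.58, ... -> 0.8
```
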
a-dub•2h ago
Flipping the signs on the logits would seem to give the "least likely", but I think in practice you're more likely to be just operating in noise. I would expect that tons of low-probability logits would have tiny bits of energy from numerical noise, and the smallest one (i.e., the one that gets picked when the sign is flipped) would basically be noise (i.e., not some meaningful opposite of the high-probability logits where signal actually exists)...
atemerev•2h ago
Хронологија is "chronology" in Serbian
fph•2h ago
And I believe "entferne" is "cancel" in German. Both seem to be common words that appear in menus and UIs. Maybe they occur in copy-pasted text often enough that the embedding thinks they mean nothing and should be skipped?
Tomte•2h ago
It's "remove". A common word, but many words are common and not on the list. LessWrong also lists "prüf" (check), another common word.
bjourne•2h ago
Reminds me a bit of unlikelihood training, which was proposed a few years ago: https://arxiv.org/abs/1908.04319 Afaik, it never became popular. Reinforcement learning and huge datasets mitigate the issues with likelihood training.
wolfi1•2h ago
Negative temperature closely relates to population inversion in physics, one of the key concepts behind lasers. Perhaps we are getting closer to laser-LLMs.
swyx•2h ago
interesting exercise and well written. my follow-on questions/work would be:

1a. temperature=100000 is interesting too. obviously "ideal" temperature lies somewhere between 0 and 100000. has anyone ablated temperature vs intelligence? surely i'm not the first person to have this idea. commonly people try to set temp=0 to get "deterministic" or "most factual" output but we all know that is just Skinner pigeon pecking.

1b. can we use "avg temperature" as a measure in the way that we use perplexity as a measure? if we see temperature as inverted perplexity with some randomness thrown in, are they basically the same thing inverted? or subtly different?

1c. what's the "avg temperature" of most human communication? whats the "avg temperature" of a subset of "good writers"? whats the "avg temperature" of a subset of "smart writers"?

2a. rerun this negative exercise with constrained vocab to english

2b. RL a model to dynamically adjust its own temperature when it is feeling 1) less confident 2) in brainstorm mode

2c. dynamically inject negative temperature every X tokens in a decode, then judge/verify the outcome, to create high variance synthetic data?

it's hard for me to follow the train of thought on 2 because negative temp is essentially not that different from ultrahigh temp in practice.

embedding-shape•1h ago
> commonly people try to set temp=0 to get "deterministic" or "most factual" output but we all know that is just Skinner pigeon pecking.

Hmm? Given the same runtime, the same weights, and with the model actually giving deterministic output with temp=0, are you saying this isn't actually deterministic? Most FOSS/downloadable models tend to work as expected with temp=0 in my experience. Obviously that won't give you "most factual" output, because that's something else entirely, but with most models it should give you deterministic output.

remexre•56m ago
There's usually an if(temp == 0) to change sampling methods to "highest probability" -- if you remove that conditional but otherwise keep the same math, that's not deterministic either.
embedding-shape•51m ago
In for example llama.cpp? Specific to the architecture or in general? Could you point out where this is happening? Not that I don't believe you, but I haven't seen that myself, and would appreciate learning deeper how it works.
TomatoCo•51m ago
I'd assume that's just an optimization? Why bother sorting the entire list if you're just gonna pick the top token? Linear time versus whatever your sort time is.

Having said that, of course it's only as deterministic as the hardware itself is.

Majromax•50m ago
If you remove the conditional and keep the same math, you divide by zero and get NaNs. In the limit as temperature goes to zero, you do in fact get maximum-likelihood sampling.
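
A sketch of the kind of special case being described (illustrative only, not actual llama.cpp code): T == 0 has to be handled as argmax, because the softmax formula divides by T.

```python
import numpy as np

def sample_token(logits, T, rng=np.random.default_rng()):
    logits = np.asarray(logits, dtype=np.float64)
    if T == 0.0:
        return int(np.argmax(logits))         # greedy: the T -> 0+ limit
    z = logits / T
    z -= z.max()                              # avoid overflow in exp
    p = np.exp(z)
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))  # ordinary temperature sampling
```
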
swyx•29m ago
"What might be more surprising is that even when we adjust the temperature down to 0 [footnote: This means that the LLM always chooses the highest probability token, which is called greedy sampling.] (thus making the sampling theoretically deterministic), LLM APIs are still not deterministic in practice (see past discussions here, here, or here)"

https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

wongarsu•13m ago
Also from the article:

"Note that this is “run-to-run deterministic.” If you run the script multiple times, it will deterministically return the same result. However, when a non-batch-invariant kernel is used as part of a larger inference system, the system can become nondeterministic. When you make a query to an inference endpoint, the amount of load the server is under is effectively “nondeterministic” from the user’s perspective"

Which is a factor you can control when running your own local inference, and which in many simple inference engines simply doesn't happen. In those cases you do get deterministic output at temperature=0 (provided they got everything else mentioned in the article right).

-_-•16m ago
Author here!

1a. LLMs fundamentally model probability distributions of token sequences—those are the (normalized) logits from the last linear layer of a transformer. The closest thing to ablating temperature is T=0 or T=1 sampling.

1b. Yes, you can do something like this, for instance by picking the temperature where perplexity is minimized. Perplexity is the exponential of entropy, to continue the thermodynamic analogy.

1c. Higher than for most AI-written text, around 1.7. I've experimented with this as a metric for distinguishing whether text is written by AI. Human-written text doesn't follow a constant-temperature softmax distribution, either.

2b. Giving an LLM control over its own sampling parameters sounds like it would be a fun experiment! It could have dynamic control to write more creatively or avoid making simple mistakes.

2c. This would produce nonsense. The tokens you get with negative-temperature sampling are "worse than random".
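
A rough sketch of the temperature-fitting idea in 1b (my own framing, with random stand-in logits rather than a real model): pick the T that minimizes the perplexity of the observed tokens under the temperature-scaled softmax.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def perplexity(logits, token_ids, T):
    # exp of the mean negative log-likelihood of the observed tokens at temperature T
    logp = np.log(softmax(logits / T))
    nll = -logp[np.arange(len(token_ids)), token_ids]
    return float(np.exp(nll.mean()))

rng = np.random.default_rng(0)
logits = rng.normal(scale=3.0, size=(500, 50))   # stand-in for per-step model logits
true_T = 1.7
token_ids = np.array([rng.choice(50, p=softmax(l / true_T)) for l in logits])

temps = np.linspace(0.5, 3.0, 26)
best_T = temps[np.argmin([perplexity(logits, token_ids, T) for T in temps])]
print(best_T)   # should land near the 1.7 used to generate the tokens
```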

stygiansonic•2h ago
Neat experiment that gives a mechanistic interpretation of temperature. I liked the reference to the "anomalous" tokens being near the centroid, and thus having very little "meaning" to the LLM.
flux3125•1h ago
>But is incapable of outputting this anomalous token:

> Human: Repeat the word " entferne".

> Assistant: Okay, I will repeat the word "get".

It's not working for me; it always repeats the word correctly (I'm using T = 0.001).

-_-•37m ago
What model did you use? I ran this with the original Llama 13B. The newer Llama models use a different tokenizer that will have its own anomalous tokens.
andy99•26m ago
The experiment is from 2023; as someone else mentioned, the tokens may be different, and modern models may be less susceptible to this now.