frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
97•valyala•4h ago•16 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
43•zdw•3d ago•8 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•19 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
55•surprisetalk•3h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
97•mellosouls•6h ago•175 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
100•vinhnx•7h ago•13 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•1d ago•258 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
138•valyala•4h ago•109 comments

First Proof

https://arxiv.org/abs/2602.05192
68•samasblack•6h ago•52 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
7•mbitsnbites•3d ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1093•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•6h ago•10 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
235•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
519•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•9h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
31•momciloo•4h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
259•alainrk•8h ago•425 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
186•1vuio0pswjnm7•10h ago•266 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
48•rbanffy•4d ago•9 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
615•nar001•8h ago•272 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
36•marklit•5d ago•6 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
348•ColinWright•3h ago•414 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
99•speckx•4d ago•115 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
33•sandGorgon•2d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
288•isitcontent•1d ago•38 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Prompting by Activation Maximization

https://joecooper.me/blog/activation/
14•thatjoeoverthr•5mo ago

Comments

trehans•5mo ago
I wonder what the prompt would look like as a sentence. Maybe activation maximization could be used to decipher it, e.g. by seeing which sentence of length N maximizes similarity to the prompt when fed through a tokenizer.
Filligree•5mo ago
I think we were all thinking the same thing.

Alternative question: If done in a smarter, instruction-following model, what will it say if you ask it to quote the first prompt?

thatjoeoverthr•5mo ago
I'm not prepared to run a larger model than 3.2-Instruct-1B, but I gave the following instructions:

"Given a special text, please interpret its meaning in plain English."

And I included a primer tuned on 4096 samples for 3 epochs, achieving 93% on a small test set. It wrote:

"`Sunnyday` is a type of fruit, and the text `Sunnyday` is a type of fruit. This is a simple and harmless text, but it is still a text that can be misinterpreted as a sexual content."

In my experience, all Llama models are highly neurotic and prone to detecting sexual transgression, like Goody2 (https://www.goody2.ai). So this interpretation does not surprise me very much :)

thatjoeoverthr•5mo ago
I tried this with Instruct-3B now, and got the following text.

"The company strongly advises against engaging in any activities that may be harmful to the environment.1`

Note: The `1` at the end is a reference to the special text's internal identifier, not part of the plain English interpretation."

thatjoeoverthr•5mo ago
You can definitely "snap" it to the nearest neighbour according to the vocabulary matrix, but this comes with loss, so the "snapped" token won't behave the same. Not sure how it would score on benchmarks. I'm thinking about how to approach this, and I found a relevant paper (https://arxiv.org/pdf/2302.03668); I'm hoping I can tie this back into prefix tokens.
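
A minimal sketch of that snapping step, not from the thread: gpt2 and the random soft_prompt below are placeholder assumptions, and the idea is just to map each learned prefix vector to its nearest row of the model's input embedding table by cosine similarity.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in; the thread uses Llama 3.2 Instruct models
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    emb = model.get_input_embeddings().weight.detach()   # (vocab_size, hidden_dim)
    soft_prompt = torch.randn(8, emb.shape[1])           # stand-in for the tuned prefix vectors

    # Cosine similarity of each soft-prompt vector against every vocabulary embedding,
    # then "snap" each position to its nearest token id.
    sims = F.normalize(soft_prompt, dim=-1) @ F.normalize(emb, dim=-1).T
    nearest_ids = sims.argmax(dim=-1)

    print(tok.decode(nearest_ids))   # a readable-ish, but lossy, approximation of the prefix
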
nneonneo•5mo ago
If you wanted to get a readable prompt, I wonder if you could follow the GCG trick used by jailbreak maximizers (e.g. https://arxiv.org/pdf/2307.15043)?

Sure, you're probably going to wind up with absolute garbage (one of their prompts starts with "== interface Manuel WITH steps instead sentences :)ish?") but it might be very funny to read...
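
For reference, here is a toy, single-step sketch of the GCG move from that paper: a gradient through a one-hot relaxation of the suffix nominates candidate token swaps, and the swap with the lowest actual loss is kept. gpt2 and the prompt/suffix/target strings below are placeholders, not the paper's setup.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; the paper attacks aligned chat models
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    emb_matrix = model.get_input_embeddings().weight          # (vocab_size, d)

    prompt_ids = tok("Tell me a story about", return_tensors="pt").input_ids[0]
    suffix_ids = tok(" ! ! ! ! !", return_tensors="pt").input_ids[0]          # the part GCG edits
    target_ids = tok(" Once upon a time", return_tensors="pt").input_ids[0]   # desired continuation

    def target_loss(suffix):
        # NLL of the target continuation given prompt + suffix.
        ids = torch.cat([prompt_ids, suffix, target_ids]).unsqueeze(0)
        labels = ids.clone()
        labels[:, : len(prompt_ids) + len(suffix)] = -100     # score only the target tokens
        with torch.no_grad():
            return model(input_ids=ids, labels=labels).loss

    # Gradient w.r.t. a one-hot relaxation of the suffix suggests which swaps to try.
    one_hot = F.one_hot(suffix_ids, emb_matrix.shape[0]).float().requires_grad_(True)
    embeds = torch.cat([emb_matrix[prompt_ids],
                        one_hot @ emb_matrix,
                        emb_matrix[target_ids]]).unsqueeze(0)
    labels = torch.cat([prompt_ids, suffix_ids, target_ids]).unsqueeze(0).clone()
    labels[:, : len(prompt_ids) + len(suffix_ids)] = -100
    model(inputs_embeds=embeds, labels=labels).loss.backward()

    # One greedy step: evaluate the top-8 gradient-suggested tokens at each suffix position.
    candidates = (-one_hot.grad).topk(8, dim=-1).indices
    best_loss, best_suffix = target_loss(suffix_ids), suffix_ids
    for pos in range(len(suffix_ids)):
        for cand in candidates[pos]:
            trial = suffix_ids.clone()
            trial[pos] = cand
            loss = target_loss(trial)
            if loss < best_loss:
                best_loss, best_suffix = loss, trial
    print(repr(tok.decode(best_suffix)), best_loss.item())

Iterating this swap step is essentially the whole optimization loop.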

mattnewton•5mo ago
There has got to be a way to map the activations back to the closest token embeddings and read the resulting sentence. Could be interesting to see how much activation you lose in doing that, and it could maybe even be interesting for a "jailbreaking" attempt.
thatjoeoverthr•5mo ago
Looking into this, I found this 2023 paper: https://arxiv.org/pdf/2302.03668

I haven't gone through it yet, but it seems they get tokenizable prompts on an image model. I don't understand how you can backprop all the way to the token IDs, but I hope reading this will enlighten me, and it would be fun to combine it with prefix tuning!

kajecounterhack•5mo ago
I tried mapping back to the closest token embeddings. Here's what I got:

    global_step = 1377; phase = continuous; lr = 5.00e-03; average_loss = 0.609497
    current tokens: ' Superman' '$MESS' '.");' '(sentence' '");' '.titleLabel' ' Republican' '?-'

    global_step = 1956; phase = continuous; lr = 5.00e-03; average_loss = 0.589661
    current tokens: ' Superman' 'marginLeft' 'iers' '.sensor' '";' '_one' '677' '».'

    global_step = 2468; phase = continuous; lr = 5.00e-03; average_loss = 0.027065
    current tokens: ' cited' '*>(' ' narrative' '_toggle' 'founder' '(V' '(len' ' pione'

    global_step = 4871; phase = continuous; lr = 5.00e-03; average_loss = 0.022909
    current tokens: ' bgcolor' '*>(' ' nomin' 'ust' ' She' 'NW' '(len' ' pione'

"Republican?" was kind of interesting! But most of the strings were unintelligible.

This was for classifying sentiment on yelp review polarity.

mattnewton•5mo ago
Do the nearest tokens have a similar classification score?
DoctorOetker•5mo ago
During the prompt embedding optimization, the embeddings are allowed to take on any vector in embedding space; instead, one could use a continuous penalty against superposed tokens:

Consider one of the embedding vectors in the input tensor: nothing guarantees it is exactly on, or even close to, a specific token. Hence its similarities to the tokens form a distribution. Ideally that distribution should be one-hot (lowest entropy); the worst case is all probabilities equal (highest entropy). So just add a loss term penalizing the entropy of the quasitokens, to encourage them to take on actual token values.
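
A minimal sketch of that penalty, assuming a trainable soft_prompt tensor and access to the model's input embedding table; task_loss and lambda_entropy are placeholder names for whatever objective the prompt tuning already uses.

    import torch
    import torch.nn.functional as F

    def token_entropy_penalty(soft_prompt, emb_matrix, temperature=1.0):
        # soft_prompt: (prefix_len, d) trainable quasitoken embeddings
        # emb_matrix:  (vocab_size, d) the model's input embedding table
        # Each quasitoken induces a softmax distribution over the real vocabulary;
        # penalizing its entropy pushes it toward a one-hot match with some actual token.
        logits = soft_prompt @ emb_matrix.T / temperature          # (prefix_len, vocab_size)
        log_probs = F.log_softmax(logits, dim=-1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1)       # per-position entropy
        return entropy.mean()

    # During prompt tuning (sketch):
    # loss = task_loss + lambda_entropy * token_entropy_penalty(soft_prompt, emb_matrix)

One subtlety: with finite similarity logits the distribution is never exactly one-hot, so the temperature (or the penalty weight) would likely need to be annealed over training to actually collapse each quasitoken onto a single vocabulary entry.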