
I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
45•valyala•2h ago•19 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
228•ColinWright•1h ago•244 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
31•valyala•2h ago•4 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
128•AlexeyBrin•8h ago•25 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
8•gnufx•1h ago•1 comment

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
132•1vuio0pswjnm7•9h ago•160 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
71•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
836•klaussilveira•22h ago•251 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
181•alephnerd•2h ago•124 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1064•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
85•onurkanbkrc•7h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
493•theblazehen•3d ago•178 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
215•jesperordrup•12h ago•77 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
14•momciloo•2h ago•0 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
231•alainrk•7h ago•366 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
577•nar001•6h ago•261 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
9•languid-photic•3d ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
41•rbanffy•4d ago•8 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
30•marklit•5d ago•3 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
80•speckx•4d ago•91 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
278•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
289•dmpetrov•23h ago•156 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
558•todsacerdoti•1d ago•272 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
431•ostacke•1d ago•111 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
7•josephcsible•29m ago•1 comment

Understanding Transformers Using a Minimal Example

https://rti.github.io/gptvis/
295•rttti•5mo ago

Comments

aabdel0181•5mo ago
very cool!
busymom0•5mo ago
I'd also recommend another article on LLMs that was discussed here a few days ago. I read it to the end and understood everything fully:

> How can AI ID a cat?

https://news.ycombinator.com/item?id=44964800

xwowsersx•5mo ago
So glad you shared this. Super accessible without diluting the material. Thank you!
CGMthrowaway•5mo ago
Honest feedback - I was really excited when I read the opening. However, I did not come away from this with a greater understanding than I already had.

For reference, my initial understanding was somewhat low: basically I know a) what embedding is, b) that transformers work by matrix multiplication, and c) that it's something like a multi-threaded Markov chain generator with the benefit of pre-trained embeddings.

onename•5mo ago
Have you checked out this video from 3Blue1Brown that talks a bit about transformers?

https://youtu.be/wjZofJX0v4M

CGMthrowaway•5mo ago
I've seen it but I don't believe I've watched it all the way through. I will now.
imtringued•5mo ago
I personally would rather recommend that people just look at these architectural diagrams [0] and try to understand them. There is the caveat that they do not show how attention works. For that you need to understand softmax(QK^T)V, and that multi-head attention is a repetition of this multiple times. GQA, MQA, etc. just mess around with reusing Q or K or V in clever ways.

[0] https://huggingface.co/blog/vtabbott/mixtral
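
For concreteness, here is softmax(QK^T)V as a minimal single-head numpy sketch (the names and toy shapes are mine; the 1/sqrt(d) scaling is part of the standard scaled dot-product formulation):

  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for numerical stability
      return e / e.sum(axis=-1, keepdims=True)

  def attention(Q, K, V):
      # softmax(Q K^T / sqrt(d)) V -- one attention head
      d = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d)  # (seq, seq): how strongly each query matches each key
      return softmax(scores) @ V     # weighted sum of the values

  rng = np.random.default_rng(0)
  x = rng.normal(size=(4, 16))       # 4 token embeddings of width 16
  Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
  out = attention(x @ Wq, x @ Wk, x @ Wv)  # shape (4, 8)

Multi-head attention just runs this several times with different Wq/Wk/Wv and combines the per-head outputs.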

rhdunn•5mo ago
There are also various videos by Welch Labs that are very good. -- https://www.youtube.com/@WelchLabsVideo/videos
nikki93•5mo ago
Pasting a comment I posted elsewhere:

Resources I’ve liked:

Sebastian Raschka book on building them from scratch

Deep Learning a Visual Approach

These videos / playlists:

https://youtube.com/playlist?list=PLoROMvodv4rOY23Y0BoGoBGgQ... https://youtube.com/playlist?list=PLoROMvodv4rOwvldxftJTmoR3... https://youtube.com/playlist?list=PL7m7hLIqA0hoIUPhC26ASCVs_... https://www.youtube.com/live/uIsej_SIIQU?si=RHBetDNa7JXKjziD

here’s a basic impl that i trained on tinystories to decent effect: https://gist.github.com/nikki93/f7eae83095f30374d7a3006fd5af... (i used claude code a lot to help with the above bc it's a new field for me) (i did this with C and mlx before but ultimately gave in to the python lol)

but overall it boils down to:

- tokenize the text

- embed tokens (map each to a vector) with a simple NN

- apply positional info so each token also encodes where it is

- do the attention. this bit is key and also very interesting to me. there are three neural networks: Q, K, V – that are applied to each token. you then generate a new sequence of embeddings where each position has the Vs of all tokens added up weighted by the Q of that position dot’d with the K of the other position. the new embeddings are /added/ to the previous layer (adding like this is called ‘residual’)

- also do another NN pass without attention, again adding the output (residual). there’s actually multiple ‘heads’, each with a different Q, K, V – their outputs are added together before that second NN pass

there’s some normalization at each stage to keep the numbers reasonable and from blowing up

you repeat the attention + forward blocks many times, then the last embedding in the final layer output is what you sample from (minimal sketch of one block below)
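
here’s roughly what one block looks like as a numpy sketch (single head for brevity, my own names, untrained random weights – just to show the wiring described above):

  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max(axis=-1, keepdims=True))
      return e / e.sum(axis=-1, keepdims=True)

  def norm(x):
      # keeps the numbers reasonable and from blowing up
      return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-5)

  def block(x, p):
      # attention, with the output /added/ to the previous layer (residual)
      h = norm(x)
      q, k, v = h @ p["Wq"], h @ p["Wk"], h @ p["Wv"]
      causal = np.triu(np.full((len(x), len(x)), -1e9), 1)  # each token only sees earlier ones
      att = softmax(q @ k.T / np.sqrt(q.shape[-1]) + causal) @ v
      x = x + att @ p["Wo"]
      # the second NN pass without attention, again residual
      h = norm(x)
      return x + np.maximum(h @ p["W1"], 0) @ p["W2"]

  rng = np.random.default_rng(0)
  d, dff, n = 16, 64, 5
  shapes = {"Wq": (d, d), "Wk": (d, d), "Wv": (d, d), "Wo": (d, d),
            "W1": (d, dff), "W2": (dff, d)}
  p = {name: rng.normal(size=s) * 0.1 for name, s in shapes.items()}
  x = rng.normal(size=(n, d))  # n token embeddings, positional info already added
  for _ in range(4):           # repeat the attention + forward blocks many times
      x = block(x, p)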

i was surprised by how quickly this just starts to generate coherent grammar etc. having the training loop also do a generation step to show example output at each stage of training was helpful to see how the output qualitatively improves over time, and it’s kind of cool to “watch” it learn.

this doesn’t cover MoE, sparse vs dense attention and also the whole thing about RL on top of these (whether for human feedback or for doing “search with backtracking and sparse reward”) – i haven’t coded those up yet just kinda read about them…

now the thing is – this is a setup for it to learn some processes, spread among the weights, that do what it does – but what those processes are still seems very unknown. “mechanistic interpretability” is the space that’s meant to work on that; been looking into it lately.

hunter2_•5mo ago
Similarly, I was really excited when I read the headline here on HN and thought this would be about the electrical device. I wonder if the LLM meaning has eclipsed the electrical meaning at this point, as a default in the absence of other qualifiers, in communities like this.
zxexz•5mo ago
It does seem to have. I’ve been working on some personal projects where I’ve needed to look up and research transformers quite a bit (the kind that often has a ferrite core), and it has been frustrating. Frustrating not just when trying to search for the wire datasheets, etc., but also because I often have to use the other kind of transformer, via a service, to find what I’m looking for, because search is so enshittified by the newer meaning.
quitit•5mo ago
I had a similar feeling. I think a little magic was lost by the author trying to be as concise as possible, which is no real fault of their own, as this topic can go down the rabbit hole very quickly.

Instead, I believe this might work better as a guided exercise that a person can work through over a few hours, rather than being spoon-fed over the 10-minute reading time. Or by breaking the steps up into "interactive" sections that more clearly demarcate the stages.

Regardless, I'm very supportive of people making efforts to simplify this topic; each attempt always gives me something that I had either forgotten or neglected.

rttti•5mo ago
Thanks a lot for your feedback. I like your idea. It matches the pattern that you learn best what you try and experience yourself.
anshumankmr•5mo ago
It might be meant for the folks who are not well versed in transformers.
photon_lines•5mo ago
If you want my 'intuitive' explanation of how transformers work, you can find it here (if you're a visual learner, I think you'll like this one), though it is a bit long: https://photonlines.substack.com/p/intuitive-and-visual-guid...
CGMthrowaway•5mo ago
This I've read before and it was very helpful. It's probably where most of my understanding comes from.

If I'm interpreting it correctly, it sort of validates my intuition that attention heads are "multi-threaded Markov chain models". In other words, if autocomplete just looks at level 1, a transformer looks at level 1 for every word in the input, plus many layers deeper for every word (or token) in the input, while bringing a huge pre-training dataset to bear.

If that's correct, more or less, something that surprises me is how attention is often treated as some kind of "breakthrough". It seems obvious to me that improving a Markov chain recommendation would involve going deeper and dimensionalizing the context in a deeper way; the technique appears the same, just with more analysis. I'm not sure what I'm missing here. Perhaps adding those extra layers was a hard problem that we hadn't figured out how to do efficiently yet (?)
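
To make the "level 1" contrast concrete, here is the entirety of a bigram Markov autocomplete (my own toy example): it conditions on exactly one previous token, whereas attention lets every position condition on every other position, at every layer.

  from collections import Counter, defaultdict

  # "level 1": condition only on the single previous token
  counts = defaultdict(Counter)
  tokens = "the cat sat on the mat the cat ran".split()
  for prev, nxt in zip(tokens, tokens[1:]):
      counts[prev][nxt] += 1

  print(counts["the"].most_common())  # [('cat', 2), ('mat', 1)]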

photon_lines•5mo ago
So I posted this conversation between Ilya Sutskever (one of the creators of ChatGPT) and Lex Fridman within that blog post, and I'll provide it again below because I think it does a good job of summarizing what exactly 'makes transformers work':

  Ilya Sutskever: Yeah, so the thing is the transformer is a combination of multiple ideas simultaneously of which attention is one.

  Lex Fridman: Do you think attention is the key?

  Ilya Sutskever: No, it's a key, but it's not the key. The transformer is successful because it is the simultaneous combination of multiple ideas. And if you were to remove either idea, it would be much less successful. So the transformer uses a lot of attention, but attention existed for a few years. So that can't be the main innovation. The transformer is designed in such a way that it runs really fast on the GPU. And that makes a huge amount of difference. This is one thing. The second thing is that transformer is not recurrent. And that is really important too, because it is more shallow and therefore much easier to optimize. So in other words, it uses attention, it is a really great fit to the GPU and it is not recurrent, so therefore less deep and easier to optimize. And the combination of those factors make it successful.
I'm not sure if the above answers your question, but I tend to think of transformers more as 'associative' networks (similar to humans) -- they miss many of the components that actually make humans human (like imitation learning and consciousness (we still don't know what consciousness actually is)), but for the most part, the general architecture and the way they 'learn' I believe mimic a process similar to how regular humans learn: neurons that fire together, wire together (i.e. associative learning). This is what a huge large language model is to me: a giant auto-associative network that can comprehend and organize information.
rttti•5mo ago
Author here. Thanks a lot for the honest feedback. It makes me realize that the title might have been overselling it. While this project was a milestone on my personal learning journey, the article does not offer the same experience to the reader. Reading-experience design is probably what I should focus on more in my next piece of writing.
neuroelectron•5mo ago
For me, I feel like this could use a little bit more explanation. It's brief, and the grammar and cadence are very clunky.
rttti•5mo ago
Thanks a lot for the feedback! Highly appreciated.
meindnoch•5mo ago
I'd be surprised if anyone understood transformers from this.
runamuck•5mo ago
I love how you represent each token in the form of five stacked boxes, with height, width, etc. depicting different values. Where did you get this amazing idea? I will "steal" it for plotting high-dimensional data.
rttti•5mo ago
Great. Would love to learn what you apply it for and how it works out for you.

I think it does not scale well beyond 5 boxes (20 numbers), because the stacks become too complex to remember and identify patterns in. That's just me, though; it could be quite individual.
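
If you want to experiment, here is a minimal matplotlib sketch of the idea; mapping the four numbers per box to width, height, color, and edge thickness is just an assumption for illustration, not necessarily what the article uses:

  import numpy as np
  import matplotlib.pyplot as plt

  def draw_token(ax, vec, x0=0.0):
      # one token = 5 stacked boxes; each box encodes 4 values
      y = 0.0
      for w, h, hue, lw in vec.reshape(5, 4):
          ax.add_patch(plt.Rectangle((x0, y), 0.5 + w, 0.2 + h,
                                     fc=plt.cm.viridis(hue), ec="black",
                                     lw=1 + 2 * lw))
          y += 0.2 + h + 0.05
      ax.autoscale()
      ax.set_aspect("equal")
      ax.axis("off")

  rng = np.random.default_rng(0)
  fig, ax = plt.subplots()
  draw_token(ax, rng.uniform(size=20))  # one 20-dimensional "token"
  plt.show()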

dpflan•5mo ago
Here is another take on visualizing transformers from Georgia Tech researchers: https://poloclub.github.io/transformer-explainer/

Also, the Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/

Also, this HN comment has numerous resources: https://news.ycombinator.com/item?id=35712334

credit_guy•5mo ago
Here's the best video I have seen about transformers [1]. It is made by Welch Labs. It talks about DeepSeek and what their main innovation was, but it covers transformers in general too, and I couldn't find any better description of transformers.

Also, here's an interactive "transformer explainer" that is absolutely mind-blowing [2].

[1] https://www.youtube.com/watch?v=0VLAoVGf_74

[2] https://poloclub.github.io/transformer-explainer/