
I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
46•valyala•2h ago•19 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
228•ColinWright•1h ago•248 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
31•valyala•2h ago•4 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
9•gnufx•1h ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
128•AlexeyBrin•8h ago•25 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
132•1vuio0pswjnm7•9h ago•161 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
71•vinhnx•5h ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
836•klaussilveira•22h ago•251 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
181•alephnerd•2h ago•125 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1064•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
85•onurkanbkrc•7h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
493•theblazehen•3d ago•178 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
215•jesperordrup•12h ago•77 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
15•momciloo•2h ago•0 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
231•alainrk•7h ago•366 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
578•nar001•6h ago•261 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
9•languid-photic•3d ago•1 comment

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
41•rbanffy•4d ago•8 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
30•marklit•5d ago•3 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
80•speckx•4d ago•91 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
278•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
289•dmpetrov•23h ago•156 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
558•todsacerdoti•1d ago•272 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
431•ostacke•1d ago•111 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

I unified convolution and attention into a single framework

https://zenodo.org/records/17103133
80•umjunsik132•4mo ago

Comments

umjunsik132•4mo ago
Hi HN, author here. For years, it bothered me that convolution (the king of vision) and matrix multiplication / self-attention (the engine of Transformers) were treated as completely separate, specialized tools. It felt like we were missing a more fundamental principle. This paper is my attempt to find that principle.

I introduce a framework called GWO (Generalized Windowed Operation) that describes any neural operation using just three simple, orthogonal components:

Path: Where to look
Shape: What form to look for
Weight: What to value

Using this "grammar", you can express both a standard convolution and self-attention, and see them as just different points in the same design space.

But the most surprising result came when I analyzed operational complexity. I ran an experiment where different models were forced to memorize a dataset (achieving ~100% training accuracy). The results were clear: complexity used for adaptive regularization (like in Deformable Convolutions, which dynamically change their receptive field) resulted in a dramatically smaller generalization gap than "brute-force" complexity (like in Self-Attention). This suggests that how an operation uses its complexity is more important than how much it has.

I'm an independent researcher, so getting feedback from a community like this is invaluable. I'd love to hear your thoughts and critiques. Thanks for taking a look. The paper is here: https://doi.org/10.5281/zenodo.17103133
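To make the grammar concrete, here is a rough, illustrative NumPy sketch (the helper names and the scalar 1-D setting are mine, purely for illustration; this is not code from the paper). A small convolution and a single-head self-attention come out as two settings of the same (Path, Shape, Weight) template: a fixed local path with static weights versus a global path with input-dependent weights.

    import numpy as np

    def windowed_op(x, path, weight):
        # Generic windowed operation over a 1-D sequence.
        #   path(i)           -> indices position i may look at ("where to look")
        #   weight(x, i, idx) -> weights over that window ("what to value")
        # The extent and ordering of the window play the role of Shape.
        out = np.zeros(len(x))
        for i in range(len(x)):
            idx = path(i)
            out[i] = np.dot(weight(x, i, idx), x[idx])
        return out

    x = np.random.randn(16)

    # Convolution: fixed local Path (edge-clamped window of 3), static Weights.
    kernel = np.array([0.25, 0.5, 0.25])
    conv_path = lambda i: np.clip(np.arange(i - 1, i + 2), 0, len(x) - 1)
    conv_weight = lambda x, i, idx: kernel

    # Self-attention (one head, scalar features): global Path, input-dependent Weights.
    attn_path = lambda i: np.arange(len(x))
    def attn_weight(x, i, idx):
        scores = x[i] * x[idx]                  # q.k with scalar "features"
        scores = np.exp(scores - scores.max())  # softmax
        return scores / scores.sum()

    y_conv = windowed_op(x, conv_path, conv_weight)
    y_attn = windowed_op(x, attn_path, attn_weight)
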
rf15•4mo ago
Very good find, thank you for writing it down. For some time I had the impression that they could be unified; I just never bothered trying.
CuriouslyC•4mo ago
I'm also an independent researcher, and I just wanted to say it's exciting to see other individuals making real contributions! One thing I've noticed is that as I'm discovering some very deep stuff, the imposter syndrome is hitting me hard because I don't have a research group to vibe off of. I have scientific training and 17 years of ML experience, but I think it's still natural to question yourself when you're pushing past the SOTA and finding deep patterns that the field has missed.

If it's useful to you, I'm happy to be a sounding board/vibes partner for your research. My contact info is in my profile.

iFire•4mo ago
How is it different than https://en.wikipedia.org/wiki/Mamba_(deep_learning_architect...
umjunsik132•4mo ago
That's a fantastic question, and you've hit on a perfect example of the GWO framework in action. The key difference is the level of abstraction: GWO is a general grammar to describe and design operations, while Mamba is a specific, highly-engineered model that can be described by that grammar. In fact, as I mention in the paper, we can analyze Mamba using the (P, S, W) components:

Path (P): A structured state-space recurrence. This is a very sophisticated path designed to efficiently handle extremely long-range dependencies, unlike a simple sliding window or a dense global matrix.

Shape (S): It's causal and 1D. It processes information sequentially, respecting the nature of time-series or language data.

Weight (W): This is Mamba's superpower. The weights are highly dynamic and input-dependent, controlled by its selective state parameters. This creates an incredibly efficient, content-aware information bottleneck, allowing the model to decide what to remember and what to forget based on the context.

So, Mamba isn't a competitor to the GWO theory; it's a stellar example of it. It's a brilliant instance of "Structural Alignment" where the (P, S, W) configuration is perfectly tailored for the structure of sequential data. Thanks for asking this, it's a great point for discussion.
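To make the Weight point concrete, here is a deliberately over-simplified scalar "selective recurrence" (my own toy for illustration, not Mamba's actual parameterization): the Path is a causal recurrence over a hidden state, and the gates deciding what to keep or write are computed from the input itself.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def selective_scan(x, w_keep=1.0, w_write=1.0):
        h, out = 0.0, np.empty(len(x))
        for t, x_t in enumerate(x):          # Shape: causal, 1-D, sequential
            keep = sigmoid(w_keep * x_t)     # Weight: input-dependent "remember" gate
            write = sigmoid(w_write * x_t)   # Weight: input-dependent "write" gate
            h = keep * h + write * x_t       # Path: recurrence over the hidden state
            out[t] = h
        return out

    y = selective_scan(np.random.randn(32))
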
scalaisneat•4mo ago
ai slop
srean•4mo ago
How do you make such judgements? I am not contesting your opinion, though. Just curious and hoping to acquire a discerning eye myself.
maltelau•4mo ago
That is a fantastic question, and you've hit on a very good balance between a curious and non-confrontational tone. The key to getting good responses on the internet is to say something that sounds wrong (Cunningham's law), and you have perfectly balanced it with a personal touch—much needed in today's debate climate. Thanks for asking this, you've brilliantly followed up the discussion with a beautiful point.

(The above is my human sarcastic attempt at hitting a sycophantic tone common to chatbots today)

morkalork•4mo ago
Now you're thinking like a real HN user. (another Gemini-ism)
srean•4mo ago
Ah! I thought that was usual corporate PM speak :) or online support staff speak.

Thanks for the demo. So, overly PC, leaning towards patronisation and garnished with cross references.

karmakaze•4mo ago
How do you not?
nextaccountic•4mo ago
This sycophantic, enthusiastic tone and vocabulary is specific to chatbots of the current vintage. It happens because during training the model was evaluated by human feedback (RLHF), and supposedly humans like it more when the AI pampers them https://www.anthropic.com/research/towards-understanding-syc...

Think of it like the text version of jpeg artifacts. Or, to make a comparison to image models, it's like "ai hands" (but note that recent image models are much better at drawing hands)

There's research on stopping this sycophantic behavior https://openai.com/index/sycophancy-in-gpt-4o/ so it's likely that in the future, systems won't have this specific flaw (or at least not as glaring). However, they may have their own artifacts.

umjunsik132•4mo ago
I used AI to polish my response. The idea was mine though. My apologies.
dwb•4mo ago
Your English is fine as it is. In this case at least, AI made it worse with all the grating hyperbole (“fantastic”, “perfect”, “stellar”). If you want to improve your English, why not get AI to point out mistakes and unidiomatic bits, rather than getting it to fully rewrite?
pessimizer•4mo ago
I think that people whose English is bad, and who probably do need AI (or any help) to be understood, might be better served by an initializing prompt that gets the AI to strip this shit out and sound professional instead of like a telemarketer or a kindergarten teacher.

Can anyone write a good prompt that will do this?

> Your English is fine as it is.

You do not know this. This level of technical explanation is a lot harder than a few simple sentences.

FjordWarden•4mo ago
From the paper:

Structured State Space Models and Mamba. Models like Mamba [Gu and Dao, 2023] can be interpreted within GWO as employing a sophisticated Path, Shape, and Weight. The Path is defined by a structured state-space recurrence, enabling it to model long-range dependencies efficiently. The Shape is causal (1D), processing information sequentially. Critically, the Weight function is highly dynamic and input-dependent, realized through selective state parameters that allow the model to focus on or forget information based on the context, creating an effective content-aware bottleneck for sequences.

hyperzzw•4mo ago
Hi, I have read your interesting paper. I'd recommend our previous HyperZZW paper (https://arxiv.org/pdf/2401.17948); I think there are a lot of similar concepts here:

1. Context-dependent convolution

2. Global & Local branches

3. Replace large-filter Conv with matrix multiplication

4. Information bottleneck -> Information loss

I also want to share that Mamba is based on the concept of Hyena. Simplicity is best (as in HyperZZW), and Hyena is a failure.

umjunsik132•4mo ago
Thank you for your comment and for sharing your interesting work. I'll take a look.