
Overview of the Ada Computer Language Competition (1979)

https://iment.com/maida/computer/redref/
50•transpute•4h ago•5 comments

Veo 3 and Imagen 4, and a new tool for filmmaking called Flow

https://blog.google/technology/ai/generative-media-models-io-2025/
648•youssefarizk•17h ago•386 comments

Convolutions, Polynomials and Flipped Kernels

https://eli.thegreenplace.net/2025/convolutions-polynomials-and-flipped-kernels/
58•mfrw•6h ago•6 comments

Withnail and I (2001)

https://www.criterion.com/current/posts/122-withnail-and-i
53•dcminter•3d ago•16 comments

Gemma 3n preview: Mobile-first AI

https://developers.googleblog.com/en/introducing-gemma-3n/
342•meetpateltech•17h ago•119 comments

“ZLinq”, a Zero-Allocation LINQ Library for .NET

https://neuecc.medium.com/zlinq-a-zero-allocation-linq-library-for-net-1bb0a3e5c749
178•cempaka•12h ago•63 comments

Litestream: Revamped

https://fly.io/blog/litestream-revamped/
330•usrme•15h ago•76 comments

Clojuring the web application stack: Meditation One

https://www.evalapply.org/posts/clojure-web-app-from-scratch/index.html
103•adityaathalye•21h ago•25 comments

What makes a good engineer also makes a good engineering organization (2024)

https://moxie.org/2024/09/23/a-good-engineer.html
200•kiyanwang•2d ago•59 comments

Building my own solar power system

https://medium.com/@joe_5312/pg-e-sucks-or-how-i-learned-to-stop-worrying-and-love-building-my-own-solar-system-acf0c9f03f3b
167•JKCalhoun•2d ago•121 comments

Writing into Uninitialized Buffers in Rust

https://blog.sunfishcode.online/writingintouninitializedbuffersinrust/
82•luu•1d ago•72 comments

Using unwrap() in Rust is Okay (2022)

https://burntsushi.net/unwrap/
10•pierremenard•2d ago•6 comments

The NSA Selector

https://github.com/wenzellabs/the_NSA_selector
256•anigbrowl•16h ago•68 comments

Deep Learning Is Applied Topology

https://theahura.substack.com/p/deep-learning-is-applied-topology
428•theahura•21h ago•168 comments

AI's energy footprint

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
204•pseudolus•1d ago•205 comments

My favourite fonts to use with LaTeX (2022)

https://www.lfe.pt/latex/fonts/typography/2022/11/21/latex-fonts-part1.html
139•todsacerdoti•4d ago•38 comments

Someone got an LLM running on a Commodore 64 from 1982, and it runs as well

https://www.xda-developers.com/llm-running-commodore-64/
13•ghuntley•2d ago•5 comments

Red Programming Language

https://www.red-lang.org/p/about.html
166•hotpocket777•16h ago•89 comments

Show HN: 90s.dev – Game maker that runs on the web

https://90s.dev/blog/finally-releasing-90s-dev.html
278•90s_dev•20h ago•96 comments

Why does the U.S. always run a trade deficit?

https://libertystreeteconomics.newyorkfed.org/2025/05/why-does-the-u-s-always-run-a-trade-deficit/
243•jnord•23h ago•573 comments

A Secret Trove of Rare Guitars Heads to the Met

https://www.newyorker.com/magazine/2025/05/26/a-secret-trove-of-rare-guitars-heads-to-the-met
59•bookofjoe•8h ago•23 comments

Life before the web – Running a Startup in the 1980's (2016)

https://blog.zamzar.com/2016/07/13/life-before-the-web-running-a-startup-in-the-1980s/
46•gscott•4d ago•8 comments

Taito-tastic: Kiki Kaikai and its Hardware

https://nicole.express/2025/pocky-but-wheres-rocky.html
29•ingve•2d ago•3 comments

My new hobby: watching AI slowly drive Microsoft employees insane

https://old.reddit.com/r/ExperiencedDevs/comments/1krttqo/my_new_hobby_watching_ai_slowly_drive_microsoft/
3•laiysb•6m ago•1 comment

Show HN: A Tiling Window Manager for Windows, Written in Janet

https://agent-kilo.github.io/jwno/
242•agentkilo•19h ago•83 comments

Linguists find proof of sweeping language pattern once deemed a 'hoax'

https://www.scientificamerican.com/article/linguists-find-proof-of-sweeping-language-pattern-once-deemed-a-hoax/
99•bryanrasmussen•2d ago•101 comments

OpenAI Codex hands-on review

https://zackproser.com/blog/openai-codex-review
133•fragmede•20h ago•93 comments

Robin: A multi-agent system for automating scientific discovery

https://arxiv.org/abs/2505.13400
139•nopinsight•18h ago•18 comments

The Dawn of Nvidia's Technology

https://blog.dshr.org/2025/05/the-dawn-of-nvidias-technology.html
157•wmf•17h ago•60 comments

Semantic search engine for ArXiv, biorxiv and medrxiv

https://arxivxplorer.com/
120•0101111101•13h ago•20 comments

Convolutions, Polynomials and Flipped Kernels

https://eli.thegreenplace.net/2025/convolutions-polynomials-and-flipped-kernels/
58•mfrw•6h ago

Comments

Sourabhsss1•5h ago
The visualizations make the concept easy to grasp.
bjt12345•4h ago
This and complex analysis are fascinating topics in Undergraduate studies.
incognito124•1h ago
My favourite use case for this: by the same derivation as in this blog post, one can show that if X and Y are any two independent random variables (their distributions can differ), the probability distribution of X+Y is the convolution of the PMFs/PDFs of X and Y.
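A quick NumPy sketch of that fact (assuming X and Y are independent), using two fair dice as the toy example: summing the dice corresponds to convolving their PMFs, exactly like multiplying two polynomials.

```python
import numpy as np

# PMF of a fair six-sided die: P(X = k) = 1/6 for k = 1..6
die = np.full(6, 1 / 6)

# For independent X and Y, the PMF of X + Y is the convolution
# of their PMFs (indices 0..10 correspond to sums 2..12).
pmf_sum = np.convolve(die, die)

print(pmf_sum.sum())     # a valid PMF: sums to 1
print(pmf_sum[7 - 2])    # P(X + Y = 7) = 6/36
```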
srean•1h ago
Along similar lines, the MAX operator on random variables becomes the PRODUCT operator on their distribution functions.

It's fun to play with the (Max, +) algebra of random variables and infer its distribution.

This turns out to be quite useful in estimating the completion time of dependent parallel jobs.

One example is the straggler problem in mapreduce/Hadoop. In the naive case, the completion time is the max over all parallel subtasks.

If the task durations have a heavy tail, which sometimes they do, the straggler's completion time can be really bad. This can be mitigated by a k-out-of-n setup, where you encode the problem in such a way that only k out of n jobs need to finish to obtain the final result. One can then play with the trade-off between potentially wasted computation and expected completion time.

For heavy-tailed distributions another simplification is possible: the tails of Max and + become of the same order, so one can switch between convolutions and products.
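A small NumPy sketch of the max-to-product rule: for independent X and Y, P(max(X, Y) ≤ t) = F_X(t)·F_Y(t). Here Exponential(1) samples stand in for subtask durations (my choice for illustration, not anything specific to mapreduce), and a Monte Carlo estimate is checked against the product of CDFs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent Exponential(1) "job times"
x = rng.exponential(1.0, n)
y = rng.exponential(1.0, n)

t = 1.5
cdf = lambda s: 1 - np.exp(-s)             # Exponential(1) CDF

# P(max(X, Y) <= t) = P(X <= t) * P(Y <= t) under independence
analytic = cdf(t) * cdf(t)
empirical = np.mean(np.maximum(x, y) <= t)
print(analytic, empirical)                  # agree to ~2 decimal places
```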

incognito124•14m ago
Thanks for sharing the name of that problem! I've encountered it before while optimizing batched LLM inference. The whole batch would last until all queries in the batch were done, so by changing the batch size you'd trade off per-query speed (better with a larger batch) against overall performance (worse with a larger batch).

Nowadays I think this is solved in an entirely different way, though.

srean•10m ago
That's right; on-disk mapreduce isn't encountered as often in practice these days compared to its heyday.