
SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
86•valyala•4h ago•16 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•15 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
35•zdw•3d ago•4 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
89•mellosouls•6h ago•168 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
132•valyala•4h ago•99 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
47•surprisetalk•3h ago•52 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
96•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•23h ago•256 comments

First Proof

https://arxiv.org/abs/2602.05192
66•samasblack•6h ago•51 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1092•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•5h ago•9 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
4•mbitsnbites•3d ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
233•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
516•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
93•onurkanbkrc•8h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
334•ColinWright•3h ago•401 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
254•alainrk•8h ago•412 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
182•1vuio0pswjnm7•10h ago•252 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
611•nar001•8h ago•269 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
35•marklit•5d ago•6 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
27•momciloo•4h ago•5 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
47•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
96•speckx•4d ago•109 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•117 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
32•sandGorgon•2d ago•15 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
287•isitcontent•1d ago•38 comments

Feynman vs. Computer

https://entropicthoughts.com/feynman-vs-computer
90•cgdl•2mo ago

Comments

eig•2mo ago
What is the advantage of this Monte Carlo approach over a typical numerical integration method (like Runge-Kutta)?
MengerSponge•2mo ago
Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods.
jgalt212•2mo ago
The writer would have been well served to discuss why he chose Monte Carlo rather than summing up all the small trapezoids.
adrianN•2mo ago
At least if you can sample the relevant space reasonably accurately, otherwise it becomes really slow.
kens•2mo ago
I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques.
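
For a concrete sense of what that error estimate looks like, here is a minimal Monte Carlo sketch in Python (the integrand and sample count are made up for illustration): the sample standard deviation of the integrand values divided by sqrt(N) is the standard error of the estimate.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.uniform(0.0, np.pi, n)        # sample points on [0, pi]
    y = np.sin(x) * np.pi                 # integrand times interval length

    estimate = y.mean()                   # Monte Carlo estimate of ∫ sin(x) dx = 2
    stderr = y.std(ddof=1) / np.sqrt(n)   # standard error of that estimate
    print(f"{estimate:.4f} ± {stderr:.4f}")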
ogogmad•2mo ago
Numerical integration using interval arithmetic gets you the same thing but in a completely rigorous way.
fph•2mo ago
With many quadrature rules (e.g. trapezoidal rule, Simpson's rule) you have a very cheap error estimator obtained by comparing the results over n and 2n subdivision points.
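
A rough sketch of that estimator for the trapezoidal rule (the integrand here is arbitrary): run the rule on n and 2n subintervals and use the difference, scaled by the rule's order, as an error estimate.

    import numpy as np

    def trapezoid(f, a, b, n):
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    coarse = trapezoid(np.sin, 0.0, np.pi, 64)
    fine = trapezoid(np.sin, 0.0, np.pi, 128)
    # The trapezoidal rule has O(h^2) error, so the difference between the two
    # results over-estimates the error of the finer one by roughly a factor of 3.
    print(fine, abs(fine - coarse) / 3)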
edschofield•2mo ago
Numerical integration methods suffer from the “curse of dimensionality”: they require exponentially more points in higher dimensions. Monte Carlo integration methods have an error that is independent of dimension, so they scale much better.

See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf
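A tiny illustration of the contrast (the integrand exp(-|x|^2) and the dimensions are arbitrary): a tensor-product grid rule would need m^d points in d dimensions, while plain Monte Carlo keeps the same 1/sqrt(N) error behaviour with a fixed number of samples.

    import numpy as np

    rng = np.random.default_rng(1)

    def mc_integrate(d, n=100_000):
        """Monte Carlo estimate of ∫ exp(-|x|^2) over the unit cube [0,1]^d."""
        x = rng.random((n, d))
        vals = np.exp(-np.sum(x**2, axis=1))
        return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

    for d in (1, 3, 10):
        est, err = mc_integrate(d)
        print(f"d={d:2d}: {est:.4f} ± {err:.4f}")   # the 1/sqrt(n) error does not blow up with d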

a-dub•2mo ago
as i understand it: numerical methods are analytically inspired and computationally efficient, smoothing out noise from sampling, floating point error, etc., whereas monte carlo is computationally expensive brute-force random sampling, where you can improve accuracy by throwing more compute at the problem.
JKCalhoun•2mo ago
As a hobbyist, I'm playing with analog computer circuits right now. If you can match your curve with a similar voltage profile, a simple analog integrator (an op-amp with a capacitor connected in feedback) will also give you the area under the curve (also as a voltage of course).

Analog circuits (and op-amps generally) are surprisingly cool. I know, kind of off on a tangent here, but I have integration on the brain lately. You say "4 lines of Python", and I say "1 op-amp".
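
For reference, the ideal inverting integrator that comment describes obeys V_out(t) = -(1/RC) ∫ V_in dt. A toy numerical sketch of what that one op-amp computes (the component values and input waveform are made up):

    import numpy as np

    R, C = 10e3, 1e-6                        # 10 kΩ, 1 µF, so RC = 10 ms (values made up)
    t = np.linspace(0, 0.05, 5001)
    v_in = np.sin(2 * np.pi * 50 * t)        # the "curve" as a voltage profile

    # Ideal inverting integrator: v_out(t) = -(1/RC) * integral of v_in up to t
    v_out = -np.cumsum(v_in) * (t[1] - t[0]) / (R * C)
    print(v_out[-1])                         # area under the curve, scaled by -1/RC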

dreamcompiler•2mo ago
Yep. This is also how you solve differential equations with analog computers. (You need to recast them as integral equations because real-world differentiators are not well-behaved, but it still works.)

https://i4cy.com/analog_computing/

ogogmad•2mo ago
How does this compare to the Picard-Lindelof theorem and the technique of Picard iteration?
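For what it's worth, Picard iteration works on exactly that integral form: rewrite y' = f(t, y) as y(t) = y(0) + ∫ f(s, y(s)) ds and iterate. A toy numerical sketch (the ODE and the discretization are chosen just for illustration):

    import numpy as np

    t = np.linspace(0.0, 1.0, 1001)
    dt = t[1] - t[0]
    f = lambda t, y: y                    # y' = y, whose exact solution is exp(t)

    y = np.ones_like(t)                   # initial guess y_0(t) = y(0) = 1
    for _ in range(10):                   # Picard: y <- y(0) + ∫_0^t f(s, y(s)) ds
        g = f(t, y)
        y = 1.0 + np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * dt)))

    print(y[-1], np.e)                    # converges toward exp(1)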
addaon•2mo ago
One of my favorite circuits from Korn & Korn [0] is an implementation of an arbitrary function of a single variable. Take an oscilloscope-style display tube. Put your input on the X axis as a deflection voltage. Close a feedback loop on the Y axis with a photodiode, and use the Y axis deflection voltage as your output. Cut your function of one variable out of cardboard and tape it to the front of the tube.

[0] https://www.amazon.com/Electronic-Analog-Computers-D-c/dp/B0...

bncndn0956•2mo ago
N-SPHERES

https://youtu.be/BDERfRP2GI0

N-SPHERES is the most complex Oscilloscope Music work by Jerobeam Fenderson & Hansi3D and took six years to make.

Since it is almost entirely created with parametric functions, it is possible to store only these functions in an executable program and let the program create the audio and video output on the fly. The storage space required for such a program is just a fraction of an audio or video file, so it's possible to store the executables for the entire audiovisual EP all on one 3.5" 1.44MB floppy disk. The first 500 orders will receive the initial numbered edition with pen-plotted artwork.

nakamoto_damacy•2mo ago
Speaking of Analog computation:

A single artificial neuron could be implemented as:

Weighted sum: using a summing amplifier,

net = Σ_i (Rf/Ri) · x_i

where the resistor ratios Rf/Ri set the synaptic weights.

Activation function: common op-amp activation circuits are

- Saturating function: op-amp with clipping diodes → approximate sigmoid
- Hard limiter: comparator behavior for a step activation
- Tanh-like response: differential-pair circuits

Learning: early analog systems often lacked on-device learning; weights were manually set with potentiometers or stored using

- Memristive elements (recent)
- Floating-gate MOSFETs
- Programmable resistor networks
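
A back-of-the-envelope digital counterpart of that analog neuron (weights, inputs, and the activation choice are made up): the resistor ratios become plain multipliers and the clipping stage becomes tanh.

    import numpy as np

    def neuron(x, w, b=0.0):
        """Weighted sum followed by a saturating activation: the digital
        counterpart of a summing amplifier feeding a clipping stage."""
        net = np.dot(w, x) + b        # net = Σ_i w_i * x_i  (Rf/Ri ratios -> w_i)
        return np.tanh(net)           # tanh-like response of a differential pair

    x = np.array([0.5, -1.0, 2.0])    # inputs (made up)
    w = np.array([0.8, 0.3, -0.5])    # "synaptic" weights (made up)
    print(neuron(x, w))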

tim333•2mo ago
On op-amps, I've got a personal theory that the cochlear amplifier in the ear is basically an op-amp providing negative feedback to prevent excessive amplitudes, rather than the positive feedback mentioned in Wikipedia: https://en.wikipedia.org/wiki/Cochlear_amplifier
bananaflag•2mo ago
> I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things.

I'd be interested in this. So finding classical closed form solutions is the actual thing desired there?

morcus•2mo ago
I think what the author was alluding to was the path integral formulation [of quantum mechanics], which was advanced in large part by Feynman.

It's not that finding closed form solutions is what matters (I don't think most path integrals would have closed form solutions), but that the integration is done over the space of functions, not over Euclidean space (or a manifold in Euclidean space, etc...)

pinkmuffinere•2mo ago
I haven’t read tfa, so apologies if I’m missing context. But convolution is one example of an integral that outputs a function. Convolution is fundamental for control theory.

https://en.wikipedia.org/wiki/Convolution?wprov=sfti1
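
A quick numerical sketch of that (the signals are made up): convolving an input with an impulse response yields another function, here sampled on a grid.

    import numpy as np

    dt = 0.01
    t = np.arange(0, 5, dt)
    u = (t < 1.0).astype(float)           # input: a rectangular pulse
    h = np.exp(-t)                        # impulse response of a first-order system

    y = np.convolve(u, h)[: len(t)] * dt  # y(t) = ∫ u(τ) h(t - τ) dτ, itself a function of t
    print(y[100], 1 - np.exp(-1))         # value at t = 1 s, close to 1 - e^{-1}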

messe•2mo ago
An integral trick I picked up from a lecturer at university: if you know the result has to be of the form ax^n for some a that's probably rational and some integer n, but you're feeling really lazy and/or it's annoying to simplify (even for Mathematica), just plug in a transcendental value for x like Zeta[3].

Then just divide by powers of that irrational number until you have something that looks rational. That'll give you a and n. It's more or less numerical dimensional analysis.

It's not that useful for complicated integrals, but when you're feeling lazy it's a fucking godsend to know what the answer should be before you've proven it.

EDIT: s/irrational/transcendental/
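
A sketch of how that trick might look in Python with mpmath (the "mystery" value below is fabricated to stand in for a numerically evaluated integral): divide by powers of ζ(3) until the result looks rational.

    from fractions import Fraction
    from mpmath import mp, mpf, zeta

    mp.dps = 50
    x = zeta(3)                       # transcendental plug-in value
    mystery = mpf(3) / 8 * x**2       # stands in for the numeric value of the integral

    for n in range(6):                # try small integer exponents
        candidate = mystery / x**n
        guess = Fraction(float(candidate)).limit_denominator(1000)
        if abs(candidate - mpf(guess.numerator) / guess.denominator) < mpf("1e-30"):
            print(f"looks like {guess} * x^{n}")
            break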

Animats•2mo ago
Good numerical integration is easy, because summing smooths out noise. Good numerical differentiation is hard, because noise is amplified.

Conversely, good symbolic integration is hard, because you can get stuck and have to try another route through a combinatoric maze. Good symbolic differentiation is easy, because just applying the next obvious operation usually converges.

Huh.

Mandatory XKCD: [1]

[1] https://xkcd.com/2117/
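
A small illustration of the numerical half of that (the noise level and function are made up): finite differences amplify sampling noise by roughly 1/h, while a running sum averages it away.

    import numpy as np

    rng = np.random.default_rng(2)
    h = 1e-3
    x = np.arange(0, 2 * np.pi, h)
    y = np.sin(x) + rng.normal(0, 1e-4, x.size)   # samples with a little noise

    deriv = np.diff(y) / h                        # noise amplified by ~1/h
    integral = np.cumsum(y) * h                   # noise averaged out

    print(np.abs(deriv - np.cos(x[:-1])).max())       # large error
    print(np.abs(integral - (1 - np.cos(x))).max())   # small error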

kkylin•2mo ago
That's exactly right. A couple more things:

- Differentiating a function composed of simpler pieces always "converges" (the process terminates). One just applies the chain rule. Among other things, this is why automatic differentiation is a thing.

- If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically.
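
A hedged sketch of that trick (the contour radius and sample count are made up): the trapezoid rule on a circle around z0 converges very fast for analytic integrands, so a short sum gives the derivative.

    import math
    import numpy as np

    def cauchy_derivative(f, z0, k=1, r=0.5, n=64):
        """k-th derivative of an analytic f at z0 via the Cauchy integral
        formula, discretized with the trapezoid rule on a circle of radius r."""
        theta = 2 * np.pi * np.arange(n) / n
        z = z0 + r * np.exp(1j * theta)
        return math.factorial(k) * np.mean(f(z) * np.exp(-1j * k * theta)) / r**k

    print(cauchy_derivative(np.exp, 0.0, k=3))   # ~1, the third derivative of e^z at 0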

ogogmad•2mo ago
The usage of confidence intervals here reminds me of the clearest way to see that integration is a computable operator, to the same degree that a function like sin() or sqrt() is computable. It's true thanks to a natural combination of (i) interval arithmetic and (ii) the "Darboux integral" approach to defining integration. So, intervals can do magic.
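A toy version of the Darboux half of that idea for a monotone integrand (a general integrand, and rigorous control of rounding, would need real interval arithmetic): lower and upper Darboux sums bracket the integral, and refining the partition shrinks the bracket.

    import numpy as np

    def darboux_bounds(f, a, b, n):
        """Lower/upper Darboux sums for an increasing f: a bracket of ∫f."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h * y[:-1].sum(), h * y[1:].sum()   # inf/sup of f on each piece

    lo, hi = darboux_bounds(np.exp, 0.0, 1.0, 10_000)
    print(lo, hi)   # both approach e - 1 as n grows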
8bitsrule•2mo ago
Cool how the computer versions seem to work well as long as re-normalization isn't involved.
ForOldHack•2mo ago
I would bet on Feynman any day of the week. Numerical methods came up in 'Hidden Figures', and her solution was to use Euler's method to move from an elliptical orbit to a parabolic descent.