
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
58•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
638•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
936•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•31 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
479•todsacerdoti•21h ago•238 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
279•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Feynman vs. Computer

https://entropicthoughts.com/feynman-vs-computer
90•cgdl•2mo ago

Comments

eig•2mo ago
What is the advantage of this Monte Carlo approach over a typical numerical integration method (like Runge-Kutta)?
MengerSponge•2mo ago
Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods.
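That scaling is easy to see in code. A minimal sketch (mine, not from the article): estimate the volume of the unit d-ball by uniformly sampling the enclosing cube; the same few lines work in any dimension.

```python
import random

def mc_ball_volume(d, n=100_000, seed=0):
    """Estimate the volume of the unit d-ball by sampling the cube [-1, 1]^d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Sample a point uniformly in the cube and test membership in the ball.
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(d)) <= 1.0:
            hits += 1
    return (hits / n) * 2.0 ** d  # cube volume is 2^d

# d=3 should land near 4*pi/3 ~ 4.18879; only the loop bound changes with d.
```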
jgalt212•2mo ago
The writer would have been well served to discuss why he chose Monte Carlo rather than summing up all the small trapezoids.
adrianN•2mo ago
At least if you can sample the relevant space reasonably accurately, otherwise it becomes really slow.
kens•2mo ago
I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques.
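For illustration, a sketch of that idea (my own, under the usual i.i.d. assumptions): the sample variance gives a plug-in standard error for the Monte Carlo estimate, essentially for free.

```python
import math
import random

def mc_integrate(f, a, b, n=50_000, seed=0):
    """Monte Carlo estimate of the integral of f on [a, b], with standard error."""
    rng = random.Random(seed)
    samples = [f(rng.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    width = b - a
    estimate = width * mean
    stderr = width * math.sqrt(var / n)  # plug-in CLT standard error
    return estimate, stderr

est, err = mc_integrate(math.sin, 0.0, math.pi)  # true value is 2
```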
ogogmad•2mo ago
Numerical integration using interval arithmetic gets you the same thing but in a completely rigorous way.
fph•2mo ago
With many quadrature rules (e.g. trapezoidal rule, Simpson's rule) you have a very cheap error estimator obtained by comparing the results over n and 2n subdivision points.
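A small sketch of that estimator (my own illustration): run the composite trapezoidal rule on n and 2n subintervals; since the error is O(h²), the difference divided by 3 approximates the error of the finer result (Richardson's idea).

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

def trapezoid_with_error(f, a, b, n):
    """Integral estimate plus a cheap error indicator from n vs 2n subintervals."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    # Error ~ h^2, so halving h cuts it by 4: error(fine) ~ (fine - coarse) / 3.
    return fine, abs(fine - coarse) / 3.0

val, err = trapezoid_with_error(math.sin, 0.0, math.pi, 64)  # true value is 2
```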
edschofield•2mo ago
Numerical integration methods suffer from the “curse of dimensionality”: they require exponentially more points in higher dimensions. Monte Carlo integration has a convergence rate, O(1/√n) in the number of samples, that is independent of dimension, so it scales much better.

See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf

a-dub•2mo ago
as i understand it: numerical methods smooth out noise (from sampling, floating point error, etc.) and are analytically inspired and computationally efficient, whereas monte carlo is computationally expensive brute-force random sampling, where you can improve accuracy by throwing more compute at the problem.
JKCalhoun•2mo ago
As a hobbyist, I'm playing with analog computer circuits right now. If you can match your curve with a similar voltage profile, a simple analog integrator (an op-amp with a capacitor connected in feedback) will also give you the area under the curve (also as a voltage of course).

Analog circuits (and op-amps generally) are surprisingly cool. I know, kind of off on a tangent here, but I have integration on the brain lately. You say "4 lines of Python", and I say "1 op-amp".

dreamcompiler•2mo ago
Yep. This is also how you solve differential equations with analog computers. (You need to recast them as integral equations because real-world differentiators are not well-behaved, but it still works.)

https://i4cy.com/analog_computing/
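
In software, the same recasting can be mimicked with Picard iteration: rewrite y' = f(t, y) as y(t) = y0 + ∫₀ᵗ f(s, y(s)) ds and iterate on the integral equation. A rough sketch (my own, not from the linked page), using trapezoidal quadrature for the integral:

```python
def picard_iterate(f, y0, t_grid, sweeps=20):
    """Solve y' = f(t, y), y(t_grid[0]) = y0 by iterating the equivalent
    integral equation y(t) = y0 + integral of f(s, y(s)) from t_grid[0] to t,
    evaluated with the trapezoidal rule on the previous sweep's solution."""
    n = len(t_grid)
    y = [y0] * n  # initial guess: constant function
    for _ in range(sweeps):
        new = [y0]
        acc = 0.0
        for i in range(1, n):
            h = t_grid[i] - t_grid[i - 1]
            acc += 0.5 * h * (f(t_grid[i - 1], y[i - 1]) + f(t_grid[i], y[i]))
            new.append(y0 + acc)
        y = new
    return y

# y' = y, y(0) = 1 on [0, 1]: y(1) should approach e.
ts = [i / 100 for i in range(101)]
ys = picard_iterate(lambda t, y: y, 1.0, ts)
```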

ogogmad•2mo ago
How does this compare to the Picard-Lindelof theorem and the technique of Picard iteration?
addaon•2mo ago
One of my favorite circuits from Korn & Korn [0] is an implementation of an arbitrary function of a single variable. Take an oscilloscope-style display tube. Put your input on the X axis as a deflection voltage. Close a feedback loop on the Y axis with a photodiode, and use the Y axis deflection voltage as your output. Cut your function of one variable out of cardboard and tape to the front of the tube.

[0] https://www.amazon.com/Electronic-Analog-Computers-D-c/dp/B0...

bncndn0956•2mo ago
N-SPHERES

https://youtu.be/BDERfRP2GI0

N-SPHERES is the most complex Oscilloscope Music work by Jerobeam Fenderson & Hansi3D and took six years to make.

Since it is almost entirely created with parametric functions, it is possible to store only these functions in an executable program and let the program create the audio and video output on the fly. The storage space required for such a program is just a fraction of an audio or video file, so it's possible to store the executables for the entire audiovisual EP on one 3.5" 1.44MB floppy disk. The first 500 orders will receive the initial numbered edition with pen-plotted artwork.

nakamoto_damacy•2mo ago
Speaking of analog computation, a single artificial neuron could be implemented as:

Weighted sum: using a summing amplifier,

net = Σ_i (Rf/Ri * xi)

where the resistor ratios set the synaptic weights.

Activation function: common op-amp activation circuits are a saturating function (op-amp with clipping diodes, approximating a sigmoid), a hard limiter (comparator behavior for step activation), and a tanh-like response (differential pair circuits).

Learning: early analog systems often lacked on-device learning; weights were manually set with potentiometers or stored using memristive elements (recent), floating-gate MOSFETs, or programmable resistor networks.
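
As a toy illustration of that summing-amplifier neuron (my sketch; the component values are made up, and the inverting amplifier's sign flip is ignored for clarity):

```python
def analog_neuron(inputs, r_in, r_f=10_000.0, v_sat=1.0):
    """Model a summing amplifier as a neuron: each weight is the resistor
    ratio Rf/Ri, and clipping diodes limit the output to +/- v_sat."""
    net = sum((r_f / ri) * x for x, ri in zip(inputs, r_in))
    # Hard clip approximates the diode-limited saturating activation.
    return max(-v_sat, min(v_sat, net))

# Two inputs with weights 10k/20k = 0.5 and 10k/10k = 1.0:
out = analog_neuron([0.4, 0.3], [20_000.0, 10_000.0])  # 0.2 + 0.3 = 0.5
```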

tim333•2mo ago
On op-amps: I've got a personal theory that the cochlear amplifier in the ear is basically an op-amp providing negative feedback to prevent excessive amplitudes, rather than the positive feedback mentioned in Wikipedia https://en.wikipedia.org/wiki/Cochlear_amplifier
bananaflag•2mo ago
> I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things.

I'd be interested in this. So finding classical closed form solutions is the actual thing desired there?

morcus•2mo ago
I think what the author was alluding to was the path integral formulation [of quantum mechanics] which was advanced in large part by Feynman.

It's not that finding closed form solutions is what matters (I don't think most path integrals would have closed form solutions), but that the integration is done over the space of functions, not over Euclidean space (or a manifold in Euclidean space, etc...)

pinkmuffinere•2mo ago
I haven’t read tfa, so apologies if I’m missing context. But convolution is one example of an integral that outputs a function. Convolution is fundamental for control theory.

https://en.wikipedia.org/wiki/Convolution?wprov=sfti1

messe•2mo ago
An integral trick I picked up from a lecturer at university: if you know the result has to be of the form ax^n for some a that's probably rational and some integer n but you're feeling really lazy and/or it's annoying to simplify (even for mathematica), just plug in a transcendental value for x like Zeta[3].

Then just divide by powers of that irrational number until you have something that looks rational. That'll give you a and n. It's more or less numerical dimensional analysis.

It's not that useful for complicated integrals, but when you're feeling lazy it's a fucking godsend to know what the answer should be before you've proven it.

EDIT: s/irrational/transcendental/
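
A sketch of that trick (my own toy example, using pi as the transcendental value and a known integral so the answer can be checked): divide out powers of x until the quotient is close to a small rational.

```python
import math
from fractions import Fraction

def identify_axn(value, x, max_n=8, tol=1e-9, max_den=50):
    """Given value = a * x**n for some rational a and integer n >= 0, with x
    transcendental, recover (a, n) by dividing out powers of x until the
    quotient is numerically close to a small rational."""
    for n in range(max_n + 1):
        candidate = value / x ** n
        frac = Fraction(candidate).limit_denominator(max_den)
        if abs(candidate - float(frac)) < tol:
            return frac, n
    raise ValueError("no small rational a found")

# Pretend we only know numerically that the integral of t^2 from 0 to x
# equals 10.3354... at x = pi; the trick recovers a = 1/3, n = 3.
x = math.pi
value = x ** 3 / 3  # stand-in for a numeric integration result
a, n = identify_axn(value, x)
```

Because x is transcendental, no wrong power of x leaves a quotient that is accidentally near a small-denominator rational, which is why the search is safe.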

Animats•2mo ago
Good numerical integration is easy, because summing smooths out noise. Good numerical differentiation is hard, because noise is amplified.

Conversely, good symbolic integration is hard, because you can get stuck and have to try another route through a combinatoric maze. Good symbolic differentiation is easy, because just applying the next obvious operation usually converges.

Huh.

Mandatory XKCD: [1]

[1] https://xkcd.com/2117/

kkylin•2mo ago
That's exactly right. A couple more things:

- Differentiating a function composed of simpler pieces always "converges" (the process terminates). One just applies the chain rule. Among other things, this is why automatic differentiation is a thing.

- If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically.
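
A sketch of that trick (mine, not kkylin's): for analytic f, the Cauchy integral formula gives f'(z0) = (1/2πi) ∮ f(z)/(z − z0)² dz. Parameterizing a circle of radius r around z0 turns this into a periodic integral, where the plain trapezoidal rule is spectrally accurate.

```python
import cmath
import math

def cauchy_derivative(f, z0, r=0.5, n=64):
    """Evaluate f'(z0) for analytic f via the Cauchy integral formula,
    using the trapezoidal rule on a circle of radius r around z0.
    Substituting z = z0 + r*e^{i theta} reduces the contour integral to
    (1/(2 pi r)) * integral of f(z) e^{-i theta} d theta."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        w = cmath.exp(1j * theta)
        total += f(z0 + r * w) / w  # f(z) * e^{-i theta}
    return total / (n * r)

d = cauchy_derivative(cmath.exp, 0.0)  # exp'(0) = 1
```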

ogogmad•2mo ago
The usage of confidence intervals here reminds me of the clearest way to see that integration is a computable operator, to the same degree that a function like sin() or sqrt() is computable. It's true thanks to a natural combination of (i) interval arithmetic and (ii) the "Darboux integral" approach to defining integration. So, intervals can do magic.
8bitsrule•2mo ago
Cool how the computer versions seem to work well as long as re-normalization isn't involved.
ForOldHack•2mo ago
I would bet on Feynman any day of the week. Numerical methods came up in 'Hidden Figures', and her solution was to use Euler's method to move from an elliptical orbit to a parabolic descent.