frontpage.

The AI Agile Era: How AI Is Compressing the Software Lifecycle

https://blog.withmantle.com/ai-agile-era/
1•Osis•43s ago•1 comment

Drug for celiac shows promise in treating post-Covid syndrome in children

https://medicalxpress.com/news/2025-07-drug-celiac-disease-severe-covid.html
1•PaulHoule•1m ago•0 comments

Running Single-Core vs. Multi-Core Web Servers on Node.js with Rust

https://www.npmjs.com/package/brahma-firelight
1•StellaMary•2m ago•0 comments

Unstract: Open-source platform to ship document extraction APIs in minutes

https://github.com/Zipstack/unstract
1•naren87•3m ago•0 comments

Google's AI pointed him to a customer service number. It was a scam

https://www.washingtonpost.com/technology/2025/08/15/google-ai-overviews-scam/
1•fortran77•3m ago•1 comment

A new 5″ variant of Raspberry Pi Touch Display 2

https://www.raspberrypi.com/news/a-new-5-variant-of-raspberry-pi-touch-display-2/
1•righthand•3m ago•0 comments

Zero 2.0 – local-first vault with field-level encryption (free)

1•techdobz•3m ago•0 comments

Music Has No Enemies – How Music "Soothed the Savage Beast" at Normandy

https://www.warhistoryonline.com/instant-articles/music-soothed-savage-beast-normandy-watch.html
1•rishabhd•4m ago•0 comments

Robots.txt Is a Suicide Note

https://wiki.archiveteam.org/index.php/Robots.txt
3•rafram•4m ago•1 comment

There's no such thing as a 'coolcation'

https://www.cnn.com/2025/08/18/climate/nordic-heat-waves-arctic
1•voxleone•4m ago•0 comments

AI copilots reshape game development

https://www.developer-tech.com/news/how-ai-copilots-are-reshaping-game-development-according-to-coplays-ceo/
1•josvdwest•4m ago•0 comments

LLMs suggest women seek lower salaries than men in job interviews

https://www.computerworld.com/article/4028148/bias-alert-llms-suggest-women-seek-lower-salaries-than-men-in-job-interviews.html
1•arkadiyt•9m ago•0 comments

BeyondWeb: Lessons from Scaling Synthetic Data for Trillion-Scale Pretraining

https://blog.datologyai.com/beyondweb/
1•hurrycane•12m ago•0 comments

Intelligence

https://halfanhour.blogspot.com/2025/08/on-intelligence.html
1•speckx•12m ago•0 comments

Escaping the Steamcar Era of AI

https://speakez.tech/blog/escaping-the-steamcar-era-of-ai/
1•banashark•13m ago•0 comments

Turning an iPad Pro into the Ultimate Classic Macintosh

https://blog.gingerbeardman.com/2021/04/17/turning-an-ipad-pro-into-the-ultimate-classic-macintosh/
10•rcarmo•15m ago•0 comments

Macintosh Drawing Software Compared

https://blog.gingerbeardman.com/2021/04/24/macintosh-drawing-software-compared/
2•rcarmo•15m ago•0 comments

Left to Right Programming: Programs Should Be Valid as They Are Typed

https://graic.net/p/left-to-right-programming
6•graic•16m ago•3 comments

The Crisis of the University Started Long Before Trump

https://www.compactmag.com/article/the-crisis-of-the-university-started-long-before-trump/
1•ubiquitysc•17m ago•0 comments

AI Is Power-Hungry

https://paulkrugman.substack.com/p/ai-is-power-hungry
2•caycep•20m ago•0 comments

Privacy First AI Inference

https://www.geodd.io
1•malithh•21m ago•1 comment

Meta Horizon Creator Competition: Open-Source Champions

https://developers.meta.com/horizon/blog/introducing-open-source-champions/
1•acossta•22m ago•1 comment

Readeck client for self-hosted bookmarks: simple and mobile-friendly

1•potetotown•22m ago•1 comment

GSA Issues RFI for AI-Based Procurement Ecosystem

https://feedback.gsa.gov/jfe/form/SV_3OiEmKfescQv034
1•ateesdalejr•25m ago•0 comments

Protecting You from Social Engineering Campaigns: An Update from Workday

https://blog.workday.com/en-us/protecting-you-from-social-engineering-campaigns-update-from-workday.html
3•impish9208•26m ago•0 comments

Free AWS/ GCP/ Azure partner funding

https://funding.partnerplex.ai/
2•cheesepizza•27m ago•0 comments

VPN company Mullvad reminds users it will no longer use OpenVPN

https://mullvad.net/en/blog/reminder-that-openvpn-is-being-removed
2•Improvement•28m ago•0 comments

The lottery ticket hypothesis: why neural networks work

https://nearlyright.com/how-ai-researchers-accidentally-discovered-that-everything-they-thought-about-learning-was-wrong/
2•076ae80a-3c97-4•30m ago•0 comments

Monitor your Site with our new MCP

https://mcp.statusnow.dev/index.html
1•nkruger•32m ago•0 comments

Show HN: Whispering – Open-source, local-first dictation you can trust

https://github.com/epicenter-so/epicenter/tree/main/apps/whispering
2•braden-w•32m ago•0 comments

Who Invented Backpropagation?

https://people.idsia.ch/~juergen/who-invented-backpropagation.html
75•nothrowaways•1h ago

Comments

fritzo•1h ago
TIL that the same Shun'ichi Amari who founded information geometry also made early advances in gradient descent.
mystraline•1h ago
> BP's modern version (also called the reverse mode of automatic differentiation)

So... Automatic integration?

Proportional, integral, derivative. A PID loop sure sounds like what they're talking about.

eigenspace•56m ago
Reverse mode automatic differentiation is not integration. It's still differentiation, just a different method of calculating the derivative than the one you'd think to use by hand. It basically applies the chain rule in the opposite order from what is intuitive to people.

It has a lot more overhead than regular forward mode autodiff because you need to cache values from running the function and refer back to them in reverse order, but the advantage is that for functions with many inputs and very few outputs (e.g. the classic case of calculating the gradient of a scalar function in a high-dimensional space, as in gradient descent), it is algorithmically more efficient and requires only one pass through the primal function.

On the other hand, traditional forward mode derivatives are most efficient for functions with very few inputs but many outputs. It's essentially a duality relationship.
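
A minimal sketch of that "cache values going forward, apply the chain rule in reverse" idea, in Python (the Var/backward names here are illustrative, not taken from any real framework):

    import math

    class Var:
        """One node of the computation graph recorded during the forward pass."""
        def __init__(self, value, parents=()):
            self.value = value        # primal value, cached for the backward pass
            self.parents = parents    # list of (parent Var, local derivative)
            self.grad = 0.0

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

    def sin(x):
        return Var(math.sin(x.value), [(x, math.cos(x.value))])

    def backward(output):
        # Topologically sort the graph, then sweep it once in reverse,
        # accumulating each parent's gradient via the chain rule.
        topo, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for parent, _ in node.parents:
                    visit(parent)
                topo.append(node)
        visit(output)
        output.grad = 1.0
        for node in reversed(topo):
            for parent, local in node.parents:
                parent.grad += local * node.grad

    # f(x, y) = sin(x * y) + x: one forward pass, one backward pass,
    # and both partial derivatives come out together.
    x, y = Var(2.0), Var(3.0)
    f = sin(x * y) + x
    backward(f)
    print(x.grad, y.grad)   # cos(6)*3 + 1 and cos(6)*2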

stephencanon•19m ago
I don't think most people think to do either direction by hand; it's all just matrix multiplication, and you can multiply the matrices in whatever order makes it easier.
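
In matrix terms: the chain rule for a composition is a product of Jacobians, and associativity is what makes one multiplication order much cheaper than the other when the output is a scalar. A rough numpy illustration (the shapes are arbitrary, chosen only to make the point):

    import numpy as np

    # Composition f3(f2(f1(x))) from R^1000 to R^1; the chain rule says
    # J_f = J3 @ J2 @ J1, and associativity lets us pick the multiplication order.
    rng = np.random.default_rng(0)
    J1 = rng.standard_normal((500, 1000))   # Jacobian of f1: R^1000 -> R^500
    J2 = rng.standard_normal((500, 500))    # Jacobian of f2: R^500  -> R^500
    J3 = rng.standard_normal((1, 500))      # Jacobian of f3: R^500  -> R^1

    # Right-to-left (forward-mode-like): builds large intermediate matrices.
    g_forward = J3 @ (J2 @ J1)

    # Left-to-right (reverse-mode-like): only ever a row vector times a matrix.
    g_reverse = (J3 @ J2) @ J1

    assert np.allclose(g_forward, g_reverse)   # same gradient, very different cost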
imtringued•56m ago
Forward mode automatic differentiation creates a formula for each scalar derivative. If you have a billion parameters, you have to calculate each derivative from scratch.

As the name implies, the calculation is done forward.

Reverse mode automatic differentiation starts from the root of the symbolic expression and calculates the derivative for each subexpression simultaneously.

The difference between the two is like the difference between calculating the Fibonacci sequence recursively without memoization and calculating it iteratively. You avoid doing redundant work over and over again.
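
The Fibonacci comparison, as a toy sketch:

    def fib_naive(n):
        # Recomputes the same subproblems over and over, the way forward mode
        # re-derives shared subexpressions once per input variable.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_iterative(n):
        # Each subproblem is computed exactly once and reused, the way reverse
        # mode propagates a single cached adjoint per subexpression.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a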

digikata•20m ago
There are large bodies of work on optimization in state-space control theory that I strongly suspect have a lot of crossover with AI, and at least have very similar mathematical structure.

e.g. optimizing state-space control coefficients looks something like training an LLM's weight matrices...

cubefox•59m ago
See also: The Backstory of Backpropagation - https://yuxi.ml/essays/posts/backstory-of-backpropagation/
pjbk•57m ago
As it is stated, I always thought it came from formulations like the Euler-Lagrange procedures in mechanics used in numerical methods for differential geometry. In fact, when I recreated the algorithm as an exercise it immediately reminded me of gradient descent for kinematics, with the Jacobian calculation for each layer similar to an iterative pose calculation in generalized coordinates. I never thought of it as something "novel".
pncnmnp•55m ago
I have a question that's bothered me for quite a while now. In 2018, Michael Jordan (UC Berkeley) wrote a rather interesting essay - https://medium.com/@mijordan3/artificial-intelligence-the-re... (Artificial Intelligence — The Revolution Hasn’t Happened Yet)

In it, he stated the following:

> Indeed, the famous “backpropagation” algorithm that was rediscovered by David Rumelhart in the early 1980s, and which is now viewed as being at the core of the so-called “AI revolution,” first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

I was wondering whether anyone could point me to the paper or piece of work he was referring to. There are many citations in Schmidhuber’s piece, and in my previous attempts I've gotten lost in papers.

psYchotic•46m ago
I found this, maybe it helps: https://gwern.net/doc/ai/nn/1986-rumelhart-2.pdf
pncnmnp•43m ago
Apologies - I should have been clear. I was not referring to Rumelhart et al., but to pieces of work that point to "optimizing the thrusts of the Apollo spaceships" using backprop.
costates-maybe•25m ago
I don't know if there is a particular paper exactly, but Ben Recht has a discussion of the relationship between techniques in optimal control that became prominent in the 60's, and backpropagation:

https://archives.argmin.net/2016/05/18/mates-of-costate/

dataflow•44m ago
I asked ChatGPT and it gave a plausible answer, but I haven't fact-checked it. It says "what you’re thinking of is the “adjoint/steepest-descent” optimal-control method (the same reverse-mode idea behind backprop), developed in aerospace in the early 1960s and applied to Apollo-class vehicles." It gave the following references:

- Henry J. Kelley (1960), “Gradient Theory of Optimal Flight Paths,” ARS Journal.

- A.E. Bryson & W.F. Denham (1962), “A Steepest-Ascent Method for Solving Optimum Programming Problems,” Journal of Applied Mechanics.

- B.G. Junkin (1971), “Application of the Steepest-Ascent Method to an Apollo Three-Dimensional Reentry Optimization Problem,” NASA/MSFC report.

throawayonthe•24m ago
it's rude to show people your llm output
drsopp•22m ago
Why?
danieldk•15m ago
Because it is terribly low-effort. People are here for interesting and insightful discussions with other humans. If they were interested in unverified LLM output… they would ask an LLM?
drsopp•6m ago
Who cares if it is low effort? I got lots of upvotes for my link to Claude about this, and pncnmnp seems happy. The downvoted comment from ChatGPT was maybe a bit spammy?
drsopp•32m ago
Perhaps this:

Henry J. Kelley (1960). Gradient Theory of Optimal Flight Paths.

[1] https://claude.ai/public/artifacts/8e1dfe2b-69b0-4f2c-88f5-0...

pncnmnp•22m ago
Thanks! This might be it. I looked up Henry J. Kelley on Wikipedia, and in the notes I found a citation to this paper from Stuart Dreyfus (Berkeley): "Artificial Neural Networks, Back Propagation and the Kelley-Bryson Gradient Procedure" (https://gwern.net/doc/ai/nn/1990-dreyfus.pdf).

I am still going through it, but the latter is quite interesting!

duped•27m ago
They're probably talking about Kalman Filters (1961) and LMS filters (1960).
pjbk•12m ago
To be fair, any multivariable regulator or filter (estimator) that has a quadratic component (LQR/LQE) will naturally yield a solution similar to backpropagation when an iterative algorithm is used to optimize its cost or error function through a differentiable tangent space.
cubefox•17m ago
> ... first arose in the field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts of the Apollo spaceships as they headed towards the moon.

I think "its" refers to control theory, not backpropagation.

dudu24•45m ago
It's just an application of the chain rule. It's not interesting to ask who invented it.
qarl•41m ago
From the article:

Some ask: "Isn't backpropagation just the chain rule of Leibniz (1676) [LEI07-10] & L'Hopital (1696)?" No, it is the efficient way of applying the chain rule to big networks with differentiable nodes (see Sec. XII of [T22][DLH]). (There are also many inefficient ways of doing this.) It was not published until 1970 [BP1].

uoaei•6m ago
The article says that, but it's overcomplicating to the point of being actually wrong. You could, I suppose, argue that the big innovation is the application of vectorization to the chain rule (by virtue of the matmul-based architecture of your usual feedforward network), which is a true combination of two mathematical technologies. But it feels like this, and indeed most "innovations" in ML, are only considered as such due to brainrot derived from trying to take maximal credit for minimal work (i.e., IP).
mindcrime•39m ago
Who didn't? Depending on exactly how you interpret the notion of "inventing backpropagation", it's been invented, forgotten, re-invented, forgotten again, re-re-invented, etc., about 7 or 8 times. And no, I don't have specific citations in front of me, but I will say that a lot of interesting bits about the history of the development of neural networks (including backpropagation) can be found in the book Talking Nets: An Oral History of Neural Networks [1].

[1]: https://www.amazon.com/Talking-Nets-History-Neural-Networks/...

convolvatron•32m ago
don't undergrad adaptive filters count?

https://en.wikipedia.org/wiki/Adaptive_filter

It doesn't need a differentiation of the forward term, but if you squint it looks pretty close.
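
For reference, the classic LMS update is stochastic gradient descent on the instantaneous squared error of a single linear layer, which is why it looks so close. A rough sketch (function name and parameters here are just for illustration):

    import numpy as np

    def lms_filter(x, d, num_taps=8, mu=0.01):
        """Least-mean-squares adaptive filter: w <- w + mu * error * inputs.

        Each update is a gradient step on the instantaneous squared error of
        one linear layer: single-layer backprop, if you squint.
        """
        w = np.zeros(num_taps)
        y = np.zeros(len(x))
        for n in range(num_taps, len(x)):
            window = x[n - num_taps:n][::-1]   # most recent samples first
            y[n] = w @ window                  # filter output
            e = d[n] - y[n]                    # error vs. desired signal d
            w += mu * e * window               # descend the gradient of e**2 / 2
        return w, y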

caycep•22m ago
this fight has become legendary and infamous, and also pops up on HN every 2-3 years
aaroninsf•19m ago
When I worked on neural networks, I was taught that it was David Rumelhart.
cs702•16m ago
Whatever the facts, the OP comes across as sour grapes. The author, Jürgen Schmidhuber, believes Hopfield and Hinton did not deserve their Nobel Prize in Physics, and that Hinton, Bengio, and LeCun did not deserve their Turing Award. Evidently, many other scientists disagree, because both awards were granted in consultation with the scientific community. Schmidhuber's own work was, in fact, cited by the Nobel Prize committee as background information for the 2024 Nobel.[a] Only future generations of scientists, looking at the past more objectively, will be able to settle these disputes.

[a] https://www.nobelprize.org/uploads/2024/11/advanced-physicsp...

uoaei•10m ago
Calling the implementation of the chain rule "inventing" is most of the problem here.
dicroce•8m ago
Isn't it just kinda a natural thing once you have the chain rule?
PunchTornado•4m ago
Funny that Hinton is not mentioned. Like how childish can the author be?