frontpage.

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
162•isitcontent•7h ago•18 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
215•eljojo•10h ago•136 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
266•vecti•10h ago•126 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
54•phreda4•7h ago•9 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
14•NathanFlurry•15h ago•5 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
147•bsgeraci•1d ago•61 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
23•JoshPurtell•1d ago•5 comments

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD

https://github.com/AGDNoob/FastLog
3•AGDNoob•3h ago•1 comment

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•12h ago•5 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
4•rahuljaguste•7h ago•1 comment

Show HN: I built a directory of $1M+ in free credits for startups

https://startupperks.directory
4•osmansiddique•5h ago•0 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
23•dchu17•12h ago•11 comments

Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps

https://github.com/tosin2013/jupyter-notebook-validator-operator
2•takinosh•5h ago•0 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
171•vkazanov•1d ago•48 comments

Show HN: 33rpm – A vinyl screensaver for macOS that syncs to your music

https://33rpm.noonpacific.com/
3•kaniksu•6h ago•0 comments

Show HN: Chiptune Tracker

https://chiptunes.netlify.app
3•iamdan•7h ago•1 comment

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
10•KevinChasse•12h ago•10 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
8•sawyerjhood•13h ago•0 comments

Show HN: An open-source system to fight wildfires with explosive-dispersed gel

https://github.com/SpOpsi/Project-Baver
2•solarV26•10h ago•0 comments

Show HN: Agentism – Agentic Religion for Clawbots

https://www.agentism.church
2•uncanny_guzus•11h ago•0 comments

Show HN: Disavow Generator – Open-source tool to defend against negative SEO

https://github.com/BansheeTech/Disavow-Generator
5•SurceBeats•16h ago•1 comment

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
567•deofoo•5d ago•166 comments

Show HN: BPU – Reliable ESP32 Serial Streaming with Cobs and CRC

https://github.com/choihimchan/bpu-stream-engine
2•octablock•13h ago•0 comments

Show HN: Total Recall – write-gated memory for Claude Code

https://github.com/davegoldblatt/total-recall
10•davegoldblatt•1d ago•6 comments

Show HN: Hibana – An Affine MPST Runtime for Rust

https://hibanaworks.dev
3•o8vm•14h ago•0 comments

Show HN: Beam – Terminal Organizer for macOS

https://getbeam.dev/
2•faalbane•14h ago•2 comments

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering

https://github.com/bethington/ghidra-mcp
294•xerzes•2d ago•66 comments

Show HN: The Hessian of tall-skinny networks is easy to invert

https://github.com/a-rahimi/hessian
31•rahimiali•3w ago
It turns out the inverse of the Hessian of a deep net is easy to apply to a vector. Doing this naively takes cubically many operations in the number of layers (so impractical), but it's possible to do this in time linear in the number of layers (so very practical)!

This is possible because the Hessian of a deep net has a matrix polynomial structure that factorizes nicely. The Hessian-inverse-product algorithm that takes advantage of this is similar to running backprop on a dual version of the deep net. It echoes an old idea of Pearlmutter's for computing Hessian-vector products.

Maybe this idea is useful as a preconditioner for stochastic gradient descent?
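
As a rough sketch of the Pearlmutter-style building block mentioned above (a Hessian-vector product, not the Hessian-inverse-product algorithm in the repo), forward-over-reverse differentiation in JAX looks roughly like this; the toy loss and parameter names are purely illustrative:

    import jax
    import jax.numpy as jnp

    def loss(params, x, y):
        # toy two-layer net standing in for a deep "tall-skinny" network
        h = jnp.tanh(x @ params["W1"])
        return jnp.mean((h @ params["W2"] - y) ** 2)

    def hvp(params, x, y, v):
        # Pearlmutter's trick: Hv is the directional derivative of the gradient
        # along v, computed as a forward-mode derivative of the reverse-mode grad.
        grad_fn = lambda p: jax.grad(loss)(p, x, y)
        return jax.jvp(grad_fn, (params,), (v,))[1]

Applying the inverse of H to a vector is the harder direction; the claim here is that the factorized structure of H makes that solve linear in the number of layers as well.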

Comments

MontyCarloHall•3w ago
>If the Hessian-vector product is Hv for some fixed vector v, we're interested in solving Hx=v for x. The hope is to soon use this as a preconditioner to speed up stochastic gradient descent.

Silly question, but if you have some clever way to compute the inverse Hessian, why not go all the way and use it for Newton's method, rather than as a preconditioner for SGD?

rahimiali•3w ago
Good q. The method computes Hessian-inverse on a batch. When people say "Newton's method" they're often thinking H^{-1} g, where both the Hessian and the gradient g are on the full dataset. I thought saying "preconditioner" instead of "Newton's method" would make it clear this is solving H^{-1} g on a batch, not on the full dataset.
MontyCarloHall•3w ago
I'd call it "Stochastic Newton's Method" then. :-)
rahimiali•3w ago
fair. thanks. i'll sleep on it and update the paper if it still sounds right tomorrow.

probably my nomenclature bias is that i started this project as a way to find new preconditioners on deep nets.

hodgehog11•3w ago
Just a heads up in case you didn't know, taking the Hessian over batches is indeed referred to as Stochastic Newton, and methods of this kind have been studied for quite some time. Inverting the Hessian is often done with CG, which tends to work pretty well. The only problem is that the Hessian is often not invertible, so you need a regularizer (same as here, I believe). Newton methods work at scale, but no one with the resources to try them at scale seems to be aware of them.

It's an interesting trick though, so I'd be curious to see how it compares to CG.

[1] https://arxiv.org/abs/2204.09266 [2] https://arxiv.org/abs/1601.04737 [3] https://pytorch-minimize.readthedocs.io/en/latest/api/minimi...
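
For comparison, a minimal sketch of the CG route described in the parent comment (matrix-free Stochastic Newton with a damping regularizer), reusing the toy loss and hvp sketch from the post above; the step size, damping value, and names are illustrative assumptions, not the paper's algorithm:

    import jax
    from jax.scipy.sparse.linalg import cg

    def stochastic_newton_step(params, x, y, lam=1e-3, lr=1.0, iters=20):
        # Solve (H + lam*I) d = g on one batch with conjugate gradients,
        # applying H only through Hessian-vector products; the damping term
        # lam*I is the regularizer needed when H is not invertible.
        g = jax.grad(loss)(params, x, y)
        damped = lambda v: jax.tree_util.tree_map(
            lambda hv, vv: hv + lam * vv, hvp(params, x, y, v), v)
        d, _ = cg(damped, g, maxiter=iters)
        return jax.tree_util.tree_map(lambda p, di: p - lr * di, params, d)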

semi-extrinsic•3w ago
For solving physics equations there is also Jacobian-free Newton-Krylov methods.
conformist•3w ago
Yes, the combination of Krylov and quasi-Newton methods is very successful for physics problems (https://en.wikipedia.org/wiki/Quasi-Newton_method).

IIRC, GMRES, for example, is a popular Krylov subspace method.
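
A tiny illustration of that Jacobian-free Newton-Krylov flavor in SciPy, with a made-up elementwise residual standing in for a discretized physics problem (everything here is illustrative):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        # toy nonlinear system F(u) = 0, one cubic equation per component
        return u**3 + u - 1.0

    def jfnk_step(u, eps=1e-7):
        # Jacobian-free: approximate J(u) v by a finite difference of F,
        # then hand the resulting linear operator to GMRES.
        F = residual(u)
        Jv = lambda v: (residual(u + eps * v) - F) / eps
        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, _ = gmres(J, -F)
        return u + du

    u = np.zeros(4)
    for _ in range(10):
        u = jfnk_step(u)
    print(u)  # each component converges to the real root of x**3 + x = 1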

throwaway198846•3w ago
I used these methods recently, and BFGS worked better than CG for me.
hodgehog11•3w ago
Absolutely plausible (BFGS is awesome), but this is situation dependent (no free lunch and all that). In the context of training neural networks, it gets even more complicated when one takes implicit regularisation coming from the optimizer into account. It's often worthwhile to try a SGD-type optimizer, BFGS, and a Newton variant to see which type works best for a particular problem.
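
A quick way to act on that "try several and see" advice is SciPy's common optimizer interface, sketched here on the Rosenbrock test function as a stand-in for a real problem:

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.zeros(10)
    for method in ("CG", "BFGS", "Newton-CG"):
        res = minimize(rosen, x0, jac=rosen_der, method=method)
        # compare iteration counts and final values across optimizer families
        print(method, res.nit, res.fun)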
jeffjeffbear•3w ago
I haven't looked into it in years, but would the inverse of a block bidiagonal matrix have some semiseparable structure? Maybe that would be good to look into?
rahimiali•3w ago
just to be clear, semiseparable in this context means H = D + CC', where D is block diagonal and C is tall & skinny?

If so, it would be nice if this were the case, because you could then just use the Woodbury formula to invert H. But I don't think such a decomposition exists. I tried to exhaustively search through all the decompositions of H that involved one dummy variable (of which the above is a special case) and I couldn't find one. I ended up having to introduce two dummy variables instead.
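
For reference, the Woodbury route that would have applied if such a decomposition existed, checked numerically with a plain diagonal D for brevity (sizes and names illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 50, 5
    D = np.diag(rng.uniform(1.0, 2.0, n))       # (block-)diagonal part
    C = rng.standard_normal((n, k))             # tall & skinny
    H = D + C @ C.T

    # Woodbury: (D + CC')^{-1} = D^{-1} - D^{-1} C (I + C' D^{-1} C)^{-1} C' D^{-1}
    Dinv = np.diag(1.0 / np.diag(D))
    small = np.eye(k) + C.T @ Dinv @ C          # only a k-by-k system to solve
    Hinv = Dinv - Dinv @ C @ np.linalg.solve(small, C.T @ Dinv)

    print(np.allclose(Hinv, np.linalg.inv(H)))  # True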

jeffjeffbear•3w ago
> just to be clear, semiseparate in this context means H = D + CC', where D is block diagonal and C is tall & skinny?

Not quite, it means any submatrix taken from the upper (or lower) triangular part of the matrix has low rank. For example, a matrix is {3,4}-semiseparable if any submatrix taken from the lower triangular part has at most rank 3 and any submatrix taken from the upper triangular part has at most rank 4.

The inverse of an upper bidiagonal matrix is {0,1}-semiseparable.

There are a lot of fast algorithms if you know a matrix is semiseparable.

edit: link https://people.cs.kuleuven.be/~raf.vandebril/homepage/public...
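
A quick numerical check of the bidiagonal example (sizes illustrative): the inverse is upper triangular, so submatrices from the lower triangular part have rank 0, and any block lying entirely in the upper triangular part has rank at most 1.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    B = np.diag(rng.uniform(1.0, 2.0, n)) + np.diag(rng.uniform(1.0, 2.0, n - 1), 1)
    Binv = np.linalg.inv(B)

    print(np.allclose(np.tril(Binv, -1), 0.0))      # True: lower part is rank 0
    print(np.linalg.matrix_rank(Binv[0:3, 4:8]))    # 1: upper-part block has rank <= 1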

rahimiali•3w ago
thanks for the explanation! sorry i had misread the AI summary on "semiseparable".

i need to firm my intuition on this first before i can say anything clever, but i agree it's worth thinking about!

Lerc•3w ago
I am not a mathematician, but I do enough weird stuff that I encounter things referring to Hessians, yet I don't really know what they are, because everyone who writes about them does so in terms that assume the reader already knows what they are.

Any hints? The Battenberg graphics of matrices?

stevenae•3w ago
This helped me, coming from an ml background: https://randomrealizations.com/posts/xgboost-explained/
Nevermark•3w ago
GRADIENT

In the context of optimizing the parameters of a model, the gradient consists of all the first derivatives of the output being optimized (i.e. the total error measure) with respect to each of the model's parameters.

This gives a simplified version of the model, linearized around its current parameter values, making it easy to see in which direction to take a small step to move the final output in the desired direction.

It also makes it easy to see which parameters affect the desired output more vs. less.

[EDIT] Nx1 1st derivative vector, N = #parameters, 1 = scalar output.

HESSIAN

The Hessian consists of all the second-order derivatives, i.e. not just the slope but the curvature of the model around the current parameter values.

Calculating all the first and second derivatives takes more computation and memory, but provides more information about which direction to take a learning step: not only do we know how the output will respond linearly to a small parameter change, we also know whether larger changes will produce higher- or lower-than-linear responses.

This can allow much larger parameter changes with large improvements to the output, speeding up training considerably per training step.

But the trade-off is that each learning step requires more derivative calculations and memory, so a conducive model architecture and clever tricks are often needed to make the Hessian worth using on larger models.

[EDIT] NxNx1 = NxN 2nd derivative matrix, N = #parameters, 1 = scalar output.

JACOBIAN

Another derivative type is the Jacobian, which is the derivative of every individual output (i.e. all those numbers we normally think of as the outputs, not just the final error measure) with respect to every parameter.

Jacobians can become enormous matrices. For billions of parameters, on billions of examples, with hundreds of output elements, we would get a billions x hundreds-of-billions derivative matrix, so computing the Jacobian can take an enormous amount of extra computation and memory. But there are still occasions (far fewer) when using it can radically speed up training.

[EDIT] NxQxM 1st derivative matrix, N = #parameters, Q = #samples, M = #output elements

At this point we have enough computing power and memory available that, in my view, all sufficiently small problems should be trained with Jacobians. Levenberg-Marquardt is an optimization algorithm that uses Jacobians, and it can be orders of magnitude faster than gradient descent.
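
A small shape check of the three objects above, on a toy linear model (the Jacobian layout below follows JAX's default output-then-parameter ordering rather than the N x Q x M ordering above; all names are illustrative):

    import jax
    import jax.numpy as jnp

    Q, D, M = 6, 4, 3                        # samples, input dim, output dim
    key = jax.random.PRNGKey(0)
    X = jax.random.normal(key, (Q, D))
    Y = jnp.zeros((Q, M))
    w = jax.random.normal(key, (D * M,))     # N = D*M flattened parameters

    def outputs(w_flat):
        return X @ w_flat.reshape(D, M)      # every individual output, shape (Q, M)

    def loss(w_flat):
        return jnp.mean((outputs(w_flat) - Y) ** 2)   # scalar error measure

    print(jax.grad(loss)(w).shape)           # (N,)       gradient
    print(jax.hessian(loss)(w).shape)        # (N, N)     Hessian
    print(jax.jacobian(outputs)(w).shape)    # (Q, M, N)  Jacobian of all outputs

For problems small enough to form the full Jacobian, Levenberg-Marquardt is available off the shelf, e.g. scipy.optimize.least_squares with method="lm".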

tubs•3w ago
You explain this well, so here's what I never understood: how are the Jacobians not just the first derivatives themselves?

Also, if you happen to have any suggestions on linear algebra for someone who uses it without really understanding it (I can write a measurement function for an EKF from scratch OK, but I don't really understand why the maths does what it does), I would really appreciate it.

mxwsn•3w ago
The Jacobian is first derivatives, but for a function mapping N to M dimensions. It's the first derivative of every output wrt every input, so it will be an N x M matrix.

The gradient is a special case of the Jacobian for functions mapping N to 1 dimension, such as loss functions. The gradient is an N x 1 vector.

Nevermark•3w ago
[EDIT] Updated original comment to include matrix dimensions.

If you want a serious text that goes through the relevant linear algebra and optimization mathematics in depth up front, Neural Network Design, 2nd edition is a good one. [Disclaimer, co-author]. We took great pains to walk through every conceptual and mathematical topic before we apply those concepts to machine learning. We use MATLAB a lot, which may or may not be helpful.

Another potential option is "Linear Algebra and Optimization for Machine Learning", which looks good and also starts out with linear algebra before machine learning. I haven't read it, but the first 2020 edition gets good reviews, and a second 2026 edition just came out, apparently with a fair amount of positive revision. Given the speed of change, that's nice to see.

Lerc•3w ago
Thank you very much for this description.

If I understand it in a nutshell: if the gradient is the angle, the Hessian is the curvature.

And Jacobians let you know how much the weights contributed to the blue component of something identified as a big blue cat.

I think.

Jacobians look like they could be used to train concept splitters. For instance, if an LLM has a grab bag of possible conversation paths, the final embedding would carry information for each path, but once the selection is made it could filter the embedding down to that path, which would be beneficial for chain of thought using the filtered embedding instead of the predicted token. I always wondered how much the thinking in embedding space carries around remnants of conversation paths not taken.

petters•3w ago
It would be great to see this work continued with some training runs.
rahimiali•3w ago
Agreed. But these things have a way of not working out, and in the sadness one forgets to celebrate the intermediate victories. I wanted to share an intermediate victory before reality crushes the joy.
holg•3w ago
Great work. Making the Hessian calculation linear in depth is a solid intermediate step. Thanks for sharing this; I look forward to seeing the final results as this research matures.