
Anthropic Will Now Train Claude on Your Chats

https://www.macrumors.com/2025/08/28/anthropic-claude-chat-training/
1•tosh•1m ago•0 comments

Func Prog Podcast #9 with Hécate

https://discourse.haskell.org/t/func-prog-podcast-9-with-hecate/12854
1•Vosporos•2m ago•0 comments

Why Radiology AI Didn't Work and What Comes Next

https://www.outofpocket.health/p/why-radiology-ai-didnt-work-and-what-comes-next
1•nradov•3m ago•0 comments

A Snake Hunt in God's Country

https://www.theparisreview.org/blog/2025/08/25/a-snake-hunt-in-gods-country/
1•bookofjoe•6m ago•0 comments

A Federal Appellate Court Finds the NLRB to Be Unconstitutional

https://prospect.org/justice/2025-08-25-federal-appellate-court-finds-nlrb-unconstitutional/
2•Tadpole9181•6m ago•0 comments

Seeing infrared: contact lenses that grant 'super-vision'

https://www.theguardian.com/science/2025/may/22/infrared-contact-lenses-super-vision
3•colinprince•7m ago•0 comments

The biggest frogs build their own ponds

https://www.science.org/content/article/world-s-biggest-frogs-build-their-own-ponds
1•MaysonL•8m ago•0 comments

Skills You Need to Develop to Be a Better CTO (2017)

https://m.brianmcmanus.org/5-skills-you-need-to-develop-to-be-a-better-cto-528ad055706d
1•colinprince•9m ago•0 comments

The Anti-Autocracy Handbook: Scholars Guide to Navigating Democratic Backsliding

https://zenodo.org/records/15696097
2•nabla9•10m ago•0 comments

Benedict Evans: Why AI Isn't What You Think

https://fs.blog/knowledge-project-podcast/benedict-evans/
1•feross•10m ago•0 comments

Long context GPT-OSS fine-tuning

https://unsloth.ai/blog/gpt-oss-context
1•danielhanchen•11m ago•1 comments

Why AI Models Are Bad at Verifying Photos

https://www.cjr.org/tow_center/why-ai-models-are-bad-at-verifying-photos.php
1•giuliomagnifico•13m ago•0 comments

A Denisovan skull is upending the story of human evolution

https://www.newscientist.com/article/2492337-an-incredible-denisovan-skull-is-upending-the-story-...
2•Anon84•14m ago•0 comments

Show HN: DataCompose – PyJanitor-style dataframe cleaning for PySpark

https://github.com/datacompose/datacompose
1•tccole•15m ago•0 comments

Fasting may affect metabolism and immune response differently in the obese

https://linkinghub.elsevier.com/retrieve/pii/S2589004225011332
1•PaulHoule•16m ago•0 comments

Medicare Will Require Prior Approval for Certain Procedures

https://www.nytimes.com/2025/08/28/health/medicare-prior-approval-health-care.html
2•whack•16m ago•0 comments

Exoplan: Health-Driven Calendar

https://exoplan.io
1•exo_paul•17m ago•0 comments

Interactive Monty Hall Problem Simulator with Probability Visualization

https://montyhallsim.vercel.app/
1•ig1201•18m ago•1 comments

Show HN: I built the ATS YC said would never work

https://www.gethivemind.ai
2•BrainyZeiny•18m ago•0 comments

Building the Space Industry in Colombia

https://www.saganprog.com
1•felipediaz_•19m ago•1 comments

Reevaluating the revolution that fed the world

https://beyondimitation.substack.com/p/revisiting-the-revolution-that-fed
1•mellosouls•23m ago•0 comments

A conservative vision for AI alignment

https://www.lesswrong.com/posts/iJzDm6h5a2CK9etYZ/a-conservative-vision-for-ai-alignment
3•flypunk•24m ago•1 comments

The Economics of Envy

https://www.astralcodexten.com/p/the-economics-of-envy
2•feross•24m ago•0 comments

Health Effects of Cousin Marriage: Evidence from US Genealogical Records

https://www.aeaweb.org/articles?id=10.1257/aeri.20230544
3•speckx•30m ago•0 comments

Solana Consensus – From Forks to Finality

https://neodyme.io/en/blog/solana_consensus/
1•lawrenceyan•30m ago•0 comments

Affiliates Flock to 'Soulless' Scam Gambling Machine

https://krebsonsecurity.com/2025/08/affiliates-flock-to-soulless-scam-gambling-machine/
7•todsacerdoti•32m ago•0 comments

'Isn't Designed to Solve Privacy Concerns,' Grafana CTO on Bring Your Own Cloud

https://www.theregister.com/2025/08/28/grafanas_tom_wilkie_interview/
2•rntn•32m ago•0 comments

Show HN: A high-level search agent

https://www.gensee.ai/tooling.html
5•bobby_zhu•33m ago•0 comments

Uncertain<T>

https://nshipster.com/uncertainty/
6•samtheprogram•34m ago•0 comments

The Toad Report #1

https://willmcgugan.github.io/toad-report-1/
3•ingve•36m ago•0 comments

Important machine learning equations

https://chizkidd.github.io//2025/05/30/machine-learning-key-math-eqns/
233•sebg•6h ago

Comments

cl3misch•5h ago
In the entropy implementation:

    return -np.sum(p * np.log(p, where=p > 0))
Using `where` in ufuncs like `log` leaves the output uninitialized (undefined) at the locations where the condition is not met. Summing over that array will pick up whatever garbage values were in memory, giving incorrect results.

Better would be e.g.

    return -np.sum((p * np.log(p))[p > 0])
Also, the cross entropy code doesn't match the equation. And, as explained in the comment below the post, Ax+b is not a linear operation but affine (because of the +b).

Overall it seems like an imprecise post to me. Not bad, but not stringent enough to serve as a reference.
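A self-contained sketch of two safe alternatives: the masking fix above, and giving `where` an explicit zero-initialized `out` buffer so no position is left undefined:

```python
import numpy as np

def entropy_masked(p):
    # index first, then take the log: only positive entries contribute
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_out(p):
    # `where` is safe when paired with an explicit, initialized output buffer
    logp = np.log(p, where=p > 0, out=np.zeros_like(p, dtype=float))
    return -np.sum(p * logp)

p = np.array([0.5, 0.5, 0.0])
print(entropy_masked(p), entropy_out(p))  # both ln(2) ≈ 0.6931
```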

jpcompartir•5h ago
I would echo some caution if using as a reference, as in another blog the writer states:

"Backpropagation, often referred to as “backward propagation of errors,” is the cornerstone of training deep neural networks. It is a supervised learning algorithm that optimizes the weights and biases of a neural network to minimize the error between predicted and actual outputs.."

https://chizkidd.github.io/2025/05/30/backpropagation/

backpropagation is a supervised machine learning algorithm, pardon?

cl3misch•5h ago
I actually see this a lot: confusing backpropagation with gradient descent (or any other optimizer). Backprop is just a way to compute the gradient of the cost function with respect to the weights, not an algorithm to minimize the cost function wrt. the weights.

I guess the fancy name "backpropagation" for the (mathematically) simple principle of computing a gradient with the chain rule comes from the early days of AI, when computers were much less powerful and this seemed less obvious?
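The separation is easy to see in a few lines of numpy for a single linear layer with squared error (an illustrative sketch, not code from the post): backprop produces `grad_W`, and the optimizer step that uses it is a distinct, swappable piece.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # inputs
y = rng.normal(size=(32, 1))   # targets
W = np.zeros((4, 1))           # weights

for _ in range(200):
    pred = X @ W
    # backpropagation: gradient of the mean squared cost wrt the weights
    # (here a single application of the chain rule)
    grad_W = 2 * X.T @ (pred - y) / len(X)
    # gradient descent: the *separate* step that actually minimizes the cost
    W -= 0.1 * grad_W

print(float(np.mean((X @ W - y) ** 2)))
```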

cubefox•4h ago
What does this comment have to do with the previous comment, which talked about supervised learning?
cl3misch•3h ago
The previous comment highlights an example where backprop is confused with "a supervised learning algorithm".

My comment was about "confusing backpropagation with gradient descent (or any optimizer)."

For me the connection is pretty clear? The core issue is confusing backprop with minimization. The cited article mentioning supervised learning specifically doesn't take away from that.

imtringued•3h ago
Reread the comment:

"Backprop is just a way to compute the gradient of the cost function with respect to the weights, not an algorithm to minimize the cost function wrt. the weights."

What does the word supervised mean? It's when you define a cost function to be the difference between the training data and the model output.

I.e. something like (f(x)-y)^2, which is simply the squared difference between the model's output for an input x from the training data and the corresponding label y.

A learning algorithm is an algorithm that produces a model given a cost function and in the case of supervised learning, the cost function is parameterized with the training data.

The most common way to learn a model is to use an optimization algorithm. There are many optimization algorithms that can be used for this. One of the simplest algorithms for the optimization of unconstrained non-linear functions is stochastic gradient descent.

It's popular because it is a first-order method. First-order methods only use the first partial derivatives, collectively known as the gradient, whose size equals the number of parameters. Second-order methods converge faster, but they need the Hessian, whose size scales with the square of the number of parameters being optimized.
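A quick numeric illustration of that storage argument (sizes only; n is an arbitrary parameter count chosen for the example):

```python
n = 10_000                     # number of model parameters
gradient_entries = n           # a first-order method stores the gradient
hessian_entries = n * n        # a second-order method needs the full Hessian
print(gradient_entries, hessian_entries)  # 10000 100000000
```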

How do you calculate the gradient? Either you calculate each partial derivative individually, or you use the chain rule and work backwards to calculate the complete gradient.

I hope this makes it clear that your question is exactly backwards. The referenced blog post is about backpropagation and mentions supervised learning unnecessarily; the comment you're responding to explained exactly why it is inappropriate to call backpropagation a supervised learning algorithm.

imtringued•4h ago
The German Wikipedia article makes the same mistake and it is quite infuriating.
bee_rider•5h ago
Are eigenvalues or singular values used much in the popular recent stuff, like LLMs?
calebkaiser•4h ago
LoRA uses singular value decomposition to get the low-rank matrices. In different optimizers, you'll also see eigendecomposition or some approximation of it used (I think Shampoo does something like this, but it's been a while).
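A sketch of the low-rank idea with numpy's SVD: truncating to the top-r singular values gives the best rank-r approximation of a weight matrix as a product of two small factors. (This is the textbook Eckart–Young construction, not LoRA's actual training procedure.)

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))   # a weight matrix
r = 4                           # target rank

U, s, Vt = np.linalg.svd(W, full_matrices=False)
# keep only the top-r singular triplets -> two small factors, as in LoRA
A = U[:, :r] * s[:r]            # (64, r)
B = Vt[:r, :]                   # (r, 32)
W_r = A @ B                     # best rank-r approximation of W

print(W.size, A.size + B.size)  # 2048 vs 384 stored parameters
print(np.linalg.norm(W - W_r) / np.linalg.norm(W))  # relative error
```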
bob1029•5h ago
MSE remains my favorite distance measure by a long shot. Its quadratic nature still helps even in non-linear problem spaces where convexity is no longer guaranteed. When working with generic/raw binary data where hamming distance would be theoretically more ideal, I still prefer MSE over byte-level values because of this property.

Other fitness measures take much longer to converge or are very unreliable in the way in which they bootstrap. MSE can start from a dead cold nothing on threading the needle through 20 hidden layers and still give you a workable gradient in a short period of time.

cgadski•4h ago
> This blog post has explored the most critical equations in machine learning, from foundational probability and linear algebra to advanced concepts like diffusion and attention. With theoretical explanations, practical implementations, and visualizations, you now have a comprehensive resource to understand and apply ML math. Point anyone asking about core ML math here—they’ll learn 95% of what they need in one place!

It makes me sad to see LLM slop on the front page.

maerch•4h ago
Apart from the “—“, what else gives it away? Just asking from a non-native perspective.
kace91•4h ago
Not op, but it is very clearly the final summary telling the user that the post they asked the AI to write is now created.
TFortunato•4h ago
This is probably not going to be a very helpful answer, but I sort of think of it this way: you probably have favorite authors or artists (or maybe some you really dislike!) where you could take a look at a piece of their work, even if it's new to you, and immediately recognize their voice and style.

A lot of LLM chat models have a very particular voice and style they use by default, especially in these longer form "Sure, I can help you write a blog article about X!" type responses. Some pieces of writing just scream "ChatGPT wrote this", even if they don't include em-dashes, hah!

TFortunato•4h ago
OK, on reflection, there are a few things:

Kace's response is absolutely right that the summaries tend to be a place where there is a big giveaway.

There is also something about the way they use "you" and the article itself... E.g. the "you now have a comprehensive resource to understand and apply ML math. Point anyone asking about core ML math here..." bit. This isn't something you would really expect to read in a human-written article. It's a chatbot presenting its work to "you", the single user it's conversing with, not an author addressing their readers. Even if you ask the bot to write you an article for a blog, its responses often mix in these chatty bits that address the user or directly reference the user's questions/prompts in some way, which can be really jarring when transferred to a different medium w/o some editing.

cgadski•4h ago
It's not really about the language. If someone doesn't speak English well and wants to use a model to translate it, that's cool. What I'm picking up on is the dishonesty and vapidness. The article _doesn't_ explore linear algebra, it _doesn't_ have visualizations, it's _not_ a comprehensive resource, and reading this won't teach you anything beyond keywords and formulas.

What makes me angry about LLM slop is imagining how this looks to a student learning this stuff. Putting a post like this on your personal blog implicitly says: as long as you know some "equations" and remember the keywords, a language model can do the rest of the thinking for you! It's encouraging people to forgo learning.

Romario77•4h ago
It's just too bombastic for what it is - listing some equations with brief explanation and implementation.

If you don't already know these things on some level, the post doesn't give you much (far from 95%); it's a brief reference for some of the formulas used in machine learning/AI.

dawnofdusk•3h ago
I have some minor complaints but overall I think this is great! My background is in physics, and I remember finally understanding every equation on the formula sheet given to us for exams... that really felt like I finally understood a lot of physics. There's great value in being comprehensive so that a learner can choose themselves to dive deeper, and for those with more experience to check their own knowledge.

Having said that, let me raise some objections:

1. Omitting the multi-layer perceptron is a major oversight. We have backpropagation here, but not forward propagation, so to speak.

2. Omitting kernel machines is a moderate oversight. I know they're not "hot" anymore but they are very mathematically important to the field.

3. The equation for forward diffusion is really boring... it's not that important that you can take structured data and add noise incrementally until it's all noise. What's important is that in some sense you can (conditionally) reverse it. In other words, you should put the reverse diffusion equation which of course is considerably more sophisticated.
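For reference, in standard DDPM notation (with noise schedule β_t; this is the textbook formulation, not taken from the post), the forward step and the learned reverse step being contrasted here are:

```latex
% forward (noising) step: add a small amount of Gaussian noise
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right)

% learned reverse (denoising) step: mean and covariance come from the model
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```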

0wis•3h ago
As someone currently improving my foundations in data preparation for ML, I found this short, to-the-point article a gem.
dkislyuk•2h ago
Presenting information theory as a series of independent equations like this does a disservice to the learning process. Cross-entropy and KL-divergence are directly derived from information entropy: InformationEntropy(P) represents the baseline number of bits needed to encode events from the true distribution P; CrossEntropy(P, Q) represents the (average) number of bits needed to encode P with a suboptimal distribution Q; and KL-divergence (better referred to as relative entropy) is the difference between these two values, i.e. how many more bits are needed to encode P with Q, quantifying the inefficiency:

relative_entropy(p, q) = cross_entropy(p, q) - entropy(p)

Information theory is some of the most accessible and approachable math for ML practitioners, and it shows up everywhere. In my experience, it's worthwhile to dig into the foundations as opposed to just memorizing the formulas.

(bits assume base 2 here)
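The identity is easy to check numerically (base 2, masking zero probabilities; the function names are just the ones from the comment):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q):
    mask = p > 0
    return -np.sum(p[mask] * np.log2(q[mask]))

def relative_entropy(p, q):  # KL-divergence D(P || Q)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

p = np.array([0.5, 0.25, 0.25])
q = np.array([0.25, 0.25, 0.5])

# relative_entropy(p, q) == cross_entropy(p, q) - entropy(p)
print(relative_entropy(p, q), cross_entropy(p, q) - entropy(p))  # 0.25 0.25
```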

morleytj•45m ago
I 100% agree.

I think Shannon's Mathematical Theory of Communication is so incredibly well written and accessible that anyone interested in information theory should just start with the real foundational work rather than lists of equations, it really is worth the time to dig into it.

roadside_picnic•1h ago
While this very much looks like AI slop, it does remind me of a wonderful little book (which has many more equations): Formulas Useful for Linear Regression Analysis and Related Matrix Theory - It's Only Formulas But We Like Them [0]

That book is pretty much what it says on the cover, but it can be useful as a reference given its pretty thorough coverage. Though, in all honesty, I mostly purchased it for the outrageous title.

0. https://link.springer.com/book/10.1007/978-3-642-32931-9