frontpage.

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•2m ago•0 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•6m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•22m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•28m ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•28m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•31m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•34m ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•44m ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•44m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•49m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•53m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•54m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•57m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•57m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comment

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comment

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comment

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comment

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
2•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•2h ago•1 comment

I Solved a 7-Day Calculation Problem in a Weekend

https://medium.com/@jithinsankar.nk/how-i-solved-a-7-day-calculation-problem-in-a-weekend-3fb1a54f2518
19•alomaki•7mo ago

Comments

alomaki•7mo ago
I had a 7-day compute problem, 3 days to solve it, and no extra hardware. Here's what worked.
judahmeek•7mo ago
It's odd that OP didn't seem to consider applying the nearest cached value for any given slider stop.

The Gaussian frequency was a cool idea, however.

Also, I would speculate that projected sales would likely be a continuous function in most cases, so I'm curious why they didn't try fitting a function based on initial results.
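
Something like this would do it (a sketch; the cache layout here is an assumption, not OP's code):

    # Snap a slider position to the nearest precomputed price.
    # `cache` maps price-in-cents -> cached forecast result.
    def nearest_cached(cache: dict, price_cents: int):
        key = min(cache, key=lambda k: abs(k - price_cents))
        return cache[key]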

alomaki•7mo ago
OP here,

Ah, good point. To be honest, interpolation didn't even cross my mind.

The model output wasn't just one number; it was a messy JSON blob with a 12-week forecast. Trying to average two of those felt like a whole other task, and with the deadline, my brain was just stuck on how to pick the right numbers to cache.

But yeah, it's a really great idea. Will definitely keep it in mind for the next demo.
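
Thinking about it more, blending two of those cached forecasts is probably just elementwise interpolation, something like (a sketch; the JSON shape here is made up):

    def blend(lo: dict, hi: dict, t: float) -> dict:
        # t in [0, 1]: how far the queried price sits between the two
        # cached neighbours; interpolate the 12 weekly numbers directly.
        weeks = [a + t * (b - a) for a, b in zip(lo["forecast"], hi["forecast"])]
        return {"forecast": weeks}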

ViscountPenguin•7mo ago
Tbh, I'm really not sure why something like this should take 15 seconds to compute. That's roughly a few trillion floating point ops for a problem that has been solvable for decades. I have trouble imagining any reasonable model for mapping price -> sales needing that much compute.

Also, fwiw, I really wouldn't expect clients' slider clicks to follow a normal distribution. A normal distribution occurs when you have the sum of a large number of random variables with finite (and bounded) expectation and variance; or alternatively, when you're modelling a process with known expectation and variance, but not any higher-order moments. If anything, I'd expect human beings to play with the slider more around extremal points, like the start and end.

jasonjmcghee•7mo ago
Also very curious about what kind of model this is and how it could (so far as it sounds) take 100% of the hardware for 15 seconds per request.
mylesp•7mo ago
My first instinct would be to do one or two in the middle, then both of the extremes. The assumption that it would be a normal distribution is so strange to me in this situation.
recursivecaveat•7mo ago
Yeah, this is like if you asked your friend how much more productive they think coffee makes them, and they replied with a four hundred thousand degree polynomial over the milliliter. Reality is never that predictable; reasonable people can recognize when they have run out of significant digits. Something has gone severely wrong in your data-modeling, your invocation of the model, or both. If it actually is doing compute for 15s, then to the extent that this works, it is wrapping a vastly simpler function, which I would suggest you graph and use going forward instead. It will save you the runtime, its outputs will be more reliable, and you will get actual insights.
curtisf•7mo ago
What is the actual model that takes 15 seconds to compute?

If I understand the setting, you are estimating the demand curve for a given price... And there are only 40 such curves to compute.

Surely each curve is fit with only a few parameters, probably fewer than five. (I think for small price ranges the demand curve is usually approximated as something trivial like y=mx+b or y=a/(x+b)+c)

Why does evaluating the model at a particular price need to take milliseconds, let alone 15 seconds?

kookamamie•7mo ago
This is the right question. The article reads like the answer to an XY problem: instead of focusing on the actual issue, the author triples down on polishing a turd.
conductr•7mo ago
I have this problem all the time, and I can usually run the calcs in a simple multithreaded process pool/queue. While each calc may still take 15s, I run a dozen or more at a time. This helps me refresh the cache in a reasonable amount of time. It doesn't really improve the calc speed of the underlying service, which is obviously another potential opportunity.
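
Roughly (a sketch; run_model stands in for the slow 15s call):

    import time
    from concurrent.futures import ProcessPoolExecutor

    def run_model(price_cents):
        time.sleep(15)  # stand-in for the real ~15s calculation
        return {"price": price_cents, "forecast": [0.0] * 12}

    def refresh_cache(prices):
        # a dozen workers turn 40 x 15s of serial work into a few batches
        with ProcessPoolExecutor(max_workers=12) as pool:
            return dict(zip(prices, pool.map(run_model, prices)))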
sour-taste•7mo ago
So parallel precomputation wasn't an option?
sebmaynard•7mo ago
Unless I'm misunderstanding, it seems like the approach taken here (normal distribution and Monte Carlo simulation) might achieve something similar?

https://filiph.github.io/unsure/
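
Something in that spirit (a sketch; the model and numbers are made up):

    import random

    # Sample the uncertain input, evaluate the stand-in model, and
    # report an interval instead of a single point.
    samples = sorted(5000.0 / random.gauss(1.20, 0.05) for _ in range(10_000))
    print(samples[500], samples[9500])  # rough 90% interval for sales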

stephenbez•7mo ago
I would have just had the slider move in increments of $0.10 or $0.05.

If I were a user, I might be confused as to why the increment isn't consistent across the whole range.

Sophira•7mo ago
Maybe I'm just confused, but why even use an ML model for this? It's all just calc, right?
thom•7mo ago
This is one of those junior engineer moments where they technically did perceive a problem and solve it, but you wish they had just come and asked for some advice first.
haneul•7mo ago
Don’t dangle the man: enrich him with your advice!
seanhunter•7mo ago
Well, the details in the article are sparse, but given what we are told, it seems highly likely that instead of using their ML model directly, they could use it to fit a regression or a piecewise polynomial (e.g. a linear interpolation or spline) over the result. So the user input is not driving the ML model; it is simply an input into a polynomial, a calculation that is trivial for a modern computer.

Then they wouldn’t even need to cache anything and the result would be instantaneous with no real loss of accuracy.
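
For example (a sketch; slow_model is a stand-in for their actual model):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def slow_model(price):
        return 5000.0 / (price + 0.5)  # stand-in for the 15s ML model

    # Fit once over a coarse sample of the model's output...
    prices = np.linspace(1.00, 1.40, 9)
    spline = CubicSpline(prices, [slow_model(p) for p in prices])

    # ...then every slider position is a sub-millisecond evaluation.
    print(spline(1.23))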

mylesp•7mo ago
This whole article reads like it was written by someone with no ability to step back for a second and think of other much easier solutions. They just go all in on the first thing they think of even when it is not effective at all.

Just increase the increment size, or if you really want 1c increments you could precompute every 5c or so and then just do linear interpolation between them.
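
E.g. (a sketch; the cached values are faked):

    import numpy as np

    grid = np.arange(100, 141, 5)                  # prices in cents, every 5c
    cached = np.array([5000.0 / c for c in grid])  # stand-in model outputs

    def projected_sales(cents):
        # linear interpolation between the two nearest precomputed points
        return float(np.interp(cents, grid, cached))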

EGreg•7mo ago
Yeah dude, seriously…

Linear interpolation on small intervals is like, a model of a model. And that’s exactly what differentiable functions are, anyway. And if you want to be fancier then sample the model and fit some polynomials to interpolate between those samples.

If they were really time constrained they could precompute things sparsely first (for a demo the next day) and then progressively refine the values in between.
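
Something like (a sketch; everything here is made up):

    def precompute(prices, model, cache):
        # coarse pass first (enough for tomorrow's demo), then keep
        # halving the stride to fill in the gaps
        stride = 8
        while stride >= 1:
            for p in prices[::stride]:
                if p not in cache:
                    cache[p] = model(p)
            stride //= 2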

Why did this trend on HN?

goloroden•7mo ago
I "solved" it by ignoring part of the problem?

Please don't take this personally, but IMHO that's not solving the problem. That's coming up with a workaround.

_dain_•7mo ago
It's cool that you got it working in time for the demo, but I think your reasoning is unsound.

>I remembered this from my engineering days at the College of Engineering, Trivandrum. It’s called the Normal Distribution, or Gaussian distribution. It’s based on the idea that data near the mean (the average) is more frequent than data far from the mean.

There are a lot of non-normal distributions where that's the case. The normal distribution is a specific thing that arises from summing together lots of small random variables.

It's not a good model of people moving sliders on a UI: a person's decision to set the value to e.g. 0.8 is really one discrete thing, not a sum of a bunch of independent micro-decisions. There's no physical/statistical law preventing someone from grabbing the slider and shoving it all the way to the left or the right, and in fact people do this all the time on UIs. The client can move the slider however he pleases ...

So I think you just got lucky that the client didn't do that. Don't rely on it not happening in the future!

(You could also imagine fitting a normal distribution to user behaviour, but it turns out the standard deviation is just really large. That would be technically defensible but also useless for your situation, since there would be substantial probability at the min/max values of the finite range. It would be close to uniform.)

(Also, who's to say the mean is in the middle of the slider range?)

Anyway I'm curious what the ML model was doing that took 15 seconds. Are you sure there's no way to speed it up?

poulpy123•7mo ago
"How I scammed my client in one weekend" would be more exact