
Dumb statistical models, always making people look bad

https://statmodeling.stat.columbia.edu/2025/04/18/dumb-statistical-models-always-making-people-look-bad/
118•hackandthink•6mo ago

Comments

delichon•5mo ago
> why it’s often hard to demonstrate the value of human knowledge once you have a decent statistical model.

This seems to be a near restatement of the bitter lesson. It's not just that large enough statistical models outperform algorithms built from human expertise; they also outperform human expertise directly.

gopalv•5mo ago
> they also outperform human expertise directly

When measured statistically.

This is not a takedown of that statement, but the reason we have trouble with this idea is that it works in the lab and not always in real life.

To set up a clean experiment, you have to define what success looks like before you conduct the experiment - that is, the output variable is defined in advance.

Once you know what to measure ahead of time to determine success, then statistical models tend to not be as random as a group of humans in achieving that target.

Variance is bad in an experiment, but that jitter is needed in an ever-changing world, even if most variants are worse off.

For example, if you can predict someone's earning potential from their birth zipcode, it is not wrong and often more right than otherwise.

And then if you base student loans and business loan interest rates on the basis of birth zipcodes, the original prediction does become more right.

In the experimental version that's a win, but in real life it's a terrible loss to society.
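The self-fulfilling loop described above can be sketched as a toy simulation (all numbers are invented for illustration): a model predicts earnings from a noisy zip-code proxy, loan terms are set from the prediction, and those terms then feed back into realized earnings, so the prediction ends up validating itself.

```python
import random

random.seed(0)

# Toy feedback loop: a model predicts earnings from a zip-code proxy,
# lenders price loans off the prediction, and the loan terms then shift
# realized earnings -- making the original prediction "more right".
n = 1000
true_potential = [random.gauss(50, 10) for _ in range(n)]     # latent earning potential
zip_score = [p + random.gauss(0, 15) for p in true_potential]  # noisy zip-code proxy

def realized_earnings(potential, predicted):
    # Cheap credit (high prediction) boosts outcomes; costly credit drags them down.
    credit_effect = 0.3 * (predicted - 50)
    return potential + credit_effect

predicted = zip_score  # the model just reads off the zip-code proxy
outcomes = [realized_earnings(p, pr) for p, pr in zip(true_potential, predicted)]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The prediction correlates better with the outcomes it influenced
# than with the latent potential it was supposed to measure.
print(corr(predicted, true_potential))
print(corr(predicted, outcomes))
```

The second correlation comes out noticeably higher than the first, which is the "original prediction becomes more right" effect in miniature.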

bobsomers•5mo ago
> > they also outperform human expertise directly

> When measured statistically.

THANK YOU. It's mildly infuriating how often people forget that one of the things most human experts are good at is knowing when they are looking at something that is likely in distribution vs. out of distribution (and thus, updating their priors).

jonahx•5mo ago
The original article discusses this explicitly.
AstralStorm•5mo ago
Ah yes, the self fulfilling prophecies or hallucinations based on models trained on models. Overfitting. Ending up in an evolutionary dead end...

The type IV error, of not asking a question one should, also exists.

So the thing is: suppose you're handling the common cases right - you have software that's, say, 95% correct. The important bit is how critical the remaining 5% of failures are. If one of them happens to be "I give up my computer and data to the exploit" or "everything is destroyed" or "a lot of people die", then the extra 1% better average is no good to any inside observer.

It so happens that a lot of people believe themselves to be outside observers, especially the rich.

(What's the success bonus for someone getting treated nicely?)

nitwit005•5mo ago
You don't even need a statistical model. We make checklists because we know we'll fail to remember to check things.

Humans are tool users. If you make a statistical table to consult for some medical issue, you're using a tool.

taeric•5mo ago
I was going to say that it doesn't have to be a statistical model. Notably, statistical models are already seen by many people as less complete than analytical models. (I think that is almost certainly a poor way of wording it; largely I'm just trying to say that F=ma and such are also models that don't have conditional answers.)

At any rate, I'm curious about some of the readings this post brings up. I'm also vaguely remembering that humans can have some odd behaviors, where requiring justification or reasoning for decisions can sometimes produce more predictable decisions, but at the cost that you may not fully explore viable decisions.

dominicq•5mo ago
As a matter of practicality, it seems that, professionally, you now want to be firmly in the tails of the data distribution for your field, e.g. an expert in those things that happen rarely.

Or maybe even be in a domain which, for whatever reason, is poorly represented by a statistical model, something where data points are hard to get.

genewitch•5mo ago
> expert in those things that happen rarely

Replacement bolt: 15¢
Knowing which bolt had to be replaced: $9,999.85

rawgabbit•5mo ago
OTOH. The blog mentions that humans excel at novel situations. Such as when there is little training data, when envisioning alternate outcomes, or when recognizing the data is wrong.

The most recent example I can think of is "Frank". In 2021, JPMorgan Chase acquired Frank, a startup founded by Charlie Javice, for $175 million. Frank claimed to simplify the FAFSA process for students. Javice asserted the platform had over 4 million users, but in reality, it had fewer than 300,000. To support her claim, she allegedly hired a data science professor to generate synthetic data, creating fake user profiles. JPMorgan later discovered the discrepancy when a marketing campaign revealed a high rate of undeliverable emails. In March 2025, Javice was convicted of defrauding JPMorgan.

IMO a data expert could have recognized the fake user profiles, because he would have seen how messy real data is, would know the demographics of would-be users of a service like Frank (wealthy, time-stressed families), and would know the telltale signs of fake data (clusters of data that follow obvious "first principles").

willvarfar•5mo ago
> an data expert could have recognized the fake user profiles through the fact he has seen e.g., how messy real data is, know the demographics of would be users of a service like Frank (wealthy, time stressed families), know tell tale signs of fake data

perhaps the data science professor who generated the fake data was quite well versed in all this and put effort into deliberately adding messiness and skew etc?

3abiton•5mo ago
It's unfortunate how underappreciated statistics is. In nearly all (save academic) positions that I have occupied, mostly in technical domains interacting with non-technical stakeholders, anecdotal evidence always takes priority over statistically backed data for decision making. It's absurd sometimes.
bsder•5mo ago
This is because the correct answer is rarely the politically palatable answer.
TheAceOfHearts•5mo ago
Anecdotally, the way I've heard many stats related tools described is as follows: if the tool confirms something that we already knew then it's a waste of time or money because it doesn't tell us anything new, and if it doesn't agree with what we already know then it's obviously wrong.

I don't think it's a trivial problem though. It's notoriously easy to twist stats to sell any narrative. And Goodhart's Law all but guarantees that any meaningful metric will get hacked.

gwern•5mo ago
> There are a few ways to look at this from the standpoint of information that is available to the decision-maker. One is that human knowledge is valuable for guiding developing the model, but once you have a statistical model, it’s a better aggregator of the information. This is echoed by research on judgmental bootstrapping (https://gwern.net/doc/statistics/decision/1974-dawes.pdf), where a statistical model trained on a human expert’s past judgments will tend to outperform that expert.

By the way, note that this applies to LLMs too. One of the biggest pons asinorums that people get hung up on is the idea that "it just imitates the data, therefore, it can never be better than the average datapoint (or at least, best datapoint); how could it possibly be better?"

Well, we know from a long history that this is not that hard: humans make random errors all the time, and even a linear model with a few parameters or a little flowchart can outperform them. So it shouldn't be surprising or a mystery if some much more complicated AI system could too.
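The judgmental-bootstrapping result can be sketched in a toy simulation (synthetic data, invented parameters): an "expert" applies roughly the right cue weights but inconsistently, and a linear model fit to the expert's own judgments strips out that inconsistency and tracks the truth better than the expert does.

```python
import random

random.seed(1)

# Toy judgmental bootstrapping: the expert's judgment is a sensible
# linear policy plus per-case random inconsistency.
n = 2000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
truth = [2 * a + b for a, b in zip(x1, x2)]
# Expert uses roughly the right weights, noisily on each case.
expert = [2 * a + b + random.gauss(0, 1.5) for a, b in zip(x1, x2)]

# "Bootstrap" the expert: least-squares weight of each (near-orthogonal)
# cue against the expert's judgments, ignoring the small cross term.
def ls_weight(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

w1, w2 = ls_weight(x1, expert), ls_weight(x2, expert)
model = [w1 * a + w2 * b for a, b in zip(x1, x2)]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

print("expert MSE vs truth:", mse(expert, truth))
print("model-of-expert MSE vs truth:", mse(model, truth))
```

The model of the expert beats the expert because the fitted weights average away the case-by-case noise while keeping the expert's policy.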

AIPedant•5mo ago
> One of the biggest pons asinorums that people get hung up on is the idea that "it just imitates the data, therefore, it can never be better than the average datapoint (or at least, best datapoint); how could it possibly be better?"

Hmm - the phrasing that perhaps holds more water is that LLMs just imitate the data, which means that novel ideas / code tends to be smashed against the force of averaging when fed into an LLM. E.g. NotebookLM summaries/podcasts are good infotainment but they tend to flatten unconventional paragraphs into platitudes or common wisdom. Obviously this is very subjective and hard to benchmark.

airstrike•5mo ago
> Obviously this is very subjective and hard to benchmark.

I agree, but it also feels very obvious once you've been exposed to it enough times. The internet is filled with written or spoken AI slop that can generally be spotted with ease by trained eyes and ears.

jon_richards•5mo ago
The problem making a bear-proof trash can is that there's significant overlap between the smartest bears and the dumbest tourists.
roenxi•5mo ago
> and even a linear model with a few parameters

Using a simple average of past performance to predict future performance is also a technique that is often disturbingly effective vs. standard practice. I suppose technically that is a linear model, but really deserves its own class.

AstralStorm•5mo ago
Up to a point where the prediction runs afoul of the time horizon and changing unmodelled circumstances.

They do not have sufficient explicit risk or variance management, which makes them highly fragile. There are more robust versions of the estimators... but they still have this problem.

Remember 2008? That market ran on these easy models.

gwern•5mo ago
Yes, exponential smoothing in forecasting is another classic example of the robustness of simple models. You can throw all your fancy ARIMAs and Box-Cox transforms at a time-series, and much of the time, it is hard to distinguish from a simple moving average.

Specifically, the Makridakis M forecasting competitions (https://en.wikipedia.org/wiki/Makridakis_Competitions) have shown for a long time that beating the baselines is shockingly difficult.

In fact, classic machine learning only really started to convincingly win with the second-to-last, M5: https://www.sciencedirect.com/science/article/pii/S016920702... ; and neural methods only just sort of began working with the latest one, M6: https://www.sciencedirect.com/science/article/pii/S016920702... . (Possibly with M7 we'll see scaled-up meta-learning Transformers finally start beating the Bayesian or decision-tree forecasters. But I don't know if or when a M7 might be held.)
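Simple exponential smoothing, the baseline being discussed, fits in a few lines. Here is a minimal sketch (random-walk test data and the smoothing constant are arbitrary choices for illustration):

```python
import random

random.seed(2)

# Simple exponential smoothing: the forecast is a geometrically weighted
# average of past observations. Level update: level <- alpha*y + (1-alpha)*level
def ses_forecasts(series, alpha=0.3):
    level = series[0]
    forecasts = [level]               # one-step-ahead forecast made before seeing y
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    # Align forecasts with the observations they predicted.
    return forecasts[:-1], series[1:]

# A noisy random walk -- the kind of series where fancy models struggle
# to beat this baseline.
series = [0.0]
for _ in range(500):
    series.append(series[-1] + random.gauss(0, 1))

preds, actual = ses_forecasts(series, alpha=0.5)
mae = sum(abs(p - a) for p, a in zip(preds, actual)) / len(preds)
print("one-step MAE:", mae)
```

On a unit-step random walk the one-step MAE lands near 1, and an ARIMA with several tuned parameters typically cannot improve on it by much.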

senkora•5mo ago
> pons asinorums

This is a new one for me, so, in the spirit of the article, I will "act in the world to acquire more information as needed".

> An obstacle which will defeat a beginner or foolish person. [from 17th c.]

> From New Latin pons asinorum, from Latin pōns (“bridge”) + genitive plural of asinus (“donkey”). Literally, “bridge of donkeys”.

https://en.wiktionary.org/wiki/pons_asinorum

mwkaufma•5mo ago
User "Anoneuoid" from the source's own comment thread:

  There is another aspect here where those averaged outcomes are also the output of statistical models. So it is kind of like asking whether statistical models are better at agreeing with other statistical models than humans.
AstralStorm•5mo ago
You need to compare on both different variables and additionally produce actual error estimates on the comparison.

Say, suppose you're measuring successful treatments. You would have to use the count, perhaps even signed (subtracting abject failures such as deaths), and the cost (financial, or number of visits), then verify these numbers with a follow-up.

See, the definition of success is critical here. OR and NNT are not evaluating side effects negatively, for example.

So it may turn out that you're comparing completely different ideas of better instead of matching models.
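The two metrics named above (OR and NNT) can be computed from a hypothetical 2x2 trial table; the counts below are invented for illustration. Neither metric sees side effects: a treatment can have an impressive NNT while harming patients on every dimension the "event" definition ignores.

```python
# Hypothetical 2x2 trial table (invented counts):
#                 bad outcome   no bad outcome
# treatment            a              b
# control              c              d
a, b = 10, 90   # treatment arm: 10% event rate
c, d = 20, 80   # control arm:   20% event rate

treat_rate = a / (a + b)
ctrl_rate = c / (c + d)

arr = ctrl_rate - treat_rate    # absolute risk reduction
nnt = 1 / arr                   # number needed to treat
odds_ratio = (a / b) / (c / d)  # odds of the event, treatment vs control

print(f"ARR={arr:.2f}  NNT={nnt:.1f}  OR={odds_ratio:.2f}")
```

Here NNT comes out to 10 and the odds ratio to about 0.44, and nothing in either number would change if the treatment also doubled the rate of some side effect the trial didn't count as an "event".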

whatever1•5mo ago
At least when humans are wrong, we own it. Statistical models can be wrong 100% of the times you use them, and the claim is "oh, this is how statistics works; you did not query the model infinite times".

My point is that on many occasions being right on average is less important than being right on the tail.

vintermann•5mo ago
> Minimizing loss over aggregates is what a statistical model is designed to do, so if you evaluate human judgment against statistical predictions in aggregate on data similar to what the model was trained on, then you should expect statistical prediction to win

This reminds me of the many years machine translation was evaluated on BLEU towards reference translations, because they didn't know any better ways. Turns out that if you measure translation quality by n-gram precision towards a reference translation, then methods based on n-gram precision (such as the old pre-NMT Google translate) were really hard to beat.
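The circularity described above is easy to see in the metric itself. Here is a minimal sketch of n-gram precision, the core of BLEU (toy sentences, no brevity penalty or multi-reference handling):

```python
from collections import Counter

# Clipped n-gram precision: fraction of candidate n-grams that also
# appear in the reference, with counts clipped to the reference's counts.
def ngram_precision(candidate, reference, n=2):
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

ref = "the cat sat on the mat"
# A translation that reuses the reference's phrasing scores high...
close = ngram_precision("the cat sat on a mat", ref)
# ...while an equally valid paraphrase scores much lower.
paraphrase = ngram_precision("a cat was sitting on the mat", ref)
print(close, paraphrase)
```

The paraphrase is a perfectly good translation but scores roughly half as well, which is exactly why systems optimized for n-gram overlap looked unbeatable under BLEU.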

reedf1•5mo ago
If there is no human-explainable reason that a model has made a prediction - if it's just a statistical blob in a multi-dimensional feature space that we cannot introspect - then perceived improvement over humans is simply overfitting. It will be extremely good at finding the median issue, or at following a decision tree more exactingly than a human. What a human can do is expand the degrees of freedom of their internal model at will, integrate out-of-sample data, and keep a natural human bias toward the individual at the expense of the median. I'd rather have that...
bicepjai•5mo ago
Someone had to say this. All models are dumb, but some are useful.
kreyenborgi•5mo ago
Versus https://predictive-optimization.cs.princeton.edu/