
Twice this week, I have come across embarrassingly bad data

https://successfulsoftware.net/2026/03/29/stop-publishing-garbage-data-its-embarrassing/
44•hermitcrab•1h ago

Comments

hermitcrab•1h ago
If you are putting out data without doing even the most basic validation, then you should be ashamed.
ramon156•1h ago
What about most Show HN projects nowadays? Sometimes the docs straight up lie, and it takes 5 minutes to figure that out. Should they also be ashamed?

What about people who don't know how their own code works? Despite it working flawlessly? I'm asking because I don't really know.

add-sub-mul-div•1h ago
This has become a spam site for AI shovelware projects that are nearly always posted by accounts with no activity here outside of self promotion.
Calazon•1h ago
> Sometimes the docs straight up lie, and it takes 5 minutes to figure that out. Should they also be ashamed?

Yes.

hermitcrab•1h ago
>Sometimes the docs straight up lie, and it takes 5 minutes to figure that out. Should they also be ashamed?

Yes. Lying is bad, even if some people are trying hard to normalise it.

>What about people who don't know how their own code works? Despite it working flawlessly?

I think that is fine, as long as you aren't making untrue claims.

akudha•55m ago
How is it fair to compare a Show HN project with official government datasets? People depend on government datasets; multi-billion dollar businesses are built on top of them. A Show HN project is typically something someone built in a weekend. They're not even remotely in the same league.

Sure it is expensive to check every number, but at least some of it can be automated and flagged for human review, no? Swapped lat/long numbers, for example.
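The automated flagging suggested above can be a few lines of code: flag implausible rows for human review rather than silently fixing or dropping them. A minimal Python sketch; the tuple layout and station names are hypothetical, not from any real dataset:

```python
# Flag rows with impossible coordinates for human review instead of
# silently "fixing" or deleting them, so the source data stays intact.
def flag_for_review(rows):
    """rows: list of (station, lat, lon) tuples.
    Returns (clean, flagged); flagged rows carry a reason string."""
    clean, flagged = [], []
    for station, lat, lon in rows:
        if not -90.0 <= lat <= 90.0:
            flagged.append((station, lat, lon, "latitude out of range"))
        elif not -180.0 <= lon <= 180.0:
            flagged.append((station, lat, lon, "longitude out of range"))
        else:
            clean.append((station, lat, lon))
    return clean, flagged

clean, flagged = flag_for_review([
    ("A", 51.5, -0.1),    # plausible UK point
    ("B", 515.0, -0.1),   # mis-scaled latitude: impossible anywhere
])
```

A check this crude won't catch a swapped lat/long pair that is still globally valid, which is why country-specific bounds help.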

subscribed•49m ago
If they publish a lie they should be ashamed, even if their lie is orders of magnitude less impactful.

And if someone publishes flawless code but has no idea how it works, it's quite clearly not their code, and they should be ashamed if they claim it is.

It's just, like, my opinion, but I like it :)

torginus•1h ago
Data and metrics are 90% of what upper management sees of your project. You might not care about them and treat them as an afterthought, but organizationally they are almost the most important thing about it.

People who don't heed this advice get to discover it for themselves (I sure did).

If you can't make the data convincing, you'll lose all trust, and nobody will do business with you.

agent_anuj•57m ago
It is not just embarrassing; it can kill your demo, project or even product, because users will look at the data first and only then at the tech behind it. If the data is wrong, it reads as if the tech doesn't work. I never took data seriously during my demos in the first 10 years of my career, and no wonder the audience rejected most of my work, even though it was backed by solid platforms.
mlaretallack•1h ago
I saw the RAC one this morning and thought I was misreading the graph, as why would the RAC publish such an obvious mistake?

I have written my own Home Assistant custom component for the UK fuel finder data, and yes, the data really is that bad.

GMoromisato•1h ago
Clean data is expensive--as in, it takes real human labor to obtain clean data.

One problem is that you can't just focus on outliers. Whatever pattern-matching you use to spot outliers will end up introducing a bias in the data. You need to check all the data, not just the data that "looks wrong". And that's expensive.

In clinical drug trials, we have the concept of SDV--Source Data Verification. Someone checks every data point against the official source record, usually a medical chart. We track the % of data points that have been verified. For important data (e.g., Adverse Events), the goal is to get SDV to 100%.

As you can imagine, this is expensive.

Will LLMs help to make this cheaper? I don't know, but if we can give this tedious, detail-oriented work to a machine, I would love it.

hermitcrab•1h ago
>Clean data is expensive--as in, it takes real human labor to obtain clean data.

Yes, data can contain subtle errors that are expensive and difficult to find. But the second error in the article was so obvious that a bright 10-year-old would probably have spotted it.

GMoromisato•34m ago
Agreed--and maybe they should have fixed it.

But sometimes the "provenance" of the data is important. I want to know whether I'm getting data straight from some source (even with errors) rather than having some intermediary make fixes that I don't know about.

For example, in the case where maybe they flipped the latitude and longitude, I don't want them to just automatically "fix" the data (especially not without disclosing that).

What they need to do is verify the outliers with the original gas station and fix the data from the source. But that's much more expensive.

chaps•25m ago
Exactly. This is a big problem with "open data". A lot goes into cleaning it up to make it publishable, which often includes removing data so that the public "doesn't get confused". Now I have to spend months and months fighting FOIA battles to get the original raw, messy data, because someone, somewhere had opinions on what "clean data" is. I'll pass -- give me the raw, messy data.
hermitcrab•25m ago
Or just omit the rows that are obviously wrong (and document the fact).
chaps•20m ago
"obviously wrong" is a never ending rabbit hole and you'll never, ever be satisfied because there will always be something "obviously wrong" with the data.

Messy data is a signal. You're wrong to omit signal.

GMoromisato•14m ago
100%. There is even signal in the pattern of errors. If you remove some errors but not others, you lose signal.
GMoromisato•16m ago
Deleting the row loses some information, such as the existence of that gas station.

A better solution is to add a field to indicate that "the row looks funny to the person who published the data". Which, I guess, is useful to someone?

But deleting data or changing data is effectively corrupting source data, and now I can't trust it.

gdulli•57m ago
Why would you give this sort of work to a machine that can't be responsibly used without checking its output anyway?
GMoromisato•50m ago
It's not obvious to me that LLMs can't be made reliable.
Phlogistique•1h ago
It's better to publish the garbage data than to not publish it, though. I would worry that complaining too much might make them decide to stop publishing it because it creates bad PR.
nick__m•58m ago
As long as the garbage data is authentic and the method used to produce it is adequately detailed, I agree with you that "it's better to publish the garbage data than to not publish it".

But fake data, or garbage data without the method, is better left unpublished!

hermitcrab•57m ago
Hard disagree on that. They just need a basic smell test before they put it out.
Tempest1981•53m ago
Agree. Maybe just add a Disclaimer.md file.
chaps•59m ago
I have mixed feelings about this. On one hand, yeah, stop publishing garbage data, but as a FOIA nerd... I'll take the data in whatever state it's in. I'm not personally going to be able to clean the data before I receive it. Does that mean I shouldn't release the unsanitized (public) data knowing that it has garbage data within? Hell no. Instead, we should learn and cultivate techniques to work with shit data. Should I attempt to clean it? Sure. But it becomes a liability problem very, very quickly.
hermitcrab•53m ago
So you expect the 1000s of people trying to use the fuel price data to each individually clean and validate it, rather than the supplier doing it?
chaps•51m ago
What...?
torginus•24m ago
What does it mean to clean the data?

Do you remove those weird implausible outliers? They're probably garbage, but are they? Where do you draw the line?

If you've established the assumption that the data collection can go wrong, how do you know the points which look reasonable are actually accurate?

Working with data like this means unknown error bars. I've had weird shit happen where I fixed the tracing pipeline, only to have the metrics people complain that they had already corrected for the errors downstream, and now, due to those corrections, the whole thing looked out of shape.

chaps•16m ago
"What does it mean to clean the data?"

This isn't possible to answer generally, but I'm sure you know that.

Look -- I've been in nonstop litigation for data through FOIA for the past ten years. During litigation I can definitely push back on messy data and I have, but if I were to do that on every little "obviously wrong" point, then my litigation will get thrown out for me being a twat of a litigant.

Again, I'd rather have the data and publish it with known gotchas.

Here's an example: https://mchap.io/using-foia-data-and-unix-to-halve-major-sou...

Should I have told the Department of Finance to fuck off with their messy data? No -- even if I want to. Instead, we learn to work with its awfulness and advocate for cleaner data. Which is exactly what happened here -- once me and others started publishing stuff about tickets data and more journalists got involved, the data became cleaner over time.

stared•57m ago
I dislike the premise. I mean, good data is wonderful.

But if institutions are expected to release clean data or nothing, it is almost always the latter.

What is important is to offer as much methodology and as many caveats as possible, even informally. There is a difference between "data covers 72% of companies registered in..." and expecting the data to be full and authoritative when it is in fact incomplete.

(Source: 10 years ago I worked a lot with official data. All data requires cleaning.)

sd9•14m ago
Agreed, pretty much all data is flawed. I still want my hands on it.
albert_e•53m ago
Concluding passage:

> Authors should have their work proof read

Agreed.

Opening passage:

> A quick plot of the latitude and longitude shows some clear outliners

"outliners"

Ouch!

hermitcrab•48m ago
OP here. Ouch indeed. I did actually get it proofread. But that was missed. I can't fire my proofreader, as we are married. ;0)

Now fixed.

rdiddly•45m ago
Not fixed at this hour
hermitcrab•38m ago
You might need to do a refresh.
alias_neo•50m ago
I was looking at that RAC chart this morning. Given it's Sunday, and I was reading before my morning coffee, I'm not ashamed to say it took me a good few seconds of zooming in and out to realise they'd used a decimal point where a comma should have been.

Easy typo to make, but seriously, does no one even take a cursory look at the charts when publishing articles like this? The chart looks _obviously_ wrong, so imagine how many are only slightly wrong and get missed.

The fuel prices one could surely be solved with a tiny bit of validation; are the coordinates even within a reasonable range? Fortunately, in the UK, it's really easy to tell which is latitude and which is longitude due to one of them being within a digit or two of zero on either side.
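The range check described here is easy to sketch: with UK latitudes roughly in 49..61 and longitudes in -9..2, a swapped pair only fits when exchanged. A Python illustration; the bounds are rough assumptions of mine, not from any official source:

```python
# Heuristic swap detector for UK coordinates. UK latitudes sit roughly
# in 49..61 degrees and longitudes in -9..2, so a (lat, lon) pair that
# only fits when exchanged was very likely entered the wrong way round.
UK_LAT = (49.0, 61.0)
UK_LON = (-9.0, 2.0)

def in_range(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

def classify(lat, lon):
    if in_range(lat, UK_LAT) and in_range(lon, UK_LON):
        return "ok"
    if in_range(lon, UK_LAT) and in_range(lat, UK_LON):
        return "likely swapped"  # fits only with lat and lon exchanged
    return "out of range"

classify(51.5, -0.1)   # central London -> "ok"
classify(-0.1, 51.5)   # same point, swapped -> "likely swapped"
classify(151.5, -0.1)  # impossible latitude -> "out of range"
```

Note this flags rather than auto-fixes: as noted elsewhere in the thread, silently correcting a suspected swap would hide the provenance of the error.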

Frank-Landry•23m ago
Did a bot write this title?
bobro•7m ago
This article assumes that there is a person with dedicated time to validate the data. Imagine you want this data and ask for it, but the government says, “sorry, we have this data, but we read an article that said we can only publish it if we spend a lot of time validating it. This data changes frequently and we don’t have a chunk of a full-time data analyst’s salary to spend on it, so we just aren’t going to publish anything. We’d rather put out nothing than embarrass ourselves, so you can’t even try to validate it yourself.”
chaps•4m ago
In fact, the government agencies will argue that they have zero legal obligation to clean the data, let alone figure out anything about it, and that they're just giving you the data as-is. This happened to me on a FOIA call where I was trying to get data from the county state's attorney. Clean vs. not-clean data is the wrong fight.