People who don't heed this advice get to discover it for themselves (I sure did)
If you can't make the data convincing, you'll lose all trust, and nobody will do business with you.
I have written my own Home Assistant custom component for the UK fuel finder data, and yes, the data really is that bad.
One problem is that you can't just focus on outliers. Whatever pattern-matching you use to spot outliers will end up introducing a bias in the data. You need to check all the data, not just the data that "looks wrong". And that's expensive.
In clinical drug trials, we have the concept of SDV (Source Data Verification). Someone checks every data point against the official source record, usually a medical chart. We track the percentage of data points that have been verified. For important data (e.g., Adverse Events), the goal is to get SDV to 100%.
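The bookkeeping side of this is simple, even if the verification itself isn't. A minimal sketch, with invented field names and records, of tracking SDV coverage per field:

```python
# Hypothetical SDV tracker: field names and records are invented for
# illustration. Each entry records whether that data point has been
# checked against the source record (e.g., the medical chart).
records = [
    {"field": "adverse_event", "verified": True},
    {"field": "adverse_event", "verified": True},
    {"field": "blood_pressure", "verified": True},
    {"field": "blood_pressure", "verified": False},
]

def sdv_coverage(records, field):
    """Percent of data points for `field` verified against the source record."""
    points = [r for r in records if r["field"] == field]
    verified = sum(r["verified"] for r in points)
    return 100.0 * verified / len(points)
```

For critical fields like adverse events, you'd want `sdv_coverage` to report 100%; lower-priority fields might be sampled instead.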
As you can imagine, this is expensive.
Will LLMs help to make this cheaper? I don't know, but if we can give this tedious, detail-oriented work to a machine, I would love it.
Yes, data can contain subtle errors that are expensive and difficult to find. But the second error in the article was so obvious that a bright 10-year-old would probably have spotted it.
But sometimes the "provenance" of the data is important. I want to know whether I'm getting data straight from some source (even with errors) rather than having some intermediary make fixes that I don't know about.
For example, in the case where maybe they flipped the latitude and longitude, I don't want them to just automatically "fix" the data (especially not without disclosing that).
What they need to do is verify the outliers with the original gas station and fix the data from the source. But that's much more expensive.
Messy data is a signal. You're wrong to omit signal.
A better solution is to add a field indicating that "the row looks funny to the person who published the data". Which, I guess, is useful to someone?
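A rough sketch of that idea, with an invented predicate and made-up rows: the raw values are left untouched, and the row is only annotated with a review flag.

```python
# Sketch only: the rows and the "looks funny" test are invented.
# The point is that source values are never mutated or deleted;
# we just attach a flag for downstream consumers.
def flag_suspect_rows(rows, looks_funny):
    """Return copies of the rows with a 'suspect' flag; never touch the source values."""
    return [dict(row, suspect=looks_funny(row)) for row in rows]

stations = [
    {"name": "A", "lat": 51.5, "lon": -0.1},
    {"name": "B", "lat": -0.1, "lon": 51.5},  # looks swapped, but we don't "fix" it
]
flagged = flag_suspect_rows(stations, lambda r: not (49 <= r["lat"] <= 61))
```

Consumers who trust the publisher's judgement can filter on the flag; everyone else still gets the data exactly as it came from the source.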
But deleting data or changing data is effectively corrupting source data, and now I can't trust it.
But fake data or garbage data without the methodology is better left unpublished!
Do you remove those weird implausible outliers? They're probably garbage, but are they? Where do you draw the line?
If you've established the assumption that the data collection can go wrong, how do you know the points which look reasonable are actually accurate?
Working with data like this means unknown error bars, and I've had weird shit happen: I fixed the tracing pipeline, and the metrics people complained that they had corrected for the errors downstream, so after those corrections the whole thing looked out of shape.
This isn't possible to answer generally, but I'm sure you know that.
Look: I've been in nonstop litigation for data through FOIA for the past ten years. During litigation I can definitely push back on messy data, and I have, but if I were to do that on every little "obviously wrong" point, my litigation would get thrown out for me being a twat of a litigant.
Again, I'd rather have the data and publish it with known gotchas.
Here's an example: https://mchap.io/using-foia-data-and-unix-to-halve-major-sou...
Should I have told the Department of Finance to fuck off with their messy data? No, even if I wanted to. Instead, we learn to work with its awfulness and advocate for cleaner data. Which is exactly what happened here: once me and others started publishing stuff about the tickets data and more journalists got involved, the data became cleaner over time.
But if institutions are expected to release clean data or nothing, it is almost always the latter.
What is important is to offer as much methodology and as many caveats as possible, even if informally. There is a difference between "data covers 72% of companies registered in..." and expecting the data to be complete and authoritative when in fact it is missing entries.
(Source: 10 years ago I worked a lot with official data. All data requires cleaning.)
> Authors should have their work proof read
Agreed.
Opening passage:
> A quick plot of the latitude and longitude shows some clear outliners
"outliners"
Ouch!
Now fixed.
Easy typo to make, but seriously, does no one even take a cursory look at the charts when publishing articles like this? The chart looks _obviously_ wrong, so imagine how many are only slightly wrong and are missed.
The fuel prices one could surely be solved with a tiny bit of validation; are the coordinates even within a reasonable range? Fortunately, in the UK, it's really easy to tell which is latitude and which is longitude due to one of them being within a digit or two of zero on either side.
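Something like this would do it. The bounding box is my own rough assumption (mainland UK spans roughly 49 to 61 degrees north and about -8 to 2 degrees longitude), so treat the exact numbers as illustrative:

```python
# Rough sanity check for UK station coordinates. The bounds are an
# approximate mainland-UK bounding box, assumed for illustration:
# latitude ~49-61 deg N, longitude ~-8 to 2 deg.
def plausible_uk_coord(lat, lon):
    return 49.0 <= lat <= 61.0 and -8.0 <= lon <= 2.0
```

A swapped pair fails immediately, because a UK latitude can never land in the longitude band and vice versa.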
Clean vs not clean data is the wrong fight.
The title was:

"Stop Publishing Garbage Data, It’s Embarrassing"

It has been changed to the rather lamer:

"Twice this week, I have come across embarassingly bad data"

?
hermitcrab•1h ago
ramon156•1h ago
What about people who don't know how their own code works? Despite it working flawlessly? I'm asking because I don't really know.
add-sub-mul-div•1h ago
Calazon•1h ago
Yes.
hermitcrab•1h ago
Yes. Lying is bad, even if some people are trying hard to normalise it.
>What about people who don't know how their own code works? Despite it working flawlessly?
I think that is fine, as long as you aren't making untrue claims.
akudha•1h ago
Sure, it is expensive to check every number, but at least some of it can be automated and flagged for human review, no? Swapped lat/long numbers, for example.
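A triage sketch along those lines, with invented bounds and labels. Nothing is silently rewritten; out-of-range rows just get routed to a human:

```python
# Sketch only: the bounding box and status strings are assumptions for
# illustration. Returns a status for human review rather than "fixing"
# the source data automatically.
def triage_coords(lat, lon):
    def in_uk(la, lo):
        # Approximate mainland-UK bounding box (assumed, not authoritative).
        return 49.0 <= la <= 61.0 and -8.0 <= lo <= 2.0
    if in_uk(lat, lon):
        return "ok"
    if in_uk(lon, lat):
        return "review: lat/long possibly swapped"
    return "review: out of range"
```

The "possibly swapped" branch is the key bit: the machine spots the likely cause, but a person (or the original station) confirms it before anything changes.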
subscribed•1h ago
And if someone publishes flawless code but has no idea how it works, it's not their code, quite clearly, and they should be ashamed if they lie that it is.
It's just, like, my opinion, but I like it :)