
Why am I not producing AI slop?

1•skeedle•2m ago•0 comments

Gerrymandering by Both Parties Is Deepening America's Divide

https://www.wsj.com/politics/us-gerrymandering-political-divide-a2a83a28
1•xqcgrek2•3m ago•0 comments

Werr

1•mersandi•4m ago•0 comments

An ancient archaeological site meets conspiracy theories – and Joe Rogan

https://www.npr.org/2025/08/09/nx-s1-5492477/gobleki-tepe-archaeology
1•geox•6m ago•0 comments

Show HN: I made a Ruby on Rails-like framework in PHP (Still in progress)

https://github.com/al3rez/tracks
1•al3rez•6m ago•0 comments

Doge-Pilled

https://www.bloomberg.com/features/2025-luke-farritor-doge/
1•bediger4000•7m ago•1 comment

Long-term exposure to outdoor air pollution linked to increased risk of dementia

https://www.cam.ac.uk/research/news/long-term-exposure-to-outdoor-air-pollution-linked-to-increased-risk-of-dementia
1•hhs•10m ago•0 comments

AGI is not coming – Yannic Kilcher

https://www.youtube.com/watch?v=hkAH7-u7t5k
2•randomgermanguy•15m ago•0 comments

New adhesive surface modeled on a remora works underwater

https://arstechnica.com/science/2025/08/new-adhesive-surface-modeled-on-a-remora-works-underwater/
2•burnt-resistor•17m ago•0 comments

Stanford to continue legacy admissions and withdraw from Cal Grants

https://www.forbes.com/sites/michaeltnietzel/2025/08/08/stanford-to-continue-legacy-admissions-and-withdraw-from-cal-grants/
3•hhs•17m ago•1 comment

Rich-syntax string formatter in TypeScript

https://github.com/vitaly-t/custom-string-formatter
1•VitalyTomilov•21m ago•1 comment

Exposing Satcom in the Sky: Aircraft Systems Vulnerable to Remote Attacks

1•hacker_might•24m ago•0 comments

Show HN: Dalle 3 AI turns words into vivid pictures fast

https://dalle-3.com
2•epistemovault•26m ago•0 comments

Countries with most GPT-5 users, esp. in advanced computation and reasoning?

2•mzk_pi•27m ago•1 comment

Local LLM Hardware in 2025: prices and token per second [video]

https://www.youtube.com/watch?v=GkTqxF_gJbg
2•bishopsmother•28m ago•0 comments

Show HN: I made a Google images clone for Pixiv (Japanese art website)

https://onegai.moe/
2•nameislonjuin•29m ago•1 comment

US-French SWOT Satellite Measures Tsunami After Quake

https://www.jpl.nasa.gov/news/us-french-swot-satellite-measures-tsunami-after-massive-quake/
1•perihelions•33m ago•0 comments

Machine learning highlights factors associated with Arabidopsis circadian clock

https://www.nature.com/articles/s41467-025-62196-w
2•bryanrasmussen•39m ago•1 comment

Constant-traffic padded and encrypted network tunnel

https://github.com/markasoftware/i405-tunnel
2•marcodiego•40m ago•0 comments

It's not detection, it's verification

https://www.clipcert.com
3•stuvinton•42m ago•1 comment

Private Welsh island with 19th century fort goes on the market

https://www.cnn.com/2025/08/08/business/thorne-island-fort-wales-scli-intl
10•makaimc•46m ago•3 comments

Yet Another LLM Rant

https://overengineer.dev/txt/2025-08-09-another-llm-rant/
3•sohkamyung•46m ago•1 comment

Show HN: Kimi K2 – Powerful Open-Source AI

https://kimik2ai.app
8•NoScopeNinja•47m ago•2 comments

Yes, the referee might be biased. Discipline in English football

https://blog.engora.com/2025/08/yes-referee-might-be-biased-discipline.html
3•Vermin2000•47m ago•1 comment

R0ML's Ratio

https://blog.glyph.im/2025/08/r0mls-ratio.html
2•zdw•50m ago•0 comments

A subtle bug with Go's errgroup

https://gaultier.github.io/blog/subtle_bug_with_go_errgroup.html
4•broken_broken_•51m ago•0 comments

Ohyaml.wtf: How good is your knowledge of YAML?

https://www.ohyaml.wtf/
2•thunderbong•52m ago•0 comments

Seeking

1•prairieroadent•52m ago•0 comments

The hard steps model and whole Earth-system transitions

https://royalsocietypublishing.org/doi/full/10.1098/rstb.2024.0105
2•Luc•53m ago•1 comment

Virtual Cell Challenge – win USD 100k

https://virtualcellchallenge.org/
1•Tdsone•55m ago•0 comments

I prefer human-readable file formats

https://adele.pollux.casa/check-human.php?redirect=%2Fgemlog%2F2025-08-04_why_I_prefer_human-readble_file_formats.gmi
51•Bogdanp•3h ago

Comments

rickcarlino•3h ago
Do you have the Gemini:// URL? I’m getting a URL resolution error.
rizky05•2h ago
gemini://adele.pollux.casa/gemlog/2025-08-04_why_I_prefer_human-readble_file_formats.gmi
JdeBP•1h ago
Given that the author mentions CSV and text table formats, the article's list of the "entire Unix toolchain" is significantly impoverished not only by the lack of ex (which is usefully scriptable) but by the lack of mlr.

* https://miller.readthedocs.io/

vis/unvis are fairly important tools for those text tables, too.

Also, FediVerse discussion: https://social.pollux.casa/@adele/statuses/01K1VA9NQSST4KDZP...

hebocon•1h ago
Wow, I've never heard of 'mlr' before. Looks like a synthesis of Unix tools, jq, and others? Very useful - hopefully it's packaged everywhere for easy access.
IanCal•1h ago
> Unlike binary formats or database dumps, these files don't hide their meaning behind layers of abstraction. They're built for clarity, for resilience, and for people who like to know what's going on under the hood.

CSV files hide their meaning in external documentation or someone's head, are extremely unclear in many cases (is this a number or a string? A date?), and are extremely fragile when it comes to people editing them in text editors. They entirely lack checks and verification at the most basic level, and worse still, they're often but not always perfectly line-based. Many tools then work fine until they completely break your file and you won't even know. Until I get the file and tell you, I guess.

I’ve spent years fixing issues introduced by people editing them like they’re text.

If you’ve got to use tools to not completely bugger them then you might as well use a good format.

burnt-resistor•1h ago
They're standardized[0], so it's only stupid humans screwing them up.

Maybe you need a database or an app rather than flat files.

0. https://www.ietf.org/rfc/rfc4180.txt

IanCal•56m ago
That came long after CSV files started being used, and many parsers don't follow the spec. Even if they do, editing the file manually can easily and silently break it - my criticisms apply entirely to files that follow the new spec. The wide range of ways people make CSVs is a whole other thing I've spent years fixing.

It's not about the stupidity of the humans, and if it were, then planning for "no stupid people" is even stupider than the people messing up the files.

> Maybe you need a database or an app rather than flat files.

Flat files are great. What’s needed are good file formats.

burnt-resistor•53m ago
TOML

What's the problem?

IanCal•52m ago
What are you trying to ask? I don’t understand. I’m not talking about toml.
burnt-resistor•46m ago
I gave you a good text file format. You're acting like there are no good file formats. Either invent a domain-specific one, use a standard one, or use a different modality rather than complain that a utopia you won't bother to create doesn't exist.
integralid•15m ago
But TOML is not a good file format. Quite the opposite actually.

https://hitchdev.com/strictyaml/why-not/toml/

Someone•55m ago
> They're standardized[0]

From that article:

“This memo […] does not specify an Internet standard of any kind”

and

“Interoperability considerations:

Due to lack of a single specification, there are considerable differences among implementations. Implementors should "be conservative in what you do, be liberal in what you accept from others" (RFC 793 [8]) when processing CSV files”

burnt-resistor•45m ago
Are you AI? I was replying to a comment, not the article.

Also, you're quoting me to myself: https://news.ycombinator.com/item?id=44837879

fireflash38•1h ago
If you're reading in data, you need to parse and verify it anyway.
IanCal•56m ago
Which you might not be able to do after it’s been broken silently.
fireflash38•23m ago
That's still an issue with binary files too, and you can't even look at them to fix it.
refactor_master•1h ago
Clearly there's a very real need for binary data formats, or we wouldn't have them. For one, they're much more space-efficient. Does the author know how much storage cost in 1985? Or how slow computers were?

If I time traveled back to 1985 and told corporate to adopt CSV because it’d be useful in 50 years when unearthing old customer records I’d be laughed out of the cigar lounge.

burnt-resistor•1h ago
I guess you've never used UNIX or understood the philosophy.

https://en.wikipedia.org/wiki/Unix_philosophy

There already exist a bazillion binary serialization formats: protobufs, thrift, msgpack, capnproto, etc., but these all suffer from human inaccessibility. Generally, they should be used only when performance becomes a severe limiting factor, never before; otherwise it's likely a sign of premature optimization.

tliltocatl•8m ago
It's often too late to overhaul your systems when performance becomes a severe limiting factor. By that point, things like data formats are already set in stone. The whole "premature optimization" line was originally about peephole stuff, not architecture-defining concerns, and it's really sad to see it misapplied as "let's store everything as JSON and use O(n²) everywhere and hopefully it will be someone else's problem".
graemep•48m ago
Except there are many things for which we used human-readable formats in the 1980s and use binary formats now - HTTP headers, for example.

CSV was definitely in wide use back then.

Text formats are compressible.

self_awareness•17m ago
Text formats are compressible because they waste a lot of space to encode data. Instead of the full space of 256 values per byte, they use maybe 100.
graemep•9m ago
I assumed that is common knowledge here. The point is that you need to take that into account when discussing storage requirements.
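
For a rough sense of the magnitude, a minimal sketch (Python stdlib; the records are invented and exact ratios vary by data and compressor):

    import gzip, json

    # Repetitive text, as serialized records tend to be.
    records = [{"id": i, "name": f"user{i}", "active": True} for i in range(1000)]
    text = json.dumps(records).encode("utf-8")

    packed = gzip.compress(text)
    print(len(text), len(packed))
    # The repeated keys and the narrow byte alphabet are exactly why
    # the compressed copy comes out many times smaller.
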
bregma•1h ago
I journeyed from fancy commercial bookkeeping systems that changed data formats every few years (with no useful migration) to GNU Cash and finally to Plain-Text Accounting. I can finally get the information I need with easy backups (through VCS) and flexibility (through various tools that transform the data). The focus is on content, not tools or presentation or product.

When I write I write text. I can transform text using various tools to provide various presentations consumable through various products. The focus is on content, not presentation, tools, or product.

I prefer human-readable file formats, and that has only been reinforced over more than 4 decades as a computer professional.

mxmlnkn•52m ago
I concur with most of these arguments, especially about longevity. But this only applies to smallish files like configurations, because I don't agree with the last paragraph regarding efficiency.

I have had to work with large 1 GB+ JSON files, and it is not fun. Amazing projects exist, such as jsoncons for streaming JSON and simdjson for parsing JSON with SIMD, but as far as I know, the latter still does not support streaming and even has an open issue for files larger than 4 GiB. So you cannot have streaming for memory efficiency and SIMD parsing for computational efficiency at the same time. You want streaming because holding the whole JSON in memory is wasteful and sometimes not even possible. JSONL tries to change the format to fix that, but now you have another format that you need to support.

I was also contemplating the mentioned formats for another project, but they are hardly usable when you need to store binary data, such as images, compressed data, or simply arbitrary data. Storing binary data as base64 strings seems wasteful. Random access into these files is also an issue, depending on the use case. Sometimes it would be a nice feature to jump over some data, but for JSON, you cannot do that without parsing everything in search of the closing bracket or quotes, accounting for escaped brackets and quotes, and nesting.
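
A minimal sketch of the JSONL trade-off mentioned above (Python stdlib; file names are hypothetical): one record per line keeps memory flat and makes skipping cheap, at the cost of yet another format to support.

    import json

    # A monolithic array forces everything into memory at once:
    #   data = json.load(open("big.json"))
    #
    # With JSONL, each line is an independent document:
    def stream_records(path):
        with open(path, encoding="utf-8") as f:
            for line in f:
                if line.strip():              # tolerate blank lines
                    yield json.loads(line)

    # Hypothetical usage, on a file far larger than RAM:
    #   hits = sum(1 for rec in stream_records("big.jsonl") if rec.get("ok"))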

andreypopp•24m ago
try clickhouse-local, it's amazing how it can crunch JSON/TSV or whatever at great speed
mriet•50m ago
I can understand this for "small" data, say less than 10 MB.

In bioinformatics, basically all of the file formats are human-readable/text-based. And file sizes range between 1-2 MB and 1 TB. I regularly encounter 300-600 GB files.

In this context, human-readable files are ridiculously inefficient, on every axis you can think of (space, parsing, searching, processing, etc.). It's a GD crime against efficiency.

And at that scale, "readable" has no value, since reading the file would take you more than 10 lifetimes.

graemep•42m ago
I do not think the argument is that ALL data should be in human readable form, but I think there are far more cases of data being in a binary form when it would be better human readable. Your example of a case where it is human readable when it should be binary is rarer for most of us.

In some cases human readable data is for interchange and it should be processed and queried in other forms - e.g. CSV files to move data between databases.

An awful lot of data is small - and these days I think you can say small is quite a bit bigger than 10 MB.

Quite a lot of data that is extracted from a large system would be small at that point, and would benefit from being human readable.

The benefit of data being human readable is not necessarily that you will read it all, but that it is easier to read bits that matter when you are debugging.

codr7•48m ago
I'll take sexprs over CSV/JSON/YAML/XML any day.
ape4•45m ago
Let's hear it for RTF for documents
adregan•31m ago
Are there any binary formats that include the specification in the format itself?
huhtenberg•18m ago
https://en.wikipedia.org/wiki/ASN.1
xandrius•16m ago
Don't most binary formats have some specification somewhere (either private or public)?

Unless someone just decided to shove random stuff in binary mode and call it a day?

kamatour•28m ago
Readable files are great… until they’re 1TB and you just want to cry.
LoganDark•26m ago
To be fair, nothing's great when I want to cry.
qiine•19m ago
1TB of perfectly readable, human despair.
self_awareness•22m ago
I'm not sure the author knows much about binary formats.

Binary formats are binary for a reason. Speed of interpretation is one. Memory usage is another. Mapping a file directly into memory and using it in place is another. Binary formats can make assumptions about the system's memory page size, and they can store internal offsets to make incremental reading faster. None of this is offered by text formats.

Also, the claim that text formats can always be modified is wrong. Nothing can be changed if we introduce checksums inside a text format. Likewise, if we digitally sign a file, nothing can be changed despite the fact that it's a text format.

Also, comparing CSV files to an internal database binary format? It's like comparing a book cover to the ERP system of a library; it's comparing two completely different things.
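
The internal-offsets point is easy to illustrate with a sketch (Python stdlib; the 12-byte record layout is invented): fixed-size binary records let you seek straight to record N without touching the bytes before it, which no text format allows.

    import struct

    REC = struct.Struct("<Id")    # invented layout: uint32 id + float64 value

    def write_records(path, rows):
        with open(path, "wb") as f:
            for rid, val in rows:
                f.write(REC.pack(rid, val))

    def read_nth(path, n):
        # Seek directly to record n; nothing before it is parsed.
        with open(path, "rb") as f:
            f.seek(n * REC.size)
            return REC.unpack(f.read(REC.size))

    write_records("records.bin", [(i, i * 0.5) for i in range(1000)])
    print(read_nth("records.bin", 123))    # (123, 61.5)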

Too•22m ago
Let's say that, hypothetically, one were to disagree with this. What would be the best alternative format? One that has ample tooling for editing and diffing, as though it were text, yet stores things more efficiently.

Most of the arguments presented in TFA are about openness, which can still be achieved with standard binary formats and a schema. Hence the problem left to solve is accessibility.

I'm thinking of something like Parquet, protobuf, or SQLite. Despite their popularity, they still aren't trivial for anyone to edit.

mschwaig•13m ago
Human-readability was one of the aspects that I enjoyed about using CCL, the Categorical Configuration Language (https://chshersh.com/blog/2025-01-06-the-most-elegant-config...), in one of my projects recently.

It saves you from escaping stuff inside multiline strings by using meaningful whitespace.

What I did not like so much about CCL is that it leaves a bunch of stuff underspecified. You can make lists and comments with it, but YOU have to decide how.

whobre•7m ago
Even "human-readable" formats are only readable if you have proper tools - i.e. editors or viewers.

If a binary file has a well-known format and tools available to view/edit it, I see zero problems with it.

kjellsbells•4m ago
Ease of reading, comprehension, manipulation, and short- and long-term retrieval are not the same problems. All file formats are bad at at least one of these.

Given an arbitrary stream of bytes, readability only means the human can inspect the file. We say "text is readable" but that's really only because all our tooling for the last sixty years speaks ASCII and we're very US-centric. Pick up a text file from 1982 and it could be unreadable (EBCDIC, say). Time to break out dd and cross your fingers.

Comprehension breaks down very quickly beyond a few thousand words. No geneticist is loading up a gig of CTAGT... and keeping that in their head as they whiz up and down a genome. Humans have a working set size.

Short term retrieval is excellent for text and a PITA for everything else. Raise your hand if you've gotten a stream of bytes, thrown file(1) at it, then strings(1), and then resorted to od or picking through the bytes.

Long term retrieval sucks for everyone, even text files. After all, a string of bytes has no intrinsic meaning except what the operating system and the application give it. So who knows if people in 2075 will recognise "48 65 6C 6C 6F 20 48 4E 21"?
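
That last point is easy to demonstrate in a sketch (Python stdlib; cp500 is one of several EBCDIC code pages Python happens to ship):

    raw = bytes.fromhex("48656C6C6F20484E21")

    # Read as ASCII, the convention of 2025:
    print(raw.decode("ascii"))    # Hello HN!

    # The same bytes read as EBCDIC (code page 500) come out as
    # accented letters and control codes: the meaning lives in the
    # convention, not in the bytes.
    print(raw.decode("cp500"))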