frontpage.

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•3m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•6m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•9m ago•1 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•14m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•29m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•36m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•36m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•39m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•41m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•51m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•52m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•57m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•1h ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•1h ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•1h ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•1h ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
4•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•2h ago•0 comments

Peeking Inside Gigantic Zips with Only Kilobytes

https://ritiksahni.com/blog/peeking-inside-gigantic-zips-with-only-kilobytes/
33•rtk0•3mo ago

Comments

rtk0•3mo ago
In this blog post, I wrote about the architecture of a ZIP file and how we can leverage HTTP range requests to read individual files from an archive, in-browser, without downloading or decompressing the whole thing.
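
As a rough sketch of the idea (not the article's actual code; it assumes a server that honors Range requests and the standard end-of-central-directory layout, and the helper name is made up):

  // Sketch: locate a ZIP's end-of-central-directory (EOCD) record by fetching
  // only the tail of the archive with an HTTP Range request.
  const EOCD_SIG = 0x06054b50;    // "PK\x05\x06"
  const MAX_TAIL = 65536 + 22;    // max comment length (64 KiB) + fixed EOCD size

  async function fetchCentralDirectoryOffset(url: string): Promise<number> {
    // Ask only for the last bytes of the archive (suffix range).
    const res = await fetch(url, { headers: { Range: `bytes=-${MAX_TAIL}` } });
    const tail = new Uint8Array(await res.arrayBuffer());
    const view = new DataView(tail.buffer);

    // Scan backwards for the EOCD signature.
    for (let i = tail.length - 22; i >= 0; i--) {
      if (view.getUint32(i, true) === EOCD_SIG) {
        // Bytes 16-19 of the EOCD hold the absolute offset of the central directory.
        return view.getUint32(i + 16, true);
      }
    }
    throw new Error("EOCD record not found");
  }

With that offset (and the central directory size at bytes 12-15 of the EOCD), one more Range request fetches the directory listing without touching the rest of the archive.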
gildas•3mo ago
For an implementation in a library, you can use HttpRangeReader [1][2] in zip.js [3] (disclaimer: I am the author). It's a solid feature that has been in the library for about 10 years.

[1] https://gildas-lormeau.github.io/zip.js/api/classes/HttpRang...

[2] https://github.com/gildas-lormeau/zip.js/blob/master/tests/a...

[3] https://github.com/gildas-lormeau/zip.js
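
As an illustration of what that looks like in practice, a minimal sketch based on the zip.js API documented in the links above (the helper name is invented; check the docs for exact signatures):

  import * as zip from "@zip.js/zip.js";

  // Sketch: list a remote ZIP's entries and extract one member via HTTP Range
  // requests, using zip.js's HttpRangeReader.
  async function readRemoteEntry(url: string, name: string): Promise<string> {
    const reader = new zip.ZipReader(new zip.HttpRangeReader(url));
    const entries = await reader.getEntries();           // fetches only the central directory
    const entry = entries.find((e) => e.filename === name);
    if (!entry || !entry.getData) throw new Error(`${name} not found in archive`);
    const text = await entry.getData(new zip.TextWriter()); // fetches only that member's bytes
    await reader.close();
    return text;
  }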

toomuchtodo•3mo ago
Based on your experience, is ZIP the optimal archive format for long-term digital archival in object storage if the use case calls for reading archives via HTTP for scanning and cherry-picking? Or is there a more optimal archive format?
gildas•3mo ago
Unfortunately, I will have difficulty answering your question because my knowledge is limited to the zip format. In the use case presented in the article, I find that the zip format meets the need well. Generally speaking, in the context of long-term archiving, its big advantage is also that there are thousands of implementations for reading/writing zip files.
duskwuff•3mo ago
ZIP isn't a terrible format, but it has a couple of flaws and limitations which make it a less than ideal format for long-term archiving. The biggest ones I'd call out are:

1) The format has limited and archaic support for file metadata - e.g. file modification times are stored as a MS-DOS timestamp with a 2-second (!) resolution, and there's no standard system for representing other metadata.

2) The single-level central directory can be awkward to work with for archives containing a very large number of members.

3) Support for 64-bit file sizes exists but is a messy hack.

4) Compression operates on each file as a separate stream, reducing its effectiveness for archives containing many small files. The format does support pluggable compression methods, but there's no straightforward way to support "solid" compression.

5) There is technically no way to reliably identify a ZIP file, as the end of central directory record can appear at any location near the end of the file, and the file can contain arbitrary data at its start. Most tools recognize ZIP files by the presence of a local file header signature at the start ("PK\x03\x04"), but that's not reliable.
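
To make point 1 concrete, here is a small sketch of how the 16-bit MS-DOS date/time fields in ZIP headers decode (field packing per the ZIP appnote; the function name is illustrative):

  // Sketch: decode the MS-DOS date and time words stored in ZIP headers.
  // Seconds are stored divided by two, hence the 2-second resolution.
  function dosDateTimeToDate(dosDate: number, dosTime: number): Date {
    const seconds = (dosTime & 0x1f) * 2;         // bits 0-4: seconds / 2
    const minutes = (dosTime >> 5) & 0x3f;        // bits 5-10: minutes
    const hours = (dosTime >> 11) & 0x1f;         // bits 11-15: hours
    const day = dosDate & 0x1f;                   // bits 0-4: day of month
    const month = ((dosDate >> 5) & 0x0f) - 1;    // bits 5-8: month (1-12)
    const year = ((dosDate >> 9) & 0x7f) + 1980;  // bits 9-15: years since 1980
    return new Date(year, month, day, hours, minutes, seconds);
  }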

Lammy•3mo ago
> there's no straightforward way to support "solid" compression.

I do it by ignoring ZIP's native compression entirely, using store-only ZIP files and then compressing the whole thing at the filesystem level instead.

Here's an example comparison of the same WWW site rip in a DEFLATE ZIP, in a store-only ZIP with zstd filesystem compression, in a tar with the same zstd filesystem compression (identical size, but less useful for seeking due to the lack of a trailing directory versus ZIP), and finally the raw size pre-zipping:

  982M preserve.mactech.com.deflate.zip
  408M preserve.mactech.com.store.zip
  410M preserve.mactech.com.tar
  3.8G preserve.mactech.com


  [Lammy@popola] zfs get compression spinthedisc/Backups/WWW
  NAME                     PROPERTY     VALUE           SOURCE
  spinthedisc/Backups/WWW  compression  zstd            local

This probably wouldn't help GP with their need for HTTP seeking since their HTTP server would incur a decompress+recompress at the filesystem boundary.
nicman23•3mo ago
lool why use zip then anyways? put them in a folder
Lammy•3mo ago
It's for when you have a very large number of mostly-identical files, like web pages with consistent header and footer. If 408MiB versus 3.8GiB is a meaningless difference to you then sure don't bother with compression, but why I want it should be very obvious to most people here.
nicman23•3mo ago
you completely missed what i asked you but ok
Lammy•3mo ago
I don't think I did, but please explain :)

The last example in my list of four file sizes is them in a folder. Filesystem compression works at the file level, so you have to turn many-almost-identical-files into one file in order to benefit from it. ZFS does have block-level deduplication, but that's its own can of worms that shouldn't be turned on flippantly due to the resource requirements and `recordsize` tuning needed to really benefit from it.

nicman23•3mo ago
you do not need dedup just use reflinks for everything. if that workflow does not work then eh i understand why you would use zips

although zfs dedup is probably better in 2025

gildas•3mo ago
FYI, zip.js has no issues with 1 (it can be fixed with standard extra fields), 3 (zip64 support), and 5 (you cannot have more than 64K of comment data at the end of the file).
duskwuff•3mo ago
With regard to the first two - that's good for zip.js, but the problem is that support for those features isn't universal. There's been a lot of fragmentation over the last 36 years (!).

As far as the last (file type detection) goes, the generally agreed upon standard is that file formats should be "sniffable" by looking for a signature in the file's header - ideally within the first few bytes of the file. Having to search through 64 KB of the file's end for a signature is a major departure from that pattern.

xg15•3mo ago
This is really cool! Could also make a useful standalone command line tool.

I think the general pattern - using the range header + prior knowledge of a file format to only download the parts of a file that are relevant - is still really underutilized.

One small problem I see is that a server that does not support range requests would just try to send you the entire file in the first request, I think.

So maybe doing a preflight HEAD request first to see if the server sends back Accept-Ranges could be useful.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Ran...
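
A rough sketch of that preflight check (hypothetical helper; note that some servers honor Range without advertising Accept-Ranges, so the header is only a hint):

  // Sketch: ask the server whether it advertises byte-range support before
  // relying on partial downloads; fall back to a full download otherwise.
  async function supportsRangeRequests(url: string): Promise<boolean> {
    const res = await fetch(url, { method: "HEAD" });
    return res.headers.get("Accept-Ranges") === "bytes";
  }

Checking that the first Range response actually comes back with status 206 (rather than 200 plus the full body) is another way to detect the same thing.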

xp84•3mo ago
How common is it in practice today to not support ranges? I remember back in the early days of broadband (c. 2000), when having a Download Manager was something most nerds endorsed, that most servers then supported partial downloads. Aside from toy projects, has anyone encountered a server which didn't allow ranges (unless specifically configured to forbid it)?
xg15•3mo ago
I'd guess everything where support would have to be manually implemented.

For static files served by CDNs or "established" HTTP servers, I think support is pretty much a given (though e.g. Python's FastAPI only got support in 2020 [1]), but for anything dynamic, I doubt many devs would go through the trouble of implementing support if it wasn't strictly necessary for their use case.

E.g. the URL may point to a service endpoint that loads the file contents from a database or blob storage instead of the file system. Then the service would have to implement range support itself and translate ranges into the necessary storage/database calls (if those exist), etc. That's some effort you have to put in.

Even for static files, there may be reverse proxies in front that (unintentionally) remove the support again. E.g. [2]

[1] https://github.com/Kludex/starlette/issues/950

[2] https://caddy.community/t/cannot-seek-further-in-videos-usin...
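
To give a sense of the effort involved, a minimal Node-style sketch of a dynamic endpoint translating a single "bytes=start-end" range into a slice of a backing blob (loadBlobFromStorage is a made-up stand-in for a database or blob-storage call; real code also needs suffix ranges, multi-range, If-Range, and bounds checks):

  import http from "node:http";

  // Hypothetical backend lookup, standing in for a database or blob-storage call.
  async function loadBlobFromStorage(path: string): Promise<Buffer> {
    return Buffer.from(`contents for ${path}`); // placeholder data
  }

  http.createServer(async (req, res) => {
    const blob = await loadBlobFromStorage(req.url ?? "/");
    const match = /^bytes=(\d+)-(\d*)$/.exec(req.headers.range ?? "");
    if (!match) {
      // No (or unsupported) Range header: serve the whole resource.
      res.writeHead(200, { "Content-Length": blob.length, "Accept-Ranges": "bytes" });
      res.end(blob);
      return;
    }
    const start = Number(match[1]);
    const end = match[2] ? Number(match[2]) : blob.length - 1;
    res.writeHead(206, {
      "Content-Range": `bytes ${start}-${end}/${blob.length}`,
      "Content-Length": end - start + 1,
      "Accept-Ranges": "bytes",
    });
    res.end(blob.subarray(start, end + 1)); // send only the requested slice
  }).listen(8080);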

jeffrallen•3mo ago
Here are the results of my investigation into the same question:

https://blog.nella.org/2016/01/17/seeking-http/

(Originally written for Advent of Go.)

rtk0•3mo ago
Lovely. I had so much fun exploring and writing about this topic. Thanks for sharing.
HPsquared•3mo ago
7-zip does this. You can see it if you open (to view) a large ZIP file on a slow network drive. There's no way it is downloading the whole thing. You can also extract single files from the ZIP with only a little traffic.
dividuum•3mo ago
Would be surprised if that’s not how basically all tools behave, as I expect them all to seek to the central directory and to the referenced offset of individual files when extracting. Doesn’t really make a difference if that’s across a network file system or a local disc.
aeblyve•3mo ago
This is also quite easy to do with .tar files, not to be confused with .tar.gz files.
dekhn•3mo ago
tar does not have an index.
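
Because tar has no trailing index, "peeking" remotely means walking the 512-byte headers one Range request at a time. A rough sketch for an uncompressed ustar archive (helper name invented; no handling of extension headers):

  // Sketch: list a remote tar's member names by hopping from header to header.
  // Each ustar header is 512 bytes; the name is at offset 0 and the
  // octal ASCII size field is at offset 124.
  async function listRemoteTar(url: string): Promise<string[]> {
    const names: string[] = [];
    let offset = 0;
    for (;;) {
      const res = await fetch(url, {
        headers: { Range: `bytes=${offset}-${offset + 511}` },
      });
      const header = new Uint8Array(await res.arrayBuffer());
      const name = new TextDecoder().decode(header.subarray(0, 100)).replace(/\0.*$/, "");
      if (!name) break; // a zero-filled block marks the end of the archive
      names.push(name);
      const size = parseInt(new TextDecoder().decode(header.subarray(124, 136)), 8);
      offset += 512 + Math.ceil(size / 512) * 512; // skip header plus padded member data
    }
    return names;
  }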
Lammy•3mo ago
> That question took me into the guts of the ZIP format, where I learned there’s a tiny index at the end that points to everything else.

Tangential, but any Free Software that uses `shared-mime-info` to identify files (any of your GNOMEs, KDEs, etc.) is unable to correctly identify ZIP files by their EOCD, due to the lack of an accepted syntax for defining search patterns based on negative file offsets. Please show your support on this issue if you would also like to see it resolved: https://gitlab.freedesktop.org/xdg/shared-mime-info/-/issues... (linking to my own comment, so no, this is not brigading)

Anything using `file(1)` does not have this problem: https://github.com/file/file/blob/280e121/magic/Magdir/zip#L...

silasb•3mo ago
I've been looking at this for gzipped files as well. There is a Rust solution that looks interesting called https://docs.rs/indexed_deflate/latest/indexed_deflate/. My goal is to be able to index MySQL dump files by table boundaries.
dabinat•3mo ago
I wrote a Rust command-line tool to do this for internal use in my SaaS. The motivation was to be able to index the contents of zip files stored on S3 without incurring significant egress charges. Is this something that people would generally find useful if it was open-sourced?
rtk0•3mo ago
Yes, the motivation to explore was something similar. I was curious if downloading ZIP files could be made more efficient over the web.
saulpw•3mo ago
Here's my Python library that does the same[0]. And it's incorporated into VisiData so you can view a .csv from within a .zip file over HTTP without downloading the whole .zip file.

[0] https://github.com/saulpw/unzip-http/

rtk0•3mo ago
Lovely! Thanks for sharing. I had so much fun learning about ZIP and writing the blog post.
jacknews•3mo ago
My 16-year-old son did exactly this over the last week as part of his Rust Minecraft mod manager, using HTTP range requests to get the file length, then the directory, then individual file data.

I'll dig up a link.