
7-Zip for Windows can now use more than 64 CPU threads for compression

https://www.7-zip.org/history.txt
166•doener•2d ago

Comments

avidiax•9h ago
Why was there a limitation on Windows? I can't find any such limit for Linux.
lmm•9h ago
Seems like this is a general Windows thing per https://learn.microsoft.com/en-us/windows/win32/procthread/p... - applications that want to run on more than 64 CPUs need to be written with dedicated support for doing so.
lofties•9h ago
Windows has a concept of processor groups, that can have up to 64 (hardware) threads. I assume they updated 7zip to support multiple processor groups.
dwattttt•8h ago
The linked Processor Group documentation also says:

> Applications that do not call any functions that use processor affinity masks or processor numbers will operate correctly on all systems, regardless of the number of processors.

I suspect the limitation 7zip encountered was in how it checked how many logical processors a system has, to determine how many threads to spawn. GetActiveProcessorCount can tell you how many logical processors are on the system if you pass ALL_PROCESSOR_GROUPS, but that API was only added in Windows 7 (that said, that was more than 15 years ago, they probably could've found a moment to add and test a conditional call to it).
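A rough POSIX analogue of that distinction (an illustrative sketch, not 7-Zip's actual code): on Linux, Python exposes both the system-wide logical CPU count and the set of CPUs the current process is actually allowed to run on, and conflating the two is the same class of bug as counting CPUs through a single group's affinity mask.

```python
import os

# Total logical processors visible to the OS - roughly analogous to
# GetActiveProcessorCount(ALL_PROCESSOR_GROUPS) on Windows.
total = os.cpu_count()

# Processors the *current process* may run on (a Linux-only call) - roughly
# analogous to reading one processor-affinity mask, which on Windows is
# capped at a single 64-CPU processor group.
usable = len(os.sched_getaffinity(0))

print(f"logical CPUs: {total}, usable by this process: {usable}")
```

A thread-pool sizer built on the affinity-mask view will never spawn more workers than the mask can express, which mirrors the 64-thread ceiling being discussed.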

dspillett•8h ago
It isn't just detecting the extra logical processors, you have to do work to utilise them. From the linked text:

"If there are more than one processor group in Windows (on systems with more than 64 cpu threads), 7-Zip distributes running CPU threads across different processor groups."

The OS does not do that for you under Windows. Other OSs handle that many cores differently.

> more than 15 years ago, they probably could've found a moment to add and test a conditional call to it

I suspect it hasn't been an issue much at all until recently. Any single block of data worth spinning up that many threads to compress is going to be very large, and you don't want to split it into chunks that are too small, or you lose some benefit of the dynamic compression dictionary (sharing that between threads would add a lot of inter-thread coordination work, killing any performance gain even if the threads are running local enough on the CPU to share cache). Compression is not an inherently parallelizable task, at least not “embarrassingly” so like some processes.

Even when you do have something to compress that would benefit for more than 64 separate tasks in theory, unless it is all in RAM (or on an incredibly quick & low latency drive/array) the process is likely to be IO starved long before it is compute starved, when you have that much compute resource to hand.

Recent improvements in storage options & CPUs (and the bandwidth between them) have presumably pushed the occurrences of this being worthwhile (outside of artificial tests) from “practically zero” to “near zero, but it happens”, hence the change has been made.

Note that two or more 7-zip instances working on different data could always use more than 64 threads between them, if enough cores to make that useful were available.
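The dictionary-loss effect described above is easy to measure: the same repetitive buffer compresses far better as one stream than as many independently compressed chunks. A minimal sketch with Python's zlib (the numbers are illustrative, not 7-Zip's actual scheme):

```python
import zlib

# Highly repetitive payload, where a long-lived dictionary pays off.
data = b"the quick brown fox jumps over the lazy dog. " * 2000

# One stream: back-references can span the whole input.
whole = len(zlib.compress(data, 9))

# Independent chunks, as a naive parallel split would produce:
# each chunk restarts with an empty dictionary.
chunks = [data[i:i + 512] for i in range(0, len(data), 512)]
split = sum(len(zlib.compress(c, 9)) for c in chunks)

print(whole, split)  # the chunked total is noticeably larger
```

Real parallel compressors pick chunk sizes large enough that this overhead stays in the low single digits.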

dwattttt•6h ago
Are you sure that if you don't attempt to set any affinities, Windows won't schedule 64+ threads over other processor groups? I don't have any system handy that'll produce more than 64 logical processors to test this, but I'd be surprised if Windows' scheduler won't distribute a process's threads over other processor groups if you exceed the number of cores in the group it launches into.

The referenced text suggests applications will "work", but that isn't really explicit.

Dylan16807•6h ago
They're either wrong or thinking about windows 7/8/10. That page is quite clear.

> starting with Windows 11 and Windows Server 2022 the OS has changed to make processes and their threads span all processors in the system, across all processor groups, by default.

> Each process is assigned a primary group at creation, and by default all of its threads' primary group is the same. Each thread's ideal processor is in the thread's primary group, so threads will preferentially be scheduled to processors on their primary group, but they are able to be scheduled to processors on any other group.

monocasa•3h ago
I mean, it seems it's quite clear that a single process and all of its threads will just be assigned to a single processor group, and it'll take manual work for that process to use more than 64 cores.

The difference is just that processes will be assigned a processor group more or less randomly by default, so they'll be balanced on the process level, but not the thread level. Not super helpful for a lot of software systems on windows which had historically preferred threads to processes for concurrency.

Dylan16807•3h ago
> it'll take manual work for that process to use more than 64 cores.

No it won't.

monocasa•3h ago
It absolutely will. Your process is only assigned a single processor group at process creation time. The only difference now is that it's by default assigned a random processor group rather than inheriting the parent's. For processes that don't require >64 cores, this means better utilization at the system level. However, each process is still assigned <=64 cores by default.

That's literally why 7-zip is announcing completion of that manual work.

Dylan16807•2h ago
The 7zip code needed to change because it was counting cores by looking at affinity masks, and that limits it to 64.

It also needed to change if you want optimal scheduling, and it needed to change if you want it to be able to use all those cores on something that isn't windows 11.

But for just the basic functionality of using all the cores:

> Starting with Windows 11 and Windows Server 2022, on a system with more than 64 processors, process and thread affinities span all processors in the system, across all processor groups, by default

That's documentation for a single process messing with its affinity. They're not writing that because they wrote a function to put different processes on different groups. A single process will span groups by default.

Dylan16807•6h ago
That depends on what format you're using. Zip compresses every file separately. Bzip and zstd have pretty small maximum block sizes and gzip doesn't gain much from large blocks anyway. And even when you're making large blocks, you can dump a lot of parallelism into searching for repeat data.
silon42•8h ago
Maybe WaitForMultipleObjects limit of 64 (MAXIMUM_WAIT_OBJECTS) applies?

An ugly limitation on an API that initially looks superior to Linux equivalents.

monocasa•8h ago
A lot of synchronization primitives in the NT kernel are based on a register width bit mask of a CPU set, so each collection of 64 hardware threads on 64 bit systems kind of runs in its own instance of the scheduler. It's also unfortunately part of the driver ABI since these ops were implemented as macros and inline functions.

Because of that, transitioning a software thread to another processor group is a manual process that has to be managed by user space.
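The register-width bitmask mentioned above is why the limit is 64 at all: a KAFFINITY-style CPU set stores one bit per logical processor in a single machine word. A toy sketch (hypothetical helper, not the NT kernel's actual code):

```python
# A KAFFINITY-style CPU set: one bit per logical processor packed into a
# register-width word, so a 64-bit mask can name at most 64 CPUs.
MASK_BITS = 64

def set_cpu(mask: int, cpu: int) -> int:
    """Add a CPU to the affinity mask; CPUs beyond bit 63 simply don't fit."""
    if cpu >= MASK_BITS:
        raise ValueError("CPU index does not fit in a 64-bit affinity mask")
    return mask | (1 << cpu)

mask = 0
for cpu in (0, 3, 63):
    mask = set_cpu(mask, cpu)
print(hex(mask))  # 0x8000000000000009
```

Processor groups sidestep this by pairing every mask with a group number instead of widening the word, which is why crossing a group boundary needs explicit application support.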

zik•5h ago
Wow. That's surprisingly lame.
Const-me•4h ago
The NT kernel dates back to 1993. Computers didn’t exceed 64 logical processors per system until around 2014. And doing it back then required a ridiculously expensive server with 8 Intel CPUs.

The technical decision Microsoft made initially worked well for over two decades. I don’t think it was lame; I believe it was a solid choice back then.

monocasa•4h ago
I mean, x86 didn't, but other systems had been exceeding 64 cores since the late 90s.

And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.

Const-me•4h ago
> other systems had been exceeding 64 cores since the late 90s.

Windows didn’t run on these other systems, why would Microsoft care about them?

> x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it

For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.

monocasa•4h ago
> Windows didn’t run on these other systems, why would Microsoft care about them?

Because it was clear that high core count, single system image platforms were a viable server architecture, and NT was vying for the entire server space, intending to kill off the vendor Unices.

> For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.

Linux wasn't the only OS. Solaris and AIX were NT's competitors too back then, and supported higher core counts.

rsynnott•2h ago
Windows NT was originally intended to be multi-platform.
p_ing•1h ago
NT was and continues to be multi-platform.

That doesn't mean every platform was or would have been profitable. x86 became 'good enough' to run your mail or web server, which doomed other architectures (and commonly OSes), as the cost of x86 was vastly lower than the Alphas, PowerPCs, and so on.

zamadatix•4h ago
> And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.

If that were the case the above system wouldn't have needed 8 sockets. With NUMA systems the app needs to be scheduling group aware anyways. The difference here really appears when you have a single socket with more than 64 hardware threads, which took until ~2019 for x86.

monocasa•4h ago
There were single image systems with hundreds of cores in the late 90s and thousands of cores in the early 2000s.

I absolutely stand by the fact that Intel and AMD didn't pursue high core count systems until that point because they were so focused on single core perf, in part because Windows didn't support high core counts. The end of Dennard scaling forced their hand, and forced Microsoft's processor group hack.

zamadatix•4h ago
Do you have anything to say regarding NUMA for the 90s core counts though? As I said, it's not enough that there were a lot of cores - they have to be monolithically scheduled to matter. The largest UMA design I can recall was the CS6400 in 1993, to go past that they started to introduce NUMA designs.
monocasa•3h ago
Windows didn't handle NUMA either until they created processor groups, and there are all sorts of reasons why you'd want to run a process that spans NUMA nodes (particularly on Windows, which encourages single-process, high-thread-count software architectures). It's really not that big of a deal for a lot of workloads where your working set fits just fine in cache, or where you take the high hardware-thread-count approach of just having enough contexts in flight that you can absorb the extra memory latency in exchange for higher throughput.
zamadatix•2h ago
3.1 (1993) - KAFFINITY bitmask

5.0 (1999) - NUMA scheduling

6.1 (2009) - Processor Groups to have the KAFFINITY limit be per NUMA node

Xeon E7-8800 (2011) - An x86 system exceeding 64 total cores is possible (10x8 -> requires Processor Groups)

Epyc 9004 (2022) - KAFFINITY has created an artificial limit for x86 where you need to split groups more granular than NUMA

If x86 had actually hit a KAFFINITY wall then the E7-8800 event would have occurred years before processor groups, because >8 core CPUs are desirable regardless of whether you can stick 8 in a single box.

The story is really a bit reverse from the claim: NT in the 90s supported architectures which could scale past the KAFFINITY limit. NT in the late 2000s supported scaling x86 but it wouldn't have mattered until the 2010s. Ultimately KAFFINITY wasn't an annoyance until the 2020s.

elzbardico•3h ago
AMD and Intel were focused on single core performance, because personal desktop computing was the bigger business until around mid to late 2000s.

Single core performance is really important for client computing.

monocasa•3h ago
They were absolutely interested in the server market as well.
sidewndr46•2h ago
Why would an application need to be NUMA aware on Linux? Most software I've ever written or looked at has no concept of NUMA. It works just fine.
arp242•3h ago
> Computers didn’t exceed 64 logical processors per system until around 2014.

Server systems were available with that since at least the late 90s. Server systems with >10 CPUs were already available in the mid-90s. By the early-to-mid 90s it was pretty obvious that was only going to increase and that the 64-CPU limit was going to be a problem down the line.

That said, development of NT started in 1988, and it may have been less obvious then.

immibis•2h ago
Linux had many similar restrictions in its lifetime; it just has a different compatibility philosophy that allowed it to break all the relevant ABIs. Most recently, dual-socket 192-core Ampere systems were running into a hardcoded 256-processor limit. https://www.tomshardware.com/pc-components/cpus/yes-you-can-...
sidewndr46•2h ago
That was actually the DEC team from what I understand, Microsoft just hired all of their OS engineers when they collapsed
meepmorp•2h ago
Dave Cutler left DEC in 1988 and started working on WINNT at MS, well before the collapse.
rsynnott•2h ago
The Sun E10K (up to 64 physical processors) came out in 1997.

(Now, NT for Sparc never actually became a thing, but it was certainly on Microsoft's radar at one point)

whalesalad•1h ago
Windows is a terrible operating system.
xxs•1h ago
WaitForMultipleObjects is limited to 64... since forever.
d33•7h ago
I worry that 7-Zip is going to lose relevance because of its lack of zstd support. zlib's performance is intolerable for large files, and zlib-ng's SIMD implementation only helps a bit here. Which is a shame, because 7-Zip is a pretty amazing container format, especially with its encryption and file-splitting capabilities.
sammy2255•7h ago
https://github.com/mcmilk/7-Zip-zstd
abhinavk•7h ago
https://github.com/M2Team/NanaZip

It includes the above patches as well as a few QoL features.

d33•5h ago
Thanks! Any ideas why it didn't get merged? Clearly 7-Zip has some development activity going on and so does this fork...
Beretta_Vexee•3h ago
Working with Igor Pavlov, the creator of 7-zip, does not seem very straightforward (understatement).
Tuldok•2h ago
7-zip's development is very cathedral. Igor Pavlov doesn't look like he accepts contributions from the public.
rf15•6h ago
Not that many people care about zstd; I would assume most 7-zip users care about the convenience of the gui.
jorvi•5h ago
.. but 7-zip has a pretty terrible GUI?

Hence why PeaZip is so popular, and J-Zip used to be before it was stuffed with adware.

general1726•5h ago
Most people won't use that GUI, but will right click file or folder -> 7-Zip -> Add To ... and it will spit out a file without questions.

Granted Windows 11 has started doing the same for its zip and 7zip compressors.

Same trick goes for opening archives or executables (Installers) as archives.

axus•2h ago
Let's chat about Windows 11 right-click menu. I'm pretty sure they hid all the application menu extensions to avoid worst-case performance issues.
p_ing•1h ago
Exactly it. 3rd parties injecting their extensions harmed performance, which people turn around and blame Microsoft for.
m-schuetz•4h ago
All the GUI I need is right click-> extract here or to folder. And 7zip is doing that nicely.
Gormo•4h ago
> .. but 7-zip has a pretty terrible GUI?

Since you're asking, the answer is no. 7-Zip has an efficient and elegant UI.

Jackson__•3h ago
PeaZip is popular? It seems a lot less tested than 7zip; last time I tried to use it, it failed to unpack an archive because the password had a quote character or something like that. Never had such crazy issues in 7zip myself.
delfinom•3h ago
I would never trust PeaZip.

The author updates code in the github repo....by drag and drop file uploads. https://github.com/peazip/PeaZip/commits/sources/

sidewndr46•2h ago
If you're expecting a "mobile first" or similar GUI where most of the screen is dedicated to whitespace, basic features involve 7 or more mouse clicks, and for some reason it all gets changed every ~6 months, then yes, the 7zip GUI is terrible.

Desktop software usability peaked sometime in the late 90s, early 2000s. There's a reason why 7zip still looks like ~2004

yapyap•5h ago
if by GUI you mean the ability to right-click a .zip file and unzip it just through the little window that pops up, you're totally right. At least that + the unzipping progress bar is what I appreciate 7zip for
Beretta_Vexee•3h ago
I just hope that the recipient will be able to open the file without too much difficulty. I am willing to sacrifice a few megabytes if necessary.
arp242•3h ago
It's been a long time since I used Windows, but back in the day I used 7-Zip exactly because it could open more or less $anything. That's also why we installed it on many customer computers.

On Linux bsdtar/libarchive gives a similar experience: "tar xf file" works on most things.

devilbunny•2h ago
7-Zip is like VLC: maybe not the best, but it’s free (speech and beer) and handles almost anything you throw at it. For personal use, I don’t care much about efficient compression either computationally or in terms of storage; I just want “tar, but won’t make a 700 MB blank ISO9660 image take 700 MB”.
quickthrowman•2h ago
I use the right click context menu to run 7zip, why would you open a GUI?
quietbritishjim•1h ago
That is a GUI!
m-schuetz•5h ago
Being a bit faster or efficient won't make most people switch. 7z offers great UX (convenient GUI and support for many formats) that keeps people around.
rat9988•4h ago
If anything, the GUI and UX are terrible compared to WinRAR.
dikei•4h ago
I use ZSTD a ton in my programming work where efficiency matters.

But for sharing files with other people, ZIP is still king. Even 7z or RAR is niche. Everyone can open a ZIP file, and they don't really care if the file is a few MBs bigger.

cesarb•4h ago
> Everyone can open a ZIP file, and they don't really care if the file is a few MBs bigger.

You can use ZSTD with ZIP files too! It's compression method 93 (see https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT which is the official ZIP file specification).

Which reveals that "everyone can open a ZIP file" is a lie. Sure, everyone can open a ZIP file, as long as that file uses only a limited subset of the ZIP format features. Which is why formats which use ZIP as a base (Java JAR files, OpenDocument files, new Office files) standardize such a subset; but for general-purpose ZIP files, there's no such standard.

(I have encountered such ZIP files in the wild; "unzip" can't decompress them, though p7zip worked for these particular ZIP files.)
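The "limited subset" problem is visible even from a stock Python install: every ZIP entry records a compression-method id, and a reader that doesn't implement that id (e.g. 93 for Zstandard, per APPNOTE) has to give up. A small sketch using only DEFLATE (method 8), the subset everything supports:

```python
import io
import zipfile

# Build a ZIP in memory using the ubiquitous DEFLATE method (id 8).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "hello zip" * 100)

# Every entry carries the method id that produced it; readers that don't
# implement a given id must refuse the entry.
with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("hello.txt")
    print(info.compress_type)  # 8 == DEFLATE
    assert zf.read("hello.txt") == b"hello zip" * 100
```

Python's own zipfile has historically raised NotImplementedError on method ids it doesn't know, which is exactly the "unzip can't, p7zip can" experience described above.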

dikei•4h ago
Well, only a lunatic would use ZIP with anything but DEFLATE/DEFLATE64
redeeman•2h ago
there are A LOT of zip files using lzma in the wild. Also, how about people learn to use updated software? Should newer video compression technologies not be allowed in mkv/mp4?

if you can't open it, well.. then stop using '90s WinZip

1over137•2h ago
>how about people learn to use updated software?

How about software developers learn to keep software working on old OSes and old hardware?

tiagod•1h ago
What stops you from running updated zip/unzip on an old OS or on old hardware?
krapht•22m ago
Nothing, but what stops you from using DEFLATE64?

Installing new software has a real time and hassle cost, and how much time are you actually saving over the long run? It depends on your usage patterns.

RealStickman_•5m ago
Supporting old APIs and additional legacy ways of doing things has a real cost in maintenance.
Am4TIfIsER0ppos•2h ago
mkv or mp4 with h264 and aac is good enough. mp3 is good enough. jpeg is good enough. zip with deflate is also good enough.
e4m2•1h ago
"Good enough" is not good enough.
easton•4h ago
> new Office files

I know what you mean, I’m not being pedantic, but I just realized it’s been 19 years. I wonder when we’ll start calling them “Office files”.

mauvehaus•3h ago
> I wonder when we’ll start calling them “Office files”.

Probably around the same time the save icon becomes something other than a 3 1/2" floppy disk.

kevinventullo•2h ago
Nowadays I’ve noticed fewer applications have a save icon at all, relying instead on auto-save.
jl6•2h ago
English is evolving as a hieroglyphic language. That floppy disk icon stands a good chance of becoming simply the glyph meaning "save". The UK still uses an icon of an 1840s-era bellows camera for its speed camera road signs. The origin story will be filed away neatly and only its residual meaning will be salient.
guappa•3h ago
You can and I've done it… but you can't expect anything to be able to decompress it unless you wrote it yourself.
justin66•3h ago
> Copyright (c) 1989 - 2014, 2018, 2019, 2020, 2022

Mostly it seems nutty that, after all these years, they’re still updating the zip spec instead of moving on to a newer format.

pornel•3h ago
The English language is awful, and we keep updating it instead of moving to a newer language.

Some things are used for interoperability, and switching to a newer incompatible thing loses all of its value.

justin66•1h ago
Spoken languages are not analogous to file formats. Human beings are not analogous to archive programs.

Interestingly, I think if you examine the errors in the analogy you made, you’ll better understand why simply spinning off a highly tweaked zip format into its own thing with its own file extension might make more sense than stretching the existing format and hoping everyone adopts the most recent changes.

sidewndr46•2h ago
Same thing with "WAV" files. There's at least 3 popular formats for the audio data out there.
martinald•42m ago
A more 'useful' one is WebP. It has both a lossy and a lossless compression algorithm, which have very different strengths and weaknesses. I think nearly every device supports reading both, but so many 'image optimization' libraries and packages don't - often just doing everything as lossy when it could be lossless (icons and what not).
throw0101d•2h ago
> You can use ZSTD with ZIP files too!

Support for which was added in 2020:

> On 15 June 2020, Zstandard was implemented in version 6.3.8 of the zip file format with codec number 93, deprecating the previous codec number of 20 as it was implemented in version 6.3.7, released on 1 June.[36][37]

* https://en.wikipedia.org/wiki/Zstd#Usage

So I'm not sure how widely deployed it would be.

xxs•1h ago
Most linux distributions have zip support with zstd.
notepad0x90•3h ago
I don't know about that; I had a dicey situation recently where PowerShell's Compress-Archive couldn't handle archives >4GB and had to use 7zip. It is more reliable, and you can ship 7za.exe or create self-extracting archives (wish those were more of a thing outside of the Windows world).
chasil•2h ago
In the realm of POSIX.2 and UNIX relatives, the closest analog would be a "shar" archive.

They are not regarded kindly.

https://en.wikipedia.org/wiki/Shar_(file_format)

sidewndr46•3h ago
What are you compressing with zstd? I had to do this recently and the "xz" utility still blows it away in terms of compression ratio. In terms of memory and CPU usage, zstd wins by a large margin. But in my case I only really cared about compression ratio
vlovich123•2h ago
people tend to care about decompression speed - xz can be quite slow decompressing super compressed files whereas zstd decompression speed is largely independent of that.

People also tend to care about how much time they spend on compression for each incremental % of compression performance and zstd tends to be a Pareto frontier for that (at least for open source algorithms)

bracketfocus•1h ago
This makes sense. A lot of end-users have internet speeds that can outpace the decompression speeds of heavily compressed files. Seems like there would be an irrational psychological aspect to it as well.

Unfortunately for the hoster, they either have to eat the cost of the added bandwidth from a larger file or have people complain about slow decompression.

vlovich123•17m ago
Well, the difference is quite a bit more manageable in practice, since you're talking about a single-digit-percent difference in size vs a 2-100x difference in decompression performance.
xxs•1h ago
do you have examples where xz 'blows it away', not just zstd -3?
Szpadel•1h ago
in my experience using zstd --long --ultra -22 gives marginally better compression ratio than xz -9 while being significantly faster
soruly•14m ago
I think it depends on what you're compressing. I experimented with my data full of hex text xml files. xz -6 is both faster and smaller than zstd -19 by about 10%. For my data, xz -2 and zstd -17 achieve the same compressed size but xz -2 is 3 times faster than zstd -17. I still use xz for archive because I rarely needs to decompress them.
jart•15m ago
Use the pigz command for parallel gzip. Mark Adler also has an example floating around somewhere about how to implement basically the same thing using Z_BLOCK.
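pigz's simpler mode can be mimicked with nothing but the standard library: gzip members remain a valid stream when concatenated, so chunks can be compressed independently (in parallel, in principle) and glued together. A rough Python sketch of that idea (the Z_BLOCK trick for producing a single seamless deflate stream is beyond this sketch):

```python
import gzip

data = b"example payload " * 5000

# Compress fixed-size chunks independently; in a real tool each call
# would run on its own worker thread or process.
CHUNK = 16384
members = [gzip.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

# Concatenated gzip members form one valid multi-member gzip stream,
# which any compliant decoder reads as a single file.
blob = b"".join(members)
assert gzip.decompress(blob) == data
```

The cost is the per-member header/trailer and the lost cross-chunk back-references, the same ratio trade-off discussed earlier in the thread.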
Beretta_Vexee•3h ago
You are looking for 7-Zip Zstd: https://github.com/mcmilk/7-Zip-Zstd

I don't know what your use case is, but it seems to be quite a niche.

zx2c4•2h ago
I was curious upon seeing this and found the thread where its inclusion was turned down: https://sourceforge.net/p/sevenzip/discussion/45797/thread/a...
pjmlp•3h ago
As long as it does a better job than whatever Windows team packs into the OS, they're safe.

Even on the latest Windows 11, it takes minutes to do what 7-Zip does in seconds.

Goes to show how good all those leetcode interviews turn out.

conkeisterdoor•2h ago
Glad I'm not the only one who feels this way. WinZip is a slow and bloated abomination, especially compared to 7-Zip. The right-click context menu entry for 7-Zip is very convenient and runs lightning fast. WinZip can't compete at all.
pjmlp•39m ago
Mixing channels here, WinZip is a commercial product, unrelated to Windows 11 7-zip support, and my comment.

https://www.winzip.com

jccalhoun•2h ago
Since Windows 11 incorporated libarchive back in October 2023, there is less reason to use 7-zip on Windows. I would be surprised if any of my friends even know what a zip file is, let alone zstd.
rs186•2h ago
If you ever try to extract an archive file several gigabytes in size with hundreds of thousands of files (I know, it's rare), the built-in one is as slow as a turtle compared to 7z.
privatelypublic•2h ago
It already has - look up NanaZip
xxs•1h ago
There are lots of 7zip alike with zstd support (it's a plugin effectively). On [corporate] Windows NanaZip would be my choice as it's available in Windows store.

on anything else - either directly zstd or tar

Night_Thastus•29m ago
7-zip is the de-facto tool on Windows and has been for a long time. It's more than fast and compressed enough for 99% of peoples use cases.

It's not going anywhere anytime soon.

The more likely thing to eat into its relevance is now that Windows has built-in basic support for zipping/unzipping EDIT: other formats*, which relegates 7-zip to more niche uses.

izzydata•19m ago
Is there something different about the built in zip context menu functionality now than before? I'm pretty sure you could convert something to a zip file since forever ago by right clicking any file.
Night_Thastus•13m ago
It could support basic ZIP files, but only Windows 11 added support for 7-Zip (.7z), RAR (.rar), TAR, and TAR variants (like .tar.gz, .tar.bz2, etc).

That makes it 'good enough' for the vast majority of people, even if it's not as fast or fully-featured as 7-Zip.

malfist•15m ago
Windows has had built in zip/unzip since vista. 7zip is far superior (and the install base proves that)
Night_Thastus•11m ago
As mentioned in another comment, zip support actually goes further back as far as '98, but only Windows 11 added support for handling other formats like RAR/7-Zip/.tar/.tar.gz/.tar.bz2/etc.

That allows it to be a default that 'just works' for most people without installing anything extra.

The vast majority of users don't care about the extra performance or functionality of a tool like 7-zip. They just need a way to open and send files and the Windows built-in tool is 'good enough' for them.

aquir•5h ago
7-zip is one of the pieces of software that I miss since I've moved to macOS
DeepSeaTortoise•5h ago
How about PeaZip?
aquir•4h ago
I've used PeaZip in the past but only on Windows, I was not aware that a MacOS version exists! I'll give it a try. Cheers
MYEUHD•5h ago
If you're talking about the program you use in the terminal, you can install it via homebrew
immibis•2h ago
No, the GUI. 7-zip integrates well with the shell: select a group of files, right click -> make zip file, and so on. Or right-click a zip file and select extract. If you're accustomed to Linux you might not know what they're talking about.

TortoiseGit (and TortoiseSVN) are similarly convenient. Right click a folder with an SVN repo checked out, and select "SVN update". Right-click an empty space, and select "SVN checkout". SVN was the main distribution method for some modding communities before things like Steam Workshop and Github, specifically because TortoiseSVN made it so convenient. Checkout into your addons folder, and periodically update. What could be simpler?

portaltonowhere•5h ago
Keka is also really nice!

https://www.keka.io/

aquir•4h ago
Never heard of it, I'll give it a try!
ltbarcly3•5h ago
Wow, a program that doesn't matter anymore has been very very minimally enhanced on a platform that doesn't matter anymore, benefiting the 7 users that have more than 64 real cores on Windows and are regularly compressing archives so large that it doesn't drastically reduce the compression ratio to split them into more than 64 sections.

Posting this link to hn has consumed more human potential than the thing it is describing will save up to the end of time.

marcodiego•4h ago
https://xkcd.com/619/
lihaciudaniel•3h ago
7zip has been the greatest tool for Limbo x86 on mobile.

You just use qemu-utils in Termux to convert your qcow2 partitions to IMG, and 7zip can read the IMG file.

Try it yourself and see: you can extract files from your emulated Windows.

leecarraher•2h ago
I've used pbzip2, which takes the same parallel blocked-compression approach 7zip seems to be taking (based on AI's analysis of the changes). Theoretically the compression is less efficient, but I haven't noticed a difference in practice.
izzydata•18m ago
This may or may not be a relevant question, but does the terminology of "zip" have the same origin as the zip disk drive?
malfist•16m ago
No. Zip format significantly predates the zip disk.