That does force you to duplicate some assets a lot. It's also more important the slower your seeks are. This technique is perfect for disc media, since it has a fixed physical size (so wasting space on it is irrelevant) and slow seeks.
I'd love to see it analysed, specifically the average number of non-sequential jumps vs the overall size of the level. I'm sure you could avoid jumps within megabytes, but if the disk ever got close to full in the past, the chances of finding contiguous gigabytes are much lower. This paper effectively says that with long files, gaps are almost guaranteed https://dfrws.org/wp-content/uploads/2021/01/2021_APAC_paper... so at that point you may be better off preallocating the individual files and eating the cost of switching between them.
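To make that concrete, here's a rough Python sketch (my own numbers, not measurements) that counts non-sequential jumps from a file's extent list and estimates what they cost on a typical HDD. How you actually get the extent list (filefrag, FSCTL_GET_RETRIEVAL_POINTERS, etc.) is platform-specific and left out.

```python
# Back-of-envelope: what do non-sequential jumps cost for one level's data?
# `extents` is a list of (physical_offset_bytes, length_bytes) tuples for the
# level file, in read order. Obtaining it is platform-specific and not shown.

SEEK_MS = 10.0      # assumed average HDD seek + rotational latency
SEQ_MBPS = 150.0    # assumed sustained sequential throughput

def estimate_read_time_ms(extents):
    jumps = 0
    total_bytes = 0
    prev_end = None
    for offset, length in extents:
        if prev_end is not None and offset != prev_end:
            jumps += 1                      # non-sequential jump -> one seek
        total_bytes += length
        prev_end = offset + length
    transfer_ms = total_bytes / (SEQ_MBPS * 1024 * 1024) * 1000
    return jumps, total_bytes, transfer_ms + jumps * SEEK_MS

# Example: a 512 MB level split into 8 fragments scattered across the disk.
extents = [(i * 10 * 2**30, 64 * 2**20) for i in range(8)]
jumps, size, ms = estimate_read_time_ms(extents)
print(f"{jumps} jumps over {size / 2**20:.0f} MB -> ~{ms:.0f} ms to read")
```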
Nowadays? No. Even those with hard disks will have lots more RAM and thus disk cache. And you are even guaranteed SSDs on consoles. I think in general no one tries this technique anymore.
But it also depends on how the assets are organized, you can probably group the level specific assets into a sequential section, and maybe shared assets could be somewhat grouped so related assets are sequential.
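Something like this toy sketch (asset names and sizes are made up) is what I mean: each level gets one contiguous run in the pack, and shared assets are simply written again into every level that uses them, trading pack size for a single sequential read per level.

```python
# Sketch: build a pack layout where each level's assets (including duplicated
# shared assets) sit in one contiguous run, so loading a level is one
# sequential read. All names and sizes are illustrative placeholders.

levels = {
    "level_01": ["terrain_01", "enemy_bug", "tree_small", "rock_large"],
    "level_02": ["terrain_02", "enemy_bug", "tree_small", "bunker"],
}
asset_sizes = {  # bytes
    "terrain_01": 40 << 20, "terrain_02": 48 << 20, "enemy_bug": 12 << 20,
    "tree_small": 4 << 20, "rock_large": 6 << 20, "bunker": 9 << 20,
}

def layout(levels, asset_sizes):
    """Return (directory, total_size); directory maps (level, asset) -> (offset, size)."""
    directory = {}
    cursor = 0
    for level, assets in levels.items():
        for name in assets:                 # shared assets are written again per level
            size = asset_sizes[name]
            directory[(level, name)] = (cursor, size)
            cursor += size
    return directory, cursor

directory, total = layout(levels, asset_sizes)
unique = sum(asset_sizes.values())
print(f"pack size {total >> 20} MB vs {unique >> 20} MB deduplicated "
      f"({total / unique:.2f}x)")
```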
By default, Windows automatically defragments filesystems weekly if necessary. It can be configured in the "Defragment and Optimize Drives" dialog.
If you break it up into smaller files, those are likely to be allocated all over the disk, plus you'll have delays on reading because Windows Defender makes opening files slow. If you have a single large file that contains all resources, even if that file is mostly sequential, there will be sections you don't need, and the read-ahead cache may work against you, as it will tend to read things you don't need.
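A minimal sketch of the single-pack-file approach, assuming a hypothetical side-car index that maps asset names to (offset, size). The point is that only one file ever gets opened, and reads are explicit seeks rather than whatever the read-ahead heuristics decide to pull in.

```python
# Sketch: read assets from one big pack file via an in-memory index, so the
# only file the OS (and Windows Defender) sees opened at load time is the
# pack itself. The side-car index format here is made up for illustration.

import json
from pathlib import Path

class Pack:
    def __init__(self, path):
        self.path = Path(path)
        # assume an index file next to the pack: {name: [offset, size]}
        self.index = json.loads(self.path.with_suffix(".idx").read_text())
        # no userspace buffering; OS-level read-ahead is a separate concern
        self.handle = open(self.path, "rb", buffering=0)

    def read(self, name):
        offset, size = self.index[name]
        self.handle.seek(offset)
        return self.handle.read(size)

# usage (paths are hypothetical):
# pack = Pack("data/level_01.pack")
# terrain = pack.read("terrain_01")
```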
Which makes me think: have there been any advances in disk scheduling in the last decade?
https://www.arrowheadgamestudios.com/2025/10/helldivers-2-te...
But for a mechanical drive, you'll get much better throughput on sequential reads than random reads, even with command queuing. I think the earlier discussion showed it wasn't very effective in this case, and taking 6x the space for a marginal benefit for the small % of users with mechanical drives isn't worthwhile.
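Rough numbers (assumed, not measured) to show the gap between sequential and scattered reads on an HDD:

```python
# Why random reads hurt on an HDD: each scattered read pays a seek.
SEQ_MBPS = 150.0   # assumed sustained sequential throughput
SEEK_MS = 8.0      # assumed average seek + rotational latency, NCQ helping a bit
READ_KB = 256      # assumed typical read size for scattered assets

transfer_ms = READ_KB / 1024 / SEQ_MBPS * 1000
effective_mbps = READ_KB / 1024 / ((SEEK_MS + transfer_ms) / 1000)
print(f"random {READ_KB} KB reads: ~{effective_mbps:.0f} MB/s "
      f"vs ~{SEQ_MBPS:.0f} MB/s sequential")
```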
If the game was ~20 GB instead of ~150 GB, almost no player with the required CPU+GPU+RAM combination would be forced to put it on an HDD instead of an SSD.
I still don't know, but I did find an interesting Reddit post where users found and analyzed this "waste of space" three months ago.
https://www.reddit.com/r/Helldivers/comments/1mw3qcx/why_the...
PS: just found it. According to this Steam discussion it does not download the duplicate data and back then it only blew up to ~70 GB.
https://steamcommunity.com/app/553850/discussions/0/43725019...
[0] https://partner.steamgames.com/doc/sdk/uploading#AppStructur...
I'm not an Arrowhead employee, but my guess is that at some point in the past they benchmarked it, got a result, and went with it. And that's about all there is to it.
>We now know that, contrary to most games, the majority of the loading time in HELLDIVERS 2 is due to level-generation rather than asset loading. This level generation happens in parallel with loading assets from the disk and so is the main determining factor of the loading time. We now know that this is true even for users with mechanical HDDs.
They did absolutely zero benchmarking beforehand, just went with industry hearsay, and decided to double it just in case.
>we looked at industry standard values and decided to double them just in case.
That being said, cartridges were fast. The move away from cartridges was a wrong turn.
[0] https://news.ycombinator.com/item?id=10066338