BobbyTables2•1h ago
Splitting a file into chunks, hashing them in parallel, and then hashing the resulting hashes is certainly a valid method but not the same as hashing a file the traditional way.
Unless the world changes how they publish hashes of files available for download, I don’t see the point.
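For concreteness, a minimal Python sketch of the scheme being described, i.e. hash each chunk in parallel, then hash the concatenation of the chunk digests (the 64 MiB chunk size, sha256, and the function names are illustrative choices, not anything specified in the thread):

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB per chunk; an arbitrary choice


def hash_chunk(path, offset):
    """Hash one chunk, identified by its byte offset into the file."""
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.sha256(f.read(CHUNK_SIZE)).digest()


def chunked_hash(path):
    """Hash chunks in parallel, then hash the concatenated chunk digests.

    The result is NOT the same as sha256 over the whole file; a verifier
    needs the same chunk size and combining rule to reproduce it.
    """
    size = os.path.getsize(path)
    offsets = range(0, size, CHUNK_SIZE)
    # hashlib releases the GIL while hashing large buffers, so threads
    # really do run the per-chunk hashing in parallel.
    with ThreadPoolExecutor() as pool:
        digests = pool.map(lambda off: hash_chunk(path, off), offsets)
        return hashlib.sha256(b"".join(digests)).hexdigest()
```

Which is exactly the point being made: `chunked_hash` produces a perfectly good digest, but it will never match the plain sha256 sums that download pages publish today.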
I_like_tomato•57m ago
The reasoning here is to speed up hashing of a large file (say, > 100 GB). Reading the file content sequentially and hashing it takes a lot longer.