But for a VPS, where CPU usage is extremely low and RAM is expensive, it might make sense to sacrifice a little performance for more DB cache. Can't say without more context.
Technically you're correct, but the actual overhead of lz4 was more or less at the noise floor of everything else going on on the system, to the extent that I think "use lz4, no further thought or analysis needed" is always good advice.
Unless you have a really specialized use case the additional compression from other algorithms isn't at all worth the performance penalty in my opinion.
The next two show it fitting 20% more data with 2-3x the CPU, which is a tougher tradeoff but still useful in a lot of situations.
The rest of the post analyzes the CPU cost in more detail, so yeah it's worse in every subcategory of that. But the increase in compression ratio is quite valuable. The conclusion says it "provides the highest compression ratio while still maintaining acceptable speeds" and that's correct. If you care about compression ratio, strongly consider zstd.
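If you want to try zstd on an existing zram device, a minimal sketch of the sysfs interface looks like this (run as root; zram0 and the 8G size are just examples, and the algorithm has to be set while the device is reset, i.e. before disksize is configured):

swapoff /dev/zram0                           # stop using the device
echo 1 > /sys/block/zram0/reset              # reset it so the algorithm can be changed
echo zstd > /sys/block/zram0/comp_algorithm  # pick zstd instead of the default (often lzo-rle or lz4)
echo 8G > /sys/block/zram0/disksize          # re-create the swap area at the desired size
mkswap /dev/zram0 && swapon -p 100 /dev/zram0
cat /sys/block/zram0/comp_algorithm          # the active algorithm is shown in [brackets]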
On a system with integrated graphics, 8 cores (16 logical), and 32 GB of system memory, I achieve what appears to be optimal performance using:
zramen --algorithm zstd --size 200 --priority 100 --max-size 131072 make  # zram swap: zstd, sized at 200% of RAM, capped at 128 GiB, swap priority 100
sysctl vm.swappiness=180                  # strongly prefer swapping (to zram) over dropping page cache
sysctl vm.page-cluster=0                  # no swap readahead; read single pages, which suits zram
sysctl vm.vfs_cache_pressure=200          # reclaim dentry/inode caches more aggressively
sysctl vm.dirty_background_ratio=1        # start background writeback almost immediately
sysctl vm.dirty_ratio=2                   # throttle writers once dirty pages reach 2% of memory
sysctl vm.watermark_boost_factor=0        # disable watermark boosting after fragmentation events
sysctl vm.watermark_scale_factor=125      # wake kswapd earlier, keeping more free-memory headroom
sysctl kernel.nmi_watchdog=0              # disable the NMI watchdog (unrelated to memory; saves a little overhead)
sysctl vm.min_free_kbytes=150000          # reserve ~150 MB as a free-memory floor
sysctl vm.dirty_expire_centisecs=1500     # dirty data becomes eligible for writeback after 15 s
sysctl vm.dirty_writeback_centisecs=1500  # writeback threads wake every 15 s
Compression factor tends to stay above 3.0, so at very little cost I more than doubled my effective system memory. If an individual workload uses a significant fraction of system memory at once, complications may arise.

I had considered some kind of test where each parameter is perturbed a bit in sequence, so that you get an estimate of a point partial derivative, and then doing an iterative hill climb. That probably won't work well in my case, since the devices I'm optimizing have too much variance to give a clear signal on benchmarks of reasonable duration.
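For anyone who wants to check the compression factor on their own setup, a couple of ways to read it (assuming the device is zram0):

zramctl                       # DATA vs COMPR columns give the per-device ratio
cat /sys/block/zram0/mm_stat  # column 1 is original bytes, column 2 is compressed bytes
awk '{ if ($2 > 0) printf "ratio: %.2f\n", $1 / $2 }' /sys/block/zram0/mm_stat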
I use lz4-rle as the first layer, but if a page sits idle for 1 hour it is recompressed with zstd level 22 in the background.
It's a great balance between responsiveness and compression ratio.
https://gist.github.com/Szpadel/9a1960e52121e798a240a9b320ec...
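For reference, a rough sketch of the kernel interface a layered setup like this builds on (the zram recompression feature in Linux 6.2+; the device name, primary algorithm, and 1-hour threshold are assumptions, and per-algorithm compression levels such as zstd 22 need a newer kernel's params interface):

echo lz4 > /sys/block/zram0/comp_algorithm         # primary (fast) algorithm, set while the device is reset
echo "algo=zstd priority=1" > /sys/block/zram0/recomp_algorithm  # register zstd as the secondary, denser algorithm
echo 3600 > /sys/block/zram0/idle                  # mark pages untouched for the last hour as idle
                                                   # (needs CONFIG_ZRAM_TRACK_ENTRY_ACTIME; otherwise only "echo all" works)
echo "type=idle" > /sys/block/zram0/recompress     # recompress the idle pages with the secondary algorithm

Presumably the linked gist wraps something like the last two steps in a timer or cron job so the idle marking and recompression run periodically.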