I recently came across gdu (1) and have installed/used it on every machine since then.
And this is why I tried Plausible once and never looked back.
To get basic but effective analytics, use GoAccess and point it at the Caddy or Nginx logs. It’s written in C and thus barely uses memory. With a few hundred visits per day, the logs are currently 10 MB per day. Caddy will automatically truncate if logs go above 100 MB.
> Note: this was written fully by me, human.
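For anyone who wants to try it, a minimal invocation might look like the sketch below. The log paths are assumptions, and the predefined CADDY format needs a reasonably recent GoAccess, since Caddy writes JSON access logs:

    # Caddy (JSON access logs): live dashboard in the terminal
    goaccess /var/log/caddy/access.log --log-format=CADDY

    # Nginx with the standard combined format, written out as a static HTML report
    goaccess /var/log/nginx/access.log --log-format=COMBINED -o /srv/www/stats/index.html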
The authorization can probably be done somehow in nginx as well.
jcims•2h ago
And of course there's nothing to say that both of these things can't be done simultaneously.
theshrike79•2h ago
Except for that one time when .NET decided the incoming POST was over some magic limit and, instead of doing the processing in memory like before, had to write it to disk, crashing the whole pod. Fun times.
Also my Unraid NAS has two drives in "WARNING! 98% USED" alert state. One has 200GB of free space, the other 330GB. Percentages in integers don't work when the starting number is too big :)
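The absolute numbers make the point: 200 GB free on, say, a 10 TB drive already rounds to 98% used, even though 200 GB is plenty. A crude way around it is to alert on free gigabytes instead of a percentage, something like this (mount point and threshold are made-up placeholders):

    # warn when a filesystem drops below 50 GB free
    avail=$(df -P -B1G /mnt/disk1 | awk 'NR==2 {print $4+0}')
    [ "$avail" -lt 50 ] && echo "WARNING: only ${avail}G free on /mnt/disk1"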
dspillett•2h ago
Defence in depth is a good idea: proper alarms, and a secondary measure in case they don't have the intended effect.
dspillett•1h ago
It also serves to leave some space unused to help out the wear-levelling on the SSDs that hold the RAID array acting as the PV¹ for LVM. I'm not 100% sure this is needed any more² but I've not looked into that sufficiently, so until I do I'll keep the habit.
--------
[1] If there are multiple PVs, from different drives/arrays, in the VG, then you might need to manually reserve a bit on each one, because LVM will naturally fill one before using the next. Just allocate a small LV on each and don't use it (sketch below). You can remove one or all of them and hand the freed extents to a full LV if/when needed. Giving it a useful name also reminds you why that bit of space is carved out.
[2] drives already under-allocate (i.e. over-provision) by default IIRC
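A sketch of the reserved-LV idea, with made-up VG/PV/LV names and sizes:

    # carve 10G out of a specific PV and never use it
    lvcreate -L 10G -n reserved-do-not-use myvg /dev/md0

    # later, when something genuinely needs the space:
    lvremove myvg/reserved-do-not-use
    lvextend -r -L +10G myvg/data    # -r grows the filesystem along with the LV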
throw0101d•59m ago
ZFS has a "reservation" mechanism that's handy:
> The minimum amount of space guaranteed to a dataset, not including its descendants. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
Quotas prevent users/groups/directories (ZFS datasets) from using too much space, but reservations ensure that particular areas always have a minimum amount set aside for them.
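In practice that's just a property on the dataset; something like the following, with pool and dataset names as placeholders:

    # guarantee 20G to a dataset, not counting its descendants
    zfs set refreservation=20G tank/critical

    # or keep an otherwise-empty dataset around purely as an emergency release valve
    zfs create -o refreservation=10G tank/reserved

    # cap a noisy dataset so it can't eat the whole pool
    zfs set quota=500G tank/users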
dijit•48m ago
I knew I didn’t invent the concept, as there are so many systems that cannot recover once the disk is totally full (in many systems a write may be required just to execute the instruction that removes things gracefully).
The latest thing I found with this issue is Unreal Engine’s Horde build system; it’s so tightly coupled with caches, object files and database references that a manual clean-up is extremely difficult and likely to create an unstable system. But you can configure it to keep fewer build artefacts around and then it will clear itself out gracefully - it just needs to be able to write to the disk to do it.
Now that I think about it, I don’t do this for inodes, but you can run out of those too and end up in a weird “out of disk” situation despite having lots of usable capacity left.
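The inode case is easy to spot with df's inode view (mount point is just an example):

    df -i /srv    # IUse% can hit 100% while 'df -h' still shows plenty of space free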
freedomben•11m ago