What stops the attacker from just editing /etc/rc.securelevel and then doing a normal reboot?
Surely a full reboot leaves more tracks than no reboot at all, so it's harder to hide?
This is definitely one of those “security vs convenience” situations where you can easily shoot yourself in the foot, but it’s great to have the option when you need it.
I don’t think this is “security vs convenience”; I’d argue it’s more that it’s easy to think you’ve made this secure while having missed something, so it isn’t configured to be as secure as you think. An approach like others have suggested, with remote logging, is at least easier to reason about.
(Yes, quite harsh, but for some use cases it may be the right thing to do, i.e. to fail closed).
That log server is properly firewalled/hardened so a hacked server can’t be used as a stepping stone to compromise the log server.
Maybe you even have access restrictions in place for the log server so people can’t wipe their own misdeeds (4-eyes principle).
This is how it’s been done for 35+ years, nothing special about this.
See e.g. https://www.youtube.com/watch?v=FiEGoVzmyvs, but dot-matrix printers were also used and were at least a little less noisy.
While the standard might effectively call for immutable logs[1], he needs to read between the lines one step further: those logs do not need to be on the same machine. You could stream logs to another system that stores them immutably from the PoV of anyone except those with root or physical access to it. You still have a problem if an attacker gets access to both the source system(s) and the log sinks[2], there might be a latency issue meaning you could easily lose the last few log entries in the case of a complete disaster, and you have an extra moving part in your infrastructure to monitor, but it satisfies the requirement where immutable filesystem flags can not.
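For illustration only, here is a rough sketch of that remote-streaming idea in Python using the standard library's SysLogHandler. The host name, port, and log line are placeholders I've made up; a real deployment would more likely use syslogd/rsyslog forwarding with TLS rather than an application-level script:

    # Sketch: ship log lines to a separate log host so they are append-only
    # from the point of view of the source machine. "loghost.example" and
    # port 514 are placeholders.
    import logging
    import logging.handlers
    import socket

    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    # TCP keeps ordering and avoids the silent drops of UDP, but as noted
    # above, the last few entries can still be lost if the source machine
    # dies before the stream is flushed.
    remote = logging.handlers.SysLogHandler(
        address=("loghost.example", 514),
        socktype=socket.SOCK_STREAM,
    )
    logger.addHandler(remote)

    # Keep a local copy too; the remote copy is the one that survives tampering.
    logger.addHandler(logging.StreamHandler())

    logger.info("failed login for root from 203.0.113.7")

The point is only that, from the compromised box's perspective, the copy on the log sink is effectively immutable; everything said above about hardening the sink itself still applies.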
----
[0] Yes, you'll know something happened, and you might guess it was malicious and not random corruption, but enough tracks might be covered to stop you working out the initial who & how.
[1] and some standards explicitly call for them
[2] Careful granular access management should largely mitigate that risk. That could be a problem if you are a small organisation trying to protect against internal disgruntled admins[3], but you could use a 3rd-party log-sink service in that case.
[3] This may seem overly paranoid, but if it is required by the standard your target audience wants you to have a certificate for, then you do it… and TBH it isn't that paranoid.
The standard states that you should do something about X, and perhaps that your choice of how to do X should have property Y, but it won't go into further specifics. All the certificate you have, if you have one, really says is that you seem to have covered the relevant points in what you decided to do, and that you are actually implementing what you decided to do. This is one of the reasons why, despite companies having a pile of certificates like that, large prospective clients send a huge questionnaire to anyone wishing to tender for a job: the questionnaire in part fills in the gaps by requesting further detail on how you implemented the requirements of the standard (and in many cases makes it obvious that there are wrong answers you could give).
On the other hand it's great to have documentation like this. I feel there's a gradient between convenience and security and immutable local logs could provide a layer of defense without requiring another server for logging. Maybe a "nice to have" for a small homelab, security practice, etc.
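As a sketch of what immutable local logs can look like on a BSD-family system (this assumes the append-only file flag plus a raised securelevel, which is what the article covers; the path is made up and the script needs root):

    # Mark a local log file append-only with the system-level file flag.
    # At securelevel >= 1 even root cannot clear the flag without dropping
    # to single-user mode and rebooting. BSD-family systems only (os.chflags).
    import os
    import stat

    LOG = "/var/log/myapp.log"   # hypothetical log file

    with open(LOG, "a"):         # make sure the file exists
        pass
    os.chflags(LOG, stat.SF_APPEND)

    # Appending still works...
    with open(LOG, "a") as f:
        f.write("something happened\n")

    # ...but truncating/rewriting is refused while the flag is set.
    try:
        open(LOG, "w").close()
    except PermissionError as exc:
        print("rewrite blocked:", exc)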
--------
[1] such schemes existed long before bitcoin & friends, though they were not used much back then.
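For the curious, a toy version of the kind of scheme footnote [1] seems to refer to (my assumption: hash-chained, tamper-evident log entries) is only a few lines of Python; the entries themselves are made up:

    # Each entry is hashed together with the previous entry's digest, so
    # editing or deleting an earlier line breaks every digest after it.
    import hashlib

    SEED = b"\x00" * 32

    def chain(entries, seed=SEED):
        prev, out = seed, []
        for line in entries:
            digest = hashlib.sha256(prev + line.encode()).hexdigest()
            out.append((line, digest))
            prev = bytes.fromhex(digest)
        return out

    def verify(chained, seed=SEED):
        prev = seed
        for line, digest in chained:
            if digest != hashlib.sha256(prev + line.encode()).hexdigest():
                return False
            prev = bytes.fromhex(digest)
        return True

    log = chain(["alice logged in", "sudo by alice", "alice logged out"])
    assert verify(log)

    # Tampering with an early entry invalidates everything after it.
    log[0] = ("mallory logged in", log[0][1])
    assert not verify(log)

Of course this only helps if the latest digest is anchored somewhere the attacker can't reach (another machine, a printout, a timestamping service); with full access they could simply rebuild the whole chain.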
johnisgood•6mo ago
That said, I am sure your comment will be useful to some!