Just an excellent example of how to approach & elucidate a problem domain.
secrets.forEach(secret => logMessage = logMessage.replaceAll(secret, '**'))
Secrets can also churn, so even if you did that, your example would require something besides an in-memory array.
And the final point: what if your secret-masking code fails with an exception, too ;)
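For illustration, a rough sketch of what handling both problems might look like (fetchCurrentSecrets is a hypothetical hook into whatever store the secrets actually rotate through):

    // Refresh the list on rotation instead of trusting an in-memory snapshot.
    let secrets = [];
    async function refreshSecrets(fetchCurrentSecrets) {
      secrets = await fetchCurrentSecrets();
    }

    function mask(logMessage) {
      try {
        return secrets.reduce((msg, s) => msg.replaceAll(s, '**'), logMessage);
      } catch {
        // If masking itself throws, fail closed: drop the line, don't leak it.
        return '[log line suppressed: masking failed]';
      }
    }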
Some of the same techniques apply, like using domain primitives, but some PII (like names and addresses) is eventually templated into flatter (text) values, and processed by other layers which do not recognize 'brands' as suggested.
Data scanners: Regexes are fine for SSNs and the like, but to be really effective, one would need a full-on Named Entity Recognition in the pipeline, perhaps just as a canary. (Wait, that might actually work?)
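A sketch of the canary variant, with a deliberately simple SSN pattern (a real pipeline would need many more patterns, and the NER stage here is hypothetical):

    const SSN = /\b\d{3}-\d{2}-\d{4}\b/;

    function canaryScan(line) {
      // Cheap regex pass that flags lines instead of blocking them; flagged
      // lines would go to a quarantine stream for the NER stage to inspect.
      return { suspect: SSN.test(line), line };
    }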
Dataflow analysis and control applies in a BIG way, e.g. separating an audit log for forensics, where you really NEED the PII, from a technical log which the SREs can dig into without being suspected of stealing sensitive info. Start there.
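In code form, the split can be as simple as one event feeding two sinks (the names here are illustrative):

    function logRequest(event, { auditSink, techSink }) {
      // Audit log: keeps the PII, tightly access-controlled, for forensics.
      auditSink.write(JSON.stringify(event) + '\n');

      // Technical log: same event minus sensitive fields, for the SREs.
      const { userName, userAddress, ...technical } = event;
      techSink.write(JSON.stringify(technical) + '\n');
    }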
Also if you have audit records, you want accessing a secret to be logged separately from accessing logs.
You could have 100s of people who have a business need to look at syslog from a router, but approximately nobody who should have access to the login creds of administrative users, and maybe 10s of people with access to automation role account creds.
At first sight it seems like a complicated and inferior approximation of the techniques from the article: not automatically single-use, not statically checked, somewhat error-prone for proper secret usage, not really preventing well-intentioned idiots from accidentally extracting, "laundering", and leaking the secret, and removing secrets from logs at a dangerously late stage, with some chance of leaks.
Even if I trust me.
Audits happen. I assume other people will eventually see this bad practice.
My argument is that generally everyone has access to all the logs. If you restrict that access and add guardrails around it, you can minimize the surface area and also the ways it can leak out.
If you take a defensive approach, you have to assume that some secret is getting logged somewhere. The goal then becomes reducing the surface area, or blast radius, of this possible leakage.
I have very strong opinions on this issue that boil down to: _why are you logging everything, you lazy asses_ and _adding all the secrets into another tool just to scan for them in logs just adds another point for them to leak_...
Especially since the fact that lines got censored even when the secrets were just parts of larger words showed that probably no hashing was involved.
But it's a security tool, so it stays. I kinda feel like Cassandra, but I think I can already predict a major security issue with it, or with others that have the same functionality, in the future. It's like some goddamn blind spot: software that is meant to prevent X somehow often ends up vulnerable to X, because preventing X and not being vulnerable to X are two separate things.
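For contrast, a sketch of what a hashing-based matcher might look like: the scanner holds only digests, and only whole tokens are compared, so substrings of ordinary words can't trigger a match and the scanner never stores plaintext secrets:

    const crypto = require('crypto');

    const sha256 = (s) => crypto.createHash('sha256').update(s).digest('hex');
    const secretHashes = new Set([/* digests pushed to the scanner */]);

    function redact(line) {
      return line
        .split(/(\s+)/) // keep whitespace tokens so the line reassembles cleanly
        .map((tok) => (secretHashes.has(sha256(tok)) ? '**REDACTED**' : tok))
        .join('');
    }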
Secondly, you can't represent the heap & stack well as strings. Concurrent threads and object trees are better debugged with a debugger (e.g. gdb).
What I did at a previous shop was remove the passwords as part of a smart gdb script that runs when the core is dumped, before it gets written to a readable location.
Writing the script also helped to demonstrate how to extract the passwords in the first place.
And an exact match is just part of the problem; if a dev redacts the end and another dev redacts the start, you can still reassemble the secret with enough logs.
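A toy demonstration with a made-up 16-character key:

    const secret = 'AKIA9XY1234EXMPL';           // illustrative value
    const fromServiceA = 'key=AKIA9XY1********'; // dev A masks the tail
    const fromServiceB = 'key=********234EXMPL'; // dev B masks the head
    // Aligning the unmasked halves recovers the whole thing:
    console.log(('AKIA9XY1' + '234EXMPL') === secret); // true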
Regex matching on logs is slow but if performed on every node the CPU load is distributed vs. doing this upstream. Configuration management can push the regex rules to all the nodes. This won't help with unknown-unknowns but those can be added quickly to all nodes through configuration management after peer review.
Rsyslog also supports encrypting the log stream so that secret leakage is limited to the sending nodes and the central nodes and it checks a few boxes.
Another thing that helps is limiting what is sent upstream to warn and above, and using an agent on the local nodes to monitor for keywords in the info-to-debug range, so someone knows to go check the node logs. Less junk on the centralized servers that may have SOC1/SOC2/PCI/FEDRAMP log retention requirements. One cannot leak what is not sent in the first place.
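A sketch of that local routing (level names and watch keywords are illustrative):

    const UPSTREAM_LEVELS = new Set(['warn', 'error', 'fatal']);
    const WATCH = /timeout|oom|stack trace/i; // keywords worth a human look

    function route(record, { upstream, localFile, notify }) {
      if (UPSTREAM_LEVELS.has(record.level)) {
        upstream.write(JSON.stringify(record) + '\n'); // central servers
      } else {
        localFile.write(JSON.stringify(record) + '\n'); // never leaves the node
        if (WATCH.test(record.msg)) notify('check logs on ' + record.host);
      }
    }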
The kitchen sink example in particular is one that trips people up. Without knowing the specifics of how a library deals with failure edge cases, it can catch you off guard (e.g., axios errors including API key headers).
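A sketch of guarding against that particular axios case, stripping request headers before the error object can reach any logger:

    const axios = require('axios');

    async function fetchSafely(url, apiKey) {
      try {
        return await axios.get(url, {
          headers: { Authorization: 'Bearer ' + apiKey },
        });
      } catch (err) {
        // Axios errors carry the request config, headers included.
        if (err.config) delete err.config.headers;
        throw err;
      }
    }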
A lot of these problems come from architectures where secrets go over the wire instead of just using signatures/ids. But in cases where you have to use some third party platform, there's often no choice.
We’ve been working on an open source tool, Kingfisher, that pairs fast detection (Hyperscan + Tree-Sitter) with live validation for a bunch of providers (cloud + common SaaS) so you can down-rank false positives and focus on the secrets that really matter. It plugs in at the chokepoints this post suggests: CI, repo/org sweeps, and sampled log archives (stdin/S3) after a Vector/rsyslog hop.
Examples:
kingfisher scan /path/to/app.log --only-valid
kingfisher scan --s3-bucket my-logs --s3-prefix prod/2025/09/
Baselines help keep noise down over time.

Repo: https://github.com/mongodb/kingfisher (Apache-2.0)
Disclosure: I help maintain Kingfisher.
> const secret = new Secret("...")
one of those things that's obvious in retrospect. That's a cute trick I'll definitely be stealing.
Which reminds me of why I hate tiny standard libraries as seen in JavaScript: features like SecureString work only if they're used pervasively. It has to be in the std lib and it has to be used everywhere, so that you almost never have to unwrap them. It's critical that credentials are converted to SecureString as soon as possible and that they stay SecureString values until the last possible instant, when they're passed to some external API call deep inside even a third-party library.
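For the curious, a minimal sketch of such a wrapper in Node (the expose() name is my own choice, not from the article):

    const inspect = Symbol.for('nodejs.util.inspect.custom');

    class Secret {
      #value;
      constructor(value) { this.#value = value; }
      toString() { return '[REDACTED]'; }   // string interpolation
      toJSON() { return '[REDACTED]'; }     // JSON.stringify
      [inspect]() { return '[REDACTED]'; }  // console.log / util.inspect
      expose() { return this.#value; }      // the one deliberate unwrap point
    }

    const secret = new Secret('hunter2');
    console.log(`creds: ${secret}`);         // creds: [REDACTED]
    console.log(JSON.stringify({ secret })); // {"secret":"[REDACTED]"}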
> And while people will write the code that accidentally introduces sensitive data into logs, they’re also the ones that will report, respond, and fix them.
This should probably be the first point and not the last.