cat > .env << EOF
DATABASE_URL=${{ secrets.TEST_DATABASE_URL }}
STRIPE_API_KEY=${{ secrets.STRIPE_TEST_KEY }}
EOF
which also addresses the trust and rotation problems. I suppose for dev secrets those are annoying, but even with secretspec you would have to rotate dev secrets when someone is offboarded.

[1] https://devenv.sh/blog/2025/07/21/announcing-secretspec-decl...
We hope that one day GitHub Actions will integrate secretspec more tightly, moving beyond environment variables as a transport.
That's going to be a long journey, one worth striving for.
Maybe I haven't worked at enough places, but... when has this ever been allowed/encouraged/normalized?
You can't necessarily revoke a secret just because it is in Hashicorp Vault or AWS Secrets Manager. Revocation is a function of the system that provisions and/or uses the secret for authentication, not the system that stores the secret. E.g. if you generate a certificate and store the private key with vault or sops, the revocation procedure is identical and has nothing to do with the secrets storage system.
Auditing access can be done coarsely by auditing access to the encryption key. Admittedly, this is an area where a more sophisticated system offers benefits, although it isn't exactly ironclad -- a service may access a secret and then leak or transfer it without that being visible in the audit log.
One of my favorite incidents during this clean-up effort: the security team and my team had discovered that a lot of DB credentials were just sitting on developers' local machines and basically nowhere else that made any kind of sense, and they'd hand them around as needed via email or message. So we made tickets everywhere we found instances of this, to migrate to the secret management platform. One lead developer with a privileged DB credential wrote a ticket that was basically:
"Migrate secret to secret management platform" and in the info section, wrote the plaintext value of the key, inadvertently giving anyone with Jira read access to a sensitive production database. Even when it was explained to him I could tell he didn't really understand fully why that was silly. Why did he have it in the first place is a natural followup question, but these situations don't happen in a vacuum, there's usually a lot of other dumb stuff happening to even allow such a situation to unfold.
I'm genuinely curious as to what the fireable offenses here would be. If the company had an existing (broken) culture of keeping unencrypted secrets I wouldn't expect people following that culture to be fired for it.
I have set up AWS + SOPS in several projects now, and the developers do not have access to the secrets themselves nor to the encryption key (which is stored in AWS). Only once did we ever need to roll back a secret, and that happened at the AWS level, not in the code. It also happened within the key rotation period, so it was easy.
For us it's easier to track changes (not the value, but when it changes) and easier to associate them with incidents.
edit: it’s not covered in the post, but it is on the launch and doc site: https://secretspec.dev/providers/onepassword/
Realistically, why would your different environments have different ways of consuming secrets from different locations? Yes, you wouldn't use AWS Secrets Manager in your local testing, maybe... but giving each developer control and management of their own secrets, in their own locations, is just begging for trouble. How do you handle sharing of common secrets? How do you handle scenarios where some parts are shared (e.g. a shared API key for a third-party dev API) but others aren't (a local instance of the test DB)? How do you make sure the API key that everyone uses in dev is actually rotated from time to time, and that nobody has stored it in a clear-text .env because they once had issues with 1Password's service being down and left it at that? How do you make sure that nobody is using an insecure secrets manager (e.g. LastPass)?
It just adds the risk of creating the impression that there is proper secrets management while actually having a mess of everyone doing whatever they feel like with secrets, with no control over who has access to what, or which secret is used where, by whom, and why. Which is a good ~70% of the point of secrets management.
Centralised secrets management or bust, IMO. Ideally with a secrets scanner checking that your code doesn't have a secret left in clear text by mistake or laziness. Vault/OpenBao isn't that complicated to set up, but if it really is, your platform probably has something already.
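The scanner piece is easy to prototype, though in practice you would wire an established tool (gitleaks, trufflehog, etc.) into CI. As a rough sketch of the idea only -- the patterns below are illustrative examples, not a vetted list -- a toy Rust scanner over files passed on the command line could look like:

    // scan.rs -- toy secrets scanner, illustrative only; use a dedicated tool in CI.
    use std::{env, fs};

    // Substrings that commonly indicate a credential committed by mistake.
    const SUSPICIOUS: &[&str] = &[
        "AKIA",                            // AWS access key ID prefix
        "-----BEGIN RSA PRIVATE KEY",
        "-----BEGIN OPENSSH PRIVATE KEY",
        "password=",
    ];

    fn main() {
        let mut findings = 0;
        // Scan every file path given on the command line.
        for path in env::args().skip(1) {
            let Ok(contents) = fs::read_to_string(&path) else { continue };
            for (lineno, line) in contents.lines().enumerate() {
                if SUSPICIOUS.iter().any(|p| line.contains(p)) {
                    println!("{path}:{}: possible secret: {line}", lineno + 1);
                    findings += 1;
                }
            }
        }
        // Non-zero exit code so a CI job or pre-commit hook can fail on any hit.
        std::process::exit(if findings > 0 { 1 } else { 0 });
    }

Run it over staged files in a pre-commit hook or as a CI step that fails the build when anything matches.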
Disclaimer: I work at HashiCorp, but opinions my own, I've been a part of the team implementing Vault at my past job for centralised secrets management and 100% believe it's the way things should be done to minimise the risk of mishandling secrets.
By having a secrets specification we can start working towards a future that consolidates these providers and allows teams to centralize if needed, with a simple means of migrating from a mess into a central system.
But even with a centralized method like secretspec, not everyone will accept reading secrets from environment variables, which is also how the 1Password CLI `run` command works [1]. Secrets may also need to be injected as files, or as less secure command-line parameters. In the Kubernetes world, one solution is the External Secrets Operator [2]. Secrets may also be pulled from an API, as well as from the cloud host. I won't comment on how that works in k8s.
To note, the reason for reading from file handles is so that the app can watch for changes and reload, e.g., key/token rotations without restarting the server.
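A minimal sketch of that pattern (the path and polling interval are assumptions; a real implementation would use filesystem notifications and proper synchronization) is to re-read a mounted secret whenever its mtime changes:

    // Re-read a mounted secret (e.g. a Kubernetes-style path) when it changes,
    // so the process picks up rotations without restarting.
    use std::{fs, path::Path, thread, time::{Duration, SystemTime}};

    fn watch_secret(path: &Path, mut on_change: impl FnMut(String)) -> std::io::Result<()> {
        let mut last = SystemTime::UNIX_EPOCH;
        loop {
            let modified = fs::metadata(path)?.modified()?;
            if modified > last {
                last = modified;
                // Hand the fresh value to the application.
                on_change(fs::read_to_string(path)?.trim().to_string());
            }
            thread::sleep(Duration::from_secs(30));
        }
    }

    fn main() -> std::io::Result<()> {
        watch_secret(Path::new("/var/run/secrets/api-token"), |token| {
            // In a real server this would swap the token behind an Arc/RwLock.
            eprintln!("reloaded token ({} bytes)", token.len());
        })
    }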
But what could be useful to some developers is a secretspec inject subcommand (the universal version of the op inject command). I use op inject / dotenvy with Rust apps -- pretty easy to manage and share credentials. Previously I had something similar written in Rust that also handled things like base64 / percent-encoding transforms.
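For reference, the application side of that flow is just dotenvy loading the rendered .env at startup; the variable names here are only examples:

    // Cargo.toml: dotenvy = "0.15"
    use std::env;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Load .env into the process environment if the file exists. In CI or
        // production the platform injects the variables instead, so a missing
        // file is fine and the load is best-effort.
        let _ = dotenvy::dotenv();

        let database_url = env::var("DATABASE_URL")?;
        let stripe_key = env::var("STRIPE_API_KEY")?;
        println!("DATABASE_URL loaded ({} chars)", database_url.len());
        let _ = stripe_key; // used by the Stripe client in a real app
        Ok(())
    }

The .env itself is produced by op inject from a template containing op:// secret references, so nothing sensitive gets committed.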
If you aren't tied to Rust, probably could just fork external-secrets and get all the provider code for free.
[1] https://developer.1password.com/docs/cli/reference/commands/...
Even then you can run a central Vault/OpenBao/whatever deployment.
One key issue is that splitting general config from secrets is extremely difficult in practice, because once the variables are accessible to a running code base, most languages and code bases don't actually have a way to differentiate between them internally.
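One way to recover that distinction inside the code base -- sketched here with an illustrative newtype; crates like secrecy do this more thoroughly -- is to make secret values a type that refuses to print itself:

    use std::fmt;

    // A newtype marking a value as a secret. Unlike a plain String pulled from
    // the environment, it can't be Debug-printed by accident, which is the
    // internal config-vs-secret distinction most code bases lack.
    pub struct Secret(String);

    impl Secret {
        /// Explicit, greppable access to the underlying value.
        pub fn expose(&self) -> &str {
            &self.0
        }
    }

    impl fmt::Debug for Secret {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            f.write_str("Secret([REDACTED])")
        }
    }

    // Ordinary config stays a plain field; only the secret needs `.expose()`.
    #[derive(Debug)]
    struct Config {
        api_base: String,   // general config, fine to log
        api_token: Secret,  // secret, redacted in Debug output
    }

    fn main() {
        let cfg = Config {
            api_base: "/api/routes".into(),
            api_token: Secret(String::from("dummy-value-for-demo")),
        };
        println!("{cfg:?}");               // api_token prints as Secret([REDACTED])
        let _ = cfg.api_token.expose();    // deliberate access is visible in review
    }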
I skipped the hard part of trying to integrate transparently with actual encrypted secret stores. The architecture leaves open the ability to write a new backend, but I have found that for most things, even in production, the more important security boundaries (for my use cases) mean that putting plaintext secrets in a file on disk adds minuscule risk compared to the additional complexity of adding encryption and screwing something up in the implementation. The reason is that most of those secrets can be rotated quickly because there will be bigger things to worry about if they leak from a prod or even a dev system.
The challenge with a standard for something like this is that the devil is always in the details, and I sort of trust the code I wrote because I wrote it. Even then I assume I screwed something up, which is part of why I don't share it around (the other reasons are that there are still some missing features and architecture cleanup, and I don't want people depending on something I don't fully trust).
There is a reason I put a bunch of warnings at the top of the readme. Other people shouldn't trust it without extensive review.
Glad to see work in the space trying to solve the problem, because a good solution will need lots of community buy-in to build quality and trust.
It's a standalone tool with YAML configuration, simple to use.
Basically the way it works:
- You create the secret in GCP/AWS/etc Secrets Manager service, and put the secret data there.
- Refer to the secret by its name in Teller.
- Whenever you run `$ teller run ...` it fetches the data from the remote service, and makes it available to your process.
*Configuration Values*
Your laptop is not hosting your website (I presume), so .env is not going to be enough to run your app somewhere other than your laptop.
I get it. You only want to run your app locally, and .env is convenient. But your production server probably isn't going to load your .env file directly, and it will probably need extra or different variables. This disconnect between "the main development environment variables" and "the extra stuff in production" will lead to inconsistencies that you have not tested/developed against. That will lead to production bugs. So keeping track of those differences in a uniform way is pretty useful.
How do you specify configuration for development and production without running into inconsistency bugs? By splitting up your app's configuration into "static" and "dynamic", and version-controlling everything.
1) "Static" configuration is things like environment variables, which do not change from run to run, and are not environment-specific. So for example, an API URL prefix like "/api/routes" is pretty static and probably not going to change. But an IP address definitely will change at some point, so this configuration isn't static. (To think about it another way: on your computer, some environment variables are simply stored in a text file and read into your shell; these are static)
2) "Dynamic" configuration are values that may change, like hostnames, IP addresses, port numbers, usernames, passwords, etc. Secrets are also "dynamic", because they should never be hard-coded into a file or code, and you will want to rotate secrets in the future. All dynamic configuration should be loaded during a deployment process (for example, creating an ECS task definition, or Kubernetes yaml file), or at runtime (an ECS task definition that sources environments from secrets, or a Kubernetes yaml that sources environments from secrets, or a function in your code that calls an API to look up a secret from Hashicorp Vault or similar). In particular for secrets, you want to load those every time your program starts, as close to the application's execution environment as possible. (To think about it another way: some environment variables on your computer require executing a program and getting its output to set the variable - like your $HOSTNAME, $USER, $SHELL, and other variables)
3) Both static and dynamic configuration should be version-controlled, and any change to these should trigger a new deployment. If a value changes, and you don't then immediately make a new deployment, that change could be harboring a lurking bug that you won't find out about until someone makes a deployment much later on, and trying to find the cause will be very difficult.
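To make the static/dynamic split concrete in code (the names, paths, and variables below are only illustrative), static values can be compiled into the artifact while dynamic values, secrets included, are resolved each time the process starts:

    use std::{env, fs};

    // "Static" configuration: checked into the repo, identical in every
    // environment, baked into the artifact at build time.
    const API_PREFIX: &str = "/api/routes";

    // "Dynamic" configuration: resolved at startup from whatever the deployment
    // injected (environment variables here, or files mounted by the platform).
    struct Runtime {
        db_host: String,
        db_password: String,
    }

    fn load_runtime() -> Result<Runtime, Box<dyn std::error::Error>> {
        Ok(Runtime {
            db_host: env::var("DB_HOST")?,
            // Secrets are read fresh on every start, as close to the execution
            // environment as possible: a mounted file if present, else an env var.
            db_password: match fs::read_to_string("/run/secrets/db_password") {
                Ok(s) => s.trim().to_string(),
                Err(_) => env::var("DB_PASSWORD")?,
            },
        })
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let rt = load_runtime()?;
        println!("{API_PREFIX} -> {}", rt.db_host);
        let _ = rt.db_password; // used to open the DB connection in a real app
        Ok(())
    }

The same loader runs unchanged in every environment; only what the deployment injects differs, which keeps dev and prod from drifting apart.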
*Infrastructure Patterns*
Test, stage, and prod servers are like pets. You have individual relationships with them and change them in unique ways, until eventually they have their own individual personalities. They become silos that pick up peculiarities that will not be reflected in other environments, and will be hard to replicate or rebuild later.
Instead, use ephemeral infrastructure (the "cattle" in "pets vs cattle"). There should be a "production" infrastructure, which is built with Infrastructure-as-Code, to create an immutable artifact that can simply be deleted and re-created automatically. That same code that builds production should build any other server, for example for testing or staging. When the testing or staging is done, the ephemeral copy should be shut down. They should all be rebuilt frequently to prevent infrastructure rot from setting in.
This pattern does a lot of things, like making sure you have automation for disaster recovery, using automation to prevent inconsistencies, using automation to detect when your infrastructure-as-code has stopped working, saving money by turning off unneeded resources, and the ability to spin up a unique copy of your infrastructure with unique changes in order to test them in parallel to your other infrastructure/changes. It also makes it trivial to test upgrades, patch security holes, or destroy and recreate compromised infrastructure. And of course it saves you time in the long run, because you only expend effort to set it up once.
*This Is Not About Scaling*
I know the first thing everyone's going to complain about is something like "I'm not Facebook, I don't need all that!" or "It works fine for me!".
There's a lot of things we do today that are better for us than what we did before, even though we don't have to. You brush your teeth and wash your hands, right? Well we didn't used to do those things. And you can still live your life without doing them! So why do them at all?
Because we've learned about the downsides of not doing them, and the benefits outweigh the downsides. Getting into the habit of doing things differently may be annoying or painful at first, but then they will become second nature, and you won't even think about it.
I'm not sure exactly which parts of the comment are about secrets rather than about how infrastructure should be done, but I see that secrets and configuration have very different lifetimes, so they should be provisioned separately. The config can, for example, live in git if it's free of secrets.
Secrets are provisioned at runtime, while config is provided at build time.
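As a small illustration of that lifetime difference in Rust (the variable names are hypothetical): build-time config can be captured when the binary is compiled, while the secret is looked up every time the process starts:

    // Build-time: option_env! is evaluated while compiling, so changing this value
    // means producing (and deploying) a new artifact -- the lifecycle config wants.
    // PROFILE_NAME is a hypothetical variable set by the build pipeline.
    const BUILD_PROFILE: &str = match option_env!("PROFILE_NAME") {
        Some(p) => p,
        None => "dev",
    };

    fn main() {
        // Runtime: the secret is resolved when the process starts, so it can be
        // rotated without rebuilding anything.
        let api_token = std::env::var("API_TOKEN")
            .expect("API_TOKEN must be provided by the runtime environment");
        println!("built as {BUILD_PROFILE}, token is {} chars", api_token.len());
    }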
`secretspec.toml` is in version control, and it tells you all about what's going to happen at runtime.
I don't understand, are you commenting on the design and UX of HN?
Our tool has similar goals, although with a slightly different approach. Varlock uses decorator-style comments within a .env file (usually a committed .env.schema file) to add additional metadata used for validation, type generation, docs, etc. It also introduces a new "function call" syntax for values - which can hold declarative instructions about how to fetch values, and/or can hold encrypted data. We call this new DSL "env-spec" -- similar name :)
Certainly some trade-offs, but we felt meeting people where they already are (.env files) is worthwhile, and will hopefully mean the tool is applicable in more cases. Our system is also explicitly designed to handle all config, rather than just secrets, as we feel a unified system is best. Our plugin system is still in development, but we will allow you to pull specific items from different backends, or apply a set of values, like what you have done. We also have some deeper integrations with end-user code, that provide additional security features - like log redaction and leak prevention.
Anyway, would love to chat!