It also seems.. irresponsible to claim that @sensitive values "will always be redacted in CLI output", when the whole point of something like Varlock is to configure some external application that it doesn't control.
And what does "AI-friendly" mean here anyway... beyond, I suppose, varlock being AI slop itself.[0]
[0]: https://github.com/dmno-dev/varlock/tree/514917f4228d49d4404...
So it would seem, on that front, that 1Password is doing the heavy lifting.
Using 1Password in this way has proven way better than storing .env files in plain text on dev machines, where the .env files get picked up if the company does backups, or someone keeps a repo in their Dropbox folder and the file gets flagged as potential malware and uploaded somewhere for further analysis, etc.
The goal here is to just make it dead simple to do the right thing with minimal effort. Get secrets out of plaintext, avoid the need to send them around insecurely, and help make sure you don't shoot yourself in the foot, which is surprisingly easy to do in hybrid server/client frameworks like Next.js.
Can you set up validations, syncing with various backends, and all of these protections yourself by wiring together a bunch of tools with custom code? Of course... But here's one tool that will do it all with minimal effort.
Even if you don't set values within your files, you can rely entirely on env vars set in the platform where the code runs and still benefit from the validation varlock provides.
Right now we give 1Password as an example, but you can use any provider that has a CLI. We are also working on a plugin system that should make it easier to integrate with any provider.
As for redaction - that note is about how we redact your secrets from _our_ CLI output. However, we also provide tools to redact within your application. Right now this works only in JavaScript, by patching the global console methods. We will also hook into stdout for `varlock run`, similar to what the 1Password CLI does.
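Roughly, the console-patching idea looks like this (a minimal illustrative sketch in TypeScript, not our actual implementation):

```
// Illustrative sketch only: redact known sensitive values from console output.
// Assume we already know which resolved config values were flagged @sensitive.
const SENSITIVE_VALUES: string[] = []; // populated from the loaded config

function redact(arg: unknown): unknown {
  if (typeof arg !== "string") return arg;
  return SENSITIVE_VALUES.reduce(
    (out, secret) => (secret ? out.split(secret).join("▒▒▒▒▒") : out),
    arg
  );
}

// Patch the global console methods so any logged secret shows up redacted.
for (const method of ["log", "info", "warn", "error"] as const) {
  const original = console[method].bind(console);
  console[method] = (...args: unknown[]) => original(...args.map(redact));
}
```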
The leak detection is much more interesting - especially in hybrid client/server frameworks, where you can easily shoot yourself in the foot.
By removing plaintext secrets from env files, we totally remove the risk of them leaking via AI code assistants, which I guarantee is happening millions of times a day right now. Also the schema itself and autogenerated types give AI much more context about your env.
For example, when we [1] deploy applications in Kubernetes, we have built an admission controller that fetches secrets from Vault and converts them into env variables or secrets for the application at runtime. That way, the application only carries a reference, in the form of an annotation.
If you hand out an `.env` as-is, people will extract the values and start using them. You will end up leaking secrets.
Another way we have explored injecting secrets is via a sidecar for the application, or via an SDK, but the lift seems to be a bit too much.
I think the deployment environment should be responsible for injecting the credentials for the best posture.
Vault can be a huge lift and doesn't make sense for many projects - we wanted to build a tool that makes sense from day one, even when there is no backing provider, but can grow with your team and change providers seamlessly.
the file is basically a big
```
const config = {
  http: {
    server_url: ENV === "prod" ? "https://myserver.com" : "http://localhost:3000",
    // ...
  },
}
```
and this lets me type variables, add comments, and get many other niceties
The schema itself (and the automatic types it can generate) also gives AI more context about what configuration is available, and what each item is for.
So…we're not just talking about secrets then. Any text in any file could be leaked. The solution isn't simply moving secrets out of env files, the solution is, um,
*not leaking the contents of local files*
My god. Have we forgotten all semblance of how security & privacy in computing should work?
It's annoying to do it right, so people often take shortcuts - skip adding validation, send files over Slack, don't add docs, etc...
The common pattern of using a .env.example file leads to constant syncing problems, and we often have many sources of truth about our config (.env.example, hand-written types, validation code, comments scattered throughout the codebase).
This tool lets you express additional schema info about the config your application needs via decorators in a .env file, and optionally set values, either directly if they are not sensitive, or via calls to an external service. This shouldn't be something we need to recreate when scaffolding out every new project. There should be a single source of truth - and it should work with any framework/language.
Note that you can load env vars into anything via `varlock run`, not just JavaScript. The JS integration is a bit deeper, providing automatic type generation, log redaction, and leak prevention.
We think the decorator comments (it’s an open spec we call @env-spec - RFC is here https://github.com/dmno-dev/varlock/discussions/17) are an intuitive addition to .env files, which are already ubiquitous.
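A quick illustrative sketch (decorator and function names approximate, and the 1Password item path is made up - the RFC linked above has the exact syntax):

```
# @sensitive @required
STRIPE_SECRET_KEY=exec('op read "op://dev-vault/stripe/secret-key"')

# @required
APP_URL=http://localhost:3000
```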
The hope is to remove the papercuts of dealing with env vars (and configuration more generally) by introducing an easy-to-understand system for validation and type safety, with the flexibility to use any third-party provider to persist your secrets. We found ourselves reimplementing this stuff on every project, wiring together many tools and custom code, only to end up with a mediocre outcome.
The very common pattern of using `.env.example` leads to constant syncing problems, with many folks resorting to sharing .env files and individual secrets over Slack, even when they know they shouldn’t. Turning that example into a schema and involving it in the loading process means it can never get out of sync. With validations built in, if something is wrong you’ll know right away with a helpful error instead of an obscure runtime crash.
Because the system knows whether each value is sensitive or not, we can do things like log redaction and leak prevention on the application side. Many tools try to do scanning but rely on regexes, while varlock knows the actual values to look for. We felt these were problems especially worth solving in a world where more frameworks are running the same code on both the server and the client.
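In practice, value-based leak detection means something like this (illustrative sketch only, not our actual code):

```
// Check client-bound output for the actual sensitive values, rather than
// guessing what a secret looks like with regexes.
const sensitiveValues: string[] = []; // resolved values flagged @sensitive

function assertNoLeakedSecrets(payload: string): void {
  for (const value of sensitiveValues) {
    if (value && payload.includes(value)) {
      throw new Error("Sensitive config value detected in client-bound output!");
    }
  }
}

// e.g. run against server-rendered HTML before it is sent to the browser:
// assertNoLeakedSecrets(renderedHtml);
```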
We had intended to share this here ourselves next week, but you beat us to the punch. We’re in the midst of shipping the drop-in Next.js integration (hopefully just merged today).
I also see a few comments about the “AI friendly” part. Right now tons of folks have sensitive keys in their .env files that are being leaked to AI assistants left and right. Removing plaintext secrets from env files eliminates this problem entirely. We also want to highlight that with this DSL on top of .env we’re making it much easier for LLMs to write safer code. Part of creating devtools is trying to understand how they will be used in the wild, so we’ve been working with common tools (Cursor, Windsurf, Claude, Gemini, etc.) to make sure they can coherently write @env-spec for varlock.
We’re literally just getting started so all of your feedback is super valuable.
We’ll continue to expand support for non-js languages (which already work via `varlock run`) as well as add more integrations, and eventually some CI/CD features for teams to help track and manage config.
jelder•8h ago
https://direnv.net is a better solution IMO. Once you set it up in your shell, it automatically loads environment variables from the `.envrc` file in whatever directory you're currently in. It includes a rich standard library (https://direnv.net/man/direnv-stdlib.1.html) for manipulating PATH, etc.
For secrets, I just add this line to `.envrc`:
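Something along these lines, using direnv's stdlib helper (the exact line may differ):

```
source_env_if_exists .envrc.private
```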
And add `.envrc.private` to `.gitignore`. Now that just works anywhere, whether the authors of whatever tool I'm using officially support `.env` files or not.
__jonas•7h ago
I use mise to load environment variables from .env, since I also use it to manage tool versions. When I don't have that available I just do
direnv is definitely another good option.
mananaysiempre•2h ago
For example, I can write a Nix flake, put “use flake” (and nothing else) in my .envrc in the same directory, and have whatever PATH, PYTHONPATH, etc. changes are needed to develop against the flake’s dependencies automatically applied when I enter the directory. You could almost certainly use this with virtualenv, nvm, or the like as well; I just haven’t tried.
theozero•4h ago
At the end of the day you could imagine varlock loading your config, and injecting it using a method that is not env vars - a file, sidecar process, etc.
0xbadcafebee•6h ago
The thing that runs your code is responsible for adding the environment variables from wherever they are kept for that environment. The execution environments running your code are not checking out your Git repository and reading files from it before they execute your program; that would introduce a chicken-and-egg problem (and make it much harder to run your program in new environments).
Below is an example of why you can't just load all your variables in a single ".env" file.
Environments:
I want to run my program in development!
I want to run my program in staging!
I want to run my program in production!

You have to add the environment variables to the execution environment before your program ever gets run. Otherwise (for example) you could never pull a Git repository to load variables, because where would the credentials to pull your Git repository come from? They have to be added beforehand. So that beforehand step is where you add all your environment variables for that environment. Your program merely reads them when it is executed; no need for your program to read additional files.

You should not do something like keep a ".env.dev", ".env.stage", ".env.prod" packaged up in a container. Each execution environment may need slightly different settings at run time. And secrets should not be kept in your Git repo; they should be loaded as-needed, at runtime, by your execution environment.
All this is covered neatly in The Twelve Factor App (https://12factor.net/config). As someone who's been doing this for two decades, I highly recommend everyone follow their guide to the letter.
theozero•4h ago
Imagine if every public Docker container had an .env.schema that was validated on startup, instead of scattered info about the available env vars in its README.
0xbadcafebee•2h ago
Schemas are application-specific. Applications all deal with data types differently. Some data types (defined in schemas) even use transforms and complex custom algorithms before their data is validated. So it's better to let the application handle schemas directly on an as-needed basis.
All of that can be done independent of environment variables. Just make a library that validates data types, and pass your environment variables (or any data, from anywhere) to it. This is better not only because you can validate any kind of data, but you can load your data from places other than environment variables (from disk, from database, etc). This kind of general abstraction is more useful for general purpose computing, rather than a complex solution tailored for only one use case.
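For example, with a validation library like zod (used here purely as an illustration - any schema/validation library works the same way), you validate whatever data you loaded, env vars or otherwise:

```
import { z } from "zod";

// Application-specific schema, handled by the application itself.
const ConfigSchema = z.object({
  PORT: z.coerce.number().int().positive().default(3000),
  DATABASE_URL: z.string().url(),
  LOG_LEVEL: z.enum(["debug", "info", "warn", "error"]).default("info"),
});

// Pass it env vars, or data loaded from disk, a database, etc.
const config = ConfigSchema.parse(process.env);
```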
Finally: A schema isn't a replacement for documentation. Just because you have a technical document that defines what data is allowed in a variable, doesn't mean that somebody then knows what the hell that thing does - what it affects, when it should or shouldn't be used, etc. Documentation is for humans, schemas are for computers.