> Modern apps are made of too many services. They're everywhere and in constant communication.
So we made tooling to make it easier for you to make more of them!
Do we really struggle to bring up services as containers and apply kube configs?
When I develop services that run in kube, I don't dev with kube, and you shouldn't have to either. I also use docker-compose for most dev-env services.
Perhaps I'm not developing the right kind of software. For those who find this type of tool useful: when would you use it?
Otherwise I think it's meant for cases where the system you need for testing is too big to run on your local machine.
By default, Tilt is actually intended for local development using kind, minikube or other similar tooling. It supports developing against a multi-node cluster but it requires extra configuration and slows down iteration time.
- Service discovery
- Volume mounts
- Ingress and Certificates
- Metrics scraping and configuration
- Dashboards
It’s really quite powerful. If you’re deploying to Kubernetes, Tilt lets you avoid mocking things out with “development-only” setups like docker compose.
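For context, a minimal Tiltfile covering that workflow is only a few lines (the image name, paths, and port here are hypothetical, not from the original comment):

```python
# Build the image and deploy the same manifest you'd ship to production.
docker_build('example/api', './api')        # hypothetical image name and build context
k8s_yaml('deploy/api.yaml')                 # the real production manifest, reused for dev
k8s_resource('api', port_forwards=8000)     # reach the service on localhost:8000
```

Because the manifest is the same one used in production, things like service discovery and volume mounts behave the same locally as in the cluster.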
I think if you told our team to go back to Docker Compose they'd revolt on the spot haha
If I can run external dependencies in Docker locally, I can set up my app to run entirely from my laptop. That's all docker-compose does: it runs dev dependencies like DBs and other services whose code I'm not editing.
As far as code reloading goes, there are a million tools that do that already. Go already compiles locally in well under a second.
All that being said, why are people choosing to develop in containers/kubernetes?
Maybe apps that need to be more tightly integrated with kube would benefit from this?
Tilt is a monitor process that builds and starts your services, with a hot-reload loop that rebuilds and restarts them when the underlying code changes. The hot-reload loop even works for statically compiled languages.
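As a sketch of how that can look for a compiled language (the image name, paths, and binary name are hypothetical), you compile locally and let Tilt sync the resulting binary into the running container:

```python
# Recompile the Go binary locally whenever the source changes.
local_resource('api-build',
               'CGO_ENABLED=0 GOOS=linux go build -o build/api ./cmd/api',
               deps=['./cmd/api'])

# Sync the freshly built binary into the running container instead of
# rebuilding the whole image, then restart the process.
docker_build('example/api', '.',
    live_update=[
        sync('build/api', '/app/api'),
        restart_container(),
    ])
```

`sync` and `restart_container` are Tilt's live-update primitives; the point is that the image rebuild is skipped entirely on each edit.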
Obviously the real magic is the live syncing of patches into remote containers, though.
I like a fast svelte dev environment with something like docker-compose which might require some mocked out dependencies to keep things fast and then using Kubernetes for other environments once I have local tests passing.
In my case, I find that I prefer having higher fidelity and simpler service code by using Tilt to avoid mocks. It's also nice for frontend development because, using a Kubernetes ingress, you can avoid the need for things like frontend proxies, CORS and other development-only setup.
I have a big mix of setups for my projects. I like Vite+Bun (client+server) for smallish projects because the dev servers start instantly. But then I have to remind myself how to actually productionize it when it comes time for that, but it's not too hard.
Then sometimes I need to bring in a database. I don't like running those locally, undockerized because now you're locked into one specific version and it just gets messy. So sometimes if I'm lazy I just connect to my prod database server and just create a separate dev DB on it if needed. Then I don't need to start and stop anything, I just connect. Easy.
For a big complex app you can mix-and-match any of the above. Run the services that you need to iterate heavily on locally, and just farm out the rarely changing stuff to the cloud.
Want to create a Kubernetes secret? It's as simple as:

    load('ext://secret', 'secret_yaml_generic')
    k8s_yaml(secret_yaml_generic(...))

Want to create that secret from Vault instead?

    load('ext://vault_client', 'vault_read_secret', 'vault_set_env_vars')
    vault_set_env_vars('https://localhost:8200', 'mytoken')
    my_foo = vault_read_secret('path/myfoo', 'value')
    my_bar = vault_read_secret('path/mybar', 'foobar')
Clace uses Starlark for defining apps, instead of something like YAML. https://github.com/claceio/clace/blob/main/examples/utils.st... is a config file which defines around seven apps (apps are downloaded and deployed directly from git).
Clace uses Starlark for defining app level routing rules also. This avoids the need to use a nginx like DSL for routing.
I see that you also have docker-compose files -- are those for different tasks or for developer preference?
I'm also curious to understand why you have different build scripts for CI (`buildx`) vs local (regular docker build)? In our team, we use the same build processes for both.
Single cluster deployments are very easy.
My problem is that these services we manage in production are deployed across multiple regions (or k8s clusters).
Debugging _distributed_ applications is the issue.
There is also .PHONY, but that would make the rule always be triggered. Maybe I'm misremembering; it's been a long time :)
Can you do it? Sure, but somewhere someone is going to be suffering.
There seem to be a lot of tools in this space. I wish they wouldn't call themselves tools for "dev environments" when they are really more like tools for "deploying an app to your local machine", which is rather different.
I firmly believe that the primary way of interacting with my tests should be the ability to run them one by one from the IDE, and that running the code should mean run/attach with breakpoints.
I simply have a container for each project using my own container-shell.
I run my bundles / whatever, have all the tooling, and can use VSCode to attach via SSH (I use OrbStack, so I get project hostnames for free).
It’s the best workflow for me. I really wanted to like containers but again, it’s too heavy, buggy, bulky.
You give up a bit of snappiness, sure, but you can also keep the very small non container based tooling like linting outside of the container.
You give up way more than snappiness. Doing real development work, i.e. compiling, testing, debugging, is very cumbersome in a remote environment.
So where do you want to spend your time? Bandaids to make remote development suck less, or effort to develop locally, natively? There is no free lunch. If you choose "neither", your developer experience is gonna suck. (Most companies choose "neither" by the way, either consciously or unconsciously).
It really isn’t? Of course it depends on the ecosystem, but for the JVM, for example, you literally just expose debugging port 5005 out of the container and boom: step-through and other live debugging work just as well as outside the container. And as you of course allude to, if you go native you face a “works on my machine” problem unless you are all in on a hermetic, reproducible solution like Bazel or Nix. And chances are, unless you have that crack team of 10xers, a good chunk of your dev user base is going to struggle with the complexity and general ecosystem issues of those two solutions.
You’ve probably seen the worst-case world where people do containers wrong. And a lot of people do them wrong. But it’s pretty easy to learn how to do them right: someone can study multi-stage Docker builds for half a day and write perfectly fast, cache-friendly containerized builds. Properly BuildKit-cached local containers are extremely fast.
There are other ways of course, each with their own tradeoffs. You can do everything in Nix, and now you are spending your time fighting with Nix. You can do everything in Bazel, and now you are spending your time fighting with Bazel. In the end your stuff is going into a container anyway (for most people), so you still need to understand the container technology. Why not both reduce your toolchain sprawl and recreate that exact environment on the local machine?
TL;DR you can run some of your infra in local-dev that provide parity with your production environment.
i.e., it worked with my existing kustomize+k8s setup. It adds port forwarding and fast file sync into the running containers, which is all I really want. Rebuilding an image every time you make a change sucks.
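Those two features are roughly one-liners in the Tiltfile (the resource name and paths here are hypothetical, just to illustrate):

```python
k8s_yaml(kustomize('overlays/dev'))            # reuse the existing kustomize setup
k8s_resource('web', port_forwards='8080:80')   # localhost:8080 -> container port 80

docker_build('example/web', '.',
    live_update=[sync('./src', '/app/src')])   # push edits into the running container
```

The `sync` step is what avoids the image rebuild on every change.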
cirego•2mo ago
I love how Tilt enables creating a local development environment that lets my services run the same in production, test, and development. It has greatly simplified my service code and improved my quality.
In particular, I’d love to see Tilt be better around handling things like CRDs (there’s no way to mark a k8s_yaml as depending on a CRD being available, a frequent source of broken tilt up invocations).
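One partial workaround (names hypothetical, and admittedly clunkier than a first-class feature would be) is to apply the CRDs from a local_resource and make the dependent resources wait on it via resource_deps:

```python
# Apply the CRDs first as an explicit step...
local_resource('crds', 'kubectl apply -f crds/')

# ...then make the objects that need them wait until that step has run.
k8s_yaml('deploy/my-operator.yaml')
k8s_resource('my-operator', resource_deps=['crds'])
```

This only sequences the resources Tilt knows about; it doesn't let the k8s_yaml itself declare the CRD dependency, which is the gap described above.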
Having said that, the first thing I do, when working on any new project, is to get “tilt up” working.
Things I’ve used for testing include: eBPF-based collectors for security and observability, data pipelines, helm chart development, and Kubernetes controllers. It’s very flexible and powerful for a wide range of development.
AYBABTME•2mo ago
Happy to see new releases of Tilt even if the pace has slowed down. It's a very useful tool.
simultsop•2mo ago
cirego•2mo ago
To your other point, Tilt is to development as ArgoCD is to deployment. Tilt enables on-demand, reproducible development environments that are sufficiently high-fidelity that you can often replace your shared and/or long-lived testing clusters.
With Tilt, I test my application using the same Kubernetes specs / Kustomizations / Helm Charts that I use to deploy into production. When it comes time to deploy my application, I supply these same specs / kustomizations / charts to ArgoCD.
Because I can reuse the specs for both testing and production, I enjoy far greater testability of my application, improving quality and time to market.