We were frustrated by how slow and difficult it can be to iterate on large batch-processing pipelines. Simple, small changes meant rebuilding Docker containers, waiting for GCP Batch or AWS Batch pipelines to redeploy, and waiting for VMs to cold-boot: a >5-minute dev cycle per iteration, all just to see what error the code throws this time, then do it all over again! Many other tools in the space were either too complicated, closed-source or managed-only, too difficult to set up and maintain, or simply too expensive.
This is why we created Burla: a way to just run your stupid Python function, in whatever Docker container you want, on whatever hardware you want, across thousands of VMs, until it's done. It comes with a dashboard for monitoring long-running background jobs. It's open-source, and can be installed with one command. Even with thousands of VMs running, code changes deploy and start running in around two seconds, vastly shortening the development cycle compared to tools like GCP Batch and AWS Batch.
Our long-term goal is to make more cloud services simple, fast, and open-source. We believe that, in general, whether you're coding locally or on a cluster of 1,000 machines, infrastructure should update and react quickly, as in under-a-second quickly. We should be able to iterate at the speed of thought, not at the speed it takes a Lambda function, batch workload, ETL pipeline, or Kubernetes service to redeploy!