It aims for as much compatibility with GitLab pipelines as makes sense, so developers get a fast feedback loop by running their jobs locally. On macOS it uses the Apple Container CLI to spin up containers quickly (you can customize the VM specs for this), and it is also compatible with Docker and Podman. On Linux it works with Podman, Docker, or nerdctl. You can use a local Ollama model or Codex to troubleshoot a failing job right in the job window, and the prompt that gets sent to the AI agent for troubleshooting can be customized. Claude Code support should land sometime this week.
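For context, the jobs it runs come from ordinary GitLab CI config. A minimal `.gitlab-ci.yml` like the sketch below (the job name, image, and script are just placeholders) is the kind of thing you could iterate on locally instead of pushing to trigger a remote runner:

```yaml
# Illustrative .gitlab-ci.yml; job name, image, and script are placeholders.
stages:
  - test

unit-tests:
  stage: test
  image: alpine:3.19
  script:
    - echo "running tests"
```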
Right now it runs its own GitLab pipeline and a few other projects I'm working on. The tool is in its infancy, so it may be rough around the edges.
An MCP feature should land today or tomorrow that will let you hook the tool up to your AI agent of choice, which may provide more value for people who use AI agents as their daily driver.