I kept running into a problem with opencode: I wanted to try a prompt against five different models at once, but didn't want to do the work of managing the worktrees and getting things running in each instance. So I built Kaleidoscope, a way to run multiple agents against a single problem, to make it a little easier to get good results out of the AI slot machine.
It works best for problems you can verify by inspecting a running result. It leans on opencode and tmux for the heavy lifting of running agents and managing panes, so it acts more as a "turbo" for AI: a faster way to get the result you want, or to explore several possibilities at once.
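To make the worktree-plus-tmux fan-out concrete, here is a minimal sketch of the kind of loop involved. This is not Kaleidoscope's actual implementation; the `opencode run` invocation, its `-m` model flag, the branch names, and the `.worktrees/` layout are all assumptions for illustration, so check them against your installed versions before running anything.

```shell
# kscope_fanout PROMPT MODEL [MODEL...]
# Gives each model its own git worktree (so edits don't collide) and its
# own tmux pane running an opencode agent against the same prompt.
# Set DRY_RUN=1 to print the commands instead of executing them.
kscope_fanout() {
    prompt=$1; shift
    session=kaleidoscope

    # Echo every command; only execute when DRY_RUN is unset.
    run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

    run tmux new-session -d -s "$session"
    i=0
    for model in "$@"; do
        dir=".worktrees/agent-$i"
        # One isolated checkout per agent.
        run git worktree add "$dir" -b "kscope-$i"
        if [ "$i" -gt 0 ]; then
            run tmux split-window -t "$session"
        fi
        run tmux select-layout -t "$session" tiled
        # Launch one agent per pane against the shared prompt.
        run tmux send-keys -t "$session" \
            "cd $dir && opencode run -m $model '$prompt'" Enter
        i=$((i + 1))
    done
    echo "attach with: tmux attach -t $session"
}
```

Afterward you can watch all the panes side by side, keep the worktree whose result passes your inspection, and `git worktree remove` the rest.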