In 2026: frontend web developer reinvents tmux.
Guys, please do us the service of pre-filtering your crack token dreams by investigating the tool stack that is already available in the terminal ... or at least give us the courtesy of explaining why your vibecoded Greenspun's-tenth-law contraption is a significant leg up on what already exists, and has perhaps existed for many years (and is therefore in the training set, and therefore probably going to work perfectly out of the box).
But in practice you are padding agents' token counts with streams of TUI output instead of leveraging standard Unix pipes that have been around since day one.
TLDR - your agent wants a CLI anyway.
Disclaimer: still a cool project and thank you to the author for sharing.
I very regularly need to interact with my work through a Python interpreter. My work is scientific programming, so the variables might be arrays with millions of elements. To debug, optimize, verify, or improve my work in any way, I cannot rely on any method other than interacting with the code as it runs, or while everything is still in memory. So if I want to really leverage LLMs, especially to let them work semi-autonomously, they must be able to do the same.
I'm not going to dump tens of GB of stuff to a log file or send it around via pipes or whatever. Why is there a NaN in an array that is the product of many earlier steps in code that took an hour to run? Why are certain data in a 200k-variable system of equations much harder to fit than others, and which equations are in tension with each other, preventing better convergence?
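For the NaN-hunting case, a minimal sketch of what in-memory inspection looks like (assuming NumPy; the array and the planted NaN are hypothetical stand-ins for a real pipeline result):

```python
import numpy as np

# Stand-in for the output of an hour-long pipeline, with one bad entry.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
x[123_456] = np.nan  # planted for illustration

# At a breakpoint (or in the interpreter), locate the bad entries while
# everything is still in memory, instead of dumping gigabytes to disk:
bad = np.argwhere(np.isnan(x)).ravel()
print(bad)         # indices of the NaN entries
print(x[bad - 1])  # inspect neighbors to trace where the NaN crept in
```

From there you can walk back up the pipeline, re-evaluating the earlier steps only at the offending indices.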
Are interpreters and pdb not great, previously-existing tools for this kind of work? Does a new tool that lets LLMs/agents use them actually represent some sort of hack job because better solutions have existed for years?
I saw this post a while ago that turned me on to the idea: https://news.ycombinator.com/item?id=46570397
My complaint is that tmux already handles this perfectly. Exactly the claim OP is making for their software is served by robust, 18-year-old software.
In 2026, it costs nearly nothing to thoroughly and autonomously investigate related software — so yes I am going to be purposefully abrasive about it.
In the same vein as the parent comment, the curiosity is why you would vibe code a solution instead of reaching for grep.
Ideally Ghostty would offer primitives to launch splits but c’est la vie. Apple automation it is.
This is exactly how I do most of my data analysis work in Julia.
https://github.com/david-crespo/dotfiles/blob/main/claude/sk...
On one hand, it's normal in education and pedagogy to have the student or apprentice put the boring pieces together to find the wonder of the puzzle itself, but on the other, this is how we end up with https://xkcd.com/927/
I wish I were keeping better track of them all, but there's a bunch of neat tmux-based multi-agent systems. Agent of Empires, for example, has a ton of code for reading session data out of the various terminal UIs. https://github.com/njbrake/agent-of-empires
Ideally, IMO, TUI apps would also have accessibility APIs. The structured view those APIs provide feels like it would be nice to have, and it would mean an agent could just use accessibility and hit both GUI and TUI. For example, the recent voxcode submission does this on Mac to understand what file is open and at what line numbers. https://github.com/jensneuse/voxcode https://news.ycombinator.com/item?id=47688582
https://github.com/halfwhey/skills/tree/master/plugins/tmux
Two use cases I have for this are debugging with GDB/PDB and getting walkthroughs on how to use TUIs.
Another use is having the agent read the logs out of your development web server (the typical npm run dev, go run .).
I could do this with tmux send-keys and tmux capture-pane; you just need to organise the session, panes, and windows and tell the agent what is where.
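The send-keys/capture-pane loop described above can be sketched like this (the session and pane names are hypothetical; the actual tmux invocations are shown in comments so the sketch runs without a live tmux server):

```python
import subprocess

def tmux_send(target: str, keys: str) -> list[str]:
    """argv to type `keys` into a pane; target is "session:window.pane"."""
    return ["tmux", "send-keys", "-t", target, keys, "Enter"]

def tmux_capture(target: str) -> list[str]:
    """argv to print a pane's visible contents to stdout."""
    return ["tmux", "capture-pane", "-p", "-t", target]

# With a running server you would execute these, e.g. start the dev server
# in the agent's pane, then read back whatever it printed:
#   subprocess.run(tmux_send("work:0.1", "npm run dev"))
#   logs = subprocess.run(tmux_capture("work:0.1"),
#                         capture_output=True, text=True).stdout
print(tmux_send("work:0.1", "npm run dev"))
```

An agent that can run shell commands can drive this loop directly, which is the whole point: the pane becomes a readable, writable channel.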
That was my first agent to tool communication experience, and it was cool.
After that I experimented with agent-to-agent communication, and I would prompt Claude with "after you finish, ask @alex to review your code". In the CLAUDE.md file I'd explain that to talk to @alex you need to send the message to his tmux session using tmux send-keys, and to Codex I'd say "when you receive a review request from @claudia, do such and such, and when you finish, write her back the result". I added one more agent to coordinate a todo list and send out the next tasks.
After that I got a bit carried away and wrote some code to organise things in Matrix chat rooms (because the mobile app just works with your server), and I was fascinated that they seemed to be collaborating quite well (to some extent), but it didn't scale.
I abandoned the "project" because, after all, agents were getting better and better at implementing internal todo tasks, subagents, etc., plus some other tmux orchestration tool appeared every other day.
I got fatigued by so many new AI things coming out, so in the end I went back to just using iTerm, split panes, and manual coordination. Tabs for projects, panes for agents, no more than 2 agents per project (max 3 for a non-conflicting side task). I think that is also what doesn't tire me cognitively.
My project name was cool though, tamex, as in tame tmux agents :)
And to comment on the submission: I think the idea has potential, and I might give it a try. The key is to have low friction and require low cognitive load from the end user. I think that's why skills, after all, are the thing that is going to stick the most.
This is one area that makes me feel like our current LLM approach is just not quite general enough.
Yes, developers and power users love the command-line, because it is the most efficient way to accomplish many tasks. But it's rarely (never?) our only tool. We often reach for TUIs and GUIs.
It's why approaches like this get me excited: https://si.inc/posts/fdm1/