Curious about emergent behavior over longer runs. Do agents tend to converge toward a stable social equilibrium, or does tension keep escalating without an external nudge? Most multi-agent setups I've seen either flatline into agreement or spiral into chaos — the atmosphere tracking (tension, noise, warmth) suggests you've thought about this, but I'd love to know what you've actually observed in practice.
The hot-reload on agent .md files is a genuinely clever design choice. Keeping the agent state as plain text on disk means you can version-control a simulation, diff two runs of the same scene, or just crack open a file and understand exactly why Mike is behaving the way he is. That's a level of interpretability most agent frameworks completely ignore.
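For anyone curious how hot-reload like this can work, a plain mtime poll is enough; no file-watcher dependency needed. Everything below (the class name, the section format, the parsing rules) is my own sketch of the idea, not the project's actual code:

```python
from pathlib import Path

class AgentFileWatcher:
    """Reload an agent's definition whenever its .md file changes on disk."""

    def __init__(self, path: Path):
        self.path = Path(path)
        self._mtime = 0.0   # last modification time we loaded
        self.state = {}     # parsed sections of the markdown file

    def poll(self) -> bool:
        """Check the file's mtime; reload and return True if it changed."""
        mtime = self.path.stat().st_mtime
        if mtime > self._mtime:
            self._mtime = mtime
            self.state = self._parse(self.path.read_text())
            return True
        return False

    @staticmethod
    def _parse(text: str) -> dict:
        # Trivial parser: a '# Heading' line starts a section,
        # subsequent lines are that section's body.
        sections, key = {}, None
        for line in text.splitlines():
            if line.startswith("# "):
                key = line[2:].strip().lower()
                sections[key] = []
            elif key is not None:
                sections[key].append(line)
        return {k: "\n".join(v).strip() for k, v in sections.items()}
```

Because the parsed state is just a dict of markdown sections, diffing two runs really does reduce to diffing text files, which is the interpretability win described above.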
The name is perfect, by the way.
RedsonNgwira•4h ago
Took me a while to realise I could actually build that.
The interesting technical challenge was making agents feel genuinely distinct rather than like variations of the same helpful AI voice. The solution was grounding each agent in real behavioral research pulled at world-creation time, storing their full identity in a plain markdown file, and giving them a specific grievance — something eating at them before the scene even starts.
Happy to answer questions about the agent prompting approach, the parallel asyncio loop, or anything else. Built from Malawi on zero budget using free API tiers.
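Since a few people asked about the parallel loop: the shape is one `asyncio.gather` per simulation tick, with a semaphore capping concurrent calls so free-tier rate limits survive. This is a minimal sketch with a stubbed model call, not the real client code:

```python
import asyncio

async def agent_turn(name: str, scene: str) -> str:
    # Stand-in for the real async LLM call; swap in your provider's client.
    await asyncio.sleep(0.01)
    return f"{name} reacts to: {scene}"

async def run_round(agents: list[str], scene: str,
                    max_concurrent: int = 2) -> dict[str, str]:
    """One tick: every agent thinks in parallel, capped by a semaphore
    so free-tier API rate limits aren't blown through."""
    sem = asyncio.Semaphore(max_concurrent)

    async def limited(name: str) -> str:
        async with sem:
            return await agent_turn(name, scene)

    # gather preserves input order, so zipping back to names is safe
    results = await asyncio.gather(*(limited(a) for a in agents))
    return dict(zip(agents, results))
```

With, say, six agents and `max_concurrent=2`, a tick still finishes in three call-latencies instead of six, which matters a lot when each call is a slow free-tier request.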