It started out as a machine-ethics prototype in Java and gradually morphed into a full agent runtime. All the intermediate variations worth keeping are in my GitHub repo.
I recall trying to explain to a mentor what exactly I was building, and finding it difficult because there was no existing category for this kind of agent. It is not quite an assistant: it does more than run tasks. It is not quite an autonomous agent either, because although it acts autonomously, its autonomy is bounded. I kept falling back on the example of assistance animals, like guide dogs. That gave me part of what I needed: a non-human agent with bounded autonomy. But this is not a guide dog, it is an AI system, so I had to look to fiction for the final part: JARVIS, K9 from Doctor Who, Rhadamanthus from the novel The Golden Age. All of these systems have bounded autonomy and a long-term professional relationship with humans, like a family lawyer or doctor whose services are retained. Hence the type of this system: an Artificial Retainer.
The system has a number of interesting features: ambient self-perception, introspection tooling, and a safety system based on computational ethics (Becker) and decision theory (Beach). It is auditable, backs itself up to git, and can manage its own work with a scheduler and a supervised team of subagents. The website and the accompanying paper provide more details.
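To make the "bounded autonomy with supervised subagents" idea concrete, here is a minimal sketch of the general pattern: a supervisor vets every action a subagent proposes against an allowlist and records the decision in an audit trail. This is purely illustrative; the class names (`Supervisor`, `Subagent`, `Action`) and the allowlist mechanism are my own assumptions, not Springdrift's actual implementation.

```python
# Illustrative sketch of bounded autonomy, NOT Springdrift's real code:
# a subagent may only act through a supervisor that vets and audits
# each proposed action. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    payload: str


@dataclass
class Supervisor:
    """Gatekeeper: every subagent action is vetted before it runs."""
    allowed: set = field(default_factory=lambda: {"read", "summarise"})
    audit_log: list = field(default_factory=list)

    def vet(self, action: Action) -> bool:
        # Record every decision so the agent's behaviour is auditable.
        ok = action.name in self.allowed
        self.audit_log.append((action.name, "approved" if ok else "refused"))
        return ok


class Subagent:
    def __init__(self, supervisor: Supervisor):
        self.supervisor = supervisor

    def attempt(self, action: Action) -> str:
        # The subagent cannot bypass the supervisor's check.
        if not self.supervisor.vet(action):
            return f"refused: {action.name}"
        return f"done: {action.name}"


sup = Supervisor()
agent = Subagent(sup)
print(agent.attempt(Action("read", "notes.txt")))    # done: read
print(agent.attempt(Action("delete", "notes.txt")))  # refused: delete
```

The point of the pattern is that autonomy lives inside a hard boundary: the subagent decides what to try, but the supervisor decides what is permitted, and the audit log preserves both sides of that conversation.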
I make no huge claims for the system; it is quite new. I offer it as a reference implementation of a new category of AI agent, one I think we need. The road to AGI is all very well, and I am not sure Springdrift gets us any closer, but it does represent an attempt to build an intermediate, safe type of agent that we seem to be missing.
All feedback and comments welcome!
GitHub: https://github.com/seamus-brady/springdrift
arXiv paper: https://arxiv.org/abs/2604.04660
Eval data: https://huggingface.co/datasets/sbrady/springdrift-paper-eva...
Website: https://springdrift.ai/