Privacy Note: This product is not built for privacy yet. The current use case is internal tools or beta features where users aren’t promised privacy. To be clear: this tool is NOT FOR PRODUCTION.
In the future, there will be a feature for anonymizing all private information automatically.
Problem: My project this year was an AI-assisted tool that helps pediatricians file their insurance claims.
Niche industries like this require a ton of examples, fine-tuning, and re-prompting before you actually have a product that works. Then the output needs monitoring to some extent (with the hospital’s consent, of course) for at least the first couple of months, so small model changes or edge cases don’t break things.
This monitoring takes months away from building new features, and every new feature I wanted to ship required the same constant beta monitoring before it reached a reliable state. The same went for internal tools and automations I needed to work dependably. That is when I started wishing I had an AI engineer/architect monitoring outputs 24/7 for every new feature’s first month. Real-world software needs to break almost never, and current AI models often don’t quite get us there: the gap from 90% to 100%, or 95% to 100%. We waste months tweaking internally before shipping, with no way for the system to be improved live against real-world inputs.
In niche agent environments, you sometimes need an actual human to jump in.
How it works: First, a beta deployment. You deploy your AI for business use case X, in beta or internally.
Each step of your pipeline queries our API, specifying which models you prefer, how outputs should be flagged, and so on.
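Here is a minimal sketch of what one pipeline step might look like, assuming a simple HTTP API. The endpoint URL, field names, and flag rules are all placeholders I made up, not the real interface:

    import requests

    # Hypothetical sketch of one pipeline step calling the service.
    # The endpoint URL, field names, and flag rules are placeholders,
    # not the real API.
    claim_text = "Patient seen for routine well-child visit..."

    resp = requests.post(
        "https://api.example.com/v1/pipeline-step",
        json={
            "step": "extract_claim_codes",  # which stage of your pipeline this is
            "models": ["gpt-4o", "claude-sonnet"],  # preferred models, in order
            "input": claim_text,
            # Conditions under which a human reviewer gets pulled in:
            "flag_if": {"confidence_below": 0.8},
        },
        timeout=30,
    )
    resp.raise_for_status()
    output = resp.json()["output"]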
Then, a human in charge of a batch of outputs sees a flagged output when something goes wrong (we agree up front on what “wrong” means). They can use human judgement to tweak the prompt, try a different model, or provide added context, iterating across multiple parallel threads until the correct output comes out.
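To give a sense of what the review trail could look like, here is an illustrative record of a flagged output after a human resolved it. Every field name is an assumption, and the CPT/ICD-10 codes are just example values:

    # Illustrative shape of a flagged output after human review; every field
    # name is an assumption about what such a record could contain, and the
    # CPT/ICD-10 codes are just example values.
    reviewed = {
        "status": "resolved",
        "flag_reason": "confidence_below_threshold",
        "attempts": [
            {"model": "gpt-4o", "prompt_rev": 1, "accepted": False},
            {"model": "claude-sonnet", "prompt_rev": 2, "accepted": False},
            # Reviewer added missing payer context on the third try:
            {"model": "claude-sonnet", "prompt_rev": 3, "accepted": True},
        ],
        "final_output": {"cpt_codes": ["99392"], "icd10": ["Z00.129"]},
    }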
Second, fine-tuning. You now own a dataset of the prompt changes and output corrections that produced that magical output. Thousands of tweaks that can take your model to the next level for each feature are sitting in your db. This data lets you ship faster, with better guarantees and far less of the manual testing that the real world never rewards or punishes.
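A hedged sketch of turning those accepted fixes into fine-tuning data, using the common chat-style JSONL format. The record schema ("input", "final_prompt", "accepted_output") is illustrative, not what the product actually stores:

    import json

    # Hedged sketch: turning accepted review fixes into a fine-tuning file
    # in the common chat-style JSONL format. The record schema is
    # illustrative.
    records = [
        {
            "input": "Patient seen for routine well-child visit...",
            "final_prompt": "Extract CPT and ICD-10 codes. Payer rules: ...",
            "accepted_output": '{"cpt_codes": ["99392"], "icd10": ["Z00.129"]}',
        },
    ]

    with open("finetune.jsonl", "w") as f:
        for r in records:
            # One chat-style training example per accepted fix.
            example = {
                "messages": [
                    {"role": "system", "content": r["final_prompt"]},
                    {"role": "user", "content": r["input"]},
                    {"role": "assistant", "content": r["accepted_output"]},
                ]
            }
            f.write(json.dumps(example) + "\n")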
Who are the humans? For now, I’m a developer doing the tickets manually along with technical friends I’m paying out of pocket (yes, it IS available 24/7!!!). This is intentionally manual during beta, with clear review guidelines, so we understand the process before trying to hire.
How slow is it? Most of the time no human touches it, and sometimes a human takes a quick action you won’t notice. In some edge cases you’ll feel noticeable slowing (10s+); we’re working to speed those up too, and the alternative is fully broken output.
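One hypothetical way a client could absorb that slow path is to submit the step as an async job and poll, so a 10-second human review never blocks a request thread. Endpoints and fields here are placeholders:

    import time
    import requests

    # Hypothetical handling for the slow, human-reviewed path: submit the
    # step as an async job and poll. Endpoints and fields are placeholders.
    def run_step(payload, base="https://api.example.com/v1"):
        job = requests.post(f"{base}/pipeline-step?async=true",
                            json=payload, timeout=10).json()
        while True:
            status = requests.get(f"{base}/jobs/{job['id']}", timeout=10).json()
            if status["state"] == "done":
                return status["output"]  # usually instant, rarely human-slow
            time.sleep(2)  # back off while a reviewer works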
Who is it not for? This is not meant for consumer apps, privacy-sensitive production systems, or teams expecting zero human involvement.
vmitro•7h ago
As I also work on a similar concept, where HITL is a first-class citizen, can you tell us a bit more about the underlying technology stack, whether it’s possible for users to host their own models for inference and fine-tuning, how pipelines are defined, and such?