We built a tool for automatically assessing business health from complex management financial documents and bank statements.
To get this working reliably, we had to iterate extensively on the field-level prompts that drive the structured output.
That led us to build a feature that lets users improve those prompts themselves, and with some context engineering we now have what we are calling “feedback loops”.
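To make “field level” concrete: each field in the extraction schema carries its own description, and that description acts as the prompt for that field. Here is a minimal sketch of the idea in Python, using pydantic purely for illustration (the schema and wording are made up, not our real one):

    from pydantic import BaseModel, Field

    # Each description doubles as the per-field instruction the model sees during extraction.
    class BusinessHealthExtraction(BaseModel):
        monthly_revenue: float = Field(
            description="Total revenue for the statement month, in the account currency. "
                        "Exclude VAT and one-off asset sales."
        )
        closing_cash_balance: float = Field(
            description="Closing balance on the final day of the bank statement period."
        )
        largest_recurring_outflow: str = Field(
            description="Counterparty with the largest recurring debit, e.g. payroll or rent."
        )

Iterating on a bad extraction then means tightening the one description that caused it, rather than rewriting a single giant prompt.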
Here is a 2 min demo https://www.youtube.com/watch?v=ZDNlEZydoXU
How it works (video runs through these steps):
1. Create a target form for your extraction (AI helps create it)
2. Upload a document (choose from different models)
3. Review the quality of the extraction (and check the PDF citations)
4. If there are any mistakes, correct them and give feedback at the field level
5. Once you feel like you've seen enough errors and provided enough corrections, use the Feedback workflow to refine your field descriptions.
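Conceptually, the refinement step boils down to something like this (a simplified sketch, not our production code; the names are illustrative and the completion call is left as a parameter so you can plug in any model):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Correction:
        field_name: str
        extracted_value: str
        corrected_value: str
        user_feedback: str  # e.g. "this picked up gross revenue, we wanted net"

    def build_refinement_prompt(current_description: str, corrections: list[Correction]) -> str:
        # Turn each reviewer correction into one line of evidence for the model.
        evidence = "\n".join(
            f"- extracted {c.extracted_value!r}, corrected to {c.corrected_value!r} ({c.user_feedback})"
            for c in corrections
        )
        return (
            "You are improving the description of a field used for structured extraction.\n"
            f"Current description: {current_description}\n"
            f"Reviewer corrections:\n{evidence}\n"
            "Rewrite the description so these mistakes would not recur. Return only the new description."
        )

    def refine_field_description(
        current_description: str,
        corrections: list[Correction],
        complete: Callable[[str], str],  # your completion API of choice
    ) -> str:
        return complete(build_refinement_prompt(current_description, corrections))

More corrections per field give the model more evidence to work from, which is why the 2-3 extraction guideline below matters.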
You can try the app here https://app.sea.dev/
(it works best when you do at least 2-3 reasonable extractions between prompt refinements)
The feedback feature is currently in beta. If you make some corrections and then go to https://app.sea.dev/feedback, you'll see them ready for review and refinement.
I would love to get feedback on how we might improve this idea. The video was a toy example, but the feature is live and working with a few early credit-analyst customers, and they seem to like it.