We got interested in this problem when we saw how often small documentation slip-ups can snowball into huge financial, legal, and even life-threatening outcomes. Sometimes it’s just a mistyped medication time or a missing discharge note - basic stuff - but when you’re dealing with claims and regulatory rules, a minor error can trigger an automatic denial. A copy-paste mistake on a discharge note will eventually be uncovered by the insurance provider and cost you a stressful appeal. By the time an overworked clinical or compliance team discovers it, it’s usually too late to just fix it. Our own experiences hit close to home: Dmitry’s family member faced grave consequences from a misread lab result, and Sergey comes from a family full of medical professionals who have battled these issues up close.
Here’s our demo if you’d like to take a look - https://www.loom.com/share/add16021bb29432eba7f3254dd5e9a75
Our solution is a set of AI agents that plug directly into a clinic or hospital EHR/EMR system. As clinicians go about their daily routines, WorkDone continuously monitors the records. If it spots something that looks off - like a missing signature or a suspicious timestamp - it asks the responsible staff member to double-check and correct it on the spot. We want to prevent errors from becoming big headaches and wasted hours down the road. Technically, this involves running a secure event listener on top of EHR APIs and applying a group of coordinated AI agents that have been loaded with clinical protocols and payor rules and fine-tuned on historical claim denials and regulatory guidelines. The moment the model flags a potential error, an agent nudges the user to clarify or confirm. If it’s a genuine mistake, we request correction approval from the provider, fix it right away, and store an audit trail for compliance. We are also extending the approach to catching conflicting medications or prescribed treatments.
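To make the flow a bit more concrete, here’s a rough Python sketch of the listen-check-nudge loop. It’s illustrative only: fetch_recent_events, notify_staff, and the hard-coded rule checks are placeholder names standing in for our agent stack and whatever event API a given EHR exposes, not real SDK calls.

    # Illustrative sketch only: `ehr_client.fetch_recent_events` and the simple
    # rule checks are placeholders, not a real EHR SDK or our actual agents.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Finding:
        record_id: str
        issue: str          # e.g. "missing signature"
        responsible: str    # staff member to nudge

    def check_record(record: dict) -> list[Finding]:
        findings = []
        # In the real system, coordinated agents loaded with clinical protocols
        # and payor rules do this step; hard-coded rules stand in here.
        if not record.get("clinician_signature"):
            findings.append(Finding(record["id"], "missing signature", record["author"]))
        charted = datetime.fromisoformat(record["discharge_time"])  # assumes naive UTC timestamps
        if charted > datetime.utcnow() + timedelta(hours=1):
            findings.append(Finding(record["id"], "timestamp in the future", record["author"]))
        return findings

    def notify_staff(finding: Finding) -> None:
        # Stand-in for the nudge (in our case an email or a dashboard item).
        print(f"Please double-check {finding.record_id}: {finding.issue} ({finding.responsible})")

    def run_listener(ehr_client, audit_log: list) -> None:
        # Poll (or subscribe to) EHR events as clinicians update charts.
        for event in ehr_client.fetch_recent_events():
            for finding in check_record(event.record):
                notify_staff(finding)  # ask the responsible person to confirm or correct
                audit_log.append({"finding": finding, "at": datetime.utcnow().isoformat()})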
What’s different about our approach compared to existing AI tools for hospital revenue management is the focus on near-real-time intervention. Most tools detect errors after the claim has already been submitted, so compliance teams end up firefighting. We think the best place to fix something is in the flow of work itself. One common question about the use of AI in the medical/health field is: what if the AI hallucinates or gets something wrong? In our case, since the tool is flagging possible errors and its primary effect is to trigger extra human review, there’s no impact on anything health-critical like treatments. Rather, the risk is that too many false positives could waste staff members’ valuable time. For pilots, we are starting with a read-only mode in which we use the API only to retrieve data, and we can see that the QA we built into the agent orchestration layer does a pretty good job of spotting common documentation mistakes even in lengthy charts (for instance, a multi-day hospital stay).
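Roughly, the pilot handling looks like the sketch below: a second, independent check has to agree with the initial flag before anyone is pinged, and nothing is ever written back to the EHR in read-only mode (and outside it, only after provider approval). The names second_opinion, provider_approves, and apply_correction are hypothetical, just to show the shape of the logic.

    READ_ONLY = True  # pilot setting: we only GET data from the EHR API

    def handle_finding(finding, record, second_opinion, provider_approves,
                       notify_staff, ehr_client, audit_log):
        # QA pass: an independent check must agree before we interrupt anyone,
        # which is how we try to keep false positives (and wasted time) down.
        if not second_opinion(record, finding.issue):
            return
        notify_staff(finding)
        # Corrections are written back only outside read-only pilots,
        # and only after the provider approves the fix.
        if not READ_ONLY and provider_approves(finding):
            ehr_client.apply_correction(finding.record_id, finding.issue)
        audit_log.append({"finding": finding, "read_only": READ_ONLY})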
We’re in the early stages of refining our system, and we’d love feedback from the community. If you have ideas on integrating with EHRs, experiences with compliance tools, or just general insights about working in healthcare environments, we’re all ears. We’re also on the lookout for early users - particularly rehabs, small clinics and hospitals - willing to give our AI a try and tell us where it needs improvement.
Thanks for reading, and let us know what you think!
candiddevmike•7h ago
digitaltzar•6h ago
To start, we integrate with Kipu and Athena; it just happened that our first clients are rehabs and clinics that use those two.
Good point on the desire to stay in the EHR for the review workflow - that's our vision, and it could be achieved with widgets in specific EHRs, but that's down the road. Once a mistake is identified, we notify clinical professionals via standard channels like email and also keep a dashboard with the list of 'topics' inside our portal.
dang•6h ago
Oops - it's my job to catch that as the editor. I've added it above now. Thanks for the heads-up!