Here’s a video: https://www.youtube.com/watch?v=u3Pi1iih1_Y.
Before this, I (Roya) worked in human-machine interaction at Huawei, and before that as a construction site inspector for civil engineering firms. I have a PhD in Civil Engineering, and in my experience, reporting was by far the most tedious and mind-numbing part of the job.
You’d walk around a site all day taking short notes (or, often, just relying on memory) and snapping photos, then go to three more sites before finally making it back to the office and trying to remember everything you wanted to write. Sometimes you’d fill in gaps from memory, or you’d keep it purposefully vague. Reports had to be consistent, branded, and checked by senior engineers. It was a huge time sink across the team.
Writing reports was the worst part of the job, so we built Opusense to get rid of it. On-site, users type or dictate short notes (e.g. “rebar exposed east end of slab”), and the tool turns them into full sentences, paragraphs, tables, or photo captions in a report template that matches the firm’s format. You can work offline, and it syncs automatically when back online.
Most inspection and reporting tools are built for checklist-style workflows (which is great for home inspections or punch lists), but civil, structural, environmental, or geotechnical engineers usually need freeform notes, not radio buttons.
This is a particularly good fit for LLMs because engineering field reports live in a constrained, conventional domain: similar language, repeated structures, and highly standardized content across firms and projects. There’s a lot of redundancy and grunt work: summarizing the same site conditions, formatting repetitive data, and translating field notes into polished paragraphs, all of which LLMs handle well with the right prompting and guardrails. We’re not generating arbitrary prose; we’re transforming structured inputs (notes, images, forms) into structured outputs, with firm-defined templates and required fields that minimize the risk of hallucination. When facts matter (e.g. test results or measurements), we keep them grounded in the user’s input; the model doesn’t invent data because there’s nothing for it to invent. This makes it one of those cases where LLMs aren’t just a novelty; they’re genuinely the best tool for the job.
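For anyone curious what that pattern looks like in code, here’s a rough Python sketch (illustrative only; the names, template, and checks are hypothetical, not our actual implementation): a firm-defined template constrains what the model is asked to produce, and a simple grounding check flags any number in the draft that never appeared in the field notes.

    import re

    # Hypothetical sketch of the pattern described above, not Opusense's code:
    # field notes go in, a firm-defined template constrains the output, and a
    # grounding check flags values the model produced that aren't in the notes.

    FIRM_TEMPLATE = """You are drafting a site observation report for {firm}.
    Use only the facts in the field notes. Required sections, in order:
    1. Site Conditions
    2. Observations
    3. Recommended Actions
    Field notes:
    {notes}
    """

    def build_prompt(firm: str, notes: list[str]) -> str:
        """Fill the firm's template with raw field notes."""
        return FIRM_TEMPLATE.format(firm=firm, notes="\n".join(f"- {n}" for n in notes))

    def ungrounded_numbers(draft: str, notes: list[str]) -> list[str]:
        """Return numeric values in the draft that never appear in the notes,
        so a reviewer can catch invented measurements before the report ships."""
        source = " ".join(notes)
        note_numbers = set(re.findall(r"\d+(?:\.\d+)?", source))
        draft_numbers = re.findall(r"\d+(?:\.\d+)?", draft)
        return [n for n in draft_numbers if n not in note_numbers]

    if __name__ == "__main__":
        notes = ["rebar exposed east end of slab", "crack width approx 3 mm at grid B2"]
        print(build_prompt("Example Engineering Ltd.", notes))
        # Pretend this draft came back from the model:
        draft = "A 3 mm crack was observed at grid B2; spalling extends 450 mm."
        print("Unsupported values:", ungrounded_numbers(draft, notes))  # ['450']

The real templates and checks are firm-specific, but the shape is the same: structured notes in, structured report sections out, with anything unsupported surfaced for human review.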
Under the hood, we use a combination of prompt-engineered LLMs and firm-specific formatting rules to get outputs that don’t just sound good, but also look right. We’ve recently added translation features, and we’re iterating quickly based on field feedback. We charge per seat, are deployed at mid-size firms, and are trialing with some multinational engineering firms that have thousands of reports to file each week. We’re also starting to see interest from construction managers and developers who do their own internal QA reporting.
We don't have a self-serve way to try out the product yet, because our business model requires templates to be customized per company. But there’s a demo at https://www.youtube.com/watch?v=u3Pi1iih1_Y, and if you want to poke around the UI yourself, here’s a sample account to log in with:
login: hndemo@opusense.com
password: OpusenseHacker2025
The app is available for download on the Apple App Store and Google Play. Once sample reports are generated, you can also view them online through the web interface on our website (www.opusense.com) with the same login credentials.

We’d love to hear how others are thinking about tools for field work, reporting, or similar workflows (engineering, architectural, etc.). If you’ve built in this space, or have thoughts on how to improve it, we’re all ears!