I’m a solo founder working on SentienceAPI, a perception & execution layer that helps LLM agents act reliably on real websites.
LLMs are good at planning steps, but they fail a lot when actually interacting with the web. Vision-only agents are expensive and unstable, and DOM-based automation breaks easily on modern pages with overlays, dynamic layouts, and lots of noise.
My approach is semantic geometry-based visual grounding.
Instead of giving the model raw HTML (huge context) or a screenshot (imprecise) and asking it to guess, the API first reduces a webpage into a small, grounded action space made only of elements that are actually visible and interactable. Each element includes geometry plus lightweight visual cues, so the model can decide what to do without guessing.
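To make that concrete, here is a minimal sketch of that reduction step as I describe it above. The `GroundedElement` fields mirror the example output further down in this post, but the helper itself and its filtering rules are my own illustration, not the actual API.

```
from dataclasses import dataclass

# Hypothetical shape of one grounded element (id, role, text, bbox, visual cues),
# matching the example JSON later in the post.
@dataclass
class GroundedElement:
    id: int
    role: str
    text: str
    bbox: dict         # {"x": ..., "y": ..., "w": ..., "h": ...}
    visual_cues: dict  # {"cursor": ..., "is_primary": ..., ...}

def reduce_action_space(raw_elements, viewport_w, viewport_h):
    """Keep only elements that are visible in the viewport and interactable.
    This is the 'small, grounded action space' idea in code form (illustrative only)."""
    reduced = []
    for el in raw_elements:
        box = el["bbox"]
        on_screen = (
            box["x"] + box["w"] > 0 and box["x"] < viewport_w
            and box["y"] + box["h"] > 0 and box["y"] < viewport_h
        )
        interactable = el.get("role") in {"button", "link", "input"}
        if on_screen and interactable:
            reduced.append(GroundedElement(
                id=el["id"],
                role=el["role"],
                text=el.get("text", ""),
                bbox=box,
                visual_cues=el.get("visual_cues", {}),
            ))
    return reduced
```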
I built a reference app called MotionDocs on top of this. The demo below shows the system navigating Amazon Best Sellers, opening a product, and clicking “Add to cart” using grounded coordinates (no scripted clicks).
Demo video (Add to Cart): [https://youtu.be/1DlIeHvhOg4](https://youtu.be/1DlIeHvhOg4)
How the agent sees the page (map mode wireframe): [https://sentience-screenshots.sfo3.cdn.digitaloceanspaces.co...](https://sentience-screenshots.sfo3.cdn.digitaloceanspaces.co...)
This wireframe shows the reduced action space surfaced to the LLM. Each box corresponds to a visible, interactable element.
Code excerpt (simplified):
```
from sentienceapi_sdk import SentienceApiClient
from motiondocs import generate_video

video = generate_video(
    url="https://www.amazon.com/gp/bestsellers/",
    instructions="Open a product and add it to cart",
    sentience_client=SentienceApiClient(api_key="your-api-key-here"),
)

video.save("demo.mp4")
```
How it works (high level):
The execution layer treats the browser as a black box and exposes three modes:
* Map: identify interactable elements with geometry and visual cues
* Visual: align geometry with screenshots for grounding
* Read: extract clean, LLM-ready text
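As a rough sketch, here is how an agent loop might call those three modes. The method names (`map`, `visual`, `read`) and return shapes are my guesses for illustration, not the SDK's actual surface; only `SentienceApiClient` comes from the excerpt above.

```
from sentienceapi_sdk import SentienceApiClient  # real class from the code excerpt above

client = SentienceApiClient(api_key="your-api-key-here")
url = "https://www.amazon.com/gp/bestsellers/"

# Hypothetical calls: names and return values are assumptions for illustration.
elements = client.map(url)      # visible, interactable elements with geometry + visual cues
screenshot = client.visual(url) # screenshot aligned to that geometry, for grounding
page_text = client.read(url)    # clean, LLM-ready text

# The planning LLM only ever sees `elements` (and optionally `page_text`), never raw HTML.
```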
The key insight is encoding visual cues, especially a simple is_primary signal. Humans don't read every pixel; we scan for visual hierarchy. Encoding that hierarchy directly lets the agent prioritize the right actions without processing raw pixels or a noisy DOM.
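For example, a planner can rank candidates by is_primary (plus a keyword match against the instruction) before asking the model to choose. This is my own sketch of that prioritization, not code from the API:

```
def rank_candidates(elements, instruction_keywords):
    """Order grounded elements so keyword-matching, visually primary actions
    come first, mimicking how a human scans for visual hierarchy."""
    def score(el):
        cues = el.get("visual_cues", {})
        primary = 1 if cues.get("is_primary") else 0
        keyword_hit = 1 if any(k.lower() in el.get("text", "").lower()
                               for k in instruction_keywords) else 0
        return (keyword_hit, primary)
    return sorted(elements, key=score, reverse=True)

# With the example elements shown further down, "Add to Cart" (is_primary: true)
# outranks the "Privacy Policy" footer link for the instruction "add it to cart".
ranked = rank_candidates(
    [{"id": 42, "text": "Add to Cart", "visual_cues": {"is_primary": True}},
     {"id": 43, "text": "Privacy Policy", "visual_cues": {"is_primary": False}}],
    instruction_keywords=["add", "cart"],
)
assert ranked[0]["id"] == 42
```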
Why this matters:
* smaller action space → fewer hallucinations
* deterministic geometry → reproducible execution
* cheaper than vision-only approaches
TL;DR: I’m building a semantic geometry grounding layer that turns web pages into a compact, visually grounded action space for LLM agents. It gives the model a cheat sheet instead of asking it to solve a vision puzzle.
This is early work, not launched yet. I’d love feedback or skepticism, especially from people building agents, RPA, QA automation, or dev tools.
— Tony W
Here is an example of the reduced action space the model actually sees (simplified):
```
[ { "id": 42, "role": "button", "text": "Add to Cart", "bbox": { "x": 935, "y": 529, "w": 200, "h": 50 }, "visual_cues": { "cursor": "pointer", "is_primary": true, "color_name": "yellow" } }, { "id": 43, "role": "link", "text": "Privacy Policy", "bbox": { "x": 100, "y": 1200, "w": 80, "h": 20 }, "visual_cues": { "cursor": "pointer", "is_primary": false } } ]
```
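Once the model picks an element id from output like this, the executor doesn't need the model to emit pixel coordinates: the click point falls out of the element's bbox. A minimal sketch (the `click_point` helper is mine, not part of the API):

```
def click_point(element):
    """Return the center of the element's bounding box as (x, y) in page pixels."""
    box = element["bbox"]
    return (box["x"] + box["w"] / 2, box["y"] + box["h"] / 2)

add_to_cart = {"id": 42, "bbox": {"x": 935, "y": 529, "w": 200, "h": 50}}
print(click_point(add_to_cart))  # (1035.0, 554.0)
```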
This prototype builds on several open-source libraries:
* MoviePy – video composition and rendering
* Pillow (PIL) – image processing and overlays
The demo app (MotionDocs) uses the public SentienceAPI SDK, generated from OpenAPI, which is the same interface used by the system internally.