In general, I have three pain points with debugging realtime, multi-model, multi-modal AI stuff: 1. Where's the latency creeping in? 2. What context actually got passed to the models? 3. Did the model/processor get data in the format it expected?
For 1 and 3, Whisker is a big step forward. For 2, something like Langfuse (OpenTelemetry-based) is very helpful.
aconchillo•1d ago
With Whisker you can:
- View a live graph of your pipeline
- Watch frame processors flash in real time
- Select a processor to inspect its frames
- Filter frames by name
- Select a frame to trace its full path
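The "trace its full path" idea above can be sketched in plain Python. This is a hypothetical illustration of the concept, not Whisker's or Pipecat's actual API: each processor stamps frames it touches with its own name, so any frame's full route through the pipeline can be inspected afterwards.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A toy frame that remembers every processor it passed through."""
    name: str
    path: list[str] = field(default_factory=list)  # processors visited, in order

class FrameProcessor:
    """A toy processor that records itself on each frame's path."""
    def __init__(self, name: str):
        self.name = name

    def process(self, frame: Frame) -> Frame:
        frame.path.append(self.name)  # record this hop for later tracing
        return frame

# Hypothetical three-stage pipeline (names are illustrative only)
pipeline = [FrameProcessor("stt"), FrameProcessor("llm"), FrameProcessor("tts")]

frame = Frame("TranscriptionFrame")
for proc in pipeline:
    frame = proc.process(frame)

print(frame.path)  # → ['stt', 'llm', 'tts']
```

A graphical debugger does essentially this bookkeeping for you, plus the live view, rather than requiring instrumentation in every processor.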