When working in hallucination-sensitive contexts (where the cost of an error is very high), I found myself constantly jumping between AI outputs/analysis and the source docs to confirm with my own eyes that a source doc actually says what the AI claims it says. This is a simple little tool that tries to alleviate that pain, based on the theory that a screenshot of the source passage is much less likely to be hallucinated than quoted text.
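
To make the idea concrete, here is a minimal sketch of the core loop: given a quote the AI attributes to a document, find it in the source PDF and save a screenshot of that exact region so a human can verify the claim at a glance. This assumes PDF sources and PyMuPDF; the function name `screenshot_quote` and the overall flow are illustrative, not this tool's actual implementation.

```python
# Hypothetical sketch (not this tool's actual code): locate a quote the AI
# attributes to a PDF and save a cropped screenshot of that region, so a
# human verifies pixels from the source rather than trusting quoted text.
import fitz  # PyMuPDF


def screenshot_quote(pdf_path: str, quote: str, out_path: str) -> bool:
    """Search each page for `quote`; if found, render a cropped screenshot."""
    doc = fitz.open(pdf_path)
    for page in doc:
        hits = page.search_for(quote)  # list of fitz.Rect bounding boxes
        if hits:
            # Pad the first hit so some surrounding context is visible.
            clip = hits[0] + (-20, -40, 20, 40)
            clip.intersect(page.rect)  # stay within the page bounds
            pix = page.get_pixmap(clip=clip, dpi=200)
            pix.save(out_path)
            return True
    return False  # quote not found verbatim -- itself a red flag worth surfacing


if __name__ == "__main__":
    found = screenshot_quote("report.pdf", "revenue grew 40% year over year",
                             "evidence.png")
    print("evidence saved" if found else "quote NOT found in source")
```

A nice side effect of this approach: if the exact quote can't be located at all, that failure is itself a signal that the AI may have fabricated or paraphrased the passage.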