My goal was to make the process as frictionless as possible so you don't expend cognitive effort thinking about the tool. To that end, there are no hotkeys or buttons to summon the chat widget: the extension simply detects natural language as you type and populates the widget once a threshold is reached.
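To make that concrete, here's a minimal sketch of how such detection could work in a content script. Everything here is an assumption on my part: the `MIN_WORDS` threshold, the `looksLikeNaturalLanguage` heuristic, and the `showChatWidget` entry point are all hypothetical names, not the extension's actual implementation.

```ts
// Assumed threshold before the widget appears (illustrative value).
const MIN_WORDS = 4;

// Crude natural-language heuristic: enough words, mostly alphabetic,
// and nothing that looks like a URL or code.
function looksLikeNaturalLanguage(text: string): boolean {
  const words = text.trim().split(/\s+/);
  if (words.length < MIN_WORDS) return false;
  if (/https?:\/\/|[{};<>]/.test(text)) return false; // skip URLs/code
  const alphabetic = words.filter((w) => /^[a-zA-Z'.,!?-]+$/.test(w));
  return alphabetic.length / words.length > 0.8;
}

// Placeholder: the real extension would mount its chat UI here.
function showChatWidget(seedText: string): void {
  console.log("widget triggered with:", seedText);
}

// Watch keystrokes in editable fields; 'input' events bubble to document.
document.addEventListener("input", (event) => {
  const target = event.target as HTMLElement;
  const text =
    target instanceof HTMLInputElement || target instanceof HTMLTextAreaElement
      ? target.value
      : target.isContentEditable
        ? target.innerText
        : "";
  if (looksLikeNaturalLanguage(text)) {
    showChatWidget(text);
  }
});
```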
The LLM gets the text content in your current viewport as context.
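One plausible way to gather that context is to walk the page's text nodes and keep only those whose bounding box intersects the viewport. The sketch below is my own guess at this, not the extension's code; the function name and the `maxChars` cap are hypothetical.

```ts
// Collect text visible in the current viewport, capped at maxChars,
// by testing each text node's bounding rect against the window bounds.
function visibleViewportText(maxChars = 4000): string {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  const chunks: string[] = [];
  let node: Node | null;
  while ((node = walker.nextNode())) {
    const text = node.textContent?.trim();
    if (!text) continue;
    // Measure the node via a Range, since text nodes have no rect of their own.
    const range = document.createRange();
    range.selectNodeContents(node);
    const rect = range.getBoundingClientRect();
    const inViewport =
      rect.bottom > 0 &&
      rect.top < window.innerHeight &&
      rect.right > 0 &&
      rect.left < window.innerWidth;
    if (inViewport) chunks.push(text);
    if (chunks.join(" ").length >= maxChars) break;
  }
  return chunks.join(" ").slice(0, maxChars);
}
```

Scoping context to the viewport rather than the whole page keeps the prompt small and tends to match what the user is actually looking at, at the cost of missing off-screen content.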