We hear consistent complaints from academics, researchers, and scientists that AI chatbots have poor PDF awareness and often generate incorrect citations. That's a real problem for researchers and students alike, who need answers grounded in evidence.
We took the best parts of Cursor, like workspace awareness, @ referencing, and human approval/oversight, and built a PDF browser with academic database search (arXiv and Semantic Scholar), plus Ubik agents that can highlight text down to the line level and take structured notes with our "Detailed Notes Tool".
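If you're curious what open-access search looks like under the hood, here's a minimal sketch using the public Semantic Scholar Graph API. This is illustrative only, not our actual implementation, and the query string and field list are just examples:

```python
# Minimal sketch: find open-access papers via the Semantic Scholar Graph API.
# Illustrative only; not Ubik's actual implementation.
import requests

def search_open_access(query: str, limit: int = 5) -> list[dict]:
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": query,
            "limit": limit,
            "fields": "title,year,openAccessPdf",  # only fetch what we need
        },
        timeout=10,
    )
    resp.raise_for_status()
    papers = resp.json().get("data", [])
    # Keep only results that actually expose an open-access PDF to ingest.
    return [p for p in papers if p.get("openAccessPdf")]

for paper in search_open_access("retrieval-augmented generation"):
    print(paper["title"], "->", paper["openAccessPdf"]["url"])
```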
Why is this better? With Ubik agents, you can either upload a PDF or search for an open-access paper and save it to your workspace, which ingests the document and turns it into an interactive AI doc. Then, with our Cursor-like @ referencing, you can prompt the agents with something like: "Explain what @example is about, highlight 10 key points using the notes tool, and summarize why each point is important."
Unlike any agents or models we know of, Ubik agents can highlight text down to the line level; we call each highlight a note. Every note can be referenced in chat with the @ symbol, or dragged and dropped into chat alongside any other file, canvas, found paper, etc.
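For the curious, a note boils down to a line-anchored span plus the text it covers. Here's a rough sketch of how such a reference could be modeled; the field names are hypothetical, not Ubik's actual schema:

```python
# Hypothetical sketch of a line-level note reference; field names are
# illustrative, not Ubik's actual schema.
from dataclasses import dataclass

@dataclass
class Note:
    doc_id: str      # the workspace document the note lives in
    page: int        # 1-based page number in the PDF
    line_start: int  # first highlighted line on that page
    line_end: int    # last highlighted line (inclusive)
    text: str        # the highlighted text itself
    label: str       # short handle used for @ referencing in chat

example = Note(
    doc_id="example-paper",
    page=3,
    line_start=12,
    line_end=15,
    text="(the highlighted sentence from the paper goes here)",
    label="key-result",
)
print(f"@{example.label} -> p.{example.page}, lines {example.line_start}-{example.line_end}")
```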
Cross-analyze, annotate, and generate with citations. Pick from 20+ models, and use @ symbol referencing to craft better prompts that minimize hallucination and increase efficacy.
Start researching: https://app.ubik.studio/chat
Full app for macOS and Windows coming soon, plus a custom eval suite that's almost ready for testing (we'll definitely share findings).