Why: I'm a grad student. While experimenting with Google’s Nano Banana, I noticed it could help with learning, not just produce flashy demos.
What’s different
- Teaching-first output: the goal isn’t just “pretty images” — it’s “learnable visuals.” The structure is designed to be teachable and scannable.
- Nano Banana for learning: popular and cool, yes — but here it’s explored for pedagogy. Early results are promising; I’d love your feedback.
How it works
- Text first, then visuals: generate a structured explanation (definition, key points, a short example, related concepts), then use it to produce a clear, education-oriented visual (a rough sketch of the text step follows after this list).
- Models: I route text to GPT‑5 and visuals through Nano Banana.
- Stack: Deployed on Vercel with Next.js (my first time — genuinely smooth).
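A rough sketch of the text step, assuming the OpenAI Node SDK is called server-side; the Explanation fields mirror the structure listed above, and the prompt and exact model ID are illustrative rather than the production code:

```typescript
import OpenAI from "openai";

// Shape of the structured explanation described above.
interface Explanation {
  definition: string;
  keyPoints: string[];
  shortExample: string;
  relatedConcepts: string[];
}

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function explainConcept(concept: string): Promise<Explanation> {
  const completion = await openai.chat.completions.create({
    model: "gpt-5", // model named above; exact ID is an assumption
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You are a tutor. Reply with JSON containing: definition, keyPoints, shortExample, relatedConcepts.",
      },
      { role: "user", content: `Explain this concept for a student: ${concept}` },
    ],
  });

  // The model replies with a JSON string; parse it into the typed structure.
  return JSON.parse(completion.choices[0].message.content ?? "{}") as Explanation;
}
```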
Would love blunt feedback:
- Does this actually help you learn/remember?
- Which export formats matter (Image / PDF / Video)?
- Which subjects should I prioritize?
Link: https://knowviz.app/en
renedloh•1h ago
I’m a math grad student, not a professional developer. I was playing around with Google's newer models ("Nano Banana") and was struck by how their outputs could be structured for learning, not just for creating cool-looking demos. That was the "aha!" moment.
The developer experience was a huge leap for me. In the past, I've cobbled together projects with React + Python/FastAPI on a self-managed server, but this was my first time using Next.js and Vercel. The difference was night and day—it felt incredibly smooth and let me focus on the product itself.
A key insight during the build was that generation had to be serial: text first (I use GPT-5 to produce the structured concept explanation), then that text passed in as rich context for the visual model. It made the infographics much more relevant.
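A minimal sketch of that hand-off, assuming the @google/genai SDK and the preview ID for the Nano Banana image model; the prompt template, model ID, and response handling are assumptions for illustration, not my actual code:

```typescript
import { GoogleGenAI } from "@google/genai";
import { writeFile } from "node:fs/promises";

// Matches the Explanation shape from the earlier sketch.
interface Explanation {
  definition: string;
  keyPoints: string[];
  shortExample: string;
  relatedConcepts: string[];
}

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function renderInfographic(concept: string, ex: Explanation): Promise<void> {
  // Fold the structured text into the image prompt so the visual stays on-topic.
  const prompt = [
    `Create a clean, education-oriented infographic about "${concept}".`,
    `Definition: ${ex.definition}`,
    `Key points: ${ex.keyPoints.join("; ")}`,
    `Example: ${ex.shortExample}`,
    `Related concepts: ${ex.relatedConcepts.join(", ")}`,
    "Use a simple layout with clear labels, suitable for studying.",
  ].join("\n");

  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-image-preview", // "Nano Banana"; exact ID is an assumption
    contents: prompt,
  });

  // The image comes back as base64 inline data among the response parts.
  for (const part of response.candidates?.[0]?.content?.parts ?? []) {
    if (part.inlineData?.data) {
      await writeFile("infographic.png", Buffer.from(part.inlineData.data, "base64"));
    }
  }
}
```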
I'm genuinely here to learn from this community. Any and all feedback—brutal or otherwise—is welcome.