VAM Seek renders a 2D thumbnail grid next to your video. Click any cell to jump. All frame extraction happens client-side via canvas – no server processing, no pre-generated thumbnails.
- 15KB, zero dependencies
- One-line integration
- Works with any `<video>` element
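For anyone curious how a grid like this can work without a server, here is a minimal sketch of the general technique (illustrative only — the function and parameter names are mine, not VAM Seek's actual API): compute evenly spaced timestamps for an R×C grid, then seek a hidden `<video>` to each timestamp and draw the frame onto the matching canvas cell.

```javascript
// Pure helper: timestamps (in seconds) for each grid cell,
// centered within that cell's slice of the video duration.
function gridTimestamps(duration, rows, cols) {
  const n = rows * cols;
  return Array.from({ length: n }, (_, i) => ((i + 0.5) / n) * duration);
}

// Browser-only part, shown as comments so the pure helper above also
// runs under Node. Each seek must complete ('seeked' event) before the
// frame can be drawn, so the extraction is sequential:
//
// async function extractFrames(video, ctx, rows, cols, cellW, cellH) {
//   const times = gridTimestamps(video.duration, rows, cols);
//   for (const [i, t] of times.entries()) {
//     video.currentTime = t; // trigger a seek
//     await new Promise(r =>
//       video.addEventListener('seeked', r, { once: true }));
//     ctx.drawImage(video,
//       (i % cols) * cellW, Math.floor(i / cols) * cellH, cellW, cellH);
//   }
// }
```

Clicking a cell then just maps the cell index back to its timestamp and sets `currentTime` on the main player.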
Live demo: https://haasiy.main.jp/vam_web/deploy/lolipop/index.html
Would love feedback!
littlestymaar•3w ago
However, the execution is meh. The UX is terrible (on mobile at least), and the code and documentation are an overly verbose mess. The entire project ought to fit in the size of the AI-generated README. Using AI for exploration and prototyping is fine, but you can't ship that slop, mate — you need to do the polishing yourself.
littlestymaar•3w ago
Then, improving the signal-to-noise ratio of your project actually helps with “shipping the next feature”, as LLMs themselves get lost in the noise they make.
Finally, if you want people to use your project, you need to show us that it's better than what they can make by themselves. And that's especially true now that AI reduces the cost of building new stuff. If you can't work with Claude to build something better than what Claude builds, your project isn't worth more than its token count.
haasiy•3w ago
My role was to architect the bridge between UI/UX design and the underlying video data processing. Handling frame extraction via Canvas, managing memory, and ensuring a seamless seek experience without any backend support requires a deep understanding of how these layers interact.
Simply connecting a backend to a UI might be common, but eliminating the backend entirely while maintaining the utility is a high-level engineering choice. AI was my hammer, but I was the one who designed the bridge. To say this is worth no more than its token count ignores the most difficult part: the intent and the structural simplification that makes it usable for others in a single line of code.
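To make the memory-management point concrete: each decoded thumbnail holds pixel data, so a long video can't keep every frame alive. One way such a bound might look (a hypothetical sketch, not VAM Seek's actual code — `ThumbCache` and its names are mine) is a small insertion-ordered cache that evicts the oldest entry once a cap is hit:

```javascript
// Illustrative bounded cache for decoded thumbnail bitmaps.
// A Map preserves insertion order, which makes eviction cheap.
class ThumbCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }
  set(cellIndex, bitmap) {
    if (this.map.has(cellIndex)) this.map.delete(cellIndex); // refresh order
    this.map.set(cellIndex, bitmap);
    if (this.map.size > this.maxEntries) {
      const oldest = this.map.keys().next().value;
      // In the browser you would also release GPU/pixel memory here,
      // e.g. calling .close() if the value is an ImageBitmap.
      this.map.delete(oldest);
    }
  }
  get(cellIndex) { return this.map.get(cellIndex); }
  get size() { return this.map.size; }
}
```

Evicted cells can simply be re-extracted on demand, since the source video is always available client-side.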
littlestymaar•3w ago
Ironic.