I’m incredibly excited to finally launch YTVidHub, a tool built to solve a massive time sink that many of you here shared: the agonizing manual process of downloading transcripts from large YouTube corpora for research and data analysis.
The Problem: If you need subtitles for 50, 100, or more videos, the current copy-paste-download-repeat workflow is slow and painful.
Our Solution (The Core Feature): YTVidHub is engineered for true bulk processing. You can paste dozens of YouTube URLs (or a Playlist/Channel link) into one clean interface, and the system extracts all available subtitles (including multilingual ASR) and packages them into a single, organized ZIP file for one-click download.
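To give a sense of what that replaces, here is a minimal sketch of the kind of bulk workflow YTVidHub automates, written against yt-dlp's Python API purely as an illustration (this is not a claim about our internal stack, and the URLs are placeholders):

```python
import zipfile
from pathlib import Path

import yt_dlp  # pip install yt-dlp

# Placeholder URLs; swap in your own video, playlist, or channel links.
URLS = [
    "https://www.youtube.com/watch?v=VIDEO_ID_1",
    "https://www.youtube.com/watch?v=VIDEO_ID_2",
]

out_dir = Path("subtitles")
out_dir.mkdir(exist_ok=True)

ydl_opts = {
    "skip_download": True,       # fetch captions only, no media
    "writesubtitles": True,      # uploader-provided subtitles
    "writeautomaticsub": True,   # auto-generated (ASR) subtitles
    "subtitleslangs": ["en"],    # add more codes for multilingual corpora
    "subtitlesformat": "vtt",
    "outtmpl": str(out_dir / "%(id)s.%(ext)s"),
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(URLS)

# Bundle everything into a single ZIP, the way a bulk tool hands it back.
with zipfile.ZipFile("subtitles.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for vtt in sorted(out_dir.glob("*.vtt")):
        zf.write(vtt, arcname=vtt.name)
```

Doing this by hand per video is exactly the copy-paste-download-repeat loop we want to eliminate.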
Architectural Insight: Our design prioritizes "research-ready" data, taking cues from the discussions here on HN. We specifically optimized the plain-text (TXT) output, stripping all timestamps and formatting, so it can be fed straight into RAG pipelines and LLM ingestion without extra cleanup.
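To make "research-ready" concrete, here is a simplified illustration of the kind of cleanup involved (an illustration of the idea, not our production pipeline): flattening a WEBVTT caption file into timestamp-free plain text you could chunk straight into a RAG index.

```python
import re
from pathlib import Path

def vtt_to_plain_text(vtt_path: Path) -> str:
    """Flatten a WEBVTT caption file into timestamp-free plain text."""
    kept: list[str] = []
    for raw in vtt_path.read_text(encoding="utf-8").splitlines():
        line = raw.strip()
        # Skip headers, blank lines, and cue timing lines like
        # "00:00:01.000 --> 00:00:03.000".
        if not line or "-->" in line or line.startswith(("WEBVTT", "Kind:", "Language:", "NOTE")):
            continue
        # Drop inline markup such as word-level timing tags (<c>, <00:00:01.234>).
        line = re.sub(r"<[^>]+>", "", line).strip()
        # ASR captions repeat text across overlapping cues; drop consecutive duplicates.
        if line and (not kept or line != kept[-1]):
            kept.append(line)
    return "\n".join(kept)

if __name__ == "__main__":
    # Hypothetical filename matching the download sketch above.
    print(vtt_to_plain_text(Path("subtitles/VIDEO_ID_1.en.vtt")))
```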
Business Model: YTVidHub is free for single downloads. Bulk operations receive 5 free daily credits to ensure fair use and manage our processing costs. Professional plans are available for high-volume data needs.
Future Focus: We know ASR accuracy is the next big hurdle. We're already working on a Pro AI Transcription tier that will offer high-accuracy, LLM-powered transcripts for niche content where auto-generated captions fall short, tackling the data quality problem directly.
Please put the bulk downloader through its paces. Any feedback on speed and the cleanliness of the TXT output is immensely valuable to our engineering roadmap!
Thanks for building with us.