We're launching Loopdesk Beta v2 today after 6 months of working with 300+ creators. It's an AI-powered video editor that uses chat-based prompting and genre-specific workflows.
The Problem We're Solving
Video editing is time-consuming, especially for content creators who need to produce videos regularly. The traditional timeline-based workflow requires manually sorting and organizing clips, adding captions, generating highlights, and applying effects. For podcasters and educational content creators, explaining jargon or adding visual annotations adds another layer of complexity. We wanted to automate the repetitive parts while keeping creative control.
How We Got Here
We started this project because we were creating tutorial content ourselves and spending 4-5 hours editing a single 20-minute video. The bottleneck wasn't the creative decisions—it was the mechanical tasks: organizing footage, transcribing, captioning, finding key moments. We tried existing AI editors but found they either lacked control or couldn't understand context-specific needs (like podcast vs. tutorial editing patterns).
What We Built
Loopdesk uses an agentic chat interface for video operations. Instead of dragging elements on a timeline, you can prompt the AI to perform edits. The system analyzes your video content and suggests genre-specific workflows—it recognizes whether you're editing a podcast, tutorial, vlog, or product demo and adapts accordingly.
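To make the prompt-to-edit idea concrete, here is a minimal, hypothetical sketch of how a chat message might map to structured edit operations. The names (EditOp, parse_prompt) and the keyword routing are invented for illustration; the actual agent presumably uses an LLM to interpret intent rather than string matching.

```python
# Hypothetical sketch: turning a chat prompt into structured edit
# operations. EditOp and parse_prompt are illustrative names, not
# Loopdesk's real API; a production agent would use an LLM here.
from dataclasses import dataclass, field


@dataclass
class EditOp:
    action: str           # e.g. "caption", "trim_silence", "highlight"
    target: str           # clip or range the operation applies to
    params: dict = field(default_factory=dict)


def parse_prompt(prompt: str) -> list[EditOp]:
    """Naive keyword routing, standing in for real intent detection."""
    text = prompt.lower()
    ops = []
    if "caption" in text:
        ops.append(EditOp("caption", "all", {"style": "auto"}))
    if "silence" in text or "dead air" in text:
        ops.append(EditOp("trim_silence", "all", {"threshold_db": -40}))
    if "highlight" in text:
        ops.append(EditOp("highlight", "auto", {"max_clips": 5}))
    return ops


ops = parse_prompt("Add captions and cut the dead air")
# Two operations: a caption pass and a silence trim.
```

The point of the structured-operation layer is that every chat turn produces a reviewable edit list, which is also what makes session history and reverting to earlier versions tractable.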
Technical Details:
CoEditor Agent: Chat-based interface with quick action buttons. Maintains chat history so you can switch between editing sessions or revert to earlier versions
Genre Recognition: Analyzes video content (audio patterns, scene composition, cut frequency) to determine content type and suggest appropriate workflows. For example, podcast workflows prioritize jargon explainers and chapter markers, while tutorial workflows focus on screen annotations and step highlighting
GPU Rendering Pipeline: Built custom rendering engine optimized for cloud GPUs. We're using CUDA for parallel processing of effects and exports
AI Automation Layer: Generates captions using Whisper-based transcription, creates summarizations for show notes, identifies key highlight moments using sentiment analysis and engagement pattern detection
Template System: Community-driven templates that can be shared and remixed. Templates define workflow sequences, not just visual styling
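The genre-recognition step described above could be sketched roughly as follows. This is a toy heuristic over coarse features (speech ratio, cut frequency, screen-capture ratio); the real system uses trained models, and every threshold here is invented for the example.

```python
# Toy illustration of genre recognition from coarse video features.
# Thresholds are made up; Loopdesk's actual system is model-based.
def classify_genre(speech_ratio: float,
                   cuts_per_min: float,
                   screen_capture_ratio: float) -> str:
    if screen_capture_ratio > 0.6:
        return "tutorial"      # mostly screen recording
    if speech_ratio > 0.85 and cuts_per_min < 2:
        return "podcast"       # continuous talk, few cuts
    if cuts_per_min > 10:
        return "vlog"          # fast-cut handheld footage
    return "product_demo"


classify_genre(speech_ratio=0.9, cuts_per_min=1.0,
               screen_capture_ratio=0.1)   # -> "podcast"
```

Once a genre is assigned, the workflow suggestions follow from it: the "podcast" label would surface jargon explainers and chapter markers, while "tutorial" would surface screen annotations and step highlighting.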
What's Different
Most AI video editors are either fully automatic (no control) or just add AI features to traditional timeline editors. We're treating the entire editing process as a conversational workflow. The AI is a collaborator that understands intent rather than a tool that executes commands.
The genre-specific workflow system is novel—we trained models on different content types to recognize patterns. A podcast has different editing needs than a tutorial, and the AI adapts its suggestions accordingly.
Current State
Beta v2 is live today. You can:
Upload videos (drag and drop)
Use chat prompts for editing operations
Generate captions, transcripts, and show notes
Get AI-suggested highlight moments
Add VFX animations that are context-aware
Use and create community templates
Access stock video/audio/image library
Render with GPU acceleration
The product is free during beta. We plan to charge $29/month for unlimited rendering once we exit beta, with a free tier for up to 3 exports per month.
Try It Out
We've removed signup barriers for HN—just visit loopdesk.ai and you can start editing immediately. We'd love feedback on the chat interface and workflow suggestions. Are the AI recommendations helpful or intrusive? What genres are we missing?
Demo video: https://www.youtube.com/watch?v=_03SpwgI8Ek
Would love to hear your thoughts on the approach, technical architecture, or UX decisions. What would make this more useful for your workflow?