
Show HN: I trained a 9M speech model to fix my Mandarin tones

https://simedw.com/2026/01/31/ear-pronunication-via-ctc/
324•simedw•14h ago•105 comments

Show HN: How We Run 60 Hugging Face Models on 2 GPUs

2•pveldandi•33m ago•4 comments

Show HN: Phage Explorer

https://phage-explorer.org/
94•eigenvalue•9h ago•21 comments

Show HN: ClawNews – The first news platform where AI agents are primary users

https://clawnews.io/
2•jiayaoqijia•1h ago•0 comments

Show HN: SF Microclimates

https://github.com/solo-founders/sf-microclimates
43•weisser•5d ago•33 comments

Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents

https://github.com/amlalabs/amla-sandbox
138•souvik1997•1d ago•73 comments

Show HN: Pinecone Explorer – Desktop GUI for the Pinecone vector database

https://www.pinecone-explorer.com
26•arsentjev•3d ago•3 comments

Show HN: Kolibri, a DIY music club in Sweden

https://kolibrinkpg.com/
132•EastLondonCoder•1d ago•23 comments

Show HN: Blink – Native macOS code snippet manager. Local, offline, <1s search

https://www.enclyralabs.com/
2•enclyra•5h ago•2 comments

Show HN: Cicada – A scripting language that integrates with C

https://github.com/heltilda/cicada
55•briancr•1d ago•34 comments

Show HN: I built an AI conversation partner to practice speaking languages

https://apps.apple.com/us/app/talkbits-speak-naturally/id6756824177
64•omarisbuilding•17h ago•57 comments

Show HN: Interactive Equation Solver

2•dharmatech•7h ago•0 comments

Show HN: Kling VIDEO 3.0 released: 15-second AI video generation model

https://kling3.net
3•dallen97•3h ago•4 comments

Show HN: Mystral Native – Run JavaScript games natively with WebGPU (no browser)

https://github.com/mystralengine/mystralnative
46•Flux159•3d ago•16 comments

Show HN: Foundry – Turns your repeated workflows into one-click commands

https://github.com/lekt9/openclaw-foundry
11•getfoundry•14h ago•3 comments

Show HN: ShapedQL – A SQL engine for multi-stage ranking and RAG

https://playground.shaped.ai
79•tullie•4d ago•23 comments

Show HN: Ourguide – OS wide task guidance system that shows you where to click

https://ourguide.ai
52•eshaangulati•4d ago•22 comments

Show HN: I'm building an AI-proof writing tool. How would you defeat it?

https://auth-auth.vercel.app/
21•callmeed•2d ago•30 comments

Show HN: OpenVideo – A self-hostable, open-source video editor in the browser

https://github.com/openvideodev/openvideo
2•snapmotion•13h ago•6 comments

Show HN: LemonSlice – Upgrade your voice agents to real-time video

129•lcolucci•3d ago•130 comments

Show HN: The HN Arcade

https://andrewgy8.github.io/hnarcade/
346•yuppiepuppie•3d ago•116 comments

Show HN: Hosted OpenClaw with Secure Isolation

https://moltcloud.ai/blog/hosted-openclaw/
2•stubbi•14h ago•0 comments

Show HN: SHDL – A minimal hardware description language built from logic gates

https://github.com/rafa-rrayes/SHDL
47•rafa_rrayes•3d ago•21 comments

Show HN: Build Web Automations via Demonstration

https://www.notte.cc/launch-week-i/demonstrate-mode
32•ogandreakiro•4d ago•20 comments

Show HN: One Human + One Agent = One Browser From Scratch in 20K LOC

https://emsh.cat/one-human-one-agent-one-browser/
316•embedding-shape•4d ago•151 comments

Show HN: A MitM proxy to see what your LLM tools are sending

https://github.com/jmuncor/sherlock
217•jmuncor•2d ago•119 comments

Show HN: Git primitives for autonomous coding agents

https://github.com/raine/git-surgeon
2•rane•16h ago•0 comments

Show HN: Daily Cat

https://daily.cat/
3•abraham•17h ago•0 comments

Show HN: We Built the First EU-Sovereignty Audit for Websites

https://lightwaves.io/en/eu-audit/
104•cmkr•4d ago•89 comments

Show HN: I built a small browser engine from scratch in C++

https://github.com/beginner-jhj/mini_browser
144•crediblejhj•3d ago•45 comments

Show HN: Kling VIDEO 3.0 released: 15-second AI video generation model

https://kling3.net
3•dallen97•3h ago
Kling just announced VIDEO 3.0 - a significant upgrade from their 2.6 and O1 models.

Key improvements:

*Extended duration:*
• Up to 15 seconds of continuous video (vs. previous 5-10 seconds)
• Flexible duration ranging from 3 to 15 seconds
• Better for complex action sequences and scene development

*Unified multimodal approach:*
• Integrates text-to-video, image-to-video, and reference-to-video
• Video modification and transformation in one model
• Native audio generation (synchronized with video)

*Two variants:*
• VIDEO 3.0 (upgraded from 2.6)
• VIDEO 3.0 Omni (upgraded from O1)

*Enhanced capabilities:*
• Improved subject consistency with reference-based generation
• Better prompt adherence and output stability
• More flexibility in storyboarding and shot control

This positions Kling competitively against:
- Runway Gen-4.5 ($95/month)
- Sora 2 (limited access)
- Veo 3.1 (Google)
- Grok Imagine (just topped rankings)

The 15-second duration is particularly interesting: it enables more narrative storytelling than the typical 5-second clips. Combined with native audio, this could change workflows for content creators.

Pricing isn't mentioned in the announcement. Previous Kling models ranged from $10-40/month, significantly cheaper than Runway.

Anyone have access to test this yet? Curious how the quality compares to Runway and Sora at this new duration.

Comments

sylware•2h ago
If they want to resist the thousands of billions of $ from their competition, they'd better have an open-weights policy like DeepSeek.
BoredPositron•2h ago
Stop churning out API wrappers.
pillbitsHQ•2h ago
The 15-second duration is huge for anyone doing short-form content. The main question I have is temporal consistency - in longer clips, do characters and objects maintain their appearance throughout? That's been the Achilles heel of most video models. You get amazing individual frames but things subtly morph or drift over time. Has anyone stress-tested this with complex scenes?
pillbitsHQ•2h ago
The 15-second duration is a game changer for narrative content. Most AI video tools force you to think in 5-second chunks, which makes storyboarding feel disjointed.

What I'm most curious about is the native audio generation - is it just ambient sound/music, or can it generate synchronized speech? If it's the latter with reasonable lip-sync, that could eliminate a lot of post-production work for explainer videos and short-form content.

Also wondering about the API availability. Having this accessible programmatically would open interesting possibilities for automated content pipelines.
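The automated-pipeline idea above can be sketched without knowing Kling's actual API, which hasn't been published for VIDEO 3.0. Everything below is hypothetical: the payload fields (`prompt`, `duration`, `native_audio`), the job states, and the polling shape are assumptions, not a real client. Only the 3-15 second duration range comes from the announcement.

```python
import time

# Hypothetical sketch of an automated content pipeline around an async
# text-to-video API. The payload fields and job states are invented;
# only the 3-15 s duration range is taken from the announcement.

def build_job(prompt, duration_s=15, audio=True):
    """Build a request payload for a hypothetical text-to-video endpoint."""
    if not 3 <= duration_s <= 15:  # 3-15 s range per the announcement
        raise ValueError("duration must be between 3 and 15 seconds")
    return {"prompt": prompt, "duration": duration_s, "native_audio": audio}

def wait_for_video(poll, job_id, interval_s=5, timeout_s=600):
    """Poll a status function until the job finishes, fails, or times out.

    `poll` is injected (e.g. a wrapper around an HTTP GET) so the loop
    logic stays independent of any particular vendor client.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = poll(job_id)
        if status["state"] == "done":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(interval_s)
    raise TimeoutError(f"job {job_id} did not finish in {timeout_s}s")
```

Injecting `poll` as a plain function keeps the retry/timeout logic testable and swappable: the same loop would work whether the eventual API is Kling's, Runway's, or anything else with an async job model.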