
Show HN: YT NoteTaker – Simple Manual Note Taking on YouTube Videos

https://kavinaidoo.github.io/ytnt/
2•kavinaidoo•1y ago
Hi HN,

YTNT is a simple web-app for manually typing notes while watching a YouTube video. Simultaneously type and control playback with keyboard shortcuts. Export to Word (.docx) when you're done.

I'm seeking general feedback.

NOTES:

- The only way to "save" (for now) is to export to a Word .docx

- Speech-to-text only works in Chrome

GitHub: https://github.com/kavinaidoo/ytnt

Blog Post: http://archive.today/FRqwp

Thanks, Kavi

Comments

Leftium•1y ago
I forked oTranscribe and added some features to solve a similar problem.

- demo: https://otranscribe.netlify.app/?vsl=definedefine

- source code: https://github.com/Leftium/oTranscribe

- CLI tool to generate OTR (oTranscribe) files from (YouTube) SBV/TTML files: https://github.com/Leftium/otrgen

In my case, I wanted to start with the (auto-generated) YouTube transcript and get clickable timestamps. This makes it much faster to search through the content of a video: I can read/search much faster than watching a video, even on 2X speed.

I could also add my own notes to the transcript.

If you add support for loading transcripts like this, it could work cross-browser without the microphone/speech-to-text.
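
The clickable-timestamp idea reduces to mapping a cue time in seconds to a YouTube URL with a `t=` parameter. A minimal sketch (a hypothetical helper, not code from oTranscribe or otrgen):

```python
def yt_timestamp_link(video_id: str, seconds: float) -> str:
    """Build a clickable markdown link that starts playback at `seconds`."""
    s = int(seconds)
    h, rem = divmod(s, 3600)
    m, sec = divmod(rem, 60)
    label = f"{h}:{m:02d}:{sec:02d}" if h else f"{m}:{sec:02d}"
    return f"[{label}](https://youtu.be/{video_id}?t={s})"
```

Prefixing each transcript line with such a link is what makes skimming a video as fast as skimming text: e.g. `yt_timestamp_link("dQw4w9WgXcQ", 83)` yields a `[1:23](...)` link that jumps straight to that moment.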

kavinaidoo•1y ago
Aha, starting with the YouTube transcript is a great idea. I wanted to start with it, but I couldn't find a way to get it from the YouTube iFrame API, so I went with the Web Speech API to listen and convert.

Forgive me for the confusion: the demo link implies that the transcript is loaded from the video, but I see in the code that a pre-existing "/txt/definedefine.md" is loaded. How are these SBV/TTML files downloaded from YouTube in the first place? I assume it's a separate process? I see you are then using otrgen to convert these so they can be used by oTranscribe.

If I could load the transcripts dynamically when loading the YouTube video that would be a great feature.

Leftium•1y ago
The demo simply demonstrates how I used my tool. It requires some manual setup:

- TTML files are downloaded via CLI: `yt-dlp.exe --skip-download --write-auto-sub --sub-format ttml`

- TTML files are converted to the OTR .MD format via my CLI tool

- The MD file can be dragged & dropped onto the web app.
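
The middle conversion step can be sketched roughly like this (a hypothetical Python version; otrgen's actual OTR output format will differ). yt-dlp's auto-subs come back as TTML, i.e. XML with `<p begin="HH:MM:SS.mmm">` cues:

```python
import xml.etree.ElementTree as ET

TTML_NS = "{http://www.w3.org/ns/ttml}"

def ttml_to_notes(ttml_text: str) -> list[str]:
    """Convert TTML caption cues into 'timestamp  text' note lines,
    roughly the shape a timestamped-notes import would want."""
    root = ET.fromstring(ttml_text)
    lines = []
    for p in root.iter(f"{TTML_NS}p"):
        begin = p.attrib.get("begin", "00:00:00.000")
        text = "".join(p.itertext()).strip()  # flatten nested spans
        if text:
            lines.append(f"{begin[:8]}  {text}")  # drop milliseconds
    return lines
```

The resulting lines can then be pasted (or dragged, in oTranscribe's case) into the note-taking app with the timing information preserved.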

---

I think it is possible to download SBV/TTML files, but the download must be done from the server due to CORS restrictions.

My app didn't go this far due to limitations of the (serverless) platform it is hosted on. Also, it was faster to just do the steps manually than to develop a server that does it.

I have seen many services that download the transcript. Here are a few:

- https://youtubetranscript.com

- https://kagi.com/summarizer

- https://www.tubepen.com

However, note YouTube may block your server if you download too many transcripts: https://kagifeedback.org/d/4451-universal-summarizer-cant-fi...

kavinaidoo•12mo ago
I think I'll just have to stick with my current method for now; it looks like the only way to get it done with static hosting.

Leftium•1y ago
I'm planning a beat-aware YouTube player. Unfortunately, it is not possible to access the audio stream data across the YouTube embed. (For beat-detection analysis.)

I considered using the microphone like this. It's nice to see it works. Although there seems to be a time limit on how long the microphone can record?

My plan was to download the video (yt-dlp), then make a CLI tool that analyzes and uploads the beat-detection data. An advantage of the CLI tool is that it can complete the analysis faster than playing the video at 1X.
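
A real analyzer would presumably use a proper beat tracker (librosa's `beat_track`, for instance), but the point about offline analysis being independent of playback speed can be illustrated with a crude energy-jump onset detector (a toy sketch, not Leftium's planned tool):

```python
def detect_onsets(signal, sr, frame=512, threshold=4.0):
    """Very rough onset detector: flag frames whose mean energy jumps
    to `threshold`x the previous frame's energy. `signal` is a list of
    samples, `sr` the sample rate; returns onset times in seconds."""
    n_frames = len(signal) // frame
    energy = [
        sum(x * x for x in signal[i * frame:(i + 1) * frame]) / frame
        for i in range(n_frames)
    ]
    onsets = []
    for i in range(1, n_frames):
        # 1e-12 guards against division-by-silence; 1e-6 rejects noise floor
        if energy[i] > threshold * (energy[i - 1] + 1e-12) and energy[i] > 1e-6:
            onsets.append(i * frame / sr)
    return onsets
```

Because this walks the decoded samples directly, it processes a video's audio as fast as the disk can feed it, with no need to play anything back in the browser.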

kavinaidoo•1y ago
Yeah, I also wanted to "pipe" the audio directly to the Web Speech API but had to resort to using the mic. Another feature I wanted to add was a keyboard shortcut that inserts a screenshot of the video into the notes. Handy for diagrams etc., but I hit some roadblocks there too and the workarounds were getting crazy.

Regarding the time limit: I access the audio entirely through the Web Speech API, and it "decides" (usually when there's a sufficient pause) when to finalize a recognition result. It was also firing a "recognition.onend" event after a certain amount of time (some old details here: https://stackoverflow.com/questions/38213580/chromes-webkits...), so I have a workaround: if the user did not turn off speech-to-text, it is immediately restarted. You'll see the console warning "userStopped == false - trying to restart" when this happens.

I assume you'll have no such issues because you'll be handling the "raw" audio data rather than working through this API. Also, I've noticed that when using the Web Speech API on Safari, it does not "hear" what the tab is playing; it only hears external audio. I'm not sure whether this will be an issue for you, but for me it means my app is Chrome-only (suboptimal). Forgive my naïveté in this arena; it's the first time I'm using any browser audio API.

Would your tool run on a server with a frontend for YT link ingestion, or will you just use it yourself from the CLI?

Leftium•1y ago
Ideally, my tool would run on the server, especially if I wanted to monetize the service. (However, I think there may be legal issues that are larger than the technical ones...)

However, it's just a small hobby project, so realistically I think this is how it will work:

- If a user tries to load a video that has no beat data yet, instructions are shown for how to add it. The instructions will be to run a CLI command like `npx upload-beat-data [YOUTUBE-URL]`

- The CLI tool will download the audio, then upload the beat data to my site.

- The site will also log which videos get a lot of requests but are missing beat data, so I can manually add the beat data myself.