I have an old project that relies on AWS transcription and I'd love to migrate it to something local. So I wrote a script [1] that:
- Converts to 16kHz WAV
- Transcribes using native ggerganov whisper
- Calls out to a local LLM to clean the text
- Prints out the final cleaned up transcription
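The first two steps above can be sketched roughly like this, assuming whisper.cpp's `whisper-cli` binary and a ggml model file (both names are placeholders for whatever build/model you have); the LLM cleanup step would then take the returned text as its prompt:

```python
import subprocess

def ffmpeg_wav_args(src, dst="out.wav"):
    # whisper.cpp expects 16 kHz mono 16-bit PCM WAV input
    return ["ffmpeg", "-y", "-i", src,
            "-ar", "16000", "-ac", "1", "-c:a", "pcm_s16le", dst]

def transcribe(src):
    wav = "out.wav"
    subprocess.run(ffmpeg_wav_args(src, wav), check=True)
    # whisper.cpp's CLI; "-otxt" writes the transcript next to the input file
    subprocess.run(["whisper-cli", "-m", "ggml-base.en.bin",
                    "-f", wav, "-otxt"], check=True)
    return open(wav + ".txt").read()
```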
I found that accuracy/success increased significantly when I added the LLM post-processor even with modestly sized 12-14b models.
I've been using it with great success to convert decade-old dictated memos despite a lot of background noise (wind, traffic, etc.).
[1] https://gist.github.com/scpedicini/455409fe7656d3cca8959c123...
I'm sure there are use cases where using Whisper directly is better, but it's a great addition to an already versatile tool.
For preprocessing, I convert files to 16 kHz WAV, which Whisper handles best, and add low-pass and high-pass filters to remove non-speech sounds. To avoid hallucinations, I run Silero VAD on the entire audio file to find the timestamps where someone is actually speaking. A side note on this: Silero requires careful tuning to keep audio segments from being chopped up and clipped. I also add a post-processing step that merges adjacent VAD chunks, so Whisper gets cohesive segments to work with.
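The chunk-merging step is plain bookkeeping. A minimal sketch, where `max_gap` and `pad` are hypothetical tunables you'd sweep alongside the Silero thresholds (Silero reports speech spans as start/end timestamps in seconds):

```python
def merge_chunks(spans, max_gap=0.5, pad=0.2):
    """Merge adjacent VAD spans (start, end) that are closer than max_gap
    seconds, then pad each merged span so words aren't clipped at the edges."""
    merged = []
    for start, end in sorted(spans):
        if merged and start - merged[-1][1] <= max_gap:
            merged[-1][1] = max(merged[-1][1], end)  # extend previous span
        else:
            merged.append([start, end])
    return [(max(0.0, s - pad), e + pad) for s, e in merged]
```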
For the Whisper task, I run Whisper in small audio chunks that correspond to the VAD timestamps. Otherwise, it will hallucinate during silences and regurgitate the passed-in prompt. If you're on a Mac, use the whisper-mlx models from Hugging Face to speed up transcription. I ran a performance benchmark, and it made a 22x difference to use a model designed for the Apple Neural Engine.
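One way to run Whisper per VAD span: whisper.cpp's CLI can seek into a file with a millisecond offset and duration, so each span gets its own invocation and the silences are never seen. A sketch (flag names from whisper.cpp's help output; binary and model names are placeholders):

```python
def chunk_cmds(wav, spans, model="ggml-base.en.bin"):
    """Build one whisper.cpp invocation per (start, end) VAD span,
    using --offset-t / --duration (both in milliseconds) to skip silence."""
    cmds = []
    for start, end in spans:
        cmds.append(["whisper-cli", "-m", model, "-f", wav,
                     "--offset-t", str(int(start * 1000)),
                     "--duration", str(int((end - start) * 1000))])
    return cmds
```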
For post-processing, I've found that running the generated SRT files through ChatGPT to identify and remove hallucinated chunks yields better results.
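The commenter uses ChatGPT for this; a cheap local heuristic that catches the most common failure mode (the same caption repeated over and over during a silence) is to cap consecutive duplicates before, or instead of, the LLM pass. This is my own heuristic, not the commenter's method, and `max_repeat` is a made-up tunable:

```python
import re

def drop_repeats(srt_text, max_repeat=2):
    """Whisper hallucinations often show up as the same caption repeated
    many times; drop any caption seen more than max_repeat times in a row."""
    blocks = re.split(r"\n\s*\n", srt_text.strip())
    kept, last, run = [], None, 0
    for b in blocks:
        lines = b.splitlines()
        text = " ".join(lines[2:])  # line 0 is the index, line 1 the timing
        run = run + 1 if text == last else 1
        last = text
        if run <= max_repeat:
            kept.append(b)
    return "\n\n".join(kept)
```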
I think the latest version of ffmpeg can run Whisper with VAD [1], but I still need to explore it with a simple PoC script.
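I haven't verified this, but going by the filter docs at [1], a PoC might look like the following. The option names are my reading of that page, not tested; it needs an ffmpeg 8.0+ build compiled with `--enable-whisper`, and the VAD model is a Silero model in GGML format:

```python
def ffmpeg_whisper_args(src, model, vad_model, out="transcript.srt"):
    """Build an ffmpeg command using the (ffmpeg 8.0+) whisper audio filter;
    all filter option names are assumptions taken from the ffmpeg-filters docs."""
    af = (f"whisper=model={model}:language=en:queue=3"
          f":vad_model={vad_model}:destination={out}:format=srt")
    # "-f null -" discards the audio output; we only want the side transcript
    return ["ffmpeg", "-i", src, "-af", af, "-f", "null", "-"]
```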
I'd love to know more about the post-processing prompt; my guess is that it looks like an improved version of the `semantic correction` prompt [2], but I may be wrong ¯\_(ツ)_/¯.
[1] https://ffmpeg.org/ffmpeg-filters.html#toc-whisper-1
[2] https://gist.github.com/eevmanu/0de2d449144e9cd40a563170b459...
I had an old Acer laptop hanging around, so I implemented this: https://github.com/Sanborn-Young/MP3ToTXT
I forget all the details of my tweaks, but I remember that I had better throughput on my version.
I know the OP talked about wanting it local, but thomasmol/whisper-diarization on Replicate is fast and cheap. Here's a hacked front end to parse the JSON: https://github.com/Sanborn-Young/MP3_2transcript
I've been trying to squeeze performance out of Whisper, but felt that (at least for non-native speakers) the base model does a good job. For preprocessing I do VAD and some normalization, but on my rusty ThinkPad the processing time is way too long. I'll try some of the aforementioned tips and see if the accuracy and performance can get any better. After that, I'm planning to use an SLM for text cleanup and post-processing of the transcription. I'm documenting my learnings in my notes [3].
[1] https://github.com/BharatKalluri/speechshift
[3] https://notes.bharatkalluri.com/speechshift-notes-during-dev...
Have you tried with languages other than English?
Not judging at all. In fact, the opposite. Thanks for sharing this, it's super valuable.
I think I'll learn from various sources here, and be implementing my own local-first transcription.
:thanks.gif:
What people are talking about here - avoiding hallucinations through VAD-based chunking, etc. - are all things I pioneered with Wisprnote, which has been on the App Store for 2 years. It hasn't been updated recently (backlog of other work) but still works just fine. Paid app, but good quality.
https://apps.apple.com/us/app/wisprnote/id1671480366?l=en-GB...
- https://huggingface.co/pyannote/speaker-diarization-3.1
- https://github.com/narcotic-sh/senko
I personally love senko since it runs in seconds, whereas pyannote took hours, but there's a ~10% WER (word error rate) that is tough to get around.
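Whichever diarizer you pick, its output is a list of (start, end, speaker) turns that still has to be stitched onto the transcript. A sketch of that overlap-matching step, independent of the pyannote-vs-senko choice (the tuple shapes here are my assumption, not either library's exact output format):

```python
def label_segments(segments, turns):
    """Attach a speaker label to each transcript segment (start, end, text)
    by picking the diarization turn (start, end, speaker) with the largest
    time overlap; segments with no overlapping turn stay 'unknown'."""
    out = []
    for s_start, s_end, text in segments:
        best, best_ov = "unknown", 0.0
        for t_start, t_end, spk in turns:
            ov = min(s_end, t_end) - max(s_start, t_start)
            if ov > best_ov:
                best, best_ov = spk, ov
        out.append((best, text))
    return out
```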