
Ask HN: Is the Downfall of SaaS Started?

1•throwaw12•1m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•3m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•5m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•8m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•8m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•10m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•12m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•14m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•17m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•22m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•24m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•27m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•39m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•41m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•42m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•55m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•57m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Omnilingual ASR: Advancing automatic speech recognition for 1600 languages

https://ai.meta.com/blog/omnilingual-asr-advancing-automatic-speech-recognition/?_fb_noscript=1
162•jean-•2mo ago
HF Demo: https://huggingface.co/spaces/facebook/omniasr-transcription...

GitHub: https://github.com/facebookresearch/omnilingual-asr

Comments

meetpateltech•2mo ago
HF Demo: https://huggingface.co/spaces/facebook/omniasr-transcription...

GitHub: https://github.com/facebookresearch/omnilingual-asr

dang•2mo ago
Thanks! I've added those links to the toptext as well.
tschellenbach•2mo ago
any insights on latency?
samat•2mo ago
How hard is it to make TTS out of this? A few independent journalists from Belarus asked for TTS in their language, but I am no expert, was thinking about re-using Mozilla's work. What's the easiest way to get working TTS for a language?
kulahan•2mo ago
From TFA, it says that it’s extremely easy to add new languages with just a few examples. I didn’t see specifics on how “few” it really is, though.
nl•2mo ago
This is ASR not TTS though.
woodson•2mo ago
EDIT: My bad, please disregard; As akreal pointed out, the MMS TTS models aren’t using the SSL models.

Original post:

You can use the OmniASR SSL models instead of their older MMS models to create TTS models: https://github.com/ylacombe/finetune-hf-vits

willwade•2mo ago
Meta cheated with the MMS models: they didn't use a phonemizer step, which means the TTS models just won't work or will sound very strange. ASR data is usually not quite right for TTS. Anyhow, not really answering your question, but many of these languages are already covered in MMS. Try them: https://huggingface.co/spaces/willwade/sherpa-onnx-tts
akreal•2mo ago
As far as I understand, the MMS TTS models are trained from scratch (section 7.1 of [1]), they do not employ any SSL models. So the OmniASR SSL models are not useful here.

What might be interesting is the newly released OmniASR data, because the MMS data, which was used for the MMS TTS, was never released.

Also, the OmniASR can be used to transcribe some untranscribed speech to train a TTS on it.

[1] MMS paper: https://arxiv.org/pdf/2305.13516

woodson•2mo ago
You’re completely right, I misremembered. I edited my post.
stuffoverflow•2mo ago
This seems like a massive improvement for openly available local ASR. Even the 300M model outperforms whisper-large-v3 according to the paper's benchmarks.
lostmsu•2mo ago
Not sure, I recorded 3 seconds of voice (a single sentence) and the hf demo misrecognized about half of the words.
nshm•2mo ago
This model is actually expected to be bad for popular languages. Just like the previous MMS, it is not accurate at all; it wins by supporting rare languages well, but it never had good ASR accuracy even for Swedish etc. It is more a research thing than a real tool, unlike Whisper.
nshm•2mo ago
And moreover, you cannot tune these models for practical applications. The model is originally trained on very clean data, so the lower layers are not very stable for diverse inputs. To fine-tune, you have to update the whole model, not just the upper layers.
yorwba•2mo ago
In section 5.7.5, they fine-tune for "11 low-resource languages, with between 5-10 hours of training data and at least 1 hour of validation splits." "CTC fine-tuning takes ≈1 hour of walltime on 32 GPUs for the 300M scale." If that's too expensive, you also have the option of supplying additional context for the LLM-based model (section 5.5).

As for "very clean data," see section 5.7.4: "Omnilingual + OMSF ASR was intentionally curated to represent naturalistic (i.e., often noisy) audio conditions, diverse speaker identities, and spontaneous, expressive speech."
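
The disagreement above (nshm: the whole model must be updated; yorwba: CTC fine-tuning is cheap) comes down to which parameters you unfreeze. A toy PyTorch sketch of the two options, using a hypothetical stand-in encoder, not the actual omnilingual-asr model or API:

```python
import torch.nn as nn

# Toy stand-in for a wav2vec2-style stack: 6 "layers" plus a CTC head.
# Layer sizes and vocab size (32, incl. the CTC blank) are made up.
encoder = nn.Sequential(*[nn.Linear(64, 64) for _ in range(6)])
ctc_head = nn.Linear(64, 32)

def trainable(params):
    """Count parameters that would receive gradient updates."""
    return sum(p.numel() for p in params if p.requires_grad)

# Option A: freeze the lower half, tune only the upper layers + head.
for layer in list(encoder)[:3]:
    for p in layer.parameters():
        p.requires_grad = False

upper_only = trainable(encoder.parameters()) + trainable(ctc_head.parameters())

# Option B: full fine-tune (what nshm argues these models need).
for p in encoder.parameters():
    p.requires_grad = True

full = trainable(encoder.parameters()) + trainable(ctc_head.parameters())
print(upper_only, full)  # -> 14560 27040
```

The trade-off is exactly what the thread describes: Option A is cheaper per step but relies on the frozen lower layers being robust to your audio; Option B costs roughly twice the optimizer state here (and far more at the 300M scale the paper fine-tunes).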

AIorNot•2mo ago
the global language explorer is fascinating - great work, guys

https://aidemos.atmeta.com/omnilingualasr/language-globe

- we are getting closer to BabelFish.. at least for the Earth!

cadamsdotcom•2mo ago
Only a few gb of weights will recognize speech in 1600+ languages.

Freely downloadable and usable by anyone for almost anything.

We truly live in the future.

prodigycorp•2mo ago
Seeing the absurd number of languages made me think of the Norm Macdonald joke:

Music is the universal language, but one day soon it will be replaced by Chinese.

momojo•2mo ago
Does anyone else feel like they buried the lede?

> Omnilingual ASR was designed as a community-driven framework. People around the world can extend Omnilingual ASR to new languages by using just a few of their own samples.

The world just got smaller

ks2048•2mo ago
Just killed my startup. https://6k.ai

Half joking - hopefully we can still contribute something to this field. Looking forward to doing some tests with this.

internet_points•2mo ago
what is the "Penguin" language?

Also, 1.6k < 6k, and I highly doubt this model is anywhere near as good on most of them as it is on EU languages.

ks2048•2mo ago
That's a dumb joke. Yes, I hope to look in detail at their performance on a couple of low-resource languages. Without lots of speakers and data, I think good metrics are hard to come by. I've found the same in Meta's massively multilingual TTS: what looks impressive at first glance turns out to be quite bad on smaller languages.
mcswell•2mo ago
First, let me say that this is impressive. And then let me pose some questions:

As a linguist, I would like to know more about the kinds of languages this works well with, or does not work well with. For example, half the world's languages are tone languages, and the way tones work varies greatly among these. Some just have high and low tones, while others are considerably more complicated; Thai has high, mid, low, rising and falling. Also, tone is relative, e.g. a man's high tone might be a woman's low tone. And some African languages have tones whose absolute frequencies vary across an utterance. So transcribing tone is a quite different problem from transcribing phonemes--and yet for many tone languages, the tone is crucial.

There are also rare(r) phonemes, like the clicks in many languages of southern Africa. Of course maybe they've already trained on some of these languages.

The HuggingFace demo says "Supported Languages[:] For this public demo, we've restricted transcription to low-resource languages with error rates below 10%." That's unclear: 10% word error rate, or character/phoneme error rate? The meta.com page refers to character error rate (CER); a 10% character error rate can imply a much higher word error rate (WER), since most words contain several characters/phonemes. That said, there are ways to get around that, like using a dictionary to select among different paths through possible character sequences so you only get known words, and adding to that a morphological parser for languages that have lots of affixes (meaning not all the word forms will be in the dictionary--think walk, walks, walked, walking--only the first will be in most dictionaries).

Enquiring minds want to know!
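
The CER-vs-WER gap raised above is easy to see numerically. A minimal sketch (toy sentences, not data from the paper): three single-character errors yield a CER well under 10% but a WER over 30%, since each bad character spoils a whole word.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, O(len(hyp)) memory."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1]

def cer(ref, hyp):
    """Character error rate: char edits / reference length in chars."""
    return edit_distance(ref, hyp) / len(ref)

def wer(ref, hyp):
    """Word error rate: word edits / reference length in words."""
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quik brown fox jamps over the lasy dog"  # 3 char errors, 3 bad words

print(f"CER: {cer(ref, hyp):.2f}")  # -> CER: 0.07
print(f"WER: {wer(ref, hyp):.2f}")  # -> WER: 0.33
```

This is why a demo gated on "error rates below 10%" means something quite different depending on which metric is meant.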

aargh_aargh•2mo ago
I'm not an expert but the rule of thumb is to expect something like this:

https://xkcd.com/1838/

____tom____•2mo ago
What I really want to know is how well these could work for non-human languages. No, not aliens, but chimpanzees, dolphins, bonobos. We have hundreds or thousands of hours of recordings.

What would it take to start working on them?

nshm•2mo ago
You can check the whale sound recognition project: https://arxiv.org/abs/2104.08614
benob•2mo ago
Not tested on that particular model, but the idea has been flying around for some time: https://arxiv.org/abs/2509.04166v1
akreal•2mo ago
There is a dolphin language model project from Google and Georgia Tech: https://blog.google/technology/ai/dolphingemma/
____tom____•2mo ago
That's exactly the kind of thing I was hoping people were working on!
netdevphoenix•2mo ago
I think linguists don't deem animals to have languages, as you require human-level intelligence to use and understand some of the features of human languages, like communicating about things removed from your current time and place. Animals have communication systems.
____tom____•2mo ago
I'm not asserting that bonobos, for example, have as complex a language as humans, just that it would be interesting to understand what language that they do have.

"You haven't experienced Shakespeare until you've read him in the original Bonobo". :-)

netdevphoenix•2mo ago
It would be indeed. But it would not be a language. What animals have is called "communication system". A language can be seen as a type of communication system. Languages are complex by definition and require certain cognitive capabilities: ability to create new sentences based on a set of rules, spatial-temporal displacement, etc.
dSebastien•2mo ago
I'm going to test this with Voice AI to see how it works compared to Whisper and Parakeet

https://voice-ai.knowii.net

sipjca•2mo ago
looks like a paid and closed source fork of the free and open source project Handy: https://github.com/cjpais/Handy

can't say for sure, but a lot of the UI (and text) is quite familiar. the history page is a near rip-off, which is a giveaway.

i believe the mit license should be distributed since it's almost certainly a derivative work.

"The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."

I can't confirm whether the license is in fact distributed, since I would have to pay $50, which quite frankly I'm not going to do.

a bit sad to see a ui reskin claimed as original work. the reskin is totally fine, but I believe the license must be distributed. i believe in the proliferation of this software, so im happy to see this overall (it's good enough that someone wants to charge for it! that's a big win!) but it's just a bit of a shame how this project has gone about it imo.

copypirate•2mo ago
I thought it looked familiar! Looks like they only changed some of the UI/colors lol.
dSebastien•2mo ago
This is not the case. I've implemented many things differently.

The core of the transcription process is mostly the same and uses the same libraries, but I've rebuilt the UI from scratch to be fully responsive, added support for Wayland/Hyprland (with adaptive window size), implemented lazy loading, grouping by date, editing, search and filtering for the history screen, implemented history storage/handling differently, added more control over the history feature, added support for custom sounds, improved the UX around managing sounds, added a loading screen, added support for model download pause/resume/cancel/delete, etc.

These might seem like details, but it all takes time. I started this project 3 weeks ago and this is just the beginning.

In my roadmap I've listed many ideas I have in mind and will be focusing on: https://docs.voice-ai.knowii.net/roadmap

I want to go in a different direction than Handy, and so do my customers (who are mainly interested in Knowledge Management).

dSebastien•2mo ago
I never claimed this was fully original work. My project indeed started as a fork of Handy. I've discussed this here: https://github.com/DeveloPassion/knowii-voice-ai-docs/issues... and https://www.knowii.net/c/announcements/knowii-voice-ai. I also mentioned it in the user documentation (FAQ page) and About page within the app.

I am trying to approach this with full transparency, honesty and respect for what the creator of Handy did. I'm not a grifter.

Please consider that my project is still very young. I didn't include the third-party licenses in my first few releases (I honestly didn't know this about the MIT license, my bad!), but will fix this asap with the next release (hopefully coming out in a few days), and I'll pull the previous releases to avoid distributing versions that don't include the licenses. I'll also add information about the other production dependencies that I'm using.

If you look at my announcement, you'll see that I'm being fully transparent about this and am not interested in cloning Handy at all. My code is already very distant from the initial version I started with, and I'm exploring and building features that will probably never be included in Handy. For instance, my app's UI has been created from scratch (with a lot of inspiration from Handy); it is fully responsive and now works on Omarchy (Hyprland/Wayland), which Handy doesn't support at the moment. I have added various features for my own needs and for my first customers (e.g., ...). In the roadmap of the product, you can see some of the ideas I intend to develop.

I also intend to contribute back to Handy over time. I already have and will continue to do so.

tmikaeld•2mo ago
Swedish

Status: Endangered

"The child-bearing generation can use the language among themselves, but it is seldom being transmitted to children."

What!? A lot must have changed in one generation..

District5524•2mo ago
Yes, there seem to be lots of mistakes and no easy way to flag them. Highly endangered: Malayalam (35 million speakers), Hungarian (14 million), Uighur (11 million); or Swedish as endangered... These are quite obvious mistakes even for a layperson.
oezi•2mo ago
Unfortunately, I don't read anything in the paper about improvements to timing/timestamping. In particular, unclean word boundaries are hard with wav2vec2.

And their use of LLMs as part of the transcription process makes it likely that they trained the model to correct mispronunciations by the speaker. This lowers CER, because human transcriptions often correct for mispronunciations as well, but it reduces the ability of the model to actually transcribe what was said.

benob•2mo ago
> Bring Your Own Language

Few-shot new languages is going to be a game changer for linguists

District5524•2mo ago
I agree that this is very exciting and really crucial research, and I'm glad there is funding for it. But it's very strange that Hungarian is marked as "highly endangered" at https://aidemos.atmeta.com/omnilingualasr/language-globe. "Highly endangered" is supposed to mean "The language is used by grandparents and older generations; while the parent generation may still understand the language, they typically do not speak it to children or among themselves." Then why is Hungarian marked as such? That's obviously not true with 14 million active speakers and Hungarian being 20th in terms of language resources published on the Internet. Additionally, the feedback mechanism seems broken ("There was an error submitting your feedback. Please try again.").
internet_points•2mo ago
Finnish: "safe" – sounds right

South Estonian: "vulnerable" – sure, yeah

Karelian: "endangered" – seems correct

Swedish: also "endangered" – wat

Ghari (12k speakers): "safe" – :facepalm:

Are these really language-vulnerability ratings or did they just make a mapping from Trump's tariff rates?

District5524•2mo ago
My new favourite mistake is Malayalam being highly endangered...
yorwba•2mo ago
The Ethnologue link in footnote 7 of the paper has utm_source=chatgpt.com at the end, so I suspect whoever was tasked with listing languages and determining their status thought this wasn't important enough to do it themselves and just had ChatGPT give them a list. FWIW, Ethnologue does say that Ghari is "Stable" https://www.ethnologue.com/language/gri/ Meanwhile Swedish is "Institutional," the highest possible level of vitality https://www.ethnologue.com/language/swe/