Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
66•yi_wang•2h ago•23 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
233•valyala•10h ago•45 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
24•RebelPotato•2h ago•4 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
144•surprisetalk•10h ago•146 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
175•mellosouls•13h ago•333 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
62•gnufx•9h ago•55 comments

IBM Beam Spring: The Ultimate Retro Keyboard

https://www.rs-online.com/designspark/ibm-beam-spring-the-ultimate-retro-keyboard
19•rbanffy•4d ago•4 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
172•AlexeyBrin•15h ago•32 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
152•vinhnx•13h ago•16 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
41•swah•4d ago•90 comments

First Proof

https://arxiv.org/abs/2602.05192
125•samasblack•12h ago•75 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
298•jesperordrup•20h ago•95 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
69•momciloo•10h ago•13 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
96•randycupertino•5h ago•212 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
98•thelok•12h ago•21 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
35•mbitsnbites•3d ago•3 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
566•theblazehen•3d ago•206 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
7•ujjwalreddyks•5d ago•2 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
286•1vuio0pswjnm7•16h ago•464 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
126•josephcsible•8h ago•154 comments

The silent death of good code

https://amit.prasad.me/blog/rip-good-code
81•amitprasad•4h ago•76 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
29•languid-photic•4d ago•9 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
180•valyala•10h ago•165 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
899•klaussilveira•1d ago•275 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
225•limoce•4d ago•125 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
115•onurkanbkrc•15h ago•5 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
111•zdw•3d ago•55 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
141•speckx•4d ago•224 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
143•videotopia•4d ago•48 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•1h ago•5 comments

Launch HN: Uplift (YC S25) – Voice models for under-served languages

113•zaidqureshi•5mo ago
Hi HN, we are Zaid, Muhammad, and Hammad, the co-founders of Uplift AI (https://upliftai.org). We build models that speak under-served languages; today that means Urdu, Sindhi, and Balochi.

A billion people worldwide can't read. In countries like Pakistan – the 5th most populous country – 42% of adults are illiterate. This holds back the entire economy: patients can't read medical reports, parents can't help with homework, banks can't go fully digital, farmers can't research best practices, and people memorize smartphone app button sequences. Voice AI interfaces can fix all of this, and we think this will perhaps be one of the great benefits of modern AI.

Right now, existing voice models barely work for these languages, and big tech is moving slowly.

Uplift AI was originally a side project to build datasets for translation and voice models: for us it was a "cool side thing", not an "important full-time thing". With some initial data we hacked together an Urdu voice bot on WhatsApp and gave it to one domestic worker. Within two days, 800 people were using it. When we dug deeper into understanding the users, we learned that text interfaces simply don't work for a huge number of people. So we started Uplift AI to solve this problem full-time.

The most challenging part is that all the building blocks needed for great voice models are broken for these languages. For example, to create a speech synthesis model you would scrape a lot of data from YouTube and auto-label it using a transcription model... all very easy to do in English. But it doesn't work for under-served languages, because the transcription models are not accurate.
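For illustration, here is roughly what that English-style loop looks like. A minimal sketch, assuming yt-dlp for scraping and faster-whisper for transcription; the confidence gate at the end is exactly the part you can't trust for under-served languages:

    # Sketch of the scrape-and-auto-label loop that works for English.
    # Assumes the yt-dlp CLI and the faster-whisper Python package.
    import subprocess
    from faster_whisper import WhisperModel

    def download_audio(url: str, out: str) -> None:
        # Extract an audio-only WAV track from a video URL.
        subprocess.run(
            ["yt-dlp", "-x", "--audio-format", "wav", "-o", out, url],
            check=True,
        )

    model = WhisperModel("large-v3")

    def auto_label(wav_path: str, min_logprob: float = -0.5):
        segments, _info = model.transcribe(wav_path)
        # Keep only segments the model is confident about. For
        # under-served languages both the transcripts and these
        # confidence scores are unreliable, which is where this
        # pipeline breaks down.
        return [
            (s.start, s.end, s.text)
            for s in segments
            if s.avg_logprob > min_logprob
        ]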

There are many other challenges. For instance, when you hire human transcribers to label the data, they often have no spell checkers for their languages, and this creates a lot of noise in the data, making it hard to train models with little data. There are many more challenges in phonemes, silence detection, diacritization, etc.
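As a toy illustration of the spelling-noise problem (not our actual tooling): with no spell checker available, one cheap trick is to flag rare tokens that sit one edit away from a much more frequent token, and send those to a human reviewer:

    # Flag likely misspellings in transcripts: a rare token within one
    # edit of a ~20x-more-frequent token is probably noise. Toy example,
    # not Uplift's real tooling.
    from collections import Counter

    def within_one_edit(a: str, b: str) -> bool:
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):
            # Same length: at most one substitution.
            return sum(x != y for x, y in zip(a, b)) <= 1
        if len(a) > len(b):
            a, b = b, a
        # a is one char shorter: try deleting each char of b.
        return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

    def flag_suspects(tokens: list[str], ratio: int = 20):
        freq = Counter(tokens)
        return [
            (rare, common)
            for rare, n in freq.items()
            for common, m in freq.items()
            if rare != common
            and m >= ratio * n
            and within_one_edit(rare, common)
        ]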

We solve these problems by building great internal tooling to help with data labeling. We also source our own data rather than buying it. This is counterintuitive, but it's a big advantage over companies that buy data and then train: by sourcing our own data we create the right data distributions and get much better models with much less data. And by doing the entire thing in-house (data, labeling, training, deploying), we are able to make much faster progress.

Today we publicly offer text-to-speech APIs for Urdu, Sindhi, and Balochi. Here's a video that shows this: https://www.loom.com/share/dcd5020967444c228e9c127151e7a9f5.
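To give a feel for the integration, a hypothetical request is sketched below. The endpoint path, field names, and voice ID are illustrative assumptions, not our documented API; https://docs.upliftai.org has the real reference:

    # Hypothetical text-to-speech call; the endpoint, fields, and
    # voice ID are assumptions for illustration only.
    import requests

    resp = requests.post(
        "https://api.upliftai.org/v1/text-to-speech",  # assumed path
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "voiceId": "urdu-conversational-1",  # assumed voice ID
            "text": "...",                       # Urdu input text
            "outputFormat": "mp3",               # assumed field
        },
        timeout=30,
    )
    resp.raise_for_status()
    with open("out.mp3", "wb") as f:
        f.write(resp.content)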

Khan Academy is using our tech to dub videos to Urdu (https://ur.khanacademy.org).

Our models excel at informational use cases (like AI bots) but need more work on emotive use cases like poetry.

We have been giving a lot of people private access in beta, and today we are launching our models publicly. We believe this will be the fastest way for us to learn about areas that are not performing well so we can fix them quickly.

We'd love to hear from all of you, especially around your experiences with under-served languages (not just the Pakistani ones we're starting with) and your comments in general.

Comments

akshayp29•5mo ago
Pretty cool! Do you think the model would be good at other under-served languages as well? Or is it hypertuned to just these?
zaidqureshi•5mo ago
The model itself can work well for new languages; it's just the process of gathering data and maintaining high data quality that we have to figure out as we scale across languages.

Currently the model is only given data for these languages, so it doesn't know anything else.

akshayp29•5mo ago
Cool - makes sense!
mandeepj•5mo ago
> just the process of data gathering and maintaining high quality of data is what we have to figure out as we scale across languages.

A crawler and data ingestion pipeline will not help with that?

zaidqureshi•5mo ago
Gathering audio data online is not that hard, but getting it accurately labelled is challenging: the speech understanding systems for those languages aren't there either, so we can't do the labeling automatically.
pavlov•5mo ago
Nice! Clearly a big and underserved market for voice AI solutions.

Would be nice to have some code examples for using your TTS API with Pipecat.

zaidqureshi•5mo ago
I have to make that... I did make one for LiveKit, which uses our WebSocket API designed for real-time conversation:

https://docs.upliftai.org/tutorials/livekit-voice-agent
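The gist of the real-time flow, with an assumed message schema (the tutorial above documents the actual protocol):

    # Streaming synthesis over a WebSocket: send text, play audio
    # chunks as they arrive. URL and message shapes are assumptions.
    import asyncio, json
    import websockets

    async def stream_tts(text: str):
        async with websockets.connect(
            "wss://api.upliftai.org/v1/realtime"  # assumed URL
        ) as ws:
            await ws.send(json.dumps({"type": "synthesize", "text": text}))
            async for msg in ws:
                if isinstance(msg, bytes):
                    yield msg  # raw audio chunk for immediate playback
                elif json.loads(msg).get("type") == "end":
                    break

    async def main():
        async for chunk in stream_tts("ہیلو"):
            ...  # feed each chunk to the agent's audio output

    asyncio.run(main())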

zaidqureshi•5mo ago
btw, I did try to make it with Pipecat first, but I had some annoying Windows issues getting the libraries for Daily etc. installed, so I posted something that was easily reproducible for the tutorial...
mdbackman•5mo ago
Hi! Pipecat maintainer here. There is no Windows restriction for Pipecat in general. The DailyTransport does not support Windows, but it works on WSL. You don't have to use the DailyTransport, though; Pipecat has interchangeable transport support. You can do all of your testing on a free, P2P WebRTC transport (SmallWebRTCTransport, based on aiortc) without system restrictions.

Reach out on Discord if you have any challenges.

zaidqureshi•5mo ago
will do!
sanman8119•5mo ago
Would love to see Malayalam here one day!
zaidqureshi•5mo ago
Yes! I will keep track of this comment for the day we do :P
yorwba•5mo ago
Unless that happens within a week or so, this thread will be locked and you won't be able to reply anymore.

It would be good to have a company blog with an RSS feed that people can subscribe to for updates.

zaidqureshi•5mo ago
ah, I created a quick Google Form for language requests! https://forms.gle/XA6nZbmBNK5K7GJv5
sanman8119•5mo ago
Submitted!
zaidqureshi•5mo ago
appreciate it!
moinism•5mo ago
Congrats on the launch! Having support for regional voices is going to open up so many opportunities.
zaidqureshi•5mo ago
Agreed!
nojs•5mo ago
Nice, this is really needed. Would be cool to see some of the less common regional Chinese dialects, which are widely spoken and often the only language older people speak. And even just more accurate regional accents for Mandarin.
zaidqureshi•5mo ago
wow, I did not know that! Do you feel there is a gap in speech understanding here, or is personalization missing from current TTS?
_waqas_ali_•5mo ago
As a Sindhi speaker myself: amazing stuff. The output is so good. This unlocks the vastness of the internet for millions of people. I am imagining something like NotebookLM but for under-served languages, or a hotline where people can call and talk/learn about anything. Do you guys have plans to create B2C products yourselves?
zaidqureshi•5mo ago
At the moment we are focused on making the models available through API so developers can make some cool things. We are actively monitoring to see if there is an opportunity that we will be better positioned to solve.

We are planning on hosting an online hackathon soon, so will suggest these things as ideas!

_waqas_ali_•5mo ago
Fair enough. I don’t have a use case for the API yet but I am looking forward to the products that come out of this
zaidqureshi•5mo ago
Maybe I'll make another post in a month with all the cool products that have come out so far :)
Bilal_io•5mo ago
Congratulations on the launch! I really hope it doesn't get used to launch misinformation campaigns against the country.

Are you aware of any effort to educate and fight against misinformation in Pakistan?

zaidqureshi•5mo ago
Hope so! It is great that overall it has a big impact on making knowledge more accessible (e.g. Khan Academy using it to dub their content in minutes instead of weeks). But there are lots of other areas where it applies as well.
jnmandal•5mo ago
Looks really cool, exciting to see. I have two questions around this:

1. Given that you are concerned with providing access to a class of folks that are traditionally ignored by technologists, do you plan to make these models usable for offline purposes? For example, an illiterate person I know from Uttarakhand: his home village is not connected by road. Interestingly, he does speak Hindi, but his native language, I believe, is something more obscure. To get home, he walks five hours from the terminus of a road. Connectivity is obviously both limited and intermittent. A usable device might want the voice interface embedded on it. Any plans for this?

2. I have minimal understanding of this, but as someone who has learned Hindi/Urdu as a foreign language in the US, I am often in mixed conversation with both Indians and Pakistanis. There never seem to be any issues with communication. I have heard that certain terms (for example "khub suraat", "shukria", "kitaab") are more Urdu than Hindi. I also studied Arabic, Farsi, and Swahili, so I am familiar with these as loanwords from Arabic and/or Persian, but in practice I hear Hindi speakers using these terms often. Is the primary value-add here political? Is it an accent thing? Thanks in advance for any explanation. This is still very much a mystery to me.

hammadmlk•5mo ago
1. Offline models: Yes that is on the roadmap. There is a big demand for them especially in interactive educational use-cases.

2. Urdu and Modern Hindi can be cross-understood in spoken form. Authentic Hindi is much different, though, and I can't understand the press releases that are done in super-authentic Hindi. The writing systems of Urdu and Hindi are completely different too, so even if there is a great TTS system in Hindi, I can't use it. Accents are very different too.

Scripts: ہیلو हेलो

muhammadbsabir•5mo ago
To increase access we're also exploring telco hotlines. Carrier penetration is much higher than internet penetration, so this could let people use AI through a simple phone call. Some users already pay via SIM balance for similar services, like weather updates for farmers. But to scale, it will likely require government or telco partnerships.
jnmandal•5mo ago
Telco integration sounds amazing. Wishing y'all success.
muhammadbsabir•5mo ago
Thanks!
adz_6891•5mo ago
This is really cool. Congrats on the launch. Would be interested to know which low resource languages in Sub-Saharan Africa you'd be working on, particularly in Nigeria and South Africa.
zaidqureshi•5mo ago
If you have interest in or insights into specific languages, we'd love it if you could fill out this form so we can reach out in the future: https://forms.gle/XA6nZbmBNK5K7GJv5

Lots of ground to cover for sure!

adz_6891•5mo ago
Submitted!
Lienetic•5mo ago
Very cool, congrats on the launch! What's your plan for when one of the larger players like ElevenLabs or Google adds support for these languages? I would guess the reason why they haven't is because they don't see a large opportunity. How are you thinking about it?
hammadmlk•5mo ago
I think the voice models market will be like eCommerce: there will be no global winner, instead a few regional winners -- each being really big.

We plan to be one of those winners.

chirau•5mo ago
What does it take to build such a model? As in, what are the key steps? And how expensive does it get? I might be interested in being a regional player and winner as well, lol, in my own corner of the world in Africa.
hammadmlk•5mo ago
Not much... just the willingness to work hard on this problem instead of other problems where large revenue is perhaps quicker :)

Ingredients: decent audio-scraping skills, great voice actors for each language, algorithms to gather text/audio with diverse phonetics, decent ML skills (enough to merge the best features of a few different papers), lots and lots of data labels (and your own tools to get the data labeled efficiently), and finally GPUs!!!!

None of this is technically hard... the hardest thing is working with voice actors (oh man!!!)
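For a flavor of the "diverse phonetics" part, here is a toy version of one standard approach: greedily pick recording-script sentences that cover the most not-yet-seen phoneme bigrams. This is an illustration, not our exact algorithm, and phonemize() stands in for a real grapheme-to-phoneme model:

    # Greedy recording-script selection: maximize coverage of unseen
    # phoneme bigrams. phonemize() is a stand-in for a real G2P model.
    from typing import Callable

    def select_script(
        sentences: list[str],
        phonemize: Callable[[str], list[str]],
        budget: int,
    ) -> list[str]:
        covered: set[tuple[str, str]] = set()
        chosen: list[str] = []
        pool = list(sentences)
        for _ in range(budget):
            if not pool:
                break

            def gain(s: str) -> int:
                # How many new phoneme bigrams would sentence s add?
                ph = phonemize(s)
                return len(set(zip(ph, ph[1:])) - covered)

            best = max(pool, key=gain)
            if gain(best) == 0:
                break  # nothing new left to cover
            ph = phonemize(best)
            covered |= set(zip(ph, ph[1:]))
            chosen.append(best)
            pool.remove(best)
        return chosen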

muhammadbsabir•5mo ago
Thanks! You’re right, the big players mostly ignore these languages. The additional challenge is the lack of online data, so we spend a lot of effort on data collection and labeling on the ground.

Also, companies like ElevenLabs and Deepgram have done well by focusing on specific use cases, even though the big labs are amazing at English.

Right now these languages are underserved, so there’s a window to build the best models for these languages.

asadm•5mo ago
Congrats on the launch! I have been sole-funding a Sindhi dataset on Common Voice. Did you check that out, by any chance?
muhammadbsabir•5mo ago
Amazing! Not yet, I will check it out.

Also, some super cool projects on your website :)

ks2048•5mo ago
Nice work.

Have you looked at the MMS models from Meta and how do they compare?

By publicly releasing, do you mean offering an API, or have you considered a Hugging Face model release? I understand why that might not be best for your business model, but what would be your goal from a business perspective?

zaidqureshi•5mo ago
Launched them through an API. From a business perspective, the goal is to get adoption of voice apps in the targeted regions. Some companies can now create voice agents, etc.
hammadmlk1•5mo ago
Yes, we read the paper when it came out and reviewed the audio samples. We didn't find them good enough for adoption. We didn't compare results with MMS in a systematic way because it seemed irrelevant.
primitivesuave•5mo ago
The output quality is remarkable. You mentioned that there are 1 billion illiterate people who would benefit from this, and I would add that there are at least 1 billion additional people who would benefit because they speak a regional dialect. There are many countries across the developing world where the AI tools and translation apps only produce output in the official government dialect (e.g. the Thai spoken in Bangkok, the Hindi spoken in Delhi, or the Mandarin spoken in Beijing). It would be interesting to see how a voice model could be "fine tuned" to better serve a specific regional dialect.
zaidqureshi•5mo ago
Yes! The first goal is to get coverage ASAP. I think it will be easy to add dialects with the current model architecture. The hard part will be LLMs catching up on producing consistent text that respects the linguistics as we drill deeper.
willwade•5mo ago
Your datasets: are they public? For more under-represented languages we DON'T need closed voice models; what the world really needs is open voice data repositories (e.g. TTS-ready voice banks AND phonemization DBs in projects like Mozilla Common Voice). Why? Because commercial demand is so small that these languages are not commercially viable, but we DO need TTS for assistive technology purposes, and there is very little $$$ associated with it.

(That said, Urdu is NOT a small population, so well done!)

zaidqureshi•5mo ago
They aren't public. Agreed on commercial viability: even in Pakistan, businesses are price-sensitive, so our models are currently priced really cheap (just because the markets are small).
aneeqdhk•5mo ago
Any plans for speech-to-text? I want to automatically generate subtitles for videos that have Urdu audio.
muhammadbsabir•5mo ago
Yes, we are working on speech-to-text as well. It should be out in the next 2 months.
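In the meantime, the subtitle-writing half is simple once you have timed segments. A sketch assuming a generic (start, end, text) segment shape, not our actual response schema:

    # Turn timed transcription segments into an .srt subtitle file.
    # The segment dicts are a generic assumption about STT output.
    def srt_timestamp(sec: float) -> str:
        ms = int(round(sec * 1000))
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    def to_srt(segments: list[dict]) -> str:
        blocks = []
        for i, seg in enumerate(segments, 1):
            blocks.append(
                f"{i}\n"
                f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
                f"{seg['text']}\n"
            )
        return "\n".join(blocks)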
tugdual•5mo ago
This is what my Master's project was about, working on the case of Wolof. I trained XTTSv2 and had solid results with less than 20h of paired data that wasn't of the highest quality either. Hmu: tkerjan@outlook.com