frontpage.

I Found 39 Algolia Admin Keys Exposed Across Open Source Documentation Sites

https://benzimmermann.dev/blog/algolia-docsearch-admin-keys
46•kernelrocks•1h ago•10 comments

Drone strikes in Haiti that killed 1,250 people, 17 of them children, condemned by rights group

https://haitiantimes.com/2026/03/11/hrw-condemns-haiti-drone-strikes-killing-children/
89•e12e•1h ago•23 comments

Can I run AI locally?

https://www.canirun.ai/
865•ricardbejarano•11h ago•228 comments

Show HN: Channel Surfer – Watch YouTube like it’s cable TV

https://channelsurfer.tv
391•kilroy123•2d ago•136 comments

Mouser: An open source alternative to Logi-Plus mouse software

https://github.com/TomBadash/MouseControl
160•avionics-guy•5h ago•53 comments

Hammerspoon

https://github.com/Hammerspoon/hammerspoon
181•tosh•5h ago•73 comments

Qatar helium shutdown puts chip supply chain on a two-week clock

https://www.tomshardware.com/tech-industry/qatar-helium-shutdown-puts-chip-supply-chain-on-a-two-...
381•johnbarron•11h ago•355 comments

OpenTelemetry for Rust Developers

https://signoz.io/blog/opentelemetry-rust/
15•dhruv_ahuja•3d ago•1 comment

Parallels confirms MacBook Neo can run Windows in a virtual machine

https://www.macrumors.com/2026/03/13/macbook-neo-runs-windows-11-vm/
171•tosh•10h ago•228 comments

"Added 1M context window for Opus 4.6 by default for Max, Team, and Enterprise"

https://raw.githubusercontent.com/anthropics/claude-code/refs/heads/main/CHANGELOG.md
8•taspeotis•40m ago•1 comment

Stanford researchers report first recording of a blue whale's heart rate (2019)

https://news.stanford.edu/stories/2019/11/first-ever-recording-blue-whales-heart-rate
46•eatonphil•5h ago•33 comments

New 'negative light' technology hides data transfers in plain sight

https://www.unsw.edu.au/newsroom/news/2026/03/New-negative-light-technology-hides-data-transfers-...
51•wjSgoWPm5bWAhXB•2d ago•33 comments

MetaGenesis Core – offline verification for computational claims

https://www.metagenesis-core.dev/
9•Lama9901•2d ago•3 comments

TUI Studio – visual terminal UI design tool

https://tui.studio/
536•mipselaer•13h ago•271 comments

Elon Musk pushes out more xAI founders as AI coding effort falters

https://www.ft.com/content/e5fbc6c2-d5a6-4b97-a105-6a96ea849de5
276•merksittich•7h ago•397 comments

Using Thunderbird for RSS

https://rubenerd.com/using-thunderbird-for-rss/
57•ingve•3d ago•8 comments

Exploring JEPA for real-time speech translation

https://www.startpinch.com/research/en/jepa-encoder-translation/
21•christiansafka•2d ago•4 comments

Show HN: Context Gateway – Compress agent context before it hits the LLM

https://github.com/Compresr-ai/Context-Gateway
57•ivzak•6h ago•39 comments

Lost Doctor Who Episodes Found

https://www.bbc.co.uk/news/articles/c4g7kwq1k11o
202•edent•19h ago•65 comments

Your phone is an entire computer

https://medhir.com/blog/your-phone-is-an-entire-computer
221•medhir•6h ago•225 comments

Bucketsquatting is finally dead

https://onecloudplease.com/blog/bucketsquatting-is-finally-dead
303•boyter•15h ago•158 comments

I beg you to follow Crocker's Rules, even if you will be rude to me

https://lr0.org/blog/p/crocker/
11•ghd_•1h ago•25 comments

Source code of Swedish e-government services has been leaked

https://darkwebinformer.com/full-source-code-of-swedens-e-government-platform-leaked-from-comprom...
197•tavro•14h ago•188 comments

Launch HN: Spine Swarm (YC S23) – AI agents that collaborate on a visual canvas

https://www.getspine.ai/
83•a24venka•10h ago•65 comments

John Carmack about open source and anti-AI activists

https://twitter.com/id_aa_carmack/status/2032460578669691171
211•tzury•6h ago•309 comments

The Wyden Siren Goes Off Again: We'll Be "Stunned" by NSA Under Section 702

https://www.techdirt.com/2026/03/12/the-wyden-siren-goes-off-again-well-be-stunned-by-what-the-ns...
337•cf100clunk•8h ago•102 comments

Launch HN: Captain (YC W26) – Automated RAG for Files

https://www.runcaptain.com/
44•CMLewis•8h ago•23 comments

Hyperlinks in terminal emulators

https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda
84•nvahalik•20h ago•57 comments

Meta Platforms: Lobbying, dark money, and the App Store Accountability Act

https://github.com/upper-up/meta-lobbying-and-other-findings
1137•shaicoleman•14h ago•477 comments

Okmain: How to pick an OK main colour of an image

https://dgroshev.com/blog/okmain/
238•dgroshev•4d ago•43 comments

Llasa: Llama-Based Speech Synthesis

https://llasatts.github.io/llasatts/
168•CalmStorm•10mo ago

Comments

CalmStorm•10mo ago
LLaSA is a simple framework for speech synthesis that employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture to fully align with standard LLMs such as LLaMA.
WastedCucumber•10mo ago
Probably the title should have the correct capitalization then, because I was fully expecting a speech synthesis tool that sounded like llamas speaking human language, and now I'm bummed out!
StevenNunez•10mo ago
I can't wait to see this integrated into Open WebUI! These sound amazing.
gapeleon•10mo ago
You can run an openai-compatible endpoint and point open-webui at it if you want this. I had to add a function to filter out markdown lists, code, etc as the model was choking on them.
mring33621•10mo ago
the long 'uuuuhhhhhhh' from some of the lesser models is killing me.
jszymborski•10mo ago
Based on the samples, it really seems like anything smaller than 3B is pretty useless.
hadlock•10mo ago
If you're doing a home lab voice assistant, 1B is nice, because on a 12 GB GPU you can run a moderately competent 7B LLM and two 1B models, one for speech-to-text and one for text-to-speech, plus some headroom for the wake-word monitor. Maybe in a couple of years we can combine all this into a single ~8B model that runs efficiently on a 12 GB GPU. Nvidia doesn't seem very incentivized right now to sell consumer GPUs that can run all this on a single consumer-grade chip when they're making so much money selling commercial-grade 48 GB cards.
Dlemo•10mo ago
What about the activation word?

Shouldn't there be some hardware module available, similar to how Alexa, Siri and Google do it?

With a ring buffer detecting the word without recording everything?

gapeleon•10mo ago
This finetune seems pretty stable (1B Llasa): https://huggingface.co/spaces/HKUST-Audio/Llasa-1B-multi-spe...

1B is actually huge for a TTS model. Here's an 82M model with probably the most stable/coherent output of all the open-weights TTS models I've tested: https://huggingface.co/spaces/hexgrad/Kokoro-TTS

But if you mean zero-shot cloning, yeah they all seem to have those slurred speech artefacts from time to time.

nialv7•10mo ago
the mispronunciation of 行 and 行 in the Chinese sample is killing me too XD
dheera•10mo ago
> employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture to fully align

I really wish that when new models were released, they would draw a diagram of all the layers and the tensor input and output sizes at each layer, with zoom in/out capabilities using D3.js or whatever visualization framework is needed. Every single layer should be on there with its input and output sizes.

These one-sentence descriptions, and approximate block diagrams with arrows pointing at each other, are never enough to understand how something is actually implemented.

exe34•10mo ago
Sounds like a solid SaaS business plan!
dr_kiszonka•10mo ago
That might be intentional.
imtringued•10mo ago
This already exists in Transformer Lab and ONNX (not recommended for transformers).

You can also build a custom version of llama.cpp that writes out the ggml compute graph. What's irritating is that Hugging Face didn't add it to their GGUF file viewer.

dheera•10mo ago
Oh, sure, for the well-known models that are already on there.

I just wish that new research would always spell it out in full instead of these silly block diagrams labelled with just e.g. "Cross Attention" and not the exact parameters, number of heads, layer sizes, etc.

Also, some of these diagrams use a + for concatenation and some use it for addition; that's another headache to figure out. Having layer sizes would make it clear.

ks2048•10mo ago
Odd that the page doesn't seem to link to either of these:

paper: https://arxiv.org/abs/2502.04128

github: https://github.com/zhenye234/LLaSA_training

thot_experiment•10mo ago
Interesting that there isn't a mention of Orpheus as prior art either since it's the exact same thing.

(https://github.com/canopyai/Orpheus-TTS)

gapeleon•10mo ago
> Interesting that there isn't a mention of Orpheus as prior art either

Llasa-3b (https://huggingface.co/HKUSTAudio/Llasa-3B) came out before Orpheus (https://huggingface.co/canopylabs/orpheus-3b-0.1-ft).

> it's the exact same thing.

They're very similar, but they're not the exact same thing.

Llasa uses xcodec2, a much simpler, lossless 16 kHz WAV codec. This makes it superior for one-shot voice cloning.

Orpheus' 24 kHz snac codec is lossy, which makes it difficult to use for zero-shot cloning, as the reference audio gets degraded during tokenization. You can test this here: https://huggingface.co/spaces/Gapeleon/snac_test

But when finetuned on 50+ audio samples, it produces much cleaner 24 kHz audio than Llasa, and the snac model is much easier to run on consumer hardware than xcodec2 (87 tok/s for realtime speech, which can be achieved on an RTX 3080, for example).

oezi•10mo ago
Do you happen to know why Orpheus and Llasa use finetuning for voice cloning?

Zonos uses 128-float embeddings for voices, which seems so much nicer, because you can just mix and match voices without changing the model.

thot_experiment•10mo ago
No, you just condition it with text-voice token pairs, and then when conditioning further inference with text, the generated voice tokens tend to match the pairs further up in the context.
oezi•10mo ago
Isn't xcodec2 also lossy? I thought it was also just another neural codec (50 tok/s, single codebook).

What are people using to upsample back to 44.1 or 48 kHz? Anything fancy?

woodson•10mo ago
They’re both lossy. They use a VAE-VQ type architecture trained with a combination of losses/discriminators. The differences are mainly the encoder/decoder architecture, the type of bottleneck quantization (RVQ, FSQ, etc.) and of course the training data.