
Hardware Attestation as Monopoly Enabler

https://grapheneos.social/@GrapheneOS/116550899908879585
1530•ChuckMcM•15h ago•493 comments

Local AI needs to be the norm

https://unix.foo/posts/local-ai-needs-to-be-norm/
1133•cylo•16h ago•485 comments

The greatest shot in television: James Burke had one chance to nail this scene (2024)

https://www.openculture.com/2024/10/the-greatest-shot-in-television.html
165•susam•7h ago•66 comments

I'm going back to writing code by hand

https://blog.k10s.dev/im-going-back-to-writing-code-by-hand/
355•dropbox_miner•8h ago•161 comments

Running local models on an M4 with 24GB memory

https://jola.dev/posts/running-local-models-on-m4
308•shintoist•10h ago•94 comments

Obsidian plugin was abused to deploy a remote access trojan

https://cyber.netsecops.io/articles/obsidian-plugin-abused-in-campaign-to-deploy-phantom-pulse-rat/
209•cmbailey•11h ago•107 comments

An AI coding agent, used to write code, needs to reduce your maintenance costs

https://www.jamesshore.com/v2/blog/2026/you-need-ai-that-reduces-your-maintenance-costs
160•cratermoon•10h ago•40 comments

Incident Report: CVE-2024-YIKES

https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes.html
534•miniBill•16h ago•135 comments

Mythos Finds a Curl Vulnerability

https://daniel.haxx.se/blog/2026/05/11/mythos-finds-a-curl-vulnerability/
129•TangerineDream•3h ago•51 comments

Guitar tuner that uses phone accelerometer

https://tautme.github.io/phone-sensors/accel-tuner.html
13•adm4•3d ago•8 comments

7 lines of code, 3 minutes: Implement a programming language (2010)

https://matt.might.net/articles/implementing-a-programming-language/
49•azhenley•5h ago•11 comments

Show HN: adamsreview – better multi-agent PR reviews for Claude Code

https://github.com/adamjgmiller/adamsreview
34•adamthegoalie•7h ago•10 comments

Ask HN: What are you working on? (May 2026)

185•david927•16h ago•673 comments

First tunnel element of the Fehmarnbelt Tunnel immersed

https://www.arup.com/en-us/news/first-fehmarnbelt-tunnel-element-lowered/
96•robin_reala•3d ago•32 comments

How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?

https://dunkels.com/adam/claude-user-space-ip-stack-ping/
76•adunk•10h ago•23 comments

dBase: 1979-2026

https://delphinightmares.substack.com/p/dbase-1979-2026
74•deeaceofbase•3d ago•23 comments

Guy Goma's Accidental BBC Interview Lives on After 20 Years

https://www.nytimes.com/2026/05/06/business/media/bbc-guy-goma-interview.html
117•nxobject•2d ago•28 comments

Phel v0.36.0 – Lisp on PHP, now with numeric tower and first-class Vars

https://github.com/phel-lang/phel-lang/releases/tag/v0.36.0
31•Chemaclass•3d ago•4 comments

I returned to AWS and was reminded why I left

http://fourlightyears.blogspot.com/2026/05/i-returned-to-aws-and-was-reminded-hard.html
765•andrewstuart•2d ago•535 comments

Traces Of Humanity

https://tracesofhumanity.org/hello-world/
157•alex77456•16h ago•23 comments

Seeing Birdsong

https://www.lucioarese.net/seeing-birdsong/
21•carabiner•3d ago•1 comment

Stop MitM on the first SSH connection, on any VPS or cloud provider

https://www.joachimschipper.nl/Stop%20MITM%20on%20the%20first%20SSH%20connection,%20on%20any%20VP...
117•JoachimSchipper•2d ago•64 comments

Ice Cream Blending (1965) [pdf]

https://bitsavers.org/pdf/ibm/generalInfo/E20-0156-0_Linear_Programming_-_Ice_Cream_Blending.pdf
11•ok123456•2d ago•1 comment

The people preserving the scientific practice of bird banding

https://thenarwhal.ca/bird-banding-ontario/
53•bookofjoe•3d ago•0 comments

Eight More '8-Bit Era' Microprocessors

https://thechipletter.substack.com/p/eight-more-8-bit-era-microprocessors
72•klelatti•2d ago•27 comments

The locals don't know

https://www.quarter--mile.com/The-Locals-Dont-Know
162•herbertl•17h ago•127 comments

Idempotency is easy until the second request is different

https://blog.dochia.dev/blog/idempotency/
307•ludovicianul•3d ago•182 comments

Maryland citizens hit with $2B power grid upgrade for out-of-state AI

https://www.tomshardware.com/tech-industry/artificial-intelligence/maryland-citizens-slapped-with...
250•lemonberry•12h ago•145 comments

Task Paralysis and AI

https://g5t.de/articles/20260510-task-paralysis-and-ai/index.html
237•MrGilbert•1d ago•119 comments

Walking slower? Your ears, not your knees, might be the problem

https://www.wsj.com/health/wellness/hearing-loss-walking-speed-iphone-study-c53c482a
118•marc__1•1d ago•73 comments

Llasa: Llama-Based Speech Synthesis

https://llasatts.github.io/llasatts/
168•CalmStorm•1y ago

Comments

CalmStorm•1y ago
LLaSA is a simple framework for speech synthesis that employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture to fully align with standard LLMs such as LLaMA.
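
In other words, the codec turns audio into a flat stream of discrete tokens, and one LLaMA-style Transformer predicts them the same way it predicts text. A minimal sketch of that loop, with illustrative names rather than the actual LLaSA API:

    import torch

    def synthesize(llm, codec, text_ids, max_speech_tokens=1024, eos_id=0):
        # The LLM's vocabulary contains both text tokens and VQ codec tokens,
        # so TTS is plain autoregressive next-token prediction.
        ids = text_ids  # (1, T) tokenized input text
        speech = []
        for _ in range(max_speech_tokens):
            logits = llm(ids)[:, -1, :]             # next-token distribution
            next_id = torch.argmax(logits, dim=-1)  # greedy, for brevity
            if next_id.item() == eos_id:
                break
            speech.append(next_id)
            ids = torch.cat([ids, next_id[:, None]], dim=1)
        codes = torch.stack(speech, dim=1)  # (1, T) single-codebook VQ ids
        return codec.decode(codes)          # back to a waveform
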
WastedCucumber•1y ago
Probably the title should have the correct capitalization then. Because I was fully expecting a speech synthesis tool that sounded like llamas speaking human language, and now I'm bummed out!
StevenNunez•1y ago
I can't wait to see this integrated into Open WebUI! These sound amazing.
gapeleon•1y ago
You can run an OpenAI-compatible endpoint and point Open WebUI at it if you want this. I had to add a function to filter out markdown lists, code, etc., as the model was choking on them.
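
Something along these lines works as the pre-TTS filter (a rough sketch, not the actual function used above):

    import re

    def strip_markdown_for_tts(text: str) -> str:
        """Remove constructs TTS models tend to choke on."""
        text = re.sub(r"```.*?```", "", text, flags=re.DOTALL)  # fenced code blocks
        text = re.sub(r"`[^`]*`", "", text)                     # inline code
        text = re.sub(r"^\s*(?:[-*+]|\d+\.)\s+", "", text,
                      flags=re.MULTILINE)                       # list markers
        text = re.sub(r"[#>*_]+", " ", text)                    # headings, quotes, emphasis
        return re.sub(r"\s+", " ", text).strip()
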
mring33621•1y ago
the long 'uuuuhhhhhhh' from some of the lesser models is killing me.
jszymborski•1y ago
Based on the samples, it really seems like anything smaller than 3B is pretty useless.
hadlock•1y ago
If you're doing a home lab voice assistant, 1B is nice: on a 12 GB GPU you can run a moderately competent 7B LLM plus two 1B models, one for speech-to-text and one for text-to-speech, with some headroom left for the wake word monitor. Maybe in a couple of years we can combine all of this into a single ~8B model that runs efficiently on a 12 GB GPU. Nvidia doesn't seem very incentivized right now to sell consumer GPUs that can run all this on a single consumer-grade chip when they're making so much money selling commercial-grade 48 GB cards.
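
Back-of-envelope VRAM math for that setup (the quantization choices are assumptions, not from the comment above):

    GB = 1024**3
    llm_7b_q4   = 7e9 * 0.5 / GB   # ~4-bit weights  -> ~3.3 GB
    tts_1b_fp16 = 1e9 * 2.0 / GB   # 16-bit weights  -> ~1.9 GB
    stt_1b_fp16 = 1e9 * 2.0 / GB   #                 -> ~1.9 GB
    overhead    = 2.5              # KV cache, CUDA context, wake word model (GB, rough)

    total = llm_7b_q4 + tts_1b_fp16 + stt_1b_fp16 + overhead
    print(f"{total:.1f} GB of 12 GB")  # ~9.5 GB, so it fits with some headroom
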
Dlemo•1y ago
What about the activation word?

Shouldn't there be a hardware module available, similar to how Alexa, Siri, and Google do it?

With a ring buffer detecting the word without recording everything?
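
That ring-buffer approach is roughly this (a sketch; the detector model and chunk sizes are assumptions):

    import collections
    import numpy as np

    CHUNK = 1280         # 80 ms of 16 kHz mono audio
    WINDOW_CHUNKS = 20   # keep ~1.6 s of history; older audio is overwritten

    ring = collections.deque(maxlen=WINDOW_CHUNKS)

    def on_audio_chunk(chunk: np.ndarray, detector, start_recording):
        """Feed mic chunks in; nothing outside the ring is ever retained."""
        ring.append(chunk)
        window = np.concatenate(ring)
        if detector.detect(window):    # hypothetical wake-word model
            start_recording(window)    # only now begin recording/streaming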

gapeleon•1y ago
This finetune seems pretty stable (1B Llasa): https://huggingface.co/spaces/HKUST-Audio/Llasa-1B-multi-spe...

1B is actually huge for a TTS model. Here's an 82M model with probably the most stable/coherent output of all the open-weights TTS models I've tested: https://huggingface.co/spaces/hexgrad/Kokoro-TTS

But if you mean zero-shot cloning, yeah, they all seem to have those slurred-speech artefacts from time to time.

nialv7•1y ago
the mispronunciation of 行 and 行 in the Chinese sample is killing me too XD
dheera•1y ago
> employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture to fully align

I really wish that when new models were released, they would draw a diagram of all the layers and the tensor input and output sizes at each layer, with zoom in/out if needed, using D3.js or whatever visualization framework. Every single layer should be on there with its input and output sizes.

These one-sentence descriptions and approximate block diagrams with arrows pointing at each other are never enough to understand how something is actually implemented.
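
Short of that, PyTorch forward hooks will dump exact per-layer shapes for any released checkpoint; a sketch:

    import torch

    def print_layer_shapes(model, *example_inputs):
        """Register a hook on every leaf module and print tensor in/out shapes."""
        def fmt(x):
            return tuple(x.shape) if torch.is_tensor(x) else type(x).__name__

        def hook(module, inputs, output):
            outs = fmt(output) if torch.is_tensor(output) else [fmt(o) for o in output]
            print(f"{module.__class__.__name__:<28} "
                  f"in={[fmt(i) for i in inputs]} out={outs}")

        handles = [m.register_forward_hook(hook)
                   for m in model.modules() if not list(m.children())]
        with torch.no_grad():
            model(*example_inputs)
        for h in handles:
            h.remove()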

exe34•1y ago
Sounds like a solid SaaS business plan!
dr_kiszonka•1y ago
That might be intentional.
imtringued•1y ago
This already exists in Transformer Lab and ONNX (not recommended for transformers).

You can also build a custom version of llama.cpp that writes out the ggml compute graph. What's irritating is that Hugging Face didn't add it to their GGUF file viewer.

dheera•1y ago
Oh, sure, for the well-known models that are already on there.

I just wish that new research would always spell it out in full instead of these silly block diagrams labelled with just e.g. "Cross Attention" and not the exact parameters, number of heads, layer sizes, etc.

Also, some of these diagrams use + for concatenation and some use it for addition; that's another headache to figure out, and having layer sizes would make it clear.

ks2048•1y ago
Odd that the page doesn't seem to link to either of these:

paper: https://arxiv.org/abs/2502.04128

github: https://github.com/zhenye234/LLaSA_training

thot_experiment•1y ago
Interesting that there isn't a mention of Orpheus as prior art either, since it's the exact same thing.

(https://github.com/canopyai/Orpheus-TTS)

gapeleon•1y ago
> Interesting that there isn't a mention of Orpheus as prior art either

Llasa-3b (https://huggingface.co/HKUSTAudio/Llasa-3B) came out before Orpheus (https://huggingface.co/canopylabs/orpheus-3b-0.1-ft).

> it's the exact same thing.

They're very similar, but they're not the exact same thing.

Llasa uses xcodec2, a much simpler, lossless 16 kHz WAV codec. This makes it superior for one-shot voice cloning.

Orpheus' 24 kHz SNAC codec is lossy, which makes it difficult to use for zero-shot cloning, as the reference audio gets degraded during tokenization. You can test this here: https://huggingface.co/spaces/Gapeleon/snac_test

But when finetuned on 50+ audio samples, it produces much cleaner 24 kHz audio than Llasa, and the SNAC model is much easier to run on consumer hardware than xcodec2 (87 tok/s for real-time speech, which an RTX 3080 can manage, for example).
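
The degradation test in that space is just an encode/decode round trip through SNAC, something like this (model id and API as in the snac package's docs; treat the details as an assumption):

    import torch
    from snac import SNAC  # pip install snac

    model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval()
    audio = torch.randn(1, 1, 24000)  # stand-in for (B, 1, T) reference audio at 24 kHz

    with torch.inference_mode():
        codes = model.encode(audio)      # hierarchical codebook token lists
        audio_hat = model.decode(codes)  # lossy reconstruction: compare to the input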

oezi•1y ago
Do you happen to know why Orpheus and Llasa use finetuning for voice cloning?

Zonos uses 128-float embeddings for voices, and that seems so much nicer, because you can just mix and match voices without changing the model.
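
With embedding-conditioned voices, mixing is just vector arithmetic; a sketch (illustrative only; how Zonos actually combines embeddings isn't specified here):

    import numpy as np

    def mix_voices(emb_a: np.ndarray, emb_b: np.ndarray, t: float) -> np.ndarray:
        """Blend two 128-float speaker embeddings; t=0 is voice A, t=1 is voice B."""
        mixed = (1.0 - t) * emb_a + t * emb_b
        return mixed / np.linalg.norm(mixed)  # renormalize to stay on-distribution

    # voice = mix_voices(alice_embedding, bob_embedding, 0.5)  # halfway between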

thot_experiment•1y ago
No; you just condition it with text–voice token pairs, and then, when conditioning further inference with text, the generated voice tokens tend to match the pairs further up in the context.
oezi•1y ago
Isn't xcodec2 also lossy? I thought it was also just another neural codec (50 tok/s, single codebook).

What are people using to upsample back to 44.1 or 48 kHz? Anything fancy?
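
The unfancy baseline is plain sinc resampling, e.g. with torchaudio (this converts the sample rate cleanly but won't reconstruct frequency content above the original Nyquist):

    import torchaudio

    wav, sr = torchaudio.load("llasa_output.wav")  # e.g. 16 kHz mono
    wav48 = torchaudio.functional.resample(
        wav, orig_freq=sr, new_freq=48_000,
        lowpass_filter_width=64,  # wider filter = sharper cutoff, fewer artifacts
    )
    torchaudio.save("llasa_output_48k.wav", wav48, 48_000)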

woodson•1y ago
They're both lossy. They use a VAE-VQ-type architecture trained with a combination of losses/discriminators. The differences are mainly the encoder/decoder architecture, the type of bottleneck quantization (RVQ, FSQ, etc.), and of course the training data.