Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model

https://github.com/KittenML/KittenTTS
241•divamgupta•3h ago
Kitten TTS is an open-source series of tiny and expressive text-to-speech models for on-device applications. We are excited to launch a preview of our smallest model, which is less than 25 MB. This model has 15M parameters.

This release supports English text-to-speech applications in eight voices: four male and four female. The model is quantized to int8 + fp16 and uses ONNX for the runtime. It is designed to run literally anywhere: Raspberry Pi, low-end smartphones, wearables, browsers, etc. No GPU required!

We're releasing this to give early users a sense of the latency and voices that will be available in our next release (hopefully next week). We'd love your feedback! Just FYI, this model is an early checkpoint trained on less than 10% of our total data.

We started working on this because existing expressive OSS models require big GPUs to run them on-device and the cloud alternatives are too expensive for high frequency use. We think there's a need for frontier open-source models that are tiny enough to run on edge devices!
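For anyone who wants to try it locally, basic usage is only a few lines of Python. A minimal sketch, assuming the package name, model id, voice name, and `generate` signature shown in the repo README at the time of writing (these may change between releases):

  import soundfile as sf
  from kittentts import KittenTTS

  # Download/load the ~25 MB nano checkpoint (CPU-only, ONNX under the hood)
  m = KittenTTS("KittenML/kitten-tts-nano-0.1")

  # Synthesize with one of the eight bundled voices and save a 24 kHz wav
  audio = m.generate("Kitten TTS runs without a GPU.", voice="expr-voice-2-f")
  sf.write("output.wav", audio, 24000)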

Comments

GaggiX•6h ago
https://huggingface.co/KittenML/kitten-tts-nano-0.1

https://github.com/KittenML/KittenTTS

These are the model and GitHub pages; the blog post looks very much AI-generated.

nine_k•6h ago
I hope this is the future. Offline, small ML models, running inference on ubiquitous, inexpensive hardware. Models that are easy to integrate into other things, into devices and apps, and even to drive from other models maybe.
rohan_joshi•48m ago
yeah totally. the quality of these tiny models is only going to go up.
divamgupta•48m ago
That is our vision too!
WhyNotHugo•16m ago
Dedicated single-purpose hardware with models would be even less energy-intensive. It's theoretically possible to design chips which run neural networks and the like using just resistors (rather than transistors).

Such hardware is not general-purpose, and upgrading the model would not be possible, but there's plenty of use-cases where this is reasonable.

mayli•6h ago
Is this english only?
g7r•6h ago
Yes. The FAQ says that multilingual capabilities are in the works.
a2128•6h ago
If you're looking for other languages, Piper has been around in this scene for much longer and they have open-source training code and a lot of models (they're ~60MB instead of 25MB but whatever...) https://huggingface.co/rhasspy/piper-voices/tree/main
kenarsa•5h ago
Or use https://github.com/Picovoice/orca which is about 7MB and supports 8 languages
pezgrande•4h ago
you need an API key and internet access to run locally? lol. Classic .NET project.
evgpbfhnr•6h ago
I tried it on some Japanese for kicks; it reads... "Chinese letter, Chinese letter, Japanese letter, Chinese letter..." :D

But yeah, if it's like any of the others, we'll likely see a different "model" per language down the line, based on the same techniques.

riedel•19m ago
Actually, I found it irritating that the README does not mention the language at all. I don't think it's good practice to have to deduce it from the language of the README itself. I would not like to have German-language TTS models with only a German README...
toisanji•6h ago
Wow, amazing and good work, I hope to see more amazing models running on CPUs!
rohan_joshi•42m ago
thanks, we're going to release many more models in the future, that can run on just CPUs.
onair4you•6h ago
Okay, lots of detailed information and example code, great. But skimming through, I didn't see any audio samples to judge the quality?
TheAceOfHearts•6h ago
They posted a demo on reddit[0]. It sounds amazing given the tiny size.

[0] https://old.reddit.com/r/LocalLLaMA/comments/1mhyzp7/kitten_...

onair4you•4h ago
Thanks! Yeah. It definitely isn’t the absolute best in quality but it trounces the default TTS options on macOS (as third party developers are locked out of the Siri voices). And for less than the size of many modern web pages…
blopker•6h ago
Web version: https://clowerweb.github.io/kitten-tts-web-demo/

It sounds ok, but impressive for the size.

nine_k•6h ago
Does anybody find it funny that sci-fi movies have to heavily distort "robot voices" to make them sound "convincingly robotic"? A robotic, explicitly non-natural voice would be perfectly acceptable, and even desirable, in many situations. I don't expect a smart toaster to talk like a BBC host; it'd be enough if the speech is easy to recognize.
roywiggins•6h ago
This one is at least an interesting idea: https://genderlessvoice.com/
cosmojg•4h ago
The voice sounds great! I find it quite aesthetically pleasing, but it's far from genderless.
degamad•4h ago
Interesting concept, but why is that site filled with Top X blogspam?
dang•2h ago
Meet Q, a Genderless Voice - https://news.ycombinator.com/item?id=19505835 - March 2019 (235 comments)
cyberax•39m ago
It doesn't sound genderless.
userbinator•3h ago
A robotic, explicitly non-natural voice would be perfectly acceptable, and even desirable, in many situations [...] it'd be enough if the speech is easy to recognize.

We've had formant synths for several decades, and they're perfectly understandable and require a tiny amount of computing power, but people tend not to want to listen to them:

https://en.wikipedia.org/wiki/Software_Automatic_Mouth

https://simulationcorner.net/index.php?page=sam (try it yourself to hear what it sounds like)

saretup•2h ago
Well, this one is a bit too jarring to the ears.
rixed•2h ago
But there is no latency, as opposed to KittenTTS, so it certainly has its applications too.
cess11•1h ago
Try this demo, which has more knobs:

https://discordier.github.io/sam/

actionfromafar•54m ago
I think it's charming
miki123211•1h ago
SAM and the way it works is not what people typically associate with the term "formant synthesizer."

DECtalk[1,2] would be a much better example, that's as formant as you get.

[1] https://en.wikipedia.org/wiki/DECtalk [2] https://webspeak.terminal.ink

tapper•22m ago
Yeah, blind people love Eloquence.
Twirrim•2h ago
> I don't expect a smart toaster to talk like a BBC host;

Well sure, the BBC have already established that it's supposed to sound like a Brit doing an impersonation of an American: https://www.youtube.com/watch?v=LRq_SAuQDec

incone123•1h ago
Depends on the movie. Ash and Bishop in the Alien franchise sound human until there's a dramatic reason to sound more 'robotic'.

I agree with your wider point. I use Google TTS with Moon+Reader all the time (I tried audio books read by real humans but I prefer the consistency of TTS)

quantummagic•6h ago
Doesn't work here. Backend module returns 404 :

https://clowerweb.github.io/node_modules/onnxruntime-web/dis...

Retr0id•5h ago
Looks like this commit 15 minutes ago broke it https://github.com/clowerweb/kitten-tts-web-demo/commit/6b5c...

(seems reverted now)

kenarsa•5h ago
Try https://github.com/Picovoice/orca It's about 7MB all included
satvikpendem•5h ago
Does an APK for Android exist for replacing its text-to-speech engine? I tried sherpa-onnx, but it seemed too slow for real-time usage, especially for audiobooks when sped up.
kenarsa•5h ago
https://github.com/Picovoice/orca/tree/main/demo%2Fandroid
satvikpendem•5h ago
I can't test this out right now. Is this just a demo, or is it actually an APK for replacing the engine? Those are two different things; the latter can be used any time you want to read something aloud on a page, for example. This is the sherpa-onnx one I'm talking about:

https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html

gary_0•5h ago
Not open source. "You will need internet connectivity to validate your AccessKey with Picovoice license servers ... If you wish to increase your limits, you can purchase a subscription plan." https://github.com/Picovoice/orca#accesskey
cakealert•2h ago
Going online is a dealbreaker, but if you really need it you could use Ghidra to fix that. I had tried to find a conversion of their model to ONNX (making their proprietary pipeline useless) but failed.

Hopefully open source will render them irrelevant in the future.

papichulo2023•1h ago
The guy is just spamming the project in a lot of comments.
Retr0id•5h ago
I tried to replicate their demo text but it doesn't sound as good for some reason.

If anyone else wants to try:

> Kitten TTS is an open-source series of tiny and expressive text-to-speech models for on-device applications. Our smallest model is less than 25 megabytes.

cortesoft•3h ago
Is the demo using something other than the smallest model?
itake•4h ago
> Error generating speech: failed to call OrtRun(). ERROR_CODE: 2, ERROR_MESSAGE: Non-zero status code returned while running Expand node. Name:'/bert/Expand' Status Message: invalid expand shape

Doesn't seem to work with Thai.

jainilprajapati•4h ago
You can also try on https://clowerweb.github.io/node_modules/onnxruntime-web/dis...
nxnsxnbx•2h ago
Thanks, I was looking for that. While the Reddit demo sounds OK (though at a level we reached a couple of years ago), all the TTS samples I tried were barely understandable.
divamgupta•44m ago
This is just an early checkpoint. We hope that the quality will improve in the future.
bkyan•2h ago
I got an error when I tried the demo with 6 sentences, but it worked great when I reduced the text to 3 sentences. Is the length limit due to the model or just a limitation for the demo?
cess11•1h ago
Perhaps a length limit? I tried this:

"This first Book proposes, first in brief, the whole Subject, Mans disobedience, and the loss thereupon of Paradise wherein he was plac't: Then touches the prime cause of his fall, the Serpent, or rather Satan in the Serpent; who revolting from God, and drawing to his side many Legions of Angels, was by the command of God driven out of Heaven with all his Crew into the great Deep."

It takes a while until it starts generating sound on my i7 cores but it kind of works.

This also works:

"blah. bleh. blih. bloh. blyh. bluh."

So I don't think it's a limit on punctuation. Voice quality is quite bad though, not as far from the old school C64 SAM (https://discordier.github.io/sam/) of the eighties as I expected.

divamgupta•46m ago
Currently we don't have chunking enabled yet. We will add it soon. That will remove the length limitations.
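In the meantime, a workaround is to split the text into sentences and synthesize chunk by chunk. A rough sketch, assuming the same `KittenTTS.generate` API as the README and that it returns a NumPy audio array:

  import re
  import numpy as np
  import soundfile as sf
  from kittentts import KittenTTS

  def speak_long(model, text, voice="expr-voice-2-f", sr=24000):
      # Naive sentence splitter; keeps each chunk under the model's length limit
      chunks = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
      parts = [model.generate(chunk, voice=voice) for chunk in chunks]
      return np.concatenate(parts), sr

  m = KittenTTS("KittenML/kitten-tts-nano-0.1")
  wav, sr = speak_long(m, "First sentence. Second sentence. Third sentence.")
  sf.write("long_output.wav", wav, sr)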
belchiorb•1h ago
This doesn’t seem to work on Safari. Works great on Chrome, though
divamgupta•58m ago
Hmm, we will look into it.
tapper•15m ago
You should post on the NVDA email list (https://nvda.groups.io/g/nvda) or the screen reader list (https://winaccess.groups.io/g/winaccess). FYI, blind people do not like any lag when reading; that's why so many still use Eloquence and eSpeak.
rohan_joshi•47m ago
yeah, this is just a preview model from an early checkpoint. the full model release will be next week which includes a 15M model and an 80M model, both of which will have much higher quality than this preview.
Aardwolf•28m ago
On PC it's Python dependency hell, but someone managed to package it as self-contained JS code that works offline once it has loaded the model? How is that done?
mlboss•6h ago
Reddit post with generated audio sample: https://www.reddit.com/r/LocalLLaMA/comments/1mhyzp7/kitten_...
tapper•23m ago
Sounds slow, and like something from an anime.
ricardobeat•10m ago
Speech speed is always a tunable parameter and not something intrinsic to the model.

The comparison to make is expressiveness and correct intonation for long sentences vs something like espeak. It actually sounds amazing for the size. The closest thing is probably KokoroTTS at 82M params and ~300MB.

pkaye•6h ago
Where does the training data for these models come from? Is there an openly available dataset that people use?
wewewedxfgdf•6h ago
`say` is only 193K on macOS

  ls -lah /usr/bin/say
  -rwxr-xr-x  1 root  wheel   193K 15 Nov  2024 /usr/bin/say
Usage:

  M1-Mac-mini ~ % say "hello world this is the kitten TTS model speaking"
dented42•6h ago
That's not a fair comparison. `say` just calls the speech synthesis APIs that have been around since at least Mac OS 8.

That being said, the ‘classical’ (pre-AI) speech synthesisers are much smaller than kitten, so you’re not wrong per se, just for the wrong reason.

deathanatos•3h ago
The linked repository at the top-level here has several gigabytes of dependencies, too.
wnoise•6h ago
And what dynamic libraries is it linked to? And what other data are they pulling in?
satvikpendem•5h ago
`say` sounds terrible compared to modern neural network based text to speech engines.
wewewedxfgdf•5h ago
Sounds about the same as Kitten TTS.
satvikpendem•5h ago
To me it sounds worse, especially on the construction of certain more complex sentences or words.
selcuka•5h ago
SAM on Commodore 64 was only 6K:

https://project64.c64.org/Software/SAM10.TXT

Obviously it's not fair to compare these with ML models.

tonypapousek•4h ago
Tried that on the 26 beta, and the default voice sounds a lot smoother than it used to.

Running `man say` reveals that "this tool uses the Speech Synthesis manager", so I'm guessing the Apple Intelligence stuff is kicking in.

dented42•8m ago
Nothing to do with Apple Intelligence. The speech synthesiser manager (the term manager was used for OS components in Classic Mac OS) has been around since the mid 90s or so. The change you’re hearing is probably a new/modified default voice.
RobKohr•6h ago
What's a good one in reverse; speech to text?
jasonjmcghee•6h ago
Whisper and the many variants. Here's a good implementation.

https://github.com/ggml-org/whisper.cpp

wenc•3h ago
This one is a whisper-based Python package

https://github.com/primaprashant/hns

wkat4242•5h ago
Hmm, the quality is not so impressive. I'm looking for a really natural-sounding model. I'm not very happy with Piper/Kokoro, and XTTS was a bit complex to set up.

For STT, Whisper is really amazing, but I'm still missing a good TTS, and I don't mind throwing GPU power at it. Anyway, this isn't it either; it sounds worse than Kokoro.

kenarsa•5h ago
Try https://github.com/Picovoice/orca
echelon•5h ago
> Hmm the quality is not so impressive. [...] And I don't mind throwing GPU power at it.

This isn't for you, then. You should evaluate quality here based on the fact you don't need a GPU.

Back in the pre-Tacotron2 days, I was running slim TTS and vocoder models like GlowTTS and MelGAN on Digital Ocean droplets. No GPU to speak of. It cost next to nothing to run.

Since then, the trend has been to scale up. We need more models to scale down.

In the future we'll see small models living on-device. Embedded within toys and tools that don't need or want a network connection. Deployed with Raspberry Pi.

Edge AI will be huge for robotics, toys and consumer products, and gaming (i.e. world models).

kamranjon•5h ago
The best open one I've found so far is Dia - https://github.com/nari-labs/dia - it has some limitations, but I think it's really impressive and I can run it on my laptop.
guskel•5h ago
Chatterbox is also worth a try.
jainilprajapati•4h ago
You should give https://pinokio.co/ a try.
andai•5h ago
Can you run it in reverse for speech recognition?
gromgull•1h ago
no, but whisper has a 39M model: https://github.com/openai/whisper
divamgupta•1h ago
We will release an STT model as well.
keyle•5h ago
I don't mind so much the size in MB, the fact that it's pure CPU, or the quality; what I do mind, however, is the latency. I hope it's fast.

Aside: Are there any models for understanding voice to text, fully offline, without training?

I will be very impressed when we are able to have a conversation with an AI at a natural rate and not "probe, space, response".

Teever•3h ago
Any idea what factors play into latency in TTS models?
divamgupta•59m ago
Mostly model size and input size. Some models which use attention are O(N^2) in the input length.
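A quick way to get numbers on your own machine is to time `generate` directly. A sketch, again assuming the README's API; the first call is treated as a warm-up since one-time model loading dominates it:

  import time
  from kittentts import KittenTTS

  m = KittenTTS("KittenML/kitten-tts-nano-0.1")
  m.generate("warm up")  # exclude one-time initialization from the timings

  for text in ["Um", "The brown fox jumps over the lazy dog.."]:
      start = time.perf_counter()
      m.generate(text)
      elapsed = time.perf_counter() - start
      print(f"{len(text):3d} chars: {elapsed:.2f}s ({len(text) / elapsed:.1f} chars/s)")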
blensor•2h ago
"The brown fox jumps over the lazy dog.."

Average duration per generation: 1.28 seconds

Characters processed per second: 30.35

--

"Um"

Average duration per generation: 0.22 seconds

Characters processed per second: 9.23

--

"The brown fox jumps over the lazy dog.. The brown fox jumps over the lazy dog.."

Average duration per generation: 2.25 seconds

Characters processed per second: 35.04

--

processor : 0

vendor_id : AuthenticAMD

cpu family : 25

model : 80

model name : AMD Ryzen 7 5800H with Radeon Graphics

stepping : 0

microcode : 0xa50000c

cpu MHz : 1397.397

cache size : 512 KB

keyle•2h ago
Assuming most answers will be more than a sentence, 2.25 seconds is already too long once you factor in the token generation as well... and imagine with reasoning! We're not there yet.
moffkalast•1h ago
Hmm, that actually seems extremely slow. Piper can crank out a sentence almost instantly on a Pi 4, which is like a sloth compared to that Ryzen, and the speech quality seems about the same at first glance.

I suppose it would make sense if you want to include it on top of an LLM that's already occupying most of a GPU, since this could run in the limited VRAM that's left.

colechristensen•2h ago
>Aside: Are there any models for understanding voice to text, fully offline, without training?

OpenAI's whisper is a few years old and pretty solid.

https://github.com/openai/whisper
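The Python package is also only a few lines to use, and the smaller checkpoints run fine on CPU. A sketch using the openai-whisper package; the audio filename is just a placeholder:

  import whisper  # pip install openai-whisper

  model = whisper.load_model("base")           # "tiny" ... "large", trading speed for accuracy
  result = model.transcribe("recording.wav")   # fully offline once the weights are cached
  print(result["text"])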

jiehong•1h ago
Voice to text fully offline can be done with whisper. A few apps offer it for dictation or transcription.
Dayshine•1h ago
Nvidia's Parakeet https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2 appears to be state of the art for English: 10x faster than Whisper.

My mid-range AMD CPU is multiple times faster than realtime with parakeet.

sandreas•4h ago
Cool.

While I think this is indeed impressive and has a specific use case (e.g. in the embedded sector), I'm not totally convinced that the quality is good enough to replace bigger models.

With fish-speech [1] and F5-TTS [2] there are at least two open-source models pushing the quality limits of offline text-to-speech. I tested F5-TTS with an old NVIDIA 1660 (6 GB VRAM) and it worked OK-ish, so running it on slightly more modern hardware will not cost you a fortune and will produce MUCH higher quality, with multi-language and zero-shot support.

For Android there is SherpaTTS[3], which plays pretty well with most TTS Applications.

1: https://github.com/fishaudio/fish-speech

2: https://github.com/SWivid/F5-TTS

3: https://github.com/woheller69/ttsengine

divamgupta•1h ago
We have released just a preview of the model. We hope to make the model much better in future releases.
jainilprajapati•4h ago
♥
maxloh•4h ago
Hi. Will the training and fine-tuning code also be released?

It would be great if the training data were released too!

MutedEstate45•4h ago
The headline feature isn’t the 25 MB footprint alone. It’s that KittenTTS is Apache-2.0. That combo means you can embed a fully offline voice in Pi Zero-class hardware or even battery-powered toys without worrying about GPUs, cloud calls, or restrictive licenses. In one stroke it turns voice everywhere from a hardware/licensing problem into a packaging problem. Quality tweaks can come later; unlocking that deployment tier is the real game-changer.
defanor•2h ago
Festival's English model, festvox-kallpc16k, is about 6 MB, and that is the large model; festvox-kallpc8k is about 3.5 MB.

eSpeak NG's data files take about 12 MB (multi-lingual).

I guess this one may generate more natural-sounding speech, but older or lower-end computers were capable of decent speech synthesis previously as well.

Joel_Mckay•1h ago
Custom voices could be added, but the speed was more important to some users.

$ ls -lh /usr/bin/flite

Listed as 27K last I checked.

I recall some Blind users were able to decode Gordon 8-bit dialogue at speeds most people found incomprehensible. =3

rohan_joshi•1h ago
yeah, we are super excited to build tiny ai models that are super high quality. local voice interfaces are inevitable and we want to power those in the future. btw, this model is just a preview, and the full release next week will be of much higher quality, along with another ~80M model ;)
vahid4m•4h ago
amazing! can't wait to integrate it into https://desktop.with.audio. I'm already using KokoroTTS without a GPU; it works fairly well on Apple Silicon.

Foundational tools like this open up the possibility of one-time-payment or even free tools.

rohan_joshi•38m ago
would love to see how that turns out. the full model release next week will be more expressive and higher quality than this one so we're excited to see you try that out.
glietu•4h ago
Kudos guys!
divamgupta•1h ago
Thanks
wewewedxfgdf•4h ago
Chrome does TTS too.

https://codepen.io/logicalmadboy/pen/RwpqMRV

dang•3h ago
Most of these comments were originally posted to a different thread (https://news.ycombinator.com/item?id=44806543). I've moved them hither because on HN we always prefer to give the project creators credit for their work.

(It does, however, explain why many of these comments are older than the thread they are now children of.)

righthand•2h ago
The sample rate does more than change the quality.
indigodaddy•2h ago
Can Coqui run CPU-only?
palmfacehn•2h ago
Yes, XTTS2 has been reasonably performant for me and the cloning is acceptable.
mg•2h ago
Good TTS feels like it is something that should be natively built into every consumer device. So the user can decide if they want to read or listen to the text at hand.

I'm surprised that phone manufacturers do not include good TTS models in their browser APIs, for example, so that websites can build good audio interfaces.

I for one would love to build a text editor that the user can use completely via audio. Text input might already be feasible via the "speak to type" feature both Android and iOS offer.

But there seems to be no good way to output spoken text without doing round-trips to a server and generating the audio there.

The interface I would like would offer a way to talk to write and then commands like "Ok editor, read the last paragraph" or "Ok editor, delete the last sentence".

It could be cool to do writing this way while walking. Just with a headset connected to a phone that sits in one's pocket.

jiehong•1h ago
On macOS you can "speak" text in almost every app, using a built-in voice (like the Siri voice or some older voices). It's all offline, and even available from the terminal with "say".
babycommando•2h ago
Someone please port this to ONNX so we don't need to do all this ass tooling
victorbjorklund•2h ago
It is not the best TTS, but it is freaking amazing that it can be done by such a small model, and it is good enough for so many use cases.
rohan_joshi•39m ago
thanks, but keep in mind that this model is just a preview checkpoint that is only 10% trained. the full release next week will be of much higher quality and it will include a 15M model and an 80M model.
android521•1h ago
it would be great if there were TypeScript support in the future
divamgupta•1h ago
Yup, it runs in the web browser. https://clowerweb.github.io/kitten-tts-web-demo/
khanan•1h ago
"please join our DISCORD!"...
klipklop•1h ago
I tried it. Not bad for the size (of the model) and speed. Once you install the massive number of libraries and other things needed, though, we are a far cry from 25 MB. Cool project nonetheless.
Dayshine•1h ago
It mentions ONNX, so I imagine an ONNX model is or will be available.

ONNX runtime is a single library, with C#'s package being ~115MB compressed.

Not tiny, but usually only a few lines to actually run and only a single dependency.
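If you just want the model file without the Python package's dependencies, onnxruntime alone is enough to load it and see what the graph expects. A sketch; the filename is hypothetical, and the actual input names/shapes (token ids, voice/style embedding, etc.) should be read from the session rather than assumed:

  import onnxruntime as ort

  sess = ort.InferenceSession("kitten_tts_nano.onnx",  # hypothetical local filename
                              providers=["CPUExecutionProvider"])
  # Inspect the declared inputs/outputs before trying to run it; any text
  # preprocessing the Python package does still has to happen outside the graph.
  for i in sess.get_inputs():
      print("input: ", i.name, i.shape, i.type)
  for o in sess.get_outputs():
      print("output:", o.name, o.shape, o.type)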

divamgupta•50m ago
We will try to get rid of dependencies.
WhyNotHugo•20m ago
Usually pulling in lots of libraries helps you develop/iterate faster. They can be removed later, once the whole thing starts to take shape.
antisol•1h ago

  System Requirements
  Works literally everywhere
Haha, on one of my machines the Python version is too old, and the package/dependencies don't want to install.

On another machine the Python version is too new, and the package/dependencies don't want to install.

divamgupta•51m ago
We are working to fix that. Thanks
raybb•18m ago
Have you considered offering a uvx command people can run to get going quickly?
Tatiana343•48m ago
ok
hahn-kev•32m ago
Python man
countfeng•46m ago
Very good model, thanks for open-sourcing it.
rohan_joshi•41m ago
thanks a lot, this model is just a preview checkpoint. the full release next week will be of much higher quality.
tapper•26m ago
I am blind and use NVDA with a synth. How is this news? I don't get it! My synth is called Eloquence and is 4089 KB.
Perz1val•23m ago
Is the name a joke on "If the Emperor had a text-to-speech device"? It's funny.
