
So, you want to chunk really fast?

https://minha.sh/posts/so,-you-want-to-chunk-really-fast
149•snyy•1d ago

Comments

snyy•1d ago
We're the maintainers of Chonkie, a chunking library for RAG pipelines.

Recently, we've been using Chonkie to build deep research agents that watch topics for new developments and automatically update their reports. This requires chunking a large amount of data constantly.

While building this, we noticed Chonkie felt slow. We started wondering: what's the theoretical limit here? How fast can text chunking actually get if we throw out all the abstractions and go straight to the metal?

This post is about that rabbit hole and how it led us to build memchunk - the fastest chunking library, capable of chunking text at 1TB/s.

Blog: https://minha.sh/posts/so,-you-want-to-chunk-really-fast

GitHub: https://github.com/chonkie-inc/memchunk

Happy to answer any questions!

djoldman•1d ago
English word, clause, sentence, and paragraph boundaries do not always correspond to delimiter characters.

How does the software handle these:

Mrs. Blue went to the sea shore with Mr. Black.

"What's for dinner?" Mrs. Blue asked.

brene•1d ago
Do you see this project merge with the Chonkie at some point? Or do you intend to keep it separate?
snyy•1d ago
Memchunk is already in Chonkie as the `FastChunker`

To install: pip install chonkie[fast]

```
from chonkie import FastChunker

chunker = FastChunker(chunk_size=4096)
chunks = chunker(huge_document)
```

SkyPuncher•1d ago
I've been seeing a bunch of LLM-adjacent articles recently that are focusing on being fast - and they leave me a bit stumped.

While latency _can_ be a problem, reliability and accuracy are almost always my bottlenecks (to user value). Especially with chunking. Chunking is generally a one-time process where users aren't latency sensitive.

chaboud•1d ago
If you have reliability and accuracy (big if) then the practical usability and cost become performance problems.

And this is a bit of a sliding scale. Of course users want the best possible answer. However, if they can get 80% (magic hand-wavey fakie number) of the best answer in one second instead of 20, that may be a worthwhile tradeoff.

snyy•1d ago
> Chunking is generally a one-time process where users aren't latency sensitive.

This is not necessarily true. For example, in our use case we are constantly monitoring websites, blogs, and other sources for changes. When a new page is added, we need to chunk and embed it fast so it's searchable immediately. Chunking speed matters for us.

When you're processing changes constantly, chunking is in the hot path. I think as LLMs get used more in real time workflows, every part of the stack will start facing latency pressure.

rfw300•1d ago
How much compute do your systems expend on chunking vs. the embedding itself?
smlacy•1d ago
Not all languages have such well-defined and commonly used delimiters. Is this "English only"?
snyy•1d ago
Which language are you thinking of? Ideally, how would you identify split points in this language?

I suppose we've only tested this with languages that do have delimiters - Hindi, English, Spanish, and French

There are two ways to control the splitting point. First is through delimiters, and the second is by setting chunk size. If you're parsing a language where chunks can't be described by either of those params, then I suppose memchunk wouldn't work. I'd be curious to see what does work though!

smlacy•1d ago
There are certainly cases of Greek/Latin without any punctuation at all, typically in a historical context. Chinese & Japanese historically did not have any punctuation whatsoever.
ks2048•1d ago
Do the delimiters have to be single bytes? e.g. Japanese full stop (IDEOGRAPHIC FULL STOP) is 3 bytes in UTF-8.
snyy•1d ago
No, delimiters can be multiple bytes. They have to be passed as a pattern.

// With a multi-byte pattern, e.g. "。" (the ideographic full stop, 3 bytes in UTF-8)
let delimiter = "。".as_bytes();
let chunks: Vec<&[u8]> = chunk(text).pattern(delimiter).prefix().collect();

vjerancrnjak•1d ago
So, the whole English Wikipedia in <1 second (~20GB compressed)?

Or is it now a lack of proper pipelining where you first load, then uncompress, then chunk, then write?

Add a nice strong linear model on top, like Vowpal Wabbit, and chunk any language of your choice at 100GB/s.

srcreigh•1d ago
4/5 of today's top CNN articles have words with periods in them: "Mr.", "Dr.", "No.", "John D. Smith", "Rep."

The last one also has periods within quotations, so period chunking would cut off the quote.

Havoc•1d ago
I suspect chunking is an exercise in „good enough“
snyy•1d ago
A big chunk size with overlap solves this. Chunks don't have to be "perfectly" split in order to work well.
srcreigh•1d ago
True, but you don’t need 150GB/s delimiter scanning in that case either.
snyy•1d ago
As the other comment said, it's an exercise in "good enough" chunk quality. We focus on big chunks (the largest we can make without hurting embedding quality) produced as fast as possible. In our experience, retrieval accuracy is mostly driven by embedding quality, so perfect splits don't move the needle much.

But as the number of files to ingest grows, chunking speed does become a bottleneck. We want faster everything (chunking, embedding, retrieval) but chunking was the first piece we tackled. Memchunk is the fastest we could build.

ubertaco•1d ago
Does this even work if you're incredulous enough???
SteveJS•1d ago
This gets those cases right.

https://github.com/KnowSeams/KnowSeams

(On a beefy machine) It gets 1 TB/s throughput, including all IO and position mapping back to the original text location. I used it to split Project Gutenberg novels; it does 20k+ novels in about 7 seconds.

Note it keeps all dialog together, which may not be what others want, but it was what I wanted.

neonsunset•1d ago
.NET's string.Split implementation is very close to what the article showcases; even the 3-character limit is there: https://github.com/dotnet/runtime/blob/main/src/libraries/Sy...
stabbles•1d ago
For the particular case of the 5 delimiters '\n', '.', '?', '!', and ';', it just so happens that you can do this with a single shuffle instruction, replacing the explicit lookup table.

You can do this whenever `c & 0x0F` is unique for the set of characters you're looking for.

See https://stoppels.ch/2022/11/30/io-is-no-longer-the-bottlenec... for details.
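A minimal sketch of that low-nibble shuffle trick for the delimiters '\n', '!', ';', '.', '?' (in Rust with SSSE3 intrinsics, 16 bytes at a time; illustrative only, not the article's or memchunk's actual code):

```
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "ssse3")]
unsafe fn delimiter_mask(block: &[u8; 16]) -> u16 {
    use std::arch::x86_64::*;
    // table[i] holds the delimiter whose low nibble is i; unused slots hold
    // '.' (0x2E), whose low nibble (0xE) differs from those indices, so the
    // comparison below can never match them by accident.
    let table: [u8; 16] = [
        0x2E, b'!', 0x2E, 0x2E, 0x2E, 0x2E, 0x2E, 0x2E,
        0x2E, 0x2E, b'\n', b';', 0x2E, 0x2E, b'.', b'?',
    ];
    let lut = _mm_loadu_si128(table.as_ptr() as *const __m128i);
    let data = _mm_loadu_si128(block.as_ptr() as *const __m128i);
    // Each input byte indexes the table by its own low nibble (bytes >= 0x80
    // map to zero) and is a delimiter exactly when it equals the lookup.
    let eq = _mm_cmpeq_epi8(_mm_shuffle_epi8(lut, data), data);
    _mm_movemask_epi8(eq) as u16
}
```

Each set bit in the returned mask marks a delimiter position within the 16-byte block.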

bhavnicksm•1d ago
Hey! Author of the blog here.

This is pretty cool~ Thanks for suggesting it; I'll read it in detail and add it to the next (0.5.0) release of memchunk.

CyberDildonics•1d ago
Why does your title not have any context?
dataflow•1d ago
Note your compiler might turn that _mm256_set_epi64x into a load from memory, so there might still be memory accesses you don't expect.
akoboldfrying•1d ago
This is a really neat technique, well explained at your link.

Now that I understand it, I'd describe it as: For each byte, based on its bottom 4 bits, map it to either the unique "target" value that you're looking for that has those bottom 4 bits, or if there is no such target value, to any value that is different from what it is right now. Then simply check whether each resulting byte is equal to its corresponding original byte!

Not sure if the above will help people understand it, but after you understand it, I think you'll agree with the above description :)

mwsherman•1d ago
While this article is about perf — and trading off semantic precision by design — there is a Unicode standard for sentence boundaries that may be interesting: https://www.unicode.org/reports/tr29/#Sentence_Boundaries

I implemented the sentence boundaries, but also thought that the notion of a “phrase” might be useful for such applications: https://github.com/clipperhouse/uax29/tree/master/phrases

bob1029•1d ago
> you have a massive pile of text, and you need to split it into smaller pieces that fit into embedding models or context windows.

I think the recently posted Recursive Language Models paper approaches this in a far more compelling way. They put the long context into the environment and make the LLM write and iterate Python code to query against it in a recursive loop. Figs. 2 and 4 are most relevant here.

https://news.ycombinator.com/item?id=46475395

https://arxiv.org/abs/2512.24601

I really like this because it is in The Bitter Lesson genre of solutions. Make the model learn the best way to retrieve info from a massive prompt on disk given the domain and any human feedback (explicit and otherwise).

The bigger the prompt.txt, the less relevant the LLM's raw context capabilities are. Context scaling is quadratic in cost. It's a very expensive rabbit to chase. Recursively invoking the same agent with decomposed problem bits is more of a logarithmic scaling thing. You could hypothetically manage a 1 gigabyte prompt with a relatively minuscule context window under a recursive scheme using nothing other than a shell/python interpreter.

analog8374•1d ago
This warms my heart
Neywiny•1d ago
Some notes:

1. Nice and tight article, good work.
2. Shipped a piece of code, always props to that.
3. For has_zero_byte, it would be nice to actually do the math in the example; as is, the example doesn't really show anything (a sketch of that math follows this comment). It also should say "its" instead of "it's".
4. The work done per chunk shouldn't include the broadcasts. Those should be done at the start of the search and the values kept in registers, no?
5. Aren't AVX and SSE also SWAR? They're just wider registers.
6. I think a graph showing the cost of the lookup table vs. n needles would be cool to see.

Overall nice work
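On point 3, the zero-byte check in question is presumably the standard SWAR trick; a worked sketch (the article's exact constants and wording may differ):

```
const LO: u64 = 0x0101_0101_0101_0101; // 0x01 in every byte lane
const HI: u64 = 0x8080_8080_8080_8080; // 0x80 in every byte lane

fn has_zero_byte(v: u64) -> bool {
    // v - LO: a lane holding 0x00 wraps to 0xFF, setting its high bit.
    // & !v:  drops lanes whose high bit was already set in v (bytes >= 0x80),
    //        which would otherwise be false positives.
    // & HI:  keeps only the per-lane high bits.
    let mask = v.wrapping_sub(LO) & !v & HI;
    // Nonzero exactly when v contains a 0x00 byte.
    mask != 0
}

// To look for a delimiter instead of 0x00, XOR it into every lane first:
// bytes equal to the needle become 0x00, and the same test applies.
fn has_byte(v: u64, needle: u8) -> bool {
    has_zero_byte(v ^ (LO * u64::from(needle)))
}
```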

fmstephe•1d ago
Can someone clarify this part of the article for me?

"if you search forward, you need to scan through the entire window to find where to split. you’d find a delimiter at byte 50, but you can’t stop there — there might be a better split point closer to your target size. so you keep searching, tracking the last delimiter you saw, until you finally cross the chunk boundary. that’s potentially thousands of matches and index updates."

So I understand that this is optimal if you want to make your chunks as large as possible for a given chunk size.

What I don't understand is why it is desirable to grab the largest chunk possible for a given chunk limit.

Or have I misunderstood this part of the article?

snyy•1d ago
You have the right understanding.

We've found that maximizing chunk size gives the best retrieval performance and is easier to maintain since you don't have to customize chunking strategy per document type.

The upper limit for chunk size is set by your embedding model. After a certain size, encoding becomes too lossy and performance degrades.

There is a downside: blindly splitting into large chunks may cut a sentence or word off mid-way. We handle this by splitting at delimiters and adding overlap to cover abbreviations and other edge cases.
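A plain-Rust sketch of that overall strategy (scan backward from the size limit for the last delimiter, then overlap the next chunk), using the article's delimiter set; this is illustrative only, not memchunk's implementation:

```
fn chunk_bytes<'a>(text: &'a [u8], chunk_size: usize, overlap: usize) -> Vec<&'a [u8]> {
    let is_delim = |b: u8| matches!(b, b'\n' | b'.' | b'!' | b'?' | b';');
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < text.len() {
        let hard_end = (start + chunk_size).min(text.len());
        // Scan backward from the size limit for the last delimiter, so each
        // chunk is as large as possible without cutting past the limit.
        let end = if hard_end == text.len() {
            hard_end
        } else {
            text[start..hard_end]
                .iter()
                .rposition(|&b| is_delim(b))
                .map(|i| start + i + 1) // keep the delimiter with the chunk
                .unwrap_or(hard_end)    // no delimiter found: hard split
        };
        chunks.push(&text[start..end]);
        if end == text.len() {
            break;
        }
        // Start the next chunk a little before this one ended, so abbreviations
        // and other awkward split points are covered by the overlap.
        start = end.saturating_sub(overlap).max(start + 1);
    }
    chunks
}
```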

teraflop•1d ago
Don't get me wrong, it's fun to see performance optimizations like this.

But I'd expect that a naive implementation of the same strategy would already take like 0.1% of the time needed to actually generate embeddings for your chunks. So practically, is it really worth the effort of writing a bunch of non-trivial SIMD code to reduce that overhead from 0.1% to 0.001%?

topdog123•1d ago
Agreed. For any code written, there is a sort of return on time expended. Optimisations are really only required when demanded.
imperio59•1d ago
From the author:

> at some point we started benchmarking on wikipedia-scale datasets. that's when things started feeling… slow.

So they're talking about this becoming an issue when chunking TBs of data (I assume), not your 1kb random string...

groby_b•1d ago
But the bottleneck is generating embeddings either way.

memchunk has a throughput of 164 GB/s. A really fast embedder can deliver maybe 16k embeddings/sec, or ~1.6 MB/s (if you assume 100-char sentences).

That's roughly five orders of magnitude difference. Chunking is not the bottleneck.

It might be an architectural issue - you stuff chunks into an MQ and want full visibility into queue size ASAP - but otherwise it doesn't matter how fast you chunk, your embedder will slow you down.

It's still a neat exercise on principle, though :)

viraptor•1d ago
It doesn't matter if A takes much more time than B, if B is large enough. You're still saving resources and time by optimising B. Also, you seem to assume that every chunk will get embedded - they may be revisiting some pages where the chunks are already present in the database.
groby_b•18h ago
Amdahl's law still holds, though. If A and B differ in execution time by orders of magnitude, optimising B yields minimal returns (assuming streaming, as opposed to fully serial processing).

And sure, you can reject chunks, but a) the rejection isn't free, and b) you're still bound by embedding speed.

As for resource savings... not in the Wikipedia data range. If you scale up massively and go to a PB of data, going from kiru to memchunk saves you ~25 CPU days. But you also suddenly need to move from bog-standard high-CPU machines to machines supporting 164GB/s memory throughput, likely bare metal with 8 memory channels. I'm too lazy to do the math, but it's going to be a mild difference at O($100).

Again, I'm not arguing this isn't a cool achievement. But it's very much engineering fun, not "crucial optimization".

leobg•1d ago
Nice! Thanks for sharing. Does it do overlap?
8note•17h ago
what i really want to see from this article is a curve showing the tradeoff between speed and embedded text quality. there's the preamble that just going by character count has quality problems, but i don't think delimiters are necessarily the best either, vs. being able to find paragraph or even chapter boundaries.

how much of a problem is it that ~1 sentence per chunk gets corrupted in the by-character solution? what level of sentence corruption is left after switching to these delimiters? what level of paragraph/idea corruption is left with each? chapter/argument level?

“Stop Designing Languages. Write Libraries Instead” (2016)

https://lbstanza.org/purpose_of_programming_languages.html
121•teleforce•2h ago•62 comments

A4 Paper Stories

https://susam.net/a4-paper-stories.html
85•blenderob•2h ago•38 comments

The Eric and Wendy Schmidt Observatory System

https://www.schmidtsciences.org/schmidt-observatory-system/
38•pppone•2h ago•28 comments

LaTeX Coffee Stains [pdf]

https://ctan.math.illinois.edu/graphics/pgf/contrib/coffeestains/coffeestains-en.pdf
6•zahrevsky•15m ago•0 comments

Show HN: KeelTest – AI-driven VS Code unit test generator with bug discovery

https://keelcode.dev/keeltest
13•bulba4aur•1h ago•4 comments

Formal methods only solve half my problems

https://brooker.co.za/blog/2022/06/02/formal.html
45•signa11•4d ago•14 comments

The first new compass since 1936

https://www.youtube.com/watch?v=eiDhbZ8-BZI
52•1970-01-01•5d ago•32 comments

Vector graphics on GPU

https://gasiulis.name/vector-graphics-on-gpu/
105•gsf_emergency_6•4d ago•18 comments

Everyone hates OneDrive, Microsoft's cloud app that steals and deletes files

https://boingboing.net/2026/01/05/everyone-hates-onedrive-microsofts-cloud-app-that-steals-then-d...
26•mikecarlton•1h ago•10 comments

Stop Doom Scrolling, Start Doom Coding: Build via the terminal from your phone

https://github.com/rberg27/doom-coding
502•rbergamini27•19h ago•352 comments

Opus 4.5 is not the normal AI agent experience that I have had thus far

https://burkeholland.github.io/posts/opus-4-5-change-everything/
679•tbassetto•21h ago•961 comments

Optery (YC W22) Hiring a CISO and Web Scraping Engineers (Node) (US and Latam)

https://www.optery.com/careers/
1•beyondd•3h ago

Electronic nose for indoor mold detection and identification

https://advanced.onlinelibrary.wiley.com/doi/10.1002/adsr.202500124
155•PaulHoule•14h ago•87 comments

The creator of Claude Code's Claude setup

https://twitter.com/bcherny/status/2007179832300581177
490•KothuRoti•4d ago•319 comments

Show HN: SMTP Tunnel – A SOCKS5 proxy disguised as email traffic to bypass DPI

https://github.com/x011/smtp-tunnel-proxy
99•lobito25•14h ago•33 comments

A 30B Qwen model walks into a Raspberry Pi and runs in real time

https://byteshape.com/blogs/Qwen3-30B-A3B-Instruct-2507/
291•dataminer•18h ago•101 comments

Vietnam bans unskippable ads

https://saigoneer.com/vietnam-news/28652-vienam-bans-unskippable-ads,-requires-skip-button-to-app...
1468•hoherd•22h ago•747 comments

On the slow death of scaling

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5877662
96•sethbannon•11h ago•18 comments

I wanted a camera that doesn't exist, so I built it

https://medium.com/@cristi.baluta/i-wanted-a-camera-that-doesnt-exist-so-i-built-it-5f9864533eb7
421•cyrc•4d ago•131 comments

Show HN: Comet MCP – Give Claude Code a browser that can click

https://github.com/hanzili/comet-mcp
8•hanzili•3d ago•5 comments

Oral microbiome sequencing after taking probiotics

https://blog.booleanbiotech.com/oral-microbiome-biogaia
168•sethbannon•17h ago•71 comments

Investigating and fixing a nasty clone bug

https://kobzol.github.io/rust/2025/12/30/investigating-and-fixing-a-nasty-clone-bug.html
20•r4um•5d ago•0 comments

The ISEE Trajectories

https://www.drmindle.com/isee/
5•drmindle12358•2d ago•4 comments

We recreated Steve Jobs's 1975 Atari horoscope program

https://blog.adafruit.com/2026/01/06/we-recreated-steve-jobss-1975-atari-horoscope-program-and-yo...
86•ptorrone•14h ago•38 comments

What *is* code? (2015)

https://www.bloomberg.com/graphics/2015-paul-ford-what-is-code/
63•bblcla•5d ago•25 comments

CES 2026: Taking the Lids Off AMD's Venice and MI400 SoCs

https://chipsandcheese.com/p/ces-2026-taking-the-lids-off-amds
123•rbanffy•17h ago•70 comments

Calling All Hackers: How money works (2024)

https://phrack.org/issues/71/17
298•krrishd•18h ago•189 comments

Gnome dev gives fans of Linux's middle-click paste the middle finger

https://www.theregister.com/2026/01/07/gnome_middle_click_paste/
42•beardyw•1h ago•40 comments

Launch HN: Tamarind Bio (YC W24) – AI Inference Provider for Drug Discovery

74•denizkavi•21h ago•17 comments

Sergey Brin's Unretirement

https://www.inc.com/jessica-stillman/google-co-founder-sergey-brins-unretirement-is-a-lesson-for-...
266•iancmceachern•6d ago•334 comments