To install: `pip install chonkie[fast]`

```python
from chonkie import FastChunker

chunker = FastChunker(chunk_size=4096)
chunks = chunker(huge_document)
```
While latency _can_ be a problem, reliability and accuracy are almost always my bottlenecks (to user value). Especially with chunking. Chunking is generally a one-time process where users aren't latency sensitive.
And this is a bit of a sliding scale. Of course users want the best possible answer. However, if they can get 80% of the best answer (a magic hand-wavy number) in one second instead of 20, that may be a worthwhile tradeoff.
This is not necessarily true. For example, in our use case we are constantly monitoring websites, blogs, and other sources for changes. When a new page is added, we need to chunk and embed it fast so it's searchable immediately. Chunking speed matters for us.
When you're processing changes constantly, chunking is in the hot path. I think as LLMs get used more in real time workflows, every part of the stack will start facing latency pressure.
I suppose we've only tested this with languages that do have delimiters: Hindi, English, Spanish, and French.
There are two ways to control the splitting point. First is through delimiters, and the second is by setting chunk size. If you're parsing a language where chunks can't be described by either of those params, then I suppose memchunk wouldn't work. I'd be curious to see what does work though!
```rust
// With a multi-byte pattern: the Japanese full stop "。" (3 bytes in UTF-8)
let metaspace = "。".as_bytes();
let chunks: Vec<&[u8]> = chunk(text).pattern(metaspace).prefix().collect();
```
Or is it now a lack of proper pipelining where you first load, then uncompress, then chunk, then write?
Add a nice strong linear model on top, like Vowpal Wabbit, and chunk any language of your choice at 100 GB/s.
The last one also has periods within quotations, so period chunking would cut off the quote.
But as the number of files to ingest grows, chunking speed does become a bottleneck. We want faster everything (chunking, embedding, retrieval) but chunking was the first piece we tackled. Memchunk is the fastest we could build.
https://github.com/KnowSeams/KnowSeams
On a beefy machine, it gets 1 TB/s throughput, including all IO and mapping positions back to the original text. I used it to split Project Gutenberg novels: it does 20k+ novels in about 7 seconds.
Note it keeps all dialog together, which may not be what others want, but is what I wanted.
You can do this whenever `c & 0x0F` is unique for the set of characters you're looking for.
See https://stoppels.ch/2022/11/30/io-is-no-longer-the-bottlenec... for details.
This is pretty cool~ Thanks for suggesting this; I will read it in detail and add it to the next (0.5.0) release of memchunk.
Now that I understand it, I'd describe it as: for each byte, use its bottom 4 bits to map it either to the unique "target" value you're looking for that has those bottom 4 bits, or, if there is no such target value, to some value guaranteed to differ from the byte itself. Then simply check whether each resulting byte is equal to its corresponding original byte!
Not sure if the above will help people understand it, but after you understand it, I think you'll agree with the above description :)
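A minimal scalar sketch of the idea (not memchunk's actual code; the delimiter set and helper names are only for illustration):

```rust
// Delimiters to match; the trick works because their low nibbles are all
// distinct: '.' = 0x2E, '!' = 0x21, '?' = 0x3F -> nibbles 0xE, 0x1, 0xF.
const DELIMS: [u8; 3] = [b'.', b'!', b'?'];

// 16-entry table indexed by a byte's bottom 4 bits.
fn build_table() -> [u8; 16] {
    let mut table = [0u8; 16];
    for i in 0..16u8 {
        // Filler with a *different* low nibble, so it can never equal
        // any byte that would index this slot.
        table[i as usize] = (i + 1) & 0x0F;
    }
    for &d in &DELIMS {
        table[(d & 0x0F) as usize] = d;
    }
    table
}

// A byte is a delimiter iff looking it up by its low nibble gives it back.
// SIMD versions do the same lookup-and-compare 16/32/64 bytes at a time
// (pshufb is effectively this 16-entry table lookup).
fn is_delim(table: &[u8; 16], b: u8) -> bool {
    table[(b & 0x0F) as usize] == b
}

fn main() {
    let table = build_table();
    let text = b"Hello! Is this a delimiter? Yes.";
    let hits: Vec<usize> = (0..text.len())
        .filter(|&i| is_delim(&table, text[i]))
        .collect();
    println!("{:?}", hits); // positions of '!', '?', '.'
}
```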
I implemented the sentence boundaries, but also thought that the notion of a “phrase” might be useful for such applications: https://github.com/clipperhouse/uax29/tree/master/phrases
I think the recently posted Recursive Language Models paper approaches this in a far more compelling way. They put the long context into the environment and make the LLM write and iterate python code to query against it in a recursive loop. Fig. 2 & 4 are most relevant here.
https://news.ycombinator.com/item?id=46475395
https://arxiv.org/abs/2512.24601
I really like this because it is in The Bitter Lesson genre of solutions. Make the model learn the best way to retrieve info from a massive prompt on disk given the domain and any human feedback (explicit and otherwise).
The bigger the prompt.txt, the less relevant the LLM's raw context capabilities are. Context scaling is quadratic in cost. It's a very expensive rabbit to chase. Recursively invoking the same agent with decomposed problem bits is more of a logarithmic scaling thing. You could hypothetically manage a 1 gigabyte prompt with a relatively minuscule context window under a recursive scheme using nothing other than a shell/python interpreter.
Overall nice work
"if you search forward, you need to scan through the entire window to find where to split. you’d find a delimiter at byte 50, but you can’t stop there — there might be a better split point closer to your target size. so you keep searching, tracking the last delimiter you saw, until you finally cross the chunk boundary. that’s potentially thousands of matches and index updates."
So I understand that this is optimal if you want to make your chunks as large as possible for a given chunk size.
What I don't understand is why is it desirable to grab the largest chunk possible for a given chunk limit?
Or have I misunderstood this part of the article?
We've found that maximizing chunk size gives the best retrieval performance and is easier to maintain since you don't have to customize chunking strategy per document type.
The upper limit for chunk size is set by your embedding model. After a certain size, encoding becomes too lossy and performance degrades.
There is a downside: blindly splitting into large chunks may cut a sentence or word off mid-way. We handle this by splitting at delimiters and adding overlap to cover abbreviations and other edge cases.
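To make that concrete, here's a rough sketch of that strategy, assuming a period/newline delimiter set and byte-based overlap (this is not Chonkie's or memchunk's actual implementation; names and parameters are illustrative):

```rust
// Minimal sketch: greedily take the largest chunk under `max_len`,
// split at the last delimiter before the limit, and carry `overlap`
// bytes into the next chunk so boundary cases appear in both chunks.
fn chunk_with_overlap(text: &[u8], max_len: usize, overlap: usize) -> Vec<&[u8]> {
    let mut chunks = Vec::new();
    let mut start = 0usize;
    while start < text.len() {
        let hard_end = (start + max_len).min(text.len());
        // Prefer the last delimiter before the size limit, so the chunk
        // stays as large as possible without cutting a sentence mid-way.
        let end = text[start..hard_end]
            .iter()
            .rposition(|&b| b == b'.' || b == b'\n')
            .map(|i| start + i + 1)
            .filter(|_| hard_end < text.len()) // the final tail needs no split
            .unwrap_or(hard_end);
        chunks.push(&text[start..end]);
        if end == text.len() {
            break;
        }
        // The overlap covers abbreviations and other edge cases cut at the boundary.
        start = end.saturating_sub(overlap).max(start + 1);
    }
    chunks
}

fn main() {
    let text = b"Mrs. Blue went to the seashore. \"What's for dinner?\" she asked.";
    for c in chunk_with_overlap(text, 40, 8) {
        println!("{:?}", String::from_utf8_lossy(c));
    }
}
```

The greedy backward search for the last delimiter is exactly the "largest chunk under the limit" behavior discussed above.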
But I'd expect that a naive implementation of the same strategy would already take like 0.1% of the time needed to actually generate embeddings for your chunks. So practically, is it really worth the effort of writing a bunch of non-trivial SIMD code to reduce that overhead from 0.1% to 0.001%?
So they're talking about this becoming an issue when chunking TBs of data (I assume), not your 1kb random string...
memchunk has a throughput of 164 GB/s. A really fast embedder can deliver maybe 16k embeddings/sec, which at ~100-char sentences is only ~1.6 MB/s of text.
That's roughly five orders of magnitude of difference. Chunking is not the bottleneck.
It might be an architectural issue - you stuff chunks into an MQ and want full visibility into queue size ASAP - but otherwise it doesn't matter how fast you chunk; your embedder will slow you down.
It's still a neat exercise on principle, though :)
And sure, you can reject chunks, but (a) the rejection isn't free, and (b) you're still bound by embedding speed.
As for resource savings: not in the Wikipedia data range. If you scale up massively and go to a PB of data, going from kiru to memchunk saves you ~25 CPU days. But you also suddenly need to move from bog-standard high-CPU machines to machines supporting 164 GB/s memory throughput, likely bare metal with 8 memory channels. I'm too lazy to do the math, but it's going to be a mild difference, at O($100).
Again, I'm not arguing this isn't a cool achievement. But it's very much engineering fun, not "crucial optimization".
How much of a problem is it that ~1 sentence per chunk gets corrupted in the by-character solution? What level of sentence corruption is left after switching to these delimiters? What level of paragraph/idea corruption remains with each? And at the chapter/argument level?
snyy•1d ago
Recently, we've been using Chonkie to build deep research agents that watch topics for new developments and automatically update their reports. This requires chunking a large amount of data constantly.
While building this, we noticed Chonkie felt slow. We started wondering: what's the theoretical limit here? How fast can text chunking actually get if we throw out all the abstractions and go straight to the metal?
This post is about that rabbit hole and how it led us to build memchunk - the fastest chunking library, capable of chunking text at 1TB/s.
Blog: https://minha.sh/posts/so,-you-want-to-chunk-really-fast
GitHub: https://github.com/chonkie-inc/memchunk
Happy to answer any questions!
djoldman•1d ago
How does the software handle these:
Mrs. Blue went to the sea shore with Mr. Black.
"What's for dinner?" Mrs. Blue asked.