I got tired of needing llama.cpp bindings or a separate tokenizer.json file just to tokenize text, so I wrote a pure Rust library that reads the tokenizer directly from your GGUF model file. It produces the same output as llama.cpp, with zero C++ in the dependency tree.
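This works because GGUF is a self-describing container: the file opens with a small fixed header, followed by metadata key/value pairs, and the vocabulary ships inside that metadata (under keys like `tokenizer.ggml.tokens` per the GGUF conventions), so no external tokenizer file is needed. As a rough illustration, not the library's actual API, here is a minimal sketch of parsing the GGUF v3 header in plain Rust:

```rust
/// Parsed GGUF file header (v3 layout: magic, version, tensor count,
/// metadata key/value count -- all little-endian).
struct GgufHeader {
    version: u32,
    tensor_count: u64,
    metadata_kv_count: u64,
}

fn parse_gguf_header(bytes: &[u8]) -> Option<GgufHeader> {
    // The magic is the ASCII bytes "GGUF"; reject anything shorter
    // than the 24-byte fixed header.
    if bytes.len() < 24 || &bytes[0..4] != b"GGUF" {
        return None;
    }
    Some(GgufHeader {
        version: u32::from_le_bytes(bytes[4..8].try_into().ok()?),
        tensor_count: u64::from_le_bytes(bytes[8..16].try_into().ok()?),
        metadata_kv_count: u64::from_le_bytes(bytes[16..24].try_into().ok()?),
    })
}

fn main() {
    // Build a tiny synthetic header in memory: version 3, no tensors,
    // one metadata pair. A real file would follow this with the
    // key/value section containing the tokenizer vocab.
    let mut buf = Vec::new();
    buf.extend_from_slice(b"GGUF");
    buf.extend_from_slice(&3u32.to_le_bytes());
    buf.extend_from_slice(&0u64.to_le_bytes());
    buf.extend_from_slice(&1u64.to_le_bytes());

    let header = parse_gguf_header(&buf).expect("valid header");
    println!(
        "version={} tensors={} kv_pairs={}",
        header.version, header.tensor_count, header.metadata_kv_count
    );
}
```

After the header, each metadata pair carries a length-prefixed key string and a typed value; the tokenizer's token list is just one such array-valued entry, which is why the whole tokenizer can be recovered from the model file alone.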