Put that in a loop and it's an enormous speed-up.
Also, this might be a stupid question (I'm a Zig newbie) but… instead of calling std.mem.eql() in the while loop to look at each potential match individually, couldn't you repeat the same trick as before? That is, use SIMD to search for the second and second-to-last character of the needle, then third and third-to-last, and so on, and finally take a bitwise AND of all the resulting bit masks? This way, one would avoid looking at each potential match one by one, and instead look at all of them at the same time.
Even if that doesn't work for some reason and you still need to loop over all potential matches individually, couldn't you use SIMD inside the while loop to replace std.mem.eql and thereby speed up string comparison? My understanding was that std.mem.eql loops over bytes one by one and compares them?
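Something like this sketch is what I have in mind (rough and untested; assumes a 32-byte block, needle.len >= 2, and that the caller guarantees i + needle.len - 1 + 32 <= haystack.len):

const std = @import("std");

fn candidateMask(haystack: []const u8, i: usize, needle: []const u8, pairs: usize) u32 {
    const Block = @Vector(32, u8);
    var mask: u32 = std.math.maxInt(u32);
    var k: usize = 0;
    while (k < pairs and mask != 0) : (k += 1) {
        // Compare the k-th byte and the k-th-from-the-end byte of the
        // needle across 32 candidate positions at once, then AND the
        // resulting bit masks together.
        const a: Block = haystack[i + k ..][0..32].*;
        const b: Block = haystack[i + needle.len - 1 - k ..][0..32].*;
        const eq_a: u32 = @bitCast(a == @as(Block, @splat(needle[k])));
        const eq_b: u32 = @bitCast(b == @as(Block, @splat(needle[needle.len - 1 - k])));
        mask &= eq_a & eq_b;
    }
    return mask; // bit j set => position i + j is still a candidate
}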
This is about using SIMD to avoid even calling std.mem.eql for 99% of the possible attempts.
My read is that it would use SIMD if T is a @Vector, and not otherwise? But I'm neither a Zig nor a SIMD expert.
But does that work with non-ASCII characters (aka Unicode)?
If you just encoded your string to bytes naïvely, it will probably-mostly still work, but it will get some combining characters wrong if they're represented differently in the two sources you're comparing (e.g., e-with-an-accent-character vs. accent-combining-character+e).
If you want to be correct-er you'll normalize your UTF string[1], but note that there are four different defined ways to do this, so you'll need to choose the one that is the best tradeoff for your particular application and data sources.
[1]: https://en.wikipedia.org/wiki/Unicode_equivalence#Normalizat...
By "naïvely" I assume you mean you would just plug in UTF-8 bytestrings for haystack & needle, without adjusting the implementation?
Wouldn't the code still need to take into account where characters (code points) begin and end, though, in order to prevent incorrect matches?
In any case, no: this works because UTF-8 is self-synchronizing. As long as both your needle and your haystack are valid UTF-8, the byte offsets returned by the search will always fall on a valid codepoint boundary.
In terms of getting "combining characters wrong," this is a reference to different Unicode normalization forms.
To be more precise... Consider a needle and a haystack, represented by a sequence of Unicode scalar values (typically represented by a sequence of unsigned 32-bit integers). Now encode them to UTF-8 (a sequence of unsigned 8-bit integers) and run a byte level search as shown by the OP here. That will behave as if you've executed the search on the sequence of Unicode scalar values.
So semantically, a "substring search" is a "sequence of Unicode scalar values search." At the semantic level, this may or may not be what you want. For example, if you always want `office` to find substrings like `oﬃce` (spelled with the U+FB03 ffi ligature) in your haystack, then this byte level search will not do what you want.
The standard approach for performing a substring search that accounts for normalization forms is to convert both the needle and haystack to the same normal form and then execute a byte level search.
(One small caveat is when the needle is an empty string. If you want to enforce correct UTF-8 boundaries, you'll need to handle that specially.)
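As a tiny made-up illustration of the boundary property (my example, not from the post):

const std = @import("std");

test "byte-level search on UTF-8 lands on a codepoint boundary" {
    const haystack = "naïve café"; // 'ï' and 'é' are each two bytes in UTF-8
    const idx = std.mem.indexOf(u8, haystack, "café").?;
    // "naïve" is 6 bytes, plus the space: the match starts at byte 7,
    // which is the start of the codepoint 'c'.
    try std.testing.expectEqual(@as(usize, 7), idx);
}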
You know much more about this than I do, though.
edit: this is what I mean, for example: `tést` != `tést` in rg, because \u00E9 (e with accent) != e\u0301 (e followed by combining accent character)
$ printf "t\\u00E9st" > /tmp/a
$ xxd /tmp/a
00000000: 74c3 a973 74 t..st
$ cat /tmp/a
tést
$ printf "te\\u0301st" > /tmp/b
$ xxd /tmp/b
00000000: 7465 cc81 7374 te..st
$ cat /tmp/b
tést
$ printf "t\\u00E9st" | rg -f - /tmp/a
1:tést
$ printf "t\\u00E9st" | rg -f - /tmp/b
# ed: no result
edit 2: if we normalize the UTF-8, the two strings will match:
$ printf "t\\u00E9st" | uconv -x any-nfc | xxd
00000000: 74c3 a973 74 t..st
$ printf "te\\u0301st" | uconv -x any-nfc | xxd
00000000: 74c3 a973 74 t..st
Which you know, and indicate! Just working an example of it that maybe will help people understand, I dunno.

This is absolutely in part because of all of the byte-oriented optimizations that are baked into ripgrep (and its regex engine). Note that I said in part. Making ripgrep (and its regex engine) work on things other than a sequence of bytes is far more difficult than just porting a bunch of SIMD algorithms. There are also many optimizations and architectural constraints in the code based on the alphabet size. That is, with 8-bit integers, its alphabet size is 256. With 16-bit integers, the alphabet size is 65,536.
In most cases, though, these still focus on AVX/NEON instructions from over 10 years ago, rather than newer and more powerful AVX-512 variations, SVE & SVE2, or RVV.
These newer ISAs can noticeably change how one would implement a state-of-the-art substring search or copy/move operation. In my projects, such as StringZilla, I often use mask K registers (https://github.com/ashvardanian/StringZilla/blob/2f4b1386ca2...) and an input-dependent mix of temporal and non-temporal loads and stores (https://github.com/ashvardanian/StringZilla/blob/2f4b1386ca2...).
In typical cases, the difference between the suggested SIMD kernels and the state-of-the-art can be as significant as 50% in throughput. As SIMD becomes more widespread, it would be beneficial to focus more on delivering software and bundling binaries, rather than just the kernels.
Case in point: I've been very disappointed lately when I wanted to try Ghostty on my laptop and the binary compiled for Debian failed to run due to an invalid instruction. I don't want to force the same experience on others.
AVX-512 is such a mess that Intel just removed it after a generation or two. And on the Arm side, SVE is even worse: there is already SVE2, but good luck finding even an SVE-enabled machine.
Apple does not support it on their Apple Silicon™ (only SME), and Snapdragon does not support it even on their latest 8 Elite; the 8 Elite Gen 2 is supposed to come with it.
Only MediaTek and Neoverse chips support it. So finding one machine to develop and test such code can be a little difficult.
AVX-512 is also in bad shape market-wise, despite its amazing feature set and how long it's been since initial release. The Steam Hardware Survey, which skews toward the higher end of the market, only shows 18% of the user base having AVX-512 support. And even that is despite Intel's best efforts to reverse progress by shipping all new consumer CPUs with AVX-512 support disabled.
You can also directly call LLVM intrinsics in case this doesn’t work.
If you need the exact behavior of `pshufb` you can use asm or the LLVM intrinsic [2]. IIRC, I once got the compiler to emit a `pshufb` for a runtime shuffle... that always guaranteed indices in the 0..15 range?
Ironically, I also wanted to try Zig by doing a StreamVByte implementation, but got derailed by the lack of SSE/AVX intrinsics support.
[1] https://github.com/aqrit/sse2zig/blob/444ed8d129625ab5deec34...
[2] https://github.com/aqrit/sse2zig/blob/444ed8d129625ab5deec34...
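For reference, the intrinsic-declaration trick looks roughly like this (a sketch: it only works with the LLVM backend, and I'm not certain every Zig version accepts it; the quoted name is the real LLVM intrinsic behind `pshufb`):

const u8x16 = @Vector(16, u8);

// Declare the LLVM intrinsic directly by its mangled name.
extern fn @"llvm.x86.ssse3.pshuf.b.128"(u8x16, u8x16) u8x16;

fn pshufb(a: u8x16, mask: u8x16) u8x16 {
    return @"llvm.x86.ssse3.pshuf.b.128"(a, mask);
}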
It doesn't really have to do with what operations LLVM provides for vectors. LLVM supports all the SIMD intrinsics of clang, and LLVM is one of many backends of Zig.
I'll also push back on some bits in the end:
> But if it’s so much better, then why haven’t I made a pull request to
> change std.mem.indexOf to use SIMD? Well, the reason is that
>
> std.mem.indexOf is generic over element size, and having a size
> larger than u8 makes the algorithm much slower
>
> The algorithm used in std.mem.indexOf is cross-platform, while the
> SIMD code wouldn’t be. (not all platforms have SIMD registers at all,
> Arm has only 128-bit)
Does Zig not have a way to specialize this for sequences of unsigned 8-bit integers? If not, and you're therefore forced to use a more generic algorithm, that seems pretty unfortunate.

> Substring searching is rarely the bottleneck in programs,
> especially ones written in a fast language like Zig. That’s why
> I don’t personally think it would be worth it to add it to the
> standard library.
Oh I'm not sure I buy this at all! Substring search is a primitive operation and easily can be a bottleneck. There's a reason why widely used substring search implementations tend to be highly optimized.

This is Zig so I guess the answer is "yeah, duh" but wanted to ask since it sounded like the solution is less, uh, "compiler-friendly" than I would expect.
You can write

if (comptime T == u8) {
    // code
}

to guarantee that if you're wrong about how the compiler behaves then you'll get a compiler error.

But I do think adding explicit `comptime` in places like this is reasonable for the sake of conveying intent to other programmers.
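For illustration, a sketch of how such a u8 specialization might look (`indexOfSimd` is a hypothetical stand-in for a SIMD routine like the article's, not a real std function):

const std = @import("std");

fn indexOfSpecialized(comptime T: type, haystack: []const T, needle: []const T) ?usize {
    if (comptime T == u8) {
        // Hypothetical SIMD fast path for bytes.
        return indexOfSimd(haystack, needle);
    }
    // Generic fallback for wider element types.
    return std.mem.indexOf(T, haystack, needle);
}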
You're totally right about the first part: if there were serious consideration to add this to Zig's standard library, there would definitely need to be a fallback to avoid the `O(m*n)` situation.
I'll admit that there are a lot of false assumptions at the end: you could totally specialize it for u8 and also get the block size according to CPU features at compile time with `std.simd.suggestVectorSize()`.
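For example (a sketch; note that newer Zig renamed std.simd.suggestVectorSize to std.simd.suggestVectorLength):

const std = @import("std");

// Pick the block size for the compile target; fall back to 16 bytes
// when the target reports no usable SIMD width.
const block_len = std.simd.suggestVectorLength(u8) orelse 16;
const Block = @Vector(block_len, u8);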
You have to be careful about how you do it because those runtime checks can easily swamp the performance gains you get from SIMD.
> also get the block size according to CPU features at compile time with `std.simd.suggestVectorSize()`
You have to be careful with this since std.simd.suggestVectorSize is going to return values for the minimum SIMD version you're targeting, I believe, which can be suboptimal for portable binaries.
You probably want a mix where you carefully compute the vector size for the current platform globally once and have multiple compiled dispatch paths in your binary that you can pick based on that value, and let the CPU's branch predictor hide the cost of a check before each invocation.
That seems surprising, particularly given that autovectorizing compilers tend to insert pretty extensive preambles that check for whether or not it's likely the vectorized one will have a speedup over the looping version (e.g., based on the number of iterations) and postambles that handle the cases where the number of loop iterations isn't cleanly divisible by the number of elements in the chosen vector size.
Why would checking for supported SIMD instructions cause that much additional work?
Also, even if this is the case, you can always check once and then replace the function body with the chosen one, eliding the check.
Because CPUID checks on x86 are expensive for whatever reason.
> That seems surprising, particularly given that autovectorizing compilers tend to insert pretty extensive preambles that check for whether or not it's likely the vectorized one will have a speedup over the looping version (e.g., based on the number of iterations) and postambles that handle the cases where the number of loop iterations isn't cleanly divisible by the number of elements in the chosen vector size.
Compilers can't elide those checks unless they are given specific flags that tell them the target CPU supports that specific instruction set OR they always just choose to target the minimum supported SIMD instruction set on the target CPU. They often emit suboptimal code for all sorts of reasons, this being one of them.
> Also, even if this is the case, you can always check once and then replace the function body with the chosen one, eliding the check.
Yes, but like I said, you have to do it very carefully to make sure you're calling CPUID once outside of a hot loop to initialize your decision making and then relying on the CPU's predictor to elide the cost of a boolean / switch statement in your code doing the actual dispatch.
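A minimal sketch of that check-once pattern (`cpuSupportsAvx2`, `indexOfAvx2`, and `indexOfScalar` are hypothetical; a real version would also want the cache to be thread-safe, e.g. via an atomic):

const Impl = *const fn ([]const u8, []const u8) ?usize;

var cached: ?Impl = null;

fn indexOfDispatch(haystack: []const u8, needle: []const u8) ?usize {
    const impl = cached orelse blk: {
        // The CPUID check runs once, outside any hot loop.
        const chosen: Impl = if (cpuSupportsAvx2()) &indexOfAvx2 else &indexOfScalar;
        cached = chosen;
        break :blk chosen;
    };
    // After the first call, this is a predictable indirect call.
    return impl(haystack, needle);
}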
If the wheels get reinvented again and again, it means that they should be readily available.
What I am talking about is creating a cross-vendor standard.
The nice thing about Zig's SIMD operations is that the register size support is transparent. You can just declare a 64-byte vector as the Block, and Zig would use one AVX-512 (64-byte) register or two AVX2 (32-byte) registers behind the scenes. All other SIMD operations on the type are transparently done with regard to the registers when compiled for the target platform.
Even using two AVX2 registers for 64 bytes of data is a win due to instruction pipelining. Most CPUs have multiple SIMD registers, and the two 32-byte data chunks are independent, so the CPU can run them in parallel.
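An illustration of that transparency (my sketch, not the article's code): the same source compiles to one 64-byte op on AVX-512, two 32-byte ops on AVX2, or four 16-byte ops on NEON/SSE2.

const Block = @Vector(64, u8);

fn anyMatch(block: Block, byte: u8) bool {
    // Element-wise compare, then OR-reduce the resulting bool vector.
    return @reduce(.Or, block == @as(Block, @splat(byte)));
}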
The next optimization is to line the data up at 64-byte alignment, to match the L1/L2 cache line size. Memory access is slow in general.
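A sketch of that alignment step (assumes 64-byte cache lines and recent Zig's three-argument std.mem.alignForward):

const std = @import("std");

// How many leading bytes to handle scalar-wise before the first
// 64-byte-aligned block of the haystack.
fn headLen(haystack: []const u8) usize {
    const addr = @intFromPtr(haystack.ptr);
    return std.mem.alignForward(usize, addr, 64) - addr;
}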
https://lemire.me/blog/2025/08/09/why-do-we-even-need-simd-i...
The absolute worst case is when the needle and haystack are both composed of the same byte repeated (e.g. all zeros): every position passes the first/last-character filter, and each verification can scan nearly the whole needle, so the work approaches O(m*n).