Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
1•senekor•34s ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•3m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•5m ago•2 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•6m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•8m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•10m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•12m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•15m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•19m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•21m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•24m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•36m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•38m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•39m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•52m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•55m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•58m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

Processing Strings 109x Faster Than Nvidia on H100

https://ashvardanian.com/posts/stringwars-on-gpus/
216•ashvardanian•4mo ago

Comments

ashvardanian•4mo ago
After publishing this a few days ago, two things have happened.

First, it turned out that StringZilla scales further, to over 900 GigaCUPS at around 1000-byte inputs on the Nvidia H100. Moreover, the same performance is accessible on lower-end hardware, since the algorithm is not memory-bound and needs no HBM.

Second, I’ve finally transitioned to Xeon 6 Granite Rapids nodes with 192 physical cores and 384 threads. On those, the Ice Lake+ kernels currently yield over 3 TeraCUPS, 3x the current Hopper kernels.

The most recent numbers are already in the repo: https://github.com/ashvardanian/StringWa.rs
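For readers unfamiliar with the unit: CUPS counts dynamic-programming cell updates per second, so GigaCUPS is 10^9 of them. A scalar two-row Levenshtein baseline makes the metric concrete (an illustrative sketch, not StringZilla's SIMD/GPU kernels):

```rust
// CUPS = Cell Updates Per Second: aligning strings of lengths m and n fills
// an m-by-n dynamic-programming matrix, and throughput is cells / runtime.
fn levenshtein(a: &[u8], b: &[u8]) -> usize {
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    let mut curr = vec![0usize; b.len() + 1];
    for (i, &ca) in a.iter().enumerate() {
        curr[0] = i + 1;
        for (j, &cb) in b.iter().enumerate() {
            let substitution = prev[j] + usize::from(ca != cb);
            let deletion = prev[j + 1] + 1;
            let insertion = curr[j] + 1;
            curr[j + 1] = substitution.min(deletion).min(insertion);
        }
        std::mem::swap(&mut prev, &mut curr);
    }
    prev[b.len()]
}

fn main() {
    // 6 * 7 = 42 cell updates for this pair; GigaCUPS = cells / (1e9 * secs).
    assert_eq!(levenshtein(b"kitten", b"sitting"), 3);
    println!("ok");
}
```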

giancarlostoro•4mo ago
I am curious whether RipGrep is already faster, or whether it would be even faster if it used StringZilla. RipGrep is insanely fast as it is.
ashvardanian•4mo ago
I’m not an active RipGrep user, so can’t speak for all usage patterns. My guess: for plain substring searches, you probably won’t see much difference. Where StringZilla may potentially help is in character-set searches.
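A character-set search here means finding the first byte that belongs to a set, à la C's `strpbrk`. A scalar baseline can be sketched like this (illustrative only, not StringZilla's SIMD implementation):

```rust
// Character-set search: return the index of the first haystack byte that is
// a member of `set`. The scalar version uses a 256-entry membership table;
// a SIMD version tests many bytes per instruction, which is where a
// vectorized library can outrun a plain loop.
fn find_first_of(haystack: &[u8], set: &[u8]) -> Option<usize> {
    let mut member = [false; 256];
    for &b in set {
        member[b as usize] = true;
    }
    haystack.iter().position(|&b| member[b as usize])
}

fn main() {
    // First whitespace byte in the haystack:
    assert_eq!(find_first_of(b"hello world", b" \t\r\n"), Some(5));
    println!("ok");
}
```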
burntsushi•4mo ago
Nope. ripgrep uses the `memchr` crate for substring search, and in my benchmarks it's generally faster than stringzilla:

    $ rebar cmp results.csv --intersection -f huge
    benchmark                                        rust/memchr/memmem/prebuilt  stringzilla/memmem/oneshot
    ---------                                        ---------------------------  --------------------------
    memmem/pathological/md5-huge-no-hash             47.4 GB/s (1.00x)            38.1 GB/s (1.25x)
    memmem/pathological/md5-huge-last-hash           40.3 GB/s (1.00x)            23.4 GB/s (1.72x)
    memmem/pathological/rare-repeated-huge-tricky    40.4 GB/s (1.04x)            42.0 GB/s (1.00x)
    memmem/pathological/rare-repeated-huge-match     1977.7 MB/s (1.00x)          563.3 MB/s (3.51x)
    memmem/subtitles/common/huge-en-that             35.9 GB/s (1.00x)            25.3 GB/s (1.42x)
    memmem/subtitles/common/huge-en-you              15.9 GB/s (1.00x)            9.5 GB/s (1.67x)
    memmem/subtitles/common/huge-en-one-space        1376.4 MB/s (1.00x)          1364.0 MB/s (1.01x)
    memmem/subtitles/common/huge-ru-that             29.0 GB/s (1.00x)            15.5 GB/s (1.87x)
    memmem/subtitles/common/huge-ru-not              16.0 GB/s (1.00x)            3.5 GB/s (4.53x)
    memmem/subtitles/common/huge-ru-one-space        2.6 GB/s (1.00x)             2.4 GB/s (1.08x)
    memmem/subtitles/common/huge-zh-that             31.2 GB/s (1.00x)            23.8 GB/s (1.31x)
    memmem/subtitles/common/huge-zh-do-not           19.4 GB/s (1.00x)            12.1 GB/s (1.59x)
    memmem/subtitles/common/huge-zh-one-space        5.3 GB/s (1.05x)             5.6 GB/s (1.00x)
    memmem/subtitles/never/huge-en-john-watson       41.2 GB/s (1.00x)            31.2 GB/s (1.32x)
    memmem/subtitles/never/huge-en-all-common-bytes  47.9 GB/s (1.00x)            37.5 GB/s (1.28x)
    memmem/subtitles/never/huge-en-some-rare-bytes   43.4 GB/s (1.00x)            42.7 GB/s (1.02x)
    memmem/subtitles/never/huge-en-two-space         42.2 GB/s (1.00x)            30.7 GB/s (1.37x)
    memmem/subtitles/never/huge-ru-john-watson       42.2 GB/s (1.00x)            42.1 GB/s (1.00x)
    memmem/subtitles/never/huge-zh-john-watson       47.6 GB/s (1.00x)            34.0 GB/s (1.40x)
    memmem/subtitles/rare/huge-en-sherlock-holmes    40.8 GB/s (1.05x)            42.9 GB/s (1.00x)
    memmem/subtitles/rare/huge-en-sherlock           36.7 GB/s (1.16x)            42.5 GB/s (1.00x)
    memmem/subtitles/rare/huge-en-medium-needle      47.7 GB/s (1.00x)            31.3 GB/s (1.52x)
    memmem/subtitles/rare/huge-en-long-needle        44.5 GB/s (1.00x)            32.0 GB/s (1.39x)
    memmem/subtitles/rare/huge-en-huge-needle        45.7 GB/s (1.00x)            33.4 GB/s (1.37x)
    memmem/subtitles/rare/huge-ru-sherlock-holmes    42.1 GB/s (1.00x)            42.2 GB/s (1.00x)
    memmem/subtitles/rare/huge-ru-sherlock           42.3 GB/s (1.01x)            42.9 GB/s (1.00x)
    memmem/subtitles/rare/huge-zh-sherlock-holmes    46.7 GB/s (1.00x)            33.1 GB/s (1.41x)
    memmem/subtitles/rare/huge-zh-sherlock           47.4 GB/s (1.00x)            42.8 GB/s (1.11x)
But I would say they are overall pretty competitive.

If you want to run the benchmarks yourself, you can. First, get rebar[1]. Then, from the root of the `memchr` repository[2]:

    $ rebar build -e 'rust/memchr/memmem/prebuilt' -e 'stringzilla/memmem/oneshot'
    stringzilla/memmem/oneshot: running: cd "benchmarks/./engines/stringzilla" && "cargo" "build" "--release"
    stringzilla/memmem/oneshot: build complete for version 3.12.3
    rust/memchr/memmem/prebuilt: running: cd "benchmarks/./engines/rust-memchr" && "cargo" "build" "--release"
    rust/memchr/memmem/prebuilt: build complete for version 2.7.4
    $ rebar measure -e 'rust/memchr/memmem/prebuilt' -e 'stringzilla/memmem/oneshot' | tee results.csv
    $ rebar rank results.csv
    Engine                       Version  Geometric mean of speed ratios  Benchmark count
    ------                       -------  ------------------------------  ---------------
    rust/memchr/memmem/prebuilt  2.7.4    1.14                            57
    stringzilla/memmem/oneshot   3.12.3   1.43                            54
    $ rebar cmp results.csv --intersection -f never/huge
    benchmark                                        rust/memchr/memmem/prebuilt  stringzilla/memmem/oneshot
    ---------                                        ---------------------------  --------------------------
    memmem/subtitles/never/huge-en-john-watson       41.2 GB/s (1.00x)            31.2 GB/s (1.32x)
    memmem/subtitles/never/huge-en-all-common-bytes  47.9 GB/s (1.00x)            37.5 GB/s (1.28x)
    memmem/subtitles/never/huge-en-some-rare-bytes   43.4 GB/s (1.00x)            42.7 GB/s (1.02x)
    memmem/subtitles/never/huge-en-two-space         42.2 GB/s (1.00x)            30.7 GB/s (1.37x)
    memmem/subtitles/never/huge-ru-john-watson       42.2 GB/s (1.00x)            42.1 GB/s (1.00x)
    memmem/subtitles/never/huge-zh-john-watson       47.6 GB/s (1.00x)            34.0 GB/s (1.40x)
See also: https://github.com/BurntSushi/memchr/discussions/159

[1]: https://github.com/BurntSushi/rebar

[2]: https://github.com/BurntSushi/memchr
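For readers curious what makes these substring searches fast, the core shape is filter-then-verify: scan cheaply for one needle byte and only run a full comparison where it occurs. A toy scalar sketch (not the actual `memchr` implementation, which adds SIMD prefilters and byte-frequency heuristics):

```rust
// Toy substring search: use the needle's last byte as a cheap filter and
// verify a full match only where that byte occurs. Real implementations
// pick the statistically rarest needle bytes and scan for them with SIMD.
fn find(haystack: &[u8], needle: &[u8]) -> Option<usize> {
    if needle.is_empty() {
        return Some(0);
    }
    if needle.len() > haystack.len() {
        return None;
    }
    let last = *needle.last().unwrap();
    let n = needle.len();
    for i in (n - 1)..haystack.len() {
        // Cheap one-byte filter, then a full (rarely taken) comparison.
        if haystack[i] == last && &haystack[i + 1 - n..=i] == needle {
            return Some(i + 1 - n);
        }
    }
    None
}

fn main() {
    assert_eq!(find(b"the quick brown fox", b"brown"), Some(10));
    assert_eq!(find(b"the quick brown fox", b"wolf"), None);
    println!("ok");
}
```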

clausecker•4mo ago
When I implemented SIMD-accelerated string functions for FreeBSD's libc, I briefly looked at Stringzilla, but the code didn't look particularly interesting or fast. So no surprise here.
ashvardanian•4mo ago
It’s a very nice and detailed benchmark suite! Great effort! Can you please share the CPU model you are running on? I suspect it’s an x86 CPU without AVX-512 support.
burntsushi•4mo ago
i9-12900K, x86-64.

There is definitely no AVX-512 support on my CPU. Which is also true for most of my users. I don't bother with AVX-512 for that reason.

Another substantial population of my users are on aarch64, which memchr has optimizations for. I don't think StringZilla does.

ashvardanian•4mo ago
Makes sense! I mostly focus on newer AVX-512 variants as opposed to older AVX2-only CPUs. As for aarch64, it is supported with NEON, SVE, and SVE2 kernels for some tasks. The last two are rarely useful unless you run on AWS Graviton 3 (previous gen) or one of the supercomputers with custom chips like the Fujitsu A64FX.
burntsushi•4mo ago
> newer AVX-512 variants as opposed to older AVX2-only CPUs

This is exactly my issue with targeting AVX-512. It isn't just absent on "older AVX2-only CPUs." It's also absent on many "newer AVX2-only CPUs." For example, the i9-14900K. I don't think any of the other newer Intel CPUs have AVX-512 either. And historically, whether an x86-64 CPU supported AVX-512 at all was hit or miss.

AVX-512 has been around for a very long time now, and it has just never been consistently available.

vlovich123•4mo ago
It’s mainly available in data centers, but yes missing in consumer parts. And for a while even in data centers you wanted to be careful about using it due to Intel’s issues with clock downscaling but that hasn’t been true for a few years.
ashvardanian•4mo ago
The consumer situation is changing. A few years ago, when I was working with a team on some closed-source HPC stuff, we got everyone Tiger Lake-based laptops to simplify AVX-512 R&D. Now, Zen 4-based desktop CPUs also support it.

But it's fair to say that I'm mostly focusing on datacenter/supercomputing hardware, on both the x86 and Arm side.

vlovich123•4mo ago
If you're targeting Intel consumer parts, AVX-512 is pointless. But yes, AMD does continue to ship AVX-512 chips, so completely ignoring AVX-512 on consumer hardware isn't ideal.
William_BB•4mo ago
Could you elaborate on SVE and SVE2? Is that because it's only 128 bits? I think my MacBook (Apple silicon) is one of the two.
ashvardanian•4mo ago
Yes, at the scale of 128-bit registers NEON is mostly enough, except for a few categories of instructions missing in that ISA subset, like scatter/gather ops, that can yield 30% boost over serial memory accesses: https://github.com/ashvardanian/less_slow.cpp/releases/tag/v...
giancarlostoro•4mo ago
Thank you! I love RipGrep; it's the one thing I always install, and I use it for everything, even non-dev stuff.
jasonjmcghee•4mo ago
Thank you for memchr- really!
llm_nerd•4mo ago
This is neat, and I click the little upvote because hyper-optimizations are a delight.

But realistically, is there any real-world situation where one would use this? What niche, industry, or need would benefit from it, where the dependency and setup costs are worth it? Strings just seem to be a long-solved non-issue.

ashvardanian•4mo ago
This last wave of work was actually triggered by the industry over the last two years, as the volume of biological sequence data is growing rapidly and more BioTech and Pharma companies are rushing to scale their computational pipelines.

Namely, if you look at DeepMind's AlphaFold 1 and 2, the bulk of compute time is spent outside of PyTorch, running sequence alignment. Historically, that was done with BLAST. More recently, in other labs, with some of my code :)

ozgrakkurt•4mo ago
Really dig these optimization blogs. Educational and well written
abdellah123•4mo ago
super nice, is there already an extension to use this in Postgres?
ashvardanian•4mo ago
Not that I’m aware of. Some commercial DBMS vendors are experimenting with integrations, but I haven’t really seen much in the Postgres ecosystem.

What excites me in this release is the quality of the new hash functions. I’ve built many over the years but never felt they were worth sharing until now. Having two included here was a personal milestone for me, since I’ve always admired how good xxHash and aHash are and wanted to build something of similar caliber.

The new hashes should be directly useful in databases, for example improving JOIN performance. And the fingerprinting interfaces based on 52-bit modulo math with double-precision FMA units open up another path. They aren’t easy to use and won’t apply everywhere, but on petabyte-scale retrieval tasks they can make a real impact.
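To make the JOIN point concrete: a hash join hashes the join key once per build row and once per probe row, so key-hashing speed sits on the hot path of both phases. A toy sketch using std's default hasher (not StringZilla's hashes):

```rust
use std::collections::HashMap;

// A hash join builds a table keyed on the join column of one relation, then
// probes it with each row of the other; the key is hashed in both phases,
// which is why a faster hash function directly speeds up JOINs.
fn hash_join<'a>(
    users: &[(u32, &'a str)],
    orders: &[(&'a str, u32)],
) -> Vec<(&'a str, &'a str)> {
    // Build phase: one hash insert per row of the smaller relation.
    let by_id: HashMap<u32, &str> = users.iter().copied().collect();
    // Probe phase: one hash lookup per row of the larger relation.
    orders
        .iter()
        .filter_map(|&(order, uid)| by_id.get(&uid).map(|&name| (order, name)))
        .collect()
}

fn main() {
    let users = [(1, "alice"), (2, "bob"), (3, "carol")];
    let orders = [("order-7", 2), ("order-8", 1), ("order-9", 2)];
    let joined = hash_join(&users, &orders);
    assert_eq!(
        joined,
        [("order-7", "bob"), ("order-8", "alice"), ("order-9", "bob")]
    );
    println!("{joined:?}");
}
```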

ComputerGuru•4mo ago
Great work and nice write up, Ash!

A suggestion: in the comparison table under the “AES and Port-Parallelism Recipe”, it would be great to include “streaming support” and “stable output” (across OS/arch) as columns.

Also something to beware of, some hash libraries claim to support streaming via the Hasher interface but actually return different results in streaming and one-shot mode (and have different performance profiles). I’m on mobile so I can’t check atm but I’m about 80% sure gxhash has at least one of these problems that prevented me from using it before.

ashvardanian•4mo ago
Thanks! You are likely right! It took a lot of time to make sure that all 6 ISA-specific versions of StringZilla (https://github.com/ashvardanian/StringZilla/blob/main/includ...) return the same output for both one-shot and incremental construction, and I’m not sure it was a priority for other projects :)
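The contract in question (incremental writes producing the same digest as a one-shot hash) can be spot-checked like this with std's DefaultHasher; illustrative only, not StringZilla's hashes:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Hash `data` fed through the streaming interface in `chunk`-sized pieces.
fn streamed(data: &[u8], chunk: usize) -> u64 {
    let mut h = DefaultHasher::new();
    for piece in data.chunks(chunk) {
        h.write(piece);
    }
    h.finish()
}

fn main() {
    let data = b"incremental must equal one-shot";
    // One-shot digest vs. the same bytes streamed in 5-byte chunks: a hash
    // library honoring the streaming contract returns identical values.
    assert_eq!(streamed(data, data.len()), streamed(data, 5));
    println!("ok");
}
```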
unwind•4mo ago
I'm not (at the moment) a potential user of this, but I just wanted to say that it was a fantastic page with a really good presentation of the project and its capabilities.

One micro-question on the editing: why are numbers written with an apostrophe (') as the thousands-separator [1]? I know that is used for this purpose in Switzerland and that many programming languages support it. It just seemed very strange for English text, where typically comma (,) would be used, of course.

[1]: https://en.wikipedia.org/wiki/Decimal_separator#Digit_groupi...

[2]: https://en.wikipedia.org/wiki/Apostrophe#Miscellaneous_uses_...

adrian_b•4mo ago
This is the stupid choice made by the C++14 standard.

A digit separator for increased readability of long numbers was first introduced by Ada (1979-06), which used the underscore. That usage matched the original reason the underscore was added to the character set, which had been done by PL/I (1964-12) to improve the readability of long identifiers while avoiding the ambiguity caused by using the hyphen for that purpose, as COBOL had done earlier (many LISPs have retained the COBOL usage of the hyphen, because they, like COBOL, do not normally write arithmetic expressions with infix operators).

Most programming languages that have added a digit separator have followed Ada, by using the underscore.

35 years later, C++ should have done the same, and I hate whoever among the people updating the standard thought otherwise, thus causing completely unnecessary compatibility problems, e.g. when copying a big initialized array between program sources written in different languages.

There was a flawed argument against the underscore: that it could have caused parsing problems in some weird legacy programs. But those problems were no harder to solve than the parsing errors caused by the legacy use of the apostrophe in character constants (i.e. forbidding the digit separator as the first character of a number is enough to ensure unambiguous parsing).
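For comparison, languages in the Ada lineage accept the underscore; in Rust, for example:

```rust
fn main() {
    // Rust follows the Ada lineage: underscore as the digit separator.
    // C++14 instead reads 1'000'000'000, which is why numeric literals
    // cannot be copied verbatim between the two languages.
    let billion = 1_000_000_000_u64;
    assert_eq!(billion, 1000000000);
    println!("{billion}");
}
```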

ashvardanian•4mo ago
Thanks for the kind words! In this case, it isn’t tied to any programming language or locale-specific formatting. I just find commas less readable in long numbers, especially in running text across Western languages. Apostrophes feel clearer to me, so I usually stick with them.