The B+tree implementation provides significant performance improvements over industry-standard alternatives. For some workloads with large trees, we've observed:
- vs Abseil B+tree: 2-5× faster across insert/find/erase operations
- vs std::map: 2-5× faster across insert/find/erase operations
ognarb•1d ago
It seems the code was written with AI, I hope the author knows what he is doing. Last time I tried to use AI to optimize CPU-heavy C++ code (StackBlur) with SIMD, this failed :/
LoganDark•1d ago
https://github.com/logandark/stackblur-iter
klaussilveira•1d ago
Have you tried to do any OpenGL or Vulkan work with it? Very frustrating.
React and HTML, though, pretty awesome.
DrBazza•1d ago
I suppose part of the problem is that training a model on publicly available C++ isn't going to be great because syntactically broken code gets posted to the web all the time, along with suboptimal solutions. I recall a talk saying that functional languages are better for agents because the code published publicly is formally correct.
simonw•1d ago
It's possible Opus 4.5 and GPT-5.2 are significantly less terrible with C++ than previous models. Those only came out within the past 2 months.
They also have significantly more recent knowledge cut-off dates.
klaussilveira•1d ago
I've been recently working with Opus 4.5 and GPT-5.2. Both have been unable to migrate a project from using ARB shaders to 3.3 and GLSL. And I don't mean migrating the shaders themselves, just changing all the boring glue code that tells the application to use GLSL and manage those instead of feeding the ARB shaders directly.
They have also failed spectacularly at implementing this paper: https://www.cse.chalmers.se/~uffe/soft_gfxhw2003.pdf
No matter how I sliced it, I could not get a simple cube to have the shadows as described in the paper.
I've also recently tried to get Opus 4.5 to move the Job system from Doom 3 BFG to the original codebase. Clean clone of dhewm3, pointed Opus to the BFG Job system codebase, and explained how it works. I have also fed it the Fabien Sanglard code review of the job system: https://fabiensanglard.net/doom3_bfg/threading.php
As well as the official notes that explain the engine differences: https://fabiensanglard.net/doom3_documentation/DOOM-3-BFG-Te...
I did that because, well, I had ported this job system before and knew it was something pretty "pluggable" and could be implemented by an LLM. Both have failed. I'm yet to find a model that does this.
simonw•1d ago
Will be interesting to see if models in six months time can handle this, since they clearly can't do it today.
dustbunny•14h ago
I might try to implement this paper, great find! I love this 2000-2010 stuff
klaussilveira•10h ago
https://artis.inrialpes.fr/Publications/2003/HLHS03a/SurveyR...
https://mrelusive.com/publications/papers/SIMD-Shadow-Volume...
https://terathon.com/gdc05_lengyel.pdf
inetknght•1d ago
I've started adding this to all of my new conversations and it seems to help:
My question to the LLM then follows in the next paragraph. Foregoing most of the LLM's code-writing capabilities in favor of giving observations and ideas seems to be a much better choice for productivity. It can still lead me down rabbit holes or wrong directions, but at least I don't have to deal with 10 pages of prose in its output or 50 pages of ineffectual code.
tarnith•23h ago
As soon as it starts trying to write actual code or generate a bunch of files it's less than helpful very quickly.
Perhaps I haven't tried enough, but I'm entirely unsold on this for anything lower level.
dustbunny•14h ago
Learning "the old ways" is certainly valuable, because the AIs and the resources available are bad at these old ways.
FpUser•1d ago
Also to generate boilerplate / repetitive code.
Overall I consider it a win.
shihab•1d ago
AI is always good at going from 0 to 80%; it's the last 20% it struggles with. It'd be interesting to see Claude-written code make its way into a well-established library.