The fact that the Zig ecosystem follows the pattern set by the standard library of passing the Allocator interface around makes it super easy to write idiomatic code and then decide on your allocation strategy at the call site. People have of course been doing this for decades in other languages, but it's not trivial to leverage existing ecosystems like libc while following this pattern, and your callees usually need to know something about the allocation strategy being used (even if only to avoid standard functions that don't follow it).
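Roughly, the pattern looks like this (a minimal sketch, not from any real project; `greet` and the buffer size are just illustrative):

```zig
const std = @import("std");

// The callee only sees the Allocator interface; it knows nothing about the
// allocation strategy behind it.
fn greet(allocator: std.mem.Allocator, name: []const u8) ![]u8 {
    return std.fmt.allocPrint(allocator, "hello, {s}!", .{name});
}

pub fn main() !void {
    // The call site decides the strategy: here a fixed stack buffer, so no
    // heap is touched at all.
    var buf: [64]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const msg = try greet(fba.allocator(), "tiger");
    std.debug.print("{s}\n", .{msg});
    // Swapping in std.heap.page_allocator (or an arena, or a testing
    // allocator) requires no change to greet() at all.
}
```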
It’s the only kind of program that can actually be reasoned about. Also, it’s not exactly Turing complete in the classic sense.
Makes my little finitist heart get warm and fuzzy.
It’s actually quite tricky, though. The allocation still happens, it’s just bounded, so you could plausibly argue both ways.
It’s mostly a theoretical issue, though, because all real computer systems have limits. It’s just that in languages that assume unlimited memory, the limits aren’t written down. It’s not “part of the language.”
Also it's giving me flashbacks to LwIP, which was a nightmare to debug when it would exhaust its preallocated buffer structures.
We used to have very little memory, so we developed many tricks to handle it.
Now we have all the memory we need, but the tricks remained. They are now more harmful than helpful.
Interestingly, embedded programming has a reputation for stability, and AFAIK game development is also moving more and more toward avoiding dynamic allocation.
Theoretically infinite memory isn't really the problem with reasoning about Turing-complete programs. In practice, the inability to guarantee that any program will halt still applies to any system with enough memory to do anything more than serve as an interesting toy.
I mean, I think this should be self-evident: our computers already do have finite memory. Giving a program slightly less memory to work with doesn't really change anything; you're still probably giving that statically-allocated program more memory than entire machines had in the 80s, and it's not like the limitations of computers in the 80s made us any better at reasoning about programs in general.
But why? If you do that you are just taking memory away from other processes. Is there any significant speed improvement over just dynamic allocation?
2. Speed improvement? No. The improvement is in your ability to reason about memory usage, and about time usage. Dynamic allocations add a very much non-deterministic amount of time to whatever you're doing.
But if you're assuming that overcommit is what will save you from wasting memory in this way, then that sabotages the whole idea of using this scheme in order to avoid potential allocation errors.
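One way to square that (a sketch of the general idea, not anything TigerBeetle specifically does): reserve the whole budget at startup and touch every page, so the OS has to actually back it immediately. mlock()/MAP_POPULATE are heavier-duty versions of the same thing.

```zig
const std = @import("std");

pub fn main() !void {
    const budget = 1 << 30; // e.g. a 1 GiB budget decided up front
    const page_size = 4096; // assumption: 4 KiB pages
    const memory = try std.heap.page_allocator.alloc(u8, budget);
    // Fault every page in now, so any overcommit-related failure happens
    // here, at startup, instead of at some arbitrary point mid-request.
    var offset: usize = 0;
    while (offset < memory.len) : (offset += page_size) {
        memory[offset] = 0;
    }
    // If we got this far, the memory is really backed; run with it from here.
}
```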
1. Doesn't the overcommit feature lessen the benefits of this? Your initial allocation works but you can still run out of memory at runtime.
2. For a KV store, you'd still be at risk of application-level use-after-free bugs, since you need to keep track of which parts of your statically allocated memory are in use or not?
For the second question, yes, we have to keep track of what's in use. The keys and values are allocated via a memory pool that uses a free-list to keep track of what's available. When a request to add a key/value pair comes in, we first check if we have space (i.e. available buffers) in both the key pool and value pool. Once those are marked as "reserved", the free-list kind of forgets about them until the buffer is released back into the pool. Hopefully that helps!
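For anyone curious what that looks like concretely, here's a minimal sketch of a free-list pool (my own simplification in Zig, not the actual implementation):

```zig
const std = @import("std");

// `count` fixed-size buffers allocated up front; a singly linked free list of
// indices tracks which buffers are currently available.
fn Pool(comptime buffer_size: usize, comptime count: usize) type {
    return struct {
        const Self = @This();

        buffers: [count][buffer_size]u8,
        next: [count]?usize, // next[i] = index of the free buffer after i
        head: ?usize, // first free buffer, or null when the pool is exhausted

        fn init() Self {
            var self: Self = .{ .buffers = undefined, .next = undefined, .head = 0 };
            for (&self.next, 0..) |*n, i| {
                n.* = if (i + 1 < count) i + 1 else null;
            }
            return self;
        }

        // Reserve a buffer; the free list "forgets" it until release().
        fn acquire(self: *Self) ?usize {
            const index = self.head orelse return null; // no space left
            self.head = self.next[index];
            return index;
        }

        // Push the buffer's index back onto the free list.
        fn release(self: *Self, index: usize) void {
            self.next[index] = self.head;
            self.head = index;
        }
    };
}

test "acquire and release" {
    var pool = Pool(32, 4).init();
    const a = pool.acquire().?;
    pool.release(a);
    try std.testing.expect(pool.acquire() != null);
}
```

An insert would acquire() from both the key pool and the value pool and only proceed if both succeed; a delete releases both indices back. The use-after-free risk doesn't vanish, it just shrinks to "don't touch an index after you've released it", which is a much smaller surface to audit.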
leumassuehtam•2h ago
It's baffling that a technique known for 30+ years in the industry has been repackaged into "tiger style" or whatever guru-esque thing this is.
testdelacc1•1h ago
It’s tempting to cut people down to size, but I don’t think it’s warranted here. I think TigerBeetle have created something remarkable, and their approach to programming is how they’ve done it.
jandrewrogers•1h ago
Allocation aside, many optimizations require knowing precisely how close to instantaneous resource limits the software actually is, so it is good practice for performance engineering generally.
Hardly anyone does it (look at most open source implementations) so promoting it can’t hurt.
mitchellh•56m ago
I think the more constructive response is to discuss why techniques that are common in some industries, such as gaming or embedded systems, have had difficulty being adopted more broadly, and to celebrate that this idea, which is good in many contexts, is now spreading! Or to share some other ideas that other industries might be missing out on (and again, asking critically why they aren't present).
Ideas in general require marketing to spread; that's literally what marketing is in the positive sense (in the negative sense it's all sorts of slime!). If a coding standard used by a company is the marketing this idea needs to live and grow, then hell yeah, "tiger style" it is! Such is humanity.
brabel•5m ago
Most applications don’t need to bother the user with things like how much memory they think they’ll need upfront. They just allocate however much is necessary, whenever it’s necessary. Most applications today are probably servers that change all the time; you wouldn’t know upfront how much memory you’d need, as that would keep changing with every release! Static allocation may work in a few domains, but it certainly doesn’t work in most.