It's not a failure, but relatedly, because sbrk is linear you also don't have a reasonable way to deal with fragmentation. For example, suppose you allocate 1000 page-sized objects and then free all but the last one. With an mmap-based heap you can return the other 999 pages to the OS, whereas with sbrk you're stuck holding those 999 unneeded pages for the lifetime of that 1000th object (better hope it isn't long-lived!).
Really, sbrk only exists for legacy reasons.
Thanks to the wonders of virtual memory, you can madvise(MADV_DONTNEED) that region and return the memory to the OS without giving up the address space.
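A minimal sketch of that pattern, using the 1000-page example from above (assumes Linux and an anonymous private mapping; the page size is queried rather than hardcoded):

    /* Reserve 1000 pages, keep only the last one, hand the rest back
     * to the OS while keeping the whole address range valid. */
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        size_t n = 1000;

        char *heap = mmap(NULL, n * page, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (heap == MAP_FAILED) return 1;

        heap[(n - 1) * page] = 42;  /* the one object we keep */

        /* Discard the first 999 pages. The addresses stay mapped;
         * touching them again just gets fresh zero-filled pages. */
        madvise(heap, (n - 1) * page, MADV_DONTNEED);

        printf("%d\n", heap[(n - 1) * page]);  /* still 42 */
        return 0;
    }

With sbrk there's no equivalent per-range release short of shrinking the break past everything above it.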
Since this is all allocator.h[0] contains aside from other include statements, why have allocator.h at all?
0 - https://github.com/t9nzin/memory/blob/main/include/allocator...
I haven't read it in full, so I'm not sure if it discusses using blocks vs. other structures (e.g. stack-based allocators, stack being the data structure, not the program stack; see the sketch after the links below). I.e., it's a set of implementation choices. It still seems to reflect common ways of allocating in far more detail than many blog posts I've read on the topic do.
https://screwjankgames.github.io/engine%20programming/2020/0...
https://www.bytesbeneath.com/p/the-arena-custom-memory-alloc...
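For concreteness, here's roughly what I mean by a stack allocator: a bump pointer into a fixed buffer plus markers you can roll back to. The names are mine for illustration, not from the repo or either post:

    #include <stddef.h>

    typedef struct {
        unsigned char *base;
        size_t capacity;
        size_t offset;  /* current top of the stack */
    } Stack;

    /* Bump-allocate from the top, aligned to `align` (a power of two). */
    static void *stack_alloc(Stack *s, size_t size, size_t align) {
        size_t top = (s->offset + align - 1) & ~(align - 1);
        if (top + size > s->capacity) return NULL;  /* out of space */
        s->offset = top + size;
        return s->base + top;
    }

    /* Roll back: frees everything allocated after `marker` at once. */
    static void stack_reset(Stack *s, size_t marker) {
        s->offset = marker;
    }

The catch is that frees have to happen in LIFO order, which is exactly the kind of trade-off that distinguishes these designs from a general-purpose block heap.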
I also don't know how much we want to butcher this blog post, but:
> RAM is fundamentally a giant array of bytes, where each byte has a unique address. However, CPUs don’t fetch data one byte at a time. They read and write memory in fixed-size chunks called words which are typically 4 bytes on 32-bit systems or 8 bytes on 64-bit systems.
CPUs these days fetch entire cache lines. Memory is also split into banks. There are many more details involved, and viewing memory as a giant array of bytes is itself fundamentally broken. It's a useful abstraction up to a point, but it falls apart once you analyze performance. This part of the blog didn't seem very accurate.
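As a quick illustration (glibc-specific; _SC_LEVEL1_DCACHE_LINESIZE may report 0 on some systems):

    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        /* On a typical x86-64 box this prints a 64-byte line vs. an
         * 8-byte word: loads and stores move whole lines, not words. */
        long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
        long word = (long)sizeof(void *);
        printf("cache line: %ld bytes, word: %ld bytes\n", line, word);
        return 0;
    }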
No need to free in short-lived processes.
I don't think that's the case here though.