IIRC, Unix's original as works that way: during assembly, the text and data sections are written into separate temporary files, and then they are merged together into the a.out. And yes, it's slow.
I don't follow. He has just said that although the size of the arena is finite, the input and output are unbounded, and the compiler does its work by processing "a sequence of chunks" (i.e. those things that will fit into the finitely sized arena). That's not "O(1) intermediate processing artifacts". It's still O(n).
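To make the distinction concrete, here's a rough sketch (hypothetical code, not taken from the compiler under discussion) of processing "a sequence of chunks" through a fixed arena — the arena is O(1), but the loop still runs over the whole input:

```c
#include <stdio.h>

/* Hypothetical sketch: a fixed, "statically allocated" arena refilled
 * once per chunk. The arena stays constant-sized, but the loop body
 * runs once per chunk, so the total work is still O(n) in the input
 * size, not O(1). */
enum { ARENA_SIZE = 1 << 16 };
static char arena[ARENA_SIZE];

static size_t process_stream(FILE *in) {
    size_t total = 0, n;
    while ((n = fread(arena, 1, sizeof arena, in)) > 0) {
        /* ... "compile" this chunk using only `arena` ... */
        total += n;   /* work grows with the input: O(n) overall */
    }
    return total;
}
```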
> [...] can clarify compiler’s architecture, and I won’t be too surprised if O(1) processing in compilers would lead to simpler code
This doesn't seem like an intuitive conclusion at all. There's more recordkeeping needed now, and more machinery to implement, and one should expect this scheme to produce code that is neither simpler nor easier.
We haven't even gotten around to addressing how "statically allocating" a fixed-size arena that your program necessarily subdivides into pieces (before moving on to the next chunk and doing the same) is just "dynamic allocation with extra steps". (If you want to write/control your own allocator, or just think it would be neat, then fine, but... say that.)
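For illustration, "subdividing a fixed arena into pieces" might look like the bump allocator below (a hypothetical sketch; `arena_alloc` and `arena_reset` are made-up names). Handing out variable-sized pieces at runtime is still dynamic allocation, just with a home-grown policy:

```c
#include <stddef.h>

/* Hypothetical sketch: a bump allocator carving pieces out of one
 * "statically allocated" arena — i.e. dynamic allocation with a
 * home-grown, per-chunk policy. */
enum { ARENA_SIZE = 4096 };
static unsigned char arena[ARENA_SIZE];
static size_t arena_used;

static void *arena_alloc(size_t n) {
    if (arena_used + n > ARENA_SIZE)
        return NULL;              /* arena full: time to flush */
    void *p = arena + arena_used;
    arena_used += n;              /* bump the high-water mark */
    return p;
}

static void arena_reset(void) {   /* "move on to the next chunk" */
    arena_used = 0;
}
```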
delifue•5d ago
Static memory allocation requires hardcoding an upper limit on the size of everything. For example, if you limit each string to at most 256 bytes, then a string with only 10 bytes wastes 246 bytes of memory.
If you instead limit string length to 32 bytes, less memory is wasted, but a string longer than 32 bytes can't be handled at all.
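As a hypothetical sketch of that trade-off (names made up), a fixed 256-byte slot per string wastes the unused tail, and anything over the hard limit is simply rejected:

```c
#include <string.h>

/* Hypothetical sketch: every string lives in a fixed 256-byte slot.
 * A 10-byte string still occupies the full slot; a string over the
 * limit can't be stored at all. */
enum { MAX_STR = 256 };

struct fixed_str {
    char buf[MAX_STR];   /* always 256 bytes, however short the data */
};

/* Returns 0 on success, -1 if the string exceeds the hard limit. */
static int fixed_str_set(struct fixed_str *s, const char *src) {
    if (strlen(src) >= MAX_STR)
        return -1;               /* over the limit: cannot handle it */
    strcpy(s->buf, src);
    return 0;
}
```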
Joker_vD•5d ago
No? Unless you limit each string to be exactly 256 bytes, but that's silly.
> If you instead limit string length to 32 bytes, less memory is wasted, but a string longer than 32 bytes can't be handled at all.
Not necessarily. Early compilers/linkers routinely did the "only the first 6/8 letters of an identifier are meaningful" schtick: the rest was simply discarded.
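The scheme amounts to something like this (a hypothetical sketch; `intern_ident` is a made-up name): every identifier fits in a fixed-size slot because only a prefix is significant.

```c
#include <string.h>

/* Hypothetical sketch of the old scheme: only the first 8 characters
 * of an identifier are meaningful; the rest is simply discarded, so
 * every symbol fits in a fixed-size slot. */
enum { SIG_CHARS = 8 };

static void intern_ident(char out[SIG_CHARS + 1], const char *name) {
    strncpy(out, name, SIG_CHARS);  /* keep at most 8 characters */
    out[SIG_CHARS] = '\0';          /* anything past that is dropped */
}
```

A consequence (and the historical downside): long identifiers that share a prefix collide.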
AlotOfReading•1h ago
Even if you needed to hardcode upper size limits, which your compiler already does to some extent (the C/C++ standards anticipate this by setting minimum translation limits for things like string literal length), you wouldn't actually pay the full price on most systems, because of overcommit. There are other downsides depending on implementation details, like how you reclaim memory and spawn compiler processes, so I'm not suggesting it as a good idea. It's just possible.