But mentioning the borrow checker raises an obvious question that I don’t see addressed in this post: what happens if you try to take a reference to an object in the temporary allocator, and use it outside of the temporary allocator’s scope? Is that an error? Rust’s borrow checker has no runtime behavior; it only exists to create errors in cases like that, so the title invites the question of how this mechanism handles that case, but doesn’t answer it.
This is of course not as good as ASAN or a borrow checker, but it interacts very nicely with C.
Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but the chance to fix that title has already passed.
It should also be clear from the rest of the blog post that it doesn't try to make any claims that it's a novel technique (it's something that has been around for a long time). What's novel is that it's well integrated into the stdlib.
But you said it yourself in your previous message:
> A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved – temp allocators aren't using virtual memory on supporting platforms yet)
So the issue is clearly not solved.
And to be complete about the answer:
> in safe mode that data will be scratched out with a value, I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.
I can see multiple issues with this:
- it's only in safe mode
- it's safe only as long as the memory is never used again for a different purpose, which seems to imply that either this is not safe (if it's written again) or that it leaks massive amounts of memory (if it's never written to again)
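To make that second point concrete, here's a minimal C3-style sketch (the function names are mine; it only uses the @pool/mem::tnew constructs shown elsewhere in this thread, plus the 0xAA scrubbing quoted above): the scrub can only flag a stale read that happens before the arena memory is handed out again, while a write through the stale pointer after reuse goes completely unnoticed.

fn int* escape(int input)
{
    @pool()
    {
        int* p = mem::tnew(int);
        *p = input;
        return p;   // the pointer escapes; when the pool closes the memory is freed
                    // (and, per the quote above, scrubbed with 0xAA in safe mode)
    };
}

fn void main()
{
    int* stale = escape(42);
    // Reading *stale right now would show the 0xAA pattern in safe mode, which is detectable.
    @pool()
    {
        int* fresh = mem::tnew(int);    // may reuse the very same arena bytes
        *fresh = 1;
        *stale = 999;   // a write through the stale pointer is not detected by the scrub
                        // and can silently corrupt *fresh if the memory was reused
    };
}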
> Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but the chance to fix that title has already passed.
Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"? To me that way of reading seems reasonable, but the title still looks plainly wrong.
Heck, even citing the borrow checker *at all* seems wrong, this is more about RAII than lifetimes (and RAII in Rust is solved with ownership, not the borrow checker).
You can use --sanitize=address to get this today, or use the Vmem-based temp allocator (which is only in the 0.7.4 prerelease and only for 64 bit POSIX) if you're curious how it feels and works in practice.
> I can see multiple issues with this:
There is a constant trade-off, and being as safe as possible is obviously great, but there is also the question of performance.
The context matters though: it's a C-like language, an evolution of C. So it doesn't try to be a completely new language with new semantics, and that creates a lot of constraints.
The "safe-C" C-dialects usually add a lot of additional annotations that don't seem particularly palatable to most developers.
> Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"?
Yes I am afraid you do. But that's my fault (since I suggested the title, even though I didn't write the article), and not yours.
This post is about memory management and doesn't seem to be concerned much about safety in any way. In C3, does anything prevent me from doing this:
fn int* example(int input)
{
    @pool()
    {
        int* temp_variable = mem::tnew(int);
        *temp_variable = input;
        return temp_variable;
    };
}
Presumably this is referring to Rust, which has a borrow checker and slow compile times. The author is, I assume, under the common misconception that these facts are closely related. They're not: I think the borrow checker runs in linear time (though I can't find confirmation of this), and in any event profiling reveals that it only accounts for a small fraction of compile times. Rust compile times are slow because the language has a bunch of other non-borrow-checking-related features that trade off compilation speed for other desiderata (monomorphization, LLVM optimization, procedural macros, crates as a translation unit). They're also slow because the rustc codebase is huge, fairly arcane, and well understood by relatively few people; while there's a lot of room for improvement in principle, it's mostly not low-hanging fruit and requires major architectural changes, so it'd take a large investment of resources which no one has put up.
There are other legitimate criticisms you can raise at the Rust borrow checker such as cognitive load and higher cost of refactoring, but the compilation speed argument is just baseless.
> Lifetime annotations are checked at compile-time. ... This is the major reason for slower compilation times in Rust.
This misconception is being perpetuated by Rust tutorials.
Be aware that it is not part of the rust-lang organization; it's a third party.
Lexically scoped lifetimes don't address this at all.
What the C3 solution DOES do is provide a way to detect at runtime when an already freed temporary allocation is used. That's of course not the level of compile time checking that Rust does. But then Rust has a lot more in the language in order to support this.
Conversely, C3 does have contracts as a language feature, which Rust doesn't have, so C3 is able to do static checking with the contracts to reject contract violations at compile time, which runtime contracts, like some Rust crates provide, can't do.
The article makes no mention of this, so in the context of the article the title remains very wrong. I could also not find a page in the documentation claiming this is supported (though I have to admit I did not read all the pages), nor an explanation of how this works, especially in relation to the performance hit it would result in.
> C3 is able to do static checking with the contracts to reject contract violations at compile time
I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts. Even worse, violating them when not using safe mode results in "unspecified behaviour", but really it's undefined behaviour (violating contracts is even in their list of undefined behaviour! [2])
[1]: https://c3-lang.org/language-common/contracts/
[2]: https://c3-lang.org/language-rules/undefined-behaviour/#list...
The temp allocator implementation isn't guaranteed to detect it, and the article doesn't go into implementation details and guarantees (which is good, because capabilities will be added on the road to 1.0).
> I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts.
No, there is no guarantee at the language level, because doing so would make a conforming implementation of the compiler harder than it needs to be. In addition, setting exact limits may hamper innovation in compilers that wish to add more analysis but would hesitate to reject code that can be statically known to violate contracts.
At higher optimizations, the compiler is allowed to assume that the contracts evaluate to true. This means that code like `assert(i == 1); if (i != 1) return false;` can be reduced to a no-op.
So the danger here is if you rely on the function giving you a valid result even when the input is not one that the function should work with.
And yes, it will be optional to have those "assumes" inserted.
Already today, in the current compiler, something trivial like calling `foo(0)` on a function that requires the parameter to be > 1 is caught at compile time. And it's not doing any real analysis yet, but that will definitely happen.
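For illustration, here's a hedged sketch of that kind of check, assuming the `<* @require ... *>` contract syntax from the contracts page linked above (the function itself is made up):

<*
 @require n > 1
*>
fn int half(int n)
{
    return n / 2;
}

fn void main()
{
    half(0);    // the constant argument visibly violates @require n > 1,
                // which, per the claim above, the current compiler already
                // rejects at compile time
}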
I looked at the allocator source code and there’s no use-after-free protection beyond zeroing on free, and that is in no way sufficient. Many UAF security exploits work by using a stale pointer to mutate a new allocation that re-uses memory that has been freed, and zeroing on free does nothing to stop these exploits.
@pool appears to be exactly what C++ does automatically when objects fall out of scope.
(This doesn't seem to have anything to do with borrow checking though, which is a memory safety feature, not a memory management feature. Rust manages memory with affine types, which is a completely separate thing; you could write an entire program without a single reference if you really wanted to.)
Because that is the context. It is the constraint that C3, C, Odin, Zig etc. maintain, where RAII is out of the question.
I think the only real saving grace is that you don't have to pass around the allocator. But then you run into the issue where anyone allocating now needs to know about the lifetimes of the caller's pool. If A -> B (pool) -> C and the returned allocation of C ends up in A, you now potentially have a pointer to freed memory.
Sending around the explicit allocator would allow C to choose when it should allocate globally and when it should allocate on the pool sent in.
Rust doesn't even have a good allocator interface yet, so libraries like bumpalo have a parallel implementation of some stdlib types.
I'm confused: how is it not exactly RAII?
It has very little to do with trying to manage temporary memory lifetimes.
It doesn't solve the case when lifetimes are indeterminate. But often they are well known. Consider "bar(foo())" where "foo()" returns an allocated object that we wish to free after "bar" has used it. In something like C it's easy to accidentally leak such a temporary object, and doing it properly means several lines of code, which might be bad if it's intended for an `if` statement or `while`.
Now we usually cannot do "bar(foo())" because it then leaks. We could allocate a buffer on the stack, and then do "bar(foo(&buffer))", but this relies on us safely knowing that the buffer does not overflow.
If the language has RAII, we can use that to return an object which will release itself after going out of scope e.g. std::unique_ptr, but this relies on said RAII and preferably move semantics.
If the context is RAII-less semantics, this is not trivial to solve. Languages that run into this are C3, Zig, C and Odin.
With the temp allocator solution, we can write `bar(foo())` if `foo` always allocates a temp variable, or `bar(foo(tmem))` if it takes an allocator.
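Here's a minimal sketch of that pattern in C3-style code (the function names are placeholders; it only uses the @pool/mem::tnew constructs that appear elsewhere in this thread): `foo` allocates its result with the temp allocator and `bar` only reads it, so the caller's @pool scope decides when the memory goes away and nothing needs an explicit free.

import std::io;

fn int* foo(int value)
{
    int* result = mem::tnew(int);   // temp allocation, owned by whichever pool is active in the caller
    *result = value;
    return result;
}

fn int bar(int* value)
{
    // bar only reads the temporary; it never frees anything
    return *value + 1;
}

fn void main()
{
    @pool()
    {
        io::printfn("%d", bar(foo(123)));   // no explicit free: foo's temporary dies with the pool
    };
}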
Given a function `foo` that allocates an object "o" and returns it to the upper scope, how would you do "escape analysis" to determine that it should be freed and HOW it should be freed? What is the mechanism if you do not have RAII, ARC or GC?
This is true for all similar schemes: they have something easy for simple-to-track allocations, and then have to fall back on something generic.
But even that usually assumes that the language somehow has a built-in notion of memory allocation and freeing.
`@pool` flushes the temp allocator and all allocations made by the temp allocator are freed when the pool goes out of scope.
There are similarities, but NSAutoreleasePool is for refcounting and an object released by the autoreleasepool might have other objects retaining it, so it's not necessarily freed.
Memory arenas/pools have been around for ages, and binding arenas to a lexical scope is also not a new concept. C++ was doing this with RAII, and you could implement this in Go with defer and in other languages by wrapping the scope with a closure.
This post discusses how arenas are implemented in C3 and what they're useful for, but as other people have said this doesn't make sense to compare arenas to reference counting or a borrow checker. Arenas make memory management simpler in many scenarios, and greatly reduce (but don't necessarily eliminate - without other accompanying language features) the chances of a memory leak. But they contribute very little to memory safety and they're not nearly as versatile as a full-fledged borrow checker or reference counting.
The blog claims that @pool "solves memory lifetimes with scopes" yet it looks like a classic region/arena allocator that frees everything at the end of a lexical block… a technique that’s been around for decades.
Where do affine or linear guarantees come in?
From the examples I don’t see any restrictions on aliasing or on moving data between pools, so how are use‑after‑free bugs prevented once a pointer escapes its region?
And the line about having "solved memory management" for total functions... bravo indeed.
Could you show a non‑trivial case, say, a multithreaded game loop where entities span multiple frames, or a high‑throughput server that streams chunked responses, where @pool prevents leaks that a plain arena allocator would not?
This doesn't actually do any compile-time checks (it could, but it doesn't). It will do runtime checks on supported platforms by using page protection features eventually, but that's not really the goal.
The goal is actually extremely simple: make working with temporary data very easy, which is where most memory management messes happen in C.
The main difference between this and a typical arena allocator is the clearly scoped nature of it in the language. Temporary data that is local to the function is allocated in a new @pool scope. Temporary data that is returned to the caller is allocated in the parent @pool scope.
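If I read that description right, one way the split can look is something like the following sketch (the names are mine, and it only reuses the @pool/mem::tnew constructs from earlier in the thread; it assumes the caller has an active temp scope):

fn int* make_result(int input)
{
    // Data returned to the caller: allocated with the temp allocator that is
    // active on entry, i.e. the caller's (parent) pool.
    int* result = mem::tnew(int);

    @pool()
    {
        // Purely local temporary data: allocated in this nested @pool scope.
        int* scratch = mem::tnew(int);
        *scratch = input * 2;
        *result = *scratch + 1;
    };  // scratch is released here

    return result;  // still valid, because it lives in the caller's pool
}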
Personally I don't like the precise way this works too much because the decision of whether returned data is temporary or not should be the responsibility of the caller, not the callee. I'm guessing it is possible to set the temp allocator to point to the global allocator to work around this, but the callee will still be grabbing the parent "temp" scope which is just wrong to me.
For memory only, which is one of the simplest kinds of resource. What about file descriptors? Graphics objects? Locks? RAII can keep track of all of those. (So does refcounting, too, but tracing GC usually not.)
So, you will still need a borrow checker for the same reasons Rust needs one, and C/C++ also needed one.
- integers use names like "short" instead of names with numbers like "i16"
- they use printf-like formatting functions instead of Python's f-strings
- it seems that there is no exception in case of integer overflow or floating point errors
- it seems that there is no pointer lifetime checking
- functions are public by default
- "if" statement still requires parenthesis around boolean expression
Also I don't think scopes solve the problem when you need to add and delete objects, for example, in response to requests.
Also, since D lang usually implements all kinds of possible concepts and mechanisms from other languages, I would love to see this implemented as well! D already has a borrow checker now, so why not also add this? It would be very cool to play with it!