frontpage.

OpenCut: The open-source CapCut alternative

https://github.com/OpenCut-app/OpenCut
153•nateb2022•3h ago•41 comments

Let's Learn x86-64 Assembly Part 0 – Setup and First Steps

https://gpfault.net/posts/asm-tut-0.txt.html
39•90s_dev•1h ago•10 comments

The underground cathedral protecting Tokyo from floods (2018)

https://www.bbc.com/future/article/20181129-the-underground-cathedral-protecting-tokyo-from-floods
33•barry-cotter•3d ago•10 comments

Investors bought 27% of US homes in Q1, as traditional buyers struggle to afford

https://abcnews.go.com/Business/wireStory/investors-snap-growing-share-us-homes-traditional-buyers-123560969
69•MilnerRoute•47m ago•37 comments

APKLab: Android Reverse-Engineering Workbench for VS Code

https://github.com/APKLab/APKLab
42•nateb2022•3h ago•1 comment

How does a screen work?

https://www.makingsoftware.com/chapters/how-a-screen-works
307•chkhd•10h ago•64 comments

A technical look at Iran's internet shutdowns

https://zola.ink/blog/posts/a-technical-look-at-irans-internet-shutdown
102•znano•7h ago•44 comments

Show HN: A Raycast-compatible launcher for Linux

https://github.com/ByteAtATime/raycast-linux
129•ByteAtATime•7h ago•28 comments

Reading Neuromancer for the first time in 2025

https://mbh4h.substack.com/p/neuromancer-2025-review-william-gibson
363•keiferski•16h ago•320 comments

Traditional Chinese Medicine Has Not Been Vindicated by Science

https://www.mcgill.ca/oss/article/medical-critical-thinking-health-and-nutrition/no-traditional-chinese-medicine-has-not-been-vindicated-science
64•mgh2•2h ago•23 comments

The North Korean fake IT worker problem is ubiquitous

https://www.theregister.com/2025/07/13/fake_it_worker_problem/
158•rntn•12h ago•166 comments

How to scale RL to 10^26 FLOPs

https://blog.jxmo.io/p/how-to-scale-rl-to-1026-flops
40•jxmorris12•3d ago•1 comment

C3 solved memory lifetimes with scopes

https://c3-lang.org/blog/forget-borrow-checkers-c3-solved-memory-lifetimes-with-scopes/
86•lerno•2d ago•63 comments

Fine dining restaurants researching guests to make their dinner unforgettable

https://www.sfgate.com/food/article/data-deep-dives-bay-area-fine-dining-restaurants-20404434.php
47•borski•8h ago•96 comments

Axon's Draft One AI Police Report Generator Is Designed to Defy Transparency

https://www.eff.org/deeplinks/2025/07/axons-draft-one-designed-defy-transparency
211•zdw•2d ago•135 comments

The Gottorf Globe and its reconstruction

https://gottorfer-globus.de/en/the-gottorf-globe
13•Archelaos•4h ago•2 comments

Show HN: Learn LLMs LeetCode Style

https://github.com/Exorust/TorchLeet
112•Exorust•11h ago•18 comments

Infisical (YC W23) Is Hiring DevRel Engineers

https://www.ycombinator.com/companies/infisical/jobs/qCrLiJb-developer-relations
1•vmatsiiako•7h ago

Five companies now control over 90% of the restaurant food delivery market

https://marketsaintefficient.substack.com/p/five-companies-now-control-over-90
132•goinggetthem•3h ago•125 comments

Does showing seconds in the system tray actually use more power?

https://www.lttlabs.com/blog/2025/07/11/does-showing-seconds-in-the-system-tray-actually-use-more-power
123•LorenDB•6h ago•110 comments

GLP-1s Are Breaking Life Insurance

https://www.glp1digest.com/p/how-glp-1s-are-breaking-life-insurance
217•alexslobodnik•5h ago•257 comments

Ask HN: How much of OpenAI code is written by AI?

34•growbell_social•3h ago•18 comments

Zig's new I/O: function coloring is inevitable?

https://blog.ivnj.org/post/function-coloring-is-inevitable
24•ivanjermakov•8h ago•5 comments

The upcoming GPT-3 moment for RL

https://www.mechanize.work/blog/the-upcoming-gpt-3-moment-for-rl/
166•jxmorris12•4d ago•66 comments

Local Chatbot RAG with FreeBSD Knowledge

https://hackacad.net/post/2025-07-12-local-chatbot-rag-with-freebsd-knowledge/
63•todsacerdoti•10h ago•4 comments

Monitoring My Homelab, Simply

https://b.tuxes.uk/simple-homelab-monitoring.html
95•Bogdanp•3d ago•30 comments

Understanding Tool Calling in LLMs – Step-by-Step with REST and Spring AI

https://muthuishere.medium.com/understanding-tool-function-calling-in-llms-step-by-step-examples-in-rest-and-spring-ai-2149ecd6b18b
83•muthuishere•14h ago•20 comments

Notes on Graham's ANSI Common Lisp (2024)

https://courses.cs.northwestern.edu/325/readings/graham/graham-notes.html
90•oumua_don17•4d ago•33 comments

The Robot Sculptors of Italy

https://www.bloomberg.com/features/2025-robot-sculptors-marble/
46•helsinkiandrew•3d ago•14 comments

Hypercapitalism and the AI Talent Wars

https://blog.johnluttig.com/p/hypercapitalism-and-the-ai-talent
9•walterbell•3h ago•0 comments

C3 solved memory lifetimes with scopes

https://c3-lang.org/blog/forget-borrow-checkers-c3-solved-memory-lifetimes-with-scopes/
86•lerno•2d ago

Comments

Windeycastle•2d ago
Nice read, although a small section on how it's implemented exactly would've been nice.
hrhrdorhrvfbf•2d ago
Rust’s interface for using different allocators is janky, and I wish it had something like this, or had moved forward with the proposal to make allocators part of a flexible implicit context that gets passed along with function calls.

But mentioning the borrow checker raises an obvious question that I don’t see addressed in this post: what happens if you try to take a reference to an object in the temporary allocator and use it outside of the temporary allocator’s scope? Is that an error? Rust’s borrow checker has no runtime behavior; it only exists to create errors in cases like that, so the title invites the question of how this mechanism handles that case but doesn’t answer it.

lerno•2d ago
A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved – temp allocators aren't using virtual memory on supporting platforms yet), but in safe mode that data will be scratched out with a value; I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.

This is of course not as good as ASAN or a borrow checker, but it interacts very nicely with C.
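
A minimal C sketch of the poison-on-reset idea described here – not C3's actual temp allocator; the names TempArena, temp_alloc and temp_reset are made up for illustration:

  #include <stddef.h>
  #include <string.h>

  typedef struct { char *base; size_t used; } TempArena;

  // Bump-allocate out of the arena (bounds checks omitted for brevity).
  void *temp_alloc(TempArena *a, size_t n) {
      void *p = a->base + a->used;
      a->used += n;
      return p;
  }

  // On reset, scratch out everything that was handed out, so a dangling
  // read sees 0xAA bytes instead of stale-but-plausible data.
  void temp_reset(TempArena *a) {
      memset(a->base, 0xAA, a->used);
      a->used = 0;
  }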

Filligree•9h ago
So, would you say the title overstates its case slightly?
lerno•7h ago
I would say that the title is easily misread. If you open the blog post and just read the title and a few lines into the intro, I think it's clear it's about C3 not having to implement any recently popular language features in order to solve the problem of memory lifetimes for temporary objects as they arise in a language with C-like semantics.

Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but the chance to fix that title has already passed.

It should also be clear from the rest of the blog post that it doesn't try to make any claims that it's a novel technique (it's something that has been around for a long time). What's novel is that it's well integrated into the stdlib.

SkiFire13•5h ago
> C3 not having to implement any recently popular language features in order to solve the problem of memory lifetimes for temporary objects as they arise in a language with C-like semantics.

But you said it yourself in your previous message:

> A dangling pointer will generally still be possible to dereference (this is an implementation detail that might get improved – temp allocators aren't using virtual memory on supporting platforms yet)

So the issue is clearly not solved.

And to be complete about the answer:

> in safe mode that data will be scratched out with a value; I believe we use 0xAA by default. So as soon as this data is used out of scope you'll find out.

I can see multiple issues with this:

- it's only in safe mode

- it's safe only as long as the memory is never used again for a different purpose, which seems to imply that either this is not safe (if it's written again) or that it leaks massive amounts of memory (if it's never written to again)

> Now clearly people are misreading the title when it stands on its own as "borrow checkers suck, C3 has a way of handling memory safety that is much better". That is very unfortunate, but chance to fix that title already passed.

Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"? To me that way of reading seems reasonable, but the title still looks plainly wrong.

Heck, even citing the borrow checker *at all* seems wrong; this is more about RAII than lifetimes (and RAII in Rust is solved with ownership, not the borrow checker).

lerno•4h ago
> So the issue is clearly not solved.

You can use --sanitize=address to get this today, or use the Vmem-based temp allocator (which is only in the 0.7.4 prerelease and only for 64 bit POSIX) if you're curious how it feels and works in practice.

> I can see multiple issues with this:

There is a constant trade-off, and being as safe as possible is obviously great, but there is also the question of performance.

The context matters, though: it's a C-like language, an evolution of C. So it doesn't try to be a completely new language with new semantics, and that creates a lot of constraints.

The "safe-C" C-dialects usually add a lot of additional annotations that don't seem particularly palatable to most developers.

> Am I still misreading the title if I read it as "C3 solves the same issues that the borrow checker solves"?

Yes, I am afraid you are. But that's my fault (since I suggested the title, even though I didn't write the article), and not yours.

Philpax•9h ago
I feel like "solved" is a strong word for what's described here. This works for some - possibly even many - scenarios, but it does not solve memory lifetime in the general case, especially when data from different scopes needs to interact.
hvenev•9h ago
I'm struggling to understand how this has anything to do with borrow checking. Borrow checking is a way to reason about aliasing, which doesn't seem to be a concern here.

This post is about memory management and doesn't seem to be concerned much about safety in any way. In C3, does anything prevent me from doing this:

  fn int* example(int input)
  {
      @pool()
      {
          int* temp_variable = mem::tnew(int);
          *temp_variable = input;
          return temp_variable;
      };
  }
cayley_graph•8h ago
Yes, this has little to nothing to do with borrow checking or memory/concurrency safety in the sense of Rust. Uncharitably, the author appears not to have a solid technical grasp of what they're writing about, and I'm not sure what this says about the rest of the language.
lerno•7h ago
No, that is quite possible. You will not be able to use the memory you just returned, though. What actually happens is an implementation detail, but it ranges from having the memory overwritten (but still writable) on platforms with the least support, to it being neither readable nor writable, to raising an exact error with ASAN enabled. Crashing on every use is often a good sign that there is a bug.
unscaled•7h ago
It might not be on every use though. The assignment could very well be conditional. If a dangling reference could escape from the arena in which it was allocated, you cannot claim to have memory safety. You can claim that the arena prevents memory leaks (if you remember to allocate everything correctly within the arena), but it doesn't provide memory safety.
lerno•7h ago
Memory safety as in the full toolset that Rust provides? C3 clearly doesn't, I fully agree.
ameliaquining•9h ago
"No more [...] slow compile times with complex ownership tracking."

Presumably this is referring to Rust, which has a borrow checker and slow compile times. The author is, I assume, under the common misconception that these facts are closely related. They're not; I think the borrow checker runs in linear time, though I can't find confirmation of this, and in any event profiling reveals that it accounts for only a small fraction of compile times. Rust compile times are slow because the language has a bunch of other, non-borrow-checking-related features that trade off compilation speed for other desiderata (monomorphization, LLVM optimization, procedural macros, crates as a translation unit). Also because the rustc codebase is huge, fairly arcane, and well understood by relatively few people; while there's a lot of room for improvement in principle, it's mostly not low-hanging fruit but would require major architectural changes, and therefore a large investment of resources that no one has put up.

unscaled•7h ago
I know very little about how rustc is implemented, but watching what kinds of things make Rust compile times slower, I tend to agree with you. The borrow checker rarely seems to be the culprit here. Compile time tends to spike on exactly the things you've mentioned: procedural macro use, heavy use of generics (monomorphization), and release builds (optimization).

There are other legitimate criticisms you can raise against the Rust borrow checker, such as cognitive load and the higher cost of refactoring, but the compilation speed argument is just baseless.

SkiFire13•6h ago
Procedural macros are not really _that_ slow themselves; the issue is more that they tend to generate enormous amounts of code that then has to be compiled, and _that_'s slow.
ameliaquining•5h ago
Also the procedural macro library itself and all of its dependencies have to be compiled. Though this only really affects initial builds, as the library can be cached on subsequent ones.
josh11b•7h ago
https://learning-rust.github.io/docs/lifetimes/

> Lifetime annotations are checked at compile-time. ... This is the major reason for slower compilation times in Rust.

This misconception is being perpetuated by Rust tutorials.

estebank•6h ago
On the phone, so I can't now, but someone should file a ticket to that project about that error: https://github.com/learning-rust/learning-rust.github.io/iss...

Be aware that it is not part of the rust-lang organization; it's a third party.

JoshTriplett•2h ago
https://github.com/learning-rust/learning-rust.github.io/pul...
UncleMeat•9h ago
The core benefit of the borrow checker is not "make sure to remember to clean up memory to avoid leaks." The core benefits are "make sure that you can't access memory after it has been destroyed" and "make sure that you can't mutate something that somebody else needs to be constant." This is fundamentally a statement about the relationship between many objects, which may have different lifetimes and which are allocated in totally different parts of the program.

Lexically scoped lifetimes don't address this at all.

lerno•7h ago
Well, the title (which is poorly worded as has been pointed out) refers to C3 being able to implement good handling of lifetimes for temporary allocations by baking it into the stdlib. And so it doesn't need to reach for any additional language features. (There is for example a C superset that implements borrowing, but C3 doesn't take that route)

What the C3 solution DOES provide is a way to detect at runtime when an already freed temporary allocation is used. That's of course not the level of compile-time checking that Rust does. But then Rust has a lot more in the language in order to support this.

Conversely, C3 does have contracts as a language feature, which Rust doesn't have, so C3 is able to do static checking with the contracts to reject contract violations at compile time – something that runtime contracts, like those some Rust crates provide, can't do.

SkiFire13•6h ago
> What the C3 solution DOES provide is a way to detect at runtime when an already freed temporary allocation is used.

The article makes no mention of this, so in the context of the article the title remains very wrong. I could also not find a page in the documentation claiming this is supported (though I have to admit I did not read all the pages), nor an explanation of how this works, especially in relation to the performance hit it would incur.

> C3 is able to do static checking with the contracts to reject contract violations at compile time

I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts. Even worse, violating them when not using safe mode results in "unspecified behaviour", but really it's undefined behaviour (violating contracts is even on their list of undefined behaviour! [2])

[1]: https://c3-lang.org/language-common/contracts/

[2]: https://c3-lang.org/language-rules/undefined-behaviour/#list...

lerno•5h ago
> The article makes no mention of this, so in the context of the article the title remains very wrong

The temp allocator implementation isn't guaranteed to detect it, and the article doesn't go into implementation details and guarantees (which is good, because capabilities will be added on the road to 1.0).

> I tried searching for how these contracts work on the C3 website [1] and there seems to be no guaranteed static checking of such contracts.

No, there is no guarantee at the language level, because providing one would make a conforming implementation of the compiler harder than it needs to be. In addition, setting exact limits may hamper innovation in compilers that wish to add more analysis but would then hesitate to reject code that can be statically known to violate contracts.

At higher optimizations, the compiler is allowed to assume that the contracts evaluate to true. This means that code like `assert(i == 1); if (i != 1) return false;` can be reduced to a no-op.

So the danger here, then, is if you rely on the function giving you a valid result even when the input is not one that the function is supposed to work with.

And yes, it will be optional to have those "assumes" inserted.

Already today, in the current compiler, something trivial like passing `foo(0)` to a function that requires the parameter to be > 1 is caught at compile time. And it's not doing any real analysis yet, but that will definitely happen.
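
A rough C analogue of "the compiler is allowed to assume the contract holds", using GCC/Clang's __builtin_unreachable – only an illustration of the optimization effect, not of how C3 actually emits contracts:

  // If "i == 1" is assumed, the later check is provably dead and the
  // whole function can fold down to "return 1".
  int f(int i) {
      if (i != 1) __builtin_unreachable();  // contract treated as an assumption
      if (i != 1) return 0;                 // unreachable under that assumption
      return 1;
  }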

UncleMeat•1h ago
Just my opinion, but I think that having contracts that might be checked is a really really really dangerous approach. I think it is a much better idea to start with a plan for what sorts of things you can check soundly and only do those. "Well we missed that one because we only have intraprocedural constant propagation" is not going to be the sort of thing most users understand and will catch people by surprise.
fanf2•4h ago
> What the C3 solution DOES provide is a way to detect at runtime when an already freed temporary allocation is used.

I looked at the allocator source code and there’s no use-after-free protection beyond zeroing on free, and that is in no way sufficient. Many UAF security exploits work by using a stale pointer to mutate a new allocation that re-uses memory that has been freed, and zeroing on free does nothing to stop these exploits.
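
A tiny C sketch of that failure mode, using a hypothetical bump allocator: zeroing at reset time does nothing to stop a stale pointer from scribbling over whatever reuses the memory next.

  #include <stddef.h>
  #include <string.h>

  static char buf[1024];
  static size_t used = 0;

  static void *bump(size_t n) { void *p = buf + used; used += n; return p; }
  static void reset(void)     { memset(buf, 0, used); used = 0; }  // "zero on free"

  int main(void) {
      int *stale = bump(sizeof(int));
      reset();                         // stale now dangles; its memory was zeroed
      int *fresh = bump(sizeof(int));  // the same bytes are handed out again
      *stale = 42;                     // a write through the dangling pointer...
      return *fresh;                   // ...corrupts the new allocation (returns 42)
  }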

cogman10•9h ago
I really do not see the benefit of this over C++ destructors and/or facilities like `unique_ptr` and `shared_ptr`.

@pool appears to be exactly what C++ does automatically when objects fall out of scope.

vineethy•8h ago
My first thoughts also
sirwhinesalot•8h ago
The advantage is that the allocations are grouped: they're allocated in the same memory region (good memory locality) and freed in bulk. The tradeoff is needing to explicitly create these scopes and not being able to have custom deallocation logic like you can in a destructor.

(This doesn't seem to have anything to do with borrow checking though, which is a memory safety feature, not a memory management feature. Rust manages memory with affine types, which is a completely separate thing; you could write an entire program without a single reference if you really wanted to)
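
For anyone who hasn't used one, a bare-bones C arena illustrating the grouping-and-bulk-free idea from the comment above (the names are illustrative and error handling is minimal):

  #include <stdlib.h>
  #include <stddef.h>

  typedef struct { char *base; size_t cap, used; } Arena;

  Arena arena_new(size_t cap) { return (Arena){ malloc(cap), cap, 0 }; }

  void *arena_alloc(Arena *a, size_t n) {
      if (a->used + n > a->cap) return NULL;  // out of space
      void *p = a->base + a->used;            // adjacent to earlier allocations
      a->used += n;
      return p;
  }

  void arena_free_all(Arena *a) { free(a->base); *a = (Arena){0}; }  // one free for everything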

ameliaquining•8h ago
You can also do those things in an RAII language with an arena library. Is the complaint just that it's too syntactically verbose?
jdcasale•8h ago
I am also struggling to see the difference between this and language-level support for an arena allocator with RAII.
lerno•7h ago
You can certainly do it with RAII. However, what if a language lacks RAII because it prioritizes explicit code execution? Or simply wants to retain simple C semantics?

Because that is the context. It is the constraint that C3, C, Odin, Zig, etc. maintain, where RAII is out of the question.

cogman10•8h ago
It seems like exactly the same verbosity as what you'd do with a custom allocator.

I think the only real saving grace is that you don't have to pass around the allocator. But then you run into the issue where anyone allocating needs to know about the lifetimes of the caller's pool. If A -> B (pool) -> C and the returned allocation of C ends up in A, you potentially have a pointer to freed memory.

Sending around the explicit allocator would allow C to choose when it should allocate globally and when it should allocate on the pool sent in.

sirwhinesalot•8h ago
I think the point is that it is the blessed/default way of doing things, rather than opt-in, as in C++ or Rust.

Rust doesn't even have a good allocator interface yet, so libraries like bumpalo have a parallel implementation of some stdlib types.

lerno•7h ago
The benefit is that (a) it works in a language without RAII, which C-like languages usually do not have, (b) there are no individual heap allocations and frees, and (c) allocations are grouped together.
littlestymaar•7h ago
> (a) works in a language without RAII

I'm confused: how is it not exactly RAII?

lerno•7h ago
Well, there are no objects, no constructors and no destructors.
bbminner•8h ago
Ok, now give me an example of a resource manager (e.g. in a game) that has methods for loading resources into memory and also for releasing such resources – all of a sudden, if a system needs to give away pointer access to its buffers, things become more complicated and arena allocators are not enough.
lerno•7h ago
I am not sure how this would be a problem. Certainly the resource manager should manage the memory itself in some manner.

It has very little to do with trying to manage temporary memory lifetimes.

Calavar•7h ago
For that scenario you can use a pool allocator backed by a fixed size allocation from an arena. That gives you the flexibility to allocate and free resources on the fly, but with a fixed upper limit to the lifetime (e.g. the lifetime of the level or chunk). Once you're ready to unload a level or a chunk, you can rewind the arena, which is a very cheap operation (as opposed to calling free in a loop, which can be expensive if the free implementation tries to defragment the freelist)
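
The rewind Calavar describes is essentially just restoring a saved offset; a C sketch with made-up names (Arena, arena_save, arena_rewind):

  #include <stddef.h>

  typedef struct { char *base; size_t used; } Arena;
  typedef size_t ArenaMark;

  ArenaMark arena_save(const Arena *a)          { return a->used; }
  // O(1): everything allocated since the mark is "freed" at once.
  void      arena_rewind(Arena *a, ArenaMark m) { a->used = m; }

Unloading a level then becomes a single rewind back to the mark taken at load time, instead of a loop of individual frees.
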
smcameron•8h ago
Seems overly simplistic and doesn't seem to cover extremely common cases such as a thread allocating some memory then putting it into a queue to be consumed by other threads which then eventually free the memory, or any allocation lifetime that isn't simply the scope of the enclosing block.
lerno•7h ago
Well the latter is covered: you can make temp allocations out of order when using nested "@pool"s. There are examples in the blog post.

It doesn't solve the case when lifetimes are indeterminate. But often they are well known. Consider "foo(bar())" where "bar()" returns an allocated object that we wish to free after "foo" has used it. In something like C it's easy to accidentally leak such a temporary object, and doing it properly means several lines of code, which might be bad if it's intended for an `if` statement or `while`.

ltbarcly3•8h ago
This literally doesn't solve any actual problems. If all memory allocation patterns were lexical, this would be the easiest and most obvious thing to do. That is why stack allocation is the default and works exactly like this.
amelius•8h ago
Well, it solves the problem of destructors/deallocation wasting a lot of time.
lerno•7h ago
Imagine we have a function "foo" which returns an allocated object Bar; we want to pass this to a function "bar" and then have it released.

Now we usually cannot do "bar(foo())" because it then leaks. We could allocate a buffer on the stack, and then do "bar(foo(&buffer))", but this relies on us safely knowing that the buffer does not overflow.

If the language has RAII, we can use that to return an object which will release itself after going out of scope e.g. std::unique_ptr, but this relies on said RAII and preferably move semantics.

If the context is RAII-less semantics, this is not trivial to solve. Languages that run into this are C3, Zig, C and Odin.

With the temp allocator solution, we can write `bar(foo())` if `foo` always allocates a temp variable, or `bar(foo(tmem))` if it takes an allocator.
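
To make the `bar(foo())` point concrete, here is a small C comparison with made-up helpers (foo_malloc, foo_scratch, Scratch): with malloc the temporary has to be named and freed by hand, while with a caller-provided scratch arena the nested-call form stays an expression. This is a sketch of the pattern, not of C3's temp allocator.

  #include <stdio.h>
  #include <stdlib.h>

  // malloc version: bar(foo_malloc(7)) would leak, so the temporary must be named.
  char *foo_malloc(int x) {
      char *s = malloc(32);
      snprintf(s, 32, "value=%d", x);
      return s;
  }

  // Scratch-arena version: the caller owns the arena and releases it in bulk.
  typedef struct { char buf[4096]; size_t used; } Scratch;

  char *foo_scratch(Scratch *t, int x) {
      char *s = t->buf + t->used;   // bounds checks omitted for brevity
      t->used += 32;
      snprintf(s, 32, "value=%d", x);
      return s;
  }

  void bar(const char *s) { puts(s); }

  int main(void) {
      char *tmp = foo_malloc(7);    // must be named...
      bar(tmp);
      free(tmp);                    // ...and freed by hand

      Scratch t = { .used = 0 };
      bar(foo_scratch(&t, 7));      // nested call stays an expression
      t.used = 0;                   // end of "scope": everything released at once
      return 0;
  }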

ltbarcly3•2h ago
Wait, you are implying this is some kind of algorithmic 'solution' to a long-standing problem. It's not. This is notable because it's an implementation that works in C++. The 'concept' of tracking allocations in a lexical way is trivially obvious.
amelius•8h ago
Smart compilers already do this with escape analysis.
lerno•7h ago
No, I don't think they do.

Given a function `foo` that allocates an object "o" and returns it to the upper scope, how would you do "escape analysis" to determine that it should be freed and HOW it should be freed? What is the mechanism if you do not have RAII, ARC or GC?

throwawaymaths•7h ago
You track how the variable is used in the compilation unit, which should have a finite set of possibilities?
lerno•7h ago
This is about tracking allocated memory, which is different. I know V claimed it could solve this with static analysis, but in practice it didn't work and it had to fall back to a GC.

This is true for all similar schemes: they have something that works for simple-to-track allocations, and then have to fall back on something generic.

But even that usually assumes that the language has a built-in notion of memory allocation and freeing.

dnautics•6h ago
It should be possible in Zig! Here's a proof of concept; I would guess that if V failed, it was because they tried to do it at the language level. If you analyse intermediate representations the work is much, much easier.

https://youtu.be/ZY_Z-aGbYm8?feature=shared

turnsout•7h ago
Is this different from NSAutoreleasePool, which has been around for over 30 years?
sirwhinesalot•7h ago
Implementation-wise, yes, very different; idea-wise, not really. The author of C3 is a fan of Objective-C.
lerno•7h ago
NSAutoreleasePool keeps a list of autoreleased objects, that are given a "release" message when the pool goes out of scope.

`@pool` flushes the temp allocator and all allocations made by the temp allocator are freed when the pool goes out of scope.

There are similarities, but NSAutoreleasePool is for refcounting and an object released by the autoreleasepool might have other objects retaining it, so it's not necessarily freed.

timeon•7h ago
I don't think technical writing needs this kind of rage-bait. They could have presented just the features of the language. The borrow checker is clearly unrelated here.
ac130kz•7h ago
The post doesn't even mention how it works or improves DX in a multi-threaded environment; borrow checkers specifically target that use case.
unscaled•7h ago
The post's title is quite hyperbolic and I don't think it does the topic justice.

Memory arenas/pools have been around for ages, and binding arenas to a lexical scope is also not a new concept. C++ was doing this with RAII, and you could implement this in Go with defer and in other languages by wrapping the scope with a closure.

This post discusses how arenas are implemented in C3 and what they're useful for, but as other people have said, it doesn't make sense to compare arenas to reference counting or a borrow checker. Arenas make memory management simpler in many scenarios, and greatly reduce (but don't necessarily eliminate – without other accompanying language features) the chances of a memory leak. But they contribute very little to memory safety and they're not nearly as versatile as a full-fledged borrow checker or reference counting.

rq1•6h ago
What core type theory is C3 actually built on?

The blog claims that @pool "solves memory lifetimes with scopes" yet it looks like a classic region/arena allocator that frees everything at the end of a lexical block… a technique that’s been around for decades.

Where do affine or linear guarantees come in?

From the examples I don’t see any restrictions on aliasing or on moving data between pools, so how are use‑after‑free bugs prevented once a pointer escapes its region?

And the line about having "solved memory management" for total functions: bravo indeed…

Could you show a non‑trivial case – say, a multithreaded game loop where entities span multiple frames, or a high‑throughput server that streams chunked responses – where @pool eliminates a leak that an ordinary arena allocator wouldn't?

sirwhinesalot•6h ago
It is unfortunate that the title mentions borrow checking which doesn't actually have anything to do with the idea presented. "Forget RAII" would have made more sense.

This doesn't actually do any compile-time checks (it could, but it doesn't). It will do runtime checks on supported platforms by using page protection features eventually, but that's not really the goal.

The goal is actually extremely simple: make working with temporary data very easy, which is where most memory management messes happen in C.

The main difference between this and a typical arena allocator is the clearly scoped nature of it in the language. Temporary data that is local to the function is allocated in a new @pool scope. Temporary data that is returned to the caller is allocated in the parent @pool scope.

Personally I don't like the precise way this works too much because the decision of whether returned data is temporary or not should be the responsibility of the caller, not the callee. I'm guessing it is possible to set the temp allocator to point to the global allocator to work around this, but the callee will still be grabbing the parent "temp" scope which is just wrong to me.
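
A rough C analogue of the "temporary data that is returned to the caller is allocated in the parent scope" pattern described above, with made-up names (Arena, arena_alloc, squares) – a sketch of the idea, not of how @pool is implemented:

  #include <stddef.h>
  #include <string.h>

  typedef struct { char buf[1 << 12]; size_t used; } Arena;

  void *arena_alloc(Arena *a, size_t n) {
      void *p = a->buf + a->used;   // bounds checks omitted for brevity
      a->used += n;
      return p;
  }

  // The result is allocated from the caller's (parent) arena; purely local
  // scratch lives in an inner arena that disappears when the function returns.
  int *squares(Arena *parent, int count) {
      Arena local = { .used = 0 };
      int *tmp = arena_alloc(&local, count * sizeof(int));
      for (int i = 0; i < count; i++) tmp[i] = i * i;

      int *out = arena_alloc(parent, count * sizeof(int));
      memcpy(out, tmp, count * sizeof(int));   // survives: it lives in the parent
      return out;
  }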

Sesse__•2h ago
> "Forget RAII" would have made more sense.

For memory only, which is one of the simplest kinds of resource. What about file descriptors? Graphics objects? Locks? RAII can keep track of all of those. (So does refcounting, too, but tracing GC usually not.)

caim•6h ago
Funny thing is that malloc also behaves like an arena: when your program starts, malloc reserves a lot of memory, and when your program ends, all of this memory is released. A memory leak, in the end, is not a memory safety problem.

So you will still need a borrow checker, for the same reasons Rust needs one and C/C++ needed one.

codedokode•2h ago
I had never heard of this language, so I quickly looked through the docs and here is what I didn't like:

- integers use names like "short" instead of names with numbers like "i16"

- they use printf-like formatting functions instead of Python's f-strings

- it seems that there is no exception in case of integer overflow or floating point errors

- it seems that there is no pointer lifetime checking

- functions are public by default

- "if" statement still requires parenthesis around boolean expression

Also I don't think scopes solve the problem when you need to add and delete objects, for example, in response to requests.

Alifatisk•1h ago
Wow, this is such a fascinating concept. The syntax keeps reminding me of @autoreleasepool from ObjC. I'll definitely try this out on a small project soon.

Also, since D usually implements all kinds of concepts and mechanisms from other languages, I would love to see this implemented there as well! D already has a borrow checker now, so why not also add this? It would be very cool to play with.