
Ask HN: What if a language's structure determined memory lifetime?

4•stevendgarcia•23h ago
I’ve been exploring a new systems-language design built around a single hard rule:

Data lives exactly as long as the lexical scope that created it.

Outer scopes can never retain references to inner allocations.

There is no GC.

No traditional Rust-style borrow checker.

No hidden lifetimes.

No implicit reference counting.

When a scope exits, everything allocated inside it is freed deterministically.

---

Here’s the basic idea in code:

    fn handler() {
        let user = load_user()      // task-scoped allocation
        CACHE.set(user)             // compile error: escape from inner scope
        CACHE.set(user.clone())     // explicit escape
    }

If data needs to escape a scope, it must be cloned or moved explicitly.

The compiler enforces these boundaries at compile time. There are no runtime lifetime checks.

Memory management becomes a structural invariant. Instead of the runtime tracking lifetimes, the program structure makes misuse unrepresentable.
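
For comparison, the closest everyday analogue is a Rust cache that only stores owned values: a reference to a scope-local can never satisfy the cache's lifetime, so the only way out is an explicit clone or move. This is just an analogy (my model is lexical containment, not borrow checking), with CACHE and User as stand-in names:

    // Rough Rust analogy: the cache holds owned values, so scope-local data
    // can only escape by an explicit clone (or move), never by reference.
    use std::sync::Mutex;

    #[derive(Clone)]
    struct User {
        name: String,
    }

    static CACHE: Mutex<Vec<User>> = Mutex::new(Vec::new());

    fn load_user() -> User {
        User { name: "ada".into() }
    }

    fn handler() {
        let user = load_user();                   // lives in this scope
        CACHE.lock().unwrap().push(user.clone()); // explicit copy escapes
    }                                             // `user` is freed here

The difference is that Rust gets there via lifetimes; in my model the same rule falls out of lexical structure alone.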

Concurrency follows the same containment rules.

    fn fetch_all(ids: [Id]) -> Result<[User]> {
        parallel {
            let users = fetch_users(ids)?
            let prefs = fetch_prefs(ids)?
        }
        merge(users, prefs)
    }

If any branch fails, the entire parallel scope is cancelled and all allocations inside it are freed deterministically.

This is structured concurrency in the literal sense: when a parallel scope exits (success or failure), its memory is cleaned up automatically.
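
If you want something you can run today, Rust's std::thread::scope approximates the containment half of this. It is only a rough analogue (it returns a plain value rather than a Result, and it won't cancel the sibling branch on failure), and fetch_users / fetch_prefs below are stand-ins, but it shows the property I care about: the scope cannot return until every branch has joined, and branch-local data cannot outlive it.

    // Rough approximation with Rust's scoped threads: both branches must
    // finish before the scope returns, and their locals die with the scope.
    use std::thread;

    fn fetch_users(ids: &[u32]) -> Vec<String> {
        ids.iter().map(|id| format!("user-{id}")).collect()
    }

    fn fetch_prefs(ids: &[u32]) -> Vec<String> {
        ids.iter().map(|id| format!("prefs-{id}")).collect()
    }

    fn fetch_all(ids: &[u32]) -> Vec<(String, String)> {
        let (users, prefs) = thread::scope(|s| {
            let users = s.spawn(|| fetch_users(ids));
            let prefs = s.spawn(|| fetch_prefs(ids));
            (users.join().unwrap(), prefs.join().unwrap())
        }); // the scope guarantees both threads have exited here
        users.into_iter().zip(prefs).collect() // merge
    }

The `parallel` block above adds the missing piece: cancellation plus deterministic freeing of everything the failed branches allocated.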

Failure and retry are also explicit control flow, not exceptional states:

    let result = restart {
        process_request(req)?
    }

A restart discards the entire scope and retries from a clean slate.

No partial state.

No manual cleanup logic.
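
You can mimic the discard-and-retry shape in ordinary code with a loop whose body owns all attempt-local state. A rough Rust sketch, with process_request and Request as stand-ins (in the language itself, restart is a construct, not a loop you write by hand):

    // Rough sketch: each attempt allocates only inside the loop body, so a
    // failed attempt drops everything it built and the next one starts clean.
    struct Request {
        payload: String,
    }

    fn process_request(req: &Request) -> Result<String, String> {
        if req.payload.is_empty() {
            Err("empty payload".to_string())
        } else {
            Ok(format!("handled: {}", req.payload))
        }
    }

    fn handle_with_restart(req: &Request, max_attempts: u32) -> Result<String, String> {
        let mut last_err = String::new();
        for _ in 0..max_attempts {
            // Attempt-local state lives only inside this block.
            match process_request(req) {
                Ok(out) => return Ok(out), // promote the result to the caller
                Err(e) => last_err = e,    // everything else is dropped here
            }
        }
        Err(last_err)
    }

The restart construct does the same thing structurally, with the scope boundary deciding what gets discarded.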

---

Why I think this is meaningfully different:

The model is built around containment, not entropy. Certain unsafe states are prevented not by convention or discipline, but by structure.

This eliminates:

* Implicit lifetimes and hidden memory management

* Memory leaks and dangling pointers (the scope is the owner)

* Shared mutable state across unrelated lifetimes

If data must live longer than a scope, that fact must be made explicit in the code.

---

What I’m trying to learn at this stage:

1. Scalability. Can this work for long-running, high-performance servers without falling back to GC or pervasive reference counting?

2. Effect isolation. How should I/O and side effects interact with scope-based retries or cancellation?

3. Generational handles. Can this replace traditional borrowing without excessive overhead?

4. Failure modes. Where does this model break down compared to Rust, Go, or Erlang?

5. Usability. What common patterns become impossible, and are those useful constraints or deal-breakers?

---

Some additional ideas under the hood, still exploratory:

* Structured concurrency with epoch-style management (no global atomics)

* Strictly pinned execution zones per core, with lock-free allocation

* Crash-only retries, where failure always discards the entire scope

---

But the core question comes first:

Can a strictly scope-contained memory model like this actually work in practice, without quietly reintroducing GC or traditional lifetime machinery?

NOTE: This isn’t meant as “Rust but different” or nostalgia for old systems.

It’s an attempt to explore a fundamentally different way of thinking about memory and concurrency.

I’d love critical feedback on where this holds up — and where it collapses.

Thanks for reading.

Comments

chrisjj•23h ago
Great work! I look forward to the responses.
stevendgarcia•22h ago
Thanks! Appreciate the feedback. There are a couple of replies here that sparked some interesting angles—looking forward to diving deeper into those and seeing where the discussion goes.
eimrine•23h ago
J has some of this approach, but it was made mostly for math, so it isn't optimized for CRUD apps.
stevendgarcia•22h ago
That’s an interesting comparison.

I agree J aligns philosophically (values over references), and you're right that it feels more optimized for pure mathematical work rather than managing long-lived, mutable state in concurrent services. What I’m exploring is whether a model like this can provide similar benefits in CRUD-heavy systems without needing GC or manual memory management.

If you’ve seen J used effectively in that space, I’d love to hear more about it.

eimrine•19h ago
No, I haven't, but note that this scripting language has no GC at all, so the theoretical part of your proposal is possible. It just needs a Kenneth Iverson-level programmer to continue the work in the array-programming paradigm.

It may not be efficient at all for rich types and structs, because a stack language is the earliest approach, and its advantages come from the ability to build a single-pass compiler. If your requirements don't fit a single-pass approach, you're going to have a really hard time figuring out what needs to be reclaimed, and when.

andyjohnson0•22h ago
I once worked for about a decade with a body of server-side C code that was written like this. Almost every data structure was either statically allocated at startup or on the stack. I inherited the codebase and kept the original style, once I'd got my head around it.

Positives were that it made the code very easy to reason about, and my impression was that it made it reliable - ownership of data was mostly obvious, and it was hard to (for example) mistakenly use a data structure after it had been free'd. Memory usage under load was very predictable.

Downsides were that data structures (such as string buffers) had to be sized for the max use-case, and code changes had to be hammered into a basically hierarchical data model. It was also hard to incorporate third-party library code - leading to it having its own http and smtp handling, which wasn't great. Some of that might be a consequence of the choice of base language though.

stevendgarcia•22h ago
This is a really helpful data point, thanks for sharing it.

What you're describing aligns pretty closely with the behavior I'm trying to achieve—predictable ownership, clear memory lifetimes, and fewer “how did this get freed?” bugs. The downsides you mentioned (like sizing buffers for the worst case, being stuck with a rigid hierarchy, and friction with third-party libraries) are exactly the areas I'm aiming to address.

The difference with what I'm cooking up is: by using lexical scopes with cheap arenas, we can preserve most of that reasoning without the rigid static tree structure. Scopes are flexible and explicit, and you can nest, retry, and promote memory between them without hard-coding everything upfront.

That said, I don't think it completely resolves the ecosystem issues you ran into. If anything, it just makes the boundaries clearer.

If you don't mind me asking, did you run into any specific pain points with refactors that were difficult because of the memory model, or was it more of a cultural constraint?

Also, did this experience influence how you built things afterwards? Where did you land in terms of language/stack?

stevendgarcia•22h ago
Your comment got my wheels turning, so... quick follow-up.

Since you lived with this for such a long stretch, I'd love your gut reaction to the specific escape hatches I'm building in to avoid the rigidity trap:

1. Arenas grow, not fixed:

Unlike stack frames, the arenas in my model can expand dynamically. So it's not "size for worst case"—it's "grow as needed, free all at once when scope ends." A request handler that processes 10 items or 10,000 items uses the same code; the arena just grows (a rough Rust sketch of the arena behaviour appears after this list).

2. Handles for non-hierarchical references:

When data genuinely needs to outlive its lexical scope or be shared across the hierarchy, you get a generational handle:

    let handle = app_cache.store(expensive_result)
    // handle can be passed around, stored, retrieved later
    // data lives in app scope, not request scope

The handle includes a generation counter, so if the underlying scope dies, dereferencing returns None instead of a use-after-free (a rough Rust sketch of the handle mechanism appears after this list).

3. Explicit clone for escape:

If you need to return data from an inner scope to an outer one, you say `clone()` and it copies to the caller's arena. Not automatic, but not forbidden either.

4. The hierarchy matches server reality:

    App (config, pools, caches)
    └── Worker (thread-local state)
        └── Task (single request)
            └── Frame (loop iteration)

For request/response workloads, this isn't an artificial constraint—it's how the work actually flows. The memory model just makes it explicit.
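
To make point 1 concrete, here's a rough sketch in plain Rust, purely as an illustration (the real implementation isn't Rust, and this Arena hands out indexes rather than pointers): it grows as needed and frees everything in one shot when it goes out of scope.

    // Illustration only: an index-based arena that grows on demand and
    // frees everything at once when it goes out of scope.
    struct Arena<T> {
        items: Vec<T>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { items: Vec::new() }
        }

        // Allocate into the arena and return a scope-local id.
        fn alloc(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1
        }

        fn get(&self, id: usize) -> &T {
            &self.items[id]
        }
    }

    fn handle_request(n: usize) {
        let mut arena = Arena::new(); // per-request scope
        let ids: Vec<usize> = (0..n)
            .map(|i| arena.alloc(format!("item-{i}")))
            .collect();
        for id in &ids {
            let _item = arena.get(*id); // use the data within the scope
        }
    } // arena dropped here: 10 or 10,000 allocations, freed together

Rust's borrow checker is doing extra policing here that the model handles structurally, but the lifetime shape is the same: grow freely inside the scope, free all at once at scope exit.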
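
And for point 2, a minimal sketch of the generational-handle mechanism, again in plain Rust purely for illustration (Store, Slot, and Handle are made-up names, not the actual API): a handle carries a slot index plus a generation, and a stale handle dereferences to None instead of dangling.

    // Illustration only: generational handles over a slot store.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct Handle {
        index: usize,
        generation: u64,
    }

    struct Slot<T> {
        generation: u64,
        value: Option<T>,
    }

    struct Store<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Store<T> {
        fn new() -> Self {
            Store { slots: Vec::new() }
        }

        // Store a value, reusing a freed slot if one exists, and tag the
        // returned handle with that slot's current generation.
        fn store(&mut self, value: T) -> Handle {
            if let Some(index) = self.slots.iter().position(|s| s.value.is_none()) {
                let slot = &mut self.slots[index];
                slot.value = Some(value);
                Handle { index, generation: slot.generation }
            } else {
                self.slots.push(Slot { generation: 0, value: Some(value) });
                Handle { index: self.slots.len() - 1, generation: 0 }
            }
        }

        // Dereference: a handle whose slot has been cleared or reused
        // observes None instead of a use-after-free.
        fn get(&self, handle: Handle) -> Option<&T> {
            let slot = self.slots.get(handle.index)?;
            if slot.generation == handle.generation {
                slot.value.as_ref()
            } else {
                None
            }
        }

        // Clear the slot and bump its generation, invalidating old handles.
        fn remove(&mut self, handle: Handle) {
            if let Some(slot) = self.slots.get_mut(handle.index) {
                if slot.generation == handle.generation {
                    slot.value = None;
                    slot.generation += 1;
                }
            }
        }
    }

Invalidation is just "bump the generation," so there's no tracing and no per-handle bookkeeping beyond the counter.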

Where I think it still gets awkward:

* Graph structures with cycles (need handles, less ergonomic than GC)

* FFI with libraries expecting malloc/free (planning an `unmanaged` escape hatch)

* Long-running mutations without periodic scope resets (working on incremental reclamation)

Do you think this might address the pain you experienced, or am I missing something? Particularly curious whether the handle mechanism would have helped with the cases where you had to hammer code into the hierarchy.

andyjohnson0•21h ago
There is a lot in what you describe that goes substantially beyond what I had in that codebase. It was basically a set of idioms with some helper code for a few common functions. Having an opinionated, predefined hierarchy is a good approach - there were concepts similar to your app/worker/task in the codebase I dealt with, although the equivalent of worker and task were both (kind of, it's been a few years) situated below app.

In the code I mentioned, a lot of use was made of multi-level arrays of structs, with functions being passed a pointer to a root data structure and one or more array indexes. This made function argument validation somewhat better than just checking for null pointers, as array sizes were mostly stored in their containing struct or were constant. I don't know if that corresponds to your 'handle' concept, but I suspect you're doing something more general-purpose.

There were simple reader/writer functions for DTOs (which were mostly stored in arrays) but no idea of an ORM.

Escaping using clone seems sound. The ability to expand scopes seems (if I understand it) powerful, but perhaps makes reasoning about the dynamic behaviour of the code harder. Having some kind of observability around this may help.

Refactoring wasn't a huge problem. The codebase was basically a statically-linked monolith, so dependencies were simplified. I think that having an explicit way to indicate architecture boundaries might be useful.

Overall, I suspect that if there are limitations with your approach then it may be that, while it simplifies 80-90% of a problem, the remainder is hard to fit into the architectural framework. Dogfooding some production-level applications should help.

Good luck. What you're doing is fascinating, and I hope you'll update HN with your progress.

chrisjj•19h ago
> it was hard to (for example) mistakenly use a data structure after it had been free'd

Hard? Why not impossible?

andyjohnson0•17h ago
I distrust absolutes.
chrisjj•15h ago
OK, so how was it possible?
AnimalMuppet•13m ago
"Not proven to be impossible" != "known how it is possible".
tacostakohashi•20h ago
I'm not sure this needs to be its own language.

In C/C++, this can be done by just not using malloc() or new.

You can get an awfully long way in C with only stack variables (or even no variables, functional style). You can get a little bit further with variable length arrays, and alloca() added to the mix.

With C++, you have the choice of stack, or raw new/delete, or unique_ptr, or shared_ptr / reference counting. I think this "multi-paradigm" approach works pretty well, but of course it's complicated, and lots of people mess it up.

I think, with well-designed C/C++, 90+% of things can be on the stack, and dynamic allocation can be very much the exception.

I've been switching back and forth across C/C++/Java for the past few months. The more I think about it, the more ridiculous/pathological the Java approach seems: every object is dynamically allocated, and it's impossible to create an object that isn't on the heap.

I think the main problem is kind of a human one: people see/learn about dynamic allocation, shared_ptr, etc., and it becomes a hammer, everything looks like a nail, and they forget the option of just using stack variables, or more generally doing the simplest thing that will work.

Maybe some kind of language where doing dumb things is an error would be good. For example, in C++, new and delete in the same scope would be an error, because it could have been a stack variable, just like unreachable code is an error in Java.

stevendgarcia•20h ago
Great feedback!

You’re absolutely right — C and C++ give you the primitives to do this manually. If every developer followed the “stack first, heap only when necessary” discipline, and carefully used unique_ptr or avoided new/delete when possible, you could achieve much of the same safety and determinism.

The difference I’m aiming for is that these constraints aren’t optional — they’re baked into the language and compiler. You don’t rely on every developer making the right choice; instead, the structure of the code itself enforces ownership and lifetime rules.

So in your terms, instead of “doing dumb things is an error,” it’s structurally impossible to do dumb things in the first place. The language doesn’t just punish mistakes with foot-guns; it makes the safe path the only path.

This also opens up other possibilities that are really awkward in C/C++, like structured concurrency with deterministic memory cleanup, restartable scopes, and safe parallel allocations, without relying on GC or heavy reference counting.

I’d be curious: if C++ had a compiler that made stack-first allocation the default and forbade escapes unless explicit, would that solve most of the problems you’ve experienced, or are there still edge cases that would require a fundamentally different runtime model?

tacostakohashi•20h ago
As far as I'm concerned, stack-first allocation _is_ the default. It's true that the default exists in my head rather than in a compiler, though.

Maybe think about whether what you propose could exist as a compiler warning, or static analysis tool. Or, if you want to create your own language, go for it, that's cool too.

For my purposes... the choice of paradigms, compilers, and platforms with C++, and the ability to handle and work on decades of existing code, outweighs the benefits of "improved" languages, but that's just me.

michalsustr•4h ago
Very interesting! I suggest following up on a Rust core devs forum, as there might be a higher concentration of people capable of giving feedback.