A straightforward reading of the code suggests that it should do what it does.
The confusion here is a property of C, not of Go. It's a property of C that you need to care about the difference between the stack and the heap, it's not a general fact about programming. I don't think Go is doing anything confusing.
But yeah, to your point, returning a slice in a GC language is not some exotic thing.
I commented elsewhere on this post that I rarely have to think about stacks and heaps when writing Go, so maybe this isn’t my issue to care about either.
Sure, Go has escape analysis, but is that really what's happening here?
Isn't this a better example of escape analysis: https://go.dev/play/p/qX4aWnnwQV2 (the object retains its address, always on the heap, in both caller and callee)
Since 1.17 it’s not impossible for escape analysis to come into play for slices but afaik that is only a consideration for slices with a statically known size under 64KiB.
Both arrays in this example seem to be on the heap.
If you want to confirm, you have to use the Go compiler directly. Take the following code:
package main

import (
    "fmt"
)

type LogEntry struct {
    s string
}

func readLogsFromPartition(partition int) []LogEntry {
    var logs []LogEntry // Creating an innocent slice
    logs = []LogEntry{{}}
    logs2 := []LogEntry{{}}
    fmt.Printf("%v %v\n", len(logs), len(logs2))
    return []LogEntry{{}}
}

func main() {
    logs := readLogsFromPartition(1)
    fmt.Printf("%p\n", &logs[0])
}
And compile it with $ go build -gcflags '-m' main.go
# command-line-arguments
./main.go:15:12: inlining call to fmt.Printf
./main.go:21:12: inlining call to fmt.Printf
./main.go:13:19: []LogEntry{...} does not escape
./main.go:14:21: []LogEntry{...} does not escape
./main.go:15:12: ... argument does not escape
./main.go:15:27: len(logs) escapes to heap
./main.go:15:38: len(logs2) escapes to heap
./main.go:16:19: []LogEntry{...} escapes to heap
./main.go:21:12: ... argument does not escape
However, if you return logs2, or if you take the address, or if you pass them to Printf with %v to print them, you'll see that they now escape.

An additional note: in your original code from your initial reply, everything you allocate escapes to heap as well. You can confirm in a similar way.
In my experience, the average programmer isn’t even aware of the stack vs heap distinction these days. If you learned to write code in something like Python then coming at Go from “above” this will just work the way you expect.
If you come at Go from “below” then yeah it’s a bit weird.
That said, when it matters it matters a lot. In those times I wish it was more visible in Go code, but I would want it to not get in the way the rest of the time. But I’m ok with the status quo of hunting down my notes on escape analysis every few months and taking a few minutes to get reacquainted.
Side note: I love how you used “from above” and “from below”. It makes me feel angelic as somebody who came from above; even if Java and Ruby hardly seemed like heaven.
Coming (back then) from C/C++ gamedev, I was puzzled; then I understood the mantra: it's better for the process to die fast than to be pegged by GC and stop answering clients.
Then we started looking at what made it use the GC so much.
I guess it might be similar in Go - in the past I've seen some projects use a "balloon" to circumvent Go's GC heuristic - e.g. if you blow up a dummy balloon that takes half of your memory, the GC might not kick in so much... Something like that... Then again, obviously a bad solution long term.
The content of the stack is (always?) known at compile time; it can also be thrown away wholesale when the function is done, making allocations on the stack relatively cheaper. These FOSDEM talks by Bryan Boreham & Sümer Cip talk about it a bit:
- Optimising performance through reducing memory allocations (2018), https://archive.fosdem.org/2018/schedule/event/faster/
- Writing GC-Friendly [Go] code (2025), https://archive.fosdem.org/2025/schedule/event/fosdem-2025-5...
Speaking of GC, Go 1.26 will default to a newer one viz. Green Tea: https://go.dev/blog/greenteagc
I also came "from above".
Even in C, returning a pointer to a stack-allocated variable is explicitly considered undefined behavior (not illegal, but explicitly undefined by the standard, and yes, that means unsafe to use). It would be one thing if the standard disallowed it.
But that's only because the memory location the pointer refers to is no longer valid (perhaps even immediately). Returning the variable's value itself works fine; in fact, you can return a stack-allocated struct just fine.
TLDR: I don't see what the difference is, to a C programmer, between returning a stack-allocated struct in C and a stack-allocated slice in Go. (My guess is that the C programmer thinks a stack-allocated slice in Go is a pointer to a slice, when it isn't; it's a "struct" that wraps a pointer.)
The following Go code also works perfectly well, where it would obviously be UB in C:

package main

import "fmt"

func foo() *int {
    i := 7
    return &i // i escapes to the heap, so the pointer stays valid
}

func main() {
    x := foo()
    fmt.Printf("The int was: %d", *x) // guaranteed to print 7
}

Of course the compiler could inline it or do something else, but semantically it's a copy.
gc could create i on the stack and then copy it to the heap, but if you plug that code into godbolt you can see that it is not that dumb: it creates a heap allocation and then writes the literal directly into it.
[0] unless Foo is inlined and the result does not escape the caller’s frame, then that can be done away with.
Back in Python 2.1 days, there was no guarantee that a locally scoped variable would continue to exist past the end of the method. It was not guaranteed to vanish or go fully out of scope, but you could not rely on it being available afterwards. I remember this changing from 2.3 onwards (because we relied on the behaviour at work) - from that point onwards you could reliably "catch" and reuse a variable after the scope it was declared in had ended, and the runtime would ensure that the "second use" maintained the reference count correctly. GC did not get in the way or concurrently disappear the variable from underneath you anymore.
Then from 2008 onwards the same stability was extended to more complex data types. Again, I remember this from having work code give me headaches by yanking supposedly out-of-scope variables back out of thin air, the only difference being a .1 version difference between the work laptop (where things worked as you'd expect) and the target SoC device (where they didn't).
I am so glad I never took up C. This sounds like a nightmare of a DX to me.
And these days, if you're bothering with C you probably care about these things. Accidentally promoting from the stack to the heap would be annoying.
The thing being returned is a slice (a fat pointer) that has a pointer, length, and capacity. In the code linked you'll see the fat pointer being returned from the function by value; in C you'd get just AX (the pointer, without length and cap):
command-line-arguments_readLogsFromPartition_pc122:
MOVQ BX, AX // slice.ptr -> AX (first result register)
MOVQ SI, BX // slice.len -> BX (second)
MOVQ DX, CX // slice.cap -> CX (third)
The garbage collection bookkeeping is happening in the FUNCDATA/PCDATA annotations, but I don't really know how that works.
Yokohiii•1mo ago
Assuming that everything allocates on the heap will resolve this specific confusion.
My understanding is that C will let you crash quite fast if the stack becomes too large, while Go will dynamically grow the stack as needed. So it's possible to think you're working on the heap while actually thrashing the runtime with expensive stack-grow calls. Go certainly tries to be smart about it with various strategies, but a rapid stack growth rate has its cost.
Yokohiii•1mo ago
The initial stack size seems to be 2 KiB, a bit more on a few systems. As far as I understand, you can allocate a large local, e.g. 8 KiB, that doesn't escape and grows the stack immediately. (Of course it adds up if you have a chain of calls with smaller allocations.) So recursion is certainly not the only concern.
Yokohiii•1mo ago
I am pretty sure escape analysis doesn't affect the initial stack size. Escape analysis does determine where an allocation lives. So if your allocation is smaller than what escape analysis would send to the heap but bigger than the initial stack size, the stack needs to grow.
What I am certain about is that I have runtime.newstack calls accounting for +20% of my benchmark times (go testing). My code is quite shallow (3-4 calls deep), anything of size should be on the heap (global/preallocated), and the code has zero allocations. I don't use goroutines either. It might be that I'm still making a mistake, or it's overhead from the testing benchmark. But this obviously doesn't seem to be anything super unusual.
masklinn•1mo ago
Depends what you mean by "large". As of 1.24, Go will put slices several KB in size into the stack frame: a slice goes on the stack if it does not escape (you can see Go request a large stack frame); otherwise it goes on the heap (Go calls runtime.makeslice). Interestingly, arrays have a different limit: they respect MaxStackVarSize, which was lowered from 10 MB to 128 KB in 1.24.
If you use indexed slice literals gc does not even check and you can create megabyte-sized slices on the stack.
masklinn•1mo ago
And creates an on-stack slice whose size is only limited by Go's 1GB limit on individual stack frames: https://godbolt.org/z/rKzo8jre6 https://godbolt.org/z/don99e9cn
Yokohiii•1mo ago
Interesting, [...] syntax works here as expected. So escape analysis simply doesn't look at the element list.
9rx•1mo ago
Why? It is the same as in C.
foldr•1mo ago
(Pedants: I'm aware that the official distinction in C is between automatic and non-automatic storage.)
knorker•1mo ago
What you wrote is not the same in C and Go, because of GC and escape analysis. But 9rx is also correct that what OP wrote is the same in C and Go.
So OP almost learned about escape analysis, but their example didn't actually do it. So double confusion on their side.
knorker•1mo ago
Escape analysis is the reason your `x` is on the heap. Because it escaped. Otherwise it'd be on the stack.[1]
Now if by "semantics of the code" you mean "just pretend everything is on the heap, and you won't need to think about escape analysis", then sure.
Now in terms of what actually happens, your code triggers escape analysis, and OP's does not.
[1] Well, another way to say this I guess is that without escape analysis, a language would be forced to never use the stack.
knorker•1mo ago
Like any optimization, it makes sense to talk about what "will" happen, even if a language (or a specific compiler) makes no specific promises.
Escape analysis enables an optimization.
I think I understand you to be saying that "escape analysis" is not why returning a pointer to a local works in Go, but it's what allows some variables to be on the stack, despite the ability to return pointers to other "local" variables.
Or similar to how the compiler can allow "a * 6" to never use a mul instruction, but just two shifts and an add.
Which is probably a better way to think about it.
> So clearly you don’t need to think about the details of escape analysis to understand what your code does
Right. To circle back to the context: Yeah, OP thought this was due to escape analysis, and that's why it worked. No, it's just a detail about why other code does something else. (but not really, because OP returned the slice by value)
So I suppose it's more correct to say that we were never discussing escape analysis at all. An escape analysis post would be talking about allocation counts and memory fragmentation, not "why does this work?".
Claude (per OPs post) led them astray.