This can happen subtly without you knowing it. If you use a function in the standard library that happens to call into a CGO function, you are no longer static.
This happens with things like os.UserHomeDir or some networking things like DNS lookups.
You can "force" go to do static compiling by disabling CGO, but that means you can't use _any_ CGO. Which may not work if you require it for certain things like sqlite.
The docs do not mention this CGO dependency; are you sure?
There are multiple standard library functions that do it; I recall some in "net" and some in "os".
There is a cgo-less sqlite implementation, https://github.com/glebarez/go-sqlite, though it doesn't seem to be maintained much.
https://til.andrew-quinn.me/posts/you-don-t-need-cgo-to-use-...
Since I have made the change, I have not had anyone open any issues saying they had problems running it on their machines. (Unlike when I was using AppImages, which caused much more trouble than I expected)
[0] https://github.com/mmulet/term.everything look at distribute.sh and the makefile to see how I did it.
[1] in a podman or docker container
[2] -ldflags '-extldflags "-static"'
1. <https://github.com/mmulet/term.everything/blob/main/resource...>
I don’t know if it is a cultural American thing or just a difference in interpretation, but I had no difficulty understanding that this was a visual effect. But in my country ads don’t come with disclaimers. Do you feel like these disclaimers are truly helpful?
Good thing this isn’t a commercial then.
There is a decent list of known functional differences on the musl libc wiki:
https://wiki.musl-libc.org/functional-differences-from-glibc...
Overall, though, the vast majority of software works perfectly or near perfectly on musl libc, and that makes this a very compelling option indeed, especially since statically linking glibc is not supported and basically does not work. (And obviously, if you're already using library packages that are packaged for Alpine Linux in the first place, they will likely already have been tested on musl libc, and possibly even patched for better compatibility.)
Another alternative is
https://github.com/ebitengine/purego
You can use this to dynamic load shared objects / DLLs so in the OP example they could disable systemd support if the systemd shared object did not load.
This technique is used in the cgofuse library ( https://github.com/winfsp/cgofuse ) rclone uses which means rclone can run even if you don't have libfuse/winfsp installed. However the rclone mount subcommand won't work.
The purego lib generalizes this idea. I haven't got round to trying this yet but it looks very promising.
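To illustrate the idea, a rough sketch of the purego approach (sd_booted is just an example symbol; nothing here is taken from rclone or cgofuse):

    package main

    import (
        "fmt"

        "github.com/ebitengine/purego"
    )

    func main() {
        // Try to load libsystemd at runtime; if it's missing,
        // degrade gracefully instead of failing at startup.
        lib, err := purego.Dlopen("libsystemd.so.0", purego.RTLD_NOW|purego.RTLD_GLOBAL)
        if err != nil {
            fmt.Println("systemd support disabled:", err)
            return
        }

        // Bind a symbol only after the library loaded successfully.
        var sdBooted func() int
        purego.RegisterLibFunc(&sdBooted, lib, "sd_booted")
        fmt.Println("running under systemd:", sdBooted() > 0)
    }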
In this particular case, it may be that they will need to write a wrapper to abstract differences in the systemd C API if it is not stable, but at least they can still compile a binary from macOS to Linux without issues.
The other option, as others said, is to use journalctl and just parse the JSON output. Very likely this would be way more stable, but I'm not sure if it is performant enough.
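A minimal sketch of that approach, assuming journalctl is on the PATH (-o json emits one JSON object per line):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Read the last 10 entries as line-delimited JSON.
        cmd := exec.Command("journalctl", "-o", "json", "-n", "10")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            var entry map[string]any
            if err := json.Unmarshal(sc.Bytes(), &entry); err != nil {
                continue // skip anything malformed
            }
            fmt.Println(entry["MESSAGE"])
        }
        _ = cmd.Wait()
    }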
Strange, I thought the whole point of containers was to solve this problem.
But what I said was "reduced security footprint", considering the trade offs between a single statically linked binary and a full (or even cut down) Linux distribution.
From the .go file, you just do `// #cgo LDFLAGS: -L. -lfoo`.
You definitely do not need Alpine Linux for this. I have done this on Arch Linux. I believe I did not even need musl libc for this, but I potentially could have used it.
I did not think I was doing something revolutionary!
In fact, let me show you a snippet of my build script:
# Build the Go project with the static library
if go build -o "$PROG_NAME" -ldflags '-extldflags "-static"'; then
    echo "Go project built with static library linkage"
else
    echo "Error: Failed to build the Go project with static library"
    exit 1
fi

# Check for undefined symbols; a fully static executable has none
if nm "./$PROG_NAME" | grep -q "U "; then
    echo "Error: The generated executable is dynamically linked"
    exit 1
else
    echo "Successfully built and verified static executable '$PROG_NAME'"
fi
And like I said, the .go file in question has this: // #cgo LDFLAGS: -L. -lfoo
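Fleshed out, the file looks something like this (a sketch: the header and function names are made up for illustration, only the LDFLAGS directive is the real one):

    package main

    /*
    #cgo LDFLAGS: -L. -lfoo
    #include "foo.h"   // hypothetical header declaring int foo_version(void)
    */
    import "C"

    import "fmt"

    func main() {
        // Calls straight into the statically linked libfoo.a
        fmt.Println("libfoo version:", C.foo_version())
    }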
It works perfectly, and should work on any Linux distribution.
Your code is great, I do basically the same thing (great minds think alike!). The only thing I want to add is that cgo supports pkg-config directly [2] via
// #cgo pkg-config: $lib
So you don’t have to pass in linker flags manually. It’s incredibly convenient.

[1] https://stackoverflow.com/questions/57476533/why-is-statical...
[2]https://github.com/mmulet/term.everything/blob/def8c93a3db25...
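A quick sketch of what that looks like, assuming sqlite3 and its .pc file are installed:

    package main

    /*
    #cgo pkg-config: sqlite3
    #include <sqlite3.h>
    */
    import "C"

    import "fmt"

    func main() {
        // pkg-config supplies the -I/-L/-l flags for us
        fmt.Println("sqlite version:", C.GoString(C.sqlite3_libversion()))
    }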
[1] https://github.com/mmulet/term.everything/issues/28
[2] https://github.com/mmulet/term.everything/issues/18 (although this issue later gets sidetracked into a build issue)
IIRC it used to be common to do builds on an old version of RHEL or CentOS and dynamically link against an old version of glibc. Binaries would then work on newer systems because glibc is backwards compatible.
Does anyone still use that approach?
The weird bit is the analysis[1], which complains that a Go binary doesn't run on Alpine Linux, a system which is explicitly and intentionally (also IMHO ridiculously, but that's editorializing) binary-incompatible with the stable Linux C ABI as it's existed for almost three decades now. It's really no more "Linux" than is Android, for the same reason, and you don't complain that your Go binaries don't run there.
[1] I'll just skip without explanation how weird it was to see the author complain that the build breaks because they can't get systemd log output on... a Mac.
> In the observability world, if you're building an agent for metrics and logs, you're probably writing it in Go.
I'm pretty unconvinced that this is the case unless you happen to be on the CNCF train. Personally, I'd write it in Rust these days; C used to be very common too.
It is a bit cursed, but works pretty well. I'm using it in my hardware-backed KMIP server to interface with PKCS11.
The moment one needs to touch APIs that only exist on the target system, the fun starts, regardless of the programming language.
Go, Zig, whatever.
Cross-platform goes beyond that in regard to UI, directory locations, user interactions, ...
Yeah, if you happen to have the systemd Linux libraries on macOS to facilitate cross-compilation into a compatible GNU/Linux system, then it works; that is how embedded development has worked for ages.
What doesn't work is pretending that isn't something to care about.
Err, no. Cross-platform means the code can be compiled natively on each platform. Cross-compilation is when you compile the binaries on one platform for a different platform.
Cross-compilation is useless if you don't actually get to execute the created binaries on the target platform.
Now, how do you intend to compile from GNU/Linux to z/OS, so that we can execute the binary generated by the C compiler, ingesting code written on the GNU/Linux platform, in the z/OS Language Environment inside an enclave not configured in POSIX mode?
Or, instead of z/OS, if you're feeling more modern, it can be a UWP sandboxed application with identity on Windows.
That is a better definition, yes. But it's still not synonymous with cross-compilation, obviously. Most cross-platform apps are not cross-compiled because it's usually such a pain.
That said, in my personal experience, the most portable programs tend to be written in either Perl or Shell. The former has a crap-ton of portability documentation and design influence, and the latter is designed to work from 40 year old machines up to today's. You can learn a lot by studying old things.
If your production environment doesn't have any weird PAM or DNS setup, then you can indeed just cross-compile everything and it works.
Thanks.
Expecting a portable house and a portable speaker to have the same definition of portable is unfair.
Considering all of the effort and hoop-jumping involved in the route that was chosen, perhaps this decision might be worth revisiting.
In hindsight, maintaining a parser might well be easier when compared to the problems that were already overcome and the future problems that will arise if/when the systemd libraries decide to change their C API.
One benefit of a freestanding parser is that it could be made into a reusable library that others can use and help maintain.
Also, in the age of AI, it seems possible to have it do the rewrite for you, which you can then iterate on further.
> Note that the actual implementation in the systemd codebase is the only ultimately authoritative description of the format, so if this document and the code disagree, the code is right
This required a lot of extra effort and hoop-jumping, but at least it’s on our side rather than something users have to deal with at deploy time.
Can this binary not include compiled dependencies alongside it? I'm thinking of how portable apps on Windows include the DLLs and other dependent exes in subfolders.
Out of interest, and in relation to a less well-liked Google technology, could Dart produce what they are after? My understanding is Dart can produce static binaries, though I'm not sure if these are truly portable in the compile-once, run-everywhere sense.
I see this kind of thing in our industry quite often; some Rube Goldberg machine being invented and kept on life support for years because of some reason like this, where someone clearly didn’t do the obvious thing and everyone now just assumes it’s the only solution and they’re married to it.
But I’m too grumpy, work me is leaking into weekend me. I had debates around crap like this all week and I now see it everywhere.
It's the lazy-but-bad solution.
You can also argue that sd_journal (the C API) exists for this exact reason, rather than shelling out to journalctl. These are technical trade-offs; it doesn't mean we're fuckups.
Quoting from https://systemd.io/JOURNAL_FILE_FORMAT/
> If you need access to the raw journal data in serialized stream form without C API our recommendation is to make use of the Journal Export Format, which you can get via journalctl -o export or via systemd-journal-gatewayd.
Certainly sounds like running journalctl, or using the gateway, is a supported option.
Calling a CLI tool which will be present everywhere your program might reasonably be installed (e.g. if your program is a MySQL extension, it can probably safely assume the existence of mysqld).
The CLI tool you want to call is vendored into or downloaded by your wrapper program, reducing installation overhead (this is not always a good idea for other reasons, but it does address a frequently cited reason not to shell out).
The CLI tool’s functionality is both disjoint with the rest of your program and something that you have a frequent need to hard-kill; see the sketch after this list. (Forking is much more error prone than running a discrete subprocess; you can run your own program as a subprocess too, but in that case the functionality is probably not disjoint.)
Talking to POSIX CLI tools in a POSIX compatible way (granted most things those tools do are easier/faster in a language’s stdlib).
Obviously you shouldn’t try to parse human-readable output.
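On the hard-kill point, a minimal sketch of how cheap that is with a subprocess in Go (journalctl's export mode is used here only as a machine-readable example):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // If the tool wedges, the deadline hard-kills the whole
        // subprocess; there is no equivalent for a stuck goroutine.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        out, err := exec.CommandContext(ctx, "journalctl", "-o", "export", "-n", "10").Output()
        if err != nil {
            fmt.Println("journalctl failed or timed out:", err)
            return
        }
        fmt.Printf("read %d bytes of export-format data\n", len(out))
    }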
It usually is, because that is the UNIX philosophy, and programs that intermingle output with layout often stop doing that when they aren't writing to a terminal.
Specifically because, by design, you're only supposed to use that or the C bindings: they want the ability to change the internal format when necessary.
https://github.com/systemd/slog-journal so you can at least log to the journal now without CGO
But that's just the journal wire format, which is a lot simpler than the disk format.
I think a journal disk-format parser in Go would be a neat addition.
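To show how simple the wire side is, a sketch of logging to the journal over its native protocol socket without cgo (assumes a systemd system; fields containing newlines need the length-prefixed variant of the protocol, which this skips):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The journal accepts KEY=VALUE\n pairs as one datagram
        // on its native protocol socket.
        conn, err := net.Dial("unixgram", "/run/systemd/journal/socket")
        if err != nil {
            fmt.Println("journal socket unavailable:", err)
            return
        }
        defer conn.Close()

        fmt.Fprint(conn, "MESSAGE=hello from pure Go\nPRIORITY=6\nSYSLOG_IDENTIFIER=demo\n")
    }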
If you use pure Go, things are portable. The moment you use a C API, that portability is gone. This should be apparent.
Vendorise systemd and compile only the journal parts, if they are portable and can be isolated from the rest. Otherwise just shell out to journalctl.
It's yet another example of Go authors implementing the least-effort solution without even a slight thought for what it would mean down the line, creating a huge liability/debt forever in the language.
I can't even begin to comprehend the thought process here.
> Journal logs are not stored in plain text. They use a binary format
And it was entirely predictable and predicted that this sort of problem would be the result when that choice was made.
The fact is Go is portable: it provides the ability to cross-compile out of the box, and the binaries it produces reasonably execute on the other platforms it supports. But in this case, a decision that had little to do with Go (the desire to use C code, a non-Go project, with their Go project) made things harder.
These are not "just a set of constraints you only notice once you trip over them"; that framing trivializes the mistake.
The entire blog post can be simplified to the following:
"We were ignorant, and then had to do a bunch of work because we were ignorant." It's a common story in software; I don't expect everybody to get it right the first time. But what we don't need is sensationally titled blog posts full of fluff trying to reason readers out of the obvious conclusion. Somebody in charge made uninformed decisions, and as a result the project became more complicated and probably took longer.
necovek•2mo ago
I suspect that's not true either, even if it might be technically possible to achieve through some trickery (and why not RISC-V and other architectures too?).
necovek•2mo ago
Since architectures are only brought up in relation to dynamic libraries, it implies it is otherwise as portable as the above languages.
With that out of the way, it seems like a small thing for the Go build system if it's already doing cross-compilation (and thus understands foreign architectures and executable formats). I am guessing it just hasn't been done and is not a big lift, so perhaps look into it yourself?
arccy•1mo ago
Go doesn't require dynamic linking for C; if you can figure out the right C compiler flags, you can cross-compile statically linked Go+C binaries as well.
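One recipe that's known to work, assuming Zig is installed so its bundled clang can act as the cross C compiler (the target triple and output name are placeholders):

    CGO_ENABLED=1 GOOS=linux GOARCH=arm64 \
      CC="zig cc -target aarch64-linux-musl" \
      go build -ldflags '-extldflags "-static"' -o myapp .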
spijdar•1mo ago
The usual workaround, I think, is to use dlopen/dlsym from within the program. This is how the Nim language handles libraries in the general case: at compile time, C imports are converted into a block of dlopen/dl* calls, with compiler options for indicating some (or all) libraries should be passed to the linker instead, either for static or dynamic linking.
Alternatively, I think you could "trick" the linker with a stub library containing just the symbol names it wants, but I've never tried that.
dwattttt•1mo ago
Clang knows C, lld knows Mach-O, and the SDK knows the target libraries.
cxr•1mo ago
Traditionally, cross-compilers generally didn't even work the way that the Zig and Go toolchains approach it; achieving cross-compilation could be expected to be a much more trying process. The Zig folks and the Go folks broke with tradition by choosing to architect their compilers more sensibly for the 21st century, but the effects of the older convention remain.
cxr•1mo ago
Original discussion: <https://news.ycombinator.com/item?id=24256883>.