> On ARM, such atomic load incurs a memory barrier---a fairly expensive operation.
Not quite: it is just a load-acquire, which is almost as cheap as a normal load. And on x86 there's no difference.
One thing both GCC and Clang seem to be quite bad at is code layout: even in the example in the article, the slow path is largely inlined. It would be much better to have just a load, a compare, and a jump to the slow path in a cold section. In my experience, in some rare cases reimplementing the lazy initialization explicitly (especially when it's possible to use a sentinel value, thus doing a single load for both value and guard) did produce a noticeable win.
For those (like me) who don’t recognize that abbreviation, “The static initialization order fiasco (ISO C++ FAQ) refers to the ambiguity in the order that objects with static storage duration in different translation units are initialized in” (https://en.cppreference.com/w/cpp/language/siof.html)
Why not just use constinit (iff applicable), construct_at, or lessen the cost with -fno-threadsafe-statics?
STOP WRITING NON-PORTABLE CODE YOU BASTARDS.
The correct answer is, as always, “stop using mutable global variables you bastard”.
Signed: someone who is endlessly annoyed with people who incorrectly think Unix is the only platform their code will run on. Write standard C/C++ that doesn’t rely on obscure tricks. Your co-workers will hate you less.
So I spin up a Debian VM and POSIX the hell out of it. If they dare to complain, I tell 'em to do their damn jobs and not leave all the hard stuff to the guy that only programs on UNIX.
Sure, I could learn to program on Windows... Or I could pick up another PLC or SCADA platform that looks good on my resume. Guess which I choose to do?
Until your boss tells you to port your so-far Linux-only code to Windows, and you run into that struggle yourself.
Signed, someone who spent the past year or so porting Linux code to Windows and macOS because the business direction changed and the company saw what was the money-maker.
P.S. Not the parent commenter, because I just realised they, too, had a paragraph beginning with 'signed, ...'
If you only support WSL then you don’t support Windows, imho.
In the end it is just part of the OS and a bunch of extra userspace programs. I mean nobody complains about the Windows Subsystem for Win32.
But yeah, you can just use a non-MS GNU/Windows implementation instead, do you like that better?
Is it possible though? Is it possible to have isolated WSLs (per program)?
Maybe. But my experience is that there is very little “program code” and it’s mostly “library code”.
And if you did have a program that required WSL and you followed the UNIX model of bash chaining programs then you’re now mandating the “meta program” be run under WSL.
I treat WSL as a hacky workaround for dealing with bad citizens. It’s not a desirable target environment. It exists as a stopgap until someone does it right. YMMV.
Sorry I am confused, what does that refer to?
> you’re now mandating the “meta program” be run under WSL
As long as bash and the tools are in path, can't you run any program normally?
> WSL as a hacky workaround for dealing with bad citizens
Yes, but some parts are out of scope for C and you need to target the OS. Also, e.g., passing around file descriptors and sockets is convenient.
The technical requirement of installing WSL before installing our software was already a non-starter, since most Windows users expect one installer or zip to do-it-all. WSL2 isn't (yet) like a C/C++/DirectX redistributable which can be plugged in as a dependency to a given program. Additionally our program is expected to work natively with Windows paradigms.
More critically, we work with high-performance filesystems. The performance impact of files going a round-trip through the Linux Plan9 driver, then through a Linux kernel context switch, into the kernel, and down into Hyper-V, and then up back through the Windows Plan9 driver was completely unacceptable. It was deemed worthwhile to rewrite targeting Windows natively. And even then it was only a partial rewrite: we ended up using MinGW because we had too much of a direct dependency on the pthread API.
This has bitten me once when I was doing a distro upgrade and windows just killed the WSL vm midway.
For example, I have tried WSLg (running GUI apps in WSL) but disabled it altogether; after running xclock and a couple of things like that to make sure it worked, I couldn't find any interest in running GUI apps on Linux.
I can somewhat imagine need for work purposes, if you are falling into some webdev area and need to run some tests against local browser, but that's different from my POV.
Deep interaction with Linux filesystem and kernel behaviour; we've done some intercepting and trampolining Linux syscalls in run-time, and tying it all up to cloud storage APIs.
We had to fundamentally re-architect some of our code to support macOS and Windows.
My background is video games. Which is well known to be a Windows-first environment. I currently work on robotics. The vast majority of robotics ecosystem code is Unix only.
You know what is extremely useful for teleoperation? Virtual reality. You know what platform doesn’t have a way to do VR currently? Linux. Womp womp sad trombone. Mistakes were made.
It’s really easy to write cross-platform code. My experience is that Linux devs are by far the most resistant to doing it. It’s very annoying.
Should you care about Windows? I certainly think so. Linux still doesn’t have a good debugger (no, gdb/lldb aren’t good). Quite frankly, every Linux dev would be more productive if they supported Windows, where debuggers exist and are decent. So really they’re just shooting themselves in the foot. IMHO.
However, there is an important difference between the sanctions on Russia and the strategy of the pro-Linux "activist" that started this thread: namely, Windows is so heavily entrenched in the niche of enterprise IT that there is no significant chance of Linux's replacing it in that niche with the result that there is no realistic chance of a positive effect of this activism that might cancel out the negative effect you describe. So, I am tentatively in agreement with you.
Note that as I later found out, this doesn't work with Mac OS's linker, so you need to use some separate incantations for Mac OS.
I call them "linker arrays". They are great when you need to globally register a set of things and the order between them isn't significant.
https://github.com/abseil/abseil-cpp/blob/master/absl/base/n...
Which is basically the only usage of std::launder I have seen
The use of std::launder should be more common than it is; I’ve seen a few bugs in optimized builds when it wasn't used. Compilers have been somewhat forgiving about omitting it in places where you should use it, because it hasn’t always existed. Rigorous code should be using it instead of relying on the leniency of the compiler.
In database engine code it definitely gets used in the storage layers.
std::launder is a bit weird here. Technically it should be used every time you use placement new but access the object by casting the pointer to its storage (which NoDestructor does). However, very little code actually uses it. For example, every implementation of std::optional should use it? But when you do, it actually prevents important compiler optimizations that make std::optional a zero-cost abstraction (or it did last time I looked into this).
#define FAST_STATIC(T) \
    *({ \
        /* statements separated by semicolons */ \
        reinterpret_cast<T *>(ph.buf); /* the last statement is the value of the whole expression */ \
    })
The reinterpret_cast<T*>(...) statement is a conventional C++ expression statement, but when enclosed in ({ }), GCC considers the whole kit and kaboodle a statement expression that propagates a value. There is no standard equivalent, but in C++, since C++11, you can achieve the same effect with lambdas:
auto value = [](){ return 12345; }();
As noted in the linked SO discussion, this is analogous to a JS Immediately-Invoked Function Expression (IIFE).

[1] https://stackoverflow.com/questions/76890861/what-is-called-...
int foo() {
    int result = ({
        if (some_condition)
            return -1; // This returns from foo(), not just the statement expression
        42;
    });
    return result; // This line might never be reached
}

“Dynamic initialization of a block-scope variable with static storage duration or thread storage duration is performed the first time control passes through its declaration
[…]
this would initialise everything correctly: by the time foo() is called, its b has already been initialised.”
I guess that means this trick can change program behavior, especially if the function containing the static is never called in a program’s execution.
void fun(int arg)
{
static obj foo(arg); // delayed until function called (dependency on arg)
static obj bar; // inited at program start (no dependency on arg)
}
In other words, any static that can be inited at program startup should be, leaving only the troublesome cases that depend on run-time context.

Edit: what?
I made no mention of constexpr.
Program start-up isn't compile time.
The idea is that since "static obj bar()" doesn't depend on anything in the function, it could in principle be moved outside of the function. So in actual fact, it can be treated that way by the loading semantics of the program (can be constructed without the function having to be called), except that the name bar is only visible inside the function.
I don't understand why C++ wouldn't have specified it this way going back to 1998, but that's just me.
pbsd•6mo ago
No. The lock calls are only done during initialization, in case two threads run the initialization concurrently while the guard variable is 0. Once the variable is initialized, this will always be skipped by "je .L3".