From https://news.ycombinator.com/item?id=45562815 :
> awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
From "Safe C++ proposal is not being continued" (2025) https://news.ycombinator.com/item?id=45237019 :
> Safe C++ draft: https://safecpp.org/draft.html
Also there are efforts to standardize safe Rust; rust-lang/fls, rustfoundation/safety-critical-rust-consortium
> How does what FLS enables compare to these [unfortunately discontinued] Safe C++ proposals?
Honestly I think that's probably the correct way to write high reliability code.
The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.
All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
50+ years of off-by-ones and use-after-frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.
In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.
But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.
No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen push you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up.

The choice isn't "code gen, with correspondingly (hopefully) better tool safety, and more hardware cost" vs. "hand-written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs. "human-written code, human-written bugs, but an overall much simpler system."

And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge.
It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems overwhelmingly are defects in the requirement specifications, not in the implementation, hand-written or not.
Do you have any evidence for "probably"?
See https://www.safetyresearch.net/toyota-unintended-acceleratio...
"I know for a fact that Italian cooks generate spaghetti, and the deceased's last meal contained spaghetti, therefore an Italian chef must have poisoned him"
It is impossible for the author of a Simulink model to accidentally type `i > 0` when they meant `i >= 0`, for example. Any human who tells you they have not made this mistake is a liar.
Unless there was a second uncommanded acceleration problem with Toyotas, my understanding is that it was caused by poor mechanical design of the accelerator pedal that caused it to get stuck on floor mats.
In any case, when we're talking about safety critical control systems like avionics, it's better to abstract away the actual act of typing code into an editor, because it eliminates a potential source of errors. You verify the model at a higher level, and the code is produced in a deterministic manner.
The Simulink Coder tool is a piece of software. It is designed and implemented by humans. It will have bugs.
Autogenerated code is different from human written code. It hits soft spots in the C/C++ compilers.
For example, autogenerated code can have really huge switch statements. You know, larger than the 15-bit branch offset the compiler implementer thought was big enough to handle any switch statement any sane human would ever write? So now the switch jumps backwards instead when trying to get to the correct case statement.
I'm not saying that Simulink Coder + a C/C++ compiler is bad. It might be better than the "manual coding" options available. But it's not 100% bug free either.
Nobody said it was bug free, and this is a straw man argument of your own construction.
Using Autocode completely eliminates certain types of errors that human C programmers have continued to make for more than half a century.
That's a classic bias: Comparing A and B, show that B doesn't have some A flaws. If they are different systems, of course that's true. But it's also true that A doesn't have some B flaws. That is, what flaws does Autocode have that humans don't?
The fantasy that machines are infallible - another (implicit) argument in this thread - is just ignorance for any professional in technology.
The main flaw of autocode is that a human can't easily read and validate it, so you can't really use it as source code. In my experience, this is one of the biggest flaws of these types of systems. You have to version control the file for whatever proprietary graphical programming software generated the code in the first place, and as much as we like to complain about git, it looks like a miracle by comparison.
It's an interesting question and point, but those are two different things and there is no reason to think you'll get the same results. Why not compile from natural language, if that theory is true?
stdio.h is fine in some embedded contexts, and very very not fine in others
> allocation/deallocation from/to the free store (heap) shall not occur after initialization.
This works fine when the problem is roughly constant, as it was in, say, 2005. But what do things look like in modern AI-guided drones?

Where dynamic allocation starts to be really helpful is if you want to minimize your peak RAM usage for coexistence purposes (e.g. you have other processes running) or want to undersize your physical RAM requirements by leveraging temporal differences between different parts of the code (i.e. components A and B never use memory simultaneously, so either A or B can reuse the same RAM). It also simplifies some algorithms, and if you're ever dealing with variable-length inputs it can help you not have to reason about maximums at design time (provided you correctly handle an allocation failure).
I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints.
The overwhelming majority of embedded systems are designed around a max buffer size and known worst-case execution time. Attempting to balance resources dynamically in a fine-grained way is almost always a mistake in these systems.
Putting the words "modern" and "drone" in your sentence doesn't change this.
In this way you can use pools or buffers of which you know exactly the size. But, unless your program is always using exactly the same amount of memory at all times, you now have to manage memory allocations in your pool/buffers.
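As a rough illustration of that bookkeeping (a sketch, not from the thread; all names invented), a fixed-size block pool in C looks something like this:

    #include <stddef.h>

    /* A fixed pool of identical blocks, sized at compile time. */
    #define POOL_BLOCKS 32
    #define BLOCK_SIZE  128

    static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
    static unsigned char in_use[POOL_BLOCKS];

    void *pool_alloc(void) {
        for (size_t i = 0; i < POOL_BLOCKS; i++) {
            if (!in_use[i]) {
                in_use[i] = 1;
                return pool[i];
            }
        }
        return NULL;  /* exhaustion is a bounded, testable failure mode */
    }

    void pool_free(void *p) {
        size_t i = ((unsigned char *)p - &pool[0][0]) / BLOCK_SIZE;
        in_use[i] = 0;
    }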
At least before we had zero-cost exceptions. These days, I suspect the HFT crowd is back to counting microseconds or milliseconds as trades are being done smarter, not faster.
Actual code I have seen with my own eyes. (Not in F-35 code.)
It's a way to avoid removing an unused parameter from a method. Unused parameters are disallowed, but this is fine?
I am sceptical that these coding standards make for good code!
(void) a;
Every C programmer beyond weaning knows that. (void) a;
I'm sure there are commonly-implemented compiler extensions, but this is the normal/native way and should always work.

https://godbolt.org/z/zYdc9ej88
clang gets this right.
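For completeness, a minimal sketch of the idiom in context (function and names invented, not from the thread):

    /* Callback whose signature is fixed by an API; 'context' is unused here. */
    int on_event(int fd, void *context) {
        (void) context;  /* evaluates and discards, silencing -Wunused-parameter */
        return fd >= 0 ? 0 : -1;
    }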
I encounter this when trying to do best-effort logging in a failure path. I call some function to log an error and maybe it fails. If it does, what, exactly, am I going to do about it? Log harder?
_ = a;
And you would encounter it quite often because an unused variable is a compilation error: https://github.com/ziglang/zig/issues/335

It's extremely annoying until it's suddenly very useful and has prevented you doing something unintended.
Isn't it just bad design that makes both experimenting harder and for unused variables to stay in the code in the final version?
Notably this document is from 2005. So that's after C++ was standardized, but before their second bite of that particular cherry, and twenty years before its author, Bjarne Stroustrup, after years of insisting that C++ dialects are a terrible idea and will never be endorsed by the language committee, suddenly decided that dialects (now named "profiles") are in fact the magic ingredient to fix the festering problems with the language.
While Laurie's video is fun, I too am sceptical about the value of style guides, which is what these are. "TABS shall be avoided" or "Letters in function names shall be lowercase" isn't because somebody's aeroplane fell out of the sky - it's due to using a style Bjarne doesn't like.
I doubt I can satisfy you as to whether I'm somehow a paid evangelist, I remember I got a free meal once for contributing to the OSM project, and I bet if I dig further I can find some other occasion that, if you spin it hard enough, can be justified as "payment" for my opinion that Rust is a good language. There was a nice lady giving out free cookies at the anti-racist counter-protests the other week, maybe she once met a guy who worked for an outfit which was contracted to print a Rust book? I sense you may own a corkboard and a lot of red string.
I do early returns in code I write, but ONLY because everybody seems to do it. I prefer stuff to be in predictable places: variables at the top, return at the end. Simpler? Delphi/Pascal style.
While maybe 10% of rules are sensible, these sensible rules also tend to be blindingly obvious, or at least table stakes on embedded systems (e.g. don't try to allocate on a system which probably doesn't have a full libc in the first place).
The main issue is mission assurance. Using the stack or the heap means your variables aren't always at the same memory address. This can be bad if a particular memory cell has failed. If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue.
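A sketch of what "fixed address" means in practice on a GCC-style toolchain (section name invented; the linker script pins the section's address, and a patch can relocate it):

    #include <stdint.h>

    /* Placed in a named section whose address the linker script fixes.
       If that memory cell fails, a patch moves the section and the
       mission continues. */
    __attribute__((section(".state_vars")))
    static int32_t wheel_rpm;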
A good example of what I'm talking about is a program that I was peripherally involved with about 15 years ago. The lead wanted to abstract the mundane details from the users (on the ground), so they would just "register intent" with the spacecraft, and it would figure out how to do what was wanted. The lead also wanted to eliminate features such as "memory dump", which is critical to the anomaly resolution process. If I had been on that team, I would have raised hell, but I wasn't, and at the time, I needed that team lead as an ally.
I mean, even when I have the codebase readily accessible and testable in front of my eyes, I never trust the tests to be enough. I often spot forgotten edge cases and bugs of various sorts in C/embedded projects BECAUSE I run the program, can debug and spot memory issues, and a whole lot of other things for which you NEED to gather as much information as you can in order to find solutions.
So does being able to download a new version of software that uses different memory addresses. The point is if you are able to patch software, you are able to patch memory maps.
Or maybe I would use an MMU but drive it with a kernel written in the old fashioned way with no allocation. It would depend on what hardware I had available and what faults I wanted to survive.
(I’m not an aerospace software developer.)
You could have two copies of the OS mapped to different memory regions. The CPU would boot with the first copy, if it fails watchdog would trigger and the CPU could try to boot the second copy.
This seems like a rather manual way to go about things for which an automated solution can be devised. Such as creating special ECC memory where you also account for entire-cell failure with Reed-Solomon coding, or a boot process which blacklists bad cells, etc.
Where do you place the variables then? As global variables? And how do you detect if a memory cell has gone bad?
What leads to better code in terms of understandability & preventing errors: exceptions (what almost every language does) or error codes (like Golang)?
Are there folks here that choose to use error codes and forgo exceptions completely?
In C++, which supports both, exceptions are commonly disabled at compile-time for systems code. This is pretty idiomatic, I've never worked on a C++ code base that used exceptions. On the other hand, high-level non-systems C++ code may use exceptions.
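In that style, errors travel as return codes with out-parameters, e.g. (a minimal sketch, names invented):

    typedef enum { STATUS_OK = 0, STATUS_DIV_BY_ZERO } status_t;

    /* Error-code style: the status is the return value, the result
       comes back through an out-parameter. */
    status_t checked_div(int a, int b, int *out) {
        if (b == 0)
            return STATUS_DIV_BY_ZERO;  /* caller must branch on this */
        *out = a / b;
        return STATUS_OK;
    }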
I actually think Ada would be an easier sell today than it was back then. It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be as widely perceived as career pigeonholing today. Plus, Ada is having a bit of a resurgence with stuff like NVIDIA picking SPARK.
I’m sure I’m idealizing it, but at least I’m not demonizing it like folks did back in the day.
I haven’t heard anything particularly bad about the software effort, other than the difficulties they had making the VR/AR helmet work (the component never made it to production afaik).
https://www.nwfdailynews.com/story/news/local/2021/08/02/f-3...
The electrical system performs poorly under short circuit conditions.
https://breakingdefense.com/2024/10/marine-corps-reveals-wha...
They haven't even finished delivering and now have to overhaul the entire fleet due to overheating.
https://nationalsecurityjournal.org/the-f-35-fighters-2-big-...
This program was a complete and total boondoggle. It was entirely the wrong thing to build in peacetime. It was a moonshot for no reason other than to mollify bored generals and greedy congresspeople.
From a european perspective, I can tell you that the mood has shifted 180 degrees from "buy American fighters to solidify our ties with the US" to "can't rely on the US for anything which we'll need when the war comes".
Europe is wise and capable enough to develop their own platform.
I’m from one of those countries, and I can assure you a lot of people would now have preferred that we went with an EU competitor instead.
Countries are buying it because it is the only game in town for certain high-value capabilities, not because they necessarily like the implications of there being a single seller of those capabilities. For better or worse, the US has been flying these for 30 years and has 6th generation aircraft in production. Everyone else is still figuring out their first 5th generation offering.
Closing that gap is a tall order. Either way, European countries need these modern capabilities to have a capable deterrent.
You know the answer, but I'll say it anyway. There is no comparable alternative today, and there will not be one in the near future.
Definitely not a failure.
The evidence for this claim was found in testing for the F-35, where it was dogfighting an older F-16. The results of the test were that the F-35 won almost every scenario except one, where a lightly loaded F-16 was teleported directly behind an F-35 weighed down by heavy missiles and won the fight. This one loss has spawned hundreds of articles about how the F-35 is junk that can't dogfight.
In the end the F-35 has a lot of fancy features that are not optional for modern operations. The jet has now found enough buyers across the West for economies of scale to kick in, and the cost is roughly $80 million each, which is cheaper than retrofitting stealth and sensors onto other airframes, like what you get with the F-15EX.
Ok, joking aside: If it is considered a failure, what 100B+ military programme has not been considered a failure?
In my totally unqualified opinion, the best cost performance fighter jet in the world is the Saab JAS 39 Gripen. It is very cheap to buy and operate, and has pretty good capabilities. It's a good option for militaries that don't have the infinite money glitch.
Anyhow, a fair assessment is that the program has gone massively over timeline and budget, so in that sense it is a failure. However, the resulting aircraft is very clearly the best in its class, both in absolute capability and in value.
Going forward there's broad awareness in the government that the program management mistakes of the F-35 program cannot be repeated. There's a general consensus that three-decade-long development projects just won't be relevant in a world where drone concepts and similar are evolving rapidly on a year-by-year basis. There's also awareness that the government needs to act more as the integrator that owns the project, to avoid lock-in issues.
There have been over 1,200 F-35s built so far, with new ones being built at a rate of about 150 per year. For comparison, that’s nearly as many F-35s built per year as F-22s were built ever, and 1,200 is a large amount for a modern jet fighter. The extremely successful F-15 has seen about that many built since it first entered production over 50 years ago.
That doesn’t mean it must be good, but it’s a strong indicator. Especially since the US isn’t the only customer. Many other countries want it too. Some are shying away from it now, but only for political reasons because the US is no longer seen as a reliable supplier.
In terms of actual capabilities, it’s the best fighter jet out there save for the F-22, which was far more expensive and is no longer being made. It’s relatively cheap, comparable in cost to alternatives like the Gripen or Rafale while being much more capable.
There have been a lot of articles out there about how terrible it is. These fall into a few different categories:
* Reasonable critiques of its high development costs, overruns, and delays, baselessly extrapolated to “it’s bad.”
* Teething problems extrapolated to “it’s terrible” as if these things never get fixed.
* Analyses of outcomes from exercises that misunderstand the purpose and design of exercises. You might see that, say, an F-35 lost against an F-16 in some mock fights. But they’re not going to set up a lot of exercises where the F-35 and F-16 have a realistic engagement. The result of such an exercise would be that the F-16 gets shot out of the sky without ever knowing the F-35 was there. This is uninformative and a waste of time and money. So such a matchup will be done with restrictions that actually make it useful. This might end up in a dogfight, where the F-16 is legitimately superior. This then gets reported as “F-35 worse than F-16,” ignoring the fact that a real situation would have the F-35 victorious long before a dogfight could occur.
* Completely legitimate arguments that fighter jets are last century’s weapons, that drones and missiles are the future, and the F-35 is like the most advanced battleship in 1941: useful, powerful, but rapidly becoming obsolete. This may be true, but if it is, it only means the F-35 wasn’t the right thing to focus on, not that it’s a failure. The aircraft carrier was the decisive weapon of the Pacific war but that didn’t make the Iowa class battleships a failure.
The new 6th generation platforms being rolled out (B-21, F-47, et al) are all pure first-principles drone-warfare native platforms.
That is of course not to say that exceptions and error codes are the same.
That explains all the delays on the F-35...
But honestly, with this sort of programming the language distinctions matter less. As the guide shows, you restrict yourself to a subset of the language where distinctions between languages aren't as meaningful. Basically everything runs out of statically allocated global variables and arrays. You don't have to worry about fragmentation and garbage collection if there's no allocation at all. Basically, remove every possible source of variability in execution.
So really you could do this in any C-style language that gives you control over the memory layout.
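A hedged sketch of that style (names invented): every buffer is worst-case sized and lives at an address fixed at link time:

    #include <stddef.h>

    #define MAX_TRACKS 64

    typedef struct { float x, y, z; } Track;

    static Track  g_tracks[MAX_TRACKS];  /* worst-case capacity, no heap */
    static size_t g_track_count;

    int track_add(Track t) {
        if (g_track_count >= MAX_TRACKS)
            return -1;  /* explicit, testable limit instead of OOM */
        g_tracks[g_track_count++] = t;
        return 0;
    }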
I actually do this as well, but in addition I log out a message like, "value was neither found nor not found. This should never happen."
This is incredibly useful for debugging. When code is running at scale, nonzero probability events happen all the time, and being able to immediately understand what happened - even if I don't understand why - has been very valuable to me.
In fact, not using a default (the else clause equivalent) is ideal if you can explicitly cover all cases, because then if the possibilities expand (say a new value in an enum) you’ll be annoyed by the compiler to cover the new case, which might otherwise slip by.
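Concretely, something like this (a sketch; GCC/Clang warn via -Wswitch when an enumerator is unhandled and there is no default):

    typedef enum { MODE_IDLE, MODE_ARMED, MODE_FIRING } Mode;

    const char *mode_name(Mode m) {
        switch (m) {  /* no default: adding a new enumerator later triggers a warning here */
        case MODE_IDLE:   return "idle";
        case MODE_ARMED:  return "armed";
        case MODE_FIRING: return "firing";
        }
        return "?";  /* out-of-range values still need a fallback in C */
    }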
Thanks for sharing
    x->foo();
    if (x == NULL) {
        return error…;
    }
This literally caused a security vulnerability in the Linux kernel, because it's UB to dereference null (even in the kernel, where engineers assumed it had well-defined semantics), and the compiler elided the null pointer check, which then created a vulnerability.

I would say that using unreachable() in mission-critical software is super dangerous, more so than an allocation failing. You want to remove all potential for UB (i.e. safe Rust with no or minimal unsafe, not sprinkling in UB as a form of documentation).
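For contrast, the ordering that avoids the UB (same pseudocode style as above): test first, dereference after, so there is no prior dereference the compiler can use to justify deleting the check:

    if (x == NULL) {
        return error…;
    }
    x->foo();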
Reading through the JSF++ coding standards I see they ban exceptions, ban the standard template library, ban multiple inheritance, ban dynamic casts, and essentially strip C++ down to bare metal with one crucial feature remaining: automatic destructors through RAII. When a variable goes out of scope, cleanup happens. That is the entire value proposition they are extracting from C++, and it made me wonder if C could achieve the same thing without dragging along the C++ compiler and all its complexity.
GLib is a utility library that extends C with better string handling, data structures, and portable system abstractions, but buried within it is a remarkably elegant solution to automatic resource management that leverages a GCC and Clang extension called the cleanup attribute. This attribute allows you to tag a variable with a function that gets called automatically when that variable goes out of scope, which is essentially what C++ destructors do but without the overhead of classes and virtual tables.
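Stripped of the GLib macros, the raw mechanism looks like this (a minimal sketch; note the cleanup function receives a pointer to the variable, so a char * variable yields a char **):

    #include <stdlib.h>

    static void free_mem(char **pp) {
        free(*pp);  /* free(NULL) is a no-op, so early paths are safe too */
    }

    int demo(void) {
        __attribute__((cleanup(free_mem))) char *buf = malloc(64);
        if (buf == NULL)
            return -1;  /* cleanup still runs; frees NULL harmlessly */
        buf[0] = '\0';
        return 0;       /* free_mem(&buf) runs automatically on scope exit */
    }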
The heart of GLib's memory management system starts with two simple macros: g_autofree and g_autoptr. The g_autofree macro is deceptively simple. You declare a pointer with this attribute and when the pointer goes out of scope, g_free is automatically called on it. No manual memory management, no remembering to free at every return path, no cleanup sections with goto statements. The pointer is freed whether you return normally, return early due to an error, or even if somehow the code takes an unexpected path. This alone eliminates the majority of memory leaks in typical C programs because most memory management is just malloc and free, or in GLib's case, g_malloc and g_free.
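In code, the pattern is (a small sketch):

    #include <glib.h>

    void greet(const char *name) {
        g_autofree char *msg = g_strdup_printf("hello, %s", name);
        g_print("%s\n", msg);
    }   /* g_free(msg) runs here, on every exit path */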
The g_autoptr macro is more sophisticated. While g_autofree works for simple pointers to memory, g_autoptr handles complex types that need custom cleanup functions. A file handle needs fclose, a database connection needs a close function, a custom structure might need multiple cleanup steps. The g_autoptr macro takes a type name and automatically calls the appropriate cleanup function registered for that type. This is where GLib shows its maturity because the library has already registered cleanup functions for all its own types. GError structures are freed correctly, GFile objects are unreferenced, GInputStream objects are closed and released. Everything just works.
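For example, with a GLib container type whose cleanup function is pre-registered (a sketch):

    #include <glib.h>

    void count_words(void) {
        g_autoptr(GHashTable) counts =
            g_hash_table_new_full(g_str_hash, g_str_equal, g_free, NULL);
        g_hash_table_insert(counts, g_strdup("jsf"), GINT_TO_POINTER(1));
    }   /* g_hash_table_unref(counts) runs automatically, freeing keys too */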
Behind these macros is something called G_DEFINE_AUTOPTR_CLEANUP_FUNC, which is how you teach GLib about your own types. You write a cleanup function that knows how to properly destroy your structure, then you invoke this macro with your type name and cleanup function, and from that moment forward you can use g_autoptr with your type. The macro generates the necessary glue code that connects the cleanup attribute to your function, handling all the pointer indirection correctly. This is critical because the cleanup attribute passes a pointer to your variable, not the variable itself, which means for a pointer variable it passes a double pointer, and getting this wrong leads to crashes or memory corruption.
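Registering your own type looks roughly like this (type and function names invented):

    #include <glib.h>

    typedef struct {
        char *path;
        int   fd;
    } Telemetry;

    static void telemetry_free(Telemetry *t) {
        if (t == NULL)
            return;
        g_free(t->path);
        g_free(t);
    }

    G_DEFINE_AUTOPTR_CLEANUP_FUNC(Telemetry, telemetry_free)

    void use_it(void) {
        g_autoptr(Telemetry) t = g_new0(Telemetry, 1);
        t->path = g_strdup("/dev/ttyS0");
    }   /* telemetry_free(t) is invoked automatically here */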
The third member of this family is g_auto, which handles stack-allocated types. Some GLib types like GString are meant to live on the stack but still need cleanup. A GString internally allocates memory for its buffer even though the GString structure itself is on the stack. The g_auto macro ensures that when the structure goes out of scope, its cleanup function runs to free the internal allocations. Heap pointers, complex objects, and stack structures all get automatic cleanup.
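For a user-defined stack type, the registration macro is G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC; a sketch with invented names:

    #include <glib.h>

    typedef struct {
        char *scratch;  /* heap allocation behind a stack-resident struct */
    } Workspace;

    static void workspace_clear(Workspace *w) {
        g_free(w->scratch);
        w->scratch = NULL;
    }

    G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(Workspace, workspace_clear)

    void work(void) {
        g_auto(Workspace) w = { g_strdup("buffer") };
        /* use w.scratch ... */
    }   /* workspace_clear(&w) runs here; the struct itself was on the stack */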
What's interesting about this system is how it composes. You can have a function that opens a file, allocates several buffers, creates error objects, and builds complex data structures, and you can simply declare each resource with the appropriate auto macro. If any operation fails and you return early, every resource declared up to that point is automatically cleaned up in reverse order of declaration. This is identical to C++ destructors running in reverse order of construction, but you are writing pure C code that works with any GCC or Clang compiler from the past fifteen years.
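A sketch of that composition (paths and names invented):

    #include <glib.h>

    gboolean summarize(const char *path) {
        g_autoptr(GError)  error    = NULL;
        g_autofree char   *contents = NULL;
        gsize length = 0;

        if (!g_file_get_contents(path, &contents, &length, &error)) {
            g_printerr("read failed: %s\n", error->message);
            return FALSE;  /* error and contents both released here */
        }

        g_autofree char *summary =
            g_strdup_printf("%s: %" G_GSIZE_FORMAT " bytes", path, length);
        g_print("%s\n", summary);
        return TRUE;       /* summary, contents, error released in reverse order */
    }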
The foundation beneath all this is GLib's memory allocation functions. The library provides g_malloc, g_new, g_realloc and friends which are drop-in replacements for the standard C allocation functions. These functions have better error handling because g_malloc never returns NULL. If allocation fails, the program aborts with a clear error message. This might sound extreme but for most applications it is actually the right behavior. When malloc returns NULL in traditional C code, most programmers either do not check it, check it incorrectly, or check it but then do not have a reasonable recovery path anyway. GLib acknowledges this reality and makes the contract explicit: if you cannot allocate memory, the program terminates cleanly rather than stumbling forward into undefined behavior.
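Which is why typical GLib code has no NULL check after allocation (sketch, names invented):

    #include <glib.h>

    typedef struct { double lat, lon; } Fix;

    Fix *fix_new(double lat, double lon) {
        Fix *f = g_new(Fix, 1);  /* aborts on OOM; never returns NULL */
        f->lat = lat;
        f->lon = lon;
        return f;
    }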
Reference counting is another critical component of GLib's memory management, particularly for objects. The GObject system, which is GLib's object system for C, uses reference counting to manage object lifetimes. Every object has a reference count starting at one when created. When you want to keep a reference to an object, you call g_object_ref. When you are done with it, you call g_object_unref. When the reference count reaches zero, the object is automatically destroyed. This is the same model used by shared_ptr in C++ or reference counting in Python, but implemented in pure C.
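The model itself fits in a few lines of plain C (a sketch with invented names, using GLib's atomic helpers; real GObject code would use g_object_ref/g_object_unref):

    #include <glib.h>

    typedef struct {
        gint  refcount;
        char *name;
    } Sensor;

    Sensor *sensor_new(const char *name) {
        Sensor *s = g_new0(Sensor, 1);
        s->refcount = 1;            /* born holding one reference */
        s->name = g_strdup(name);
        return s;
    }

    Sensor *sensor_ref(Sensor *s) {
        g_atomic_int_inc(&s->refcount);
        return s;
    }

    void sensor_unref(Sensor *s) {
        if (g_atomic_int_dec_and_test(&s->refcount)) {
            g_free(s->name);        /* last reference gone: destroy */
            g_free(s);
        }
    }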
This also integrates with the autoptr system. Many GLib types are reference counted, and their cleanup functions simply decrement the reference count. This means you can declare a local variable with g_autoptr, the reference count stays positive while you use it, and when the variable goes out of scope the reference is automatically released. If you were the last holder of that reference, the object is freed. If other parts of the code still hold references, the object stays alive. This solves the resource sharing problem that makes manual memory management so difficult in C.
GLib also provides memory pools through GMemChunk and the newer slice allocator, though the slice allocator is being phased out in favor of standard malloc since modern allocators have improved significantly. The concept was to reduce allocation overhead and fragmentation for programs that allocate many small objects of the same size. You create a pool for objects of a specific size and then allocate from that pool quickly without going through the general purpose allocator. When you are done with all objects from that pool, you can destroy the entire pool at once. This pattern shows up in many high-performance C programs but GLib provided it as a reusable component.
The error handling story in GLib deserves special attention because it demonstrates how automatic cleanup enables better error handling patterns. The GError type is a structure that carries error information including a domain, a code, and a message. Functions that can fail take a GError double pointer as their last parameter. If the function succeeds, it returns true or a valid value and leaves the error NULL. If it fails, it returns false or NULL and allocates a GError with details about what went wrong. The calling code checks the return value and if there was an error, examines the GError for details.
The critical part is that GError is automatically freed when declared with g_autoptr. You can write a function that calls ten different operations, each of which might set an error, and you can check each one and return early if something fails, and the error is automatically freed on all code paths. You never leak the error message string, never double-free it, never forget to free it. This is a massive improvement over traditional C error handling where you either ignore errors or write incredibly tedious cleanup code with goto statements jumping to labels at the end of the function.
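Put together, the convention looks like this (function names invented):

    #include <glib.h>

    gboolean load_settings(const char *path, GError **error) {
        g_autoptr(GKeyFile) kf = g_key_file_new();
        if (!g_key_file_load_from_file(kf, path, G_KEY_FILE_NONE, error))
            return FALSE;            /* *error already populated; kf released */
        /* ... read settings out of kf ... */
        return TRUE;
    }

    void startup(void) {
        g_autoptr(GError) error = NULL;
        if (!load_settings("/etc/app.conf", &error)) {
            g_printerr("config: %s\n", error->message);
            return;                  /* error freed automatically */
        }
    }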
The GNOME developers could have switched to C++ or Rust or any modern language, but instead they invested in making C excellent at what C is good at. They added just enough infrastructure to eliminate the common pitfalls without fundamentally changing the language. A C programmer can read GLib code and understand it immediately because it is still just C. The auto macros are syntactic sugar over a compiler attribute, not a new language feature requiring a custom compiler.
This philosophy aligns pretty well with what the F-35 programmers want: the performance and predictability of C with the safety of automatic resource management. No hidden allocations, no virtual dispatch overhead, no exception unwinding cost, no template instantiation bloat. Just deterministic cleanup that happens exactly when you expect it to happen because it is tied to lexical scope, which is something you can see by reading the code.
I found it sort of surprising that the solution to modern C was not a new language or a massive departure from traditional practices. The cleanup attribute has been in GCC since 2003. Reference counting has been around forever. The innovation was putting these pieces together in a coherent system that feels natural to use and composes well.
Sometimes the right tool is not the newest or most fashionable one, but the one that solves your actual problem with the least additional complexity. GLib proves you can have that feature in C, today, with compilers that have been stable for decades, without giving up the simplicity and predictability that makes C valuable in the first place.
mwkaufma•11h ago
- no exceptions
- no recursion
- no malloc()/free() in the inner-loop
tialaramex•10h ago
My guess is that you're assuming all user defined types, and maybe even all non-trivial built-in types too, are boxed, meaning they're allocated on the heap when we create them.
That's not the case in C++ (the language in question here) and it's rarely the case in other modern languages because it has terrible performance qualities.
nmhancoc•10h ago
And if you’re using pooling I think RAII gets significantly trickier to do.
jandrewrogers•9h ago
C++ is designed to make this pretty easy.
elteto•10h ago
You can compile with exceptions enabled, use the STL, but strictly enforce no allocations after initialization. It depends on how strict the spec you are trying to hit is.
Espressosaurus•10h ago
Provocative talk though, it upends one of the pillars of deeply embedded programming, at least from a size perspective.
vodou•9h ago
- C++ Exceptions Reduce Firmware Code Size, ACCU [1]
- C++ Exceptions for Smaller Firmware, CppCon [2]
[1]: https://www.youtube.com/watch?v=BGmzMuSDt-Y
[2]: https://www.youtube.com/watch?v=bY2FlayomlE
elteto•9h ago
So, what exact parts of the STL do you use in your code base? Must be mostly compile-time stuff (types, type traits, etc.).
elteto•9h ago
No algorithms or containers, which to me is probably 90% of what is most heavily used of the STL.
thefourthchime•10h ago
It is "C++", but we also follow the same standards. Static memory allocation, no exceptions, no recursion. We don't use templates. We barely use inheritance. It's more like C with classes.
EliRivers•9h ago
The C++ was atrocious. Home-made reference counting that was thread-dangerous, but depending on what kind of object the multi-multi-multi diamond inheritance would use, sometimes it would increment, sometimes it wouldn't. Entire objects made out of weird inheritance chains. Even the naming system was crazy; "pencilFactory" wasn't a factory for making pencils, it was anything that was made by the factory for pencils. Inheritance rather than composition was very clearly the model; if some other object had a function you needed, you would inherit from that also. Which led to some objects inheriting from the same class a half-dozen times in all.
The multi-inheritance system was given weird control by objects defining, on creation, what kinds of objects (from the set of all kinds that they actually were) they could be cast to via a special function, but any time someone wanted one that wasn't on that list they'd just cast to it using C++ anyway. You had to cast, because the functions were all deliberately private - to force you to cast. But not how C++ would expect you to cast, oh no!
Crazy, home made containers that were like Win32 opaque objects; you'd just get a void pointer to the object you wanted, and to get the next one pass that void pointer back in. Obviously trying to copy MS COM with IUnknown and other such home made QueryInterface nonsense, in effect creating their own inheritance system on top of C++.
What I really learned is that it's possible to create systems that maintain years of uptime and keep their frame accuracy even with the most atrocious, utterly insane architecture decisions that make it so clear the original architect was thinking in C the whole time and using C++ to build his own terrible implementation of C++, and THAT'S what he wrote it all in.
Gosh, this was a fun walk down memory lane.
webdevver•8h ago
it was painful for me to accept that the most elite programmers i have ever encountered were the ones working in high frequency trading, finance, and mass-producers of 'slop' (adtech, etc.)
i still ache to work in embedded fields, in an 8kB-constrained environment, to write perfectly correct code without a cycle wasted, but i know from (others') experience that embedded software tends to have the worst software developers and software development practices of them all.
throwaway2037•6h ago
Also, serious question: Are there any GUI toolkits that do not use multiple inheritance? Even Java Swing uses multiple inheritance through interfaces. (I guess DotNet does something similar.) Qt has it all over the place.
aninteger•2h ago
Actually the only toolkit that I know that sort of copied this style is Nakst's Luigi toolkit (also in C).
Neither really used inheritance; they used composition with "message passing" sent to different controls.
tialaramex•10h ago
The idea of `become` is to signal "I believe this can be tail recursive" and then the compiler is either going to agree and deliver the optimized machine code, or disagree and your program won't compile, so in neither case have you introduced a stack overflow.
Rust's Drop mechanism throws a small spanner into this, in principle if every function foo makes a Goose, and then in most cases calls foo again, we shouldn't Drop each Goose until the functions return, which is too late, that's now our tail instead of the call. So the `become` feature AIUI will spot this, and Drop that Goose early (or refuse to compile) to support the optimization.
tgv•10h ago
But ... that rewrite can increase the cyclomatic complexity of the code on which they have some hard limits, so perhaps that's why it isn't allowed? And the stack overflow, of course.
tialaramex•7h ago
Because Rust is allowed (at this sort of distance in time) to reserve new keywords via editions, it's not a problem to invent more, so I generally do prefer new keywords over re-using existing words but I'm sure I'd be interested in reading the pros and cons.
krashidov•10h ago
I feel like that's the way to go since you don't obscure control flow. I have also been considering adding assertions like TigerBeetle does
https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI...
fweimer•7h ago
Some large commercial software systems use C++ exceptions, though.
Until recently, pretty much all implementations seemed to have a global mutex on the throw path. With higher and higher core counts, the affordable throw rate in a process was getting surprisingly slow. But the lock is gone in GCC/libstdc++ with glibc. Hopefully the other implementations follow, so that we don't end up with yet another error handling scheme for C++.
msla•9h ago
> no recursion
Does this actually mean no recursion, or does it just mean to limit stack use? Because processing a tree, for example, is recursive even if you use an array instead of the stack to keep track of your progress. The real trick is limiting memory consumption, which requires limiting input size.