frontpage.

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•2m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•2m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•3m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•3m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•3m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•4m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•4m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
1•kevinelliott•5m ago•1 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•6m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•7m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•13m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•13m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•14m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•16m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•17m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•17m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•17m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
3•samasblack•19m ago•1 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•20m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•21m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•22m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•24m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•24m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•24m ago•1 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•25m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•26m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•27m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
2•headalgorithm•27m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•28m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•28m ago•1 comments

Hardening the C++ Standard Library at scale

https://queue.acm.org/detail.cfm?id=3773097
158•ndesaulniers•2mo ago

Comments

tialaramex•2mo ago
> those that lead to undefined behavior but aren't security-critical.

Once again C++ people imagining into existence Undefined Behaviour which isn't Security Critical as if somehow that's a thing.

Mostly I read the link because I was intrigued as to how this counted as "at scale", and it turns out that's misleading: the article's main body is about the (at scale) deployment at Google, not the actual hardening work itself, which wasn't "at scale" in any special way.

AshamedCaptain•2mo ago
Of course there is undefined behavior that isn't security critical. Hell, most bugs aren't security critical. In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.

The author of TFA actually makes another related assumption:

> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.

Not at all? Most memory-safety issues will never even show up in the radar, while with "Hardening" you've converted all of them into crashes that for sure will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" that the article is failing to criticize.

charleslmunger•2mo ago
>Not at all? Most memory-safety issues will never even show up in the radar

Citation needed? There's all sorts of problems that don't "show up" but are bad. Obvious historical examples would be heartbleed and cloudbleed, or this ancient GTA bug [1].

1: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...

samdoesnothing•2mo ago
nooooo you don't understand, safety is the most important thing ever for every application, and everything else should be deprioritized compared to that!!!
gishh•2mo ago
Most people around here are too busy evangelizing rust or some web framework.

Most people around here don’t have any reason to have strong opinions about safety-critical code.

Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.

Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.

Most people don’t work at a company that either needs or can afford a safety-critical toolchain that is sufficient for formal, certified verification.

The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely. This subtle point seems to have been lost a long time ago with “*end” developers trying to sell ads, or whatever.

kccqzy•2mo ago
I appreciate your insights about formal verification but they are irrelevant. Notice that GP was talking about security-critical and you substituted it for safety-critical. Your average web app can have security-critical issues but they probably won’t have safety-critical issues. Let’s say through a memory safety vulnerability your web app allowed anyone to run shell commands on your server; that’s a security-critical issue. But the compromise of your server won’t result in anyone being in danger, so it’s not a safety-critical issue.
gishh•2mo ago
Safety-critical systems aren’t connected to a MAC address you can ping. I didn’t move the goalposts.
josephg•2mo ago
Sure they are. Eg, 911 call centers. Flight control. These systems aren’t on the open internet, but they’re absolutely networked. Do they apply regular security patches? If they do, they open themselves up to new bugs. If not, there are known security vulnerabilities just waiting for someone to use to slip into their network and exploit.

And what makes you think buggy software only causes problems when hackers get in? Memory bugs cause memory corruption and crashes. I don’t want my pacemaker running somebody’s cowboy C++, even if the device is never connected to the internet.

gishh•2mo ago
Ah. I was responding to:

> Your average web app can have security-critical issues but they probably won’t have safety-critical issues.

How many air-gapped systems have you worked on?

AlotOfReading•2mo ago
Individual past experiences aren't always representative of everything that's out there.

I've worked on safety critical systems with MAC addresses you can ping. Some of those systems were also air-gapped or partially isolated from the outside world. A rare few were even developed as safety critical.

AlotOfReading•2mo ago

    The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely.
Software safety cases depend on being able to link the executable semantics of the code to your software safety requirements.

You don't inherently need to eliminate UB to define the executable semantics of your code, but in practice you do. You could do binary analysis of the final image instead, and you wouldn't even need a qualified toolchain that way, but the semantics generated would only be valid for that exact build, and validation is one of the most expensive/time-consuming parts of safety-critical development.

Most people instead work at the source code level, and rely on qualified toolchains to translate defined code into binaries with equivalent semantics. Trying to define the executable semantics of source code inherently requires eliminating UB, because the kind of "unrestricted UB" we're talking about has no executable semantics, nor does any code containing it. Qualified toolchains (e.g. Compcert, Green Hills, GCC with solidsand, Diab) don't guarantee correct translation of code without defined semantics, and coding standards like MISRA also require eliminating it.

As a matter of actual practice, safety critical processes "optimistically ignore" some level of undefined behavior, but that's not because it's acceptable from a principled stance on UB.

AlotOfReading•2mo ago

    In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.
No one knows what software will be security critical when it's written. We usually only find out after it's already too late.

Language maintainers have no idea what code will be written. The people writing libraries have no idea how their library will be used. The application developers often don't realize the security implications of their choices. Operating systems don't know much about what they're managing. Users may not even realize what software they're running at all, let alone the many differing assumptions about threat model implicitly encoded into different parts of the stack.

Decades of trying to limit the complexity of writing "security critical code" only to the components that are security critical has resulted in an ecosystem where virtually nothing that is security critical actually meets that bar. Take libxml2 as an example.

FWIW, I disagree with the position in the article that fail-stop is the best solution in general, but there's experimental evidence to support it at least. The industry has tried many different approaches to these problems in the past. We should use the lessons of that history.

hgs3•2mo ago
> The people writing libraries have no idea how their library will be used.

Unless you're paying them, the people writing the libraries have no obligation to care. The real issue is Big Tech built itself on the backs of volunteer labor and expects that labor to provide enterprise-grade security guarantees. That's entitled and wholly unreasonable.

> Take libxml2 as an example.

libxml2 is an excellent example. I recommend you read what its maintainer has to say [1].

[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/913#note_243...

AlotOfReading•2mo ago
That's part of my point. As Nick says, libxml2 was not designed with security in mind and he has no control over how people use it. Yet in the "security only in the critical components" mindset, he's responsible for bearing the costs of security-critical development entirely on his own since daniel left. That sucks.

But this isn't a conversation limited to the big tech parasitism Nick is talking about. A quick check on my FOSS system implicates the text editor, the system monitor, the office suite, the windowing system, the photo editor, flatpak, the IDEs, the internationalization, a few daemons, etc as all depending on libxml2 and its nonexistent security.

criemen•2mo ago
> Of course there is undefined behavior that isn't security critical.

But undefined behavior is literally introduced as "the compiler is allowed to do anything, including deleting all your files". Of course that's security critical by definition?

layer8•2mo ago
Arguably the effort presented assumes the context of LLVM, where there is information on the actual compiler behavior.
forrestthewoods•2mo ago
> Undefined Behaviour which isn't Security Critical as if somehow that's a thing

Undefined behavior in the (poorly written) spec doesn't mean undefined behavior in the real world. A given compiler is perfectly free to specify the behavior.

pjmlp•2mo ago
Well, there is ongoing work to put all known UB into an UB annex on the standard, with hope that the size of the annex gets reduced over time.

How well this will work out remains to be seen.

Rust still needs to get rid of its dependency on C++ compiler frameworks, and I don't see Cranelift matching GCC and LLVM any time soon.

on_the_train•2mo ago
std::optional is unsafe in idiomatic use cases? I'd like to challenge that.

Seems like the daily anti c++ post

steveklabnik•2mo ago
Two of the authors are libc++ maintainers and members of the committee, it would be pretty odd if they were anti C++.
maccard•2mo ago
I’m very much pro c++, but anti c++’s direction.

> optional is unsafe in idiomatic use cases? I’d like to challenge that.

    std::optional<int> x(std::nullopt);
    int val = *x;

Optional is by default unsafe - the above code is UB.
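
For contrast, a minimal sketch (not from the article) of the checked alternatives - a has_value() guard, and .value(), which throws std::bad_optional_access instead of invoking UB:

    #include <optional>

    int checked(std::optional<int> x) {
        // Guarded dereference: *x is only evaluated when x is engaged.
        if (x.has_value()) return *x;
        return -1;
    }

    int throwing(std::optional<int> x) {
        // value() is the checked accessor: it throws std::bad_optional_access
        // when x is disengaged, instead of undefined behaviour.
        return x.value();
    }
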
TinkersW•2mo ago
That is actually memory safe, as null will always trigger an access violation.

Anyway, safety-checked modes are sufficient for many programs; this article claims otherwise but then contradicts itself by showing that they caught most issues using ... safety-checked modes.

steveklabnik•2mo ago
It is undefined behavior. You cannot make a claim about what it will always do.
AlotOfReading•2mo ago
As a fun example, I worked on a safety-critical system where accessing all-bits-zero pointers would trigger an IRQ that jumped back to PC + 4, leaving the register/variable uninitialized. Great fun was had any time there was LR corruption and CPU started executing whatever happened to be next in memory after function return.
tonyarkles•2mo ago
Hahahaha well that behaviour is certainly fun!

I recently had a less wild but similarly baffling experience on an embedded-but-not-small device. Address 0 was actually a valid address. We were getting a HardFault because a device driver was dereferencing a pointer to an invalid but not-null address. Working backwards, I found that it was getting that invalid address not from 0x0 but rather from 0xC… because the pointer was stored in the third field of a struct and our pointer to that struct was null.

   foo->bar->baz->zap
Foo = 0, &bar = 0xC, baz = invalid address, *baz to get zap is what blew up.
maccard•2mo ago
>null will always trigger access violation..

No, it won't. https://gcc.godbolt.org/z/Mz8sqKvad

TinkersW•2mo ago
Oh, my bad, I read that as nullptr. I use a custom optional that does not support such a silly mode as "disengaged".
canyp•2mo ago
How is that an optional then?

The problem is not nullopt, but that the client code can simply dereference the optional instead of being forced to pattern-match. And the next problem, like the other guy mentioned above, is that you cannot make any claims about what will happen when you do so because the standard just says "UB". Other languages like Haskell also have things like fromJust, but at least the behaviour is well-defined when the value is Nothing.

maccard•2mo ago
What do you return if there is no value set? That’s the entire point of optional.
wild_pointer•2mo ago
You didn't read this, did you? https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

It's not a pointer.

on_the_train•2mo ago
But using the deref op is deliberately unsafe, and never used without a check in practice. This would neither pass a review, nor static analysis.
canyp•2mo ago
GP picked the less useful of the two examples. The other one is a use-after-move, which static analysis won't catch beyond trivial cases where the relevant code is inside function scope.

I also agree with them: I am pro-C++ too, but the current standard is a fucking mess. Go and look at modules if you haven't, for example (don't).
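
A minimal use-after-move sketch (my own example, not the one from the linked post) of the kind of code static analysis struggles with once it crosses function boundaries - the moved-from string is in a valid but unspecified state, so the last line compiles cleanly yet its result is meaningless:

    #include <string>
    #include <utility>
    #include <vector>

    void collect(std::vector<std::string>& out, std::string s) {
        out.push_back(std::move(s));  // s is now moved-from
        // Legal to call, but s holds an unspecified value here; most
        // implementations leave it empty, none are required to.
        if (!s.empty()) out.push_back(s);
    }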

nly•2mo ago
No type is safe to use after a move, unless it's documented to put itself in to a well defined state and says as such

You can't magically make all the member functions on std::vector safe after a move for example unless the moved from vector allocates itself a new (empty) buffer, which kills the performance benefits.

It's all by design.

surajrmal•2mo ago
I believe it's actually the opposite. You're supposed to be able to reuse objects that were moved from unless otherwise documented, although it may require reinitializing it explicitly. A moved from vector is valid to reuse. Although the standard doesn't specify what state it will be in, all major standard library implementations return it to the default constructed state which doesn't require an allocation.
nly•2mo ago
Read what I wrote again. I said all member functions.

Reusing of a moved from object only requires assignment and destruction to be well behaved.

The std library containers give you extra guarantees (a moved from object is effectively the same as a default constructed one), but the _language_ imposes no such requirements on your types

It's perfectly allowed by the language for the .size() member of your own vector type to return a random value after it's been moved from because you wanted to save 1 CPU instruction somewhere.
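
A toy illustration of that trade-off (a sketch, not any real library's type): the language only requires that a moved-from object can still be destroyed and assigned to, so whether the source gets reset is entirely the author's choice.

    #include <cstddef>
    #include <utility>

    struct TinyVec {
        int*        data_ = nullptr;
        std::size_t size_ = 0;

        TinyVec() = default;
        TinyVec(TinyVec&& other) noexcept
            : data_(std::exchange(other.data_, nullptr)),
              size_(std::exchange(other.size_, 0)) {}
        // Dropping the two std::exchange resets would save a couple of
        // stores, but then other.size() would report a stale count after
        // the move - exactly the "save 1 CPU instruction" case above.
        ~TinyVec() { delete[] data_; }

        std::size_t size() const { return size_; }
    };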

surajrmal•2mo ago
I understand what you're saying but no major compiler does anything nefarious. They default initialize it. A lot of code depends on std::optional that was moved from returning false for is_valid for instance and no one would break that even if it's not guaranteed.
simonask•2mo ago
This is false. Moved-from is a "valid but unspecified" state in C++, so perfectly safe, but each type must decide what to do. At the minimum, the destructor must be able to run, because it will be invoked, meaning that the obvious choice is also the only sensible one: letting moved-from be equivalent to the "empty" state.

An empty std::vector does not require that any buffer is allocated. It just has a null data pointer.

nly•2mo ago
See my sibling comment regarding vector.

It's not true that the only sensible choice for a moved-from object be equivalent to the defaulted constructed one.

If your move constructor doesn't exist then the copy constructor gets called under the language rules, so the sensible default is actually a copy.

Everything else is an optimisation that has a trade-off

A conforming implementation of std::list, for example, can have a default constructor and a move constructor that both allocate a sentinel node on the heap, which is why none of the constructors are noexcept.

If you don't allocate a sentinel on the heap, then moving std::list can invalidate iterators (which is what GNU stdlibc++'s implementation chooses).

It's a trade off.

simonask•2mo ago
I did not say “default-constructed”, because that’s a whole other can of worms.

But yes, the implication of C++ move semantics is that every movable object must also define an “empty” (moved-from) state, so you cannot have something like a never-null unique ptr.

Specifically, it is not allowed for the moved-from object to be inconsistent or to say “using it in any way is UB”, because its destructor will run.

mohinder•2mo ago
> This would neither pass a review, nor static analysis

I beg to differ. Humans are fallible. Static analysis of C++ cannot catch all cases and humans will often accept a change that passes the analyses.

einpoklum•2mo ago
> Static analysis of C++ cannot catch all cases

You're ignoring how static analysis can be made to err on the side of safety rather than promiscuity.

Specifically, for optional dereferencing, static analysis can be made to disallow it unless it can prove the optional has a value.

IshKebab•2mo ago
> never used without a check in practice

Ho ho ho good one.

nly•2mo ago
A static analyzer could easily tell you to use the monadic member functions instead of a raw dereference.
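
A hedged sketch of what that refactor can look like - value_or (C++17) and the C++23 monadic members avoid the raw dereference entirely (parse_port is a made-up helper):

    #include <optional>
    #include <string>

    std::optional<int> parse_port(const std::string& s) {
        try { return std::stoi(s); } catch (...) { return std::nullopt; }
    }

    int port_or_default(const std::string& s) {
        // No dereference: value_or supplies the fallback itself.
        return parse_port(s).value_or(8080);
    }

    std::optional<std::string> port_label(const std::string& s) {
        // transform runs the lambda only when the optional is engaged (C++23).
        return parse_port(s).transform(
            [](int p) { return "port " + std::to_string(p); });
    }
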
boulos•2mo ago
They linked directly to https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/ which did exactly what I'd guessed as its example:

> The following code for example, simply returns an uninitialized value:

  #include <optional>

  int f() {
    std::optional<int> x(std::nullopt);
    return *x;
  }
on_the_train•2mo ago
But that is not idiomatic at all. Idiomatic would be to use .value()
IshKebab•2mo ago
Not only is this a silly No True Scotsman argument, but it's also absolute nonsense. It's perfectly idiomatic to use `*some_optional`.
on_the_train•2mo ago
It is with a prior .has_value call; it's not correct without. It's simple, and covered by static analysis. This is not an issue in real code; it's a pathological error that doesn't actually happen. Like most anti-C++ examples.
Maxatar•2mo ago
Just a cursory search on Github should put this idea to rest. You can do a code search for std::optional and .value() and see that only about 20% of uses of std::optional make use of .value(). The overwhelming majority of uses of std::optional use * to access the value.
electroly•2mo ago
Sadly I have lots of code that exclusively uses the dereference operator because there are older versions of macOS that shipped without support for .value(); the dereference operator was the only way to do it! To this day, if you target macOS 10.13, clang will error on use of .value(). Lots of this code is still out there because they either continue to support older macOS, or because the code hasn't been touched since.
canyp•2mo ago
It is discussed in the linked post: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

tl;dr: use-after-move, or dereferencing null.

xiphias2•2mo ago
It's great that finally bounds checking happened in C++ by (mostly) default.

The only thing that's less great is that this got so many fewer upvotes than all the Safe-C++ languages that never really had the chance to get into production in old code.

pjmlp•2mo ago
It has always been the default in compiler provided frameworks before C++98, like Turbo Vision, BIDS, OWL, MFC and so on.

Unfortunately the default changed when C++98 came out, and not everyone bothered to provide at least a hardening mode in debug builds, the way VC++ (followed by GCC) and compilers in high-integrity computing like Green Hills did.

Sadly the security and quality mentality seems to be a hard sell in areas where folks are supposed to be Engineers and not craftsmen.

xiphias2•2mo ago
It's not simply security/quality; speed of iteration matters as well: the earlier bugs are found, in optimized no-debug mode as well, the more I like to use C++ itself for developing.
ris•2mo ago
See also the "lite assertions" mode @ https://gcc.gnu.org/wiki/LibstdcxxDebugMode for libstdc++, however these are less well documented and it's less clear what performance impact these measures are expected to have.
BinaryIgor•2mo ago
Interesting how C++ is still improving; seems like changes of this kind may rival at least some of the Rust use cases; time will tell
galangalalgol•2mo ago
The issue with safer c++ and modern c++ is the mirror of the problem with migrating a code base from c++ to rust. There is just so much unmodern and unsafe c++ out there. Mixing modern c++ into older codebases leaves uncertain assumptions everywhere and sometimes awkward interop with the old c++. If there were a c++23{} that let the compiler know that only modern c++ and libc++ existed inside it, it would make a huge difference by making those boundaries clear, and you could document the assumptions at that boundary. Then move it over time. The optimizer would have an advantage in that code too. But they don't want to do that. The least they could do is settle on a standard c++ abi to make interop with newer languages easier, but they don't want to do that either. They have us trapped with sunk cost on some giant projects. Or they think they do. The big players are still migrating to rust slowly, but steadily.
kaz-inc•2mo ago
There kind of is. There's __cplusplus, which I'll grant you is quite janky.

  #if __cplusplus == 202302L
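
As a rough sketch of that kind of gating, feature-test macros (here via <version>, C++20) say more than the raw __cplusplus check, since they report what the library actually ships rather than just the language level:

    #include <version>

    #if defined(__cpp_lib_optional) && __cpp_lib_optional >= 201606L
    #include <optional>
    // "Modern" path: the library really provides std::optional.
    inline std::optional<int> maybe_answer() { return 42; }
    #else
    // Fallback for older toolchains.
    inline int maybe_answer() { return 42; }
    #endif
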
GeorgeTirebiter•2mo ago
I'm wondering if the C++ -> Rust converters out there are part of the Solution: After converting C++ to Rust, then convert Rust to C++ and you now have clean code which can continue to use all the familiar tooling.
aw1621107•2mo ago
> I'm wondering if the C++ -> Rust converters out there are part of the Solution

Are there C++-to-Rust converters? There are definitely C-to-Rust converters, but I haven't heard of anyone attempting to tackle C++.

> After converting C++ to Rust, then convert Rust to C++ and you now have clean code which can continue to use all the familiar tooling.

This only works if a hypothetical C++ to Rust converter converts arbitrary C++ to safe Rust. C++ to unsafe Rust already seems like a huge amount of work, if it's even possible in the first place; further converting to safe Rust while preserving the semantics of the original C++ program seems even more of a pie in the sky.

GoblinSlayer•2mo ago
Presumably you migrate to this Rustic++ version implying it's maintainable.
aw1621107•2mo ago
I'm not questioning what you do once you get to that hypothetical Rustic++. I'm questioning whether it's possible to get there using automated tooling in the first place.
GeorgeTirebiter•2mo ago
I think you have a point. Rustic++ won't be vibe-coded in a weekend.
Someone•2mo ago
That checks whether a C++ compiler is used and, if so, for the version of the compiler, not whether “only modern c++ and libc++ existed inside it”.

C++ compilers carry lots of baggage for backwards compatibility.

blub•2mo ago
> There is just so much unmodern and unsafe c++ out there. Mixing modern c++ into older codebases leaves uncertain assumptions everywhere and sometimes awkward interop with the old c++

Your complaint doesn’t look valid to me: the feature in the article is implemented with compiler macros that work with old and new code without changes.

See https://libcxx.llvm.org/Hardening.html#notes-for-users

galangalalgol•2mo ago
Thats true, going bottom up is easier. Old c++ making calls into modern c++ or rust is less of a problem than the other way.
pjmlp•2mo ago
And not everywhere, as there are many industrial scenarios where Rust either doesn't have an answer yet, or is still in early baby steps regarding tooling and ecosystem support.
josephg•2mo ago
I’m not really sure how checks like this can rival rust. Rust does an awful lot of checks at compile time - sometimes even to the point of forcing the developer to restructure their code or add special annotations just to help the compiler prove safety. You can’t trivially reproduce all those guardrails at runtime. Certainly not without a large performance hit. Even debug mode stdc++ - with all checks enabled - still doesn’t protect against many bugs the rust compiler can find and prevent.

I’m all for C++ making these changes. For a lot of people, adding a bit of safety to the language they’re going to use anyway is a big win. But in general, guarding against threading bugs, use after free, or a lot of the more obscure memory issues requires either expensive GC-like runtime checks (Fil-C has 0.5x-4x performance overhead and a large memory overhead) or compile-time checks. And C++ will never get rust’s extensive compile-time checks.

blub•2mo ago
They rival Rust in the same way that golang and zig do: they handle more and more memory-safety bugs to the point that the delta to Rust’s additional memory-safety benefits doesn’t justify the cost of using Rust any more.
simonask•2mo ago
Zig does not approach, and does not claim to approach, Rust's level of safety. These are completely different ballparks, and Zig would have to pivot monumentally for that to happen. Zig's selling point is about being a lean-and-mean C competitor, not a Rust replacement.

Golang is a different thing altogether (garbage collected), but they still somehow managed to have safety issues.

pjmlp•2mo ago
It could have gotten them, had the Safe C++ proposal not been shot down by the profiles folks, those profiles that are still vapourware as C++26 gets finalised.

Google just did a talk at LLVM US 2025, regarding the state of clang lifetime analyser, the TL;DW is we're still quite far from the profiles dream.

lenkite•2mo ago
Man, that written proposal was so good, so much work was put into it and so disappointing that it was shot down for a non-existent approach.
semiinfinitely•2mo ago
> Interesting how C++ is still improving

its not

Conscat•2mo ago
Do you read the Clang git commit log every day? C++ improves in many ways faster than any other language ecosystem.
Maxatar•2mo ago
I think he was referring to the language specification, not a specific compiler.
actionfromafar•2mo ago
But that is also wrong, as per the article C++ 26 got some improvements in a hardening profile.
Maxatar•2mo ago
I see that C++26 has some incredibly obscure changes in the behavior of certain program constructs, but this does not mean that these changes are improvements.

Just reviewing the actual hardening of the standard library, it looks like in C++26 an implementation may be considered hardened in which case if certain preconditions don't hold then a contract violation triggers an assertion which in turn triggers a contract violation handler which may or may not result in a predictable outcome depending on one of 4 possible "evaluation semantics".

Oh and get this... if two different translation units have different evaluation semantics, a situation known as "mixed-mode" then you're shit out of luck with respect to any safety guarantees as per this document [1] which says that mixed-mode applications shall choose arbitrarily among the set of evaluation semantics, and as it turns out the standard library treats one of the evaluation semantics (observe) as undefined behavior. So unless you can get all third party dependencies to all use the same evaluation semantic, then you have no way to ensure that your application is actually hardened.

So is C++26 adding changes? Yes it's adding changes. Are these changes actual improvements? It's way too early to tell but I do know one thing... it's not at all uncommon that C++ introduces new features that substitute one set of problems for a new set of problems. There's literally a 300-page book that goes over 20 distinct forms to initialize an object [2], many of these forms exist to plug problems introduced by previous forms of initialization! For all we know the same thing might be happening here, where the classical "naive" undefined behavior is being alleviated but in the process C++ is introducing an entire new class of incredibly difficult to diagnose issues. And lest you think I'm just spreading FUD, consider this quote from a paper titled "C++26 Contracts are not a good fit for standard library hardening" [3] submitted to the C++ committee regarding this upcoming change arguing that it risks giving nothing more than the illusion of safety:

>This can result in violations of hardened preconditions being undefined behaviour, rather than guaranteed to be diagnosed, which defeats the purpose of using a hardened implementation.

[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p29...

[2] https://www.amazon.ca/dp/B0BW38DDBK?language=en_US&linkCode=...

[3] https://isocpp.org/files/papers/P3878R0.html

aw1621107•2mo ago
I believe there were some changes in the November C++ committee meeting that (ostensibly) alleviate some of the contracts/hardening issues. In particular:

- P3878 [0] was adopted, so the standard now forbids "observe" semantics for hardened precondition violations. To be fair, the paper doesn't explicitly say how this change interacts with mixed mode contract semantics, and I'm not familiar enough with what's going on to fill in the gaps myself.

- It appears there is interest in adopting one of the changes proposed in D3911 [1], which introduces a way to mark contracts non-ignorable (example syntax is `pre!()` for non-ignorable vs. the current `pre()` for ignorable). A more concrete proposal will be discussed in the winter meeting, so this particular bit isn't set in stone yet.

[0]: https://isocpp.org/files/papers/P3878R1.html

[1]: https://isocpp.org/files/papers/D3911R0.html

blub•2mo ago
The implementations of hardening in libc++ and libstdc++ are available now and are straightforward to use.

https://libcxx.llvm.org/Hardening.html

https://gcc.gnu.org/wiki/LibstdcxxDebugMode (was already available for longer, the official hardening might take this over or do something else)

menaerus•2mo ago
Mixed mode is about the same function compiled with different evaluation semantics in different TUs, and it is legit. The only case they are wondering about is how to deal with inlined functions, and they suggest ABI extensions to support it at link time. None of what you said is an issue.

> The possibility to have a well-formed program in which the same function was compiled with different evaluation semantics in different translation units (colloquially called “mixed mode”) raises the question of which evaluation semantic will apply when that function is inline but is not actually inlined by the compiler and is then invoked. The answer is simply that we will get one of the evaluation semantics with which we compiled.

> For use cases where users require strong guarantees about the evaluation semantics that will apply to inline functions, compiler vendors can add the appropriate information about the evaluation semantic as an ABI extension so that link-time scripts can select a preferred inline definition of the function based on the configuration of those definitions.

Maxatar•2mo ago
Not sure what you mean by the term "legit".

The entirety of the STL is inlined so it's always compiled in every single translation unit, including the translation units of third party dependencies.

Also it's not me saying, it's literally the authors of the MSVC standard library and the GCC standard library pointing out these issues [1]:

[1] https://isocpp.org/files/papers/P3878R0.html

menaerus•2mo ago
Legit as in allowed and not an issue as you're trying to convey, ok? I read the paper if that wasn't already obvious from my comment. What you said is factually incorrect.
Maxatar•2mo ago
Not sure I understand what point you're trying to dispute. It's not obvious at all that you read either my post or the paper I posted authored by the main contributors to MSVC and GCC about the issues mixed-mode applications present to the implementation of the standard library given that you haven't presented any defense of your position that addresses these issue. You seem to think that just declaring something "legit" and retorting "you are incorrect" is a sufficient justification.

If this is the extent of your understanding it's a fairly good indication you do not have sufficient background on this topic and may be expressing a very strong opinion out of ignorance of this topic. It's not at all uncommon that those with the most superficial understanding of a subject express the strongest views of said topic [1].

Doing a cursory review of some of your recent posts, it looks like this is a common habit of yours.

[1] https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

menaerus•2mo ago
I have literally copy-pasted the fragments from the paper you're referring to which invalidate your points. How is that not obvious? Did you read the paper yourself or you're just holding strong opinions yourself, as you usually do whenever there is something to backlash against C++? I'm glad you're familiar with the Dunning-Kruger effect, this means there is some hope for you.
GoblinSlayer•2mo ago
Violation of preconditions is always undefined behavior, this isn't new.
aw1621107•2mo ago
The problem is that violation of preconditions being UB in a hardened implementation sort of defeats the purpose of using the hardened implementation in the first place!

This was acknowledged as a bug [0] and fixed in the draft C++26 standard pretty recently.

[0]: https://isocpp.org/files/papers/P3878R1.html

GoblinSlayer•2mo ago
>A hardened implementation that uses 'observe' is not hardened

The proposal simply included a provision to turn off hardening, nothing else. Traditionally these checks were under #ifndef NDEBUG

aw1621107•2mo ago
> The proposal simply included a provision to turn off hardening, nothing else.

(Guessing "the proposal" refers to the hardening proposal?)

I don't think that is correct since the authors of the hardening proposal agreed that allowing UB for hardened precondition violations was a mistake and that P3878 is a bug fix to their proposal. Presumably the intended way to turn off hardening would be to just... not enable the hardened implementation in the first place?

Maxatar•2mo ago
Using #ifndef NDEBUG in templates is one of the leading causes of one-definition rule violations.

At least traditionally it was common to not mix debug builds with optimized builds between dependencies, but now with contracts introducing yet another set of orthogonal configuration it will be that much harder to ensure that all dependencies make use of the same evaluation semantic.

fpoling•2mo ago
Rust's borrow checker rules out a lot of patterns that typical C++ code uses. So even if C++ got similar rules, they still could not be applied to most of the existing code in any case.
fweimer•2mo ago
How does this compare to _GLIBCXX_ASSERTIONS in libstdc++ (on by default in Fedora since 2018)?
beached_whale•2mo ago
My understanding is that this is like that, but both libstdc++ and libc++ have been doing more since. Additionally, Google did a blog post not too long ago where they talked about the actual performance impact on their large C++ codebase; it averaged about 0.3% I think: https://security.googleblog.com/2024/11/retrofitting-spatial...

Since then, libc++ has categorized the checks by cost and one can scale them back too.
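
As a concrete sketch of how both libraries are switched into these modes today (flag names per the libc++/libstdc++ docs linked elsewhere in the thread; check your toolchain before relying on them):

    // Build examples (assumed invocations):
    //   g++ -O2 -D_GLIBCXX_ASSERTIONS demo.cpp
    //   clang++ -O2 -stdlib=libc++ \
    //     -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST demo.cpp
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        // Out-of-bounds operator[]: plain UB in an unhardened build, a
        // guaranteed trap/abort under either hardened configuration.
        return v[3];
    }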

dana321•2mo ago
Imagine hardening the regex library, its already as slow as molasses.
jeffbee•2mo ago
There are lots of parts of the standard library that nobody uses, and hardening has no performance impacts in code you don't call.
Panzerschrek•2mo ago
It's a good decision to add at least some checks into C++ standard library. But no runtime check can find a bug in code like this:

  std::vector<int> v;
  v.push_back(123);
  auto& n = v.front();    // n points into v's heap buffer
  v.push_back(456);       // may reallocate, invalidating n
  auto n_doubled = n * 2; // read through a dangling reference: UB
A better language is needed in order to prevent such bugs, where such compile-time correctness checks are possible. Some static analyzers are able to detect it in C++, but only in some cases.
delta_p_delta_x•2mo ago
It seems to me statically checking this should be possible. The liveness of the result of std::vector::front() should be invalidated and be considered dead after the second invocation to push_back(). Then a static analyser would correctly mark the final line with red squiggles. Of course, compilers would still be happy to compile this, which they really ought not to.
aw1621107•2mo ago
> It seems to me statically checking this should be possible.

Statically checking this specific example (or similarly simple examples) could be possible, sure. I'm not so sure about more complex cases, such as opaque functions (whether because the function is literally opaque or because not enough inlining occurred), stored references (e.g., std::span), unintentional mutation of the underlying data structure, etc.

That's basically one of the main reasons Rust's lifetimes exist - to explicitly encode information about when lifetimes are valid in the type system. C++ doesn't have an equivalent (yet?), so unless you're willing to use global analysis and/or non-standard annotations there's only so much static analysis can do.

optimalsolver•2mo ago
What does this do?
aw1621107•2mo ago
Potential use-after-free. push_back() may reallocate, which would invalidate the reference returned by front(), rendering its subsequent use invalid.
optimalsolver•2mo ago
Thank you!
tonyarkles•2mo ago
In my own experience with “modern-ish C++” (the platform I work with only supports up to C++17 for now), once we started using smart pointers, like unique_ptr and shared_ptr, iterator invalidation has been the primary source of memory safety errors. You have to be so careful any time you have a reference into a container.

In a lot of cases the solution is already sitting there for you in <algorithm> though. One of the more common places we’ve encountered this problem is when someone has a simple task like “delete items from this vector that match some predicate” and then just writes a for-loop that does that but doesn’t handle the fact that the iterators can go bad when you modify the vector. The algorithms library has functions in it to handle that, but without a good mental checklist of what’s all in there people will generally just do the simple (and unfortunately wrong) thing.
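
A quick sketch of the pattern being described - the C++17-friendly erase-remove idiom (C++20 shortens it to std::erase_if), which never touches an iterator after the container has been modified:

    #include <algorithm>
    #include <vector>

    void drop_negatives(std::vector<int>& v) {
        // remove_if compacts the kept elements to the front and returns the
        // new logical end; erase then drops the tail in one call.
        v.erase(std::remove_if(v.begin(), v.end(),
                               [](int x) { return x < 0; }),
                v.end());
    }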

optimalsolver•2mo ago
How much performance do you give up with smart pointers?
tonyarkles•2mo ago
Not enough that we’ve ever noticed it being at all significant in the profiling we do regularly. The system is an Edge ML processor running about 600 megapixel/sec through a pipeline that does pre- and post-processing on multiple CPUs and does inference on the GPU using TensorRT. In general allocation isn’t anywhere near our bottleneck, it’s all of the image processing work that is.
GoblinSlayer•2mo ago
Nirvana fallacy. Some checks are better than no checks.
lang4d•2mo ago
I'd be surprised if some combination of ASAN and UBSAN wouldn't catch this and similar dangling references
steveklabnik•2mo ago
I thought it should too, but it doesn't seem to, unless I made a mistake, which I probably did: https://godbolt.org/z/Ex63vxj4r
aw1621107•2mo ago
I think you need to actually use the dangling reference: https://godbolt.org/z/f4s3fT3nM
steveklabnik•2mo ago
Ah, that would make sense, yes. Thanks!