func atomic_get_and_inc() -> Int {
    sem.wait()
    defer {
        value += 1
        sem.signal()
    }
    return value
}

It gets even better in swift, because you can put the return statement in the defer, creating a sort of named return value:
func getInt() -> Int {
    let i: Int // declared but not
               // defined yet!
    defer { return i }
    // all code paths must define i
    // exactly once, or it’s a compiler
    // error
    if foo() {
        i = 0
    } else {
        i = 1
    }
    doOtherStuff()
}

The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:
fn callFoo() -> FooResult {
    let fooParam: Int // declared, not defined yet
    defer {
        // fooParam must get defined by the end of the function
        foo(fooParam)
        otherStuffAfterFoo() // …
    }
    // all code paths must assign fooParam
    if cond {
        fooParam = 0
    } else {
        fooParam = 1
        return // early return!
    }
    doOtherStuff()
}
Blame it on it being years since I’ve coded in swift, my memory is fuzzy.

struct PrintOnDrop;
impl Drop for PrintOnDrop {
    fn drop(&mut self) {
        println!("dropped");
    }
}

fn main() {
    let p = PrintOnDrop;
    return println!("returning");
}
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.

#include <iostream>
#define RemParens_(VA) RemParens__(VA)
#define RemParens__(VA) RemParens___ VA
#define RemParens___(...) __VA_ARGS__
#define DoConcat_(A,B) DoConcat__(A,B)
#define DoConcat__(A,B) A##B
#define defer(BODY) struct DoConcat_(Defer,__LINE__) { ~DoConcat_(Defer,__LINE__)() { RemParens_(BODY) } } DoConcat_(_deferrer,__LINE__)
int main() {
    {
        defer(( std::cout << "Hello World" << std::endl; ));
        std::cout << "This goes first" << std::endl;
    }
}

Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write

    foo
    defer revert_foo

then, when scanning the code, it’s easier to verify that you didn’t forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.

A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
busy = true
Task {
    defer { busy = false }
    // do async stuff, possibly throwing exceptions and whatnot
}

Defer is more flexible/requires less boilerplate to add callsite specific handling. For an example, see https://news.ycombinator.com/item?id=46410610
https://jacobfilipp.com/DrDobbs/articles/CUJ/2000/cexp1812/a...
A similar macro later (2006) made its way into Boost as BOOST_SCOPE_EXIT:
https://www.boost.org/doc/libs/latest/libs/scope_exit/doc/ht...
I can't say for sure whether Go's creators took inspiration from these, but it wouldn't be surprising if they did.
C++ has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.
What I mean is that in C++ all the numerous language features are exposed through little syntax/grammar details, whereas in Lisps syntax and grammar are primitive, and this is why macros work so well.
I wish we had something like JavaScript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.
By far the worst in this aspect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than C++.
> whether C++ syntax ever becomes readable when you sink more time into it,
Yes, and the easy approach is to learn as you need/go.
(1) Why doesn't it look like C++?
(2) Why does it look so much like C++?
Unless you are many of my coworkers, then you blissfully never think about those things, and have Cursor reply for you when asked about them (-:
> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
Java is actively removing its finalizers.
They are fundamentally different concepts.
See Destructors, Finalizers, and Synchronization by Hans Boehm - https://dl.acm.org/doi/10.1145/604131.604153
It is because of all these problems that the finalize method was deprecated in Java 9 and marked "deprecated for removal" (JEP 421) in Java 18. More details at https://stackoverflow.com/questions/56139760/why-is-the-fina... and https://inside.java/2022/01/12/podcast-021/
PS: JEP 421: Deprecate Finalization for Removal - https://openjdk.org/jeps/421 Also details alternative features/techniques to use.
(I do also realize that finalizer behavior in some languages is weird, for performance reasons and sometimes just legacy reasons. Go is one such language.)
But I think we've both hit a level of digression that wouldn't be helpful even if we were disagreeing about the facts (which I don't really think we are. I think this is entirely about frames of reference rather than a material dispute over the facts.) Forgetting whether finalizers are truly a form of destructor or not, the point I was trying to make really was that I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`. You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes. (Though I ultimately still prefer errors being passed around as value types, like with std::expected, rather than exception handling blocks.)
I believe the reason why we don't have languages (that I can think of) that demonstrate this exact combination is specifically because try/catch exception blocks fell out of favor at the same time that new compiled/"low-level" programming languages started picking up steam. A lot of new programming language designs that do use explicit lifetimes (Zig, Rust, etc.) simply don't have try...catch style exception blocks in the first place, if they even have anything that resemble exceptions. Even a lot of new garbage collected languages don't use try...catch exceptions, like of course Go.
Now honestly I could've made a better attempt at conveying my position earlier in this thread, but I'm gonna be honest, once I realized I struck a nerve with some people I became pretty unmotivated to bother, sometimes I'm just not in the mood to try to win over the crowd and would rather just let them bury me, at least until the thread died down a bit.
This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.
The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
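For instance, a minimal sketch of such a LogTrace class (the name and details are illustrative, not taken from the article):

#include <iostream>
#include <string>

// The object holds no resource worth "managing"; its ctor/dtor exist only
// as scope entry/exit hooks.
class LogTrace {
public:
    explicit LogTrace(std::string name) : name_(std::move(name)) {
        std::cout << "enter " << name_ << '\n';
    }
    ~LogTrace() {
        std::cout << "exit " << name_ << '\n';
    }
private:
    std::string name_;
};

void doWork() {
    LogTrace trace("doWork"); // prints "enter doWork"
    // ... function body ...
}                             // prints "exit doWork" on any exit path, including exceptions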
> I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`.
Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.
> You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes.
The article already points out the main issues (in both non-GC/GC languages) here but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor C++ does give you std::uncaught_exceptions() which one can use for those special times when you must handle/throw exceptions in a dtor. More details at - https://stackoverflow.com/questions/74607300/should-i-use-st... and https://en.cppreference.com/w/cpp/error/uncaught_exception.h...
Exception handling is always tricky to implement/use in any language since there are multiple models (i.e. Termination vs. Resumption) and a language designer is often constrained in his choice. Wikipedia has a very nice explanation - https://en.wikipedia.org/wiki/Exception_handling_(programmin... In particular, see the Eiffel contract approach mentioned in it and then the detailed rationale in Bertrand Meyer's OOSC2 book - https://bertrandmeyer.com/OOSC2/
> The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
Hahaha. It is certainly not a fundamental misunderstanding.
All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.
> Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.
You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. Seems like it works to me, not particularly elegant to use. But can you emulate `finally`? Again, no. FTA:
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost. Update: Adam Rosenfield points out that Python 3.2 now saves the original exception as the context of the new exception, but it is still the new exception that is thrown.
> In C++, an exception thrown from a destructor triggers automatic program termination if the destructor is running due to an exception.
C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, and have spent a lot of my time on -fno-exceptions (among many other reasons.)
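(To make the Go-style emulation above concrete, here is roughly what I mean; a sketch only, and the class name is made up.)

#include <functional>
#include <vector>

// Rough sketch of emulating Go-style (function-scoped, LIFO) defer:
// instantiate one of these at the top of a function, push cleanup actions
// as you go, and they run in reverse order on return or unwind.
class DeferQueue {
public:
    void push(std::function<void()> fn) { fns_.push_back(std::move(fn)); }
    ~DeferQueue() {
        for (auto it = fns_.rbegin(); it != fns_.rend(); ++it) (*it)();
    }
private:
    std::vector<std::function<void()>> fns_;
};

void example() {
    DeferQueue defer;
    // acquire resource A
    defer.push([] { /* release A */ });
    // acquire resource B
    defer.push([] { /* release B */ });
    // ...
}   // releases B, then A, even on early return or exception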
> The article already points out the main issues (in both non-GC/GC languages) here but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor C++ does give you std::uncaught_exceptions() which one can use for those special times when you must handle/throw exceptions in a dtor. More details at ...
Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter. You can typically do that in `finally`.
When Java introduced `finally` (I do not know if Java was the first language to have it, though it certainly must have been early) it was intended for just resource cleanup, and indeed, I imagine most uses of finally ever were just for closing files, one of the types of resources that you would want to be scoped like that.
However, in my experience the utility of `finally` has actually increased over time. Nowadays there's all kinds of random things you might want to do regardless of whether an exception is thrown. It's usually in the weeds a bit, like adjusting internal state to maintain consistency, but other times it is just handy to throw a log statement or something like that somewhere. Rather than break out a scope guard for these things, most of the time when I see this need arise in a C++ program, instead the logic is just duplicated both at the end of the `try` and `catch` blocks. I bet if I search long enough, I could find it in the wild on GitHub search.
You are still looking at it backwards. C++ chose to tie user-defined object lifetimes to lexical scopes (for automatic storage objects defined in that scope) via stack-based creation/deletion because it was built on C's abstract machine model. Thus the implicit function calls to ctor/dtor were necessitated which turned out to be a far more general mechanism usable for scope-based control via function calls.
But the lifetime of a user-defined object allocated on the heap is not limited to lexical scope and hence the connection between lexical scope and object lifetime does not exist. However the ctor/dtor are now synchronous with calls to new/delete.
So you have two things, viz. lexical scope and object lifetime, and they can be connected or not. This is why I insist on disambiguating both in one's mental model.
Java chose the heap-based object lifetime model for all user-defined types and thus there is no connection between lexical scope and object lifetimes. It is because of this that Java had to provide the finally block to provide some sort of lexical scope control even though it is GC-based. The Java object model is also the reason that finalize in Java is fundamentally different from the dtor in C++, which I had pointed out earlier.
> You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. Seems like it works to me, not particularly elegant to use.
For lexical scopes you don't need anything new in C++, you can just use RAII at different levels using various techniques. However, to make it even clearer, the upcoming C2Y standard does have proposals for syntactic sugar for defer (https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3489.pdf) and scope guards (https://github.com/bemanproject/scope/blob/main/papers/scope...).
We started this discussion with your claim that dtors and finalize are essentially the same, which I have refuted comprehensively.
Now you want to discuss finally and its behaviour w.r.t. exception handling. In the absence of exceptions, RAII gives you all of the finally-like behaviour.
In the presence of exceptions:
> C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, ... Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter.
This is again a misunderstanding. I had already pointed you to Termination vs. Resumption exception handling models with a particular emphasis on Meyer's contract-based approach to their usage. Now read Andrei Alexandrescu's classic old article Change the Way You Write Exception-Safe Code — Forever - https://erdani.org/publications/cuj-12-2000.php.html
Both C++ and Java use the Termination model but because the object model of C++ vs. Java is so very different (C++ has two types of object lifetimes viz. lexical scope for automatic and program scope for heap-based with no GC while Java only has program scope for heap-based reclaimed by GC) their implementation is necessarily different.
C++ does provide std::nested_exception and related API (https://en.cppreference.com/w/cpp/error/nested_exception.htm...) to handle chaining/handling of exceptions in any function. However, the ctor/dtor are special functions because of the behaviour of the object model detailed above. Thus the decision was made to not allow a dtor to throw while an uncaught exception is in flight. Note that this does not mean a dtor cannot throw (though it has been made implicitly noexcept since C++11), only that the programmer needs to take care about when to throw or not. An uncaught exception means there has been a violation of contract and hence the system is in an undefined state; there is no point in proceeding further.
This is where std::uncaught_exceptions comes in; the Stack Overflow article I linked to earlier quotes Herb Sutter:
A type that wants to know whether its destructor is being run to unwind this object can query uncaught_exceptions in its constructor and store the result, then query uncaught_exceptions again in its destructor; if the result is different, then this destructor is being invoked as part of stack unwinding due to a new exception that was thrown later than the object’s construction.
Now the dtor can catch the uncaught exception and do proper logging/processing before exiting cleanly.
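In code, that technique is roughly the following (a sketch; the class name is made up):

#include <exception>
#include <iostream>

// Sketch of the Herb Sutter technique quoted above: record the number of
// in-flight exceptions at construction, compare in the destructor, and
// thereby detect whether we are being destroyed due to stack unwinding.
class UnwindAware {
public:
    UnwindAware() : count_at_ctor_(std::uncaught_exceptions()) {}
    ~UnwindAware() {
        if (std::uncaught_exceptions() > count_at_ctor_) {
            // Being destroyed because a newer exception is unwinding the
            // stack: log/clean up here, but do not throw.
            std::cerr << "destroyed during unwinding\n";
        } else {
            // Normal scope exit; different behaviour is safe here.
        }
    }
private:
    int count_at_ctor_;
};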
Finally, note also that Java itself has introduced new constructs like try-with-resources which should be used instead of try-finally for resources etc.
using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception

Or, to avoid nesting:

using var foo = new Foo(); // same, but scoped to the closest enclosing scope

There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`).

I think Java has something similar called try-with-resources.
try (var foo = new Foo()) {
}
// foo.close() is called here.
I like the Java method for things like files because, if there's an exception during the close of a file, the regular `IOException` block handles that error the same as it handles a read or write error.

void bar() {
    try (var f = foo()) {
        doMoreHappyPath(f);
    }
    catch (IOException ex) {
        handleErrors();
    }
}

File foo() throws IOException {
    File f = openFile();
    doHappyPath(f);
    if (badThing) {
        throw new IOException("Bad thing");
    }
    return f;
}
That said, I think this is a bad practice (IMO). Generally speaking I think the opening and closing of a resource should happen at the same scope. Making it non-local is a recipe for an accident.
*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.
In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.
That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".
But you can make a few mistakes that can be hard to see. For example, if you put a mutex in an object you can accidentally hold it open for longer than you expect since you've now bound the life of the mutex to the life of the object you attached it to. Or you can hold a connection to a DB or a file open for longer than you expected by merely leaking out the file handle and not promptly closing it when you are finished with it.
Trying to keep resource open and close in the same scope is an ownership thing. Even for C++ or Rust, I'd consider it not great to leak out RAII resources from out of the scope that acquired them. When you spread that sort of ownership throughout the code it becomes hard to conceptualize what the state of a program would be at any given location.
The exception is memory.
The biggest essential differences between Rust and C++ are probably the borrow checker (sometimes nice, sometimes just annoying, IMO) and the lack of class inheritance hierarchies. But both are RAII languages which compile to native code with a minimal runtime, both have a heavy emphasis on generic programming through templates, both have a "C-style syntax" with braces which makes Rust feel relatively familiar despite its ML influence.
In addition, if the caller itself is a long-lived object it can remember the object and implement dispose itself by delegating. Then the user of the long-lived object can manage it.
That doesn't help. Not if the function that wants to return the disposable object in the happy path also wants to destroy the disposable object in the error path.
readonly record struct Result<TResult, TDisposable>(TResult? IfHappy, TDisposable? Disposable) : IDisposable where TDisposable : IDisposable
{
    public void Dispose() => Disposable?.Dispose();
}

using (var result = foo.GetSomethingIfLucky())
{
    if (result.IfHappy is {} success)
    {
        // do something
    }
}

The result is an exception tree that reflects the failures that occurred in the call tree following the first exception.
You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
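For example, a sketch of such a class (the name and the plain C stdio calls are illustrative, and error handling is kept minimal):

#include <cstdio>
#include <string>

// Wraps the whole "write to a temp file, rename into place, unlink on
// error" flow. If commit() is never reached (early return, exception),
// the destructor removes the temporary file.
class AtomicFileWriter {
public:
    explicit AtomicFileWriter(std::string target)
        : target_(std::move(target)), tmp_(target_ + ".tmp"),
          file_(std::fopen(tmp_.c_str(), "wb")) {}

    ~AtomicFileWriter() {
        if (file_) std::fclose(file_);
        if (!committed_) std::remove(tmp_.c_str()); // unlink on the error path
    }

    void write(const void* data, std::size_t n) { std::fwrite(data, 1, n, file_); }

    void commit() { // happy path: rename into place and keep the file
        std::fclose(file_);
        file_ = nullptr;
        std::rename(tmp_.c_str(), target_.c_str());
        committed_ = true;
    }

private:
    std::string target_, tmp_;
    std::FILE* file_ = nullptr;
    bool committed_ = false;
};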
You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
If you can't, it's not remotely "basically the same as C++ RAII".
To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.
[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...
Sure, destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
You need to read the article again because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is leave those exceptions uncaught, because as per the standard uncaught exceptions will lead to the application being terminated immediately.
It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".
Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.
Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.
https://dlang.org/articles/exception-safe.html
https://dlang.org/spec/statement.html#ScopeGuardStatement
Yes, D also has destructors.
In a function that inserts into 4 separate maps, and might fail between each insert, I'll add a scope exit after each insert with the corresponding erase.
Before returning on success, I'll dismiss all the scopes.
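Roughly this shape, with a hypothetical dismissable scope-exit helper (the names are mine, not from any particular library):

#include <map>
#include <string>
#include <utility>

// Hypothetical dismissable scope guard; real code would use a library
// equivalent (for example one of the Boost scope guard utilities).
template <class F>
class ScopeExit {
public:
    explicit ScopeExit(F f) : f_(std::move(f)) {}
    ~ScopeExit() { if (armed_) f_(); }
    void dismiss() { armed_ = false; }
private:
    F f_;
    bool armed_ = true;
};

void insertEverywhere(int key, const std::string& value,
                      std::map<int, std::string>& a,
                      std::map<int, std::string>& b) {
    a.emplace(key, value);
    ScopeExit undoA([&] { a.erase(key); }); // roll back if a later insert fails

    b.emplace(key, value);                  // may throw; undoA then erases from a
    ScopeExit undoB([&] { b.erase(key); });

    // ... two more maps in the real code ...

    // Success: keep every insert.
    undoA.dismiss();
    undoB.dismiss();
}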
I suppose the tradeoff vs RAII in the mutex example is that with the guard you still need to actually call it every time you lock a mutex, so you can still forget it and end up with the unreleased mutex, whereas with RAII that is not possible.
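In other words (a sketch; the scope_exit line is shown as a comment because the helper itself is hypothetical):

#include <mutex>

std::mutex m;

void deferStyle() {
    m.lock();
    // scope_exit guard([&] { m.unlock(); });
    // ^ a separate statement you can forget, leaving m locked on every exit path
}

void raiiStyle() {
    std::lock_guard<std::mutex> lock(m);
    // locking and the eventual unlock are a single statement,
    // so there is no second step to forget
}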
how old is this post that 3.2 is "now"?
In Java the following is perfectly valid:
try { throw new IllegalStateException("Critical error"); } finally { return "Move along, nothing to see here"; }
The existence of two different patterns, each with their own pitfalls, is why we can’t have nice things. Finally shouldn’t return a value. Simply a void expression. Exception-driven APIs need to be snuffed out.
If your method throws, mark it as such and force me to handle the exception if it does; do not return a non-value value in a finally.
Using Java as the example shows just how far we have come with this thinking, why old school Java style exception handling sucks and why C++ by proxy does too.
It’s difficult to break old mental habits but it’s easier when the compiler yells at you for doing bad things.
try { throw new IllegalStateException("Critical error"); } catch(Exception) { return "Move along, nothing to see here"; }
In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure - because the callback may well change the data structure being iterated (classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
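A minimal sketch of that hazard (and one common fix) in C++, with made-up names:

#include <functional>
#include <vector>

// Iterating the live handler list breaks if a handler unsubscribes
// (erases itself) during the loop: the iteration is invalidated.
// A common fix is to iterate over a snapshot taken before dispatch.
struct EventSource {
    std::vector<std::function<void()>> handlers;

    void fireUnsafe() {
        for (auto& h : handlers) h();   // undefined behaviour if h() mutates handlers
    }

    void fireSafe() {
        auto snapshot = handlers;       // copy first, then call
        for (auto& h : snapshot) h();
    }
};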
What exactly are you referring to?
Pet peeve of mine: all these languages got it wrong. (And C++ got it extra-wrong.)
The error you want to log or report to the user is almost certainly the original exception, not the one from the finally block. The error from the finally block is probably a side effect of the original exception. Reporting the finally exception obscures information about the root cause, making it harder to debug the problem.
Many of these languages do attach the original exception to the new exception in some way, so you can get at it if you need to, but whatever actually catches and logs the exception later has to go out of its way to make sure to log the root cause rather than some stupid side effect. The hierarchy should be reversed: the exception thrown by `finally` should be added as an attachment to the original exception, perhaps placed in a list of "secondary" errors. Or you could even just throw it away, honestly the original exception is almost always all you care about anyway.
(C++ of course did much worse by just crashing in this scenario. I imagine this to be the outcome of some debate in the committee where they couldn't decide which exception should take priority. And now everyone has internalized this terrible decision by saying "well, destructors shouldn't throw" without seeming to understand that this is equivalent to saying "destructors shouldn't have bugs". WELL OF COURSE THEY SHOULDN'T BUT GOOD LUCK WITH THAT.)
The traceback is actually shown based on the last-thrown exception (that thrown from the finally in this example), but includes the previous "chained exceptions" and prints them first. From CPython docs [1]:
> When raising a new exception while another exception is already being handled, the new exception’s __context__ attribute is automatically set to the handled exception. An exception may be handled when an except or finally clause, or a with statement, is used. [...] The default traceback display code shows these chained exceptions in addition to the traceback for the exception itself. [...] In either case, the exception itself is always shown after any chained exceptions so that the final line of the traceback always shows the last exception that was raised.
So, in practice, you will see both tracebacks. However, if you, say, just catch the exception with a generic "except Exception" or whatever and log it without "__context__", you will miss the firstly thrown exception.
[1]: https://docs.python.org/3.14/library/exceptions.html#excepti...
jasode•1mo ago
It's a snowclone based on the meme, "Mom, can we get <X>? No, we have <X> at home." : https://www.google.com/search?q=%22we+have+x+at+home%22+meme
In other words, Raymond is saying... "We already have Java feature of 'finally' at home in the C++ refrigerator and it's called 'destructor'"
To continue the meme analogy, the kid's idea of <X> doesn't match mom's idea of <X> and disagrees that they're equivalent. E.g. "Mom, can we order pizza? No, we have leftover casserole in the fridge."
So some kids would complain that C++'s destructor/RAII philosophy requires creating a whole "class X{public:~X()}", which is sometimes inconvenient, so it doesn't exactly equal "finally".
mort96•1mo ago
As it stands, the HN title suggests that Raymond thinks the C++ 'try' keyword is a poor imitation of some other language's 'try'. In reality, the post is about a way to mimic Java's 'finally' in C++, which the original title clearly (if humorously) encapsulates. Raymond's words have been misrepresented here for over 4 hours at this point. I do not understand how this is an acceptable trade-off.
UncleMeat•1mo ago
Relying on somebody to detect the error, email the mods (significant friction), and then hope the mods act (after discussion has already been skewed) is not really a great solution.
rramadass•1mo ago
Anyway, going forward, if anything like this happens again folks should simply shoot an email immediately to the mods, and if the topic is really interesting and deserving of more discussion they can always request the mods to keep the post on the frontpage for a longer time period via the second-chance pool etc.
It just takes a minute or two of one's time and hence not worth getting het up over.
mort96•1mo ago
Again, this post was misrepresenting Raymond's words for over 7 hours. That's most of its time on the front page. The current system doesn't work.
rramadass•1mo ago
This is the first time I have seen the auto-editorializing algorithm make a mess of the semantic meaning of a sentence, which is certainly unfortunate. In most other cases (which are quite rare btw) it is generally much more benign. I presume the mods will be taking another look at their algorithm.
However, given the ways people try to influence the content on HN via title, language, brigading etc. it is good that the algorithm be strict rather than loose to prevent casual gaming of the system. And it works quite well, contrary to your claim.
pelorat•1mo ago
Edit: A deep research run by Gemini 3.0 Pro says the origin is likely to be stand-up comedy routines between 1983–1987, and particularly mentions Eddie Murphy and the 1983 socioeconomic precursor "You ain't got no McDonald's money" in Delirious (1983), culminating in the meme in Raw (1987). So Eddie might very well be the original origin.
locknitpicker•1mo ago
Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning because if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.
It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.
locknitpicker•1mo ago
I don't think you understand.
If you need to run cleanup code whenever you need to destroy a resource, there is already a special member function designed to handle that: the destructor. Read up on RAII.
If somehow you failed to understand RAII and basic resource management, you can still use one-liners. Read up on scope guards.
If you are too lazy to learn about RAII and too lazy to implement a basic scope guard, you can use one of the many scope guard implementations around. Even Boost has those.
https://www.boost.org/doc/libs/latest/libs/scope/doc/html/sc...
So, unless you are lazy and want to keep mindlessly writing Java in ${LANGUAGE} regardless of whether it makes sense or not, there is absolutely no reason at all to use finally in C++.
AnimalMuppet•1mo ago
Take a file handle, for instance. Don't use open() or fopen() and then try to close it in a finally. Instead, use a file class and let it close itself by going out of scope.
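A minimal sketch of that kind of file class (error handling kept basic; the class is illustrative, not a specific library type):

#include <cstdio>
#include <stdexcept>
#include <string>

// The handle is closed by the destructor on every exit path,
// so there is nothing left to put in a finally block.
class File {
public:
    File(const std::string& path, const char* mode)
        : f_(std::fopen(path.c_str(), mode)) {
        if (!f_) throw std::runtime_error("cannot open " + path);
    }
    ~File() { if (f_) std::fclose(f_); }

    File(const File&) = delete;            // one owner per handle
    File& operator=(const File&) = delete;

    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};

void useFile() {
    File f("data.txt", "r");
    // ... read via f.get(); an exception or early return still closes the file
}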