I just tried a few PDFs in Chromium and PDFium seems to be much better than pdf.js - faster and handles forms more smoothly.
Sumatra excels at read-only use. Usually anything to do with PDF is synonymous with slow, bloated, buggy, but Sumatra, at just 10 MB, manages to feel snappy and fast, like a native Win32 UI.
I would argue that a PDF reader is much simpler than many very popular webpages nowadays.
Many apps do exist on the web. Not many of them are very good; PDF is a case in point - the web struggles to fully implement PDF read/write (partly a complexity thing, partly, I'm sure, a tactical non-compete with Adobe).
Even more sites still exist that aren't apps, even if a few big sites have stolen people's imaginations...
So two reasons why you don't really want your PDF in the browser (I don't mind much if I'm never going to look at it again but otherwise no I don't want it in my browser).
Even if the first two weren't true, there's still just the fact that, no, PDF is local, web is not. I don't need an internet connection for one, and I don't need to worry about one messing with the other. Sounds like a strange thing to worry about, but browsers do crash, and more importantly browsers are often filled with tabs. You can have many PDFs open but you don't need to keep them all in memory...
So one can expect zero days exist and are exploited.
That may not be a feature for you, but it is for attackers.
PDF was originally a display-only format.
In fact you have it backwards. The obviously dynamic features in PDF, like JavaScript, are designed to be dynamic, so they receive much more security attention. So smart attackers attack the not-obviously-dynamic features in PDF.
For example, it doesn't support JavaScript. And it doesn't support GoToE.
The text features, both strings and fonts, get sent through HarfBuzz for sanitisation.
How is it not sandboxed?
OK. Now load 100 PDFs. You will need a dedicated PDF reader unless you don't mind wasting a truckload of RAM. Also, browser PDF readers are generally slower and are not optimal at search/bookmarks/navigation/etc.
For me, having a separate dedicated app isn't worth it for the benefits you mention, which to me are minor compared to having to install and manage another thing (though, to be fair, I imagine Sumatra is a very pleasant citizen compared to Acrobat).
Documentation particularly is a big one. Then there are books, etc.
Some web pages can still manage to be leaner than a PDF, but as another poster pointed out, a lot of modern web is pretty trash...
Sumatra has no issues with 200mb+ PDFs, or ones with complex drawings.
These are all engineering drawings such as mechanical, electrical, and architectural drawings, so mine might not be a use case everyone has.
I also have opened Lenovo Thinkpad manuals, hahaha. For those who haven't, it's quite amazing, really: the SVG images are exports of the full CAD model of the laptop without any culling of elements that aren't visible. And I know this because I used to watch each individual screw be rendered slowly, one after the other, whenever pdf.js tried to render one of those bad boys.
Two, my main objective is extreme simplicity and understandability of the code.
I explicitly gave up features of std::function for smaller code that I actually understand.
fu2 seems to be "std::function but more features".
Today, if I was starting from scratch, I would try zig or odin or maybe even Go.
But SumatraPDF started 15 years ago. There was nothing but C++. And a much different C++ than the C++ of today.
Plus, while my own code is over 100k lines, external C / C++ libraries are a multiple of that, so (easy) integration with C / C++ code is a must.
I hear you about "back in my day," but as best I can tell it's just your project (that is, not a whole team of 50 engineers who have to collaborate on the codebase), so you are the audience being done a disservice by continuing to battle a language that hates you.
As for the interop: yes, since the 70s any language that can't call into a C library is probably DOA, but that list isn't the empty set, as you pointed out with the ones you've actually considered. I'd even suspect that if you tried Go it might bring SumatraPDF to other platforms, which would be another huge benefit to your users.
Probably by using a cross-platform toolkit written in C++.
But you didn't respond: which language should I use?
Go didn't exist when I started SumatraPDF.
And while I write pretty much everything else in Go and love the productivity, it wouldn't be a good fit.
A big reason people like Sumatra is that it's fast and small: 10 MB (the majority of which is fonts embedded in the binary), not the 100 MB+ of other apps.
Go's "hello world" is 10 MB.
Plus abysmal (on Windows) interop with C / C++ code.
And the reason SumatraPDF is unportable to mac / linux is not the language but the fact that I use all the Windows API I can for the UI.
Any cross-platform UI solution pretty much requires either using tens of megabytes of someone else's reimplementation of all the UI widgets (Qt, GTK, Flutter) or re-implementing a smaller subset of UI using less code.
It can even do comments. But I would like to see more comment tools, especially measurement tools.
And since you are using Windows, do you think it would be worthwhile to add Windows OCR?
I think you should learn basic modern C++ before making judgements like this.
While you are upset about it, the rest of the world is just using basic data structures, looping through them, never dealing with garbage collection, and enjoying fast and light software that can run on any hardware.
A lot of people could get by with $100 computers if all software was written like SumatraPDF.
std::pair<void(*)(FuncData*), std::unique_ptr<FuncData>>
at this stage? This implementation has a bunch of performance and ergonomics issues due to things like not using perfect forwarding for the Func1::Call(T) method, so for anything requiring copying or allocating it'll be a decent bit slower, and you'll also be unable to pass anything that's noncopyable, like a std::unique_ptr.

But I do know the code I write, and you're wrong about the performance of Func0 and Func1. Those are 2 machine words and all it takes to construct or copy them is to set those 2 fields.
There's just no way to make it faster than that, both at runtime or at compile time.
The whole point of this implementation was giving up fancy features of std::function in exchange for code that is small, fast (both runtime and at compilation time) and one that I 100% understand in a way I'll never understand std::function.
void Call(T arg) const {
if (fn) {
fn(userData, arg);
}
}
Say you pass something like a std::vector<double> of size 1 million into Call. It'll first copy the std::vector<double> at the point you invoke Call, even if you never call fn. Then, if fn is not nullptr, you'll copy the same vector once more to invoke fn. If you change Call instead to

void Call(T&& arg) const {
if (fn) {
fn(userData, std::forward<T>(arg));
}
}
the copy will not happen at the point Call is invoked. Additionally, if arg is an rvalue, fn will be called by moving instead of copying. Makes a big difference for something like

std::vector<double> foo();
void bar(Func1<std::vector<double>> f) {
auto v = foo();
f(std::move(v));
}

You also have to heap allocate your userData, which is something std::function<> avoids (in all standard implementations) if it's small enough (this is why the sizeof() of std::function is larger than 16 bytes: so that it can optionally store the data inline, similar to the small string optimization). The cost of that heap allocation is not insignificant.
If I were doing this, I might just go the full C route and use function pointers and an extra "userData" argument. This seems like an awkward "middle ground" between C and C++.
class OnListItemSelected {
OnListItemSelectedData data;
void operator()(int selectedIndex) { ... }
};
Perhaps I'm mistaken in what the author is trying to accomplish, though?

The one thing the author's solution does which this solution (and lambdas) does not is type erasure: if you want to pass that closure around, you have to use templates, and you can't store different lambdas in the same data structure even if they have the same signature.
You could solve that in your case by making `void operator()` virtual and inheriting (though that means you have to heap-allocate all your lambdas), or use `std::function<>`, which is a generic solution to this problem (which may or may not allocate, if the lambda is small enough, it's usually optimized to be stored inline).
I get where the author is coming from, but this seems very much like an inferior solution to just using `std::function<>`.
I think whether or not you have to allocate from the heap depends on the lifetime of the lambda. Virtual methods also work just fine on stack-allocated objects.
But yes, fair point: they can be stack or statically allocated as well.
> OnListItemSelectedData data;
In this case you can just store the data as member variables. No need for defining an extra class just for the data.
As I've written elsewhere, you can also just use a lambda and forward the captures and arguments to a (member) function. Or if you're old-school, use std::bind.
There are numerous differences between my Func0 and Func1<T> and std::function<>.
Runtime size, runtime performance, compilation speed, understandability of the code, size of the source code, size of the generated code, ergonomics of use.
My solution wins on everything except ergonomics of use.
LLVM has a small vector class.
When asked for comment, pjmlp said: "Another example of NIH, better served by using the standard library".
Secondly, there's the maintainability burden of duplicating standard library code without having the same resources as the compiler vendor.
It is your product, naturally you don't have to listen to folks like myself.
I'm surprised by how many people on HN are yelling at the author to code as if he's working at a company like Adobe, when objectively Adobe's PDF reader is dogshit (especially performance wise) for most people and is probably built on best practices like using standard libraries.
In danger of pointing out the obvious: std::function does not require lambdas. In fact, it existed long before lambdas were introduced. If you want to avoid lambdas, just use std::bind to bind arguments to regular member functions or free functions. Or pass a lambda that just forwards the captures and arguments to the actual (member) function. There is no reason to regress to C-style callback functions with user data.
A simple example: if you were to bind a function pointer in one stack frame, and then immediately return it to the parent stack frame, which then invokes that bound pointer, the stack frame that bound the now-called function would literally not exist anymore.
There are 2 aspects to this: programmer ergonomics and other (size of code, speed of code, compilation speed, understandability).
Lambdas with variable capture converted to std::function have best ergonomics but at the cost of unnamed, compiler-generated functions that make crash reports hard to read.
My Func0 and Func1<T> approach has similar ergonomics to std::bind. Neither has the problem of potentially crashing in an unnamed function, but Func0/Func1<T> are better at the rest (smaller code, faster code, faster compilation).
It's about tradeoffs. I loved the ergonomics of callbacks in C#, but working within the limitations of C++ I'm trying to find solutions with the attributes important to me.
I would really question your assumptions about code size, memory usage and runtime performance. See my other comments.
Both std::function<> and lambdas were introduced in C++11.
Furthermore absolutely no one should use std::bind, it's an absolute abomination.
> Furthermore absolutely no one should use std::bind, it's an absolute abomination.
Agree 100%! I almost always use a wrapper lambda.
However, it's worth pointing out that C++20 gave us std::bind_front(), which is really useful if you want to just bind the first N arguments:
struct Foo {
void bar(int a, int b, int c);
};
Foo foo;
using Callback = std::function<void(int, int, int)>;
// with std::bind (ugh):
using namespace std::placeholders;
Callback cb1(std::bind(&Foo::bar, &foo, _1, _2, _3));
// lambda (without perfect forwarding):
Callback cb2([&foo](auto&&... args) { foo.bar(args...); });
// lambda (with perfect forwarding):
Callback cb3([&foo](auto&&... args) { foo.bar(std::forward<decltype(args)>(args)...); });
// std::bind_front
Callback cb4(std::bind_front(&Foo::bar, &foo));
I think std::bind_front() is the clear winner here.

https://github.com/TartanLlama/function_ref/blob/master/incl...
I can't even read it.
That's the fundamental problem with C++: I've understood pretty much all Go code I ever looked at.
Code like the above is so obtuse that 0.001% of C++ programmers are capable of writing it and 0.01% are capable of understanding it.
Sure, I can treat it as magic but I would rather not.
Why do you even care how std::function is implemented? (Unless you are working in very performance critical or otherwise restricted environments.)
- better call stacks in crash reports
- smaller and faster at runtime
- faster compilation because less complicated, less templated code
- I understand it
So there's more to it than just that one point.

Did I lose useful attributes? Yes. There's no free lunch.
Am I going too far to achieve small, fast code that compiles quickly? Maybe I am.
My code, my rules, my joy.
But philosophically, if you ever wonder why most software today can't start up instantly and ships 100 MB of stuff to show a window: it's because most programmers don't put any thought or effort into keeping things small and fast.
BTW, I would also contest that your version is faster at runtime. Your data is always allocated on the heap. Depending on the size of the data, std::function can utilize small function optimization and store everything in place. This means there is no allocation when setting the callback and also better cache locality when calling it. Don't make performance claims without benchmarking!
Similarly, the smaller memory footprint is not as clear cut: with small function optimization there might be hardly a difference. In some cases, std::function might even be smaller. (Don't forget about memory allocation overhead!)
The only point I will absolutely give you is compilation times. But even there I'm not sure if std::function is your bottleneck. Have you actually measured?
All others use a pointer to an object that exists anyway. For example, I have a class MyWindow with a button. A click callback would have MyWindow* as an argument because that's the data needed to perform that action. That's the case for all UI widgets and they are majority uses of callbacks.
I could try to get cheeky and implement a similar optimization as a Func0Fat, where I would have an inline buffer of N bytes and use it as backing storage for the struct. But see above for why it's not needed.
As to benchmarking: while I don't disagree that benchmarking is useful, it's not the ace card argument you think it is.
I didn't do any benchmarks and I do not plan to.
Because benchmarking takes time, which I could use writing features.
And because I know things.
I know things because I've been programming, learning, benchmarking for 30 years.
I know that using 16 bytes instead of 64 bytes is faster. And I know that likely it won't be captured by a microbenchmark.
And even if it was, the difference would be miniscule.
So you would say "pfft, I told you it was not worth it for a few nanoseconds".
But I know that if I do many optimizations like that, it'll add up even if each individual optimization seems not worth it.
And that's why SumatraPDF can do PDF, ePub, mobi, cbz/cbr and uses fewer resources than Windows' Start menu.
> I just looked and out of 35 uses of MkFunc0 only about 3 (related to running a thread) allocate the args.
In that case, std::function wouldn't allocate either.
> All others use a pointer to an object that exists anyway. For example, I have a class MyWindow with a button. A click callback would have MyWindow* as an argument because that's the data needed to perform that action. That's the case for all UI widgets and they are majority uses of callbacks.
That's what I would have guessed. Either way, I would just use std::bind or a little lambda:
struct MyWindow { void onButtonClicked(); };
// old-school: std::bind
setCallback(std::bind(&MyWindow::onButtonClicked, window));
// modern: a simple lambda
setCallback([window]() { window->onButtonClicked(); });
If your app crashes in MyWindow::onButtonClicked, that method would be at the top of the stack trace. IIUC this was your original concern. Most of your other points are just speculation. (The compile time argument technically holds, but I'm not sure to what extent it really shows in practice. Again, I would need some numbers.)

> I know things because I've been programming, learning, benchmarking for 30 years.
Thinking that one "knows things" is dangerous. Things change and what we once learned might have become outdated or even wrong.
> I know that using 16 bytes instead of 64 bytes is faster. And I know that likely it won't be captured by a microbenchmark.
Well, not necessarily. If you don't allocate any capture data, then your solution will win. Otherwise it might actually perform worse. In your blog post, you just claimed that your solution is faster overall, without providing any evidence.
Side note: I'm a bit surprised that std::function takes up 64 bytes in 64-bit MSVC, but I can confirm that it's true! With 64-bit GCC and Clang it's 32 bytes, which I find more reasonable.
> And even if it was, the difference would be miniscule.
That's what I would think as well. Personally, I wouldn't even bother with the performance of a callback function wrapper in a UI application. It just won't make a difference.
> But I know that if I do many optimizations like that, it'll add up even if each individual optimization seems not worth it.
Amdahl's law still holds. You need to optimize the parts that actually matter. It doesn't mean you should be careless, but we need to keep things in perspective. (I would care if this was called hundreds or thousands of times within a few milliseconds, like in a realtime audio application, but this is not the case here.)
To be fair, in your blog post you do concede that std::function has overall better ergonomics, but I still think you are vastly overselling the upsides of your solution.
C++ takes this to another level, though. I'm not an expert Go or Rust programmer, but it's much easier to understand the code in their standard libraries than in C++'s.
The main things you would need to understand are specialization (think pattern matching, but at compile time) and pack expansion (the three dots).
The problem with that is that for every type of callback you need to create a base class and then create a derived function for every unique use.
That's a lot of classes to write.
Consider this (from memory so please ignore syntax errors, if any):
class ThreadBase {
virtual void Run();
// ...
};
class MyThread : ThreadBase {
MyData* myData;
void Run() override;
// ...
};
StartThread(new MyThread());
compared to:

HANDLE StartThread(const Func0&, const char* threadName = nullptr);
auto fn = MkFunc0(InstallerThread, &gCliNew);
StartThread(fn, "InstallerThread");
I would have to create a base class for every unique type of the callback and then, for every caller, possibly a new class deriving from it.

This is replaced by Func0 or Func1<T>. No new classes, much less typing. And less typing is better programming ergonomics.
std::function arguably has slightly better ergonomics but higher cost on 3 dimensions (runtime, compilation time, understandability).
In retrospect Func0 and Func1 seem trivial but it took me years of trying other approaches to arrive at insight needed to create them.
An interface declaration is, like, two lines. And a single receiver can implement multiple interfaces. In exchange, the debugger gets a lot more useful. Plus it ensures the lifetime of the "callback" and the "context" are tightly-coupled, so you don't have to worry about intersecting use-after-frees.
template<class R, class... Args>
struct FnBase {
virtual R operator()(Args...) = 0;
};
class MyThread : FnBase<void> { ... };

I haven't used Windows in a long time, but back in the day I remember installing SumatraPDF on my Pentium 3 system running Windows XP, and that shit rocked.
Smaller size at runtime (uses less memory).
Smaller generated code.
Faster at runtime.
Faster compilation times.
Smaller implementation.
Implementation that you can understand.
How is it worse?
std::function + lambda with variable capture has better ergonomics i.e. less typing.
Also, I copy-pasted the code from the post and I got this:

test.cpp:70:14: error: assigning to 'void *' from 'func0Ptr' (aka 'void (*)(void *)') converts between void pointer and function pointer
   70 | res.fn = (func0Ptr)fn;
This warning is stupid. It's part of the "we reserve the right to change the size of function pointers some day so that we can haz closures, so you can't assume that function pointers and data pointers are the same size m'kay?" silliness. And it is silly: because the C and C++ committees will never be able to change the size of function pointers, not backwards-compatibly. It's not that I don't wish they could. It's that they can't.
I also believe there are platforms where a function pointer and a data pointer are not the same but idk about such esoteric platforms first hand (seems Itanium had that: https://stackoverflow.com/questions/36645660/why-cant-i-cast...)
Though my point was only that this code will not compile as-is with whatever clang Apple ships.
I am not really sure how to get it to compile tbqh
Some further research ( https://www.kdab.com/how-to-cast-a-function-pointer-to-a-voi...) suggest it should be done like so:
> auto fptr = &f; void *a = reinterpret_cast<void *&>(fptr);
edit: I tried with GCC 15 and that compiled successfully
res.fn = (void *)fn;
`res.fn` is of type `void *`, so that's what the code should be casting to. Casting to `func0Ptr` there seems to just be a mistake. Some compilers may allow the resulting function pointer to then implicitly convert to `void *`, but it's not valid in standard C++, hence the error.

Separately from that, if you enable -Wpedantic, you can get a warning for conversions between function and data pointers even if they do use an explicit cast, but that's not the default.
Note that conversion from a void * pointer to a function
pointer as in:
fptr = (int (*)(int))dlsym(handle, "my_function");
is not defined by the ISO C standard. This standard
requires this conversion to work correctly on conforming
implementations.

It works in MSVC but, as someone pointed out, it was a typo and was meant to be a (void*) cast.
What I'm trying to say is: being better than x means you can do all the same things as x, better. Your thing is not better, it is just different.
My unanswered question on this from 8 years ago:
https://stackoverflow.com/questions/41385439/named-c-lambdas...
If there was a way to name lambdas for debug purposes then all other downsides would be irrelevant (for most usual use cases of using callbacks).
Instead of fully avoiding lambdas, you can use inheritance to give them a name: https://godbolt.org/z/YTMo6ed8T
Sadly that'll only work for captureless lambdas, however.
How much smaller is it? Does it reduce the binary size and RAM usage by just 100 bytes?
Is it actually faster?
How much faster does it compile? 2ms faster?
I wrote it to share my implementation and my experience with it.
SumatraPDF compiles fast (relative to other C++ software) and is smaller, faster and uses fewer resources than other software.
Is it because I wrote Func0 and Func1 to replace std::function? No.
Is it because I made hundreds of decisions like that? Yes.
You're not wrong that performance wins are miniscule.
What you don't understand is that eternal vigilance is the price of liberty. And small, fast software.
Proof needed. Perhaps your overall program is designed to be fast and avoid silly bottlenecks, and these "hundred decisions" didn't really matter at all.
Silly bottlenecks are half of the perf story in my experience. The other half are a billion tiny details.
For example, C++ can shoehorn you into a style of programming where 50% of time is spent in allocations and deallocations even if your code is otherwise optimal.
The only way to get that back is not to use STL containers in "typical patterns" but to write your own containers, up to a point.
If you didn't do that, you'd see in the profiler that heap operations take 50% of the time, but there is no obvious hotspot.
Yours is smaller (in terms of sizeof) because std::function employs small-buffer optimization (SBO): if the user data fits into a specific size, it's stored inline in the std::function instead of getting heap allocated. Yours needs heap allocation for the ones that take data.
Whether yours win or lose on using less memory heavily depends on your typical closure sizes.
> Faster at runtime
Benchmark, please.
In this instance the maintainer of a useful piece of software has made a choice that's a little less common in C++ (totally standard practice in C) and it seems fine; it's on the bubble, I'd probably default the other way, but std::function is complex and there are platforms where that kind of machine economy is a real consideration, so why not?
In a zillion contributor project I'd be a little more skeptical of the call, but even on massive projects like the Linux kernel they make decisions about the house style that seem unorthodox to outsiders and they have their reasons for doing so. I misplaced the link but a kernel maintainer raised grep-friendliness as a reason he didn't want a patch. At first I was like, nah you're not saying the real reason, but I looked a little and indeed, the new stuff would be harder to navigate without a super well-configured LSP.
Longtime maintainers have reasons they do things a certain way, and the real test is the result. In this instance (and in most) I think the maintainer seems to know what's best for their project.
Making changes like this, claiming they will result in faster or smaller code, without any test or comparison before vs. after, seems not the best way of engineering something.
I think this is why the thread has seen a lot of push back overall
Maybe the claims are true or maybe they are not - we cannot really say based on the article (though I’m guessing not really)
And I think it is less than ideal, as concerns the fragile and nascent revival of mainstream C++, to have this sort of gang tackle over a nitpick like this. The approach is clearly fine because it's how most every C program works.
The memes of C++ as too hard for the typical programmer and C++ programmers as pedantic know-it-all types are mostly undeserved, but threads like this I think reinforce those negative stereotypes.
The real S-Tier C++ people who are leading the charge on getting C++ back in the mindshare game (~ Herb Sutter's crew) are actively fighting both memes, and I think it behooves all of us who want the ecosystem to thrive to follow their lead.
The danger of C++ becoming unimportant in the next five or ten years is zero, C and C++ are what the world runs on in important ways.
But in 20? 30? The top people are working with an urgency I haven't seen in decades and the work speaks for itself: 23 and 26 are coming together "chef's kiss" as Opus would say.
The world is a richer place with Rust and Zig in it, but it would be a poorer place with C++ gone, and that's been the long term trend until very recently.
If the post was about rust and it was using unsafe code and casting function pointers then everyone would quickly jump to try and correct it all the same
> One thing you need to know about me is that despite working on SumatraPDF C++ code base for 16 years, I don’t know 80% of C++.
I'm pretty sure that most "why don't you just use x…" questions are implicitly answered by it, with the answer being "because using x correctly requires learning about all of its intricacies and edge-cases, which in turn requires understanding related features q, r, s… all the way to z, because C++ edge-case complexity doesn't exist in a vacuum".
> Even I can’t answer every question about C++ without reference to supporting material (e.g. my own books, online documentation, or the standard). I’m sure that if I tried to keep all of that information in my head, I’d become a worse programmer.
-- Bjarne Stroustrup, creator of C++
But he can also contradict himself sometimes in this regard, because he also often uses a variation of calling C++ a language for "people who know what they are doing" as a sort of catch-all dismissal of critiques of its footguns.
The whole problem is that very few people can claim to truly "know what they are doing" when it comes to all of C++'s features and how they interconnect; dismissing that by (implicitly) telling people to just "git gud" is missing the point a bit.
But again, he's only human and I do get the urge to get a bit defensive of your baby.
> easy things should be easy; hard things should be possible
From many years ago, this was a Perl motto from Larry Wall. Is he the original pontificator... or was it someone before him?

I think Herb Sutter is at least trying to find that elegant language, with his "syntax v2" project. It's one way to preserve compatibility with the incalculable amount of C++ in the wild, while also providing a simplified syntax with better defaults and fewer foot-guns.
Of course, Herb isn't immune to making hand-wavy claims[0] of his own, but he seems to bring forward more good ideas than bad.
[0] https://herbsutter.com/2025/03/30/crate-training-tiamat-un-c...
> elegant language
Are there any languages that would qualify and are in common use in industry? I'm not interested in some PL fantasy language that no one outside of academia uses. JavaScript, Python, Ruby, PHP (see: double-ended hammer!), C, C++, C#, Java, Go, Rust all have numerous warts, but are heavily used in industry. None of them are particularly elegant in 2025. In the modern era, it is impossible to separate the language itself from the standard library. Even if the language is elegant, I am sure the standard library will have all kinds of awful warts (probably date/time) -- or vice versa.

The other thing is that not all "language warts" are equal. Few would claim the severity of the footguns is equally bad among the languages you listed.
More importantly, I think Bjarne's comment was more about C++ being hindered by its commitment to backwards compatibility and mistakes in previous design decisions getting in the way of making new designs elegant to implement. Unless you come up with a completely new syntax (Herb Sutter's cpp2) or a way to locally break backwards compatibility (Sean Baxter's Circle) C++ has forced itself into a corner.
https://gcc.godbolt.org/z/EaPqKfvne
You could get around this by using a wrapper function, at the cost of a slightly different interface:
template <typename T, void (*fn)(T *)>
void wrapper(void *d) {
fn((T *)d);
}
template <typename T, void (*fn)(T *)>
Func0 MkFunc0(T* d) {
auto res = Func0{};
res.fn = (void *)wrapper<T, fn>;
res.userData = (void*)d;
return res;
}
...
Func0 x = MkFunc0<int, my_func>(nullptr);
(This approach also requires explicitly writing the argument type. It's possible to remove the need for this, but not without the kind of complexity you're trying to avoid.)

"I don't understand it, so surely it must be very difficult and probably nobody understands it"
https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-...
I'm with OP here.
I think that has the same benefit as this, that the callbacks are all very clearly named and therefore easy to pick out of a stack trace.
(In fact, it seems like a missed opportunity that modern Java lambdas, which are simply syntactical sugar around the same single-method interface, do not seem to use the interface name in the autogenerated class)
How does that work with variables in the closure then? I can see how that works with the autogenerated class: just make a class field for every variable referenced inside the lambda function body, and assign those in the constructor. Pretty similar to this here article. But it's not immediately obvious to me how private static methods can be used to do the same, except for callbacks that do not form a closure (e.g. filter predicates and sort comparators that only use the function parameters).
Also, sad to see people still using new. C++11 was 14 years ago, for crying out loud...
in my day code + data was called a class :)
(yeah, yeah, I know closure and class may be viewed as the same thing, and I know the Qc Na koan)
In old Java, it really was a class. In new Java, I'm not 100% sure anymore, but with verbose syntax it'll be a class. I made it as verbose as possible:
Function<Integer, Integer> adder(int x) {
    class Adder implements Function<Integer, Integer> {
        int x1;
        @Override
        public Integer apply(Integer y) {
            return x1 + y;
        }
    }
    Adder adder = new Adder();
    adder.x1 = x;
    return adder;
}
and a bit less verbose with modern Java:

Function<Integer, Integer> adder(int x) {
    return (y) -> x + y;
}
So a closure could definitely be considered a very simple class.

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...
These are the kind of developers (maybe brilliant, maybe great, maybe both, surely more so than myself) I don't like to work with.
You are not required to understand the implementation: you are only required to fully understand the contract. I hate those colleagues who waste my time during reviews because they need to delve deeply into properly-named functions before coming back to the subject at hand.
Implementations are organized at different logical levels for a reason. If you are not able to reason at a fixed level, I won't like working with you (and I understand you won't like working with me).
I think avoiding features you don't understand the implementation of makes a lot of sense in those kinds of situations.
The hidden assumption in your comment is that the contract is implemented perfectly and that the abstraction isn't leaky. This isn't always the case. The author explained a concrete way in which the std::function abstraction leaks:
> They get non-descriptive, auto-generated names. When I look at call stack of a crash I can’t map the auto-generated closure name to a function in my code. It makes it harder to read crash reports.
But that's not an issue with std::function at all! His comment is really about lambdas and I don't understand why he conflates these two. Just don't put the actual code in a lambda:
std::function<void()> cb([widget]() { widget->onButtonClicked(); });
Here the code is in a method and the lambda is only used to bind an object to the method. If you are old-school, you can also do this with std::bind:

std::function<void()> cb(std::bind(&Widget::onButtonClicked, widget));

AFAIK MSVC also changed their lambda ABI once, including mangling. As I recall, at one point it even produced some hash in the decorated/mangled name, with no way to revert it, but that was before /Zc:lambda (enabled by default from C++20).
Well, HN lazyweb, how do you override the stupid name in C++? In other languages this is possible:
$ node --trace-uncaught -e 'const c = function have_name() {throw null}; c()'
$ perl -d:Confess -MSub::Util=set_subname -E 'my $c = sub() {die}; set_subname have_name => $c; $c->()'

Why place undefined-behavior traps like that in your code?
From what I know about C this code probably breaks on platforms that nobody uses.
Thanks for Sumatra, by the way :D Very useful software!
Your intuition is correct. Even if never dereferenced, casting a compile-time constant other than 0 to a pointer type in that way is implementation-defined, might not be correctly aligned, might not point to an entity of the referenced type, and might be a trap representation. [0] If I understand correctly, saving a trap representation to a variable causes undefined behaviour, as doing so presumably constitutes reading the trap value [1] (I had a tough time finding a definitive answer on that point).
I think it would be compliant to use the intptr_t type (assuming it exists on that platform) and give special treatment when it holds -1, i.e. never cast back to the pointer type if it holds -1. (Alternatively, use uintptr_t and give special treatment if it holds UINTPTR_MAX.) This would still rely on the assumption that the special value will never collide with the actual pointer values you need to work with, of course.
[0] https://wiki.sei.cmu.edu/confluence/display/c/INT36-C.+Conve...
[1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2091.htm
Somehow my blog server got overwhelmed and requests started taking tens of seconds. Which is strange because typically it's under 100ms (it's just executing a Go template).
It's not a CPU issue, so there must be a locking issue I don't understand.