The hard part is when you have multiple people working on something, who all need to synchronize their mental model of all lifetimes in the project, or even in your own code when you come back to it in 3 months. Encoding this stuff in the type system is unbelievably useful.
Zig improves a lot of things, and seems like a pleasant language in general, but this isn't a problem that it does anything to solve.
AI researchers need to hurry up and invent the next big paradigm shift so AI on your phone is as good as SoTA bots, so we can stay ahead of the enshittification curve.
Awesome software - I've been meaning to build a crawler and this does the trick.
Let's be honest: Zig is shiny new shit for people who don't want to learn and want everything to be familiar but new.
Criticism of it is not allowed and would be downvoted by bandwagon fanboys.
Familiar is different from identical, and therefore still new. And Zig has many new concepts and things that are different from C.
More seriously, a reasonably sane way to create a lot of web-heavy services (writing out something simple for brevity, not anything perfect) is with large regions partitioned into ropes (for use with, e.g., iovecs kernel APIs). You have a tiny bit of potentially memory-unsafe stuff in a simple backing data structure (or not -- at $WORK we're moving more things to static allocation models for a host of other benefits), and then everything else you do web-wise is with views into those ropes (enabling incremental processing and whatnot if you care). The rest is memory-safe by definition (only using slices and other such safe techniques), so if you have any memory bugs from there then they're the same logic bugs you can write in any language (a fairly classic example in a web context is serving another user's data, especially by not resetting view states, but that's also not what happens in a "normal" Zig program because the compiler will yell at you when you miss some fields).
You might notice that my answer seemingly wasn't Zig-specific. You can use that same architecture in C. Why is Zig safe? It's a lot of little things -- first-class errors, defer and errdefer statements, first-class tests and fuzzing, the existence of a built-in fat pointer type, etc. If you propose the same idea in C you'll likely screw up a detail somewhere (not checking an error, not using yet another fat pointer implementation for ergonomic reasons, whatever). In Zig you'll write safe code by default.
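To make that concrete, here's a minimal sketch of the "big backing region plus safe views" shape - written in Rust rather than Zig purely for illustration, since the idea isn't language-specific, and with every name made up:

    // Hypothetical sketch: one backing region; everything downstream only sees views.
    struct Region {
        buf: Vec<u8>, // the only place raw storage lives
    }

    impl Region {
        fn new(capacity: usize) -> Self {
            Region { buf: Vec::with_capacity(capacity) }
        }

        // Append a payload and hand back a (start, len) handle instead of a pointer.
        fn push(&mut self, bytes: &[u8]) -> (usize, usize) {
            let start = self.buf.len();
            self.buf.extend_from_slice(bytes);
            (start, bytes.len())
        }

        // Consumers only ever get bounds-checked views into the region.
        fn view(&self, (start, len): (usize, usize)) -> &[u8] {
            &self.buf[start..start + len]
        }
    }

    fn main() {
        let mut region = Region::new(4096);
        let hello = region.push(b"HTTP/1.1 200 OK\r\n");
        // Everything downstream works with slices, never raw memory.
        println!("{}", String::from_utf8_lossy(region.view(hello)));
    }

Only the region itself could ever touch raw storage; everything downstream sees bounds-checked views, which is the point.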
There are other architectures, other ways to ensure safety, and other things the compiler does to keep you on the straight and narrow. You could go fairly deep into the "why" and "how" of Zig being safe enough. I'll leave that chore for somebody else. The other half of your question though is "what do you gain?"
You gain lots of things, and they might not matter to you, but they probably exist.
One thing I encountered was needing a faster language and not being able to justify the huge ramp-up time to teach Rust to a bunch of Pythonistas (nor the ramp-up time on the company if we tried to hire explicitly for that work, even if we could have gotten the additional budget).
You also gain access to really world-class programmers. There are great programmers in every language, but in established languages they're a lot harder to find in any given job search (not talking about any of you here on HN, of course :). The point is that, from the receiving company's perspective, resumés have a sampling bias toward people who struggle to get jobs, and for a variety of reasons you get a much higher signal-to-noise ratio when hiring for less popular languages. This was true of Rust at one point too, but IMO it's a little harder to hire for now (yet still better than even more popular languages).
As a broader point, for somewhat nebulous reasons I don't fully understand yet, it's by far the easiest language I've personally found for writing high-performance software correctly. C/C++/Rust/etc were fine enough I guess (all of them more than fine in other problem domains -- I've used them professionally and don't have too many complaints that other practitioners would disagree with), but they were comparatively hard to use to write code that was anywhere near optimal for complicated problems.
1. Developers balked at being required to take on the cognitive load that GC-less memory management demands
2. Developers wore their ability to take on that cognitive load as a badge of honor, despite it not being in their best interest
I eventually came to the decision to stop developing in Rust, despite its popularity. It is really cool that its creators pulled it off. It was quite an achievement, given how different it was when it came out. I think that if I had to implement a critical library I would consider using Rust for it, but as a general programming language I want something that allows me to focus my mental faculties on the complexities of the actual problem domain, and I felt that it was too often too difficult to do that with Rust.
Zig where I used to use C/Rust (but admittedly I spent the least time here).
Go where I used to use Java.
Bun/Node for typescript/javascript, where each is appropriate, but I favor Bun for standalone application programming and local scripting.
I really don't understand how that fits with the “I want something that allows me to focus my mental faculties on the complexities of the actual problem domain”.
For low-level stuff, Rust lets you offload the cognitive load of maintaining the ownership requirements to the machine. By contrast, Zig is exactly like C: it forces you to think about it all the time, or you shoot yourself in the foot at the first opportunity…
For stuff that can be done with managed languages, then absolutely, the GC lets you completely ignore that aspect, at the cost of some performance you often don't care about given how fast modern hardware is.
Do not do this.
If you have backreferences or "parent pointers", you need `Arc<Mutex<...>>` or `Rc<RefCell<...>>`, and then you run into trouble as you encounter the same node multiple times while traversing the graph, because you cannot hold a mutex lock or mutably borrow `RefCell` twice in the same call stack.
The solution with much less resistance in Rust is to go for a data-oriented representation. If you really need an actual graph of objects, separate the node data from the topology metadata, and refer to the nodes using an ID or index. (As an extra bonus, this also gives you much better cache locality.)
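As a rough sketch of what that looks like (all types and names invented here):

    // Hypothetical sketch: node data separated from topology, edges stored as indices.
    struct Node {
        name: String,
    }

    struct Graph {
        nodes: Vec<Node>,           // the data
        children: Vec<Vec<usize>>,  // the topology: indices into `nodes`
        parent: Vec<Option<usize>>, // "backreferences" are just more indices
    }

    impl Graph {
        fn new() -> Self {
            Graph { nodes: Vec::new(), children: Vec::new(), parent: Vec::new() }
        }

        fn add(&mut self, name: &str, parent: Option<usize>) -> usize {
            let id = self.nodes.len();
            self.nodes.push(Node { name: name.to_string() });
            self.children.push(Vec::new());
            self.parent.push(parent);
            if let Some(p) = parent {
                self.children[p].push(id);
            }
            id
        }
    }

    fn main() {
        let mut g = Graph::new();
        let root = g.add("root", None);
        let child = g.add("child", Some(root));
        // Walking "up" is just an index lookup; no Rc, RefCell, or borrow fights.
        assert_eq!(g.parent[child], Some(root));
        println!("{} -> {}", g.nodes[root].name, g.nodes[child].name);
    }

Backreferences become plain index reads, so there is no double-borrow fight when walking up and down the graph.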
My own appreciation for Rust is rooted in humility. I know I'm an overgrown monkey prone to all kinds of mistakes. I appreciate Rust for helping me avoid that side of me.
People love to say this, but C++ is routinely taught as a first programming language to novice programmers (this used to be even more clearly the case before Java and Python largely took on that role) and Rust is undoubtedly simpler than C++.
You can subset C++ and still knock out a program.
You cannot subset Rust and still create a program.
But Rust is a dramatically smaller language than C++. The various subsets of C++ people usually carve out tend to be focused on particular styles of programming, like “no exceptions” or “no RTTI”. Notably never things like “signed integer overflow is now defined”, or “std::launder() is now unnecessary”.
When you have this stuff in "Hello World":
Egui Hello World:
    ui.add(egui::Slider::new(&mut age, 0..=120).text("age"));
Ratatui Hello World:

    fn render(frame: &mut Frame) {

or

    fn run(mut terminal: DefaultTerminal) -> Result<()> {
        loop {
            terminal.draw(render)?;
            if matches!(event::read()?, Event::Key(_)) {
                break Ok(());
            }
        }
    }
And I didn't even break out the function chaining, closures, and associated lifetime stuff that pervades the Rust GUI libraries. When I contrast this to, say, ImGui in C++:
    ImGui::Text("Hello, world %d", 123);
    if (ImGui::Button("Save"))
        MySaveFunction();
    ImGui::InputText("string", buf, IM_ARRAYSIZE(buf));
    ImGui::SliderFloat("float", &f, 0.0f, 1.0f);
which looks just slightly above C with classes. This kind of blindness makes me wonder what universe the people doing "Well Ackshually" about Rust live in.
Rust very much has an enormous learning curve and it cannot be subsetted to simplify it due to both the language and the extensive usage of libraries via Cargo.
It is what it is--and may or may not be a valid tradeoff. But failing to at least acknowledge that will simply make people wonder about the competence of the people asserting otherwise.
The Rust code you pasted doesn't show any lifetimes.
The `&f` in your imgui example is equivalent to the `&mut age`.
Are you just comparing the syntax? It takes just a couple of hours to learn the syntax by following a tutorial, and that `&mut` in Rust is the same as `&` in C; not to mention that the compiler error tells you to add the `mut` if it is missing.
Also, `0..=120` is much clearer than passing the two arguments `0.0f, 1.0f`: it makes it obvious what the range is, while looking at the ImGui call it isn't.
> When you have this stuff in "Hello World"
Might be worth reading simonask's comment more closely. They said (emphasis added):
> You can absolutely make _a_ complete, featureful program in Rust without naming a single lifetime, or even without dealing with a single reference/borrow.
That some programs require references/borrows/etc. doesn't mean that all programs require them.
† As you're apparently a C++ programmer you would call these "Non-type template parameters"
The nice thing about Rust as First Language (which I'm not sure I'd endorse, but it can't be as bad as C++) is that because safe Rust ropes off so many footguns it's extremely unlikely that you'll be seriously injured by your lack of understanding as a beginner. You may not be able to do something because you didn't yet understand how - or you might do something in a terribly sub-optimal way, but you're not likely to accidentally write nonsense without realising and have that seem to work.
For example yesterday there was that piece where the author seems to have misunderstood how heap allocation works in Rust. But, in safe Rust that's actually harmless. If they write their mistake it won't compile, maybe they figure out why, maybe they give up and can't use heap allocation until they learn more.
I haven't thought too hard about Zig as first language, because to me the instability rules that out. Lecturers hate teaching moving targets.
Rust just feels natural now. Possibly because I was exposed to this harsh universe of problems early. Most of the stupid traps that I fell into are clearly marked and easy to avoid.
It's just so easy to write C++ that seems like it works until it doesn't...
The main thing a lot of us had going for us was 5-10 years of experience with Basic, Pascal and other languages before anyone tried to teach us C++. Those who came in truly unprepared often struggled quite badly.
This thread is about Zig though! I want to like Zig but it has many annoyances... just the other day I learned that you must not print to stdout in a unit test (or any code being unit tested!) as that simply hangs the test runner. No error, no warning, it just hangs. WTF who thinks that's ok?!
But I think Zig is really getting better with time, like Java did and perhaps as slowly. Some stdlib APIs used to suck terribly but they got greatly improved in Zig 0.15 (http, file IO and the whole Writergate thing), so I don't know, I guess Zig may become a really good language given some more time, perhaps a couple of years?!
For example, a command line utility. In a CLI tool you typically don't free memory. You just allocate and exit and let the OS clean up memory.
Historically compilers were all like this, they didn't free memory, they just compiled a single file and then exited! This ended up being a problem when compilers moved more into a service model (constant compilation in the background, needing to do whole program optimization, loading into memory and being called on demand to compile snippets, etc), but for certain problem classes, not worrying about memory safety is just fine.
Zig makes it easy to create an allocator, use it, then just free up all the memory in that region.
Right tool for the job and all that.
If you were to add borrow checking to Zig, it would make it much easier to justify using it at my current workplace.
http://github.com/ityonemo/clr
was where I got to last year. This December I'm doing a "prototype", which means it's going to be done in Zig and I'm going to clear some difficult hurdles I couldn't last year... Also accepting sponsors, details on the page.
Also, disclaimer: I'm using heavy amounts of AI assistance (as implied in the preview video).
The sad exception is obviously that Rust's std collections are not built on top of it, and neither is almost anything else.
But nevertheless, I think this means it's not a Zig vs Rust thing, it's a Zig stdlib vs Rust stdlib thing, and Rust's stdlib can be replaced via #![no_std]. In the far future, it's likely someone will make a Zig-like stdlib for Rust too, with a &dyn Allocator inside collections.
This exists in the nightly edition of Rust, but is unlikely to become a feature in its current form because the alternative of "Storages" seems to be a lot more flexible and to have broader applicability.
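For concreteness, this is roughly what that unstable API looks like on a current nightly toolchain - a sketch only, since it requires nightly and may well change or be replaced, as noted above:

    #![feature(allocator_api)] // nightly only; the design is still in flux

    use std::alloc::System;

    fn main() {
        // Collections are generic over an allocator parameter.
        let mut v: Vec<u32, System> = Vec::new_in(System);
        v.push(1);
        v.push(2);
        println!("{v:?}");
    }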
I think we've heard these arguments ad nauseam at this point, but the longer I use Rust in large systems where the long-term maintenance burden has to stay low and where I have to be absolutely, 10,000% correct about the way I manage memory, the more it seems to reduce the effort required to make changes to those systems.
In scenarios where multiple people aren't maintaining a highly robust system over a long period of time, e.g. a small video game, I think I'd absolutely prefer Zig or C++ where I might get faster iteration speed and an easier ability to hit an escape hatch without putting unsafe everywhere.
This view is only remotely within the bounds of plausibility if you intended for "other languages" to refer exclusively to languages requiring manual memory management
In languages like Java, their version of the Billion Dollar Mistake doesn't cause arbitrary Undefined Behaviour, but it is still going to blow up your program, so you're also going to need to track that or pay everywhere to keep checking your work - and since Rust doesn't have the mistake, you don't need to do that.
Likewise, C# apparently doesn't have arbitrary Undefined Behaviour for data races. But it does lose Sequential Consistency, and humans can't successfully reason about non-trivial software when that happens, whereas safe Rust doesn't have data races, so no problem.
Neither of these languages can model the no-defaults case, which is trivial in Rust and, ironically, plausible though not trivial in C++. So if you have no-defaults anywhere in your problem, Rust is fine with that, languages like Go and Java can't help you, "just imagine a default into existence and code around the problem" sounds like cognitive load to me.
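A tiny, made-up illustration of the no-defaults point:

    // No Default impl anywhere: the compiler forces every field to be supplied.
    struct TlsConfig {
        cert_path: String,
        key_path: String,
        min_version: u16,
    }

    fn main() {
        let cfg = TlsConfig {
            cert_path: "cert.pem".into(),
            key_path: "key.pem".into(),
            min_version: 0x0303,
        };
        // Omitting any field above is a compile error; there is no implicit
        // "default" or zero value to silently fall back on.
        println!("{} {} {:04x}", cfg.cert_path, cfg.key_path, cfg.min_version);
    }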
Edited: Fix editorial mistake
The Billion Dollar mistake is about not even having the distinction shown in the commit you linked. In languages with this mistake a Goose and "Maybe a Goose or maybe nothing" are the same type.
Some others are:
- `&mut T` which encodes that you have exclusive access to a value via a reference. I don't think there is any language with the same concept.
- `&T` which encodes the opposite of `&mut T` i.e. you know no one can change the value from underneath you.
- `self`/`value: T` for method receivers and arguments, which tells you ownership is relinquished (for non-Copy types). I think C++ can also model this with move semantics.
- `Send`/`Sync` bounds informing you how a value can and cannot be used across thread boundaries. I don't know of any language with an equivalent.
- `Option<T>` and `Result<T, E>` encoding the absence of values. Several other languages have equivalents, but, for example, Java's version is less useful because it can still be `null`.
- Sum types in general. `Option<T>` and `Result<T, E>` are examples, but sum types are amazing for encoding 1-of-N possibilities. Not unique to Rust of course.
- Explicit integer promotion/demotion. Because Rust never does this implicitly you are forced to encode how it happens and think about how that can fail.
All of these are other ways Rust reduces cognitive load by encoding facts in the program text instead of relying on the programmer's working memory.
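A toy snippet touching a few of these at once (every name here is invented):

    // Sum type: exactly one of N possibilities, matched exhaustively.
    enum Payment {
        Cash { cents: u64 },
        Card { last4: [u8; 4] },
    }

    // `&Payment`: read-only view. `&mut u64`: exclusive access; nothing else
    // can observe or change the total while we hold it.
    fn apply(payment: &Payment, total: &mut u64) -> Result<(), String> {
        match payment {
            Payment::Cash { cents } => {
                *total += *cents;
                Ok(())
            }
            Payment::Card { last4 } => {
                // Explicit widening: u8 -> u64 must be written out, never implicit.
                let first_digit = u64::from(last4[0]);
                if first_digit <= 9 { Ok(()) } else { Err("corrupt card digits".into()) }
            }
        }
    }

    fn main() {
        let mut total: u64 = 0;
        // Option: absence is a value you must handle, not a null you can forget.
        let maybe_payment: Option<Payment> = Some(Payment::Cash { cents: 250 });
        if let Some(p) = maybe_payment {
            apply(&p, &mut total).expect("payment applies");
        }
        println!("total = {total}");
    }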
The gist of it is that Rust is (relatively) the French of programming languages. Monolingual English speakers (a stand-in here for the C/C++ school of things, along with same-family languages like Java or C#) complain a lot about all this funky syntax/semantics - from diacritics to extensive conjugations - that they've never had to know to communicate in English. They've been getting by their whole life without accents aigu or knowing what a subjunctive mood is, so clearly this is just overwrought and prissy ceremony cluttering up the language.
But for instance, the explicit and (mostly) consistent spelling and phonetics rules of French mean that figuring out how to pronounce an unfamiliar word in French is way easier than it is in English. Moods like the imperative and the subjunctive do exist in English, and it's easier to grasp proper English grammar when you know what they are. Of course, this isn't to say that there are no parts of French that an English speaker can take umbrage at - for example grammatical gender does reduce ambiguity of some complex sentences, but there's a strong argument that it's nowhere near worth the extra syntax/semantics it requires.
On top of all that, French is nowhere near as esoteric as many monolingual Anglophone learners make out; it has a lot in common with English and is easier to pick up than a more distant Romance language like Romanian, to say nothing of a language in a more distant family (like Greek or Polish). In fact, the overlap between French and English creates expectations of quick progress that can be frustrating when it sinks in that no, this is in fact a whole different language that has to be learned on its own terms rather than just falling into place for you.
Hell, we can take this analogy as far as native French speakers being far more relaxed and casual in common use than the external reputation of Strictness™ in the language would have one believe.
- don't have English or any European language as their first language
- have learned English successfully
- are now in a long, struggling process of learning French
I don't believe there is much value in day-to-day life in the advantages you mention for French.
It's also not popular for a language that old. It's roughly as popular as Ada was when it was the same age Rust is today (there may not have been as many projects written in Ada then, but there were certainly much bigger/more important projects being written in Ada then). It's not nearly as popular as C, or C++, or Java, or C#, or Go were at that age.
The relatively small number of developers who program in Rust, and the smaller still number of them who use it at work, are certainly very enthusiastic about it, but an enthusiastic "base" and popularity are very different things.
i.e., it's not rocket science - languages with 10-20+ year histories are embedded in the industry. News at 11. ;P
Popularity - as the term is actually used - is one where Rust is fine.
[1]: In the application space, there were VB, Delphi, Smalltalk, and a host of other so-called "RAD languages". In the scripting space, Perl was dominant. In the low-level space, we had the entire Pascal family, with Ada and, to a lesser extent, Oberon.
Now I mostly generate code with coding agents and almost everything I create is Rust based - web backend to desktop app. Let the LLM/agent fight with the compiler.
Now, the robots do a good enough job at writing clean C++ without going too crazy that I just kind of let them do their thing and review the very readable code to keep them on the right path.
I can't even imagine the nightmare with something like a browser where you'd be pulling in C++ dependencies from all over the place and each having their own way of doing things. I mean, I get annoyed when C libs don't do the 'object to be operated on' as the first argument to functions so they can't be trivially wrapped in Python C-API extensions super easily using generators.
--edit--
Actually, this got me thinking, I was exploring using zig for a project (might still do, dunno) and came up with this meta-circular comptime peg grammar 'evaluator' to generate, at compile time, the parser for the peg grammar used to generate the parser for the runtime peg generator tool. Admittedly, I was pretty high when I cooked up this scheme with the robots but it seems to be viable...
But yeah, Go’s system is nice and simple. I am not sure, but I think the fact that Zig programs are a single compilation unit might have some bearing on the orphan rule. There is no concept of crates so “traits”/interfaces can be defined and implemented anywhere.
Though, I am seeing your point about a simple interface system that would be enough to have something like the allocator interface, or the hash map interface.
And also for Hype-- about all these newfangled langs.
https://github.com/dioxuslabs/blitz
Also I think it's a little ridiculous to build yet another new browser in a new language when so many amazing pieces are just sitting around ready for someone to use. Come contribute, we're already much further along :)
If I were building a company around a new browser, I'd reach for the solid stuff that can be pulled in. Our whole blitz project is designed to be modular exactly for that use-case.
Servo had Mozilla's backing in that endeavor though, and even then they didn't manage to ship a full browser in a decade, the problem is just that hard.
Not that hard; Ladybird, with a fraction of the resources available to the Servo team, is C++ (moving to Swift soon), and they got pretty damn far.
The lack of velocity in Rust is real; it's a trade-off between velocity and safety, and Ladybird has amply demonstrated just how rapid C++ velocity can be.
And this is coming from someone who doesn't even like C++.
You know the saying about the first 90% and then the second 90%? Making a web browser is the fractal version of that.
> The lack of velocity in Rust is real; it's a trade-off between velocity and safety,
No it's not, Rust has in fact much higher velocity than C++, even at Mozilla which was basically a C++ shop beforehand (and a pretty good one at that).
> and Ladybird has amply demonstrated just how rapid C++ velocity can be.
No, you seem to have a misunderstanding of what Servo was. Mozilla didn't use Rust to make a safer browser, they used Rust to make a faster browser by leveraging all the cores of modern CPUs. That was the primary motive for making Rust in the first place: making multithreading tractable for humans.
As a result, the Servo project aimed for SotA performance, and modules were rewritten multiple times as the architecture was improved for performance (see the different iterations of Stylo or Webrender, which ended up in Firefox proper when they converged).
That's why it was apparently slow, not because of safety but because it aimed to be the first fully parallel browser (which is something enabled by Rust's safety).
You can argue that Rust has lower velocity than garbage-collected languages because you need to think about ownership, but not that it has lower velocity than other low-level languages: they too need to think about ownership, they simply have no static check to catch errors at compile time - every error the borrow checker would have raised becomes a segfault instead. (And Rust keeps what already makes C++ much higher velocity than C: the ability to build powerful abstractions at no performance cost.)
I mean, Rust does have a learning curve, but its complexity is overexaggerated imo. Yes, you have to learn something new, but how is that a problem?
I don't understand why you'd pick a language because it looks familiar and you don't have to change how you think. For me that is basically the problem with Zig - I can do everything Zig does in C++, with decades of libraries and infra behind it, while Rust actually contributes something to the end product.
Throwaway script? Use anything. A mobile app? Whatever gets it on the devices you're targeting today, that works for the life of the device/OS/etc. A backend API that will power a large platform? Something maintainable (by people other than yourself) for 3-5 years. Firmware for IoT that will remain in industrial systems for 20 years? Something that is well established and supported with a good number of other people who can fix it in the long haul.
Like the Alan Perlis (I think) quote goes: "A language that doesn't affect the way you think about programming is not worth knowing."
> Investing in creating a database like TigerBeetle is a long term effort. Databases tend to have a long half life (e.g. Postgres is 30 years old). And so, while Zig being early in 2020 did give me pause, nevertheless Zig’s quality, philosophy and simplicity made sense for a multi-decade horizon.
Meanwhile Rust compiles just fine. Even updating the toolchain to the newest version causes no issues and the benchmark still runs. All I had to do was remove the pin to the old toolchain and bump the language version to latest. Changing dependencies to their latest versions also worked without an issue.
You'd think that when performing all these advanced memory manipulations you would want all the safety you can get, but hey, Zig is cool these days.
Go figure.
I'm about to say something that will likely surprise you; it surprised me: have you folks thought about Ada? I was around, working on (D)ARPA contracts at BBN, when Ada was designed. We didn't use it, so I had no opinion of it at the time. I thought it had dried up and blown away in the decades since, like PL/I.
Well, it's now #17 on the Tiobe index and climbing. So I gave it a try for a small project. My reaction is very positive. The language is well-designed, obviously having learned a lot of lessons from the mistakes of C, among other things. The compiler, gnat, is built on gcc. Compilation times are very fast, execution times are fast, it works well with gdb, and it's well documented. It avoids many of the traps of C (no pointer arithmetic, no automatic type conversions) and it just feels carefully engineered. It's been used to build a lot of applications in areas like train control and aviation where you just can't screw up because people may die if you do.
Based on my experience with writing and debugging a little over a thousand lines of code (so not a lot), I'm very impressed -- it's surprisingly good. I'd suggest giving it consideration if/when the opportunity presents itself.
Hold on, why can't humans have a 100x better browser?
Right now their browser is trivial to block, it provides no value that I can see. curl-impersonate is more useful than what they offer, at least it won't be stuck on captchas as often.
I find this take a bit hard to believe. There's no way that Zig is some kind of magic bullet that avoids build configuration challenges. Especially not considering you are building a browser on top of V8 in a different programming language.
CMake is quite crufty, but there are toolchains for every system under the Sun, and this is what actually makes it less painful in a lot of cases. Looking over your build files, it does not look particularly scalable or portable. Nice that Zig allows you to write build config in Zig though.
And although I know Microsoft is uncool, I still want to shill vcpkg, as it seems they finally managed to create a usable cross-platform package manager for C++.
My reasoning for settling on Rust:
If I wanted something more general-purpose and ergonomic, I'd stick with something like Kotlin, which has wider ecosystem support. Go could fit here too, but I've heard from more experienced folks that Go's simplicity can get limiting as codebases grow (and requires 100s of developers to be disciplined). Not impossible, just not as conducive.
But since I specifically wanted a performant systems language, I figured I'd go to the other extreme. So my choice was Rust or Zig. I eventually chose Rust: as complicated as Rust can seem, the borrow checker is pretty elegant once it clicks, and it provides the necessary safety net for a language I'm intentionally choosing for more control.
(here's my article on learning Rust if folks are interested: https://kau.sh/blog/learn-rust-ai-atrophy/) - different angle from the linked article.
The D programming language shines:
comptime: https://dlang.org/spec/function.html#interpretation
metaprogramming: https://dlang.org/spec/template.html#function-templates
explicit memory allocators: these are easily made, there's nothing special about them, I use them all the time
best-in-class C interoperability: Nothing beats D's ImportC, where you can import a .c file as if it were a module. You can even use ImportC to translate your C code to D! https://dlang.org/spec/importc.html
Performance: same as C and C++
I've been aware of D since its inception, more or less, but don't know it very well. I would say D lacks a "bombastic" feature, and maybe that's both the reason it's not used more and why it's such a good language.
It's not "memory safe" like Rust, yes it's fast but so is C/C++, it doesn't have the "massive parallelism/ always-on" robustness like Erlang. It has a bit of everything which is good and bad.
Being a mid all-arounder is OK in my book; perhaps it's more a matter of some "post-AI" tech startup adopting it and getting massive or famous, like Ruby in the Web 2.0 era or Erlang with the WhatsApp thing.
Maybe D is good the way it is and will always be there.
It's a more memory safe language than C/C++, no need to worry about forward references, strong encapsulation, simple modules, and so on.
And let's face it - the C preprocessor is an abomination in modern languages, why does it persist?
But it's difficult to do so! Nothing to do with marketing in my case, at least. The reasons are:
* `dub` is badly documented and does dumb things like including test code in the generated binary.
* `serve-d` is terrible. It can't handle even my little hello world programs - either crashes or consumes 100% CPU until I manually kill it.
* MacOS support sucks. All the time I have problems: the linker didn't work for years (fixed now). Immediate segmentation faults currently (fixed in nightly AFAIK). C code using the new float128 doesn't compile (I think it was fixed already?). Just constant frustration.
* Too many features, many broken. It has an experimental borrow-checking feature; I tried to use it but it's largely undocumented. People in the forum told me that feature is completely unfinished. It has an allocators package as well, but I have no idea how to make use of those like I would in Zig. Would love to see a well-written post about that.
Using D in betterC mode is what I am most interested in, exactly because it looks more like Zig and C than Java - and performs much better. But currently, that means forgetting about Phobos, the standard library, as that's written exclusively with the GC and Exceptions in mind. Maybe that's ok as you can just use all the C libraries you want, but it would be nice to have some D conveniences to make that worthwhile.
Apart from that, I completely agree that D's comptime and metaprogramming is the best I've seen in any language (except for Lisp of course). All I need to keep using D is much better tooling and clarification about what parts of the language are "half-baked" (especially around DIP1000) and which parts are stable - perhaps "editions" will give us that, will check it out when it's ready. Oh and also top-notch MacOS support... I know that's a moving target but even Zig manages to handle that just fine, why not D?!
I don't understand the special purpose behind Zig's allocators. It's just an interface. I make custom allocators in D all the time, it's trivial.
DIP1000 is also an experimental feature and is not part of the core language.
On a more pragmatic note, D has a notion of "sinks", aka output ranges. They provide a simple interface that accepts input, which is appended to the output range. The user of the sink has no idea what the sink is doing, just that it accepts input. This is a fun and very practical way to move the choice of how to allocate memory to the caller rather than the callee.
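For anyone who thinks better in Rust terms, here's a rough analogue of the same sink idea - not D code, and the names are made up: the callee writes into whatever sink the caller hands it, so the caller decides where the output goes and how it's allocated.

    use std::fmt::Write;

    // The callee only knows "something that accepts text"; it never allocates itself.
    fn render_greeting(name: &str, sink: &mut impl Write) -> std::fmt::Result {
        write!(sink, "Hello, {name}!")
    }

    fn main() -> std::fmt::Result {
        // Caller A: append into a growable String.
        let mut s = String::new();
        render_greeting("world", &mut s)?;

        // Caller B: reuse a preallocated buffer, no fresh allocation needed.
        let mut reused = String::with_capacity(64);
        render_greeting("again", &mut reused)?;

        println!("{s} / {reused}");
        Ok(())
    }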
gdc and ldc offer top notch D support for the Mac.
Have you tried using D on Mac? That's almost impossible to do!! You get a segmentation fault immediately when using VS Code because it wants to download its own DCD, and that's apparently compiled with DMD?? Every binary that gets downloaded doesn't work on MacOS. I only managed to get it working on emacs after building DCD locally with LDC AND making sure emacs does not try to use anything else. But even then, as I said, serve-d barely works (not sure if that's MacOS-specific). D has the worst MacOS experience of the several dozen languages I've tried. Even FORTH just works, but D just does not.
My library for example is not using the GC, but I don't put @nogc on every function because it does not make sense. Here is the link to the library: https://github.com/Kapendev/joka
Well I am not, but thanks.
Well you will. It's a common mistake new D users make.
We were able to get dmd's backend license fixed in 2017.
No judgement against trying to monetize valuable work, but in this day and age nearly everyone expects free and OSS compilers/interpreters and core tooling.
The backend for the DMD compiler was not fully open source for a number of years. That's because Symantec owned some of the code and they were not willing to let it be relicensed. They did allow that in 2017. It was never a paid product AFAIK.
Overall, that was beneficial to the D community. The GDC backend has always been open, and for some time has been part of GCC. The LDC backend was developed to use LLVM. It's possible that there would not have been motivation for those projects if DMD's backend had been open from the start. DMD compiles fast but the performance is not competitive with the other compilers if you're working on something that needs to push the CPU to its limits.
Sadly, the number of people nerdly enough to want to work on a code generator is very, very small. Me, I find it quite enjoyable.
I cannot even think of anyone who has written a full-stack compiler these days.
Most components of the D standard library (Phobos) use garbage collection by default (when I last checked). This unfortunately removes it as a contender in the same space as Zig and Rust. And once you move to the GC space, there are a truckload of GC languages - it is the non-GC ones that are rare.
IMHO stdlib GC was one of the major barriers stopping D in the system programming space.
What does it offer over Zig's translate-c?
    #define X 3+y
I haven't experimented with translate-c to see just what it does; I am going by the description of it, which is a bit vague. ImportC can also import D modules, so C code can access the D code.
For example, here's a comptime dependency-injection container which, notably, can be configured and extended completely in comptime, and the final glue is something that in theory the compiler could optimize away too (no idea if it does). https://github.com/cztomsik/tokamak/blob/main/src/container....
And regarding what they discuss in the article, I did something similar for N-API; note how much you can do in a few hundred lines. I'd say you can use it for most of what people do with napi-rs, except that napi-rs is huge (and also unsafe, because it allows indirect recursion, which allows breaking all the rustc promises): https://github.com/cztomsik/napigen/blob/main/src/napigen.zi...
Your criticism makes more sense with products targeting non-technical users though. But IMO tech choices have cascading effects. I won’t buy a vehicle if the infotainment software sucks, and that’s the 2nd largest purchase I’ll ever make.
But to elaborate, they’ve found a niche simply by using Rust and rendering the GUI in a performant way on the GPU. I’m not saying performance is the only thing, but for a chunk of people it is something they care about.
If I had the optional GPS screen from 22yr ago, I think I would have ripped it out and replaced it a bunch of times or just bought a new car.
I’m curious to try the new iDrive 10. We will see…
If the language doesn't matter, then why not go build something in Fortran or Brainfuck?
Because if you're getting lunch and someone suggests burgers, sushi, or casu martzu, only two are actually reasonable.
Yes, yes, if I'm allergic to shellfish, I might want to make sure I have an EpiPen before getting sushi. But that doesn't mean it's a meaningful problem.
I think end users don't give a shit about the tech stack of a software. Why would they?
This doesn't guarantee any sort of commercial success because there are so many follow on things that are important (product/market fit, sales, customer success, etc.) but it's pretty rough to succeed in the follow ons when the product itself is shit.
For first order effects, if a product's target market is developer oriented, then marketing to things developers care about such as a programming language will help initial adoption. It can also help the tool get talked about more organically via user blogs, social media, word of mouth, etc.
Basically, yeah, it matters, but as a cog in a big machine like all things.
^https://xcancel.com/QULuseslignux/status/1918296149724692968
On the other hand, I've found that core decisions like language ecosystem choice can be a good leading indicator of other seemingly unrelated decisions.
When I see someone choose a tool that I think is extremely well suited for a purpose, it makes me curious to see what else we agree on.
The Oven team, the ones who created the Bun runtime, is a good example for me. I think Zig is probably the best compromise out there right now, for my sensibilities. The Oven folks, who chose to use Zig to implement Bun, _also_ made a lot of product decisions I really agree with.
Come on, they advertise with benchmarks, hence it's quite obvious why they didn't choose a GC'd language.
Paul Graham is one of the founders of Y Combinator, the company that hosts Hacker News.
Yes it matters to me as an end user if my web browser is more or less likely to have vulnerabilities in it. Choice of programming language has an impact on that. It doesn't have to be Rust, I'd use a browser written in Pony.
If I were making something that had to be low-level and not have security bugs, my statement would be:
> I’m not smart enough to build a big multi-threaded project in a manual memory-managed language that doesn't have vulnerabilities. I want help from the language & compiler.
The size and longevity of the team matters a lot too. The larger it gets the more problematic it is to keep the bugs out.
Typo?
Of course, it's all just 1s and 0s at the end of the day. You can ultimately accomplish the same in any language. But the design of the language does shape the way developers end up thinking about the problems. If NeXT had used, say, C++ instead, it is unlikely that the people involved would have ever come to recognize the same possibilities.