I mean, you could also write about how we went from 1M lines of C# in our mostly custom engine to 10k lines of Unreal C++.
> The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance.
This is also a highly underrated aspect of C#: its surface area has largely remained stable since v1, with few breaking changes (though there are some valid complaints about the keyword bloat that results!). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get outdated patterns (e.g., not using the latest language features for terseness), you will not likely get non-working code, because the large set of first-party dependencies is stable (whereas in Node, outdated 3rd-party dependencies often lead to breaking incompatibilities with the latest packages on NPM).

> It was also a huge boost to his confidence and contributed to a new feeling of momentum.

I should point out that Blake had never written C# before.
Often overlooked with C# is its killer feature: productivity. Yes, when you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first-party documentation is also a huge boon for productivity. When you have an extremely broad, well-written, well-organized standard library and first-party libraries, it's very easy to ramp up productivity versus finding different 3rd-party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first-party rate limiting for web APIs out of the box is great for productivity. Same for having first-party OpenAPI schema generators. Less time wasted sifting through half-baked solutions.
> Code size shrank substantially, massively improving maintainability. As far as I can tell, most of this savings was just in the elimination of ECS boilerplate.
C# has three "super powers" to reduce code bloat: really rich runtime reflection, first-class expression trees, and Roslyn source generators that generate code on the fly. Used correctly, these can remove a lot of boilerplate and "templatey" code.

---
I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.
Why would you use Java 8?
If you feel like replying with ART, there are plenty of .java files in AOSP.
The Master said, “I consider my not being present at the sacrifice, as if I did not sacrifice.”
The Analects (Confucius, 551-479 BCE), translated by James Legge (1815-1897)
C# and .NET are one of the most mature platforms for development of any kind. It's just that online, it carries some sort of anti-Microsoft stigma...
But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.
People tend to be influenced by opinions online, but often the real world is completely different. I've been using C# for a decade now and it's one of the most productive languages I have ever used: easy to set up, powerful toolchains... and yes, a lot of closed-source libs in the .NET ecosystem, but the open source community is large too by now.
> People tend to be influenced by opinions online but often the real world is completely different.
Unfortunately, my experience has been that C#'s lack of popularity online translates into a lot of misunderstandings about the language, and thus many teams simply do not consider it.

Some folks still think it's Windows-only. Some folks think you need to use Visual Studio. Some think it's too hard to learn. Lots of misconceptions lead to teams overlooking it for more "hyped" languages like Rust and Go.
I think there may also be some misunderstandings regarding the purchase models around these tools. Visual Studio 2022 Professional is possible to outright purchase for $500 [0] and use perpetually. You do NOT need a subscription. I've got a license key printed on paper that I can use to activate my copy each time.
Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
[0] https://www.microsoft.com/en-us/d/visual-studio-professional...
> Imagine a plumber or electrician spending time worrying about the ideological consequences of purchasing critical tools that cost a few hundred dollars.
That's just the way it is, especially with startups, which I think would benefit the most from C# because -- believe it or not -- I actually think that most startups would be able to move faster with C# on the backend than TypeScript.

How's the LSP support nowadays? I remember reading a lot of complaints about how badly done the LSP is compared to Visual Studio.
I started using Visual Studio Code exclusively around 2020 for C# work and it's been great. Lightweight and fast. I did try Rider and 100% it is better if you are open to paying for a license and if you need more powerful refactoring, but I find VSC to be perfectly usable and I prefer its "lighter" feel.
My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?
I think that if Microsoft had dropped the old .NET platform (CLR and so on) sooner and really nailed the AOT experience, they may have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.
One negative aspect is that if you haven't kept up, that terseness can be a bit of a brick wall. Many of the newer features, especially things where the .Net framework just takes over and solves your problem for you in a "convention over configuration" kinda way, are extremely terse. Modern C# can have a bit of a learning curve.
C# is an underrated language for sure, and once you get going it is an absolute joy to work in. The .NET platform also gives you all the cross-platform and ease-of-deployment features of languages like Go. Ignoring C#/.NET because it's Microsoft is a bit of a mistake.
This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.
For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.
Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable versions of log4j in certain services, because they depend on some package that still has a hard dependency on a vulnerable version, which in turn says it can't fix that yet because it's waiting on one of its transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.
The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...
Five years has nothing to do with Java. It means nobody cares about security in the first place. Outsourcing such a trivial security problem to Microsoft is just another nail in the coffin. "I have no capacity to develop secure software, better make myself dependent on someone who can".
I agree.
We had been doing interop with a Java process from C# to access some document processing functionality during the log4j incident. The library we were using depended on log4j and had a clear resolution at the time, but there was a big perception problem in our client base (i.e., non-technical people that we must suffer in order to continue doing business).
Our solution to the resulting paranoia was to remove Java from our stack altogether. It was a big pain in the ass to find an alternative, but it was definitely worth it from a customer service perspective. Our clients were very happy with this approach and it saved us from a lot of messy meetings that other vendors were sucked into.
What I'm trying to point out is that I've never known this kind of security vulnerability to be quite such a hassle to eradicate when working on the .NET stack. Because the "batteries included" nature of .NET just makes it easier. On Java the supply chain tends to be more unwieldy. Which seems to engender both increased maintenance hassle and a greater tendency toward normalization of deviance. Perhaps as a way to try to sand the sharp corners off of the maintenance hassle.
Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.
• Go uses its own custom ABI and resizeable stacks, so there's some overhead to switching where the "Go context" must be saved and some things locked.
• Go's goroutines are a kind of preemptive green thread where multiple goroutines share the same OS thread. When calling C, the goroutine scheduler must jump through some hoops to ensure that this caller doesn't stall other goroutines on the same thread.
Calling C code from Go used to be slow, but over the last 10 years much of this overhead has been eliminated. In Go 1.21 (which came with major optimizations), a C call was down to about 40ns [1]. There are now some annotations you can use to further help speed up C calls.
In Unity, Mono and/or IL2CPP's interop mechanism also ends up in the ballpark of direct call cost.
Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.
And chances are that it won't be required.
I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.
Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.
This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.
That's not to say that games aren't a very cool space to be in, but the challenges have moved beyond the code. Particularly in the indie space, for 10+ years it's been all about story, characters, writing, artwork, visual identity, sound and music design, pacing, unique gameplay mechanics, etc. If you're making a game in 2025 and the hard part is the code, then you're almost certainly doing it wrong.
you want to make some change, so you adjust a struct or a function signature, and then your IDE highlights all the places where changes are necessary with red squigglies.
Once you’re done playing whack-a-mole with the red squigglies, and tests pass, you know there’s no weird random crash hiding somewhere
I've been toying with the idea of making a 2d game that I've had on my mind for awhile, but have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).
Probably the best thing in your case: look at the top three engines you could consider, spend maybe four hours gathering what look like pros and cons, then just pick one and go. Don't overestimate your attachment to your first choice. You'll learn more just in finishing a tutorial for any of them than you can possibly learn with analysis in advance.
You're probably right that it'd be best to just jump in and get going with a few of them rather than analyze the choice to death (as I am prone to do when starting anything).
Sometimes it feels like we could use some kind of a temperance movement, because if one can just manage to walk the line one can often reap great rewards. But the incentives seem to be pointing in the opposite direction.
There are some exceptions, e.g., pulling in your languages best-of-breed image library to load some JPGs even though it supports literally a dozen other formats is less disastrous to a code base than pulling in an industrial-strength web framework just to provide two API calls with some basic auth of some sort. But there's something to the concept in general, I think.
I rarely touch game dev but that made me think Godot wasn't very suitable
But they also could have combined Rust parts and C# parts if they needed to keep some of what they had.
Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.
I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.
Although the points mentioned in the post are quite valid.
The number of platforms they support, the number of features they support (many of which could be a PhD thesis in graphics programming), the tooling, the store, ...
While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.
But outside of games the situation looks very different. “Almost everything” is just not at all accurate. There are tons of very stable and productive ecosystems in Rust.
Borrow checking is a way to make manual memory management safe. ECS works around manual memory management entirely, so it doesn't need borrow checking to be safe (which is also why it's popular in C++, which doesn't have a borrow checker); but because it's Rust, you get strong guarantees about the safety of it, while in C++ you can still shoot yourself in the foot if you don't use the ECS the right way.
Also Rust is more than safety: https://steveklabnik.com/writing/rust-is-more-than-safety
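To make the ECS point concrete, here's a toy sketch (illustrative only, not any particular ECS crate): entities are plain copyable indices, so holding one never borrows the world across systems.

    // Entities are just copyable ids; holding one doesn't borrow the World,
    // so there are no long-lived references for the borrow checker to fight.
    #[derive(Clone, Copy)]
    struct Entity(usize);

    struct World {
        healths: Vec<Option<i32>>, // None = entity has no Health component
    }

    impl World {
        fn spawn(&mut self, hp: i32) -> Entity {
            self.healths.push(Some(hp));
            Entity(self.healths.len() - 1)
        }

        fn damage(&mut self, e: Entity, amount: i32) {
            if let Some(hp) = self.healths[e.0].as_mut() {
                *hp -= amount;
            }
        }
    }

    fn main() {
        let mut world = World { healths: Vec::new() };
        let orc = world.spawn(10);
        world.damage(orc, 3); // `orc` is Copy; no borrow of `world` outlives the call
    }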
If you want to squeeze the maximum performance out of your hardware (which is indeed the case for game engines), then your only options are Rust, C, and C++, in which case Rust is by far the most ergonomic choice in every respect.
You could add a lot of research/experimental and/or legacy languages as well (Modula-3, Odin, D, why not FORTRAN, even), but none of them are credible alternatives to C or C++ like Rust has managed to become. And none of them are polished enough (be it in terms of error messages, libraries, docs, onboarding material, or build toolchain) to be considered superior to Rust in terms of developer ergonomics, even if you set aside safety.
And again, borrow checking isn't a feature; it's an implementation of the feature, which is memory safety without a performance tradeoff. Even when using a paradigm that doesn't interact with the borrow checker, you're still using the feature itself (especially because all of the underlying building blocks which aren't using the ECS directly benefit from the borrow checker's validation).
I completely disagree, having been doing game dev in Rust for well over a year at this point. I've been extremely productive in Bevy, because of the ECS. And Unity compile times are pretty much just as bad (it's true, if you actually measure how long that dreaded "Reloading Domain" screen takes).
Replace Rust with Bevy and language with framework, you might have a point. Bevy is still in alpha, it's lacking plenty of things, mainly UI and an easy way to have mods.
As for almost everything being at an alpha stage: yeah, welcome to OSS + SemVer. Moving to 1.x makes a critical statement: it's ready for wider use, and now we take backwards compatibility seriously.
But hurray! Commercial interest won again, and now you have to change engines again, once the Unity Overlords decide to go full Shittification on your poorly paying ass.
You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yes. Because it makes a promise about backwards compatibility.
> Rust language and library features themselves often spend years in nightly before making it to a release build.
So did Java's. And Rust probably has a fraction of its budget.
In defense of long nightly periods: more than once, stabilizing a feature like negative impls or never types early would have caused huge backwards-breaking changes.
> You can also always go from 1.0 to 2.0 if you want to make breaking changes.
Yeah, just like Python!
And split the community and double your maintenance burden. Or just pretend 2.0 is 1.1 and have the downstream enjoy the pain of migration.
If you choose to support 1.0 sure. But you don't have to. Overall I find that the Rust community is way too leery of going to 1.0. It doesn't have to be as big a burden as they make it out to be, that is something that comes down to how you handle it.
If you choose not to, then people wait for x.0 where x approaches infinity. I.e. they lose confidence in your crates/modules/libraries.
I mean, a big part of why I don't 1.x my OSS projects (not just Rust) is that I don't consider them finished yet.
The distance in time between the launches of Unreal Engine 4 and Unreal Engine 5 was 8 years (April 2014 to April 2022). Unreal Engine 5 development started in May 2020 and had an early access release in May 2021.
Bevy launched 0.1 in 2020 and is at 0.16 now in 2025. 5 years later and no 1.0 in sight.
If you want people to use your OSS projects (maybe you don't), you have to accept that perfect is the enemy of good.
At this point, regulators and legislators are trying to force people to use the Rust ecosystem - if you want a non-GC language that is "memory safe," it's pretty much the de facto choice. It is long past time for the ecosystem to grow up.
Yeah because that's when it was open sourced, NOT DEVELOPED.
See https://godotengine.org/article/first-public-release/
> Godot has been an in-house engine for a long time and the priority of new features were always linked to what was needed for each game and the priorities of our clients.
I checked the history and it was known under another name, Larvita.
> If you want people to use your OSS project
Seeing how currently I have about 0.1 parts of me working on it, no I don't want to give people false sense of security.
> At this point, regulators and legislators are trying to force people to use the Rust ecosystem
Not ecosystem. Language. Ecosystem is a plus.
Furthermore, the issue Bevy has is more that there aren't any good, mature GUI libraries for Rust, because cross-OS GUIs were, are, and will be a shit show.
Granted it's a shit show that can be directed with enough money.
I don't even look at crate versions but the stuff works, very well. The resulting code is stable, robust and the crates save an inordinate amount of development time. It's like lego for high end, high performance code.
With Rust and the crates you can build actual, useful stuff very quickly. Hit a bug in a crate or have missing functionality? contribute.
Software is something that is almost always a work in progress and almost never perfect, and done. It's something you live with. Try any of this in C or C++.
If not, the language they pick doesn't really make a difference in the end.
It is like complaining that playing a musical instrument to be in a band or orchestra requires too much effort, naturally.
Great musicians make a symphony out of what they can get their hands on.
From what I’ve heard about the Rust community, you may have made an unintentionally witty pun.
Here's a thought experiment: Would Minecraft have been as popular if it had been written in Rust instead of Java?
I love Rust. It's not for shipping video games. No, Tiny Glade doesn't count.
Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.
What allocations can you not do in Rust?
The Unity GameObject/Component model is pretty good. It’s very simple. And clearly very successful. This architecture can not be represented in Rust. There are a dozen ECS crates but no one has replicated the worlds most popular gameplay system architecture. Because they can’t.
From what I remember from my Unity days (which granted, were a long time ago), GameObjects had their own lifecycle system separate from the C# runtime and had to be created and deleted using Destroy and Create calls in the Unity API. Similarly, components and references to them had to be created and retrieved using the GetComponent calls, which internally used handles, rather than being raw GC pointers. Runtime allocation of objects frequently caused GC issues, so you were practically required to pre-allocate them in an object pool anyway.
I don't see how any of those things would be impossible or even difficult to implement in Rust. In fact, this model is almost exactly what I used to see evangelized all the time for C++ engines (using safe handles and allocator pools) in GDC presentations back then.
In my view, as someone who has not really interacted with or explored Rust gamedev much, the issue is more that Bevy has been attempting to present an overly ambitious API, as opposed to focusing on a simpler, less idealistic one; and since it is the poster child for Rust game engines, people keep tripping over those problems.
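For what it's worth, the handle-and-pool pattern from those GDC talks is only a few dozen lines of Rust (a sketch with hypothetical names, not Unity's API): destroying a slot bumps a generation counter, so stale handles fail the lookup instead of dereferencing freed data, much like a destroyed GameObject comparing equal to null.

    // Generational handle: an index plus the generation it was issued under.
    #[derive(Clone, Copy, PartialEq)]
    struct Handle { index: usize, generation: u32 }

    struct Pool<T> {
        slots: Vec<(u32, Option<T>)>, // (current generation, payload)
    }

    impl<T> Pool<T> {
        fn create(&mut self, value: T) -> Handle {
            self.slots.push((0, Some(value)));
            Handle { index: self.slots.len() - 1, generation: 0 }
        }

        fn destroy(&mut self, h: Handle) {
            if let Some(slot) = self.slots.get_mut(h.index) {
                if slot.0 == h.generation {
                    slot.0 += 1; // invalidates every outstanding copy of this handle
                    slot.1 = None;
                }
            }
        }

        // A stale handle returns None instead of pointing at freed or reused data.
        fn get(&self, h: Handle) -> Option<&T> {
            self.slots
                .get(h.index)
                .filter(|(g, _)| *g == h.generation)
                .and_then(|(_, v)| v.as_ref())
        }
    }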
I'm sorry, but I still don't understand. There are myriad heap collections and even fancy stuff like Rc<Box<T>> or RefCell<T>. What am I missing here?
Is it as simple as global void pointers in C? No, but it's way safer.
Do you mean cyclic types?
Rust being low-level, nobody prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc
It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
Tiny Glade is also the buggiest Steam game I've ever encountered (bugs from disappearing cursor to not launching at all). Incredibly poor performance as well for a low poly game, even if it has fancy lighting...
No offense to the project. It’s cool and I’m glad it exists. But if you were to plot the top 2000 games on Steam by time played there are, I believe, precisely zero written in Rust.
What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?
Despite not being yet released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).
> And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.
Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)
> And maybe not Switch although I’m less certain.
I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.
Tiny Glade is indeed a rust game. So there is one! I am not aware of a second. But it’s not really a Bevy game. It uses the ECS crate from Bevy.
Egg on my face. Regrets.
And for something like Gnorp, Rust is probably a decent choice.
Quake 1-3 use a single array of structs, with sometimes-unused properties. Is your game more complex than Quake 3?
The “ECS” upgrade to that is having an array for each component type but just letting there be gaps:
    transform[eid].position += …
    physics[eid].velocity = …
But yeah, probably you don't need an ECS for 90% of the games.
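For the games that do want this, a compact Rust rendering of that gap-array layout (names illustrative; a "query" is just lockstep iteration that skips the gaps):

    struct Transform { x: f32, y: f32 }
    struct Physics { vx: f32, vy: f32 }

    // One array per component type, indexed by entity id; None marks a gap.
    struct World {
        transforms: Vec<Option<Transform>>,
        physics: Vec<Option<Physics>>,
    }

    impl World {
        fn step(&mut self, dt: f32) {
            // Walk both arrays in lockstep; skip entities missing either component.
            for (t, p) in self.transforms.iter_mut().zip(self.physics.iter()) {
                if let (Some(t), Some(p)) = (t, p) {
                    t.x += p.vx * dt;
                    t.y += p.vy * dt;
                }
            }
        }
    }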
Quake struggled with the number of objects even in its days. What you've got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl, and hit limits of the client<>server protocol.
The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.
Because of memory bandwidth of Iterating the entities? No way. Every other part - rendering, culling, network updates, etc is far worse.
Let's restate: in 1998 this got you 1024 entities at 60 FPS. The entire array could now fit in the L2 cache of a modern desktop.
And I already advised a simple change to improve memory layout.
> Quake 1 updated the world state at 10 ticks per second
That’s not a constraint in Quake 3 - which has the same architecture. So it’s not relevant.
> users' expectations have increased too
Your game is more complex than quake 3? In what regard?
Latency from CPU to memory, sure.

Memory frequency has caught up though, so you have more bandwidth than any CPU can deal with.
PS: I love the art style of the game.
There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)
There is a very certain way Rust is supposed to be used, which is a negative on its own, but it will lead to a fulfilling and productive programming experience (my opinion). If you need to regularly index something, then you're using the language wrong.
However this problem does still come up in iterator contexts. For example Iterator::take takes a usize.
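A tiny illustration of that friction (nothing project-specific):

    fn main() {
        let count: u32 = 3; // came from gameplay code that naturally uses u32
        // Iterator::take wants usize, so the cast leaks into iterator-style code too.
        let first: Vec<i32> = (1..10).take(count as usize).collect();
        println!("{first:?}");
    }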
Concrete example: pulling a single item out of a zip file, which supports random access, is O(1). Pulling a single item out of a *.tar.gz file, which can only be accessed by iterating it, is O(N).
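A sketch of that difference using the community `zip`, `tar`, and `flate2` crates (assumed dependencies; API names per their docs):

    use std::fs::File;
    use std::io::BufReader;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Zip: the central directory at the end of the archive lets us seek
        // straight to a single member -- effectively O(1) random access.
        let mut zip = zip::ZipArchive::new(BufReader::new(File::open("data.zip")?))?;
        let _one = zip.by_name("nested/item.txt")?;

        // tar.gz: no index; finding one member means decompressing and walking
        // every entry before it -- O(N).
        let gz = flate2::read::GzDecoder::new(File::open("data.tar.gz")?);
        let mut tar = tar::Archive::new(gz);
        for entry in tar.entries()? {
            let entry = entry?;
            if entry.path()?.to_str() == Some("nested/item.txt") {
                break; // found it, after scanning everything before it
            }
        }
        Ok(())
    }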
Compressed tars are terrible for random access because the compression occurs after the concatenation and so knows nothing about inner file metadata, but they're good for streaming and backups. Uncompressed tars are much better for random access. (Tar was originally used as a backup mechanism for tape: "tape archive".)
Zips are terrible for streaming because their metadata is stored at the end, but are better for 1-pass creation and on-disk random access. (Remember that zip files and programs were created in an era of multiple floppy disk-based backups.)
When fast tar enumeration is desired, at the cost of compatibility and compression potential, it might be worth compressing files individually and then tarring them, when and if zipping alone isn't achieving enough compression and/or decompression performance. FUSE mounting of compressed tars gets really expensive with terabyte archives.
Just use squashfs if that is the functionality that you need.
Long story short, yes, it's very different in game dev. It's very common to pre-allocate space for all your working data as large statically sized arrays because dynamic allocation is bad for performance. Oftentimes the data gets organized in parallel arrays (https://en.wikipedia.org/wiki/Parallel_array) instead of in collections of structs. This can save a lot of memory (because the data gets packed more densely), be more cache-friendly, and make it much easier to use SIMD instructions efficiently.
This is also fairly common in scientific computing (which is more my wheelhouse), and for the same reason: it's good for performance.
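A toy before/after of that layout (standard library only):

    // Array-of-structs: each particle's fields are interleaved in memory.
    struct ParticleAos { pos: [f32; 3], vel: [f32; 3], mass: f32 }

    // Parallel arrays (struct-of-arrays): each field is packed contiguously,
    // denser in cache and friendlier to SIMD when you touch one field at a time.
    struct ParticlesSoa {
        pos: Vec<[f32; 3]>,
        vel: Vec<[f32; 3]>,
        mass: Vec<f32>,
    }

    impl ParticlesSoa {
        fn integrate(&mut self, dt: f32) {
            // Streams through two dense arrays; `mass` is never even loaded.
            for (p, v) in self.pos.iter_mut().zip(&self.vel) {
                for i in 0..3 {
                    p[i] += v[i] * dt;
                }
            }
        }
    }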
That seems like something that could very easily be turned into a compiler optimisation and enabled with something like an annotation. It would have some issues when calling across library boundaries (a lot like the handling of gradual types), but within one codebase that'd be easy.
At least on the scientific computing side of things, having the way the code says the data is organized match the way the data is actually organized ends up being a lot easier in the long run than organizing it in a way that gives frontend developers warm fuzzies and then doing constant mental gymnastics to keep track of what the program is actually doing under the hood.
I think it's probably like sock knitting. People who do a lot of sock knitting tend to use double-pointed needles. They take some getting used to and look intimidating, though. So people who are just learning to knit socks tend to jump through all sorts of hoops and use clever tricks to allow them to continue using the same kind of knitting needles they're already used to. From there it can go two ways: either they get frustrated, decide sock knitting is not for them, and go back to knitting other things; or they get frustrated, decide magic loop is not for them, and learn how to use double-pointed needles.
Thank you for this delightful word.
(It's also not always a win: it can work really well if you primarily operate on the 'columns', and on each column more or less once per update loop, but otherwise you can run into memory bandwidth limitations. For example, games with a lot of heavily interacting systems and an entity list that doesn't fit in cache will probably be better off with trying to load and update each entity exactly once per loop. Factorio is a good example of a game which is limited by this, though it is a bit of an outlier in terms of simulation size.)
* Everything should be random access(because you want to have novel rulesets and interactions)
* It should also be fast to iterate over per-frame(since it's real-time)
* It should have some degree of late-binding so that you can reuse behaviors and assets and plug them together in various ways
* There are no ideal data structures to fulfill all of this across all types of scene, so you start hacking away at something good enough with what you have
* Pretty soon you have some notion of queries and optional caching and memory layouts to make specific iterations easier. Also it all changes when the hardware does.
• Congratulations, you are now the maintainer of a bespoke database engine
You can succeed at automating parts of it, but note that parent said "oftentimes", not "always". It's a treadmill of whack-a-mole engineering, just like every other optimizing compiler; the problem never fully generalizes into a right answer for all scenarios. And realistically, gamedevs probably haven't come close to maxing out what is possible in a systems-level sense of things since the 90's. Instead we have a few key algorithms that go really fast and then a muddle of glue for the rest of it.
In general I think GP is correct. There is some subset of problems that absolutely requires indexing to express efficiently.
This is something you can't do on a compute shader, given you don't have access to the built-in derivative methods (building your own won't be cheaper either).
Still, if you want those changes to persist, a compute shader would be the way to go. You _can_ do it using a pixel shader but it really is less clean and more hacky.
Interestingly enough the derivative functions are available to compute shaders as of SM 6.6. [0] Oddly SPIR-V only makes the associated opcodes [1] available to the fragment execution model for some reason. I'm not sure how something like DXVK handles that.
I'm not clear if the associated DXIL or SPIR-V opcodes are actually implemented in hardware. I couldn't immediately find anything relevant in the particular ISA I checked and I'm nowhere near motivated enough to go digging through the Mesa source code to see how the magic happens. Relevant because since you mentioned it I'm curious how much of a perf hit rolling your own is.
[0] https://microsoft.github.io/DirectX-Specs/d3d/HLSL_SM_6_6_De...
[1] https://registry.khronos.org/SPIR-V/specs/unified1/SPIRV.htm...
About the performance question, during the frag shader phase neighbouring pixels are already being tracked, so calling those is almost free. It would be difficult to match that performance when already on the compute phase.
What I'm curious about is if there's a hardware intrinsic that computes derivatives or if the implementation of those opcodes is generally in software.
To answer your question, which is very pertinent, they seem to use different hardware accelerated mechanisms. In the compute stage, wave based derivatives are used, and you need to account for different lane counts between GPU architectures.
Understanding that now makes me believe you're right. But one needs to benchmark them to be sure.
> There is some subset of problems that absolutely requires indexing to express efficiently.
Sure. But it's almost certainly quicker to run a shader over them, and ignore the values you don't want to operate on than it is to copy the data back, modify it in a safe bounds checked array in rust, and then copy it again.
Use a compute shader. Run only as many invocations as you care about. Use explicit indexing in the shader to fetch and store.
Obviously that doesn't make sense if you're targeting 90% of the slots in the array. But if you're only targeting 10% or if the offsets aren't a monotonic sequence it will probably be more efficient - and it involves explicit indexing.
Not sure Rust should be promoted for building games; I prefer it being pushed for mission-critical software.
I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).
I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".
It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
[1] https://docs.rs/num-traits/latest/num_traits/trait.Num.html
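One partial workaround, sketched below, is to accept anything convertible at the API boundary so call sites stay cast-free. It papers over the ergonomics rather than fixing them, and this `draw` is a stand-in, not a real API:

    // Convert once at the boundary so callers can pass i32, u16, etc.
    // without sprinkling `as u32` everywhere.
    fn draw(x: impl TryInto<u32>, y: impl TryInto<u32>) {
        let x = x.try_into().unwrap_or(0); // error/clamping policy is up to you
        let y = y.try_into().unwrap_or(0);
        println!("drawing at ({x}, {y})");
    }

    fn main() {
        let xi: i32 = 3;
        let yu: u16 = 7;
        draw(xi, yu); // no `as` at the call site
    }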
GHC has an -fdefer-type-errors option that lets you compile and run this code:
    a :: Int
    a = 'a'
    main = print "b"
Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.

This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.
Or even enums, which are a joke in Julia.
Julia had so much potential, and such poor implementation.
I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”
Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.
Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.
Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!
In Rust I can go to lib.rs somewhere and #![allow(unused_imports, dead_code, etc)] and then remember to drop it by the time I get the branch ready for review, but that's more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of "this is a work in progress, please let me express my thoughts with minimal obstructions" mode.
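A slightly less cumbersome variant (a sketch; Cargo has supported a [lints] table since 1.74) keeps the "WIP mode" in one obvious place, so dropping it before review is a one-hunk diff:

    # Cargo.toml -- flip these off before the branch goes up for review
    [lints.rust]
    dead_code = "allow"
    unused_imports = "allow"
    unused_variables = "allow"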
    let iteration_1_id = 10;
    dostuff(iteration_1_id);
    let iteration_2_id = 11; // warning: unused variable!
    dostuff(iteration_1_id); // bug: still passes iteration_1_id

and then spent 10 minutes debugging why iteration_2 wasn't working, when it would have been resolved instantly if I had paid attention to the warnings.

In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C code running nuclear reactors, robots and missiles, that's a good tradeoff.
For the space of making an enemy move the other direction of the player in 80% of the cases, except for that story choice, and also inverted and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case is a crash of a game and a revert to a save at level start.... less so.
And these are very regular requirements in a game, tbh.
And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.
I don't think your experience with Amethyst merits your conclusion about the state of gamedev in Rust, especially given Amethyst's own take on Bevy [1, 2].
1: https://web.archive.org/web/20220719130541mp_/https://commun...
2: https://web.archive.org/web/20240202140023/https://amethyst....
C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.
The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.
But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)
Hot reloading! Iteration!
A friend of mine wrote an article 25+ years ago about using C++ based scripting (compiles to C++). My friend is super smart engineer, but I don't think he was thinking of those poor scripters that would have to wait on iteration times. Granted 25 years ago the teams were small, but nowadays the amount of scripters you would have on AAA game is probably dozen if not two or three dozen and even more!
Imagine all of them waiting on compile... Or trying to deal with correctness, etc.
Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.
But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..
Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.
Just wanted to make it clear that C# is not the issue - just the engine most people use, including the topic of this thread, is the main issue.
[0] https://docs.unity3d.com/6000.0/Documentation/Manual/perform...
Capcom has their own fork of .NET.
"RE:2023 C# 8.0 / .NET Support for Game Code, and the Future"
Scripting being flexible is a proper idea, but that's not an argument against Rust either. Rather it's an argument for more separation between scripting machinery and the core engine.
For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.
The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).
The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.
It's like saying: I don't want to learn how to write a good story, because AI always suggests a bad one anyway. Maybe that delivers the idea better.
Why would I valorize discarding this kind of automation? Is this just a craft vs. production thing? Like, the same reason I'd use only hand tools when doing joinery in Japanese-style woodworking? There's a place for that! But most woodworkers... use table saws and routers.
That implies that proponents of such an approach don't want to pursue learning that requires them to do something exceeding the mediocrity level set by the AI they rely on.
For me it's obvious that it has a major negative impact on many things.
Basically, learn Rust based on whether it's helping solve your issues better, not on whether some LLM is useless or not useless in this case.
The strongest reason I can think of to discard this kind of automation, and do so proudly, is that it's effectively plagiarizing from all of the experts whose code was used in the training data set without their permission.
Artists can come at me with this concern all they want, and I feel bad for them. No software developer can.
I disagree with you about the "plagiaristic" aspect of LLM code generation. But I also don't think our field has a moral leg to stand on here, even if I didn't disagree with you.
As for our tendency to disrespect the copyrights of art, clearly we've always been in the wrong about this, and we should respect the rights of artists. The fact that we've been in the wrong about this doesn't mean we should redouble the offense by also plagiarizing from other programmers.
And there is evidence that LLMs do plagiarize when generating code. I'll just list the most relevant citations from Baldur Bjarnason's book _The Intelligence Illusion_ (https://illusion.baldurbjarnason.com/), without quoting from that copyrighted work.
https://arxiv.org/abs/2202.07646
https://dl.acm.org/doi/10.1145/3447548.3467198
https://papers.nips.cc/paper/2020/hash/1e14bfe2714193e7af5ab...
I stand by that argument, but acknowledge it isn't relevant here.
My bigger thing is just, having the experience of writing many thousands of lines of backend code with an LLM (just yesterday), none of what I'm looking at can meaningfully be described as "plagiarized". It's specific to my problem domain (indeed, to my extremely custom stack) and what isn't domain-specific is just extremely generic stuff (opening a boltdb, printing a table with lipgloss), just assembled precisely.
A friend of mine only understood why i was so impressed by LLMs once he had to start coding a website for his new project.
My feeling is that low-level / system programming is currently at the edge of what LLMs can do. So i'd say that languages that manage to provide nice abstractions around those types of problems will thrive. The others will have a hard time gaining support among young developers.
Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.
That said regarding both rapid gameplay mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise, I'm just genuinely curious.
I don't think Bevy has a built-in way to integrate with other languages like Godot does, it's probably too early in the project's life for that to be on the roadmap.
To me constantly chasing the latest trends means lack of experience in a team and absence of focus on what is actually important, which is delivering the product.
Availability of documentation and tooling, widespread adoption, and access to developers already trained on someone else's dime are what make a stack feel safe for hiring decisions. Sometimes a narrow tech is spotted in the wild, but mostly because some senior/staff engineer wanted to experiment with something that became part of production when management saw no issue; that will sometimes open doors for practitioners of those stacks, but the probability is akin to getting hit by a lightning strike.
Until the tech catches up it will have a stifling effect on progress toward and adoption of new things (which imo is pretty common of new/immature tech, eg how culture has more generally kind of stagnated since the early 2000s)
On one hand, yes, when it comes to picking tools for new projects, LLM awareness of them is now a consideration in large companies.
And at the same time, those same companies are willing to spend time and effort to ensure that their own tooling is well-represented in the training sets for SOTA models. To the point where they work directly with the corresponding teams at OpenAI etc.
And yes, it does mean that the barrier to entry for new competitors is that much higher, especially when they don't have the resources to do the same.
And if it's not already popular, that won't happen.
But also: there is a lot of Rust code out there! And a cubic fuckload of high-quality written material about the language, its idioms, and its libraries, many of which are pretty famous. I don't think this issue is as simple as it's being made out to be.
It's a bit like Rust in 2014, you would never have had enough material for LLMs to train on.
Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.
But, yes, it does mean that new things that are drastic breaks with old practices are much harder to teach compared to incremental improvements.
LLMs will make people productive. But at the same time they will elevate those with real skill and passion to create good software. In the meantime there will be some market confusion, and some mediocre engineers might find themselves in demand like top-end engineers. But over time, companies and markets will catch on, and top dollar will go to those select engineers who know how to do things with and without LLMs.
Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It's the end of the "CLI engineer" or the "front-end engineer" and all those specializations that were an attempt to require less skill and pay less. But the systems engineers who know how computers work, who can take all week describing what happens when you press Enter on a keyboard at google.com, will only be pressed into higher demand, because the single-skill "engineer" won't really be a thing.
tl;dr: LLMs won't kill software engineering; it's a reset. It will cull those who chose this path only because it paid well.
However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.
I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.
Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.
The same can be said of books as of programming languages:
"Not every ___ deserves to be read/used"
If the documentation or learning curve is so steep and/or convoluted that it's discouraging to newcomers, then perhaps it's just not a language fit for widespread adoption. That's actually fine.
"Thanks for your work on the language, but this one just isn't for me" "Thanks for writing that awfully long book, but this one just isn't for me"
There's no harm in saying either of those statements. You shouldn't be disparaged for saying that rust just didn't work out for your case. More power to the author.
If you switched from Java to C# or vice versa, nobody would care.
Unfortunately, with a lot of libraries and services, I don't think ChatGPT understands the differences between versions, or it would be hard for it to. At least I have found that with writing scriplets for RT, PHP tooling, etc. The web world moves fast enough (and RT moves hella slow) that it confuses libraries and interfaces across versions.
It'd really need a wider project context where it can go look at how those includes, or functions, or whatever work instead of relying on 'built in' knowledge.
"Assume you know nothing, go look at this tool, api endpoint or, whatever, read the code, and tell me how to use it"
In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.
The problem is, they’re all blogspam rehashes of the same few WWDC talks. So they all have the same blindspots and limitations, usually very surface level.
I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.
here's a web demo
Is it normal for the Rust ecosystem to suggest software at this level of maturity?
Yeah, all Rust programmers are obligated by a blood pact to do this.
C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.
I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?
On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They eventually solve it by dividing into infrastructure (implemented in your low-level lanuage) and game-logic / application logic / scripting (implemented in something far more flexible and, usually, interpreted; I've seen Lua used for this, Python, JavaScript, and I think Unity's C# also fits this category?).
For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.
Bevy is in its early stages. I'm sure more Rust Game Engines will come up and make it easier. That said, Godot was great experience for me but doesn't run on mobile well for what I was making. I enjoy using Flutter Flame now (honestly different game engines for different genres or preference), but as Godot continues to get better, I personally would use Godot. Try Unity or Unreal as well if I just want to focus on making a game and less on engine quirks and bugs.
That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But part of the thing that attracted me to the game I'm making is that it would be hard to make in a standard cookie-cutter way. The novelty of the systems involved is part of the appeal, both to me and (ideally) to my customers. If/when I get some of those (:
If you just want to make a game, yes, absolutely just go for Unity, for the same reason why if you just want to ship a CRUD app you should just use an established batteries-included web framework. But indie game developers come in all shapes and some of them don't just want to make a game, some of them actually do enjoy owning every part of the stack. People write their own OSes for fun, is it so hard to believe that people (who aren't you) might enjoy the process of building a game engine?
Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)
The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.
If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.
Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.
I think it's telling that there are more Rust game engines than games written in Rust.
Typically the “itch is scratched” long before the application is done.
After 30 years participating in Gamedev communities I feel like the "don't build an engine" was always an empty strawman aimed at nobody in reality.
The Venn diagram between the people interested in technical aspects of an engine and in also shipping a game is probably composed of a few hundred individuals, most of them working for studios.
The "kid that wants to make an engine to make an MMO" is gonna do neither. It's just a meme.
I shouldn't really care about it myself, but I do because Unity sucked the air out of every gamedev discussion and now there are almost no spaces to discuss anything advanced (even if it's applicable to Unity/Unreal/Godot).
If we are talking 2D, it can take months to hack together a basic engine. 3D can be a bit harder, but far from decades.
Thing is, if you designed your engine well and implemented great tooling, it should make it faster to implement the actual content of the game.
So it's an upfront cost to be faster later, at least in theory. Obviously you might end up with subpar tooling that is worse than what a commercial offering provides. But if you do something like an RPG with a lot of content, every bit of extra efficiency in creating that content can help a lot.
Now, obviously, from a purely commercial standpoint, not using an established engine almost never makes sense. Super risky. Hard to hire outside talent. It's only justified when you have very, very specific needs that are hard to implement in a generic engine.
Also for us with an ADHD brain, hard things tend to be easier and easy things very hard, so yes the extra mental stimulation of writing an engine can help.
What really drags you down in games is iteration speed. It can be fun making your own game engine at first but after awhile you just want the damn thing to work so you can try out new ideas.
> That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")
But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.
I’m suspicious, though, that you could probably get away with literally just using something like an in-memory DuckDB to store your game state, and get most of the performance/modeling value while also getting a more powerful/robust query engine; that seems especially plausible for turn-based games. I’m also not sure that Bevy’s encoding of queries into the type system is all that sane, as opposed to something like query building with LINQ, but I think it’s how they get to resolve the system dependency graph for parallelization.
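For the curious, a minimal sketch of that idea, assuming the `duckdb` crate (which mirrors rusqlite's API; treat the exact calls as assumptions, and the table/columns are hypothetical):

    use duckdb::{Connection, Result};

    fn main() -> Result<()> {
        // Game state lives in an in-memory relational store.
        let conn = Connection::open_in_memory()?;
        conn.execute_batch(
            "CREATE TABLE units (id INTEGER, hp INTEGER, x DOUBLE, y DOUBLE);
             INSERT INTO units VALUES (1, 100, 0.0, 0.0), (2, 40, 3.5, 1.2);",
        )?;

        // A 'system' becomes a query instead of a type-level ECS filter.
        let mut stmt = conn.prepare("SELECT id FROM units WHERE hp < 50")?;
        let wounded = stmt
            .query_map([], |row| row.get::<_, i64>(0))?
            .collect::<Result<Vec<_>>>()?;
        println!("wounded units: {wounded:?}");
        Ok(())
    }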
A pretty darn good model for what though and to what extent? For processing millions of objects, sure. For gameplay logic, using ECS for everything? I don't know about that. What I was implying with my comment is that ergonomics are the most important trait of a good game engine.
I reach for an ECS as a last resort when the performance of a gameplay system demands it, not before. Even before that I will tweak my gameplay code to be less wasteful.
For the 4 people on HN not aware of it, this is a riff on Greenspun's tenth rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Many of the negatives in the post are positives to me.
> Each update brought with it incredible features, but also a substantial amount of API thrash.
This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).
Rust is a niche language, there is no evidence it is going to do well in the game space.
Unity and C# sound like a much better business choice for this. Choosing a system/language....
> My love of Rust and Bevy meant that I would be willing to bear some pain
....that is not a good business case.
Maybe one day there will be a Rust game engine that can compete with Unity, probably already are, in niches.
Keep in mind that Rust will not:
* automatically make your program fast;
* eliminate memory leaks;
* eliminate deadlocks; or
* enforce your logical invariants for you.
Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.
I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but at that point your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely-tuned garbage collector instead, like the one Unity has (via the CLR and Mono).
And you're not really giving anything up this way either. If you have some compute kernel that's a bottleneck, thanks to easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems consideration to your whole program.
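Concretely, the pattern criticized above, shared mutable state behind reference counting, looks something like this (a made-up game object, for illustration only):

    use std::sync::{Arc, Mutex};

    struct Enemy {
        hp: i32,
    }

    fn main() {
        // Every Arc::clone bumps a refcount that is decremented on drop:
        // effectively a hand-rolled, crude garbage collector.
        let enemy = Arc::new(Mutex::new(Enemy { hp: 100 }));
        let handle = Arc::clone(&enemy);
        handle.lock().unwrap().hp -= 10;
        println!("hp = {}", enemy.lock().unwrap().hp);
    }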
Languages like Go, JavaScript, C#, or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional systems languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.
I had two groups of students (complete Rust beginners) ship a basic FPS and a Tower Defense as learning projects using Bevy, and their feedback was that they didn't fight the language at all.
The problem that remains is that as soon as you go from a toy game to an actual one, you realize that Bevy still has tons of work to do before it can be considered productive.
The problem is you make a deal with the devil. You end up shipping a binary full of phone-home spyware, and if you don't use Unity in the exact way the general license intends, they can and will try to force you into the more expensive industrial license.
However, the ease of actually shipping a game can't be matched.
Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.
I don't really enjoy Godot like I enjoy Unity, but I've been using Unity for over a decade. I might just need to get over it.
I feel like this harkens to the general principle of being a software developer and not an "<insert-language-here>" developer.
Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.
First: I have experience with Bevy and other game engine frameworks, including Unreal. And I consider myself a seasoned Rust, C, etc. developer.
I could sympathize with what was stated by the author.
I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/Godot, never mind the de facto standard of Unreal.
But if you are a C# developer already using Unity (and not C++ in Unreal), moving to Bevy, a bloated framework that is still missing features, makes little sense. [And there is also the minor issue that if you are a C# developer, you honestly don't care about low-level code, or about not having a garbage collector.]
Now, if you are a C++ developer and use Unreal, the only point in moving to Rust (which I would argue for the usual reasons) is if Unreal supported Rust. Otherwise, there is nothing that even compares to Unreal. (That is not a custom-made game engine.)
From my experience one has to take Rust discussions with a grain of salt, because often shortcomings and disclosures are handwaved and/or omitted.
And within rust, I've learned to look beyond the most popular and hyped tools; they are often not the best ones.
https://old.reddit.com/r/rust_gamedev/comments/13wteyb/is_be...
I wonder how something simpler in the rust world like macroquad[0] would have worked out for them (superpowers from Unity's maturity aside).
You can go low level in C#**, just like Rust can avoid the borrow checker. It's just not a good tradeoff for most code in most games.
** value types/unsafe/pointers/stackalloc etc.
Note that going more hands-on with these is not the same as violating memory safety - C# even has ref and byreflike struct lifetime analysis specifically to ensure this not an issue (https://em-tg.github.io/csborrow/).
>https://em-tg.github.io/csborrow/
Oooh... I didn't know scoped refs existed.
I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.
Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.
Cross compilation, package manager and associated infrastructure, async io (epoll, io_uring etc), platform support, runtime requirements, FFI support, language server, etc.
Are a majority of these things available with first party (or best in class) integrated tooling that are trivial to set up on all big three desktop platforms?
For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective C? Can I use private repo (ie github) urls with automatic overrides while pulling deps?
Generally, the answer to these questions for – let’s call it ”niche” asterisk – languages, are ”there is a GitHub project with 15 stars last updated 3 years ago that maybe solves that problem”.
There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.
My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.
I also don’t want to use a language with questionable hireability.
They've effectively dropped it a couple times in the past, and while they're currently putting effort in, the company as a whole does not seem to care about stuff like this beyond brief bursts of attention to try to win back developer mindshare, before going back to abandonment. It's what Microsoft is rather well known for.
This is provably wrong, unless you want to insist despite the facts because that's what your social bubble tells you to do.
The most popular deployment target for .NET is Linux: https://dotnet.microsoft.com/en-us/platform/telemetry
Equating whatever else MS is up to with the way .NET evolves and is managed is no different to equating YouTube and Golang.
But Microsoft's most consistent legacy across its entire existence has been Embrace, Extend, Extinguish. Any Linux-supporting project is directly opposed to their core business, aside from Azure - if it starts becoming a threat, they'll turn it into a way to force people onto their other money-makers.
The effort of .NET Framework to .NET Core to .NET 5 and now up to .NET 9 is well over a decade long of steady and increasing progress.
1. only supports the three main operating systems and two architectures (or four if we're stretching things and being very generous)
2. large parts of it still don't work anywhere but Windows (UI being the primary one)
3. the level and quality of official tooling provided for Linux and macOS is incomparable to their Windows offerings
No, its cross-platform story is far from the best we have.
The JVM includes no GUI framework. .NET also includes no GUI framework; .NET just happens to ship a specific framework for Windows. But just like the JVM, there are cross-platform GUI frameworks such as MAUI, Uno, Avalonia, and others.
So no, there isn't a "large part" of .NET that isn't cross-platform.
> 3. the level and quality of official tooling provided for Linux and macOS is incomparable to their Windows offerings
What are you referring to here?
Feeling passionate about a programming language is generally bad for the products made with that language.
The only tooling I use personally outside of the main CLI is building iOS/Android static libraries (gomobile). It’s still first party, but not in the go command.
So you mean, Rust is more of an intellectual playground, than an actual workbench? I'm curious how high the churn rate of packages in other languages is, like python or ruby (let's not talk about javascript). Could this be the result of rust being still rather young and moving fast?
> Conversely and ironically, this is why I love Go.
Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?
Both. IMO rust shines as a C++ replacement, ie low level high performance. But officially its general purpose and nobody will admit that it’s a bad tool for high level jobs, it’s awful for prototyping. I say this as someone who loves many aspects of rust.
> Could this be the result of rust being still rather young and moving fast?
My hunch says it’s something different. Look at the people and their motivations. I’ve never seen such a distinct fan club in programming before. It comes with a lot of passion and extreme talent, but I don’t think it’s a coincidence that governance has been a shitshow and that there are 4 different crates solving the same problem where maintainers couldn’t agree on something minor. It makes sense, if aesthetics is a big factor.
> Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?
Nothing I’ve noticed. Are you talking about GOPATH hell, from back in the day?
That is the one of the first things my colleagues told me after trying Rust for a few weeks: a laaaarge number of crates under 1.0, and so many abandoned crates, still published in crates.io. Some of those have even reported CVEs due to heavy `unsafe` usage for... nothing.
I love Rust, but I have the feeling that the language (and its community) lost the point since the release of the 2018 edition.
But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.
I feel like many people choose Rust because is sounds like it's more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).
It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).
They're far slower in Python but that hasn't stopped anyone.
In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.
Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.
It also struggles with numeric work involving large matrices, because there isn't good support for that built into the language or standard library, and there isn't a well-developed library like NumPy to reach for.
Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.
edit: The downvoters are missing out.
Rust is an absolute gem at web backend. An absolute fucking gem.
On the JVM I'd prefer Kotlin/http4k/SQLDelight any day over {Java,Kotlin}/Spring(Boot)/{Hibernate,sql-in-strings}.
> Rust is an absolute gem at web backend. An absolute fucking gem.
I am absolutely convinced I can find success story of web backends built with all those languages.
The type system and package manager are a delight, and writing with sum types results in code that is measurably more defect free than languages with nulls.
Other than the great developer experience in tooling and language ergonomics (as in coherent features, not necessarily ease of use), the reason I continue to put up with the difficulties of Rust's borrow checker is that I feel I can work towards mastering one language and then write code across multiple domains, AND at the end I'll have an easy way to share it, no Docker and friends needed.
But I don't shy away from the downsides. Rust loads the cognitive burden at the ends. Hard as hell in the beginning when learning it and most people (me included) bounce from it for the first few times unless they have C++ experience (from what I can tell). At the middle it's a joy even when writing "throwaway" code with .expect("Lol oops!") and friends. But when you get to the complex stuff it becomes incredibly hard again because Rust forces you to either rethink your design to fit the borrow checker rules or deal with unsafe code blocks which seem to have their own flavor of C++ like eldritch horrors.
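For what that "throwaway" middle stage looks like in practice, a trivial sketch (the file name is hypothetical):

    use std::fs;

    fn main() {
        // Prototype-grade error handling: panic with a flippant message
        // now, swap in proper Result propagation once the design settles.
        let config = fs::read_to_string("config.toml").expect("Lol oops!");
        println!("read {} bytes of config", config.len());
    }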
Anyway, would *I* recommend Rust to everyone? Nah. Go is the better proposition for the most bang-for-your-buck language, tooling, and ecosystem, UNLESS you're the kind that likes to deal with complexity in exchange for the fulfilled promise of one language for almost anything. In even simpler terms: Go is good for most things; Rust can be used for everything.
Also stuff like Maud and Minijinja for Rust are delights on the backend when making old fashioned MPA.
Thanks for coming to my TED talk.
    public abstract sealed class Vehicle permits Car, Truck {
        public Vehicle() {}
    }

    public final class Truck extends Vehicle implements Service {
        public final int loadCapacity;

        public Truck(int loadCapacity) {
            this.loadCapacity = loadCapacity;
        }
    }

    public non-sealed class Car extends Vehicle implements Service {
        public final int numberOfSeats;
        public final String brandName;

        public Car(int numberOfSeats, String brandName) {
            this.numberOfSeats = numberOfSeats;
            this.brandName = brandName;
        }
    }
In Kotlin it's a bit better, but nothing beats the ML-like langs (and Rust/ReScript/etc):

    type Truck = { loadCapacity: int }
    type Car = { numberOfSeats: int, brandName: string }
    type Vehicle = Truck | Car
    record Truck(int loadCapacity) implements Vehicle {}
    record Car(int numberOfSeats, String brandName) implements Vehicle {}

    sealed interface Vehicle permits Car, Truck {}
    sealed interface Vehicle {
        record Truck(int loadCapacity) implements Vehicle {}
        record Car(int numberOfSeats, String brandName) implements Vehicle {}
    }
Turns out you can do this and not have the annoying inner class e.g. Vehicle.Car too:

    package com.example.vehicles;

    // The permits clause has been omitted, as its permitted
    // classes are defined in the same file.
    public sealed interface Vehicle {}

    record Truck(int loadCapacity) implements Vehicle {}
    record Car(int numberOfSeats, String brandName) implements Vehicle {}
    enum Vehicle:
        case Truck(loadCapacity: Int)
        case Car(numberOfSeats: Int, brandName: String)
For me it's a question of whether I can get away with garbage collection. If I can then pretty much everything else is going to be twice as productive but if I can't then the options are quite limited and Rust is a good choice.
When things get complex, you start missing Rust's type system and bugs creep in.
In Node.js there was a notable improvement when TS became the de-facto standard, and API development improved significantly (if you ignore the poor tooling: transpiling, building, TS being too slow). It's still far from perfect because TS has too many escape hatches and you can't fully trust TS code; with Rust, if it compiles and there is no unsafe (rarely a problem in web services), you get a lot of compile-time guarantees for free.
The third is the interesting one. When your service has a lot of traffic and every bit of inefficiency costs you money (node rents) and energy. Rust is an obvious improvement over the interpreted languages. There are also a few rare cases where Rust has enough advantages over Go to choose the former. In general though, I feel that a lot of energy consumption and emissions can be avoided by choosing an appropriate language like Rust and Go.
This would be a strong argument in favor of these languages in the current environmental conditions, if it weren't for 'AI'. Whether it be to train them or run them, they guzzle energy even for problems that could be solved with a search engine. I agree that LLMs can do much more. But I don't think they do enough for the energy they consume.
Do we agree that most of the languages I mentioned above are not interpreted languages? You seem to only consider Go as a non-interpreted alternative...
Take 5 min to read about JIT and AOT compilation.
Of those, I'm not very comfortable with using C and C++ for backend development. Their lack of automated memory safety measures is an issue for services that are exposed to the internet. (To be clear, memory safety isn't the only type of safety required). Swift may be a fine choice. (I'm unfamiliar with it.)
JVM languages like Java, Kotlin and Scala are all compiled languages, but I'm unsure how well they satisfy what I said before. To repeat, what matters is energy efficiency and resource utilization (not speed). I hope that somebody can provide an insight into how much overhead they incur on account of running in a VM.
It's possible to write web backends in Swift, but it's probably not a good idea. When I last did so, I ran into ridiculous issues all the time, such as lazy variables not being thread-safe, secondary threads having ridiculously small (and non-adjustable) stack sizes and there being generally absolutely no story for failure recovery (have fun losing all your other in-flight requests and restarting your app in case of an invalid array access). It's possible that some of this has been fixed in the last 5 years since I stopped working on that project, but given Apple's priorities, I somehow doubt that the situation is significantly better.
But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.
Yes, Rust is hard. But it doesn't have to be if you don't want.
This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.
I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.
I know that the compiler complains a lot. But I code with the help of realtime feedback from tools like the language server (rust-analyzer) and bacon. It feels like 'debug as you code'. And I really love the hand holding it does.
Tasks that are simple in most other languages, like a tree that can store many data types, are going to take a lot more code and a lot more thinking to get to work in Rust.
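One common workaround is to make the "many data types" explicit with an enum; a minimal sketch (the variant names are hypothetical):

    // A tree whose nodes hold several data types, encoded as an enum.
    enum Value {
        Int(i64),
        Text(String),
        Flag(bool),
    }

    struct Node {
        value: Value,
        children: Vec<Node>, // owning children keeps the borrow checker happy
    }

    fn count(node: &Node) -> usize {
        1 + node.children.iter().map(count).sum::<usize>()
    }

    fn main() {
        let tree = Node {
            value: Value::Text("root".into()),
            children: vec![
                Node { value: Value::Int(42), children: vec![] },
                Node { value: Value::Flag(true), children: vec![] },
            ],
        };
        println!("{} nodes", count(&tree));
    }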
Generics and traits have some rough edges. If I rub up against them, they're really annoying. Otherwise, if I can avoid those then I agree with you that rust is smooth, and the trade off is worth it.
Querying a database while ensuring type safety is harder, but you still don't need an ORM for that. See sqlx.
You can derive FromRow for your structs to cut down the boilerplate, but if you need to join two tables that happen to have a column with the same name it stops working, unless you remember to _always_ alias one of the columns to the same name, every time you query that table from anywhere (even when the duplicate column names would not be present). If a table gets added later that happens to share a column name with another table? Hope you don't ever have to join those two together.
Doing something CRUD-y like "change ordering based on a parameter" is not supported, and you have to fall back to sprintf("%s ORDER BY %s %s") style concatenation.
Gets even worse if you have to parameterize WHERE clauses.
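To make the FromRow/aliasing pain above concrete, a compile-only sketch (table and column names are hypothetical; assumes sqlx with the postgres feature):

    use sqlx::{FromRow, PgPool};

    #[derive(FromRow)]
    struct UserRow {
        id: i64,
        name: String,
    }

    async fn load_user(pool: &PgPool, id: i64) -> Result<UserRow, sqlx::Error> {
        // If another joined table also had a `name` column, this query
        // would need `name` aliased the same way in every query that
        // touches the table: the pitfall described above.
        sqlx::query_as::<_, UserRow>("SELECT id, name FROM users WHERE id = $1")
            .bind(id)
            .fetch_one(pool)
            .await
    }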
The structs sqlx creates through the macros are unnameable types, they are created on the fly at the invocation site. This means you can't return them without transforming them into a type you own (you declared), either through the FromRow derive, or writing glue code that associates this unnameable type's fields to your own struct's fields, leading to the boilerplate I was referring to. This is very specific to sqlx, don't try to dilute this into "other libraries are similar".
If you choose to forgo the macros, and use the regular .query() methods, then the results you get are tied to the lifetime of the sqlx connection object, which makes them unergonomic, which is again very specific to sqlx.
Most languages used with DBs are just as safe. This idea about Rust being more safe than languages with GC needs a rather big [Citation Needed] sign for the fans.
The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).
Plus, in the meantime, even if I'm doing the "easy mode" approach I get to use all of the features I enjoy about writing Rust: generics, macros, sum types, pattern matching, Result/Option types. Many of these can't be found all together in a single managed/GC'd language, and the list of those that I would consider viable for my personal or professional use is quite sparse.
People are saying Rust is harsh; I would say it's not that much harder than other languages, just more verbose and demanding.
What about e.g. Kotlin or Swift?
The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.
This said, they moved to Unity, which is C#, which is garbage collected, right?
And lets be real, how many devs manage to sell as many copies as Minecraft?
Too much discussion about what language to use, instead of what game to make.
I was a Gentoo user (daily driver) for around 15 years but the endless compilation cycles finally got to me. It is such a shame because as I started to depart, Gentoo really got its arse in gear with things like user patching etc and no doubt is even better.
It has literally (lol) just occurred to me that some sort of dual partition thing could sort out my main issue with Gentoo.
@system could have two partitions - the running one and the next one that is compiled for and then switched over to on a reboot. @world probably ought to be split up into bits that can survive their libs being overwritten with new ones and those that can't.
Errrm, sorry, I seem to have subverted this thread.
The result was not statistically different in performance from my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.
Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.
However, I would still prefer C# or F#.
Hence why I enjoy both stacks, lots of goodies to chose from, with great tooling.
Also it’s subjective but PascalCase really irks me.
But yeah it is subjective, also don't have much qualms with other alternatives.
Java has the stigma of ClassFactoryGeneratorFactory sticking to it like a nasty smell but that's not how the language makes you write things. I write Java professionally and it is as readable as any other language. You can write clean, straightforward and easy to reason code without much friction. It's a great general purpose language.
Unfortunately it's not a good gaming language. GC pauses aren't really acceptable (which C# also suffers from) and GPU support is limited.
Miguel de Icaza probably has more experience than anyone building game engines on GC platforms and is very vocally moving toward reference counted languages [1]
Java has made great progress with low-pause (~1 ms) garbage collectors like ZGC and Shenandoah since ~5 years ago.
Additionally they have HPC#, which is a C# subset for high performance code, used by the DOTS subsystem.
Many people mistake their C# experience in Unity, with what the state of the art in .NET world is.
Read the great deep-dive blog posts from Stephen Toub on the Microsoft DevBlogs for each .NET release since version 5.0.
For the competitive Minecraft player, I suspect starting their JVM with -XX:+UnlockExperimentalVMOptions is normal.
A casual gamer is however not going to enjoy that.
Also here is how great his new Swift love performs in reality against modern tracing GCs,
I wonder how the performance would be with Swift 6.1 which has improved support for Linux.
In reality, a lot of Swift apps are delegating to C code. My own app (in development) does a lot of processing, almost none of which happens in Swift, despite the fact I spend the vast majority of my time writing Swift.
Swift is an excellent C glue language, which Java isn't. This is why Swift will probably become an excellent game language eventually.
> Swift is a successor to the C, C++, and Objective-C languages. It includes low-level primitives such as types, flow control, and operators. It also provides object-oriented features such as classes, protocols, and generics.
-- https://developer.apple.com/swift/
If developers have such a big problem gluing C libraries into Java via JNI or Panama, then maybe the game industry, where even assembly comes into play, is not where they are supposed to be.
Do you have a counter-benchmark? The burden is on you here to disprove the data presented. What that benchmark shows is that you spend A TON of time counting references, much more than with a tracing GC, unless you enter Swift's C++ networking code. I would think games don't spend most of their time calling into networking code.
> In reality, a lot of Swift apps are delegating to C code. My own app (in development) does a lot of processing, almost none of which happens in Swift, despite the fact I spend the vast majority of my time writing Swift.
So what you’re saying is that any language will be good for gaming since they all can delegate to C?
> Swift an excellent C glue language, which Java isn't. This is why Swift will probably become an excellent game language eventually.
What makes swift better at calling C than Java? AFAIK Java has a perfectly good and brand new foreign function interface.
> This is why Swift will probably become an excellent game language eventually.
I would take this bet that it won’t. Purely on the sheer fact that gaming occurs on Windows and Swift is barely capable there.
Exactly. That is why the out-of-date networking example being touted as evidence is irrelevant here.
What it boils down to is that Java and C have fundamentally incompatible memory models. Direct access to C memory is impossible because of the managed heap and GC.
> I would take this bet that it won’t. Purely on the sheer fact that gaming occurs on Windows and Swift is barely capable there.
This is a very odd comment - gaming occurs in a lot of places. Quite a lot happens on mobile these days. Turns out a lot of mobile devices run Swift, on which it appears to be reasonably capable.
Here, enjoy https://github.com/Quotation/LongestCocoa
There is even a style guide on it,
https://developer.apple.com/library/archive/documentation/Co...
You're no longer writing idiomatic Java at this point, probably with zero object-oriented programming, so you might as well write it in Rust from the get-go.
Nope, embrace the productivity of managed languages, if really needed, package that rest in a native library, done.
I was surprised that the heaviest one (a lot of float math) ran at about the same speed in JS as in C++ -> x64. The code was several nested for loops manipulating a buffer, using only local-scoped variables and built-in Math library functions (like sqrt), with no JS objects/arrays besides the buffer. So the code of both implementations was actually very similar.
The C++ -> WASM version of that one benchmark was actually significantly slower than both the JS and C++ -> x64 version (again, a few years ago, I imagine it got better now).
Most compilers are really good at optimizing code if you don't use the weird "productivity features" of your higher level languages. The main difference of using lower level languages is that not being allowed to use those productivity features prevents you from accidentally tanking performance without noticing.
I still hope to see the day when a language has multiple "running modes", where you can make an individual module/function compile with a different feature set that guarantees higher performance. The closest thing we have to this today is Zig with custom allocators (where opting out of receiving an allocator means no heap allocations are guaranteed for the rest of the call stack) and @setRuntimeSafety(false), which disables runtime safety checks (when using the ReleaseSafe compilation target) for a single scope.
If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.
Citation needed.
What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?
There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
The performance gain comes not from eliminating the function overhead, but enabling conditional move instructions to be used in the comparator, which eliminates a pipeline hazard on each loop iteration. There is some gain from eliminating the function overhead, but it is tiny in comparison to eliminating the pipeline hazard.
That said, C++ has its weaknesses too, particularly in its typical data structures, its excessive use of dynamic memory allocation and its exception handling. I gave an example here:
https://news.ycombinator.com/item?id=43827857
Honestly, I think these weaknesses are more severe than qsort being unable to inline the comparator.
Does not work if the compiler can not look into the function, but the same is true in C++.
Edit: It sort of works for the bsearch() standard library function:
https://godbolt.org/z/3vEYrscof
However, it optimized the binary search into a linear search. I wanted to see it implement a binary search, so I tried with a bigger array:
https://godbolt.org/z/rjbev3xGM
Now it calls bsearch instead of inlining the comparator.
That's not the most general case, but it's better than I expected.
That said, this brings me to my original reason for checking this, which is to say that it did not use a cmov instruction to eliminate unnecessary branching from the loop, so it is probably slower than a binary search that does:
https://en.algorithmica.org/hpc/data-structures/binary-searc...
That had been the entire motivation behind this commit to OpenZFS:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
It should be possible to adapt this to benchmark both the inlined bsearch() against an implementation designed to encourage the compiler to emit a conditional move to skip a branch to see which is faster:
https://github.com/scandum/binary_search
My guess is the cmov version will win. I assume this merits a bug report, although I suspect improving this is a low priority, much like my last report in this area:
When people claim C++ to be faster than C, that is usually understood as C++ provides tools that makes writing fast code easier than C, not that the fastest possible implementation in C++ is faster than the fastest possible implementation in C, which is trivially false as in both cases the fastest possible implementation is the same unmaintainable soup of inline assembly.
The typical example used to claim C++ is faster than C is sorting, where C due to its lack of templates and overloading needs `qsort` to work with void pointers and a pointer to function, making it very hard on the optimiser, when C++'s `std::sort` gets the actual types it works on and can directly inline the comparator, making the optimiser work easier.
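The same mechanism is visible from Rust, for comparison: the comparator closure's concrete type is a generic parameter of the sort, so the optimizer can inline it, which is exactly what qsort's function-pointer signature prevents. A trivial illustration:

    fn main() {
        let mut xs = vec![3, 1, 2];
        // sort_by is monomorphized per closure type, so the comparator
        // can be inlined; C's qsort only ever sees a function pointer.
        xs.sort_by(|a, b| a.cmp(b));
        println!("{xs:?}");
    }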
That said, external comparators are a weakness of generic C library functions. I once manually inlined them in some performance critical code using the C preprocessor:
https://github.com/openzfs/zfs/commit/677c6f8457943fe5b56d7a...
One of the strengths of C++ is that it is well-suited to compile-time codegen of hyper-optimized data structures. In fact, that is one of the features that makes it much better than C for performance engineering work.
You have other sources of slow downs in C++, since the abstractions have a tendency to hide bloat, such as excessive dynamic memory usage, use of exceptions and code just outright compiling inefficiently compared to similar code in C. Too much inlining can also be a problem, since it puts pressure on CPU instruction caches.
For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company. It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C. It isn’t like C++ can’t use C code.
Even regarding exceptions, I would not touch them anywhere close to the critical path, but, for example, during application setup I have no problem with them. And yet I know of people writing very high-performance applications who are happy to throw on the critical path as long as it is a rare occurrence.
C++ puts people into a sea of complexity and then blames them when they do not get a good result. The purpose of high level programming languages is to make things easier for people, not make them even more likely to fail to write good code and then blame them when they do not.
> For example, exceptions have been explicitly disabled on every C++ code base I’ve ever worked on, whether FAANG or a smaller industrial company.
Unfortunately, C++ does not make exceptions optional and even if you use a compiler flag to disable them, libraries can still throw them. Just allocating memory can throw them unless you use the nothrow version of the new operator that C++11 introduced. Alternatively, you could use malloc with a placement new and then manually call the destructor before freeing the memory. As far as I know, many C++ developers who disable exceptions do not do either. Then when a memory allocation fails, their program terminates.
That said, there are widely used C++ code bases that rely on exception handling. One of the most famous is the Windows NTFS driver, which reportedly makes heavy use of structured exception handling:
https://blog.zorinaq.com/i-contribute-to-the-windows-kernel-...
> It isn’t compatible with some idiomatic high-performance software architectures so it would be weird to even turn it on. C++ allows you to strip all bloat at compile-time and provides tools to make it easy in a way that C could only dream of, a standard metaprogramming optimization. Excessive dynamic allocation isn’t a thing in real code bases unless you are naive. It is idiomatic for many C++ code bases to never do any dynamic allocation at runtime, never mind “excessive”.
I have seen plenty of C++ software throw exceptions in Wine, since it prints information about them to the console. It is amazing how often exceptions are used in the normal operation of such software. Contrary to your assertion that everyone turns off exceptions, these exceptions are caught and handled. Of course, this goes unseen on the original platform, so the developers likely have no idea about all of the exceptions that their code throws.
As for excessive dynamic memory allocations, C++11 introduced move semantics to eliminate a major source of them, and that very much was a problem in real code. It still can be, since you need to always use std::move and define move constructors. C++ itself tends to discourage use of intrusive data structures (as they break encapsulation), which means doing more dynamic allocations than C code does since heavy use of intrusive data structures in C code avoids allocations that are mandatory without them.
> C++ has many weaknesses. You are failing to identify any that a serious C++ practitioner would recognize as valid. In all of this you also failed to make an argument for why anyone should use C.
My goal had been to say that performance was not a reason to use C++ over C, since C++ is often slower. Had my goal been different, there is plenty of material I could have used. For example, I could have noted the binary compatibility breakage that occurred for C++11 in GCC 5.0, the horrendous error messages from types that are effectively paragraphs, changes in the STL definitions across different versions that break code, and many other things, but I was not trying to throw stones at an obviously glass house.
> It isn’t like C++ can’t use C code.
It increasingly cannot. If C headers use variably modified types and do not have a guard macro providing an alternative for C++ that turns them into regular pointers, C++ cannot use the header. Here is an example of code using them that a C++ compiler cannot compile:
https://godbolt.org/z/T5T4Y1n68
C also now has generic selection (_Generic), which C++ does not support either:
Abstractions can hide bloat for sure, but the lack of abstraction can also push coders towards suboptimal solutions. For example C code tends to use linked lists just because its easy to implement when a dynamic array such as std::vector would have been more performant.
Too much inlining can of course be a problem; the optimizer has loads of heuristics to decide whether inlining is worth it or not, and the programmer can always mark a function as `[[gnu::noinline]]` if necessary. The fact that C++ makes it possible for the sort comparator to be inlined does not mean that it will be.
In my experience, exceptions have a slightly positive impact on codegen (compared to code that actually checks error return values, not code that ignores them) because there is no error checking on the happy path at all. The sad path is greatly slowed down though.
Having worked in highly performance sensitive code all of my career (video game engines and trading software), I would miss a lot of my toolbox if I limited myself to plain C and would expect to need much more effort to achieve the same result.
While C code makes more heavy use of linked lists than C++ code, most of the C code I have helped maintain made even heavier use of balanced binary search trees and B-trees than linked lists. It also used SLAB allocation to amortize allocation costs. In the case of OpenZFS, most of the code operated in the kernel where external memory fragmentation makes dynamic arrays (and “large” arrays in general) unusable.
I think you have not seen the C libraries available to make C even better. libuutil and libumem from OpenSolaris make doing these things extremely nice. Some of the first code I wrote professionally (and still maintain) was written in C++. There really is nothing from C++ that I miss in C when I have such libraries. In fact, I have long wanted to rewrite that C++ code in C since I find it easier to maintain due to the reduced abstractions.
Implementations of both languages provide inline asm, so this is trivially true. Yet it is an uninteresting statement.
But if you look beyond, you can find a whole world that extends the STL. If you are not happy, say, with unordered_map, you can find more or less drop-in replacements that use the same iterator-based interface, preserve value semantics, and use a common language to describe iterator and reference invalidation.
Regarding your specific use case, if you want intrusive lists you can use Boost.Intrusive, which provides containers with STL semantics except that it leaves ownership of the nodes to the user. The containers do not even need to be lists: you can put the same node in multiple linked lists, binary trees (multiple flavors), and hash maps (although this last is not fully intrusive) at the same time.
These days I don't generally need Boost much, but I still reach for Boost.Intrusive quite often.
Yes, most language features past the C89 subset are not supported, besides the C standard library, because C++ has much better alternatives: why _Generic, when templates are a much saner approach than type dispatching with the pre-processor?
However, that is beside the point: 99% of C89 code, minus a few differences, is valid C++ code, and if the situation so requires, C++ code can be written in exactly the same way.
And lets not forget most FOSS projects have never moved beyond C89/C99 anyway, so stuff like _Generic is of relative importance.
As for WG 14, they incorporated numerous C++isms into C. While you claim that they did not go far enough, I am sure you will find many who would say that they went too far.
I claim they aren't focused on what matters, we don't need C++isms into C, we already have C++, and C should have been done as language back in C89.
Anyone that wanted more has always been able to use C++ instead, or Objective-C on Apple/NeXT land.
What we need is for WG14 to finally take security around strings and arrays seriously, not yet another reboot of functions using (ptr, length) pairs.
I have tested this WRT _Generic as it was a concern. It turns out that GCC will accept it on older versions, which permits compatibility. You might feel that this is wrong, but that is how things are right now.
Yes, you can write most things in modern C++ in roughly equivalent C with enough code, complexity, and effort. However, the disparate economics are so lopsided that almost no one ever writes the equivalent C in complex systems. At some point, the development cost is too high due to the limitations of the expressiveness and abstractions. Everyone has a finite budget.
I’ve written the same kinds of systems I write now in both C and modern C++. The C equivalent versions require several times the code of C++, are less safe, and are more difficult to maintain. I like C and wrote it for a long time but the demands of modern systems software are a beyond what it can efficiently express. Trying to make it work requires cutting a lot of corners in the implementation in practice. It is still suited to more classically simple systems software, though I really like what Zig is doing in that space.
I used to have a lot of nostalgia for working in C99 but C++ improved so rapidly that around C++17 I kind of lost interest in it.
You can argue that C takes more effort to write, but if you write equivalent programs in both (ie. that use comparable data structures and algorithms) they are going to have comparable performance.
In practice, many best-in-class projects are written in C (Lua, LuaJIT, SQLite, LMDB). To be fair, most of these projects inhabit a design space where it's worth spending years or decades refining the implementation, but the combination of performance and code size you can get from these C projects is something that few C++ projects I have seen can match.
For code size in particular, the use of templates makes typical C++ code many times larger than equivalent C. While a careful C++ programmer could avoid this (ie. by making templated types fall back to type-generic algorithms to save on code size), few programmers actually do this, and in practice you end up with N copies of std::vector, std::map, etc. in your program (even the slow fallback paths that get little benefit from type specialization).
Great question! Here's one answer:
Having written a great deal of C code, I made a discovery about it. The first algorithm and data structure selected for a C program, stayed there. It survives all the optimizations, refactorings and improvements. But everyone knows that finding a better algorithm and data structure is where the big wins are.
Why doesn't that happen with C code?
C code is not plastic. It is brittle. It does not bend, it breaks.
This is because C is a low level language that lacks higher level constructs and metaprogramming. (Yes, you can metaprogram with the C preprocessor, a technique right out of hell.) The implementation details of the algorithm and data structure are distributed throughout the code, and restructuring that is just too hard. So it doesn't happen.
A simple example:
Change a value to a pointer to a value. Now you have to go through your entire program changing dots to arrows, and sprinkle stars everywhere. Ick.
Or let's change a linked list to an array. Aarrgghh again.
Higher level features, like what C++ and D have, make this sort of thing vastly simpler. (D does it better than C++, as a dot serves both value and pointer uses.) And so algorithms and data structures can be quickly modified and tried out, resulting in faster code. A traversal of an array can be changed to a traversal of a linked list, a hash table, a binary tree, all without changing the traversal code at all.
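The same point can be illustrated in Rust, where a traversal written against an iterator interface works unchanged across containers; a minimal sketch:

    use std::collections::LinkedList;

    // The traversal code never names the container, so swapping a Vec
    // for a linked list (or a tree's iterator) requires no changes here.
    fn total<I: IntoIterator<Item = i32>>(items: I) -> i32 {
        items.into_iter().sum()
    }

    fn main() {
        let v: Vec<i32> = vec![1, 2, 3];
        let l: LinkedList<i32> = [1, 2, 3].into_iter().collect();
        assert_eq!(total(v), total(l));
    }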
That's interesting, did ChatGPT tell you this?
While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.
The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.
You could say if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for 1 data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure or else you must do at a minimum, another memory allocation and some copies, that are completely unnecessary in C.
Exception handling in C++ also can silently kill performance if you have many exceptions thrown and the code handles it without saying a thing. By not having exception handling, C code avoids this pitfall.
You are arguing against what the language was 30-40 years ago. The language has undergone two pretty fundamental revisions since then.
In certain cases, sure - inlining potential is far greater in C++ than in C.
For idiomatic C++ code that doesn't do any special inlining, probably not.
IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).
But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.
For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.
Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.
Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here
High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.
Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.
Another question is about Clash. Your description sounds like the HLS (high level synthesis) approach. But I thought that Clash used a Haskell -based DSL, making it a true HDL. Could you clarify this? Thanks!
I would actually flip your HDL definition a bit. Clash is a true HDL, but specifically because it's not just a shallowly embedded DSL.
Clash is actually a GHC plugin that compiles haskell code to a synchronous circuit representation and then can spit that out as whatever HDL you like. It's emphatically not just a library for constructing circuit descriptions, like most new gateware development tools. This is possible because the semantics of Haskell are (by a mixture of good first-principles design and luck) an almost exact match for the standard four-state logic semantics of synchronous digital circuits.
This is also different from the standard HLS approach where the semantics of the source language do not at all match the semantics of the target. With Haskell, they are (surprisingly) very close! Only in a few edge cases (mostly having to do with zero-bit wires and so on) does the evaluation semantics of haskell differ from the evaluation semantics of e.g. verilog.
I don't understand this argument, which I've also seen used against C# quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want to. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas a random new feature in C++ or C# is nearly always backwards compatible and opt-in only.
For instance, to take a dated example - consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for these sort of features, I see no reason to ever oppose their endless introduction unless such starts to imperil the integrity/performance of the compiler, but that clearly is not happening.
Sure, you don't have to use them, but you have to understand them when used in libraries you depend on. And in my experience in an environment of C++ developers, many times you end up having some colleagues who are very vocal about how you should love the language and use all the new features. Not that this wouldn't happen in Java or Kotlin, but the fact is that new features in those languages actually improve the experience with the language.
The same applies to having to deal with old features that have been replaced by modern ways; old codebases don't get magically rewritten, and someone has to understand both the modern and the old ways.
Likewise I am not a big fan of C and Go, as visible by my comment history, yet I know them well enough, because in theory I am not forced to use them, in practice, there are business contexts where I do have to use them.
But if you are after performance, how do you do the following in Java? Build an AoS so that memory access is linear with respect to the cache. Prefetch. Use things like _mm_stream_ps() to tell the CPU the cache line you're writing to doesn't need to be fetched. Share a buffer of memory between processes by atomically incrementing a head pointer.
I'm pretty sure you could build an indie game without low-level C++, but there is a reason that commercial gamedev is typically C++.
Sure, and that's kind of my point. There are a few use-cases where C++ is actually needed, and for those cases, Rust (the language) is a good alternative if it's possible to use it.
But even for gamedev, the article here says that they moved to Unity. The core of Unity is apparently C++, but users of Unity code in C#. Which kind of proves my point: outside of that core that actually needs C++, it doesn't matter much. And the vast majority of software development is done outside of those core use-cases, meaning that the vast majority of developers do not need Rust.
Then we had C++ for all our low level code and Lua for gameplay.
We were floating a middle layer of Rust for Lua bindings and the glue code for our transformation pipeline, but there was always a little too much friction to introduce. What we were particularly interested in was memory allocation bugs (use after free and leaks) and speeding up development of the engine. So I could see it having a place.
Had Notch thought too much about which language to use, maybe he would still be trying to launch a game today.
No it isn't; there are now two versions of Minecraft: the classic Java one, and Minecraft Bedrock, which is the one written in C++.
Minecraft Bedrock doesn't have half the community that classic Minecraft enjoys, which is why Microsoft is trying to use JavaScript-based extensions to bring the mod community over to Minecraft Bedrock.
Finally, without classic Minecraft's market success, Minecraft Bedrock wouldn't exist at all, so Java served Notch's fortunes well enough.
And Java was a perfectly good choice of language for Notch for the same reasons.
I don't play Minecraft so I guess I'm outta touch. I knew about Bedrock and I've heard kids call Java the "old one". I didn't realise there's still an active community. Thanks for the correction :)
> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.
I would choose Bevy if and only if I would like to be heavily involved in the development of Bevy itself.
And never for anything that requires a steady foundation.
Programming language does not matter. Choose the right tool for the job and be pragmatic.
There is no chance for any language, no matter how good it is, to match even the most horrendous (the web!) but full-featured UI toolkit.
I bet, 1000%, that it is easier to write an OS, a database engine, etc., than to try to match Qt, Delphi, Unity, etc.
---
I made a decision that has become the most productive and problem-free approach to making UIs in my 30 years of doing this:
1- Use the de facto UI toolkit as-is (HTML, SwiftUI, Jetpack Compose). Ignore any tool that promises cross-platform UI (that leaves HTML, but I mean: I don't try to do HTML in Swift, OK?).
2- Use the same idea as HTML: send plain data with the full fidelity of what you want to render: Label(text=.., size=..).
3- Render it directly from the native UI toolkit.
Yes, this is more or less htmx/tailwindcss (I got the inspiration from them).
This means my logic is all in Rust: I pass serializable structs to the UI front-end, and it renders directly from them. Critically, the UI toolkit is nearly devoid of any logic more complex than what you see in a mustache-style template language. It does not do localization, formatting, etc. Only UI composition.
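A minimal sketch of the shape of this (assuming serde for the serialization; the type and field names are invented for illustration):

    use serde::Serialize;

    // Plain data describing exactly what to render; no logic, no formatting.
    #[derive(Serialize)]
    #[serde(tag = "kind")]
    enum Widget {
        Label { text: String, size: u32 },
        ProgressBar { percent: f32 },
    }

    // All localization/formatting happens on the Rust side, before this point.
    fn build_screen(name: &str, progress: f32) -> Vec<Widget> {
        vec![
            Widget::Label { text: format!("Hello, {name}"), size: 16 },
            Widget::ProgressBar { percent: progress },
        ]
    }

The native front-end deserializes the list and maps each variant 1:1 onto a toolkit widget.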
I don't care that I need to code in different ways, different apis, different flows, and visually divergent UIs.
IT'S GREAT.
After the initial pain of boilerplate, doing the next screen/component/whatever is so ridiculously simple that it feels like cheating.
So, the problem is not Rust. It's not F#, or Lisp. It's that UI is the kind of beast that is impervious to improvement by language alone.
> this is why game engines embedded scripting languages
I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.
The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."
Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.
For example: if you have a progress bar that needs to be updated continuously, what do you do? Upon every `tick` of your Rust engine, do you send a new struct with `ProgressBar(percentage=x)`? Or do the structs have unique identifiers, so that the UI code can update just that one element and its properties instead of re-rendering the entire screen?
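For what it's worth, one common answer is the latter: give each element a stable ID so the UI can patch a single element instead of re-rendering the screen. A sketch with a hypothetical message type (not from the parent's approach):

    // Hypothetical wire format; IDs let the UI patch one element per tick.
    enum UiMessage {
        CreateProgressBar { id: u64, percent: f32 },
        SetProgress { id: u64, percent: f32 },
        Remove { id: u64 },
    }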
pub fn randomize_paperdoll<C: Component>(
    mut commands: Commands,
    views: Query<(Entity, &Id<Spine>, &Id<Paperdoll>, &View<C>), Added<SkeletonController>>,
    models: Query<&Model<C>, Without<Paperdoll>>,
    attachment_assets: Res<AttachmentAssets>,
    spine_manifest: Res<SpineManifest>,
    slot_manifest: Res<SlotManifest>,
) {
    // ... (system body elided; the signature alone illustrates the point)
}
I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.
Ecosystem problems:
The Rust 3D game dev user base is tiny.
Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.
The lower levels are buggy and have a lot of churn
The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard to find bugs. There just aren't enough users to wring out the bugs.
Also, too many different crates want to own the event loop.
These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.
Language problems:
Back-references are difficult
A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
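For reference, the Rc/Weak version of the pattern looks roughly like this (a minimal sketch):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct A {
        children: Vec<Rc<RefCell<B>>>,
    }

    struct B {
        parent: Weak<RefCell<A>>, // back-reference; doesn't keep A alive
    }

    fn link() -> Rc<RefCell<A>> {
        let a = Rc::new(RefCell::new(A { children: vec![] }));
        let b = Rc::new(RefCell::new(B { parent: Rc::downgrade(&a) }));
        a.borrow_mut().children.push(b);
        a
    }

    // Walking back up requires a runtime upgrade that can, in principle, fail:
    // child.borrow().parent.upgrade() -> Option<Rc<RefCell<A>>>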
There are three common workarounds:
- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.
- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.
- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
Rust needs a coherent way to do single ownership with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design. Basic concept: works like "Rc::Weak" and "upgrade", with compile-time checking for overlapping upgrade scopes to ensure no "upgrade" ever fails.
"Is-a" relationships are difficult
Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
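Concretely, a trait can't declare a field, so an "is-a" base class with data has to be emulated by every implementor storing the data and routing to it through a method (a sketch, with invented names):

    struct NodeData {
        name: String,
    }

    trait Node {
        // Every implementor must store its own NodeData and expose it.
        fn data(&self) -> &NodeData;
    }

    struct Mesh {
        data: NodeData,
        // vertices, etc.
    }

    impl Node for Mesh {
        fn data(&self) -> &NodeData {
            &self.data
        }
    }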
> Migration - Bevy is young and changes quickly.
We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.
> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.
This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.
I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.
This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.
...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.
I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.
eg: https://docs.openrewrite.org/recipes/java/migrate/joda/jodat...
I occasionally notice libraries or frameworks including OpenRewrite rules in their releases. I've never tried it, though!
On that subject, ironically, codegen by AI for AI-related work is often the least reliable due to fast churn. LangChain is a good example of this, and also kind of funny: they suggest/integrate GritQL for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.
Overall, mastering things like GritQL, ast-grep, and CST tools for code transforms still pays off. For large code bases, no matter how good AI gets, it is probably better to have it use formal/deterministic tools like these rather than trust it with code transformations directly and just hope for the best.
As good as Rust is, I still feel there needs to be a "high-low" strategy for most biz/game code. You want to be able to depend on your low level abstractions, while constantly changing up how you fit them together.
It is very strange that more mainstream languages do not have such features (and I am not talking about 3rd party tools; in Modelica conversions are part of the language spec).
@Deprecated("old API", replaceWith = ReplaceWith("new expression"))
Only works for simple cases, but better than nothing. For more, there's OpenRewrite.

It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.
So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.
The constant update cycles of some libraries (hello Router) is problematic in itself, but there's too many fashionable things that sound very good in theory but end up being a huge problem when used in fast-moving projects, like headless UI libraries.
Well, I always thought it was the key in every kind of development, JS or otherwise.
These are all solvable problems, but in reality, it's very hard to write a good business case for being the one to solve them. Most of the cost accrues to you and most of the benefit to the commons. Unless a corporate actor decides to write a major new engine in Rust or use Bevy as the base for the same, or unless a whole lot of indie devs and part-time hackers arduously work all this out, it's not worth the trouble if you're approaching it from the perspective of a studio with severe limitations on both funding and time.
The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast about halfway between an MMO client and a web browser. It has to do most of the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.
From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.
Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.
What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood. Content is constantly coming in, being displayed, and then discarded as the user moves around the big world. All that dynamism requires more complex data structures than a game that loads everything at startup.
Rust's "fearless multiprogramming" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.
(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.
I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)
Meta blew $40 billion on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive-to-run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.
The only challenge is not having an ecosystem with ready made everything like you do in "batteries included" frameworks. You are basically building a game engine and a game at the same time.
We need a commercial engine in Rust or a decade of OSS work. But what features will be considered standard in Unreal Engine 2035?
(For the record, software development now has little in common with how it looked when I started in 2003; plenty of things have revolutionized the way we write code (especially GitHub) and made us at least an order of magnitude more productive. Yet the number of developers has skyrocketed. I don't expect this trend to stop; AI is yet another productivity boost in an industry that has already absorbed many of them in recent times.)
I see this and am reminded of when I had to fight 0-indexing while cutting my teeth in C for class.
I wonder why no one complains about 0-indexing anymore. Isn't it weird how you have to go from 0 to length - 1, and implement algorithms differently than in a math book?
Am I the only person that remembers how hard it was to wrap your head around numbers starting at 0, rather than 1?
So the reason you don't see many people fighting 0-indexing is that they actually prefer it.
Maybe you had the luck of learning a 0-based language first. Then most of them were a smooth ride.
My point is you forgot how hard it is because it's now muscle memory (if you need a recap of the difficulty, learn a language with arbitrary array indexing and set your first array index to something exciting like 5 or -6). It also means that if you are "fighting the borrow checker" you are still at the pre-"muscle memory" stage of learning Rust.
Given most languages since at least C have 0-based indexing... I would think most engineers picked it up early? I recall reading The C Programming Language 20 years ago, reading the reason and just following what it says. I don't think it's as complex as the descriptions people put forward of "fighting the borrow checker." One is "mentally add/subtract 1" and another is "gain a deep understanding of how memory management works in Rust." I know which one I'm going to find more challenging when I get round to trying to learn Rust...
As I mentioned, I started with BASIC on the C64, and the school curriculum was in Pascal. I didn't learn C until I got to college.
> One is "mentally add/subtract 1" and another is "gain a deep understanding of how memory management works in Rust."
In practice they are the same: you start writing code. At first you trip over your feet, read stuff carefully, then try again until you succeed.
Then one day you wake up and realize you know 0-indexing and/or the borrow checker. You don't know how you know; you just know you don't make those mistakes anymore.
Also, there’s a huge difference between beginners not understanding 0-based indexing and experienced C++ engineers describing the challenges of understanding Rust’s unique features. I mean, Jesus Christ, we’re in a thread of experienced engineers describing how challenging it can be! I really don’t know what else to say.
Keep in mind I wasn't new to programming at that point. I had been programming in C64 BASIC for 3 years and in Pascal for 3 as well. As a hobby, of course, and not full-time.
Zero indexes aren't simple; they are intertwined everywhere, but they are easy, as in familiar.
Experienced C++ devs in Rust are not much different from experienced Pascal devs in C: lost, and having to retread semi-familiar ground.
---
And I described my own to you. I don't fight the borrow checker. I just intuitively know how it works and what to avoid.
If you think just subtracting one or adding one is enough, there should be an easy enough way to test whether it is. In a Veritasium video they mention that wearing glasses that turn your vision upside down causes confusion at first, but you get used to them quickly.
What you could do is take a language that has an arbitrary starting index value and set it to something weird, like 42 or -5. Then rewrite your programs. See how many off-by-41 errors you make. Then, once you no longer make mistakes with it, go back to 0 indexes.
Do you need me to comment on the difference between 0 and 42?
But you need to see it with the eyes of a newbie. I mean, what is the problem here? You did say it's just adding or subtracting a number, whether it's 0, 1, -1, or 42. Should be trivial, right?
My guess is that while the change of indices is simple (altering ranges by a constant), it's going to be hard (requiring constant mental effort) until it's internalized.
Apparently I do need to comment but I don't think any human on earth has the words to convince you that, as a human, adding/subtracting by 1 is a billion times easier to understand than adding/subtracting 42.
OK, then start your indices with something else: 1 or 2; -1 would be best but is probably not well supported. Whatever you aren't used to.
Also, why is subtracting a small number such a problem? I never found adding/subtracting 1 much more difficult than 42. It's just math.
If the answer to your question is not self-evident, then I don't know of any words that will convince you.
for (i=0; i<length; i++)

I eventually just memorized that pattern, but stumbled every time any part of it changed. I had to rethink the whole logic every time to figure out < vs <= and length vs length-1, and usually ended up with an answer that was both confident and wrong.

The borrow checker feels similar but different. It feels like it has more "levels" to it. Initially, it came naturally to me and I wondered what all the fuss was about. I was fighting iterators and into, not the borrow checker. I just had to mentally keep track of what owned my data, and it all felt pretty obvious.
Then I started working on things that didn't fit into my naive mental model, and it became a constant fight.
So overall, a similar experience to 0-based indexing, yes. (Except I still don't "just know" the trickier bits of the borrow checker yet, so I don't know what comes next.)
I started out with BASIC and Fortran, which use 1 based indices. Going to C was a small bump in the road getting used to that, and then it's Fortran which is the oddball.
DIM a(10)

actually declares an array of 11 elements, with indices from 0 to 10 inclusive. I believe it was QBASIC that first borrowed the ability to define ranges explicitly from Pascal, so that we could do:

DIM a(1 TO 10)

etc. to avoid the "zero element tax".

Most languages have abstractions for iterating over an array these days, so you don't need to use 0 or length-1 at all.
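In Rust, for instance, the index rarely appears at all, and enumerate() supplies it when one is needed (a small sketch):

    let names = ["Alex", "Kim", "Robin", "Sam"];
    for name in &names {
        println!("{name}");
    }
    for (i, name) in names.iter().enumerate() {
        println!("{i}: {name}");
    }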
element_position = start + index*element_size
Sure. But from most general to least general, it goes:
Counting on fingers >>>> Math Equations >> Computer algorithms
The more general ones influence the less general ones because that's where most people start.
    (defvar names (Vector String)
      #("Alex" "Kim" "Robin" "Sam"))
    (elt names 1)

...and get "Alex" instead of "Kim". Or:

    (defvar names (Vector String)
      #("Alex" "Kim" "Robin" "Sam"))
    (first names)

or, if being flexible:

    (defvar names (Vector String)
      #("Alex" "Kim" "Robin" "Sam"))
    (elt names (lowbound names))

Bonus points for adding a macro or function to make the second form available as first as well.

To say it's the same as using array indices is just not true.
Bevy entity IDs are opaque and you have to try really hard to do arithmetic on them. You can technically do math on instance IDs in Unity too; you might say "well, nobody does that", which is my point exactly.
> You don't need to deal with the game life cycle AND the memory life cycle with a GC.
I don't know what this means. The memory for a `GameObject` is freed once you call `Destroy`, which is also how you despawn an object. That's managing the memory lifecycle.
> In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.
Is there a use for storing data on a dead `GameObject`? I've never had any reason to do so. In any case, if you really wanted to do that in Bevy you could always use an `EntityHashMap`.
The live-C#-but-dead-Unity-object trick is mostly only useful for dangling handles and IDs and such. It's more that memory won't be ripped out from under you for non-Unity data, and the usual GC rules apply.
And again the difference between using the GC and rolling your own implementation is pretty big. In your hash map example you still have to solve the issue of how long you keep entries in that map. The GC answers that question.
I've gone back and forth on this, myself.
I wrote a custom b-tree implementation in Rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run-length encoding. The original version of my b-tree works just like how you'd implement it in C: each internal node / leaf is a raw allocation on the heap.
Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed MIRI's strict checks. Also, Rust has really bad syntax for interacting with raw pointers.
Recently I rewrote the b-tree to simply use a vec of internal nodes and a vec of leaves. References became array indexes (integers). The resulting code is completely safe Rust. It's significantly simpler to read and work with - there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is better cache locality.)
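The shape of that rewrite, in a minimal sketch (names invented for illustration, not the author's actual code):

    // Nodes refer to each other by index into these vecs instead of by pointer.
    struct Tree<K, V> {
        internals: Vec<Internal<K>>,
        leaves: Vec<Leaf<K, V>>,
    }

    struct Internal<K> {
        keys: Vec<K>,
        children: Vec<u32>, // indices into internals or leaves
        parent: u32,        // the back-reference is just another integer
    }

    struct Leaf<K, V> {
        entries: Vec<(K, V)>,
        parent: u32,
    }

    // A cursor is now trivially copyable: no lifetimes, no raw pointers.
    #[derive(Clone, Copy)]
    struct Cursor {
        leaf: u32,
        offset: usize,
    }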
I think this is indeed peak rust.
It doesn't feel like it, but using an array-index style still preserves many of Rust's memory safety guarantees, because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.
Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
Static analysis could check for those potential panics at compile time. If that were implemented, the run-time check, and the potential for a panic, would go away. It's not hard to check, provided that all borrows have limited scope. You just have to determine, conservatively, that no two borrow scopes for the same thing overlap.
If you had that check, it would be possible to have something that behaves like RefCell, but is checked entirely at compile time. Then you know you're free of potential double-borrow panics.
I started a discussion on this on a Rust forum. A problem is that you have to perform that check after template expansion, and the Rust compiler is not set up to do global analysis after template expansion. This idea needs further development.
This check belongs to the same set of checks which prevent deadlocking a mutex against itself. There's been some work on Rust static deadlock analysis, but it's still a research topic.
However, I'm not sure what the implications are around mutability. I use a Cursor struct which stores a reference to a specific leaf node in the tree. Cursors can walk forward in the tree (cursor.next_entry()). The tree can also be modified at the cursor location (cursor.insert(item)). Modifying the tree via the cursor also updates some metadata all the way up from the leaf to the root.
If the cursor stored a Rc<Leaf> or Weak<Leaf>, I couldn't mutate the leaf item because rc.get_mut() returns None if there are other strong or weak pointers pointing to the node. (And that will always be the case!). Maybe I could use a Rc<Cell<Leaf>>? But then my pointers down the tree would need the same, and pointers up would be Weak<Cell<Leaf>> I guess? I have a headache just thinking about it.
Using Rc + Weak would mean less unsafe code, worse performance, and code that's even harder to read and reason about. I don't have an intuitive sense of what the performance hit would be. And it might not be possible to implement this at all, because of the mutability rules.
Switching to an array improved performance, removed all unsafe code and reduced complexity across the board. Cursors got significantly simpler - because they just store an array index. (And inserting becomes cursor.insert(item, &mut tree) - which is simple and easy to reason about.)
I really think the Vec<Node> / Vec<Leaf> approach is the best choice here. If I were writing this again, this is how I'd approach it from the start.
I have done this before with stdlib stuff like io::Cursor and was pretty happy with the result.
Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.
(I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
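The safe stack-of-free-indices version is only a few lines (a sketch; the Option per slot is the memory overhead mentioned above):

    struct Slab<T> {
        items: Vec<Option<T>>,
        free: Vec<u32>, // indices of vacated slots, reused before growing
    }

    impl<T> Slab<T> {
        fn alloc(&mut self, value: T) -> u32 {
            match self.free.pop() {
                Some(i) => {
                    self.items[i as usize] = Some(value);
                    i
                }
                None => {
                    self.items.push(Some(value));
                    (self.items.len() - 1) as u32
                }
            }
        }

        fn free(&mut self, i: u32) {
            self.items[i as usize] = None;
            self.free.push(i);
        }
    }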
In my case, inserts and read operations vastly outnumber deletes. So much so that in all of my testing, I never saw a leaf node which could be freed anyway. (Leaves store ~32 values, and there were no cases where all of a leaf's values actually get deleted). I decided to just leak nodes if it ever happens in real life.
The algorithm processes data in batches then frees everything. So worst case, it just has slightly higher peak memory usage while processing. A fine trade in this case given it let me remove ~200 lines of code - and any bugs that might have been lurking in them.
How about using hash maps/hash tables/dictionaries/however it's called in Rust? You could generate unique IDs for the elements rather than using vector indices.
Cache locality matters, but so does having less allocator pressure. Use 32-bit unsigned ints as indices, and you get improvements on that as well.
>The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocations on the heap.
I'd always try to avoid that type of allocation pattern in C++, FWIW :-).
I probably say this because I still have to maintain 32-bit binaries (only 2G of address space), but it can potentially be problematic even on 64-bit machines (typically 256 TB of address space), especially if the data structure is meant to be a reusable container with an unknown number of instances. If you don't know a reasonable upper bound on the number of elements beforehand, you have to reallocate later, or drastically over-reserve from the start. The former removes a pointer-stability guarantee; the latter is uneconomical - it may even be uneconomical on 64-bit, depending on how many instances of the data structure you plan to have. And having to reallocate when overflowing the preallocated space makes operations less deterministic with regard to execution time.
That makes sense. If my btree were gigabytes in size, I might rethink the approach for a number of reasons. But in my case, even for quite large input, the data structure never gets bigger than a few megabytes. That's small enough that resizing the vec has a negligible performance impact.
It helps that my btree stores its contents using lossless internal run-length encoding. Eg, if I have values like this:
{key: 5, value: 'a'}
{key: 6, value: 'a'}
{key: 7, value: 'a'}
Then I store them like this: {key: [5..8), value: 'a'}

In my use case, this compaction decreases the size of the data structure by about 20x. There's some overhead in joining and splitting values - but it's easily worth it.

Yes. I've found that problem in index-allocated code.
Also, when you do this, you need an allocator for the indexes. I've found bugs in those.
I think you should think less like Java/C# and more like a database.
If you have a Comment object that has a parent object, you need to store the parent as a 'reference', because you can't embed the entire parent.
So I'll probably use a Box here to refer to the parent.
I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.
* Simply check all array accesses and pointer dereferences, and panic/throw an exception/etc. if we are doing something wrong.
* Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.
Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.
The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
Yes.
Three months ago, when the Rust graphics stack achieved sync, I wrote a congratulatory note.[1]
Everybody is in sync!
wgpu 24
egui 0.31
winit 0.30
all play well together using the crates.io versions. No patch overrides! Thanks, everybody.
Wgpu 25 is now out, but the others are not in sync yet. Maybe this summer.

[1] https://www.reddit.com/r/rust_gamedev/comments/1iiu3mr/every...
I was quite intrigued by the borrow checker, and set about learning about it. While D cannot be fully retrofitted with a borrow checker, it can be enhanced with one. A borrow checker has nothing tying it to the Rust syntax, so it should work.
So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.
Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.
In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.
It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.
I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.
I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/
I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
At least until we get AI driven systems good enough to generate straight binaries.
Rust is to be celebrated for bringing affine types into the mainstream, but it doesn't need to be the only way; productivity and performance can be combined in the same language.
The way Ada, D, Swift, Chapel, Linear Haskell, OCaml effects and modes, are being improved, already show the way forward.
Then there are the whole formal-verification and dependent-type languages, but those go even beyond Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.
The issue is the boundary between the 2 styles/idioms -- e.g. between typed code and untyped code, you have either expensive runtime checks, or you have unsoundness
---
So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?
Compared with GC, borrow checking affects every function signature
Compared with manual memory management, GC also affects every function signature.
IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there
The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.
Likewise, type safety is a global property of a program
---
(good discussion of what programs are good for the borrow checking style -- stateless straight-line code seems to benefit most -- https://news.ycombinator.com/item?id=34410187)
I think "natural" is a bit loaded, there is native support in the frontend for doing both. You have to go out of your way to annotate functions with @live and it is still experimental(https://dlang.org/spec/ob.html). The garbage collection is natural and happens if you do nothing, but you can turn it off with proper annotations like @nogc(https://dlang.org/spec/function.html#nogc-functions) or using betterC(https://dlang.org/spec/betterc.html). There is also @safe, @system and @trusted(https://dlang.org/spec/memory-safe-d.html).
So natural is a stretch at the moment, but you can use all kinds of different techniques, what is needed is more community and library standardization around some solutions.
D is as memory safe as Rust when you use the garbage collector to allocate/free memory. If you don't use the GC in D, then there's a risk from:

* double frees
* memory leaks
* not pairing the allocation with free'ing

Those are what the borrow checker handles.

In other words, with D, there is no point in using the borrow checker if one is using D's GC for memory management.
You can mix and match using the GC or manual memory allocation however it makes sense for your program. It is normal for D programmers to use both.
Does D also protect against data races?
(I couldn't find an obvious answer after a bit of research, but I might have overlooked something.)
I don't think gradual types are as much of a holy grail as you make them out to be. In gradual typing, if I recall correctly, there is a large overhead when communicating between the typed and untyped parts.
But let's say gradual memory management is perfect; you still have to keep in mind the costs of having GC + borrow checking.
First, rather than focusing on perfecting either GC or borrow checking, you split your focus.
Second, you introduce an ecosystem split, with some libraries supporting GC and others supporting non-GC. E.g., you make games in C# and you want to be careful about avoiding GC: good luck finding fast enough non-GC libraries.
For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.
And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.
#4: safer unions/enums. I do hope D gets tagged unions/pattern matching sometime in the future. I know about std.sumtype, but that's nowhere close to what Rust offers.
BTW, D has always had quite an advanced pattern matcher for types.
https://dlang.org/spec/expression.html#is_expression
https://dlang.org/spec/template.html#template_type_parameter...
[1] https://github.com/vlang/v/blob/master/doc/docs.md#sum-types
One question that came to mind as a single-track-Rust-mind kind of person: in D generally, or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what alternative memory management strategy do you usually choose? Is it garbage collection, or manual memory management without a borrow checker?
Cheers!
But I still like the borrow checker style of programming because it makes the code easier to understand.
I find it convenient in the D compiler implementation to use the GC for the AST memory management, as the algorithms that manipulate it are easier if they needn't concern themselves with memory management. A borrow checker approach doesn't fit it comfortably, either.
Many of the data structures persist to the end of the program, as a compiler is a batch program. No memory management strategy is even necessary for those.
I think these are generally considered table stakes in a modern programming language. That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.
D's implementation of a borrow checker is very intriguing in terms of possibilities, and in putting it back into the context of a tool rather than the "be-all, end-all".
> I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).
This speaks volumes from such an experienced and accomplished programmer.
But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.
Even without the incremental GC it's manageable, and it's just part of optimising the game. It depends on the game, but you can often get down to 0 allocations per frame by making use of pooling and no-alloc APIs in the engine.
You also have the tools to pause GC so if you're down to a low amount of allocation you can just disable the GC during latency sensitive gameplay and re-enable and collect on loading/pause or other blocking screens.
Obviously it's more work than not having to deal with these issues, but for game developers it's probably a more familiar topic than working with the borrow checker, and critically it allows for quicker iteration and prototyping.
Finding the fun and time to market are top priority for games development.
Anything with a graphical shell is probably better written in a GC'd language, but I'd love to hear some counter-arguments.
If it’s a really logic-intensive game like Factorio (C++), or RollerCoaster Tycoon (Assembly), then I don’t think you can get away with something like Unity.
For simpler things that have a lot of content, I don’t think you can get away with Rust, until its ecosystem grows to match the usual game engines of today.
I think when it comes to game dev, people fixate on the engine having an ECS and maybe don't pay enough attention to the other aspects of it being good for gamedev, like... being a very high level language that lets you express all the game logic (C# with coroutines is great at this, and remains a core strength of Unity; Lua is great at this; Rust is ... a low level systems language, lol).
People need to realise that having ECS architecture isn't the only thing you need to build games effectively. It's a nice way to work with your data but it's not the be-all and end-all.
But the real issue is that game devs don't know that the GNU toolchain (and the LLVM-based ones) defaults to building open-source software for ELF/Linux targets, and that there is more ABI-related work to do for game binaries on those platforms.
The same is true if you try to make GUI applications in Rust. All the toolkits have lots of quirky bugs and broken features.
The barrier to contributing to toolkits is usually pretty high too: most of them focus on supporting a variety of open-source and proprietary platforms. If you want to improve something that requires an API change, you need to understand the details of all the other platforms - you can't just make a change for a single one.
Ultimately, cross-platform toolkits always offer a lowest common denominator (or "the worst of all worlds"), so I think that this common focus in the Rust ecosystem of "make everything run everywhere" ends up being a burden for the ecosystem.
> > Back-references are difficult
> > A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
When I code Rust, I'm always hesitant to use an Arc because it adds an overhead. But if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc. It's just implicit so we forget about it.
We really need to be more liberal in our usage of Arc and stop seeing it as "it has overhead". Any higher level language has the same overhead, it's just not declared explicitly.
Python also has incomparably worse performance than Java or C#, both of which can do many object-based optimizations and optimize away their allocation.
I agree with you: the other extreme (using Arc everywhere) is also bad.
There's a sweet middle spot of using it just when strictly necessary.
Java, C# and Go don't use atomic reference counting and don't have such overhead.
Individually managing the lifetime of every single item you allocate on the heap and fine-grained tracking of ownership of everything on both the heap and the stack makes a lot of sense to me for more typical "line of business" tools that have kind of random and unpredictable workloads that may or may not involve generating arbitrarily complex reference graphs.
But everything I've seen & read of best practices for game development, going all the way back to when I kept a heavily dog-eared copy of Michael Abrash's Black Book close at hand while I made games for fun back in the days when you basically had to write your own 3D engine, tells me that's not what a game engine wants. What a game engine wants, if anything, is something more like an arena allocator. Because fine-grained per-item lifetime management is not where you want to be spending your innovation tokens when the reality is that you're juggling 500 megabyte lumps of data that all have functionally the same lifetime.
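That is, something closer to this than to per-object new/delete (a minimal sketch):

    // Everything allocated into the arena shares one lifetime and is
    // released in one shot, instead of being tracked object by object.
    struct Arena<T> {
        items: Vec<T>,
    }

    impl<T> Arena<T> {
        fn alloc(&mut self, value: T) -> u32 {
            self.items.push(value);
            (self.items.len() - 1) as u32
        }

        fn get(&self, handle: u32) -> &T {
            &self.items[handle as usize]
        }

        // End of level/frame: everything goes at once.
        fn clear(&mut self) {
            self.items.clear();
        }
    }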
A fear I have with larger side projects is the notion that it could all be for nought, though I suppose that’s easily-mitigated by simply keeping side projects small, iterative if necessary. Start with an appropriate-sized MVP, et al.
This was a problem with early versions of Scala as well, exacerbated by the language and core libs shifting all the time. It got so difficult to keep things up to date with all the cross compatibility issues that the services written in it ended up stuck on archaic versions of old libraries. It was a hard lesson in if you're doing a non-hobby project, avoid languages and communities that behave like this until they've finally stabilized.
Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.
Unity: rock-solid, stable, well-known, featureful, LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, be its internal language C#, Haskell, or PHP. The language is downstream from the need to ship.
Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.
Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP has never been a solved problem (or had much investment even), it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP, let alone Rust specifically!
Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.
https://news.ycombinator.com/item?id=43787012
In my personal opinion, a paradox of truly open-source projects (meaning community projects, not pseudo-open-source from commercial companies) is that development tends toward diversity. While this leads to more and more cool things appearing, it always needs to be balanced against sustainable development.
Commercial projects, at least, always have a clear goal: to sell. For that goal, they can hold off on doing really cool things, or they think about differentiated competition. Perhaps if the purpose were commercial, an editor would be the primary goal (let me know if this is already on the roadmap).
---
I don't think the language itself is the problem. The situation where you have to use mature solutions for efficiency is more common in games and apps.
For example, I've seen many people who have had to give up Bevy, Dioxus, and Tauri.
But I believe for servers, audio, CLI tools, and even agent systems, Rust is absolutely my first choice.
I've recently been rewriting Glicol (https://glicol.org) after 2 years. I started from embedded devices, switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.
So I still have 100% confidence in Rust.
Is there a Rust equivalent of openai-agents-sdk?
Gave up after 3 days for 3 reasons:
1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.
2. Crates with non-Rust dependencies are nearly impossible to debug as debuggers don't evaluate expressions. So, if you have a Rust wrapper for Ogg reader, you can't look at ogg_file.duration() in the debugger because that requires function evaluation.
3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.
With these roadblocks I would never have gotten the "mature" project to the point where dealing with hard-to-debug concurrency issues and funky unforeseen errors became necessary.
Depending on your scenario, you may want either one or another. Shipping pre-compiled binaries carries its own risks and you are at the mercy of the library author making sure to include the one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with cc crate, often I would prefer for the package to also build its other-language components for my specific platform, with extra optimization flags I passed in.
But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.
Risks are real though.
How long ago was this and did you try JetBrains RustRover? While not quite as mature as some other JetBrains tools, I've found the latest version really quite good.
Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.
For those who are curious - the game is 'Sapiens' on Steam: https://store.steampowered.com/app/1060230/Sapiens/
They're distributing their game on Steam too so Linux support is next to free via Proton.
Non-issue. Pick a single blessed distro. Clearly state that it's the only configuration that you officially support. Let the community sort the rest out.
I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.
The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").
Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it gained speed of development. C# and Unity is a much smarter path for this particular game.
But, that’s not a knock on Rust. It’s just “Right tool for the job.”
Coroutines can definitely be very useful for games and they're also available in C#.
- You integrate it tightly with the engine so it only does game logic, keeping files small and very quick and easy to read.
- It's platform independent, with no compiling, so you can modify it in place on a release build of the game.
- The "everything is a table" approach is very easy to grasp mentally, which means even very inexperienced coders can get up and running quickly.
- Great exception handling, which makes most release-time bugs very easy to diagnose and fix.
All of which means you spend far more time building game logic and much, much less time fighting the programming language.
Heres my example of a 744 flight data recorder (the rest of the 744 logic is in the containing folders)
https://github.com/mSparks43/747-400/blob/master/plugins/xtl...
All asynchronously multithreaded, 100% OS independent.
So much so, in fact, that it can be bolted onto an existing game written in C#. I did exactly that for Bannerlord: https://www.nexusmods.com/mountandblade2bannerlord/mods/1651
I think the worst issue was the lack of ready-made solutions. Those 67k lines in Rust contain a good chunk of a game engine.
The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to go through your code and come up with OSS packages (cleaned of any specific game logic) based on the countless problems you must have solved in those extra 50k lines.
Such a crappy thing for a company to do.
Imagine you all cost 100k/year to employ at the larger company (since you all apparently don't make money).
Then imagine you all now cost 105k a year to the parent company.
It makes no difference.
That's the approach I've been taking with a side-project game, for the sole reason that the other contributors are not systems programmers - i.e., a similar situation to the one the author had with his brother.
Rust was simply not an option -- or I would be the only one writing code. :]
And yeah, as others mentioned: Fyrox over Bevy if you have few (or one) Rust dev(s). It just seems Fyrox is not even on the radar of many Rust people. Maybe because Bevy gets a lot more (press) coverage/enthusiasm and has more contributors?
I feel most of the things mentioned (rapid prototyping, ease of use for new programmers, moddability) would be more easily accomplished by embedding a Lua interpreter in the Rust project.
Glad C# is working out for them, though. But if anyone else finds themselves in this situation in Rust, or C, C++, Zig, whatever - embedding Lua might be something else to consider that requires less rewriting.
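For anyone curious what that looks like, here's a minimal sketch using the mlua crate (one of several Lua bindings for Rust; the variable names are invented):

    use mlua::{Lua, Result};

    fn main() -> Result<()> {
        let lua = Lua::new();

        // Expose engine state to scripts...
        lua.globals().set("player_health", 42)?;

        // ...and run gameplay logic written in Lua, editable without recompiling.
        lua.load(r#"
            if player_health < 50 then
                print("low health!")
            end
        "#).exec()?;

        Ok(())
    }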
But yeah my first thought here was Lua too like others said
I've ported code between engines, and that makes my productivity feel very... leisurely.
Also, it's endearing that he builds things with his brother including that TF2 map that he linked from years ago.
Java was my first hope. It was a bit safer than C++ but ultimately too verbose, and the GC meant too much memory was wasted. Most games were very sensitive to memory use because consoles always had limited memory to keep costs down.
Next I spent years of side projects on Common Lisp based on Andy Gavin’s success there with Crash Bandicoot and more, showing it was possible to do. However, reports from the company were that it was hard to scale to more people and eventually a rewrite of the engine in C++ came.
I have explored Rust and Bevy. Bevy is bleeding edge and that’s okay, but Rust is not the right language. The focus on safety makes coding slow when you want it to be fast. The borrow checker frowns when you want to mutate things for speed.
In my opinion Zig is the most promising language for triple A game dev. If you are mid level stick to Godot and Unity, but if you want to build a fast, safe game engine then look at Zig first.
* They didn't select Rust as the best tool available to create a game, they decided to create a Rust project which happens to be a game
* When the objective and mental model of a solution are clear, the execution is trivial. I bet I could recreate software that took me 3 months to develop in 3 days if I just had to retype the solution instead of finding it, no matter the language.
* They seem to struggle with the most trivial of tasks. Having to call out being able to utilize an A* library (an algorithm worth like 10 lines of code), or struggling with scripting (trivial with proven systems like Lua), suggests a quite novice team.
That being said, I'm glad for their journey:)
Learning - Over the past year my workflow has changed immensely, and I regularly use AI to learn new technologies, discuss methods and techniques, review code, etc. The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance. While Bevy and Rust evolve rapidly - which is exciting and motivating - the pace means AI knowledge lags behind, reducing the efficiency gains I have come to expect from AI assisted development. This could change with the introduction of more modern tool-enabled models, but I found it to be a distraction and an unexpected additional cost.
In 2023 I wondered if LLM code generation would throttle progress in programming language design. I was particularly thinking about Idris and other dependently-typed languages which can do deterministically correct code generation. But it applies to any form of language innovation: why spend time learning a new programming language that 100% reliably abstracts boilerplate away, when an LLM can 95% reliably slop the boilerplate? Some people (me) will say that this is unacceptably lazy and programmers should spend time reading things, the other will point to the expected value of dev costs or whatever. Very depressing.
From a dev perspective, I think, Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.
For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.