What? That's absolutely ideal! It's incredibly simple. I wish deployment processes were always that simple! Docker is not going to make your deployment process simpler than that.
I did enjoy the deep dive into figuring out what was taking a long time when compiling.
If anyone out there is already fully committed to using only Alpine Linux, I'd recommend trying to create native packages at least once.
Oops, changed one template in one header. And that impacts.... 98% of my code.
A.k.a. "Remember the Vasa!" https://news.ycombinator.com/item?id=17172057
Works great with Docker: upon a new compiler version or major website update, rebuild the layer with the incremental cache; otherwise just run from the snapshot, build the newest version of the website, and upload/deploy the resulting static binary. Just set it up so that mere code changes won't force rebuilding the layer that caches/materializes the fresh clean build's incremental compilation cache.
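A rough sketch of that kind of layering (paths and the binary name are placeholders, not from the parent comment): dependencies get baked into their own cached layer, and only the final `cargo build` re-runs when the site's code changes.

```dockerfile
# Dependency layer: only invalidated when Cargo.toml / Cargo.lock change.
FROM rust:1 AS build
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs && cargo build --release

# Source layer: re-runs on code changes, but reuses the cached dependency build.
COPY . .
RUN touch src/main.rs && cargo build --release

# Runtime snapshot: just the resulting binary.
FROM debian:stable-slim
COPY --from=build /app/target/release/site /usr/local/bin/site
CMD ["site"]
```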
Cargo is the standard build system for Rust projects, though some users use other ones. (And some build those on top of Cargo too.)
The local builds are fast; why would you rebuild the Docker image for small changes?
Also, why does a personal page need so much Rust and so many dependencies? For a larger project with more complex stuff you'd have a test suite that takes time too. Run both in parallel in your CI and call it a day.
Do you really believe that nobody over the course of Rust's lifetime has ever taken a look at C compilers and thought about if techniques they use could apply to the Rust compiler?
Unity builds are useful for C programs because they tend to reduce header processing overhead, whereas Rust does not have the preprocessor or header files at all.
They can also help by reducing the number of object files (down from many to one), so that the linker has less work to do; this is already sort of done in Rust (though not to literally one object file) due to what I mentioned above.
In general, the conventional advice is to do the exact opposite: breaking large Rust projects into more, smaller compilation units helps avoid "spurious" rebuilding, so smaller changes have less overall impact.
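For example, a minimal workspace split along these lines (member names are made up) means touching one member doesn't force recompiling the others:

```toml
# Cargo.toml at the workspace root: each member is its own compilation unit,
# so a change in `web` doesn't rebuild `core` or `templates`.
[workspace]
resolver = "2"
members = ["core", "templates", "web"]
```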
Basically, Rust's compile time issues lie elsewhere.
Go has sub-second build times even on massive codebases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of stuff to be handled by the GC at runtime. It's great for its intended use case.
When you have things like macros and an advanced type system, and want robustness guarantees at build time... then you have to pay for that.
I can believe that, but even so it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom assembly compiled to suit your application. Thus, there is a lot more code generation going on, and that is caused by the tradeoffs they've made with the type system.
Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.
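A tiny illustration of that tradeoff (made-up code, not from the parent):

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits one machine-code copy per concrete T
// that callers use, which is where the extra codegen comes from.
fn describe_generic<T: Display>(x: T) -> String {
    format!("value = {x}")
}

// Dynamic dispatch: a single shared body, calls go through a vtable,
// closer to the pre-compiled qsort-from-libc model.
fn describe_dyn(x: &dyn Display) -> String {
    format!("value = {x}")
}

fn main() {
    // Two separate instantiations of describe_generic get compiled here...
    println!("{}", describe_generic(42_u32));
    println!("{}", describe_generic(2.5_f64));
    // ...while both of these calls share the one describe_dyn body.
    println!("{}", describe_dyn(&42_u32));
    println!("{}", describe_dyn(&2.5_f64));
}
```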
FWIW, modern C++ (like boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.
I was all excited to conduct the "cargo check; mrustc; cc" is 100x faster experiment, but I think at best, the multiple is going to be pretty small.
A big reason that amalgamation builds of C and C++ can absolutely fly is that they aren't reparsing headers and they generate exactly one object file, so the linker has no work to do.
Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.
Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so even once the compiler is done with most of its typechecking work, the backend and assembler still have a lot to churn through.
The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough, especially because its use case is often applications where "good enough" is good enough (i.e., IO-heavy applications).
You can see that with "gccgo". Slower to compile, faster to run.
The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.
Stuff like inserting bounds checking puts more work on the optimization passes and codegen backend, as it simply has to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" a check is unnecessary, that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference, as the parser isn't normally high in the profiles either.
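A small sketch of the kind of extra work meant here:

```rust
// Indexed access carries an implicit bounds check. LLVM can often hoist or
// eliminate it, but proving that costs optimizer time, and until then the
// check is extra IR for the backend to process.
fn sum_indexed(xs: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i]; // would panic if i were out of range
    }
    total
}

// The iterator version never materializes a per-element check, so there is
// simply less code handed to the optimization passes.
fn sum_iter(xs: &[u64]) -> u64 {
    xs.iter().sum()
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&xs), sum_iter(&xs));
}
```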
Generally it provides stricter checks that are normally punted to a linter tool in the c/c++ world - and nobody has accused clang-tidy of being fast :P
* Make no nested types - these slow compile times a lot
* Include no crates, or only ones that emphasize compile speed
C is still v. fast though. That's why I love it (and Rust).
One of the primary features of Rust is the extensive compile-time checking. Monomorphization is also a complex operation, which is not exclusive to Rust.
C compile times should be very fast because it's a relatively low-level language.
On the grand scale of programming languages and their compile-time complexity, C code is closer to assembly language than modern languages like Rust or Swift.
The rust compiler is actually pretty fast for all the work it's doing. It's just an absolutely insane amount of additional work. You shouldn't expect it to compile as fast as C.
1. Use pointers, and do not include the header file for a class if you only need a pointer to that class. I think that's a pretty established pattern in C++. So if you want to declare a pointer to a class in your header, you just write `class SomeClass;` instead of `#include "SomeClass.hpp"`.
2. Do not use the STL or iostreams. That project used only libc and the POSIX API. I know the author really hated the STL and considered its inclusion in the standard a huge mistake.
3. Avoid generic templates unless absolutely necessary. Templates force you to write your code in header files, so it'll be parsed again for every include, compiled into multiple copies, etc. And even when you use templates, try to split the class into a generic and a non-generic part, so some code can be moved from the header to a source file. Generally prefer run-time polymorphism to generic compile-time polymorphism.
Here's a somewhat dated but still good overview of various approaches to generics in different languages including C++, Rust, Swift, and Zig and their tradeoffs: https://thume.ca/2019/07/14/a-tour-of-metaprogramming-models...
Personally I don't care anymore, since I do hotpatching:
https://lib.rs/crates/subsecond
Zig is faster, but then again, Zig isn't memory safe, so personally I don't care. It's an impressive language; I love the syntax and the simplicity. But I don't trust myself to keep all the memory-relevant invariants in my head anymore the way I used to many years ago. So Zig isn't for me. I'm simply not the target audience.
For all the C++ laughing in this thread, there's really only one thing that makes C++ slow - non-`extern` templates - and C++ gives you a lot more space to speed them up than Rust does.
As for templates, I can't think of anything about them that would speed up things substantially wrt Rust aside from extern template and manually managing your instantiations in separate .cpp files. Since otherwise it's fundamentally the same problem - recompiling the same code over and over again because it's parametrized with different types every time.
Indeed, out of the box I would actually expect C++ to do worse because a C++ header template has potentially different environment in every translation unit in which that header is included, so without precompiled headers the compiler pretty much has to assume the worst...
My 2c on this: I nearly ditched Rust for game development due to the compile times. In digging in, it turned out that LLVM is very slow regardless of opt level. Indeed, it's what the Jai devs have been saying.
So Cranelift might be relevant for OP. I will shill it endlessly; it took my game from 16 seconds to 4 seconds. Incredible work, Cranelift team.
But it's also probable that 16 seconds was fairly early in development and it would get much worse from there.
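For anyone wanting to try it: roughly, on a nightly toolchain with `rustup component add rustc-codegen-cranelift-preview --toolchain nightly`, the setup looks something like the below (check the rustc_codegen_cranelift docs for the current incantation):

```toml
# .cargo/config.toml: opt in to the unstable codegen-backend option.
[unstable]
codegen-backend = true

# Cargo.toml: use Cranelift for dev builds only; release builds keep LLVM.
[profile.dev]
codegen-backend = "cranelift"
```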
xkcd is always relevant: https://xkcd.com/303/
When I had to deal with this I would just open the newspaper and read an article in front of my boss.
> To get your Rust program in a container, the typical approach you might find would be something like:
If you have `cargo build --target x86_64-unknown-linux-musl` in your build process, you do not need to do this anywhere in your Dockerfile. You should compile locally and copy the binary into /sbin or something.
If you really want to build in a Docker image, I would suggest using `cargo --target-dir=/target ...`, running with `docker run --mount type=bind,...`, and then copying out of the bind mount into /bin or wherever.
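A sketch of that shape (binary name is a placeholder): build on the host, and the Dockerfile does nothing but copy the static musl binary in.

```dockerfile
# Built beforehand with: cargo build --release --target x86_64-unknown-linux-musl
# musl produces a fully static binary, so even an empty base image works.
FROM scratch
COPY target/x86_64-unknown-linux-musl/release/mysite /sbin/mysite
ENTRYPOINT ["/sbin/mysite"]
```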
The slowness is because everyone has to write code with generics and macros in Java Enterprise style in order to show they are smart with Rust.
This is really sad to see but most libraries abuse codegen features really hard.
You have to write a lot of things manually if you want fast compilation in Rust.
Compilation speed of code just doesn’t seem to be a priority in general with the community.
For the kind of work I do — writing servers, networking, and glue code — fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the price. Not having to deal with sigil soup is another plus point.
I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness. Given the amount of networking and large-scale infrastructure software written in Go, I think they absolutely nailed it.
But of course there are places where GC can’t be tolerated or correctness matters more than development speed. But I don’t work in that arena and am quite happy with the tradeoffs that Go made.
> * Borrowing — Rust’s defining feature. Its sophisticated pointer analysis spends compile-time to make run-time safe.
> * Monomorphization — Rust translates each generic instantiation into its own machine code, creating code bloat and increasing compile time.
> * Stack unwinding — stack unwinding after unrecoverable exceptions traverses the callstack backwards and runs cleanup code. It requires lots of compile-time book-keeping and code generation.
> * Build scripts — build scripts allow arbitrary code to be run at compile-time, and pull in their own dependencies that need to be compiled. Their unknown side-effects and unknown inputs and outputs limit assumptions tools can make about them, which e.g. limits caching opportunities.
> * Macros — macros require multiple passes to expand, expand to often surprising amounts of hidden code, and impose limitations on partial parsing. Procedural macros have negative impacts similar to build scripts.
> * LLVM backend — LLVM produces good machine code, but runs relatively slowly.
> * Relying too much on the LLVM optimizer — Rust is well-known for generating a large quantity of LLVM IR and letting LLVM optimize it away. This is exacerbated by duplication from monomorphization.
> * Split compiler/package manager — although it is normal for languages to have a package manager separate from the compiler, in Rust at least this results in both cargo and rustc having imperfect and redundant information about the overall compilation pipeline. As more parts of the pipeline are short-circuited for efficiency, more metadata needs to be transferred between instances of the compiler, mostly through the filesystem, which has overhead.
> * Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn’t need to — with most Rust projects being statically linked, the machine code isn’t needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.
> * Single-threaded compiler — ideally, all CPUs are occupied for the entire compilation. This is not close to true with Rust today. And with the original compiler being single-threaded, the language is not as friendly to parallel compilation as it might be. There are efforts going into parallelizing the compiler, but it may never use all your cores.
> * Trait coherence — Rust’s traits have a property called “coherence”, which makes it impossible to define implementations that conflict with each other. Trait coherence imposes restrictions on where code is allowed to live. As such, it is difficult to decompose Rust abstractions into small, easily-parallelizable compilation units.
> * Tests next to code — Rust encourages tests to reside in the same codebase as the code they are testing. With Rust’s compilation model, this requires compiling and linking that code twice, which is expensive, particularly for large crates.
[1]: https://www.pingcap.com/blog/rust-compilation-model-calamity...
hu3•3h ago
> So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.
Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.
hu3•3h ago
First stage compiles the code. This is good for isolation and reproducibility.
Second stage is a lightweight container to run the compiled binary.
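As a minimal sketch of those two stages (image tags and binary name are placeholders):

```dockerfile
# Stage 1: compile inside a full Rust toolchain image.
FROM rust:1 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: lightweight runtime image with just the binary.
FROM debian:stable-slim
COPY --from=builder /app/target/release/mysite /usr/local/bin/mysite
CMD ["mysite"]
```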
Why is the author being attacked (by multiple comments) for not making things simpler, when that was never claimed as the goal? They are modernizing it.
Containers are good practice for CI/CD anyway.
AndrewDucker•2h ago
Don't do what you don't need to do.
hu3•2h ago
They are already long past the point of "complicate things unnecessarily".
A simple Dockerfile pales in comparison.
dwattttt•2h ago
Docker is a (the, in some areas) modern way to do it, but far from the only way.
eeZah7Ux•31m ago
no, that's sandboxing.
jerf•3h ago
"Unfortunately, this will rebuild everything from scratch whenever there's any change."
In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a docker container. Double-check any settings that may accidentally add paths if the paths have anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)
It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.
scuff3d•1h ago
Even at work, I have a few projects where we had to build a Java uber jar (all the dependencies bundled into one big jar), and when we need it containerized we just copy the jar in.
I honestly don't see much reason to do builds in the container unless there is some limitation in my CI/CD pipeline where I don't have access to the necessary build tools.