
GitHub Models now supports moving beyond free limits

https://github.blog/changelog/2025-06-24-github-models-now-supports-moving-beyond-free-limits/
1•nonfamous•38s ago•0 comments

Hailey gets LLM code review

https://social.hails.org/@hailey/114752144098708214
1•jordigh•1m ago•0 comments

Show HN: I built a daily sunlight tracker

https://www.lumehealth.io/products
1•vickipow•2m ago•0 comments

VeilNet Built on Pion WebRTC, Replacement for Tor and Tailscale

https://www.veilnet.org/
2•ulfaric•4m ago•1 comment

The massed-spaced learning effect in non-neural human cells

https://www.nature.com/articles/s41467-024-53922-x
1•galaxyLogic•6m ago•0 comments

Check out Wonder Machine, solve your wildest thoughts, powered by xAI

https://ohara.ai/mini-apps/d6c57cc1-5e3c-48e0-bd0c-f63bc2d58eae
1•ppet•7m ago•1 comment

BBC to start charging US-based consumers for news and TV coverage

https://www.theguardian.com/media/2025/jun/26/bbc-usa-paid-subscription-news
4•LeoPanthera•8m ago•0 comments

Bumble's AI icebreakers are mainly breaking EU law

https://noyb.eu/en/bumbles-ai-icebreakers-are-mainly-breaking-eu-law
1•WhyNotHugo•10m ago•0 comments

My Dotfiles with Chezmoi

https://github.com/neiesc/dotfiles
1•neiesc•10m ago•0 comments

Data Science Weekly – Issue 605

https://datascienceweekly.substack.com/p/data-science-weekly-issue-605
1•sebg•11m ago•0 comments

BinDSA: Efficient, Precise Binary-Level Pointer Analysis

https://dl.acm.org/doi/10.1145/3728928
1•matt_d•11m ago•0 comments

Criminal who helped inspire 'Stockholm syndrome' theory dies

https://www.bbc.com/news/articles/cp3ly0e0wvpo
2•FridayoLeary•12m ago•0 comments

Bridging the Gaps Between GNNs and Data-Flow Analysis: The Closer, the Better

https://dl.acm.org/doi/10.1145/3728906
1•matt_d•13m ago•0 comments

Print-Ready Name Badge Inserts in 60s

https://badgesheet.vercel.app
1•daddypog•13m ago•0 comments

Show HN: Using Claude Code SDK to implement an agentic CV parser

https://github.com/semicolonio/claude-code-cv-parser
1•nasir•14m ago•0 comments

O3-Deep-Research

https://platform.openai.com/docs/models/o3-deep-research
1•serjester•17m ago•0 comments

Show HN: TypeQuicker – Type natural text that targets your weak points

https://www.typequicker.com
3•absoluteunit1•18m ago•1 comment

The Deceptive Promise of AI

https://eric.mann.blog/the-deceptive-promise-of-ai/
1•eamann•20m ago•0 comments

Lake Tahoe Boat Tragedy Claims Longtime Apple Employee Paula Bozinovich

https://daringfireball.net/linked/2025/06/25/lake-tahoe-boat-tragedy-claims-longtime-apple-employee-paula-bozinovich
2•ksec•21m ago•0 comments

Show HN: Tavkhid Method – persistent memory in DeepSeek-R1 beyond 128K tokens

https://medium.com/@tauhidnataevofficial
1•Tavkhid•24m ago•0 comments

The Low-Altitude Economy Is About War

https://entropicthoughts.com/low-altitude-economy-is-about-war
1•zaik•24m ago•0 comments

DHH Presents Omarchy: Arch and Hyprland Linux Build

https://twitter.com/dhh/status/1938369883617861849
3•brightball•25m ago•1 comment

Save your disk, write files directly into RAM with /dev/shm

https://hiandrewquinn.github.io/til-site/posts/save-your-disk-write-files-directly-into-ram-with-dev-shm/
3•hiAndrewQuinn•26m ago•0 comments

Thomas Aquinas – The world is divine

https://ralphammer.com/thomas-aquinas-the-world-is-divine/
1•pedroth•30m ago•0 comments

Tracking Deportation Agents Across America

https://icelist.info/
1•doener•32m ago•0 comments

Show HN: Functioneer – Do eng/sci analysis in <5 lines of code

https://github.com/qthedoc/functioneer
1•qthedoc•34m ago•0 comments

DLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching

https://arxiv.org/abs/2506.06295
1•PaulHoule•40m ago•0 comments

Mastra Cloud

https://mastra.ai/cloud
2•AnhTho_FR•40m ago•0 comments

Maps is deprecated and will be removed from the Microsoft Store by July 2025

https://learn.microsoft.com/en-us/windows/whats-new/deprecated-features-resources
3•PopAlongKid•41m ago•1 comment

How do scientists calculate the probability that an asteroid could hit Earth?

https://theconversation.com/how-do-scientists-calculate-the-probability-that-an-asteroid-could-hit-earth-249834
1•bdev12345•41m ago•0 comments

"Why is the Rust compiler so slow?"

https://sharnoff.io/blog/why-rust-compiler-slow
129•Bogdanp•4h ago

Comments

ahartmetz•3h ago
That person seems to be confused. Installing a single, statically linked binary is clearly simpler than managing a container?!
hu3•3h ago
From the article, the goal was not to simplify, but rather to modernize:

> So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.

Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.

adastra22•3h ago
So put the binary in the container. Why does it have to be compiled within the container?
hu3•3h ago
That is what they are doing. It's a 2 stage Dockerfile.

First stage compiles the code. This is good for isolation and reproducibility.

Second stage is a lightweight container to run the compiled binary.

Why is the author being attacked (by multiple comments) for not making things simpler, when that was never claimed as the goal? They are modernizing it.

Containers are good practice for CI/CD anyway.
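For reference, a minimal two-stage Dockerfile of the kind described might look like this (the base images, paths, and binary name `my-site` are illustrative assumptions, not taken from the article):

```dockerfile
# Stage 1: compile in an isolated, reproducible build environment
FROM rust:1.78 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: lightweight runtime image containing only the binary
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/my-site /usr/local/bin/my-site
CMD ["my-site"]
```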

MobiusHorizons•2h ago
That’s a reasonable deployment strategy, but a pretty terrible local development strategy
taberiand•16m ago
Devcontainers are a good compromise though - you can develop within a context that can be very nearly identical to production; with a bit of finagling you could even use the same dockerfile for the devcontainer, and the build image and the deployed image
AndrewDucker•2h ago
I'm not sure why "complicate things unnecessarily" is considered more modern.

Don't do what you don't need to do.

hu3•2h ago
You realize the author is compiling a Rust webserver for a static website right?

They are already long past the point of "complicate things unnecessarily".

A simple Dockerfile pales in comparison.

dwattttt•2h ago
Mightily resisting the urge to be flippant, but all of those benefits were achieved before Docker.

Docker is a (the, in some areas) modern way to do it, but far from the only way.

a3w•1h ago
Increased security compared to bare hardware, lower than VMs. Also, lower than Jails and RKT (Rocket) which seems to be dead.
eeZah7Ux•31m ago
> process isolation, increased security

no, that's sandboxing.

jerf•3h ago
Also strikes me as not fully understanding what exactly docker is doing. In reference to building everything in a docker image:

"Unfortunately, this will rebuild everything from scratch whenever there's any change."

In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a docker container. Double-check any settings that may accidentally add paths if the paths have anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)

It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.

scuff3d•1h ago
Thank you! I got a couple minutes in and was confused as hell. There is no reason to do the builds in the container.

Even at work, I have a few projects where we had to build a Java uber jar (all the dependencies bundled into one big jar), and when we need it containerized we just copy the jar in.

I honestly don't see much reason to do builds in the container unless there is some limitation in my CI/CD pipeline where I don't have access to the necessary build tools.

vorgol•2h ago
Exactly. I immediately thought of the grug brain dev when I read that.
kelnos•3h ago
> This is... not ideal.

What? That's absolutely ideal! It's incredibly simple. I wish deployment processes were always that simple! Docker is not going to make your deployment process simpler than that.

I did enjoy the deep dive into figuring out what was taking a long time when compiling.

quectophoton•2h ago
One thing I like about Alpine Linux is how easy and dumbproof it is to make packages. It's not some wild beast like trying to create `.deb` files.

If anyone out there is already fully committed to using only Alpine Linux, I'd recommend trying to create native packages at least once.

tmtvl•3h ago
Just set up a build server and have your docker containers fetch prebuilt binaries from that?
adastra22•3h ago
As a former C++ developer, claims that rust compilation is slow leave me scratching my head.
shadowgovt•3h ago
I thoroughly enjoy all the work on encapsulation and separate compilation (compile, then link) that C does... only to have C++ come along and undo almost all of it through the simple expedient of requiring templates for everything.

Oops, changed one template in one header. And that impacts.... 98% of my code.

eikenberry•3h ago
Which is one of the reasons why Rust is considered to be targeting C++'s developers. C++ devs already have the Stockholm syndrome needed to tolerate the tooling.
MyOutfitIsVague•2h ago
Rust's compilation is slow, but the tooling is just about the best that any programming language has.
zozbot234•1h ago
> Stockholm syndrome

A.k.a. "Remember the Vasa!" https://news.ycombinator.com/item?id=17172057

MobiusHorizons•2h ago
Things can still be slow in absolute terms without being as slow as C++. The issues with compiling C++ are incredibly well understood and documented. It is one of the worst languages on earth for compile times. Rust doesn’t share those language level issues, so the expectations are understandably higher.
int_19h•38m ago
But it does share some of those issues. Specifically, while Rust generics aren't as unstructured as C++ templates, the main burden is actually compiling all those tiny instantiations, and Rust's monomorphization has exactly the same problem; it's responsible for the bulk of its compile times.
ecshafer•3h ago
The Rust compiler is slow. But if you want more features from your compiler, you need a slower compiler; there isn't a way around that. However, this blog post doesn't really seem to be about that; it's more an annoyance with how they deploy binaries.
MangoToupe•3h ago
I don't really consider it to be slow at all. It seems about as performant as any other language of this complexity, and it's far faster than the 15-minute C++ and Scala build times I'd place in the same category.
randomNumber7•1h ago
Since C++ templates are Turing complete, it's pointless to complain about the compile times without considering the actual code :)
namibj•3h ago
Incremental compilation good. If you want, freeze the initial incremental cache after a single fresh build to use for building/deploying updates, to mitigate the risk of intermediate states gradually corrupting the cache.

Works great with docker: upon new compiler version or major website update, rebuild the layer with the incremental cache; otherwise just run from the snapshot and build newest website update version/state, and upload/deploy the resulting static binary. Just set so that mere code changes won't force rebuilding the layer that caches/materializes the fresh clean build's incremental compilation cache.

b0a04gl•3h ago
rust prioritises build-time correctness: no runtime linker, no dynamic deps. all checks (types, traits, ownership) happen before execution. this makes builds sensitive to upstream changes. docker uses content-hash layers, so small context edits invalidate caches. without careful layer ordering, rust gets fully recompiled on every change.
RS-232•3h ago
Is there an equivalent of ninja for rust yet?
steveklabnik•2h ago
It depends on what you mean by 'equivalent of ninja.'

Cargo is the standard build system for Rust projects, though some users use other ones. (And some build those on top of Cargo too.)

senderista•3h ago
WRT compilation efficiency, the C/C++ model of compiling separate translation units in parallel seems like an advance over the Rust model (but obviously forecloses opportunities for whole-program optimization).
woodruffw•3h ago
Rust can and does compile separate translation units in parallel; it's just that the translation unit is (roughly) a crate instead of a single C or C++ source file.
EnPissant•2h ago
And even for crates, Rust has incremental compilation.
ic_fly2•3h ago
This is such a weird cannon on sparrows approach.

The local builds are fast, why would you rebuild docker for small changes?

Also why is a personal page so much rust and so many dependencies. For a larger project with more complex stuff you’d have a test suite that takes time too. Run both in parallel in your CI and call it a day.

taylorallred•3h ago
So there's this guy you may have heard of called Ryan Fleury who makes the RAD debugger for Epic. The whole thing is made with 278k lines of C and is built as a unity build (all the code is included into one file that is compiled as a single translation unit). On a decent windows machine it takes 1.5 seconds to do a clean compile. This seems like a clear case-study that compilation can be incredibly fast and makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.
js2•2h ago
"Just". Probably because there's a lot of complexity you're waving away. Almost nothing is ever simple as "just".
pixelpoet•2h ago
At a previous company, we had a rule: whoever says "just" gets to implement it :)
forrestthewoods•1h ago
I wanted to ban “just” but your rule is better. Brilliant.
taylorallred•2h ago
That "just" was too flippant. My bad. What I meant to convey is "hey, there's some fast compiling going on here and it wasn't that hard to pull off. Can we at least take a look at why that is and maybe do the same thing?".
steveklabnik•2h ago
> "hey, there's some fast compiling going on here and it wasn't that hard to pull off. Can we at least take a look at why that is and maybe do the same thing?".

Do you really believe that nobody over the course of Rust's lifetime has ever taken a look at C compilers and thought about if techniques they use could apply to the Rust compiler?

taylorallred•2h ago
Of course not. But it wouldn't surprise me if nobody thought to use a unity build. (Maybe they did. Idk. I'm curious).
steveklabnik•2h ago
Rust and C have differences around compilation units: Rust's already tend to be much larger than C on average, because the entire crate (aka tree of modules) is the compilation unit in Rust, as opposed to the file-based (okay not if you're on some weird architecture) compilation unit of C.

Unity builds are useful for C programs because they tend to reduce header processing overhead, whereas Rust does not have the preprocessor or header files at all.

They also can help with reducing the number of object files (down to one from many), so that the linker has less work to do, this is already sort of done (though not to literally one) due to what I mentioned above.

In general, the conventional advice is to do the exact opposite: breaking large Rust projects into more, smaller compilation units can help do less "spurious" rebuilding, so smaller changes have less overall impact.

Basically, Rust's compile time issues lie elsewhere.

ameliaquining•2h ago
Can you explain why a unity build would help? Conventional wisdom is that Rust compilation is slow in part because it has too few translation units (one per crate, plus codegen units which only sometimes work), not too many.
tptacek•2h ago
I don't think it's interesting to observe that C code can be compiled quickly (so can Go, a language designed specifically for fast compilation). It's not a problem intrinsic to compilation; the interesting hard problem is to make Rust's semantics compile quickly. This is a FAQ on the Rust website.
lordofgibbons•2h ago
The more your compiler does for you at build time, the longer it will take to build, it's that simple.

Go has sub-second build times even on massive code-bases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of stuff to be handled by the GC at runtime. It's great for its intended use case.

When you have things like macros, advanced type systems, and want robustness guarantees at build time.. then you have to pay for that.

ChadNauseam•2h ago
That the type system is responsible for rust's slow builds is a common and enduring myth. `cargo check` (which just does typechecking) is actually usually pretty fast. Most of the build time is spent in the code generation phase. Some macros do cause problems as you mention, since the code that contains the macro must be compiled before the code that uses it, so they reduce parallelism.
rstuart4133•1h ago
> Most of the build time is spent in the code generation phase.

I can believe that, but even so it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom assembler compiled to suit your application. Thus, there is a lot more code generation going on, and that is caused by the tradeoffs they've made with the type system.

Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.

FWIW, modern C++ (like boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.
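The monomorphization being described can be seen in a tiny sketch (hypothetical code for illustration only; the function name is made up):

```rust
// Each concrete T stamps out a fresh copy of this function at compile
// time (monomorphization), so the compiler generates -- and the backend
// must optimize -- one body per element type, unlike qsort's single
// pre-compiled body driven by a function pointer.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &x in &items[1..] {
        if x > best {
            best = x;
        }
    }
    best
}

fn main() {
    // Two instantiations: largest::<i32> and largest::<f64> are compiled
    // as two independent functions.
    println!("{}", largest(&[3, 7, 2])); // 7
    println!("{}", largest(&[1.5, 0.5])); // 1.5
}
```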

tedunangst•57m ago
I just ran cargo check on nushell, and it took a minute and a half. I didn't time how long it took to compile, maybe five minutes earlier today? So I would call it faster, but still not fast.

I was all excited to conduct the "cargo check; mrustc; cc" is 100x faster experiment, but I think at best, the multiple is going to be pretty small.

duped•2h ago
I think this is mostly a myth. If you look at Rust compiler benchmarks, while typechecking isn't _free_ it's also not the bottleneck.

A big reason that amalgamation builds of C and C++ can absolutely fly is because they aren't reparsing headers and generating exactly one object file so the linker has no work to do.

Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.

Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so while the compiler is done with most of its typechecking work, the backend and assembler have a lot to churn through.

cogman10•2h ago
Yes but I'd also add that Go specifically does not optimize well.

The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough. Especially because its use case is often applications where "good enough" is good enough (i.e. IO-heavy applications).

You can see that with "gccgo". Slower to compile, faster to run.

cherryteastain•1h ago
Is gccgo really faster? Last time I looked it looked like it was abandoned (stuck at go 1.18, had no generics support) and was not really faster than the "actual" compiler.
Zardoz84•1h ago
Dlang compilers do more than any C++ compiler (metaprogramming, a better template system, and compile-time execution) and are hugely faster. Language syntax design has a role here.
dhosek•2h ago
Because Rust and Swift are doing much more work than a C compiler would. The analysis necessary for the borrow checker is not free, likewise with a lot of other compile-time checks in both languages. C can be fast because it effectively does no compile-time checking beyond basic syntax, so you can call foo(char) with foo(int) and other unholy things.
drivebyhooting•2h ago
That’s not a good example. Foo(int) is analyzed by compiler and a type conversion is inserted. The language spec might be bad, but this isn’t letting the compiler cut corners.
steveklabnik•2h ago
The borrow checker is usually a blip on the overall graph of compilation time.

The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.

kimixa•31m ago
While the borrow checker is one big difference, it's certainly not the only thing the rust compiler offers on top of C that takes more work.

Stuff like inserting bounds checking puts more work on the optimization passes and codegen backend as it simply has to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" it's unnecessary that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference as the parser isn't normally high on the profiled times either.

Generally it provides stricter checks that are normally punted to a linter tool in the c/c++ world - and nobody has accused clang-tidy of being fast :P

taylorallred•2h ago
These languages do more at compile time, yes. However, I learned from Ryan's discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner inference, generics, etc.) but I have my doubts that those things cause the most slowdown.
Thiez•2h ago
This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.
vbezhenar•1h ago
Is it possible to generate less junk? It sounds like the compiler developers took shortcuts that could be improved over time.
rcxdude•1h ago
Probably, but it's the kind of thing that needs a lot of fairly significant overhauls in the compiler architecture to really move the needle on, as far as I understand.
zozbot234•1h ago
You can address the junk problem manually by having generic functions delegate as much of their work as possible to non-generic or "less" generic functions (Where a "less" generic function is one that depends only on a known subset of type traits, such as size or alignment. Then delegating can help the compiler generate fewer redundant copies of your code, even if it can't avoid code monomorphization altogether.)
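A minimal sketch of the delegation pattern described above (the function names are invented for illustration):

```rust
use std::path::{Path, PathBuf};

// Public generic shim: one copy is monomorphized per caller type, but
// it is tiny -- it only converts the argument and delegates.
pub fn describe<P: AsRef<Path>>(path: P) -> String {
    // All the real work lives in a single non-generic function, so it
    // is compiled and optimized exactly once, no matter how many types
    // the generic wrapper is instantiated with.
    fn inner(path: &Path) -> String {
        format!("path has {} components", path.components().count())
    }
    inner(path.as_ref())
}

fn main() {
    // Three instantiations of the shim, one compiled body for `inner`.
    println!("{}", describe("a/b/c"));
    println!("{}", describe(String::from("a/b")));
    println!("{}", describe(PathBuf::from("a")));
}
```

The standard library itself uses this trick in places like `std::fs`, where a thin `AsRef<Path>` wrapper hands off to a non-generic worker.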
jvanderbot•2h ago
If you'd like the rust compiler to operate quickly:

* Make no nested types - these slow compile times a lot

* Include no crates, or ones that emphasize compiler speed

C is still v. fast though. That's why I love it (and Rust).

Aurornis•2h ago
> makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.

One of the primary features of Rust is the extensive compile-time checking. Monomorphization is also a complex operation, which is not exclusive to Rust.

C compile times should be very fast because it's a relatively low-level language.

On the grand scale of programming languages and their compile-time complexity, C code is closer to assembly language than modern languages like Rust or Swift.

ceronman•2h ago
I bet that if you take those 278k lines of code and rewrite them in simple Rust, without using generics, or macros, and using a single crate, without dependencies, you could achieve very similar compile times. The Rust compiler can be very fast if the code is simple. It's when you have dependencies and heavy abstractions (macros, generics, traits, deep dependency trees) that things become slow.
90s_dev•2h ago
I can't help but think the borrow checker alone would slow this down by at least 1 or 2 orders of magnitude.
steveklabnik•2h ago
Your intuition would be wrong: the borrow checker does not take much time at all.
FridgeSeal•2h ago
Again, as this been often repeated, and backed up with data, the borrow-checker is a tiny fraction of a Rust apps build time, the biggest chunk of time is spent in LLVM.
tomjakubowski•1h ago
The borrow checker is really not that expensive. On a random example, a release build of the regex crate, I see <1% of time spent in borrowck. >80% is spent in codegen and LLVM.
taylorallred•1h ago
I'm curious about that point you made about dependencies. This Rust project (https://github.com/microsoft/edit) is made with essentially no dependencies, is 17,426 lines of code, and on an M4 Max it compiles in 1.83s debug and 5.40s release. The code seems pretty simple as well. Edit: Note also that this is 10k more lines than the OP's project. This certainly makes those deps suspicious.
maxk42•1h ago
Rust is doing a lot more under the hood. C doesn't track variable lifetimes, ownership, types, generics, handle dependency management, or handle compile-time execution (beyond the limited language that is the pre-compiler). The rust compiler also makes intelligent (scary intelligent!) suggestions when you've made a mistake: it needs a lot of context to be able to do that.

The rust compiler is actually pretty fast for all the work it's doing. It's just an absolutely insane amount of additional work. You shouldn't expect it to compile as fast as C.

vbezhenar•1h ago
I encountered a project in the 2000s with a few dozen KLoC of C++. It compiled in a fraction of a second on an old computer. My hello world with Boost took a few seconds to compile. So it's not just about the language; it's about how you structure your code and which heavy-compile-cost features you use. I'm pretty sure you could write Doom with C macros and it wouldn't be fast. I'm also pretty sure you can write Rust code in a way that compiles very fast.
taylorallred•1h ago
I'd be very interested to see a list of features/patterns and the cost that they incur on the compiler. Ideally, people should be able to use the whole language without having to wait so long for the result.
kccqzy•1h ago
Templates as one single feature can be hugely variable. Its effect on compilation time can be unmeasurable. Or you can easily write a few dozen lines that will take hours to compile.
vbezhenar•33m ago
So there are a few distinctive patterns I observed in that project. Please note that many of these patterns are considered anti-patterns by many people, so I don't necessarily suggest using them.

1. Use pointers, and do not include the header file for a class if you only need a pointer to that class. I think that's a pretty established pattern in C++. So if you want to declare a pointer to a class in your header, you just write `class SomeClass;` instead of `#include "SomeClass.hpp"`.

2. Do not use STL or IOstreams. That project used only libc and POSIX API. I know that author really hated STL and considered it a huge mistake to be included to the standard language.

3. Avoid generic templates unless absolutely necessary. Templates force you to write your code in the header file, so it'll be parsed multiple times for every include, compiled into multiple copies, etc. Even when you do use templates, try to split the class into generic and non-generic parts, so some code can be moved from header to source. Generally prefer run-time polymorphism to compile-time generic polymorphism.

charcircuit•2h ago
Why doesn't the Rust ecosystem optimize around compile time? It seems a lot of these frameworks and libraries encourage doing things which are slow to compile.
steveklabnik•2h ago
The ecosystem is vast, and different people have different priorities. Simple as that.
nicoburns•1h ago
It's starting to, but a lot of people are using Rust because they need (or want) the best possible runtime performance, so that tends to be prioritised a lot of the time.
int_19h•21m ago
It would be more accurate to say that idiomatic Rust encourages doing things which are slow to compile: lots of small generic functions everywhere. And the most effective way to speed this up is to avoid monomorphization by using RTTI to provide a single compiled implementation that can be reused for different types, like what Swift does with generics across module boundaries. But this is less efficient at runtime because of all the runtime checks and computations now needed to deal with objects of different sizes etc.; many direct or even inlined calls become virtual, and so on.

Here's a somewhat dated but still good overview of various approaches to generics in different languages including C++, Rust, Swift, and Zig and their tradeoffs: https://thume.ca/2019/07/14/a-tour-of-metaprogramming-models...
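A small sketch of the two strategies being contrasted (illustrative code, not from the linked overview; Rust's `dyn Trait` stands in for the RTTI-style single implementation):

```rust
use std::fmt::Display;

// Monomorphized version: one specialized copy is generated per T,
// producing more code for the backend to compile.
fn show_mono<T: Display>(x: T) -> String {
    format!("{x}")
}

// Dynamic version: compiled exactly once; calls go through a vtable at
// runtime, trading codegen volume for indirect calls.
fn show_dyn(x: &dyn Display) -> String {
    format!("{x}")
}

fn main() {
    println!("{}", show_mono(42));   // instantiation for i32
    println!("{}", show_mono("hi")); // separate instantiation for &str
    println!("{}", show_dyn(&42));   // same compiled body both times
    println!("{}", show_dyn(&"hi"));
}
```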

aappleby•2h ago
you had a functional and minimal deployment process (compile copy restart) and now you have...
OtomotO•2h ago
It's not. It's just doing way more work than many other compilers, due to a sane type system.

Personally I don't care anymore, since I do hotpatching:

https://lib.rs/crates/subsecond

Zig is faster, but then again, Zig isn't memory safe, so personally I don't care. It's an impressive language; I love the syntax and the simplicity. But I don't trust myself to keep all the memory-relevant invariants in my head anymore as I used to do many years ago. So Zig isn't for me. Simply not the target audience.

o11c•2h ago
TL;DR `async` considered harmful.

For all the C++ laughing in this thread, there's really only one thing that makes C++ slow - non-`extern` templates - and C++ gives you a lot more space to speed them up than Rust does.

int_19h•29m ago
C++ also has async these days.

As for templates, I can't think of anything about them that would speed up things substantially wrt Rust aside from extern template and manually managing your instantiations in separate .cpp files. Since otherwise it's fundamentally the same problem - recompiling the same code over and over again because it's parametrized with different types every time.

Indeed, out of the box I would actually expect C++ to do worse because a C++ header template has potentially different environment in every translation unit in which that header is included, so without precompiled headers the compiler pretty much has to assume the worst...

gz09•2h ago
Unfortunately, removing debug symbols in most cases isn't a good/useful option
magackame•2h ago
What "most" cases are you thinking of? Also don't forget that a binary that weighs 10 MB in release can weigh 300 MB when compiled with debug symbols, which is way less practical to distribute.
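For what it's worth, Cargo profiles can keep both options available; a sketch (the `profiling` profile name is an arbitrary choice):

```toml
# Cargo.toml -- ship a lean release binary...
[profile.release]
strip = "symbols"   # drop symbols from the distributed binary

# ...but keep a separate profile with full debug info for local
# debugging and profiling.
[profile.profiling]
inherits = "release"
debug = true
strip = "none"
```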
kenoath69•2h ago
Where is Cranelift mentioned?

My 2c on this is nearly ditching rust for game development due to the compile times, in digging it turned out that LLVM is very slow regardless of opt level. Indeed it's what the Jai devs have been saying.

So Cranelift might be relevant for OP, I will shill it endlessly, took my game from 16 seconds to 4 seconds. Incredible work Cranelift team.
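For anyone wanting to try it, the Cranelift backend can be enabled roughly like this (nightly-only at the time of writing, and the details may have changed since, so treat this as a sketch):

```toml
# .cargo/config.toml -- use Cranelift for dev builds only
# (install first: rustup component add rustc-codegen-cranelift-preview
#  --toolchain nightly)
[unstable]
codegen-backend = true

[profile.dev]
codegen-backend = "cranelift"
```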

jiehong•2h ago
I think that’s what zig team is also doing to allow very fast build times: remove LLVM.
norman784•2h ago
Yes, Zig author commented[0] that a while ago

[0] https://news.ycombinator.com/item?id=44390972

norman784•2h ago
Nice. I checked a while ago and there was no support for macOS aarch64, but it seems that's supported now.
lll-o-lll•1h ago
Wait. You were going to ditch rust because of 16 second build times?
sarchertech•26m ago
16 seconds is infuriating for something that needs to be manually tested, like does this jump feel too floaty.

But it's also probable that 16 seconds was fairly early in development and it would get much worse from there.

metaltyphoon•25m ago
Over time that adds up when your coding consists of REPL like workflow.
smcleod•2h ago
I've got to say, when I come across an open source project and realise it's in Rust, I flinch a bit, knowing how incredibly slow the build process is. It's certainly been one of the deterrents to learning it.
juped•2h ago
I don't think rustc is that slow. It's usually cargo/the dozens of crates that make it take a long time, even if you've set up a cache and rustc is doing nothing but hitting the cache.
Devasta•1h ago
Slow compile times are a feature, get to make a cuppa.
zozbot234•1h ago
> Slow compile times are a feature

xkcd is always relevant: https://xkcd.com/303/

randomNumber7•1h ago
On the other hand, it drives you insane to try to do something useful during the 5-10 minute compile times you often have with C++ projects.

When I had to deal with this I would just open the newspaper and read an article in front of my boss.

duped•1h ago
A lot of people are replying to the title instead of the article.

> To get your Rust program in a container, the typical approach you might find would be something like:

If you have `cargo build --target x86_64-unknown-linux-musl` in your build process you do not need to do this anywhere in your Dockerfile. You should compile and copy into /sbin or something.

If you really want to build in a docker image I would suggest using `cargo --target-dir=/target ...`, then run with `docker run --mount type=bind,...` and copy out of the bind mount into /bin or wherever.
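A minimal sketch of the first approach (the binary name `myapp` is a placeholder; assumes the static musl build was done outside Docker):

```dockerfile
# Build on the host first:
#   cargo build --release --target x86_64-unknown-linux-musl
# The resulting binary is fully static, so no base image is needed.
FROM scratch
COPY target/x86_64-unknown-linux-musl/release/myapp /sbin/myapp
ENTRYPOINT ["/sbin/myapp"]
```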

cratermoon•1h ago
Some code that can make Rust compilation pathologically slow involves complex const expressions. Because the compiler can evaluate a subset of expressions at compile time[1], a complex expression can take an unbounded amount of time to evaluate. The `long_running_const_eval` lint will by default abort the compilation if the evaluation takes too long.

1 https://doc.rust-lang.org/reference/const_eval.html
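As an illustration (the `sum_to` helper is made up for this sketch), a `const fn` with a loop is run by the compiler's const-eval interpreter, so its cost scales with the loop bound rather than being a fixed lookup:

```rust
// Evaluated at compile time by rustc's const-eval interpreter;
// making the bound below much larger slows *compilation*, not the
// program, and past a point trips long_running_const_eval.
const fn sum_to(n: u64) -> u64 {
    let mut acc = 0;
    let mut i = 0;
    while i <= n {
        acc += i;
        i += 1;
    }
    acc
}

const SUM: u64 = sum_to(1_000);

fn main() {
    // The value is baked into the binary; no loop runs at runtime.
    println!("{SUM}");
}
```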

ozgrakkurt•48m ago
The Rust compiler is very, very fast, but the language has too many features.

The slowness is because everyone writes code with generics and macros, Java Enterprise style, in order to show they are smart with Rust.

This is really sad to see but most libraries abuse codegen features really hard.

You have to write a lot of things manually if you want fast compilation in rust.

Compilation speed of code just doesn’t seem to be a priority in general with the community.
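One common way to write things "manually" and cut codegen work (a sketch, not from the thread; function names are made up) is to take `&dyn Trait` instead of a generic parameter, so the compiler emits one body instead of one per concrete type:

```rust
use std::fmt::Display;

// Monomorphized: rustc generates separate machine code for every
// concrete T this is called with, which adds up across a crate graph.
fn label_generic<T: Display>(x: T) -> String {
    format!("[{x}]")
}

// Trait object: a single compiled body shared by all call sites,
// at the cost of dynamic dispatch through a vtable.
fn label_dyn(x: &dyn Display) -> String {
    format!("[{x}]")
}

fn main() {
    assert_eq!(label_generic(7), "[7]");
    assert_eq!(label_dyn(&7), "[7]");
}
```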

rednafi•25m ago
I'm glad that Go went the other way: compilation speed over optimization.

For the kind of work I do — writing servers, networking, and glue code — fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the price. Not having to deal with sigil soup is another plus point.

I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness. Given the amount of networking and large-scale infrastructure software written in Go, I think they absolutely nailed it.

But of course there are places where GC can’t be tolerated or correctness matters more than development speed. But I don’t work in that arena and am quite happy with the tradeoffs that Go made.

jmyeet•11m ago
Early design decisions favored run-time over compile-time [1]:

> * Borrowing — Rust’s defining feature. Its sophisticated pointer analysis spends compile-time to make run-time safe.

> * Monomorphization — Rust translates each generic instantiation into its own machine code, creating code bloat and increasing compile time.

> * Stack unwinding — stack unwinding after unrecoverable exceptions traverses the callstack backwards and runs cleanup code. It requires lots of compile-time book-keeping and code generation.

> * Build scripts — build scripts allow arbitrary code to be run at compile-time, and pull in their own dependencies that need to be compiled. Their unknown side-effects and unknown inputs and outputs limit assumptions tools can make about them, which e.g. limits caching opportunities.

> * Macros — macros require multiple passes to expand, expand to often surprising amounts of hidden code, and impose limitations on partial parsing. Procedural macros have negative impacts similar to build scripts.

> * LLVM backend — LLVM produces good machine code, but runs relatively slowly.

> * Relying too much on the LLVM optimizer — Rust is well-known for generating a large quantity of LLVM IR and letting LLVM optimize it away. This is exacerbated by duplication from monomorphization.

> * Split compiler/package manager — although it is normal for languages to have a package manager separate from the compiler, in Rust at least this results in both cargo and rustc having imperfect and redundant information about the overall compilation pipeline. As more parts of the pipeline are short-circuited for efficiency, more metadata needs to be transferred between instances of the compiler, mostly through the filesystem, which has overhead.

> * Per-compilation-unit code-generation — rustc generates machine code each time it compiles a crate, but it doesn’t need to — with most Rust projects being statically linked, the machine code isn’t needed until the final link step. There may be efficiencies to be achieved by completely separating analysis and code generation.

> * Single-threaded compiler — ideally, all CPUs are occupied for the entire compilation. This is not close to true with Rust today. And with the original compiler being single-threaded, the language is not as friendly to parallel compilation as it might be. There are efforts going into parallelizing the compiler, but it may never use all your cores.

> * Trait coherence — Rust’s traits have a property called “coherence”, which makes it impossible to define implementations that conflict with each other. Trait coherence imposes restrictions on where code is allowed to live. As such, it is difficult to decompose Rust abstractions into small, easily-parallelizable compilation units.

> * Tests next to code — Rust encourages tests to reside in the same codebase as the code they are testing. With Rust’s compilation model, this requires compiling and linking that code twice, which is expensive, particularly for large crates.

[1]: https://www.pingcap.com/blog/rust-compilation-model-calamity...
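A few of the costs in that list can be dialed back per project through Cargo profile settings; a minimal sketch (standard Cargo keys, values chosen for illustration):

```toml
[profile.dev]
# Addresses the "stack unwinding" point: skip unwinding
# bookkeeping and codegen entirely for debug builds.
panic = "abort"
# Less debug info means less work for codegen and the linker.
debug = "line-tables-only"
```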