I wonder if they're spending their time... let's call it "inefficiently"? Spending so much effort on speeding up symbol interning seems inefficient to me, at the very least. Whatever rustc spends most of its time on, it can be anything but parsing. My guess would be that type checking, with all the traits involved, is one major culprit, and the other is optimization, because zero-cost abstractions may be O(1) at run time, but they are neither zero-cost nor O(1) at compile time, and there are a lot of them.
So, I wish the author all the best (I believe they're having a lot of fun), but I think if their goal is to build a fast compiler, they're wasting their time.
I'd suggest grabbing any parser generator, building a parser, and getting a working compiler first. Then profile it, and only then start making plans for improving performance. Or maybe take rustc and profile it?
I mean, why invent a super-duper concurrent hash table if it isn't known to slow the compiler down significantly? Why invent anything? Why not just overcommit memory upfront, making the collision rate vanishingly small, optimizing the happy path while letting collision handling be as slow as it gets? Will it take too much memory? Really? How much is "too much"? I don't see any discussion of this, even though it's the most obvious path. With obvious tradeoffs, of course, but without measurements the tradeoffs can only be judged qualitatively, so you can't decide whether they're worth it.
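To make the "overcommit upfront" idea concrete, here is a minimal sketch of a string interner backed by a deliberately oversized open-addressing table, so that at realistic symbol counts the load factor stays tiny and the first probe almost always succeeds. All names here (`Interner`, `intern`, `TABLE_SIZE`) are hypothetical, invented for illustration; this is not anyone's actual implementation, and a real interner would need resizing or a sharper memory bound.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Power of two, far larger than the expected number of distinct symbols,
// so the table stays nearly empty and collisions are vanishingly rare.
const TABLE_SIZE: usize = 1 << 20;

pub struct Interner {
    // One slot per bucket: (symbol text, assigned id). Mostly `None`.
    slots: Vec<Option<(String, u32)>>,
    next_id: u32,
}

impl Interner {
    pub fn new() -> Self {
        Interner {
            slots: (0..TABLE_SIZE).map(|_| None).collect(),
            next_id: 0,
        }
    }

    /// Return a stable id for `s`, inserting it on first sight.
    pub fn intern(&mut self, s: &str) -> u32 {
        let mut h = DefaultHasher::new();
        s.hash(&mut h);
        let mut i = (h.finish() as usize) & (TABLE_SIZE - 1);
        loop {
            match &self.slots[i] {
                // Happy path: first probe finds the matching entry.
                Some((sym, id)) if sym == s => return *id,
                // Rare collision: plain linear probing, no cleverness.
                Some(_) => i = (i + 1) & (TABLE_SIZE - 1),
                // Empty slot: claim it for this new symbol.
                None => {
                    let id = self.next_id;
                    self.next_id += 1;
                    self.slots[i] = Some((s.to_string(), id));
                    return id;
                }
            }
        }
    }
}
```

The tradeoff is exactly the one the comment asks to be measured: with `Option<(String, u32)>` at roughly 32 bytes per slot, a million-slot table costs on the order of 32 MB up front regardless of how few symbols the program actually has, in exchange for a collision-free happy path.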
ordu•13m ago