Can someone explain this? Are they suggesting that (eventually) one engineer can produce 1 million lines of Rust code in a month? Or replace 1 million lines of C code?
Using new "powerful code processing infrastructure"... but would it understand the semantics? Are those semantics clearly documented?
How is it wild? On social media I kept seeing people falsely assume that the end goal would require manually reading through a million lines of code. It seemed more like people making up reasons to be mad or trying to dunk on the author.
Which is absolutely batshit. There's no way that can be reviewed properly, even if it's putting all of the review work on all of the other teams.
This is "lets put our postgres database on blockchain because I think blockchain is cool" level of crap you see in peak bubble.
That’s not to trivialize what a compiler does, but it’s effectively going from a complex form to its building blocks while maintaining semantics.
Translating between high-level languages introduces fundamentally different semantics. Both can decompose to the same general building blocks, but you can't necessarily compose them the same way.
As the simplest example, a compiler backend (the part you're describing) can't reason about data access rules. That is the domain of the language's compiler frontend, and it's a fundamental difference between C++ and Rust that can't just be directly derived.
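To make that concrete, here's a minimal sketch (my illustration, not anything from the post): rustc's frontend rejects the following under its aliasing rules, while the equivalent C++ (keep a pointer into a vector, push_back, then dereference) compiles cleanly and is simply undefined behavior. Nothing at the backend/IR level encodes that distinction.

    // Deliberately rejected by rustc (error E0502): the frontend's
    // borrow checker forbids mutating `v` while `first` still borrows it.
    // The equivalent C++ compiles fine and dereferences a dangling pointer.
    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // immutable borrow of `v`
        v.push(4);           // error: cannot borrow `v` as mutable
        println!("{first}"); // immutable borrow still in use here
    }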
AI code generation is not deterministic and comes with no behavioral guarantees, so it requires review unless incorrect code is acceptable.
You don't have to make AI code generation the thing that actually generates the code, or you could require some kind of proof of equivalence to verify whatever code it did generate.
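For a sense of what the cheap end of that could look like, here's a minimal sketch of differential testing as a (much weaker) stand-in for a real equivalence proof. `legacy_parse` and `rust_parse` are hypothetical placeholders, not anything from the announcement:

    // Run the legacy implementation and the generated rewrite on the
    // same inputs and assert they agree. A real harness would drive the
    // old C code through FFI and feed inputs from a fuzzer or a
    // property-testing crate rather than a fixed list.
    fn legacy_parse(input: &str) -> Option<u32> {
        input.trim().parse().ok()
    }

    fn rust_parse(input: &str) -> Option<u32> {
        input.trim().parse().ok()
    }

    fn main() {
        let cases = ["42", " 7 ", "-1", "not a number", ""];
        for case in cases {
            assert_eq!(legacy_parse(case), rust_parse(case), "diverged on {case:?}");
        }
        println!("all {} cases agree", cases.len());
    }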
They do still review code, but the first wave of layoffs in 2022 mainly hit principal engineers because some bean counters said "oh, these are the engineers that are costing us the most per head", so it's kind of the inmates running the asylum now.
And I'll say that their biggest sin was always that their code from the late 90s on was about 20% too clever for their own good. Kind of goes to that classic quip about how debugging code takes twice the brainpower that writing it does, so if you were already maxing out just writing it, then you're not smart enough to debug it. That's half of why features seemed to get a 1.0 release, then get replaced with something else rather than iteratively improved (the other half being FAANG-style internal incentive structures).
We're all seeing the effects of them cleaning house of the weaponized autism that was barely keeping the wheels on the wagon. They do review, but they don't have the ability to do it properly at scale anymore. Which makes rewriting everything even more batshit.
As a third-party developer in the late 2000s, I remember my boss giving me a CD-ROM binder (binders?) of every single OS release that Microsoft had ever put out. I assume he'd been given it by his developer-relations rep at Microsoft. My team and I used it to ensure our code worked on every MSDOS/Win* platform we cared to target.
I expect that, internally, the Windows team have crazy amounts of resources to implement the most comprehensive regression testing suite ever created. To that extent, at least, you’d be able to tell if the Rust version did what the old code did even if you didn’t read the code itself.
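Even without seeing Microsoft's actual suite, the shape of such a check is simple to sketch: capture the old implementation's outputs once as golden fixtures, then assert the Rust port reproduces them. `render_window_title` and the fixture path below are hypothetical examples, assuming such fixtures exist:

    use std::fs;

    // Hypothetical ported function under test.
    fn render_window_title(app: &str, doc: &str) -> String {
        format!("{doc} - {app}")
    }

    #[test]
    fn matches_recorded_behavior() {
        // Each tab-separated fixture line holds the inputs and the
        // output the old code produced for them.
        let golden = fs::read_to_string("tests/golden/window_titles.tsv").unwrap();
        for line in golden.lines() {
            let mut cols = line.split('\t');
            let app = cols.next().unwrap();
            let doc = cols.next().unwrap();
            let expected = cols.next().unwrap();
            assert_eq!(render_window_title(app, doc), expected);
        }
    }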
That thing we don't have yet?
Further, this is not a random speculative post; it is an announcement for a job opening on the poster's team.