In fact, when it comes to LLMs, the more behaviour we can model at compile time the better - there are some cool ideas here, like transpiling Rust into languages for formal verification. See https://github.com/formal-land/coq-of-rust as an example.
Formal verification used to be so annoying to do that it rarely made it past academic use cases or extremely important libraries, but I think LLMs take the tedium out of it. Perhaps formal verification will have a "test-driven development" style moment in the sun thanks to this.
tsc error messages are so bad that every time my LLM sees one of those "SomeType is not assignable to SomeLongAssTypeDontEvenTryToUnderstandWhatsGoingOnHere<<<<>>>>>>>>>>>>>>>>>>>>" errors it just gives up and casts to any. Goes for Python too.
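For illustration, the failure mode looks something like this (the types here are made up, but the error shape and the escape hatch are real):

```typescript
// Hypothetical deeply nested generic type, the kind tsc complains about.
type DeepConfig<T> = {
  values: Map<string, Array<Partial<Record<keyof T, T[keyof T]>>>>;
};

interface Settings {
  retries: number;
  verbose: boolean;
}

const raw = { values: new Map([["a", [{ retries: "3" }]]]) };

// tsc: Type 'Map<string, { retries: string; }[]>' is not assignable to
// type 'Map<string, Partial<Record<keyof Settings, number | boolean>>[]>' ...
// const typed: DeepConfig<Settings> = raw; // several screens of this

// The LLM's escape hatch: silence the checker instead of fixing the data.
const cfg: DeepConfig<Settings> = raw as any;
```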
Is Go faster than Rust?
Depends on how you write the Go or Rust code. The most optimized Rust rewrite of the TypeScript compiler would very likely be faster than the most optimized version in Go. However, they didn't want a rewrite; they wanted to port the existing compiler codebase, which is written in TS. Go, like TS (ultimately the JS runtime), is also garbage-collected, which makes a 1-to-1 port much easier.
The reason for this decision is that they wanted a near 1:1 port of the TypeScript code to Go, keeping the design and structure almost identical.
You can't do that in Rust as easily because of all the cyclical references and indirection involved.
A Rust port would be a rewrite. This is merely a migration.
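To make the GC point concrete: compiler ASTs are full of parent/child cycles, which are idiomatic in TypeScript or Go but need Rc/RefCell, Weak pointers, or an arena of indices in Rust. A minimal sketch (the node shape is illustrative, not the actual compiler's):

```typescript
// Parent/child cycles: trivial in a GC'd language, painful to port to Rust.
interface AstNode {
  kind: string;
  parent: AstNode | undefined; // back-reference creates a cycle
  children: AstNode[];
}

function addChild(parent: AstNode, kind: string): AstNode {
  const child: AstNode = { kind, parent, children: [] };
  parent.children.push(child); // parent -> child and child -> parent
  return child;
}

const root: AstNode = { kind: "SourceFile", parent: undefined, children: [] };
const fn = addChild(root, "FunctionDeclaration");
// The GC collects the whole cycle once it's unreachable; a Rust port would
// have to restructure this with Rc/Weak or an arena of node indices.
console.log(fn.parent?.kind); // "SourceFile"
```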
No.
They rewrote it in Go because Go is similar enough to TypeScript while being faster.
Source: https://github.com/microsoft/typescript-go/discussions/411
Need to dig in a bit more on the implementation, but I was surprised that the paper didn't mention hooking into an existing language service/server. There's more than types that an LLM could leverage from existing language tooling. Auto-imports are a good example: they help a human developer keep a linear writing flow, something an LLM needs even more.
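For instance, the Language Server Protocol already exposes this: the same textDocument/completion request editors use is where auto-import suggestions come from, so a generation harness could query it at the cursor position. A rough sketch of the request (the file URI and position are made up):

```typescript
// Body of a standard LSP request an LLM harness could issue mid-generation.
// Auto-import suggestions come back as CompletionItems whose
// additionalTextEdits insert the import line at the top of the file.
const completionRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/completion",
  params: {
    textDocument: { uri: "file:///project/src/main.ts" }, // hypothetical file
    position: { line: 10, character: 14 }, // where the model is "typing"
  },
};

// The LSP wire format adds Content-Length framing around the JSON body.
const body = JSON.stringify(completionRequest);
console.log(`Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`);
```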
https://arxiv.org/abs/2407.08983
AST-T5: Structure-Aware Pretraining for Code Generation and Understanding
https://arxiv.org/abs/2401.03003
CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation
That this research comes out of universities, and not large AI labs, makes me think those labs believe that larger models are still the way to go.
jiggawatts•2h ago
The real challenge will be to make this detect and switch languages automatically. For example, a snippet of code could include a LaTeX formula in a comment and SQL in a string literal. There are many more cases: regex inside a shell script, and so on.
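One way to picture it: the decoder's grammar constraint has to track a stack of active languages, pushing when generation enters a string literal or comment that hosts another language and popping at the closing delimiter. A toy sketch (the language set is illustrative):

```typescript
// Track which grammar should constrain the next token as generation
// crosses into embedded languages (SQL in strings, LaTeX in comments, ...).
type Lang = "typescript" | "sql" | "latex";

class LangStack {
  private stack: Lang[] = ["typescript"];

  current(): Lang {
    return this.stack[this.stack.length - 1];
  }
  // Called when the tokenizer sees a boundary like sql`...` or a comment open.
  enter(lang: Lang): void {
    this.stack.push(lang);
  }
  // Called at the closing delimiter: fall back to the host language's grammar.
  exit(): void {
    if (this.stack.length > 1) this.stack.pop();
  }
}

const ctx = new LangStack();
ctx.enter("sql");           // generation entered a SQL string literal
console.log(ctx.current()); // "sql": constrain the next tokens with SQL grammar
ctx.exit();
console.log(ctx.current()); // "typescript"
```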
The obvious next step after that is backtracking. It's possible to emit a token that is valid but then allows no further valid completions. In other words, the model can paint itself into a corner. To my knowledge, no current online LLM service uses any kind of backtracking; they run in append ("forwards") mode only.
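A toy sketch of what that would look like: backtracking turns constrained decoding into a depth-first search over token prefixes instead of a single forward pass (validPrefix and candidates here are stand-ins for an incremental grammar/type checker and the model's ranked next tokens):

```typescript
// Toy depth-first constrained decoding with backtracking. `validPrefix`
// stands in for an incremental grammar/type checker; `candidates` stands in
// for the model's top-k next tokens given a prefix.
function generate(
  prefix: string[],
  validPrefix: (p: string[]) => boolean,
  candidates: (p: string[]) => string[],
  done: (p: string[]) => boolean,
): string[] | null {
  if (done(prefix)) return prefix;
  for (const tok of candidates(prefix)) {
    const next = [...prefix, tok];
    if (validPrefix(next)) {
      // A token can be locally valid yet lead to a dead end; the recursive
      // call returning null is what lets us back out of the corner.
      const result = generate(next, validPrefix, candidates, done);
      if (result !== null) return result;
    }
  }
  return null; // no valid continuation from this prefix: backtrack
}
```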
tough•1h ago
https://arxiv.org/abs/2504.00532
IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking
https://arxiv.org/abs/2410.07295
ROCODE: Integrating Backtracking Mechanism and Program Analysis in Large Language Models for Code Generation
https://arxiv.org/abs/2411.07112v1