Comment: How about get an actual published language standard? How about get more implementations of it?
In the discussion, @infogulch wrote: "If you are aiming to be the foundation of an entire industry it doesn't seem unreasonable to ask for some specifications." https://news.ycombinator.com/item?id=44926375
I agree with @infogulch
Because I’m a cynical person, I view LLMs like I’ve viewed Rust for years: a neat tool that makes a small subset of people happy and a touch more productive.
The evangelical defense of either topic is very off-putting, to say the least.
@infogulch really nailed it with the comment about how the rust community wants influence/power, and ignores the responsibility that comes with said influence/power.
The most difficult part for alternative Python implementations is supporting the same stdlib.
You have a compiler, that’s MIT licensed. It will be MIT licensed in perpetuity. It will continue to compile your code forever, with absolutely no changes needed to your code.
How, precisely, does your life change if there is a second compiler or not? Or if there’s a spec? And more interestingly, why does it affect only operating systems development and not web server development?
We see it time and again across anything that has two implementations.
The pace of development is pretty good, to the point where one of the main criticisms of Rust is that it moves too quickly. The stability is pretty good; I can only remember one time when code was broken. Compile times are mediocre, but three ambitious projects hope to tackle that: a parallel frontend, a Cranelift backend, and changing the default linker to a better one (landing next month).
This argument would carry more weight if you could point out something specific that the Rust project isn’t addressing but might address if an alternate implementation existed. Remembering of course, that the second implementation would have to be compatible with the OG implementation.
We don't know because we don't, and can't, have a competing compiler.
Languages that have competing compilers all improve on each other.
If you don’t know in what way Rust needs to be improved, you don’t know enough to be making confident pronouncements of what it needs.
And like I’ve pointed out before on this thread, a second compiler does exist. It could help with bootstrapping because it’s written in C++ but that’s about it?
Plenty of successful languages have just the one implementation. Python is a great example. Slowing down python development by making everyone proposing a new feature have to convince two separate teams to adopt it wouldn’t help either the Python ecosystem or the Rust one.
Frankly, this thread has gone like every other thread on this subject. Advocates for a spec and second compiler are unable to come up with technical arguments in favour of it. Instead they simply say “well I’ve seen it done by this other language, so it should be done here as well.” That’s just cargo culting.
The only thing different about this thread is hearing from people that the work-in-progress spec and second compiler aren’t enough. No, it needs to be ready right now!!! Yeah, alright.
I do find it curious that the people who write comments demanding a spec and alternate implementations aren’t aware of the work-in-progress spec and alternate implementations.
Would it surprise you to learn that they’re making steady progress, even merging in two PRs two hours ago? https://github.com/rust-lang/reference
People see what they want to see. They complain no matter what. Remember this thread started with “there’s no spec”, then “the spec work has been abandoned”, then “the spec work isn’t going fast enough”.
When they’re moving the goalposts this fast it’s hard to keep up. Especially because, like my other comment says, it’s not clear what they’d actually use the spec for.
IMHO, I'm familiar enough with the spec effort to say that the GP and other naysayers are basically right. Despite my hopes, the effort to specify Rust has not gone well.
In that case, I agree that the spec is different from the reference they’re building.
To try and understand where you’re coming from: you want the development of Rust to be similar to C++, where a group of people decide on a spec and implementations then implement it?
A language reference is like documentation and a language specification is like a contract. If I uncover behavior in a system that isn’t documented, then I will update the documentation. But if the behavior fails to match the (formal) spec, then the system is at fault for not meeting the expectations of the contract, and the system needs to be corrected. As https://github.com/rust-lang/rfcs/pull/3355#issuecomment-134... states:
> If you built a rocket and that rocket crashes, you wouldn't update the spec of the rocket to say "it is expected to crash after reaching 3000m altitude". But if you made a typo that says the rocket should crash after reaching 3000m altitude and somehow passed review, you wouldn't add a self detonation device into the rocket just because of this either.
In contexts like safety critical/offline environments, the consequences of unexpected behavior can be dire and hard to fix in the moment (e.g. airplanes in flight). Liability becomes a real concern, and outside parties need a "contract" to be confident certain expectations or behaviors will be met in perpetuity. The reference doesn't offer that guarantee, and has understood itself to be a best-effort technical reference, despite attempts to increase its authoritativeness into a spec https://github.com/rust-lang/reference/issues/629
My opinion is that the spec is actually more needed for people inside the language than for people outside. The contractual element of a language spec fulfills a constitutional, if not judicial, role in the development process. However, I've learned that this requires a level of integrity that Rust can no longer hope to achieve.
My perspective of following the compiler development is that very often much needed features aren’t developed because of outside constraints - lack of a person to push it through, lack of consensus on how to implement it and the codebase not being built in a way that supports the feature at all.
A couple of features spring to mind - partial borrows of struct fields by methods, and several “obvious” improvements to the type system. I think everyone wants the first feature, but it’s hard to implement in the current borrow checker codebase. Similarly, many features are blocked on the next-gen trait solver because the current one is crusty and difficult to work with. Fortunately, I think they’re relatively close with the latter. I believe rust-analyzer is moving to the new trait solver and perhaps rustc will in the future as well. That will unlock a lot of improvements and, importantly, fix many soundness issues in the compiler as well.
I can see where you’re coming from. A spec that you can point to allows a user to report a bug with some authority - your implementation is failing to adhere to the spec you wrote. Therefore this bug should be prioritised. And maybe this will lead to higher quality software.
The other perspective is that I’ve seen a lot of good developers, who have a lot of context on the project, leave the project. The reasons vary, but I gather burn out is a common one. They like their coworkers, they like the impact they’re having, but they don’t like a lot of the problems that come with maintaining a large open source project. Entitled people, drive by commenters, a codebase that has developed in layers over 15 years, slow feedback loops on RFCs. I fear that if we added people demanding fixes because of spec adherence, it would be one more source of burnout. This could slow down development as more people leave.
The folks working on the new trait solver are aware of the issues in the existing trait system. You can see all the soundness issues, many of which are tagged T-types here (https://github.com/rust-lang/rust/issues?q=is%3Aissue+is%3Ao...). They really want to fix these, and they hope the new trait solver makes it easier to get there. The constraints of the implementation appear to be driving which soundness issues get worked on. Issues have been paused with the hope that they will go away or be easy to fix with the new system. All energy goes instead to migrating to the new system.
Would it really help if someone said: hey, according to the project constitution, you’ve committed to fixing spec violations before you do feature work like the new trait solver? Maybe, I don’t know. My experience with spec-driven development is limited.
But my intuition would be to trust the people who’ve done the development so far. They’ve done a pretty good job of producing a successful, performant language. They’re able to add features much more quickly than most other comparable languages while maintaining a high degree of stability. They’ve supported an ecosystem that has roughly doubled in size every year for 10 years. If they want to do spec driven development, sure. If they don’t, that’s fine with me too.
I want them to choose a development system and a quality bar that they’re happy with. Today it’s RFCs and crater runs. Tomorrow it could be a spec and conformance tests. If they’re happy, I’m happy.
I’m curious, did you Google before writing this comment? I did just now and the first result I found is https://github.com/rust-lang/reference.
This is something the Rust project is taking very seriously. You can read up on their approach in this detailed blog post from November 2023 - Our Vision for the Rust Specification (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...).
This is an ambitious and difficult project that is a work in progress. They’re continuing to work hard at it. There were 2 PRs merged to master less than 2 hours ago when I wrote this comment. On a Saturday, that’s commitment right there.
> How about get more implementations of it?
Here’s what another quick google search yielded. From the official Rust blog, published November 2024: gccrs: An alternative compiler for Rust (https://blog.rust-lang.org/2024/11/07/gccrs-an-alternative-c...)
Hope that helps dispel the confusion around the lack of spec or alternate implementation.
You linked to efforts to create a specification, but they aren't done or close to done.
It's also not clear the goal is to create a standard. E.g., according to the spec-vision document, the spec isn't planned to be authoritative (the implementation is). If that doesn't change the spec can't really function as a standard. (A standard would allow independent implementations and independent validation of an implementation. A non-authoritative spec doesn't.)
I'm sure they would be among the most interested in a spec. They are currently forced to use the implementation (continuously evolving) as the spec. That makes their job considerably more difficult.
The team that was formed to carry out RFC3355 still exists, though it's had an almost complete turnover of membership.
They're currently engaged in repeating the process of deciding what they think a spec ought to be like, and how it might be written, from scratch, as if the discussion and decisions from 2023 had never happened.
The tracking issue for the RFC was https://github.com/rust-lang/rust/issues/113527 . You can see the last update was in 2023, at the point where it was time to start the actual "write the spec" part.
That's when it all fell apart, because most of the people involved were, and are, much more interested in having their say on the subject of what the spec should and shouldn't be like than they are in doing the work to write it.
After the "write a new spec" plan was abandoned in 2024 the next stated plan was to turn the Reference into the spec, though they never got as far as deciding how that was going to work. That plan lasted six months before they changed direction again. The only practical effect was to add the paragraph identifiers like [destructors.scope.nesting.function-body].
They're currently arguing about whether to write something from scratch or somehow convert the FLS[1] into a spec.
To argue by example: Ruby has an ISO standard, but that standard is for a very old version of the language. Python doesn’t have an independent standard at all; it’s like Rust in that the reference implementation is itself the standard. But nobody is champing at the bit to replace their Ruby or Python stack for standards reasons.
“It’s X under the hood” is true, but only in the same sense that “computer programs are made of coffee” is true. You can’t easily replace a Ruby stack with a Python one just because both have a reference C implementation; that’s the entire point of having high level languages with rich abstractions.
One of the founding members wrote a blog post & RFC which gives some good reasons why a spec is needed https://blog.m-ou.se/rust-standard https://rust-lang.github.io/rfcs/3355-rust-spec.html
> Basically every computational system lives and dies on its degree of formal specifiability
How much software development work would you say qualifies as "construction of formal computational systems"? I feel like I have to be thinking of something different than you because to a first approximation I think ~no software has much in the way of formal specification done on it, if any at all.
I feel like there's a bit of black-and-white thinking here as well. It's not as if you pick either "full formal specification with bonus Lean/Coq/etc." or "YOLO" - there's shades of grey in between where you can nail down different bits with more or less specificity/formality/etc. depending on feasibility/priorities/etc. Same with support for/against a formal spec - there's more nuance than "an absolute necessity" or "a waste of time".
All of it. That's a literal definition of what a piece of software is: a formal system for computing things. Whether we realize it or like it or not, that's what we are doing when we build software, and that's why formalization of implementation-independent details is almost always beneficial. Sure, you don't always need to do this formalization in Lean or something, but you should at least have a document that outlines the behaviors of the system, its invariants, and so on. Simply pointing people at a codebase and saying "there it is", or making the present, incidental behavior of an implementation the source of truth, is like building a bridge and answering the question "how do we know it won't collapse under load X" with "idk, put the load on it and see".
I know it sounds a bit extreme, but I actually would stick to my original stance here. If you have an implementation but haven't worked out its design formally then you probably have a buggy implementation.
An aversion to and underappreciation of formalism is far more endemic in contemporary software development than a penchant for it. Arguably, if people in the industry took the title of "engineer" a little more seriously, we'd have better systems.
Civil engineers work out formal specifications for bridges even when they are working on a tiny foot bridge in a park. Why? Because that's basically the whole job. They don't throw their hands in the air and say "a formal spec ain't worth it, just build the bridge intuitively"—plenty of bridges are built this way, but we don't call those builders engineers, and for good reason.
I completely agree that in reality, we often cannot achieve the ideal precisely for the reasons you described. Constraints may make it infeasible. But for the particular case of Rust, I'd argue that that isn't the case, and my point is more so that those entertaining the idea that formal specification doesn't even have value might be doing something with computers, but it isn't engineering.
I think my biggest objection to mandating formalism (in the abstract - I do find value in it in some situations) in computing is how little we know about computing compared to what we know about aviation or bridge building. Those are mature industries; computing is unsettled and changing rapidly. Even the formalisms we do have in computing are comparatively weak.
I don't think that we should mandate formalism. I'm just trying to say that diminishing the value of formalism is bad for the industry as a whole.
And point taken about maturity, but in that sense if we don't encourage people to actually engage in defining specification and formalizing their software we won't ever get to the same point of maturity as actual engineering in the first place. We need to encourage people to explore these aspects of the discipline so that we can actually set software on better foundations, not discourage them by going around questioning the inherent value of formalism.
Rust is a particularly good example because, as other commenters have pointed out, if we believe it's a waste of time to formalize the language we purportedly want everyone to use to build foundational software, what exactly would we formalize then? If you aren't going to formalize that because it "isn't worth it", well, arguably, nothing is worth formalizing then if the core dependency itself isn't even rigorously defined.
People also forget the other benefits of having a formal spec for a language. Yes it enables alternative implementations, but it also enables people to build tons of other things in a verifiably correct way, such as static analysis checks and code generators.
True, all other things being equal. But your logic falls down when all other things aren't equal.
Rust is a more "secure ground" than C even though C has an official specification and Rust doesn't really.
Also you shouldn't say "formal specification" in this context because I don't think you really mean https://en.wikipedia.org/wiki/Formal_specification
1. Self-referencing structs. Especially where you want to have something like a source file and the parsed AST in the same struct. You can't easily do that at the moment. It would be nice if there was something like an offset reference that made it work. Or something else...
2. The orphan rule. I get it, but it's still annoying. We can do better than newtype wrappers (which sometimes have to be nested 2 or 3 levels deep!).
3. The fact that for reasonable compile time you need to split projects into lots of small crates. Again, I understand the reasons, but the result sucks and we can definitely do better. As I understand it this is because crates are compiled as one compilation unit, and they have to be because circular dependencies are allowed. While that is true, I expect most code doesn't actually have circular dependencies so why not make those opt-in? Or even automatically split items/files within a crate into separate compilation units based on the dependency graph?
There's probably more; this is just what I can remember off the top of my head.
Hopefully that's constructive criticism. Rust is still my favourite programming language by far.
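To make point 1 concrete, here's a minimal sketch (the types and names are my own) of the pattern that can't be expressed today, plus the usual offset-based workaround:

```rust
// The pattern that doesn't compile: a struct that owns a source string
// and also holds references into it. There is no lifetime you can
// write for `ast` that means "borrows from the `src` field above".
//
// struct Parsed<'a> {
//     src: String,
//     ast: Vec<&'a str>, // no way to tie 'a to `src`
// }

// A common workaround: store byte offsets instead of references and
// resolve them on demand.
struct Parsed {
    src: String,
    ast: Vec<(usize, usize)>, // (start, end) spans into `src`
}

impl Parsed {
    fn node_text(&self, i: usize) -> &str {
        let (start, end) = self.ast[i];
        &self.src[start..end]
    }
}
```

With this shape `Parsed` owns the source and resolves spans lazily, at the cost of losing lifetime-checked references.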
Would add 4: partial self-borrows (the ability for methods to borrow only part of their struct).
For 3, I think the low hanging fruit is probably better support for just using multiple crates (support for publishing them as one package for example).
By that do you mean that there are better alternatives that Rust could adopt or that we need such alternatives (but they could not exist)?
Rust allows circular dependencies between modules within a single compilation unit, but crates can't depend on each other circularly.
> I expect most code doesn't actually have circular dependencies
Not true, most code does have benign circular dependencies, though it's not common to realize this. For example, consider any module that contains a `test` submodule, that's a circular dependency, because the `test` submodule imports items from the parent (it has to, because it wants to test them), but also the parent can refer to the submodule, because that's how modules and submodules fundamentally work. To eliminate the circular dependency you would need to have all test functions within every compilation unit defined within the root of the crate (not in a submodule off of the root; literally defined in the root namespace itself).
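The shape being described, in miniature: any Rust module with a `#[cfg(test)]` submodule has this parent/child relationship.

```rust
// The parent module defines items...
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

// ...and also contains its `tests` submodule, which imports those
// items back from the parent: the benign circular dependency
// described above.
#[cfg(test)]
mod tests {
    use super::*; // child depends on the parent's items

    #[test]
    fn adds() {
        assert_eq!(add(2, 2), 4);
    }
}
```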
Yes, that's what I was saying.
> that's a circular dependency, because the `test` submodule imports items from the parent
This isn't a circular dependency because the parent doesn't import anything from the test module.
It would mean your compilation unit might have to be a subset of a module, but I think that's fine?
Foundational software needs performance, reliability—and productivity
This is not how good software is written; it's dogma. Do something people want, don't force it on them. All I see is another dependency. Don't tell me you're funny, tell me a joke. But just saying this will likely attract an avalanche of downvotes; it's like the only things you can't talk about online are anything against Rust or the genocide in Palestine.
Then pay attention.
Rust has been in Windows 11 since 2023.
The emphasis on cross-language compatibility may be misplaced. You gain complexity and lose safety. If we have to do that, it would help if it were bottom-up, replacing libc, for example.
Go doesn't do cross-language calling much. Google paid people to build the foundations, the low-level libraries right. Rust tends toward many versions of the basics, most flawed.
One of the fundamental problems in computing for about two decades now has been getting programs on the same machine to efficiently talk to each other. Mostly, people have hacked on the Microsoft .DLL concept, giving them state. Object request brokers, like CORBA, never caught on. Nor did good RPC, like QNX. So people are still trying to link stuff together that doesn't really want to be linked.
Or you can do everything with sockets, even on the same machine. They're the wrong abstraction, raw byte streams, but they're available.
I'm currently struggling with the connection between Apache mod_fcgid (retro) and a Rust program (modern). Apache launches FCGI programs as subprocesses, with stdin and stdout connected to the parent via either pipes or UNIX local sockets. There's a binary framing protocol, an 8-byte header with a length. You can't transmit arbitrarily large messages; those have to be "chunked". There's a protocol for that. You can have multiple transactions in progress. The parent can make out of band queries of the child. There's a risk of deadlock if you write too much and fill the pipe when the other end is also writing. All that plumbing is specific to this application.
(Current problem: Rust std::io appears to not like stdin being a UNIX socket. Trying to fix that.)
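For reference, a sketch of parsing that 8-byte header in Rust. The field layout follows the FastCGI spec; the struct and field names are my own:

```rust
// Hypothetical sketch of parsing the 8-byte FastCGI record header
// described above: version, type, a big-endian request id, a
// big-endian content length, a padding length, and a reserved byte.
#[derive(Debug)]
struct FcgiHeader {
    version: u8,
    record_type: u8,
    request_id: u16,
    content_length: u16, // max 65535, so larger payloads get "chunked"
    padding_length: u8,
}

fn parse_header(buf: &[u8; 8]) -> FcgiHeader {
    FcgiHeader {
        version: buf[0],
        record_type: buf[1],
        request_id: u16::from_be_bytes([buf[2], buf[3]]),
        content_length: u16::from_be_bytes([buf[4], buf[5]]),
        padding_length: buf[6],
        // buf[7] is reserved
    }
}
```

Reading exactly `content_length + padding_length` further bytes after each header is where the deadlock care comes in: both sides have to keep draining reads even while they're writing.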
That's not really how modern GCs work, and not how abstractions work when you have a good JIT. The latency impact of modern GCs is now often effectively zero (there are zero objects processed in a stop-the-world pause, and the overall CPU utilisation of the GC is a function of the ratio between the size of the resident set and the size of the heap) and a JIT can see optimisation opportunities more easily and exploit them more aggressively than an AOT compiler (thanks to speculative optimisations). The real cost is in startup/warmup time and memory overhead, as well as optimising amortised performance rather than worst-case performance. Furthermore, how much those tradeoffs actually cost can be a very different matter from what they are (e.g. 3x higher RAM footprint may translate to zero additional cost, and doing manual memory management may actually cost more), as brilliantly explored in this recent ISMM talk: https://youtu.be/mLNFVNXbw7I
> C++’s innovations in zero-cost abstractions
I think that the notion of zero-cost abstractions - i.e. the illusion of "high" [1] abstraction when reading the code with the experience of "low" abstraction when evolving it - is outdated and dates from an era (the 1980s and early '90s) when C++ believed it could be both a great high-level and a great low-level language. Since then, there's generally been a growing separation rather than unification of the two domains. The portion of software that needs to be very "elaborate" (and possibly benefit from zero-cost abstractions) and still low-level has been shrinking, and the trend, I think, is not showing any sign of reversing.
[1]: By high/low abstraction I mean the number of subroutine implementations (i.e. those that perform the same computational function) that could be done with no impact at all on the caller. High abstraction means that local changes are less likely to require changing remote code and vice-versa, and so may have an impact on maintenance/evolution costs.
In an AOT language, this must be dynamic dispatch, as there are multiple algorithms. The JDK, however, notices how the encoding is basically always UTF-8 and does an 'if utf8 then do utf8code else do dynamic dispatch'. Then the inliner comes along, pushes that if outside a loop, and merges the byte reading, encoding and char processing to 1 big code block, all optimized together.
Edit: adding https://en.m.wikipedia.org/wiki/Rust_(programming_language)
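Hand-written in Rust, the guard-plus-fast-path shape a JIT produces looks roughly like this (a toy sketch with an ASCII-only "decoder"; all names are mine):

```rust
// Check for the common case once, run a monomorphic loop the compiler
// can inline and optimize as one block, and fall back to dynamic
// dispatch otherwise.
trait Decoder {
    fn decode(&self, byte: u8) -> char;
}

struct Ascii;
impl Decoder for Ascii {
    fn decode(&self, byte: u8) -> char {
        byte as char
    }
}

fn decode_all(dec: &dyn Decoder, common_case: bool, input: &[u8]) -> String {
    if common_case {
        // Fast path: the decoder type is statically known here.
        let d = Ascii;
        input.iter().map(|&b| d.decode(b)).collect()
    } else {
        // Slow path: one virtual call per byte.
        input.iter().map(|&b| dec.decode(b)).collect()
    }
}
```

The difference is that the JIT discovers `common_case` from runtime profiling and deoptimizes if the guess stops holding, whereas an AOT compiler has to see the proof at compile time.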
ninkendo•5mo ago
If you want to write an OS in Rust and provide rich services for applications to use, you need to offer libraries they can call without needing to recompile when the OS is upgraded. Windows mostly does this with COM, Apple historically had ObjC’s dynamic dispatch (and now Swift’s ABI), Android does this with a JVM and bytecode… Rust can only really offer extern "C", and that really limits how useful dynamic libraries can be.
Doing an ABI without a VM-like layer (JVM, .NET) is really difficult though, and requires you to commit to certain implementation details without ever changing them, so I can understand why it’s not a priority. To my knowledge the only success stories are Swift (which faced the problem head-on) and COM (which has a language-independent IDL and other pain points.) ObjC has such extreme late binding that it almost feels like cheating.
If Rust had a full-featured ABI it would be nearly the perfect language. (It would improve compile times as well, because we’d move to a world where dependencies could just be binaries.)
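To illustrate the `extern "C"` ceiling mentioned above, here's a small hypothetical sketch of what a Rust dylib can stably export today, and the manual flattening that richer types need (all names are mine):

```rust
// C-shaped types and signatures are the only stable surface today.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

#[no_mangle]
pub extern "C" fn point_len(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

// Richer types like Result<T, E> can't cross the boundary directly,
// so they get flattened by hand into an error code plus an
// out-parameter.
#[no_mangle]
pub extern "C" fn checked_div(a: i64, b: i64, out: *mut i64) -> i32 {
    if b == 0 {
        return -1; // error code instead of Err(...)
    }
    // SAFETY: the caller must pass a valid pointer. This unchecked
    // trust is part of why calling across `extern "C"` is unsafe.
    unsafe { *out = a / b };
    0
}
```

Every enum with payloads, every trait object, every generic function has to be reduced to this kind of shape before it can cross a stable boundary, which is the limitation the comment is pointing at.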
jenadine•5mo ago
(And I've seen proposals to extend the extern ABI to more things)
ameliaquining•5mo ago
For the use case of letting Rust programs dlopen plugins that are also written in Rust, the current solutions are stabby and abi_stable. They're not perfect but they seem to work fairly well in practice.
For more general use cases like cross-language interop, the hope is to get somewhere on crABI: https://github.com/joshtriplett/rfcs/blob/crabi-v1/text/3470... This is intended to be a superior successor to C's ABI, useful for all the same use cases (at least if the code you want to talk to is new enough to support it). Note that this isn't something the Rust maintainers can do unilaterally; there's a need to get buy-in from maintainers of other languages, Linux distros, etc.
I'm not sure that the "everything is a shared library" approach is workable in a language where heap allocation is generally explicit and opt-in. While Swift has some support for stack allocation, most types are heap-allocated, and I think that reduces the extent to which the rest of the language has to be warped to accommodate the stable ABI.
ninkendo•5mo ago
Stack vs heap in Swift is simple: structs are on the stack, classes are on the heap.
This is what makes Swift’s effort so impressive: it works with stack data just as well as heap data. Stack-allocated types used across library boundaries need a ton of plumbing to make them work; there’s the concept of a value witness table (like a vtable, but for size/stride information) that allows looking up sizes at runtime. Stack allocation makes heavy use of alloca(), which most languages shy away from (since it means stack frames aren’t of a static size). It was difficult, but Swift managed to do it.
For rust to do something similar it would have to fix a lot of cases where object safety is broken, and yeah, this may require changing the language, particularly to support dynamic/runtime size information. (It would also make the dyn keyword effectively obsolete, because to make generics work across library boundaries, any type has to be able to “become” dynamic if the linker needs it to be.)
It would definitely require changes to the language, but Swift provides some inspiring prior art here.
This article goes into the details: https://faultlore.com/blah/swift-abi/
ninkendo•5mo ago
IMO it goes way beyond code size. Swift needs a stable ABI so that when you update iOS, your apps don't all break. (This "niche" can IMO be described as "anyone writing an OS".)
Apps on your device don't just link to C-ABI dylibs, they link to tons of Apple-provided frameworks (including things like UIKit) which provide a rich API surface area, with a ton of interdependent relationships between the app's code and the platform's code. Apps have to implement complicated protocols (traits) which the OS cares deeply about, or sometimes inherit from a platform-provided base class, and these protocols need to stay binary-stable across releases.
If disk space were unlimited and apps simply statically linked everything (including UIKit, etc), there'd be no way for Apple to make improvements to these frameworks during an OS release and have apps get the benefits without recompilation. They'd have never been able to do an OS-wide "dark mode", for instance. More than that, if iOS needs to change the implementation of an OS-level API (like the share sheet, or location services, etc), they'd be completely unable to do this if apps all compiled it in statically. [1]
But yeah, there's no equivalent of iOS in the Rust world. As another commenter pointed out, Rust is an orphan without a primary platform; there's no OS provider using Rust to provide OS-level APIs, so basically nobody needs this right now. But it also means that, if someone _wanted_ to write an OS in Rust and provide rich APIs for apps to use (which is the definition I take for "Foundational Software"), this would likely be the first problem they'd need to solve.
[1] In reality, many things on iOS are split across an XPC boundary so that the "implementation" is in a separate process written by apple, which is rebuilt in a new OS release, but ironically, the framework's ABI is what shoulders the backwards compatibility burden, not the XPC protocol. Apple changes XPC definitions all the time, they simply rely on the client framework updating in lockstep with the daemon, and for apps to link to the client framework dynamically.
ameliaquining•5mo ago
Swift is unique in that both the OS itself and all the apps are written in it, so a stable ABI with very tight language integration makes sense. I'm guessing that a novel Rust OS probably wouldn't work that way, at least if it were aiming to be an open platform; you would want to let people write apps targeting that OS in C or C++ or Go or whatever else. So it would make more sense to use an ABI that was designed from the ground up for cross-language compatibility, like crABI is trying to do (and then it wouldn't make sense for such an ABI to apply to code that didn't opt in).
ninkendo•5mo ago
Correction: People statically linked in other Swift libraries. Particularly Swift's stdlib, which finally became usable dynamically circa Swift 5.5. The libraries provided by the OS at the time (UIKit, etc) were all ObjC, and they had stable ABIs (ObjC's ABI is basically string-matching on method names, so it's always had a stable ABI) and so were dynamically linked. People using UIKit from Swift were never statically linking it.
> Swift is unique in that both the OS itself and all the apps are written in it
This is what I'm driving at by using Rust for "foundational software". To me, foundational software means the APIs the OS itself offers could be written in Rust, and could be called from Rust apps using dynamic linking. Right now if we wanted to do this, a C ABI is all we can use, which limits the utility. (Not least because all `extern "C"` in Rust is implicitly tagged as unsafe.)
> So it would make more sense to use an ABI that was designed from the ground up for cross-language compatibility, like crABI is trying to do
(crABI was trying to do this, but it seems dead now.)
Something like this would be awesome, and it's what I'm advocating for. I don't even really think we need all of Rust to be able to be used across dynamic linker boundaries... as others have pointed out, allowing enums (with associated values), Option<T> and Result<T, E>, to be passed across ABI boundaries would be a great place to start. I also agree that such an ABI doesn't have to be forced on everyone, it can be opt-in as you suggest, and used for cases where it makes sense.
ameliaquining•5mo ago
As for the "shared-library-centric OS" idea, I'm not sure anyone is really strongly clamoring for this (mostly because of the difficulties in getting adoption of any kind of novel OS at all), so it basically makes sense for the Rust project not to prioritize it. Niko's broader definition of "foundational software" seems more presently applicable. But if such a thing were to be attempted then figuring out a good ABI would be an important step.
yoshuaw•5mo ago
Using them is unfortunately not yet quite as easy as using repr(wasm)/extern "wasm". But with wit-bindgen [2] and the wasm32-wasip2 target [3], it's not that hard either.
[1]: https://youtu.be/tAACYA1Mwv4
[2]: https://github.com/bytecodealliance/wit-bindgen
[3]: https://doc.rust-lang.org/nightly/rustc/platform-support/was...
nh2•5mo ago
You can use any serialisation method you want (C structs, Protobuf, JSON, ...).
An ABI is for passing data in the programming language and trusting it blindly, e.g. what functions use when calling each other.
Any sane OS still has to validate the data that crosses its boundaries. For example, if you make a Linux system call, it doesn't blindly use a pointer you pass it, but instead actually checks if the pointer belongs to the calling process's address space.
It seems pointless to pick Rust's internal function-calling data serialisation for the OS boundary; that wouldn't make it easier for any userspace programs running in that OS than explicitly defining a serialisation format.
Why would an OS want to limit itself to Rust's internal representation, or why would Rust want to freeze its internal representation, making improvements impossible on both sides? This seems to bring only drawbacks.
The only benefit I can see so far for a stable ABI is dynamic linking (and its use in "plugins" for programs where the code is equally trusted, e.g. writing some dlopen()able `.so` files in Rust, as plugins for Rust-only programs), where you can really "blindly call the symbol". But even there it's a bit questionable, as any mistake or mismatch in ABI can result in immediate memory corruption, which is something that Rust users rather dislike.