It helps if you can express these preconditions in the first place, though.
That said, Rust is just enough Haskell to be all the Haskell any systems programmer ever needed, always in strict mode, with great platform support, a stellar toolkit, great governance, a thriving ecosystem and hype.
Lots of overlap in communities (more Haskell -> Rust than the other way IMO) and it's not a surprise :)
I think an IO monad and linear types [1] would do a lot for me in a Rust-like language.
[1] Affine types? Linear types? The ones where the compiler does not insert a destructor but requires you to consume every non-POD value. As someone with no understanding of language theory, I think this would simplify error handling and the async Drop problem
One large problem with Haskell that pushes people to Rust is strictness (or rather the lack of it), I think -- laziness is a feature of Haskell and is basically the opposite direction from where Rust shines (and from what one would want out of a systems language). It's an amazing feature, but it makes writing performant code more difficult than it has to be. There are ways around it, but they're somewhat painful.
Oh, there's also the interesting problem of bottom values (partial functions) and unsafety in the std library. This is a HUGE cause of consternation in Haskell, and the ecosystem suffers from it (some people want to burn it all down and do it "right", some want stability) -- Rust basically benefited from starting later and making tons of correct decisions the first time (and doing all the big changes before 1.0).
That said, Haskell's runtime system is great, and its threading + concurrency models are excellent. They're just not as efficient as Rust's (obviously) -- the idea of zero-cost abstractions is another really amazing feature.
> [1] Affine types? Linear types? The ones where the compiler does not insert a destructor but requires you to consume every non-POD value. As someone with no understanding of language theory, I think this would simplify error handling and the async Drop problem
Yeah, the problem is that Affine types and Linear types are actually not the same thing. Wiki is pretty good here (I assume you meant to link to this):
https://en.wikipedia.org/wiki/Substructural_type_system
Affine is a weakening of Linear types, but the bigger problem here is that Haskell has a runtime system -- it just lives in a different solution-world from Rust.
For Rust, affine types are just a part of the way it handles aliasing and enables a GC-free language. Haskell has the feature almost... because it's cool/powerful. Yes, it's certainly useful in Haskell, but Haskell just doesn't seem as focused on such a specific goal, which makes sense because it's a very research-driven language. It has the best types (and is the most widely adopted ML-family language, IIRC) because it focuses on being correct and powerful, but the ties to practicality are not necessarily the first or second thought.
It's really something that Rust was able to add a feature that was novel and useful that Haskell didn't have -- obviously, huge credit to the people involved in Rust over the years and the voices in the ecosystem.
There are actually two different ways that Rust values can always be discarded: mem::forget and mem::drop. Making mem::forget unsafe for some types would be very useful, but difficult to implement without breaking existing code [1].
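A minimal sketch of the two discard paths (both safe today, since Rust's types are affine):

```rust
fn main() {
    let s = String::from("hello");
    drop(s); // mem::drop: consumes the value and runs its destructor now

    let s2 = String::from("world");
    std::mem::forget(s2); // consumes the value but skips the destructor,
                          // leaking the heap buffer - still perfectly safe
}
```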
Btw, you are correct that it's linear types you want - affine types allow discarding (as Rust already does).
Which links to this blog post explaining the choice in more detail: https://smallcultfollowing.com/babysteps/blog/2022/04/12/imp...
> In the past, we were blocked for technical reasons from expanding implied bounds and supporting perfect derive, but I believe we have resolved those issues. So now we have to think a bit about semver and decide how explicit we want to be.
By the way, this issue also affects all of the other derivable traits in std - including PartialEq, Debug and others. Manually implementing all this stuff - especially Debug - is needless pain. Especially as your structs change and you need to (or forget to) maintain it all.
Elegant software is measured in the number of lines of code you didn't need to write.
Nevertheless, it would be cool to be able to add #[noderive(Trait)] or something to a field so it isn't included in the automatic trait implementation. Especially since foreign types sometimes don't implement certain traits, and one has to write lots of boilerplate just to ignore fields of those types.
I know of the Derivative crate [1], but it's yet another dependency in an increasingly NPM-like dependency tree of a modern Rust project.
All in all, I resort to manual trait implementations when needed, just as the GP does.
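For instance, a manual impl that just skips the offending field might look like this (a sketch; `ForeignHandle` stands in for a third-party type without a Debug impl):

```rust
use std::fmt;

struct ForeignHandle; // stand-in for a foreign type that lacks Debug

struct Connection {
    addr: String,
    handle: ForeignHandle,
}

// The boilerplate a field-level #[noderive(Debug)] could eliminate:
// a full manual impl written only to leave out one field.
impl fmt::Debug for Connection {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Connection")
            .field("addr", &self.addr)
            .finish_non_exhaustive() // the skipped fields render as `..`
    }
}

fn main() {
    let c = Connection { addr: "10.0.0.1:443".into(), handle: ForeignHandle };
    println!("{c:?}"); // Connection { addr: "10.0.0.1:443", .. }
}
```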
I'd rather have a simple and explicit language with a bit more typing than a Perl that tries to include 10,000 convenience hacks.
(Something like Uiua is ok too, but its tacitness comes from simplicity, not convenience.)
Debug is a great example of this. Is derived Debug convenient? Sure. Does it produce good output? No. How could it? Only you know which fields are important and how they should be presented (maybe convert the binary fields to hex, or display the bitset as a bit matrix).
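For example, a hand-rolled impl for a hypothetical `Packet` type that shows the payload as hex instead of the derive's default decimal byte list:

```rust
use std::fmt;

struct Packet {
    seq: u32,
    payload: Vec<u8>,
}

impl fmt::Debug for Packet {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Present the payload as one hex string rather than [222, 173, ...]
        let hex: String = self.payload.iter().map(|b| format!("{b:02x}")).collect();
        f.debug_struct("Packet")
            .field("seq", &self.seq)
            .field("payload", &format_args!("0x{hex}"))
            .finish()
    }
}

fn main() {
    let p = Packet { seq: 7, payload: vec![0xde, 0xad, 0xbe, 0xef] };
    println!("{p:?}"); // Packet { seq: 7, payload: 0xdeadbeef }
}
```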
We're leaving so much elegance and beauty in software engineering on the table, just because we're lazy.
Strong disagree. Elegant software is easy to understand when read, without extraneous design elements, but can easily have greater or fewer lines of code than an inelegant solution.
> "Of course. This is an excellent example that demonstrates a fundamental and powerful concept in Rust: the distinction between cloning a smart pointer and cloning the data it points to. [...]"
Then I post the compiler's output:
> "Ah, an excellent follow-up! You are absolutely right to post the compiler error. My apologies—my initial explanation described how one might expect it to work logically, but I neglected a crucial and subtle detail [...]"
Aren't you also getting very tired of this behavior?
My friend was a big fan of Gemini 2.5 Pro and I kept telling him it was garbage except for OCR and he nearly followed what it recommended. Haha, he’s never touching it again. Every other LLM changed its tune on pushback.
I keep thinking the LLM contribution to humanity is/will be a net negative in the long run.
In my experience this is still true for the reasoning models with undergraduate mathematics - if you ask it to do your point-set topology homework (dishonest learning) it will score > 85/100, if you are confused about point-set topology and try to ask it an honest (but ignorant) question it will give you a pile of pseudo-mathematical BS.
It’s times like this when I wonder if we’re even using the same tools. Maybe it’s because I only even try to actively use them when I expect failure and am curious how it will be (occasionally it just decides to interpose itself on a normal search result, and I’m including those cases in my results) but my success rate with DuckDuckGo Assist (GPT-4o) is… maybe 10% of the time success but the first few search results gave the answer anyway, 30% obviously stupidly wrong answer (and some of the time the first couple of results actually had the answer, but it messed it up), 60% plausible but wrong answer. I have literally never had something I would consider an insightful answer to the sorts of things I might search the web for. Not once. I just find it ludicrously bad, for something so popular. Yet somehow lots of people sing their praises and clearly have a better result than me, and that sometimes baffles, sometimes alarms me. Baffles—they must be using it completely differently from me. Alarms—or are they just failing to notice errors?
(I also sometimes enjoy running things like llama3.2 locally, but that’s just playing with it, and it’s small enough that I don’t expect it to be any good at these sorts of things. For some sorts of tasks like exploring word connections when I just can’t quite remember a word, or some more mechanical things, they can be handy. But for search-style questions, using models like GPT-4o, how do I get such consistently useless or pernicious results from them!?)
There's a difference in question difficulty distribution between me asking "how do I do X in FFmpeg" because I'm too lazy to check the docs and don't use FFmpeg frequently enough to memorize them, compared to someone asking because they have already checked the docs and/or use FFmpeg frequently but couldn't figure out how to do specifically X (say, cropping videos to an odd width/height, which many formats just don't support). The former probably makes up the majority of my LLM usage, but I have still occasionally been surprised on the latter, where I've come up empty checking docs/traditional search but an LLM pulls out something correct.
What definitely annoys me is how confident they all sound. However, the way I'm using them is with tool-usage loops, so it usually runs into part 2 immediately and course-corrects.
I would have thought they'd add "don't apologise!!!!" or something like that to the system prompt like they do to avoid excessive lists.
Maybe this is an improvement on templates and preprocessor macros, but not really.
And no, sorry, the complexity of C++ templates far outweighs anything in Rust's macros. Templates are a Turing-complete extension of the type system. They are not macros or anything like it.
Rust's macro_rules! macros are token-to-token transformers. Nothing more. They're also hygienic: identifiers they introduce can't capture or collide with names at the call site, and the expansion must form valid syntax, so they can't change semantics in weird ways like C macros can.
Proc macros are self-standing crates, marked with a special library type in the crate manifest, and while they're not hygienic like macro_rules! macros, they're still just token-to-token transformers that happen to run Rust code.
Both are useful, both have their place, and only proc macros carry a slight developer-experience annoyance: you have to expand them to find syntax errors (usually not a problem, though).
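A tiny illustration of both points (`squares!` is a made-up macro):

```rust
// The matcher on the left captures tokens, the template on the right
// re-emits them - a pure token-to-token rewrite that must parse as Rust.
macro_rules! squares {
    ($($x:expr),* $(,)?) => {
        vec![$($x * $x),*]
    };
}

fn main() {
    // Hygiene: identifiers the macro introduces internally can't
    // collide with or capture call-site locals like `n`, unlike C macros.
    let n = 3;
    let v = squares![1, 2, n];
    println!("{v:?}"); // [1, 4, 9]
}
```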
Also, I think it would be better if they operated on reflection-like structures (functions, classes, methods) rather than raw tokens; they would be easier to write and read.
Achieving a perfect world where build tooling can only touch the things it really needs is less of a toolchain problem and more of an OS hardening issue. I'd argue that's outside the scope of the compiler and language teams.
Not even close. Rust Macros vs C++ templates is more like "Checkers" vs "3D-chess while blindfolded".
>Rust doesn't seem worthwhile to learn, as in a few years time C++ will get memory safety proper
C++ getting "memory safety proper" is just adding to the problem that's C++.
It being a pile of concepts, and features, and incompatible ideas and apis.
The language & compiler should be unsurprising. If you have language feature A, and language feature B, if you combine them you should get A+B in the most obvious way. There shouldn't be weird extra constraints & gotchas that you trip over.
> in the most obvious way.
What people find obvious is often hard to predict.
Allowing more safe uses seems OK to me, but obviously expanding the functionality adds complexity, so there’s a trade off.
No, this is backwards. We have to require that all generic parameters are Clone, as we cannot assume that they are not used in a way that requires them to be Clone.
> The reason this is the way it is is probably because Rust's type system wasn't powerful enough for this to be implemented back in the pre-1.0 days. Or it was just a simple oversight that got stabilized.
The type system can't know whether you call `T::clone()` in a method somewhere.
Why not?
hyperbrainer•3h ago
I don't understand what "used in a way that requires them to be Clone" means. Why would you require that?
rocqua•3h ago
Then even if T isn't cloneable, the type might still admit a perfectly fine implementation of clone.
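For example (a sketch using a marker type parameter):

```rust
use std::marker::PhantomData;

struct NotClone; // deliberately does not implement Clone

// T is only a compile-time tag; no value of type T is ever stored.
struct Tagged<T> {
    id: u64,
    _marker: PhantomData<T>,
}

// A perfectly fine Clone impl with no `T: Clone` bound...
impl<T> Clone for Tagged<T> {
    fn clone(&self) -> Self {
        Tagged { id: self.id, _marker: PhantomData }
    }
}

fn main() {
    let a: Tagged<NotClone> = Tagged { id: 1, _marker: PhantomData };
    let _b = a.clone(); // ...yet #[derive(Clone)] would have demanded NotClone: Clone
}
```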
tinco•3h ago
I think it's a bit cynical to suggest it would take at least 4 years for this change to be admitted into the compiler. If the author is right and there is really no good reason for this rule (and I agree with the author in seeing no good reason), then it seems like something that could be changed quite quickly. The change would allow more code to compile, so nothing would break.
The only reason I could come up with for this rule is that, for some other reason, allowing non-complying type parameters somehow makes the code generation really complex, and they therefore postponed the feature.
the_mitsuhiko•2h ago
The history of this decision can be found in detail in this blog post: https://smallcultfollowing.com/babysteps//blog/2022/04/12/im...
The key part:
> This idea [of relaxing the bounds] is quite old, but there were a few problems that have blocked us from doing it. First, it requires changing all trait matching to permit cycles (currently, cycles are only permitted for auto traits like Send). This is because checking whether List<T> is Send would require checking whether Option<Rc<List<T>>> is Send. If you work that through, you'll find that a cycle arises. I'm not going to talk much about this in this post, but it is not a trivial thing to do: if we are not careful, it would make Rust quite unsound indeed. For now, though, let's just assume we can do it soundly.
> The other problem is that it introduces a new semver hazard: just as Rust currently commits you to being Send so long as you don’t have any non-Send types, derive would now commit List<T> to being cloneable even when T: Clone does not hold.
> For example, perhaps we decide that storing a Rc<T> for each list wasn't really necessary. Therefore, we might refactor List<T> to store T directly […] We might expect that, since we are only changing the type of a private field, this change could not cause any clients of the library to stop compiling. With perfect derive, we would be wrong. This change means that we now own a T directly, and so List<T>: Clone is only true if T: Clone.
josephg•1h ago
We could solve this by having developers add trait bounds explicitly into the derive macro.
Currently, `#[derive(Clone)]` on a generic struct expands to an impl that bounds every type parameter with `Clone`, whether the fields need it or not (see the sketch below). Perfect derive would look at the struct fields to figure out what the trait bounds should be. But it might make more sense to let users set the bounds explicitly; apparently the bon crate does something along those lines. Then if you add or remove fields from your struct, the trait bounds don't necessarily get modified as a result. (Or rather, changing those trait bounds is an explicit choice by the library author.)
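A minimal sketch of the difference, reusing the `List<T>` shape from the quoted post (the generated impl shown is approximate, not the derive's literal output):

```rust
use std::rc::Rc;

struct List<T> {
    head: Option<Rc<T>>,
}

// Roughly what #[derive(Clone)] produces today: a `T: Clone` bound
// on every type parameter, whether or not the fields need it.
impl<T: Clone> Clone for List<T> {
    fn clone(&self) -> Self {
        List { head: self.head.clone() }
    }
}

struct List2<T> {
    head: Option<Rc<T>>,
}

// What "perfect derive" would infer from the fields: no bound at all,
// because `Rc<T>: Clone` holds for every `T`.
impl<T> Clone for List2<T> {
    fn clone(&self) -> Self {
        List2 { head: self.head.clone() }
    }
}
```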