As I understand it, the primary purpose of newtypes is to work around typeclass issues like those in the examples at the end of the article. They are specifically designed to be zero cost, because you shouldn't have to pay at runtime just to work around the typeclass instance already being taken for the type you want to write an instance for. Making an abstract data type by not exporting the data constructors can be done with or without newtype.
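A minimal sketch of that workaround, using types from `base`: Int can't be a Monoid directly (should `<>` mean addition or multiplication?), so the standard library sidesteps the one-instance-per-type rule with zero-cost newtypes, one per instance:

```haskell
import Data.Monoid (Sum(..), Product(..))

main :: IO ()
main = do
  -- Each newtype carries its own Monoid instance over the same Int.
  print (getSum     (foldMap Sum     [1, 2, 3, 4 :: Int])) -- 10
  print (getProduct (foldMap Product [1, 2, 3, 4 :: Int])) -- 24
```

`Sum` and `Product` compile away entirely; the wrapping and unwrapping has no runtime representation.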
I think OCaml handles this with modules, or something similar, but the concepts are close. In most cases, when there's one obvious instance you want, having Haskell pick the instance is less of a hassle.
In other words, the full range of Int?
Is newtype still bad?
In other words, how much of this criticism has to do with newtype not providing sub-ranging for enumerable types?
It seems that it could be extended to do that.
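Newtype by itself never restricts the value range, but the usual idiom combines it with a hidden constructor to get runtime-checked sub-ranges. A hedged sketch (`Percent` and `mkPercent` are illustrative names, not from the article):

```haskell
-- In a real module you'd export only the type and the smart
-- constructor, not the data constructor:
--   module Percent (Percent, mkPercent, getPercent) where

newtype Percent = Percent { getPercent :: Int }
  deriving (Show, Eq)

-- The only way to build a Percent, so the 0..100 invariant
-- holds everywhere the type appears.
mkPercent :: Int -> Maybe Percent
mkPercent n
  | n >= 0 && n <= 100 = Just (Percent n)
  | otherwise          = Nothing
```

Note the check happens at runtime, not compile time, which is arguably the heart of the criticism.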
Correct fields by...name? By structure? I'm trying to understand.
let full_name = (person: { first: string, last: string }) => person.first + " " + person.last
Then you can use this function on any data type that satisfies that signature, regardless of whether it's User, Dog, Manager, etc. I think it also makes more sense in immutable functional languages like Clojure. Oddly enough I like it in Go too, despite Go being very different from Clojure.
It seems ok in upcoming languages with polymorphic sum types (eg Roc “tags”) though?
I'm not saying the nominal approach to types is wrong or bad, I just find my way of thinking is better suited for structural systems. I'm thinking less about the semantics around product_id vs user_id and more about what transforms are relevant - the semantics show up in the domain layer.
Take a vec3, for example: in a structural system you could apply a function designed for a vec2 to it, which has practical applications.
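GHC can actually approximate this with `HasField` constraints (record-field polymorphism rather than full structural typing). A sketch, assuming GHC 9.2+ and illustrative `Vec2`/`Vec3` types:

```haskell
{-# LANGUAGE OverloadedRecordDot #-}
{-# LANGUAGE DuplicateRecordFields #-}
import GHC.Records (HasField)

data Vec2 = Vec2 { x :: Double, y :: Double }
data Vec3 = Vec3 { x :: Double, y :: Double, z :: Double }

-- "Designed for a vec2": it only asks for x and y fields,
-- so any record providing them fits, including Vec3.
norm2 :: (HasField "x" r Double, HasField "y" r Double) => r -> Double
norm2 v = sqrt (v.x * v.x + v.y * v.y)

main :: IO ()
main = do
  print (norm2 (Vec2 3 4))    -- 5.0
  print (norm2 (Vec3 3 4 12)) -- still 5.0: the z field is ignored
```

The constraint names the fields it needs, structural-style, while the types themselves stay nominal.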
To see where structural type systems fall down, consider native state: you have a private long field with a pointer hiding in it, used in native calls. Any type that happens to provide that long will satisfy the structural type, leading to segfaults. A nominal type system lets you make assurances behind the class name.
Anyway, this was a big deal in the late 90s; see e.g. opaque types: https://en.wikipedia.org/wiki/Opaque_data_type
Range checking can be very annoying to deal with if you take it too seriously. This comes up when writing a property-testing framework: it's easy to generate test data that causes out-of-memory errors. Just pass in maximum-length strings everywhere. Your code accepts any string, right? That's what the type signature says!
In practice, setting compile-time limits on string sizes for the inputs to every internal function would be unreasonable. When using dynamically allocated memory, the maximum input size is really a system property: how much memory does the system have? Limits on input sizes need to be set at system boundaries.
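One way to encode "checked once at the boundary" in types is the same hidden-constructor trick. A hedged sketch with an illustrative 255-character limit (`BoundedInput` and `fromBoundary` are hypothetical names, not from a real framework):

```haskell
-- The constructor is kept private (not exported), so internal
-- functions can take BoundedInput and trust the length invariant
-- without rechecking it everywhere.
newtype BoundedInput = BoundedInput { rawInput :: String }
  deriving (Show, Eq)

maxInputLen :: Int
maxInputLen = 255

-- Validate exactly once, at the system boundary.
fromBoundary :: String -> Either String BoundedInput
fromBoundary s
  | length s <= maxInputLen = Right (BoundedInput s)
  | otherwise               = Left "input too large"
```

The limit itself stays a system-level decision; only the boundary function needs to know the number.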
Providing a proof of program correctness is pretty challenging even in languages that support it. In most cases, carefully checking invariants at runtime (where compile-time checks aren't possible) and crashing loudly and early is sufficient for reliable-enough software.