All that makes a lot of sense if it was introduced as a performance hack rather than a thoughtfully designed concept.
The object-oriented part of OCaml, by the way, has inheritance that's entirely orthogonal to interfaces, which in OCaml are static types. Languages like Smalltalk and, for the most part, Python don't have interfaces at all.
"Trait/Typeclass"-style compositional inheritance as in Rust and Haskell is sublime. It's similar to Java interfaces in terms of flexibility, and it doesn't enforce hierarchical rules [1]. You can bolt behaviors and their types onto structures at will. This is how OO should be.
I put together a visual argument on another thread on HN a few weeks ago:
https://imgur.com/a/class-inheritance-vs-traits-oop-isnt-bad...
[1] Though if you want rules on bounds and associated types, you can have them.
Yes-and-no.
Interfaces still participate in inheritance hierarchies (`interface Bar extends Foo`), in a way that prohibits removing/subtracting type members (so interfaces are in no way a substitute for mixins). Composition (of interfaces) can be used instead of `extends`, but then you lose guarantees of reference-identity - oh, and only reference-types can implement interfaces, which makes interfaces impractical for scalars and unusable in a zero-heap-alloc program.
Interface-types can only expose virtual members: no public fields - which seems silly to me because a vtable-like mechanism could be used to allow raw pointer access to fields via interfaces, but I digress: so many of these limitations (or unneeded functionality) are consequences of the JVM/CLR's design decisions which won't change in my lifetime.
Rust-style traits are an overall improvement, yes - but (as far as my limited Rust experience tells me) there's no succinct way to tell the compiler to delegate the implementation of a trait to some composed type: I found myself needing to write an unexpectedly large amount of forwarding methods by hand (so I hope that Rust is better than this and that I was just doing Rust the-completely-wrong-way).
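To make that concrete, here's a rough sketch (trait and type names invented) of the hand-written forwarding I mean:

    trait Greeter {
        fn greet(&self) -> String;
    }

    struct Inner;

    impl Greeter for Inner {
        fn greet(&self) -> String { "hello".to_string() }
    }

    // Outer composes an Inner, but to *be* a Greeter it still has to forward
    // every method by hand - there's no built-in delegation syntax.
    struct Outer {
        inner: Inner,
    }

    impl Greeter for Outer {
        fn greet(&self) -> String {
            self.inner.greet()
        }
    }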
Also, oblig: https://boxbase.org/entries/2020/aug/3/case-against-oop/
"Only reference types can implement interfaces" is simply not true in C#. Not only can structs implement them, but they can also be used through the interface without boxing (via generics).
(If you merge multiple interfaces, the implementations of the methods have to match. You end up with even more special getters for each one sometimes.)
It's true that you can't access private members (not just fields) on `this` from the mixin interface. But explicit implementations of members mean that only someone explicitly downcasting the object will get access to those members, so accidental access is not an issue.
Those default-implementations are only accessible when the object is accessed via that interface; i.e. they aren't accessible as members on the object itself. Furthermore, interfaces (still) only declare (and optionally define) vtable members (i.e. only methods, properties, and events - which are all fundamentally just methods), not fields or any kind of non-static state, whereas IMO mixins should have no limitations and should behave the same as though you copied-and-pasted raw code.
That's true in C# but not in Java, so it's not something intrinsic to the notion of an interface.
> Furthermore, interfaces (still) only declare (and optionally define) vtable members (i.e. only methods, properties, and events - which are all fundamentally just methods), not fields or any kind of non-static state
This is true, but IMO largely irrelevant because get/set accessors are a "good enough" substitute for a field. That there is even a distinction between fields and properties in the first place is a language-specific thing; it doesn't exist in e.g. Eiffel.
"This doesn't represent the fundamental truth" does not imply "this has little value". Your navigation software likely doesn't account for cars passing each other on the road either -- or probably red lights for that matter -- and yet it's still pretty damn useful. The sweet spot is problem- and model-dependent.
The bottom line is, no one ever really used inheritance that much anyway (other than smart people trying to outsmart themselves). People created AbstractFactoryFactoryBuilders not because they wanted to, but because "books" said to do stuff like that, and people were just signaling to the tribe.
So now we are all signaling to the new tribe that "inheritance is bad", even though we proudly created multiple AFFs in the past. Not very original in my opinion, since Go and Rust don't have inheritance. The bottom line is, most people don't have any original opinions at all and just go with whatever seems to be popular.
If you think that, you have no idea how much horrible code is out there. Especially in enterprise land, where deadlines are set by people who get paid by the hour. I once worked on a Java project where a method would call a method, which would call a method, and so on. Usually the calls went through some abstract interface with a single implementor, making it hard to figure out what was even being executed. But if you kept at it, there were 19 layers before the chain of methods did anything other than call the next one. There was a separate parallel path of methods, also 19 layers deep, for cleaning up. But if you followed it all the way down, it turned out the final method was empty. 19 methods plus adjacent interface methods, all for a no-op.
> The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
Most people go with the crowd. But there's a reason the crowd is moving against inheritance. The reason is that inheritance is almost always a bad idea in practice. And more and more smart people talking about it are slowly moving sentiment.
Bit by bit, we're finally starting to win the fight against people who think pointless abstraction will make their software better. Thank goodness - I've been shouting this stuff from the rooftops for 15+ years at this point.
Inheritance really shines when you want to encapsulate behaviour behind a common interface and also provide a standard implementation. E.g.: I once wrote an RN app which talked to ~10 vacuum robots. All of these robots behaved mostly the same, but each was different in a unique way. E.g. 9 robots returned to the station when the command "STOP" was sent; one would just stop in place. Or some robots would rotate 90 degrees when a "LEFT" command was sent, others only 30 degrees. We wrote a base class which exposed all needed commands, and each robot had an inherited class which overrode the parts that needed adjustment (e.g. sending LEFT three times so it's also 90 degrees, or sending "MOVE TO STATION" instead of "STOP").
I can only think of one or two instances where I've really been convinced that inheritance is the right tool. The only one that springs to mind is a View hierarchy in UI libraries. But even then, I notice React (& friends) have all moved away from this approach. Modern web development usually models components as functions. (And yes, JavaScript supports many kinds of inheritance. Early versions of React even used class inheritance for components. But it proved to be a worse approach.)
I've been writing a lot of Rust lately. Rust doesn't support inheritance, but it wouldn't be needed in your example. In Rust, you'd implement that by having a trait with functions (+ default behaviour), then have each robot type implement the trait. E.g.:
    trait Robot {
        fn stop(&mut self) { /* default behaviour */ }
    }

    struct BenderRobot;

    impl Robot for BenderRobot {
        // If this is missing, we default to Robot::stop above.
        fn stop(&mut self) { /* custom behaviour */ }
    }
Or, closer to your example, the trait can require one low-level method and build the default behaviour on top of it:

    trait Robot {
        fn send_command(&mut self, command: Command);

        fn stop(&mut self) {
            self.send_command(Command::STOP);
        }
    }

    struct BenderRobot;

    impl Robot for BenderRobot {
        // Required.
        fn send_command(&mut self, command: Command) { todo!(); }
    }
This is starting to look a lot like C++ class inheritance, especially because traits can also inherit from one another. However, there are two important differences: First, traits don't define any fields. And second, BenderRobot is free to implement lots of other traits if it wants, too.

If you want a real-world example of this, take a look at std::io::Write[1]. The Write trait requires implementors to define 2 methods (write(data) and flush()). It then has default implementations of a bunch more methods, using write and flush - for example, write_all(). Implementers can use the default implementations, or override them as needed.
Docs: https://doc.rust-lang.org/std/io/trait.Write.html
Source: https://doc.rust-lang.org/src/std/io/mod.rs.html#1596-1935
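To illustrate, a minimal sketch (CountingSink is a made-up type): the implementor supplies only write() and flush(), and gets write_all() and friends from the trait's default methods.

    use std::io::{self, Write};

    // A sink that just counts how many bytes it was asked to write.
    struct CountingSink {
        bytes_seen: usize,
    }

    impl Write for CountingSink {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            self.bytes_seen += buf.len();
            Ok(buf.len())
        }

        fn flush(&mut self) -> io::Result<()> {
            Ok(())
        }
    }

    fn main() -> io::Result<()> {
        let mut sink = CountingSink { bytes_seen: 0 };
        sink.write_all(b"hello")?; // write_all() is a default method built on write()
        assert_eq!(sink.bytes_seen, 5);
        Ok(())
    }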
How does one handle cases where fields are useful? For example, imagine you have functionality to go fetch a value and then cache it, so that future calls don't have to fetch it again (it's resource-heavy, etc.).
    // in Java because it's easier for me
    public interface HasMetadata {
        default Metadata getMetadata() {
            // this doesn't work because interfaces don't have fields
            if (this.cachedMetadata == null) {
                this.cachedMetadata = fetchMetadata();
            }
            return this.cachedMetadata;
        }

        // relies on implementing class to provide
        Metadata fetchMetadata();
    }
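(The closest trait-style equivalent I can think of is something like the following hedged sketch - all names made up - where the trait forces the implementor to expose the cache slot it cannot hold itself:)

    #[derive(Clone)]
    struct Metadata { /* fields elided */ }

    trait HasMetadata {
        // The trait can't own a field, so the implementor must expose storage.
        fn cache_slot(&mut self) -> &mut Option<Metadata>;
        // The expensive call, provided by the implementor.
        fn fetch_metadata(&self) -> Metadata;

        fn get_metadata(&mut self) -> Metadata {
            if self.cache_slot().is_none() {
                let fetched = self.fetch_metadata();
                *self.cache_slot() = Some(fetched);
            }
            self.cache_slot().clone().unwrap()
        }
    }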
I'd like to generalize that a little bit and say: graph structures in general. A view hierarchy is essentially a tree, where each node has a bunch of common bits (tree logic) and a bunch of custom bits (the actual view). There are tons of "graph structures" that fit that general pattern: for instance, if you have some sort of data pipeline DAG where data comes in on the left, goes out on the right, and in the middle has to pass through a bunch of transformations that are linked in some kind of DAG. Inheritance is great for this: you just have your nodes inherit from some kind of abstract "Node" class that handles the connection and data flow, and you can implement your complex custom behaviors however you want, which makes it very easy to make new ones.
I'm very much in agreement that OOP inheritance has been horrendously overused in the 90s and 00s (especially in enterprise), but for some stuff, the model works really well. And works much better than e.g. sum types or composition or whatever for these kinds of things. Use the right tool for the right job, that's the central point. Nothing is one-size-fits-all.
But what do those functions return? Oh look, it's DOM nodes, which are described by and implemented with inheritance.
I would agree that view hierarchies in UI libraries are one of the primary use-cases for inheritance. But it's a pretty big one.
(yes, I guess it's the fake vtable of a structure full of pointers)
Call them interfaces with default implementations or super classes, they are the same thing and very useful.
As usual there is no silver bullet, so it's just a tool and like any other tool you need to use it wisely, when it makes sense.
1. A class can be composed out of multiple interfaces, making them more like mixins/traits etc vs inheritance, which is always a singular class
2. The implementation is flat and you do not have a tree of inheritance - which was what this discussion was about. This obviously comes with the caveat that you don't combine them, which would effectively make it inheritance again.
There are many other ways to share an implementation of a common feature:
1. Another comment already mentioned default method implementations in an interface (or a trait, since the example was in Rust). This technique is even available in Java (since Java 8), so it's as mainstream as it gets.
The main disadvantage is that you can have just one default implementation for the stop() method. With inheritance you could use hierarchies to create multiple shared implementations and choose which one your object should adopt by inheriting from it. You also cannot associate any member fields with the implementation. On the bright side, this technique still avoids all the issues with hierarchies and single and multiple inheritance.
2. Another technique is implementation delegation. This is basically just like using composition and manually forwarding all methods to the embedded implementer object, but the language has syntax sugar that does that for you. Kotlin is probably the most well-known language that supports this feature[1]. Object Pascal (at least in Delphi and Free Pascal) supports this feature as well[2].
This method is slightly more verbose than inheritance (you need to define a member and initialize it). But unlike inheritance, it doesn't require forwarding the class's constructors, so in many cases you might even end up with less boilerplate than with inheritance (e.g. if you have multiple overloaded constructors you need to forward).
The only real disadvantage of this method is that you need to be careful with hierarchies. For instance, if you have a Storage interface (with the load() and store() methods), you can create an EncryptedStorage interface that wraps another Storage implementation and delegates to it, but not before encrypting everything it sends to the storage (and decrypting the content on load() calls). You can also create a LimitedStorage wrapper that enforces size quotas, and then combine both LimitedStorage and EncryptedStorage. Unlike traditional class hierarchies (where you'd have to implement LimitedStorage, EncryptedStorage and LimitedEncryptedStorage), you've got a lot more flexibility: you don't have to reimplement every combination of storage, and you can combine storages dynamically and freely. (A rough sketch of this wrapper idea appears after this list.)

But let's assume you want to create ParanoidStorage, which stores two copies of every object, just to be safe. The easiest way to do that is to make ParanoidStorage.store() call wrapped.store() twice. The thing you have to keep in mind is that this doesn't work like inheritance: for instance, if you wrap your objects in the order EncryptedStorage(ParanoidStorage(LimitedStorage(mainStorage))), ParanoidStorage will call LimitedStorage.store(). This is unlike the inheritance chain EncryptedStorage <- ParanoidStorage <- LimitedStorage <- BaseStorage, where ParanoidStorage.store() will call EncryptedStorage.store(). In our case this is a good thing (we can avoid a stack overflow), but it's important to keep this difference in mind.
3. Dynamic languages almost always have at least one mechanism that you can use to automatically implement delegation. For instance, Python developers can use metaclasses or __getattr__[3], while Ruby developers can use method_missing or Forwardable[4].
4. Some languages (most famously Ruby[5]) have the concept of mixins, which let you include code from other classes (or modules in Ruby) inside your classes without inheritance. Mixins are also supported in D (mixin templates). PHP has traits.
5. Rust supports (and actively promotes) implementing traits using procedural macros, especially derive macros[6]. This is by far the most complex but also the most powerful approach. You can use it to create a simple solution for generic delegation[7], but you can go far beyond that. Using derive macros to automatically implement traits like Debug, Eq, Ord is something you can find in every codebase, and some of the most popular crates like serde, clap and thiserror rely heavily on derive.
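Here's a rough Rust sketch of the Storage wrapper idea from point 2 (trait and type names are made up, and the duplicate-key scheme is only illustrative):

    trait Storage {
        fn store(&mut self, key: &str, value: Vec<u8>);
        fn load(&self, key: &str) -> Option<Vec<u8>>;
    }

    // ParanoidStorage wraps *any* Storage and stores two copies of each value.
    struct ParanoidStorage<S: Storage> {
        wrapped: S,
    }

    impl<S: Storage> Storage for ParanoidStorage<S> {
        fn store(&mut self, key: &str, value: Vec<u8>) {
            self.wrapped.store(key, value.clone());
            self.wrapped.store(&format!("{key}.copy"), value);
        }

        fn load(&self, key: &str) -> Option<Vec<u8>> {
            self.wrapped.load(key)
        }
    }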
[1] https://kotlinlang.org/docs/delegation.html
[2] https://www.freepascal.org/docs-html/ref/refse48.html
[3] https://erikscode.space/index.php/2020/08/01/delegate-and-de...
[4] https://blog.appsignal.com/2023/07/19/how-to-delegate-method...
[5] https://ruby-doc.com/docs/ProgrammingRuby/html/tut_modules.h...
[6] https://doc.rust-lang.org/reference/procedural-macros.html#d...
I've worked with countless people who came from Java, who try to create the same abstractions and factories and layers.
When I chide them, it's like realizing the shackles are off, and they have fun again with the basics. It leads to much more readable, simple code.
This isn't to say Java is bad and Go is good, they're just languages. It's just how they're typically (ab)used in enterprises.
Yeah; I agree with this. I think this is both the best and worst aspect of Go: Go is a language designed to force everyone's code to look vaguely the same, from beginners to experts. It's a tool to force even mediocre teams to program in an inoffensive, bland way that will be readable by anyone.
I doubt it; the majority of code is in enterprise projects, and they do Java and C# in the idiomatic way, with inheritance.
I'm working on an Android project right now, and inheritance is everywhere!
So, sure, if you ignore all mobile development, and ignore almost all enterprise software, and almost all internal line-of-business software, and restrict yourself to what various "influencers" say, then sure THAT crowd is moving away from inheritance.
Even C++ has that with multiple inheritance - some parents can just be interfaces.
As to whether Smalltalk needs interfaces see https://stackoverflow.com/a/7979852/151019 and https://www.jot.fm/issues/issue_2002_05/article1/
C++ does not (or at least did not at the time) have a concept of interfaces. There was a pattern in some development communities for defining interfaces by writing classes that followed particular rules, but no first-class support for them in the language.
An interface is just a base class none of whose virtual functions have implementations. C++ has first class support for it. The only thing C++ lacks is the "interface" keyword.
C++ doesn't have this restriction, so interfaces would add very little.
Your distinction between "first class support for interfaces" and "C++ support for interfaces" looks like an artificial one to me.
Other than not requiring the keyword "interface", what is it about the C++ way of creating an interface that makes it not "first class support"?
The only part of inheritance I've ever found useful is allowing objects to conform to a certain interface so that they can fulfill a role needed by a generic function. I've always preferred the protocol approach or Rust's traits for that over classical inheritance, though.
Yes, in our fad-chasing industry the pendulum has moved in the other direction. Let's wait a few years.
There is nothing wrong with OOP, inheritance, FP, procedural, declarative or whatever. What is bad is religious dogma overtaking engineering work.
You can write spaghetti in any language or paradigm. People will go overboard on DRY while ignoring that inheritance is more or less just a mechanism for achieving DRY for methods and fields.
FP wizards can easily turn your codebase into a complex organism that is just as “impenetrable” as OOP. But as you say, fads are fads are fads, and OOP was the previous fad so it behooves anyone who wants to look “up to date” to be performative about how they know better.
Personally I think it's obvious that passing around structs that contain data, plus functions that act on that data, is the same concept as passing around objects. I expect you can even base a trait off of another trait in Rust.
But don’t dare call it what it actually is, because this industry really is as petulant as you describe.
I think every new technology or idea is created because it solves some problems, but in the long run, we'll discover that it creates other problems. For example, transpiling javascript, C++ OO, actors, coroutines, docker, microkernels, and so on.
When a new idea appears, we're much more aware of the benefits it brings. But we don't know the flaws yet. So we naively hope there are no flaws - and the new idea is universally good.
But it's rare to change something and not have that cause problems. It just always takes a while for the problems to really show up and spoil the party. I guess you could think of it as the hype cycle - first hype, then disillusionment, then steady state.
Sometimes I play this game with new technology: "In 10 years, people will be complaining about this on hackernews. What do I guess they'll be saying about it?" For Rails, people complain about its deep magic. For Rust, I think it'll be how hard it is to learn. For Docker, that it increases the size of deployments for no reason - and that it's basically static linking with more steps.
Calling everything a fad is too cynical for me, because it implies that progress is impossible. But plenty of tools have made my life as a software developer better. I prefer typescript over javascript. I prefer cargo over makefile / autotools / cmake / crying. Preemptive schedulers are better than your whole computer freezing. High level languages beat programming in assembly.
It's just hard to say for sure how history will look back on any particular piece of tech in 20 years' time. Fortran lost to C, even though it's better in some ways. I think C++/Java-style OO will die a slow death, in favour of data-oriented design and interfaces/traits (Go/Rust style). I could be wrong, but that's my bet.
> I think it’s obvious that anyone passing around structs that contain data and functions that act on that data is the same concept as passing around objects.
I hear what you're saying - but there are some important differences in the philosophy of how we conceptualise our programs. OO encourages us to think of nouns that "do" some verb. Using structs and functions (like C and Rust) feels much more freeform to me. Yegge said it better: https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
But let's see in 20 years. Maybe OO will be back, but I doubt it. I think if we can learn anything from the history of CS, it's that functional programming had basically all the right ideas 40 years ago. It's just taking the rest of us decades to notice.
Of course, in the functional programming community we know that it is pointfree abstraction that makes your software better.
https://wiki.haskell.org/Pointfree
(Please pardon the pun.)
What's described here is over-generic code, instead of KISS and just keeping an eye on extensibility instead of generalizing ahead of time. This can happen in any paradigm.
I would even go so far as to argue that a small team of devs can learn an OOP hierarchy and work with it indefinitely, but a similar small team will drown in maintenance overhead without OOP and inheritance. This is highly relevant as we head into an age of decreased headcounts. This style of abandoning OOP will age poorly as teams decrease in size.
Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens.
With enough patience you will see many fads pass twice, like a tide rising and falling. OOP, runtime typing, schema-less databases and TDD are the first to come to mind.
I feel "self-describing" data formats and everything agile are fading already.
Very few ideas stick, but some do: I do not expect GOTO to ever come back, but who knows where vibe coding will lead us :)
100% this! And I've recently been wondering whether this is the right workflow for AI-assisted development: use vibe-coding to build the one that you plan to throw away [0], use that to validate your assumptions and implement proper end-to-end tests, then recreate it again once or more with AI asked to try different approaches, and then eventually throw these away too and more manually create "the third one".
[0] "In most projects, the first system built is barely usable....Hence plan to throw one away; you will, anyhow." Fred Brooks, The Mythical Man-Month
That's just false. Before Java abstract factory era there was already a culture of creating deep inheritance hierarchies in C++ code. Interfaces and design patterns (including factories) were adopted as a solution to that mess and as bad as they were - they were still an improvement.
I don't think this is accurate. People created factories like this because they were limited by interface bounds in the languages they were coding in and had to swap out behaviour at run or compile time for testing or configuration purposes.
You have two objects. A and B. How do you merge the two objects? A + B?
The most straight forward way is inheritance. The idea is fundamental.
The reason why it's not practical has more to do with human nature and the limitations of our capabilities in handling complexity than it has to do with the concept of inheritance itself.
Literally think about it. How else do you merge two structs if not using inheritance?
The idea that inheritance is not fundamental and is wrong in nature is in itself mistaken.
What? Using multiple inheritance? That's one of the worst ideas I've ever seen in all of computer science. You can't just glue two arbitrary classes together and expect their invariants to somehow hold true. Even if they do, what happens when both classes implement a method or field with the same name? Bugs. You get bugs.
I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
The way to merge two structs is via composition:
    struct C {
        a: A,
        b: B,
    }
If you want to expose methods from A or B, either wrap the methods or make the a or b fields public / protected and let callers call c.a.foo().

Don't take my word for it; here's Google's C++ style guide[1]:
> Composition is often more appropriate than inheritance.
> Multiple inheritance is especially problematic, because it often imposes a higher performance overhead (in fact, the performance drop from single inheritance to multiple inheritance can often be greater than the performance drop from ordinary to virtual dispatch), and because it risks leading to "diamond" inheritance patterns, which are prone to ambiguity, confusion, and outright bugs.
> Multiple inheritance is permitted, but multiple implementation inheritance is strongly discouraged.
[1] https://google.github.io/styleguide/cppguide.html#Inheritanc...
You just threw this in out of nowhere. I didn't mention anything about "multiple" inheritance. Just inheritance, which by default usually means single inheritance.
That being said, multiple inheritance is equivalent to single inheritance of 3 objects. The only problem is that because two objects are on the same level, it's hard to know which property overrides which. With a single chain of inheritance the parent always overrides the child. But with two parents, we don't know which parent overrides which parent. That's it. But assume there are 3 objects with distinct properties:
A -> B -> C
would be equivalent to A -> C <- B.
They are isomorphic. Merging distinct objects with distinct properties is commutative which makes inheritance of distinct objects commutative. C -> B -> A == A -> B -> C
> I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.

Don't ever tell me that programming for 30 years is a reason for being correct. It's not. In fact you can be doing it for 30 years and be completely and utterly wrong. Then the 30 years of experience is more of a marker of your intelligence.
The point is YOU are NOT understanding WHAT I am saying. Read what I wrote. The problem with inheritance has to do with human capability. We can't handle the complexity that arises from using it extensively.
But fundamentally there's no OTHER SIMPLER way to merge two objects without resorting to complex nesting.
Think about it. You have two classes A and B and both classes have 90% of their properties shared. What is the most fundamental way of minimizing duplication? Inheritance. That's it.
Say you have two structs. The structs contain redundant properties. HOW do you define one struct in terms of the other? There's no simpler way than inheritance.
>> Composition is often more appropriate than inheritance.
You can use composition, but that's literally the same thing but weirder, where instead of identical properties overriding other properties you duplicate the properties via nesting.
So, inheritance:

    A = {a, b}, C = {a1}, A -> C = {a1, b}

Composition:

    A = {a, b}, C = {a1}, C(A) = {a1, {a, b}}
That's it. It's just two arbitrary rules for merging data.

If you have been programming for 30 years, you tell me how to fit this requirement with the most minimal code:
given this:
A = {a, b, c, d}
I want to create this: B = {a, b, c, d, e}
But I don't want to rewrite a, b, c, d multiple times. What's the best way to define B while reusing code? Inheritance.

Like I said, the problem with inheritance is not the concept itself. It is human nature, or our incapability of DEALING with the complexity that arises from it. The issue is that the coupling is too tight, so you make changes in one place and it creates an unexpected change in another place. Our brains cannot handle the complexity. The idea itself is fundamental, not stupid. It's the human brain that is too stupid to handle the emergent complexity.
Also I don't give two flying shits about google style guides after the fiasco with golang error handling. They could've done a better job.
Why would we assume that? If the objects are entirely distinct, why are you combining them together into one class at all? That doesn't make any sense. Let distinct types be distinct. Let consumers of those types combine them however they like.
> But fundamentally there's no OTHER SIMPLER way to merge two objects without resorting to complex nesting. [...] You have two classes A and B and both classes have 90% of their properties shared. What is the most fundamental way of minimizing code reuse? Inheritance. That's it.
So now the objects have 90% of their properties shared. That's different from what you were saying earlier. But moving on...
The composition answer is similar to the inheritance answer. Take the common parts and extract them out into a self contained type. Use that type everywhere you need it - eg by including it in both A and B.
> given this: A = {a, b, c, d} I want to create this: B = {a, b, c, d, e}. [...] What's the best way to define B while reusing code?
Via composition, you do it like this:
    B = { a: A, e }

What could be simpler than that?

You keep claiming that inheritance is simpler. But then you go on to say this:
> The problem with inheritance has to do with human capability. We can't handle the complexity that arises from using it extensively. [..] It is human nature or our incapability of DEALING with the complexity that arises from it. The issue is that the coupling is too tight, so you make changes in one place and it creates an unexpected change in another place. Our brains cannot handle the complexity.
In other words, using inheritance increases the complexity of the resulting code for the human brain. It makes our code harder to read & understand. I absolutely agree with this criticism you're making.
And that's an incredibly damning criticism, because complexity is an absolute killer. Your capacity to wrangle the complexity of a given piece of code is the single greatest limitation of any software developer, no matter how skilled. Any change that makes your code more complex and harder to understand must be worth it in some way. It must pay dividends. Using inheritance brings no benefit in most cases over the equivalent compositional code. The only thing that happens is that - as you say - it makes the software harder for our brains to handle.
If you ask me, that's a terrible decision to make.
Why would inheritance make it easier?
Then multiple inheritance is much cleaner because you can test constructor arguments against the union of these invariants before even hitting a constructor.
Note that I’ve never used it, but it did strike me that this was “the” way to include multiple inheritance in a language. But for some reason (run time performance maybe?) no one else seems to have done it this way.
"how to join two struts with least amount of work and thinking so my manager can tick off a box in excel"
in such case inheritance is a nice temporary crutch
The need to extend a data structure to add more fields comes almost immediately. Think: something like the C "hack" of embedding the "base" structure as the first field of the "derived" structure:
    struct t_derived
    {
        struct t_base base;
        int extra_data;
    };
Then you can pass a struct t_derived instead of a struct t_base with some type casting and caveats. This is "legal" in C because the standard guarantees that base has offset 0.

Of course our "extra_data" could be a structure, but although it would look like "A+B" it is actually a concatenation.
After reading this, I'm thinking that intrusive lists is the one use of inheritance in C++ that makes any sense.
It does. Trees appear in nature all the time. It's the basis of human society, evolution and many things.
Most of programming moves towards practicality rather than fundamental truth. That's why you get languages like golang, which are ugly but practical.
Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
Hard disagree. Knowing that AES and Twofish are block ciphers is useful when dealing with cryptography. Many categories of algorithms and objects are naturally taxonomic.
Even HTML+CSS has (messy) inheritance.
Even trees are not trees: https://en.wikipedia.org/wiki/Anastomosis
Evolution is most definitely not a tree.
Nature also tends towards practicality, even more so than programming. Trees aren’t a fundamental truth, they’re a made-up oversimplified abstraction.
They were pushed by cultish types with little evidence. There was this assertion that all these things were wonderful and would reduce effort and therefore they must be good and we all must use them. We got object oriented everything including object oriented CPUs, object oriented relational databases, object oriented "xtUML". If you weren't object oriented you were a pile of garbage in those days.
For all that, I don't know if there was ever any good evidence at all that any of it worked. It was like the entire industry all fell for snakeoil salesmen and are collectively too embarrassed about it to have much introspection or talk about it. Not that it was the last time the industry has fallen for snakeoil...
My first foray into serious programming was by way of Django, which made a choice of representing content structure as classes in the codebase. It underwent the usual evolution of supporting inheritance, then mixins, etc. Today I’d probably have mixed feelings about conflating software architecture with subject domain so blatantly: of course it could never represent the fundamental truth. However, I also know that 1) fundamental truth is not losslessly representable anyway (the map cannot be the territory), 2) the only software that is perfectly isolated from imperfections of real world is software that is useless, and 3) Django was easy to understand, easy to build with, and effectively fit the purpose.
Any map (requirement, spec, abstraction, pattern) is both a blessing that allows software to be useful over longer time, and a curse that leads to its obsolescence. A good one is better at the former than the latter.
Inheritance was invented as a performance hack - https://news.ycombinator.com/item?id=26988839 - April 2021 (252 comments)
plus this bit:
Inheritance was invented as a performance hack - https://news.ycombinator.com/item?id=35261638 - March 2023 (1 comment)
The "invented" part was suspicious though.
As an aside, I have noticed that the robotics frameworks (ROS and ROS2) rely heavily on inheritance and some co-dependent C++ features like virtual destructors (to call the derived class's destructor through a base class pointer). I was once invited to an interview for a robotics company due to my "C++ experience" and grilled on this pattern of C++ that I was completely unfamiliar with. I seriously considered removing C++ from my resume that day.
The reality is that a codebase is not that simple. Many things you create are not representable as real-world "objects" - to me, this is where it gets confusing to follow, especially when the code gets bigger.
I remember those OOP books (I cannot comment on modern OOP books) where the first few chapters would use Shapes as an example, where a Circle, Square, Triangle, etc. would inherit from the Shape object. Sure, in simple examples like this it makes sense.
I remember covering inheritance and how to tell whether it or composition is better... which is the "Object IS X" vs "Object HAS X" question - so you base your hierarchy around that mindset.
- "A Chair is Furniture" (Chair inherits Furniture) - "A Chair has Legs" (Chair has array of Leg)
I will always remember my first job - creating shop floor diagrams where you get to select a Shelf or Rack and see the visual representation of goods, etc. My early codebase was OOP... a Product, Merchandise, Shelf, Bay, Pegboard, etc. Each object inherited something in one way or another. Keeping on top of it eventually became a pain. I think there were, overall, about 5 levels of inheritance.
I reviewed my codebase one day and decided to screw it -- I would experiment with other approaches. I ended up creating simple classes with no inheritance. Each class was isolated from the others, with the exception of a special Id which represented "something" like a Pin, or Shelf, etc. Now my code was flexible... "A Shelf has this and this"
In later years I realised that what I did was along the lines of what is commonly known as ECS or Entity-Component-System. It seems popular in games (and I viewed that project in a game-like fashion, so it makes sense).
As someone who was blessed/lucky to learn C and Pascal, with some VB6, I understood how to write clean code with simple structs and functions. By the time I was old enough to get a job, I realised most (if not all) job adverts required OOP, Design Patterns, etc. I remember getting my first Java book: about 1,000 pages, half of which was about OOP (not Java directly).
I remember my first job. Keeping my mouth shut and respecting the older, more experienced developers. I would write code the way I believed was correct -- proper OOP. Doing what the books told me. Doing what is "cool" and "popular" in modern programming. Hiding the data you should not see, and wrapping what you should in methods... all that.
Nobody came to me and offered guidance, but I learned that some of my older codebase with inheritance and overrides, while it was "proper" code, would end up a jumbled mess when it required new features. One class that was correctly set up one day would need to be moved about, affecting the class hierarchy of others. It brings me back to my earlier programming days with C -- having things in simple structs and functions is better.
I do not hate on OOP. After all, in my workplace I am using C# or Python - and I make use of classes and, at times, some inheritance here and there. The difference is not going all religious in OOP land. I use things sparingly.
At work, I use what the company has already laid out. Typically languages that are OOP, with a GC, etc. I have no problem with that. At home or on personal projects, I lean more towards C or Odin these days. I use Scheme from time to time. I would jump at the opportunity to use Odin in the workplace, but I am surrounded by developers who don't share my mindset and stick to what they are familiar with.
Overall, his Conclusion matches my own. "Personally, for code reuse and extensibility, I prefer composition and modules."
Interfaces are indeed much nicer, but you have to make sure that your programming language doesn't introduce additional overhead.
Don't be the guy that makes Abstract Factory Factories the default way to call methods. Be aware that there are a lot of people out there that would love to ask a web-server for instructions each time they want to call a method. Always remember that the IT-Crowd isn't sane.
Other languages (just like the article) only saw the downsides of such a generic abstraction, so they added N times more abstractions (split inheritance, interfaces, traits, etc.) and rules for their interactions, which significantly complicated the language with fundamentally no effective gains.
In summary, Herb will always do a better job than me explaining why the choices in the design of C++ classes, even with multiple inheritance, is one of the key factors of C++ success. With cppfront, he extends this idea with metaclasses to clearly describe intent. I think he is on the right track.
- If I have a class Foo and interface Bar, I should be easily able to pass a Foo where Bar is required, provided that Foo has all the methods that Bar has (sometimes I don't control Foo and can't add the "implements Bar" in it).
- I can declare "class Foo implements Bar", but that only means "give me a compilation error if Bar has a method that Foo doesn't implement" - it is NOT required in order to be able to pass a Foo object to a method that takes a Bar parameter
- Conversely, I should be able to also declare "interface Foo implementedBy Baz" and get a compilation error if either one of them is modified in a way that makes them incompatible (again - this does not mean that Baz is the _only_ implementor, just that it's one of them)
- Especially with immutable values - the same should apply to data. record A extends B, C only means "please verify that A has all the members that B & C have, and as such whenever a B record is required, I can pass an A instead". I should be able to do the reverse too (record B extendedBy A). Notably, this doesn't mean "silently import members from B, and create a multiple-inheritance-mess like C++ does".
(I do understand that there'd be some performance implications, but especially with a JIT I feel these could be solved; and we live in a world where I think a lot of code cares more about expressiveness/understandability than raw performance)
You can use `o satisfies T` wherever you want to ensure that any object/instance o implements T structurally.
To verify a type implements/extends another type from any third-party context (as your third point), you could use `(null! as T1) satisfies T2;`, though usually you'd find a more idiomatic way depending on the context.
Of course it's all type-level - if you are getting untrusted data you'll need a library for verification. And the immutable story in TS (readonly modifier) is not amazing.
> Inheritance was invented by the Simula language
The only reason inheritance continues to be around is social convention. It’s how programmers are taught to program in school and there is an entire generation of people who cannot imagine programming without it.
Aside from common social practice inheritance is now largely a net negative that has long outlived its usefulness. Yes, I understand people will always argue that without their favorite abstraction everything will be a mess, but we shouldn’t let the most ignorant among us baselessly dictate our success criteria only to satisfy their own inability to exercise a tiny level of organizational capacity.
impure•9h ago
I guess it could simplify the GC but modern garbage collectors have come a long way.
andyferris•8h ago
I am kind of amused they _removed_ first-class functions though!
bitwize•8h ago
The designers of StarCraft ran into the pitfalls of designing a sensible inheritance hierarchy, as described here (C-f "Game engine architecture"): https://www.codeofhonor.com/blog/tough-times-on-the-road-to-...
nine_k•7h ago
If you must, you can use the implementation inheritance for mix-ins / cross-cutting concerns that are the same for all parties involved, e.g. access control. But even that may be better done with composition, especially when you have an injection framework that wires up certain constructor parameters for you.
Where inheritance (extension) properly belongs is the definition of interfaces.
nine_k•7h ago
In this regard, Go and Rust do classes / objects right, Java provides the classical pitfalls, and C++ is the territory where unspeakable horrors can be freely implemented, as usual.
nine_k•7h ago
JITs can do many fascinating optimizations based on profiling the actual code. They must always be on guard though for a case when their profiling-based conclusions fail to hold with some new data, and they have to de-optimize. This being on guard also has its cost.
xxs•7h ago
Call sites with a single target (no class found overriding the method) are static and can be inlined directly. Call sites with two targets use a class check (which is a simple equality test), can be inlined, and need no v-table. Call sites with 3-5 targets use inline caches (i.e. the compiler records which classes have been seen); these are similar and some can be inlined, usually plus a guard check.

Only highly polymorphic calls use the v-table, and in practice that's a very rare occasion, even with Java totally embracing inheritance (or polymorphic interfaces).

Note: CHA (class hierarchy analysis) is dynamic and happens at runtime, depending on which classes have been loaded. Loading new classes causes CHA to be performed again, and if there are affected call sites, the latter are deoptimized (and re-JITed).
josephg•5h ago
Does C++ have any of these optimisations?
kragen•7h ago
Yes, current GCs are very fast and do not suffer from the problems Simula's GC suffered from. Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B. Allocation may not be any faster, because in either case the compiler can bump the nursery pointer just once (with a copying collector). Deallocation is maybe slightly faster, because with a copying collector, deallocation cost is sort of proportional to how much space you allocate, and the total size of record B is smaller with record A embedded in it than the total size of record A plus record B with a pointer linking them. (That's one pointer bigger.) But tracing gets much faster when there are no pointers to trace.
You will also notice from this example that it's failing to embed the superclass (or whatever) that requires an additional record lookup. And probably a cache miss, too.
I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general, and more generally the Lisp model of memory as a directed graph of objects linked by pointers, because although inheritance reduces the number of cache misses in OO code, it doesn't reduce them enough.
I've written about this at greater length in http://canonical.org/~kragen/memory-models/, but I never really finished that essay.
josephg•5h ago
Yes it does! Inheritance itself is fine, but inheritance almost always means virtual functions - which can have a significant performance cost because of vtable lookups. Using virtual functions also prevents inlining - which can have a big performance cost in critical code.
> Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B.
Huh? No - if you put A and B in separate allocations, you get worse performance. Both because of pointer chasing (which matters a great deal for performance). And also because you're putting more pressure on the allocator / garbage collector. The best way to combine A and B is via simple composition:
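A minimal sketch of what that composition looks like (field names are illustrative):

    struct A { x: f32, y: f32 }

    struct B {
        a: A,       // A embedded by value: one allocation, no pointer chasing
        extra: u32,
    }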
In this case, there's a single allocation. (At least in languages with value types - like C, C++, C#, Rust, Swift, Zig, etc.) In C++, the bytes in memory are actually identical to the case where B inherits from A. But you don't get any class entanglement, or any of the bugs that come along with that.

> I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general
Games are moving away from OO because C++ style OO is a fundamentally bad way to structure software. Even if it wasn't, struct-of-arrays usually performs better than arrays-of-structs because of how caching works. And modern ECS (entity component systems) can take good advantage of SoA style memory layouts.
The performance gap between CPU cache and memory speed has been steadily growing over the last few decades. This means, relatively speaking, pointers are getting slower and big arrays are getting faster on modern computers.
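As a rough illustration of the AoS-vs-SoA point (the entity fields here are made up):

    // Array-of-structs: each entity's fields are interleaved in memory.
    struct Entity { pos: f32, vel: f32, hp: u32 }
    type EntitiesAoS = Vec<Entity>;

    // Struct-of-arrays: each field gets its own contiguous array, so a pass
    // that only touches `pos` streams through dense, cache-friendly memory.
    struct EntitiesSoA {
        pos: Vec<f32>,
        vel: Vec<f32>,
        hp: Vec<u32>,
    }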
kragen•4h ago
> inheritance almost always means virtual functions
Inheritance and "virtual functions" (dynamic method dispatch) are almost, but not completely, unrelated. You can easily have either one without the other. Golang and Lua have dynamic method dispatch without inheritance; C++ bends over backwards so that you can use all the inheritance you want without incurring any of the costs of dynamic method dispatch, as long as you don't declare anything virtual. This is actually a practical thing to do with modern C++ with templates and type inference.
> No - if you put A and B in separate allocations, you get worse performance
Yes, that's what I was saying.
> you're putting more pressure on the allocator / garbage collector
Yes, I explained how that happens in greater detail in the comment you were replying to.
With your struct C, it's somewhat difficult to solve the problem catern was saying Simula invented inheritance to solve; if A is "list node" and B is "truck", when you navigate to a list node p of type A*, to get the truck, you have to do something like &((struct C *)p)->b, relying on the fact that the struct's first field address is the same as the struct's address and on the fact that the A is the first field. While this is certainly a workable thing to do, I don't think we can recommend it without reservation on the basis that "you don't get any class entanglement, or any of the bugs"! It's very error-prone.
> Games are moving away from OO because C++ style OO
There are a lot of things to criticize about C++, but I think one of its worst effects is that it has tricked people into thinking that C++ is OO. "C++ style OO" is a contradiction in terms. I mean, it's possible to do OO in C++, but the language fights you viciously every step of the way; the moment you make a concession to C++ style, OO collapses.
jayd16•4h ago