All that makes a lot of sense if it was introduced as a performance hack rather than a thoughtfully designed concept.
The object-oriented part of OCaml, by the way, has inheritance that's entirely orthogonal to interfaces, which in OCaml are static types. Languages like Smalltalk and, for the most part, Python don't have interfaces at all.
"Trait/Typeclass"-style compositional inheritance as in Rust and Haskell is sublime. It's similar to Java interfaces in terms of flexibility, and it doesn't enforce hierarchical rules [1]. You can bolt behaviors and their types onto structures at will. This is how OO should be.
I put together a visual argument on another thread on HN a few weeks ago:
https://imgur.com/a/class-inheritance-vs-traits-oop-isnt-bad...
[1] Though if you want rules on bounds and associated types, you can have them.
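To make that concrete, here's a minimal Rust sketch (the struct and trait are made up for illustration): a behaviour is defined as a trait and bolted onto an existing type after the fact, and bounds are opt-in rather than imposed by a hierarchy.

struct Point { x: f64, y: f64 }

trait Describe {
    fn describe(&self) -> String;
}

// Attach the behaviour to the existing struct - no base class required.
impl Describe for Point {
    fn describe(&self) -> String {
        format!("({}, {})", self.x, self.y)
    }
}

// Opt-in bound: this only accepts types that implement Describe.
fn print_it<T: Describe>(value: &T) {
    println!("{}", value.describe());
}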
Yes-and-no.
Interfaces still participate in inheritance hierarchies (`interface Bar extends Foo`), and that's in a way that prohibits removing/subtracting type members (so interfaces are not in any way a substitute for mixins). Composition (of interfaces) can be used instead of `extends`, but then you lose guarantees of reference-identity - oh, and only reference-types can implement interfaces which makes interfaces impractical for scalars and unusable in a zero-heap-alloc program.
Interface-types can only expose virtual members: no public fields - which seems silly to me because a vtable-like mechanism could be used to allow raw pointer access to fields via interfaces, but I digress: so many of these limitations (or unneeded functionality) are consequences of the JVM/CLR's design decisions which won't change in my lifetime.
Rust-style traits are an overall improvement, yes - but (as far as my limited Rust experience tells me) there's no succinct way to tell the compiler to delegate the implementation of a trait to some composed type: I found myself needing to write an unexpectedly large amount of forwarding methods by hand (so I hope that Rust is better than this and that I was just doing Rust the-completely-wrong-way).
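To make the "forwarding methods by hand" complaint concrete, here's roughly the boilerplate I mean - a minimal sketch with a made-up trait and types; the outer type composes the implementer but still has to restate every method just to delegate:

trait Logger {
    fn log(&self, msg: &str);
    fn flush(&self);
}

struct ConsoleLogger;

impl Logger for ConsoleLogger {
    fn log(&self, msg: &str) { println!("{msg}"); }
    fn flush(&self) {}
}

// Composition: Service owns a logger...
struct Service {
    logger: ConsoleLogger,
}

// ...but to *be* a Logger it has to forward each method manually.
impl Logger for Service {
    fn log(&self, msg: &str) { self.logger.log(msg); }
    fn flush(&self) { self.logger.flush(); }
}

(There are third-party macro crates that generate this kind of forwarding, but nothing built into the language itself.)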
Also, oblig: https://boxbase.org/entries/2020/aug/3/case-against-oop/
"Only reference types can implement interfaces" is simply not true in C#. Not only can structs implement them, but they can also be used through the interface without boxing (via generics).
(If you merge multiple interfaces, the implementations of the methods have to match. You end up with even more special getters for each one sometimes.)
It's true that you can't access private members (not just fields) on `this` from the mixin interface. But explicit implementations of members mean that only someone explicitly downcasting the object will get access to those members, so accidental access is not an issue.
Those default-implementations are only accessible when the object is accessed via that interface; i.e. they aren't accessible as members on the object itself. Furthermore, interfaces (still) only declare (and optionally define) vtable members (i.e. only methods, properties, and events - which are all fundamentally just methods), not fields or any kind of non-static state, whereas IMO mixins should have no limitations and should behave the same as though you copied-and-pasted raw code.
That's true in C# but not in Java, so it's not something intrinsic to the notion of an interface.
> Furthermore, interfaces (still) only declare (and optionally define) vtable members (i.e. only methods, properties, and events - which are all fundamentally just methods), not fields or any kind of non-static state
This is true, but IMO largely irrelevant because get/set accessors are a "good enough" substitute for a field. That there is even a distinction between fields and properties in the first place is a language-specific thing; it doesn't exist in e.g. Eiffel.
int Foo { get; set; }

Someone has already referenced "Adding Dynamic Interfaces to Smalltalk" [0] and looking back there doesn't seem to be any kind of demonstration that use of interfaces makes software faster to develop or less error prone or... [1]
[0] https://www.jot.fm/issues/issue_2002_05/article1/
[1] https://www.cs.utexas.edu/~wcook/papers/OOPSLA89/interfaces.pdf

"We are not finders of fact. We are tellers of story.
You base the story on the evidence, no?
No! We base the evidence on the story. We prove what helps us, and we disprove what hurts us. Whoever tells the best story goes home with the cash-in prizes."
(The Good Fight, season 3 episode 2. 30 - 31 minutes. CBS All Access 2019.)

"This doesn't represent the fundamental truth" does not imply "this has little value". Your navigation software likely doesn't account for cars passing each other on the road either -- or probably red lights for that matter -- and yet it's still pretty damn useful. The sweet spot is problem- and model-dependent.
The bottom line is, no one ever really used inheritance that much anyway (other than smart people trying to outsmart themselves). People created AbstractFactoryFactoryBuilders not because they wanted to, but because "books" said to do stuff like that and people were just signaling to the tribe.
So now we are all signaling to the new tribe that "inheritance is bad" even though we proudly created multiple AFFs in the past. Not very original in my opinion, since Go and Rust don't have inheritance. The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
If you think that, you have no idea how much horrible code is out there. Especially in enterprise land, where deadlines are set by people who get paid by the hour. I once worked on a Java project which had a method that called a method, that called a method, and so on. Usually, the calls were via some abstract interface with a single implementor, making it hard to figure out what was even being executed. But if you kept at it, there were 19 layers before the chain of methods did anything other than call the next one. There was a separate parallel path of methods that also went 19 layers deep for cleaning up. But if you followed it all the way down, it turned out the final method was empty. 19 methods + adjacent interface methods, all for a no-op.
> The bottom line is, most people don't have any original opinions at all and are just going with whatever seems to be popular.
Most people go with the crowd. But there's a reason the crowd is moving against inheritance. The reason is that inheritance is almost always a bad idea in practice. And more and more smart people talking about it are slowly moving sentiment.
Bit by bit, we're finally starting to win the fight against people who think pointless abstraction will make their software better. Thank goodness - I've been shouting this stuff from the rooftops for 15+ years at this point.
Inheritance really shines when you want to encapsulate behaviour behind a common interface and also provide a standard implementation. E.g.: I once wrote an RN app which talked to ~10 vacuum robots. All of these robots behaved mostly the same, but each was different in a unique way. E.g. 9 robots returned to station when the command "STOP" was sent; one would just stop in place. Or some robots would rotate 90 degrees when a "LEFT" command was sent, others only 30 degrees. We wrote a base class which exposed all needed commands, and each robot had an inherited class which overrode the parts which needed adjustment (e.g. sending left three times so it's also 90 degrees, or sending "MOVE TO STATION" instead of "STOP").
I can only think of one or two instances where I've really been convinced that inheritance is the right tool. The only one that springs to mind is a View hierarchy in UI libraries. But even then, I notice React (& friends) have all moved away from this approach. Modern web development usually makes components be functions. (And yes, javascript supports many kinds of inheritance. Early versions of react even used them for components. But it proved to be a worse approach.)
I've been writing a lot of Rust lately. Rust doesn't support inheritance, but it wouldn't be needed in your example. In Rust, you'd implement that by having a trait with functions (+ default behaviour). Then have each robot type implement the trait. E.g.:
trait Robot {
    fn stop(&mut self) { /* default behaviour */ }
}

struct BenderRobot;

impl Robot for BenderRobot {
    // If this is missing, we default to Robot::stop above.
    fn stop(&mut self) { /* custom behaviour */ }
}

Or, closer to your example, with a default stop() built on top of a required send_command():

trait Robot {
    fn send_command(&mut self, command: Command);

    fn stop(&mut self) {
        self.send_command(Command::Stop);
    }
}

struct BenderRobot;

impl Robot for BenderRobot {
    // Required.
    fn send_command(&mut self, command: Command) { todo!(); }
}
This is starting to look a lot like C++ class inheritance. Especially because traits can also inherit from one another. However, there are two important differences: First, traits don't define any fields. And second, BenderRobot is free to implement lots of other traits if it wants, too.
If you want a real world example of this, take a look at std::io::Write[1]. The Write trait requires implementors to define 2 methods (write(data) and flush()). It then has default implementations of a bunch more methods, using write and flush. For example, write_all(). Implementers can use the default implementations, or override them as needed.
Docs: https://doc.rust-lang.org/std/io/trait.Write.html
Source: https://doc.rust-lang.org/src/std/io/mod.rs.html#1596-1935
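To make that concrete, here's a minimal sketch of an implementor (the CountingSink type is made up; write/flush/write_all are from the linked std::io::Write docs):

use std::io::{self, Write};

// A made-up sink that just counts how many bytes were written.
struct CountingSink {
    bytes: usize,
}

impl Write for CountingSink {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.bytes += buf.len();
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut sink = CountingSink { bytes: 0 };
    // write_all comes from the trait's default implementation - we didn't write it.
    sink.write_all(b"hello")?;
    assert_eq!(sink.bytes, 5);
    Ok(())
}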
How does one handle cases where fields are useful? For example, imagine you have a functionality to go fetch a value and then cache it so that future calls to get that functionality are not required (resource heavy, etc).
// in Java because it's easier for me
public interface HasMetadata {
    default Metadata getMetadata() {
        // this doesn't work because interfaces don't have fields
        if (this.cachedMetadata == null) {
            this.cachedMetadata = fetchMetadata();
        }
        return this.cachedMetadata;
    }

    // relies on implementing class to provide
    Metadata fetchMetadata();
}

It sounds like you could solve that problem in a lot of different ways. For example, you could make an HTTP client wrapper which internally cached responses. Or make a LazyResource struct which does the caching - and use that in all those different types you're making. Or make a generic struct which has the caching logic. The type parameter names the special individual behaviour. Or something else - I don't have enough information to know how I'd approach your problem.
Can you describe a more detailed example of the problem you're imagining? As it is, your requirements sound random and kind of arbitrary.
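As a rough sketch of the "LazyResource struct which does the caching" idea (all names hypothetical), using std::cell::OnceCell so the cache can be filled through a shared reference:

use std::cell::OnceCell;

struct LazyResource<T> {
    cell: OnceCell<T>,
}

impl<T> LazyResource<T> {
    fn new() -> Self {
        LazyResource { cell: OnceCell::new() }
    }

    // Runs `fetch` at most once; later calls return the cached value.
    fn get_or_fetch(&self, fetch: impl FnOnce() -> T) -> &T {
        self.cell.get_or_init(fetch)
    }
}

An Image / Video / Document could then embed a LazyResource<Metadata> field and pass its own fetch logic in, without any shared base class.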
public interface MetadataSource {
    // (Illustrative only: interface fields are implicitly static final in Java,
    // so this mutable cache won't actually compile - which is the whole problem.)
    Metadata metadata = null;

    default Metadata getMetadata() {
        if (metadata == null) {
            metadata = fetchMetadata();
        }
        return metadata;
    }

    // This can be relatively costly
    Metadata fetchMetadata();
}

public class Image implements MetadataSource {
    public Metadata fetchMetadata() {
        // goes to externally hosted image to fetch metadata
    }
}

public class Video implements MetadataSource {
    public Metadata fetchMetadata() {
        // goes to video hosting service to get metadata
    }
}

public class Document implements MetadataSource {
    public Metadata fetchMetadata() {
        // goes to database to fetch metadata
    }
}
Each of the above has completely different ways to fetch their metadata (ex, Title and Creator), and each of them has different characteristics related to the cost of getting that data. So, by default, we want the interface to cache the result so that:
1. The thing that _has_ the metadata only needs to know how to fetch it when it's asked for (implementation of fetchMetadata), and it doesn't need to worry about the cost of doing so (within limits of course)
2. The things that _use_ the metadata only need to know how to ask for it (getMetadata) and can assume it has minimal cost.
3. Neither one of those needs to know anything about it being cached.
I had a case recently where I needed to check "does this have metadata available" separate from "what is the metadata". And fetching it twice would add load.
class CachedMetadataSource implements MetadataSource {
    private final MetadataSource uncachedSource;
    private Metadata metadata;

    CachedMetadataSource(MetadataSource uncachedSource) {
        this.uncachedSource = uncachedSource;
    }

    // (fetchMetadata() from the interface would simply delegate to uncachedSource.)
    public Metadata getMetadata() {
        if (metadata == null) {
            metadata = uncachedSource.getMetadata();
        }
        return metadata;
    }
}
trait MetadataSource {
    fn fetch_metadata(&self) -> Metadata;
}

impl MetadataSource for Image { ... }
impl MetadataSource for Video { ... }
impl MetadataSource for Document { ... }
And a separate object which stores an image / video / document alongside its cached metadata:

struct ThingWithMetadata<T> {
    obj: T, // Assuming you need to store this too?
    metadata: Option<Metadata>,
}

impl<T: MetadataSource> ThingWithMetadata<T> {
    fn get_metadata(&mut self) -> &Metadata {
        if self.metadata.is_none() {
            self.metadata = Some(self.obj.fetch_metadata());
        }
        self.metadata.as_ref().unwrap()
    }
}
It's not the most beautiful thing in the world, but it works. And it'd be easy enough to add more methods, behaviour and state to those metadata sources if you want. (Eg if you want Image to actually load / store an image or something.)
In this case, it might be even simpler if you made Image / Video / Document into an enum. Then fetch_metadata could be a regular function with a match expression (switch statement).
If you want to be tricky, you could even make struct ThingWithMetadata also implement MetadataSource. If you do that, you can mix and match cached and uncached metadata sources without the consumer needing to know the difference.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
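For that "tricky" variant, a minimal sketch of what the extra impl could look like (assuming Metadata is Clone; and since fetch_metadata only takes &self here, this version reads the cache but doesn't fill it - you'd need interior mutability for that):

impl<T: MetadataSource> MetadataSource for ThingWithMetadata<T> {
    fn fetch_metadata(&self) -> Metadata {
        // Assumes Metadata: Clone.
        match &self.metadata {
            // Reuse the cached value if we already have one.
            Some(m) => m.clone(),
            // Otherwise fall through to the wrapped, uncached source.
            None => self.obj.fetch_metadata(),
        }
    }
}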
(For one thing, it's quite obvious to see that the pattern itself is rather anti-modular, and the ways generic typestate is used are also quite divergent from the usual style of inheritance-heavy OO design.)
You can pass WithCachedMetadata around, and consumers don't need to understand any of the implementation details. They just ask for the metadata and it'll fetch it lazily. But it is definitely more awkward than inheritance, because the image struct is wrapped.
As I said, there's other ways to approach it - but I suspect in this case, using inheritance as a stand-in for a class extension / mixin is probably always going to be your favourite option. A better approach might be for each item to simply know the URL to their metadata. And then get your net code to handle caching on behalf of the whole program.
It sounds like you really want to use mixins for this - and you're proposing inheritance as a way to do it. The part of me which knows ruby, obj-c and swift agrees with you. I like this weird hacky use of inheritance to actually do class mixins / extensions.
The javascript / typescript programmer in me would do it using closures instead:
function lazyResource(url) {
    let cached = null
    return async () => {
        if (cached == null) cached = await fetch(url)
        return cached
    }
}

// ...
const image = {
    metadata: lazyResource(url)
}
Of all the answers, I think this is actually my favorite solution. It's probably the most clear, simple and expressive way to solve the problem.
Right, but the start of where I jumped into this thread was about the fact that there are places where fields would make things better (specifically in relation to traits, but interfaces, too). And then proceeding to discuss a specific use case for that.
> A better approach might be for each item to simply know the URL to their metadata.
Not everything is coming from a URL and, even when it is, it's not always a GET/REST fetch.
> but I suspect in this case, using inheritance as a stand-in for a class extension / mixin is probably going to always be your most favorite option
Honestly, I'd like to see Java implement something like a mixin that allows adding functionality to a class, so the class can say "I am a type of HasAuthor" and everything else just happens automatically.
I'd like to generalize that a little bit and say: graph structures in general. A view hierarchy is essentially a tree, where each node has a bunch of common bits (tree logic) and a bunch of custom bits (the actual view). There are tons of "graph structures" that fit that general pattern: for instance, if you have some sort of data pipeline DAG where data comes in on the left, goes out on the right, and in the middle has to pass through a bunch of transformations that are linked in some kind of DAG. Inheritance is great for this: you just have your nodes inherit from some kind of abstract "Node" class that handles the connection and data flow, and you can implement your complex custom behaviors however you want, which makes it very easy to make new ones.
I'm very much in agreement that OOP inheritance has been horrendously overused in the 90s and 00s (especially in enterprise), but for some stuff, the model works really well. And works much better than e.g. sum types or composition or whatever for these kinds of things. Use the right tool for the right job, that's the central point. Nothing is one-size-fits-all.
But what do those functions return? Oh look, it's DOM nodes, which are described by and implemented with inheritance.
I would agree that view hierarchies in UI libraries are one of the primary use-cases for inheritance. But it's a pretty big one.
Well of course. React builds on what the browser provides. And the DOM has been defined as a class hierarchy since forever. But react components don’t inherit from one another. If the react devs could reinvent the DOM, I think it would look very different than it looks today.
(yes, I guess it's the fake vtable of a structure full of pointers)
Funny you mention it, since JavaScript has absolutely no concept of contracts, which is one of the most important side-effects of inheritance. Especially not at compile time, but even at runtime you can compose objects willy-nilly, pass them anywhere, and the only way to test if they adhere to some kind of trait is calling a method and hoping for the best.
At least that had been the case till ES6 came around, but good luck finding anyone actually using classes in JavaScript. Mainly because it adds near-zero benefits, basically just the ability to overwrite method behavior without too much trickery.
Call them interfaces with default implementations or super classes, they are the same thing and very useful.
As usual there is no silver bullet, so it's just a tool and like any other tool you need to use it wisely, when it makes sense.
1. A class can be composed out of multiple interfaces, making them more like mixins/traits etc vs inheritance, which is always a singular class
2. The implementation is flat and you do not have a tree of inheritance - which was what this discussion was about. This obviously comes with the caveat that you don't combine them, which would effectively make it inheritance again.
There are many other ways to share an implementation of a common feature:
1. Another comment already mentioned default method implementations in an interface (or a trait, since the example was in Rust). This technique is even available in Java (since Java 8), so it's as mainstream as it gets.
The main disadvantage is that you can have just one default implementation for the stop() method. With inheritance you could use hierarchies to create multiple shared implementations and choose which one your object should adopt by inheriting from it. You also cannot associate any member fields with the implementation. On the bright side, this technique still avoids all the issues with hierarchies and single and multiple inheritance.
2. Another technique is implementation delegation. This is basically just like using composition and manually forwarding all methods to the embedded implementer object, but the language has syntax sugar that does that for you. Kotlin is probably the most well-known language that supports this feature[1]. Object Pascal (at least in Delphi and Free Pascal) supports this feature as well[2].
This method is slightly more verbose than inheritance (you need to define a member and initialize it). But unlike inheritance, it doesn't require forwarding the class's constructors, so in many cases you might even end up with less boilerplate than using inheritance (e.g. if you have multiple overloaded constructors you need to forward).
The only real disadvantage of this method is that you need to be careful with hierarchies. For instance, if you have a Storage interface (with the load() and store() methods) you can create an EncryptedStorage interface that wraps another Storage implementation and delegates to it, but not before encrypting everything it sends to the storage (and decrypting the content on load() calls). You can also create a LimitedStorage wrapper that enforces size quotas, and then combine both LimitedStorage and EncryptedStorage. Unlike traditional class hierarchies (where you'd have to implement LimitedStorage, EncryptedStorage and LimitedEncryptedStorage), you've got a lot more flexibility: you don't have to reimplement every combination of storage and you can combine storages dynamically and freely.
But let's assume you want to create ParanoidStorage, which stores two copies of every object, just to be safe. The easiest way to do that is to make ParanoidStorage.store() call wrapped.store() twice. The thing you have to keep in mind is that this doesn't work like inheritance: for instance, if you wrap your objects in the order EncryptedStorage(ParanoidStorage(LimitedStorage(mainStorage))), ParanoidStorage will call LimitedStorage.store(). This is unlike the inheritance chain EncryptedStorage <- ParanoidStorage <- LimitedStorage <- BaseStorage, where ParanoidStorage.store() will call EncryptedStorage.store(). In our case this is a good thing (we can avoid a stack overflow), but it's important to keep this difference in mind. (A minimal sketch of this wrapping appears after the links below.)
3. Dynamic languages almost always have at least one mechanism that you can use to automatically implement delegation. For instance, Python developers can use metaclasses or __getattr__[3] while Ruby developers can use method_missing or Forwardable[4].
4. Some languages (most famously Ruby[5]) have the concept of mixins, which let you include code from other classes (or modules in Ruby) inside your classes without inheritance. Mixins are also supported in D (mixin templates). PHP has traits.
5. Rust supports (and actively promotes) implementing traits using procedural macros, especially derive macros[6]. This is by far the most complex but also the most powerful approach. You can use it to create a simple solution for generic delegation[7], but you can go far beyond that. Using derive macros to automatically implement traits like Debug, Eq, Ord is something you can find in every codebase, and some of the most popular crates like serde, clap and thiserror rely heavily on derive. (A small example follows the links below.)
[1] https://kotlinlang.org/docs/delegation.html
[2] https://www.freepascal.org/docs-html/ref/refse48.html
[3] https://erikscode.space/index.php/2020/08/01/delegate-and-de...
[4] https://blog.appsignal.com/2023/07/19/how-to-delegate-method...
[5] https://ruby-doc.com/docs/ProgrammingRuby/html/tut_modules.h...
[6] https://doc.rust-lang.org/reference/procedural-macros.html#d...
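To sketch point 2's Storage example in Rust (the trait and wrapper names come from the comment above; everything else is made up) - the wrapper delegates to whatever storage it wraps, which is why store() hits the wrapped object rather than some override further up a hierarchy:

trait Storage {
    fn store(&mut self, key: &str, value: &[u8]);
    fn load(&self, key: &str) -> Option<Vec<u8>>;
}

// Stores two copies of everything, delegating to the wrapped storage.
struct ParanoidStorage<S: Storage> {
    wrapped: S,
}

impl<S: Storage> Storage for ParanoidStorage<S> {
    fn store(&mut self, key: &str, value: &[u8]) {
        self.wrapped.store(key, value);
        self.wrapped.store(&format!("{key}.copy"), value);
    }

    fn load(&self, key: &str) -> Option<Vec<u8>> {
        self.wrapped.load(key)
    }
}

And for point 5, the derive approach can be as simple as:

// The compiler generates these trait impls from the struct's shape.
#[derive(Debug, Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}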
2003 Traits: Composable Units of Behaviour
https://www.cs.cmu.edu/~aldrich/courses/819/Scha03aTraits.pd...
" Traits as described in this paper are implemented in Squeak [22], an open-source dialect of Smalltalk-80."
I suspect part of the problem of inheritance is that it is a way to share behavior that some humans, especially visual thinkers who understand VMTs, find easy to reason about.
In my experience verbal thinkers struggle with inheritance, because it requires jumping between levels of abstraction and they aren't thinking in terms of semantic units. I have found that books like Refactoring can help bridge the gap, but we have to identify it as a gap to be bridged and people have to want to learn this new skill.
And then on the flip side you have people who try to use it just as a way to de-dupe code, even when it doesn't reflect a meaningful semantic unit.
This is too dismissive of the criticism. The problem with inheritance is it makes control flow harder to understand and it spreads your logic all over a bunch of classes. Ironically, inheritance violates encapsulation - since a base class is usually no longer self contained. Implementation details bleed into derived classes.
The problem isn’t “verbal thinkers”. I can think in OO just fine. I’ve worked in 1M+ line of code Java projects, and submitted code to chrome - which last time I checked is a 30M loc C++ project. My problem with OO is that thinking about where any given bit of code is distracts me from what the code is trying to do. That makes me less productive. And I’ve seen that same problem affect lots of very smart devs, who get distracted building a taxonomy in code instead of solving actual problems.
It’s not a skills problem. Programming is theory building. OO seduces you into thinking the best theory for your software is a bunch of classes which inherit from each other, and which reference each other in some tangled web of dependencies. With enough effort, you can make it work. But it almost always takes more effort than straightforward dataflow style programming to model the same thing.
But we may also disagree on what "productive" means in the context of writing software.
The "taxonomy of code" you are dismissing is I believe what Fred Brooks describes as the "essential tasks" of programming: "fashioning of the complex conceptual structures that compose the abstract software entity".
It's not that I don't sympathize with your concern: being explicit and clear about "what the code is trying to do" is why TDD is popular among OOP programmers. But the step after "green" is "refactor", where the programmer stops focusing on what the code is trying to do and refines the taxonomy of the system that implements those tasks.
I've worked with countless people who came from Java, who try to create the same abstractions and factories and layers.
When I chide them, it's like realizing the shackles are off, and they have fun again with the basics. It leads to much more readable, simple code.
This isn't to say Java is bad and Go is good, they're just languages. It's just how they're typically (ab)used in enterprises.
Yeah; I agree with this. I think this is both the best and worst aspect of Go: Go is a language designed to force everyone's code to look vaguely the same, from beginners to experts. It's a tool to force even mediocre teams to program in an inoffensive, bland way that will be readable by anyone.
I doubt it; the majority of code is in enterprise projects, and they do Java and C# in the idiomatic way, with inheritance.
I'm working on an Android project right now, and inheritance is everywhere!
So, sure, if you ignore all mobile development, and ignore almost all enterprise software, and almost all internal line-of-business software, and restrict yourself to what various "influencers" say, then sure THAT crowd is moving away from inheritance.
Even C++ has that with multiple inheritance - some parents can just be interfaces.
As to whether Smalltalk needs interfaces see https://stackoverflow.com/a/7979852/151019 and https://www.jot.fm/issues/issue_2002_05/article1/
C++ does not (or at least did not at the time) have a concept of interfaces. There was a pattern in some development communities for defining interfaces by writing classes that followed particular rules, but no first-class support for them in the language.
An interface is just a base class none of whose virtual functions have implementations. C++ has first class support for it. The only thing C++ lacks is the "interface" keyword.
C++ doesn't have this restriction, so interfaces would add very little.
Your distinction between "first class support for interfaces" and "C++ support for interfaces" looks like an artificial one to me.
Other than not requiring the keyword "interface", what is it about the C++ way of creating an interface that makes it not "first class support"?
The important thing is to distinguish between interface and implementation, and that is relevant to any class, whether it implements a separately defined interface or not.
The only part of inheritance I’ve ever found useful is allowing objects to conform to a certain interface so that they can fulfill a role needed by a generic function. I’ve always preferred the protocol approach or Rust’s traits for that over classicist inheritance though.
I'm fine with it because trait inheritance doesn't increase code complexity in the same way C++ / Java class inheritance does. If you call foo.bar(), it's usually pretty obvious which function is being called. And you only ever have to look in one place to see all the fields of a struct.
In C++, it's common to have a class method say "blah = 5;" or something. Then you need to spend 5 minutes figuring out where "blah" is even defined in the class hierarchy. By the time you find it, you have 8 code windows open and you've forgotten what you were even trying to do. And that's to say nothing of all the weird and wonderful bits of code which might modify that field when you aren't looking. Ugh.
Agreed, mutation tends to make everything worse and definitely more complicated.
Mutation is a powerful technique, but needs to be treated with care. Haskell and Rust (and Erlang) amongst others have some interesting approaches for how to recognise the danger of mutations, but still harness their upsides.
Haskell even has quite a few different approaches to choose from, or to mix-and-match.
Some of this can be remedied with tooling. A nice "show usages" would solve this. Also, some IDEs have a class browser, where you can see the inheritance tree with all the members.
Yes, in our fad-chasing industry the pendulum has moved in the other direction. Let's wait a few years.
There is nothing wrong with OOP, inheritance, FP, procedural, declarative or whatever. What is bad is religious dogma overtaking engineering work.
You can write spaghetti in any language or paradigm. People will go overboard on DRY while ignoring that inheritance is more or less just a mechanism for achieving DRY for methods and fields.
FP wizards can easily turn your codebase into a complex organism that is just as “impenetrable” as OOP. But as you say, fads are fads are fads, and OOP was the previous fad so it behooves anyone who wants to look “up to date” to be performative about how they know better.
Personally I think it’s obvious that anyone passing around structs that contain data and functions that act on that data is the same concept as passing around objects. I expect you can even base a trait off of another trait in Rust.
But don’t dare call it what it actually is, because this industry really is as petulant as you describe.
I think every new technology or idea is created because it solves some problems, but in the long run, we'll discover that it creates other problems. For example, transpiling javascript, C++ OO, actors, coroutines, docker, microkernels, and so on.
When a new idea appears, we're much more aware of the benefits it brings. But we don't know the flaws yet. So we naively hope there are no flaws - and the new idea is universally good.
But it's rare to change something and not have that cause problems. It just always takes a while for the problems to really show up and spoil the party. I guess you could think of it as the hype cycle - first hype, then disillusionment, then steady state.
Sometimes I play this game with new technology. "In 10 years, people will be complaining about this on hackernews. What do I guess they'll be saying about it?". For Rails, people complain about its deep magic. For Rust, I think it'll be how hard it is to learn. For Docker, that it increases the size of deployments for no reason. And that it's basically static linking with more steps.
Calling everything a fad is too cynical for me, because it implies that progress is impossible. But plenty of tools have made my life as a software developer better. I prefer typescript over javascript. I prefer cargo over makefile / autotools / cmake / crying. Preemptive schedulers are better than your whole computer freezing. High level languages beat programming in assembly.
It's just hard to say for sure how history will look back on any particular piece of tech in 20 years time. Fortran lost to C, even though it's better in some ways. I think C++ / Java style OO will die a slow death, in favour of data oriented design and interfaces / traits (Go / Rust style). I could be wrong, but that's my bet.
> I think it’s obvious that anyone passing around structs that contain data and functions that act on that data is the same concept as passing around objects.
I hear what you're saying - but there's some important differences about the philosophy of how we conceptualise our programs. OO encourages us to think of nouns that "do" some verb. Using structs and functions (like C and Rust) feels much more freeform to me. Yegge said it better: https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
But let's see in 20 years. Maybe OO will be back, but I doubt it. I think if we can learn anything from the history of CS, it's that functional programming had basically all the right ideas 40 years ago. It's just taking the rest of us decades to notice.
(can also be imported globally with 'global using static ..' in a usings file)
So I agree with you. You can write good C# if you want to. The problem is that a lot of people - for some strange reason - actively choose to make their programs heavily OOP.
Something like that, pick your tribe or, even better, be an individual and do whatever (TF) you want.
I met this old guy at a conference once, ~15 years ago. He said he didn't get why people say Java is slow. His Java, he said, runs just as fast as C. I asked him to show me his code - and I'm so glad I did. It was amazing. He did everything in one big static class, and treated Java as if it were a funny way to write C. He ignored almost the entire standard library. No wonder his code ran fast. It was basically JIT-compiled C code.
Java isn't the problem. "Java best practices" are the problem. It's a culture thing. Likewise, you can write heavily OOP code in C if you really put your mind to it and write your own struct full of function pointers. But it's not in the culture of the C community to overuse that design.
Hey, struct generics are the go-to tool for zero-cost abstractions in .NET! No need to feel bad about them :)
A thing like a "comparator" or an "XYZ factory" is not a domain noun, but rather a pluggable code module.
Contrary to what the hype of the 90s said, I don't think OOP is the ultimate programming technique which will obsolete all others. But I think that it's equally inaccurate to make wild claims about how OOP is useless garbage that only makes software worse. Yes, you can make an unholy mess of class structures, but you can do that with every programming language. The prejudice some people have against OOP is really unfounded.
Which doesn't mean everyone has to learn to understand OOP, but just because one person doesn't want to doesn't mean no one should.
Of course, in the functional programming community we know that it is pointfree abstraction that makes your software better.
https://wiki.haskell.org/Pointfree
(Please pardon the pun.)
What's described here is over-generic code, instead of KISS and just keeping an eye on extensibility instead of generalizing ahead of time. This can happen in any paradigm.
Classes - and class hierarchies - really let you go to town. I've seen codebases that seem totally impossible to get your head around. The best is when you have 18 classes which all implicitly or explicitly depend on each other. In that case, just starting the program up requires an insane, fragile dance where lots of objects need to be initialized in just the perfect order, otherwise something hits a null pointer exception in its initialization code. You reorder two lines in a constructor somewhere and something on the other side of your codebase breaks, and you have no idea why.
For some reason I've never seen anyone make that kind of mess just using composition. Maybe I just haven't been around long enough.
I would even go so far as to argue that a small team of devs can learn an OOP hierarchy and work with it indefinitely, but a similar small team will drown in maintenance overhead without OOP and inheritance. This is highly relevant as we head into an age of decreased headcounts. This style of abandoning OOP will age poorly as teams decrease in size.
Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens.
Inheritance isn't the only way to avoid duplicating code. Composition works great - and it results in much more maintainable code. Rust, for example, doesn't have class based inheritance at all. And the principle of DRY is maintained in everything I've made in it. And everything I've read by others. It's composition all the way down, and it works great. Go is just the same.
If anything, I think if you've got a weak team it makes even more sense to stick to composition over inheritance. The reason is that composition is easier to read and reason about. You don't get "spooky action from a distance" when you use composition, since a struct is made up of exactly the list of fields you list. Nothing more, nothing less. There's no overridden methods and inherited fields to worry about.
Generally, don't treat Go as if it's some bad imitation of C++ or Java. It's a different language. Like all languages, idiomatic Go is its own thing. It looks different to idiomatic Ruby or Javascript or C++ or Perl.
I think of programming languages kind of like pieces of wood. Each language has its own "grain" that you need to follow when you work. If you try and force any programming language into acting like its something else, you're going against the grain of the language. You'll need to work 10x harder to get anywhere if you try to work like that. Spend more time learning.
The cost of working with code is much lower with LLMs than with humans and it's falling by an order of magnitude every year.
Because that's the negation of my premise which you disagreed with: "Keeping to the DRY principle is also more valuable in the age of AI when briefer codebases use up fewer LLM tokens."
> And why aren't you using your IDE to change them all at once?
It sounds like you're assuming that they're all defined in the same way that you can catch them with a search.
It's just slightly too strong of a statement.
I'm working in a very large Spring codebase right now, with a lot of horrible inheritance abuse (seriously, every component extended common hierarchy of classes that pulled in a ton of behavior). I suspect part of the reason is the Spring context got out of control, and the easiest way to reliably "inject" behavior is by subclassing. Terrible.
On the other hand, inheritance is sometimes the most elegant solution to a problem. I've done this at multiple companies:
Payment
+ PayPalPayment
+ StripePayment
Sometimes you have data (not just behavior!) that genuinely follows an IS-A relationship, and you want more than just interface polymorphism. Yes you can model this with composition, but the end result ends up being more complex and uglier.
It doesn't have to be all one or the other. But I agree, it should be mostly composition.
I like languages where I can have both, and where the language authors are not trying to preach at me.
Yep: it requires skills that aren't taught in schools or exercised in big companies organized around microservices. We've gone back to a world where most developers are code monkeys, converting high-level design documents into low-level design documents into code.
That isn't what OOP is good for: OOP is good for evolving maintainable, understandable, testable, expressive code over time. But that doesn't get you a promotion right now, so why would engineers value it?
Whoa that’s quite the claim. Most large projects built heavily on OO principles I’ve seen or worked on have become an absolute unmaintainable mess over time, with spider webs of classes referencing classes. To say nothing of DI, factoryfactories and all the rest.
I believe you might have had some good experiences here. But I’m jealous, and my career doesn’t paint the same rosy picture from the OO projects I’ve seen.
I believe most heavily OO projects could be written in about 1/3 as many lines if the developers used an imperative / dataflow oriented design instead. And I'm not just saying that - I've seen ports and rewrites which have borne out around that ratio. (And yes, the result is plenty maintainable).
With enough patience you will see many fads pass twice like a tide raising and falling. OOP, runtime typing, schema-less databases and TDD are the first to come to mind.
I feel "self-describing" data formats and everything agile are fading already.
Very few ideas stick, but some do: I do not expect GOTO to ever come back, but who knows where vibe coding will lead us :)
100% this! And I've recently been wondering whether this is the right workflow for AI-assisted development: use vibe-coding to build the one that you plan to throw away [0], use that to validate your assumptions and implement proper end-to-end tests, then recreate it again once or more with AI asked to try different approaches, and then eventually throw these away too and more manually create "the third one".
[0] "In most projects, the first system built is barely usable....Hence plan to throw one away; you will, anyhow." Fred Brooks, The Mythical Man-Month
That's just false. Before Java abstract factory era there was already a culture of creating deep inheritance hierarchies in C++ code. Interfaces and design patterns (including factories) were adopted as a solution to that mess and as bad as they were - they were still an improvement.
I don't think this is accurate. People created factories like this because they were limited by interface bounds in the languages they were coding in and had to swap out behaviour at run or compile time for testing or configuration purposes.
The lower the stakes, the more dogmatic people become about their choices, because they know on some level it's a matter of taste and nothing more. Counterintuitively, it becomes even more tied to one's ego than the choices that actually have major consequences.
(Probably also worth noting that high performance 3D graphics torture the object abstraction past recognizability, because maintaining those runtime abstractions costs resources that could be better spent slamming pixels into a screen).
Your software will still be a mess but a mess you can work with. Not a horror beyond comprehension. We should aim for workable mess.
This is from experience working with both procedural/functional mess and OO mess.
Inheritance is most definitely used in many popular C++ libraries, e.g., protobuf::Message [1] (which is base class to all user message classes and also has its own base class of MessageLite) or QWidget [2] (which sits in a large class hierarchy) or tinyxml2::XMLNode [3] (base class to other node types). These are honestly the first three libraries that I thought of that have a non-trivial collection of classes in them. They're all stateful base classes by the way, not pure interfaces. And remember, I'm not trying to justify whether these are good or bad designs, just to make the observation that inheritance certainly is well used in practice.
(The fourth library I thought of with a reasonably complex collection of classes is Boost ASIO [4] which actually doesn't use inheritance. Instead it uses common interfaces to allow some compile-time polymorphism. Ironically, this is the only library in the list that I've been so unsatisfied with that I've written my own wrapper more than once for a little part of it: allowing auto-(re)connecting outbound and accepting incoming connections with the same interface. Guess what: I used inheritance!)
[1] https://protobuf.dev/reference/cpp/api-docs/google.protobuf....
[2] https://doc.qt.io/qt-6/qwidget.html
[3] https://leethomason.github.io/tinyxml2/classtinyxml2_1_1_x_m...
[4] https://www.boost.org/doc/libs/1_88_0/doc/html/boost_asio/re...
I've also written some code that's gotten a lot of mileage out of inheritance, including multiple inheritance. Some of my Python abstractions would not have worked anywhere near as well as they did without it. But even then, I could build APIs at least as usable in languages without inheritance, as long as those languages had sufficient facilities for abstraction of their own. (Which OCaml, Haskell and Rust absolutely do!)
You have two objects. A and B. How do you merge the two objects? A + B?
The most straightforward way is inheritance. The idea is fundamental.
The reason why it's not practical has more to do with human nature and the limitations of our capabilities in handling complexity than it has to do with the concept of inheritance itself.
Literally think about it. How else do you merge two structs if not using inheritance?
The idea that inheritance is not fundamental and is wrong in nature is in itself mistaken.
What? Using multiple inheritance? That's one of the worst ideas I've ever seen in all of computer science. You can't just glue two arbitrary classes together and expect their invariants to somehow hold true. Even if they do, what happens when both classes implement a method or field with the same name? Bugs. You get bugs.
I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
The way to merge two structs is via composition:
struct C {
    a: A,
    b: B,
}

If you want to expose methods from A or B, either wrap the methods or make the a or b fields public / protected and let callers call c.a.foo().
Don't take my word for it, here's Google's C++ style guide[1]:
> Composition is often more appropriate than inheritance.
> Multiple inheritance is especially problematic, because it often imposes a higher performance overhead (in fact, the performance drop from single inheritance to multiple inheritance can often be greater than the performance drop from ordinary to virtual dispatch), and because it risks leading to "diamond" inheritance patterns, which are prone to ambiguity, confusion, and outright bugs.
> Multiple inheritance is permitted, but multiple implementation inheritance is strongly discouraged.
[1] https://google.github.io/styleguide/cppguide.html#Inheritanc...
You just threw this in out of nowhere. I didn't mention anything about "multiple" inheritance. Just inheritance, which people usually take to mean single inheritance by default.
That being said multiple inheritance is equivalent to single inheritance of 3 objects. The only problem is because two objects are on the same level it's hard to know which property overrides which. With a single chain of inheritance the parent always overrides the child. But with two parents, we don't know which parent overrides which parent. That's it. But assume there are 3 objects with distinct properties.
A -> B -> C
would be equivalent to A -> C <- B.
They are isomorphic. Merging distinct objects with distinct properties is commutative which makes inheritance of distinct objects commutative. C -> B -> A == A -> B -> C
> I've been programming for 30 years and I've still never seen an example of multiple inheritance that hasn't eventually become a source of regret.
Don't ever tell me that programming for 30 years is a reason for being correct. It's not. In fact you can be doing it for 30 years and be completely and utterly wrong. Then the 30 years of experience is more of a marker of your intelligence.
The point is YOU are NOT understanding WHAT I am saying. Read what I wrote. The problem with inheritance has to do with human capability. We can't handle the complexity that arises from using it extensively.
But fundamentally there's no OTHER SIMPLER way to merge two objects without resorting to complex nesting.
Think about it. You have two classes A and B and both classes have 90% of their properties shared. What is the most fundamental way of minimizing code duplication? Inheritance. That's it.
Say you have two structs. The structs contain redundant properties. HOW do you define one struct in terms of the other? There's no simpler way than inheritance.
>> Composition is often more appropriate than inheritance.
You can use composition but that's literally the same thing but weirder, where instead of identical properties overriding other properties you duplicate the properties via nesting.
So inheritance
A = {a, b}, C = {a1}, A -> C = {a1, b}
Composition: A = {a, b}, C = {a1}, C(A) = {a1, {a, b}}
That's it. It's just two arbitrary rules for merging data.
If you have been programming for 30 years, you tell me how to fit this requirement with the most minimal code:
given this:
A = {a, b, c, d}
I want to create this: B = {a, b, c, d, e}
But I don't want to rewrite a, b, c, d multiple times. What's the best way to define B while reusing code? Inheritance.
Like I said, the problem with inheritance is not the concept itself. It is human nature, or our incapability of DEALING with the complexity that arises from it. The issue is the coupling is too tight, so you make changes in one place and it creates an unexpected change in another place. Our brains cannot handle the complexity. The idea itself is fundamental, not stupid. It's the human brain that is too stupid to handle the emergent complexity.
Also I don't give two flying shits about google style guides after the fiasco with golang error handling. They could've done a better job.
Why would we assume that? If the objects are entirely distinct, why are you combining them together into one class at all? That doesn't make any sense. Let distinct types be distinct. Let consumers of those types combine them however they like.
> But fundamentally there's no OTHER SIMPLER way to merge two objects without resorting to complex nesting. [...] You have two classes A and B and both classes have 90% of their properties shared. What is the most fundamental way of minimizing code reuse? Inheritance. That's it.
So now the objects have 90% of their properties shared. That's different from what you were saying earlier. But moving on...
The composition answer is similar to the inheritance answer. Take the common parts and extract them out into a self contained type. Use that type everywhere you need it - eg by including it in both A and B.
> given this: A = {a, b, c, d} I want to create this: B = {a, b, c, d, e}. [...] What's the best way to define B while reusing code?
Via composition, you do it like this:
B = { a: A, e }
What could be simpler than that?
You keep claiming that inheritance is simpler. But then you go on to say this:
> The problem with inheritance has to do with human capability. We can't handle the complexity that arises from using it extensively. [..] It is human nature or our incapability of DEALING with the complexity that arises from it. The issue is the coupling is two tight so you make changes in one place it creates an unexpected change in another place. Our brains cannot handle the complexity.
In other words, using inheritance increases the complexity of the resulting code for the human brain. It makes our code harder to read & understand. I absolutely agree with this criticism you're making.
And that's an incredibly damning criticism, because complexity is an absolute killer. Your capacity to wrangle the complexity of a given piece of code is the single greatest limitation of any software developer, no matter how skilled. Any change that makes your code more complex and harder to understand must be worth it in some way. It must pay dividends. Using inheritance brings no benefit in most cases over the equivalent compositional code. The only thing that happens is that - as you say - it makes the software harder for our brains to handle.
If you ask me, that's a terrible decision to make.
Human = torso, legs, arms. Three distinct objects combine into one thing. A human by definition is the union of these things. It's fundamental. It's just that your bias is trying to see it as something else.
>So now the objects have 90% of their properties shared. That's different from what you were saying earlier. But moving on...
So? I can talk about multiple things right? This is allowed in life right? Did I break the law here?
>The composition answer is similar to the inheritance answer. Take the common parts and extract them out into a self contained type. Use that type everywhere you need it - eg by including it in both A and B.
Composition is the same thing. But it's saying instead of overriding duplicate properties, clone the duplicate property. That's it. And it uses nesting to achieve this. This arbitrary rule isn't more fundamental than overriding the duplicate property.
>Via composition, you do it like this:
Why don't you take a look at my examples again. You are either not able to comprehend or you didn't read it. I literally said the same thing:
B = {a: A, e}
The above is a complete dupe of what I wrote. B = {a, b, c, d, e}
Just nested. Which I brought up: B = {{a, b, c, d}, e}
Is it not? Please read my response before replying with stuff like this. Read it thoroughly.
> And that's an incredibly damning criticism, because complexity is an absolute killer.
Not exactly; it's not that straightforward. Because inheritance minimizes code copying. It reuses code in the most efficient way possible. So lines of code and duplicate code actually go down. So complexity in one area falls and rises in another area.
Our brains are biased towards handling the complexity of duplicate code better than tightly coupled code.
No, it’s not. If I put a torso, legs and arms (and perhaps a head) on a table, I don’t get a human being. I’d say a human composes all of those things (and more!). But a human doesn’t inherit from them. For example, each leg can kick(). But you can’t inherit from two legs! And if you did, which leg is the one that kicks when you call the function? Much better to have human class which contains two legs. Then human can have a kick(RIGHT_LEG) function which delegates its behaviour to right_leg.kick().
There’s lots more ways composition helps us model this. Let’s say we want to model blood temperature, which is tracked in every limb separately. Composition makes it more straightforward to have different behaviour (& state) in the Body class for blood temperature than in any of the limbs. (Eg maybe the blood temperature is the average of all limbs temperature. That is more straightforward to implement with composition.)
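A minimal sketch of that (types made up): the human composes two legs and explicitly delegates the kick to one of them.

struct Leg;

impl Leg {
    fn kick(&self) { /* ... */ }
}

enum Side { Left, Right }

struct Human {
    left_leg: Leg,
    right_leg: Leg,
}

impl Human {
    // Explicit delegation: the human decides which leg kicks.
    fn kick(&self, side: Side) {
        match side {
            Side::Left => self.left_leg.kick(),
            Side::Right => self.right_leg.kick(),
        }
    }
}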
> inheritance minimizes code copying. It reuses code in the most efficient way possible.
On the surface, I agree with this claim. But it's funny - programs which make heavy use of OO always seem unnecessarily verbose. I wonder why that is? I'm thinking of Java where it's common to see utility classes that just have 1 or 2 fields take up 100+ lines of code, due to class boilerplate, custom hashCode, toString and isEqual methods and all the rest.
But in any case, as you say, we aren’t just trying to optimise for the fewest lines of code. We’re trying to optimize for how easy something is to read, write and maintain. Adding code to increase simplicity is often worth it. Inheritance increases complexity because it adds a layer of hidden control flow. When I’m reading a program, I need to do a lot of work to figure out if foo.bar() is calling a function in one of the base classes or in the derived class. As you say, humans don’t deal with that kind of complexity well. In general, explicit is better than implicit - this.leg.kick() is more explicit than this.kick() when kick() exists somewhere in one (or more?) of the base classes.
Also let’s say I have 3 classes A, B extends A and C extends A. If there’s a bug in B that involves something in the base class A, fixing that bug may break implicit invariants in C. This kind of “spooky action at a distance” is horrible. Ironically, it violates the principle of encapsulation that OO claims as one of its core principles. I find this kind of problem is rarer in compositional systems. And when it happens, it’s usually much more straightforward to debug and fix. The reason is because classes are all self contained. You don’t have partially-specified base classes that only kinda sorta maintain their invariants. And derived classes don’t implicitly include their base class’s behaviour. As a result, there’s less implicit entanglement. B and C can much more easily change how they wrap A’s behaviour.
At the end of the day, I think we more or less agree that inheritance makes code harder to reason about. I don’t write code to express a pure conceptual representation of the world. (And neither should you!). I write code to get stuff done. If inheritance makes it harder for humans to be productive with our software, then that’s reason enough to abandon it.
Composition duplicates identical properties via nesting.
Inheritance overrides properties that are identical.
That’s it. I’m done.
I agree about nesting. But nesting matters, because it forces us to design components which make sense in isolation. As a result, composition encourages - and in many ways requires - better modularity in code. Inheritance does not. Base classes are often poorly conceived, poorly specified grab-bags of state and functions. They lead to hard to understand, hard to follow code.
Earlier in this thread you insulted my intelligence. You said this:
> Don't ever tell me that programming for 30 years is a reason for being correct. It's not. In fact you can be doing it for 30 years and be completely and utterly wrong. Then the 30 years of experience is more of a marker of your intelligence.
I'm curious if you'll still back the argument you've made here after you've been programming for 30 years too. You're clearly already suspicious of how and why inheritance makes code harder to understand. I suspect in a few years, you'll come around to my point of view on this. But I'd love to know if I'm wrong.
I did insult your intelligence. Because when you said you have 30 years of experience, I hear total arrogance. It's like "I'm right and you're wrong because I have 30 years of experience." When I hear that, I just want the other person to shut the hell up.
>I agree about nesting. But nesting matters, because it forces us to design components which make sense in isolation. As a result, composition encourages - and in many ways requires - better modularity in code. Inheritance does not. Base classes are often poorly conceived, poorly specified grab-bags of state and functions. They lead to hard to understand, hard to follow code.
Again, you end up rewriting the same code if you don't use inheritance. A cat walks, so does an animal, so does a dog. You have to write walk() twice if you don't use inheritance. There's a trade-off here.
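A minimal sketch of the trade-off being described, with hypothetical Animal/Cat/Dog types - walk() written once and inherited, versus a composition version that needs a forwarding method per class:

#include <cstdio>

struct Animal {
    void walk() { std::puts("walking"); }   // written once
};

struct Cat : Animal {};                     // Cat and Dog reuse walk() via inheritance
struct Dog : Animal {};

// The composition alternative: hold an Animal and forward the call,
// which costs a little boilerplate in every class that wants walk().
struct Cat2 {
    Animal base;
    void walk() { base.walk(); }
};

int main() {
    Cat c;   c.walk();
    Cat2 c2; c2.walk();
}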
The difference is skin deep. It's the emergent complexity that is NOT skin deep.
Combining objects via "object composition" or via "inheritance" is different. One way is not more right than the other. It's simply that you can't handle the hierarchical relationships.
Yeah, I agree with this. There's something about inheritance that is more than skin deep, something which changes how we conceive of our software. I agree that whatever that is, it's quite important and impactful.
I could talk for days on what I think that difference is. I wrote a whole bit, but deleted it because I think I've said enough about what I think.
What do you think the difference is? How is it possible for composition and inheritance to be so different, if they're so alike on the surface?
But think about it. If you have a deeply nested object where you don't use inheritance, then all the objects have multitudes of redundant properties; doesn't that result in complex code as well? And how does nesting objects make it less complex than inheriting objects? It's more of a code-navigation problem, in the sense that when you use inheritance and you look at a child derived from generations of inheritance, it's just hard to read and figure out what the final object is, because there’s no easy way to visualize or follow the derived properties.
With object composition the view is the same. You have an object that holds generations of nested objects. The difference is you can control-click and follow the definition of the nested object, so it's more visible.
Thus it seems to me the issue with the complexity is that inheritance simply does not give you a widget you can control-click into easily to follow the definition and see what the derived methods are.
This whole problem is characterized by a user interface issue, because it's not evident to me how an object with nested objects a million layers deep is more complex than the same object derived from a million ancestors.
Think about the emergent complexity here. An object derived from a chain of 1000 inherited objects is actually less complex than that same chain of objects created via composition. Because duplicate properties don’t get overridden, composition stores more data than inheritance does. It’s actually more complex.
The problem is in the user interface.
Create an IDE that automatically fills out all the derived methods of an object and allows you to control click to the ancestor where the derived method comes from and the issue seems to me to be solved.
Why would inheritance make it easier?
I want to write: B = {a, b, c, d, e, f, g}
But I don't want to write duplicate code
So I write:
B = B(A) = {A, e, f, g}
aka I use inheritance. Are there easier ways? No. Inheritance is the most fundamental way of doing this. Composition is just a workaround, as it results in arbitrary nesting. But ultimately it's the same thing too.
Then multiple inheritance is much cleaner because you can test constructor arguments against the union of these invariants before even hitting a constructor.
Note that I’ve never used it, but it did strike me that this was “the” way to include multiple inheritance in a language. But for some reason (run time performance maybe?) no one else seems to have done it this way.
"how to join two struts with least amount of work and thinking so my manager can tick off a box in excel"
In such a case, inheritance is a nice temporary crutch.
It's done in Java with interfaces with default implementations, and the world hasn't imploded. It just doesn't seem like that big of a problem.
It really, really depends on the codebase. There are absolute mammoth tire-fire codebases out there - particularly in "enterprise code". These are often made up of insane hierarchies of classes which in practice do nothing but obscure where any of the actual logic lives in your program. AbstractFactoryBuilderImpl. Wild goose chases where you need some bizarre and fragile incantation to actually create an object, because every class somehow references every other class. And you can't instantiate anything without instantiating everything else first.
If you haven't seen it in your career yet (or at all), you are lucky. But I promise you, hell is programmed by mediocre teams working in Java.
The only real difference I see between multiple inheritance and multiple interfaces with default implementations is constructors. And they can be handled in the same way as default implementations; requiring specific usage/ordering.
The need to extend a data structure to add more fields comes almost immediately. Think: something like the C "hack" of embedding the "base" structure as the first field of the "derived" structure:
struct t_derived
{
struct t_base base;
int extra_data;
};
Then you can pass a t_derived instead of a t_base with some type casting and caveats. This is "legal" in C because the standard guarantees that base has offset 0.
Of course, our "extra_data" could itself be a structure, but although it would look like "A+B" it is actually a concatenation.
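A minimal sketch of how that hack gets used (the t_base field and print_id are hypothetical; the only guarantee relied on is that base sits at offset 0):

#include <stdio.h>

struct t_base { int id; };                 /* hypothetical "base" with one field */

struct t_derived
{
    struct t_base base;                    /* must stay the first member: offset 0 */
    int extra_data;
};

void print_id(struct t_base *b) { printf("%d\n", b->id); }

int main(void)
{
    struct t_derived d = { { 42 }, 7 };
    print_id(&d.base);                     /* explicit, always fine */
    print_id((struct t_base *)&d);         /* the "hack": same address as d.base */
    return 0;
}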
By merging them. Structs are product types. If you merge them, you get a bigger product type. You don't need inheritance (ADTs) for that.
The more useful point of inheritance is having shared commonality. But modern languages make it convenient to express that without using ADTs/inheritance.
TypeScript is fully structurally typed. If you combine a Foo and a Bar it is something new, but keeps being both a Foo and a Bar as well.
Go is structurally typed to a relatively high degree as well. You can embed types (including structs) into structs and only care about the individual parts in your functions. And you have composable and implicit interfaces.
Clojure has protocols and generally only cares about the things you use or define to use in functions. It allows you to do hierarchical keyword ontologies if you want, but I see it rarely used.
These languages and many others favor two fundamental building blocks: composition and signatures. The latter being either about data fields or function signatures. The neat part is these aren't entangled: You can use and talk about them separately.
How fundamental is inheritance if it can be fully replaced by simpler building blocks?
Merging structs and inheritance is fundamentally the same thing.
>How fundamental is inheritance if it can be fully replaced by simpler building blocks?
It can't be replaced. Combining Foo and Bar in the way you're thinking involves additional primitives and concepts like nesting. If Foo and Bar share the same property, the most straightforward way of handling it is overriding one property with the other. Overriding IS inheritance.
We aren't dealing with product types in the purest form either. These product types have named properties and you need additional rules to handle conflicting names.
In fact, once you have named properties, the resulting algebra from multiplying structs is not consistent with the concept of multiplication, whether you use inheritance or "object composition".
After reading this, I'm thinking that intrusive lists is the one use of inheritance in C++ that makes any sense.
Multiple inheritance is possible, but you'd have to jump through some hoops to disambiguate, since you're dealing with multiple copies of the same base class.
template<class T> struct IntrusiveListNode { T* next; T* prev; };

struct SomeObj {
    IntrusiveListNode<SomeObj> list_foo;
    IntrusiveListNode<SomeObj> list_bar;
};
Thanks for the update, my C++ is pretty rusty.
But it brings up another problem for me: when iterating a list like that, how do you know which of the links to follow? I get that you can do it manually, step by step, but what if you wanted to, say, write an iterator? Member pointers?
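One possible answer, sketched under the assumption of the IntrusiveListNode/SomeObj definitions above: pass a pointer-to-member that picks which embedded node to follow (the for_each helper here is hypothetical):

// Walk one chain of an object that sits on several intrusive lists.
template<class T, class Fn>
void for_each(T* head, IntrusiveListNode<T> T::*link, Fn visit) {
    for (T* node = head; node != nullptr; node = (node->*link).next)
        visit(*node);
}

// Usage: follow the "foo" chain and ignore the "bar" chain entirely.
// for_each(first, &SomeObj::list_foo, [](SomeObj& o) { /* ... */ });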
It does. Trees appear in nature all the time. It's the basis of human society, evolution and many things.
Most of programming moves towards practicality rather than fundamental truth. That's why you get languages like golang, which are ugly but practical.
Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
Hard disagree. Knowing that AES and Twofish are block ciphers is useful when dealing with cryptography. Many categories of algorithms and objects are naturally taxonomic.
Even HTML+CSS has (messy) inheritance.
But don't get me wrong - I'm totally in favour of having common interfaces. They aren't great because they form a taxonomy. They're great because it helps us abstract. Whether its Iterator or different software implementing HTTP, interfaces actually help us solve actual problems.
> Knowing that AES and Twofish are block ciphers is useful when dealing with cryptography.
The useful fact in this example is that they both do a similar thing. They form a type class from their common behaviour - behaviour they may share with, for example, a compression algorithm. That's what actually matters here.
So, the taxonomy isn't useless, but, it's also never sufficient.
It's more like there are many fundamental concepts and trees are one such concept. I don't think there is a singular fundamental truth of things.
>Even if you want to claim that trees are a common data structure, that doesn't mean they're appropriate in any specific case. Should we therefore arrange all websites in a tree? Should relational databases be converted to trees, because "they're the basis of human society"? What tosh.
I never made this claim though?
>Programming moves toward practicality because software is created to do work. Taxonomies are entirely and completely worthless. The classic Animal -> Mammal -> Cat example for inheritance is a fascinating ontology, and an entirely worthless piece of software.
I mentioned this because parent poster is talking about fundamental truths. I'm saying trees are fundamental... But they may not be practical.
Even trees are not trees: https://en.wikipedia.org/wiki/Anastomosis
Evolution is most definitely not a tree.
Nature also tends towards practicality, even more so than programming. Trees aren’t a fundamental truth, they’re a made-up oversimplified abstraction.
They were pushed by cultish types with little evidence. There was this assertion that all these things were wonderful and would reduce effort and therefore they must be good and we all must use them. We got object oriented everything including object oriented CPUs, object oriented relational databases, object oriented "xtUML". If you weren't object oriented you were a pile of garbage in those days.
For all that, I don't know if there was ever any good evidence at all that any of it worked. It was like the entire industry all fell for snakeoil salesmen and are collectively too embarrassed about it to have much introspection or talk about it. Not that it was the last time the industry has fallen for snakeoil...
If abstraction wasn't useful, we wouldn't use containers.
My first foray into serious programming was by way of Django, which made a choice of representing content structure as classes in the codebase. It underwent the usual evolution of supporting inheritance, then mixins, etc. Today I’d probably have mixed feelings about conflating software architecture with subject domain so blatantly: of course it could never represent the fundamental truth. However, I also know that 1) fundamental truth is not losslessly representable anyway (the map cannot be the territory), 2) the only software that is perfectly isolated from imperfections of real world is software that is useless, and 3) Django was easy to understand, easy to build with, and effectively fit the purpose.
Any map (requirement, spec, abstraction, pattern) is both a blessing that allows software to be useful over longer time, and a curse that leads to its obsolescence. A good one is better at the former than the latter.
Complex inheritance trees can make sense in niche applications for similar reasons.
The fundamental truth of things? What are you even talking about? What fundamental truth of things? And what does that have anything to do with building software?
To provide building blocks useful for the construction of programs.
There's a number of properties that are good for such building blocks... composability, flexibility, simplicity, comprehensibility, etc.
Naturally, these properties can conflict, so the goal would be to provide a minimal set of interoperable building blocks providing good coverage of the desirable properties, to allow the developer to choose the appropriate one for a given circumstance and to change when needed. E.g., they could choose to use a simple but less flexible block in one situation, or a more complicated or less performant block in another.
IMO, inheritance is a decent building block -- simple and easy to understand, though with somewhat limited applicability.
We can imagine improvements (particularly to implementation) but I think it got a bad rep mostly due to people not understanding its uses and limitations.
...I've got to say, though, if you aren't figuring out how to use the simple and easy tools, you're really not going to do better with more complicated and capable tools. People hate to admit it, but the best of us are still highly confused monkeys haphazardly banging away at keyboards, barely able to hold a few concepts in our heads at one time. Simple is good for us.
yes, but that's true of other abstractions too. Whether you use inheritance or not, you usually don't know what abstractions you need until you need them: even if you were using composability rather than inheritance, chances are that you'd have encoded assumptions that HTTP goes over TCP until you need to handle the fact that actually you need higher-level abstractions there.
If you don't use inheritance, you switch to an interface (or a different interface) in your composition. If you did use inheritance, you stop doing so and start using composition. The latter is probably somewhat more work, but I don't think it's fundamentally very different.
Inheritance was invented as a performance hack - https://news.ycombinator.com/item?id=26988839 - April 2021 (252 comments)
plus this bit:
Inheritance was invented as a performance hack - https://news.ycombinator.com/item?id=35261638 - March 2023 (1 comment)
The "invented" part was suspicious though.
As an aside, I have noticed that the robotics frameworks (ROS and ROS2) rely heavily on inheritance and some co-dependent C++ features like virtual destructors (to call the derived class's destructor through a base class pointer). I was once invited to an interview for a robotics company due to my "C++ experience" and grilled on this pattern of C++ that I was completely unfamiliar with. I seriously considered removing C++ from my resume that day.
The reality is that a codebase is not that simple. Many things you create are not representable as real-world "objects" - to me, this is where it gets confusing to follow, especially when the code gets bigger.
I remember those OOP books (I cannot comment on modern OOP books) where the first few chapters would use Shapes as an example, where a Circle, Square, Triangle, etc. would inherit from the Shape object. Sure, in simple examples like this, it makes sense.
I remember covering inheritance and how to tell whether it or composition is better... which is the "Object IS X" vs "Object HAS X" question - so you base your hierarchy around that mindset.
- "A Chair is Furniture" (Chair inherits Furniture) - "A Chair has Legs" (Chair has array of Leg)
I will always remember my first job - creating shop floor diagrams where you get to select a Shelf or Rack and see the visual representation of goods, etc. My early codebase was OOP... a Product, Merchandise, Shelf, Bay, Pegboard, etc. Each object inherits something in one way or another. Keeping on top of it eventually became a pain. I think there were, overall, about 5 levels of inheritance.
I reviewed my codebase one day and decided to screw it -- I would experiment with other approaches. I ended up creating simple classes with no inheritance. Each class was isolated from the others, with the exception of a special Id which represented "something" like a Pin, or Shelf, etc. Now my code was flexible... "A Shelf has this and this".
In later years I realised what I did was following along the lines of what is commonly known as ECS, or Entity-Component-System. It seems popular in games (and I viewed that project in a game-like fashion, so it makes sense).
It is very much like building a database.
Each class I created is very much like creating a table.
I had my "Objects" which was a simple class with an Id. In ECS land, this would be better known as an entity.
I would then create classes (or tables) for each "feature" to support.
A feature could be
Is it Solid? Is it Visible? Is it a Shape/Model? Does it have a Position? Does it have Children?
etc.
Each feature an object supports gives it extra data. So each feature is essentially a table with an Id, ObjectId, and additional fields.
Basically, I am "creating my object hierarchy" at runtime, not at compile time with OOP methods. This made it sooo more flexible when more Companies wanted to use the software, especially with their unique approaches to shop management. All configurations were in XML files -- much better than trying to change an OOP hierarchy to suit ALL companies rulesets.
This is going back a few years now. It's amazing what comes back to memory... how I wrote most of this entirely in Javascript, to eventually moving to a backend language using AJAX, to simplifying code with jQuery.
Fond memories.
:-)
As someone who was blessed/lucky to learn C and Pascal.. with some VB6.. I understood how to write clean code with simple structs and functions. By the time I was old enough to get a job, I realised most (if not all) job adverts required OOP, Design Patterns, etc. I remember getting my first Java book. About 1,000 pages, half of which was about OOP (not Java directly)
I remember my first job. Keeping my mouth shut and respecting the older, more experienced developers. I would write code the way I believed was correct -- proper OOP. Doing what the books tell me. Doing what is "cool" and "popular" in modern programming. Hiding the data you should not see, and wrapping what you should in methods... all that.
Nobody came to me and offered guidance, but I learned that some of my older codebase with inheritance and overrides... while it was "proper" code, would end up a jumbled mess when it required new features. One class that was correctly set up one day needed to be moved about, affecting the class hierarchy of others. It brings me back to my earlier programming days with C -- having things in simple structs and functions is better.
I do not hate on OOP. After all, in my workplace I am using C# or Python - and make use of classes and, at times, some inheritance here and there. The difference is not going all religious in OOP land. I use things sparingly.
At work, I use what the company has already laid out. Typically languages that are OOP, with a GC, etc. I have no problem with that. At home or on personal projects, I lean more towards C or Odin these days. I use Scheme from time to time. I would jump at the opportunity to use Odin in the workplace, but I am surrounded by developers who don't share my mindset and stick to what they are familiar with.
Overall, his Conclusion matches my own. "Personally, for code reuse and extensibility, I prefer composition and modules."
I think syntax has improved since then. Last time I touched Delphi was 2002.
Yes, MFC is a mess. Looking back, I would have been interested in trying NextStep during its prime years. It looked 10 years ahead of Microsoft's Visual Studio in the mid-to-late 90s. A tool that evolved well into the Apple world.
Interfaces are indeed much nicer, but you have to make sure that your programming language doesn't introduce additional overhead.
Don't be the guy that makes Abstract Factory Factories the default way to call methods. Be aware that there are a lot of people out there that would love to ask a web-server for instructions each time they want to call a method. Always remember that the IT-Crowd isn't sane.
Other languages (just like the article) saw only the downsides of such a generic abstraction, so they added N times more abstractions (splitting it into inheritance, interfaces, traits, etc.) and rules for their interaction, which significantly complicated the language with fundamentally no effective gain.
In summary, Herb will always do a better job than me explaining why the choices in the design of C++ classes, even with multiple inheritance, is one of the key factors of C++ success. With cppfront, he extends this idea with metaclasses to clearly describe intent. I think he is on the right track.
- If I have a class Foo and interface Bar, I should be easily able to pass a Foo where Bar is required, provided that Foo has all the methods that Bar has (sometimes I don't control Foo and can't add the "implements Bar" in it).
- I can declare "class Foo implements Bar", but that only means "give me a compilation error if Bar has a method that Foo doesn't implement" - it is NOT required in order to be able to pass a Foo object to a method that takes a Bar parameter
- Conversely, I should be able to also declare "interface Foo implementedBy Baz" and get a compilation error if either one of them is modified in a way that makes them incompatible (again - this does not mean that Baz is the _only_ implementor, just that it's one of them)
- Especially with immutable values - the same should apply to data. record A extends B, C only means "please verify that A has all the members that B & C have, and as such whenever a B record is required, I can pass an A instead". I should be able to do the reverse too (record B extendedBy A). Notably, this doesn't mean "silently import members from B, and create a multiple-inheritance-mess like C++ does".
(I do understand that there'd be some performance implications, but especially with a JIT I feel these could be solved; and we live in a world where I think a lot of code cares more about expressiveness/understandability than raw performance.)
You can use `o satisfies T` wherever you want to ensure that any object/instance o implements T structurally.
To verify a type implements/extends another type from any third-party context (as your third point), you could use `(null! as T1) satisfies T2;`, though usually you'd find a more idiomatic way depending on the context.
Of course it's all type-level - if you are getting untrusted data you'll need a library for verification. And the immutable story in TS (readonly modifier) is not amazing.
> Inheritance was invented by the Simula language
The only reason inheritance continues to be around is social convention. It’s how programmers are taught to program in school and there is an entire generation of people who cannot imagine programming without it.
Aside from common social practice inheritance is now largely a net negative that has long outlived its usefulness. Yes, I understand people will always argue that without their favorite abstraction everything will be a mess, but we shouldn’t let the most ignorant among us baselessly dictate our success criteria only to satisfy their own inability to exercise a tiny level of organizational capacity.
So you get interfaces that are much bigger than they need to be, visitor pattern this, manager that. As someone who isn't used to OO, it is sometimes difficult or cumbersome to distill these kinds of examples and explanations into their essence.
I also noticed that AI assistants often want to blow up every interface with a whole bunch of useless stuff like getter/setter style functions and the like. That's obviously not the fault of these assistants, but I think it's something to consider.
There's no reason to be dogmatic about programming abstractions. Just because OOP became dogma for a while and got abused doesn't mean we have to be dogmatic entirely in the opposite direction. Abstractions have their use for those programming languages that choose to implement them.
I absolutely disagree. Some things in programming exist to bring products to market, but many things in programming only exist to bring programmers to market. That is a terrible and striking difference that results ultimately from an absence of ethics. Actions/decisions that exist only to discard ethical considerations serve only two objectives: 1) normalization of lower competence, 2) narcissism. It does not matter which of those two objectives are served, because the conclusions are the same either way.
For OOP in general, I'd say anything with a metaobject protocol for starters, like Smalltalk, Lisp (via CLOS), Python, Perl (via Moose). All but the first support multiple inheritance, but also have well-defined method resolution orders. Multiple inheritance might still lead frequently to nasty spaghetti code even in those languages, but it will still be predictable.
CLOS and Dylan have multiple dispatch, which is just all kinds of awesome, but alas is destined to remain forever niche.
Interestingly enough, my first non-class-related experience with "intrusive lists" was in C, and we implemented it via macros; you'd add a LINKED_LIST macro in the body of a struct definition, and it would unspool into the pointer declarations. Then the list-manipulation functions were also macros so they would unspool at compile time into C code that was type-aware enough to know where the pointers lived in that individual struct.
Of course, this meant incurring the cost of a new definition of function families for each intrusive-list structure, but this was in the context of bashing together a demo kernel for a class, so we assumed modern PCs that have more memory than sense. The bigger problem was that C macros are little bastards to debug and maintain (especially a macro'd function... so much escaping).
C++, of course, ameliorates almost all those problems. And replaces them with other problems. ;)
impure•9mo ago
I guess it could simplify the GC but modern garbage collectors have come a long way.
andyferris•9mo ago
I am kind of amused they _removed_ first-class functions though!
bitwize•9mo ago
The designers of StarCraft ran into the pitfalls of designing a sensible inheritance hierarchy, as described here (C-f "Game engine architecture"): https://www.codeofhonor.com/blog/tough-times-on-the-road-to-...
nine_k•9mo ago
If you must, you can use the implementation inheritance for mix-ins / cross-cutting concerns that are the same for all parties involved, e.g. access control. But even that may be better done with composition, especially when you have an injection framework that wires up certain constructor parameters for you.
Where inheritance (extension) properly belongs is the definition of interfaces.
nine_k•9mo ago
In this regard, Go and Rust do classes / objects right, Java provides the classical pitfalls, and C++ is the territory where unspeakable horrors can be freely implemented, as usual.
nine_k•9mo ago
JITs can do many fascinating optimizations based on profiling the actual code. They must always be on guard though for a case when their profiling-based conclusions fail to hold with some new data, and they have to de-optimize. This being on guard also has its cost.
nine_k•9mo ago
Overriding may lead to other troubles though [1].
[1]: https://www.snopes.com/fact-check/shoot-me-kangaroo-down-spo...
xxs•9mo ago
Monomorphic sites (no loaded class overrides the method) become static calls and can be inlined directly. Sites with two receiver classes use a class check (which is a simple equality test), can be inlined, no v-table. Sites with 3-5 receiver classes use inline caches (the compiler records which classes have been seen); those are similar, and some can be inlined, usually plus a guard check.
Only highly polymorphic calls use the v-table, and in practice that's a very rare occasion, even with Java totally embracing inheritance (and polymorphic interfaces).
Note: CHA (class hierarchy analysis) is dynamic and happens at runtime, depending on which classes have been loaded. Loading new classes causes CHA to be performed again, and any affected sites are deoptimized (and re-JITted).
josephg•9mo ago
Does C++ have any of these optimisations?
gpderetta•9mo ago
But these optimizations are vastly less critical in C++, where only a minority of functions are virtual.
kragen•9mo ago
Yes, current GCs are very fast and do not suffer from the problems Simula's GC suffered from. Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B. Allocation may not be any faster, because in either case the compiler can bump the nursery pointer just once (with a copying collector). Deallocation is maybe slightly faster, because with a copying collector, deallocation cost is sort of proportional to how much space you allocate, and the total size of record B is smaller with record A embedded in it than the total size of record A plus record B with a pointer linking them. (That's one pointer bigger.) But tracing gets much faster when there are no pointers to trace.
You will also notice from this example that it's failing to embed the superclass (or whatever) that requires an additional record lookup. And probably a cache miss, too.
I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general, and more generally the Lisp model of memory as a directed graph of objects linked by pointers, because although inheritance reduces the number of cache misses in OO code, it doesn't reduce them enough.
I've written about this at greater length in http://canonical.org/~kragen/memory-models/, but I never really finished that essay.
josephg•9mo ago
Yes it does! Inheritance itself is fine, but inheritance almost always means virtual functions - which can have a significant performance cost because of vtable lookups. Using virtual functions also prevents inlining - which can have a big performance cost in critical code.
> Nevertheless, they do still have an easier time when you embed record A as a field of record B (roughly what inheritance achieves in this case) rather than putting a pointer to record A in record B.
Huh? No - if you put A and B in separate allocations, you get worse performance. Both because of pointer chasing (which matters a great deal for performance). And also because you're putting more pressure on the allocator / garbage collector. The best way to combine A and B is via simple composition:
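struct C {
    A a;   // A embedded directly as the first field (same address as the C itself)
    B b;   // followed by B: one object, one allocation, no pointer between them
};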
In this case, there's a single allocation. (At least in languages with value types - like C, C++, C#, Rust, Swift, Zig, etc.) In C++, the bytes in memory are actually identical to the case where B inherits from A. But you don't get any class entanglement, or any of the bugs that come along with that.
> I think the reason many game engines are moving away from inheritance is that they're moving away from OO in general
Games are moving away from OO because C++ style OO is a fundamentally bad way to structure software. Even if it wasn't, struct-of-arrays usually performs better than arrays-of-structs because of how caching works. And modern ECS (entity component systems) can take good advantage of SoA style memory layouts.
The performance gap between CPU cache and memory speed has been steadily growing over the last few decades. This means, relatively speaking, pointers are getting slower and big arrays are getting faster on modern computers.
kragen•9mo ago
> inheritance almost always means virtual functions
Inheritance and "virtual functions" (dynamic method dispatch) are almost, but not completely, unrelated. You can easily have either one without the other. Golang and Lua have dynamic method dispatch without inheritance; C++ bends over backwards so that you can use all the inheritance you want without incurring any of the costs of dynamic method dispatch, as long as you don't declare anything virtual. This is actually a practical thing to do with modern C++ with templates and type inference.
> No - if you put A and B in separate allocations, you get worse performance
Yes, that's what I was saying.
> you're putting more pressure on the allocator / garbage collector
Yes, I explained how that happens in greater detail in the comment you were replying to.
With your struct C, it's somewhat difficult to solve the problem catern was saying Simula invented inheritance to solve; if A is "list node" and B is "truck", when you navigate to a list node p of type A*, to get the truck, you have to do something like &((struct C *)p)->b, relying on the fact that the struct's first field address is the same as the struct's address and on the fact that the A is the first field. While this is certainly a workable thing to do, I don't think we can recommend it without reservation on the basis that "you don't get any class entanglement, or any of the bugs"! It's very error-prone.
> Games are moving away from OO because C++ style OO
There are a lot of things to criticize about C++, but I think one of its worst effects is that it has tricked people into thinking that C++ is OO. "C++ style OO" is a contradiction in terms. I mean, it's possible to do OO in C++, but the language fights you viciously every step of the way; the moment you make a concession to C++ style, OO collapses.
i_c_b•9mo ago
In C++, inheritance of data is efficient because the memory layout of base class members stays the same in different derived classes, so fields don't cost any more to access.
And construction is (relatively fast, compared to alternatives) because setting a single vtable pointer is faster than filling in a bunch of variable fields.
And non-virtual functions were fast because, again, static memory layouts and access and inlining.
Virtual functions were a bit slower, but ultimately that just raised the larger question of when and where a codebase was using function pointers more broadly - virtual functions were just one way of corralling that issue.
And the fact that there were idiomatic ways to use classes in C++ without dynamically allocating memory was crucial to selling game developers on the idea, too.
So at least from my time when this was happening, the general sense was that, of all the ways OO could be implemented, C++ style OO seemed to be by far the most performant, for the concerns of game developers in the late 90's / early 2000's.
I've been out of the industry for a while, so I haven't followed the subsequent conversations too closely. But I do think, even when I was there, the actual reality of OO class hierarchies was starting to rear its ugly head. Giant base classes are indeed drastically bad for caches, for example, because they do tend to produce giant, bloated data structures. And deep class hierarchies turn out to be highly sub-optimal, in a lot of cases, for information hiding and evolving code bases (especially for game code, which was one of my specialties). As a practical matter, as you evolve code, you don't get the benefits of information hiding that were advertised on the tin (hence the current boosting of composition over inheritance). I think you can find better, smarter discussions about those issues in this thread, so I won't cover them.
But that was a snapshot of those early experiences - the specific ways C++ implemented inheritance for performance reasons were definitely, originally, much of the draw to game programmers.