https://news.ycombinator.com/item?id=44596554 [video] (37 comments)
The currently posted link is an article by Casey Muratori, with supplementary material on topics to explore further:
- Early History of Smalltalk
- History of C++
- Development of the Simula Languages
- Origins of the APT Language for Automatically Programmed Tools
So just ignore it.
This is a mistake because it puts the broad-scale modularization boundaries of a system in the wrong places and makes the system brittle and inflexible. A better approach is one where large-scale system boundaries fall along computational capability lines, as exemplified by modern Entity Component Systems. Class hierarchies that rigidly encode domain categorizations don't make for flexible systems.
Some of the earliest writers on object encapsulation, e.g. Tony Hoare, Doug Ross, understood this, but later language creators and promoters missed some of the subtleties of their writings and left us with a poor version of object-oriented programming as the accepted default.
> And another thing is if you look at the other branch,
> the branch that I'm not really covering very much
> in this talk, because again,
> we don't program in Smalltalk these days, right?
> The closest thing you would get
> is maybe something like Objective-C.
> If there's some people out there using Objective-C,
> you know, like Apple was using that for a little while,
> so Objective-C kind of came
> from a Smalltalk background as well.
Objective-C is basically Smalltalk retrofitted onto C, even more than C++ was Simula retrofitted onto C (before C++ gained template metaprogramming and more modern paradigms), so it makes sense that Muratori doesn't go much into it, given that he doesn't discuss Smalltalk much.
If we discount NeXT's time using it, Apple's only been using Objective-C for 28 years, just a little while. It also (barely) preceded C++.
Object oriented programming.
- Encapsulation / interfaces is a good idea, a continuation of the earlier ideas of structured programming.
- Mutable state strewn uncontrollably everywhere is a bad idea, even in the single-threaded case.
- Inheritance-based polymorphism is painful, both in the multiple (C++) and single (Java) inheritance cases. Composable interfaces / traits / typeclasses without overriding methods are logically simpler and much more useful.
I watched, over and over again, people writing code to interfaces, particularly due to Spring, and then none of those interfaces ever got a second implementation, and never were going to! It was almost a total waste of time, even for testing, though I suppose writing stubbed test classes that could pretend to return data from a queue or a database was somewhat useful. The thing is, there were easier ways to achieve that.
When OOP went mainstream it pretty much was entirely about "compile time hierarchy of encapsulation that matches the domain model" and nothing else. His opinion is the standard way of doing OOP is a bad match for lots of software problems but became the one-size-fits-all solution as a result of ignorance.
Also he claims that history is being rewritten to some extent to say this wasn't the case and there was never a heavy emphasis on doing things that way.
The talk traces that mistake to Simula, where the approach was appropriate because the language was intended to simulate real-world hierarchies; then to C++, where it started to be used inappropriately; then to Java, where it became universal practice to model every real-world relationship as a compile-time hierarchy.
I can read at several thousand words a minute. So I need the whole transcript in one shot.
Then I can read it in 10 or 15 minutes or so, and decide if it's worth watching a 2 hour plus video. The answer is almost always "no".
Here's a fixed version [2] - run this when the transcript is open and loaded.
[1] https://soitis.dev/control-panel-for-youtube
[2] https://gist.github.com/insin/cb938324866c511066bcabe230b6a6...
I also appreciate the shoutout to Looking Glass Studios and their game Thief: The Dark Project. I LOVED this game when it came out. Obviously the programmer inside me appreciates Thief for its overall design as well.
Around 2005 I had to make a graphical shop floor on the web, with all products on shelves, etc. Trying to do this in HTML4 across different browsers was a pain, and it was mostly written in JavaScript. Later in the project I started to get a feel for AJAX and I wanted to move all my JavaScript into a library (backend) so I could do more than just an HTML interface. I was thinking OpenGL and other methods, or even networking where multiple people could do things together. I started re-writing a VB.NET version of this JavaScript library and I could not get past the initial design.
Remember, I was technically still a junior developer. I was designing the project "the right way" with OOP: inheritance, overrides, etc. It "looked nice" when you viewed my OOP as a diagram. It starts off well, but as you go down the rabbit hole and add new features it starts to get bloated and messy.
In the end I thought about games I was writing in C and viewed what I was doing in a game-like way. I ended up creating each "item" with a unique Id (index) and a bunch of "features" that link to the "item" Id.
This way, when rendering, I just look for each "feature" update and re-render. It worked surprisingly well. Then, for each customer using this product, I could have their own XML file to store each "object", which was simply an "item" with "features".
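The "item + features" design described above might look something like this (a hypothetical Python reconstruction; the original was JavaScript/VB.NET, and all names here are invented for illustration):

```python
# Hypothetical reconstruction of the "item + features" design described above.
# Each item is just a unique id; features are records that point back at it.

items = {}     # item_id -> item metadata (e.g. name)
features = []  # feature records, each linked to an item by id
next_id = 0

def create_item(name):
    global next_id
    next_id += 1
    items[next_id] = {"name": name}
    return next_id

def add_feature(item_id, kind, data):
    features.append({"item": item_id, "kind": kind, "data": data, "dirty": True})

def render():
    # Re-render only the features that changed since the last pass.
    for f in features:
        if f["dirty"]:
            print(f"render {f['kind']} for item {f['item']}: {f['data']}")
            f["dirty"] = False

shelf = create_item("shelf-A")
add_feature(shelf, "position", {"x": 10, "y": 4})
add_feature(shelf, "stock", {"count": 12})
render()
```

The key point is that an item carries no behaviour of its own; everything interesting lives in the feature records keyed by the item's id.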
Obviously, this is all before I had heard of Entity Component Systems (ECS) or Data-oriented Design, and other names.
I still treasure that project to this very day as a pure success story. I was porting it over to C# before I decided to leave around 2009. If the pay had been better, I could still be working for them today (assuming I got pay increases as the years passed).
It is a reminder that OOP is not the be-all-end-all solution.
[0] https://fsharpforfunandprofit.com/ddd/
[1] Make invalid states unrepresentable: https://geeklaunch.io/blog/make-invalid-states-unrepresentab...
I'm not sure what tendencies you're referring to though. F# has been around for 20 years and has only gotten better over time.
I'd like to learn more about how to implement this.
https://adventures.michaelfbryan.com/posts/ecs-outside-of-ga...
Unity ECS (has a pretty good general introduction to ECS) https://docs.unity3d.com/Packages/com.unity.entities@1.3/man...
Unreal https://dev.epicgames.com/documentation/en-us/unreal-engine/...
https://gamedev.net/blogs/entry/2265481-oop-is-dead-long-liv...
As I posted on the video itself: https://news.ycombinator.com/item?id=44611240
1. You have entities. These are typically just unique identifiers.
2. You have components, which are the real "meat and potatoes" of things. These are the properties or traits of an entity; the specifics depend on your application. For a video game or physics simulator it might be velocity and position vectors.
3. Each entity is associated with 0 or more components.
4. These associations are dynamic.
5. You have systems which operate on some subset of entities based on some constraints. A simple constraint might be "all entities with position and velocity components". Objects lacking those would not be important to a physics system.
In effect, with ECS you create in-memory, hopefully efficient, relational databases of system state. The association with different components allows for dynamically giving entities properties. The systems determine the evolution of the state by changing components, associating entities with components, and disassociating entities from components.
The technical details on how to do this efficiently can get interesting.
Compared to more typical OO (exaggerated for effect), instead of constructing a class which has a bunch of properties (say, implementing some combination of interfaces) and manually mixing and matching like:
Wizard: Player
FlyingWizard: Wizard, Flying
FlameproofWizard: Wizard, Flameproof
FlyingFlameproofWizard: Wizard, Flameproof, Flying
Or creating a bunch of traits inside a god-object version of the Wizard or Player class to account for all conceivable traits (most of which are unused at any given time), you use the dynamic association of an entity with Wizard, Flying, and Flameproof components.

So your party enters the third floor of a wooden structure and your Wizard (a component associated with an entity) casts "Fly" and "Protection from Elements" on himself. These associate the entity with the Flying and Flameproof components (and potentially others). Now when a fireball is cast and the wizard is in the affected area, he'll be ignored (by virtue of being Flameproof) while everything around him catches fire, and when the wooden floor burns away the physics engine will leave him floating rather than falling like his poor non-flying, currently-on-fire Fighter compatriot.
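A minimal sketch of that wizard scenario as dynamic component association (illustrative Python, not any particular engine's API; all names are invented):

```python
# Minimal ECS sketch: entities are ids, components are per-type tables/sets,
# and "casting a spell" is just associating new components with an entity.

from itertools import count

_ids = count(1)
position = {}                 # entity id -> (x, y)
flying, flameproof, on_fire = set(), set(), set()

def spawn():
    return next(_ids)

wizard, fighter = spawn(), spawn()
position[wizard] = (3, 3)
position[fighter] = (3, 4)

# The wizard casts Fly and Protection from Elements on himself:
flying.add(wizard)
flameproof.add(wizard)

def fireball_system(center, radius=2):
    # Operates on all entities with a position; Flameproof entities are skipped.
    cx, cy = center
    for eid, (x, y) in position.items():
        if abs(x - cx) + abs(y - cy) <= radius and eid not in flameproof:
            on_fire.add(eid)

fireball_system((3, 3))
print(wizard in on_fire)   # → False (flameproof)
print(fighter in on_fire)  # → True
```

No class hierarchy is involved: the entity gains and loses capabilities at runtime purely by joining or leaving component sets.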
Most of them tend to even ignore books on the matter, like "Component Software: Beyond Object-Oriented Programming" [0], instead using some game studio's approach to ECS as the genesis of it all.
[0] - https://openlibrary.org/books/OL3564280M/Component_software
They are not talking about that ECS.
Given that both of us have the required CS background, it should be kind of entertaining.
- An entity is a 64 bit integer wrapped in a struct for typesafety (newtype pattern). This is a primary key.
- A component is a struct that implements the "component" trait. This trait is an implementation detail to support the infrastructure and is not meant to be implemented by the programmer (there is a derive macro). It turns the struct into a SoA variant, registers it into the world object (the "database") plus a bunch of other things. It is a table.
- A query is exactly what it sounds like. You do joins on the components and can filter them and such.
- A system is just code that does a query and does something with the result. It's basically a stored procedure.
It is a relational database.
EDIT: forgot to link the relevant docs: https://docs.rs/bevy/latest/bevy/ecs/component/trait.Compone.... It is really critical to note that a programmer is not expected to implement the methods in this trait. Programmers are only supposed to mark their structs with the derive macro that fills in the implementation. The trait is used purely at compile time (like a C++ template).
There's also flecs which doesn't rely on OOP-ish traits in its implementation: https://www.flecs.dev/flecs/
Either way, it doesn't matter if OOP is used in the implementation of an ECS, just as it doesn't matter if MySQL uses classes and objects to implement SQL.
When it comes to organising your code in an ECS-esque fashion, it is much closer to normalising a database except you are organising your structs instead of tables.
With databases, you create tables. You would have an Entity table that stores a unique Id, and tables that represent each Component, each of which would have an EntityId key, etc.
Also, each table is representative of a basic array. It is also about knowing a good design for memory allocation up front, rather than 'new' and 'delete' in typical OOP fashion. Maybe you can reason about the memory needed at startup. Of course, this depends on the type of game... or business application.
An 'Entity' is useless on its own. It can have many different behaviours or traits. For example, in games, you can have an entity that: has physics, is collidable, is visible, etc.
Each of these can be treated as a 'Component' holding data relevant to it.
Then you have a 'System' which can be a collection of functions to initialise the system, shutdown the system, update the system, or fetch the component record for that entity, etc.. all of which manipulates the data inside the Component.
Some Components may even require data from other Components, which you would communicate by calling the system methods.
You can create high-level functions for creating each Entity. Of course, this is a very simplified take:
var entity1 = create_player(1)
var boss1 = create_boss1()
function create_player(int player_no) {
var eid = create_entity();
physics_add(eid); // add to physics system
collision_add(eid); // add to collision system
health_add(eid, 1.0); // add health/damage set to 1.0
input_add(eid, player_no); // input setup - more than 1 player?
camera_set(eid, player_no); // camera setup - support split screen?
return eid;
}
function create_boss1() {
var eid = create_entity();
physics_add(eid);
health_add(eid, 4.0); // 4x more than player
collision_add(eid);
ai_add(eid, speed: 0.6, intelligence: 0.6); // generic AI for all
return eid;
}

During his research into the history of OOP he discovered that ECS existed as early as 1963, but was largely forgotten and not brought over as a software design concept or methodology when OOP was making its way into new languages and being taught to future programmers.
There's lots of reasons for why this happened, and his long talk is going over the history and the key people and coming up with an explanatory narrative.
Heck I’ve even done ECS in Rails for exactly this reason.
I never accepted the Java/C++ bastardisation of OOP and still think that Erlang is the most OO language, since encapsulation and message passing are so natural in it.
Even Smalltalk, in post-Smalltalk-80 implementations, eventually added traits alongside its single-inheritance model.
If you want to go down the rabbit hole, let's start with the first question: how are ECS systems implemented in any random C++ or Rust game code?
What do you mean? ECS is simply a game programming pattern to implement an entity system.
> If you want to go down the rabbit hole, let's start with the first question: how are ECS systems implemented in any random C++ or Rust game code?
Conversely, how would you implement it in a procedural language like C or Pascal? ECS is just a switch in emphasis from an array-of-structures (entities) paradigm to a structure-of-arrays (systems) paradigm. I fail to see what the object-oriented paradigm has to do with any of it.
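The AoS-to-SoA shift that comment describes can be sketched like this (illustrative Python; the field names are invented):

```python
# "Array of structures" groups all of one entity's data together;
# "structure of arrays" groups one field's data across all entities,
# which is what a per-system loop actually touches.

# Array of structures: one record per entity.
entities_aos = [
    {"x": 0.0, "vx": 1.0, "hp": 10},
    {"x": 5.0, "vx": -2.0, "hp": 7},
]

# Structure of arrays: one parallel array per field.
xs  = [0.0, 5.0]
vxs = [1.0, -2.0]
hps = [10, 7]

def physics_step_soa(dt):
    # The physics system only touches position/velocity arrays;
    # hit points are never loaded, which is cache-friendly at scale.
    for i in range(len(xs)):
        xs[i] += vxs[i] * dt

physics_step_soa(1.0)
print(xs)  # → [1.0, 3.0]
```

Either layout holds the same data; the SoA form just matches the access pattern of a system that iterates one component across many entities.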
The array-of-structures to structure-of-arrays mapping isn't ECS; that is Data-Oriented Design.
https://en.wikipedia.org/wiki/Entity_component_system
> Entity–component–system (ECS) is a software architectural pattern mostly used in video game development for the representation of game world objects. An ECS comprises entities composed from components of data, with systems which operate on the components.
> Entity: An entity represents a general-purpose object. In a game engine context, for example, every coarse game object is represented as an entity. Usually, it only consists of a unique id. Implementations typically use a plain integer for this
> Common ECS approaches are highly compatible with, and are often combined with, data-oriented design techniques. Data for all instances of a component are contiguously stored together in physical memory, enabling efficient memory access for systems which operate over many entities.
> History > In 1998, Thief: The Dark Project pioneered an ECS.
So, according to wikipedia:
- An entity is typically just a numeric unique id
- Components are typically physically contiguous (i.e an array)
- Their history began with Thief pioneering them in 1998
First you praise the talk. That sounds AI-like. LLMs are trained in the annoying American way of starting with something positive, even if it's irrelevant or isn't meant. You're probably somewhat conditioned to do the same. But in this forum, those comments are not encouraged. The upvote button should be enough to express that.
The rest reads like a summary, also an LLM feature, but nobody asked for a summary, and you're not announcing that you want to give one. It sets the reader up for some conclusion or evaluation, which never comes.
There's no personal thought beyond the praise: no anecdote, criticism, or supplementary information. If all you wanted was to recommend the talk to people, I think an effective way to do so would be something like
> I liked the talk. There was so much I didn't know about the history of OOP, or about how ECS (Entity Component System) could have been a competitor. Recommended, though it's a bit long.
Not that that will get you a lot of upvotes (which shouldn't be a goal anyway), but it expresses someone's reflection on the link, which others can understand as support for their decision to check the link, or not.
AI has fucked up our perception. That's not your fault, of course, but you can try to skirt around it. But not everybody has to write every opinion everywhere. It's fine if your communication doesn't always fall on fertile ground. You don't have to apologize or blame the spectrum. Some people have better ways with words than others.
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
I find it funny that, even after he goes into explicit detail tracing OOP back to the original sources, people either didn't watch it or are just blowing past his research to move the goalposts and claim that's not actually what OOP is, because they don't want to admit the industry is obsessed with a mistake, just like waterfall, and are too Stockholm-syndromed to realize it.
We take out 'dog-is-an-animal' inheritance.
We take out object-based delegation of responsibility (an object shall know how to draw itself). A Painter will instead draw many fat structs.
Code reuse? Per the talk, the guy who stumbled onto this was really looking for a List<> use-case (not a special kind of Bus/LinkedList hybrid). He was after parametric polymorphism, not inheritance.
It's just data + relevant functions. Which is ok.
That's all there is, really.
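A trivial sketch of that "data + relevant functions" style (plain Python; the shape type and functions are invented for illustration):

```python
# "Data + relevant functions": a plain struct plus free functions that act
# on it, instead of a class hierarchy with delegated responsibilities.

import math
from dataclasses import dataclass

@dataclass
class Circle:
    x: float
    y: float
    r: float

def area(c: Circle) -> float:
    return math.pi * c.r * c.r

def translate(c: Circle, dx: float, dy: float) -> Circle:
    # Returns a new struct rather than mutating shared state.
    return Circle(c.x + dx, c.y + dy, c.r)

c = translate(Circle(0.0, 0.0, 2.0), 1.0, 1.0)
print(area(c))  # → about 12.566
```

In this style a "Painter" would take many such structs as arguments and draw them, rather than each shape knowing how to draw itself.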
https://gist.github.com/reborg/dc8b0c96c397a56668905e2767fd6...
What is the purpose of the customers / products app?
A lot of so-called programmers and systems "engineers" act like religious zealots. Even challenging the ideas of OOP is blasphemy to them, even though there are many legitimate reasons to do so.
It's worth pointing out that actual linguistics is not unlike maths, e.g.: https://en.wikipedia.org/wiki/Formal_grammar, https://en.wikipedia.org/wiki/X-bar_theory, etc. There's an awful lot of vibes-and-feels nonsense about language in the popular press, which one ought not confuse with linguistics proper.
In fact, as Casey mentions in this talk, a lot of the earliest ideas in software architecture came from trying to parse languages - both human and computer.
But when we think about solving problems over domains and relations (e.g think about realizing that the problem of parsing requires traversing a tree like structure) we are dealing with mathematical-logical structures, not linguistic concepts. This is what I meant. I've seen a lot of OOP code that tried desperately to make code reflect the fuzzier relationships between linguistic concepts, rather than the precise ones of logical structure (a lot of this is a consequence of over-encapsulation and excessive information hiding)
Formal grammar is not merely a notation for expressing linguistic rules that incidentally makes them appear akin to maths, it's a theory about what language is - a phenomenon rooted in, and best modelled by, formal logic. The reason formal grammar looks familiar to programmers is because computer science has borrowed tons of concepts from linguistics, concepts originally aimed at modelling human language.
I agree with you on OOP. In a way, popular ideas about language are not unlike OOP: naive models with inherent contradictions that inevitably devolve into an incoherent mess.
Ultimately, a person with a naive conception of how words, objects, and ideas interact is going to make a mess of trying to systematise virtually anything.
Rather ironic given that in this very comment section I'm largely seeing that behavior associated with people appealing to Casey as an authority as an excuse not to engage with intelligently written counterpoints.
I certainly won't defend the historic OOP hype, but a tool is not limited by how the majority happen to use it at any given time. Railing against a tool or concept itself is the behavior of a zealot; it's railing against a particular use of it that might have merit.
Similarly I'd like to suggest that there exist situations where waterfall is the obviously correct choice. Yet even then someone could still potentially manage to screw it up.
I agree that once the field matures what we will really (hopefully) finally see are people adopting different modes of organization based on the second order systems properties they support, rather than ideology or personal experience—but we aren't there yet.
I think there are certain cases in which using an object oriented approach makes sense, but man, it has led to so many bloated, needlessly complicated systems in which the majority of the work is dealing with inanities imposed by OOP discipline and structure rather than dealing with the actual problem the system is supposed to solve.
To abandon OOP entirely would be a symptom of the same hysteria that caused its proselytization.
The IT industry has bipolar disorder.
> I prefer to write code in a verb-oriented way not an object-oriented way. ... It also has to do with what type of system you're making: whether people are going to be adding types to the system more frequently or whether they're going to be adding actions. I tend to find that people add actions more frequently.
It suddenly clicked for me why some people/languages prefer doThing(X, Y) vs. X.doThing(Y).
Ada, Julia, Dylan, Common Lisp for example.
Yet another example why people shouldn't put programming paradigms all into the same basket.
(also Common Lisp is hardly a poster child of OOP, at best you can say it's multi-paradigm like Scala)
Since when do OOP languages have to be single paradigm?
By that point of view, people should stop complaining about C++ OOP then.
What I really meant to say is that it's Lisp at its core, i.e. if one wants to place it squarely in one single paradigm, IMO that paradigm should be "functional".
I was just surprised to see it listed as an example of OOP language, because it's not the most representative one at that.
It provides an OOP programming model that no mainstream language, other than Common Lisp, fully supports.
https://en.m.wikipedia.org/wiki/The_Art_of_the_Metaobject_Pr...
Dylan, Julia and Clojure only have subsets of it.
"Visitor Pattern Versus Multimethods"
Still it isn't CLOS.
https://shawnhargreaves.com/blog/visitor-and-multiple-dispat...
It's when you start writing ThingDoer.doThing(X, Y) that you begin questioning things.
Unified function call: The notational distinction between x.f(y) and f(x,y) comes from the flawed OO notion that there always is a single most important object for an operation. I made a mistake adopting that. It was a shallow understanding at the time (but extremely fashionable). Even then, I pointed to sqrt(2) and x+y as examples of problems caused by that view.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p19...
But he still has a preference for f(x,y). x.f(y) gives you chaining, but it also gets rid of multiple dispatch / multimethods, which are more natural with f(x,y). Bjarne has been trying to add this back into C++ for quite some time now.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n44...
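To see concretely what f(x,y)-style dispatch enables, here is a hand-rolled multimethod sketch (plain Python, no library; all names are invented). Dispatch keys on the runtime types of both arguments, which x.f(y)-style single dispatch can't express directly:

```python
# A hand-rolled multimethod: pick the implementation from the runtime
# types of *both* arguments, not just the receiver.

_collide_table = {}

def collide_case(a_type, b_type):
    def register(fn):
        _collide_table[(a_type, b_type)] = fn
        return fn
    return register

def collide(a, b):
    fn = _collide_table.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no collide for {type(a).__name__}, {type(b).__name__}")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@collide_case(Asteroid, Asteroid)
def _(a, b): return "bounce"

@collide_case(Asteroid, Ship)
def _(a, b): return "explode"

print(collide(Asteroid(), Ship()))      # → explode
print(collide(Asteroid(), Asteroid()))  # → bounce
```

With x.f(y) the language picks an implementation from the first argument only; emulating the table above is what the Visitor pattern is usually doing by hand.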
Ocaml example:
let increment x = x + 1
let square x = x * x
let to_string x = string_of_int x
let result = 5 |> increment |> square |> to_string
(* result will be "36" *)

https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
I recently read a quote, paraphrasing: orthodoxy is a poor man's substitute for moral superiority.
The use of GOTO is another example. Yes, you probably wouldn't want it in your codebase, but overzealousness against it has removed expressions like break statements or multiple return statements from some languages.
This wasn't true of original Python; however, since new-style classes became the default type system, everything is indeed an object.
So for the anti-OOP folks out there using languages like Python as an example,
Python 3.13.0 (tags/v3.13.0:60403a5, Oct 7 2024, 09:38:07) [MSC v.1941 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> x = 23
>>> type(x)
<class 'int'>
>>> dir(x)
['__abs__', '__add__', '__and__', '__bool__', '__ceil__', '__class__', '__delattr__', '__dir__', '__divmod__', '__doc__', '__eq__', '__float__', '__floor__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__getnewargs__', '__getstate__', '__gt__', '__hash__', '__index__', '__init__', '__init_subclass__', '__int__', '__invert__', '__le__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__round__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__', 'as_integer_ratio', 'bit_count', 'bit_length', 'conjugate', 'denominator', 'from_bytes', 'imag', 'is_integer', 'numerator', 'real', 'to_bytes']
>>>

That apparently non-OOP code requires bytecodes and runtime capabilities that only exist with OOP semantics on the VM.
It is like arguing one is not driving a steam engine only because they now put gas instead of wood.
Trying to argue that you are, in fact driving a steam engine, requires one to assume a level of abstraction and definition, in order to set an arena in which a discussion can occur.
What the talk is about is compile-time (and maybe runtime, in the case of Python) hierarchies being structured as a mapping of real-world objects. This is how I was taught OOP and this is what people recognize as "OOP".
>So for the anti-OOP folks out there using languages like Python as an example,
Just because a language associates data with functions, does not mean that every program hierarchy has to map onto a real world relationship.
Why are you even commenting on this with your nonsense? Do you really think that if someone is complaining about OOP they are complaining that data types store functions for operating on that data? Has literally anyone ever complained about that?
It actually does matter a whole lot what developers think. It matters far more than any "CS definition", not that anything here is about computer science.
>What matters are language implementations
No, they do not matter at all. They are totally irrelevant to the topic. In python you can construct your hierarchies to match real world hierarchies. You also can not do that. It is totally irrelevant how the language is implemented.
That's unfortunate.
"The simplistic approach is to say that object-oriented development is a process requiring no transformations, beginning with the construction of an object model and progressing seamlessly into object-oriented code. …
While superficially appealing, this approach is seriously flawed. It should be clear to anyone that models of the world are completely different from models of software. The world does not consist of objects sending each other messages, and we would have to be seriously mesmerised by object jargon to believe that it does. …"
"Designing Object Systems", Steve Cook & John Daniels, 1994, page 6
>While superficially appealing, this approach is seriously flawed. It should be clear to anyone that models of the world are completely different from models of software.
A great line. I just wish it weren't outshone by all the lectures, tutorials, and books which explain OOP by saying "a Labrador is a dog is an animal" and then tell you this abstraction is exactly what you should be doing.
OOP revisionism is always very surprising, because the only people aware of it are OOP revisionists, the vast majority of developers are completely unaware of it.
The source doesn't say that.
I listened to the presenter tell us "… look at what they were actually talking about when they were talking about Smalltalk in the times before they had chance to reflect and say that it [inheritance] didn't work". 14:10
The source doesn't say that.
I listened to the presenter tell us "… literally representing in the hierarchy what our domain model is. … They have a Path class and from that Path class they have different shapes derived from it." 14:46
Chapter 20 "Smalltalk-80: The Language and its Implementation" describes how Graphics was implemented in the Smalltalk-80 system —
"Class Path is the basic superclass of the graphic display objects that represent trajectories. Instances of Path refer to an OrderedCollection and to a Form. The elements of the collection are Points. … LinearFit and Spline are defined as subclasses of Path. … Class Curve is a subclass of Path. It represents a hyperbola that is tangent to lines … Straight lines can be defined in terms of Paths. A Line is a Path specified by two points." page 400
As they say in the Preface — "Subclasses support the ability to factor the system in order to avoid repetitions of the same concepts in many different places. … subclassing as a means to inherit and to refine existing capability."
I listened to the rest of part one.
6:03 -- "And what he [Marc LeBlanc] said was for some reason OOP has gotten into this mindset of compile-time hierarchies that match the domain model. …"
6:29 -- "And what he [Marc LeBlanc] is saying is like why are we pushing this? Why is that the idea, right? …"
And Stroustrup's starting-point is abstract-data-types not a compile-time-hierarchy —
"Consider defining a type 'shape' for use in a graphics system. Assume for the moment that the system has to support circles, triangles, and squares. Assume also that you have some classes … You might define a shape like this … This is a mess."
And Stroustrup then says what he's pushing and why and when —
"The problem is that there is no distinction between the general properties of any shape … and the properties of a specific shape … The ability to express this distinction and take advantage of it defines object-oriented programming. …
The programming paradigm is: Decide which classes you want; provide a full set of operations for each class; make commonality explicit by using inheritance. …
Where there is no such commonality, data abstraction suffices."
I think his definition of OO is different to what we've got used to. Perhaps his definition needs a different name.
No. His definition is exactly what people are taught OOP is. It is what I was taught, it is what I have seen taught, it is what I see people mean when they say they are doing OOP.
> Perhaps his definition needs a different name.
No. Your definition needs a different name. Polymorphic functions are not OOP. If you give someone standard Julia code, a language entirely built around polymorphic functions, they would tell you that it is a lot of things, except nobody would call it OOP.
Importantly polymorphic functions work without class hierarchies. And calling anything without class hierarchies "OOP" is insane.
https://eli.thegreenplace.net/2016/the-expression-problem-an...
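The claim that polymorphic functions don't need a class hierarchy can be shown even in C++, via overloading and templates (a compile-time analogue of Julia's runtime multiple dispatch; the types and function names here are made up for illustration):

```cpp
// Two unrelated value types: no common base class, no hierarchy.
struct Celsius    { double v; };
struct Fahrenheit { double v; };

// A polymorphic function as a set of free-function overloads.
double to_kelvin(Celsius c)    { return c.v + 273.15; }
double to_kelvin(Fahrenheit f) { return (f.v - 32.0) * 5.0 / 9.0 + 273.15; }

// Generic code works with any type that has a to_kelvin overload,
// again with no inheritance relationship between the argument types.
template <typename T>
bool is_freezing(T t) { return to_kelvin(t) <= 273.15; }
```

Julia resolves the equivalent dispatch at runtime on all argument types, which is what makes the expression-problem discussion linked above interesting; the point here is only that the polymorphism involves no class tree.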
I've seen "OOP" used to mean different things. For example, sometimes it's said about a language, and sometimes it's unrelated to language features and simply about the "style" or design/architecture/organization of a codebase (some people say certain C codebases are "object oriented", usually because they use vtables or function pointers, and/or because they use opaque handles).
Even when talking about "OOP as a programming language descriptor", I've seen it used to mean different things. For example, a lot of people say Rust is not object-oriented. But Rust lets you define data types, lets you define methods on those types, and has a language feature to create a pointer+vtable construct based on what can reasonably be called an interface (a "trait" in Rust). The "only" things it's lacking are either ergonomics or inheritance, or possibly a culture of OOP. So one definition of "OOP" could be "a programming language that has inheritance as a language feature". But some people disagree with that, even when using it as a descriptor of programming languages. They might think it's actually about message passing, or encapsulation, or a combination, etc etc.
And when talking about "style"/design, it can also mean different things. In the talk this post is about, the speaker mentions "compile time hierarchies of encapsulation that match the domain model". I've seen teachers in university teach OOP as a way of modelling the "real world", and say that inheritance should be a semantic "is-a" relationship. I think that's the sort of thing the talk is about. But like I mentioned above, some people disagree and think an OOP codebase does not need to be a compile time hierarchy that represents the domain model, it can be used simply as a mechanism for polymorphism or as a way of code reuse.
Anyways, what I mean to say is that I don't think arguing about the specifics of what "OOP" means in the abstract is very useful, and since in this particular piece the author took the time to explicitly call out what they mean, we should probably stick to that.
I think OOP techniques made most sense in contexts where data was in memory of long-running processes - think of early versions of MS Office or such.
We've since changed to a computing environment in which everything not written to disk should be assumed ephemeral: UIs are web-based and may jump not just between threads or processes but between entire machines from one user action to the next. Processes should be assumed to be killed and restarted at any time, etc.
This means it makes a lot less sense today to keep complicated object graphs in memory - the real object graph has to be represented in persistent storage and the logic inside a process works more like a mathematical function, translating back and forth between the front-end representation (HTML, JSON, etc) and the storage representation (flat files, databases, etc). The "business logic" is just a sub-clause in that function.
For that kind of environment, it's obvious why functional or C-style imperative programming would be a better fit. It makes no sense to instantiate a complicated object graph from your input, traverse it once, then destroy it again - and all that again and again for every single user interaction.
But that doesn't mean that the paradigm suddenly has always been bad. It's just that the environment changed.
Also, there may be other contexts in which it still makes sense, such as high-level scripting or game programming.
I understand that OOP is a somewhat diluted term nowadays, meaning different things to different people and in different contexts/communities, but the author spent more than enough time clarifying in excruciating detail what he was talking about.
153 comments as of time of writing, let's see.
Ctrl-F counts — Java: 21, C++: 31, Python: 23, C#: 2.
And yet: Pascal: 1 (!), Delphi: 0, VCL: 0, Winforms: 0, Ruby: 2 (in one comment).
This is not a serious conversation about merits of OOP or lack thereof, just like Casey's presentation is not a serious analysis - just a man venting his personal grudges.
I get that, it's completely justified - Java has a culture of horrible overengineering and C++ is, well, C++, the object model is not even the worst part of that mess. But still, it feels like there is a lack of voices of people for whom the concept works well.
People can and will write horrible atrocities in any language with any methodologies; there is at least one widely used "modern C++" ECS implementation built with STL for example (which itself speaks volumes), and there is a vast universe of completely unreadable FP-style TypeScript code out there written by people far too consumed by what they can do to stop for a second and think if they should.
I don't know why Casey chose this particular hill to die on, and I honestly don't care, but we as a community should at least be curious if there are better ways to do our jobs. Sadly, common sense seems to have given way to dogma these days.
Unlike the mainstream OOP, ECS (Entity Component System) is a niche programming model, even Erlang/OTP is probably more OOP-like than ECS. Same with Agents.
But the Software Archeology part was amazing.
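Since ECS keeps coming up in this thread, here is a deliberately minimal sketch of the model (not the API of any particular library such as EnTT): entities are plain ids, components are plain data keyed by entity, and a "system" is a function that runs over every entity holding the components it needs. The boundaries follow capabilities (has `Position` + `Velocity`), not a domain taxonomy (Player, Monster, …).

```cpp
#include <unordered_map>

using Entity = int;

// Components are plain data, stored per capability rather than per class.
struct Position { double x, y; };
struct Velocity { double dx, dy; };

struct World {
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;
};

// Movement system: acts on the intersection of the two component sets.
// An entity participates by having the components, not by its "type".
void movement_system(World& w, double dt) {
    for (auto& [e, vel] : w.velocities) {
        auto it = w.positions.find(e);
        if (it != w.positions.end()) {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}
```

Real ECS implementations add dense component storage and fast joins, but this is the shape of the idea: adding a new behavior means adding a component and a system, with no hierarchy to rearrange.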
Spivak•6mo ago
Do you just mean that Python lets you write functions not as part of a class? Because yeah there's the public static void main meme but static functions attached to a class is basically equivalent to Python free functions being attached to a module object.
fc417fc802•6mo ago
I use both C++ and Python but I wouldn't describe any of what I write as "object oriented".
lisbbb•6mo ago
I guess I have a lot of other problems with Java--jar hell, of course, but also the total inability for corporations to update their junk to newer versions of Java because so many libraries and products were never kept up with and then you get security people breathing down your neck to update the JVM which was LITERALLY IMPOSSIBLE in at least two situations I became involved with. We even tried to take over 3rd party libraries and rewrite/update them and ended up at very expensive dead ends with those efforts. Then, to top it all off, being accused of lacking the skill and experience to fix the problem! Those a-holes had no idea what THEY were talking about. But in corporate America, making intelligent and well-documented arguments is nothing. That's when I finally decided I needed to just stop working on anything related to Java entirely. So after about 15 years of that crap, I said no more. But I'm the under-skilled one.
imtringued•6mo ago
If humanity's technological progress depended on impossibly rare events that never happen again, then humanity would miss the vast majority of them. It would be as if those events never existed in the first place.
Rochus•6mo ago
In case of Dahl/Nygaard it seems logical since their work focus was on simulation. Simula I was mostly a language suited to build discrete-event simulations. Simula 67, which introduced the main features we subsume under "Object-Orientation" today, was conceived as a general-purpose language, but still Dahl and Nygaard mostly used it for building simulations. It would be wrong to conclude that they recommended a class-domain correspondence for the general case.
> Bjarne Stroustrup is not just some random guy
Sure, but he was a Simula user himself for distributed systems simulation during his PhD research at Cambridge University. And he learned Simula during his undergraduate education at Aarhus, where he also took lectures with Nygaard (a simulation guy as well). So also here, not surprising that he used examples with class-domain correspondence. But there was also a slide in the talk where Stroustrup explicitly stated that there are other valid uses of OO than using it for modeling domains.
igouy•6mo ago
"Unfortunately, inheritance — though an incredibly powerful technique — has turned out to be very difficult for novices (and even professionals) to deal with."
When the presenter tells us — 13:45 "he was already saying he kind of soured on it" — that is not a fact, it's speculation. That speculation does not seem to be supported by what follows in "The Early History of Smalltalk".
One page later — "There were a variety of strong desires for a real inheritance mechanism from Adele and me, from Larry Tesler, who was working on desktop publishing, and from the grad students." page 83
And "A word about inheritance. … By the time Smalltalk-76 came along, Dan Ingalls had come up with a scheme that was Simula-like in its semantics but could be incrementally changed on the fly to be in accord with our goals of close interaction. I was not completely thrilled with it because it seemed that we needed a better theory about inheritance entirely (and still do). … But no comprehensive and clean multiple inheritance scheme appeared that was compelling enough to surmount Dan's original Simula-like design." page 84
igouy•6mo ago
"Consider defining a type 'shape' for use in a graphics system. Assume for the moment that the system has to support circles, triangles, and squares. Assume also that you have some classes … You might define a shape like this … This is a mess. …"
Then —
"The problem is that there is no distinction between the general properties of any shape … and the properties of a specific shape … The ability to express this distinction and take advantage of it defines object-oriented programming. …
The programming paradigm is: Decide which classes you want; provide a full set of operations for each class; make commonality explicit by using inheritance. …
Where there is no such commonality, data abstraction suffices."