"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager."
1984 "Smalltalk-80 The Interactive Programming Environment" page 500
(To me) seems like build & deploy as dev process and message-passing & late-binding as implementation technique.
Separate concerns, I probably misunderstood.
On classes, I get it... tbf though I'm fine with prototype inheritance as well; there are positives and negatives to both approaches... not to mention, there are benefits to not really having either and just having objects you can interrogate, or even ones that are statically assigned at creation (structs).
What's funny about the method syntax, for me, is that more often than not I actually don't like mixing classes that hold data with classes that do things. I mean, I get the concepts, but I just don't generally like the approach. The only exception might be a controller with a handle to a model (state) and the view... but even then, the data itself (the model) is kind of separated as a reference, and I don't tend to attach too many variants of state to anything... I'm generally a fan of the single-state-tree approach (often used for games, and famously via Redux).
On information hiding... I'm generally not much of a fan of hiding members of an object that's used to hold data. I can see filtering when you're passing something to the edge of a system, like stripping a hashed password from a user object exposed via an API. But internally, I'd rather see immutability as a first-class concern than lock bits and pieces down and then expose member methods to mutate the object internally. Just my own take.
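As a rough sketch of that preference in Java (my own hypothetical names, assuming Java 16+ records): instead of private mutable fields with setters, an immutable record whose "setters" return new values:

```java
// Hypothetical sketch: immutability as the default, rather than
// private mutable fields exposed through mutating accessors.
record User(String name, String passwordHash) {
    // "Mutation" produces a new value; the original is untouched.
    User withName(String newName) {
        return new User(newName, passwordHash);
    }
}

public class ImmutableDemo {
    public static void main(String[] args) {
        User u1 = new User("alice", "hash1");
        User u2 = u1.withName("bob");
        System.out.println(u1.name()); // alice -- u1 is unchanged
        System.out.println(u2.name()); // bob
    }
}
```

Nothing here is hidden; it just can't be mutated out from under you.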
On Encapsulation, like above... I'm more on the side of the Data oriented design approach. To me this is where you have API surfaces and like above I tend to separate modules/classes that do things, from templates/models/classes that hold data.
I'm mixed on interfaces... they're definitely useful for plugin systems or when you have multiple distinct implementations of a thing... but after a couple of decades of C#, I'd say they're definitely overrated and overused.
No strong opinions on late binding or dynamic dispatch... other than that I do appreciate it at times in dynamic language environments (JS).
Inheritance and subtyping, imo, are, similar to interfaces, somewhat overrated... I just try to avoid them more than use them. There are exceptions; I'm actively using them in a project right now, but more often than not they just add undue complexity. With prototype-based inheritance, it's also possible to really slow down certain processes unintentionally.
Strong proponent of Message Passing approaches... it often simplifies a solution in terms of the surface you need to be aware of at a given point. Allows you to construct decision trees and pipelines of simpler functions.
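One way to picture the "pipelines of simpler functions" point (my own sketch, hypothetical names): each stage only knows its own input and output, and the caller only sees the composed surface.

```java
import java.util.function.Function;

public class PipelineDemo {
    // Each stage is a small function; the pipeline is their composition,
    // so the caller only needs to know one surface: String -> Integer.
    static final Function<String, Integer> PIPELINE =
        ((Function<String, String>) String::trim)
            .andThen(String::toLowerCase)
            .andThen(String::length);

    public static void main(String[] args) {
        System.out.println(PIPELINE.apply("  Hello World  ")); // prints 11
    }
}
```

Swapping, inserting, or removing a stage touches only the composition, not the stages themselves.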
Interesting overall... but still not a fan of some of the excesses in OOP usage in practice that I've had to deal with. I just prefer to break problems up slightly differently... sometimes blurring clear lines of separation to have a simpler whole, sometimes just drawing the lines differently because they make more sense to me to break up for a given use case.
This makes me wonder why most of us use Java at all. In your typical web app project, classes just feel like either:
1) Data structures. This I suspect is a result of ORMs not really being ORMs but actually "structural relational mappers".
- or -
2) Namespaces to dump functions. These are your run-of-the-mill "utils" classes or "service" classes, etc.
The more I work in Java, the more I feel friction between the language, its identity (OO beginning to incorporate functional ideas), and how people write in it.
Java was the first popular language to push static analysis for correctness. It was the "if it compiles, it runs" language of its day, which meant that managers could hire a couple of bad developers by mistake and it wouldn't destroy the entire team's productivity.
I'm not sure that position lasted for even 5 years. But it had a very unique and relevant value proposition at the time.
But Java has better marketing.
It got really useful by 1998.
By that time it supported parametric generics, multiple inheritance, named parameters, optionals instead of nulls everywhere, compile to machine code and quite a few extra things that I couldn't understand at the time.
This is so incredibly wrong it must be a troll.
You can declare static methods on interfaces in Java, which means you could call things like Users.create("Foobar") if you wanted to.
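For instance (a sketch with hypothetical names; static methods on interfaces have been legal since Java 8):

```java
// Hypothetical names: an interface acting as a factory namespace.
interface Users {
    static User create(String name) {
        return new User(name);
    }
}

record User(String name) {}

public class StaticInterfaceDemo {
    public static void main(String[] args) {
        User u = Users.create("Foobar");
        System.out.println(u.name()); // Foobar
    }
}
```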
OO conflates many different aspects that are often orthogonal, combined opportunistically rather than by sound rigor. Clearly most languages allow for functions outside classes. It's clearly the case today, especially with FP gaining momentum, but it was also clear back when Java and the JVM were created. I think Smalltalk was the only other language that had this limitation.
Like others in this thread, I can only recommend the big OOPS video: https://youtu.be/wo84LFzx5nI
Users.create(...) is the easy case. Try transfer_permissions(user1, user2, entity) while retaining transactionality and the ability of either user to cancel the transfer.
I'm not sure why having a global function by the same name would make this any easier or harder to implement. But it would pollute the global namespace with highly specific operations.
I mean it makes sense to group "like" things together. Whether that's in a module, a class, an interface, or a namespace. Having a huge number of things globally is just confusing pollution.
Yes, but going as a static method into a class that goes into a module is overkill vs just putting it in the module.
> Having a huge number of things globally is just confusing pollution.
I don't know what language you use, but modern programming languages allow specifying what to import when importing a module. You don't have to import everything from every module.
You don't need an ORM or an overgrown dependency injection framework to create a webapp in Java.
The separation of functions and records..
Java has had "data carriers" in the form of records for a while now. Immutable(ish), low boilerplate, convenient.
record User(String name){}
Records are great when doing more "data oriented programming". I dislike the term "anemic domain model"; it casts a value judgment which I think is unwarranted. There's a spectrum from anemic to obese (for want of a better word). There are tradeoffs all along that spectrum. Finding a sweet spot will depend heavily on what you're doing, why you're doing it, what your team is comfortable with, etc.
All the other stuff, like polymorphism, encapsulation, etc., I consider "addons."
It was a common pattern, back then. We’d pass around structs, and have a small library of functions that accessed/modified the data in these structs.
If you wanted, you could add function pointers to the structs. You could add “polymorphism,” by overwriting these pointers, but it was messy.
That said, inheritance can be very useful, in some cases, like improving DRY. I don’t like to take absolute stances, so much, these days.
It's not very intuitive used that way.
Do people honestly think other languages don't do whatever definition OOP has today? Encapsulation & polymorphism? Message-passing & late-binding?
Inheritance is the one thing that the other languages took a look at and said 'nope' to.
(Also, the OOP texts say to prefer composition anyway)
It is very difficult to tell whether this is a definitional problem - people believe any kind of encapsulation is OOP - or if some people can't wrap their heads around how to do encapsulation without message passing and the rest.
What they don’t get with protocols, though, is polymorphism. I think a lot of folks confuse them.
I wrote about an odd bug that I encountered, from protocol defaults: https://littlegreenviper.com/the-curious-case-of-the-protoco...
I think we can safely stick to how IEEE defines OOP: the combination of three main features: 1) encapsulation of data and code 2) inheritance and late binding 3) dynamic object generation (from https://ethw.org/Milestones:Object-Oriented_Programming,_196...).
The article assumes that C++, Java, and Smalltalk implement completely different subsets of OOP features, which is not true at all. Those languages, including Smalltalk (starting with Smalltalk-76), all implement the Simula 67 object model with classes, inheritance and virtual method dispatch. Simula 67 was the first object-oriented programming language (even if the term was only applied ~10 years later in a 1976 MIT publication for the first time, see https://news.ycombinator.com/item?id=36879311). Message passing (the feature the article claims is unique to Smalltalk) is mathematically isomorphic to virtual method dispatch; and also Smalltalk uses method dispatch tables, very similar to C++ and Java.
Because even though inheritance is often used in the wrong way, there are definitely cases where it is the clearest pattern, in my opinion.
Like graphics library things. E.g. everything on the screen is a DisplayObject. Simple TextFields and Images inherit directly from DisplayObject. Layout containers inherit from DisplayObjectContainer, which inherits from DisplayObject.
Inheritance here makes a lot of sense to me, and I don't see how it could be expressed in a different way without losing that clarity.
What value does the inheritance provide here?
Can't you just use a flat interface per use case without inheritance? It would be simpler, with less mental overhead from keeping the hierarchy in mind.
Specifically, your graphics library sounds like it should be fine with an interface DisplayObject that you can then add default implementations to. (That's a form of composition.)
Every display object has an x, y, width, and height, for example. And there is basic validation for every object. Now, a validate method can be composited. But variables? Also, for validation, there is some base validation every object shares (called with super), and the specific validation (or rendering) is done down in the subclasses.
And even for simple things, you can composite an extra object, but then you cannot do a.x = b.x * 2 anymore; you'd have to do a.po.x = b.po.x * 2, etc.
In terms of what you're saying here, the extra verbosity is not really something that either bothers me or is impossible to work around in the context of Rust at least. The standard library in Rust has a trait called `Deref` that lets you automatically delegate method calls without needing to specify the target (which is more than sufficient unless you're trying to emulate multiple inheritance, and I consider not providing support for anything like that a feature rather than a shortcoming).
If I were extremely bothered by the need to do `a.po.x` in the example you give, I'd be able to write code like this:
struct Point {
x: i32,
y: i32,
}
struct ThingWithPoint {
po: Point,
}
impl Deref for ThingWithPoint {
type Target = Point;
fn deref(&self) -> &Self::Target {
&self.po
}
}
fn something(a: &mut ThingWithPoint, b: ThingWithPoint) {
a.x = b.x * 2;
}
Does implementing `Deref` require a bit more code than saying something like `ThingWithPoint: Point` as part of the type definition? Yes (although arguably that has as much to do with how Rust defines methods in `impl` blocks outside of the type definition, so defining a method that isn't part of a trait would still be slightly more verbose, and it's not really something that I particularly have an issue with). Do I find that I'm unhappy with needing to be explicit about this sort of thing rather than having the language provide an extremely terse syntax for inheritance? Absolutely not; the extra syntactic convenience is just that, a convenience, and in practice I find it's just as likely to make things more confusing when overused as it is to make things easier to understand. More to the point, there's no reason that makes sense to me why that syntactic convenience needs to be coupled with a feature that actually changes the semantics of the type; as the comment I originally replied to stated, inheritance tries to address two very different concerns, and I feel pretty strongly that this ends up being more trouble than it's worth compared to having language features that address them separately.

Intuitively¹, I feel like this is something that should be separated out into a BoundingBox object. Every component that needs a bounding box satisfies a small `HasBoundingBox { getBoundingBox(self) -> BoundingBox }` interface. Maybe there's a larger `Resizeable` interface which (given a type that satisfies `HasBoundingBox`) specifies an additional `setBoundingBox(self, bb)` method.
You don't end up with a tidy hierarchy this way, but I'm not sure you'd end up with a tidy hierarchy using inheritance, either. I feel like this sort of UI work leads toward diamond inheritance, mixins, or decorators, all of which complicate inheritance hierarchy. Flat, compositional design pushes you toward smaller interfaces and more explicit implementations, and I like that. The verbosity can be kept in check with good design, and with bad design, the failure mode leans towards more verbosity instead of more complexity.
For more complicated features, composition & interfaces can make things more verbose, but honestly I like that. Inheritance's most powerful feature is open recursion (defined in the linked article), and I find open recursion to be implicit and thorny. If you need that level of power, I'd rather the corresponding structure be explicit, with builders and closures and such.
[1]: Not saying this is correct, but as someone who prefers composition to inheritance, this is what feels natural to me.
Well, I am sure that all the graphics libraries I ever used had this inheritance model. (The graphics library I built, as well.)
The libraries I have seen that used a different model, I did not really like, and they were rather exotic than in wide use. But I am willing to take a look at better-designed, successful, inheritance-free ones to see how it can be done, if you happen to know one...
Neither Go nor Rust have inheritance, so any graphic library implemented in those languages will be inheritance-free; ditto anything in a functional language, for the most part. In general, these tend to be very declarative toolkits, being post-React, but they should illustrate the point. For something more widely used in industry, I know Imgui is a popular immediate-mode library.
Now, if you're, say, writing a high-performance game or rendering engine, then maybe you want to squeeze out another 10 frames per second (FPS) without committing resources to the overhead of that mixin-decorator-singleton-factory-facade-messenger design pattern, and just have some concrete, tight C or assembly loop at the beating heart of it all.
- manually write a bunch of forwarding methods and remember to keep them updated, or
- inheritance.
Manual forwarding also operates as a forcing function to write small interfaces and to keep different pieces of logic separated in different layers, both of which feel like good design to me. (Though I'm not saying I'd turn my nose up at a terser notation for method forwarding, haha.)
Compare: Ruby mixins or Go embedded struct fields.
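The manual-forwarding option, sketched in Java (hypothetical names, mirroring the `ThingWithPoint` example upthread): a wrapper that delegates explicitly rather than inheriting.

```java
record Point(int x, int y) {}

class ThingWithPoint {
    private final Point po;

    ThingWithPoint(Point po) { this.po = po; }

    // Manually written forwarding methods: tedious to maintain, but the
    // exposed surface stays small and explicit.
    int x() { return po.x(); }
    int y() { return po.y(); }
}

public class ForwardingDemo {
    public static void main(String[] args) {
        ThingWithPoint t = new ThingWithPoint(new Point(3, 4));
        System.out.println(t.x() * 2); // 6 -- no inheritance involved
    }
}
```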
class EncapsulatedCounter:
    def __init__(self, initial_value):
        _count = initial_value

        def increment():
            nonlocal _count
            _count += 1
            return _count

        self.increment = increment

counter = EncapsulatedCounter(100)
new_value = counter.increment()
print(f"New value is: {new_value}")

def make_counter(start=0):
    count = start

    def incr():
        nonlocal count
        count += 1
        return count

    return incr

Example:

>>> c = make_counter()
>>> c()
1
>>> c()
2

But it hides nothing:

>>> c.__closure__[0].cell_contents
2
>>> c.__closure__[0].cell_contents = -1
>>> c()
0

"private" in Python is cultural, not enforced. (You can access `self.__private` from outside too if you want.)

For example, Javascript was influenced by a few languages, one of which was a language called Self, a Smalltalk-like language where, instead of instantiating classes, you clone existing objects. It's a prototype-based language. So there weren't any classes for a long time in Javascript. Which is why there are these weird class-like module conventions where you create an object with functions inside that you expose. Later versions of ECMAScript add classes as syntactic sugar for that. And Typescript does the same. But it's all prototypes underneath.
Go has this notion of structs that implement interfaces implicitly via duck typing. So you don't have to explicitly implement the interface, but you can treat an object as if it implemented the interface simply if it fulfills the contract provided by that interface. Are they objects? Maybe, maybe not. No object identity. Unless you consider a pointer/memory reference an identifier.
Rust has traits and other constructs. So it definitely has a notion of encapsulation, polymorphism, etc., which are things associated with OOP. But things like classes are not part of Rust and inheritance is not a thing. It also does not have object identity because it emphasizes things like immutability and ownership.
Many modern languages have some notion of objects though. Many of the languages that have classes tend to discourage inheritance. E.g. Kotlin's classes are closed by default (you can't extend them). The narrow IEEE definition is at this point a bit dated and reflects the early thinking in the sixties and seventies on OOP. A lot has happened since then.
I don't think a lot of these discussions are that productive because they conflate a lot of concepts and dumb things down too much. A lot of it boils down to "I heard that inheritance is bad because 'insert reasons' and therefore language X sucks". That's maybe a bit harsh on some languages that are very widely used at this point.
Search for "Functional Programming in C++: How to improve your C++ programs using functional techniques".
Why do you associate polymorphism with OOP? It’s a pretty universal PL concept. Haskell’s polymorphism is one of the defining features of its type system.
From the link above:
"Instead of seeing a program as a monolithic structure, the code of a SIMULA program was organized in a number of classes and blocks. Classes could be dynamically instantiated at run-time, and such an instance was called an "object". An object was an autonomous entity, with its own data and computational capabilities organized in procedures (methods), and objects could cooperate by asking another object to perform a procedure (i.e., by a remote call to a procedure of another object)."
His rant about CS historians is also a fun subject
I've always been so curious what the broader technical ecosystem looks like here. Presumably there are still processes running on systems. But these processes have lots of objects in them? And the objects are using Mach message passing to converse with other processes elsewhere? Within an application, are objects communicating across Mach too?
There's so much high level rhetoric about. Such as this bit. But I'd love a real technical view at what was happening, what objects really were here. https://computerhistory.org/blog/the-deep-history-of-your-ap... https://news.ycombinator.com/item?id=42111938
This is a fun work. It feels like the brief outline for a Speaking for the Dead for OOP. Huge amount of things to lots of different people over time.
Seconding @rawgabbit's recommendation for Casey Muratori's The Big OOPs: Anatomy of a Thirty-five-year Mistake, which really is hunting back and back for the cosmogenesis of objects, and covers so much terrain. Objectogenesis? https://youtu.be/wo84LFzx5nI
This works very well both for concrete things like radio and for more abstract things like math or Marx's notion of private property. This is also the principle employed by religious and mystical parables or the book "A pattern language".
As beautifully shown in that paper, all you need is a primitive “message send” operation and a “lookup” message, pretty much everything else in OOP isn’t necessary or can be implemented at run-time.
I'm only somewhat joking. I actually find this view very useful. Codata is basically programming to interfaces, which we can think of as OO without confusing implementation approaches like inheritance. Codata is the dual to (algebraic) data, meaning we can convert one to the other. We can think of working with an abstract API, which we realise as codata or data depending on what best suits the project. More in the book I'm writing [1].
In general I agree with the author. There are a lot of concepts tangled up in OOP and discussion around the benefits of OOP are rarely productive.
This is sad to read because prototypes are conceptually easier to understand than classes. It’s unfortunate that most developers experience with them is JavaScript, because its implementation is extremely poor. I recommend trying Io which is very Self inspired as well as Lua.
But json wins out because it can be learned much more quickly.
Something similar could be said with OOP vs Functional Programming.
- Gilad Bracha
https://gbracha.blogspot.com/2022/06/the-prospect-of-executi...
> Structured Programming imposes discipline on direct transfer of control. Object Oriented Programming imposes discipline on indirect transfer of control. Functional programming imposes discipline upon assignment. Each of these paradigms took something away. None of them added any new capability. Each increased discipline and decreased capability.
The interface (or extensible class) enables safe indirect transfer of control.
There have been different terms and meanings for it, but we all know the "OOP" thrown about since the mid-to-late 90s is the Java way.
A typical Java book back then would have 900 pages, half of which explained OOP. While not focusing fully on Java, that does help transition the knowledge over to Delphi or C++ or, eventually, C#, etc.
Overall -- we all knew what "Must have good OOP skills" means on a job advert! Nobody was confused thinking "Oh.. I wonder which OOP they mean?"
I have a love/hate relationship with OOP. If I have to use a language that is OOP by default, then I use it reasonably. While the built-in classes will have their own inheritance, I tend to follow a basic rule of going no deeper than 2. Most of the time it is from an interface. I prefer composition over inheritance.
In C#, I use static classes a fair bit. In this case, classes are helpful to organise my methods. However, I could do this at a namespace level if I could just create simple functions -- not wrapped inside a class.
OOP has its place. I prefer to break down my work with interfaces. Being able to use to correct implementation is better than if/switch statements all over the place. However, this can be achieved in non OOP languages as well.
I guess my point is that OOP was shoved on us heavily back in the day. It was "shut up and follow the crowd". It still has its place in certain scenarios, like GUI interfaces.
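The "correct implementation instead of if/switch statements" point can be sketched like this (hypothetical names, my own example): the caller picks an implementation once, and no dispatch logic is scattered through the code.

```java
// Hypothetical example: polymorphic dispatch replacing a switch on a
// format flag.
interface Exporter {
    String export(String data);
}

class JsonExporter implements Exporter {
    public String export(String data) {
        return "{\"data\":\"" + data + "\"}";
    }
}

class CsvExporter implements Exporter {
    public String export(String data) {
        return "data\n" + data;
    }
}

public class DispatchDemo {
    // No if/switch on a format flag: the caller passes the implementation.
    static String run(Exporter e, String data) {
        return e.export(data);
    }

    public static void main(String[] args) {
        System.out.println(run(new JsonExporter(), "x"));
        System.out.println(run(new CsvExporter(), "x"));
    }
}
```

And as the comment says, the same shape works in non-OOP languages too (function values, traits, protocols).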
Yep
> Information hiding also encourages people to create small, self-contained objects that “know how to handle themselves,” which leads directly into the topic of encapsulation.
This is where it all goes wrong. No module is an island. There's always relationships between different objects/modules/actors in your system.
Who delivers a letter: Postman or Letter? Who changes a light globe, Handyman or LightGlobe?
Things rarely handle themselves - and if they do, it's probably just a pure function call - so why use Objects?
If you start bending over backwards to implement Letter.deliver(Postman p) (and then make it "general" by changing it to IPostman) you'll never stand up straight again. What if I have a List<Letter>, where does the deliver() code go now?
If you instead write Deliver(Postman p, Letter l), the opportunities to rewrite/refactor/batch just present themselves.
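A rough sketch of that free-function shape (static methods in Java; all names hypothetical), where the List<Letter> case is just an ordinary loop rather than a design question:

```java
import java.util.ArrayList;
import java.util.List;

record Postman(String name) {}
record Letter(String address) {}

public class DeliveryDemo {
    // Neither Postman nor Letter "owns" delivery; it's an operation over
    // both, so neither type needs to know about the other.
    static String deliver(Postman p, Letter l) {
        return p.name() + " delivers to " + l.address();
    }

    // Batching falls out naturally: just iterate.
    static List<String> deliverAll(Postman p, List<Letter> letters) {
        List<String> receipts = new ArrayList<>();
        for (Letter l : letters) {
            receipts.add(deliver(p, l));
        }
        return receipts;
    }

    public static void main(String[] args) {
        deliverAll(new Postman("Pat"),
                   List.of(new Letter("1 Main St"), new Letter("2 Oak Ave")))
            .forEach(System.out::println);
    }
}
```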
Yes, and for just cause. OOP was invented in Simula 67 and popularized by C++ in the 1980s. OOP solved a very real problem of allowing applications to logically scale in memory-constrained systems, by allowing logic to grow independently and yet retain access to memory already claimed by a parent structure. Amazing.
Now fast forward and you get languages like Java, Go, and JavaScript. These languages are garbage collected. Developers have absolutely no control over memory management in those languages. None at all. It really doesn't matter because the underlying runtime engines that power these languages are going to do whatever is in the best interest of execution speed completely irrespective of application or memory size. We just don't live in a world where the benefits offered by OOP exist. Technology has moved on.
The only reason for OOP today is code culture. It's a form of organizational vanity that continues to be taught because it's what people from prior generations were taught. Most new languages from the last 15 years have moved away from OOP because it carries a lot of overhead as decoration with no further utility.
All definitions of OOP in common use include some form of inheritance. That said I do OOP 0% of the time in my programming. Most developers I have worked with never do OOP unless the given language or employer forces it.
However, with hardware progress, performance is not the only critical criterion when systems grow in size, in variety of hardware, in internet-scale volumes, in the number of moving parts, and in the number of people working on them. Equally if not more important are: maintainability, expressivity (so fewer lines of code are written), and overall the ability to focus on essential complexity rather than the accidental complexity introduced by the language, framework, and platform. In the world of enterprise software, Java was welcomed with so much cheer that indeed a "code culture" started that grew to an unprecedented scale, internet scale really, on which OO rode as well.
However, not all control is lost, as you say. The JVM also runs more advanced languages, with a JIT that alleviates some of the performance loss due to the levels of indirection. GCs are increasingly effective and tunable. Also, off-heap data structures such as ring buffers exist to achieve performance comparable to C when needed. See Martin Thompson's video talks on mechanical sympathy, which he gave after working on high-frequency trading on the JVM, and check his later work on Aeron (https://aeron.io/). As usual, it's all about trade-offs.
Really, is this happening??? From the job listings I have seen, this is not so.
Bashing OO has been popular since I was in college 25 years ago, but it's also been part of nearly every job I've had except a few embedded systems (Fortran, C).
The problem is that the components are often connected to different interfaces/graphs. Components can never be fully separated due to debug, visualization and storage requirements.
In non-OOP systems the interfaces are closed or absent, so you get huge debug, visualization and storage functions that do everything, in addition to the other functionality. And these functions need to be updated for each different type of data. The complexity moves to a different part. But most importantly, any new type requires changes to many functions. This affects a team and well-tested code. If your product is used by different companies with different requirements (different data types), your functions become overly complex.