In contrast, pure model components tend to evolve slowly, which justifies investing in a comprehensive test suite that verifies things like data constraints, business logic, and persistence. If automated testing were seen as a priority, this would be a no-brainer for any serious app. However, testing tends to be underappreciated in app development. This goes some way to explaining why frameworks carelessly fold M, V, and C into the same component.
Do people really do this? That's mind-numbing.
If you wanted to implement MVC with a separate application data model you had to do work to set up a separate model, and keep it in sync with the UI. None of this class of old tools provided any built-in assistance for defining models separate from Widgets, except for some support for binding UI to database queries/results. Of course this was separate from the Smalltalk world, where there were frameworks for building up models out of pre-defined model "atoms" such as an observable number model that you could bind to various views.
And then, 10, 20 years after the fact, people will start attacking popular implementations that differ from the original using some "new canonical interpretation" that is either extremely recent, or an interpretation that is old but was lost in time.
This is especially common around Smalltalk and OOP for some reason. Smalltalk's OOP is nothing like what existed either before or after, but since Alan Kay invented the term, Smalltalk is weaponised against C++/Java-style OOP. Not that C++/Java OOP is the bee's knees, but at least its definition is teachable and copyable.
Design patterns suffer because in most explanations the context is completely missing. Patterns are totally useless outside very specific contexts. "Why the hell do I need a factory when I can use new?" Well, the whole point is that in some frameworks you don't want to use new Whatever, you dummy. If only this were more than a two-sentence blurb in the DDD book (and the original patterns book totally glosses over this, for almost all patterns).
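To make "the context" concrete, here is a hedged sketch of the situation the Factory pattern exists for: the call site must not hard-code new PostgresRepository(), because configuration (or the framework) decides which implementation is appropriate. All class names below are made up for illustration.

    interface Repository { String load(String id); }

    final class PostgresRepository implements Repository {
        public String load(String id) { return "from postgres: " + id; }
    }

    final class InMemoryRepository implements Repository {
        public String load(String id) { return "from memory: " + id; }
    }

    final class RepositoryFactory {
        // The decision lives here, not at every call site that would
        // otherwise have hard-coded `new Whatever()`.
        static Repository create(boolean testMode) {
            return testMode ? new InMemoryRepository() : new PostgresRepository();
        }
    }

    class FactoryDemo {
        public static void main(String[] args) {
            Repository repo = RepositoryFactory.create(true);
            System.out.println(repo.load("42")); // prints "from memory: 42"
        }
    }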
And monads became the comical case, because they are totally okay in Haskell, but once they get "explained" and migrated to other languages they become this convoluted, mostly useless abstraction at best, a footgun at worst (thinking of the Ruby one here).
Rubbish. In terms of OO language constructs, Smalltalk is almost entirely derived from Simula. Let’s not revise history.
That said, as someone fairly unfamiliar with Smalltalk, I'd like an example of what other parts of Smalltalk play well with its OOP sauce...
The "big leap" in Smalltalk was the idea that everything is an object and computation is message-passing, not just classes and instances. That’s not from Simula. Simula was more like an extension of Algol with OO bolted on. Smalltalk was a whole new conceptual model, which is not as simple to explain as Simula/C++/Java-style OOP.
But I stand by what I actually said: Smalltalk's OOP is indeed very novel, even for today, especially compared to C++/Java, but it's also very different from Simula, especially the early Smalltalk versions.
It's not without lineage (Ruby and IO) or peers (Erlang), but it's still an incredibly different flavor of OOP than Simula. This is not a slight, this is a compliment to Alan Kay. But to compare it to C++ is to miss the mark. C++ is from a different branch of OOP.
Now Alan makes it clear that the inspiration for Smalltalk OO came from Simula (and a bit from the Burroughs 220 and 5000, from Sketchpad etc.), but to say that Smalltalk is just that is a stretch at best.
The more direct line goes from Simula (Algol with classes) to C++ (C with classes).
There is a set of three short laws that define what a monad is. I'm not sure that really fits in with MVC, OOP, or design patterns.
You could say the key to getting MVC correct is understanding the Expression Problem and designing its solution into your programming language in the first place, so you don't need MVC; but if you do want to do it, it becomes actually neat and clean and modular.
So I built my own web app system (not a monolithic framework) to test out the opinion. And, I'd say it's working quite well---for me, at any rate.
Have a gander: https://github.com/adityaathalye/clojure-multiproject-exampl...
See also Polylith application architecture, a far more sophisticated and generalised form of what I'm doing in my system. https://polylith.gitbook.io/polylith/
---
The term "Expression Problem" was coined in context of statically typed languages, but the formulation has nothing to do with static typing per se. Its general form is polymorphic multiple dispatch (not Objects versus Functions, but Objects and Functions).
See: Philip Wadler's explanation (where he coins the term): https://homepages.inf.ed.ac.uk/wadler/papers/expression/expr...
> The Expression Problem is a new name for an old problem. The goal is
> to define a datatype by cases, where one can add new cases to the
> datatype and new functions over the datatype, without recompiling
> existing code, and while retaining static type safety (e.g., no
> casts).

i.e., he was trying to bring the solution from the dynamic/interpreted language space to the difficult case of statically typed languages.
---
Anyway, I picture it as an "X/Y" problem. Something like this:
https://www.evalapply.org/posts/clojure-web-app-from-scratch...
7.2. Solve The Expression Problem
Playtime:
- What if we frame everything in terms of the Expression Problem?
- Add a new Y, extend all Xs to it? Without cooperation of existing Xs?
- Add new X, extend all Ys to it? Without cooperation of existing Ys?
| X * Y | y1 | y2 | y3 | ... |
|-------+----+----+----+-----|
| x1 | | | | |
| x2 | | | | |
| x3 | | | | |
| ... | | | | |
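To make the grid concrete: one hedged Java sketch of the two axes, with purely illustrative names. One operation is baked into the type hierarchy, so adding a new X (type) is cheap; adding a new Y (operation) means either touching every X or maintaining an external dispatch that has to know them all.

    // A hedged sketch of the X/Y grid in Java terms (Java 21 for the
    // pattern-matching switch): X = shape variants, Y = operations.
    interface Shape { double area(); }            // Y1 = area, baked into the interface

    record Circle(double r) implements Shape {    // adding a new X is easy:
        public double area() { return Math.PI * r * r; }
    }
    record Square(double s) implements Shape {
        public double area() { return s * s; }
    }

    // ...but adding a new Y (say, perimeter) means either touching every
    // existing X, or keeping an external dispatch that must know all of them:
    final class Perimeter {
        static double of(Shape shape) {
            return switch (shape) {
                case Circle c -> 2 * Math.PI * c.r();
                case Square s -> 4 * s.s();
                default -> throw new IllegalArgumentException("unknown shape");
            };
        }
    }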
It's hard to define how a pattern can be useful, because they're patterns, not recipes or snippets. They're supposed to fit into the reader's solution, not the author's examples. You're supposed to have the problem before reaching for a solution, and patterns are not solutions; they're models of solutions, each with their own tradeoffs, costs and advantages.
Every `new` can be a factory or builder, but usually `new` is the right thing
Of course I've also seen heaps of singletons that exist for no reason and could just have been a static class, sometimes because of cargo cult, sometimes because "what if we want to make it configurable later?"
Such a waste of energy
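For the record, the shape being complained about, next to what it could have been, looks something like this (a hedged sketch; the names are made up):

    // The needless singleton: nothing is ever configured or swapped.
    final class ClockSingleton {
        private static final ClockSingleton INSTANCE = new ClockSingleton();
        private ClockSingleton() {}
        static ClockSingleton getInstance() { return INSTANCE; }
        long now() { return System.currentTimeMillis(); }
    }

    // ...versus the static class it could have been, with less ceremony.
    final class Clock {
        private Clock() {}
        static long now() { return System.currentTimeMillis(); }
    }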
In a greater sense, in our profession we tend to learn about new hammers by forcing them into the next (work or personal) project that vaguely resembles a nail, and I think that's largely OK if the alternative is stagnating.
This is what happened with REST too, and it frustrates me more than it probably should.
The original pattern is such a good idea and not even remotely abstract. It's a well-defined architectural pattern for a well-defined problem, yet people still managed to bastardize it to the point that the term REST barely means anything today.
In most discussions REST has come to mean “cute URLs” thanks to Rails.
I once got a new hire from Uber and for months on end his complaint was that “the services are too big”.
C# accidentally solved this problem with extension methods: these little helper utils at least get grouped by type rather than piled into one humongous file. Or maybe that was part of the design team's intention behind them all along.
And because they're static you can easily see when services or state are getting passed into a method, clearly showing when it should in fact be some sort of service or a new type.
Even in architectures that start as distributed, I’ve seen the “involuntary monolith” arising.
Way too common, unfortunately.
Sun RPC was microservices.
But acknowledging that they are a decades-old concept isn't cool, and doesn't sell conference tickets, books, and consulting training.
The 2007 book RESTful Web Services was wildly influential in popularizing the standard by clarifying and presenting a set of further guidelines or constraints that it called Resource-Oriented Architecture.
Personally, I think "API" is an unclear term for that kind of structure. The only actual interfacing is the HTTP protocol between the server and web browser. But the browser traditionally only acts as a proxy for the user, who is the one being served access to the resources.
Bingo.
Like Monads, it's mostly later interpretations missing the mark.
Fielding’s dissertation is fine in itself. The web itself, AtomPub, OData, among others, all follow REST and HATEOAS.
Same for Monads: the laws are fine and the implementation in Haskell is fine, but meme tutorials and later implementations miss the mark.
The original pattern is extremely abstract and a bad idea. There has been precisely one successful implementation of the original REST "pattern", the web, and only because the pattern was retrofitted onto it; most of the things in REST-as-originally-defined are bad ideas, as any apples-to-apples comparison will show.
I get oppositely frustrated because "REST" was adopted as a rallying cry for one or two good ideas (fitting your protocol to the GET/POST and 2xx/4xx/5xx distinctions from HTTP instead of treating it as a completely opaque transport layer; not wrapping everything in oodles of XML) and the term brought along a lot of bad ideas as baggage. But the meaning of the term shifted towards doing the things that are good because the original meaning was bad.
It's like everyone totally forgot that patterns are mainly for understanding existing systems, like the framework you use: hey, this looks like a factory, let's use the one from the framework we build stuff with instead of implementing our own.
Besides, of course, every developer wants to build a framework so others adhere to what he built, not the other way around ;)
I don't find them very useful (today) to understand existing systems that don't intentionally use the patterns. They don't occur very often in well-designed systems in the first place, even less so unintentionally.
So they will be present in well-designed systems; it's just that they are not called by their „book name”.
And I clearly see them in all new frameworks; it's just that each framework has its own name for its implementation of the pattern.
Patterns were mostly named so that people could more easily discuss the solutions that are already there.
I will quote first sentence of foreword from my copy of the book „All well-structured object oriented architectures are full of patterns.”
That's the one I'm writing about; it has a foreword from Grady Booch and was tied to the OOPSLA meeting, with C++ and Smalltalk examples. Adding non-OOP stuff to it sounds like a different book, because that book is about OOP by definition.
Of course, at the time they really thought nobody would question whether OO was the best way to program, so the framing didn't seem misleading. Patterns apply to non-OO programming as well.
"most explanations"? Most crappy explanations on the web and in introductory courses perhaps.
The original GoF Design Patterns book and all of the Pattern Languages of Program Design books that followed it define and adhere to a pattern (form, template) for how to document design patterns. The main elements of this form are (GoF, p.3):
1. The *pattern name* ...
2. The *problem* describes when to apply the pattern. It explains the problem and its context [emphasis mine]. ...
3. The *solution* ...
4. The *consequences* are the results and trade-offs of applying the pattern. ...
I am guessing this form comes from Christopher Alexander, but I don't have a copy of A Timeless Way of Building at hand.
MVC is a great, even proverbial pattern, but I don't recall having seen it presented in the "Patterns Format" anywhere. Such a presentation would no doubt make it easier to understand.
- Pages 125–143 of Pattern-Oriented Software Architecture (1996) by Buschmann et al. ISBN 978-0471958697.
- Pages 330–332 of Patterns of Enterprise Application Architecture (2003) by Martin Fowler. ISBN 978-0321127426.
Design patterns are not "just a convention" they are practical solutions to often encountered problems. They are a way of extracting commonly applied, useful solutions and documenting them for reuse. If you go and read a properly documented design pattern I think it's pretty hard to misunderstand what it is, what it's good for, when to apply it, and when maybe don't. But it is definitely possible to misapply them. I'm still living in the shadow of implementing Observer, and then trying to implement undo as an Observer by translating observed events into Commands and placing them on to the undo stack. Messy.
Makes sense; decades on, it was all just personal abstractions.
The problem is when implementations aren’t actually monads at all. The same goes for other functional concepts. I wrote a blog about Java’s Optional::map here: https://kristofferopsahl.com/javas-optional-has-a-problem/
It’s the same kind of problem, where naming signals something the implementation is not.
(Am I allowed to link my own blog btw?)
IIRC it doesn't fulfill the monad axioms, but I don't think there's a huge problem with it. By the time you're using an Option<>-like, I don't think you should use bare `null` at all in your project. Mixing an Option<>-like and bare `null` sounds like playing with fire.
Also, if you're using Java 17+ (`record` in your example), you're probably better off writing your own Option<T> to support sum-type matching & product-type destructuring.
Alternatively, just don't call it 'map'.
I agree implementing your own Option<T> type is better. The problem is that people will use whatever is available in the standard library—I am not working in isolation.
That's completely backwards IME. The whole point of Option is to allow you to make precise distinctions, and not allowing null in it when null is allowed in regular variables is a recipe for disaster.
For example, the flagship use case of Optional is to make it possible to implement something like a safer Map#get(), where you can tell the difference between "value was not in the map" and "value was in the map, but null". A language that wanted to evolve positively could do something like: add Optional to the language, add Map#safeGet that returns Optional, deprecate Map#get, and then one chronic source of bugs would be gone from the language. (And yes, ideally no-one would ever put null in the map and you wouldn't have this problem in the first place - but people do, like it or not). Instead, Java introduced an Optional that you can't put nulls in, so you can't do this.
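A quick illustration of that gap (a hedged sketch; safeGet is hypothetical and not part of java.util.Map):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    class SafeGetDemo {
        // What a safeGet built on java.util.Optional ends up looking like...
        static <K, V> Optional<V> safeGet(Map<K, V> map, K key) {
            return Optional.ofNullable(map.get(key));
        }

        public static void main(String[] args) {
            Map<String, String> m = new HashMap<>();
            m.put("present-but-null", null);

            // ...and why it can't make the distinction: both calls print
            // Optional.empty, because Optional refuses to carry a null.
            System.out.println(safeGet(m, "present-but-null")); // Optional.empty
            System.out.println(safeGet(m, "absent"));           // Optional.empty

            // The unambiguous check still has to go through containsKey.
            System.out.println(m.containsKey("present-but-null")); // true
        }
    }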
Roughly, a monad is any generic type M for which you can define:

    fn unit<T>(value: T) -> M<T>

    fn map<T1, T2, F>(input: M<T1>, f: F) -> M<T2>
        where F: Fn(T1) -> T2

...and finally the magic bit:

    fn flatten<T>(input: M<M<T>>) -> M<T>

This, in turn, allows defining what you really want:

    fn flatMap<T1, T2, F>(input: M<T1>, f: F) -> M<T2>
        where F: Fn(T1) -> M<T2>

...where the mapping creates an "extra" layer of M<...>, and then we flatten it away immediately. (There are other rules than the ones I listed above, but they tend to be easy to meet.)
Once you have flatMap, you can share one syntax for promises/futures, Rust-style Result and "?", the equivalent for "Option", and a few dozen other patterns.
Unfortunately, to really make this sing, you need to be able to write a type definition which includes all possible "M" types. Which Rust can't do. And it also really helps to be able to pick which version of a function to call based on the expected return type. Which Rust actually can do, but a lot of other low- and mid-level languages can't.
So monads have a very precise definition, and they appear in incomplete forms all over the place in modern languages (especially async/await). But it's hard to write the general version outside of languages like Haskell.
The main reason to know about monads in other languages is that if your design is about 90% of the way to being a monad, you should probably consider including the last 10% as well. JavaScript promises are almost monads, but they have a lot of weird edge cases that would go away if they included the last 10%. Of course, that might not always be possible (like in many Rust examples). But if you fall just barely short of real monads, you should at least know why you do.
(For example, Rust: "We can't have real monads because our trait system can't quite express higher-order types, and because ownership semantics mean our function types are frankly a mess.")
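For one concrete M, the pieces above look like this in Java (a hedged sketch with a made-up Box type; Java, like Rust, can't abstract over the M itself, so every such type carries its own copy of this boilerplate):

    import java.util.function.Function;

    record Box<T>(T value) {

        // unit: T -> M<T>
        static <T> Box<T> unit(T value) { return new Box<>(value); }

        // map: M<T1> x (T1 -> T2) -> M<T2>
        <R> Box<R> map(Function<? super T, ? extends R> f) {
            return new Box<>(f.apply(value));
        }

        // flatten: M<M<T>> -> M<T>
        static <T> Box<T> flatten(Box<Box<T>> nested) { return nested.value(); }

        // flatMap: map, then flatten away the extra layer
        <R> Box<R> flatMap(Function<? super T, Box<R>> f) {
            Box<Box<R>> nested = map(f); // the mapping adds a layer...
            return flatten(nested);      // ...which we remove immediately
        }
    }

    class BoxDemo {
        public static void main(String[] args) {
            Box<Integer> doubled = Box.unit(21).flatMap(n -> Box.unit(n * 2));
            System.out.println(doubled); // Box[value=42]
        }
    }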
Exactly, this was my point, it wasn't clear.
The original definition and Haskell's implementation are good in themselves. Monads in Haskell are not that difficult or too abstract.
It was the monad tutorials and partial implementations that missed the mark, like in your example.
Myself, similarly, I've seen way too many Option<T> implementations in TypeScript that are less safe than if (value !== null) {}, because they replace a static check with an exception at runtime.
That's a language limitation and has nothing to do with the design pattern. You can do this just fine e.g. in Python.
Often I see actual common practices of "OOP" being used as arguments against it. These are then dismissed as 'not true OOP' by its proponents.
Only recently did I see someone give a presentation talking about not just the historical meaning of the term and its origins, but also the common practices that are associated with it, detailing some issues with them. (I'm guessing because he was tired of hearing the same defenses over and over again.)
A lot of people have never read Fielding's dissertation or aren't aware of the monad laws, but are producing bad code and bad tutorials.
It takes someone like Casey, months of research, and 3 hours to actually dig those things out 60 years after the fact.
My best guess from this article, given the "associated by observer" link from View to Controller, is that the View is supposed to pass events to the Controller, which will interpret them into changes to the Model. But what's the format of these events that's both meaningfully separate from the View, e.g. could be emitted from different views to maybe different controllers, but doesn't just force the View to do most of the work we want the Controller to do?
E.g., in a 'proper' ASP.NET MVC 4 project I 'inherited', the View took input data in, with a tiny bit of JS magic/Razor fuckery around the query page etc., but overall the controllers would return the right hints for the Razor/JS 'view' to move the application flow along or otherwise do a proper reload.
The Controller in ASP.NET MVC takes on the role of both the classic Controller and part of the classic Model's role (orchestrating the retrieval/updating of data). The connection between the View and the Model is completely severed and mediated by the Controller.
Splitting the logic from the state and presentation makes the code very testable. You can either have the state as input and test your presentation, or have the presentation as input (interaction and lifecycle events) and test your state (or its reflection in the presentation).
Also this decoupling makes everything more flexible. You can switch your presentation layer or alter the mechanism for state storage and retrieval (cache, sync) without touching your logic.
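"State as input, test the presentation" can be as small as this when the presentation is a pure function of state (a hedged JUnit 5 sketch; all names are made up):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class PresentationTest {

        record PlayerState(String name, int health) {}

        // The presentation under test: state in, display text out.
        static String healthLabel(PlayerState state) {
            return state.health() <= 0
                ? state.name() + " (down)"
                : state.name() + ": " + state.health() + " HP";
        }

        @Test
        void showsHealthWhenAlive() {
            assertEquals("Ada: 70 HP", healthLabel(new PlayerState("Ada", 70)));
        }

        @Test
        void showsDownWhenDead() {
            assertEquals("Ada (down)", healthLabel(new PlayerState("Ada", 0)));
        }
    }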
The web makes it quite a bit more involved with a separation of client- and server-side state - plus you have a given frontend "framework" in the shape of DOM, which people often leave out of the picture.
The latter necessitates the 'escape hatches' in React et al.
That's actually precisely the anti-pattern. Massive View Controller is an example of this.
The Model is where your logic is. Everything that is in any way semantically relevant.
Views handled display and editing (yes, also editing!). Controllers ... well ... I guess you might have a need for them.
There has to be a boundary that controls changes to the model. The confusion with MVC is where the best place for this boundary is. Well, more than one place, as it turns out, because there are at least two models of reality trying to converge: the model itself and the view of the model (and the user's mind).
The view's job is to present a projection of the model and then collect change events to the model. This could be a UX or an API. Other events can also change the model, like, say, sensor data.
The controller decides what view to show and retrieves model data to project and translates change events coming from the external world (views or events) into changes the model should interpret. This includes gatekeeping such as auth, and error handling.
That’s a lot for one class so it can get confusing very quickly. Why localize it in one place?
So viewtrollers come about, where the controller lives in the view class, in the on-handle methods. This also makes sense, since each view has a mini controller to handle all the jiggling bits.
This works well when there are no orthogonal injections like auth or eventing. When those are added, it makes sense in a viewtroller to extend the model with controller functionality, to e.g. control authorization, or to have a thin event receiver feeding an FSM in the model.
This all works but three years later it’s hard to figure out when I read the code again. So I have learnt to treat the model as pure data as much as possible and the view as much about rendering as possible. Views can have little controllers for handling the jiggling. What the controller cares about is when a change to the system needs to happen.
Then I can put the system control fsm in one place. I can put all event handling in the same fsm to avoid race conditions.
The goal is to make it easier to reason about.
What I don’t want are multiple threads of fsms in conflict with each other.
Trygve was very explicit about the model being a model of how the user thinks about the problem:
There should be a one-to-one correspondence between the model and its parts on the one hand, and the represented world as perceived by the owner of the model on the other hand.
https://web.archive.org/web/20090424042645/http://heim.ifi.u...
Second paragraph.
> There has to be a boundary that controls changes to the model.
Yes. It's called the API of the model. The model makes available API for all semantically valid operations. As I wrote elsewhere, I usually have a facade that acts as the top-level API for the model.
The views can call this API. So can other entities.
> The controller ... decides what view to show
Possibly. But views can handle their own subviews.
> The controller ... and retrieves model data to project
Nope. Not the job of the controller.
> The controller ... and translates change events coming from the external world (views or events) into changes the model should interpret.
Nope, it doesn't. The views typically do that as well. Controllers may get involved if there is a complex sequence of steps that doesn't really naturally fit into a view.
> That’s a lot for one class so it can get confusing very quickly.
Yeah, if you put all sorts of stuff in the controller that doesn't belong there.
> Why localize it in one place?
Indeed. Don't incorrectly localize all these things in one place. MVC, for example, tells you not to do this.
> This all works but three years later it’s hard to figure out when I read the code again.
Yes. MVC is much better than that thing you came up with.
I took the time to reread the literature and review my actual code. My updated understanding.
Model controls and holds the state of the system. It ensures the data is always valid.
Controller controls the boundary between the system and the user.
Views represent the model and capture user intent to change the model. They ask the model for the data they need; however, I disagree with the practice of views changing data directly, and instead prefer that they send an intention to change to the app, which does the work.
Auth was not considered in 1979 as far as I can tell. Authentication is part of the “controller” but in middleware usually because it’s part of the user input boundary, and generally better if done in one place early in the event lifecycle. Authorization is part of the model.
App logic is decomposed into workflows or use cases in the app layer. Events coming in through the controller are translated into what the system understands and then passed on to the workflow to execute.
Thus these should take change intent from the view and then actually tell/ask the model what needs to change. This allows the app to catch errors from the model, recover if it can, or handle multi-step flows. Results and errors are then sent back to the view (e.g. GUI dialog) or controller (e.g. API call) that initiated the workflow.
This makes it easier to put different views over the same app logic (mobile, web, api, agent) and also test workflows in isolation.
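A hedged sketch of what one such workflow could look like (RenameItemIntent, Catalog and WorkflowResult are all made-up names): the intent arrives from any view or controller, the workflow asks the model to apply it, and errors come back as data instead of leaking to the caller.

    import java.util.Optional;

    record RenameItemIntent(String itemId, String newName) {}

    sealed interface WorkflowResult {
        record Ok(String itemId) implements WorkflowResult {}
        record Err(String reason) implements WorkflowResult {}
    }

    // Stand-in for the model; in a real system it owns the validity rules.
    interface Catalog {
        Optional<String> findItem(String itemId);
        void rename(String itemId, String newName); // may reject invalid names
    }

    final class RenameItemWorkflow {
        private final Catalog catalog;
        RenameItemWorkflow(Catalog catalog) { this.catalog = catalog; }

        WorkflowResult execute(RenameItemIntent intent) {
            if (catalog.findItem(intent.itemId()).isEmpty()) {
                return new WorkflowResult.Err("no such item: " + intent.itemId());
            }
            try {
                catalog.rename(intent.itemId(), intent.newName());
                return new WorkflowResult.Ok(intent.itemId());
            } catch (IllegalArgumentException rejectedByModel) {
                return new WorkflowResult.Err(rejectedByModel.getMessage());
            }
        }
    }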
Modern views have their own controllers for mouse and keyboard. That’s fine. Don’t care. That’s effectively outside the system in the client experience (eg browser) anyway.
Where I have trouble is when I
- put a ton of app logic in the controller
- put a ton of model updates in the view
- have a single controller for the entire system instead of one per system boundary/interface of user events.
The (DDD?) style of app logic being encapsulated outside of the controller makes a lot more sense to me now that I see it.
¯\_(ツ)_/¯
What gives?
Put all the logic in the model. If you want authentication, put that in an "authenticated model" that wraps your model. Don't put it in the controller.
> Controller controls the boundary between the system and the user.
Nope. That is Apple-MVC. Aka Massive-View-Controller. It is not MVC.
The model API is the boundary between the model and the rest of the system.
https://blog.metaobject.com/2015/04/model-widget-controller-...
I don't know why you think this is a combative conversation where I need to accept or reject anything. I lack understanding of how you would solve the same problem. Throwing chaff is not communication. It creates a second problem beyond the one we are discussing.
I don’t know how you handle routing. Do you put that in the model as well?
That just leaves formatting the requested changes into a language the server model accepts.
Maybe model is more 'database', controller is API interface (server side + client request requirements), and view is end user render?
Basically, the thinking was to let the programmer design the view and then implement the code-behind. I'll spare you from my rants about this, but it was popular.
Nowadays, with vibe coding, there is no need to use obtuse design patterns for the sake of RAD. Sensible architectures can easily be used by LLMs without sacrificing engineer or designer agility.
So like in React, you'd have your Redux store as the Model, React components (with useState etc) as the View, and then your Controller is the reducer which is called from UI code and updates the Model.
Maybe that's incorrect definitionally, but it makes sense to me.
As you say, in MVC, the vortex should be User -> Controller -> Model -> View -> User. Best if this is the only vortex (Flux pattern). This can be nicely expressed functionally. That's why I think beans (and mutable variables in general) are bad: each has its own small vortex of updates.
Another major aspect of the original "true" MVC is multiple simultaneous views on to the same model, e.g., a CAD program with two completely live views on the same object, instantly reflecting changes made in one view in the other. In this case MVC is less "good idea" than "table stakes".
I agree that MVC has fuzzed out into a useless term, but if the original is to be recovered and turned into something useful, the key is not to focus on the solution so much as the problem. Are you writing something like a CAD program with rich simultaneous editing? Then you probably have MVC whether you like it or realize it or not. The farther you get from that, the less you probably have it... and most supposed uses of it I see are pretty far from that.
This is a really insightful way to frame it.
That's sensible. But it's generally useful to split your core state from your presentation, and then you'll find strands of logic that belong to neither, generally glue code, though some can be useful enough to warrant a module of their own. Also, your core state can be quite alien to the view itself (think game data (invisible walls, sound objects) versus the actual rendering).
Maybe this architecture is not MVC, but MVC can be a first stab at a quick and dirty separation. Then a cleaner separation can be done by isolating specific modules (in layers, a graph, whatever).
And in implementing some process, what is it? As in: what is its encoding in $language and where does it go?
So you end up with the local stamp collectors in the office and get into an argument of: it is part of the model, so should be in the Model class. "Process, nah, that is totally a controller aspect. It does something." etc.
Could you give an example? I've never understood how one could possibly reuse a Controller independently of a View. At a minimum any kind of mouse-based direct manipulation requires the Controller having access to the displayed geometry in order to implement hit testing. E.g. how is a Controller supposed to update the selection range in a text editor without screen-coordinate character extent information from the view, or a drawing editor Controller accessing scene graph geometry for object selection, control handle manipulation, etc.
And you're absolutely right!
The problem you're seeing is one of the misunderstandings/misinterpretations of MVC, that the controller is for all interactions/editing. It's not. It's perfectly fine for the View to handle this.
Instead a model is one or more collaborating objects.
Not sure why this would lead to anemic models, which I completely agree are a common anti-pattern.
In fact, to me it seems rather the opposite would be true: having the single object facade facilitates having a complex model that coordinates all the different pieces to represent a unified view of said model to the views, which can then be very simple and transparent.
In turn, when the models were coupled with views individually, that has tended to lead to exactly that View → DB Table mapping of dumb data objects you rightly criticize.
Then there's the ORM thing, particularly active record ORMs, where "a model" means a database table. And things like serialisation libraries (e.g. Pydantic) where "a model" is one type.
Something that changed how I thought of it was Robert Martin's Clean Code, where he says the whole MVC lives in the outer layers of the application. So basically, "model" is context specific. It depends what part of your application you're talking about. MVC is about building GUIs, that's it. An application usually consists of a lot more.
I used to say things like this. M and V were always pretty unambiguous, but “controller” was kind of like “Christianity”, everyone talks like it’s a unifying thing, but then ends up having their very own thoughts about what exactly it is, and they’re wildly divergent.
One of the early ParcPlace engineers lamented that while MVC was cool, you always needed this thing at the top, where it all "came together" and the rules/distinctions got squishy. He called it the GluePuppy object. Every UX kit I've played with over the years, regardless of the currently in-vogue lets-use-the-call-tree-to-mirror-the-view-tree approach, or yesteryear's MVVM, MVC, MVP, etc., always ends up with GluePuppy entities in it.
While on the subject, it would be remiss not to hat-tip James Dempsey's MVC song at WWDC:
https://youtu.be/kYJmTUPrVuI?feature=shared
“Mad props to the Smalltalk Crew” at 4:18, even though he'd just sung about a controller layer in Cocoa that was what the dependency/events layers did in various Smalltalks.
My programs were simple, so M was data, V was presentations of the data, C was interaction on the M and maybe V.
It only got confusing when I got more experience.
M is the Model. That means the data and all the things you might ever want to do with the data. So any interaction you might want to do from the view is (ideally) a single message-send to the model.
> V was presentations of the data
And editing the data.
> C was interaction on the M and maybe V.
> It only got confusing when I got more experience.
:-)
I like the term "GluePuppy". It absolutely is a crucial part, or parts.
Basically, it defines the architecture of the system, pulls all the objects together and connects them to create a useful system.
It doesn't help that we don't have linguistic support for "glueing things together", we only have procedure calls. So that's one of the issues I am addressing with Objective-S, actual linguistic support for "glue". And yeah, please don't put the rules/distinctions over designing a clean system.
That was a thing I discovered only recently when working with two teams, one Android and one iOS: developers in general, and particularly good architects, want to build these uniformly recursively decomposed modules.
You can't do that with a good OO architecture, because you absolutely need the GluePuppy to tie things together. That one is different from the other modules. If you don't allow for a GluePuppy, but instead insist on uniform modules, you inescapably get procedural decomposition instead of OO decomposition. If you are in an OO language, that means singletons everywhere, because every module needs to know about its dependencies. And that gets you into a world of hurt when you want to do what OO is supposed to be good for, handle variation.
Embrace the GluePuppy! It loves you and wants to make your life simpler.
The "lets-use-the-call-tree-to-mirror-the-view-tree" fashion also has to do with this, IMHO, because our languages only have support for writing down procedures. So if you can get your view-tree expressed by the call-tree, you get to write it down directly in the code, rather than constructing it indirectly, with the view-tree being a hidden side effect.
That is a clear benefit, but the costs are pretty horrendous.
How about we extend our languages so that we can also write down those view hierarchies directly in the code, and at the same time using a format that can also be read/written as data (down LISPers, down)?
https://blog.metaobject.com/2022/06/blackbird-simple-referen...
I haven't written Objective-C in a decade or so, but isn't this a pretty big mischaracterization of the language? NSInteger is a typedef to a C type IIRC, while there's NSNumber for the cases where you want an object and/or are deserializing -- and which has observable properties?
Like, why do we even need any of that stuff? I blogged about it [1] and spoke about it [2] and the post even got some HN love [3].
The opening parable concludes with this...
Multitudes of sworn "Rails developer"s, "Laravel developer"s, "Django developer"s, "Next.js developer"s and suchlike throng the universe…
Why?
...
...
Once upon a time, there was one.
WebObjects.
Now they are numberless.
The occasional email and DM gives me succour that I am not alone in my confusion. Even people who've "grown up" using traditional MVC frameworks took a minute to self-check and felt "huh, looks like I can look harder at this thing that I do".

Clojuring the web application stack: Meditation One
[1] blogged: https://www.evalapply.org/posts/clojure-web-app-from-scratch...
[2] talked: https://www.youtube.com/watch?v=YEHVEId-utY&list=PLG4-zNACPC...
deck: https://www.evalapply.org/posts/clojure-web-app-from-scratch...
source: https://github.com/adityaathalye/clojure-multiproject-exampl...
[3] discussed: https://news.ycombinator.com/item?id=44041255
165 points by adityaathalye 3 months ago | 39 comments
It was __sold__ not to programmers, but to NON programmers.
The model is the underlying shape (the data in storage).
The Controller is like the security guard for the warehouse / building.
The View is what's presented to external clients (end users).
In desktop GUI apps, the delineation is much crisper: the model is the data that the application manages (the CAD geometry, the document, etc). The view is one or more renderings of the data to screen, and the controller is the input and command processor that updates both the model and the view.
Storage is not central to this architecture - it exists, of course, but it’s not really described as part of these core relationships.
I'd like to add a couple of points to TFA.
Yes, it is absolutely paramount to understand what the model is. It is the abstract representation of the domain. The rest of the architecture serves the model and should be as minimal and transparent as possible. Particularly Apple-space code tends to get this very wrong by having very thin models and all the logic in the Massive View Controllers.
It is also important to understand that MVC is not about specific objects, but rather about roles and communication patterns. Different objects can have those roles, and they can actually be usefully combined at times (though do keep the model separate, please).
One crucial part of the communication patterns that TFA duly notes is that models do not know about views. That means that views only ever pull data from models; models never push data towards views. It also means that in an update, the model just says "I have changed". It does not send the data that changed. The "the model changed" notification is also the only reason a view gets updated.
No, the controller doesn't poke the correct updated data into the view after it has notified the model; that leads to update chains and cycles. IIRC, that was one of the problems that React was trying to solve with "MVC", except it turns out that actual MVC never had those problems in the first place. Mis-application of MVC does.
Having the view always update itself from the model means that view state is always consistent, even if you miss or skip intermediate updates. Skipping intermediate view updates is important, because the model can potentially change a lot faster than the view can update, never mind the user processing those view updates.
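In code, that pull-based update looks something like this (a hedged, framework-free sketch; all names are made up):

    import java.util.ArrayList;
    import java.util.List;

    interface ModelObserver {
        void modelChanged(); // deliberately carries no data
    }

    final class CounterModel {
        private final List<ModelObserver> observers = new ArrayList<>();
        private int count;

        void addObserver(ModelObserver o) { observers.add(o); }
        int count() { return count; }

        void increment() {
            count++;
            observers.forEach(ModelObserver::modelChanged); // notify, don't push
        }
    }

    final class CounterLabel implements ModelObserver {
        private final CounterModel model;
        CounterLabel(CounterModel model) { this.model = model; }

        @Override public void modelChanged() {
            // Pull the current state; skipped intermediate values don't matter.
            System.out.println("count = " + model.count());
        }
    }

    class PullDemo {
        public static void main(String[] args) {
            var model = new CounterModel();
            model.addObserver(new CounterLabel(model));
            model.increment(); // prints "count = 1"
        }
    }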
Also, one common misunderstanding (that also leads to Massive View Controllers) is the mistaken belief that views must not edit models, that you must have a controller to edit it. That is actually not in the original MVC paper[3]. In the original paper, views can edit models and that makes things a lot more sensible.
Controllers are a bit of a catch-all for stuff that didn't fit anywhere else. The Views in Cocoa actually take over some of those roles, and that works absolutely fine. (Imagine my confusion when ViewControllers were introduced...)
[1] https://blog.metaobject.com/2015/04/model-widget-controller-...
[2] https://blog.metaobject.com/2017/03/concept-shadowing-and-ca...
[3] https://web.archive.org/web/20090424042645/http://heim.ifi.u...
As someone mentioned in another thread, a separate controller starts to make sense if there are several views or views + other parts that somehow access (some of) the same data.
Qt model-view initially didn't even involve QML, by the way, and QTreeView etc. still exist for the classic desktop look and feature set. I don't use them much anymore either, but that's because I mostly do embedded stuff. I wouldn't want my e-mail client or text editor to be written in QML though.
For example, what if you have two widgets that need to be side by side? And the user needs the ability to use the keyboard to switch between them? What if now you have a third widget below them that is also tabbed?
At this point you need a state machine to track the state and where the user is currently at. It's easy if this is done for you but pretty difficult otherwise in either architecture.
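A hedged sketch of the kind of state machine meant here, for three panes with Tab cycling the focus (the names and the key handling are made up):

    enum FocusTarget { LEFT_PANE, RIGHT_PANE, BOTTOM_TABS }

    final class FocusMachine {
        private FocusTarget current = FocusTarget.LEFT_PANE;

        FocusTarget current() { return current; }

        // One explicit transition table, instead of focus logic scattered
        // across the individual widgets' event handlers.
        void onTab() {
            current = switch (current) {
                case LEFT_PANE   -> FocusTarget.RIGHT_PANE;
                case RIGHT_PANE  -> FocusTarget.BOTTOM_TABS;
                case BOTTOM_TABS -> FocusTarget.LEFT_PANE;
            };
        }
    }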
This is necessary for zero trust in application design. Traditionalists really seem to struggle with this shift in mentality that, when you are designing a system, trust is where the problems come from.
Just as an example, collecting date inputs from a user might be three different fields in the view model and only one field in the data model, and be completely different data types (int vs datetime). If you are working with a client side application then you may not want to pass the entire object to the client because you don’t trust them with all the information, and you cannot trust them to maintain state, so you only transfer the date value in a data transfer object.
These are all models with wildly different intents. If you can’t understand the intent of this separation of concerns then you are designing insecure systems.
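A hedged sketch of the three shapes being described (all names are made up): what the form binds to, what the domain stores, and what actually crosses the wire to the untrusted client.

    import java.time.LocalDate;

    // View model: three raw fields, exactly as the user types them.
    record BirthDateForm(int day, int month, int year) {
        LocalDate toDomain() { return LocalDate.of(year, month, day); } // throws on bad input
    }

    // Domain model: one value, always valid once constructed.
    record Person(String name, LocalDate birthDate) {}

    // DTO: only what the client is allowed to see, as plain data.
    record PersonDto(String birthDateIso) {
        static PersonDto from(Person p) { return new PersonDto(p.birthDate().toString()); }
    }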
UIKit is explicitly designed for MVC. If you want to write the most concise, performant, maintainable, UIKit code, you do so, using MVC, and classic OOP. I have tried other models, but they end up as messy kludges.
SwiftUI was designed to be more flexible, and can employ other patterns. I find that OOP is sometimes useful (especially for things like observable models), but there’s no reason not to do it, using other methods. It doesn’t force you to use anything in particular.
The main issue with SwiftUI, is that it’s still quite “unripe,” and we are limited in what we can do with it. I am looking forward to this changing, over time (it’s already improved, quite a bit). Time will tell, whether or not it can completely replace UIKit. I haven’t really been able to use it for any of my shipping projects, yet. I know of a number of apps that have, but I’ve been unwilling to make the compromises necessary, to do it, myself.
Some tools were designed to be used in certain ways, and coercing them into methodologies for which they weren’t designed, can result in a mess.
If I want to bang nails into a board, a hammer is the best tool. I have banged nails in the past, by flipping a screwdriver around, and using the handle, but that damages the screwdriver, and doesn’t work especially well.
But maybe a nail isn’t the best way to join the boards. If I use screws, then the join will be much better. In that case, the proper tool is a screwdriver. I guess I could still use a hammer, but the results are unlikely to be satisfactory.
Ultimately we want a nice set of reusable UI components that can be used in many different situations. We also want a nice set of business logic components that don't have any kind of coupling with the way they get represented.
In order to make this work, we're going to need some code to connect the two, whether it's a 'controller' or a 'view model' or some other piece of code or architecture.
However we choose to achieve this task, it's going to feel ugly. It's necessarily the interface between two entirely different worlds, and it's going to have to delve into the specifics of the different sides. It's not going to be the nice clean reusable code that developers like to write. It's going to be ugly plumbing, coupled code that we are trying to sweep into one part of the codebase so that we can keep the rest of it beautiful.
You have Table component.
You have TableData interface. Table component asks TableData interface for data to show.
You have TableCallback interface. When user interacts with Table, like click or whatever actions are implemented, TableCallback methods are called.
When you want to use Table with your data, you implement TableData and TableCallback for your data. That's about it.
I've seen this approach implemented in most UI frameworks. You might rename TableData to TableModel or whatever, but essentially that's it.
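A hedged Java sketch of that shape (the concrete interfaces in any given UI framework will differ, but the division of labor is the one described):

    interface TableData {
        int rowCount();
        int columnCount();
        String cellAt(int row, int column);
    }

    interface TableCallback {
        void rowClicked(int row);
    }

    // The reusable component knows nothing about where the data comes from
    // or what a click means.
    final class Table {
        private final TableData data;
        private final TableCallback callback;

        Table(TableData data, TableCallback callback) {
            this.data = data;
            this.callback = callback;
        }

        void render() {
            for (int r = 0; r < data.rowCount(); r++) {
                StringBuilder row = new StringBuilder();
                for (int c = 0; c < data.columnCount(); c++) {
                    row.append(data.cellAt(r, c)).append('\t');
                }
                System.out.println(row);
            }
        }

        void simulateClick(int row) { callback.rowClicked(row); }
    }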
But to try to explain myself more clearly: in the architecture you describe, who is it that is implementing TableData and TableCallback? Is it your beautiful clean business logic classes that have no coupling to their representation - in which case that is weirdly coupled in an ugly way - or is it some other class that acts as the bridge between the two worlds, in which case, that's where your ugly code is living?
Also, a while back it was way less common for UIs to have backend services. Nowadays those have taken away most of the "model" side in a lot of apps.
Apple is to blame, as they give absolutely no clue on that part (only demo apps with structures that don't scale).
If you’re learning MVC, Scott’s OdeToCode/Pluralsight material still nails the fundamentals and the why behind them.
With Observers:
We have hidden control flow and lost intent. Observers subvert the readability of the developer's intention, in some cases they make you lose the call stack, and they have you searching the project for whatever code modifies a variable, which is a lot harder than searching for a function call. Don't get me started on handling errors and out-of-order events. And oh man, is it easy to just avoid using encapsulation and creating a good interface/API for your piece of code.
Most of your code isn't as reusable as you think:
A lot of things are naturally and forever tied together. Your UI is a reflection of _some_ model, the actions it can perform are based on its current context, and if your UI changes then your business logic and model probably change as well. This die-hard need for separation and modularity only increases the complexity of the code, and most of the time the code isn't even reused.
The only case where I've found observers somewhat reasonable is the database. What caused the database to change and the effect it has are already pretty far removed from each other by the time a piece of UI needs to reflect the database.
Granted, It's possible to work around some of these issues, but please please I'm tired of debugging why a menu only opens 50% of the time because there is a piece of code several classes away from the context that doesn't fire correctly and looks like if (child.preferred.child.model.somethingElse.isFinished) { child.menu.child.openMenu = true }
What would make more sense to me is to simply define the controller as an intermediary between model and view for updates in both directions. The controller would simply represent whatever needs to be specific to the particular combination of model and view in the particular application context. Depending on context, you might use a no-op controller when no customization is needed, as in the example of the article where a checkbox (view) is bound to a boolean property (model).
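A hedged sketch of that reading of "controller", including the degenerate no-op case for the checkbox/boolean binding (all names are made up):

    import java.util.function.Consumer;
    import java.util.function.Supplier;

    // The controller's whole job: translate between view and model values,
    // in both directions.
    interface Controller<V, M> {
        M toModel(V viewValue);
        V toView(M modelValue);
    }

    final class CheckboxBinding<M> {
        private final Supplier<M> readModel;
        private final Consumer<M> writeModel;
        private final Controller<Boolean, M> controller;

        CheckboxBinding(Supplier<M> readModel, Consumer<M> writeModel,
                        Controller<Boolean, M> controller) {
            this.readModel = readModel;
            this.writeModel = writeModel;
            this.controller = controller;
        }

        boolean checkedStateForView()   { return controller.toView(readModel.get()); }
        void userToggled(boolean state) { writeModel.accept(controller.toModel(state)); }
    }

    class BindingDemo {
        // The no-op controller for the boolean-property-to-checkbox case.
        static final Controller<Boolean, Boolean> NO_OP = new Controller<>() {
            public Boolean toModel(Boolean v) { return v; }
            public Boolean toView(Boolean m)  { return m; }
        };
    }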
A hundred people with their own clearly self-evident truth as to whether models are thin or fat.
Absolute burning certainty as to where validation logic lives.
And maybe 3 or even as many as 4 mad hermits who claim to understand what a Controller is.
This is in contrast to the 1990s model of web programming where you wrote an HTML page with a <form>, pointed the action to some URL, and that URL was a cgi-script that couldn't redraw the form so error handling was difficult.
In a lot of cases you could say the data fetching is a dependency of the view and not the other way around, for instance if it is a blog post you might have a model object for the actual blog post but then want to put arbitrary widgets into the view which in turn requires fetching whatever model objects are necessary to draw the widgets. From the viewpoint of a CMS user, for instance, they want to drop the widget into the template and have it "just work" (have the framework figure out the fetching.)
The first exposure a lot of people had to this paradigm was Ruby-on-Rails, and since it had a rich model system, people thought the model system was the important bit. But I'd say the router is the most important bit, and how you fetch the data and format it is secondary; in fact it's totally fair to use different fetching and templating paradigms for different pages that live under the same router.
THE OG
https://web.archive.org/web/20090424042645/http://heim.ifi.u...
Enjoy!