I think at least some of these issues can be avoided with a different UI/UX that avoids passing transient/unsaved data between screens.
looking forward to the next instalment!
Then solving this by recreating the entire stepper HTML at each step, with the added complexity that if it contains something you want to keep, "it's a nightmare"?
Then having to create a temporary server-side session to store data that somehow the browser can't keep between two clicks?
Etc. It's writing web apps like it's 1999.
The two biggest problems with HTMX are that, being fully server-side controlled, you need to put the whole app state in the URL, and that quickly becomes a nightmare.
The other is that the code of a component is split in two: the part that is rendered the first time, and the endpoint that returns the updated result. You need a lot of discipline to prevent that from turning into a mess.
The final nail in the coffin for me was that the thing I wanted to avoid by picking HTMX (making a REST API to separate the frontend and the backend) was actually a good thing to have. After a while I was missing the clean and unbreakable separation of back and front. Making the REST API was very quickly done, and the frontend was quicker to write as a result. So HTMX ended up slower than React/Vue. Nowadays React/Vue provide server-side rendering as well, so I'm not sure what HTMX has to bring.
> The two biggest problems with HTMX are that, being fully server-side controlled, you need to put the whole app state in the URL, and that quickly becomes a nightmare.
Or you can create a large session object (which stores all the state) on the server, and have a sessionId in the URL (although I'd prefer a cookie) to associate the user with that large session object.
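Something like this minimal Go sketch (all names hypothetical; an in-memory map stands in for a real store with expiry and persistence):

    package web

    import (
        "crypto/rand"
        "encoding/hex"
        "net/http"
        "sync"
    )

    // Session holds all the per-user UI state server-side.
    type Session struct {
        Step int
        Form map[string]string
    }

    var (
        mu       sync.Mutex
        sessions = map[string]*Session{} // sessionId -> state
    )

    // getSession finds the caller's session via a cookie,
    // creating one (and setting the cookie) if needed.
    func getSession(w http.ResponseWriter, r *http.Request) *Session {
        mu.Lock()
        defer mu.Unlock()
        if c, err := r.Cookie("sessionId"); err == nil {
            if s, ok := sessions[c.Value]; ok {
                return s
            }
        }
        buf := make([]byte, 16)
        rand.Read(buf)
        id := hex.EncodeToString(buf)
        s := &Session{Form: map[string]string{}}
        sessions[id] = s
        http.SetCookie(w, &http.Cookie{Name: "sessionId", Value: id, HttpOnly: true})
        return s
    }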
> Nowadays React/Vue provide server-side rendering as well, so I'm not sure what HTMX has to bring.
You should be doing this anyway ... it's so annoying when my wife sends me a link at work and it just goes to a generic page instead of the search results she wanted to share with me. She ends up mostly sending me screenshots these days because sharing links doesn't work.
A more reasonable implementation of sharing search results would be to store results on the server and have a storedResults id key in the URL.
The shopping cart can be kept on the back end and referenced by an id stored in a cookie.
You can keep partially filled-out forms in hidden form fields and send them back in either GET or POST.
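A hedged sketch of that hidden-field approach with Go's html/template (field and path names hypothetical); html/template escapes the values, so round-tripping user input this way is safe:

    package web

    import (
        "html/template"
        "net/http"
    )

    // step2Tmpl renders step 2 and smuggles step 1's answers
    // along as hidden inputs, so no server-side session is needed.
    var step2Tmpl = template.Must(template.New("step2").Parse(`
    <form method="POST" action="/wizard/step3">
      <!-- state from step 1, carried forward invisibly -->
      <input type="hidden" name="name"  value="{{.Name}}">
      <input type="hidden" name="email" value="{{.Email}}">
      <!-- step 2's actual question -->
      <label>Plan: <select name="plan">
        <option>basic</option><option>pro</option>
      </select></label>
      <button type="submit">Next</button>
    </form>`))

    func step2(w http.ResponseWriter, r *http.Request) {
        // Echo step 1's POSTed fields back out as hidden inputs.
        step2Tmpl.Execute(w, struct{ Name, Email string }{
            Name:  r.PostFormValue("name"),
            Email: r.PostFormValue("email"),
        })
    }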
Not all requests require all the form data. For instance, my RSS reader YOShInOn is HTMX-based -- you can see two forms from it here:
https://mastodon.social/@UP8/114887102728039235
In the one at the upper left there is a main form where you can view one item and evaluate it, which involves POSTing a form with hidden input fields. Above that, I can change the judgements of the past five items by just flipping one of the <select>s, which needs to submit only the id and the judgement chosen in the select. I guess on clicking one of the buttons in the bottom section I could redraw the bottom section, insert a <select> row at the bottom of the list and delete the one at the top, but it just redraws the whole form. That's OK because I don't have 200k worth of Open Graph and other metadata in the <head>, endless <script> tags, or any CSS other than Bootstrap and maybe 5k of my own, all of which caches properly.
Yes, shopping cart state should be in the URL in the form of a server-side token under which the cart state is stored. Ditto for partially filled in forms, if that's something your app needs.
All page state should be transitively reachable from the URL used to access that page, just like all state in a function in your favourite programming language should be transitively reachable from the parameters passed into that function, e.g. no global variables. The arguments for each are basically the same.
All of this stuff needs to be stored on the server anyway… otherwise how will you get it back on the page when I switch computers or pull it up on my phone?
You say all of that needs to be stored on the server?
That is how you make a big server crawl with just 100 users, regardless of the backend's programming language.
Even mobile browsers handle this just fine: https://share.icloud.com/photos/022RMgNZWot7w6AXurHPKC_Nw
This is a myth, servers and round trips can be way faster than most people seem to believe: https://www.youtube.com/watch?v=0K71AyAF6E4#t=21m21s
That's thousands of DOM updates per second being sent and rendered at 144fps.
Some will manually push a History entry, but not all.
Also, you don't want that stored server-side, because there can be multiple parallel tabs, and you don't get notified server-side when a tab is closed so that you can properly clean up the associated resources.
?tab[0]=/some/url&tab[1]=/some/resource&activeTab=0
Bam, you have tabs in the url. I can duplicate the tab, share my view, or whatever. Assuming the other user/tab/window/profile has access to these resources, it’ll show exactly the same thing. I can even bookmark it!
You can even add popups:
?popupModal=saveorleave
This state probably won't be applied on entry, but what's great is that pressing [back] in the browser has the same effect as cancel! If you click "leave" then you do a "replace state" instead of a "push state" navigation, so the user doesn't go back to a modal…
This was, at one point, decently standard logic. Then people who don’t know how browsers work started creating frameworks and reinventing things.
I digress. I’m just so glad I left the front end 15 years ago, I’d lose my shit if I were dealing with this kind of stuff every day.
Then it is not simple (as I understand "simple"); it's just the same thing we already have, reinvented.
If it were simple and lean, it would solve the most common problems by itself (like how I don't need to care about the HTTP part in .NET: it's a one-liner and the framework solves it for me).
Yeah, that also bothered me. To me it looks like the page (template) should fetch that partial from the same endpoint that will deliver the partial via the wire to HTMX.
I haven't gotten around to it yet, but my plan is to use in-process REST with Objective-S, so that accessing the internal endpoint will be the cost of a function call.
The HTTP wrapper for external access is generic.
I think it should be obvious that if a piece of software is easy and convenient for the author to write (simple and lean), then the complexity falls onto the users.
To avoid the cost of updating the entire page, htmx fetches only a parent element and all of its children, but this runs into the problem that you must choose a common parent element for all the elements you want to update.
So the author reaches the conclusion that htmx is not meant to be used for SPA style apps. It's meant to add a little bit of interactivity to otherwise static HTML.
Not exactly, you can use Out-Of-Band updates, which means the server can arbitrarily choose to update specific elements outside the parent.
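A hedged sketch of what that looks like from a Go handler (ids and the cart helper are hypothetical): the first element swaps into the normal target, and the hx-swap-oob element is matched by id anywhere on the page.

    package web

    import (
        "fmt"
        "net/http"
    )

    // addToCart swaps the button that triggered the request (htmx's
    // normal target) and, out of band, also updates the cart badge
    // elsewhere on the page.
    func addToCart(w http.ResponseWriter, r *http.Request) {
        n := addItem(r.PostFormValue("sku")) // hypothetical cart logic
        // The first element replaces the hx-target as usual.
        fmt.Fprint(w, `<button disabled>Added!</button>`)
        // This one is matched by id and swapped wherever it lives.
        fmt.Fprintf(w, `<span id="cart-count" hx-swap-oob="true">%d</span>`, n)
    }

    func addItem(sku string) int { return 1 } // stub for the sketch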
> So the author reaches the conclusion that htmx is not meant to be used for SPA style apps. It's meant to add a little bit of interactivity to otherwise static HTML.
Can you clarify where I seem to have come to this conclusion? This is not what I intended to express.
Why is every developer trying to make things complicated?
Whenever I go debug unnecessary state machines, or have to refactor them (to compress the number of steps), I scratch my head half the time, trying to follow the train of thought that my predecessor felt so smart about.
I usually solve the second problem by simply saving the state of the individual input fields; you only need a user session. Depending on your use case, you might need to be transactional, but you can still do this by saving everything as "partial" and closing the "transaction" (whatever it might mean in the given context) at the last step. Much, much simpler than sending form data over and over.
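A minimal sketch of that partial-save pattern in Go (the store interface and helpers are hypothetical):

    package web

    import "net/http"

    // store is whatever persistence you have; names are hypothetical.
    type store interface {
        SavePartial(sessionID, field, value string) error
        Finalize(sessionID string) error // "close the transaction"
    }

    // saveStep persists just the fields posted by the current step,
    // flagged as partial, instead of re-sending all form data each time.
    func saveStep(db store, final bool) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            sid := sessionID(r) // from the user's session cookie
            r.ParseForm()
            for field, vals := range r.PostForm {
                db.SavePartial(sid, field, vals[0])
            }
            if final {
                db.Finalize(sid) // promote partial rows to a real record
            }
            w.WriteHeader(http.StatusNoContent)
        }
    }

    func sessionID(r *http.Request) string { // stub for the sketch
        c, _ := r.Cookie("sessionId")
        if c == nil {
            return ""
        }
        return c.Value
    }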
I did mention using OOB, but I preferred swapping the entire Stepper because the logic on the backend was just a little bit cleaner, and the Stepper didn't include anything else anyways.
> I usually solve the second problem by simply saving the state of the individual input fields; you only need a user session.
I believe this is exactly what I did in the article, no?
In Datastar, "out-of-band" updates are a first-class notion.
I think this is best seen in the examples on the DS website.
HTMX also has the option of using SSE with an extension [0]. I've used this to update the notifications tray, for example. You could probably do it for OP's example too.
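A hedged sketch of the server half in Go (the channel wiring and event name are hypothetical); the htmx SSE extension consumes standard text/event-stream frames:

    package web

    import (
        "fmt"
        "net/http"
    )

    // notifications streams HTML fragments as server-sent events.
    func notifications(updates <-chan string) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "text/event-stream")
            w.Header().Set("Cache-Control", "no-cache")
            f, ok := w.(http.Flusher)
            if !ok {
                http.Error(w, "streaming unsupported", http.StatusInternalServerError)
                return
            }
            for {
                select {
                case html := <-updates:
                    // SSE frame: event name + single-line HTML payload.
                    fmt.Fprintf(w, "event: notify\ndata: %s\n\n", html)
                    f.Flush()
                case <-r.Context().Done():
                    return
                }
            }
        }
    }

On the page, per the extension's docs, something like <div hx-ext="sse" sse-connect="/notifications" sse-swap="notify"></div> swaps each event's HTML into the tray.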
I didn't see guidance in the docs for routing one tab's interaction events to the backend process managing that tab's SSE. What's the recommended practice? A global, cross-server event bus? Sticky sessions with no multiprocessing, and an in-process event bus?
If a user opened the same page in two tabs, how should a datastar backend know which tab's SSE to tie an interaction event to?
So when opened in a different tab, the backend would do authentication and render the page depending on the stored state.
In general, the backend must always compare the incoming state/request with stored state. E.g. the current step is step 2, but the client can force it to go to step 4 by manipulating the URL.
DS v1.0 now supports non-SSE (i.e. simple request/response) interaction as well [1]. This is done by setting the appropriate Content-Type header.
[1] https://data-star.dev/reference/actions#response-handling
This is just what I can glean from the docs, I've never actually used datastar myself.
You might need async if there are a lot of concurrent users, each of them using long-duration SSE. However, this is not DS-specific.
This blog post affirms it.
If you mean developer-hostile: sure, reloading the backend server can be annoying if it compiles slowly, like, say, Rust. I would say, pick the right tool for the job. If you're going to use htmx, pick a backend language that compiles and reloads very quickly.
Why not use cookies?
So, when /upload is requested, the backend in response sets a cookie with a random uploadId (+ TTL). At the backend, we tie sessionId and uploadId.
With every step that is called, we verify sessionId and uploadId, along with any additional stored state.
This means even if the form is opened on a different tab, it will work well.
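A minimal Go sketch of that cookie dance (helpers like tieToSession and validPair are hypothetical stubs):

    package web

    import (
        "crypto/rand"
        "encoding/hex"
        "net/http"
        "time"
    )

    // startUpload issues a random uploadId cookie with a TTL and ties
    // it to the caller's session server-side.
    func startUpload(w http.ResponseWriter, r *http.Request) {
        buf := make([]byte, 16)
        rand.Read(buf)
        uploadID := hex.EncodeToString(buf)

        tieToSession(r, uploadID) // record sessionId -> uploadId + state

        http.SetCookie(w, &http.Cookie{
            Name:     "uploadId",
            Value:    uploadID,
            Expires:  time.Now().Add(30 * time.Minute), // the TTL
            HttpOnly: true,
            Path:     "/upload",
        })
        // ...render step 1 of the wizard...
    }

    // step verifies the pair before trusting any posted state.
    func step(w http.ResponseWriter, r *http.Request) {
        c, err := r.Cookie("uploadId")
        if err != nil || !validPair(r, c.Value) {
            http.Error(w, "upload session expired", http.StatusForbidden)
            return
        }
        // ...handle this step...
    }

    func tieToSession(r *http.Request, id string)   {}             // stub
    func validPair(r *http.Request, id string) bool { return true } // stub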
This does not have the benefit of being usable across different tabs, or even after closing and re-opening the page. Besides (a minor point), shoving all the state in the cookie makes the code simple, i.e. you don't have to use URL params.
However, it's difficult to get things right. I spent way too much time on some basic features that I could have shipped quicker if I used React.
The issue with React though, is that you end up with a ton of dependencies, which makes your app harder to maintain in the long-term. For example, I have to use a third-party library called react-hook-form to build forms, when I can do the same thing using plain HTML and a few AlpineJS directives if I need dynamic fields.
I'm not sure if I'll ever build an app using HTMX again but we need more people to write about it so that we can nail down patterns for quickly building server rendered reactive UIs.
This Github demo was pulled straight out of some work I was doing for the Admin of Prayershub (https://prayershub.com).
Working on this specific feature (the soundtrack uploader) though, I regularly asked myself, "what if I just used Svelte or SolidJS?"
Note, Prayershub uses a regular mix of HTMX and SolidJS, so I can pop-in SolidJS whenever I find convenient.
For other pages, I'll use full-blown SolidJS (with JSX and everything) for, like, a popup. Example: https://pasteboard.co/hY35xM7VbATG.png
Now, how I specifically embed SolidJS: it's pretty simple. I have my entrypoint files for specific pages: assets/admin-edit-book.tsx, assets/admin-edit-song.tsx, assets/single-worship.tsx, assets/worship-edit.tsx
Then I have a 30-line build script that invokes esbuild along with esbuild-plugin-solid (to compile JSX down to plain DOM code, no fancy virtual DOM) to compile the scripts into JavaScript.
I can share the build script if you'd like. It helps that SolidJS is so self-contained that it makes such a setup trivial.
https://gist.github.com/BookOfCooks/42181c4214442144c3e4d7e5...
For passing data into it I've used Inertia.js [0] and also my own data-in-page setup that's parsed and loaded into the Vue app. The app then writes the changes back out, usually into a hidden form input. The form on the page is then submitted as usual to the server, along with any other data that I need.
It's a great way for adding more complicated behaviour to an existing app.
Otherwise vanilla forms are great in React. If you did this by hand in Vue or vanilla it would also be hell.
Also, in terms of maintenance burden, the top libraries in this space are massively popular. Most people here would likely be making a good maintenance-burden decision by offloading to well-reputed teams and processes rather than in-housing their own form and validation library.
It's really not. If you walk away from your project for 7 years, your vanilla JS will just load into the web browser and still behave the same. If you walk away from your React (or other NPM-based project) for the same amount of time, you won't be able to build all your dependencies from source without spending time updating everything. Going with something like HTMX or plain JS vastly reduces your maintenance overhead.
So when you're actively maintaining something and you bring in a dependency, you're in some sense outsourcing some of that work, whether it's a colleague or an outside party maintaining that library. The specifics of who begin to matter. Is it the React team maintaining that part of the codebase? Is it a lonely author in Kyiv? Or is it you?
So what is it like to be the colleague of someone who wrote their own Tanstack Forms and successfully or unsuccessfully integrated with Zod and the like? Or did they choose to write their own runtime type validator too? That's maintenance burden.
I don’t get it. This is super easy w htmx.
Regular form elements work just fine in React; all you need to do is interrupt the onInput and onSubmit handlers and deal with the form data yourself. I've tried a handful of these form libraries and frankly they make everything way more complicated and painful than it needs to be.
I've recently, once again, given native inputs a chance in a new project. It lasted as long as I described in the first sentence. And I've been in the frontend world for 20 years. Trust me, you don't want complicated native forms.
And react-hook-form is just what you need (albeit also boilerplate-ish, so I always end up wrapping it in a simpler and smarter hook and component).
edit: Same, in a sense, for HTMX. It's ok for simple things. But eventually you may end up trying to build a house with a fork. The fork in itself is not a bad tool, sure. But you also don't need a concrete mixer with your morning toast.
Why? I've written a lot of React and I've never used this library. In fact, I rarely use any React-focused dependencies except a router, as you say every dependency has a cost especially on the client.
React works just fine without dependencies.
But for forms I honestly would recommend starting with plain React and only abstracting the things you need in your project.
I want to make the intent of this blog post extremely clear (which tragically got lost when I got deep into the writing).
I love HTMX, and I've built entire sites around it. But all over the internet, I've seen HTMX praised as this pristine, perfect, one-stop solution that makes all problems simple & easy (or at least... easier than any framework could ever do).
This is a sentiment I have not found to be true in my work, and one that even the author of HTMX has spoken out against (although I can't find the link :()
It's not a bad solution (it's actually a very good solution), but in real production sites, you will find yourself scratching your head sometimes. For most applications, I believe it will make ALMOST everything simpler (and lighter) than traditional SPA frameworks.
But for some "parts" of it, it is a little trickier; do read "When Should You Use Hypermedia?" [1].
In the next blog post (where we'll be implementing the "REAL" killer features), I hope to demonstrate that "yes, HTMX can do this, but it's not all sunshine & rainbows."
---
On a completely separate note, one may ask, then, "why use HTMX?" Personally, for me, it's not even about the features of HTMX. It's actually all about rendering HTML in the backend with something like Templ [2] (or any type-safe html templating language).
With Templ (or any type-safe templating language), I get to render UI from the server in a type-safe language (Golang) accessing properties that I KNOW exist in my data model. As in, the application literally won't compile & run if I reference a property in the UI that doesn't exist or is of the incorrect type.
You don't get that with a middle-man API communication layer maintained between frontend and backend.
All I need now is reactivity, and htmx was the answer. Hope you understand!
[1] https://htmx.org/essays/when-to-use-hypermedia/#if-your-ui-h...
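For a flavour of that, here's a hedged sketch of a templ component, written from memory of templ's syntax (names are hypothetical):

    // soundtrack.templ — compiled by `templ generate` into type-checked Go.
    package templates

    import "strconv"

    templ SoundtrackRow(title string, plays int) {
        <tr>
            <td>{ title }</td>
            <!-- misspell `plays` or change its type in the model, and
                 `templ generate` / `go build` fails at compile time -->
            <td>{ strconv.Itoa(plays) }</td>
        </tr>
    }

Compare that with a JSON API between front and back, where the same typo usually only surfaces at runtime.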
Maybe it's more accurate to say, "HTMX is frontend for backend developers who want a SPA."
IF you used a pure SPA approach with client-side validation for each step, and server-side validation only done at the last step, I believe it would be simpler.
However, let's say you introduce anything slightly more complicated. Like say you do server-side validation with each step, now you have to somehow persist that "validated data". In that case, the implementation in the article is indeed simpler (or at least not as complicated as a traditional SPA).
I've been programming on personal projects with html-form (my own lib that is a radically pared-down version of HTMX with a focus on using native form controls, which makes it so you don't need hydration like you do in HTMX).
So, some thoughts on your problem.
One note. Many people mention OOB. But there is also hx-select, which can be quite handy.
I can think of multiple possible approaches, depending on your needs.
Use eventing. On the server, create an event that will be fired on the front end by HTMX. When it fires, have some vanilla JS update the class marking your location in the form (see the sketch at the end of this comment).
Send all the forms down and make them visible one at a time, based on CSS, on whether an input element is checked, or using JS. That would be pretty straightforward.
If the form passes, then replace all the body contents or the main element. If it doesn't, just update the form and add any user information. Or send an event back that causes a popup telling the user what their issue is. Or send an event that causes some text to show.
Also, you can use CSS and native HTML form validation to show text underneath an input (or above it) telling the user what the input requires. Modern CSS has a pseudo-selector that only turns on when the input is invalid[1].
This is simple enough that a straight up MPA could be used with page animations between pages using view transitions[2].
Similar to one of the solutions above, but using data-action style of programming.[3]
But, I usually try to not over complicate things.
[1]: https://developer.mozilla.org/en-US/docs/Web/CSS/:user-inval...
[2]: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_view_tr...
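For the eventing option above: htmx exposes this via the HX-Trigger response header, so a hedged Go sketch (event and element names hypothetical) looks like:

    package web

    import "net/http"

    // completeStep returns the next step's HTML and, via the HX-Trigger
    // response header, fires a "step-done" event in the browser. A few
    // lines of vanilla JS can listen for it and move the progress marker:
    //
    //   document.body.addEventListener("step-done", function () {
    //     // update the class on the current stepper position
    //   });
    func completeStep(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("HX-Trigger", "step-done")
        w.Write([]byte(`<fieldset id="step"> ... step 2 ... </fieldset>`))
    }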
From reading this, I’ve decided I will never attempt to use it again. All I could think was, just use Go’s HTML templating. What is HTMX adding, really?
HTMX here is making it so the page works without doing a full HTTP form submission + page load for each stage of the "wizard". Instead, you write some HTMX stuff in the page that submits the form asynchronously, the server sends back the HTML to replace part of the page to move you to the next step in the "wizard", and then HTMX replaces the relevant DOM nodes to make that so.
Go's templating is completely unrelated to any of this happening on the front end. It's just generating some HTML, both the "whole page loaded by the browser normally" case and the "here's the new widget state" fragment, and so obviously:
> just use Go’s HTML templating.
is incorrect.
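To make the division of labor concrete, a hedged sketch (paths and ids hypothetical): the Go template just produces HTML; the hx-* attributes in that HTML are what htmx acts on.

    package web

    import (
        "html/template"
        "net/http"
    )

    var stepTmpl = template.Must(template.New("step").Parse(`
    <div id="wizard">
      <!-- hx-post submits this form via XHR instead of a page load;
           hx-target/hx-swap tell htmx which DOM node to replace
           with the HTML fragment the server sends back. -->
      <form hx-post="/wizard/next" hx-target="#wizard" hx-swap="outerHTML">
        <input name="title">
        <button type="submit">Next</button>
      </form>
    </div>`))

    // nextStep renders only the replacement fragment, not a full page.
    func nextStep(w http.ResponseWriter, r *http.Request) {
        stepTmpl.Execute(w, nil) // in reality: the next step's template
    }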
This allows for some nicely responsive interactions, but introduces complexity.
I'm not the previous poster, but it's a fair question whether the maybe faster responses justify the complexity. In many cases it probably would not.
(Actually, I suspect it's rare; if you know how to make partial page responses fast for HTMX, you know how to make full page responses fast and don't necessarily need HTMX, up until your page just gets too large overall.)
The general problem with HTMX is that, by default, the page state, as a function of the initial page plus the accumulated user's interactions with the page, live only in the user's browser tab. This can seem fine for a while, but opens up some fairly fat edge cases (this article covers some of them). There are ways to handle this, but it's additional complexity and work. Maybe someone has or will create an HTMX-friendly server-side templating framework to take the grunt work out of it, but you still have to wonder if one of the numerous existing full page templating mechanisms might not still be superior, overall.
Hasn't jQuery been able to do this since the early 2000s? Even vanilla JS has had this functionality for decades.
fetch() has no issues returning server-side rendered HTML and has a lot more options and freedom than what HTMX provides.
I do not think HTMX is a bad library by any means. I just can't see what it buys over vanilla JS.
I remember, working with a co-worker, we planned out the process (a step-by-step on the application like this post) and it made sense to me - but this journey was much harder for my co-worker.
Why is this? Simply because he was familiar with the MVC pattern of sending JSON data back and forth and getting the frontend to update and render it, etc. The idea of HTML flying over the wire, with htmx as the behaviour (inside HTML tags), was just too much.
For me, I always preferred the old-school way of just letting the server side generate the HTML. All htmx does is add extra functionality.
I tried hard to explain that we are sending HTML back, and to break things down one at a time, but each new task left him scratching his head.
In the end, our website had somewhere around 20-50 lines of JavaScript! A much smaller footprint than the 400+ lines in our previous project (and that's being generous). Sure, our server-side code was larger, but it was all organised into View/PartialView files. To me, it made really good sense.
In the end, I don't think I won my co-worker over on htmx. As for another co-worker, who had a chance to build a new project with htmx, he decided on some client-side JavaScript tool instead. I don't think I got a legit answer as to why.
With all this above, I learned that some (perhaps most... perhaps all) struggle to adapt to htmx based on their many years of building websites a particular way, with popular tools and JavaScript libraries. Overall, htmx does not really change anything: you are still building a website. If anything, htmx just adds an additional layer to make the website really work.
Yoda's words now have new meaning: "No! No different. Only different in your mind. You must unlearn what you have learned."
For some it's just not happening. I guess Luke Skywalker really shows his willpower, being able to adapt to htmx more easily than others. :-)
We are now the minority, and they are the norm.
A team that decides to shift towards this approach to development has to get buy-in from the designers as well. It's not just the devs who have to retrain.
This is true. 100%
The difference for me and my AppDev career history is -- I never worked with a team of dedicated frontend developers. We are primarily backend developers who are frontend devs as well (though we can admit it is secondary to our backend skills)
Personally, the change for us was placing our html on the server side. So, to me, styling is not a problem and easy to test. It should be, with some training, easy for a dedicated frontend developer to jump in as well... though we might have to shuffle things around with their tools, to gel nicely with the backend team.
If anything, I think this transition would keep the two departments closer together - communication is needed especially for htmx webapps.
I think it can be difficult to win over other backend developers with htmx, as my original post suggests. Adding a frontend layer as well... it is unlikely htmx will be taken seriously when the majority want to stick with what they know.
Building things using standards and not fighting the paradigm with a new paradigm. What's great about HTMX is that it fits alongside the standards in a way that SPA frameworks generally don't.
But focusing on the web itself and everything it's capable of without a framework means you can easily move off of HTMX if need be.
100%
At a basic level - it is all html, css, javascript and a server side language at the end of the day. Whether we are talking today or back in the early 00's.
For a few years now, we've added Node.js, TypeScript, React, etc. on top of it. Personally, while I understand the purpose of using such tools for complicated web development, I still believe good websites can be created without them. It keeps things simple, small in size, etc.
Of course, a few years before that the push was angular or knockoutjs. Before that the push was jQuery, etc.
For the future: let's say in the next 15 years, while I still believe that HTML, CSS and JavaScript will remain, I do think React, like Angular, will be replaced by something else.
Honestly, I think it's just a matter of time before WASM takes over, or the evolution of such technology. Personally, I have toyed with WASM builds in compiled languages and think it will win for web development on speed, performance, and lack of fluff. However, we are not there yet.
For example, I had to build an internal web application for staff. It has a number of dropdowns and text fields, etc. I experimented with implementing it in (something like) an immediate-mode UI, such as IMGUI in Go. While the results were great, it reached a dead end, not because of WASM or the language, but because of a lack of UI features. I needed to include OpenStreetMap, which is not supported. I had to bite the bullet and accept writing it as a typical website.
I went with htmx + leafletjs in the end. Again, it worked out well.
That's honestly the main reason. It's so you can build all three channels the same(ish) way
I think a lot of the arguments over HTMX come down to this difference. The people that love it see how much better it is for their use case than something like React, while the critics are finding it can't replace bigger frameworks for more demanding projects.
(Here's an example interface made with HTMX. IMO React would have been overkill for this compared to how simple it was with HTMX. https://www.bulletyn.co )
return pox.Templ(http.StatusOK, templates.AlertError("Name cannot be empty")), nil
Oof, an HTTP 200 OK response with a body that says the request actually was not OK. I like htmx, but this is probably the weakest part of it.
htmx is supposed to let you write semantic HTML, but it's obviously not semantic HTML/HTTP to respond HTTP 200 to incorrect user input. But I think OP is doing this because if they had responded HTTP 400 - Bad Request, htmx would have thrown away the response body by default.[0]
[0] https://htmx.org/docs/#modifying_swapping_behavior_with_even...
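For completeness, the escape hatch that linked section describes is the htmx:beforeSwap event; a minimal sketch, here wrapped in a Go constant you could render into the page (422 is the status used in htmx's own example):

    package web

    // acceptErrorBodies is a script you can serve with the page so htmx
    // swaps 422 responses (validation errors) instead of discarding them.
    // htmx:beforeSwap, detail.shouldSwap and detail.isError are all
    // documented in the htmx docs linked above.
    const acceptErrorBodies = `
    <script>
      document.body.addEventListener("htmx:beforeSwap", function (evt) {
        if (evt.detail.xhr.status === 422) {
          evt.detail.shouldSwap = true; // render the error fragment
          evt.detail.isError = false;   // don't log it as an error
        }
      });
    </script>`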
It's perfectly semantic HTML, it just doesn't have to be semantic HTTP (as in, you don't have to push semantics to the HTTP level; keep it at the HTML level). Thinking that HTTP status codes should be semantically meaningful means you're still thinking that htmx endpoints are, or must be, API endpoints. That's a mistake.
Another comment asserted HTMX can handle this; it just needs to be configured to do so. If that is the case, then I don't see an actual issue.
You can, but it feels like you're sort of fighting the htmx library at that point. I do it, as it's the least bad option, but I generally find footguns with violating HTTP conventions (e.g., returning HTTP 200 for a bad request) or going outside the mainstream of a library (telling htmx that HTTP 400 can have a meaningful response body).
htmx supports returning other responses, and you can handle their behaviour if you choose to.
At a basic level, validation/checks should be done on the server side. If there is a problem, you can return HTML. I don't see what the big deal is.
On another note, you can call a JavaScript function before and after some occurrence, like doing a "swap".
You can check the status and go a different route if you so wish.
These are just a couple of options, and they don't add much complication to the overall htmx design. You still end up with much less JavaScript code this way.
This is not a 'part of' htmx. Htmx doesn't prescribe how you handle errors. You can easily respond with 400 or some other appropriate status code on error, and plug in to an htmx hook on the client side to handle it appropriately.
Eg here's how I handle form validation errors with a 422 response status: https://dev.to/yawaramin/handling-form-errors-in-htmx-3ncg
What's "oof" about it? The application layer should not inject error codes into the transport layer which is what HTTP is in this case.
Do you also think that Apache/Nginx should be injecting codes into IP packets?
If your application injects codes into the HTTP layer, how on earth does the client know whether the error originated at the application or at the reverse proxy/webserver?
Huh? HTTP is an application layer protocol. It's perfectly acceptable for the application to return a non-200 status code when the request is invalid and can't be processed. There's a widely accepted status code for that exact scenario: 400 Bad Request. It informs the client that there was something wrong with their request, and in well-designed APIs, reading the response body would tell them the reason why. It would be wasteful for the client to always read the response and parse structured data to decide whether the request was successful (at the application level) or not. Status codes allow us to do that.
That said, I've seen arguments for and against this practice, as sibling comments mention, and ultimately consistency and documentation are more important than semantics.
The reason this line is blurry nowadays is because in the beginning web servers didn't contain complex logic. The web server was the application. Then came CGI scripts and application servers, and suddenly the application itself was making protocol-level decisions. The way this is typically structured in large applications is to have protocol-level abstractions that translate app-level errors into HTTP errors. But in small applications it's acceptable, though unsightly, to have HTTP logic mixed with business logic.
> Do you also think that Apache/Nginx should be injecting codes into IP packets?
Web servers do speak TCP/IP, so I'm not sure what your point is. Usually this is not something regular web apps need to be concerned with, but it's possible and sometimes desirable to introduce logic at the TCP or IP layer. There are proxy tools that work at both layer 4 and layer 7.
> If your application injects codes into the HTTP layer, how on earth does the client know whether the error originated at the application or at the reverse proxy/webserver?
By the status code, error message, and headers. An application would typically never return 502 Bad Gateway, a 301/302 redirect, or set headers like Cache-Control. By that same token, a reverse proxy/webserver would typically never override a 404 with a 200, or inject JSON error messages in the payload.
The application ultimately decides the Content-Type of the response, which Content-Types it supports, and which headers it expects, so why shouldn't it also decide which status codes to return and which response headers to set? A gateway between it and the user can change or enhance this protocol, and specific gateways could be extracted to handle common things like authn/authz and load balancing, but the frontend gateway shouldn't override the message the application is sending (in typical circumstances). Both things can coexist with different responsibilities while speaking the same protocol. HTTP is flexible enough to support that.
I'm curious, though: if you treat HTTP as the transport layer, what protocol does your application speak to the gateway? Is there some translation gateway that translates application-level semantics into HTTP ones?
> Huh? HTTP is an application layer protocol.
I want to emphasise that "in this case" bit.
HTTP is an application layer protocol when the application in question is a webserver and nothing else.
In the case of REST, HTTP is simply a transport protocol. It is not necessary to use HTTP as the transport for RESTful applications. It's common, convention even, but not required.
> if you treat HTTP as the transport layer, what protocol does your application speak to the gateway?
WSGI, maybe? Sure, you can emit status codes there too, but it will be a different protocol you are talking over, not HTTP.
I've seen gRPC gateways for HTTP REST endpoints too.
> Is there some translation gateway that translates application-level semantics into HTTP ones?
I don't think we should be translating application status codes into HTTP status codes. I mean, sure, I've done it myself plenty of times, but it is a mixing of layers and a mixing of concerns.
The fact is, HTTP semantics are defined for (and in the context of) a webserver not an application server. That our application server is chatty with HTTP does not place it in the running context of a webserver.
The semantics of HTTP status codes makes absolutely no sense when emitted by an application.
You might argue that one of them (or maybe two, if we're being generous), such as "400 Bad Request", should be emitted by the application if (for example) a parameter is missing, but even in that case it makes more sense for the application to send an error code/error message so that more information can be given (such as which parameter is missing/invalid, etc).
If you're sending "400" status code for a missing parameter, how will the client know whether the HTTP request was malformed or whether the application input was mangled?
> HTTP is an application layer protocol when the application in question is a webserver and nothing else.
I haven't heard that definition before, and don't really agree with it.
HTTP is the protocol web servers use to communicate with web clients. Whether the server is serving static files or dynamic content based on complex logic doesn't change this.
> In the case of REST, HTTP is simply a transport protocol. It is not necessary to use HTTP as the transport for RESTful applications. It's common, convention even, but not required.
That's true, but I don't see any practical benefit of this distinction. REST concepts map cleanly to HTTP semantics, and practically all REST deployments use HTTP.
> WSGI, maybe?
I guess so, but WSGI is an abstraction useful for interpreted languages and Python specifically. It was a solution to standardize the deployment of a growing number of web frameworks, and to address the lack of a production-ready HTTP server in Python itself. Other languages and ecosystems don't need this abstraction. It would be like trying to make Java servlets universal. Some approaches are a good fit for some ecosystems, but not for others.
As I mentioned in my previous post, the way this is typically handled in, say, a Go web application, is by having an HTTP layer that acts as an intermediary between the protocol and the application. This way your business logic can remain free from HTTP-specific tasks like serialization, parsing, validation, etc. But if the application is only ever meant to be exposed via HTTP, then there's no harm in avoiding the abstraction, and having it speak HTTP directly. This might not be a good idea for testing and maintainability, but it's fine for small applications.
> I've seen gRPC gateways for HTTP REST endpoints too.
That's different. gRPC builds on top of HTTP, and uses a fundamentally different payload and request mechanism. It requires supported clients to even use it, which is why gateways are useful. But REST over HTTP is still plain HTTP. Clients don't need to be aware that they're talking to a REST endpoint, and REST serves as usage documentation more than anything else.
> The semantics of HTTP status codes makes absolutely no sense when emitted by an application.
That depends on the application. If an HTTP endpoint wraps an application call to create a user, and the caller doesn't provide a user name, the application can return an error, which the HTTP endpoint can translate to a 400 status code, including the error message in the payload. OR the HTTP endpoint can do some validation upfront, and immediately return a 400.
I agree with you that it wouldn't make sense for the application code to return HTTP status codes, but not because it's wrong semantically. I think it's wrong from a design standpoint (separation of concerns). HTTP semantics are limited at describing all application concepts, but the ones that are there map pretty cleanly, especially when REST is used.
> If you're sending "400" status code for a missing parameter, how will the client know whether the HTTP request was malformed or whether the application input was mangled?
Again, by reading the response body. Just because HTTP status codes don't describe all application errors, doesn't mean that it's a good idea to abandon them entirely, and always return 200. If the client receives a 400 response, then they can immediately know that something went wrong with the request, and they should inspect the response body for details. Nothing stops the application from returning custom error codes internally that uniquely identifies the actual reason for the failure, if the clients find this useful.
If the request was malformed, then a 400 response would make sense. If the application input was mangled, then the status code will depend on what happened. Was the mangled data part of the request? Then it's still a 400. Was the data mangled during endpoint or application processing? Then a 5xx response would be more suitable.
There are no hard rules for this, and many, many APIs are poorly implemented. But this doesn't mean that applications shouldn't take advantage of the full breadth of HTTP concepts to implement user and computer-friendly interfaces.
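As a hedged illustration of that separation (all names hypothetical): the application returns domain errors, and only the HTTP handler maps them to status codes.

    package web

    import (
        "errors"
        "fmt"
        "net/http"
    )

    // The application layer knows nothing about HTTP...
    var errNoUserName = errors.New("user name is required")

    func createUser(name string) error {
        if name == "" {
            return errNoUserName
        }
        // ...persist the user...
        return nil
    }

    // ...and the HTTP endpoint translates its errors into status codes,
    // putting the human-readable reason in the body.
    func createUserHandler(w http.ResponseWriter, r *http.Request) {
        if err := createUser(r.PostFormValue("name")); err != nil {
            if errors.Is(err, errNoUserName) {
                http.Error(w, err.Error(), http.StatusBadRequest) // 400
                return
            }
            http.Error(w, "internal error", http.StatusInternalServerError)
            return
        }
        fmt.Fprintln(w, "created")
    }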
I don't follow. What's the boundary between transport layer and application layer in a Go web app?
The Go web app is responsible for specifying both the HTTP status code and the response body.
What's the purpose of different HTTP 4xx errors if they're not supposed to come from the application?
>Do you also think that Apache/Nginx should be injecting codes into IP packets?
No, because Apache/Nginx is not responsible for populating IP datagrams, whereas Go web apps are responsible for generating the entire HTTP response.
>If your application injects codes into the HTTP layer, how on earth does the client know whether the error originated at the application or at the reverse proxy/webserver?
It can't.
What design do you have in mind where a client gets an HTTP 404 and can distinguish between the web server and the application server? Are you saying a "not found" at the application layer should return HTTP 200 and the client has to check the HTTP body for the real error code?
Just because it's the same app preparing both the transport and the application, you think they're the same layer of comms?
> Are you saying a "not found" at the application layer should return HTTP 200 and the client has to check the HTTP body for the real error code?
The "not found" is not an application level error, even if, in your backend, you mixed them all up into the same function.
Yes, that's what I think.
I don't see any other way of dividing it, which is why I asked what you think the boundary is otherwise.
Again: Can you explain what you think the boundary is between the application layer and transport layer in a Go web app?
>The "not found" is not an application level error, even if, in your backend, you mixed them all up into the same function.
If the request is something like /user/detail/12345 and user 12345 doesn't exist, what should the response be?
> Yes, that's what I think.
If you don't know how communications composed of multiple layers work, I'm afraid I can't really help with that understanding in a comment section on a forum.
I mean, for example, you can use git over ssh or git over https, but no one thinks that the git communications + https (or git comms + ssh) is a single layer.
> Again: Can you explain what you think the boundary is between the application layer and transport layer in a Go web app?
A single Go web app also does SMTP and TCP[1]. Do you also think that SMTP and TCP are on the same comms layer(s) as HTTP and REST?
Many Go apps also support SSL. Does that mean it is okay for the HTTP webserver to put HTTP-specific content into the SSL layer?
---------------------
[1] Maybe you have a Go webapp that never needs to send confirmation emails, but when I last did that in Go, the Go app needed to reach into the TCP stack (specifically to set socket options) in order to make SMTP work.
> If the request is something like /user/detail/12345 and user 12345 doesn't exist, what should the response be?
This is quite an interesting question. If we consider that webservers host page-based applications, the answer would be that this is indeed an application-level concern and we should return a 404, since the resource is not found.
SPAs without SSR may have muddled this, since the application is then about serving a client-side application. We could expect a 200.
It is not as straightforward as it seems perhaps...
I'm not asking you to explain protocol stacks, and I feel like that's obvious.
I've been open to your viewpoint, and I've asked you to clarify your position. Instead you just keep mocking me and feigning surprise that I hold a pretty mainstream view.
At this point, I'm left to assume you're either trolling or you have a viewpoint that can't bear scrutiny, so I'll stop engaging with you.
<div id="foo" hx-swap-oob="true">yes</div>
<div id="bar" hx-swap-oob="true">possibly</div>
will make HTMX update both the #foo div and the #bar div already on the page.

I get the premise of HTMX and when and why to use it. It's not a solution to everything; however, it is a blessing for backend developers who want to work on the frontend.
-> A bit of backstory
For my project Daestro[0], which is a bit complex (and big), I chose Rust for the backend and Svelte (with SvelteKit) as the frontend SPA. This was my first time working with both. After years of working with Django, I wanted to try a statically typed language; after some research and trial, I chose Rust. SvelteKit was the obvious choice because it made sense to me compared to other frameworks, and it was super easy to pick up.
After working with SvelteKit for a year, I realised I'd been spending a lot of time doing the same things:
1. You create the API on the backend
2. then you create a Zod schema on the frontend for form validation
3. then create +page.ts to initialize the form
4. in +page.svelte you create the actual form and validate it there with Zod before sending it to the server
Hopping over two code bases just for a simple form, and Daestro has a lot of forms. I was just exhausted with this. Then HTMX started to get a lot of traction. I was watching it from a distance, but having worked with Django and its templates, I was dismissive of it and thought having a separate frontend was the best approach.
-> Why I'm leaning towards HTMX now?
- Askama (Rust crate) is a template engine which is compile-time checked
- Askama supports block fragments[1], i.e. you can render a certain part (block) of a template, a plus for HTMX usage
- Askama's macros almost make me not miss Svelte's components
- Rust has an amazing type system; now you can just use it, no need to replicate types in TypeScript
- same codebase, no more hopping
- only one binary to deploy (currently Daestro has 3 separate deployments)
-> My rules for using HTMX
You must self-impose a set of rules on how you want to use HTMX, otherwise things can get out of hand and instead of solving a problem you'll create bigger ones. These are my rules:
- Keep your site a multi-page application and sprinkle on some HTMX to make it SPA-like on a per-page basis
- make use of the HX-Target header to send only the block fragment that is required by HTMX (very easy with Askama)
- do not create routes with partial page rendering; instead, a route must render a complete page, and then use block fragments to render only what is being asked for in HX-Target
- Do not compromise on security[2]
[0]: https://daestro.com
[1]: https://askama.readthedocs.io/en/stable/template_syntax.html...
[2]: https://htmx.org/docs/#security
At that point, personally I think it'd be easier to use Preact with a no-build workflow for those bits of the app that have a lot of contained logic themselves, and don't necessarily require a round-trip to the server.
I wouldn't use HTMX for that specific use case.
But yeah it's great to see people sharing their approaches!
can’t wait to steal this uploader for my https://harcstack.org project
[HTMX, Air, Red and Cro]
[0] https://data-star.dev