/s?
Please no - it is so much nicer and easier, when using a website with poor UI/filtering capabilities, to look at the network requests tab in the browser devtools and get JSON output which you can work with however you want locally. Compare that with getting HTML and having to strip the presentation fluff from it, only to discover it doesn't include the fields you want anyway, because it assumes it will only be used for that specific table UI... Plus these days internet while out and about isn't necessarily fast, and wasting bandwidth on UI which could be defined once in JS and cached is annoying.
It sounds like you're complaining that a server isn't shipping bits that it knows the client isn't going to use?
> wasting bandwidth for UI which could be defined once in JS and cached is annoying
How much smaller is the data encoded as JSON than the same data encoded as an HTML table? Particularly if compression is enabled?
ETA: And even more so, if the JSON has fields which the client is just throwing away anyway?
What seems wasteful to me is to have the server spend CPU cycles rendering data into JSON, only to have the front-end decode from JSON into internal JS representation, then back into HTML (which is then decoded back into an internal browser representation before being painted on the screen). Seems better to just render it into HTML on the server side, if you know what it's going to look like.
The main advantage of using JSON would be to allow non-HTML-based clients to use the same API endpoints. But with everyone using electron these days, that's less of an issue.
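The server-side path described above really can be plain string templating. A minimal sketch (the row shape and the `escapeHtml` helper are made up for illustration, not anyone's actual code):

```javascript
// Render data straight into an HTML table on the server: no JSON encode on
// the server, no JSON decode + re-render step on the client.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function renderTable(rows) {
  const body = rows
    .map(r => `<tr><td>${escapeHtml(r.name)}</td><td>${escapeHtml(r.qty)}</td></tr>`)
    .join("");
  return `<table><thead><tr><th>Name</th><th>Qty</th></tr></thead><tbody>${body}</tbody></table>`;
}
```

The trade-off is exactly the one discussed above: the output is ready to paint, but a non-HTML client gets nothing reusable from it.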
Well, as with many things in life, it depends. If the cells are just text, there is not much difference. But if the cells are components (for example, popovers or custom buttons that redirect to other parts of the site), the savings from not shipping all those components per cell, and instead rendering them on the frontend, start to become noticeable.
Sure, tell me more. I always enjoy a cool story.
Well put. I think the main issue is that we have a generation of "front end engineers" who have only ever worked with javascript apps. They have no experience of how easy it is to write html and send it via a server. The same html works now, worked 20 years ago, and will work 20 years from now.
Progressive enhancement, forms with fields depending on other fields, real time updates, automated retries when the backend is down, advanced date selectors, maps with any kind of interactivity.
Any of the above is an order of magnitude harder to do backend-only vs backend API + any frontend.
And why would you even want progressive enhancement if you can just send the proper full version right away, without waiting for MBs of JS to "hydrate" and "enhance" the page?
Progressive enhancement is often done to mask the fact that fetching the data takes an unacceptable amount of time; otherwise no effort would be made to mask it.
Your plan is to take that same unacceptable time, and add the server side render-to-html time on top of it, and that will improve it via...
Why not validate on the server and return a response or error as HTML?
I’m not trying to argue in bad faith here. I know client side validation is necessary in some cases, but IMO it’s only an augmentation for UX purposes on top of server side validation.
If your form has files you'd want to at least check it before sending data unnecessarily to the server.
Also it's always better to have a tighter feedback loop. That is really the main reason why there's validation on the frontend and backend, not just the latter.
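That server-validates-and-answers-in-HTML approach can be sketched as a pure function (the field, limits, and class names here are hypothetical, not from any comment above):

```javascript
// Validate on the server and return an HTML fragment either way: the error
// fragment on failure, the confirmation fragment on success. The client just
// swaps whatever comes back into the page.
function handleAgeField(value) {
  const age = Number(value);
  if (!Number.isInteger(age) || age < 0 || age > 150) {
    return {
      status: 422,
      body: `<p class="error">Age must be a whole number between 0 and 150.</p>`,
    };
  }
  return { status: 200, body: `<p class="ok">Age saved: ${age}</p>` };
}
```

Client-side checks then remain what the parent calls them: a UX augmentation for a tighter feedback loop, not the source of truth.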
HTML has come a long way since the bad old days. General constraints can be applied directly to the input element. It still makes sense to add additional explanations for invalid input with JS, though.
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
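For reference, the built-in constraint attributes that page documents look like this (a hypothetical signup form, shown only to illustrate the attributes):

```html
<!-- Built-in constraint validation: the browser blocks submission and
     shows a validation message with no JavaScript at all. -->
<form action="/signup" method="post">
  <input type="email" name="email" required>
  <input type="text" name="username" required minlength="3" maxlength="20" pattern="[a-z0-9_]+">
  <input type="number" name="age" min="13" max="120">
  <button>Sign up</button>
</form>
```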
The same is true for any JS "validation" and I was using common terminology.
From a user point of view as long as you keep the feedback loop short what difference can they see?
we've come full circle <3
Then you just gotta write the same HTML-generating code on the server, don't you? It looks like just a question of whether the code lives on the frontend or the backend, and in that case I'd prefer it to be on the frontend side.
Having the server return plain JSON means the APIs can be reused across products effortlessly and also means that all style changes can be done in the same codebase.
I get reminded of how important this is every time I get to work on an old project that has APIs return HTML that is then being inserted into the DOM by something like jQuery. Figuring it out and updating it is typically a huge mess.
How many products actually share the same server backend? Do they all organise the same data on the same pages? If not, then you already need per-product APIs to avoid making O(N) fetches from the client side. Having your backend be aware of what is being presented is rarely a bad thing.
Edit: This could still be way simpler than the "hydration" approach which is so popular.
It is a typical CRUD app (like most of them). I like the idea behind HTMX or datastar. I built a prototype in it as well. But then I pivoted to solidjs.
Some reasons for this:
1. Single backend API for web, mobile and any other platform
2. Some UI patterns which are suited for JSON REST APIs [#patterns]
3. Ability to support partial functionality offline.
#patterns
1. Showing a subset of columns in a table, but with an option to view & edit each record in detail. In my case a dialog opens with an editable checkbox, so it doubles up as both "view" and "edit". These actions render just another view of existing data; a round-trip to the server is not desirable IMO.
2. Filtering and sorting on columns. An HTML-based solution becomes tedious if some of the cells are not plain text but a complex component like a <div>. Sorting the JSON and re-rendering is a much better experience IMO.
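The sorting half of that pattern is only a few lines once the records are client-side JSON; a sketch (the record shape and function name are made up):

```javascript
// Sort already-fetched records locally by one field, returning a new array
// so the original fetch result stays untouched. Re-render rows afterwards;
// no server round-trip per click on a column header.
function sortRecords(records, key, descending = false) {
  const sorted = [...records].sort((a, b) =>
    a[key] < b[key] ? -1 : a[key] > b[key] ? 1 : 0
  );
  return descending ? sorted.reverse() : sorted;
}
```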
Edit: About solidjs & datastar
1. It has fine-grained reactivity, which I find appealing, and it uses signals, which hopefully will be standardized.
2. The compiled bundle size is much smaller than I expected: I have 24KB of compressed JS which serves quite a lot of views and pages.
3. Datastar is amazing and I have added it in my toolbox.
I would if I was working on anything that was just a page of text and shitty flat forms like these examples always assume.
What if the client wants to render the output of an API in different ways depending on what screen is visible? If the server API outputs JSON, this is trivial. If it outputs HTML the client is stuck into rendering what the server gives it.
Far better to have a tool that lets you define small, reusable, composable building blocks and let your team just use those.
I'm looking at Alpine.js for that last 15%.
For a lot of internal and personal projects I use a combination of custom HTML elements with XSLT: https://lindseymysse.com/x-s-l-t/
Still requires Javascript, but makes writing HTML a lot more fun.
How would I do the same with plain HTML?
Server-side rendering of HTML is really fast if an efficient compiled language is used. If you are using Node/PHP/Python/Ruby on the server, you will feel the pain. Server rendered HTML is also scraper friendly because the scraper then doesn't need to use a virtual browser.
Once you start bolting on all this stuff to HTML, congratulations, you have built a web framework.
I am not advocating that everyone should start using React. But HTML forms are severely underpowered, and you cannot escape JavaScript at some point. I would love it if forms could consume and POST JSON data, that would make this all a lot easier in a declarative manner.
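Until forms can consume and POST JSON natively, the usual workaround is a small serializer plus fetch; a sketch (the endpoint URL is hypothetical):

```javascript
// Turn [key, value] form entries into a JSON string. In the browser you
// would feed this new FormData(formElement).entries(); a plain array of
// pairs works the same, which keeps the function testable outside the DOM.
function entriesToJson(entries) {
  const obj = {};
  for (const [key, value] of entries) obj[key] = value;
  return JSON.stringify(obj);
}

// Browser-side usage (not run here):
// fetch("/api/form", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: entriesToJson(new FormData(form).entries()),
// });
```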
form:has(.conditional-checkbox:not(:checked)) .optional-part-of-form {
display: none;
}
I’m not saying it’s better (it’s not). Just saying there’s a lot of space between “just HTML” and “a web framework”. It’s worth considering these other options instead of going “full React” from the get-go.

If this happens, it sounds like a symptom of API-first endpoints that return 200 with a JSON error field.
> I don't think having the server render the table HTML and you injecting it is a good idea.
HTMX, Alpine AJAX and other similar progressive web frameworks work exactly this way, as do server-side rendered React.js and friends.
> What if the server has downtime, and returns a 200 response but with a "maintenance mode" page
If the server is in maintenance mode, it should not display the web application/web page, but instead show a "We're in maintenance mode" message.
> Having it render only on a successful response and correct parsing of JSON data is more reliable.
You're comparing making a simple web page with either no secondary calls or a single secondary call using a few lines of code to writing a client side web application. It's a bit like comparing a car with a bicycle.
> You also start complicating things in terms of separation of concerns. You potentially have to adapt any styling considerations in your API, for instance if the table needs a class adding to it. Overall, not a good idea, imho.
This is certainly an opinion, and that may work for you, but HTMX and similar tools actually make much of my life easier rather than harder, since all that styling, etc. can live alongside my server logic instead of in an entirely separate second application.
renegat0x0•1h ago
- my server was running on a Raspberry Pi; heavy responses are now returned via JSON (REST-like data), which lets me offload the rendering burden to clients
- javascript makes the page more responsive. Parts can be loaded via separate calls, which divides something that would otherwise be a monolith into a series of small calls
- In my experience nearly all simple binary decisions, "use only X" or "use only Y", are bad
gwd•1h ago
He's not saying don't use Javascript; he's saying that instead of having those small calls shipping JSON which is then converted into HTML, just have those small calls ship HTML to begin with.
It's surprising how far you can get with something like HTMX.
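For example, with HTMX the small call and the swap are declared in attributes, and the endpoint just returns an HTML fragment (the URL and ids here are made up):

```html
<!-- Clicking fetches /contacts?page=2 and swaps the returned HTML fragment
     into #contact-rows. No JSON decoding or client-side templating step. -->
<tbody id="contact-rows">...</tbody>
<button hx-get="/contacts?page=2" hx-target="#contact-rows" hx-swap="innerHTML">
  Next page
</button>
```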
naasking•31m ago
This is often at odds with page responsiveness, and can increase server load considerably.