I recently initiated the backmigration. My approach thus far has been to pull the "administrative" part out into Rails to benefit from all the useful conventions there, but keep the "business services" in JS or Python and have the two communicate. Best of both worlds, and the potential of all of rubygems, npm and pypi combined.
BTW, I’m also on a similar trajectory using a mix of Java, Python and Node.js to solve different problems. It has been a very pleasant experience compared to what it would have been had I been bullish on just one of these languages and platforms.
Having just hit severe scaling issues with a Python service, I’m inclined to only write my servers in Go or Rust anymore. It’s only a bit harder, and you get something that can grow with you.
Convention over configuration and less code is fine, but unfortunately Rails is not a great example of it IMO. The "rails" are not strong enough; it's just too loosey goosey and it doesn't give you much confidence that you're doing it "the right way". The docs don't help much either, partly because of the history of breaking changes over releases. And the Ruby language also doesn't help because of the prolific globals/overrides and implicitness which makes for "magic".
So you're encouraged/forced to write exhausting tests for the same (normally dumb CRUD) code patterns over and over and over again. Effectively you're testing the framework more so than your own "business logic", because most of the time there barely is any extra logic to test.
So I'm also surprised it gained the reputation it has.
Although that's really selling it short - it's so much more than that! But in the context of this conversation, it's a good place to look.
Since most websites will never scale past the limitations of these frameworks, the productivity gains usually make this the right bet to make.
How many CVEs have been reported against your custom code, say the middleware you wrote to take a request's params and make them available to use in an SQL query?
Of course, if you are good at this and have a lot of experience, then even using an LLM I am sure your code is more than fine. But on average I think it is safe to say that any LLM-generated middleware or library code that deals with HTTP requests and makes the information available to business logic probably has a lot of bugs (some subtle, some very visible).
For me the power of Rails is that if you do CRUD web apps it is battle tested and has a long history of giving you what you need to quickly build business logic for that CRUD app. It is the knowledge that is put into designing a web framework that works for 90% of the operations you need, so you can focus on writing your custom business logic.
At least it’ll make for an interesting post mortem blog post.
Like yeah, I know you can do it. But it was much more effort to do things like writing robust migrations or frontend templates. I’d love to find something in Go or Typescript that made me feel quite as productive as Rails did.
Maybe I am comparing apples and oranges, not sure.
upsert_all[1] is available to update a batch of records in a single write that does not invoke model callbacks.
activerecord-import[2] is also a very nice gem that provides a great API for working with batches of records.
It can be as simple as extracting your callback logic into a class method (def self.batch_update) and running that callback logic after the upsert.
[1] https://api.rubyonrails.org/classes/ActiveRecord/Relation.ht... [2] https://github.com/zdennis/activerecord-import
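To illustrate, a minimal sketch of that extraction, assuming a hypothetical Product model whose after_save callback reindexes the record:

  class Product < ApplicationRecord
    after_save :reindex

    # Hypothetical batch helper: upsert_all issues a single
    # INSERT ... ON CONFLICT and skips callbacks, so we run the
    # extracted callback logic ourselves afterwards.
    def self.batch_update(rows)
      upsert_all(rows, unique_by: :id)
      where(id: rows.map { |r| r[:id] }).find_each(&:reindex)
    end

    def reindex
      # push to the search index, bust caches, etc.
    end
  end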
"It can be as simple as extracting your callback..." Isn't this the kind of repetitive thing a framework should be doing on your behalf?
To be fair, ActiveRecord isn't a fault Rails invented. Apparently it's from one of Martin Fowler's many writings where each model instance manages its own storage. Even Fowler seems to say that the DataMapper approach is better to separate concerns in complex scenarios.
Even outside of batch processing, there will usually be a few queries that absolutely benefit from being rewritten in a lower layer of the ORM or even plain SQL. It's just a fact of life.
Ruby/Rails was a breath of fresh air. Translate: Not a PITA.
The real driver is complexity cost. Every line of client JS brings build tooling, npm audit noise, and another supply chain risk. Cutting that payload often makes performance and security better at the same time. Of course, Figma- or Gmail-class apps still benefit from heavy client logic, so the emerging pattern is “HTML by default, JS only where it buys you something.” Think islands, not full SPAs.
So yes, the pendulum is swinging back toward the server, but it’s not nostalgia for 2004 PHP. It’s about right-sizing JavaScript and letting HTML do the boring 90% of the job it was always good at.
The first framework I ever got to use was GTK with Glade, and Qt with Designer shortly thereafter. These, I think, show the correct way to arrange your applications anywhere, and the approach works great on the web too.
Use HTML and CSS to create the basis of your page. Use the <template> and <slot> mechanisms to make reusable components, or create widgets directly in your HTML. Anything that gets rendered should exist here. There should be very few places where you dynamically create and then add elements to your page.
Use JavaScript to add event handlers, receive events, and just run native functions on the DOM to manage the page. The dataset on every element is very powerful, and WeakMaps exist for when that's not sufficient. You have everything you need right in the standard environment.
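A minimal sketch of that style (element ids and names are just for illustration):

  <template id="user-card">
    <li class="card"><span class="name"></span></li>
  </template>
  <ul id="list"></ul>
  <script>
    // Clone the template for each item; per-element state lives in dataset.
    const tpl = document.getElementById('user-card');
    function addUser(user) {
      const fragment = tpl.content.cloneNode(true);
      fragment.querySelector('.name').textContent = user.name;
      fragment.querySelector('.card').dataset.userId = user.id;
      document.getElementById('list').append(fragment);
    }
    addUser({ id: 42, name: 'Ada' });
  </script>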
If your application is API driven then you're effectively just doing Model-View-Controller in a modern way. It's exceptionally pleasant when approached like this. I have no idea why people torture themselves with weird opinionated wrappers around this functionality, or in the face of an explosion of choices, decide to regress all the way back to server side rendering.
I've heard people say they just want "Pure JS" with no frameworks at all because frameworks are too complex for their [currently] small app. So they get an app working, and all is good, right until it hits, say, 5000 lines of code. Then suddenly you have to re-write it all using a framework and TypeScript to do typing. Better to just start with an approach that scales to infinity, so you never run into the brick wall.
It's not about using the most powerful tool always, it's about knowing how to leverage modern standards rather than reinventing and solving problems that are already solved.
People who say stuff like this have obviously never actually used modern day FE frameworks, because they have all been very stable for a long while. Yes, APIs change, but that's not unique to JS/frontend, and also nothing really forces you to update with them unless you really need some shiny new feature, and at least IME Vue 3 has been nothing but gold since we got on it.
This is just intentional ignorance.
As I’ve become more senior I’ve realized that software devs have a tendency to fall for software “best practices” that sound good on paper but they don’t seem to question their practical validity. Separation of concerns, microservices, and pick the best tool for the job are all things I’ve ended up pushing back against.
In this particular case I’d say “pick the best tool for the job” is particularly relevant. Even though this advice is hard to argue against, I think it has a tendency to take developers down the path of unnecessary complexity. Pragmatically it’s usually best to pick a single tool that works decently well for 95% of your use cases instead of a bunch of different tools.
Further, being against a bloated framework is not the same as being against frameworks. Those frameworks are actually principles. It’s possible for a team to come up with or use existing principles without using a framework.
Finally, “always use React” brings other costs. You need a team to build your system twice. That means you need bigger teams; more funding to do the same thing, and so on. You add complexity at the team level and at the software level when using frameworks. The person above you said that blindly “following best practices” is bad while stating a “best practice” of always start with React. That particular “best practice” not always being the best practice is the entire point of this thread.
Having been a web developer for a quarter century, I know how tempting it is (yes, for small projects) to try to just wing it and do everything without a framework, and I know what a tarpit that way of thinking is. If you disagree, you are certainly welcome to share your own opinion.
> Most people seemingly advocating for non-React are actually saying to start simple and add the complexity where and when it’s needed.
In my experience that’s actually not the case. That might be what people claim, but in my professional experience some people really don’t like frontend work, and they avoid frontend frameworks because they think it’ll make their work more tolerable. What usually happens is they start out “simple”, but pretty quickly product requirements come in that are hard to do without some framework, and then there’s a scramble to add a framework or hack it into some parts of the app.
You can't just switch horses in the middle of the stream. You have to ride back to the ranch, saddle up on a different horse, and restart your entire journey on a better horse.
This assumes that non-React approaches are simple.
Everything after ready should have been static content.
I am saying that allowing for JavaScript to be dynamically downloaded and executed after the page is ready was a mistake.
You can build your Google docs, your maps, and figmas. You don’t need JS to be sent after the page is ready to do so.
Today, what you are saying is definitely a concern, but all APIs are abused beyond their intended uses. That isn’t to say we shouldn’t continue to design good ones that lead users in the intended direction.
Thinking today about how the web was designed isn’t necessarily a good guide to how it could work best tomorrow.
Not quite, I wasn't trying to make a bigger point about is/ought dynamics here, I was more curious specifically about the Google Maps example and other instances like it from a technical perspective.
Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.
It sounds like in the model proposed above where all scripts are loaded on page-load, I as a user face a clearly worse experience either by A.) losing useful features such as Street View, or B.) paying to load the scripts for those features even when I don't use them.
If you can exchange static content, you need very little scripting to be able to pull down new interactive pieces of functionality onto a page. Especially given that HTML and CSS are capable of so much more today. You see a lot of frameworks moving in this direction, such as RSCs, where we now transmit components in a serializable format.
Trade-offs would have to be made during development, and with a complex enough application there would be moments where it may be tough to support everything on a single page. However, I don’t think supporting a single page is necessarily the goal or even the spirit of the web. HTML imports, for example, would have prevented a lot of unnecessary compilers, build tools, and runtime JS from ever being created.
We had that in the form of MapQuest, and it was agonizingly slow. Click to load the next tile, wait for the page to reload, and repeat. Modern SPAs are a revelation.
You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.
And just like the specific use cases you mentioned for client routing I can also argue that many sites don’t care about SEO or first paint so those are non features.
So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.
I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.
People are jumping into nextjs because react is pushing it hard, even though it’s a worse product pushed with questionable motives.
I’ve seen undefined make it all the way to the backend and get persisted in the DB. As a string.
JS as a language just isn’t robust enough and it requires a level of defensive programming that’s inconvenient at best and a productivity sink at worst. Much like C++, it’s doable, but things are bound to slip through the cracks. I would actually say overall C++ is much more reasonable.
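For what it's worth, a sketch of how that failure mode typically happens (illustrative only): string interpolation happily coerces undefined.

  const user = {};                  // user.name was never assigned
  const sql = `INSERT INTO users (name) VALUES ('${user.name}')`;
  console.log(sql);                 // INSERT INTO users (name) VALUES ('undefined')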
This is where I know that some people are not actually programming in either of these languages, but just writing meme-driven posts.
JS has a few footguns. Certainly not so many that it's difficult to keep in your head, and not nearly as complex as C++, which is a laughable statement.
You've "seen null make it to the database," but haven't seen the exact same thing in C++? Worse, seen a corrupted heap?
It's like people just talk in memes or something.
This is how a lot of discourse feels these days. People living in very different realities.
Though in this case, seeing the most complex C++ app they've built would illuminate what's going on in theirs.
Opening up a 10K-line JS file is like jumping into the ocean. Nothing is obvious, nothing makes sense. You're allowed to just do whatever the fuck in JS. Bugs were always ephemeral. The behavior of the code was impossible to wrap your head around, and it seemed to change under your feet when you weren't looking.
Now, the backend was written in old C++. And yes, it was easier to understand. At least, I could click and go to definition. At least, I could see what was going in and out of functions. At least, I could read a function and have a decent understanding of what it should be doing, what the author's intention is.
The front end, spread across a good thousand JS files, was nothing of the sort. And it was certainly more buggy. Although, I will concede, bugs in C++ are usually more problematic. In JS usually it would just result in UI jankyness. But not always.
I think that’s a feature not a bug.
But then again, I generally like and use Typescript.
The problem is that the behavior becomes so complex and so much is pushed to runtime that there's no way to know what the code is actually doing. There's paths that are only going to be executed once a year, but you don't know which ones those are. Eventually, editing the code becomes very risky.
At this particular codebase, it was not uncommon to see 3, 4 or 5 functions that do more or less the same thing. Why? Because nobody dared change the behavior of a function, even if it's buggy. Maybe those bugs are the only thing keeping other bugs from cropping up. It's like whack-a-mole. You fix something, and then the downstream effects are completely unpredictable.
It becomes a self-eating snake. Because the codebase is so poor, it ends up growing faster and faster as developers become more risk-averse.
In C++, there's only one null, nullptr. But most types can never be null. This is actually one area where C++ was ahead of the competition. C# and Java are just now undoing their "everything is nullable" mistakes. JS has that same mistake, but twice.
It's not about complexity, although that matters too. C++ is certainly more complex, I agree, but that doesn't make it a more footgunny language. It's far too easy to make mistakes in JS and propagate them out. It's slightly harder to make mistakes in C++, if you can believe it. From my experience.
You should still know a language like Rust or Zig for systems work, and if you want to work in ML or data management you probably can't escape Python, but Typescript with Bun provides a really compelling development experience for most stuff outside that.
As fun side anecdote, if you're doing scientific computing in a variety of fields, Fortran 95 is mostly still fine ;)
I greatly prefer Java with Spring Boot for larger backend projects.
To answer your question directly - yes, it’s fine, it’s actually expected behavior.
Every form also normally ends up duplicating validation logic both in JS for client-side pre-submit UX and server-side with whatever errors it returns for the JS to then also need to support and show to the user.
Anecdotally, it seems like I encounter a lot more web apps these days where refreshing doesn’t reset the state, so it’s just broken unless I dig into dev tools and start clearing out additional browser state, or removing params from the URL.
Knock it off with all the damn state! Did we forget the most important lesson of functional programming: that state is the root of all evil?
There are times the user experience is just objectively better with more state, and you have to weigh the costs.
If I am filling out a very long form (or even multi-page form) I don’t really want all that state lost if I accidentally refresh the page.
Which is what HN does and it sucks. It's very common for me to vote on a couple things and then after navigating around I come back to see that there are comments that don't have a vote assigned.
Of course the non-JS version would be even more annoying. I would never click those vote buttons if every vote caused a full page refresh.
The problem with SPAs is that they force having to maintain a JS-driven system on every single page, even those that don't have dynamic behavior.
I agree with this. Sprinkle in the JS as and when it is needed.
> The problem with SPAs is that they force having to maintain a JS-driven system on every single page, even those that don't have dynamic behavior.
I don't agree with this: SPAs don't force "... having to maintain a JS-driven system on every single page..."
SPA frameworks do.
I think it's possible to do reasonably simple SPAs without a written-completely-in-JSX-with-TypeScript-and-a-5-step-build-process-that-won't-work-without-25-npm-dependencies setup.
I'm currently trying out a front-end mechanism to go with my high-velocity back-end mechanism. I think I've got a good story sorted out, but it's early days and while I have used my exploratory prototype in production, I've only recently iterated it into a tiny and neat process that has no build-step, no npm, and no JS requirement for the page author. All it uses is `<script src=...>` in the `<head>`, with no more JS on the rest of the page.
Very limited though, but it's still early days.
That's kinda the goal I'm trying to reach. If you know of any SPA that doesn't come with all the baggage and only uses `<script src=...>`, by all means let me know.
We can sit here all day and think up counterexamples, but in the real world what you're doing 99% of the time is:
1. Presenting a form, custom or static.
2. Filling out that form.
3. Loading a new page based off that form.
When I open my bank app or website, this is 100% of the experience. When I open my insurance company website, this is 100% of the experience. Hell, when I open apartments.com, this is like 98% of the experience. The 2% is that 3D view thingy they let you do.
Notification count in the top right?
Remaining credit on an interactive service (like the ChatGPT web interface)?
So, maybe two(!) business use-cases out of thousands, but it's a pretty critical two use-cases.
I agree with you though - do all normal HTML form submissions, and for those two use-cases use `setInterval` to set them from a `fetch` every $X minutes (where you choose the value for $X).
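A minimal sketch of that polling approach (endpoint and element id are made up):

  async function refreshBadge() {
    const res = await fetch('/api/notification-count');
    if (!res.ok) return;            // fail quietly; try again next tick
    const { count } = await res.json();
    document.getElementById('notif-count').textContent = count;
  }
  refreshBadge();
  setInterval(refreshBadge, 5 * 60 * 1000);  // $X = 5 minutes here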
There's an entire domain of apps where you truly need a front-end. Any desktop-like application. Like Google Sheets, or Figma. Where the user feedback loop is incredibly tight for most operations.
Most companies aren’t international.
For applications that are not highly interactive, you don't quite need a lot of tooling on the BE, and since you need to have a BE anyway, a lot of standard tooling is already in there.
React style SPAs are useful in some cases, but most apps can live with HTMX style "SPA"s
“So the backend gave this weird …”
“What backend?”
“The backend for the frontend…”
“So not the backend for the backend for the frontend?”
I jest, but only very slightly.
I'm arguing to just use a single API, not creating one for UI, at least when you want things to be simple for multiple clients.
I think the DX is significantly better as well with fast reload…
As a user, the typical SPA offers a worse experience. Frequent empty pages with progress bars spinning before some small amount of text is rendered. Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition, but there's probably ways to build a 'SPA' experience that avoids these too. (E.g. the "HTML swap" approach others mentioned ITT tends to work quite well for that.)
The high compute overhead of typical 'vDOM diffing' approaches is also an issue of course, but at least you can pick something like Svelte/Solid JS to do away with that.
This is an implementation choice/issue, not an SPA characteristic.
> there's probably ways to build a 'SPA' experience that avoids these too
PWAs/service workers with properly configured caching strategies can offer a better experience than SSR (again, when implemented properly).
> The high compute overhead...
I prefer to do state management/reconciliation on the client whenever it makes sense. It makes apps cheaper to host and can provide a better UX, especially on mobile.
Just how low-spec and/or how much state-data are we talking about here? I ask only because I am downloading an entire dataset and doing all the logic on the client, and my PC is ancient.
I'm on a computer from 2011 (i7 870 @ 2.9GHz with 16GB of RAM), and the client-side filtering I do, even on a few tens of thousands of records retrieved from the server, still takes under a second.
On my private app, my prospect list containing maybe 4k records, each pretty substantial (as they include history of engagements/interactions with that client) is faster to sort and filter on the client than it is to download the entire list.
I am usually waiting for 10s while the hefty dataset downloads, but the sorting and filtering happens in under a second. I do not consider that a poor UX.
How do you know how large the dataset is? All you know from my post is that a dataset that takes 10s to download (I'm indicating the size of it here!) takes under a second to filter and sort.
My point is that if your client-code is taking long to filter and sort, then your dataset is already so large that the user has been waiting a long time for it already; they already know that this dataset takes time.
FWIW, the data is coming in as CSV, compressed, so it's as small as possible. It's not limited by the server. Having it rendered by the server will increase the payload substantially.
I can't really speak for those sites anyway, or why they are so slow doing things on the client, but like I said, I've written client-side processing and used my 2011 desktop, and there has been no pegging of the CPU or large latencies when filtering/sorting data client-side.
> while absolutely massively complex and large server-rendered HTML loads and renders in an eyeblink.
I've not had that experience - a full page refresh with about 10MB of data does not happen in an eyeblink. It takes about 6 seconds. There's a minimum amount of time for that data to download, regardless of whether it is pre-rendered into `<table>` elements or whether it is sent as JSON. Turning that JSON into a `<table>` on the client takes about 40ms on my 2011 desktop. Sorting it again takes about 5ms.
For this use-case (fairly large amounts of data), doing a full-page refresh each time the user sets a new sort criteria is unarguably a poorer experience than a bit of JS that goes through the table element and re-orders the `<tr>` elements.
In this case, using server-rendered HTML is always going to be 6000ms whenever the user re-sorts the table. Using a client JS function takes 5ms. On a machine purchased in 2011.
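For the curious, the client-side re-sort really is just a few lines; a sketch:

  // Reorder the already-rendered <tr> elements in place; no server round trip.
  function sortTable(table, colIndex) {
    const tbody = table.tBodies[0];
    const rows = Array.from(tbody.rows);
    rows.sort((a, b) => a.cells[colIndex].textContent.localeCompare(
      b.cells[colIndex].textContent, undefined, { numeric: true }));
    tbody.append(...rows);          // appending an existing node moves it
  }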
Okay, lets assume it is faster by whatever the latency is for a network request.
What sort of use-case are we talking about where a table is displayed on a content or e-commerce site and the user is not allowed to re-sort it?
It's all about the user's experience, not the developer's, and I can't see how a UX that prevents sorting is a better UX. Ditto for a sortable table that refreshes the page each time the user sorts it.
Yes, I know that this can be made to work properly, in principle. The problem is that it requires effort that most web devs are apparently unwilling to spend. So in practice things are just broken.
My hypothesis is that they’ve had to deal with so many random web apps breaking the back button so that behaviour is no longer intuitive for them. So I don’t push back against in-app back buttons any more.
I myself am guilty of (about 14 years ago now) giving an SPA a "reload" button, which had it go and fetch clean copies of the current view from the server. It was a social app; new comments and likes would automatically load in for the posts already visible, but NEW posts would NOT be loaded in, as they would cause too much content shift if they were to load in automatically.
Admittedly this is not a great solution, and looking back on it now, I can think of like 10 different better ways to solve that issue… but perhaps some users of that site are seeing my comment here, so yeah, guilt admitted haha.
An auction site I use loads in the list of auctions after the rest of the page loads in, and also doesn't let you open links with middle click or right click>new tab, because the anchor elements don't have href attributes. So that site is a double-dose of having to open auctions in the same tab, then going back to the list page and losing my place in the list of auctions due to the late load-in and failure to save my scroll location.
1. Fetch index.html
2. Fetch js, css and other assets
3. Load personalized data (json)
But usually step 1 and 2 are served from a cdn, so very fast. On subsequent requests, 1 and 2 are usually served from the browser cache, so extremely fast.
SSR is usually not faster. Most often slower. You can check yourself in your browser dev tools (network tab):
vs.
Poster child SSR: https://nextjs.org/
So much complexity and effort in the nextjs app, but so much slower.
SSR also has excess round trips by nature. Without Javascript, posting a form or clicking a like button refreshes the whole page even though a single <span> changed from a "12 likes" to "13 likes".
It just depends on what you are after. You can completely drop the backend, APIs and all, and have a real-time web-socketed sync layer that goes direct to the database. There is still a row-based permissions layer for security, but you get the idea.
The client experience is important in our app and a backend just slows us down we have found.
You might be able to drop a web router, but pretending this is "completely drop[ping] the backend" is silly. Something is going to have to manage connections to the DB, and you're not -- I seriously hope -- literally going to expose your DB socket to the wider Internet. Presumably you will have load balancing, DB replicas, and that sort of thing as your scale increases.
This is setting aside just how complex managing a DB is. "completely drop the backend" except the most complicated part of it, sure. Minor details.
Which is fine and cool for an app, but if you do something like this for say, a form for a doctor's office, I wish bad things upon you.
That's never the case.
Please forgive the self-promotion but this was exactly the premise of a conference talk I gave ~18 months ago at performance.now() in Amsterdam: https://www.youtube.com/watch?v=f5felHJiACE
Yes, that one. I want that experience please.
I built a lib specifically designed for this strat: https://starfx.bower.sh/learn#data-strategy-preload-then-ref...
If you are curious, my most recent blog post is all about this concept[0] which I wrote because people seem to be misinformed on what RSCs really are. But that post didn't gain any traction here on HN.
Is it more complex? Sure, but it is also more powerful and flexible. It's just a new paradigm, so people are put off by it.
[0] Server Components Give You Optionality https://saewitz.com/server-components-give-you-optionality
Maybe the answer was never in JS eating the entire frontend, and changing the tooling won’t make it better, as it’s always skirting what’s actually good for the web.
I used to agree but these days with Vite things are a lot smoother. To the point that I wouldn't want to work on UI without fine-grained hot reloads.
Even with auto reload in PHP, .NET, etc you will be wasting so much time. Especially if you're working on something that requires interaction with the page that you will be repeating over and over again.
That’s honestly not that many things IRL. If you look at all the things you build, only a minority actually demand high interactivity or highly custom JS. Otherwise, existing UI libraries cover the bulk of what people actually need to do on the internet (ie, not just whatever overly fancy original idea the designers think is needed for your special product idea).
It’s mostly just dropdowns and text and tables etc.
Once you try moving away from all of that and questioning if you need it at every step you’ll realize you really don’t.
It should be server-driven web by default with a smattering of high-functionality islands of JS. That’s what Rails figured out after changing the frontend back and forth.
> Even with auto reload in PHP, .NET, etc you will be wasting so much time
Rails has a library that will refresh the page when files change without a full reload, using Turbo/Hotwire. Not quite HMR but it’s not massively different if your page isn’t a giant pile of JS, and loads quickly already.
What if you have a modal opened with some state?
Or a form filled with data?
Or some multi-selection in a list of items that triggers a menu of actions on those items?
Etc.
And it's true Vite can't always do HMR but it's still better than the alternative.
Stimulus controllers can store state.
> Or a form filled with data?
Again, you can either use a Stimulus controller, or you can just render the data into the form response, depending on the situation.
> Or some multi-selection in a list of items that triggers a menu of actions on those items?
So, submenus? Again, you can either do it in a Stimulus controller (you can even trivially do things like provide a new submenu on the fly via Turbo), or you can pre-render the entire menu tree server-side and update just the portion that changes.
None of these are complex examples.
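For anyone unfamiliar, a minimal sketch of the first case (controller name and markup are hypothetical):

  // modal_controller.js: the open/closed state lives in the controller.
  import { Controller } from "@hotwired/stimulus"

  export default class extends Controller {
    static targets = ["dialog"]
    static values = { open: Boolean }

    toggle() { this.openValue = !this.openValue }

    // Stimulus calls this automatically whenever openValue changes.
    openValueChanged() {
      this.dialogTarget.classList.toggle("hidden", !this.openValue)
    }
  }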
Yes, obviously, but do these maintain state after hot reload?
If you change the JS or the controller itself, obviously, state stored in JS would be lost unless you persisted it locally somehow.
No -- but you could. And it wouldn't be the end of the world. So I'm just saying, DX doesn't eclipse all other considerations.
Didn't everybody say the exact same thing about Node, React, jQuery...? There is always a new and shiny frontend JS solution that will make the web dev of old obsolete and everyone loves it because it's new and shiny, and then a fresh crop of devs graduates school, the new shiny solution is now old and boring, and like a developer with untreated ADHD, they set out to fix the situation with a new frontend framework, still written in JavaScript, that will solve it once and for all.
I still build websites now the same as I did when I graduated in 2013. PHP, SQL, and native, boring JavaScript where required. My web apps are snappy and responsive, no loading bars or never-ending-spinning emblems in sight. shrug
I'm quite surprised to hear this is a common thing. Besides myself, I don't know a single person who has ever installed a PWA. People in tech don't, despite knowing PWAs exist; people outside tech don't know they exist in the first place.
Does management actually have any PWAs installed themselves?
They should have designed it as a proper native experience.
Yes, the service worker thing is annoying, but you possibly don't need it if you have a server backend. It's basically a glorified website with a home screen icon. Most of the native vehicle, asset or fitness tracking apps need a backend anyway, and they fail miserably when disconnected from the network.
Better do a mobile Web friendly website and leave it at that.
Most users hardly tell the difference anyway.
And the metrics are saying that people click it?
It definitely makes complete sense in that scenario, but remains a very niche usecase where people have no other option.
> People outside tech just get installation instructions
People outside of tech don't need instructions to install non-PWA, store apps. So all this does to me is reinforce that no one is installing PWAs outside of niche scenarios where 1. people basically have to use the app due to a connection to a physical institution 2. they are explicitly told how to do it 3. the app is not available on the stores for legal reasons.
Depends on age and tech awareness. Many still do, when they cannot rely on a family member to do it for them. Overall installing PWA is no more complicated than getting something from a store.
'Twas before my time. What was so great about it? I remember needing it installed for Netflix like 15 years ago. Did you ever work with Flash? How was that?
If you ever worked seriously on anything non-SPA you would never, ever claim SPAs "dramatically reduce complexity". The mountain of shit you have to pull in to do anything is astronomical even by PHP's standards, and I hate PHP. Those days were clean compared to what I have to endure with React and friends.
The API argument never sat well with me either. Having an API is orthogonal: you can have one or not have one, and you can have one and still have an SSR app. In the AI age an API is the easy part anyway.
This. From a security perspective, server side dependencies are way more dangerous than client side.
You write:
<div id="moo" />
<form hx-put="/foo" hx-swap="outerHTML" hx-target="#moo">
<input hidden name="a" value="bar" />
<button name="b" value="thing">Send</button>
</form>
Compared to (ChatGPT helped me write this one, so maybe it could be shorter, but not that much shorter, I don't think?):

<div id="moo" />
<form>
<input hidden name="a" value="bar" />
<button name="b" value="thing" onclick="handleSubmit(event)" >Send</button>
</form>
<script>
async function handleSubmit(event) {
event.preventDefault();
// the form submit stuff
const form = event.target.form;
const formData = new FormData(form);
const submitter = event.target;
if (submitter && submitter.name) {
formData.append(submitter.name, submitter.value);
}
// hx-put
const response = await fetch('/foo', {
method: 'PUT',
body: formData,
});
// hx-swap
if (response.ok) {
const html = await response.text();
// hx-target
const target = document.getElementById('moo');
const temp = document.createElement('div');
temp.innerHTML = html;
target.replaceWith(temp.firstElementChild);
}
}
</script>
And the former just seems, to me at least, way way *way* easier to read, especially if you're inserting those all over your code.

Going with your example, how would you do proper validation with HTMX? For example, the input element's value cannot be null or empty. If the validation fails, then a message or something is displayed. If the validation is successful, then that HTML is replaced with whatever?
I have successfully gotten this to work in HTMX before. However, I had to rely on the JS API, which is outside the realm of plain HTML attribute-based HTMX. At that point, especially when you have many inputs like this, the amount of work one has to do with the HTMX JS API starts to look a lot like the script tag in your example, but I would argue it's actually much more annoying to deal with.
1. Do the validation server side and replace the input (or a label next to the input, see https://htmx.org/examples/inline-validation/)
2. Use the HTML 5 Client Side form validation API, which htmx respects:
https://developer.mozilla.org/en-US/docs/Learn_web_developme...
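A sketch of option 2 (endpoint and ids are made up): the browser's built-in validation blocks the request until the field is valid, with no extra JS needed.

  <form hx-post="/signup" hx-target="#result">
    <!-- required/type are plain HTML5 validation; htmx won't fire the
         request while the form reports itself invalid -->
    <input name="email" type="email" required>
    <button>Sign up</button>
  </form>
  <div id="result"></div>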
So, I did end up going with #1 with a slight variation.
You also commented on another comment of mine stating:
> if you are using the htmx javascript API extensively rather than the attributes, you are not using htmx as it was intended
There seems to be some confusion, and I apologize. I extensively used attributes. That wasn't the part of the API I was referring to. Rather, I should have specified that I was heavily relying on a lot of the htmx.on() and htmx.trigger() methods. My usage of htmx.trigger() was predominantly due to something being triggered on page load, but also, it needed to be triggered again if a certain condition was met -- to refetch the html with the new data -- if that makes sense.
I should also preface that I was working on this project about two years ago. It looks like a lot has changed with HTMX since then!
After enough of the HTMX JS API, I figured, "What is HTMX even buying me at this point?" Even if plain JS is more verbose, that verbosity comes with far less opinions and constraints.
> You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.
Now you're dealing with 2 sets of tooling instead of 1.
> And just like the specific use cases you mentioned for client routing I can also argue that many sites don’t care about SEO or first paint so those are non features.
There is no app which would not care about first paint. It's literally the first part of any user experience.
> So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.
So SEO and first paint are not necessary features, but an API for multiple clients is? Most apps I've worked with for over 15 years of web dev never needed to have an API.
> I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.
With backend apps the reload IS fast. SPA's have to invent tooling like fast reload and optimistic updates to solve problems they created. With server apps, you just don't have these problems in the first place.
Fast forward to what I am doing today in my new job. We have a pretty complex setup using Redwoodjs along with several layers of abstraction with Graphql (which I approve of) and a ton of packages and modules tied together on the front end with react, storybook, etc., and some things I am not even sure why they are there.

I see new engineers joining our team and banging their heads to make even the smallest of changes and to implement new features, having to make code changes at multiple different places. I find myself doing similar things as well from time to time - and I always can't help but think about the complexity that I used to deal with when working with these MVC frameworks and how ridiculously easy it was to just throw logic in a controller and a service layer and the view templates for the UI stuff. It all fit in so easily and shipping features was super simple and quick.
I wouldn't discount react as a framework, but I am also starting to see some cracks caused by using TypeScript on the backend. This entire JavaScript world seems to be a mess you don't want to mess with. This is probably just me with an opinion, but using Turbo, Stimulus and sprinkles of LiveView got me really, really far very quickly.
For the disadvantages, I cannot think of any. It is a bit slower than hand rolling your own REST API, but the difference is not severe enough to make you give up on it.
On the plus side, it does offer communication advantages if you have entirely independent BE and FE teams, and it can help minimize network traffic for network-constrained scenarios such as mobile apps.
Personally, I have regretted using GraphQL every time.
Many interactions are simply better delivered from the client. Heck, some can only be delivered from the client (eg: image uploading, drag and drop, etc).
With HTMX, LiveViews, etc there will be challenges integrating server and client code... plus the mess of having multiple strategies handling different parts of the UI.
I would consider that the bare acceptable minimum, along with an upload progress indicator.
But it can get a lot more complicated. What if you need to upload multiple images? What if you need to sort the images, add tags, etc? See for example the image uploading experience of sites like Unsplash or Flickr.
HTMX just isn't the right tool to solve this unless you're ready to accept a very rudimentary UX.
Please implement a multi image upload widget and then come back to argue about this.
I could be misremembering, but didn't browsers used to have this built in? Like there used to be a status bar that showed things like network activity (before we moved to a world where there is always network activity from all of the spying), upload progress, etc.
I don't remember if it was in Firefox, but SeaMonkey even has a "pull the plug" button to quickly go offline/online in the status bar.
Bizarre that "progress" is removing basic functionality and then paying legions of developers to re-add it in inconsistent ways everywhere.
IME this is backwards. All that stuff is a one-off fixed cost, it's the same whether you have 10 lines of JS or 10,000. And sooner or later you're going to need those 10 lines of JS, and then you'll be better off if you'd written the whole thing in JS to start with rather than whatever other pieces of technology you're using in addition.
Was this not the case? And if so, what has fundamentally changed?
Having one API for web and mobile sounds good but in practice often the different apps have different concerns.
And SEO and page speed were always reasons the server never died.
In fact, the trend is the opposite direction - the server sending the mobile apps their UIs. That way you can roll out new updates, features, and experiments without even deploying a new version.
Is that allowed by app stores? Doesn’t it negate the walled gardens if you can effectively treat the app as a mini browser that executes arbitrary code ?
What app stores don't like is you reinventing javascript i.e shipping your own VM. What they don't mind is you reinventing html and css.
So it is common for servers today to send mobile apps
{"Elementtype": "bottom_sheet_34", "fg_color": "red",..., "actions": {"tap": "whatever"}, ... }
However the code that takes this serialised UI and renders it, and maps the action names to actual code, is shipped in the app itself. So, the app stores don't mind it. This is what the GP is talking about.
It covers a surprising number of usecases, especially since many actions can be simply represented using '<a href>' equivalents -- deeplinks. With lotties, even animations are now server controlled. However, more dynamic actions are still client-controlled and need app updates.
Additionally, any new initiative , think new feature, or think temporary page for say valentine's day, is all done with webviews. I'm not clued in on the review process for this.
Nevertheless, if your app is big enough then almost every rule above is waived for you and the review process is moot, since once you become popular the platform becomes your customer as much as you are theirs. For example, tiktok ships a VM and obfuscated bytecode for that VM to hinder reverse engineering (and of course, hide stuff)
I loved building things that way.
Been a web dev for over a decade, and I still use plain JS. I have somehow managed to avoid learning all the SPAs and hyped JS frameworks. I used HTMX for once project, but I prefer plain JS still.
I was a JQuery fan back in the day, but plain JS is nothing to scoff at these days. You are right though, in my experiences at least, I do not need anything I write to all happen on a single page, and I am typically just updating something a chunk at a time. A couple of event listeners and some async HTTP requests can accomplish more than I think a lot of people realize.
However, if I am being honest, I must admit one downfall. Any moderately complex logic or large project can mud-ball rather quickly -- one must be well organized and diligent.
Figma is written in C++ to webasm.
And what makes me like Next.js, besides the SaaS SDKs that give me no other framework choice, is that it is quite similar to those experiences.
Figma is a definite yes. But Gmail is an example we've been citing since the late 00s and somehow continue to cite now. I thought it had been proven that we don't need an SPA for an email client. Hey is perfectly fine other than being a little slow, mostly due to server response time and not Turbo / HTML / HTMX itself.
I still believe we have a long way to go and innovate on partial HTML swaps. We could have pushed this to the limit so that 98% of the web doesn't need an SPA at all.
I really hope Rails has more in store this year.
I am starting to think now is a great time to return to some of the Knockout-era ideals of "Progressive Enhancement". Web Components, the template tag, local storage, CSS view transitions, and a few other subtle modern things seem to be getting close to the point where the DX is as good or better than SPAs and the UX feels similar or better, too.
Why do you need GraphQL here?
If your developer workstation can't send a few KB of data over a TCP socket in a reasonable amount of time due to the colossal amount of Zoomer JavaScript abstraction nonsense going on, something has gone terribly wrong.
The whole idea of needing "islands" and aggressive caching and all these other solutions to problems you created -- that you have somehow managed to make retrieving a trivial amount of data off a flash storage device or an in-memory storage system of some kind slow -- is ludicrous.
What's funny is that people struggling after deploying it now think that they have invented the N+1 problem.
The app was initially client-side only. I choose GraphQL over REST because Hasura created a super quick API for the database.
Also, our API is public! We started as an alternative to Goodreads when they closed their API.
I think what confuses people is Ruby’s meta programming. The ability to dynamically define named methods makes rails seem far more magical than it actually is. You also can’t tell the difference between method calls and local variables.
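A tiny example of that ambiguity:

  class Report
    def title
      "Quarterly"
    end

    def render
      # `title` reads like a local variable but is a method call; only the
      # absence of a local assignment in scope tells you which one it is.
      puts title
    end
  end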
I wish I got along better with Rails, honestly.
It’s honestly a really underrated framework, smartly designed, with probably the best ORM that exists and a great ecosystem.
Unfortunately, the documentation is painfully bad and the Getting Started guides are really boring compared to Rails or Django.
There may also be Laravel but I can’t say anything about it since I never tried it.
I have complaints about Laravel, but I think it's a lot easier to find examples, and modern PHP has static typing improvements. But I would much rather use C#
Exposed is a solid-enough ORM for my tastes, Ktor is easy-to-use and clean; and Kotlin gives me a type system and fluent / Ruby-style method chaining.
I looked at InertiaJS and it feels like too much "magic" for me personally. I've never used it so I could be wrong but it feels like too many things you have to get working perfectly and the instability in the JS ecosystem makes me worry about adding too many layers.
Pre-rendering (as popularized by static site generators) is the additional step that increases complexity significantly, sometimes security issues too when session-protected cached pages are mistakenly served to the wrong users.
When your business goal is to put text on a screen, the next logical step is to ask how much time and money the tech stack really saves. I have never found a developer that answers that question with a number. That’s a really big problem.
The reasons I prefer client-side rendering: (1) separation of concerns: UX in the front, data/business in the back; (2) even as a back-end dev, I prefer Vue for front-end work rather than rendering text + scripts in the backend that then run in the browser; (3) at scale it's better to use the client hardware for performance (other than initial latency).
I get where you're coming from but that's actually quite a bit of an oversimplification even for many web apps outside of the 1% for which a lot of modern web development solutions and frameworks seem to have been created.
For one thing it doesn't take any account of input. When someone draws something with Figma, or writes something in Google Docs, or buys something from Amazon (or indeed any online shop at whatever scale), or gets a set of insurance quotes from a comparison site, or amends something in an employee's HR record, or whatever it may be, the user's input is a crucial part of the system and its behaviour.
For another, we're not just putting text on the screen: we're putting data on the screen. And whilst data can always be rendered as text (even if not very readably or comprehensibly), depending on what it represents, it can often be more meaningfully rendered graphically.
And then there are integrations that trigger behaviour in other systems and services: GitHub, Slack, eBay, Teams, Flows, Workato, Salesforce, etc. Depending on what these integrations do, they can behave as inputs, outputs, or both.
And all of the above can result in real world activity: money is moved, orders are shipped from warehouses, flow rates are changed in pipelines, generators spool up or down, students are offered (or not offered) places at universities, etc.
I suppose you could have custom CSS (e.g. via Stylebot) remove 90% of the elements and all but one of the pictures, but would that really make the amazon purchasing experience better?
Even the search box itself lags when typing because somehow the text input is synced to the autocomplete search?
/rant
I wonder how you'll handle image uploading, drag and drop, media players, etc with simple static content rendering.
That's about as absurd a statement as saying all of Backend is just "returning names matching a certain ID" for how out of date and out of touch it is.
It's like saying that the entire job of a politician is to speak words out loud. You're reducing a complex problem to the point that meaningful discussion is lost.
Can anyone come up with the ideal use case where SSR shines? I'm willing to buy it if I see it.
Most websites are significantly simpler to build and maintain with SSR and traditional tools. An entire generation has forgotten this it seems and still thinks in terms of JS frameworks even when trying SSR.
As one example take this website, which serves the page you wrote your comment on using an obscure Lisp dialect and SSR.
Wait, is SSR a thing outside the context of websites?
It gets rather painful though, which is why we don't do that anymore.
Microsoft introduced XMLHttpRequest in 2000 for this exact reason - its original purpose was to allow the newly introduced Outlook web UI to fetch data from the server and use that to update the DOM as needed. This was then enthusiastically adopted by other web app authors (most notably Google, with GMail, Google Maps, and Google Talk), and other browsers jumped onto the bandwagon by providing their own largely compatible implementations. By 2006 it was a de facto standard, and by 2008 it was standardized by W3C.
The pattern was then known as AJAX, for "asynchronous JS and XML". At first web apps would use the browser API directly, but jQuery appeared right around that time to provide a consistent API across various browsers, and the first SPA frameworks showed up shortly after (I'm not sure if I remember correctly, but I think GWT was the first in 2006).
I run skatevideosite.com, and when I took it over I accidentally did the first rewrite in react, because that's all I knew. I absolutely tanked the SEO.
Rewrote it in rails and got everything back in shape and it’s been a fun experience!
Many teams use this with React.
Hotwire is the default and they develop it because DHH wants to, but they're not putting up any barriers to you using whatever you want.
Also, DHH doesn't seem to care about how big it is. His stated goal is for it to forever be a framework that's usable by a single dev.
Dunno, I loved rails, built monoliths, built api-only, but when I tried sprinkling a bit of react in my views (say you only need a little bit of interaction, or want to use a react date picker) then there's all these sharp edges.
The reason I want it to be bigger is that user base helps the single dev, with help resources, up to date gems, and jobs.
If you need only a sprinkling why not vanilla JS with Stimulus? Pulling in React for only a "sprinkling" seems like overkill.
The benefit of the current approach is that you can use any vanilla JS code, and it's especially easy if it uses ES Modules.
Also the whole point of React/Vue/Svelte is that they're all complete frameworks meant to do your whole UI. Using "just a sprinkling" of these seems like the worst of both worlds.
Dunno, my app pulls in a fairly heavy JS dependency (Echarts) and it took all of 2 minutes to set it up using Stimulus.
I would really be interested in real world performance metrics comparing load times etc. on a stock nextjs app using defaults vs. rails and co.
- Cost
- Complexity
- Learning curve
- Scalability
- Frequent changes
- And surprisingly bad performance compared with the direct competitors
Nowadays, NextJS is rarely the best tool for the job. Next and React are sitting in the "never got fired for buying IBM" spot. It is a well earned position, as both had a huge innovational impact.
Do you need best in class loading and SEO with some interactivity later on? Astro with islands. Vitepress does something similar.
Do you need a scalable, cost efficient and robust stack and have moderate interactivity? Traditional SSR (RoR, Django, .NET MVC, whatever) with maybe some HTMX.
Do you have a highly interactive app? Fast SPA like Svelte, Solid or Vue.
NextJS generates by default all assets and js-chunks with a sha256 hash in the filename, essentially making them immutable. As outlined in the NextJS docs, I serve my assets folder with `Cache-Control: public, max-age=604800, immutable`. In a webapp where your users use your app on a semi-daily basis, that means all assets and resources will be cached forever, or until you re-deploy a new version of the app. The data comes via REST (in whatever backend language you want to use), so I don't see how any SSR can outperform NextJS here.
This is an interview with him last year on "one person" approaches to web app development that I liked a lot: https://www.youtube.com/watch?v=0rlATWBNvMw
https://world.hey.com/dhh/the-waning-days-of-dei-s-dominance...
https://world.hey.com/dhh/dei-is-done-minus-the-mop-up-b3bbb...
Americans don't seem to understand nuance, so when DHH posts about support for people's right to protest, how he loves being a father, how he doesn't want politics in the workplace and doesn't proclaim the sky is falling because of politics they seem to think he's the devil.
https://en.m.wikipedia.org/wiki/Paradox_of_tolerance
However he is right in many cases, and I don’t expect anyone to be right all the time, myself included. It’s strange to look for political leadership from a programmer anyhow.
For people who commonly use these frameworks -- is it common to have issues where data or code intended only for server execution makes its way onto the client? Or are there good safeguards for this?
But for sure the lack of clear lines for where the server ends and the client begins has always been a pain of these kinds of framework offerings.
without ANY irony or sarcasm, i just want to appreciate that it's funny how that happens completely without explicit desire or intention to have this effect from the developers of Next (i'm serious, don't hate me guys, we are friends, i do believe that this ofc is not intended)
i'm sure there's a good and meaningful explanation (that I'm interested in reading) but lots of little microdecisions compound when the developer of the framework does not also experience it as a paying customer (or, more subtly, the developer of the framework wants to serve the 10000x larger enterprise customer and needs to make choices to balance that vs the needs of the small)
You can prototype stuff very fast with rails and its a mighty tool in the right hands.
Not everyone has looked into or tried everything.
The upside is that by not trying to hide the database and pretend it doesn't exist, you can avoid a whole class of work (and the safety abstractions provided) and be incredibly productive if the requirements align.
Rails also uses way too much magic to dynamically construct identifiers and do control flow.
The over-use of magic and the under-use of static types makes it extraordinarily difficult to navigate Rails codebases. It's one of those things where you have to understand the entire codebase to be able to find anything. Tractable for tiny projects. For large projects it's a disaster.
Rails is a bad choice (as is Ruby).
My favourite web framework at the moment is Deno's Fresh. You get the pleasure of TSX but it's based around easy SSR rather than complex state management and hooks. Plus because it's Deno it's trivial to set up.
All that being said I still use (and like) Rails, currently comparing Phoenix/Elixir to Rails 8 in a side project. But I use typescript w/ Node and Bun in my day job.
Rails is a sharp knife. There is Rails way to do things. You may of course choose to do them differently (this is a contrast with other toolkits that fight this hard), but you are going to have to understand the system well to make that anything but a giant hassle.
With rails, the way it scales is statelessness. You have to map the front end actions to individual endpoints on the server. This works seamlessly for crud stuff (create a thing; edit a thing; delete a thing; list those things). For other use cases it works less seamlessly. NB: it works seamlessly for nested "things" too.
Complex multi-step flows are a pain point. eg you want to build data structures over time where between actions on the server (and remember, you must serialize everything you wish to save between each action), you have incomplete state. Concretely: an onboarding flow which sets up 3 different things in sequence with a decision tree is going to be somewhat unpleasant.
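To make the pain concrete, the usual workaround is to serialize partial progress to a record between steps - roughly like this sketch (the Onboarding model, its step column, and the path helper are hypothetical):

class OnboardingsController < ApplicationController
  # Each step is its own stateless request; partial progress must be
  # persisted between actions because nothing lives on the server in between.
  def update
    onboarding = Onboarding.find(params[:id])
    onboarding.update!(step_params.merge(step: onboarding.step + 1))
    redirect_to onboarding_step_path(onboarding, step: onboarding.step) # hypothetical helper
  end

  private

  def step_params
    params.require(:onboarding).permit(:team_name, :plan)
  end
end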
You must keep most state on the server and limit FE state. Hotwire works extremely well but the FE must be structured to make hotwire work well.
I've actually found it to work pretty well with individual pages build in react. My default is to build everything with hotwire and, when the FE gets too complex, to fall back to react.
Rails is nobody's idea of a fast system. You can make it perform more than well enough, but fast it is not.
Upsides, my take: it is the best tool to build websites. The whole thing is built by developers for developers. DX and niceness of tools are valued by the community. Contrast with eg the terrible standard lib that other languages (hi, js) have. Testing is by far the most pleasant I've used, with liberal support for mocking rather than having to do DI. For eg things like [logic, api calls, logic, api calls, logic, db calls] it works incredibly well. It is not the most popular toolkit and it's not react, so that can count against you in hiring.
If you have a class `Customer` with a field `roles` that is an array of strings, you can write code like this
class Customer
  ROLES = ["superadmin", "admin", "user"]

  ROLES.each do |role|
    define_method("is_#{role}?") do
      roles.include?(role)
    end
  end
end
In this case, I am dynamically defining 3 methods: `is_superadmin?`, `is_admin?` and `is_user?`. This code runs when the class is loaded by the Ruby interpreter. If you were just freshly introduced into this codebase, and you saw code using the `is_superadmin?` method, you would have no way of knowing where it's defined by simply grepping. You'd have to really dig into the code - which could be made more complicated by the fact that this might not even be happening in the Customer class. It could happen in a module that the Customer class includes/extends.

The other feature is `method_missing`. Here's the same result achieved by using that instead of define_method:
class Customer
  ROLES = ["superadmin", "admin", "user"]

  def method_missing(method_name, *args)
    if method_name.to_s =~ /^is_(\w+)\?$/ && ROLES.include?($1)
      roles.include?($1)
    else
      super
    end
  end
end
Now what's happening is that if you try to call a method that isn't explicitly defined using `def` or the other `define_method` approach, then as a last resort before raising an error, Ruby checks `method_missing` - you can write code there to handle the situation.

These 2 features combined with modules are the reason why "Go to Definition" can be so tricky.
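One caveat worth adding to that example: Ruby convention is that any class defining `method_missing` should also override `respond_to_missing?`, otherwise `respond_to?` lies about the dynamic methods. A minimal companion sketch:

class Customer
  # Without this, customer.respond_to?(:is_admin?) returns false even
  # though calling is_admin? succeeds via method_missing above.
  def respond_to_missing?(method_name, include_private = false)
    (method_name.to_s =~ /^is_(\w+)\?$/ && ROLES.include?($1)) || super
  end
end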
Personally, I avoid both define_method and method_missing in my actual code since they're almost never worth the tech debt. I have been developing in Rails happily for 15+ years and only had one or two occasions where I felt they were justified and the best approach, and that code was heavily sprinkled with comments and documentation.
See here:
https://github.com/heartcombo/devise/blob/main/lib/devise/co...
def self.define_helpers(mapping) #:nodoc:
  mapping = mapping.name

  class_eval <<-METHODS, __FILE__, __LINE__ + 1
    def authenticate_#{mapping}!(opts = {})
That code is *literally* calling class_eval with a multi-line string parameter, where it inlines the helper name (like admin, user, whatever), to grow the class at runtime. It hurts my soul.
Dynamically generated methods can provide amazing DX when used appropriately. A classic example from Rails is belongs_to, which dynamically defines methods based on the arguments provided:
class Post < ApplicationRecord
  belongs_to :user
end
This generates methods like:
post.user - retrieves the associated user
post.user=(user) - sets the associated user
post.user_changed? - returns true if the user foreign key has changed.
Customer.limit(1000).to_a
completes in about 10ms, whereas:
Customer.last(1000).pluck(:id, :name, :tenant_id, :owner_id, :created_at, :updated_at)
runs in around 7ms.
Active Record methods are defined at application boot time as part of the class; they're not rebuilt each time an instance is created. So in a typical web app, there's virtually no performance penalty for working with Active Record objects.
And when you do need raw data without object overhead, .pluck and similar methods are available. It’s just a matter of understanding your use case and choosing the right tool for the job.
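For illustration, the trade-off in miniature (reusing the Customer example from above):

# Instantiates 1,000 full Active Record objects just to read two columns:
Customer.limit(1000).map { |c| [c.id, c.name] }

# Skips model instantiation and returns plain arrays straight from the query:
Customer.limit(1000).pluck(:id, :name)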
---
I have now done several Google searches to - well, admittedly, to try and counter your argument; but what I've since found is:
* Every friggin' benchmark is wildly different [0, 1]
* Some of these test pages are obnoxious to read and filter; **BUT** Javascript regularly finds itself to be **VERY** fast [0]
On a more readable and easily-filtered version (that has very different answers) [1],
* plain Javascript (not Next.js) has gotten *REALLY* fast, serverside
* Kotlin is (confusingly?!) often slower than JS, depending on the benchmark
^-- this one doesn't make sense to me
^-- in at least one example, they're basically on par (70k rps each)
* Ruby and Python are painfully slow, but everyone else sorta sits in a pack together

I will probably be able to find another benchmark that says completely different things.
Benchmarking is hard.
I'm also having trouble finding the article from HN that I was sure I saw about Next.JS's SSR rendering performance being abysmal.
[0] https://www.techempower.com/benchmarks/#section=data-r23
[1] https://web-frameworks-benchmark.netlify.app/result?asc=0&f=...
https://benchmarksgame-team.pages.debian.net/benchmarksgame/... is the most adequate, if biased in ways you may not agree with, for understanding raw _language_ overhead on optimized-ish code (multiplied by the submission authors' willingness to overthink/overengineer - you may be interested in comparing specific submissions). Which is only half (or even one third) of the story, because the other half, as you noted, is the performance of frameworks/libraries. E.g. Spring is slow, ActiveJ is faster.
However, it's still important to look at the performance of the most popular libraries, and at how well the language copes with somewhat badly written user code - which will dominate latency far more often than anyone trying to handwave away the shortcomings of interpreted languages with "but it's I/O bound!!!" would be willing to admit.
- Ruby's global interpreter lock (GIL) makes concurrent code less performant than async programming in JS (and some other languages)
- Rails creates a monolith rather than a bunch of independent endpoints. If you have a large team, this can be tricky (but is great for smaller teams who want to move fast)
- How Rails integrates with JS/CSS is always changing. I recommend using Vite instead of the asset pipeline, unless you're going with the standard Rails Stimulus JS setup.
- Deploying Rails in a way that auto-scales the way serverless functions can is tricky. The favored deployment is to servers of a fixed size using Kamal.
I often see myself going back to Ruby on Rails for my private stuff. It's always a pleasure. On the other hand, there are so few Rails people available (compared to JS) that it's not viable for any professional project. It would be irresponsible to choose that stack over JS (and often Java) for the backend.
Anyone have similar feelings?
I'm personally an Elixir/Phoenix fanboy now, so I don't choose Rails as my first choice for personal projects, but I think it is an excellent choice for a company. In fact, I would probably recommend it the most over any framework if you need to hire for it.
This has been my experience.
It is very easy to write a server with it, hosting and deploying is painless, upgrading it (so far) has been painless, linting and debugging has been a breeze.
If you're coming from Ruby, then learning Elixir requires a small mental adjustment (from Object Oriented to Functional). Once you get over that hump, programming in Elixir is just as much fun as Ruby! :)
I still haven't found an ORM with JS that really speaks to me.
> there are so few rails people available (compared to js) that it's not viable for any professional project
I don't think this is true; Shopify is a Rails shop (but perhaps it's more accurate to say it's a Ruby shop now). It feels easy to make a mess in Rails though, imo that's the part that you could argue is irresponsible
My take: the JS ecosystem tends to avoid abstraction for whatever reason. Example: they don’t believe that their web framework should transparently validate that the form submission has the correct shape because that’s too magical. Instead the Right Way is to learn a DSL (such as Zod) to describe the shape of the input, then manually write the code to check it. Every single time. Oh and you can’t write a TS type to do that because Reasons. It all comes off as willful ignorance of literally a decade or more of established platforms such as Rails/Spring/ASP.NET. All they had to do was steal the good ideas. But I suspect the cardinal sin of those frameworks was that they were no longer cool.
I have a hard time relaying this without sounding too negative. I tried to get into SSR webdev with TS and kept an open mind about it. But the necessary ingredients for me weren’t there. It’s a shame because Vite is such a pleasure to develop with.
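For contrast, this is roughly what the "transparent" version looks like in Rails with strong parameters - the framework enforces the input shape at the controller boundary (the Post model and its fields here are made up):

class PostsController < ApplicationController
  def create
    # Raises ActionController::ParameterMissing (rendered as a 400) if
    # params[:post] is absent; non-permitted keys are silently dropped.
    post = Post.create!(post_params)
    redirect_to post
  end

  private

  def post_params
    params.require(:post).permit(:title, :body)
  end
end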
I thought Prisma.js was the most popular by far? It's the one I've always seen used in docs and examples.
Someone will steal the good ideas eventually. And everyone will act like it’s the first time this idea has ever come up. I’ve seen it happen a few times now, and each time it makes me feel ancient.
As a side effect this reminds me how much I must not know what I don't know. It's a bit scary.
Where I work, as an org, we have gone all-in on Prisma across dozens of projects so getting buy-in to try something else will be an uphill battle.
Long story short: I ended up choosing ASP.NET Core with Minimal APIs. The main reason was indeed EF Core as the ORM, which I consider one of, if not the, best ORMs. In the Node world there's so much promise (Prisma, Drizzle, ...) but also so much churn.
I don't mean that rewrite hell is a permanent state, but you will always be rewriting parts of your project. I'd rather choose an ecosystem where the friction for rewriting is minimal.
Choose boring tech that doesn't change, since it's already mature and battle tested, and because it is not beholden to the whims of some VC money or whatever.
React itself (not Next.js) doesn't change a lot and will let you run your app for the next decade at least.
Same with any boring PHP, Ruby, Python, Java, dotnet framework out there.
You might need to upgrade versions, but there will very seldom be breaking changes where you have to rewrite a lot.
Just use Gel [1] and you won't have to deal with ORMs (plus you get other great features).
Do you need a separate frontend framework? No, probably not, and that's exactly the problem that Next solves - write your backend and frontend in the same place.
Do you need a complicated build process? No. You want your build process to be just "run npm". And that's what something like Next gets you.
"Monolithic RoR app with HTML templates on VPS" would introduce more problems than it solves. If Next-style frameworks had come first, you would be posting about how RoR is a solution in search of a problem that solves nothing and just overcomplicates everything. And you'd be right.
Every time I hit the "should we use GraphQL" question in the last decade, we balked because we already had fast REST-like APIs and couldn't see how it would get faster.
To your point, it was more of a mish-mash than anything with a central library magically dealing with the requests, so there was more cognitive load, but it also meant we had much more control over the behavior and performance profile.
Not remotely true. There are plenty of web apps that work just fine with a standard fixed set of API endpoints with minimal if any customization of responses. Not to mention the web apps that don't have any client-side logic at all...
GraphQL solves a problem that doesn't exist for most people, and creates a ton of new problems in its place.
The value of GraphQL is also its downfall. The flexibility it offers to the client greatly complicates the backend, and makes it next to impossible to protect against DoS attacks effectively, or even to understand app performance. Every major implementation of GraphQL I've seen has pretty serious flaws deriving from this complexity, to the point that GraphQL APIs are more buggy than simpler fixed APIs.
With most web apps having their front-end and back-end developed in concert, there's simply no need for this flexibility. Just have the backend provide the APIs that the front-end actually needs. If those needs change, also change the backend. When that kind of change is too hard or expensive to do, it's an organisational failing, not a technical one.
Sure, some use-cases might warrant the flexibility that GraphQL uses. A book tracking app does not.
But also no problem with it. There might be some queries expressible in your GraphQL that would have severe performance problems or even bugs, sure, but if your frontend doesn't actually make queries like that, who cares?
> Just have the backend provide the APIs that the front-end actually needs. If those needs change, also change the backend.
Sure, but how are you actually going to do that? You're always going to need some way for the frontend to make requests to the backend that pull in related data, so that you avoid making N+1 backend calls. You're always going to have a bunch of distinct but similar queries that need the same kind of related data, so either you write a generic way to pull in that data or you write the same thing by hand over and over. You can write each endpoint by hand instead of using GraphQL, but it's like writing your own collection datatypes instead of just pulling in an existing library.
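One hedged sketch of that "generic way" without going full GraphQL - a constrained, JSON:API-style include parameter on a plain REST endpoint (the model, associations, and parameter name are all assumptions):

class PostsController < ApplicationController
  ALLOWED_INCLUDES = %w[author comments].freeze

  def show
    # Client asks for related data explicitly: GET /posts/1?include=author,comments
    includes = params[:include].to_s.split(",") & ALLOWED_INCLUDES
    scope = includes.any? ? Post.includes(*includes.map(&:to_sym)) : Post.all
    # One eager-loaded query batch instead of N+1 follow-up requests
    render json: scope.find(params[:id]).as_json(include: includes.map(&:to_sym))
  end
end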
The tools and patterns to limit these (very common, in any kind of system) drawbacks are so well-established that it's a non-issue for anyone sincerely looking at the tech.
People with bad intentions can make those slow queries happen at high volume with custom tooling, they don’t have to restrict themselves to how the frontend uses the queries
Depends how your system is set up. I'm used to only allowing compiled queries on production instances, in which case attackers have no way of running a different query that you don't actually use.
Adversaries are not restricted to using your system the way you designed your system. GraphQL queries are trivial to pull out of Wireshark and other sniffers. If you deliver it to the browser, any determined-enough adversary will have it, period. I wouldn't be surprised in the least if it is already a thing for LLM models to sniff GraphQL endpoints in the quest for ever more data.
Do you understand how compiled queries in GraphQL (or even an old-school RDBMS) work? All that gets sent over the wire is the query id. There's physically no way to make the server execute a query the author didn't write.
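The mechanism is simple enough to sketch (illustrative only, not any particular library's API; Schema stands in for a graphql-ruby-style schema):

# The deployment ships with an allowlist of queries extracted at build time.
PERSISTED_QUERIES = {
  "q_user_books_v1" => "query($id: ID!) { user(id: $id) { books { title } } }"
}.freeze

def execute_persisted(params)
  query = PERSISTED_QUERIES[params[:query_id]]
  # Arbitrary query strings from the wire are never executed.
  raise ArgumentError, "unknown query id" unless query
  Schema.execute(query, variables: params[:variables])
end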
> You can write each endpoint by hand instead of using GraphQL, but it's like writing your own collection datatypes instead of just pulling in an existing library.
Everyone else is "pulling in an existing library", they have names like Express and Kysely, and thinking that Apollo is the only library that deserves this designation is a bit of a head-scratcher.
If you take the time to invest in a proper REST API first, odds are the endpoint already exists and you may not need to wait for a new backend build - the point being not to build a custom endpoint for every frontend change unless real-world performance requirements actually dictate it. You get tooling that is more mature and easier to maintain as a result, you make it easier for Product to experiment (remember: no backend change for every frontend change), and you're not using the fad-of-the-month just because it came out of a FAANG.
I agree! If you're in control of the experience, then I wouldn't choose GraphQL for a limited experience either.
The project started because Goodreads was retiring their API, and I wanted to create something better for the community. I have no idea how people will use it. The more we can provide, and the more flexible it is, the more use cases it'll solve.
So far we have hundreds of people using the GraphQL API for all kinds of things I'd never expect. That's the selling point of GraphQL for me - being able to build and figure out the use case later.
But I would never want to create a GraphQL API from scratch (not again). In this case, Hasura handles that all for us. In our case it was easier than creating a REST API.
You throw away all the debuggability and simplicity of REST for close to zero advantages.
Then there's the security, which also looks annoying to manage: yeah, sure, the front-end can do whatever it wants, but nobody ever wanted that.
1: one example being tax records with all associated information about tax collecting agencies and taxpayers — it's a lot of data
We've been using it in production for 10 years. Would I change a single thing? No. Every day I come to work thankful that this is the tech stack that I get to work on because it _actually works_ and doesn't break down, regardless of size.
But lots of apps can do with a lightweight pull API that can be tailored and that fits the application's access control model (both points of contrast to GraphQL), and it's less work and less risk than finding, integrating and betting on a GraphQL implementation.
As far as I understand hardcover was really created because goodreads discontinued their api and the team at hardcover saw how many people relied on it for a myriad of different niche projects.
If hardcover was just a replacement for the goodreads platform, then I'd agree with you. But it's not. It's there for the api, with the platform around it intended as a way to ensure free access of the api for everyone. And from that pov choosing GraphQL makes a lot of sense imo. You can't anticipate all the cool and different things people might want to do with it, so they chose the most flexible api spec they could.
On the other hand, I'm not sure a complete Rails rewrite was the right choice. The app was slow and sluggish beforehand, with frequent UI glitches, and it still has those same issues. Their dev blog claims significant performance increases, but as a user I haven't noticed a big difference. Sticking with Next.js, moving to a self-hosted instance, and then iteratively working on performance improvements would've been (imho) the better way forward. I see no reason why Next.js somehow fundamentally couldn't do what they're trying to do, but Rails can. Especially with just 30k users (which, to be clear, is a great achievement - just not impressive from a technical standpoint).
"Simplicity is achieved when there's nothing left to remove".
I was the same expert level with Python, now I'm using trpc, nextjs, drizzle, wakaq-ts, hosted on DO App Platform and you couldn't pay me enough to go back to Python, let alone the shitstorm mess that's every Rails app I've ever worked on.
I've also not seen the 1s Next.js pageloads you had, but I'm confident of figuring a fix if that becomes a problem.
I've built a few apps in it now, and to me, it starts to feel a bit like server-side React (in a way). All your HTML/components stream across to the user in reaction to their actions, so the pages are often very light.
Another really big bonus is that a substantial portion of the extras you'd typically run (Sidekiq, etc) can basically just be baked into the same app. It also makes it dead simple to write resilient async code.
It's not perfect, but I think it's better than RoR
TLDR; Are most Phoenix deployments focused on a local market or deployed 'at the edge' or are people ignoring the potentially janky experience for far-flung users?
However, Elixir and Phoenix is more than just LiveView! There’s also an Inertia plugin for Phoenix, and Ecto as an “ORM” is fantastic.
I haven't done a lot of optimistic updates with LiveView yet. I'm not sure how sanely you could really achieve it (because it seems you'd lose the primary benefit: server-side rendering / source of truth).
However, there are a few mechanisms you can use to note that the page is loading / processing a LV event that can assist the user in understanding the latency. e.g., at the very least, make a button spindicate. I've experienced (in my own apps) the "huh is the app dead?" factor with latency, which suggests I need to simulate latency more. If the socket is unstable or cannot connect, the app is just entirely dead, though the fallback to longpolling is satisfactory.
I think it would really shine for internal apps due to the sheer velocity and simplicity of developing and deploying it.
In the worst case, you could fall back to using regular controllers or APIs controllers, so I still see it being a "better version of Ruby" overall. However, if we're going back to this, I would rather use SolidStart and do it all in TypeScript anyway.
At the end of the day, I'm very torn between the resilience/ease/speed of Elixir and the much better type system in TS. The ability to just async something and know it will work is kind of crazy for improving performance of apps (check out assign_async)
> the majority of results are 300ms+
Another thing to consider is that a lot of apps (SPA powered by API) take 300~1000ms to even give you a JSON response these days. So if you can get by with making a button spin while you await the liveview response (or are content with topbar.js) I think you can get roughly close to the same experience.
> deployed 'at the edge'
The nice part of Elixir is you could probably make a global cluster quite easily. I've never done it though. You could have app nodes close to users. I think you'd have to think of a way to accelerate your DB connection however (which probably lives in 1 zone).
> loading the entire homepage only takes one query [if you're logged out]
You can do this with Next.js SSR - there's nothing stopping you from reading from a cache in a server action?
They also talk about Vercel hosting costs, but then self-host Rails? Couldn't they have self-hosted Next.js as well? Rails notoriously takes 3-4x the resources of other web frameworks because of its speed and resource usage.
Yep! It'd be possible with Next.js. The difference is how it's organized. In Next.js with RSCs, we were fetching data for each part of the page where it's used (trending books, Live events, blog posts, favorite books). Each of those could be their own cache hit to Redis.
One advantage of Rails is the controller. We can fetch all data in a single cache lookup. Of course it'd be possible to put everything needed in a single lookup in Next.js too, but then we wouldn't be leveraging RSCs.
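Something like this sketch, give or take (the cache key, TTL, and the scopes on the models are guesses, not their actual code):

class HomeController < ApplicationController
  def index
    # One round trip to the cache for everything the page needs,
    # instead of one hit per page section as with per-component RSC fetches.
    @homepage = Rails.cache.fetch("homepage:v1", expires_in: 10.minutes) do
      {
        trending_books: Book.trending.limit(10).to_a,
        live_events:    Event.upcoming.to_a,
        blog_posts:     BlogPost.recent.limit(5).to_a
      }
    end
  end
end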
I tried self-hosting Next.js on Digital ocean, but it crashed due to memory leaks without a clear way to understand where the leak was. Google Cloud Run and Vercel worked because it would restart each endpoint. We have more (and cheaper) hosting options with Rails.
“Deployment economy” is also new.
Rails has a very strong track record of matching internet scale.
Cloud is highly optimized for traditional server applications. From my experience with Next.js - this is the opposite. A lot of deployment components that don’t naturally fit in, and engineering required to optimize costs.
* difficult auth story. next-auth is limited in a few ways that drove us to use iron-session, such as not being able to use a dynamic identity provider domain (we have some gov clients who require us to use a special domain). This required us to basically own the whole openid flow, which is possible but definitely time we didn’t expect to have to spend in a supposedly mature framework.
* because the NextJS server wasn’t our primary API gateway we ended up having to proxy all requests through it just to add an access token to avoid exposing it on the client. The docs around this were not very clear, and this adds yet another hop with random gotchas like request timeout/max header size/etc.
* the framework is very aggressive about getting you on their cloud, and they make decisions accordingly. This was at odds with our goals.
* the maintainers aren’t particularly helpful. While on its own this would be easy to look past, there are other tools/frameworks we use in spite of their flaws because the maintainers are so accessible and helpful (shout out to Chillicream/HotChocolate!)
And Kotlin + Ktor feels very good to write in on serverside. Fast, easy and fluent to write in, like Ruby; but with Java's ecosystem, speed and types.
So, the components themselves will look something like this:
fun HtmlBlockTag.radioButtonWithLabel(
    groupName: String,
    id: String,
    hidden: Boolean = false,
    radioButtonFunc: (INPUT.() -> Unit)? = null,
    func: LABEL.() -> Unit
) {
    radioInput(name = groupName) {
        this.id = id
        this.hidden = hidden
        radioButtonFunc?.invoke(this)
    }
    label { this.htmlFor = id; func() }
}
And then use of them will be like this:

call.respondHtml {
    body {
        div(CSS_CLASS_NAME) {
            radioButtonWithLabel(groupName = "group", id = "id") {
                +"Text for the label"
            }
        }
    }
}
More complicated examples just extend that quite a lot.

I've also got whole files dedicated to single extension functions that end up being a whole section that I can place anywhere.
---
And then to test those single-function components, I'll do something like this:
class SingleSectionTest {
    private suspend fun ApplicationTestBuilder.buildApplicationAndCall(
        data: DataUsedForSection
    ): Document {
        application {
            routing {
                get("test") {
                    call.respondHtml {
                        body {
                            renderSingleSection(data)
                        }
                    }
                }
            }
        }
        val response = client.get("test")
        val body = Jsoup.parse(response.bodyAsText())
        return body
    }

    @Test
    fun `simple test case`() = testApplication {
        val data = DataUsedForSection("a", "b", "c")
        val body = buildApplicationAndCall(data)
        // all the asserts
    }
}
And so on. Is this what you were wondering? Or would you like a different sort of example?

I've gotten more used to them and I get why they can be so great now; but there's still some real annoyances with them I just can't shake (like the import problem and the related "where is this code?!" problem)
---
Purely importing a component that's just a simple class, like you can do in Java/Typescript with their jsx and tsx files would be pretty cool, yeah. You could fake it by making a data class and adding a
fun renderInto(tag: HtmlBlockTag) { ... }
type method, but with how Ktor's DSL is implemented, you're still going to need to connect it into the giant StringBuilder (or whatever) that the DSL is building to.

To help me get something close to that idea, I tend to dedicate whole files to bigger components, and even name the file the same as the root HtmlBlockTag extension method (like RadioButtonWithLabel.kt if I did it for the earlier one). Those files are great because you can put a bunch of helper methods in the same file and keep it all contained, a lot like a class.
Astro is also really nice and easy to learn and host.
You mentioned giving up on Remix after poking at it for a day. IMHO that was a mistake.
I love Next.js. I have used other frameworks including RoR and there is nothing like it (except Svelte or Nuxt but I view them as different flavors of the same core idea). But I only use it to make monoliths. I can see getting frustrated with it if I was using it alongside a different back end.
“use client”, server actions that aren’t scrutable in a network tab, laggy page transitions, and, until recently, inscrutable hydration errors: these are some of the recent paper cuts I experienced with Next.
I’d still use it for new projects, but I am keen to use TanStack Start when it’s ready
i’m personally really interested in the next wave of frameworks that make local first development intuitive, like One or something that bakes in Zero
The broader point was basically that the Rails UI integration tests took a very long time, and required the whole system up, and we had a pretty large team constantly pushing changes. While not 100% unique to Rails, it was exacerbated by RoR conventions.
We moved much of the UI to a few Next.js apps where the tests were extremely fast and easy to run locally.
How many integration tests do you have? I generally only test a few core flows and then leave the rest to controller/request tests.
As a historically backend developer, I've tended to dislike HTML/JS/CSS. It's a meaningfully different paradigm from Swing/AWT, WinForms, Android UX, etc. That alone was enough to frustrate me and keep me on the backend. To learn how to make frontends, I've since had to learn those 3. They're finally becoming familiar.
BUT, for front-end developers, they needed to learn "yet another language"; and a lot of these languages have different / obnoxious build systems compared to nvm and friends. And then, like anyone who's ever changed languages knows, they had to learn a whole bunch of new frameworks, paradigms, etc.
Well, they would have, but instead, some of them realized they could push Javascript to the backend. Yes, it's come with *a lot* of downsides; but, for the "Get Shit Done" crowd - and especially in the world of "just throw more servers at it" and "VC money is free! Burn it on infra!" - these downsides weren't anything worth worrying about.
But the front-end devs - now "full stack devs", but really "javascript all the things" devs - continued to create in a visible way. This is reflective of all the friggin' LinkedIn job postings right now that require Next.JS / Node.JS / whatever for their "full stack" positions. One language to rule them all, and all that.
Just some ramblings, but I think it's strongly related to why people would choose Next.JS __ever__, given all its downsides.
Not sure about Rails, haven't used it in more than a decade, but NextJS was a major contributor to massive burnout. Of one thing I'm certain: Phoenix is my last web framework. I love it to bits, and I hope to retire before it stops being cool.
You must imagine my chagrin when React started moving towards rendering on the server(SSR, Server Components, etc). I was happy to move to a full client implementation. Sadly, SEO cannot be ignored.
You cannot just blindly trust the page speed metric but it should be impossible to miss things like this when you are actually using the site. Compare the experience to something like GoodReads that's using plain old SSR and you'll immediately notice the difference.
I've been saying this forever, and this is a great reminder for the React-hating folks here on HN: usually it's the developer's fault that the app is slow, not the framework's.
I remember a similar experience many years ago, talking to John Brant and Don Roberts, who had done the original refactoring browser in Smalltalk. Java was on its meteoric rise, with tons of effort being dumped into Eclipse. They, and others with them, were eager to port these techniques to Eclipse, and the theory was they'd be able to do even more because of the typing. But Brant/Roberts found that, surprisingly, it was more difficult. Part of the problem was the AST: Java, while typed, had a complex AST (many node types), compared to that of Smalltalk (late/runtime typed), which had a very simple/minimal AST. It was an interesting insight.