I recently initiated the back-migration. My approach thus far has been to pull the "administrative" part out into Rails to benefit from all the useful conventions there, but keep the "business services" in JS or Python and have the two communicate. Best of both worlds, and the potential of all of rubygems, npm and pypi combined.
BTW, I’m also on a similar trajectory using a mix of Java, Python and Node.js to solve different problems. It has been a very pleasant experience compared to being bullish on just one of these languages and platforms.
Having just hit severe scaling issues with a Python service, I’m inclined to write my servers only in Go or Rust from now on. It’s only a bit harder, and you get something that can grow with you.
Convention over configuration and less code is fine, but unfortunately Rails is not a great example of it IMO. The "rails" are not strong enough; it's just too loosey goosey and it doesn't give you much confidence that you're doing it "the right way". The docs don't help much either, partly because of the history of breaking changes over releases. And the Ruby language also doesn't help because of the prolific globals/overrides and implicitness which makes for "magic".
So you're encouraged/forced to write exhausting tests for the same (normally dumb CRUD) code patterns over and over and over again. Effectively you're testing the framework more than your own "business logic", because most of the time there barely is any extra logic to test.
So I'm also surprised it gained the reputation it has.
Although that's really selling it short - it's so much more than that! But in the context of this conversation, it's a good place to look.
Since most websites will never scale past the limitations of these frameworks, the productivity gains usually make this the right bet to make.
Like yeah, I know you can do it. But it was much more effort to do things like writing robust migrations or frontend templates. I’d love to find something in Go or Typescript that made me feel quite as productive as Rails did.
Maybe I am comparing apples and oranges, not sure.
upsert_all[1] is available to update a batch of records in a single write that does not invoke model callbacks.
activerecord-import[2] is also a very nice gem that provides a great API for working with batches of records.
It can be as simple as extracting your callback logic into a method (def self.batch_update) and running it after the upsert.
[1] https://api.rubyonrails.org/classes/ActiveRecord/Relation.ht... [2] https://github.com/zdennis/activerecord-import
"It can be as simple as extracting your callback..." Isn't this the kind of repetitive thing a framework should be doing on your behalf?
To be fair, ActiveRecord isn't a fault Rails invented. Apparently it's from one of Martin Fowler's many writings where each model instance manages its own storage. Even Fowler seems to say that the DataMapper approach is better to separate concerns in complex scenarios.
The real driver is complexity cost. Every line of client JS brings build tooling, npm audit noise, and another supply chain risk. Cutting that payload often makes performance and security better at the same time. Of course, Figma‑ or Gmail‑class apps still benefit from heavy client logic, so the emerging pattern is “HTML by default, JS only where it buys you something.” Think islands, not full SPAs.
So yes, the pendulum is swinging back toward the server, but it’s not nostalgia for 2004 PHP. It’s about right-sizing JavaScript and letting HTML do the boring 90% of the job it was always good at.
The first framework I ever got to use was GTK with Glade, and Qt with Designer shortly thereafter. These, I think, show the correct way to structure your applications anywhere, and the approach works great on the web too.
Use HTML and CSS to create the basis of your page. Use the <template> and <slot> mechanisms to make reusable components, or create widgets directly in your HTML. Anything that gets rendered should exist here. There should be very few places where you dynamically create and then add elements to your page.
Use JavaScript to add event handlers, receive events, and just run native functions on the DOM to manage the page. The dataset on all elements is very powerful, and WeakMaps exist for when that's not sufficient. You have everything you need right in the standard environment.
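To make that concrete, here's roughly what the pattern looks like (a TypeScript sketch; the template markup, ids and class names are all invented for illustration):

// Assumes the page contains:
//   <ul id="rows"></ul>
//   <template id="row-template"><li class="row"></li></template>
const template = document.getElementById("row-template") as HTMLTemplateElement;
const list = document.getElementById("rows")!;

// Per-element state that doesn't fit in dataset goes in a WeakMap,
// so it is garbage-collected together with the element.
const rowState = new WeakMap<Element, { loadedAt: number }>();

function addRow(title: string, id: string): void {
  const fragment = template.content.cloneNode(true) as DocumentFragment;
  const row = fragment.querySelector(".row") as HTMLElement;
  row.textContent = title;
  row.dataset.id = id; // small state lives right on the element
  rowState.set(row, { loadedAt: Date.now() });
  list.append(fragment);
}

// One delegated listener for the whole list, no framework needed.
list.addEventListener("click", (event) => {
  const row = (event.target as Element).closest(".row") as HTMLElement | null;
  if (row) console.log("clicked row", row.dataset.id, rowState.get(row));
});

addRow("First item", "1");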
If your application is API driven then you're effectively just doing Model-View-Controller in a modern way. It's exceptionally pleasant when approached like this. I have no idea why people torture themselves with weird opinionated wrappers around this functionality, or in the face of an explosion of choices, decide to regress all the way back to server side rendering.
I've heard people say they just want "Pure JS" with no frameworks at all because frameworks are too complex, for their [currently] small app. So they get an app working, and all is good, right until it hits say 5000 lines of code. Then suddenly you have to re-write it all using a framework and TypeScript to do typing. Better to just start with an approach that scales to infinity, so you never run into the brick wall.
It's not about using the most powerful tool always, it's about knowing how to leverage modern standards rather than reinventing and solving problems that are already solved.
People who say stuff like this have obviously never actually used modern day FE frameworks, because they have all been very stable for a long while. Yes, APIs change, but that's not unique to JS/frontend, and also nothing really forces you to update with them unless you really need some shiny new feature, and at least IME Vue 3 has been nothing but gold since we got on it.
As I’ve become more senior I’ve realized that software devs have a tendency to fall for software “best practices” that sound good on paper but they don’t seem to question their practical validity. Separation of concerns, microservices, and pick the best tool for the job are all things I’ve ended up pushing back against.
In this particular case I’d say “pick the best tool for the job” is particularly relevant. Even though this advice is hard to argue against, I think it has a tendency to take developers down the path of unnecessary complexity. Pragmatically it’s usually best to pick a single tool that works decently well for 95% of your use cases instead of a bunch of different tools.
Everything after ready should have been static content.
I am saying that allowing for JavaScript to be dynamically downloaded and executed after the page is ready was a mistake.
You can build your Google docs, your maps, and figmas. You don’t need JS to be sent after the page is ready to do so.
Today, what you are saying is definitely a concern, but all APIs are abused beyond their intended uses. That isn’t to say we shouldn’t continue to design good ones that lead users in the intended direction.
Thinking about how the web was designed today, isn’t necessarily good when considering how it could work best tomorrow.
Not quite, I wasn't trying to make a bigger point about is/ought dynamics here, I was more curious specifically about the Google Maps example and other instances like it from a technical perspective.
Currently on the web, it's very easy to design a web page where you only pay for what you use -- if I start up a feature, it loads the script that runs that feature; if I don't start it up, it never loads.
It sounds like in the model proposed above where all scripts are loaded on page-load, I as a user face a clearly worse experience either by A.) losing useful features such as Street View, or B.) paying to load the scripts for those features even when I don't use them.
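For what it's worth, the pay-for-what-you-use part is usually just a dynamic import() behind the feature's entry point, something like this (the module path and element ids are hypothetical):

const streetViewButton = document.querySelector("#street-view-button")!;

streetViewButton.addEventListener("click", async () => {
  // Nothing for this feature is downloaded until the user asks for it.
  const { initStreetView } = await import("./street-view.js");
  initStreetView(document.querySelector("#map")!);
});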
We had that in the form of MapQuest, and it was agonizingly slow. Click to load the next tile, wait for the page to reload, and repeat. Modern SPAs are a revelation.
You still have to deal with all the tooling you are talking about, right? You’ve just moved the goalpost to the BE.
And just like the specific use cases you mentioned for client routing, I can also argue that many sites don’t care about SEO or first paint, so those are non-features.
So honestly I would argue for SPA over a server framework as it can dramatically reduce complexity. I think this is especially true when you must have an API because of multiple clients.
I think the DX is significantly better as well with fast reload where I don’t have to reload the page to see my changes.
People are jumping into Next.js because React is pushing it hard, even though it’s a worse product with questionable motives behind it.
I’ve seen undefined make it all the way to the backend and get persisted in the DB. As a string.
JS as a language just isn’t robust enough and it requires a level of defensive programming that’s inconvenient at best and a productivity sink at worst. Much like C++, it’s doable, but things are bound to slip through the cracks. I would actually say overall C++ is much more reasonable.
You should still know a language like Rust or Zig for systems work, and if you want to work in ML or data management you probably can't escape Python, but Typescript with Bun provides a really compelling development experience for most stuff outside that.
I greatly prefer Java with Spring Boot for larger backend projects.
Every form also normally ends up duplicating validation logic both in JS for client-side pre-submit UX and server-side with whatever errors it returns for the JS to then also need to support and show to the user.
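If both ends are JS/TS, one way around that duplication is a single validation module imported by the browser bundle and the server alike: the client call is just a UX nicety, and the server call stays the source of truth. A minimal sketch (field names invented):

// validation.ts -- shared by the client-side pre-submit check and the server
export function validateSignup(input: { email?: string; age?: string }): string[] {
  const errors: string[] = [];
  if (!input.email || !/^\S+@\S+\.\S+$/.test(input.email)) {
    errors.push("Please enter a valid email address.");
  }
  const age = Number(input.age);
  if (!Number.isInteger(age) || age < 18) {
    errors.push("You must be at least 18.");
  }
  return errors; // empty array means the input passed
}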
Anecdotally, it seems like I encounter a lot more web apps these days where refreshing doesn’t reset the state, so it’s just broken unless I dig into dev tools and start clearing out additional browser state, or removing params from the URL.
Knock it off with all the damn state! Did we forget the most important lesson of functional programming: that state is the root of all evil?
There are times the user experience is just objectively better with more state, and you have to weigh the costs.
If I am filling out a very long form (or even multi-page form) I don’t really want all that state lost if I accidentally refresh the page.
Most companies aren’t international.
For applications that are not highly interactive, you don't need a lot of tooling on the BE, and since you need a BE anyway, a lot of standard tooling is already there.
React-style SPAs are useful in some cases, but most apps can live with HTMX-style "SPA"s.
“So the backend gave this weird …”
“What backend?”
“The backend for the frontend…”
“So not the backend for the backend for the frontend?”
I jest, but only very slightly.
> I think the DX is significantly better as well with fast reload…
As a user, the typical SPA offers a worse experience. Frequent empty pages with progress bars spinning before some small amount of text is rendered. Your typical SPA has loads of pointless roundtrips. SSR has no excess roundtrips by definition, but there are probably ways to build a 'SPA' experience that avoids these too. (E.g. the "HTML swap" approach others mentioned ITT tends to work quite well for that.)
The high compute overhead of typical 'vDOM diffing' approaches is also an issue of course, but at least you can pick something like Svelte/Solid JS to do away with that.
This is an implementation choice/issue, not an SPA characteristic.
> there's probably ways to build a 'SPA' experience that avoids these too
PWAs/service workers with properly configured caching strategies can offer a better experience than SSR (again, when implemented properly).
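As an example of "properly configured", a stale-while-revalidate strategy is only a few lines in a service worker. A sketch (the cache name is arbitrary and error handling is omitted):

self.addEventListener("fetch", (event: any) => {
  if (event.request.method !== "GET") return;

  event.respondWith(
    caches.open("app-cache-v1").then(async (cache) => {
      const cached = await cache.match(event.request);
      const network = fetch(event.request).then((response) => {
        cache.put(event.request, response.clone()); // refresh for next visit
        return response;
      });
      // Answer instantly from cache when we can, else wait for the network.
      return cached ?? network;
    })
  );
});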
> The high compute overhead...
I prefer to do state management/reconciliation on the client whenever it makes sense. It makes apps cheaper to host and can provide a better UX, especially on mobile.
Yes, I know that this can be made to work properly, in principle. The problem is that it requires effort that most web devs are apparently unwilling to spend. So in practice things are just broken.
My hypothesis is that they’ve had to deal with so many random web apps breaking the back button so that behaviour is no longer intuitive for them. So I don’t push back against in-app back buttons any more.
An auction site I use loads in the list of auctions after the rest of the page loads in, and also doesn't let you open links with middle click or right click>new tab, because the anchor elements don't have href attributes. So that site is a double-dose of having to open auctions in the same tab, then going back to the list page and losing my place in the list of auctions due to the late load-in and failure to save my scroll location.
1. Fetch index.html
2. Fetch JS, CSS and other assets
3. Load personalized data (JSON)
But usually steps 1 and 2 are served from a CDN, so very fast. On subsequent requests, 1 and 2 are usually served from the browser cache, so extremely fast.
SSR is usually not faster. Most often slower. You can check yourself in your browser dev tools (network tab):
vs.
Poster child SSR: https://nextjs.org/
So much complexity and effort in the nextjs app, but so much slower.
It just depends on what you are after. You can completely drop the backend and APIs, and have a real-time, web-socketed sync layer that goes straight to the database. There is still a row-based permissions layer for security, but you get the idea.
The client experience is important in our app, and we have found that a backend just slows us down.
You might be able to drop a web router, but pretending this is "completely drop[ping] the backend" is silly. Something is going to have to manage connections to the DB, and you're not -- I seriously hope -- literally going to expose your DB socket to the wider Internet. Presumably you will have load balancing, DB replicas, and that sort of thing as your scale increases.
This is setting aside just how complex managing a DB is. "completely drop the backend" except the most complicated part of it, sure. Minor details.
Which is fine and cool for an app, but if you do something like this for say, a form for a doctor's office, I wish bad things upon you.
That's never the case.
Yes, that one. I want that experience please.
If you are curious, my most recent blog post is all about this concept[0] which I wrote because people seem to be misinformed on what RSCs really are. But that post didn't gain any traction here on HN.
Is it more complex? Sure, but it is also more powerful and flexible. It's just a new paradigm, so people are put off by it.
[0] Server Components Give You Optionality https://saewitz.com/server-components-give-you-optionality
Maybe the answer was never in JS eating the entire frontend, and changing the tooling won’t make it better, as it’s always skirting what’s actually good for the web.
I used to agree but these days with Vite things are a lot smoother. To the point that I wouldn't want to work on UI without fine-grained hot reloads.
Even with auto reload in PHP, .NET, etc you will be wasting so much time. Especially if you're working on something that requires interaction with the page that you will be repeating over and over again.
That’s honestly not that many things IRL. If you look at all the things you build, only a minority actually demand high interactivity or highly custom JS. Otherwise, existing UI libraries cover the bulk of what people actually need to do on the internet (i.e., not whatever overly fancy original idea the designers think your special product needs).
It’s mostly just dropdowns and text and tables etc.
Once you try moving away from all of that and questioning if you need it at every step you’ll realize you really don’t.
It should be server-driven web by default with a smattering of high-functionality islands of JS. That’s what Rails figured out after changing the frontend back and forth.
> Even with auto reload in PHP, .NET, etc you will be wasting so much time
Rails has a library that will refresh the page when files change without a full reload, using Turbo/Hotwire. Not quite HMR but it’s not massively different if your page isn’t a giant pile of JS, and loads quickly already.
What if you have a modal opened with some state?
Or a form filled with data?
Or some multi-selection in a list of items that triggers a menu of actions on those items?
Etc.
And it's true Vite can't always do HMR but it's still better than the alternative.
No -- but you could. And it wouldn't be the end of the world. So I'm just saying, DX doesn't eclipse all other considerations.
Didn't everybody say the exact same thing about Node, React, jQuery...? There is always a new and shiny frontend JS solution that will make the web dev of old obsolete and everyone loves it because it's new and shiny, and then a fresh crop of devs graduates school, the new shiny solution is now old and boring, and like a developer with untreated ADHD, they set out to fix the situation with a new frontend framework, still written in JavaScript, that will solve it once and for all.
I still build websites now the same as I did when I graduated in 2013. PHP, SQL, and native, boring JavaScript where required. My web apps are snappy and responsive, no loading bars or never-ending-spinning emblems in sight. shrug
I'm quite surprised to hear this is a common thing. Besides myself, I don't know a single person who has ever installed a PWA. People in tech don't, despite knowing they exist; people outside tech don't know they exist in the first place.
Does management actually have any PWAs installed themselves?
They should have designed it as a proper native experience.
'Twas before my time. What was so great about it? I remember needing it installed for Netflix like 15 years ago. Did you ever work with Flash? How was that?
If you ever worked seriously on anything non-SPA you would never, ever claim SPAs “dramatically reduce complexity”. The mountain of shit you have to pull in to do anything is astronomical even by PHP's standards, and I hate PHP. Those days were clean compared to what I have to endure with React and friends.
The API argument never sat well with me either. Having an API is orthogonal: you can have one or not, and you can have one alongside an SSR app. In the AI age, an API is the easy part anyway.
This. From a security perspective, server side dependencies are way more dangerous than client side.
You write:
<div id="moo" />
<form hx-put="/foo" hx-swap="outerHTML" hx-target="#moo">
<input hidden name="a" value="bar" />
<button name="b" value="thing">Send</button>
</form>
Compared to (ChatGPT helped me write this one, so maybe it could be shorter, but not that much shorter, I don't think?):
<div id="moo" />
<form>
<input hidden name="a" value="bar" />
<button name="b" value="thing" onclick="handleSubmit(event)" >Send</button>
</form>
<script>
async function handleSubmit(event) {
event.preventDefault();
// the form submit stuff
const form = event.target.form;
const formData = new FormData(form);
const submitter = event.target;
if (submitter && submitter.name) {
formData.append(submitter.name, submitter.value);
}
// hx-put
const response = await fetch('/foo', {
method: 'PUT',
body: formData,
});
// hx-swap
if (response.ok) {
const html = await response.text();
// hx-target
const target = document.getElementById('moo');
const temp = document.createElement('div');
temp.innerHTML = html;
target.replaceWith(temp.firstElementChild);
}
}
</script>
And the former just seems, to me at least, way way *way* easier to read, especially if you're inserting those all over your code.

Fast forward to what I am doing today in my new job. We have a pretty complex setup using RedwoodJS along with several layers of abstraction with GraphQL (which I approve of) and a ton of packages and modules tied together on the front end with React, Storybook, etc., and some things I am not even sure why they are there. I see new engineers joining our team and banging their heads to make even the smallest of changes, having to make code changes in multiple different places to implement new features. I find myself doing similar things from time to time, and I always can't help but compare this with the complexity I used to deal with in those MVC frameworks, and how ridiculously easy it was to just throw logic in a controller and a service layer, and the UI stuff in the view templates. It all fit in so easily and shipping features was super simple and quick.
I wouldn't discount React as a framework, but I am also starting to see some cracks caused by using TypeScript on the backend. This entire JavaScript world seems to be a mess you don't want to mess with. This is probably just me with an opinion, but using Turbo, Stimulus, and sprinkles of LiveView got me really, really far very quickly.
For the disadvantages, I cannot think of any. It is a bit slower than hand rolling your own REST API, but the difference is not severe enough to make you give up on it.
On the plus side, it does offer communication advantages if you have entirely independent BE and FE teams, and it can help minimize network traffic for network-constrained scenarios such as mobile apps.
Personally, I have regretted using GraphQL every time.
Many interactions are simply better delivered from the client. Heck some can only be exclusively delivered from the client (eg: image uploading, drag and drop, etc).
With HTMX, LiveViews, etc there will be challenges integrating server and client code... plus the mess of having multiple strategies handling different parts of the UI.
I would consider that the bare acceptable minimum, along with an upload progress indicator.
But it can get a lot more complicated. What if you need to upload multiple images? What if you need to sort the images, add tags, etc? See for example the image uploading experience of sites like Unsplash or Flickr.
HTMX just isn't the right tool to solve this unless you're ready to accept a very rudimentary UX.
IME this is backwards. All that stuff is a one-off fixed cost, it's the same whether you have 10 lines of JS or 10,000. And sooner or later you're going to need those 10 lines of JS, and then you'll be better off if you'd written the whole thing in JS to start with rather than whatever other pieces of technology you're using in addition.
Was this not the case? And if so, what has fundamentally changed?
Having one API for web and mobile sounds good but in practice often the different apps have different concerns.
And SEO and page speed were always reasons the server never died.
In fact, the trend is the opposite direction - the server sending the mobile apps their UIs. That way you can roll out new updates, features, and experiments without even deploying a new version.
Is that allowed by app stores? Doesn’t it negate the walled gardens if you can effectively treat the app as a mini browser that executes arbitrary code ?
What app stores don't like is you reinventing javascript i.e shipping your own VM. What they don't mind is you reinventing html and css.
So it is common for servers today to send mobile apps
{"Elementtype": "bottom_sheet_34", "fg_color": "red",..., "actions": {"tap": "whatever"}, ... }
However, the code that takes this serialised UI and renders it, and maps the action names to actual code, is shipped in the app itself. So the app stores don't mind it. This is what the GP is talking about.
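A sketch of the client half of that contract in TypeScript (the element type mirrors the JSON above; everything else is invented):

type ServerElement = {
  Elementtype: string;
  fg_color?: string;
  actions?: Record<string, string>; // action *names*, not code
};

// Renderers and action handlers ship inside the app binary;
// the server can only pick among them by name.
const renderers: Record<string, (el: ServerElement) => string> = {
  bottom_sheet_34: (el) => `[bottom sheet, color=${el.fg_color ?? "default"}]`,
};

const actionHandlers: Record<string, () => void> = {
  whatever: () => console.log("run native handler: deeplink, navigation, ..."),
};

function render(el: ServerElement): string {
  const draw = renderers[el.Elementtype];
  if (!draw) return "[fallback view]"; // older app talking to a newer server
  return draw(el);
}

// The payload from the comment above:
const payload: ServerElement = {
  Elementtype: "bottom_sheet_34",
  fg_color: "red",
  actions: { tap: "whatever" },
};
console.log(render(payload));
if (payload.actions?.tap) actionHandlers[payload.actions.tap]?.(); // on tap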
It covers a surprising number of use cases, especially since many actions can be simply represented using '<a href>' equivalents -- deeplinks. With Lottie, even animations are now server-controlled. However, more dynamic actions are still client-controlled and need app updates.
Additionally, any new initiative (think a new feature, or a temporary page for, say, Valentine's Day) is all done with webviews. I'm not clued in on the review process for this.
Nevertheless, if your app is big enough then almost every rule above is waived for you and the review process is moot, since once you become popular the platform becomes your customer as much as you are theirs. For example, TikTok ships a VM, plus obfuscated bytecode for that VM, to hinder reverse engineering (and of course, hide stuff).
I loved building things that way.
Been a web dev for over a decade, and I still use plain JS. I have somehow managed to avoid learning all the SPAs and hyped JS frameworks. I used HTMX for one project, but I prefer plain JS still.
I was a jQuery fan back in the day, but plain JS is nothing to scoff at these days. You are right though; in my experience at least, I do not need anything I write to all happen on a single page, and I am typically just updating something a chunk at a time. A couple of event listeners and some async HTTP requests can accomplish more than I think a lot of people realize.
However, if I am being honest, I must admit one downfall. Any moderately complex logic or large project can mud-ball rather quickly -- one must be well organized and diligent.
Figma is written in C++ compiled to WebAssembly.
And what makes me like Next.js, besides the SaaS SDKs that give me no other framework choice, is that it is quite similar to those experiences.
Why do you need GraphQL here?
If your developer workstation can't send a few KB of data over a TCP socket in a reasonable amount of time due to the colossal amount of Zoomer JavaScript abstraction nonsense going on, something has gone terribly wrong.
The whole idea of needing "islands" and aggressive caching and all these other solutions to problems you created -- that you have somehow managed to make retrieving a trivial amount of data off a flash storage device or an in-memory storage system of some kind slow -- is ludicrous.
What's funny is that people struggling after deploying it now think that they have invented the N+1 problem.
I think what confuses people is Ruby’s meta programming. The ability to dynamically define named methods makes rails seem far more magical than it actually is. You also can’t tell the difference between method calls and local variables.
I wish I got along better with Rails, honestly.
It’s honestly a really underrated framework, smartly designed, with probably the best ORM that exists and a great ecosystem.
Unfortunately, the documentation is painfully bad and the Getting Started guides are really boring compared to Rails or Django.
There may also be Laravel but I can’t say anything about it since I never tried it.
I looked at InertiaJS and it feels like too much "magic" for me personally. I've never used it so I could be wrong but it feels like too many things you have to get working perfectly and the instability in the JS ecosystem makes me worry about adding too many layers.
Pre-rendering (as popularized by static site generators) is the additional step that increases complexity significantly, sometimes security issues too when session-protected cached pages are mistakenly served to the wrong users.
When your business goal is putting text on a screen, the next logical step is to ask how much time and money the tech stack really saves. I have never found a developer who answers that question with a number. That’s a really big problem.
The reasons I prefer client-side rendering: (1) separation of concerns: UX in the front, data/business in the back; (2) even as a back-end dev, I prefer Vue for front-end work rather than rendering text + scripts in the backend that run in the browser; (3) at scale it's better to use the client hardware for performance (other than initial latency).
I get where you're coming from but that's actually quite a bit of an oversimplification even for many web apps outside of the 1% for which a lot of modern web development solutions and frameworks seem to have been created.
For one thing it doesn't take any account of input. When someone draws something with Figma or writes something in Google Docs or buys something from Amazon - or indeed any online shop at whatever scale - or gets a set of insurance quotes from a comparison site or amends something in an employee's HR record or whatever it may be the user's input is a crucial part of the system and its behaviour.
For another, we're not just putting text on the screen: we're putting data on the screen. And whilst data can always be rendered as text (even if not very readably or comprehensibly), depending on what it represents, it can often be more meaningfully rendered graphically.
And then there are integrations that trigger behaviour in other systems and services: GitHub, Slack, eBay, Teams, Flows, Workato, Salesforce, etc. Depending on what these integrations do, they can behave as inputs, outputs, or both.
And all of the above can result in real world activity: money is moved, orders are shipped from warehouses, flow rates are changed in pipelines, generators spool up or down, students are offered (or not offered) places at universities, etc.
I suppose you could have custom CSS (e.g. via Stylebot) remove 90% of the elements and all but one of the pictures, but would that really make the amazon purchasing experience better?
I wonder how you'll handle image uploading, drag and drop, media players, etc with simple static content rendering.
That's about as absurd a statement as saying all of Backend is just "returning names matching a certain ID" for how out of date and out of touch it is.
It's like saying that the entire job of a politician is to speak words out loud. You're reducing a complex problem to the point that meaningful discussion is lost.
Can anyone come up with the ideal use case where SSR shines? I'm willing to buy it if I see it.
Most websites are significantly simpler to build and maintain with SSR and traditional tools. An entire generation has forgotten this it seems and still thinks in terms of JS frameworks even when trying SSR.
As one example take this website, which serves the page you wrote your comment on using an obscure Lisp dialect and SSR.
Wait, is SSR a thing outside the context of websites?
It gets rather painful though, which is why we don't do that anymore.
Microsoft introduced XMLHttpRequest in 2000 for this exact reason - its original purpose was to allow the newly introduced Outlook web UI to fetch data from the server and use that to update the DOM as needed. This was then enthusiastically adopted by other web app authors (most notably Google, with GMail, Google Maps, and Google Talk), and other browsers jumped onto the bandwagon by providing their own largely compatible implementations. By 2006 it was a de facto standard, and by 2008 it was standardized by W3C.
The pattern was then known as AJAX, for "asynchronous JS and XML". At first web apps would use the browser API directly, but jQuery appeared right around that time to provide a consistent API across various browsers, and the first SPA frameworks showed up shortly after (I'm not sure if I remember correctly, but I think GWT was the first in 2006).
I run skatevideosite.com and accidentally did the first rewrite when I took it over in react because that’s all I knew. I absolutely tanked the seo.
Rewrote it in rails and got everything back in shape and it’s been a fun experience!
Many teams use this with React.
Hotwire is the default and they develop it because DHH wants to, but they're not putting up any barriers to you using whatever you want.
Also, DHH doesn't seem to care about how big it is. His stated goal is for it to forever be a framework that's usable by a single dev.
Dunno, I loved Rails: built monoliths, built API-only apps. But when I tried sprinkling a bit of React in my views (say you only need a little bit of interaction, or want to use a React date picker), there's all these sharp edges.
The reason I want it to be bigger is that user base helps the single dev, with help resources, up to date gems, and jobs.
I would really be interested in real world performance metrics comparing load times etc. on a stock nextjs app using defaults vs. rails and co.
- Cost
- Complexity
- Learning curve
- Scalability
- Frequent changes
- And surprisingly bad performance compared with the direct competitors
Nowadays, NextJS is rarely the best tool for the job. Next and React are sitting in the "never got fired for buying IBM" spot. It is a well earned position, as both had a huge innovational impact.
Do you need best in class loading and SEO with some interactivity later on? Astro with islands. Vitepress does something similar.
Do you need a scalable, cost efficient and robust stack and have moderate interactivity? Traditional SSR (RoR, Django, .NET MVC, whatever) with maybe some HTMX.
Do you have a highly interactive app? Fast SPA like Svelte, Solid or Vue.
This is an interview with him last year on "one person" approaches to web app development that I liked a lot: https://www.youtube.com/watch?v=0rlATWBNvMw
https://world.hey.com/dhh/the-waning-days-of-dei-s-dominance...
https://world.hey.com/dhh/dei-is-done-minus-the-mop-up-b3bbb...
Americans don't seem to understand nuance, so when DHH posts about support for people's right to protest, how he loves being a father, how he doesn't want politics in the workplace and doesn't proclaim the sky is falling because of politics they seem to think he's the devil.
https://en.m.wikipedia.org/wiki/Paradox_of_tolerance
However he is right in many cases, and I don’t expect anyone to be right all the time, myself included. It’s strange to look for political leadership from a programmer anyhow.
For people who commonly use these frameworks -- is it common to have issues where data or code intended only for server execution makes its way onto the client? Or are there good safeguards for this?
But for sure the lack of clear lines for where the server ends and the client begins has always been a pain of these kinds of framework offerings.
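There are some safeguards, though they're opt-in. In Next.js, for instance, the server-only package turns "this module leaked into the client bundle" into a build error. A short sketch (the API call and env var are invented):

// lib/billing.ts
import "server-only"; // build fails if a Client Component ever imports this

export async function getInvoices(customerId: string) {
  // The secret stays on the server; it can never ship in client JS.
  const res = await fetch(`https://api.example.com/invoices/${customerId}`, {
    headers: { Authorization: `Bearer ${process.env.BILLING_API_KEY}` },
  });
  return res.json();
}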
without ANY irony or sarcasm, i just want to appreciate that it's funny how that happens completely without explicit desire or intention to have this effect from the developers of Next (i'm serious, don't hate me guys, we are friends, i do believe that this ofc is not intended)
i'm sure there's a good and meaningful explanation (that I'm interested in reading) but lots of little microdecisions compound when the developer of the framework does not also experience it as a paying customer (or, more subtly, the developer of the framework wants to serve the 10000x larger enterprise customer and needs to make choices to balance that vs the needs of the small)
You can prototype stuff very fast with Rails, and it's a mighty tool in the right hands.
Not everyone has looked into or tried everything.
The upside is that by not trying to hide the database and pretend it doesn't exist, you can avoid a whole class of work (and the safety abstractions provided) and be incredibly productive if the requirements align.
Rails also uses way too much magic to dynamically construct identifiers and do control flow.
The over-use of magic and the under-use of static types makes it extraordinarily difficult to navigate Rails codebases. It's one of those things where you have to understand the entire codebase to be able to find anything. Tractable for tiny projects. For large projects it's a disaster.
Rails is a bad choice (as is Ruby).
My favourite web framework at the moment is Deno's Fresh. You get the pleasure of TSX but it's based around easy SSR rather than complex state management and hooks. Plus because it's Deno it's trivial to set up.
All that being said I still use (and like) Rails, currently comparing Phoenix/Elixir to Rails 8 in a side project. But I use typescript w/ Node and Bun in my day job.
Rails is a sharp knife. There is Rails way to do things. You may of course choose to do them differently (this is a contrast with other toolkits that fight this hard), but you are going to have to understand the system well to make that anything but a giant hassle.
With rails, the way it scales is statelessness. You have to map the front end actions to individual endpoints on the server. This works seamlessly for crud stuff (create a thing; edit a thing; delete a thing; list those things). For other use cases it works less seamlessly. NB: it works seamlessly for nested "things" too.
Complex multi-step flows are a pain point. eg you want to build data structures over time where between actions on the server (and remember, you must serialize everything you wish to save between each action), you have incomplete state. Concretely: an onboarding flow which sets up 3 different things in sequence with a decision tree is going to be somewhat unpleasant.
You must keep most state on the server and limit FE state. Hotwire works extremely well but the FE must be structured to make hotwire work well.
I've actually found it to work pretty well with individual pages built in React. My default is to build everything with Hotwire and, when the FE gets too complex, to fall back to React.
Rails is nobody's idea of a fast system. You can make it perform more than well enough, but fast it is not.
Upsides, my take: it is the best tool to build websites. The whole thing is built by developers for developers. DX and niceness of tools are valued by the community. Contrast with eg the terrible standard lib that other languages (hi, js) have. Testing is by far the most pleasant I've used, with liberal support for mocking rather than having to do DI. For eg things like [logic, api calls, logic, api calls, logic, db calls] it works incredibly well. It is not the most popular toolkit and it's not react, so that can count against you in hiring.
I often see myself going back to Ruby on Rails for my private stuff. It's always a pleasure. On the other side, there are so few Rails people available (compared to JS) that it's not viable for any professional project. It would be irresponsible to choose that stack over JS, and often Java, for the backend.
Anyone have similar feelings?
I'm personally an Elixir/Phoenix fanboy now, so I don't choose Rails as my first choice for personal projects, but I think it is an excellent choice for a company. In fact, I would probably recommend it the most over any framework if you need to hire for it.
This has been my experience.
It is very easy to write a server with it, hosting and deploying is painless, upgrading it (so far) has been painless, linting and debugging has been a breeze.
If you're coming from Ruby, then learning Elixir requires a small mental adjustment (from Object Oriented to Functional). Once you get over that hump, programming in Elixir is just as much fun as Ruby! :)
I still haven't found an ORM with JS that really speaks to me.
> there are so few rails people available (compared to js) that it's not viable for any professional project
I don't think this is true; Shopify is a Rails shop (but perhaps it's more accurate to say it's a Ruby shop now). It feels easy to make a mess in Rails though, imo that's the part that you could argue is irresponsible
My take: the JS ecosystem tends to avoid abstraction for whatever reason. Example: they don’t believe that their web framework should transparently validate that the form submission has the correct shape because that’s too magical. Instead the Right Way is to learn a DSL (such as Zod) to describe the shape of the input, then manually write the code to check it. Every single time. Oh and you can’t write a TS type to do that because Reasons. It all comes off as willful ignorance of literally a decade or more of established platforms such as Rails/Spring/ASP.NET. All they had to do was steal the good ideas. But I suspect the cardinal sin of those frameworks was that they were no longer cool.
I have a hard time relaying this without sounding too negative. I tried to get into SSR webdev with TS and kept an open mind about it. But the necessary ingredients for me weren’t there. It’s a shame because Vite is such a pleasure to develop with.
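For readers who haven't seen the pattern being described, it looks roughly like this with Zod (a sketch; the route and field names are invented):

import { z } from "zod";

// First, describe the shape of the input in the DSL...
const SignupForm = z.object({
  email: z.string().email(),
  age: z.coerce.number().int().min(18),
});

// ...then manually wire the check into every handler yourself.
export async function handleSignup(request: Request): Promise<Response> {
  const body = Object.fromEntries((await request.formData()).entries());
  const parsed = SignupForm.safeParse(body);
  if (!parsed.success) {
    return Response.json({ errors: parsed.error.flatten() }, { status: 422 });
  }
  // parsed.data is now typed as { email: string; age: number }
  return Response.json({ ok: true, email: parsed.data.email });
}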
I thought Prisma.js was the most popular by far? It's the one I've always seen used in docs and examples.
Someone will steal the good ideas eventually. And everyone will act like it’s the first time this idea has ever come up. I’ve seen it happen a few times now, and each time it makes me feel ancient.
Long story short: I ended up choosing ASP.NET Core with Minimal APIs. The main reason was indeed EF Core as ORM, which I consider as one if not the best ORM. In the Node world there's so much promise (Prisma, Drizzle, ...) but also so much churn.
Do you need a separate frontend framework? No, probably not, and that's exactly the problem that Next solves - write your backend and frontend in the same place.
Do you need a complicated build process? No. You want your build process to be just "run npm". And that's what something like Next gets you.
"Monolithic RoR app with HTML templates on VPS" would introduce more problems than it solves. If Next-style frameworks had come first, you would be posting about how RoR is a solution in search of a problem that solves nothing and just overcomplicates everything. And you'd be right.
Every time I hit the "should we use GraphQL" question in the last decade, we balked because we already had fast REST-like APIs and couldn't see how it would get faster.
To your point it was more of a mish-mash than anything with a central library magically dealing with the requests, so there is more cognitive load, but it also meant we had much more control over the behavior and performance profile.
Not remotely true. There are plenty of web apps that work just fine with a standard fixed set of API endpoints with minimal if any customization of responses. Not to mention the web apps that don't have any client-side logic at all...
GraphQL solves a problem that doesn't exist for most people, and creates a ton of new problems in its place.
The value of GraphQL is also its downfall. The flexibility it offers to the client greatly complicates the backend, and makes it next to impossible to protect against DoS attacks effectively, or even to understand app performance. Every major implementation of GraphQL I've seen has pretty serious flaws deriving from this complexity, to the point that GraphQL APIs are more buggy than simpler fixed APIs.
With most web apps having their front-end and back-end developed in concert, there's simply no need for this flexibility. Just have the backend provide the APIs that the front-end actually needs. If those needs change, also change the backend. When that kind of change is too hard or expensive to do, it's an organisational failing, not a technical one.
Sure, some use-cases might warrant the flexibility that GraphQL uses. A book tracking app does not.
But also no problem with it. There might be some queries expressible in your GraphQL that would have severe performance problems or even bugs, sure, but if your frontend doesn't actually make queries like that, who cares?
> Just have the backend provide the APIs that the front-end actually needs. If those needs change, also change the backend.
Sure, but how are you actually going to do that? You're always going to need some way for the frontend to make requests to the backend that pull in related data, so that you avoid making N+1 backend calls. You're always going to have a bunch of distinct but similar queries that need the same kind of related data, so either you write a generic way to pull in that data or you write the same thing by hand over and over. You can write each endpoint by hand instead of using GraphQL, but it's like writing your own collection datatypes instead of just pulling in an existing library.
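That "pull in related data" point is GraphQL's whole pitch: nesting replaces a chain of endpoint calls. A sketch (the schema fields are invented):

// One round trip returns the book plus its related records, instead of
// GET /books/1, then /authors/7, then /reviews?book=1.
const query = `
  query BookPage($id: ID!) {
    book(id: $id) {
      title
      author { name }
      reviews(first: 10) { rating text }
    }
  }
`;

const response = await fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query, variables: { id: "1" } }),
});
const { data } = await response.json();
console.log(data.book.title, data.book.author.name);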
The tools and patterns to limit these (very common, in any kind of system) drawbacks are so well-established that it's a non-issue for anyone sincerely looking at the tech.
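One example of those established tools is rejecting overly deep queries before execution, e.g. with graphql-depth-limit (a sketch with Apollo Server; treat the exact wiring as approximate):

import { ApolloServer } from "@apollo/server";
import depthLimit from "graphql-depth-limit";

const server = new ApolloServer({
  typeDefs: `type Query { hello: String }`,
  resolvers: { Query: { hello: () => "world" } },
  // Queries nested deeper than 7 levels are rejected up front,
  // which blocks the classic deeply-recursive DoS query shape.
  validationRules: [depthLimit(7)],
});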
You throw away all the debuggability and simplicity of REST for close to zero advantages.
Then there's security, which also looks annoying to manage: yeah, sure, the front-end can do whatever it wants, but nobody ever wanted that.
We've been using it in production for 10 years. Would I change a single thing? No. Every day I come to work thankful that this is the tech stack that I get to work on because it _actually works_ and doesn't break down, regardless of size.
"Simplicity is achieved when there's nothing left to remove".
I was the same: expert level with Python. Now I'm using trpc, nextjs, drizzle, wakaq-ts, hosted on DO App Platform, and you couldn't pay me enough to go back to Python, let alone the shitstorm mess that's every Rails app I've ever worked on.
I've also not seen the 1s Next.js page loads you had, but I'm confident I could figure out a fix if that becomes a problem.
I've built a few apps in it now, and to me, it starts to feel a bit like server-side React (in a way). All your HTML/components stream across to the user in reaction to their actions, so the pages are often very light.
Another really big bonus is that a substantial portion of the extras you'd typically run (Sidekiq, etc) can basically just be baked into the same app. It also makes it dead simple to write resilient async code.
It's not perfect, but I think it's better than RoR
TLDR; Are most Phoenix deployments focused on a local market or deployed 'at the edge' or are people ignoring the potentially janky experience for far-flung users?
However, Elixir and Phoenix are more than just LiveView! There’s also an Inertia plugin for Phoenix, and Ecto as an “ORM” is fantastic.
> loading the entire homepage only takes one query [if you're logged out]
You can do this with Next.js SSR - there's nothing stopping you from reading from a cache in a server action?
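Right, something like the following works in a Next.js-style async server component (a sketch; the Redis client, cache key, and the renderExpensiveHomepage helper are all invented):

import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });

// Hypothetical slow path: whatever queries the homepage actually needs.
async function renderExpensiveHomepage(): Promise<string> {
  return "<h1>Front page</h1>";
}

// Logged-out visitors get one cache read; the queries above are skipped.
export default async function HomePage() {
  if (!redis.isOpen) await redis.connect();

  let html = await redis.get("homepage:logged-out");
  if (html === null) {
    html = await renderExpensiveHomepage();
    await redis.set("homepage:logged-out", html, { EX: 60 }); // 60-second TTL
  }
  return <main dangerouslySetInnerHTML={{ __html: html }} />;
}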
They also talk about Vercel hosting costs, but then self-host Rails. Couldn't they have self-hosted Next.js as well? Rails notoriously takes 3-4x the resources of other web frameworks because of its speed and resource usage.
“Deployment economy” is also new.
Rails has a very strong track record of matching internet scale.
Cloud is highly optimized for traditional server applications. From my experience with Next.js - this is the opposite. A lot of deployment components that don’t naturally fit in, and engineering required to optimize costs.
* difficult auth story. next-auth is limited in a few ways that drove us to use iron-session, such as not being able to use a dynamic identity provider domain (we have some gov clients who require us to use a special domain). This required us to basically own the whole openid flow, which is possible but definitely time we didn’t expect to have to spend in a supposedly mature framework.
* because the NextJS server wasn’t our primary API gateway we ended up having to proxy all requests through it just to add an access token to avoid exposing it on the client. The docs around this were not very clear, and this adds yet another hop with random gotchas like request timeout/max header size/etc.
* the framework is very aggressive about getting you on their cloud, and they make decisions accordingly. This was at odds with our goals.
* the maintainers aren’t particularly helpful. While on its own this would be easy to look past, there are other tools/frameworks we use in spite of their flaws because the maintainers are so accessible and helpful (shout out to Chillicream/HotChocolate!)
And Kotlin + Ktor feels very good to write in on serverside. Fast, easy and fluent to write in, like Ruby; but with Java's ecosystem, speed and types.
You mentioned giving up on Remix after poking at it for a day. IMHO that was a mistake.
I love Next.js. I have used other frameworks including RoR and there is nothing like it (except Svelte or Nuxt but I view them as different flavors of the same core idea). But I only use it to make monoliths. I can see getting frustrated with it if I was using it alongside a different back end.
“use client”, server actions that aren’t scrutable in a network tab, laggy page transitions, and, until recently, inscrutable hydration errors: these are some of the recent paper cuts I experienced with Next.
I’d still use it for new projects, but I am keen to use TanStack Start when it’s ready
i’m personally really interested in the next wave of frameworks that make local first development intuitive, like One or something that bakes in Zero
The broader point was basically that the Rails UI integration tests took a very long time, and required the whole system up, and we had a pretty large team constantly pushing changes. While not 100% unique to Rails, it was exacerbated by RoR conventions.
We moved much of the UI to a few Next.js apps where the tests were extremely fast and easy to run locally.
As a historically backend developer, I've tended to dislike HTML/JS/CSS. It's a meaningfully different paradigm from Swing/AWT, WinForms, Android UX, etc. That alone was enough to frustrate me and keep me on the backend. To learn how to make frontends, I've since had to learn those 3. They're finally becoming familiar.
BUT, for front-end developers, they needed to learn "yet another language"; and a lot of these languages have different / obnoxious build systems compared to nvm and friends. And then, like anyone who's ever changed languages knows, they had to learn a whole bunch of new frameworks, paradigms, etc.
Well, they would have, but instead, some of them realized they could push Javascript to the backend. Yes, it's come with *a lot* of downsides; but, for the "Get Shit Done" crowd - and especially in the world of "just throw more servers at it" and "VC money is free! Burn it on infra!" these downsides weren't anything worth worry about.
But the front-end devs - now "full stack devs" but really "javascript all the things" devs -, continued to create in a visible way. This is reflective of all the friggin' LinkedIn Job Postings right now that require Next.JS / Node.JS / whatever roles for their "full stack" positions. One language to rule them all, and all that.
Just some ramblings, but I think it's strongly related to why people would choose Next.JS __ever__, given all its downsides.
I remember many years ago an akin experience, talking to John Brant and Don Roberts, who had done the original refactoring browser in Smalltalk. Java was on its meteoric rise, with tons of effort being dumped into Eclipse. They, and others with them, were eager to port these techniques to Eclipse, and the theory was they'd be able to do even more because of the typing. But Brant/Roberts found that, surprisingly, it was more difficult. Part of the problem was the AST: Java, while typed, had a complex AST (many node types), compared to that of Smalltalk (late/runtime typed), which had a very simple/minimal AST. It was an interesting insight.