You can pay some upfront cost and have wildly more performant apps.
Another person mixing up web apps with web sites.
We do need frameworks for web apps. Yes, people were wrongly building plain websites with frameworks.
But I'm busy building web apps, and without frameworks it isn't feasible to build one.
Web shops are somewhere in the middle: they need a little bit of interactivity for the cart, especially if the user opens multiple tabs.
But static websites should never be SPAs.
Add to that dev churn: every 6 months one dev leaves and a new one joins, full of fresh new ideas about how to jQuery. In the meantime there are also 2 freelancers adding their own stuff.
I also get tired of the arguments from ignorance. The "you don't know how hard life is" type of bullshit arguments. I do know, because I have done this work, probably much longer than you, and it's not as challenging as you claim.
https://view-transitions.chrome.dev/
Here's an older, more designed demo that only works in Chrome.
    document.startViewTransition(() => {
      document.documentElement.dataset.colorMode = 'dark' // or 'light'
    });
Then try to update the attribute manually instead, and compare the butter-smooth transition with view transitions vs. without.

My SPA navigation solution is just a simple CSS toggle of display none/block, forced on page load if there is a matching URL fragment. The total JavaScript to make this SPA navigation work is about 20 or so lines. Everything else is CSS and WebSockets. The state management is almost as simple.
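Roughly, a sketch of that kind of fragment-based navigation (the .page class and the 'home' default are invented for illustration):

    // Show/hide "pages" that are all present in the DOM, keyed off the URL fragment.
    function showPage(id) {
      document.querySelectorAll('.page').forEach((el) => {
        el.style.display = el.id === id ? 'block' : 'none';
      });
    }

    // Navigate whenever the fragment changes (covers back/forward too)...
    window.addEventListener('hashchange', () => {
      showPage(location.hash.slice(1) || 'home');
    });

    // ...and force the matching page on initial load if a fragment is present.
    showPage(location.hash.slice(1) || 'home');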
Let me give you an example: one of my biggest gripes about web UX is the fact that in 2025 most shops still require you to fully reload (and refetch) content when you change filters or drill down a category.
A common use case is when you come to a shop, click on "books" (request), then on "fantasy" subsection (another request), realize the book you're looking for is actually a "sci-fi", so you go back (request, hopefully cached) and go to "sci-fi" (another request).
It's much better UX when a user downloads the whole catalogue and then applies filters on the client, without having to touch the server until they want to get to the checkout.
But it's a lot of data, you may say. Maybe on Amazon, but for most shops you can efficiently pack a section's data into fewer kilobytes than a single product photo takes, which is enough to enable that pattern.
I've been building web apps like that since ca. 2005 and I still can't understand why it's not more common on the web.
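A hedged sketch of the pattern (the endpoint and field names are invented for illustration): fetch one compact blob per section, then filter entirely on the client:

    // One request per section; every later filter change is purely local.
    let books = [];

    async function loadSection(url) {
      books = await fetch(url).then((r) => r.json());
    }

    function filterBooks({ genre, maxPrice }) {
      return books.filter((b) =>
        (!genre || b.genre === genre) && (!maxPrice || b.price <= maxPrice));
    }

    // After loadSection('/catalog/books.json'), switching from "fantasy" to
    // "sci-fi" never touches the server: just re-run filterBooks and re-render.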
Like I don't find Hacker News to be egregious to navigate, and nearly every nav is a reload. It runs fine on my 2008 laptop with 4 GB of RAM.
But I go to DoorDash on the same device, and it takes 30s to load up a list of 50 items. They give you a countdown for a double dash, and I genuinely don't think it's possible to order some curry and get a 6 pack of soda in less than 3 minutes. And 2.5 minutes is waiting for it to render enough to give me the interface. Almost none of it is a full reload.
What makes SPAs unwieldy is not the technology but the lack of desire to optimize. It loads fine on yesteryear's Macbook Air? Enough, push it.
I very well remember heavy, slow-loading websites of, say, year 2000, without any SPA stuff, even though lightweight, quick-loading pages existed even then, in the epoch of dial-up internet. It's not the technology, it's the desire to cram in more, and to ship faster, with least thinking involved.
Sure, lightweight, quick-loading pages existed, but sometimes you want to see a picture.
This was visible not only on a 33600 phone connection at home, but also on a megabit connection at work, because, shockingly, how fast your backend is also plays a major role.
On a 2008 device, in 2025? On a mobile connection?
https://dev.to/cyco130/how-to-get-client-side-navigation-rig...
In Tanstack Router it's a boolean you set when creating the router. The documentation nicely lays out what's being done under the hood, and it's quite a bit.[1] I wouldn't try that at home.
In React Router you just chuck a <ScrollRestoration /> somewhere in your DOM.[2]
[1]: https://tanstack.com/router/v1/docs/framework/react/guide/sc...
[2]: https://reactrouter.com/6.30.1/components/scroll-restoration
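For reference, the React Router usage really is that small; a minimal sketch (the layout component is invented for illustration):

    import { Outlet, ScrollRestoration } from 'react-router-dom';

    // Root layout for a v6.4+ data router. <ScrollRestoration /> emulates the
    // browser's behavior, restoring scroll position on back/forward navigation.
    export default function RootLayout() {
      return (
        <>
          <Outlet />
          <ScrollRestoration />
        </>
      );
    }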
It reminded me of the time when I joined Wikia (now Fandom) back in, I think it was 2006. One of the first things that landed on my desk was (I can't recall the context) deeplinking.
And I remember being completely flabbergasted, as I came from a Flash/games background, and for us that problem had been completely solved for at least 4 years at that point (the asual SWFAddress package). I felt kind of stupid having to introduce that concept to engineers much more senior than I was at the time.
And if a store is selling books, it might have hundreds of thousands of them. No, it's not a good experience to transfer all that to the client, with all the bandwidth and memory usage that entails.
If it's on their website, competitors can write a simple crawler and create that catalog.
And you don't have to send every single field you have in your database. Once the user selects a category you can send metadata that enables the client to scaffold the UI. Then you cache the rest while the user interacts with the site.
Barnes & Noble - according to their FAQ - has 1 million unique items in its catalog. But they also have tons of different categories. A single book cover weighs around 30KB.
I'll leave it as an exercise to figure out how much data you can fit into 30KB to make a usable filtering system.
btw: opening their front page downloads 12.1MB already.
Not even 1 bit per item in the Barnes & Noble catalog? So not much.
Into 30kb? That's just 300 items at 100 bytes each. So not a lot?
I'm playing around with a newer version that uses a SQLite database instead. SQLite officially has WASM builds, and the database file is already built to be separated into pages. With HTTP Range Requests, I can grab only the pages I need to fulfill any queries.
SQLite full-text search even works! Though I'm hesitant to call that a success, because you do end up grabbing the entire FTS table for shorter searches. It might be better to download the entire database and build the FTS table locally.
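The range-request trick looks roughly like this (the URL is a placeholder; 4096 bytes is SQLite's default page size, but yours may differ):

    // Fetch a single SQLite page over HTTP. Pages are 1-indexed,
    // so page N starts at byte offset (N - 1) * pageSize.
    async function fetchSqlitePage(url, pageNumber, pageSize = 4096) {
      const start = (pageNumber - 1) * pageSize;
      const res = await fetch(url, {
        headers: { Range: `bytes=${start}-${start + pageSize - 1}` },
      });
      if (res.status !== 206) throw new Error('server ignored the Range header');
      return new Uint8Array(await res.arrayBuffer());
    }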
This is what we[0] do too. We have a single JSON with over a thousand BOMs that's loaded directly into the browser. Previously we loaded the inventory data via an API, as is usually expected. The fact that there's even an API meant requiring progress and loading bars, API-unavailability scenarios, etc.
Having it all as a single preloaded JSON meant that all of the above goes away. Response is instantaneous.
[0]: https://chubic.com
In theory you could build a SSR on top of that same backend, but for some reason that feels wrong, like an extra layer of indirection.
Network requests themselves are not slow. Even a computer with a bad connection and long distance will probably have less than 500ms round trip. More like 50 to a few hundred at most. Anything beyond that is not the client, it's the server. If you make the backend slow then it'll be slow. If you make the request really large then the bad connection will struggle.
It's also worth mentioning that I would much rather deliver a good service to most users than make everyone's experience worse just for the sake of allowing someone to load the page and continue using it offline. Most websites don't make much sense to use offline anyway. You need to send some requests, best approach is simply to make them fast and small.
Not if the client is, e.g., constantly moving between cell towers, or right near the end of their range, a situation that frequently happens to me on road trips. Some combination of dropped packets and link-layer reconnections can make a roundtrip take 2 seconds at best, and upwards of 10 seconds at worst.
I don't at all disagree that too many tiny requests is the cause of many slow websites, nor that many SPAs have that issue. But it isn't a defining feature of the SPA model, and nothing's stopping you from thoughtfully batching the requests you do make.
What I mainly dislike is the idea of saving a bit of client effort at the cost of more roundtrips. E.g., one can write an SSR site where every form click takes a roundtrip for validation, and also rejects inputs until it gets a response. Many search forms in particular are guilty of this, and also run on an overloaded server. Bonus points if a few filter changes are enough to hit a 429.
That is to say, SSR makes sense for websites with little interaction, such as HN or old Reddit, which still run great on high-latency connections. But I get the sense it's being pushed into having the server respond to every minor redraw, which can easily drive up the number of roundtrips.
Personally, having learned web development only a few years ago, my impression is that roundtrips are nearly the costliest thing there is. A browser can do quite a lot in the span of 100,000 μs. Yet very few people seem to care about what's going over the wire. If done well, the SPA model seems to offer a great way to reduce this cost, but it's been tainted by the proliferation of overly massive JS blobs.
I guess the moral of the story is "people can write poorly-written websites in any rendering model, and there's no panacea except for careful optimization". Though I still don't get how JS blobs got so big in the first place.
Right, but how often is a user both using my website and on a roadtrip with bad coverage? In the grand scheme of things, not very often. I also think this depends on what the round trip is for. Maybe the 10s round trip is simply because it's a rather large request.
> I don't at all disagree that too many tiny requests is the cause of many slow websites
That's not really what I was saying, though I don't disagree with it. If you're sending multiple small requests then there are two ways to go about it: You can send all of them at the same time, then wait for responses and handle them as they come back. The other option is to send a request, wait for a response, then send the next etc. The latter option causes slowness, because now you're stacking round trips on top of one another. The former option can be completely fine.
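In fetch terms the difference is just this (endpoints invented for illustration):

    // Serial: the round trips stack, total latency is the sum.
    const a = await fetch('/api/a').then((r) => r.json());
    const b = await fetch('/api/b').then((r) => r.json());

    // Parallel: both requests go out at once, total latency is the slowest one.
    const [c, d] = await Promise.all([
      fetch('/api/c').then((r) => r.json()),
      fetch('/api/d').then((r) => r.json()),
    ]);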
But I'm not saying the client should be sending lots of requests. I'm saying they should get the data they need rather than all the data they could possibly need. This can be done in one request that gets a few kilobytes of data; in theory you can fit 64KB in a single TCP packet. That's quite a bit of data, easily enough space to do useful stuff. For example the front page of HN is 8kb. It loads fast.
I'm also not saying you should use SSR. I do think that SSR is a great way to build websites, but my previous comment was specifically about SPAs. You don't have to send requests for every little thing - you can validate forms on the frontend in both SPAs and SSR.
Round trips are costly but not that much. A lot of round trips are unavoidable, what I'm saying is that you should avoid making them slower by sending too much data. And also avoid stacking them serially.
This is actually a great example of what I mentioned elsewhere about how people seem to have forgotten how to make a SPA responsive. These are both simpler implementations, but not really the best choice for user interaction. A better solution is to take the paginated version and pre-cache what the user might do next: When the results are loaded, return page 1 and 2, display page 1, and cache page 2 so it can be displayed immediately if clicked on. If they do, display it immediately and silently request and cache page 3 in the background, etc etc. This keeps the SPA responsive with few to no loading spinners if they take the happy path, and because it's happening in the background you can automatically do retries for a flaky connection without bothering the user.
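A rough sketch of that pre-caching pattern (the endpoint and render() are stand-ins, not a real API):

    const pages = new Map();

    function fetchPage(n) {
      if (!pages.has(n)) {
        pages.set(n, fetch(`/api/results?page=${n}`).then((r) => r.json()));
      }
      return pages.get(n); // a promise; resolves instantly once cached
    }

    async function showPage(n) {
      render(await fetchPage(n)); // usually already cached: no spinner
      fetchPage(n + 1);           // silently warm the next page in the background
    }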
This was how Gmail and Google Maps blew people's minds when they were first released: by moving data to the frontend and pushing requests to the server into the background, the user could keep working without interruption while updates happened behind the scenes.
Smooth transitions are a nice side effect, but not the reason for an SPA. The core argument of the article, that client-side routing is a solution for page transitions, is a complete misunderstanding of what problems SPAs solve. So absolutely, if you shared that misunderstanding of SPAs and used them to solve the wrong problem, this article is 100% correct.
But SPAs came about in the days of jQuery, not React. You'd have a complex app, and load up a giant pile of jQuery spaghetti, which would then treat each div of your app as its own little mini-app, with lots of small network requests keeping everything in sync. It solved a real problem: not wanting to reload all that code every time a user on an old browser, with a slow connection, changed some data. jQuery made it feasible to do SPAs instead.
Later, React and other frameworks made it less spaghetti-like. And it really took off. Often, for sketchy reasons. But the strongest argument for SPAs remains using them as a solution to provide a single-load of a large code bundle, that can be cached, to provide minimal network traffic subsequent to the load when the expected session time of a user is long enough to be worth the trouble of the complexity of an SPA.
So in addition to aggressive caching, I’d say keeping the app’s file size down is also pretty important to making it useful on poor connections. That’s frequently something that’s barely optimized at all, unfortunately.
So unless it is an all text app, the size of the code bundle is probably going to be quickly dwarfed by the size of media like images, animated images, or videos.
If a site has an SPA with a, say, 3mb code bundle, I think in most cases, that’s not an architecture issue. It’s probably an issue of poor engineering and switching to a MPA is not suddenly going to make those engineers competent.
Errors/messages/warnings is one of the first things I do in an app framework, since it's incredibly hard to "bolt that on" later.
https://www.reddit.com/r/bayarea/comments/1cqhr4i/what_is_up...
(I love this silly site for downvoting this question.)
Being able to reliably and programmatically interact with client-side storage and the URL, as well as improvements in DOM APIs and the commoditization of hardware with more RAM and faster CPUs, among many other factors, seem to have contributed.
I'll have you know I spent time on organizing and structuring my code with early JS design patterns like IIFEs to limit scope, lazy loading of modules, and minification.
Anyway, in my experience, AngularJS was the biggest attempt at making structured front-end applications, and it really appealed / appeals (Angular is still very popular apparently) to Java developers; the biggest draws were its modularization (which wasn't a thing in JS yet), dependency injection, and testability.
When we started out with an app in Backbone (to replace a Flex app because it wouldn't work on the iPad), I actually advocated against things like testing, thinking that the majority of functionality would be in the back-end. I was wrong, and the later AngularJS rebuild was a lot more intensive in front-end testing.
Of course, nowadays I'm repulsed by the verbosity, complexity and indirection of modern-day Angular code. or Java code for that matter.
Interestingly, new Angular is slowly moving away from these, introducing Signals for state management and standalone components, and I see these developers actually struggling a lot to adopt new Angular patterns.
Still, I believe Angular is a really great platform for building B2B or enterprise apps. Its testing and forms handling are far ahead of every other SPA framework, and it actually feels like a cohesive framework where people have spent time designing things the right way; something I absolutely cannot say about React frameworks such as Next.js or Remix.
Any Turing-complete system is spaghetti in the hands of bad programmers. And simple & clear in the hands of good ones who know how to design.
My bet is that everyone here both agrees with you and is able to replace "jQuery" with "HTML", "CSS", and "JavaScript" to reach similar conclusions about the cultures of each. The problem is bad programmers, not the tech.
It isn’t really equipped or opinionated on statefulness, which means that everybody was at one point kludging stuff onto it that didn’t make sense.
Workforce market forces like that have a vastly greater effect than “bandwidth optimisation”.
My evidence for this is simple: every SPA app I’ve ever seen is two orders of magnitude slower than ordinary HTML would have been. There is almost never a bandwidth benefit in practice. Theoretically, sure, but every app I come across just dumps half the database down the wire and picks out a few dozen bytes in JS code. There's a comment right here in this discussion advocating for this! [1]
Another angle is this: if you had a top 100 site with massive bandwidth costs, then sure, converting your app to a SPA might make financial sense. But instead what I see is tiny projects start as a SPA from day one, and no chance that their bandwidth considerations — either cost or performance — will ever be a factor. Think internal apps accessed only over gigabit Ethernet.
I’ve seen static content presented as a SPA, which is just nuts to me.
[1] "It's much better ux when a user downloads the whole catalogue" from: https://news.ycombinator.com/item?id=44688834
If you have a huge org working on the project you might actually succeed in sticking to that architecture even when serving plain old HTML, but smaller teams are likely to eventually write full-stack spaghetti (which might still be fine for some use cases!). Once there was a fashionable term, "progressive web app", with manifests and service workers optionally moving some backend stuff into the browser for offline-ish operation. These days I also see a parallel pattern: progressively moving a browser UI into an electron-esque environment, where you can add features requiring more local access than the browser would allow.
This never happens, for some values of never.
When a SPA app is initially developed, the "client" and the "API" are moulded to each other, like a bespoke piece of couture tailored to an individual's figure. Hand-in-glove. A puddle in a depression.
There is absolutely no way that some other category of platform can smoothly utilise such a specialised, single-purpose API without significant changes.
The reality is that most SPA apps are monoliths, even if the client app and the API app are in different Git repos in order to pretend that this is not the case.
I'd argue then you don't have an SPA. However, I don't see how you could have an application like Figma or Discord and say "ordinary HTML is faster" (or even possible).
Every popular technology has been over implemented - these same enterprises probably have a 100-node Spark cluster to process 1GiB of data.
YouTube, for me, is unfathomably slow. It takes about a minute before I can get a specific song in one of my playlists playing. Every view change is 5-20 seconds, easily.
Facebook and the like now show skeleton placeholders for 10-30 characters of text, because hundreds of thousands of servers struggle to deliver half a line of text over terabits of fibre uplinks. Meanwhile my 2400 baud modem in the 1990s filled the screen with text faster!
Jira was famously so slow that this would never fail to be mentioned here any time it came up. ServiceNow is better, but still slow for my tastes.
Etc...
If you disagree, link to me a fast SPA app that you use on a regular basis.
PS: Just after writing this, I opened a Medium article, which used -- I assume -- SSR to show me the text of the article quickly, then replaced it with grey skeleton placeholders, and then 6 full seconds later re-rendered... the same static text with a client-side JavaScript SPA app. 100 requests, 7 MB, for 2 KB of plain text.
If you limit history to the most recent messages (and have a link to the archive at the top) you could simply reload the entire page on some interval that declines with message frequency (and when you submit the form).
Since the html document is pretty much empty the reload happens so fast you won't see the flashing. With transitions it would be perfectly smooth.
With modern CSS you can display elements out of order, so you can simply append each new line to the end of the HTML document that represents the channel (and to the archive). Purging a few old lines can be done less frequently.
I haven't tried it but it should work just fine. I will have to build it.
Initial load will be 100x faster. The page reloads will be larger but also insanely robust.
No, I mean Discord. An application where you can chat, receive phone calls and watch a live stream all at the same time.
A pure html chat client is uninteresting - there have been realtime html chat clients since I was teenager, even before the release of jquery.
Phone calls and live streams are things for which a tab needs to stay open. If you want to do other things simultaneously both the browser and the OS could facilitate it - but do so rather poorly.
It's not why people make SPAs, it seems?
Calling Discord "a chat cliënt [sic]" is barely one step removed from "I could build that in a weekend". So go ahead. Wait, what is stopping you?
HTTP/2 has been adopted by browsers for like 10 years now, and its multiplexing makes packaging large single bundles of JS irrelevant. SPAs that package everything into large bundles don't leverage modern browser and server capabilities.
H2 doesn't make packaging irrelevant… there's still per-request overhead with many small files… and larger bundles tend to compress better (though compression dictionaries might help here).
Large JS bundles are a huge problem though.
There is a certain level of complexity beyond which you need to load data on the fly (instead of all up front on page load) and you literally cannot avoid an SPA. Choosing to build an SPA is not just some arbitrary whimsical decision that you can always avoid.
Sometimes people just go straight to SPA because they're unsure about the level of complexity of the app they're building and they build an SPA just to be sure it can handle all the requirements which might come up later.
One of my first jobs involved rebuilding a multi-page EdTech 'website' as an SPA, the multi-page site was extremely limiting, slow and not user-friendly for the complex use cases which had to be supported. There was a lot of overlay functionality which wouldn't make sense as separate pages. Also, complex state had to be maintained in the URL and the access controls were nuanced (more secure, easier enforce and monitor via remote API calls than serving up HTML which can mix and match data from a range of sources).
I think a lot of the critiques of SPAs are actually about specific front-end frameworks like React. A lot of developers do not like React for many of the reasons mentioned, like "resetting scrollbars" etc... React is literally a hack to try to bypass the DOM. It was built on the assumption that the DOM would always be unwieldy and impossible to extend, but that did not turn out to be the case.
Nowadays, with custom web components, the DOM is actually pretty easy to work with directly, but info about this seems to be suppressed due to React's popularity. Having worked with a wide range of front-end frameworks including React for many years, the developer experience with Web Components is incomparably superior; it works exactly as you expect, and there are no weird rendering glitches, timing issues, or weird gotchas that you have to dig into. You can have complex nested components; it's fast and you have full control over the rendering order.

You can implement your own reactivity easily by watching attributes from inside a Web Component. The level of flexibility and reliability you get is incomparable to frameworks like React. Also, you don't need to download anything, and you don't need to bundle any libraries (or if you do, you can choose how to bundle them and to what extent; you have fine-grained control over the pre-loading of scripts/modules). The code is idiomatic.
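A minimal sketch of that attribute-watching reactivity (the component and attribute names are made up for illustration):

    // A badge that re-renders whenever its "count" attribute changes.
    class CounterBadge extends HTMLElement {
      static get observedAttributes() { return ['count']; }

      connectedCallback() { this.render(); }

      attributeChangedCallback(name, oldValue, newValue) {
        if (oldValue !== newValue) this.render();
      }

      render() {
        this.textContent = `Count: ${this.getAttribute('count') ?? 0}`;
      }
    }
    customElements.define('counter-badge', CounterBadge);

    // Usage: <counter-badge count="1"></counter-badge>
    // Calling setAttribute('count', '2') on it re-renders automatically.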
What do you do about the lack of (I assume) ecosystem? Due to React's ubiquity there's practically a library for everything. Do you find that using WC you have to hand-roll a lot? I don't mean to be a package slave, but for complex and tedious things like graphs / charts.
I presume Web Components are so great they haven't had anything happen since 2018
Ehm... define the Web Component render blocking in the head, because you want to prevent FOUCs. Then try to access the .innerHTML of your Web Component in the connectedCallback
https://dev.to/dannyengelman/web-component-developers-do-not...
I'd love to see examples of where this is actually the case and it's drastically different from just sending HTML on the wire.
Most SPAs I've worked on/with end up making dozens of large calls after loading and are far far slower than just sending the equivalent final HTML across from the start. And you can't say that JSON magically compresses somehow better than HTML because HTML compresses incredibly well.
Most arguments about network concerns making SPAs a better choice are either propaganda or superstition that doesn't pan out in practice.
There are complete CAD applications running in browsers for PCB and mechanical design with many layers, 3D view, thousands of components, etc.
For example: https://easyeda.com/ https://www.onshape.com
> because HTML compresses incredibly well
Hasn't compression under TLS been mostly disabled since the CRIME and BREACH attacks?
Agreed. The article was a frustrating read. The author is an SEO consultant. SEO consultants likely have a heavy focus on marketing websites. Actual apps and not marketing websites do benefit significantly from SPA. Imagine building Google Maps without SPA. You can animate page transitions all you want, the experience will suck!
He has a really warped view that SPAs are somehow purely about routing.
He does correctly point out that a lot of sites could and should be treated as sites, not apps.
IMO it will be hard for some traditional sites to adapt to the new browser capabilities, since we've built an entire ecosystem around SPAs. The author's advice should've been: use the browser's built-in capabilities instead of client-side libraries whenever possible.
Also, keep in mind he's sharing his own experience, which might be different from ours. I've used some great SPAs and some terrible ones. The bad ones usually come down to inexperience from developers and hiring managers who don't understand performance, don't measure it, don't handle errors properly, and ignore edge cases.
Some devs build traditional sites as SPAs and leave behind a horrible UX and tech debt the size of Mount Everest. If you don't know much about software architecture, you're more likely to make mistakes, no matter what language or framework you're using.
I realised years ago there's no "better" language, framework, platform, or architecture, just different sets of problems. That's why developers spend so much time debating implementation details instead of focusing on the actual problems or ideas. And that's fine, debates can be useful as long as we don't lose sight of what we're trying to solve and why.
For example: Amazon's developers went with an MPA. Airbnb started as an MPA but now uses a hybrid approach. Google Maps was built as an SPA, while the team behind Search went with an MPA.
It's a reimagining of ye olde three-tier approach. Perfect for apps.
Being able to come back to a webpage with previous directions? I think it would be glorious.
Many websites simply have some session/state dependent page.
They are either the worst developers in the world or it is not simple.
I don’t rule any of the two possibilities out.
Some things you get for free with the correct architecture. For whatever reason, SPA does not have a great track record with their users.
Or they have a good reason to not do it (in some PMs mind).
At a guess, resetting the view, and displaying the nearest Domino's Pizza sponsored highlight pin on the map, could be one of them.
If this is an intentional choice, all in an effort to show me more local ads… ugh. I really hope this isn’t the case.
https://datasette-tiles-demo.datasette.io/-/tiles/basemap?no...
I wish I could take full credit but it was a PR by dracos: https://github.com/simonw/datasette-tiles/pull/16
these days there's a "copy link" button for directions which you can save / share
(viper.pl 2000, philips.pl 2001 .. - are.. 'unreal' ??
'µloJSON': https://web.archive.org/web/20020702120539js_/http://www.vip...
historised chained htmls restart onerror: https://web.archive.org/web/20020402025320js_/http://www.aut... )
jQuery 2006, React 2013
Only for certain types of applications… the route-change time for many SPAs is way higher than for the equivalent MPA.
SPAs also make sense when you want to decouple the front end from the back end, so that you have a stable interface like a RESTful API. Once AngularJS gets deprecated you can move to something else; and when your outdated Spring app needs to be updated, you'll have no server-side rendering dependencies to update (or, more realistically, to prevent you from doing updates at all, especially when JSF behavior has changed between versions, breaking your entire app when you update).
> When it is worth the pain to load a large bundle in exchange for having really small network requests after the load.
The slight difference in user experience might not even enter the equation, compared to the pain that you'd have 5 years down the line maintaining the project. As for loading the app, bundle splitting is very much a thing and often times you also get the advantage of scoped CSS (e.g. works nicely in Vue) and a bunch of other things.
What is the cause of this?
1. Bad experiences with JavaScript apps that have aggregated complexity (be it essential or incidental complexity)?
2. Non-JS developers mystified and irritated at a bunch of practices they've never really internalised?
3. The undercurrent of "frontend is not real programming" prejudice that existed long before React et al. and will continue to exist long after?
Modern CSS is powerful, and HTML is the way to deliver web content.
Every web framework is _literally_ a hack to make the web something it isn’t. People who haven’t seen the evolution often pick up the bad habits as best practice.
For those JavaScript people, I recommend trying Laravel or Ruby on Rails. And then once you realize that JavaScript sucks you'll minimize it.
What I find interesting though is the assumption that web dev is done by "JavaScript people", that even the best "JavaScript people" have no technical breadth, and therefore fester in a cesspool of bad ideas, unlike your median backend dev who swims in the azure lakes of programming wisdom.
Now, I've done plenty of SPAs, but I've also done plenty of other things (distributed systems, WebVR apps, metaprogramming tools, DSLs, CLIs, STB software, mobile apps, a smidgeon of embedded and hobbyist PSOne games). Which gives me the benefit of a generalist perspective.
One thing I have observed is that in each silo, there are certain programmers who assume they are the only sane group in the industry. They aren't interested in what problems the other siloes are solving and they assume any solution "over there" they don't understand is bad / decadent / crazy.
These groups all regard each other with contempt, even though they are largely addressing similar fundamental issues in different guises.
It's a social dynamic as much as any technical one.
Without a cohesive community, mutual respect, and recognition of a shared body of knowledge, we don’t have the solidarity to make it happen.
As for Laravel, I'd say people were making complex applications (eBay, Amazon, Yahoo) in 1999. Google Maps was better than MapQuest, which drew each image with a cgi-bin, but many SPA applications are form-handling applications that could have been implemented with an IBM 360 and a 3270 terminal.
The OG web was missing a few things. Forms were usually written on one HTML page and received by a separate CGI script. To redraw the form in case of errors you needed one script that draws the form, one that draws the response, and a router that chooses which one to draw. You need two handfuls of helper functions, for instance
<? draw_select($options,$selected_options,$attributes) ?>
to make forms which can be populated based on what's already in your database. People never wrote down "the 15 helper functions" because they were too busy writing frameworks like Laravel and Ruby on Rails that did 20x more than you really needed. So the knowledge of how to build the form applications we were building in 1999 is lost, like the knowledge of how the pyramids were built.

As for performance, web sites today really are bloated. It's not just the ads and trackers; it's shocking how big the <head/> of documents gets when you are including three copies of the metadata. If you are just drawing a form and nothing else, it's amazing how fast you can redraw the whole page. If you are able to delete the junk, old-school apps can be as fast as desktop apps on the LAN and still be snappy over mobile.
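As a rough sketch of what one of those helpers might look like (in JS rather than PHP, names invented for illustration):

    // Render a <select> pre-populated from whatever is already in the database.
    function drawSelect(name, options, selected = [], attributes = '') {
      const opts = options.map(([value, label]) =>
        `<option value="${value}"${selected.includes(value) ? ' selected' : ''}>${label}</option>`
      ).join('');
      return `<select name="${name}" ${attributes}>${opts}</select>`;
    }

    // drawSelect('genre', [['sf', 'Sci-fi'], ['f', 'Fantasy']], ['sf'])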
It's an entirely different concept. It's certainly not the right technology for a news site, but days ago in a different place, there was for example the discussion about how an SPA and minimalistic API services fit a lot better with your average embedded device.
This is a very dark under current that is rarely brought up. It’s real. A lot of software developers see themselves as Romans, and the rest as Vandals.
...and yet, i keep running into web (and even mobile apps) that load the bundle, and subsequent navigation is just as slow, or _even slower_. Many banking websites, checking T-Mobile balance... you wait for the bundle to load on their super-slow website, ok, React, Angular, hundreds of megs, whatever. Click then to check the balance, just one number pulled in as tiny JSON, right? No, the website starts flashing another skeleton forever, why? You could say, no true SPA that is properly built would do that, but I run into this daily, many websites and apps made by companies with thousands of developers each.
I've soured on SPAs in the past few years. So much more can be done with standards than people realize. SPAs were best for specific use cases but we made them the default for everything. Marketing pages are built in React! Basic pages with marketing copy have a build step and require hundreds of megabytes of dependencies.
Like the author I've transitioned to a mantra of "let the web be the web."
But we have a whole generation of developers and designers that have come of age with SPA and mobile-like ux as standard. Getting everyone back to basics and understanding hypermedia, markup languages and cascading styles is a big ask.
a few comments below
"The view transitions API is a disaster."
I love HN.
Linear's speed comes from being "offline-first", but I challenge you to name almost any other product in common usage that does it that way. It's just not common. On the other hand if I want to buy tickets I'd rather most of that website be SSR and do SPA where you actually still need it. Or forums, news sites, blogs, and informational sites.
There is so much out there that would be better developed with SSR and then use CSS to make it feel SPA-like than to actually be a SPA.
A long time ago some webdevs started abusing the SPA concept to build simple websites. However that is not within the original meaning of the term SPA, because simple websites are not web apps. The author assumed that everyone would just understand that they are talking about SP"A"s and not SPAs, because for a certain subset of webdevs working on websites, the antonym of SPA is MPA, and it's normal to refer to your website as an "app". However for a certain other subset of webdevs, the antonym of SPA is simple website, and what the author is talking about are not SPAs at all.
Because I mostly built backoffices, chats, forums, ecommerces, all things that would've worked better as websites.
Here's my header component, and all its scoped CSS, and here are the 5 subcomponents that make it up, and all their individual scoped CSS.
Page transitions are 0.001% of the desire to go the SPA route.
1. Next won the war in the west, big time. Very very big time. Whenever people talk about new React apps they inadvertently mean Next. On the Vue side Nuxt is the default winner, and Nuxt is just the Next of Vue. That means that by default, by reflexive instinct, people are choosing Next and MPA as their strategy. If you want to correct the overly extreme pendulum motion then you should be telling people to try out SPA. The last 8 years has been a feverish push for MPA. Even the Facebook docs point straight to Next, totally deprecating Create React App. This is also Facebook ceding their battle to Next.
2. Whenever people complain about the complexity of Next, they are complaining about the difficulties of cutting edge MPA strategy, which evolves on a year to year basis. SPA on the other hand is a story that has frozen in time for who knows how many years. Almost a decade?
3. Doing MPA is strictly harder than doing SPA, much much harder. You have to observe the server/client distinction much more closely, as the same page may be split down the middle in terms of server/client rendered.
4. If you're writing an SPA and want to be more like MPA and load data at the time of hitting a user-navigable endpoint, that's on you, benefits and costs and all. You can also load data with anticipation so the client navigation is basically instant.
5. For every sexy SEO-able front-facing property you're going to have many teams with internal apps or dashboards backing that front-facing property. That's where many React devs are employed. Do not unnecessarily take on burden because you're so eager to ship the first "frame" of your app in a perfect way.
> Even the Facebook docs point straight to Next
Startups and SV jerk each other off by promoting each other (think affiliates). None of it means shit.
Next is probably a garbage framework, but it's people's livelihoods. It's very hard to erase something that literally defines people (yes, your resume is YOU).
No, this is FB ceding the battle. They absolutely didn't want this. They dropped CRA because social media celebrities were shitting on CRA. Dan Abramov had to do a complete 180 in a single day, after writing a long thoughtful essay in defense of CRA.
"Bro, you should see the celebrities shitting on React"
Like WHO!? What developer celebrity, what universe have I been missing out on?
Anyway, I do love me a good ol' fashioned "fuck SPAs, back to HTML" punching bag post on HN. It's always the same discussion over and over.
You don't need generic answers about people wanking each other off for "think affiliates".
When starting a project, the right move is to examine what best fits your project, not which one was recently victorious in a war. I've grown to dislike React because I see it being abused so often for sites where it isn't necessary. There are plenty of projects where it is necessary too, but that's not universal.
sure, but I suppose you can observe that they do? And hence
>Wars, battles, personalities on social media
become reasonable narratives to engage in to describe what is actually happening in the social activities that form around these tools
I think that's simple: because they are financially invested in them. That's obvious for the developers working on the frameworks themselves or building libraries / plugins / UI-themes for them, but I believe it's also correct for "normal" developers who build things with these frameworks.
They know these frameworks and can use them, and they've made an investment in time to get to that point. Likely they're also making at least some of their money _because_ they know these frameworks. Emotional attachment follows the economic attachment, and then you'll get plenty of rationalizations.
Now I gotta occasionally use Angular, and it's boilerplate hell. Adding one button involves editing 30 classes and files even if you don't use templates. I took a course at work where even the instructor got confused adding a button. Why would anyone ever use this besides Google, or do they even use it?
html in your JS > JS in your html.
Angular is a mess. It's the Java of web frameworks. If you want to be enterprise(tm), go for it. I'm convinced it's only a thing because it gives people job security, since nobody else chooses to touch it.
> Build a site like a site. Use HTML. Use navigation. Use the platform.
Sure, but what about all the other problems that aren't solved by View Transitions? There's some truth to the claim that frameworks like Next.js have jumped the shark. But they're not solving the problems of _just_ the SPA.
They are mostly shitty
---
Interestingly, every landing page or website for recent AI apps has looked AMAZING. Designers and standard website developers are totally on point. It's just that crappy developers who can't create a rich experience on top of incredible design are the issue. CSS is not going to fix what can't be fixed (some people are not supposed to be in this profession, but hey, it pays ... for another three or so years).
- excellent frameworks for client side logic (interactivity)
- separation of concerns (presentation logic vs. backend)
- improved DevEx => inc. speed of development => happiness for all
The sad thing is that an article like this will get plenty of eyeballs due to comments like my own adding to the algo, but it should have never made it above the fold.
That’s why I don’t like SSR mixed with client side rendering. Either do a website or an app.
The author of this article is making shit up to justify writing an article where they can show off their CSS skills. It's lame and dumb.
There are two good reasons for SPAs that I can see:
1. Your app probably needs interactivity anyway; for most apps, it’s not just going to be HTML and CSS. Writing your app in some unholy combination of React and HTML is not fun especially when you need to do things like global state.
2. Loading the structure of pages up front so that subsequent data loads and requests are snappy. Getting a snappy loading screen is usually better than clicking and getting nothing for 500ms and then loading in; the opposite is true below I’d say 100ms. Not needing to replace the whole page results in better frontend performance, which can’t really be matched today with just the web platform.
Basecamp has probably invested the most in making a fairly complex webapp without going full SPA, but click around for like 30 seconds and you’ll see it can’t hold a candle in performance to SPAs, never mind native apps.
With that said, I agree that I’d want the web to work more like the web, instead of this weird layer on top. All the complexity that Next.js and SPAs have added over the years have resulted in more responsive but sometimes more tedious-to-build apps, and gigantic bundles. I just don’t think you can match the performance of SPAs yet with just HTML.
Now, this means you will need approximately twice as many developers. On the plus side, more work gets done on the client-side, so you probably need fewer servers for your billion users, and you may save in servers (and sysadmins) more than it costs you in developers to handle approximately twice as much code.
Except...99.9% of the shops using SPAs don't (and never will) have enough traffic for that tradeoff to make sense. The devs want (or at least wanted, back in the day) to go work at a company that did, so they wanted to use tools that would make sense if you're Facebook-sized. But, for the great majority of shops that made SPAs, there was no good reason to do so. This isn't Facebook's fault, but it is a reason why SPA's are annoying to so many people; they are almost always used in places that don't need them, and shouldn't be using them, in order to pad out a resume.
Of course, a lot of the effort also went to tying together various systems, replacing outdated ones, developing smarter and better chatbots and voice bots to guide users towards answers or self-service, etc.
Especially if they have some kind of AI integration feature.
I spent weeks trying to get it to behave predictably and failed entirely.
Don't use this API if you value your sanity; it's the worst API I've used in a browser.
Anyone enthusing about this API hasn’t done much with it.
Javascript is over-rated.
CSS > JS
Native and web have different strengths, that's ok.
* You have a large number of users compared to your resources and you can’t afford for your user base to always hit your server. Comparatively, deploying API-only apps is far cheaper when you’re resource-starved (eg early stage startup).
* You don’t care about SEO, for example you’re building internal tooling. You then don’t need to care about hydration at all. Much simpler to separate concerns (again esp at the beginning).
* Offline mode (eg PWA or reusable code in Electron) or cases where you want to be resilient to network failures. In the case that your app is dependent on the server for basic functionality like navigation, you can’t support any type of offline mode.
This was NOT the reason why.
SPAs are better because they offload as much processing as possible to edge CPUs. The idle compute across all users aggregates to a massive amount, so you might as well use it and minimize HTML construction and routing on the server side.
Why not spell out CMO and SPA, so a civilian such as myself does not have to toddle off and dig out a search engine?
If you are going to get all assertive and pissed off, please be inclusive too.
FFS, I have a degree in Civ Eng, but if I start wittering on about bridges and roads I will define abbreviations. It's common courtesy.
With regards to your analogy involving Spotify: I ripped all my CDs to FLAC some years ago. The tapes and records took a little longer. No idea what a Spotify is. Well I do, I'm an IT consultant, but I don't actually care, I'm a 50-something IT consultant! I bought my music years ago and listen to it on my gear and from my gear alone.
This SPA thing sounds like it is getting out of hand and it also sounds like there are tribal divisions being drawn up. IT does love a tribe! We even have multiple internal tribes: devops, webdev and so on.
I'm nominally a PHB ...
https://bugzilla.mozilla.org/show_bug.cgi?id=1823896
Also mobile Safari, so basically all iPhones.
If you turn off Javascript? Pages with client-only components (like the 100% client-rendered QR code generator) will show their fallback, everything else will load and render perfectly -- if a little less quickly -- than if you let Next do its thing. It's all rendered into the HTML, and it's actually more effort to not render components on the server. Progressive enhancement for the win!
By how many times the window's load event fires for your app.
Absolutely disagree with the way you're arguing though.
"I am Jack's native declarative transition."
I can't relate to the situations, problems, or solutions this article seems to just take for granted.
As someone who's been developing web apps since the 2000s, let me tell you: the origin of the SPA has little to do with the "false promise of SPAs" he listed, and a lot to do with companies in the late 2000s/early 2010s wanting to go "mobile first". This usually meant they still had a desktop-second version somewhere, which implied they were architecting the entire system to completely separate the frontends from the backend.
Before, what web devs meant by frontend was essentially server-side rendered HTML templates with perhaps a little bit of jQuery running on the client side. Now, since mobile and desktop web apps were to share some business logic and the database somehow, people had to rediscover REST by reading Roy Fielding's PhD dissertation that inspired the original HTTP. This meant every company was moving to service-oriented architecture and started exposing their backend APIs on the open internet so their mobile apps and SPAs running in the browser could share the same APIs. This was a cost-saving measure.
This period also coincided with the steady decline of full-stack webapp frameworks like Ruby on Rails and Django, because for a couple of years these frameworks had no good ways to support API-only applications. Django hadn't even reached 1.0 back then. This was a time when NodeJS was really starting to pick up momentum. Once people were more comfortable with JS on the server side, lots of them realized they could push a lot of business logic to increasingly powerful desktop browsers and phones, application hosts people now call "edge devices".
This is the true impetus of SPA. How is CSS going to kill this need?
Before the SPA, these were common issues. That’s why there were a gazillion Java server pages frameworks to solve them. You also have frameworks that tried to encode the UI state in some way. My favorite was Seaside, which used continuations to store the UI state.
The following article should probably be titled "It's time for modern bloggers to kill the clickbait titles" and discuss the trade-offs of each architectural decision in a more balanced way.
The comparison of bloated SPAs with lean web sites is bogus. If someone takes the effort to make their web site lean, they'd do the same with an SPA. If someone makes a slow, bloated SPA with megabytes of Javascript, they'd make a slow, bloated web site with megabytes of Javascript. I think we've all seen enough of the web to know this is true.
I click on articles like these because I've seen the effort that goes into a good SPA, and I'm interested in anything that would allow people to deliver an equivalent experience with less effort. All I see here is a tiny bit of cosmetic polish. Polish is appreciated, but this doesn't seem like something that would tip the balance between building an SPA or not. Am I missing something?
By what measure?
Let me tell you as a developer who has been on both sides of things, developing server rendered pages and not having to worry about the server disagreeing with the client is the ultimate developer experience. Build a competent app that can serve full pages in .1 seconds and no one will care that your site isn't an SPA. They want a fast reliable site.
By every performance metric, the new app was faster.
But we kept getting user feedback that the new site was "clunky" and "slow", even though we saw that the p90 was much lower on the new site. Most of our users asked us to enable a toggle to let them go back to the old "fast" site.
I'm not sure if this is a universal experience, but I think a lot of other sites that tried the CSR -> SSR move had similar experiences. It's just harder to talk about, since it goes against the usual narrative.
Performance - No, in most cases.
User experience - No, in most cases.
What are you talking about? The majority of SPAs have abysmal performance compared to regular server-rendered HTML websites, and that reflects poorly on user experience.
I think if people remembered how productive you could be before the SPA frontend/backend split they'd reconsider. Being able to take a feature from A to Z without either context-switching between ecosystems or, even worse, involving other people, was incredibly productive and satisfying. Not to mention a much more streamlined dev env without a bloated js ecosystem of bundlers/compilers and whatnot.
He is _not_ talking about a SaaS dashboard SPA. He's talking about marketing sites and other content heavy stuff like blogs, landing pages, etc. It mentions this in many places if you go past the headline.
He is completely correct. SPAs should not be used for marketing sites, full stop. Perhaps there are some edge cases where it may make sense (though I cannot think of any), but in general, if you are building anything that resembles a blog or landing page with Next.js et al, you have done it wrong; close the code editor and start again. I'll give you a pass if you are developing an MVP of something and you need something up very quickly, but if you have any commercial traffic you will thank me later.
I have done a lot of audit work for this kind of stuff. Things I've seen:
10MB+ of React/JS libs to load a simple blog page
Putting images in the bundle, base64'd. So badly that the page crashes OOM on many devices.
And my favourite of all time - shipping 120MB (!) of JSON down for a catalog. It literally loaded _the entire database_ to the front end to show _one_ product. It wasn't even an ecommerce site; it was literally a brochure of products you could browse for.
You’re not mad at SPAs, you’re mad at bad developers.
I'd argue all the grousing about SPAs is because people made websites as apps when they should have been websites instead.
Btw view transitions are pretty slick and require zero js to get snazzy transitions.
Edit: there are some significant limitations with view transitions around performance, and a few other gotchas I should mention when they are used with pure CSS. I still love 'em tho.
No, you found some tradeoffs and decided that the drawbacks are worse than the advantages. If industry disagrees, they tend to have their reasons, but as we love being tortured artists and really feeling our individualism, offbeat hot takes are always in demand.
That's all I see in quite a few HN titles ;D
Bottom line: they build the SPA, but leave behind a terrible UX and tech debt the size of Mount Everest.
If the vanilla web can't do this easily, then it's not a good solution to your need for a SPA.
Imagine an e-commerce site that lets you review your order history and product pages offline (even if a bit outdated). That kind of experience feels genuinely useful—much more so than the "you're offline, here’s a cute dog (I love the pictures though)" fallback most sites provide.
Over the weekend, I experimented with service workers for the first time (assisted by Vibe coding). My initial impression was honestly a bit magical—“this website works offline, unlike most mobile apps these days that are just WebView-wrapped SPAs and crash offline.” [1]
That said, debugging was rough. The vibe-coded output had subtle issues I found hard to untangle as a newcomer: the cache-save vs. cache-match logic was done incorrectly in a way the LLM wasn't able to point out (CORS vs. non-CORS requests were the issue). Chrome's DevTools made debugging manageable, but Firefox's service worker support felt clunky in comparison (personal take).
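For context, the core of what the experiment boiled down to is roughly this cache-first service worker (the cache name and precache list are illustrative):

    // sw.js
    const CACHE = 'offline-v1';
    const PRECACHE = ['/', '/styles.css', '/app.js'];

    self.addEventListener('install', (event) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
    });

    self.addEventListener('fetch', (event) => {
      // Cache-first: serve a cached response if we have one, else hit the network.
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });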
Curious if others feel the same—are PWAs underused because of DX hurdles, lack of awareness, or just industry momentum around SPAs?
Offline order history is only a marginal improvement on the e-commerce experience from a customer and business perspective, so it's more appealing to us engineers who appreciate it as a feat of engineering prowess.
In other words, offline isn't PWAs killer feature. Besides, native apps can do it too.
PWA's killer features are circumventing the app store and the app store tax and not maintaining two codebases for Android and iOS.
Another Hacker News client would be a good example of a good PWA that you might install to your phone. It could have niceties like "save for later" or special styles applied to your favorite commenters. Offline support would be useful too, of course but not the main reason to develop a PWA.
Uncensored, paid content is another significant use case.
Agreed, I wish we lived in a world where PWAs had at least an equal share compared to mobile apps. Apps winning has mostly been suicide for privacy.
Coincidentally, there's another HN story with even more relevance to our discussion. [1]
YouTube serves hundreds of MB payloads to serve information. Even a bloated SPA is tiny in comparison.
Although, I think YouTube is also an SPA.
Some websites serve hundred MB payloads to show a hero video.
Why is JS optimization the great evil when there is often so much more waste in media or design choices?
From my own point of view and experience, an SPA makes the most sense when you do not want to bundle backend and frontend together in your server code. Having a separate FE and BE with a clearly defined (JSON) API is very important in many, many cases, and it allows you to use the same BE for different FEs: desktop website, mobile website, desktop client (Electron), mobile application... Trying to do this on the server would be hell. Also worth mentioning: if you want to make changes in your FE, you'd have to bring down your BE in order to deploy them, whereas if you have BE and FE separated, you can deploy the FE without any downtime.
There is a lot more that could be said but the main point is that moving data rendering into FE and letting BE just serve raw data is the way to go in many situations.
We have been moving computation between FE and BE for decades now, but I think the tech is now sufficient to not force us to pick one over the other but chose what works best.
Personally, I think that rendering the UI on BE is archaic and should be handled on the client via some bundled thin client code, ala SPA. So I will always prefer client-rendering over server-rendering, no matter the setup.
PS: You might be interested in https://data-star.dev/ which came out of dissatisfaction with HTMX and which I think will be the way to go about bridging FE and BE in the future.
Also, one of my banks recently changed their old website to a new SPA one. And it is now completely useless. It can't load information in time, so most of the forms are empty. You can't even go back because it is a SPA. So I can only log out, log in and try again. Kind of scary when you are handling a lot of money.
And it is not just one website. As I said I can't recall ever using a good SPA website. So yeah, I can't wait until they are all gone.
First example that comes to my mind: web version of ProtonMail. Going to settings feels like loading a completely separate website. Or OVH dashboard.
You likely don't even notice that most of what you browse are SPAs.
The reason why there is pushback is because the article is straight up misinformation.
Most SPAs use full-on routing. You cannot distinguish refresh and address-bar behavior between an SPA and static pages.
Furthermore, SPAs integrate perfectly with the browser's navigation stack.
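That integration is just the History API; a minimal sketch (renderRoute() is a stand-in for whatever swaps the view):

    // Client-side navigation that keeps the address bar and back button honest.
    function navigate(path) {
      history.pushState({}, '', path); // new history entry, no reload
      renderRoute(path);
    }

    // Back/forward buttons fire popstate; re-render the matching view.
    window.addEventListener('popstate', () => renderRoute(location.pathname));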
Where does this bs come from?
Perfect example of the sufficiently-advanced-compiler argument.
Hacker news is not a SPA.
Algolia's search is an SPA, but it utilizes the URL perfectly so you can refresh a page (besides the flaw that they don't use page-based search), and it is largely indistinguishable from a non-SPA site.
> You can load entire HN
You obviously have no context whatsoever on how much content is hosted on HN.
You underestimate how fast the modern internet is. With an MPA website like HN there is no need to wait for any script to render first.
If they botch the back button in an SPA they will botch other things in an MPA
[0] I am firmly on the slack-performance-is-a-disgrace train, but their web client is a great example of a well done SPA - it’s miles better than the app other than notifications.
Just a few days ago I had to get an OTP via email and it was completely frustrating. No indication, nothing, just a loading circle. The old MPA version was much better.
Usually I want to ship an app to customers, but for that I need an app that targets Apple platforms; hopefully I can build it to target iOS, iPadOS and macOS, but maybe those are three different apps. I need an app that targets Android. I also need an app that targets Windows, and I need a Linux app.
Then I need to distribute all those apps, so I need to get onto the App Store for macOS, the Play Store for Android, whatever Huawei and other Chinese manufacturers use, and whatever Amazon uses, and probably have an "official" APK available for stuff like F-Droid.
I need a Windows installer, maybe a portable package for Windows, and to get onto the MS Store (which will then cover winget).
I need to pick some collection of Linux distros to target; usually having a deb release and an rpm is good enough. And you can build an "installer" yourself if you are so inclined.
Or I could just ship an SPA, I had to build out the same servers if the data was not local anyway.
So they were better.
At work we are building a new "website like" frontend and it is a SPA (that internally operates as a MPA) built with React. The main reasons are we: know this setup well and know when hiring we will find people who know this setup as well. Beyond that, it will allow us to build out more application like features in the future if needed.
This approach has been popular in the industry for over 10 years now. Whereas most of the current discussion and tech on the frontend feels like churn and betting on the next thing. A lot of people just want tools that are mature and can get the job done regardless of them being the best tool under specific criteria.
The concerns of SPAs and CSS are completely orthogonal.
Now, I understand the argument that simple article-based websites shouldn't necessarily be SPAs. Page transitions are perfectly doable in pure HTML and CSS. But if you have less-than-trivial local state and complex, specific components, forcing the MPA pattern will only complicate your codebase.
I'm not talking about this from a technical standpoint, though there are many reasons that in most cases this is the best technological fit.
I'm talking about this from the position of "what I want to use". I'm sick of loading and navigating overly JS-heavy, overly styled, fragile "apps". When I encounter a "proper" website that loads fast and that I can understand easily, it's like a breath of fresh air.
I'm sure there exist (foolish) efforts to make CSS somehow able to do some UI logic, but that is never a good idea. CSS is a poorly made/overly complex spec that deserves to die. It's too late to make that argument unfortunately, but to want even more of CSS in the modern web stack is kinda lunatic.
What? This guy wanted to show off how much he knew about CSS and decided the only way to do it was completely make stuff up about a hot-button topic in webdev.
A good SPA has a lot of benefits, because it can be interactive like a native app. But it only gets those benefits if it is actually interactive to some extent (like Gmail or Google Docs). Smooth navigation is a very bad reason for picking an SPA.
But agree that for things like GMail, etc, a SPA approach definitely makes sense. I just think most SPA sites I come across aren't in that category.
Let's take Slack as an example. We had those chat websites 20 years ago. The thread was in its own frame and got periodically reloaded. It's just bad UX.
It has nothing to do with being a static website or an SPA, nothing.
At least that's the theory. There might be other tells that degrade the experience, but not sure what they are?