> These issues are present in the patches published last week.
> The patches published last week are vulnerable.
> If you already updated for the Critical Security Vulnerability, you will need to update again.
React2Shell and related RSC vulnerabilities threat brief - Cloudflare
https://blog.cloudflare.com/react2shell-rsc-vulnerabilities-... (https://news.ycombinator.com/item?id=46237515)
Seems to affect 14.x, 15.x and 16.x.
Trying to justify the CVE before fully explaining the scope of the CVE, who is affected, or how to mitigate it -- yikes.
If there are so many React developers out there using server side components while not familiar with the concept of CVEs, we’re in very serious trouble.
It’s common for critical CVEs to uncover follow‑up vulnerabilities because researchers scrutinize adjacent code paths looking for variant exploit techniques to test whether the initial mitigation can be bypassed.
It turns out this introduces another problem too: in order to get that to work you need to implement some kind of DEEP serialization RPC mechanism - which is kind of opaque to the developer and, as we've recently seen, is a risky spot in terms of potential security vulnerabilities.
What app router has become has its ideal uses, but if you explicitly preferred the DX of the pages router, you might enjoy TanStack Router/Start even more.
Some libs in the stack are great, but they were made before the RSC fad.
It's the first stack that allows me to avoid REST or GraphQL endpoints by default, which was the main source of frontend overhead before RSC. Previously I had to make choices on how to organize API, which GraphQL client to choose (and none of them are perfect), how to optimize routes and waterfalls, etc. Now I just write exactly what I mean, with the very minimal set of external helper libs (nuqs and next-safe-action), and the framework matches my mental model of where I want to get very well.
Anti-React and anti-Next.js bias on HN is something that confuses me a lot; for many other topics here I feel pretty aligned with the crowd opinion on things, but not on this.
Not to mention the whole middleware and being able to access the incoming request wherever you like.
You still need API routes for stuff like data-heavy async dropdowns, or anything else that's hard to express as a pure URL -> HTML, but it cuts down the number of routes you need by 90% or more.
Personally I don’t like it but I do understand the appeal.
But I really, really do not like React Server Components as they work today. I think it's probably better to strip them out in favor of just a route.ts file in the directory, rather than the actions files with "use server" and all the associated complexity.
Technically, you can build apps like that using App Router by just not having "use server" anywhere! But it's an annoying, sometimes quite dangerous footgun to have all the associated baggage there waiting for an exploit... The underlying code is there even if you aren't using it.
I think my ideal setup would be:
1. route.ts for RESTful routes
2. actions/SOME_FORM_NAME.ts for built-in form parsing + handling. Those files can only expose a POST, and are basically a named route file that has form data parsing. There's no auto-RPC, it's just an HTTP handler that accepts form data at the named path (see the sketch after this list).
3. no other built-in magic.
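A minimal sketch of what point 2 might look like, borrowing the web-standard Request/Response handler shape Next.js route files use; the file name, the `saveProfile` helper, and the field names are all made up for illustration:

    // actions/update-profile.ts (hypothetical name): the only built-in
    // behavior is form parsing. No auto-RPC, just an HTTP handler.

    // Stand-in for real persistence logic; not part of any framework API.
    async function saveProfile(data: { name: string }): Promise<void> {
      console.log("saving", data);
    }

    // Only a POST is exposed, at the path derived from the file name.
    export async function POST(request: Request): Promise<Response> {
      const form = await request.formData();
      const name = form.get("name");

      if (typeof name !== "string" || name.length === 0) {
        return Response.json({ error: "name is required" }, { status: 400 });
      }

      await saveProfile({ name });
      return Response.json({ ok: true });
    }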
RSCs are React components that call server side code. https://react.dev/reference/rsc/server-components
Actions/"use server" functions are part of RSC: https://react.dev/reference/rsc/server-functions They're the RPC system used by client components to call server functions.
And they're what everyone here is talking about: the vulnerabilities were all in the action/use server codepaths. I suppose the clearest thing I could have said is that I like App Router + route files, but I dislike the magic RPC system: IMO React should simplify to JSON+HTTP and forms+HTTP, rather than a novel RPC system that doesn't interoperate with anything else and is much more difficult to secure.
Vercel has become a merchant of complexity, as DHH likes to say.
I don't care about having things simple to get started the first time, because soon I will have to start things a second or third time. If I have a little bit more complexity to get things started because routing is handled by code and not filesystem placement then I will pretty quickly develop templates to handle this, and in the end it will be easier to get things started the nth time than it is with the simple version.
Do I like the app router? No. Vercel does a crap job on at least two things - routing and building (server code etc. can be considered a subset of the routing problem) - but saying I dislike the app router is praising the pages router with too faint a damnation.
Align early on wrt values of a framework and take a closer look at the funder's incentives.
An example of this is filesystem routing. Started off great, but now most Next projects look like the blast radius of a shell script gone terribly wrong.
There's also a(n in)famous GitHub response from one of the maintainers backwards-rationalising tech debt and accidental complexity as necessary. They're clearly smart, but the feeling I got from reading that comment was that they developed Stockholm syndrome towards their own codebase.
I do respect the things React + Next team is trying to accomplish and it does feel like magic when it works but I find myself caring more and more about predictability when working with a team and with every major version of Next + React, that aspect seems to be drifting further and further away.
So I ran a static analysis (grep) on the generated APK and
points light at face dramatically
the credentials were inside the frontend!
Most frameworks also block ALL environment variables on the client side by default unless the name is prefixed with something specific, like NEXT_PUBLIC_*.
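A quick sketch of how that plays out in Next.js, for example (variable names invented):

    // In a client component, only the prefixed variable survives the build:
    const apiUrl = process.env.NEXT_PUBLIC_API_URL; // inlined into the bundle
    const dbPass = process.env.DATABASE_PASSWORD;   // undefined on the client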
I’ve been out of full stack dev for ~5 years now, and this statement is breaking my brain
These kinds of Node + mobile apps typically use an embedded browser like Electron or a built-in browser; it's not much different from a web app.
You have two poison pills (`import "server-only"` and `import "client-only"`) that cause a build error when transitively imported from the wrong environment. This lets you, for example, constrain that a database layer or an env file can never make it into the client bundle (or that some logic that requires client state can never be accidentally used from the stateless request/response cycle). You also have two directives that explicitly expose entry points between the two worlds.
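A sketch of the server-side pill in practice; `server-only` is the real package, while the module itself and the mysql2 driver choice are illustrative:

    // db.ts: an assumed application module holding credentials.
    import "server-only"; // build error if a client bundle ever imports this, even transitively

    import { createPool } from "mysql2/promise"; // illustrative driver choice

    export const pool = createPool({
      host: process.env.DB_HOST,
      password: process.env.DB_PASSWORD, // statically guaranteed to stay server-side
    });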
The vulnerabilities in question aren't about wrong code/data getting pulled into a wrong environment. They're about weaknesses in the (de)serialization protocol which relied on dynamic nature of JavaScript (shared prototypes being writable, function having a string constructor, etc) to trick the server into executing code or looping. These are bad, yes, but they're not due to the client/server split being implicit. They're in the space of (de)serialization.
Have a landing/marketing page? Then, yes, by all means render on the server (or better yet, statically render to HTML files) so you squeeze every last millisecond you can out of that FCP. It's also easy to see the appeal for ecommerce or social media sites like Facebook, Medium, and so on. Though these are also the use cases that probably benefit the least from React to begin with.
But for the "app" part of most online platforms, it's like, who cares? The time to load the JS bundle is a one time cost. If loading your SaaS dashboard after first login takes 2 seconds versus 3 seconds, who cares? The amount of complexity added by SSR and RSC is immense, I think the payout would have to be much more than it is.
they may get other vulnerabilities as they are also in JS, but RSC-class vulnerabilities won't be there
I can't recommend it enough. If you never tried/learnt about it, check it out. Unless you're building an offline first app, it's 100% the safest way to go in my opinion for 99.9% of projects.
Instead of creating routes and using fetch(), you just pass the data directly to the client-side React JSX template; Inertia automatically injects the needed data as JSON into the client page.
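On the React side it looks roughly like this (types and names invented): the page component just receives whatever the server-side render call passed, with no fetch() and no API route.

    // resources/js/Pages/Users/Index.tsx: props arrive as JSON embedded in
    // the page by Inertia; the server just called its render() with the data.
    type User = { id: number; name: string };

    export default function Index({ users }: { users: User[] }) {
      return (
        <ul>
          {users.map((u) => (
            <li key={u.id}>{u.name}</li>
          ))}
        </ul>
      );
    }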
The React team reinvents the wheel again and again, and now we're back to Laravel.
In retrospect I should have given it more thought since React Server Components are punted in many places!
I wrote an extensive post and did a conference talk earlier this year recapping the overall development history and intent of RSCs, as best as I understand it from a mostly-external perspective:
- https://blog.isquaredsoftware.com/2025/06/react-community-20...
- https://blog.isquaredsoftware.com/2025/06/presentations-reac...
If you go through the various talks and articles the React team has put out over the last 8 years, the general themes are around trying to improve the page loading and data fetching experience.
Former React team member Dan Abramov did a whole series of posts earlier this year with differently-focused explanations of how to grok RSCs: "customizable Backend for Frontend", "avoiding unnecessary roundtrips", etc:
Conceptually, the one-liner Dan came up with that I liked is "extending React's component model to the server". It's still parent components passing props to child components, "just" spread across multiple computers.
Apps that use React without server components are not affected.
Google has a similar technology in-house, and it was a bit of a nightmare a few years back; the necessary steps to get it working correctly required some very delicate dancing.
I assume it's gotten better given time.
***
Seems that server functions are all the rage. We are unlikely to have them.
The main reason is that it ties the frontend and the backend together in undesirable ways.
It forces a JS backend upon people (what if I want to use Go, for instance?).
The API is not client-agnostic anymore. How to specify middleware is not clear.
It requires a bundler, which destroys isomorphism (isomorphic code requires no difference between client and server; it is environment agnostic).
Even granting the bundler, separating client and server implementation files blurs the data scoping (especially worrying for sensitive data). Do one thing and do it well: separate frontend and backend.
It might be something that is useful for people who only plan on having a javascript web frontend server separate from the API server that links to the backend service.
Besides, it is really not obvious to me how it becomes architecturally clearer. It would double the work in terms of security wrt authorization etc. This is at least not a generic pattern.
So I'd tend to go opposite to the trend and say no. Who knows, we might revisit it if anything changes in the future.
***
And boy, look at the future 3 weeks later...
To be fair, the one good thing is that they are hardening their implementation thanks to these discoveries. But it still seems to me that this is wholly unnecessary and may never be safe enough.
Anyway, not to toot my own horn, I know for a fact these things are difficult. Just found the timing funny. :)
It happened with Next.js as well https://github.com/vercel/next.js/discussions/11106
> Say Python ran in the browser natively, and you reimplented React on browser and server in Python. Same problem, not Javascript.
Yes.
And since Python does not natively run in the browser, that mistake never happens. With JavaScript, the desire to have "backend and frontend in a single codebase" requires active resistance.
It's the same vulnerabilities because Next uses the vulnerable parts of React.
Your rationale is quite poor, as I can write an isomorphic web app in C or Rust or Go and run parts in the browser; what then? Look, many of us also strongly dislike JavaScript, but generally that distaste is based on its actual shortcomings and failures. You don't have to invent new ones; plenty already exist.
If you have a single codebase for Go-based code running in an untrusted browser (the "toilet") and a trusted backend (the "kitchen"), then the same contamination is highly likely.
Did you even bother to read my comment? Try again, please. Next time don't skip over parts.
Indeed, but unlike Go/Python (backend) and TS/JS (frontend), the separation is surmountable, and the push to "reuse" is high.
Other than types and stuff like zod validators there's not a lot of overlap between server and client code.
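In practice that overlap is usually a single shared module, something like this sketch (schema and names invented):

    // shared/user-schema.ts: the kind of module both sides can safely import.
    import { z } from "zod";

    export const userSchema = z.object({
      email: z.string().email(),
      name: z.string().min(1),
    });

    export type UserInput = z.infer<typeof userSchema>;

    // Client: userSchema.safeParse(formValues) for instant feedback.
    // Server: userSchema.parse(body) again; client checks are advisory only.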
I agree with your point that iso code can be confusing. But beyond that I think you're just pushing an irrational anti JS narrative.
Programming languages do lead to certain software architectures. These are independent but not orthogonal issues.
There we go.
The vulnerable packages are the ones starting with `react-server-` (like `react-server-dom-webpack`) or anything that vendors their code (like `next` does).
At this point you might as well deprecate RSC as it is clearly a contraption for someone trying to justify a promotion at Meta.
Maybe they are going to silently remove “Built RSC at Meta!” in their LinkedIn bios after this. So what other vulnerabilities are going to be revealed in React after this one?
> We are not using RSC at Meta yet, bc of limits of our packaging infra (it’s great at different things) and because Relay+GraphQL gives us many of the same benefits as RSCs. But we are fans and users of server driven UI and incrementally working toward RSC.
(as of April 2025)
Since the Opa compiler was implemented in OCaml (we looked more like Svelte than React-as-a-pure-lib), we performed a lot of static analysis to prevent a wide range of attacks on frontend code (XSS, CSRF, etc.) and backend code. The Opa compiler became a huge beast in part because of that.
In retrospect, better separation of concerns and completely foregoing the idea of automatic code splitting (which is what React Server Components are), or even of having a single-app semantics, is probably better for the near future. Our vision (way too early) was that we could design a simple language for the semantics and a perfect, advanced compiler that would magically output both the client and the server from that specification. Maybe it's still doable with deterministic methods. Maybe LLMs will get to automatic generation of all the parts in one shot first.
The vulnerabilities so far were weaknesses in the (de)serializer stemming from the dynamism of JavaScript — ability to hijack root object prototype, ability to toString functions to get their code, ability to override a Promise then implementation, ability to construct a function from a string. The patches are patching the (de)serializer to work around those dynamic pieces of JavaScript to avoid those gaps. This is similar to mistakes in parsers where they’re fooled by properties called hasOwnProperty/constructor/etc.
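To illustrate the bug class (a generic toy, not React's code): a deserializer that merges attacker-controlled keys can be steered through exactly those dynamic surfaces, e.g. the writable root prototype.

    // Toy example of the vulnerability class, NOT React's implementation:
    // a naive deep merge that trusts attacker-controlled key names.
    function naiveMerge(target: any, source: any): any {
      for (const key in source) {
        if (typeof source[key] === "object" && source[key] !== null) {
          target[key] = naiveMerge(target[key] ?? {}, source[key]);
        } else {
          target[key] = source[key];
        }
      }
      return target;
    }

    // target["__proto__"] resolves to Object.prototype, so the recursive
    // call writes onto every object in the process:
    naiveMerge({}, JSON.parse('{"__proto__": {"isAdmin": true}}'));
    console.log(({} as any).isAdmin); // true: global state is now poisoned

    // Hardened (de)serializers treat names like __proto__/constructor as
    // hostile, e.g. by building results on Object.create(null).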
The serialization format is essentially “JSON with Promises and code chunk references”, and it seems like there’s enough pieces where dynamic nature of JS can leak that needed to be plugged. Hopefully with more scrutiny on the protocol, these will be well-understood by the team. The surface area there isn’t growing much anymore (it’s close to being feature-complete), and the (de)serializers themselves are roughly 5 kloc each.
The problem you had in Opa is solved in RSC with build-time assertions (import "server-only" is the server environment poison pill, and import "client-only" is the client environment poison pill). These poison pills work transitively up the module import stack and are statically enforced and prevent code (eg DB code, secrets, etc) from being pulled into the wrong environment. Of course this doesn’t prevent bugs in the (de)serializer but it’s why the overall approach is sound, in the absence of (de)serialization vulnerabilities.
On the contrary, HTMX is the attempt of backend "eating" frontend.
HTMX preserves the boundary between client and server, so it's safer on the backend but less safe on the frontend (risk of XSS).
I’m not interested in flame wars per se, but I can tell you there are better alternatives, and that the closer you stay towards targeting the browser itself the better, because browser APIs are at least an order of magnitude more secure and performant than equivalent JS operations.
In fairness, React presents it as an "experimental" library, although that didn't stop Next.js from deploying it widely.
I suspect there will be many more security issues found in it over the next few weeks.
Next.js ups the complexity by orders of magnitude; I couldn't even figure out how to set any breakpoints on the RSC code within Next.
Next vendors most of their dependencies, and they have an enormously complex build system.
The benefits that next and RSC offer, really don't seem to be worth the cost.
I had moved off Next.js for reasons like these; the mental load was getting too heavy for not much benefit.
DISCLAIMER: After years of using Angular/Ember/jQuery/vanilla JS, jumping into React's functional components made me enjoy building front-ends again (and it still remains that way to this very day). That being said:
This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.
It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.
The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.
And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.
It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.
(The same confusion comes up regularly whenever you touch Next.js apps.)
I'm a nobody PHP dev. He's a brilliant developer. I can't understand why he couldn't see this coming.
I agree I underestimated the likelihood of bugs like this in the protocol, though that’s different from most discussions I’ve had about RSC (where concerns were about user code). The protocol itself has a fairly limited surface area (the serializer and deserializer are a few kloc each), and that’s where all of the exploits so far have concentrated.
Vulnerabilities are frustrating, and this seems to be the first time the protocol is getting a very close look from the security community. I wish this was something the team had done proactively. We’ll probably hear more from the team after things stabilize a bit.
But sometimes, occasionally, a moonshot idea becomes a home run. That's why I dislike cynicism and grizzled veterans for whom nothing will ever work.
React lost me when it stopped being a rendering library and became a "runtime" instead. What do you know, when a runtime starts collapsing rendering, data fetching, caching, authorization boundaries, server and client into a single abstraction, the blast radius of any mistake becomes enormous.
Making complex things complex is easy.
Vue, on the other hand, is just brilliant. No wonder its creator, Evan You, went on to also create Vite. A creation so superior that it couldn't be confined to Vue, and the React community adopted it.
Or just fork if the maintainers want to go their way. If your solution has its merits it will find its fans.
While everyone is free to fork and maintain React, it's by no means an easy task, especially if it's not their job like it is Dan's.
Plus, the industry tends to gravitate towards what is popular. Network effects and all. So if a massively popular tool is subpar, its complications aren't without impact.
And no one is immune to criticism. LLMs are criticised for their sycophancy but some humans are no different when it comes to gatekeeping criticism.
I personally think it's the other way around, since code exposure increases the odds that a security breach happens, while DoS does not increase chances of exposure, but affects reliability.
Obviously we are simplifying a multidimensional severity to one dimension, but I personally think that breaches are more important than reliability. I'd rather have my app go down than be breached.
And I don't think it's a trivial difference, if you'd rather have a breach than downtime, you will have a breach.
Now I'm doubting whether RSC is good engineering technology or good practice. The real world is tradeoffs: RSC really helped us improve our development speed because we have good teammates with a solid understanding of the full stack.
Do hope such things won't happen again.
How about either just returning HTML (maybe with htmx), or having a "now classic" SPA?
The frontend must be the most over-engineered shitshow we as devs have ever created. It's where hype meets the metal.
Backend in python/ruby/go/rust.
Frontend in javascript/typescript.
Scripts in bash/zsh/nushell.
Once upon a time there was a low amount of friction and boilerplate with this approach, but with Claude and Codex it's changed from low to none.
Except I find most front-end stacks lead to endless configuration (e.g. Vue with Pinia, router, translation, Tailwind, maybe PrimeVue, and a bunch of logic for handling sessions, redirects, toast messages and whatnot), and I feel the pull to just go use Django or Laravel or Ruby on Rails, mostly with server-side templates. I much prefer that simplicity, even if it feels a bit icky to couple your front end and back end like that.
Let the server render everything. Let JS render everything, server is only providing the initial div and serves only JSON from then on. Actually let JS render partial HTML rendered on the server! Websockets anyone?
Imagine if SQL Server architecture or iOS development had this kind of ADHD.
Are people shipping faster because of them? Or is it all complexity and security vulnerabilities like this? You're not Facebook. Render HTML the classic way if you need server-rendered HTML. If you really do need an SPA (which is maybe 5% of the apps out there), then yeah, use client-side React, Vue, Svelte, etc., with none of those RPC server actions.
I wonder if similar magic fat-pipe technologies (like Blazor) have similar vulnerabilities waiting to be discovered. Maybe compiled languages are safer by default in this scenario, but anything built in Python, PHP, Ruby, or any "code is data" language would probably fare similarly poorly.
chuckadams•1d ago
tshaddox•1d ago
reactordev•1d ago
tshaddox•1d ago
nawgz•1d ago
Surely there are not so many people building e-commerce sites that server components should have ever become so popular.
skydhash•1d ago
nawgz•1d ago
reactordev•23h ago
robertoandred•1d ago
rustystump•1d ago
Now they are shoving server rendering into react native…
pjmlp•1d ago
Then they rediscovered PHP, Rails, Java EE/Spring, and ASP.NET, and rebooted SPAs into fullstack frameworks.
sangeeth96•1d ago
I can understand the dislike for Next, but this is such a poor comparison. If any of those frameworks had at any point done half the things React + Next-like frameworks accomplished, and given us the apps/experiences we've gotten since, we wouldn't be having this discussion.
Atotalnoob•1d ago
brazukadev•1d ago
pjmlp•1d ago
Using anything else requires yak shaving instead of coding the application code.
That is the only reason I get to use them.
tacker2000•1d ago
If anything the latter is much easier to maintain and to develop for.
acdha•1d ago
This is interesting because every Next/React project I see has a slower velocity than the median Rails/Django product 15 years ago. They’re just as busy, but pushing so much complexity around means any productivity savings is cancelled out by maintenance and how much harder state management and security are. Theoretically performance is the justification for this but the multi-second page load times are unconvincing.
From my perspective, it really supports the criticism about culture in our field: none of this is magic, we can measure things like page-weight, response times, or time to complete common tasks (either for developers or our users), but so much of it is driven by what’s in vogue now rather than data.
ricardobeat•1d ago
c-hendricks•1d ago
chuckadams•1d ago
acdha•21h ago
chuckadams•19h ago
ricardobeat•17h ago
seer•1d ago
Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.
Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.
React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvements to most of the tech pieces of the stack.
It's just wild to me that we had faster web apps, with better organization, better dev ex, faster to build and easier to maintain.
The only "wins" I can see for a Next.js project are flexibility, animation (though this is also debatable), and maybe deployment cost, but again I'm comparing to deploying Rails 15 years ago; things have improved there as well, I'm sure.
I know react can accomplish _a ton_ more on the front end but few projects actually need that power.
pjmlp•1d ago
whizzter•1d ago
hedayet•1d ago
moomoo11•1d ago
Like with almost everything people then shit on something they don’t understand.
mubou2•1d ago
Every action, every button click, basically every input is sent to the server, and the changed dom is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.
c0balt•1d ago
McGlockenshire•1d ago
This is insane to you only if you didn't experience the emergence of this technique 20-25 years ago. Almost all server-side templates were already partials of some sort in almost all the server-side environments, so why not just send the filled in partial?
Business logic belongs on the server, not the client. Never the client. The instant you start having to make the client smart enough to think about business logic, you are doomed.
crubier•17h ago
Could you explain more here? What do you consider "business logic"? Context: I have a client app to fly a drone using gamepad, mouse, and keyboard, with video feedback, maps, drone tasking, etc.
CharlieDigital•1d ago
Main downside is the hot reload is not nearly as nice as TS.
But the coding experience with a C# BE/stack is really nice for admin/internal tools.
vbezhenar•1d ago
seer•1d ago
Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the front-end code.
Honestly it is kinda nice.
dmix•1d ago
Websockets+thin JS are best for real time stuff more than standard CRUD forms. It will fill in for a ton of high-interactivity usecases where people often reach for React/Vue (then end up pushing absolutely everything needlessly into JS). While keeping most important logic on the server with far less duplication.
For simple forms personally I find the server-by-default solution of https://turbo.hotwired.dev/ to be far better where the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM, instead of doing full page reloads (ie, clicking edit to in-place change a small form, instead of redirecting to one big form).
brendanmc6•1d ago
It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.
Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.
mubou2•1d ago
Ndymium•1d ago
LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...
JeremyNT•1d ago
Server side rendering has been with us since the beginning, and it still works great.
Client side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate it.
qingcharles•1d ago
For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without resorting to sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
oefrha•1d ago
array_key_first•1d ago
The problem with API + frontend is:
1. You have two applications you have to ensure are always in sync and consistent.
2. Code is duplicated.
3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.
I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even if you think about vulnerability like unauthorized access, it's usually just this. If you can eliminate those 80% or mitigate them, then that's huge.
Oh, and that's not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high-risk activity that people just don't do it, so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.
chuckadams•1d ago
array_key_first•14h ago
Say I want to draw a box which has many checkboxes - like a multi-select. A very, very simple, but powerful, widget. In most Web applications, this widget is incredibly hard to develop.
Why is that? Well first we need to get the data for the box, and ideally just this particular page of the box, if it's paginated. So we have to use an API. But the API is going to come with so much baggage - we only need identifiers really, since we're just checking a checkbox. But what API endpoint is going to return a list of just identifiers? Maybe some RESTful APIs, but not most.
Okay okay, so we get a bunch of data and then throw away most of it. Whatever. But oh no - we don't want this multi-select to be split by logical objects, no, we have a different categorization criteria. So then we rope in another API, or maybe a few more, and we then group all the stuff together and try to splice it up ourselves. This is a lot of code, yes, and horribly frail. The realization strikes that we're essentially doing SQL JOIN and GROUP BY in JS.
Okay, so we'll build an API. Oh no you won't. You can't just build an API, it's an interface. What, you're going to write an API for your one-off multi-select? But what if someone else needs it? What about documentation? Versioning? I mean, is this even RESTful? Sure doesn't look like it. This is spaghetti code.
Sigh. Okay, just use the 5 API endpoints and recreate a small database engine on the frontend, who cares.
Or, alternative: you just draw the multi-select. When you need to lazily update it, you just update it. Like you were writing a Qt application and not a web application. Layers and layers of complexity and friction just disappear.
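For contrast, the "recreate a small database engine on the frontend" step described above, sketched concretely (endpoints and shapes invented):

    // The "SQL JOIN and GROUP BY in JS" the parent describes; all of this
    // exists just to render a list of checkboxes.
    type Item = { id: number; categoryId: number };
    type Category = { id: number; label: string };

    async function loadMultiSelect(): Promise<Map<string, number[]>> {
      const [items, categories] = await Promise.all([
        fetch("/api/items").then((r) => r.json() as Promise<Item[]>),
        fetch("/api/categories").then((r) => r.json() as Promise<Category[]>),
      ]);

      // JOIN: categoryId -> label
      const labels = new Map(categories.map((c) => [c.id, c.label] as const));

      // GROUP BY: label -> item ids (the ids are all we actually needed)
      const groups = new Map<string, number[]>();
      for (const item of items) {
        const label = labels.get(item.categoryId) ?? "Uncategorized";
        if (!groups.has(label)) groups.set(label, []);
        groups.get(label)!.push(item.id);
      }
      return groups;
    }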
chuckadams•49m ago
christophilus•22h ago
A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.
Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.
array_key_first•14h ago
When I say traditional client-server applications, I mean the type of stuff like X or IPC - the stuff before the Web.
> A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.
There's really no reason it "should" be either one or the other because BOTH have huge drawbacks.
The problem with the first approach (SSR with JS sprinkled) is that particular interactions become very, very hard. Think, for example, a node editor. Why would we have a node editor? We're actually doing this at work right now, building out a node editor for report writing. We're 95% SSR.
Turns out, it's super duper hard to do with this approach, because it's so heavily client-side interactive that you need lots and lots of sync points, and ultimately the SERVER will be the one generating the report.
But actually, the client-side approach isn't very good either. Okay, maybe we just serialize the entire node graph, send it over the pipe once, and then save it now and again. But what if we want to preview what the output is going to look like in real time? Now this is really, really hard, because we need to incrementally serialize the node graph and send it to the server, generate a bit of report, and get it back, OR we redo the report generation on the front end with some front-loaded data, in which case our "preview" isn't a preview at all; it's a recreation.
The solution here is, actually, a chatty protocol. This is the type of thing that's super common and trivial in desktop applications - it's what gives them superpowers. But it's so rare to see on the Web.
procaryote•21h ago
fatbird•21h ago
No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.
> Code is duplicated.
Not if the frontend isn't trying to model the internals of the backend.
> Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.
array_key_first•14h ago
This is the idea, an idea which can never be fully realized.
The backend MUST understand what the frontend sees to some degree, because of efficiency, performance, and user-experience.
If we build the perfect RESTful API, where each object is an endpoint and their relationships are modeled by URLs, we have almost realized this vision. But it cost us our server catching on fire. It thrashed our user experience. Our application sucks ass, it's almost unusable. Things show up on the front-end but they're ghosts, everything takes forever to load, every button is a liar, and the quality of our application has reached new depths of hell.
And, we haven't realized the vision even. What about Authentication? User access? Routing?
> Not if the frontend isn't trying to model the internals of the backend.
The frontend does not get a choice, because the model is the model. When you go against the grain of the model and you say "everything is abstract", then you open yourself up to the worst bugs imaginable.
No - things are linked, things are coupled. When we just pretend they are not, we haven't done anything but obscure the points where failure can happen.
> Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.
No, this is a stark decrease in velocity.
When I need to display a new form that, say, coordinates 10 database tables in a complex way, I can just do that if the application is SSR or Livewire-type. I can just do that. I don't need the backend team to implement it in 3 months and then I make the form. I also don't need to wrangle together 15+ APIs and then recreate a database engine in JS to do it.
Realistically, those are your two options. Either you have a performant backend API interface full of one-off implementations, what we might consider spaghetti, or you have a "clean" RESTful API that falls apart as soon as you even try to go against the grain of the data model.
There are, of course, in-betweens. RPC is a great example. We don't model data, we model operations. Maybe we have a "generateForm" method on the backend and the frontend just uses this. You might notice this looks a lot like SSR with extra steps...
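A sketch of that middle ground, keeping the comment's hypothetical `generateForm` name (the endpoint shape is invented, not any particular framework's API):

    // Model the operation, not the data: one endpoint, one purpose-built
    // payload; the server joins its 10 tables however it likes.
    type FormSpec = { title: string; fields: { name: string; value: string }[] };

    async function generateForm(reportId: string): Promise<FormSpec> {
      const res = await fetch("/rpc/generateForm", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ reportId }),
      });
      return res.json();
    }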
But this all assumes the form is generated and then done. What if the data is changing? Maybe it's not a form, maybe it's a node editor? SSR will fall apart here, and so will the clean-code frontend-backend. It will be so hellish, so evil, so convoluted.
Bearing in mind, this is something truly trivial for desktop applications to do. The models of modern web apps just cannot do this in a scalable, or reliable, way. But decades old technology like COM, dbus, and X can. We need to look at what the difference is and decide how we can utilize that.
tracker1•23h ago
I've been loosely following the Rust equivalents (Leptos, Yew, Dioxus) for a while in the hopes that one of them would see a component library near the level of Mantine or MUI (Leptos + Thaw is pretty close). It feels a little safer in the long term than Blazor, IMO, and again, RSC for React feels icky at best.
epolanski•1d ago
Little did it help that even React developers were saying it was the wrong tool for plenty of use cases.
Worst of all?
The entire nuance of choosing the right tool for the job has been long lost on most developers. Even the comments I read on HN make me question where the engineering part of the job starts.
CodingJeebus•1d ago
Non-technical MBAs seem to have a hard time grasping that a JS-only platform is not a panacea and comes with serious tradeoffs.