https://gist.github.com/andrewarrow/c75c7a3fedda9abb8fd1af14...
400 lines of GraphQL vs. one REST DELETE endpoint
We are paying that same complexity tax you described, but without the benefit of needing to support thousands of unknown 3rd-party developers.
Equivalent delete queries in REST / GraphQL would be:
curl -X DELETE 'https://api.example.com/users/123'
vs curl 'https://api.example.com/graphql?query={ deleteUser(id: 123) { id } }'

We have a mixed GraphQL/REST API at $DAY_JOB, and our delete mutations look almost identical to our REST DELETE endpoints.
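To make the "almost identical" point concrete, here's a sketch (all names hypothetical) showing that the same delete logic can back both a REST handler and a GraphQL resolver:

```typescript
// Sketch (hypothetical names): the same deleteUser logic can back both styles.
const users = new Map<number, { id: number; name: string }>([
  [123, { id: 123, name: "Ada" }],
]);

function deleteUser(id: number): { id: number } | null {
  if (!users.has(id)) return null; // 404 in REST, null result in GraphQL
  users.delete(id);
  return { id };
}

// REST:    DELETE /users/123                        -> handler calls deleteUser(123)
// GraphQL: mutation { deleteUser(id: 123) { id } }  -> resolver calls deleteUser(123)
console.log(deleteUser(123)); // { id: 123 }
```

The transport differs; the function underneath is the same.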
TFA complains about needing to define types (lol), but if you're doing REST endpoints you should be writing some kind of API specification for them anyway (Swagger?). So ultimately there isn't much of a difference. However, having your types directly in your schema is nicer than bolting on a fragile OpenAPI spec that will quickly become outdated when a dev forgets to update it after a parameter is added/removed/changed.
No need to update manually. Further, you can prevent breaking changes to the spec using oasdiff
- Overly verbose endpoint & request syntax: $expand, parentheses and quotes in paths, actions, etc.
- Exposes too much filtering control by default, allowing the consumer to do "bad things" on unindexed fields without steering them toward the happy path.
- Bad/lacking open-source tooling for portals, mocks, examples, and validation versus OpenAPI & GraphQL.
It all smells like unpolished MS enterprise crap with only internal MS & SAP adoption TBH.
My issue with this article is that, as someone who is a GraphQL fan, that is far from what I see as its primary benefit, and so the rest of the article feels like a strawman to me.
TBH I see the biggest benefits of GraphQL are that it (a) forces a much tighter contract around endpoint and object definition with its type system, and (b) schema evolution is much easier than in other API tech.
For the first point, the entire ecosystem guarantees that when a server receives an input object, that object will conform to the type, and similarly, a client receiving a return object is guaranteed to conform to the endpoint response type. Coupled with custom scalar types (e.g. "phone number" types, "email address" types), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier to have errors slip through. Like GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
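That pruning behavior is easy to picture; here's a toy sketch of the idea (not any library's actual implementation):

```typescript
// Toy sketch of GraphQL-style response pruning: only requested fields survive.
function prune(
  obj: Record<string, unknown>,
  requested: string[]
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const field of requested) {
    if (field in obj) out[field] = obj[field];
  }
  return out;
}

const user = { id: 1, email: "a@example.com", passwordHash: "hunter2" };
// The client asked only for id and email, so passwordHash never leaves the server.
console.log(prune(user, ["id", "email"]));
```

A REST handler that serializes a whole model object gets no such guardrail by default.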
When it comes to schema evolution, I've found that adding new fields and deprecating old ones, and especially that new clients only ever have to be concerned with the new fields, is a huge benefit. Again, other API tech allows you to do something like this, but it's much less standardized and requires a lot more work and cognitive load on both the server and client devs.
The other one I would mention is the ability to very easily reuse resolvers in composition, and even federate them. Something that can be very clunky to get right in REST APIs.
Composed resolvers are a headache for most and not seen as a net benefit. You can have proxied (federated) subsets of routes in REST; that ain't hard at all.
Right, so if you take away the resolver composition (this is graph composition and not route federation), you can do the same things with a similar amount of effort in REST. This is no longer a GraphQL vs REST conversation, it's an acknowledgement that if you don't want any of the benefits you won't get any of the benefits.
It is that very compositional graph resolving that makes many see it as overly complex: not a benefit but a detriment. You seem to imply that the benefit is guaranteed and that graph resolving cannot be done within a REST handler. It can be, and it's much simpler and easier to reason about. I'm still going to go get the same data, but with less complexity and reasoning overhead than using the resolver-composition concept from GraphQL.
Is resolver composition really that different from function composition?
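Arguably not that different; here's a sketch of the analogy (all names made up):

```typescript
// Sketch: a "resolver" is just a function from a context (and parent value) to data.
type Ctx = { userId: number };

const getUser = (ctx: Ctx) => ({ id: ctx.userId, teamId: 7 });
const getTeam = (teamId: number) => ({ id: teamId, name: "Platform" });

// Composing resolvers = composing functions: the child resolver consumes
// the parent's output, exactly like g(f(x)).
const userWithTeam = (ctx: Ctx) => {
  const user = getUser(ctx);
  return { id: user.id, team: getTeam(user.teamId) };
};

console.log(userWithTeam({ userId: 1 }));
// { id: 1, team: { id: 7, name: 'Platform' } }
```

What GraphQL adds on top is standardizing how these compositions are declared and invoked across a schema, rather than inventing the concept.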
Not sure about the schema evolution part. Protobufs seem to work great for that.
It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).
But you're right, if you have version skew and the client is expecting something else then it's not much help.
You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.
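A minimal sketch of that skew-tolerant client decoding (all names made up):

```typescript
// Sketch: the client prunes unknown fields and fills missing ones with defaults.
function normalize<T extends Record<string, unknown>>(
  payload: Record<string, unknown>,
  defaults: T
): T {
  const out: Record<string, unknown> = { ...defaults };
  for (const key of Object.keys(defaults)) {
    if (key in payload) out[key] = payload[key];
  }
  return out as T;
}

// Server added `nickname` (client prunes it) and dropped `age` (client defaults it).
console.log(normalize({ id: 1, nickname: "x" }, { id: 0, name: "unknown", age: 0 }));
// { id: 1, name: 'unknown', age: 0 }
```

Past a certain amount of skew the defaults stop making sense, which is the "break something" point mentioned above.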
I agree with that, and when I'm in a "typescript only" ecosystem, I've switched to primarily using tRPC vs. GraphQL.
Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own Zod validation, but in a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you have to roll yourself, or which is done by convention) is important IMO.
Site: https://kubb.dev/
Despite the many REST flaws I know about, and that it sometimes feels tedious, I still prefer it.
And now with AI that can scaffold most of a REST API, the pain points of REST are mostly gone.
Now that people are using tRPC a lot, I wonder: can we combine gRPC + REST into something essentially type-safe, where the client is guaranteed to understand what the response model looks like?
I also really liked that you can create a snapshot of the whole schema for integration test purposes, which makes it very easy to detect breaking changes in the API, e.g. if a nullable field becomes not-nullable.
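A toy sketch of that snapshot idea (the field strings and rule here are made up, just to illustrate the kind of check an integration test can run):

```typescript
// Toy sketch of a schema-snapshot check: compare a stored field definition
// against the current one and flag nullable -> non-nullable as breaking.
function isBreaking(snapshotField: string, currentField: string): boolean {
  return !snapshotField.endsWith("!") && currentField.endsWith("!");
}

const snapshot = "email: String";  // from the last committed snapshot (assumed)
const current = "email: String!";  // current schema (assumed)

console.log(isBreaking(snapshot, current)); // true -> fail the integration test
```

Real setups diff the whole printed schema rather than single fields, but the principle is the same: the snapshot turns schema drift into a test failure.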
But I also agree with lots of the points of the article. I guess I am just not super in love with REST. In my experience, REST APIs were often quite messy and inconsistent in comparison to GraphQL. But of course that’s only anecdotal evidence.
Do people actually work like this in 2025? I mean, sure, when you have entire teams just for frontends and backends, then yeah, but your average corporate web-app development? It's all full stack these days. It's often expected that you can handle both worlds (client and server), and increasingly it's even a TypeScript "shared universe" where you don't leave the TS ecosystem at all (React with something like RR plus a TS BFF with SQL). This last point, where frontend and backend meet, is clearly the way things are going in general. I mean, these days React doesn't even beat around the bush and literally tells you to install it with a framework; no more create-react-app, server-side rendering is a staple now, and server-side components are going to be a core concept of React within a few years tops.
Javascript has conquered the client side of the internet, but not the server side. Typescript is going to unify the two.
The internet at large seems to have a fundamental misunderstanding about what GraphQL is/is not.
Put simply: GQL is an RPC spec that is essentially implemented as a Dict/Key-Value Map on the server, of the form: "Action(Args) -> ResultType"
In a REST API you might have
app.GET("/user", getUser)
app.POST("/user", createUser)
In GraphQL, you have a "resolvers" map, like: {
"getUser": getUser,
"createUser": createUser,
}
And instead of sending a GET /user request, you send a GET /query with "getUser" as your server action. The arguments and output shape of your API routes are typed, like in OpenAPI/OData/gRPC.
That's all GraphQL is.
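The resolvers-map dispatch described above can be sketched in a few lines (names and shapes are made up for illustration):

```typescript
// Minimal sketch of the "resolvers map" idea: one endpoint, actions dispatched by name.
type User = { id: number; name: string };
const db = new Map<number, User>([[1, { id: 1, name: "Ada" }]]);

const resolvers: Record<string, (args: any) => unknown> = {
  getUser: ({ id }: { id: number }) => db.get(id) ?? null,
  createUser: ({ id, name }: User) => {
    const user = { id, name };
    db.set(id, user);
    return user;
  },
};

// One endpoint; the request names the action, e.g. { action: "getUser", args: { id: 1 } }.
function handleQuery(action: string, args: unknown): unknown {
  const resolver = resolvers[action];
  if (!resolver) throw new Error(`unknown action: ${action}`);
  return resolver(args);
}

console.log(handleQuery("getUser", { id: 1 })); // { id: 1, name: 'Ada' }
```

Everything else (selection sets, fragments, federation) is layered on top of this dispatch.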
Seriously though, you can pretty much map GraphQL queries and resolvers onto JSONSchema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler with more overhead
I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things
My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness
At build time, the server generates random-string resolver names that map one-to-one onto queries, fixed, because we know exactly what we need when we are shipping to production.
Clients can only call those random strings with some parameters; the graph is now locked down, and the production server only responds to those random-string resolver names.
Flexibility in dev, restricted in prod
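The lockdown scheme above boils down to a lookup table; a sketch (ids and query text are invented, and real setups like Apollo/Relay persisted queries hash the documents):

```typescript
// Sketch of a persisted-query map: at build time each known query gets an
// opaque id; in production, only ids are accepted, never arbitrary query text.
const persistedQueries: Record<string, string> = {
  q_8f3a2c: "query GetUser($id: ID!) { user(id: $id) { id name } }",
};

function resolveQuery(queryId: string): string {
  const query = persistedQueries[queryId];
  if (!query) throw new Error("unknown query id: arbitrary queries rejected in prod");
  return query;
}

console.log(resolveQuery("q_8f3a2c").startsWith("query GetUser")); // true
```

The full query language stays available in development, while production exposes only this fixed surface.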
I say probably because in the last ~year Apollo shipped functionality (fragment masking) that brings it closer.
I stand by my oft-repeated statement that I don’t use Relay because I need a React GraphQL client, I use GraphQL because I really want to use Relay.
The irony is that I have a lot of grievances about Relay, it’s just that even with 10 years of alternatives, I still keep coming back to it.
It's about the only thing about my job I still do like.
The difference is that it is schema-first, so you are describing your API at a level that largely replaces backend-for-frontend stuff. If it's the only interface to your data you have a lot less code to write, and it interfaces beautifully with the query builder.
I tend not to use it in unsecured contexts and I don't know if I would bother with GraphQL more generally, though WP-GraphQL has its advantages.
This gets repeated over and over again, but if this is your take on GraphQL you def shouldn't be using GraphQL, because overfetching is never such a big problem that it would warrant using GraphQL.
In my mind, the main problem GraphQL tries to solve is the same "impedance mismatch" that ORMs try to solve. ORMs do this at the data-fetching level in the BE, while GraphQL does it in the client.
I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.
In my opinion, GraphQL tooling never panned out enough to make GraphQL worthwhile. Hasura is very cool, but on the client side, there's not much going on... and now with AI programming you can just have your data layers generated bespoke for every application, so there's really no point to GraphQL anymore.
Wait, what? Overfetching is easily one of the top 3 reasons for the enshittification of the modern web! It's one of the primary causes of the incredible slowdowns we've all experienced.
Just go to any slow web app, press F12, and look at the megabytes transferred on the network tab. Copy-paste all the text on the screen and save it to a file. Count the kilobytes of "human readable" text, then divide by the total transferred over the wire to work out the efficiency. For notoriously slow web apps, this is often 0.5% or worse, even when filtering down to API requests only!
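The back-of-envelope check described above, with assumed numbers plugged in:

```typescript
// Efficiency = human-readable text delivered / bytes over the wire.
const visibleTextKB = 15; // kilobytes of on-screen text (assumed)
const wireMB = 3;         // megabytes transferred per the network tab (assumed)

const efficiency = visibleTextKB / (wireMB * 1024);
console.log((efficiency * 100).toFixed(2) + "%"); // prints 0.49%
```

So 15 KB of text delivered via 3 MB of transfer lands right around the 0.5% figure quoted.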
#2 downloading the same fields multiple times
#3 downloading unneeded data/code
Checks out
TLDR, you get nice features like: if the field you're selecting doesn't exist, the extension will create it for you (as a client field). And your entire app is built of client fields that reference each other and eventually bottom out at server fields.
The complexity and time lost to thinking are just not worth it, especially since once you ship your GraphQL app to production, you're locking down the request fields anyway (or you're keeping yourself open to more pain).
I even wrote a zero-dependency auth helpers package and that was not enough for me to keep at it
https://github.com/verdverm/graphql-autharoo
Like OP says, pretty much everything GraphQL can do, you can do better without GraphQL
Also, using a proper GraphQL server and not composing it yourself from primitives is usually beneficial.
Apollo shows up in the README and package.json, so I'm not sure why you are assuming I was not using a proper implementation
Maybe I'm missing something, but I think they did well
GQL (https://www.gqlstandards.org/) is an ISO standard. The graph-database people don't love the search-engine results when they're looking for something.
I maintain a graph database where support for GQL often comes up.
Do we think this has turned out to hold? Is caching an API HTTP response of no value in 2025?
Does it really? What if you need to store user preferences?
Also, some would say BFF is easier to implement than GraphQL.
I’m not a database neckbeard but I’ve always been confused how GraphQL doesn’t require throwing all systems knowledge about databases out the window
Same as what I thought about Nest.js and Angular.
All of them are hard to understand at first; later (a few years in), you feel it and get the value.
Sounds stupid, but I tried to reimplement all the benefits using class transformers, Zod, custom validators, and all the other packages. I always ended up with: "alright, GraphQL does this out of the box."
REST is nice, same as Express.js, if you're writing non-production code. The reality is you need to love that boilerplate. AI writes it anyway.
> Especially if your architecture already solved the problem it was designed for.
What I need is to not want to fall over dead. REST makes me want to fall over dead.
> error handling is harder than it needs to be

GraphQL error responses are… weird.

> Simple errors are easier to reason about than elegant ones.
Is this a common sentiment? Looking at a garbled mash of linux or whatever tells me a lot more than "500 sorry"
I'm only trying out GraphQL for the first time right now, because I'm new to frontend stuff, but coming from life on the backend, having a whole class of problems (getting the server and client to agree on what to ask for and what you'll get back) compiled away is so nice. I don't actually know if there's something better than GraphQL for that, but I wish when people wrote blogs like this they'd fill them with more "try these things instead for that problem" than simply "this thing isn't as good as you think it is; you probably don't need it".
You should be using Relay[0] or Isograph[1] on the frontend, and Pothos[2] on the backend (if using Node), to truly experience the benefits of GraphQL.
[0]: https://relay.dev/
Yet I am conflicted on whether it's a real value-add for most use cases, though. Maybe if there are many microservices and you need a nice way to tie it all together. Or if the underlying DB (source-of-truth data stores) can natively support responses in GraphQL. Then you could wrap it in a thin API-transformation BFF (backend for frontend) per client and call it a day.
But in most cases, you’re just shifting the complexity + introducing more moving parts. With some discipline and standardization (if all services follow the same authentication mechanics), it is possible to get the same benefits with OpenAPI + an API catalog. Plus you avoid the layers of GraphQL transformations in clients and the server.
100% based on my anecdotal experience supporting new projects and migrations to GraphQL in < $10B market cap companies (including a couple of startups).
Post-honeymoon, I returned to REST+Open API
But we are quite constrained on resources, so now even the BFF seems to consume more and more BE development time. Now we are considering letting the FE use some sort of bridge to the BE's DB layer in order to directly CRUD what it needs and therefore skip the BFF API. That DB layer already has all sorts of validations in place. Because the BE is Java and the FE is JS, it seems the only usable bridge here would be gRPC. Does anyone have any other ideas, or has anyone done anything in this direction?
This is not surprising: Apollo only recently added support for data masking and fragment colocation, but it has been a feature of Relay for eternity.
See https://www.youtube.com/watch?v=lhVGdErZuN4 for the benefits of this approach:
- you can make changes to subcomponents without worrying about affecting the behavior of any other subcomponent,
- the query is auto-generated based on the fragment, so you don't have to worry that removing a field (if you stop using it in one subcomponent) will accidentally break another subcomponent
In the author's case, they (either) don't care about overfetching (i.e. they avoid removing fields from the GraphQL query), or they're at a scale where only a small number of engineers touch the codebase. (But imagine a shared component, like a user avatar. Imagine it stopped using the email field. How many BFFs would have to be modified to stop fetching the email field? And how much research must go into determining whether any other reachable subcomponent used that email field?)
If moving fast without overhead isn't a priority (or you're not at the scale where it is a problem), or you're not using a tool that leverages GraphQL to enable this speed, then indeed, GraphQL seems like a bad investment! Because it is!
However, I think GraphQL really shines when the backend and frontend are developed by different organizations.
I can only speak from my experience with Shopify's GraphQL APIs. From a client-side development perspective, being able to navigate and use the extensive and (admittedly sometimes over-)complex Shopify APIs through GraphQL schemas and having everything correctly typed on the client side is a godsend.
Just imagining offering the same amount of functionality for a multitude of clients through a REST API seems painful.
If you are using something which requires you to write the GraphQL schema manually and then adapt both the server and the client... it's a completely different experience and not that pleasant at all.
But the other graph query language "Cypher" always seemed a lot more intuitive to me.
Are they really trying to solve such different problems? Cypher seems much more flexible.