/draw_point?x=7&y=20&r=255&g=0&b=0
/get_point?x=7&y=20
/delete_point?x=7&y=20
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by typing it directly into the URL bar), and the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark. This is also how HN does it:
/vote?id=44507373&how=up&auth=...
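To make the "easiest to automate" point concrete, a minimal TypeScript sketch driving the point API above (the localhost base URL is hypothetical):

// Minimal sketch: every action is just a URL, so fetch, curl, links
// and bookmarks all work the same way.
const base = "http://localhost:8080"; // hypothetical host

await fetch(`${base}/draw_point?x=7&y=20&r=255&g=0&b=0`); // draw a red point
const res = await fetch(`${base}/get_point?x=7&y=20`);    // read it back
console.log(await res.text());
await fetch(`${base}/delete_point?x=7&y=20`);             // remove it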
REST APIs, then, are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best APIs I've seen mix and match both patterns: RESTful endpoints for data, "function call" endpoints for often-used actions like voting, bulk actions, and other things that the client needs to be able to do but where you want the API to be in control of how it is applied.
I don't disagree, but I've found (delivering LoB applications) that they are not homogeneous: the way REST is implemented right now makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e. a greenfield application) you can do better (in terms of reliability, product feature velocity, etc.) by not using a tree exchange form (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Surely you're not advocating mutating data with GET?
Reading your original comment I was thinking "Sure, as long as you have a good reason for doing it this way, anything goes", but I realized that you prefer to do it this way because you don't know any better.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method—that is a convention, and yes, a malicious bot would also try to send POST and DELETE requests. That's why you authenticate users, but this is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it except if it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
The story URL would only have to point to a web page that creates the upvote POST request via JS.
CORS is a lot less strict around GET as it is supposed to be safe.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, consider that the JS could also have, for example, created a form with the vote page as the target and clicked the submit button. All completely unrelated to CORS.
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
Anyway, this is lame low effort trolling for some unknown purpose. Stop it.
See this post for example:
https://news.ycombinator.com/item?id=22761897
Quotes:
"Voting ring detection has been one of HN's priorities for over 12 years"
"I've personally spent hundreds of hours working on this"
Why?
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
During that same time, the business also wanted to use the fact that our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends.
Backenders read about API design and get the idea it should be REST-like (as in, JSON, with different HTTP methods for CRUD operations).
And of course we weren't going to have two separate APIs, that we ran our frontends on our API was another selling point (eat your own dog food, proof that the API can do everything our frontend can, etc).
So: the UI runs on a REST API.
I'm hoping that we'll go back to Django templates with a sprinkle of HTMX here and there in the future, but who knows. That will probably be a separate backend that runs in front of this API then...
It is a selling point. A massive one if you're writing enterprise software. It's not merely about "being technical", but mandatory for recurring automated jobs and integration with their other software.
1. UX designers operate at every stage of the software development lifecycle, from product discovery to post-launch support (validation of UX hypotheses); they do not exercise control - they work within constraints as part of the team. The location of a specific action in the UI and the interaction triggering it are orthogonal to the availability of this action. Availability is defined by the state. If state restricts certain actions, UX must reflect that.
2. From an architectural point of view, once you encapsulate the state-checking behavior, the following will work the same way: "if (state === something)" and "if (resource.links["action"] !== null)". The latter approach is much better, because in most cases any state-changing action will require validation on the server, and you can implement the logic only once (on the server). (A minimal sketch follows below.)
I have been developing HATEOAS applications for quite a while and maintain HAL4J library: there are some complexities in this approach, but UI design is certainly not THE problem.
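To illustrate point 2, a minimal TypeScript sketch (resource shape and names hypothetical):

// Hypothetical resource shape: the server includes an "activate" link
// only when the action is allowed in the resource's current state.
interface Resource {
  state: string;
  links: Record<string, { href: string } | undefined>;
}

// Instead of duplicating server rules like `if (state === something)`,
// the client only checks whether the server advertised the action.
function canActivate(resource: Resource): boolean {
  return resource.links["activate"] !== undefined;
}

async function activate(resource: Resource): Promise<void> {
  const link = resource.links["activate"];
  if (!link) throw new Error("activation not available in this state");
  await fetch(link.href, { method: "POST" }); // the server re-validates anyway
}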
It's interesting that Stripe still uses form-encoded POST bodies for requests.
So your payloads look like this:
{
  "id": 1,
  "href": "http://someplace.invalid/things/1",
  "next-id": 3,
  "next-href": "http://someplace.invalid/things/3"
}
And rather than just using next-href your clients append next-id to a hardcoded things base URL? That seems like way more work than doing it the REST way.

But you're assuming that there is a real contradiction between shipping features and RESTful design. I believe that RESTful design can in many cases actually increase feature delivery speed through its decoupling of clients and servers and, more deeply, due to its operational model.
Also, who determined these rules are the definition of RESTful?
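For contrast, a sketch of the link-following client described above (payload shape taken from the example; the generator name is hypothetical):

// Hypothetical "thing" payload matching the example above.
interface Thing {
  id: number;
  href: string;
  "next-id"?: number;
  "next-href"?: string;
}

// The client never constructs a URL; it just follows next-href.
async function* walk(startUrl: string): AsyncGenerator<Thing> {
  let url: string | undefined = startUrl;
  while (url) {
    const thing: Thing = await (await fetch(url)).json();
    yield thing;
    url = thing["next-href"]; // vs. `${hardcodedBase}/things/${thing["next-id"]}`
  }
}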
> Also, who determined these rules are the definition of RESTful?
Roy Fielding.
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an API done the “wrong” vs the “right” way. Say I have a TODO API, how do I make it so that it uses HATEOAS (also who’s coming up with these acronyms…smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
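For what it's worth, a deliberately tiny sketch of the "right" way for the TODO case (fields and link rels hypothetical): each response carries its own links, and the client follows them instead of hardcoding paths.

// Hypothetical HATEOAS-style TODO item: state and available actions travel together.
const todo = {
  title: "write a non-contrived example",
  completed: false,
  _links: {
    self:     { href: "/todos/42" },
    complete: { href: "/todos/42/complete" }, // present only while incomplete
    delete:   { href: "/todos/42" },
    list:     { href: "/todos" },
  },
};

// The "wrong" way hardcodes the route; the "right" way follows the advertised link.
await fetch(todo._links.complete.href, { method: "POST" });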
Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide context for the advantages of "true" ReST architecture, and contrived.
I have yet to see an API that was improved by following strict REST principles. If REST describes the web (a UI, not an API), and it’s the only useful example of REST, is REST really meaningful?
This is very obviously not true. Take search engine crawlers, for example. There isn’t a human operator of GoogleBot deciding which links to follow on a case-by-case basis.
> I have yet to see an API that was improved by following strict REST principles.
I see them all the time. It’s ridiculous how many instances of custom logic in APIs can be replaced with “just follow the link we give you”.
> our clever thinker invents a new, higher, broader abstraction
> When you go too far up, abstraction-wise, you run out of oxygen.
> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
REST is the opposite. REST is “We did this. It worked great! This is why.” And web developers around the world are using this every single day in practical projects without even realising it. The average web developer uses REST, including HATEOAS, all the time, and it works great for them. It’s just when they set out to do it on purpose, they often get distracted by some weird fake definition of REST that is completely different.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
REST = Hell No
GQL = Hell No.
RPC with status codes = Grin and point.
I like to get stuff done.
Imagine you were forced to organize your code files like REST. Folder is a noun. Functions are verbs. One per folder. Etc. It would drive you nuts.
Why do this for API unless the API really really fits that style (rare).
GQL is expensive to parse and hides information from proxies (200 for everything)
That’s got nothing to do with REST. You don’t have to do that at all with a REST API. Your URLs can be completely arbitrary.
Yes. All endpoints POST, JSON in, JSON out (or whatever) and meaningful HTTP status codes. It's a great sweet spot.
Of course, this works only for apps that fetch() and createElement() the UI. But that's a lot of apps.
If I don't want to use an RPC framework or whatever I just do:
{
  method: "makeBooking",
  argument: {
    one: 1,
    two: "too",
  },
  ...
}
And have a dictionary in my server mapping method names to the actual functions. All functions take one param (a dictionary with the data), validate it, use it, and return another single dictionary along with an appropriate status code.
You can add versions and such but at that point you just use JSON-RPC.
This kind of setup can be much better than REST APIs for certain use cases.
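A minimal TypeScript sketch of that dispatch table (method names hypothetical):

// Every handler takes one dictionary and returns a result plus a status code.
type Handler = (argument: Record<string, unknown>) => { status: number; body: unknown };

const methods: Record<string, Handler> = {
  makeBooking: (arg) => {
    if (typeof arg.one !== "number") return { status: 400, body: { error: "bad 'one'" } };
    return { status: 200, body: { bookingId: 123 } };
  },
  cancelBooking: (arg) => ({ status: 200, body: { cancelled: arg.bookingId } }),
};

// All requests are POSTs to one endpoint; the body names the method.
function dispatch(request: { method: string; argument: Record<string, unknown> }) {
  const handler = methods[request.method];
  if (!handler) return { status: 404, body: { error: `unknown method ${request.method}` } };
  return handler(request.argument);
}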
This makes automating things like retrying network calls hell. You can safely assume a GET will be idempotent, and safely retry it on failure with a delay. A POST might, or might not, also empty your bank account.
HTTP verbs are not just for decoration.
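Concretely, the retry logic alluded to might look like this sketch (only methods HTTP defines as safe or idempotent get retried automatically; the backoff numbers are arbitrary):

// Retry only requests whose methods HTTP defines as safe/idempotent.
const IDEMPOTENT = new Set(["GET", "HEAD", "PUT", "DELETE", "OPTIONS"]);

async function fetchWithRetry(url: string, init: RequestInit = {}, attempts = 3): Promise<Response> {
  const method = (init.method ?? "GET").toUpperCase();
  for (let i = 0; ; i++) {
    try {
      return await fetch(url, init);
    } catch (err) {
      // Blindly retrying a POST could repeat a side effect (e.g. a payment).
      if (!IDEMPOTENT.has(method) || i >= attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, 2 ** i * 250)); // exponential backoff
    }
  }
}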
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
- Government portals for filing forms and other interactions
- Open data initiatives like Wikipedia and OpenStreetmap
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.

Yes, and it's so nice when done well.
The funny thing is, that perfectly describes HTML. Here’s a document with links to other documents, which the user can navigate based on what the links are called. Because if it’s designed for users, it’s called a User Interface. If it’s designed for application programming, it’s called an Application Programming Interface. This is why HATEOAS is kinda silly to me. It pretends APIs should be used by Users directly. But we already have that, it’s called a UI.
> Most web APIs are not designed with this use-case in mind.
I wonder if this will change as APIs start to support AI consumption.
Discoverability is very important to an AI, much more so than to a web app developer.
MCP shows us how powerful tool discoverability can be. HATEOAS could bring similar benefits to bare API consumption.
I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases that are not resources or are not stateless. Sure, you can find some obtuse way to make them resources, but that either leads to bad abstractions that don't convey the vocabulary of the underlying system (over time creating a rift in context between the interface and the underlying logic), or exposes underlying implementation details simply because they are easier to model as resources.
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally designed against the common use case of a static, non-evolving client, such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.
This might come across as a rant, but it is meant to be informative so I'll tell you what REST and HATEOAS are good for: Building micro browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORM into a html form or an android form or a form inside your MMO which happens to have a registration form built into the game itself, rather than say the launcher.
Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.
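For reference, a HAL-FORMS style response might look roughly like this (an illustrative shape, not spec-complete; the evolvable client renders the template as an HTML, Android, or in-game form):

const registration = {
  _links: {
    self: { href: "/registration" },
  },
  _templates: {
    default: {
      title: "Register",
      method: "POST",
      properties: [
        { name: "displayName", required: true, prompt: "Name" },
        { name: "email", required: true, regex: "^[^@]+@[^@]+$" },
      ],
    },
  },
};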
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast number of situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions by equating the simplest and most irrelevant problem with the hardest problem in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than just an API where the "/user/{id}/orders" link is stored under _links. The server is allowed to give the "/user/{id}/orders" link a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure-oblivious way.
Well, besides that, I don't see how REST solves the problem it says it addresses. So your user object includes an activate field that describes the URI you hit to activate the user. When that URI changes, the client doesn't even notice, because it queries for a user and then visits whatever it finds in the activate field.
Then you change the term from "activate" to "unslumber". How does the client figure that out? How is this a different problem from changing the user activation URI?
We're using actual REST right now. That's what SSR HTML uses.
The rest of your (vastly snarkier) diatribe can be ignored.
And, yet, you then said the following, which seems to contradict the rest of what you said before it...
> Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
I had to chuckle here. So true!
Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.
SEARCH is from RFC 5323 (WebDAV).
I can assure you very few people care
And why would they? They're getting value out of this and it fits their head and model view
Sweating over this takes you nowhere
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
This is a bad approach. It prevents your frontend proxies from handling certain errors better. Such as: caching, rate limiting, or throttling abuse.
I think good REST API design is more a service for the engineer than for the client.
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500". They just wrote 500 in the message body; the return code remained 200.
Some developers just do not understand http.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
Nice.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
>>Clients shouldn’t assume or hardcode paths like /users/123/posts
Is it really net better to return something like the following just so you can change the URL structure?
"_links": { "posts": { "href": "/users/123/posts" }, }
I mean, so what? We've created some indirection so that the URL can change (e.g. "/u/123/posts").
If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.
It's brittle and will break some time in the future.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
I don't find this method of discovery very productive; often, regardless of the API meeting some standard, the real peculiarities are in the logic of the endpoints and not the surface.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
OMG. Pure gold!
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
It seems that nesting isn't super common in my experience. Maybe two levels if completely composite but they tend to be fairly flat.
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
So I say: GET or POST.
That's how we got POST-only GraphQL.
In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!
It makes me wish we stuck with XML-based stuff; it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
This is the kind of slippery slope where pedantic nitpickers thrive. The start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
:)
Lol. Have you read them?
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common outcome was one software stack not being able to use a service built with another stack.
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument for differentiating an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing the thesis as if it were a Monty Python skit?
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
More links here: https://news.ycombinator.com/item?id=44510745
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to server, where it belongs. I have built plenty of them, it does require certain mindset, but it does make many things easier. What problems are you talking about?
That solves no problem at all. We have Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Ultimately it's all about nitpicking.
Only because we never had the tools and resources that, say, GraphQL has.
And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
How can you idiomatically do a read-only request with complex filters? For me both PUT and POST are "writable" operations, while GET is assumed to be read-only. However, if you need to encode the state of the UI (filters or whatnot), it's preferable to use JSON rather than query params (which have length limitations).
So ... how does one do it?
POST /complex
value1=something
value2=else

which then responds with 201 Created:

Location: https://example.com/complex/53301a34-92d3-447d-ac98-964e9a8b3989
And then you can make GET requests against that resource. It adds some data-expiration problems to be solved, but it's reasonably RESTful.
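The client side of that flow, as a sketch (endpoint taken from the example above; this version sends JSON where the example used form encoding):

// POST the complex filter once; the server materializes it as a resource.
const created = await fetch("/complex", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ value1: "something", value2: "else" }),
});
const location = created.headers.get("Location"); // e.g. /complex/<uuid>
if (!location) throw new Error("server did not return a query resource");

// Subsequent reads are plain, cacheable, shareable GETs.
const results = await (await fetch(location)).json();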
Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...
Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate limiting, access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
This is an insightful observation. It happens with pretty much everything.
As has been happening recently with the term vibecoding: it started with some definition, and now it's morphed into more or less just meaning AI-assisted coding. Some people don't like it[1]
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with a standard scaffolding for the replies so we can encode the errors and forget about status codes. Then the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are there, also accept queries on both the url and body parameters.
I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better, because all the well-known options with good language support are over-engineered monstrosities like gRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
The vision of API that is self discoverable and that works with a generic client is not practical in most cases. I think that perhaps AWS dashboard with its multitude of services has some generic UI code that allows to handle these services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing is left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you've made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command-line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
It's actually a very analogous complaint to how object-oriented programming isn't how it was supposed to be and that only Smalltalk got it right. People now understand what is meant when people say OOP even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
Those things aren't always necessary. However API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work) or this can be done with HATEOAS (just check if server returned the endpoint).
HTTP 500 retriable sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporary".
Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. For clients that are familiar with your API, they are using hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
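A sketch of that reflection analogy in hypermedia terms (assuming HAL-style _links payloads): starting from one entry point, a generic client can enumerate everything transitively reachable.

// Generic discovery: crawl _links transitively from a single entry point,
// the hypermedia analogue of walking an object graph via reflection.
async function discover(entry: string, seen = new Set<string>()): Promise<Set<string>> {
  if (seen.has(entry)) return seen;
  seen.add(entry);
  const doc = await (await fetch(entry)).json();
  for (const link of Object.values(doc._links ?? {}) as { href: string }[]) {
    await discover(link.href, seen);
  }
  return seen;
}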
Sure, this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering the methods and calling them with discovered parameters. For things like a debugger, a REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware of what the available methods do and needs to be intentionally designed to provide intuitive ways of calling the methods.
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server-end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, as clients no longer depend on specific URL formats (URLs are meaningful only to the server), but depend only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API which adds all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI because these servers don't even have to be physically located behind a load balancer.
Was the client of the service that you worked on fully generic and application independent? It is one thing to be able to change URLs only on the server, without requiring a client code change, and such flexibility is indeed practical benefit that the REST architecture gives us. It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This goal is something that REST architecture tried to address, but IMO it was not realized in practice.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.
Web browsers do exactly this!
Browser provide generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar application and messaging application entry points provide application specific code for implementing calendar or messaging apps functions . I don't think this is what was proposed in the REST paper, otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
But it does though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered to the user with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
REST includes allowing code to be part of the response from a server, there are the obvious security issues, but the browsers (and the standards) have dealt with a lot of that.
https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
Adding actions to it!
POST api/registration / api/signup? All of this sucks. Posting or putting on api/user? Also doesn't feel right.
POST to api/user:signup
Boom! Full REST for entities + actions with custom requests and responses for actions!
How do I make a restful filter call? GET request params are not enough…
You POST to api/user:search, boom!
(I prefer to use the description RESTful API, instead of REST API - everyone fails to implement pure REST anyways, and it's unnecessarily limited.)
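Spelled out, the shape being proposed (endpoints and payloads hypothetical):

// Entities get plain CRUD; actions get an explicit ":action" suffix.
await fetch("/api/user", { method: "POST", body: JSON.stringify({ name: "Ada" }) });                 // create entity
await fetch("/api/user:signup", { method: "POST", body: JSON.stringify({ email: "a@b.c" }) });       // custom action
await fetch("/api/user:search", { method: "POST", body: JSON.stringify({ nameContains: "Ada" }) }); // complex read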
what's the confusion? you're creating a new user entity in the users collection.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
The point isn't that clients must have absolutely no prior knowledge of the server; it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed next enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
I am using it to enter this reply.
The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
https://htmx.org/essays/hypermedia-clients/
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
AI may change this at some point.
Let alone ux affordances, branding, etc.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
LMAO, all companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
There are plenty of valid criticisms, but that is not one; in fact, that's where it shines.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack has ("client decoupling, evolvability, dynamic interaction").
But for a client, UI or otherwise, to make use of a dynamic set of URIs/verbs would require it to either look for a specific keyword (hard coding the intents it can satisfy) or be able to semantically understand the API (which is hard, requires a human).
Oddly, all this stuff is full circle with the AI stuff. The MCP protocol is designed to give AIs text-based descriptions of APIs, so they can reason about how to use them.
If anyone wants to learn more about all of this, https://htmx.org/essays and their free https://hypermedia.systems book are wonderful.
You could also check out https://data-star.dev for an even better approach to this.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
If the client application only understands media types and isn’t supposed to know anything about the interrelationships of the data or possible actions on it, and there is no user that could select from the choices provided by the server, then it’s not clear how the client can do anything purposeful.
Surely, an automated client, or rather its developer, needs a model (a schema) of what is possible to do with the API. Roy Fielding doesn't address that aspect at all. At best, his REST API would provide a way for the client to map its model to the actual server calls to make, based on configuration information provided by the server as “hypertext”. But the point of such an indirection is unclear, because the configuration information itself would have to follow a schema known and understood by the client, so again wouldn't be RESTful in Roy Fielding's sense.
People are trying to fill in the blanks of what Roy Fielding might have meant, but in the end it just doesn’t make a lot of sense for what REST APIs are used in practice.
Fielding was absolutely not saying that his REST was the One True approach. But it DOES mean something
The issue at hand here is that he coined REST and the whole world is using that term for something completely unrelated (eg an http json api).
You could start writing in binary here if you thought that that would be a more appropriate way to communicate, but it wouldn't be English (or any humanly recognizable language) no matter how hard you try to say it is.
If you want to discuss whether hypermedia/rest/hateaos is a better approach for web apps than http json APIs, I'd encourage you to read htmx.org/essays and engage with that community who find it to be an enormous liberation.
I’m only mildly interested in discussing hypothetical hypermedia browsers, for which Roy Fielding’s conception might be well and good (but also fairly incomplete, IMO). What developers care about is how to design HTTP-based APIs for programmatic use.
You don't seem to have even the slightest idea of what you're talking about here. Again, I suggest checking out the htmx essays and their hypermedia.systems book
Let's say you've got a non-interactive program to get daily market close prices. A response returns a link labelled "foobarxyz", which is completely different to what the API returned yesterday and the day before.
How is your program supposed to magically know what to do? (without your input/interaction)
I suspect that your misunderstanding is because you're still looking at REST as a crud api, rather than what it actually is. That was the point of this article, though it was too technical.
https://htmx.org/essays is a good introduction to these things
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page, the end user of that API is the person using a browser. MCP is a more recent example, that protocol is only needed because they want agents talking to APIs they don't own and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better-documented and better-specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping, for much the same reason: I'm building both ends of the app, and it's faster to ignore separation of concerns.
One additional point I would add is that making use of the RESTful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides them. Moving that knowledge from a URL hierarchy to a document structure doesn't add a huge amount of value. (Particularly in a world where essentially all server APIs are defined in terms of URL patterns routing to handlers; that is explicit, hardcoded encouragement to think in a style opposed to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about successful programmatic APIs written in a purely HATEOAS style.)
/user/123/orders
How is this fundamentally different than requesting /user/123 and assuming there’s a link called “orders” in the response body?
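A rough sketch of the two styles side by side (TypeScript; the endpoints and the "orders" relation are hypothetical). In both cases the client carries baked-in knowledge, either of a URL template or of a key:

    // Style 1: the URL template is the contract.
    async function getOrdersByConvention(userId: string) {
      return (await fetch(`/user/${userId}/orders`)).json();
    }

    // Style 2: the link relation is the contract.
    async function getOrdersByLink(userId: string) {
      const user = await (await fetch(`/user/${userId}`)).json();
      // Knowledge is still baked in -- of the key "orders" rather than a path.
      return (await fetch(user._links.orders.href)).json();
    }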
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on-demand", as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it comes to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs. The hardest thing is that the web depends entirely on having a human in the loop, while the purpose of your standard API integration is precisely to eliminate the human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
The folly is in thinking there's some design pattern out there that's better than REST, without understanding the actual problem that elusive, perfect paradigm would have to solve: refactoring your API when its clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and hard-coded it into a curl command, instead of conscientiously reading the documentation, looking up the URL semantically at runtime, following redirects, and handling failures gracefully.
The term has caused so much bikeshedding and unnecessary confusion.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into a curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them, and move on.
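As a sketch of that versioning approach (routes and field names are hypothetical), the old surface is kept alive as a thin translation layer over the new one:

    // v2 is the canonical implementation (stand-in for a real data layer).
    async function v2ListOrders(userId: string) {
      return { orders: [{ id: 1, total: 9.99 }] };
    }

    // The /v1 surface stays frozen: a thin shim that delegates to v2 and
    // reshapes the response to whatever shape the old scripts depend on.
    async function v1ListOrders(userId: string) {
      const { orders } = await v2ListOrders(userId);
      return { results: orders }; // v1 clients expect "results"
    }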
For a lot of places, a POST with a JSON body is REST.
It is a fundamentally flawed concept that does not work in the real world. Full stop.
Some examples:
It should be far more common for HTTP clients to have well-supported and heavily used cookie-jar implementations.
We should lean on Accept headers much more, especially with multiple MIME types and/or wildcards.
HTTP clients should have caching plugins that automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
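For illustration, a minimal client-side sketch of leaning on HTTP itself (TypeScript, standard fetch; the endpoint is whatever you point it at): content negotiation via Accept, and a cache that respects the server's Cache-Control max-age rather than a home-grown TTL:

    // Honor the server's caching policy instead of inventing a client-side TTL.
    const cache = new Map<string, { expires: number; body: unknown }>();

    async function getJson(url: string): Promise<unknown> {
      const hit = cache.get(url);
      if (hit && Date.now() < hit.expires) return hit.body;

      const res = await fetch(url, {
        // Content negotiation: prefer JSON, accept anything as a fallback.
        headers: { Accept: "application/json, */*;q=0.1" },
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const body: unknown = await res.json();

      const cc = res.headers.get("Cache-Control") ?? "";
      const maxAge = /max-age=(\d+)/.exec(cc);
      if (maxAge) {
        cache.set(url, { expires: Date.now() + Number(maxAge[1]) * 1000, body });
      }
      return body;
    }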
If the backend provides a _links map that contains, for example, an "orders" entry, doesn't the front end still need to understand what that key represents? Is there another piece I'm missing that would actually decouple the front end from the backend?
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
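To illustrate the bloat argument with hypothetical payloads, compare the same resource with and without embedded hypermedia controls:

    // With embedded hypermedia controls: the same control metadata rides
    // along on every single response.
    const hateoasOrder = {
      id: 123,
      status: "shipped",
      _links: {
        self: { href: "/orders/123" },
        cancel: { href: "/orders/123/cancel", method: "POST" },
        items: { href: "/orders/123/items" },
        customer: { href: "/users/7" },
      },
    };

    // The lean variant: routes live in the documentation, payloads stay small.
    const leanOrder = { id: 123, status: "shipped" };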
I did not find it interesting. I found it excessively theoretical and prescriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup"):
1. i can submit a request via HTTP
2. data is returned as JSON in the response
3. only the most minimal amount of HTTP (and pagination) necessary is used
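A minimal sketch of that style (the endpoint and response shape are hypothetical):

    // One request, one JSON response, cursor pagination, and nothing else.
    async function listItems(
      cursor?: string
    ): Promise<{ items: unknown[]; nextCursor?: string }> {
      const url = new URL("https://api.example.com/items");
      if (cursor) url.searchParams.set("cursor", cursor);
      const res = await fetch(url, { headers: { Accept: "application/json" } });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    }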
REST includes code-on-demand as part of the style; HTTP allows for that via the "Link" header, and HTML via <script>.
> The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.