
We need a new theory of democracy – because this version has failed

https://www.salon.com/2025/08/24/we-need-a-new-theory-of-democracy-because-this-version-has-failed/
1•hkhn•59s ago•0 comments

Free AI Security Testing

1•aiagentlover•2m ago•0 comments

The Revolution Will Not Be Star Wars: What Andor Depicts

https://www.nybooks.com/online/2025/07/24/the-revolution-will-not-be-star-wars/
1•mitchbob•4m ago•1 comments

Etomidate to Be Listed in Misuse of Drugs Act from Sep 1 in Singapore

https://www.channelnewsasia.com/singapore/vaping-etomidate-kpods-class-c-drug-ong-ye-kung-5311236
1•kelt•13m ago•0 comments

The fastest sorting algorithm [video]

https://www.youtube.com/watch?v=Y95a-8oNqps
1•hundredwatt•14m ago•0 comments

I can prove I've solved this sudoku without revealing it [video]

https://www.youtube.com/watch?v=Otvcbw6k4eo
2•hundredwatt•15m ago•0 comments

Why wind farms attract so much misinformation and conspiracy theory

https://theconversation.com/why-wind-farms-attract-so-much-misinformation-and-conspiracy-theory-2...
2•voxadam•18m ago•0 comments

Ask HN: Has anyone made an MCP server for calorie tracking?

2•joshcsimmons•20m ago•0 comments

Typeclassopedia

https://wiki.haskell.org/index.php?title=Typeclassopedia
1•marvinborner•28m ago•0 comments

Digital Cargo Cult: How Zoomers Ruined Old Internet Nostalgia

https://cy-x.net/articles?id=13
2•Kokouane•31m ago•0 comments

Gemini CLI: Custom slash commands

https://cloud.google.com/blog/topics/developers-practitioners/gemini-cli-custom-slash-commands
1•tanelpoder•31m ago•0 comments

Burkina Faso Halts Gates Foundation-Backed Anti-Malaria Project

https://www.bloomberg.com/news/articles/2025-08-23/burkina-faso-halts-gates-foundation-backed-ant...
3•voxadam•31m ago•1 comments

Show HN: Fast, private command-line history search with instant documentation

https://github.com/cybrota/recaller
1•nyell•35m ago•0 comments

Japan has opened its first osmotic power plant – what is it and how does it work?

https://www.theguardian.com/world/2025/aug/25/japan-osmotic-power-plant-fukuoka
2•pseudolus•40m ago•1 comments

Regular Expression Matching Can Be Simple and Fast (2007)

https://swtch.com/~rsc/regexp/regexp1.html
1•Bogdanp•48m ago•1 comments

The Robinhood of Systemic Trading – Nvestiq

https://www.nvestiq.com/
2•Aman4312•48m ago•1 comments

Understanding alignment – from source to object file

https://maskray.me/blog/2025-08-24-understanding-alignment-from-source-to-object-file
1•MaskRay•58m ago•0 comments

Leeches and the Legitimizing of Folk-Medicine

https://www.asimov.press/p/leeches
3•alphabetatango•1h ago•0 comments

Optimal Brain Damage [pdf]

https://proceedings.neurips.cc/paper_files/paper/1989/file/6c9882bbac1c7093bd25041881277658-Paper...
2•fzliu•1h ago•0 comments

Do LLMs 'store' personal data? This is asking the wrong question (2024)

https://iapp.org/news/a/do-llms-store-personal-data-this-is-asking-the-wrong-question
2•walterbell•1h ago•0 comments

We Vibe Code at a FAANG

https://old.reddit.com/r/vibecoding/comments/1myakhd/how_we_vibe_code_at_a_faang/
2•simonpure•1h ago•2 comments

Linear Scan with Lifetime Holes

https://bernsteinbear.com/blog/linear-scan-lifetime-holes/
2•todsacerdoti•1h ago•1 comments

Intel Should Second-Source Nvidia [video]

https://www.youtube.com/watch?v=5oOk_KXbw6c
1•tambourine_man•1h ago•0 comments

Sunday at the garden party for Curtis Yarvin and the new, new right

https://www.ft.com/content/0e244103-80e8-4acc-9262-d6a45bbbaf14
4•slater•1h ago•2 comments

Where are you being recorded? Almost everywhere

https://proton.me/blog/albert-fox-cahn-surveillance-in-public-private-spaces
5•devonnull•1h ago•0 comments

CatVector Heatmap Visualizer

https://tanelpoder.com/catvector/
1•tanelpoder•1h ago•0 comments

Experiment will attempt to counter climate change by altering ocean

https://insideclimatenews.org/news/10082025/ocean-carbon-removal-climate-change/
1•PaulHoule•1h ago•0 comments

Performance Speed Limits

https://travisdowns.github.io/blog/2019/06/11/speed-limits.html
1•xtacy•1h ago•1 comments

Street-Fighting Mathematics: The Art of Educated Guessing [pdf]

https://ocw.mit.edu/courses/18-098-street-fighting-mathematics-january-iap-2008/d937ad2ca4b7839a5...
1•pykello•1h ago•0 comments

US immigrant population down by more than a million people amid Trump crackdown

https://www.theguardian.com/us-news/2025/aug/23/us-immigrant-population-declines
9•joak•1h ago•1 comments

Everything I know about good API design

https://www.seangoedecke.com/good-api-design/
204•ahamez•8h ago

Comments

cyberax•6h ago
> You should let people use your APIs with a long-lived API key.

Sigh... I wish this were not true. It's a shame that no alternatives have emerged so far.

TrueDuality•5h ago
There are other options that allow long-lived access with naturally rotating keys, without OAuth and with only a tiny increase in complexity that can be managed by a bash script. The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key.
rahkiin•5h ago
If API keys do not need to be stateless, every API key can become a refresh token with a full permission and validity lookup.
marcosdumay•1h ago
This.

The separation of a refresh cycle is an optimization done for scale. You don't need to do it if you don't need the scale. (And you need a really huge scale to hit that need.)

maxwellg•4h ago
Refresh tokens are only really required if a client is accessing an API on behalf of a user. The refresh token tracks the specific user grant, and there needs to be one refresh token per user of the client.

If a client is accessing an API on behalf of itself (which is a more natural fit for an API Key replacement) then we can use client_credentials with either client secret authentication or JWT bearer authentication instead.
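
For concreteness, here's a sketch of what a client_credentials token request (RFC 6749 §4.4) tends to look like; the endpoint and credentials below are placeholders, not any particular provider's API:

    import requests

    # Hypothetical token endpoint; the client ID/secret stand in for real credentials.
    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={"grant_type": "client_credentials"},
        auth=("my-client-id", "my-client-secret"),  # HTTP basic client authentication
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]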

TrueDuality•2h ago
That is a very specific form of refresh token, but not the only model. You can just as easily have your "API key" be that refresh token. You submit it to an authentication endpoint, get back a new refresh token and a bearer token, and invalidate the previous bearer token if it was still valid. The bearer token will naturally expire; if you're still using the API, just refresh immediately, and if it's days or weeks later you can refresh then.

There doesn't need to be any OIDC or third party involved to get all the benefits. The keys can't be used by multiple simultaneous clients, they naturally expire and rotate over time, and you can easily audit their use (primarily thanks to the previous two properties).

0x1ceb00da•48m ago
> The refresh token/bearer token combo is pretty powerful and has MUCH stronger security properties than a bare API key

I never understood why.

TrueDuality•3m ago
The quick rundown of the refresh token flow I'm referring to is (a code sketch follows below):

1. Generate your initial refresh token for the user just like you would a random API key

2. The client sends the refresh token to an authentication endpoint. This endpoint validates the token, then expires the refresh token and any prior bearer tokens issued to it. The client gets back a new refresh token and a bearer token with an expiration window (let's call it five minutes).

3. The client uses the bearer token for all requests to your API until it expires

4. If the client wants to continue using the API, go back to step 2.

The benefits of that minimal version:

Client restriction and user behavior steering. With bearer tokens expiring quickly and refresh tokens being one-time use, it is infeasible to share a single credential between multiple clients. With easy provisioning, this will get users to generate one credential per client.

Breach containment and blast radius reduction. If your bearer tokens leak (logs are a surprisingly common source), they automatically expire even when left in backups or deep in the objects of your git repo. If a bearer token is compromised, it's only valid for your expiration window. If a refresh token is compromised and used, the legitimate client will be knocked offline, increasing the likelihood of detection.

Audit and monitoring opportunities. Every refresh creates a logging checkpoint where you can track usage patterns, detect anomalies, and enforce policy changes. You get visibility into which clients are actively using the API versus just sitting on old static keys. This also gives you natural rate limiting and abuse detection points.

Most security frameworks (SOC 2, ISO 27001, etc.) prefer time-limited credentials as a basic security control. This might not be relevant in the context of this post, but it's an easy win.

Add an expiration time to refresh tokens to naturally clean up access from broken or no-longer-used clients. Example: a daily backup script whose refresh token has a 90-day expiration window. The backups would have to not run for 90 days before the token became an issue, and if access is still needed the effort is low: just provision a new API key. After 90 days of failure, you either already needed to perform maintenance on your backup system, or you moved to something else without revoking the access keys.
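
A minimal in-memory sketch of those four steps in Python; all the names are made up, and a real service would back the two stores with a database rather than dicts:

    import secrets
    import time

    REFRESH_TOKENS = {}  # refresh_token -> {"user": ..., "bearer": ...}
    BEARER_TOKENS = {}   # bearer_token -> {"user": ..., "expires_at": ...}
    BEARER_TTL = 300     # the five-minute expiration window from step 2

    def issue_initial_refresh_token(user_id):
        # Step 1: provision the refresh token like a random API key.
        token = secrets.token_urlsafe(32)
        REFRESH_TOKENS[token] = {"user": user_id, "bearer": None}
        return token

    def refresh(old_token):
        # Step 2: refresh tokens are one-time use; rotate them and
        # invalidate the prior bearer token if it is still live.
        record = REFRESH_TOKENS.pop(old_token, None)
        if record is None:
            raise PermissionError("unknown or already-used refresh token")
        BEARER_TOKENS.pop(record["bearer"], None)
        new_refresh = secrets.token_urlsafe(32)
        new_bearer = secrets.token_urlsafe(32)
        REFRESH_TOKENS[new_refresh] = {"user": record["user"], "bearer": new_bearer}
        BEARER_TOKENS[new_bearer] = {"user": record["user"],
                                     "expires_at": time.time() + BEARER_TTL}
        return new_refresh, new_bearer

    def authenticate(bearer_token):
        # Steps 3-4: accept the bearer token until it expires.
        record = BEARER_TOKENS.get(bearer_token)
        if record is None or record["expires_at"] < time.time():
            raise PermissionError("expired or unknown bearer token")
        return record["user"]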

pixelatedindex•5h ago
To add on, are they talking about access tokens or refresh tokens? It can’t be just one token, because then when it expires you have to update it manually from a portal or go through the same auth process, neither of which is good.

And what time frame is “long-lived”? IME access tokens almost always have a lifetime of one week and refresh tokens anywhere from 6 months to a year.

rahkiin•5h ago
I think they are talking about refresh tokens or API keys like PATs: some value you pass in a header and it just works. No token flow, and the key is valid for months and can be revoked.
cyberax•4h ago
If you're using APIs from third parties, the most typical authentication method is a static key that you stick in the "Authorization" HTTP header.

OAuth flows are not at all common for server-to-server communications.

In my perfect world, I would replace API keys with certificates and use mutual TLS for authentication.
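
On the consumer side, presenting a client certificate is a one-liner in most HTTP stacks. A sketch with Python's requests, where the endpoint and file paths are placeholders:

    import requests

    resp = requests.get(
        "https://api.example.com/v1/widgets",  # hypothetical mTLS-only endpoint
        cert=("client.crt", "client.key"),     # presented during the TLS handshake
        verify="ca-bundle.pem",                # also pin the server's CA
    )
    resp.raise_for_status()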

nostrebored•4h ago
In your perfect world, are you primarily the producer or consumer of the API?

I hate mTLS APIs because they often mean I need to change how my services are bundled and deployed. But to your point, if everything were mTLS I wouldn’t care.

cyberax•44m ago
> In your perfect world, are you primarily the producer or consumer of the API?

Both, really. mTLS deployment is the sticking point, but it's slowly getting better. AWS load balancers now support it: they terminate the TLS connection, validate the certificate, and stick it into an HTTP header. Google Cloud Platform and Cloudflare also support it.

pixelatedindex•3h ago
IME, OAuth flows are pretty common in S2S communication. Usually these tend to be client-credentials-based flows where you request a token exactly like you said (static key in Authorization), rather than authorization-code grant flows, which require a login action.
cyberax•2h ago
Yeah, but then there's not that much difference, is there? You can technically move the generation of the access tokens to a separate secure environment, but this drastically increases the complexity and introduces a lot of interesting failure scenarios.
pixelatedindex•1h ago
I mean… is adding an OAuth layer in 2025 adding that much complexity? If you're scripting, there's usually some package native to the language; if you're using Postman, you'll need to generate your authn URL (or use username/password for client ID/secret).

If you have sensitive resources they’ll be blocked behind some authz anyway. An exception I’ve seen is access to a sandbox env, those are easily generated at the press of a button.

cyberax•44m ago
No, I'm just saying that an OAuth layer isn't really adding much benefit when either you use an API key to obtain the refresh token, or the refresh token itself becomes a long-term secret that's not much better than an API key.

Some way to break out of the "shared secret" model is needed. Mutual TLS is one way that is at least getting some traction.

smj-edison•4h ago
> Every integration with your API begins life as a simple script, and using an API key is the easiest way to get a simple script working. You want to make it as easy as possible for engineers to get started.

> ...You’re building it for a very wide cross-section of people, many of whom are not comfortable writing or reading code. If your API requires users to do anything difficult - like performing an OAuth handshake - many of those users will struggle.

Sounds like they're talking about onboarding specifically. I actually really like this idea, because I've certainly had my fair share of difficulty just trying to get the dang thing to work.

Security-wise it's perhaps not the best, but mitigations like staging-only keys or rate limiting seem sufficient to me.

pixelatedindex•3h ago
True, I have enjoyed using integrations where you can generate a token from the portal for your app to make requests. One thing that's difficult in this scenario is authorization: what resources the token has access to can be kind of murky.
xtacy•5h ago
Are there good public examples of well designed APIs that have stood the test of time?
binaryturtle•5h ago
I always thought the Amiga APIs with the tag lists were cool. You could easily extend the API/ABI without breaking anything at the binary level (assuming you made the calls accept tag lists as parameters to begin with, of course).
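
The shape of the idea, sketched in Python rather than the real Amiga API (the tag IDs and defaults are made up): callers pass (tag, value) pairs, so new tags can extend the call later without breaking old callers:

    TAG_WIDTH, TAG_HEIGHT, TAG_TITLE = 1, 2, 3  # illustrative tag IDs

    def open_window(tags):
        # Omitted tags fall back to defaults; new tags can be added later
        # without changing the function's signature.
        settings = {TAG_WIDTH: 640, TAG_HEIGHT: 480, TAG_TITLE: "untitled"}
        settings.update(dict(tags))
        return settings

    open_window([(TAG_WIDTH, 800)])                      # an old caller
    open_window([(TAG_WIDTH, 800), (TAG_TITLE, "hi")])   # a newer caller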
pixl97•5h ago
While the author doesn't seem to like version based APIs very much, I always recommend baking them in from the very start of your application.

You cannot predict the future and chances are there will be some breaking change forced upon you by someone or something out of your control.

claw-el•4h ago
If there is a breaking change forced upon us in the future, can't we just use a different name for the function?
soulofmischief•4h ago
A versioned API allows you to ensure a given version has one way to do things, not five, four of which are no longer supported but can't be removed. You can drop old weight without messing up legacy systems.
CharlesW•4h ago
You could, but it radically increases complexity in comparison to a "version" knob in a URI, media type, or header.
Bjartr•4h ago
See the many "Ex" variations of functions in the Win32 API for examples of exactly that!
jahewson•3h ago
/api/postsFinalFinalV2Copy1-2025(1)ExtraFixed
pixl97•2h ago
Discoverability.

/v1/downloadFile

/v2/downloadFile

is much easier to check for a v3 than

/api/downloadFile

/api/downloadFileOver2gb

/api/downloadSignedFile

Etc. Etc.

echelon•1h ago
I have only twice seen a service ever make a /v2.

It's typically to declare bankruptcy on the entirety of /v1 and force eventual migration of everyone onto /v2 (if that's even possible).

pixl97•1h ago
I work for a company that has an older API, so it's defined in the header, but we're up to v6 at this point. Very useful for the changes that have happened over the years.
bigger_cheese•58m ago
A lot of the Unix/Linux syscall API has a version 2+.

For example dup(), dup2(), dup3() and pipe(), pipe2() etc

LWN has an article: https://lwn.net/Articles/585415/

It talks about avoiding this by designing future APIs with a flags bitmask, allowing the API to be extended in the future.
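
pipe2() is a handy illustration: the extra flags argument is where later extensions land, instead of a pipe3(). A tiny Unix-only example via Python's os module (os.pipe2 needs Python 3.3+):

    import os

    # pipe2() = pipe() plus a flags bitmask; new behaviors arrive as new
    # bits rather than as new syscalls.
    r, w = os.pipe2(os.O_NONBLOCK | os.O_CLOEXEC)
    os.close(r)
    os.close(w)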

claw-el•1h ago
Isn’t having the name (e.g. Over2gb) easier to understand than just saying v2? This is for the situation where there are breaking changes forced upon v1/downloadFile.
ks2048•2h ago
If you only break one or two functions, it seems ok. But, some change in a core data type could break everything, so adding a prefix "/v2/" would probably be cleaner.
andix•4h ago
I don't see any harm in adding versioning later. Let's say your api is /api/posts, then the next version is simply /api/v2/posts.
choult•3h ago
It's a problem downstream. Integrators weren't forced to include a version number for v1, so the rework overhead to move to v2 will be higher than if versioning was present in your scheme to begin with.
pixl97•2h ago
This. It's way easier to grep a file for /v1/ and see all the API endpoints than to ensure you haven't missed something.
paulhodge•2h ago
I have to agree with the author about not adding "v1" since it's rarely useful.

What actually happens as the API grows:

First, the team extends the existing endpoints as much as possible, adding new fields/options without breaking compatibility.

Then, once they need to have backwards-incompatible operations, it's more likely that they will also want to revisit the endpoint naming too, so they'll just create new endpoints with new names. (instead of naming anything "v2").

Then, if the entire API needs to be reworked, it's more likely that the team will just decide to deprecate the entire service/API, and then launch a new and better service with a different name to replace it.

So in the end, it's really rare that any endpoints ever have "/v2" in the name. I've been in the industry 25 years and only once have I seen a service that had a "/v2" to go with its "/v1".

ks2048•2h ago
> So in the end, it's really rare that any endpoints ever have "/v2" in the name.

This is an interesting empirical question - take the 100 most used HTTP APIs and see what they do for backward-incompatible changes and see what versions are available. Maybe an LLM could figure this out.

I've been just using the Dropbox API and it is, sure enough, on "v2". (although they save you a character in the URL by prefixing "/2/").

Interesting to see some of the choices in v1->v2,

https://www.dropbox.com/developers/reference/migration-guide

They use a spec language they developed called stone (https://github.com/dropbox/stone).

gitremote•1h ago
I don't think the author meant they don't include /v1 in the endpoint in the beginning. The point is that you should do everything to avoid having a /v2, because you would have to maintain two versions for every bug fix, which means making the same code change in two places or having extra conditional logic multiplied against any existing or new conditional logic. The code bases that support multiple versions look like spaghetti code, and it usually means that /v1 was not designed with future compatibility in mind.
pbreit•1h ago
Disagree. Baking versioning in from the start means versions will be much more likely to be used, which is a bad thing.
claw-el•5h ago
> However, a technically-poor product can make it nearly impossible to build an elegant API. That’s because API design usually tracks the “basic resources” of a product (for instance, Jira’s resources would be issues, projects, users and so on). When those resources are set up awkwardly, that makes the API awkward as well.

One issue I have is with weird resources that feel like unnecessary abstraction. They make it hard for a human to read and understand intuitively, especially someone new to the set of APIs. They also make it much harder to troubleshoot during an incident.

frabonacci•5h ago
The reminder to "never break userspace" is gold and often overlooked... ahem, Spotify, Reddit and Twitter come to mind.
runroader•4h ago
I think the only thing here that I don't agree with is that internal users are just users. Yes, they may be more technical - or likely other programmers, but they're busy too. Often they're building their own thing and don't have the time or ability to deal with your API churning.

If at all possible, take your time and dog-food your API before opening it up to others. Once it's opened, you're stuck and need to respect the "never break userspace" contract.

devmor•4h ago
I think versioning still helps solve this problem.

There’s a lot of things you can do with internal users to prevent causing a burden though - often the most helpful one is just collaborating on the spec and making the working copy available to stakeholders. Even if it’s a living document, letting them have a frame of reference can be very helpful (as long as your office politics prevent them from causing issues for you over parts in progress they do not like.)

Supermancho•1h ago
With internal users, you likely have instrumentation that allows you to contact and have those users migrate. You can actually sunset api versions, making API versioning an attractive solution. I've both participated in API versioning and observed it employed in organizations that don't use it by default as a matter of utility.
cyberax•4h ago
I'm of a somewhat different opinion on API versioning, but I can see the argument. I definitely disagree about idempotency: it's NOT optional. You don't have to require idempotency tokens for each request, but there should be an option to specify them. Stripe's API clients are a good example here: they automatically generate idempotency tokens for you.

Things that are missing from this list but were important for me at some point:

1. Deadlines. Your API should allow callers to specify a deadline after which the request no longer matters. The API implementation can use this deadline to cancel any pending operations (see the sketch after this list).

2. Closely related: backpressure and dependent services. Your API should be designed to not overload its own dependent services with useless retries. Some retries might be useful, but in general the API should quickly propagate the error status back to the callers.

3. Static stability. The system behind the API should be designed to fail static, so that it retains some functionality even if the mutating operations fail.
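
A sketch of the deadline idea under assumed names (the header is hypothetical, not a standard): check the budget before starting, and hand the shrinking remainder to every dependent call so retries can't outlive the caller's interest:

    import time

    DEADLINE_HEADER = "x-request-deadline"  # hypothetical header, a unix timestamp

    def handle(headers, do_work):
        # Reject work whose caller has already given up, and pass the
        # remaining time budget down to dependent calls.
        deadline = float(headers.get(DEADLINE_HEADER, time.time() + 30))
        remaining = deadline - time.time()
        if remaining <= 0:
            return {"status": 504, "error": "deadline exceeded"}
        return do_work(timeout=remaining)

    # A dependent call honoring the shrinking budget:
    print(handle({"x-request-deadline": str(time.time() + 5)},
                 lambda timeout: {"status": 200, "budget": timeout}))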

dwattttt•4h ago
The reminder to "never break userspace" is good, but people never bring up the other half of that statement: "we can and will break kernel APIs without warning".

It illustrates that the reminder isn't "never change an API in a way that breaks someone", it's the more nuanced "declare what's stable, and never break those".

chubot•4h ago
Yeah, famously there is no stable public driver API for Linux, which I believe was the motivation for Google's Fuchsia OS

So Linux is opinionated in both directions - towards user space and toward hardware - but in the opposite way

delta_p_delta_x•3h ago
Even if the kernel doesn't break userspace, GNU libc does, all the time, so the net effect is that Linux userspace is broken regardless of the kernel maintainers' efforts. Put simply, programs and libraries compiled on/for newer libc are ABI-incompatible or straight-up do not run on older libc, so everything needs to be upgraded in lockstep.

It is a bit ironic and a little funny that Windows solved this problem a couple decades ago with redistributables.

Retr0id•2h ago
otoh statically-linked executables are incredibly stable - it's nice to have that option.
delta_p_delta_x•2h ago
From what I understand, statically linking in GNU's libc.a without releasing source code is a violation of LGPL. Which would break maybe 95% of companies out there running proprietary software on Linux.

musl libc has a more permissive licence, but I hear it performs worse than GNU libc. One can hope for LLVM libc[1] so the entire toolchain would become Clang/LLVM, from the compiler driver to the C/C++ standard libraries. And then it'd be nice to whole-program-optimise from user code all the way to the libc implementation, rip through dead code, and collapse binary sizes.

[1]: https://libc.llvm.org/

teraflop•2h ago
AFAIK, it's technically legal under the LGPL to statically link glibc as long as you also include a copy of the application's object code, along with instructions for how users can re-link against a different glibc if they wish. You don't need to include the source for those .o files.

But I don't think I've ever seen anybody actually do this.

loeg•2h ago
You can (equivalently) distribute some specific libc.so with your application. I don't think anyone other than GNU maximalists believes this infects your application with the (L)GPL.
rcxdude•1h ago
Musl is probably the better choice for static linking anyway; GNU libc relies on dynamic linking for a few important features.
resonious•1h ago
The Windows redistributables are so annoying as a user. I remember countless times applications used to ask me to visit the official Microsoft page for downloading them, and it was quite hard to find the right buttons to press to get the thing. Felt like offloading the burden to the users.
rcxdude•1h ago
GNU libc has pretty good backwards compatibility, though, so if you want to run on a broad range of versions, link against as old a version of libc as is practical (which does take some effort, annoyingly). It tends to be things like GUI libraries that are a bigger PITA, because they do break compatibility, the old versions stop being shipped in distros, and shipping them all with your app can still run into protocol compatibility issues.
zahlman•4h ago
Anyone else old enough to remember when "API" also meant something that had nothing to do with sending and receiving JSON over HTTP? In some cases, you could even make something that your users would install locally, and use without needing an Internet connection.
drdaeman•4h ago
I believe it’s pretty common to e.g. call libraries’ and frameworks’ user- (developer-) facing interface an API, as in “Python’s logging library has a weird-looking API”, so I don’t think API has eroded to mean only networked ones.
mettamage•3h ago
I never understood why libraries also had the word API. From my understanding a library is a set of functions specific to a certain domain, such as a statistics library, for example. Then why would you need the word API? You already know it’s a library.

For endpoints it’s a bit different: you don’t know what they are, or whether they’re user facing or programmer facing.

I wonder if someone has a good take on this. I’m curious to learn.

shortrounddev2•3h ago
To me the API is the function prototypes. The DLL is the library
dfee•3h ago
To use code, you need an interface. One for programming. Specifically to build an application.

Why does the type of I/O boundary matter?

chubot•4h ago
Well it stands for “application programming interface”, so I think it is valid to apply it to in-process interfaces as well as between-process interfaces

Some applications live in a single process, while others span processes and machines. There are clear differences, but also enough in common to speak of “APIs” for both

rogerthis•4h ago
Things would come in SDKs, and docs were in MS Help .chm files.
j45•3h ago
APIs are about providing accessibility: access to the interactions and data inside an application from the outside.

The format and protocol of communication were never fixed.

In addition to the REST APIs of today, SOAP, WSDL, and WebSockets can all deliver some form of API.

bigiain•1h ago
CORBA

Shudder...

gct•3h ago
Everyone's decided that writing regular software to run locally on a computer is the weird case and so it has to be called "local first".
ivanjermakov•2h ago
> sending and receiving JSON over HTTP

In my circles this is usually (perhaps incorrectly) called a REST API.

mlhpdx•3h ago
Having built a bunch of low level network APIs I think the author hits on some good, common themes.

Versioning, etc. matter (or don’t) for binary UDP APIs (aka protocols) just as much as for any web API.

wener•3h ago
I still think /v1 to /v2 is a break; I don't trust that you'll keep v1 forever, otherwise you'd never need this excuse.

I'd rather introduce more fields or flags to control the behavior as params, not ask users to change the whole base URL for a single new API.

calrain•3h ago
I like this pattern.

When an API commits to /v1 it doesn't mean it will deprecate /v1 when /v2 or /v3 come out, it just means we're committing to supporting older URI strategies and responses.

/v2 and /v3 give you that flexibility to improve without affecting existing customers.

swagasaurus-rex•2h ago
Cursor-based pagination was mentioned. It has another useful feature: if items have been added between when a user loads the page and when they hit the next button, index-based pagination will give you some already-viewed items from the previous page.

Cursor-based pagination (using the ID of the last object on the previous page) will give you a new list of items that haven't been viewed. This is helpful for infinite scrolling.

The downside to cursor-based pagination is that it's hard to build a jump-to-page-N button.
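
A minimal keyset version of that, assuming a table with a monotonically increasing id (SQLite here only to keep the sketch self-contained):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT);
        INSERT INTO items (body) VALUES ('a'), ('b'), ('c'), ('d');
    """)

    def next_page(last_seen_id, page_size=2):
        # Resume strictly after the last row the client saw, so rows added
        # in the meantime never cause already-viewed rows to repeat.
        return conn.execute(
            "SELECT id, body FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_seen_id, page_size),
        ).fetchall()

    page1 = next_page(0)             # [(1, 'a'), (2, 'b')]
    page2 = next_page(page1[-1][0])  # [(3, 'c'), (4, 'd')]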

echelon•1h ago
You should make your cursors opaque so as to never reveal the size of your database.

You can do some other cool stuff if they're opaque - encode additional state within the cursor itself: search parameters, warm cache / routing topology, etc.
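
One hedged way to build such a cursor: serialize the state, sign it so it can't be forged, and base64 the result. Note base64 alone is still readable, so encrypt as well if the contents themselves must stay hidden; the signing key below is a placeholder:

    import base64, hashlib, hmac, json

    SECRET = b"rotate-me"  # placeholder signing key

    def encode_cursor(state):
        payload = json.dumps(state).encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return base64.urlsafe_b64encode(sig + payload).decode()

    def decode_cursor(cursor):
        raw = base64.urlsafe_b64decode(cursor.encode())
        sig, payload = raw[:32], raw[32:]  # sha256 digest is 32 bytes
        if not hmac.compare_digest(
                sig, hmac.new(SECRET, payload, hashlib.sha256).digest()):
            raise ValueError("tampered cursor")
        return json.loads(payload)

    token = encode_cursor({"after_id": 42, "q": "search terms"})
    assert decode_cursor(token) == {"after_id": 42, "q": "search terms"}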

rockwotj•59m ago
Came here to say these same things exactly. Best write up I know on this subject: https://use-the-index-luke.com/sql/partial-results/fetch-nex...
0xbadcafebee•2h ago
Most people who see "API" today only think "it's a web app I send a request to, and I pass some arguments and set some headers, then check some settings from the returned headers, then parse some returned data."

But "API" means "Application Programming Interface". It was originally for application programs, which were... programs with user interfaces! It comes from the 1940's originally, and wasn't referred to for much else until 1990. APIs have existed for over 80 years. Books and papers have been published on the subject that are older than many of the people reading this text right now.

What might those older APIs have been like? What were they working with? What was their purpose? How did those programmers solve their problems? How might that be relevant to you?

achernik•2h ago
> How should you store the key? I’ve seen people store it in some durable, resource-specific way (e.g. as a column on the comments table), but I don’t think that’s strictly necessary. The easiest way is to put them in Redis or some similar key/value store (with the idempotency key as the key).

I'm not sure how storing a key in Redis would achieve idempotency in all failure cases. What's the algorithm? Imagine a server handling the request does a conditional write (like SET key 1 NX) and sees that the key is already stored. What then, skip creating the comment? You can't assume the comment was created before, since the process could have been killed between storing the key in Redis and actually creating the comment in the database.

An attempt to store the idempotency key needs to be atomically committed (and rolled back if unsuccessful) together with the operation payload, i.e. it always has to be a resource-specific ID. For all intents and purposes, the idempotency key is the ID of the operation (request) being executed, be it "comment creation" or "comment update".
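
A sketch of that resource-specific approach, with SQLite standing in for the real database and the idempotency key doubling as the row's primary key (table and function names are illustrative):

    import sqlite3, uuid

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE comments (id TEXT PRIMARY KEY, body TEXT NOT NULL)")

    def create_comment(idempotency_key, body):
        try:
            with conn:  # commits on success, rolls back on error: atomic
                conn.execute("INSERT INTO comments (id, body) VALUES (?, ?)",
                             (idempotency_key, body))
        except sqlite3.IntegrityError:
            pass  # a retry of a request we already processed: do nothing

    key = str(uuid.uuid4())
    create_comment(key, "first try")
    create_comment(key, "first try")  # network retry; no duplicate row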

rockwotj•53m ago
Yes please don’t add another component to introduce idempotency, it will likely have weird abstraction leaking behavior or just be plain broken if you don’t understand delivery guarantees. Much better to support some kind of label or metadata with writes so a user can track progress on their end and store it alongside their existing data.
canpan•1h ago
> many of your users will not be professional engineers. They may be salespeople, product managers, students, hobbyists, and so on.

This is not just true for authentication. If you work in a business setting, your APIs will be used by the most random set of users. They may be able to google how to call your API in Python, but not be able to do things like convert UTC to their local time zone.
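
(For reference, the conversion those users struggle with is a couple of lines in Python:)

    from datetime import datetime, timezone

    utc = datetime(2025, 8, 25, 12, 0, tzinfo=timezone.utc)
    print(utc.astimezone().isoformat())  # rendered in the machine's local zone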

barapa•59m ago
They suggest storing the idempotency key in Redis. Seems like, if possible, you should store it in whatever system you are writing to, in a single transaction with the write mutations.