I fairly strongly disagree with this. Database identifiers have to serve a lot of purposes, and a natural key almost certainly isn’t ideal for all of them. Off the top of my head, IDs can be used for:
- Joins, lookups, indexes. Here the data type can matter for performance and resource use.
- Idempotency. Allowing a client to generate IDs can be a big help here (i.e., UUIDs); see the sketch after this list.
- Sharing. You may want to share a URL to something that requires the key, but not expose domain data (a URL to a user’s profile image shouldn’t expose their national ID).
There is no single solution that handles all of these well, but natural keys are one of the weaker options.
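On the idempotency point, a minimal sketch with Python’s stdlib sqlite3; the payment table and INSERT OR IGNORE are illustrative stand-ins for whatever conflict handling your database offers:

    import sqlite3
    import uuid

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE payment (id TEXT PRIMARY KEY, amount_cents INTEGER NOT NULL)")

    def submit_payment(payment_id: str, amount_cents: int) -> None:
        # The client generates the ID before the first attempt, so a network
        # retry with the same ID is a no-op instead of a duplicate charge.
        db.execute(
            "INSERT OR IGNORE INTO payment (id, amount_cents) VALUES (?, ?)",
            (payment_id, amount_cents),
        )
        db.commit()

    pid = str(uuid.uuid4())    # client-generated, reused across retries
    submit_payment(pid, 1000)
    submit_payment(pid, 1000)  # retried request
    assert db.execute("SELECT COUNT(*) FROM payment").fetchone()[0] == 1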
Also, we all know that stakeholders will absolutely swear that there will never be two people with the same national ID. Oh, except when someone dies, then we may reuse their ID. Oh, and sometimes this remote territory has duplicate IDs with the mainland. Oh, and for people born during that revolution 50 years ago, we just kinda had to make stuff up for them.
So ideally I’d put a unique index on the national ID column. But realistically there would be no unique constraint, just form validation plus a warning whenever someone opened a screen for a user with a non-unique ID.
Then maybe a BIGINT for the database ID, and a UUIDv4/v7 for exposing to the world.
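A sketch of that layout with stdlib sqlite3, whose INTEGER PRIMARY KEY is a 64-bit rowid standing in for BIGINT; all names here are illustrative:

    import sqlite3
    import uuid

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE citizen (
            id          INTEGER PRIMARY KEY,   -- internal 64-bit key for joins/FKs
            public_id   TEXT NOT NULL UNIQUE,  -- UUID exposed to the outside world
            national_id TEXT,                  -- domain attribute, deliberately not unique
            full_name   TEXT NOT NULL
        );
        -- Non-unique index: fast lookups, while form validation and UI warnings
        -- deal with real-world duplicates instead of a hard constraint.
        CREATE INDEX citizen_national_id ON citizen (national_id);
    """)
    db.execute(
        "INSERT INTO citizen (public_id, national_id, full_name) VALUES (?, ?, ?)",
        (str(uuid.uuid4()), "19850615-1234", "Jane Doe"),
    )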
EDIT: Actually, the article is proposing a new principle. And so perhaps this could indeed be a viable one. And my comment above would describe situations where it is valid to break the principle. But I also suspect that this is so rarely a good idea that it shouldn’t be the default choice.
A single AES encryption block is the same size as a UUID and cheap to compute.
I didn’t realise this! The UUID spec mandates values for specific bits, so I assume this would not produce strictly valid UUIDs?
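Right: raw AES output won’t in general carry the version and variant bits that RFC 4122/9562 mandate, so the result is an opaque 128-bit token rather than a strictly valid UUID (UUIDv8 exists for custom bit layouts). A minimal sketch of the encrypt-the-internal-ID trick, assuming the third-party pyca/cryptography package; the key handling is illustrative only:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(16)  # illustrative; in practice a fixed server-side secret

    def encode_id(n: int) -> str:
        # One AES block is 16 bytes -- exactly the size of a UUID. Encrypting the
        # internal integer ID yields an opaque, reversible 128-bit token. ECB is
        # acceptable here only because each plaintext (the ID) is unique.
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        return (enc.update(n.to_bytes(16, "big")) + enc.finalize()).hex()

    def decode_id(token: str) -> int:
        dec = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
        return int.from_bytes(dec.update(bytes.fromhex(token)) + dec.finalize(), "big")

    token = encode_id(42)          # safe to put in a URL
    assert decode_id(token) == 42  # maps back to the same database row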
I suddenly got flashbacks.
There are duplicate ISBNs across different books, despite the system being carefully designed to avoid this.
There are ISBNs with invalid checksums that are nonetheless real, issued ISBNs, with the invalid number in the barcode and everything. Either the checksum was calculated incorrectly, or it was simply a misprint.
The same book can have hundreds of ISBNs.
There is no sane way to determine whether two such ISBNs refer to truly the same book (page numbers and all), or to a reprint with renumbered pages, or even to subtly different content with corrected typos, missing or added illustrations, etc...
Our federal government publishes a master database of "job id" numbers for each profession one could have. This is critical for legislation related to skilled migrants, collective workplace agreements, etc...
The states decided to add one digit to these numbers to further subdivide them. They did it differently, of course, and some didn't subdivide at all. Some of them have typos with "O" in place of "0" in a few places. Some states dropped the leading zeroes, and then added a suffix digit, which is fun.
On and on and on...
The real world is messy.
Any ID you don't generate yourself is fraught with risk. Even then there are issues, such as what happens if the database is rolled back to a backup and IDs are then generated again for the missed data!
On the other hand I've used surrogate keys for 20 years, and never encountered an issue that wasn't simple to resolve.
I get there are different camps here, and yes your context matters, but "I'm not really interested in why natural keys worked for you." They don't work for me. So arguments for natural keys are kinda meh.
I guess they work for some folk (shrug).
Trusting the client to generate a high-quality ID has a long history of being a bad idea in practice. It requires the client to not be misconfigured, to not be hacked, to not be malicious, to not have hardware bugs, etc. A single server can generate hundreds of millions of IDs per second and provides a single point of monitoring and control.
Modern distributed systems almost always use a compound, binary-packed GUID: epoch time, IP, MAC, PID, memory address offset, account ID, and/or a signed object hash. Thus the node knows 100% for sure that the key is always globally unique, and the key still preserves its origin.
This makes for inefficient SQL design, given that it de-normalizes most structures, but memory and storage are cheap compared to the features gained by abandoning incremental indexed keys. Also, combining expected-state pre-conditions in the query with a key packed with breadcrumbs solves problems you don’t know you have yet (including non-blocking options).
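A minimal sketch of one such packed key in Python; the 48/32/16/32-bit layout and the particular breadcrumbs are illustrative, not any specific standard:

    import os
    import time

    # 128-bit key: 48-bit ms timestamp | 32-bit node | 16-bit PID | 32-bit sequence
    def make_id(node: int, seq: int) -> int:
        ts = time.time_ns() // 1_000_000
        return (
            ((ts & ((1 << 48) - 1)) << 80)
            | ((node & 0xFFFFFFFF) << 48)   # e.g. IPv4 address or MAC fragment
            | ((os.getpid() & 0xFFFF) << 32)
            | (seq & 0xFFFFFFFF)
        )

    def unpack_id(key: int) -> tuple[int, int, int, int]:
        # The origin "breadcrumbs" stay recoverable from the key itself.
        return (key >> 80, (key >> 48) & 0xFFFFFFFF, (key >> 32) & 0xFFFF, key & 0xFFFFFFFF)

    key = make_id(node=0x0A000001, seq=7)
    print(f"{key:032x}", unpack_id(key))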
In general, many projects end up just implementing an object store in SQL eventually. Yes it is terrible design, but also a convenient bodge =3
Yes, you can create your tables with ON UPDATE CASCADE foreign keys, but are those really the only places the ID is used?
Sometimes your own data does present a natural key, though, so if it’s fully within your control it can be a decent choice.
Why does it matter? I have seen many developers rely solely on ORM code to manage entities in the database, instead of prepared statements and plain SQL queries. This obviously opens the door to poor optimisation, since those entity-management libraries don’t support certain SQL capabilities.
That said, I’m not a fan of natural keys as primary keys, especially composite keys. This just takes everybody back to the ’80s/early ’90s.
It only makes sense when there’s a huge storage benefit.
    create table citizen (
        national_id national_id primary key,
        full_name   text
    );
Is national_id really a natural key, or is it someone else's synthetic key? If the latter, should the owner of that database have opted for a natural key rather than a synthetic one? More arguments for synthetic over natural keys: https://blog.ploeh.dk/2024/06/03/youll-regret-using-natural-...
However, that still runs into the non-durability of the key in cultures that delay naming for a few years, to name one problem.
So yeah, use a big enough integer as your key, and have appropriate checks on groups of attributes like this.
However, if you are only interested in citizens, then a "natural" key is the citizen id issued by the government department responsible for doing that. (Citizen is a role played by a natural person so probably doesn't have too many attributes.) I still wouldn't use that as a primary key on the citizen table, though.
She doesn’t have a birth certificate.
She was born in a country that was enduring several years of brutal war.
I know another person whose national ID was changed. Systems that use national ID as primary key failed to accept this change.
Natural keys are important. But the real world is messy, and so are the databases that represent it. People’s identities get stolen. Data-entry mistakes happen, integrations between systems fail, and the data ends up in a contradictory state.
In my experience, arguments about natural keys are unproductive, so I usually try to steer the conversation to the scenarios I mentioned above. Those who listen end up with a combination of synthetic and natural keys: the first to represent system state, the second to represent business processes.
BTW: email+password should be separated too. An early draft of GDPR specifically mentioned that, though the final version got less into the weeds.
I’m sure if you vibe code any of this, it will all be plaintext, lol.
If I remember my database lessons correctly, there is no single highest normal form. The progression is strict from 1NF up to BCNF, but beyond that it is a matter of choosing between trade-offs.
Even below that it is always a trade-off with performance, which is why we aim for 3NF most of the time, and sometimes BCNF. (The classic borderline case is R(student, course, teacher) with {student, course} → teacher and teacher → course: it is in 3NF but not BCNF, and the BCNF decomposition loses the ability to enforce {student, course} → teacher without a join.)
The Devil captured a Physicist, an Engineer, and a Mathematician. He gave each of them a big can of spam and locked them in empty cells, saying: "You will be here for two weeks. Open the can and survive, or starve." After two weeks the Devil opens the Physicist's cell. It is covered floor to ceiling in complex scribbles; one patch of wall is clean of etchings but shows a small dent. The can of spam is opened and eaten clean, and the Physicist sits in a corner, visibly annoyed. Next is the Engineer's. The cell walls are covered in dents and pieces of spam. The Engineer is bruised almost as badly as the can, but it is ultimately open and the Engineer is alive.
Finally the Devil opens the Mathematician's cell and finds him dead. Only "given the cylinder..." is etched on the wall.
---
The punchline isn’t about engineering, but it has always helped me draw the line between software engineering and computer science.
No, it shouldn’t.
Natural keys sometimes need to change for unforeseen reasons, such as identity theft, and this is really tricky to manage if those keys are cascaded into many tables as foreign keys.
Natural keys are often not unique either. Using the national ID example, there are millions of duplicate SSNs issued within the USA. https://www.computerworld.com/article/1687803/not-so-unique....
So, don't use natural keys as primary keys. Store them as ordinary columns (alternate keys), ideally with a unique constraint.
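With the surrogate as the primary key, a reissued or stolen national ID becomes a one-row update, since all foreign keys point at the surrogate. A sketch with stdlib sqlite3 (the schema is illustrative):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        PRAGMA foreign_keys = ON;
        CREATE TABLE citizen (
            id          INTEGER PRIMARY KEY,
            national_id TEXT NOT NULL UNIQUE   -- natural key demoted to a constrained column
        );
        CREATE TABLE tax_return (
            id         INTEGER PRIMARY KEY,
            citizen_id INTEGER NOT NULL REFERENCES citizen(id)
        );
        INSERT INTO citizen (id, national_id) VALUES (1, 'OLD-123');
        INSERT INTO tax_return (citizen_id) VALUES (1), (1);
        -- Identity theft / reissued ID: one UPDATE, no cascade through child tables.
        UPDATE citizen SET national_id = 'NEW-456' WHERE id = 1;
    """)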
They had the same birth date, school, parents, phone number, street address, first name, last name, teachers, everything...
The story was that their dad was John Smith Sr in a long line of John Smiths going back a dozen generations. It was "a thing" for the family line, and there was no way he was going to break centuries of tradition just because he happened to have twins.
Note: In very junior grades the kids aren't expected to memorise and use a student ID because they haven't (officially) learned to read and write yet! (I didn't use one until University.)
Some examples:
> A relation should be identified by a natural key that reflects the entity’s essential, domain-defined identity
In some domains there is no natural key because identity is literally an inference problem and the relations are probabilistic. The objective of the data model is to aggregate enough records to discover and attribute natural keys with some level of confidence. A common class of data models with this property is entity-resolution data models.
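As a toy illustration of that shape of problem, here is a sketch using the stdlib difflib for the pairwise score; the records, fields, weights, and 0.8 threshold are all invented:

    from difflib import SequenceMatcher

    records = [
        {"name": "Jon Smith",  "dob": "1980-01-02"},
        {"name": "John Smith", "dob": "1980-01-02"},
        {"name": "Jane Doe",   "dob": "1975-06-15"},
    ]

    def match_score(a: dict, b: dict) -> float:
        # Identity is inferred, not declared: score candidate pairs and
        # merge those above a confidence threshold.
        name_sim = SequenceMatcher(None, a["name"], b["name"]).ratio()
        return name_sim if a["dob"] == b["dob"] else name_sim * 0.2

    THRESHOLD = 0.8
    entity_of = list(range(len(records)))    # each record starts as its own entity
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if match_score(records[i], records[j]) >= THRESHOLD:
                entity_of[j] = entity_of[i]  # records i and j resolve to one entity

    print(entity_of)  # -> [0, 0, 2]: the two Smith records collapse into one entity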
> All information in the database is represented explicitly and in exactly one way
Some data models have famously dual natures. Cartographic data models, for example, must be represented both as graph models (for routing and reachability relationships) and as geometric models (for spatial relationships). The “one true representation” has been a perennial argument in mapping for my entire life, and both sides are demonstrably correct.
> Every base relation should be in its highest normal form (3, 5 or 6th normal form).
This is one of those things that sounds attractive because it assumes there are no ambiguities about domain boundaries or semantics, which never holds in practice. I bought into this idea too when I was a young and naive data modeler. Trying to stamp out these ambiguities adds an unbounded number of data-model epicycles, which brings a lot of complexity and performance loss. At some point, strict normalization is simply not worth the cost.
In almost all cases, it is far more important that the data model be efficient to work with than it be the abstract platonic ideal of a domain model. All of these principles have to work on real hardware in real operational environments with all of the messy limitations that implies.
A national ID is not something issued at birth in the country I live in; it’s something you apply for at a certain age.
Never trust something outside your system to be stable.
"tell the truth that is out there"
Both truth and representation are very slippery, many-faceted concepts, encumbered with millennia of use and philosophy. Using them in this way is deceptive to the junior and useless to the senior.