And if the info is non-scalar, it's either option 2 (nullable FK) or 5 (JSON), depending on whether or not other things join with fields inside it.
MongoDB is great for this
Modeling polymorphic data in something like Postgres is where I like to start, until the complexity of the queries reveals itself.
The further one goes down that rabbit hole, the more complex the SQL becomes for the kinds of queries that polymorphic or graph-type databases handle well.
Where SQL grows convoluted over time to handle polymorphic queries is exactly where polymorphic databases can simplify things.
Using a polymorphic database may be worth it for some or all of the schema. It seems reasonable to extend something like Postgres to sync with, and allow reads or writes from, the polymorphic side.
Happy to listen and learn from alternative approaches.
I see two major sources of inspiration that can help us get out of the tar pit.
The first is the EAV approach as embodied in databases such as Datomic, XTDB, and the like. This is about recognizing that tables or documents are too coarse-grained and that the entity-attribute-value triple is a better primitive for modeling data and defining schemas. While such flexibility really simplifies a lot of use cases, especially the polymorphic data from the article, the EAV model assumes data is always about an entity with a specific identity. Once again the storage technology imposes a model that may not fit all use cases.
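A minimal illustration of the EAV shape (this is not Datomic's or XTDB's actual API; the attribute names are made up): each fact is a single (entity, attribute, value) triple, so entities with completely different attribute sets coexist in one store without schema migrations.

```python
# Each fact is one (entity, attribute, value) triple. Entity 2 simply
# has no insurance attributes; nothing is nullable or wasted.
facts = [
    (1, "patient/name", "Ada"),
    (1, "patient/payment-type", "insured"),
    (1, "insurance/provider", "Acme"),
    (2, "patient/name", "Bob"),
    (2, "patient/payment-type", "uninsured"),
]

def attrs(entity_id, facts):
    """Reassemble one entity from its triples."""
    return {a: v for e, a, v in facts if e == entity_id}

print(attrs(2, facts))
# -> {'patient/name': 'Bob', 'patient/payment-type': 'uninsured'}
```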
The second source of inspiration, which I believe is more generic and promising, is the one embodied in Rama from Red Planet Labs, which allows for any data shape to be stored following a schema defined by composing vectors, maps, sets, and lists, and possibly more if custom serde are provided. This removes the whole impedance mismatch issue between code and data store, and embraces the fact that normalized data isn't enough by providing physical materialized views. To build these, Rama defines processing topologies using a dataflow language compiled and run by a clustered streaming engine. With partitioning being a first-class primitive, Rama handles the distribution of both compute and data together, effectively reducing accidental complexity and allowing for horizontal scaling.
The difficulty we face today with distributed systems is primarily due to the too many moving parts of having multiple kinds of stores with different models (relational, KV, document, graph, etc.) and having too many separate compute nodes (think microservices). Getting out of this mess requires platforms that can handle the distribution and partitioning of both data and compute together, based on powerful primitives for both data and compute that can be combined to handle any kind of data and volumes.
If you are storing json blobs in SQLite and using a very fast serializer (gigabytes/s), then anything under a megabyte or so won't really show up on a hot path. Updates to complex entities can actually be faster, even if you're burning more IO and device wear.
If you need to join across properties in the JSON, I wouldn't use JSON. My canary is that if I find myself using the built-in JSON query functionality, I am too far into noSQL Narnia.
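The blob approach above can be sketched in a few lines; this is a minimal illustration with made-up table and field names, assuming one row per entity and serialization handled entirely in application code:

```python
import json
import sqlite3

# Store each entity as an opaque JSON blob in a single column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, doc TEXT NOT NULL)")

entity = {"name": "Ada", "payment_type": "insured",
          "insurance": {"provider": "Acme", "policy": "P-1"}}

# Write: serialize the whole entity; one row, no joins, no child tables.
conn.execute("INSERT INTO patients (id, doc) VALUES (?, ?)",
             (1, json.dumps(entity)))

# Read: fetch and deserialize in one step on the hot path.
doc = json.loads(conn.execute(
    "SELECT doc FROM patients WHERE id = ?", (1,)).fetchone()[0])
print(doc["insurance"]["provider"])  # -> Acme
```

The moment a query needs `json_extract` in a WHERE clause or a join, that field is a candidate for promotion to a real column.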
* https://www.martinfowler.com/eaaCatalog/singleTableInheritan...
* https://www.martinfowler.com/eaaCatalog/classTableInheritanc...
* https://www.martinfowler.com/eaaCatalog/concreteTableInherit...
Properly naming check constraints can go a long way towards providing better error messages, and stored procs can be used to ensure that multiple tables are updated as a single logical step. It would've been nice if SQL supported literal values in FK constraints, e.g. FOREIGN KEY (id, 'uninsured') REFERENCES patients(id, payment_type), as well as exclusion constraints, e.g. FOREIGN KEY (id) NOT REFERENCES insured_payment_info(id). As it stands, my preferred approach would be combining approach 4 with stored procs, except I would make id the primary key and (id, payment_type) a unique key.
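The wished-for literal FK can be approximated today with a CHECK on a discriminator column plus a composite foreign key against the (id, payment_type) unique key. A sketch in SQLite (table and column names are illustrative, not from any real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default
conn.executescript("""
CREATE TABLE patients (
    id           INTEGER PRIMARY KEY,
    payment_type TEXT NOT NULL CHECK (payment_type IN ('insured', 'uninsured')),
    UNIQUE (id, payment_type)              -- target for the composite FK below
);
CREATE TABLE insured_payment_info (
    patient_id   INTEGER PRIMARY KEY,
    payment_type TEXT NOT NULL CHECK (payment_type = 'insured'),
    policy_no    TEXT NOT NULL,
    -- The CHECK pins payment_type to 'insured', so this composite FK can
    -- only ever match patients rows where payment_type = 'insured'.
    FOREIGN KEY (patient_id, payment_type)
        REFERENCES patients (id, payment_type)
);
""")

conn.execute("INSERT INTO patients VALUES (1, 'insured')")
conn.execute("INSERT INTO patients VALUES (2, 'uninsured')")
conn.execute("INSERT INTO insured_payment_info VALUES (1, 'insured', 'P-1')")

# Attaching payment info to an uninsured patient violates the composite FK.
violation = False
try:
    conn.execute("INSERT INTO insured_payment_info VALUES (2, 'insured', 'P-2')")
except sqlite3.IntegrityError:
    violation = True
```

The same pattern works in Postgres with a composite FK against a UNIQUE constraint; it trades one redundant column for a constraint the database actually enforces.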
acquiesce•7mo ago
setr•7mo ago
ozgrakkurt•7mo ago
setr•7mo ago
gm678•7mo ago
acquiesce•7mo ago
Other top-level comments covered what I wanted to say, but my comment is the OG one. I deal with payments, transactions and all that with multiple currencies and other complexities. Just keep it simple and don't use OOP for this stuff; it's the wrong tool for the job.
setr•7mo ago
If you separate them into different insured and uninsured tables, then any tables associated with billing generally need to be split into insured/uninsured variants as well. Billing_customer and billing_customer_details now become insured_billing_customer and uninsured_billing_customer, insured_billing_customer_details and uninsured_billing_customer_details, etc.
As you add more data with this kind of constraint, everything fragments again, scaling at 2^n tables. This is similar to the async coloring problem: what you wanted was to fragment the data model locally, but instead anything that touches it gets poisoned.
Ideally the DB would let you enforce a constraint like UNIQUE(uninsured.billing_id, insured.billing_id), and the split-table approach would suffice.
I'm not seeing any top-level comments that resolve this, other than ignoring the problem altogether (letting the data model encode invalid states and handling it in app code) or switching to a different database.
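No mainstream SQL database offers that cross-table UNIQUE, but it can be emulated with triggers. A sketch in SQLite with illustrative table names; each side rejects a billing_id already present on the other:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE insured_billing   (billing_id INTEGER PRIMARY KEY);
CREATE TABLE uninsured_billing (billing_id INTEGER PRIMARY KEY);

-- Reject an insured row whose billing_id already exists on the other side.
CREATE TRIGGER insured_excl BEFORE INSERT ON insured_billing
WHEN EXISTS (SELECT 1 FROM uninsured_billing WHERE billing_id = NEW.billing_id)
BEGIN
    SELECT RAISE(ABORT, 'billing_id already uninsured');
END;

CREATE TRIGGER uninsured_excl BEFORE INSERT ON uninsured_billing
WHEN EXISTS (SELECT 1 FROM insured_billing WHERE billing_id = NEW.billing_id)
BEGIN
    SELECT RAISE(ABORT, 'billing_id already insured');
END;
""")

conn.execute("INSERT INTO insured_billing VALUES (1)")
conn.execute("INSERT INTO uninsured_billing VALUES (2)")

blocked = False
try:
    # billing_id 1 is already insured, so the trigger aborts this insert.
    conn.execute("INSERT INTO uninsured_billing VALUES (1)")
except sqlite3.IntegrityError:
    blocked = True
```

It works, but it's exactly the kind of hand-rolled enforcement the comment above is complaining about: the invariant lives in triggers rather than in a declared constraint.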
frollogaston•6mo ago
I wouldn't do this with separate tables. I also wouldn't do this with polymorphism, or OOP in general, even if the DBMS properly supported OOP. Trying to represent these constraints by classifying things will get confusing fast.
IceDane•7mo ago
Sometimes you can indeed view things from a different perspective and come up with a simpler data model, but sometimes you just can't.
acquiesce•7mo ago
The process for handling the two cases is distinct. This is the classic OOP issue of trying to use a Shape object to represent both Box and Sphere. Just don't. Stick with transactional/linear code and use transforms where it makes sense for certain cases (i.e., MVVM style). Handle the two cases distinctly so there is no room for ambiguity.
People get this confused and they think it can’t be simpler.