miki_ships•1h ago
Fractional indexing is one of those ideas that looks obvious until you run it in a long-lived list and the keys start growing weirdly. Curious how you handle that tradeoff here. Do you rebalance or renumber at some point, or is compaction left to the caller?
marcobambini•1h ago
Good question. In the current version, compaction is left to the caller.
The library does a few things to keep key growth practical:
1. generate_n_keys_between: if you know you're inserting N items at once (e.g. a bulk paste or drag of multiple items), it distributes them evenly in a single call rather than chaining N sequential generate_key_between calls, which would produce progressively longer keys.
2. Higher bases: base62 (default) and base95 give you a much wider branching factor per character than base10, so keys stay shorter under the same insertion patterns.
But yes, in a long-lived list with adversarial insertion patterns (always inserting at the same spot), keys will grow without bound; that's inherent to the algorithm. In practice, compaction is an application-level concern: you'd pick a quiet moment (or a sync boundary, if you're doing CRDT-style collaboration) and renumber the whole list with fresh, evenly spaced keys.
The library gives you generate_n_keys_between(NULL, NULL, n), which makes it trivial to generate N fresh keys for the whole list in one call.