One of the neat properties of base58 (and, strictly speaking, of base62 as well) is that it does not contain any characters that require special encoding to be used in either a URL or a filename. Nor does it contain any characters that are considered to be "word-breaking" by most user interfaces, so you can do things like double-click on a base58 string to select the entire string. Base64 has none of the above properties, while being only very slightly more efficient.
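For illustration, the usual base58 scheme treats the input as one big integer and repeatedly divides by 58. This is a minimal sketch using the common Bitcoin-style alphabet; the function name and the convention of mapping leading zero bytes to leading '1's are assumptions based on that common scheme, not a reference implementation:

```python
# Bitcoin-style base58 alphabet: no 0, O, I, or l, and nothing
# that needs escaping in URLs or filenames.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # common convention: each leading zero byte becomes a leading '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out
```

The resulting string is alphanumeric only, which is what gives base58 its double-click-to-select and URL/filename-safe properties.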
JdeBP•6mo ago
I wonder whether anyone has ever covered all of the tradeoffs in one place. There are quite a few of these encodings.
UUencoding worked even when passed through non-ASCII mechanisms/protocols that didn't do lowercase, or that were case-insensitive; but at the expense of using what in some contexts would be reserved metacharacters. Whereas XXencoding did not have a problem with metacharacters, only using plus and minus in addition to the alphanumerics, but at the expense of being case-sensitive.
Vis encoding can avoid whatever metacharacters one chooses, with no changes to the decoder, the choice being entirely at the encoding end, and is similarly used in scenarios where one does not want to break at whitespace or general word-breaking punctuation; but it has a lot of overhead for each such encoded character and requires at minimum an alphabet of three punctuation characters (caret, minus, and backslash), the octal digits, and the letter 'M'.
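That caret/minus/backslash/'M' alphabet can be seen in a simplified per-byte sketch of the vis(3)-style scheme. This assumes the classic visual forms (\^X for control characters, \M-x for meta-printable, \M^X for meta-control) and skips the real implementation's flags and edge cases, such as the octal \ddd alternative:

```python
def vis_byte(b: int) -> str:
    # Simplified vis(3)-style visual encoding sketch; the BSD
    # implementation has more flags (e.g. VIS_OCTAL for \ddd output)
    # and handles edge cases like meta-backslash differently.
    if b == ord("\\"):
        return "\\\\"                    # backslash introduces escapes, so double it
    if 0x20 <= b <= 0x7E:
        return chr(b)                    # printable ASCII passes through
    if b & 0x80:                         # meta (high) bit set
        prefix, b = "\\M", b & 0x7F
    else:
        prefix = "\\"
    if 0x20 <= b <= 0x7E:
        return prefix + "-" + chr(b)     # meta-printable: \M-x
    if b == 0x7F:
        return prefix + "^?"             # DEL
    return prefix + "^" + chr(b + 0x40)  # control: \^X or \M^X
```

The overhead is visible here: a single meta-control byte expands to four characters, versus roughly 1.33 for base64.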
xhkkffbf•6mo ago
It seems like the challenge is that Base85 includes lots of characters that look similar, like an oh and a zero.
OutOfHere•6mo ago
I prefer base56, specifically https://github.com/foss-fund/base56 , for enhanced visual safety. It removes 1 and o. As noted on the linked page, the corresponding charset is 23456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnpqrstuvwxyz.
Note that there is no good standardization of base58 and therefore of base56. As such, there may exist variations depending on where you look, but the linked one is what I prefer.
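Since there is no single standard, the encoding itself is just the same big-integer scheme with the 56-character alphabet quoted above. A hypothetical sketch (not the reference implementation from the linked repository; leading-zero handling varies between implementations and is omitted here):

```python
# Base56 alphabet from the linked page: base58 minus '1' and 'o'.
B56 = "23456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnpqrstuvwxyz"

def b56encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 56)
        out = B56[r] + out
    # NOTE: implementations disagree on how to represent leading
    # zero bytes; that convention is left out of this sketch.
    return out
```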