Also, mknod for populating entries in /dev, creating fifos, etc.
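For reference, a minimal sketch of doing that from C, assuming Linux/glibc headers (the /tmp paths and the 1,3 major/minor pair for the null device are just illustrative):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>   /* makedev(); header location varies by system */

    int main(void)
    {
        /* Named pipe: mkfifo(3) is effectively mknod with S_IFIFO. */
        if (mkfifo("/tmp/demo.fifo", 0644) == -1)
            perror("mkfifo");

        /* Device node: needs privilege; 1,3 is the usual Linux null device. */
        if (mknod("/tmp/demo-null", S_IFCHR | 0666, makedev(1, 3)) == -1)
            perror("mknod");

        return 0;
    }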
In my day we had to use od, and we were happy to have it. Now get off my lawn.
Full disclosure: I am one of those "modern punk kids", but my first boss was firmly in the od generation, and all his documentation referenced it as such. hexdump is ergonomic heaven in comparison.
I remember hitting that problem on HP-UX while building gcc and GNU tools.
Aside: https://en.wikipedia.org/wiki/Ar_(Unix) says both “The ar format has never been standardized” and “Depending on the format, many ar implementations include a global symbol table (aka armap, directory or index) for fast linking without needing to scan the whole archive for a symbol. POSIX recognizes this feature, and requires ar implementations to have an -s option for updating it.”
Does that mean POSIX defines what the CLI ar tool must do, but not what the files it writes must look like?
[1]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/ar.html
[2]: https://pubs.opengroup.org/onlinepubs/7908799/xcu/pax.html
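For illustration, the member header that most (but, as noted, not all) ar variants agree on is just fixed-width ASCII fields; a sketch of the layout commonly found in <ar.h>:

    #define ARMAG  "!<arch>\n"    /* 8-byte magic at the start of the archive */
    #define SARMAG 8

    struct ar_hdr {               /* one header per archive member */
        char ar_name[16];         /* member name */
        char ar_date[12];         /* mtime, decimal seconds */
        char ar_uid[6];           /* owner uid, decimal */
        char ar_gid[6];           /* owner gid, decimal */
        char ar_mode[8];          /* file mode, octal */
        char ar_size[10];         /* member size in bytes, decimal */
        char ar_fmag[2];          /* trailing "`\n" */
    };

The symbol table that -s updates is where implementations diverge: System V/GNU ar stores it as a special member named "/", while BSD ar uses "__.SYMDEF". POSIX requires the option but not the on-disk encoding.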
Also, even with the 255-character limitation, there could be optimizations in place using a struct/union, similar to the small-string optimization done in C++ and other modern languages.
However, even granting that this was a non-starter on the PDP-7/11, there is no excuse for why WG14 has never, since 1989, considered adding capabilities to the standard library similar to SDS, or language improvements like fat pointers (there was even a proposal from Ritchie himself).
They had 36 years to improve this.
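A rough sketch of the kind of struct/union trick meant here, loosely in the spirit of SDS or C++'s small-string optimization (the names and the 15-byte inline threshold are just illustrative):

    #include <string.h>

    /* Short strings live inline; longer ones spill to the heap.  The two
     * layouts share storage via the union, with one byte as discriminator. */
    struct sstr {
        unsigned char is_long;        /* 0 = inline, 1 = heap-allocated */
        union {
            char inline_buf[15];      /* short contents stored directly */
            struct {
                char  *ptr;           /* heap buffer */
                size_t len;           /* cached length */
            } heap;
        } u;
    };

    static size_t sstr_len(const struct sstr *s)
    {
        return s->is_long ? s->u.heap.len : strlen(s->u.inline_buf);
    }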
And what do you mean "old enough"? I still had new support accounts restricted to 8 characters by clients this year. ;)
CP/M and MS-DOS extended this to a much more generous 8.3
I remember the thrill of multitasking at the command line for the first time with &, blew my mind. :-D
Can’t remember how I obtained Minix before having access to the internet. Maybe downloaded from a BBS? Or possibly an instructor had a copy on floppy.
I do remember the many floppies for Win95 and Slackware, though that was a few years later.
My PS/2 HDD was only 20MB, and I recall having enough free space after the install. For Minix 1.5, there's another estimate here of 9 DD floppies -- 6.5MB compressed. That seems about right.
I definitely went for the compiler and all the accessories. The C compiler was far from ANSI-compliant and barely K&R. It made for some interesting times.
Minix 3 is BSD-licensed, and distributed on CD-ROM now, and can still be purchased from Pearson! https://www.pearson.com/en-us/subject-catalog/p/operating-sy...
Per getconf -a, I see PATH_MAX (and _POSIX_PATH_MAX) as 4096. Is that small? What would be not-small?
"Not small" is "limited only by resource constraints." Software often breaks when it hits large (but correct!) paths even though there's no technical limitation to using them, and valid ways to construct them, even if POSIX APIs are required by spec to fail for some valid paths because of arbitrary limits.
Linux is actually pretty good about ignoring unnecessary error conditions even if it violates the spec, other unixes not so much.
IIRC, some real-world systems set PATH_MAX to INT_MAX (Solaris?), but I don't know if any modern Unix systems support arbitrary length paths. Everybody demanded features that required the kernel to cache the path in the kernel, at which point the notion of just walking the userspace path buffer (no separate allocation, no need for a limit) went out the window.
EDIT: glibc sets NL_TEXTMAX to INT_MAX. I always get confused when discussing this issue. NL_TEXTMAX made good on the threat that these MAX macros might be (effectively) unlimited and so shouldn't be used to size buffers, but I don't think any system ever did so with PATH_MAX.
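Which is why the usual advice is to ask at runtime rather than bake PATH_MAX into a buffer size; a minimal sketch using pathconf(3), which is allowed to report "no limit" by returning -1 without touching errno:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        errno = 0;
        long max = pathconf("/", _PC_PATH_MAX);

        if (max == -1 && errno == 0)
            puts("no fixed path limit under /");
        else if (max == -1)
            perror("pathconf");
        else
            printf("path limit under /: %ld\n", max);

        return 0;
    }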
And programmers should allocate buffers of absurd size in order to contain pathnames of unpredictable depth?
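The usual alternative to one absurdly sized buffer is to grow on demand; a sketch using getcwd's ERANGE signal (the 256-byte starting guess is arbitrary):

    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Returns a malloc'd current-directory path of whatever length it needs. */
    static char *cwd_any_length(void)
    {
        size_t size = 256;                     /* arbitrary starting guess */
        char *buf = NULL;

        for (;;) {
            char *tmp = realloc(buf, size);
            if (tmp == NULL) { free(buf); return NULL; }
            buf = tmp;

            if (getcwd(buf, size) != NULL)
                return buf;                    /* it fit */
            if (errno != ERANGE) { free(buf); return NULL; }
            size *= 2;                         /* too small: double and retry */
        }
    }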
Also discovered the other day that I could fool macOS Finder's file sorting by putting zero-width spaces between the numerals in a filename. Not sure how I feel about that one, though.
After being bitten so many times by spaces or special characters in filenames, one learns.
This sounds like a very onerous limitation, so the company's ads had copy that consisted only of three-letter words, to make the case that it's not so bad after all...
(I'm guessing the engine used four bytes to store a reference, and it needed one byte for other metadata.)
EDIT: Sorry, I realize I didn't explain that very well. A Forth system has a "dictionary" that stores the names of words (think procedures or functions) as well as their code. Names in the dictionary are compressed like this (e.g., TYP/4 or TYP/10). When you're writing code, you write TYPE or TYPOGRAPHY and the compiler searches the dictionary for either TYP/4 or TYP/10, depending on what you typed, ignoring the other one. Most of the time this works. If you have a bunch of words of the same length with different suffixes, they will clash (e.g., AVGX and AVGY both reduce to AVG/4).
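A sketch of that scheme as described (first three characters plus a length byte; the struct and names are illustrative, not from any particular Forth):

    #include <string.h>

    /* Compressed dictionary name: the word's full length plus its first
     * three characters.  TYPE -> {4, "TYP"}, TYPOGRAPHY -> {10, "TYP"}. */
    struct dict_name {
        unsigned char len;
        char          head[3];
    };

    /* Match on the compressed form only -- which is also why AVGX and
     * AVGY both collapse to {4, "AVG"} and clash. */
    static int name_matches(const struct dict_name *d, const char *word)
    {
        size_t n = strlen(word);
        return d->len == n && memcmp(d->head, word, n < 3 ? n : 3) == 0;
    }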
That's still one letter more than the earlier Microsoft BASIC ;)
raydenvm•8mo ago
After that, the next file systems went up to 255 characters.
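The 255 ceiling falls out of the on-disk format: instead of a fixed slot, newer file systems store a variable-length entry whose name length is a single byte. Roughly the shape BSD FFS/UFS descendants use (field names given as a sketch, not lifted from any one header):

    #include <stdint.h>

    /* Variable-length directory entry; a one-byte name length caps
     * file names at 255 characters. */
    struct ffs_direct {
        uint32_t d_fileno;    /* inode number */
        uint16_t d_reclen;    /* length of this whole record */
        uint8_t  d_type;      /* file type */
        uint8_t  d_namlen;    /* name length, at most 255 */
        char     d_name[];    /* the name itself */
    };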
jmclnx•8mo ago
My first UNIX was Wang IN/ix, which had the 14-character limit. That was in the very late 80s. Not long afterwards, for home use, I got Coherent OS, which also had a max of 14.
Later came Slackware, where it seemed the sky was the limit.
But to be honest, I wish there were still a smaller file name limit; I usually keep my file names small. But I know I am in a tiny minority when it comes to the length of file names :)
rwmj•8mo ago
Edit: Yes:
https://github.com/gdevic/minix1/blob/3475e7ed91a3ff3f8862b2...
https://github.com/gdevic/minix1/blob/master/fs/type.h
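For comparison, the fixed-size entry behind the 14-character limit, in the style of those V7/Minix headers (reproduced as a sketch): a 16-byte slot, 2 bytes of inode number and 14 bytes of name.

    #define DIRSIZ 14

    /* Classic V7/Minix 1 style directory entry: 16 bytes total, so a
     * name can never exceed 14 characters (and needs no NUL when it
     * uses all 14 bytes). */
    struct v7_direct {
        unsigned short d_ino;          /* inode number; 0 marks a free slot */
        char           d_name[DIRSIZ];
    };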
trollbridge•8mo ago
thesuitonym•8mo ago
EDIT: Upon further reading, it appears MS did finally get rid of this restriction, but some 32-bit apps still don't play nice.
qingcharles•8mo ago
AStonesThrow•8mo ago
Some would even have configurable parameters.
dcminter•8mo ago
https://www.1000bit.it/ad/bro/wang/vs8000.pdf
"Wang also offers a UNIX System V.2-compatible operating system, IN/ix for the VS8000 series" and a footnote mentions that it's due in 1990
My Dad did some work with Wang 2200 systems, but by 1990 it had become clear that the IBM PC compatible was inevitable and he'd switched to Niakwa Basic instead of Wang systems for his customers (mostly running a bespoke small business payroll system).
clausecker•8mo ago