I think this is a pretty dangerous attitude; it is really the only thing wrong with Linux, and it probably leads to the replacement of simple structure and functionality with a complex software suite that is merely more convenient, like systemd. "Let's change this thing because we want to, because it will improve performance 0.0024%."
Feature creep is what happens when restraint was not exercised.
IMO, since it really doesn't matter what the filesystem looks like, leave it be for standards and compatibility. Seriously, it takes, idk, maybe a lack of humility to want to change fundamental characteristics of UNIX when the reasons for doing so are a little capricious.
I'm not really talking about the parent, fwiw. I'm talking about the crowd and ochlocracy.
There are plenty of greybeards for whom "Linux" is a full-screen terminal running emacs on decade-old hardware. "I don't use antialiased fonts, why the hell should I care about decent HiDPI support?" And then they protest every time some working group tries to modernise and improve the Linux desktop. You see them every time on this forum.
I'm a greybeard, I've used Linux full time on the desktop for 20 years. I don't get this conservative, "we don't need it" attitude.
I was thinking "just symlink /sbin with /bin", but there would probably be conflicts.
Can't imagine the frustration devs must have.
It's a historical quirk on Linux, where there is no clear separation between "base OS packages" and "3rd party packages".
On FreeBSD the split is very real, anything in /bin/ ships with my OS and is maintained and updated by the FreeBSD team. Anything in /usr/bin/ comes from ports and is thus a 3rd party package I installed and can be safely nuked and I need to maintain/update it.
Having only usable binaries in the path aids discoverability of the system.
All the files that might be expected by others to be in certain standard locations are symlinked to those locations, e.g. the executables to /usr/bin, /usr/sbin, /bin or /sbin, in order to appear in PATH.
In this case you no longer need any kind of database to know which files may be safely nuked to delete any package.
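A minimal sketch of that scheme (the package name and prefix here are hypothetical, not any particular distro's layout):
# The package lives entirely under its own directory:
#   /packages/foo-1.0/bin/foo
#   /packages/foo-1.0/man/man1/foo.1
# Expose the executable in a standard location so it appears in PATH:
ln -s /packages/foo-1.0/bin/foo /usr/bin/foo
# Uninstalling then needs no package database:
rm -rf /packages/foo-1.0
find /usr/bin -xtype l -delete   # sweep up now-dangling symlinks (GNU find)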
Moreover, in FreeBSD there is no such separation between the "base OS packages" and "3rd party packages", implemented as a difference between root and /usr. You might have misremembered /usr/local, which is indeed a place for "3rd party packages" in all UNIX-derived operating systems.
There are many "base OS packages" that are installed in /usr/bin or in /usr/sbin.
In any FreeBSD system, you can see their source files in /usr/src/usr.bin and in /usr/src/usr.sbin.
I have been using FreeBSD for a quarter of a century, since FreeBSD 2.0, and there has never been such a separation between root and /usr.
The separation between /bin and /usr/bin and the other similar pairs was made only to allow /usr to be unmounted, when it is on another device than the root device, but still have in the root file system the minimal set of tools needed for diagnosing and repairing any broken file system or network connection.
In ancient FreeBSD installations it was always recommended to have a separate small root partition, e.g. of a few hundred megabytes, and some large partitions for usr and var.
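In fstab terms, roughly like this (device names purely illustrative):
# /etc/fstab: small root, everything big on separate partitions
# Device       Mountpoint  FStype  Options  Dump  Pass
/dev/ada0p2    /           ufs     rw       1     1
/dev/ada0p4    /usr        ufs     rw       2     2
/dev/ada0p5    /var        ufs     rw       2     2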
This original use has become completely obsolete, because now, for diagnosing and repairing problems, it is preferable to boot from a USB stick or from the network (using a ramdisk as root file system), and then run diagnostics or repair programs without touching even the root file system unless modifying it is intentional.
In FreeBSD it might still be possible to put /usr on a different partition or device and then unmount /usr, but in many Linux distributions this traditional usage is broken, because some of the programs installed in the root directories need components installed in /usr, so when /usr is unmounted they stop working.
I always thought the rationale was that if statically linked binaries are on a different partition they can be used to recover the system from a failure.
Edit: files in /bin are also statically linked; I'm unsure about what I wrote above, but I vaguely recall something like that.
Not only feasible but it's been implemented a few times over the years. The most notable being GoboLinux[1][2], which is nearly 20 years old.
[1] https://en.wikipedia.org/wiki/GoboLinux
> I was thinking "just symlink /sbin with /bin", but there would probably be conflicts
Given how long /sbin et al have been around, there would always be some edge cases. However it is still possible to do. GoboLinux uses symlinks to achieve FHS[3] compatibility while still having friendly directory names. Arch Linux also just has one bin directory and uses symlinks for compatibility:
» ls -l / | grep bin
lrwxrwxrwx 1 root root 7 2021-12-07 02:41 bin -> usr/bin
lrwxrwxrwx 1 root root 7 2021-12-07 02:41 sbin -> usr/bin
» ls -l /usr | grep bin
drwxr-xr-x 5 root root 110,592 2022-05-06 09:23 bin
lrwxrwxrwx 1 root root 3 2021-12-07 02:41 sbin -> bin
[3] https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
Likewise, the system configuration goes to /etc while the userland configuration goes to /usr/pkg/etc.
All it takes to factory reset a NetBSD system is an rm -Rf /usr/pkg.
The problem is that the actual benefits are pretty nebulous, so it's probably not worth the effort (and the drawbacks of using different conventions than most other *nix users).
On FreeBSD, 3rd party packages go into /usr/local, not /usr.
You absolutely will get base packages in /usr/bin (eg `env`) so nuking /usr/bin will break your FreeBSD install.
There's a good write up here: https://unix.stackexchange.com/questions/332764/role-of-the-...
It was a historical quirk to start with. At Bell Labs, back in the early 1970s, Unix was being developed on PDP-11s with RK05 hard disks (with removable disk packs), which had an amazingly generous capacity of 2.5MB each. The Unix operating system had grown too big to fit on a single RK05 disk volume so they had to split it across two. Other operating systems of the period faced similar issues, but dealt with them in (arguably) more elegant ways – on IBM mainframes, OS/360 maintained a database ("catalog") mapping file paths (dataset names, to use the proper terminology) to volume names, so you could move a file to another disk without changing its path. True to Unix's penchant for simplicity, its authors decided instead to just split the OS into / and /usr. And the split survived long after they'd upgraded to more spacious disks.
Any other explanation for the split is essentially a retcon. Some of those retcons (even if, as other commenters have pointed out, not your own) may actually have become true – some of them may have been approximately true to begin with, and they influenced people's decisions, thereby making themselves more true over time. But its ultimate origins will forever remain this quirk of computing history.
And why shouldn't they?
It's not as if a user could do anything damaging with them, if the system is set up properly.
> Having only usable binaries in the path aids discoverability of the system.
Except when someone new has to go online to ask "I found this tutorial telling me to use the `xyz` command to do this, but all I get is `bash: xyz: command not found`, please help!"
This means that in practice people will just add sbin to PATH to get a somewhat usable system, which makes the division between bin and sbin useless.
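In practice that means a line like this in everyone's ~/.profile or similar:
# make the "admin" tools resolve for ordinary users too
export PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"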
Furthermore, on BSD-derived systems, binaries that should not be invoked by users directly (e.g., daemons) need to be stored in libexec.
I don't mean to shame you, I sometimes comment without reading TFA, and in your case you add a few more details that were not present in the article. I just found it interesting.
Because of that, in many Linux distributions there are few, if any, static executables. Due to this, it may happen that a botched glibc upgrade makes the system unusable, because no executable can be started to repair it (nowadays many distributions have a static busybox for such situations). I have seen this a couple of times, and the first time I could not understand what happened, because I was used to older systems, where the commands that I tried to execute (e.g. ls or mv) had been statically linked. Such a thing could never happen in a traditional UNIX or Linux system, before glibc disallowed static linking.
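For example, assuming your distribution does ship a static busybox:
file "$(command -v busybox)"   # confirm it is statically linked
# even with the dynamic loader or glibc broken, its applets still run:
busybox ls /lib
busybox cp /lib/libc.so.6.bak /lib/libc.so.6   # (hypothetical backup file)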
The GNU libc should have been split into a libc with most of the functions, which may be linked statically without problems, and into a small library with the name resolving functions, which could be linked dynamically only by the programs which need those functions.
Even better, the name-resolving functions should have been organized in such a way that their default configuration works with static linking, with dynamic linking chosen only when you really intend to override the default configuration to use less common services, e.g. NIS.
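The tension is easy to demonstrate: glibc still permits -static, but linking in any of the name-resolving (NSS) functions draws a link-time warning (wording varies by glibc version):
$ cat hosts.c
#include <netdb.h>
int main(void) { gethostbyname("localhost"); return 0; }
$ gcc -static -o hosts hosts.c
# warning: Using 'gethostbyname' in statically linked applications
# requires at runtime the shared libraries from the glibc version
# used for linking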
We _could_ all decide to drive on the other side of the road, if the other side is better, but you have to incorporate the cost of the change.
For those of us who ran small-disk NFS workstations back in the day, having the split and a common /usr was no quirk, and very useful. (There were also diskless (Sun, OpenFirmware netbooting) workstations: common /bin, /usr, but per-machine /var on the NFS server.)
The article states:
> Cheap retail hard drives passed the 100 megabyte mark around 1990, and partition resizing software showed up somewhere around there (partition magic 3.0 shipped in 1997).
Yeah, except if you have a fleet of several hundred or thousand workstations to provision. "Cheap" is relative, especially if you're an academic institution.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html
You may be thinking of the /bin and /usr/bin difference, though.
$ ls -la / | grep -e bin -e lib
lrwxrwxrwx 1 root root 7 Dec 6 23:41 bin -> usr/bin
lrwxrwxrwx 1 root root 7 Dec 6 23:41 lib -> usr/lib
lrwxrwxrwx 1 root root 7 Dec 6 23:41 lib64 -> usr/lib
lrwxrwxrwx 1 root root 7 Dec 6 23:41 sbin -> usr/bin
> Utilities used for system administration (and other root-only commands) are stored in /sbin, /usr/sbin, and /usr/local/sbin. /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin.
> /bin/ User utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
> /sbin/ System programs and administration utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
My memory is hazy but I recall the distinction being / vs /usr not /bin vs /sbin.
Simplicity is reason enough to change something.
When things break because of reasonable change, they can be fixed. And in this case, backwards compatibility can be ensured simply by symlinking things.
FWIW, Slackware keeps these separate, following the Linux Standard Base.
Most examples will include the standard user path plus /sbin and /usr/sbin but you can add any directories you want to the option.
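Assuming the option in question is sudo's secure_path, it lives in sudoers:
# /etc/sudoers (edit via visudo); the value is illustrative
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"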
For every failed candidate, you are doing one system call, so roughly the same cost each way.
Now if you just do an execve, you’re just paying that cost. If you stat first, you pay the cost of another system call that doesn’t change the flow of your program at all (a nice way of saying you’re wasting time).
Unless stat is dramatically faster than exec on a nonexistent or non-executable path, there’s never a case where this is better.
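This is easy to watch, assuming strace and GNU env (which, I believe, searches PATH via execvp) are available:
# each candidate directory costs exactly one execve(); misses come back
# as ENOENT and the search moves on; there is no stat() pre-check
$ PATH=/nope1:/nope2:/usr/bin strace -e trace=execve env true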
Having mount be statically linked makes it much easier to recover that system.
The ideal of "/sbin for system tooling" isn't so much one of static vs dynamic but rather users accidentally finding system tools that don't work and sending email to the admin saying "mount gives me a permission denied error" when they have no business running it.
But I have only ever seen historic references to that argument, from back when dynamic linking was scary and unreliable. I certainly have never encountered that situation in almost 25 years of using Linux.
$ file /sbin/* | grep "dynamically linked" | wc -l
325
$ file /sbin/* | grep -i "static" | wc -l
0
On Ubuntu focal, but Red Hat is similar.
I know RHEL, Debian, and Arch do. Not a lot outside of those families.
> to merge /sbin with /bin, and /usr/sbin with /usr/bin
It's a bit more drastic than you make it out to be. This would give two valid paths to the same commands. It would make tab-completion slow. It would likely break all kinds of compatibility across the SUS. And it is incredibly arbitrary, no better or worse than eliminating the system hierarchy entirely and putting everything in /.
You're literally saying that not arbitrarily changing the file structure of linux is dangerous. I don't think that's what you meant.
It's not about "because it's been that way for 30 years" (even though it's been 50 years, but never mind that); it's about consistency and standards. It just does not matter one way or the other what the structure of the file system is, so any agenda to change something that doesn't matter is itself a specious agenda. Changing fundamental design introduces complexity for no good reason. As soon as you do it, you've created a special case that doesn't work anywhere else and jeopardizes compatibility.
It should be easy enough to repair, but it was just an old laptop I wanted to test something on, so I ended up throwing the laptop back in the drawer instead.
But autocomplete after sudo doesn't work for me on a stock Debian install anyway, not sure what one needs to do to get around that. I don't really rely on it. If I'm doing enough work that needs root I start the session with "sudo su -" anyway so not having autocomplete after sudo is not a big deal for me.
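If I remember right, on bash you can get most of the way there with:
# in ~/.bashrc: complete command names (-c) and filenames (-f) after sudo
complete -cf sudo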
In practice, well-behaved shells cache the contents of PATH to speed up operations.
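In bash, for instance, that cache is the `hash` table, which you can inspect and reset:
$ hash      # list remembered command locations (hits + paths)
$ hash -r   # forget them all, e.g. after installing a new binary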
Downside is it stops the autocomplete, so if you, say, wish to quickly check what a binary is called on the system, e.g. whether you should sudo apache2 or httpd, it will not work...
It's just outweighed by a couple orders of magnitude by all the overhead that comes with successfully launching another executable, unless you have, like, a thousand junk paths in your PATH.
It didn't work out this way historically (doing unnecessary string processing and requiring extra memory could've been more expensive than the context switches), the performance impact of a failed execve isn't normally a high priority, and there are other reasons not to want stuff in the kernel (not that that stops frankly less critical stuff from getting in), but there's definitely low-hanging fruit here if it, like, mattered.
Of course in the last twenty-five years I don't think I've ever really used a system set up like that. But it does seem nice to at least be able to do so.
I think it's nice to be able to keep admin utils out of an admin's PATH when the admin isn't intending to use them.
It's much less interesting to me to keep daemons and such out of anyone's PATH if running them can't do much, though usually those things really belong in a libexec directory and should only be exec'ed intentionally.
You could move all the things in /bin and /sbin to /usr/bin and /usr/sbin, then leave behind links (symbolic or hard).
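A rough sketch of the mechanics, glossing over the very real hazards of doing this on a live system (distributions that did the /usr merge shipped dedicated tooling for exactly that reason):
cp -a /bin/. /usr/bin/ && rm -rf /bin && ln -s usr/bin /bin
cp -a /sbin/. /usr/sbin/ && rm -rf /sbin && ln -s usr/sbin /sbin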
Since everyone ends up having /bin and /usr/bin in PATH, this merge makes a lot of sense from a performance point of view.
Merging bindirs and sbindirs is a touchier topic. Many things in sbindirs should have been in bindirs all along, and many should move to libexecdirs, but some should stay behind so that privileged users can keep sbindirs out of PATH when they're not wearing an admin hat.
$ find /{usr,bin,sbin}
/usr
/usr/bin
/usr/bin/env
/bin
/bin/sh
find: '/sbin': No such file or directory
That said, I think one of the better reasons (and ways) to weigh the value of changing some long-term practice is to focus on the anticipated costs of the change on one side of the ledger, and the ongoing (easy to ignore) unbounded costs of the status quo on the other (and appropriately weight them by who pays and how often). To shoot from the hip:
- If it's only a modest improvement that still supports a bit of misunderstanding, folksonomizing, and arguing about where things belong, it'll just waste time and energy better spent elsewhere. Any time would probably be better spent on writing and promoting/propagating a really good canonical reference to the status quo that can help drive out confusion and help devs/admins answer practical questions (even if inefficiently).
- If (utopia warning) someone is able to significantly improve how accurately and quickly humans can make real dev/admin decisions from a clear mental model _and_ get enough buy-in to do it across all of the major Unix-alikes, it's probably worth some medium-term pain.
FWIW, the ongoing progress of NixOS, which doesn't really have any of these paths (beyond /usr/bin/env and /bin/sh), demonstrates that this pain is surmountable with enough eyes and hands.
The first PC I built had 7 disk drives in a tower case, including four distinct hard drives. Yes, it was crazy. But the largest of these by far was 540 MB. It made sense to keep the boot stuff on its own hard drive.
Linux has `/boot`, of course, but `/boot` should never appear in $PATH. I think.