It is indeed true that when you have limited resources, simpler - in the sense of better, more beautiful - solutions often emerge.
I’ve spent like eight years with Ubuntu and realized it’s all symbol manipulation to me. I learn what goes where, but only through practice, never because I understand the semantics.
Also systemd has a "file-hierarchy" man page for its understanding of the hierarchy, which includes e.g. its use of /run and which directory can be read-only - https://man7.org/linux/man-pages/man7/file-hierarchy.7.html
Disk space for binaries has not been a problem for decades now.
No reason to do it on an embedded system. Lots of backward-compatibility reasons on servers/desktops.
Having the OS mounted read-only provides some security benefits.
The other option would of course be to have / mounted ro and then have rw mounts in /home, /etc, /var and /tmp, but this is more complicated than a rw / and a ro /usr.
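A minimal sketch of the simpler variant in /etc/fstab (device names made up):

$ cat /etc/fstab
/dev/sda2  /     ext4  rw,relatime      0  1
/dev/sda3  /usr  ext4  ro,nosuid,nodev  0  2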
https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
Off the top of my head:
Nobody questions why the main drive is C:, a remnant of early computers having two floppy drives (I'm not sure) on A: and B:
Or more recent - C:/Windows/System32 holds 64-bit executables; 32-bit executables live in C:/Windows/SysWOW64
Recently I was trying to install some obscure driver for a device that doesn't autodetect on my Windows 10 work computer, and I had to go through the old-school "add device" wizard. When clicking to manually provide the driver, the dialog is exactly (or almost?) the same as the one from Windows 95, and the path defaults to... A:\! There's no floppy on this computer; there isn't even an optical drive!
I think this is a pretty dangerous attitude (it is really the only thing wrong with Linux), and it probably leads to the replacement of simple structure and functionality with a complex software suite that is merely more convenient, like systemd. "Let's change this thing because we want to, because it will improve performance 0.0024%."
Feature creep is what happens when restraint was not exercised.
IMO, since it really doesn't matter what the filesystem looks like, leave it be for standards and compatibility. Seriously, it takes, idk, maybe, a lack of humility to want to change fundamental characteristics of UNIX when the reasons for doing so are a little capricious.
I'm not really talking about the parent, fwiw. I'm talking about the crowd and ochlocracy.
/sbin and /usr/sbin are for binaries that need root. You put them in separate directories so their permissions all match up, and so they don't show up when completing in bash.
The paths without /usr - /bin and /sbin - are available from the get-go. It is the very first partition that is mounted, and what is guaranteed to be available if you do "init 1" or boot in single-user mode. You can also do fsck from there (assuming the boot partition is not damaged). I don't know how this was integrated with initrd (initramfs wasn't a thing yet). I think there was only one "base system" - either initrd was very basic, or the whole base was in initrd, or something similar.
The paths with /usr were managed by the package manager. Word of mouth was: don't install anything manually there. If you do (via make install), keep around the source so you can do make uninstall. But better install to /usr/local or /opt.
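For autoconf-style builds that advice boils down to something like this (--prefix is the standard autoconf flag; the /opt name is made up):

$ ./configure --prefix=/usr/local    # or --prefix=/opt/foo
$ make
$ sudo make install                  # keep the source tree for 'make uninstall'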
The issue is organisation. There is already so much junk in the bin folders. I think it would be much neater to further split the bins into various categories: "shell tools" like ls, [, echo; "applications" like firefox, inkscape; "helpers" like gnome-settings-daemon; and so on. There is no need to show weird daemons when pressing TAB in bash, and there is no need to show `ls` when picking an application via a GUI.
There are plenty of greybeards for whom "Linux" is a full-screen terminal running emacs on decade-old hardware. "I don't use antialiased fonts, why the hell should I care about decent HiDPI support?" And then they protest every time some working group tries to modernise and improve the Linux desktop. You see them all the time on this forum.
I'm a greybeard, I've used Linux full time on the desktop for 20 years. I don't get this conservative, "we don't need it" attitude.
I was thinking "just symlink /sbin with /bin", but there would probably be conflicts.
After that I would make sure to have some working (static) binaries for rescue on every *nix system: tar at least, and on Solaris an extra /usr/sbin/fsck under the /usr mount point. You can fix a lot of things with tar, sed and netcat.
I also got this explanation, but it never made much sense to me. First of all, the binaries there are executable by everyone anyway. Second, it really doesn't matter that they show up during completion. Third, many of them work fine and are quite useful without root! I don't recall the specific examples that bothered me (/sbin and /usr/sbin have been in my PATH forever now), but I think it was something like ifconfig or ping.
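For example, net-tools ifconfig does read-only queries just fine as a regular user and only refuses when you try to change something (exact error text varies by version):

$ /sbin/ifconfig -a        # listing interfaces works without root
$ /sbin/ifconfig eth0 down
SIOCSIFFLAGS: Operation not permitted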
Can't imagine the frustration devs must have.
It's a historical quirk on Linux, where there is no clear separation between "base OS packages" and "3rd party packages".
On FreeBSD the split is very real: anything in /bin/ ships with my OS and is maintained and updated by the FreeBSD team. Anything in /usr/bin/ comes from ports and is thus a 3rd party package I installed; it can be safely nuked, and I have to maintain/update it myself.
Having only usable binaries in the path aids discoverability of the system.
All the files that might be expected by others to be in certain standard locations are symlinked to those locations, e.g. the executables to /usr/bin, /usr/sbin, /bin or /sbin, in order to appear in PATH.
In this case you no longer need any kind of database to know which files may be safely nuked to delete any package.
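A sketch of that scheme, with hypothetical GoboLinux-style directory names:

$ ls /Programs/Foo/1.0/bin
foo
$ ln -s /Programs/Foo/1.0/bin/foo /usr/bin/foo   # make it visible in PATH
$ rm -rf /Programs/Foo/1.0                       # uninstall without a database
$ find /usr/bin -xtype l -delete                 # sweep up dead symlinks (GNU find)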
Moreover, in FreeBSD there is no such separation between the "base OS packages" and "3rd party packages", implemented as a difference between root and /usr. You might have misremembered /usr/local, which is indeed a place for "3rd party packages" in all UNIX-derived operating systems.
There are many "base OS packages" that are installed in /usr/bin or in /usr/sbin.
In any FreeBSD system, you can see their source files in /usr/src/usr.bin and in /usr/src/usr.sbin.
I have been using FreeBSD for a quarter of a century, since FreeBSD 2.0, and there has never been such a separation between root and /usr.
The separation between /bin and /usr/bin and the other similar pairs was made only to allow /usr to be unmounted, when it is on another device than the root device, but still have in the root file system the minimal set of tools needed for diagnosing and repairing any broken file system or network connection.
In ancient FreeBSD installations it was always recommended to have a separate small root partition, e.g. of a few hundred megabytes, and some large partitions for usr and var.
This original use has become completely obsolete, because now, for diagnosing and repairing problems, it is preferable to boot from a USB stick or from the network (using a ramdisk as root file system), and then run diagnostics or repair programs without touching even the root file system unless modifying it is intentional.
In FreeBSD it might still be possible to put /usr on a different partition or device and then unmount /usr, but in many Linux distributions this traditional usage is broken, because some of the programs installed in the root directories need components installed in /usr, so when /usr is unmounted they stop working.
It's the Windows way to abstract system folders and provide binary compatibility across architectures. I'd much rather have ld.so.preload and multiarch than this hard links mess though.
I always thought the rationale was that if statically linked binaries are on different partition they can be used to recover the system from a failure.
Edit: files in /bin are also statically linked; I am unsure about what I wrote above, but I vaguely recall something like that.
Not only feasible but it's been implemented a few times over the years. The most notable being GoboLinux[1][2], which is nearly 20 years old.
[1] https://en.wikipedia.org/wiki/GoboLinux
> I was thinking "just symlink /sbin with /bin", but there would probably be conflicts
Given how long /sbin et al have been around, there would always be some edge cases. However it is still possible to do. GoboLinux uses symlinks to achieve FHS[3] compatibility while still having friendly directory names. Arch Linux also just has one bin directory and uses symlinks for compatibility:
» ls -l / | grep bin
lrwxrwxrwx 1 root root 7 2021-12-07 02:41 bin -> usr/bin
lrwxrwxrwx 1 root root 7 2021-12-07 02:41 sbin -> usr/bin
» ls -l /usr | grep bin
drwxr-xr-x 5 root root 110,592 2022-05-06 09:23 bin
lrwxrwxrwx 1 root root 3 2021-12-07 02:41 sbin -> bin
[3] https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard

Likewise, the system configuration goes to /etc while the userland configuration goes to /usr/pkg/etc.
All it takes to factory reset a NetBSD system is an rm -Rf /usr/pkg.
It's more complicated than that - many can do a subset of useful things without root.
Often they can read things as a normal user - things like `apt` or `sysctl` can show you information about your current system, but will only be able to change it as root.
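For instance, with procps sysctl (hostname value made up):

$ sysctl kernel.hostname            # reading works as a normal user
kernel.hostname = myhost
$ sysctl -w kernel.hostname=other   # writing doesn't
sysctl: permission denied on key 'kernel.hostname'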
And even something like "shutdown" might be usable for a locally logged in normal user on a systemd system - or it might not be, depending on local configuration.
Finding things that actually always "need root" for everything is kind of hard, even discounting "print help" as a useful thing in its own right. And if you only came up with "chcpu" and "switch_root"... would you really want to have a top-level directory just for those? Plus the historical location for some things is in /sbin, so moving them out has a compatibility cost.
Tbh I find the only winning move here is not to play. There are so few binaries that are actually only useful to root that they don't really hurt in tab completion, and they could always grow non-root accessible features.
The problem is that the actual benefits are pretty nebulous, so it's probably not worth the effort (and the drawbacks of using different conventions than most other *nix users).
Why do you want to do that? Well, when you have a machine with virtualization you can share the /usr partition across all instances, physically. Which makes a lot of sense if you want to virtualize hundreds of Linux guests on one physical box: you memory-map the /usr partition into hypervisor RAM, share that RAM across all guests, and wham, you have snappy, fast virtual machines with a low physical footprint.
That was actually done, e.g. on IBM mainframes running "your personal web server" for thousands of users in one single mainframe. Fun times.
And only when the root partition could also be mounted r/o, with just an individual /etc, and when large partitions became doable as /, only then did it start to make sense to abandon /usr.
The split made lots of sense back in the days.
On FreeBSD, 3rd party packages go into /usr/local, not /usr.
You absolutely will get base packages in /usr/bin (e.g. `env`), so nuking /usr/bin will break your FreeBSD install.
There's a good write up here: https://unix.stackexchange.com/questions/332764/role-of-the-...
It was a historical quirk to start with. At Bell Labs, back in the early 1970s, Unix was being developed on PDP-11s with RK05 hard disks (with removable disk packs), which had an amazingly generous capacity of 2.5MB each. The Unix operating system had grown too big to fit on a single RK05 disk volume so they had to split it across two. Other operating systems of the period faced similar issues, but dealt with them in (arguably) more elegant ways – on IBM mainframes, OS/360 maintained a database ("catalog") mapping file paths (dataset names, to use the proper terminology) to volume names, so you could move a file to another disk without changing its path. True to Unix's penchant for simplicity, its authors decided instead to just split the OS into / and /usr. And the split survived long after they'd upgraded to more spacious disks.
Any other explanation for the split is essentially a retcon. Some of those retcons (even if, as other commenters have pointed out, not your own) may actually have become true – some of them may have been approximately true to begin with, and they influenced people's decisions, thereby making themselves more true over time. But its ultimate origins will forever remain this quirk of computing history.
And why shouldn't they?
It's not as if a user could do anything damaging with them, if the system is setup properly.
> Having only usable binaries in the path aids discoverability of the system.
Except when someone new has to go online to ask "I found this tutorial telling me to use the `xyz` command to do this, but all I get is `bash: xyz: command not found`, please help!"
This means that in practice people will just add sbin to PATH to get a somewhat usable system, which makes the division between bin and sbin useless.
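In practice that means a line like this in everyone's ~/.profile:

export PATH="$PATH:/sbin:/usr/sbin"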
Furthermore, on BSD-derived systems, binaries that should not be invoked by users directly (e.g., daemons) need to be stored in libexec.
It’s probably true that the distinction isn’t really important any more. The things we used to have to worry about in the (g)olden days of Unix (/s) are ridiculous by today’s standards. We had one of the first 2.5GB RAID arrays in the country and could run a whole medical laboratory - maybe 100 people running Wyse 60 terminals - on it. We had a dedicated 500MB drive for the OS and a couple of other drives just for database logfiles.
These days the whole OS now fits on a single SSD which takes up a tiny fraction of the device. Large SSDs have made so much complexity obsolete for most people. I believe that one could, quite literally, run that old lab software from a single Raspberry Pi.
The point being, stuff that made sense in that old environment does not necessarily make sense any more. It’s good to have the discussion though.
I don't mean to shame you, I sometimes comment without reading TFA, and in your case you add a few more details that were not present in the article. I just found it interesting.
The other thing, coming from Windows, was not understanding where to install things. In Windows there's like a single place where you install all your stuff.
Because of that, in many Linux distributions there are few, if any, static executables. Due to this, it may happen that a botched glibc upgrade makes the system unusable, because no executable can be started to repair it (nowadays many distributions have a static busybox for such situations). I have seen this a couple of times, and the first time I could not understand what happened, because I was used to older systems, where the commands that I tried to execute (e.g. ls or mv) had been statically linked. Such a thing could never happen in a traditional UNIX or Linux system, before glibc disallowed static linking.
The GNU libc should have been split into a libc with most of the functions, which may be linked statically without problems, and into a small library with the name resolving functions, which could be linked dynamically only by the programs which need those functions.
Even better, the name resolving functions should have been organized in such a way to be able to use their default configuration with static linking and choose dynamic linking only when you really intend to override the default configuration when using less common services, e.g. NIS.
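This is exactly the case glibc warns about today: link any program that calls getaddrinfo() with -static (resolve.c here is a hypothetical example) and you get a link-time warning along these lines (wording varies by glibc version):

$ gcc -static resolve.c -o resolve
warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking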
Why not? It's not like most of them are suid (right?). Most Unix systems I've used allow any user to peruse /sbin at their leisure and run whatever they want.
We _could_ all decide to drive on the other side of the road, if the other side is better, but you have to incorporate the cost of the change.
That is because that is a standards organization's job. They exist to document what is actually being done, not editorialize about what should be done.
This seems to be a good example of the virtue of this sort of behaviour. The mostly arbitrary changes that have been done here have in themselves caused more problems and wasted effort than just keeping everything the same as it was.
Side note, calling the file system layout "hier" has got to be the stupidest naming choice. Did they want this to be lost forever so that nobody ever finds it?
For those of us who ran small-disk NFS workstations back in the day having the split and a common /usr was no quirk and very useful. (There were also diskless (Sun, OpenFirmware netbooting) workstations: common /bin, /usr, but per-machine /var on the NFS server.)
The article states:
> Cheap retail hard drives passed the 100 megabyte mark around 1990, and partition resizing software showed up somewhere around there (partition magic 3.0 shipped in 1997).
Yeah, except if you have a fleet of several hundred or thousand workstations to provision. "Cheap" is relative, especially if you're an academic institution.
You're kidding me, right? Nobody ever bothers with that for anything else, and the company I work at spends more than half its time resolving stupid install-breaking changes that nobody asked for. This would just be one minor extra thing on that pile, but at least it would make sense for once.
Every OS I've ever used has had these kinds of quirks, save simple ones that just dump everything in the root folder or equivalent. It's really hard to move files once you ship software, and doubly so for an OS. Users expect files to be where they were last version.
No, they're for statically linked executables.
In the old days, I read books for that.
I'd infinitely prefer using either of them to Rust.
A much more flexible way of organizing is to use tags. This way a file could have more than one tag.
Having a tag hierarchy would be even better, so you can browse down the hierarchy as you'd traverse the tree structure of a typical file system (with the added advantage of allowing a single file to have multiple categories that it could be in).
It means that if someone decides to get away from this legacy structure and move the OS into something like /system/debian-11.1.2/, all those programs would break.
Examples: [1], [2]. I assume the developers hardcoded those paths because /sbin is often not included in PATH.
[1] https://github.com/blueman-project/blueman/blob/fcef83a01c80...
[2] https://github.com/blueman-project/blueman/blob/fcef83a01c80...
Considering Debian is the only one that hasn't just switched, the package-manager breakage does sound like a mountain being made out of a molehill.
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html
You may be thinking of the /bin and /usr/bin difference, though.
One wasn't meant to call man directly; instead you'd call apropos first to find the appropriate page to open.
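For example (the one-line description varies by system):

$ apropos hierarchy
hier (7) - layout of file systems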
$ ls -la / | grep -e bin -e lib
lrwxrwxrwx 1 root root 7 Dec 6 23:41 bin -> usr/bin
lrwxrwxrwx 1 root root 7 Dec 6 23:41 lib -> usr/lib
lrwxrwxrwx 1 root root 7 Dec 6 23:41 lib64 -> usr/lib
lrwxrwxrwx 1 root root 7 Dec 6 23:41 sbin -> usr/bin
> Utilities used for system administration (and other root-only commands) are stored in /sbin, /usr/sbin, and /usr/local/sbin. /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin.
That's one way to make new friends :)
> /bin/ User utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
> /sbin/ System programs and administration utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
You can have ~/.config/. Nothing in macOS prevents you from having it. And so, some programs do. The worst thing that happens is that, instead of having one directory ~/.foo you now have one directory ~/.config/foo and nothing else in ~/.config. But as soon as you add the second thing that uses ~/.config, you now have two directories in there instead of a second dotdirectory in ~.
It's just that for a bunch of them the XDG path is only used if it exists - e.g. emacs predates the spec, so it uses ~/.emacs.d (and a few others) first.
Cargo doesn't use the XDG paths at all, apparently - https://github.com/rust-lang/cargo/issues/1734. However it also needs a directory for binaries (~/.cargo/bin) and ~/.local/bin isn't actually in the spec at the moment (https://gitlab.freedesktop.org/xdg/xdg-specs/-/issues/14).
He's even added a warning to dpkg and a "usrunmess" tool to switch a system to his preferred way of doing things.
It's not clear to me where the breakage lies and I've not seen any actual reports of it.
For more context see https://lwn.net/Articles/890219/
My memory is hazy but I recall the distinction being / vs /usr not /bin vs /sbin.
As far as I recall, early Linux didn't have initrd either; it's a novelty which came later.
> But better install to /usr/local or /opt.
I believe /opt is a novelty which appeared in either FSSTND or its successor FHS; I think /usr/local is older (perhaps even older than Linux), being the default --prefix for autoconf.
Or you could share the whole /usr over NFS to hundreds of diskless workstations, each having their own separate / (also shared over NFS). Remember that disk space was expensive back then; having hundreds of identical copies of the large /usr tree on the NFS server would be a huge waste.
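On each diskless client that was a single fstab line, something like (server and export names made up):

fileserver:/export/usr  /usr  nfs  ro  0  0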
Simplicity is reason enough to change something.
When things break because of reasonable change, they can be fixed. And in this case, backwards compatibility can be ensured simply by symlinking things.
HISTORY
A hier manual page appeared in Version 7 AT&T UNIX.

Markdown is a novelty. Back then, it would be just README (with no file extension at all).
> In windows there's like a single place where you install all your stuff.
Windows was even worse. Whenever you installed something, parts of it were in a new directory at the root of C:\, and parts of it were dumped in C:\WINDOWS\SYSTEM together with all the rest that's already there, often overwriting files of the same name (and the names were limited to 8 characters plus the extension, so they were quite opaque) used by other software you had installed earlier (that's the original scenario of what is now called "DLL hell"). On later Windows versions, instead of a new directory at the root of C:\ it was a new directory within "C:\Program Files" (or is it "C:\PROGRA~1"? Or perhaps "C:\Arquivos de programas" aka "C:\ARQUIV~1"? Or something else?), and instead of C:\WINDOWS\SYSTEM it was now C:\Windows\system32, and there's also the "Common files" directory somewhere. And since there's no package manager (actually there is one, but not everything uses it, and it's very complex), you don't know which file came from which software. Oh, and if the program you installed overwrote a "protected" system file, the operating system overwrites the file again with its own copy.
Many admins feel like a Jedi when they memorize all the trivia about a file's path.
There's no shortage of people in a particular profession that feed on unnecessary complexity even when the original reason for said complexity (i.e. tiny drives) doesn't exist any more.
Now if you'll excuse me I have to figure out why sound doesn't work on Linux in 2022 like it's 1997. No seriously, I legit have to do that now. Someone should really develop another system for sound, again.
Man that sounds awesome. I know we have it made these days with modern internet and computers, but sometimes I day dream about being 19 in the mid to late 90s and getting to experience that age of computing.
FWIW, Slackware keeps them separate, following the Linux Standard Base.
Open a cmd box and type
PATH
How many folders do you see? They all count as places.

But I completely agree with everything you said about Linux!
Most examples will include the standard user path plus /sbin and /usr/sbin but you can add any directories you want to the option.
It was just an observation that there are many devs writing UNIX tools on Apple hardware. There was no snark.
Nobody stops Apple developers from respecting a Freedesktop spec, but the point is that many people who mostly know macOS probably didn't even know XDG was a thing. It's not like Apple encourages it in any of their command-line utilities.
You can call the 64 bit architecture x64 all you like, but it's still using the x86 instruction set and it's frequently referred to as x86-64, so naming that 32 bit only folder "... (x86)" will just make things more confusing than they should be.
Taxonomy, in general, consumes and perplexes us. It only seems to get worse as time goes on. Look at your typical react app...
For every failed candidate, you are doing one system call, so roughly the same cost each way.
Now if you just do an execve, you’re just paying that cost. If you stat first, you pay the cost of another system call that doesn’t change the flow of your program at all (a nice way of saying you’re wasting time).
Unless stat is dramatically faster than exec on a nonexistent or non-executable path, there’s never a case where this is better.
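You can watch a libc execvp()-style search do exactly this. coreutils env execs via execvp, so each PATH entry gets its own failed execve (output trimmed; the paths depend on your PATH):

$ strace -e trace=execve env nosuchcmd 2>&1 | head -3
execve("/usr/bin/env", ["env", "nosuchcmd"], ...) = 0
execve("/usr/local/bin/nosuchcmd", ...) = -1 ENOENT (No such file or directory)
execve("/usr/bin/nosuchcmd", ...) = -1 ENOENT (No such file or directory)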
Having mnt be statically linked makes it much easier to recover that system.
The ideal of "/sbin for system tooling" isn't so much one of static vs dynamic but rather users accidentally finding system tools that don't work and sending email to the admin saying "mnt gives me a permission denied error" when they have no business running it.
But I have only ever seen historic references to that argument, from back when dynamic linking was scary and unreliable. I certainly have never encountered that situation in almost 25 years of using Linux.
$ file /sbin/* | grep "dynamically linked" | wc -l
325
$ file /sbin/* | grep -i "static" | wc -l
0
On Ubuntu focal, but Red Hat is similar.

What I really want is an API that does "create/open/delete a file/directory for the relevant configuration/cache/resources store", be it user-configured or platform default. What I get is an external package that gives me a list of potential storage locations (of which I'll probably just pick the first) that may or may not be actual directories on the system, which I may or may not have permission to touch files in.
Some devs are kindly reminded that there's a spec for these things, but often it's too late, as data is already in specific paths that users may have come to know. That way you end up with paths that get set by environment variables, where you have to tell each and every program where to put its crap.
Other programs don't care enough to implement the standards (like Firefox; the bug report about XDG is old enough to vote [1] and it's still not fully implemented). Kubernetes has an open issue for its client [2] that only ever gets bumped.
Even worse are devs that are reminded of standards like XDG and then decide to give everyone the middle finger. Snap is one of them: not only is the data directory hard-coded, it's hard-coded lowercase, unlike every other standard directory on Canonical's own distribution! Snap's biggest competitor, Flatpak, decided not following the standard is not a problem [3]. At least its special snowflake folder starts with a period so that it's hidden by default, I suppose. Even Bash doesn't support XDG [4], because not everyone uses Linux (and apparently no effort should be made to support OS-specific standards?), with the suggestion closed as won't-fix.
Many tools that do support XDG only care about their own standards, of course; Windows has had SHGetKnownFolderPath [5] since Vista, replacing SHGetFolderLocation [6], which dates back to Windows 2000. Still, developers like to push POSIX conventions onto Windows, creating .dotfiles and not even bothering to at least mark them as hidden.
There's a big list on the Arch wiki[7] listing programs and their compatibilities with XDG.
[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=259356
[2]: https://github.com/kubernetes/kubernetes/issues/56402
[3]: https://github.com/flatpak/flatpak/issues/1651
[4]: https://savannah.gnu.org/support/?108134
[5]: https://docs.microsoft.com/en-us/windows/win32/api/shlobj_co...
[6]: https://docs.microsoft.com/en-us/windows/win32/api/shlobj_co...
[7]: https://wiki.archlinux.org/title/XDG_Base_Directory#Hardcode...
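For the common case, the fallback logic the XDG spec asks each program to implement is tiny ("myapp" is a placeholder):

config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"
mkdir -p "$config_dir" "$data_dir" "$cache_dir"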
Suppose a package has a boot-time, size-optimized, limited binary in /bin/runk and a user-optimized, feature-complete binary that requires the entire system to be up in /usr/bin/runk. When /bin and /usr/bin link to the same directory, the package manager will extract these files and run into a problem.
Things become even more complicated when these tools are split into different packages (say runk-boot and runk-user). Tracking which file comes from which package can become near impossible.
Of course this can be resolved relatively easily; make the package manager link-aware by handling the merged-bin setup as a special case and warn or error when files conflict. People don't seem to want to do that for various reasons, some good, some based in opinion only. It's a mess.
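A sketch of how the conflict looks from the package manager's side, sticking with the hypothetical runk packages (dpkg-style output, made up):

$ dpkg -L runk-boot
/bin/runk
$ dpkg -L runk-user
/usr/bin/runk
$ readlink -f /bin/runk /usr/bin/runk
/usr/bin/runk
/usr/bin/runk    # two package-owned paths, one actual file on disk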
I know RHEL, Debian, and Arch do. Not a lot outside of those families.
Of course, the "stuff from BSD" winds up in /bin and /usr/bin anyway, so it's still a mess.
https://developer.apple.com/library/archive/documentation/Fi...
It'll be interesting to see what Microsoft will do if Windows on ARM actually takes off. As far as I know, the current translation layer can't execute amd64 on ARM, only x86. Will we see Program Files, Program Files (x64) and Program Files (x86)? It would make sense: have the redirection system ready to go, and the naming scheme would also make perfect sense. ARM doesn't need a special 32-bit folder because there's no notable 32-bit vs 64-bit clash; nobody is upgrading their Windows CE device to Windows 11, after all.
$ ls -ld /bin /sbin /usr/bin /usr/sbin
lrwxrwxrwx 1 root root 7 May 3 00:20 /bin -> usr/bin
lrwxrwxrwx 1 root root 7 May 3 00:20 /sbin -> usr/bin
drwxr-xr-x 1 root root 39444 May 10 18:41 /usr/bin
lrwxrwxrwx 1 root root 3 May 3 00:20 /usr/sbin -> bin
Ubuntu, Debian and RHEL, on the contrary, still split them up:

$ ls -ld /bin /sbin /usr/bin /usr/sbin
lrwxrwxrwx 1 root root 7 set 25 2021 /bin -> usr/bin
lrwxrwxrwx 1 root root 8 set 25 2021 /sbin -> usr/sbin
drwxr-xr-x 2 root root 69632 mag 11 14:00 /usr/bin
drwxr-xr-x 2 root root 20480 mag 11 13:33 /usr/sbin

> to merge /sbin with /bin, and /usr/sbin with /usr/bin
It's a bit more drastic than you make it out to be. This would give two valid paths to the same commands. It would make tab-completion slow. It would likely break all kinds of compatibility across the SUS. And it is incredibly arbitrary, no better or worse than eliminating the system hierarchy entirely and putting everything in /.
You're literally saying that not arbitrarily changing the file structure of linux is dangerous. I don't think that's what you meant.
It's not about "because it's been that way for 30 years" (even though it's been 50 years, but never mind that); it's about consistency and standards. It just does not matter one way or the other what the structure of the file system is, so any agenda to change something that doesn't matter is itself a specious agenda. Changing fundamental design introduces complexity for no good reason. As soon as you do it, you've created a special case that doesn't work anywhere else and jeopardizes compatibility.
It should be easy enough to repair, but it was just an old laptop I wanted to test something on, so I ended up throwing the laptop back in the drawer instead.
- `/bin` for everything except the above, including binaries installed under root
- same pattern for configs, auxiliary and transient stuff
change my mind
But autocomplete after sudo doesn't work for me on a stock Debian install anyway, not sure what one needs to do to get around that. I don't really rely on it. If I'm doing enough work that needs root I start the session with "sudo su -" anyway so not having autocomplete after sudo is not a big deal for me.
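FWIW, the usual fixes are installing the bash-completion package, or telling bash directly to complete command names after sudo with one line in ~/.bashrc:

complete -cf sudo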
In practice, well-behaved shells cache the contents of PATH to speed up operations.
The downside is that it stops autocomplete, so if you, say, want to quickly check what a binary is called on the system, e.g. whether you should sudo apache2 or httpd, it will not work...
/Users/behnam/Library/Application Support
which is _nasty_ when working in a terminal.

MS tried to fix this by making directories like:

C:/Users/AppData/Local/

https://stackoverflow.com/questions/12427245/installing-in-p...
It's just outweighed by a couple orders of magnitude by all the overhead that comes with successfully launching another executable, unless you have, like, a thousand junk paths in your PATH.
It didn't work out this way historically (doing unnecessary string processing, requiring extra memory, could've been more expensive than the context switches), and the performance impact of failed execve isn't normally a high priority, and there are other reasons not to want stuff in the kernel (not that it stops frankly less critical stuff from getting in the kernel), but there's definitely low-hanging fruit here if it like, mattered.
Of course in the last twenty-five years I don't think I've ever really used a system set up like that. But it does seem nice to at least be able to do so.
I don't know why developers have apparently collectively decided to go backwards. If your software doesn't support spaces there's a reasonable chance it doesn't support more exotic characters either, which really sucks if you are not natively English speaking.
I think it's nice to be able to keep admin utils out of an admin's PATH when the admin isn't intending to use them.
It's much less interesting to me to keep daemons and such out of anyone's PATH if running them can't do much, though usually those things really belong in a libexec directory and should be exec'ed intentionally only.
You could move all the things in /bin and /sbin to /usr/bin and /usr/sbin, then leave behind links (symbolic or hard).
Since everyone ends up having /bin and /usr/bin in PATH, this merge makes a lot of sense from a performance point of view.
Merging bindirs and sbindirs is a touchier topic. Many things in sbindirs should have been in bindirs all along, and many should move to libexecdirs, but some should stay behind so that privileged users can keep sbindirs out of PATH when they're not wearing an admin hat.
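Roughly what the merged-usr conversions do, as a sketch (not a safe script to run on a live system):

cp -a /bin/.  /usr/bin/    # move the contents up
cp -a /sbin/. /usr/sbin/
rm -rf /bin /sbin
ln -s usr/bin  /bin        # leave compatibility symlinks behind
ln -s usr/sbin /sbin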
The problem with space is that it's often a separator, which will not be the case for exotic characters. Fixing issues with exotic characters will not necessarily fix issues with spaces, and vice versa.
$ find /{usr,bin,sbin}
/usr
/usr/bin
/usr/bin/env
/bin
/bin/sh
find: '/sbin': No such file or directory

That said, I think one of the better ways to weigh the value of changing some long-term practice is to focus on the anticipated costs of the change on one side of the ledger, and the ongoing (easy to ignore) unbounded costs of the status quo on the other (and appropriately weight them by who pays and how often). To shoot from the hip:
- If it's only a modest improvement that still supports a bit of misunderstanding, folksonomizing, and arguing about where things belong--it'll just waste time and energy better spent elsewhere. Any time would probably be better spent on writing and promoting/propagating a really good canonical reference to the status quo that can help drive out confusion and enable devs/admins answer practical questions (even if inefficiently).
- If (utopia warning) someone is able to significantly improve how accurately and quickly humans can make real dev/admin decisions from a clear mental model _and_ get enough buy-in to do it across all of the major Unix-alikes, it's probably worth some medium-term pain.
FWIW, the ongoing progress of NixOS, which doesn't really have any of these paths (beyond /usr/bin/env and /bin/sh), demonstrates that this pain is surmountable with enough eyes and hands.
It's possible that Gentoo will still support the split setup even after the default is changed since it supports many different inits and libcs but I am not sure.
The first PC I built had 7 disk drives in a tower case, four distinct hard drives. Yes it was crazy. But the largest of these by far was 540 MB. It made sense to keep the boot stuff on its own hard drive.
Linux has `/boot`, of course, but `/boot` should never appear in $PATH. I think.
Also FYI Doom Emacs is currently XDG compliant.
As best as I can tell, `dpkg-query -S` is broken by this iff it's passed a path to a file that's been installed under a different version of that path.
E.g. `dpkg-query -S /usr/bin/vim` fails if vim was installed via `/bin/vim`.
That's a minor bug that should simply be fixed in dpkg, and it's also easy enough to work around if the distribution simply installs all files in /usr/bin via /usr/bin.
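On the user side, the workaround is just to query both spellings of the path:

$ dpkg-query -S /usr/bin/vim || dpkg-query -S /bin/vim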
None of that seems like enough to unilaterally hold up a distribution-wide decision to move to a merged /usr, especially not via official-sounding warnings in the install script for a major distribution component, especially not when this way of doing things works without a lot of complaints in other distributions including the related Ubuntu, and especially not to call for a special Debian solution that has its own problems, years after the fact.
Frankly if I was a debian developer I'd be quite cross with the dpkg maintainer.
This was implemented, as an option, years ago. This was implemented fully in other distributions years ago! Fedora has had it for a decade, with few problems.
Dpkg has a few minor bugs with it so it needs to be fixed. It's holding up progress here.
- Make sure alsa-utils is installed
- Auto-configure hardware devices: alsactl init
- View hardware for playback (use arecord for the opposite): aplay -L | grep "^hw:"
^ Use that to make sure your hw is being detected
- Lower level list of sound cards, if having issues: cat /proc/asound/cards
- Base alsa conf: /usr/share/alsa/alsa.conf
^ go there to dive deeper into what alsa is actually doing. It will also show you the priority for config files, so you can go through that and check which ones are in use and modify accordingly. alsactl init should handle most configuration though.
- you will want to mess with this: /etc/modprobe.d/alsa-base.conf …and get it working for your hardware. This is a resource to understand that file better: https://alsa.opensrc.org/MultipleCards
You can google configuration files and find one that works for you. Most issues for normal use revolve around which card gets set to index 0 / default, so if you know which card you want as the default, I'd recommend finding your device id (I think cat /proc/asound/cards will give you vendor/product ids you can use) and then making a config that uses that id to set it as the default card, independent of indexing.
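For example, to pin an Intel HDA card as the default (the module name depends on your hardware; check /proc/asound/modules):

# /etc/modprobe.d/alsa-base.conf
options snd_hda_intel index=0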
Turned into a lot, stopping here. Sound really shouldn’t be this hard for end users or devs, but it is what it is right now. Anyway, it’s fresh on my mind so at the very least, I might be able to point you in the right direction.
Good luck!
- Someone with no more hair left to pull out
Heh, MacPorts installs stuff to /opt/local on the Mac.
Also, why the heck was $HOME/bin never a thing?
The sysadmin at the time told me the /sbin versions of things were for statically linked binaries that didn't need any other FSes mounted to read dynamic libs.
I'm not asserting it was right, but just another view into "tribal knowledge" vs "urban legend" vs ???
- "C:\Program Files" <- ARM64 programs go here, as do x64 programs! - "C:\Program Files (Arm)" <- ARM32 programs go here - "C:\Program Files (x86)" <- x86 programs go here
I'm not sure how things like "Common Files" work in C:\program files, unless they made mixing arm64 dlls with x64 exes and vice versa just work. Which they probably did. I'm guessing they did not want another WOW version, since it was already bad enough to have to ship 3 different copies of certain system components, and they did not want to need to include a 4th copy, especially as ARM devices are often a bit light on storage space.