zlacker

1. smarks+(OP) 2022-06-21 16:34:31
Right. At least at Sun through the 1990s, when everybody had their own workstation, many network nodes had local filesystems, so they were both NFS clients and NFS servers. For this to work well, UIDs/GIDs pretty much had to be globally consistent.

This was maintained using YP/NIS. But Sun was too big for a single YP/NIS domain, so there was a hack where each YP/NIS master was populated via some kind of uber-master database. At least at one point, this consisted of plain text files on a filesystem that was NFS-mounted by every YP/NIS master....

This was all terribly insecure. Since everybody had root on their own workstation, you could `su root` and then `su somebody` to get processes running with their UID, and then you could read and write all their files over NFS. But remember, this was back in the day when we sent passwords around in the clear and used insecure tools like telnet and ftp, plus the BSD tools rsh/rcp/rlogin. So NFS was "no more insecure" than anything else running on the network. But that was ok, because everything was behind a firewall. (Some sarcasm in those last bits, in case it wasn't obvious.)
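The attack described above looked roughly like this (usernames and paths are hypothetical). It works because classic NFS with AUTH_SYS authentication simply trusts whatever UID the client kernel puts in the RPC credential:

```
alice$ su root               # everybody had the root password for their own box
Password:
workstation# su victim       # root can become any local user, no password asked
victim$ cat /net/server/home/victim/secrets
# works: the NFS server trusts the client-supplied UID, so you now
# read and write "victim"'s files as if you were them
```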

replies(1): >>KateLa+T11
2. KateLa+T11 2022-06-21 22:34:08
>>smarks+(OP)
Sun did have a firewall by the early '90s. It had application-level proxies, and you had to configure applications to bounce through it if you wanted to get to the Internet. In many ways this was more secure than today's firewall default, where any outbound connection is allowed and only inbound connections are filtered.
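A modern-style sketch of the same default-deny idea (hostname and port are hypothetical, not Sun's actual gateway; Sun's proxies predate these environment-variable conventions):

```shell
# Each application had to be pointed at the application-level proxy
# explicitly; nothing got through the firewall by default.
export ftp_proxy="http://fwgate.corp.example:8080"   # ftp client bounces via proxy
export http_proxy="http://fwgate.corp.example:8080"  # same for HTTP tools
# An unconfigured application simply could not reach the Internet.
```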

Note that I'm not arguing that Sun was a leader in security, but they did make some efforts that other companies didn't.
