> The very notion of a stateless filesystem is ridiculous. Filesystems exist to store state.
It's the protocol that's stateless, not the filesystem. I thought the article made a reasonable attempt to explain that.
Overall the article is reasonable, but it omits one of the big issues with NFSv2: synchronous writes. Those Sun NFS implementations were based on Sun's RPC system, and the server was required not to reply until the write had been committed to stable storage. There was a mount option to disable this, but enabling that option exposed you to data corruption if the server crashed. Certain vendors (SGI, if I recall correctly) at some point claimed their NFS was faster than Sun's, but they did it by implementing asynchronous writes. This resulted in the expected arguments over protocol compliance and reliability vs. performance.
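To make the cost concrete, here's a minimal sketch (illustrative Python, not a real NFS server) of what that rule implies for the server's WRITE handler:

```python
import os

def handle_write_v2(fd: int, offset: int, data: bytes) -> None:
    """NFSv2 semantics: no reply until the data is on stable storage."""
    os.pwrite(fd, data, offset)
    os.fsync(fd)  # every WRITE pays for a flush before the RPC reply
    # ...only now may the server send the reply...

def handle_write_async(fd: int, offset: int, data: bytes) -> None:
    """The unsafe variant: reply as soon as the data is in the page cache."""
    os.pwrite(fd, data, offset)
    # reply immediately; a server crash can lose writes the client
    # already believes are durable
```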
This phenomenon led to various hardware "NFS accelerator" solutions that put an NVRAM write cache in front of the disk in order to speed up synchronous writes; I believe Legato's Prestoserve and the still-existing NetApp were based on such technology. Eventually the synchronous-writes issue was resolved at the protocol level: NFSv3, if I remember right, let the server acknowledge a WRITE as unstable and added a COMMIT operation to flush the data later.
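If memory serves, the NFSv3 mechanism looked roughly like this (again an illustrative sketch, not actual protocol code):

```python
import os

def handle_write_v3(fd: int, offset: int, data: bytes) -> str:
    os.pwrite(fd, data, offset)
    return "UNSTABLE"  # reply immediately; data may only be in cache

def handle_commit_v3(fd: int) -> None:
    os.fsync(fd)  # one flush makes all preceding unstable writes durable
```

The NVRAM boxes attacked the same bottleneck from the hardware side, by making the flush itself cheap.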
I've always just presumed the development of EFS recapitulated the evolution of NFS, in many cases quite literally, considering the EFS protocol is a flavor of NFS. S3 objects are just blobs with unique IDs in a flat namespace, which is essentially what stateless NFS is--every "file" has a persistent UID (a GUID, if you assume host identifiers are unique) that provides a simple handle for submitting idempotent block-oriented read and write operations. Theoretically, EFS could just be a fairly simple interface over S3, especially if you could implicitly wave away many of the caveats (e.g. around shared writes) by pointing out that they have existed, and mostly been tolerated, in NFS environments for decades.
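A toy model of that analogy (hypothetical names, nothing to do with how EFS is actually built): a flat blob store keyed by persistent IDs, with idempotent offset-based reads and writes--the same shape as an NFS file handle plus READ/WRITE.

```python
from uuid import uuid4

class BlobStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytearray] = {}

    def create(self) -> str:
        handle = str(uuid4())  # persistent GUID, like an NFS file handle
        self._blobs[handle] = bytearray()
        return handle

    def read(self, handle: str, offset: int, count: int) -> bytes:
        # same request, same answer: idempotent
        return bytes(self._blobs[handle][offset:offset + count])

    def write(self, handle: str, offset: int, data: bytes) -> None:
        blob = self._blobs[handle]
        if len(blob) < offset + len(data):
            blob.extend(b"\0" * (offset + len(data) - len(blob)))
        blob[offset:offset + len(data)] = data  # replaying it is harmless
```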
S3 and EFS are actually quite different. Files on EFS are updatable, renamable, and linkable (i.e., what's expected from a file system), while S3 objects are immutable once they are created. This comes from the underlying data structures: EFS uses inodes and directories, while S3 is more of a flat map.
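A quick illustration of the difference, assuming a hypothetical EFS mount at /mnt/efs and a hypothetical bucket name (boto3 for the S3 side):

```python
import os
import boto3

# EFS: ordinary POSIX operations on inodes and directories.
path = "/mnt/efs/report.txt"
with open(path, "wb") as f:
    f.write(b"hello world")
with open(path, "r+b") as f:
    f.seek(6)
    f.write(b"patch")                       # update a few bytes in place
os.link(path, "/mnt/efs/report-link.txt")   # hard link to the same inode
os.rename(path, "/mnt/efs/report-v2.txt")   # rename without copying data

# S3: no partial update; an "update" is a whole-object PUT under the same key.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="report.txt", Body=b"entire new contents")
```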
Protocol-wise, EFS uses standard NFSv4.1. We added some optional innovations outside the protocol that you can use through our mount helper (mount.efs). These include in-transit encryption with TLS (you can basically talk TLS to our endpoint and we will detect that automatically) and strong client auth using SigV4 over an x.509 client certificate.
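For reference, the mount-helper path looks something like this (filesystem ID and mount point are placeholders; "tls" and "iam" are the documented mount.efs options for in-transit encryption and the SigV4-based auth):

```python
import subprocess

# Equivalent to running: mount -t efs -o tls,iam fs-12345678:/ /mnt/efs
subprocess.run(
    ["mount", "-t", "efs", "-o", "tls,iam", "fs-12345678:/", "/mnt/efs"],
    check=True,  # raise if the mount helper fails
)
```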
Will EFS be updated to use the NFS-TLS RFC once it settles down some?
* https://datatracker.ietf.org/doc/html/draft-ietf-nfsv4-rpc-t...
I can't commit on a public forum for obvious reasons, but we'll definitely take a serious look at this, especially once the Linux client starts supporting it. We did consult with the authors of that draft earlier, and it should be relatively easy for us to adopt it.