I don't know if that's also true for data integrity on physical magnetic media. FAT12 is not a journaling filesystem. On a modern drive a crash during a write is merely annoying, while on a 3.5" floppy with a 33 MHz CPU a write operation blocks for a perceptible amount of time. If the user hits the power switch or the kernel panics while the heads are moving or the FAT is being updated, that disk is gone. The article mentions sync, but sync on a floppy drive is an agonizingly slow operation that users might interrupt.
Given the 253 KiB free-space constraint, I wonder if a better approach would be treating the free space as a raw block device, or as a tiny appended partition with a log-structured filesystem designed for slow media (a stripped-down JFFS2 or something), though that might require too many kernel modules.
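The appeal of log structure on a floppy is that a torn write only ever corrupts the tail of the log, which replay can detect and discard. A minimal sketch of that idea (a toy record format I made up for illustration, not JFFS2's actual on-media layout):

```python
import struct
import zlib

def append_record(f, payload: bytes) -> None:
    """Append one log record: 4-byte length + 4-byte CRC32 + payload.
    A torn write at the tail is detected on replay by the CRC."""
    hdr = struct.pack("<II", len(payload), zlib.crc32(payload))
    f.seek(0, 2)          # always append at the end of the log
    f.write(hdr + payload)

def replay(f):
    """Yield intact records in order; stop at the first torn record,
    silently dropping whatever a crash left half-written."""
    f.seek(0)
    while True:
        hdr = f.read(8)
        if len(hdr) < 8:
            return
        size, crc = struct.unpack("<II", hdr)
        payload = f.read(size)
        if len(payload) < size or zlib.crc32(payload) != crc:
            return        # torn tail from a crash - ignore it
        yield payload
```

Because the old records are never overwritten, a crash at any point leaves everything before the torn tail valid, with no fsck needed.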
Has anyone out there experimented with appending a tar archive to the end of the initramfs image in place for persistence, rather than mounting the raw FAT filesystem? It might be safer to serialize writes only on shutdown. I'd love more thoughts on this.
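The shutdown-only serialization could look something like this sketch: snapshot a directory into a tar archive and write it at a fixed offset after the initramfs, with a single sync at the end. The length header and function names are my own invention for illustration:

```python
import io
import os
import tarfile

def snapshot_to_tar(src_dir: str, image_path: str, offset: int) -> None:
    """Serialize src_dir into an uncompressed tar and write it at a
    fixed offset inside the image (e.g. right after the initramfs)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(src_dir, arcname=".")
    data = buf.getvalue()
    with open(image_path, "r+b") as img:
        img.seek(offset)
        # Length prefix so boot code knows how much to read back.
        img.write(len(data).to_bytes(4, "little"))
        img.write(data)
        img.flush()
        os.fsync(img.fileno())   # one slow sync, only at shutdown

def restore_from_tar(image_path: str, offset: int, dst_dir: str) -> None:
    """Read the appended archive back and unpack it at boot."""
    with open(image_path, "rb") as img:
        img.seek(offset)
        size = int.from_bytes(img.read(4), "little")
        buf = io.BytesIO(img.read(size))
    with tarfile.open(fileobj=buf, mode="r") as tar:
        tar.extractall(dst_dir)
```

The trade-off is that a crash before shutdown loses the whole session, but a crash mid-write can only leave a truncated archive, never a corrupted FAT.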
Makes sense, great point. I would rather use a second drive for the writable disk space, if possible (I know how rare it is now to have two floppy drives, but still).
This isn't true. As I commented lower in the thread, FAT keeps a backup copy of the allocation table, and you can use that to restore the disk.
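Locating the two FAT copies only takes a few fields from the boot sector's BPB; the field offsets below are from the standard FAT boot-sector layout, and the helper name is mine:

```python
import struct

def fat_geometry(boot: bytes):
    """Parse the BPB fields needed to locate both FAT copies on a
    FAT12/16 volume. Returns (fat1_offset, fat2_offset, fat_size,
    num_fats), all in bytes."""
    bytes_per_sector, = struct.unpack_from("<H", boot, 11)
    reserved_sectors, = struct.unpack_from("<H", boot, 14)
    num_fats = boot[16]
    sectors_per_fat, = struct.unpack_from("<H", boot, 22)
    fat1 = reserved_sectors * bytes_per_sector
    fat_size = sectors_per_fat * bytes_per_sector
    return fat1, fat1 + fat_size, fat_size, num_fats

def restore_fat1_from_fat2(f) -> None:
    """Copy the backup FAT over the primary one, the crude recovery
    this comment describes."""
    f.seek(0)
    fat1, fat2, size, _ = fat_geometry(f.read(512))
    f.seek(fat2)
    backup = f.read(size)
    f.seek(fat1)
    f.write(backup)
```

Note this only repairs the allocation table itself; it can't tell you which copy is the consistent one after a torn update, which is where the write-ordering discussion below the fold comes in.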
Yes, soft-updates-style write ordering can help with some of the issues, but the Linux driver doesn't do that. And some of the issues are essentially unavoidable, requiring a full fsck on each unclean shutdown.
1) mark blocks allocated in first FAT
If a crash occurs here, the written data is incomplete, so restore FAT1 from FAT2, discarding all changes.
2) write data in sectors
If a crash occurs here, same as before, keep old file size.
3) update file size in the directory
This step is atomic - it's just one sector to update. If a crash occurs here (file size matches FAT1), copy FAT1 to FAT2 and keep the new file size.
4) mark blocks allocated in the second FAT
If a crash occurs here, write is complete, just calculate and update free space.
5) update free space

1) Allocate space in FAT#2, 2) Write data in file, 3) Allocate space in FAT#1, 4) Update directory entry (file size), 5) Update free space count.
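The FAT1-first ordering described above can be sketched as a fixed sequence of writes separated by sync barriers (a toy model of the ordering only, not the Linux vfat driver; all offsets and buffer arguments are caller-supplied):

```python
import os

def barrier(f):
    """Force earlier writes to the media before the next step."""
    f.flush()
    os.fsync(f.fileno())

def ordered_append(f, fat1_off, fat2_off, dirent_off, data_off,
                   fat_update, data, dirent):
    # 1) mark the new clusters allocated in FAT #1 only
    f.seek(fat1_off); f.write(fat_update); barrier(f)
    # 2) write the file data into the newly allocated sectors
    f.seek(data_off); f.write(data); barrier(f)
    # 3) update the directory entry (file size) - a single-sector,
    #    and therefore atomic, write: this is the commit point
    f.seek(dirent_off); f.write(dirent); barrier(f)
    # 4) copy the allocation into FAT #2, making both tables agree
    f.seek(fat2_off); f.write(fat_update); barrier(f)
    # 5) the free-space count can always be recomputed from the FATs,
    #    so it is safe to update it last (or lazily)
```

A crash between any two barriers leaves the disk in one of the recoverable states enumerated above, at the cost of four slow syncs per append.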
Rename in FAT is an atomic operation. Overwrite old name with new name in the directory entry, which is just 1 sector write (or 2 if it has a long file name too).
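A sketch of that in-place rename for the short-name case (a hypothetical helper, assuming the caller already knows the byte offset of the 32-byte directory entry):

```python
import os

def rename_83(f, entry_off: int, name: str, ext: str) -> None:
    """Overwrite the 11-byte 8.3 name field in place. The field is
    the first 11 bytes of the 32-byte directory entry, so the entire
    rename lands in a single 512-byte sector write."""
    field = (name.upper().ljust(8)[:8] +
             ext.upper().ljust(3)[:3]).encode("ascii")
    f.seek(entry_off)
    f.write(field)       # one sector touched, hence atomic
    f.flush()
    os.fsync(f.fileno())
```

Since nothing else in the entry (cluster chain, size, attributes) changes, the file is fully intact under either name no matter where a crash lands.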
In general, "what DOS did" doesn't cut it for a modern system with page and dentry caches and multiple tasks accessing the filesystem, at least not without completely horrible performance. I would be really surprised if Windows handled all those cases right with disk caching enabled.
While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.
Which driver? DOS? FreeDOS? Linux? Did you study any of them?
> While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.
That's a "move". Yes, you would need to write 2-6 sectors in that case.
For short filenames, the new filename always fits in the directory sector, because short file names are a fixed 8.3 characters, pre-allocated in the entry. A long file name can occupy up to 20 consecutive directory entries (13 UTF-16 characters each) out of the 16 fixed 32-byte entries each 512-byte directory sector holds. So an in-place rename of an LFN writes at most three sectors.
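The entry arithmetic here is just the FAT long-name layout: 32-byte entries, 13 UTF-16 code units per LFN entry, plus the mandatory 8.3 entry at the end of the chain:

```python
import math

ENTRY_SIZE = 32                               # bytes per directory entry
SECTOR_SIZE = 512
ENTRIES_PER_SECTOR = SECTOR_SIZE // ENTRY_SIZE  # 16 entries per sector

def dir_entries_for_name(long_name: str) -> int:
    """Directory entries consumed by a file that has a long name:
    one LFN entry per 13 characters, plus the trailing 8.3 entry."""
    return math.ceil(len(long_name) / 13) + 1
```

A maximal 255-character name therefore needs 21 entries (672 bytes), which is why an unaligned in-place rename can straddle more than one 512-byte sector.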
Considering that all current drives use at least 4 KB physical sectors (a lot larger if you consider the erase block of an SSD), the rename operation is still atomic in 99% of cases: only one physical sector is written.
The most complicated rename operation would be if the LFN needs an extra cluster for the directory, or is shorter and one cluster is freed. In that case, there are usually 2 more 1-sector writes to the FAT tables.
Edit: I corrected some sector vs. cluster confusion.