If you can have 1% of the stuff down 100% of the time, or 100% of the stuff down 1% of the time, there's an option I think we _feel_ is better, but I'm not sure one is actually more practical than the other.
Of course, people can always mirror things, but that's not really what this comment is about, since people can already do that today if they feel like it.
at a higher layer in the stack though, consider the well-established but now mostly historical mailing-list patch flow: even when the list server goes down, i can still review and apply patches from my local inbox; i can still email my co-maintainers and collaborators directly. new patches are temporarily delayed, but retransmit logic is built into the mail system, so the user can fire off the patch and go outside rather than checking back every so often to see whether it's gone through yet.
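for concreteness, the apply step is a one-liner; a sketch, assuming the series was exported from the mail client as an mbox file (`patches.mbox` is a placeholder name):

```
# apply the whole patch series from a local mbox export;
# --3way falls back to a three-way merge when a patch doesn't apply cleanly
git am --3way patches.mbox
```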
> i can still review and apply patches from my local inbox
A suitably configured `git fetch` gets me all the code from open PRs, and the comments are already in my email. Now I'm wondering whether I should put `git fetch` in a crontab.
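A sketch of that setup, assuming a GitHub remote (GitHub exposes read-only PR heads under `refs/pull/*/head`; the repo path below is a placeholder):

```
# one-time setup: fetch the read-only PR refs alongside normal branches
git config --add remote.origin.fetch '+refs/pull/*/head:refs/remotes/origin/pr/*'

# crontab entry: sync hourly so everything is already on disk when an outage hits
0 * * * * cd /path/to/repo && git fetch --all --quiet
```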
> retransmit logic is built in so that the user can still fire off the patch and go outside
You can do that with a couple lines of bash, though I bet someone's already made a prettier script to retry an arbitrary non-interactive command like `git push`. This works best if your computer stays on while you're outside, but that's often the case even with a laptop, and even more so if you develop on a beefy remote server.
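For reference, those couple lines might look something like this; a sketch with an arbitrary backoff cap, and `retry.sh` is just a made-up name:

```
#!/usr/bin/env bash
# retry.sh: rerun an arbitrary non-interactive command until it succeeds.
# usage: retry.sh git push origin main
delay=10
until "$@"; do
  echo "command failed; retrying in ${delay}s" >&2
  sleep "$delay"
  delay=$(( delay * 2 > 3600 ? 3600 : delay * 2 ))  # exponential backoff, capped at 1h
done
```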