I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect, because the optical network router also had some problems after the power outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not also add a dual-WAN failover router for extra security if the internet goes down again? Etc. It's a bottomless pit (like most hobbies, tbh)
Also (and maybe that's a me problem) I was using Tailscale, but I'm more "paranoid" about it nowadays: it's a single point of failure as a service, and login is US-only SSO (MS, GitHub, Apple, Google). What if my Apple account gets locked when I redeem a gift card, and I can't use Tailscale anymore? I still believe in self-hosting, but I probably want something even more "self", taken to the extreme.
This also makes self-hosting more viable, since our availability is then constrained by the internet provider rather than by power.
You can even self-host Tailscale via Headscale, though I don't know how the experience goes, and there's some genuinely open source software like NetBird, ZeroTier, etc. as well.
You could also, if interested, just go the plain WireGuard route. It really depends on your use case, but in your case, the SSH use case seems normal.
You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience, though, and for not having to deal with NATs and everything.
But I feel like your home server might be behind a NAT, and in that case, what I recommend you do is probably A) run it over Tor, or use https://gitlab.com/CGamesPlay/qtm which uses iroh's hosted infrastructure by default but can be self-hosted too, or B) (recommended): get an unlimited-traffic cheap VPS (I recommend UpCloud, OVH, or Hetzner), which would cost around $3-4 per month, and then install something like remotemoe https://github.com/fasmide/remotemoe or anything similar to it, which effectively acts as a proxy.
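To make option B concrete, here's a minimal sketch of the reverse-tunnel idea in Python, assuming you already have key-based SSH access to the VPS (the hostname and ports are placeholders I made up; tools like autossh or remotemoe do the same thing with more polish):

    # keep a reverse SSH tunnel to a VPS alive, so a NAT'd home server
    # stays reachable at vps.example.com:2222 -> home port 22
    # hypothetical host/user; assumes key-based SSH auth is already set up
    import subprocess, time

    CMD = [
        "ssh", "-N",
        "-o", "ServerAliveInterval=30",    # notice dead connections
        "-o", "ExitOnForwardFailure=yes",  # fail fast if the port is taken
        "-R", "2222:localhost:22",         # VPS port 2222 -> local sshd
        "you@vps.example.com",
    ]

    while True:
        subprocess.run(CMD)  # blocks until the tunnel drops
        time.sleep(10)       # back off, then reconnect

From anywhere, ssh -p 2222 you@vps.example.com then drops you onto the home box.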
Sorry if I went a little overkill though, lol. I have played with these things too much, so I may be overarchitecting stuff, but if you genuinely want self-hosting taken to the extreme, Tor .onions or I2P might benefit ya. Even just buying a VPS can be a good step up.
> I was in another country when there was a power outage at home. My internet went down, and the server restarted but couldn't reconnect, because the optical network router also had some problems after the power outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not also add a dual-WAN failover router for extra security if the internet goes down again? Etc. It's a bottomless pit (like most hobbies, tbh)
Laptops have a built-in UPS and are cheap. Laptops and refurbished servers are a good entry point, IMO. Sure, it's a bottomless pit, but the benefits are well worth it; at some point you have to look at the trade-offs, and personally, laptops and refurbished or resale servers hit that balance for me. In fact, I used to run a git server on an Android tablet for some time, but I've been too lazy to figure out whether I want it to charge permanently or what.
They would be cheaper than Starlink, FWIW, and most connections are usually robust.
That being said, one can use Tailscale or Cloudflare Tunnels to expose the server even if it's behind NAT. You mention in your original comment that you might be against that for paranoia reasons, and that's completely fine, but there are ways to do it if you want, which I've discussed in depth in the other comment I wrote here.
Cheap VPS servers with 1 GB of RAM and everything can cost around $10-11 per year, and something like Hetzner is cheap as well, at around $30-ish a year, or most likely $3 per month, while having some great resiliency numbers.
If anything, people self-host because they own the servers, so upgrading becomes easier (but there are VPSes that target a niche which people should look at, like storage VPSes, high-performance VPSes, high-memory VPSes, etc., which can sometimes provide servers dirt cheap for your specific use case).
The other reason, I feel, is the ownership aspect of things. I own this server; I can upgrade it without breaking the bank, and I can stack up my investment over time. One other reason is that with complete ownership, you aren't bound by someone else's T&Cs so much. Want to provide VPS servers to your friends, family, or people on the internet? Set up a Proxmox or Incus server and do it.
Most VPS providers either outright ban reselling or, if they allow it, might sometimes ban your whole account for something that someone else did, so some things are in jeopardy if you do this, simply because providers have to find automated ways of dealing with abuse at scale, and some cloud providers are more lenient than others in banning matters. (OVH is relaxed in this area, whereas Hetzner, for better or worse, is strict in its enforcement.)
Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.
I don't know the name of the dongle, though; it was similar to those SD-card-to-USB things, ykwim. I'd appreciate it if someone could help find this too, if possible.
But also, yeah, your point is fascinating as well. Y'know, another benefit of doing this is that, at least in my area, 5G (500-700 Mbps) is really cheap ($10-15 per month) with unlimited bandwidth, while on the Ethernet side of things I get 10x less bandwidth (40-80 Mbps), so much so that my brother and I genuinely considered this idea,
except that instead of buying a router like this, we thought we'd use an old phone, insert the SIM in it, and get router-like access that way.
If you are going to be away from home a lot, then yes, it's a bottomless pit, because you have to build a system that does not rely on the possibility of you being there, anytime.
But I think in the end what ended up working was that my frustration took over and I just copy-pasted the commands from the README, and if I remember correctly, they just worked.
This is really ironic considering what thread we're on, but in the end, good READMEs make self-hosting on a home server easier and fun xD
(I don't exactly remember the ChatGPT conversations; perhaps they helped a bit, or not, but I am 99% sure that it was your README which ended up helping. ChatGPT etc., in fact, took an hour or more and genuinely frustrated me, from what I vaguely remember.)
I hope QTM gains more traction. It's built on solid primitives.
One thing I genuinely want you to take a look at, if possible, is creating an additional piece of software, or adding functionality, to replace the careful dance we currently have to do to make it work (we have to send two large pieces of data between the two computers; I had to use a hacky solution like piping server, or wormhole itself, for it).
So what I am asking is: could you make it so that for the initial node pairing (the ticket? sorry, I forgot the name of the primitive) between A and B, you use wormhole itself, so that instead of the two sides having to send large chunks of data to each other, they just send 6 words or similar?
Wormhole: https://github.com/magic-wormhole/magic-wormhole
I even remember building my own CLI for something like this, using ChatGPT to build it xD, but in the end I gave up because I wasn't familiar with the codebase or with how to make the two work together. But I hope that you can add it. I sincerely hope so.
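Roughly what I have in mind, as a sketch (the QTM side of this is hypothetical; wormhole's send --text and receive subcommands are real):

    # exchange a long pairing ticket via magic-wormhole, so humans only
    # relay a short code instead of a large blob
    # assumes the `wormhole` CLI is installed; where the ticket comes
    # from (QTM) is the hypothetical part
    import subprocess

    def send_ticket(ticket: str) -> None:
        # prints a short human-friendly code for the other side to type
        subprocess.run(["wormhole", "send", "--text", ticket], check=True)

    def receive_ticket(code: str) -> str:
        out = subprocess.run(
            ["wormhole", "receive", code],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()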
Another minor suggestion I feel like giving: please have an asciinema demo. I will create an asciinema patch between two computers if you want, but a working demo GIF going from zero to running really, really would've helped me save a few hours.
QTM has lots of potential. Iroh is so sane: it can run directly on top of IPv4 and talk directly when possible, but it can also break through NATs, and you can even self-host the middle part yourself. I had thought about building such a project myself, so you can just imagine my joy when I discovered QTM from one of your comments a long time ago, for what it's worth.
Wishing you the best of luck with your project! The idea is very fascinating. I would really appreciate a visual demo, though, and I hope we can discuss more!
Edit: I remember that the QTM docs felt really complex to me personally, when all I wanted was one computer's port mapped to another computer's port. I think what helped in the end was the 4th comment, if I remember correctly. I might have used LLM assistance, and whether it helped I genuinely don't remember, but it definitely took me an hour or two to figure things out. That's okay, since I still feel the software is definitely a positive, and this might have been a skill issue on my side, but I just ask if you can add asciinema docs. I can't stress enough how much that can genuinely help an average person figure out the product.
(Then slowly move towards the complex setups, with asciinema demos for each of them, if you wish.)
Once again, good luck! I can't praise QTM enough, and I still strongly urge everyone to try it once: https://gitlab.com/CGamesPlay/qtm since it's highly relevant to the discussion.
For something like a website I want on the public internet with perfect reliability, a VPS is a much better option.
How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
I host a lot of stuff, but Nextcloud to me is photo sync, not business. I can wait until I'm home to turn the server back on. It's not a bottomless pit for me, because I don't really care if it has downtime.
A year later another atmospheric river hit and we had a 4-hour outage. No more jokes.
Make sure to run that generator once every few months with some load to keep it happy.
I have 7 computers on my self-hosted network, and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that will stay up regardless of local fluctuations, etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option, and it isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially improves my professional skills as well.
Tailscale also has a self-hosted version I believe.
If you just want to put a service on the internet, a VPS is the way to go.
Is it perfect? No, but it's more than enough to cover most brief outages, and also more than enough to let you shut down everything you're running gracefully after you've used it for a couple of hours.
Major caveat: you'll need a 240V supply, and these guys are 6U, so not exactly tiny. If you're willing to spend a bit more money, though, a smaller UPS with external battery packs is the easy plug-and-play option.
> How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
At the end of the day, it's very hard to argue you need perfect uptime in an extended outage (and I say this as someone with a 10kW generator and said 6kVA UPS). I need power to run my sump pumps, but that's about it - if power's been out for 12-18 hours, you better believe I'm shutting down the rack, because it's costing me a crap ton of money to keep running on fossil fuels. And in the two instances of extended power outages I've dealt with, I haven't missed it - believe it or not, there's usually more important things to worry about than your Nextcloud uptime when your power's been out for 48 hours. Like "huh, that ice-covered tree limb is really starting to get close to my roof."
Then 5 years later there was a power outage and the UPS lasted for about 10 seconds before the batteries failed. That's how I learned about UPS battery maintenance schedules and the importance of testing.
I have a calendar alert to test the UPS. I groan whenever it comes up, because I know there's a chance I'm going to discover the batteries won't hold up under load anymore, which means I not only have to deal with the server losing power, but I also have to do the next round of guessing which replacement batteries come from a good brand this time. Using the same vendor doesn't even guarantee you'll get the same quality when you only buy every several years.
Backup generators have their own maintenance schedule.
I think the situation should be better in the future with lithium-chemistry UPSes, but every time I look, the available options are either exorbitantly expensive or cobbled together from parts in a way that kind of works but has a lot of limitations and up-front work.
So now you need to test them regularly. And order new ones when they're not holding a charge any more. Then power down the server, unplug it, pull the UPS out, swap batteries, etc.
Then, even when I think I've finally got the UPS automatic shutdown scripts and drivers working just right under Linux, a routine version upgrade breaks it all for some reason, and I'm spending another 30 minutes reading through obscure docs and running tests until it works again.
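For reference, the core of those scripts is usually just a polling loop. A minimal sketch, assuming Network UPS Tools is installed and the UPS is configured under the name "ups" (the name and thresholds are assumptions; NUT's own upsmon does this properly via SHUTDOWNCMD):

    # poll NUT and shut down before the battery gives out
    # assumes the `upsc` CLI and a UPS configured as "ups"
    import subprocess, time

    def upsc(var: str) -> str:
        return subprocess.run(
            ["upsc", "ups@localhost", var],
            capture_output=True, text=True,
        ).stdout.strip()

    while True:
        status = upsc("ups.status")         # e.g. "OL", "OB DISCHRG", "OB LB"
        charge = int(upsc("battery.charge") or "100")
        if "OB" in status and charge < 20:  # on battery and getting low
            subprocess.run(["shutdown", "-h", "now"])
            break
        time.sleep(30)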
Rewiring the house for 240V supply and spending $400+500 to refurbish a second-hand UPS to keep the 2500W rack running for 15 minutes?
And then there are the electricity costs of running a 2.5kW load, plus the cooling costs associated with getting that much heat out of the house. That's like a space heater and a half running constantly.
I've added an asciinema to the README now <https://asciinema.org/a/z2cdsoVDVJu0gIGn>, showing the manual connection steps. Thanks for the kind words. Hope you find it useful!
I've also worked in environments where the most pragmatic solution was to issue a reboot periodically and accept the minute or two of (external) downtime. Our problem is probably down to T-Mobile's lousy consumer hardware.
https://www.ankersolix.com/ca/products/f2600-400w-portable-s...
It's so much simpler when you have the files stored locally; then syncing between devices is just something that can happen whenever. Anything that runs on a server needs user permissions, wifi, a router, etc., etc. It's just a lot of complexity for very little gain.
Although keep in mind I'm the only one using all of this stuff. If I needed to share things with other people, then Syncthing gets a bit trickier and a central server starts to make more sense.
Wow, the asciinema is really good and very professional. Thanks for creating it. I found it very helpful (in the sense that if I ever were to repeat my experiment, now I've got your asciinema recording), and I hope more people use it.
> It could be streamlined with something like Magic Wormhole, though. I'll add that to the backlog and see if there's interest
To be really honest, it's not that big of a deal, considering one can do that on their own, but I just had this idea for my own convenience when I was using QTM.
I really like QTM a lot! Thanks for building it, once again. I'll try to integrate it more often and give you more feedback when possible from now on.
Self-hosting sounds so simple, but if you consider all the critical factors involved, it becomes a full-time job. You own your server. In every regard.
And security is only one crucial aspect. How spam filters react to your IP is another story.
In the end I cherish the dream but rely on third-party server providers.
> maintain my familiarity with it in case I need to use it late at night in the dark with an ice storm breaking all the trees around us.
That's the way to do it. I usually did my trial runs during the day with light readily available, but underestimated how much I needed to see what I was doing. Now there's a grounding plug and a flashlight in the "oh shit kit".
Another maker is GoldenMate (lest I be accused of being an ad).
Assuming their heating, cooking, and hot water are gas, a house doesn't actually consume that much. With a 50kWh battery you can draw just under 300W continuously for a week. I'd expect the average house to draw ~200W with lighting and a few electronics, with a lean towards the evenings for the lighting.
What follows is back of the napkin calculations, so please treat it as such and correct me if I am wrong.
1. Inverters are not 100% efficient. Let's assume 90%
2. Let's also assume that the user does not want to draw battery to 0 to not become stranded or have to do the "Honda generator in the trunk" trick. Extra 10%?
3. 300W continuous sounds a bit low even with gas appliances. Things like the fridge and furnace blower have spiky loads that push the daily average. Let's add 100W to the average load? I might be being too generous here, but I used 300W, not the 200W lower bound.
4. The vehicle side might have some consumption of its own. If powering the house off the battery, it would probably need to cool the battery or keep some smarts on to make sure it doesn't drain or overheat? Genuinely not sure how to estimate this; let's neglect it for now.
Math is (50kWh - 10% (inverter loss) - 10% (reserve)) / 0.4kW = 100 hours, ~4 days (redone in code below).
The above calculations assume a sane configuration (a proper bidirectional hookup, not a suicide cord into a 12V outlet). A quick skim of search results for cars with bidirectional charging support for home use shows batteries between ~40kWh (Leaf) and 250kWh (Hummer).
So it looks like one should be looking for an ~80kWh battery, which most of the cars on that list actually have.
Again, very back of the napkin; I'd probably want to add a 20% margin of error.
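Here's the same napkin math as a few lines of Python, so the assumptions are easy to tweak (the 10% figures and 0.4kW load are the guesses from above):

    # back-of-napkin EV-as-house-battery runtime
    capacity_kwh = 50.0                        # smallest pack found (Leaf-ish)
    usable = capacity_kwh * (1 - 0.10 - 0.10)  # inverter loss + reserve
    load_kw = 0.4                              # ~300W average + 100W spiky loads

    hours = usable / load_kw
    print(f"{hours:.0f} h ~= {hours / 24:.1f} days")  # 100 h ~= 4.2 days

    # how big a pack a full week would need, with the same losses
    week_kwh = load_kw * 24 * 7 / (1 - 0.10 - 0.10)
    print(f"need ~{week_kwh:.0f} kWh for 7 days")     # ~84 kWh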
Indeed with the fridge it pushes it a bit. But to address some of your other points:
> it would probably need to cool the battery
I'd expect if you're in a storm then you probably don't need any cooling - not to mention a 300W load is nothing for an EV battery compared to actually moving the vehicle. I'd expect some computers in the vehicle to be alive but that should be a ~10-20W draw.
On the other hand, my calculation assumes ~300W continuous. I expect the consumption to lean into the evenings due to the extra lighting, and drop off during other times.
But yes 80kWh might very well be what the OP has; I intentionally picked 50kWh as the lowest option I found on a "<major ev brand> battery kwh" search.
But that doesn't mean it's for us to say that someone else's use case is wrong. Some people self-host a Nextcloud instance and offer access to it to friends and family. What if someone is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.
My point was simply that different people have different use cases and different needs, and it definitely can become a bottomless pit if you let it.
For me, it's IPMI, PiKVM, TinyPilot: any sort of remote management interface that can power a device on/off and be auto-powered-on when power is available, so you can reasonably always access it. Having THAT on the UPS means you can power down the compute remotely, and also power it back up remotely. It means you never have to send someone to reboot your rack while you're out of town, and you don't shred your UPS battery in minutes by having the server auto-boot when power is available. Eliminates reliance on other people while you're not home :tada:
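For the IPMI case, remote power control boils down to a couple of ipmitool calls. A minimal sketch (the BMC host and credentials are placeholders):

    # remotely query and cycle a server's power through its BMC
    # bmc.example.lan / admin / secret are placeholders for your setup
    import subprocess

    def ipmi(*args: str) -> str:
        return subprocess.run(
            ["ipmitool", "-I", "lanplus",
             "-H", "bmc.example.lan", "-U", "admin", "-P", "secret",
             *args],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
    # ipmi("chassis", "power", "soft")  # ask the OS to shut down gracefully
    # ipmi("chassis", "power", "on")    # bring it back up later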
But again: not quite a bottomless pit, though there are constant layers of complexity if you want to get it right.
What setup did you go with for whole house backup power?
Also generators are still cheap compared to batteries?
For longer outages there is an outhouse with triple-redundant generators:
- Honda c. 2005
- Honda c. 1985
- Briggs & Stratton c. 1940
The “redundancy” here is that the first is to provide power in the event of a long power outage, and the other two are redundant museum pieces (which turn over!)
I regrettably removed our old furnace/tank when installing the air-source heat pump we have now (northeast), but that's been my biggest concern power-wise.
These are different requirements. The issue I described was not a power outage, and having a well-managed UPS wouldn't have made a difference. Nothing shut down, but we lost 5G in the area, and T-Mobile's modem is janky. My point is that it's another edge case you need to consider when self-hosting, because all the remote management and PDUs in the world can't save you if you can't log into the system.
Of course, for that, all you need is a smart plug and a script/Home Assistant routine which pings every now and again. There are enterprise versions of this, but simple and cheap works for me.
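A minimal sketch of that watchdog, assuming a smart plug that exposes some on/off HTTP endpoint (the plug's API below is made up; adapt it to whatever your device or Home Assistant exposes):

    # ping the modem; if it's been dead for a while, power-cycle its plug
    # the plug's HTTP endpoints are hypothetical - real devices vary
    import subprocess, time, urllib.request

    MODEM = "192.168.1.1"
    PLUG = "http://192.168.1.50"   # made-up smart plug address

    def alive(host: str) -> bool:
        return subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            capture_output=True,
        ).returncode == 0

    failures = 0
    while True:
        failures = 0 if alive(MODEM) else failures + 1
        if failures >= 5:          # roughly five minutes of silence
            urllib.request.urlopen(PLUG + "/off")
            time.sleep(10)
            urllib.request.urlopen(PLUG + "/on")
            failures = 0
        time.sleep(60)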
Again, not trying to normalize 2500W, most people don’t need that (and I don’t really either), but I do make good use of it.
As for “rewiring the house for 240V”, every house* in Canada and the US is delivered “split-phase” 240V (i.e. 240V with a centre tapped neutral, providing 120V between either end of the 240V phase and neutral or 240V from phase to phase), and many appliances are 240V (dryers, water heaters, stove/ranges/ovens, air conditioners). If you have a space free in your breaker panel, adding a 240V 30A circuit should cost less than $1k if you pay an electrician, and can be DIY’d for like $150 max unless you have an ancient panel that requires rare/specialty breakers or the run is very long. It’s far from the most expensive part of a homelab unless you’re running literally just a raspberry pi or something.
*barring an incredibly small exceptional percentage
Generator was a requirement for the sump pump. My house was basically built on a swamp, so an hour in spring without it means water in the basement. Now admittedly, I spent an extra couple hundred bucks to get a 240V generator with higher capacity than strictly necessary, but it was also roughly the minimum amount of money to spend to get one that can run on gasoline or propane, which was a requirement for me. 240V to the rack cost me $45, most of that cost being the breaker (rack is right next to the panel).
> What if someone is hosting something important on there and my power is out? My concerns are elsewhere, but theirs might not be.
I host roughly a dozen services that have around 25 users at the moment, but I charge $0 for them. I make it very clear: I have a petabyte of storage and oodles of compute, feel free to use your slice, and I’ll do my best to keep everything up and available - for my own sake (and I’ve maintained over 3 nines for 8 years!). But you as a user get no guarantee of uptime or availability, ever, and while I try very hard to backup important data (onsite, offsite split to multiple locations, and AWS S3 glacier), if I lose your data, sucks to suck. So far most people are pretty happy with this arrangement.
I couldn’t possibly fathom worrying about other people’s access to my homelab during a power outage. If I wanted to care, I’d charge for access, and I’d have a standby generator, multiple WANs, a more resilient remote KVM setup, etc. But then I’d be running a business - just a really shitty one that takes tons of my time and makes me little money. And is very illegal (for some of the services I make available, at least), instead of only slightly illegal.