related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/
Also the "Why it matters" in the article. I thought it's a jab at AI-generated articles but it starts too look like the article was AI written as well
[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...
We've gone a step further, and made this even easier with https://zo.computer
You get a server, and a lot of useful built-in functionality (like the ability to text with your server)
This is what I do. You can get Tailscale-like access using things like Pangolin[0].
You can also use a bastion host, or block all ports and set up Tor or i2p, so that anyone who even wants to talk to your server needs to know cryptographic keys just to route traffic to it at all, on top of your SSH/WG/etc. keys. (A minimal bastion sketch is at the end of this comment.)
> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.
This is what I don't do. Anything that needs real internet exposure (mail, raw web access, etc.) gets its own VPS, where an attack will stay isolated. That matters as more self-hosted services are implemented using things like React and Next[1].
[0] https://github.com/fosrl/pangolin
[1] >>46136026
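The bastion sketch mentioned above: a minimal setup using SSH's built-in ProxyJump, with hypothetical host names and addresses.

    # Hypothetical names; only the bastion is exposed to the internet.
    cat >> ~/.ssh/config <<'EOF'
    Host bastion
        HostName bastion.example.com
        User admin

    Host homeserver
        HostName 10.0.0.5        # private address, reachable only via the bastion
        User admin
        ProxyJump bastion
    EOF

    # One hop through the bastion, SSH-encrypted end to end:
    ssh homeserver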
I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio
There was a popular post about this less than a month ago: >>46305585
I agree that maintaining WireGuard is a good compromise. It may not be "the way the internet was intended to work", but it keeps something that feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.
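To illustrate how little there is to maintain, a minimal sketch with hypothetical keys and addresses; once the interface is up, adding a device is just one more [Peer] block:

    # Generate the server's keypair:
    wg genkey | tee server.key | wg pubkey > server.pub

    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    Address = 10.10.0.1/24
    ListenPort = 51820
    PrivateKey = <contents of server.key>

    [Peer]
    # laptop
    PublicKey = <laptop's public key>
    AllowedIPs = 10.10.0.2/32
    EOF

    wg-quick up wg0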
This incident precisely shows that containerization worked as intended and protected the host.
https://github.com/jwhited/wgsd
https://www.jordanwhited.com/posts/wireguard-endpoint-discov...
I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.
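For context, this is the vanilla primitive such a layer automates (hypothetical key and address); WireGuard itself also roams, updating a peer's endpoint whenever an authenticated packet arrives from a new address:

    # Push a peer's new public endpoint without touching keys or restarting:
    wg set wg0 peer '<peer-public-key>' endpoint 203.0.113.7:51820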
You can even self-host Tailscale via Headscale. I don't know how good that experience is, but there is also genuinely open-source software like NetBird, ZeroTier, etc.
You could also just go the plain WireGuard route if you're interested. It really depends on your use case, but yours here, SSH access, seems like a normal one.
You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience though, and for not having to deal with NATs and everything.
But I suspect your home server is behind a NAT. In that case, what I recommend is either A) run it over Tor, or use https://gitlab.com/CGamesPlay/qtm (which uses iroh's hosted instance by default, though you can self-host that too), or B) (recommended) get a cheap unlimited-traffic VPS (I recommend UpCloud, OVH, or Hetzner; around $3-4 per month) and install something like remotemoe (https://github.com/fasmide/remotemoe) or anything similar, which effectively acts as a proxy (rough sketch below).
Sorry if I went a little overkill though, lol. I have played with these things too much, so I may be overarchitecting, but if you genuinely want self-hosting taken to the extreme, Tor .onions or i2p might benefit you. Even just buying a VPS is a good step up.
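The rough sketch mentioned above: the VPS-as-proxy idea reduced to plain SSH (hypothetical host names; remotemoe layers conveniences on top of essentially this):

    # From the NAT'd home server: keep a reverse tunnel open to the VPS,
    # exposing the home box's sshd as port 2222 on the VPS.
    ssh -N -R 2222:localhost:22 you@vps.example.com

    # From anywhere (needs GatewayPorts yes in the VPS's sshd_config,
    # or hop into the VPS first):
    ssh -p 2222 you@vps.example.com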
> I was in another country when there was a power outage at home. My internet went down, the server restarted but couldn't reconnect anymore because the optical network router also had some problems after the power outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, it just didn't seem worth the hassle anymore. Add a UPS, okay. But why not add a dual-WAN failover router for extra security if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)
Laptops have a built-in UPS and are cheap. Laptops and refurbished servers are a good entry point, IMO. Sure, it's a bottomless pit, but the benefits are well worth it; at some point you have to look at the trade-offs, and for me, laptops and refurbished or resale servers hit that balance. In fact, I ran a git server on an Android tablet for a while, but I've been too lazy to figure out whether I want it charging permanently.
If you are running a public service, it might have bugs; we regularly see RCE issues, or there can be misconfigurations, and containers by default don't provide enough isolation if an attacker breaks in. Containers aren't a security boundary in that sense.
Virtual machines are the intended tool for that, but they can be full of friction at times.
If you want something of a middle compromise, I can't recommend incus enough. https://linuxcontainers.org/incus/
It lets you manage VMs with a container-like workflow, provides a web UI, and gives an amount of isolation that you can (usually) trust.
I'd say don't take chances with your home server: it sits inside your firewall and, in the worst case, can be used to infect your other devices. Virtualization with things like Incus or Proxmox (another well-respected tool) is the safest option and provides isolation you can trust. I highly recommend taking a look if you deploy public-facing services.
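For a taste of how low-friction Incus is, hypothetical instance names below (image aliases may differ on your system):

    # Same CLI, two isolation levels:
    incus launch images:debian/12 blog             # system container (shared kernel)
    incus launch images:debian/12 public-web --vm  # real VM (hardware virtualization)

    incus exec public-web -- bash                  # shell into the VM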
But I think in the end what worked was that my frustration took over and I just copy-pasted the commands from the README; if I remember correctly, they just worked.
This is really ironic considering what thread we are on, but in the end, good READMEs make self-hosting on a home server easier and fun xD
(I don't exactly remember the ChatGPT conversations; perhaps they helped a bit, perhaps not, but I am 99% sure it was your README that ended up helping. ChatGPT in fact took an hour or more and genuinely frustrated me, from what I vaguely remember.)
I hope QTM gains more traction. It's built on solid primitives.
One thing I genuinely want you to take a look at, if possible: creating an additional piece of software, or adding functionality, to replace the careful dance we currently have to do to make it work. Right now the two computers have to exchange two large pieces of data, and I had to use some hacky solution like a piping server, or wormhole itself, to do it.
So what I am asking is: for the initial node pairing between A and B (the ticket? sorry, I forgot the name of the primitive), could you use wormhole itself, so that instead of the two sides sending large chunks of data to each other, they just exchange six words or so?
Wormhole: https://github.com/magic-wormhole/magic-wormhole
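For reference, the flow I mean, with a hypothetical file name: the large blob travels over wormhole, and the humans only relay the short code:

    # Machine A: send the pairing ticket
    wormhole send ticket.txt
    # prints something like: Wormhole code is: 7-crossover-clockwork

    # Machine B: type the short code instead of pasting a large blob
    wormhole receive 7-crossover-clockwork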
I even remember building my own CLI for something like this, using ChatGPT xD, but in the end I gave up because I wasn't familiar with the codebase or with how to make the two work together. I hope you can add it; I sincerely hope so.
Another minor suggestion: please add an asciinema demo. I will create an asciinema patch between two computers if you want, but a working demo GIF going from zero to running would really, really have saved me a few hours.
QTM has lots of potential. Iroh is so sane: it talks directly over plain IPv4 when possible, but it can also break through NATs, and you can even self-host the relay part. I had thought about building such a project myself, so you can imagine my joy when I discovered QTM in one of your comments a long time ago, for what it's worth.
Wishing you the best of luck with your project! The idea is very fascinating. I would appreciate a visual demo a lot, though, and I hope we can discuss more!
Edit: I remember that the QTM docs felt really complex to me personally, when all I wanted was one computer's port mapped to another computer's port. I think what helped in the end was the 4th comment, if I remember correctly. I might have used LLM assistance, and I genuinely don't remember whether it helped, but it definitely took me an hour or two to figure things out. That's okay; I still feel the software is a definite positive, and this might have been a skill issue on my side. But I can't stress enough how much asciinema docs, if you can add them, would help an average person figure out the product.
(Slowly move towards the complex setups, with asciinema demos for each of them, if you wish.)
Once again, good luck! I can't recommend QTM enough, and I still strongly urge everyone to try it once: https://gitlab.com/CGamesPlay/qtm. It's highly relevant to this discussion.
I've added an asciinema to the README now <https://asciinema.org/a/z2cdsoVDVJu0gIGn>, showing the manual connection steps. Thanks for the kind words. Hope you find it useful!
They tend to slip out of declarative mode and start making untracked changes to the system from time to time.
I’ve been meaning to give this a try this winter: https://github.com/juanfont/headscale
I route it through a familiar interface like Slack though, since I don't like to SSH from my phone or whatever, using a tool I built: https://www.claudecontrol.com/
https://www.ankersolix.com/ca/products/f2600-400w-portable-s...
There are a few important things to consider, like unstable IPs, home internet limits, and the occasional power issue. Cloud providers felt overpriced for what I needed, especially once storage was factored in.
In the end, I put together a small business where people can run their own Mac mini with a static IP: https://www.minimahost.com/
I’m continuing to work on it while keeping my regular software job. So far, the demand is not very high, or perhaps I am not great at marketing XD
Tbh I made the mistake of throwing away Ansible, so testing my setup was a pain!
Since with AI the focus should be on testing, perhaps it's sensible to swap Ansible for something like https://github.com/goss-org/goss
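A minimal sketch of what that looks like (hypothetical services and URL): you assert the end state, regardless of whether a human, Ansible, or an LLM produced it:

    cat > goss.yaml <<'EOF'
    port:
      tcp:22:
        listening: true
    service:
      sshd:
        enabled: true
        running: true
    http:
      https://example.com:
        status: 200
    EOF

    goss validate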
Things are happening so fast, I was impressed to see a Linux distro embrace using a SKILL.md! https://github.com/basecamp/omarchy/blob/master/default/omar...
And so far without AI. I'm kind of curious, but afraid that it will bring my servers down and then I can't roll back :D Perhaps if I moved over to NixOS, rolling back would be easy.
I am glad that it is useful to you! The "terrible search + outdated forum posts" problem is real for sure. LLMs genuinely help there by synthesizing across versions and explaining what changed.
I would say that self-hosting with AI assistance is the right approach. Use it to understand, not to blindly execute. Trust me, it is not a big deal, and you will be happy to have gone this route afterwards!
Good luck with the setup. If you have any questions, let me know, I am always happy to help.
(I have very briefly mentioned some stuff here: >>46586406 but I can expand and be a bit more detailed as needed.)
Wrote about learning and fun here: https://fulghum.io/fun2
(and no, this product is not against TOS as it is using the official claude code SDK unlike opencode https://yepanywhere.com/tos-compliance.html)
From this thread, I've learned about Pangolin:
https://github.com/fosrl/pangolin
Which seems very compelling to me too. If it has apps that let various devices connect to the VPN, it might be worth trialing it instead of Tailscale...
It was just giving commands to run that were plain wrong and extremely destructive, and unless you already knew what they were doing you were screwed.
Here: https://chatgpt.com/share/696539b6-65f0-8010-9324-5e35da42ee...
I have 4-5 more conversations like this. It's honestly almost a piece of art, the LLM keeps spouting out shit like "Ah got it, your issue is clear now", and digging deeper into the wrong direction.
The idea is that a contract is defined saying which options exist and what they mean. For backups, you'd get the Unix user doing the backup, which folders to back up, and which patterns to exclude, but also which scripts to run to create a backup and to restore from one.
Then you'd have a contract consumer, the application to be backed up, which declares which folders to back up and under which users.
On the other side you have a contract provider, like Restic or Borgbackup, which understands this contract and thus knows how to back up the application.
As the user, your role is just to plug a contract provider into a consumer: to choose which provider backs up which application.
This can be applied to LDAP, SSO, secrets and more!
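To make it concrete, an entirely hypothetical contract file (invented schema, paths, and hooks; nothing here is an existing standard):

    cat > /etc/contracts/backup/nextcloud.yaml <<'EOF'
    contract: backup/v1
    consumer: nextcloud
    user: www-data                  # Unix user to run the backup as
    paths:
      - /var/www/nextcloud/data
    exclude:
      - "*/cache/*"
    scripts:
      backup: /usr/local/bin/nextcloud-dump.sh
      restore: /usr/local/bin/nextcloud-restore.sh
    EOF
    # A provider (restic, borgbackup, ...) that speaks "backup/v1" reads this
    # and knows what to back up, as whom, and how to restore.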
Another maker is GoldenMate (lest I be accused of being an ad).
What you're describing is possible but you would need to market it differently if selling to non-tech people.
Now if you could make something like this https://oxide.computer/ for home users and make it affordable, that would be cool.