I use Cloudflare Tunnel to make it available outside the home network. I've set up two DNS names – one for accessing it directly on the local network, and a second one that goes through the tunnel. The Immich mobile app supports internal/external connection settings – it uses the direct connection when connected to home wifi, and the tunnel when out and about.
For uploading photos taken with a camera I either use immich-go (https://github.com/simulot/immich-go) or upload them through the web UI. There's a "publish to Immich" plugin for Adobe Lightroom which was handy, but I've moved away from using Lightroom.
I love the immich success story but it seems like it's missing a crucial use case in my view: I don't actually want a majority of the photos on my phone. I want something like a shared album that me and my wife both have access to, and so we can share photos specifically to that album (quickly and without hassle), so we can do it in the moment and both have access.
I'd estimate 90% of my photos are junk, but I want to isolate and share the 10% that are really special.
My app failed, but I'm thinking about reviving it as an alternative front-end to Immich, to build upon that. But I feel like I'm the only one who wants this. Everyone else seems fine with bulk photo backup for everything.
Immich put the joy back in photography for me, it's so easy to find anything, even with just searching with natural language.
Being able to scroll to dates with immich is golden. And the facial recognition is on device and works great.
I actually did the math earlier and the iCloud 12TB plan for a family is way cheaper than the equivalent S3 storage assuming frequent access, even assuming a 50% discount. So that's nice.
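For scale, a back-of-envelope version of that math (the prices are my assumptions, check current pricing: iCloud's 12 TB family tier at roughly $60/month, S3 Standard around $0.023/GB-month for the first 50 TB, egress not even counted):

```python
# Back-of-envelope storage cost comparison. Prices are assumptions,
# not quotes -- check current pricing before trusting the exact numbers.
ICLOUD_12TB_PER_MONTH = 59.99      # USD, assumed US family-plan price
S3_STANDARD_PER_GB_MONTH = 0.023   # USD, assumed first-50TB tier, no egress

s3_monthly = 12 * 1024 * S3_STANDARD_PER_GB_MONTH  # 12 TB expressed in GB
print(f"S3:     ${s3_monthly:.0f}/month (${s3_monthly / 2:.0f} even at 50% off)")
print(f"iCloud: ${ICLOUD_12TB_PER_MONTH}/month")
```

Even at a 50% discount, S3 comes out more than double the iCloud price before egress, which only widens the gap with frequent access.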
FWIW, I also don't use any fancy collection management and barely understand what all these Lightrooms and XMP files are for. Maybe I should, but up to this day photos for me are just a bunch of files in the folder, that I sometimes manually group into subfolders like 2025-09, mostly to make it easier on thumbnail-maker.
The project as a whole feels competent.
Stuff that should be fast is fast. E.g. upload a few tens of thousands of photos (saturates my wifi just fine), wait for indexing and thumbnailing to finish, and then jump a few years in the scroll bar - odds are very good that it'll have the thumbnails fully rendered in like a quarter of a second, and fuzzy ones practically instantly. It's transparently fast.
And the image folder structure is very nearly your full data, with metadata files alongside the images, so 99% backups and "Immich is gone, now what" failure modes are quite easy. And if you change the organization, it'll restructure the whole folder for you to match the new setup, quietly and correctly.
Image content searching is not perfect (is it ever?), but I can turn it on in a couple clicks, search for the breed of my dog, and get hundreds of correct matches before the first mistake. That's more than good enough to be useful, and dramatically better than anything self-hosted that I've tried before, and didn't take an hour of reading to enable.
It's "this is like actually decent" levels that I haven't seen much in self-hosted stuff. Usually it's kinda janky but still technically functional in some core areas, or abysmally slow and weird like nextcloud, but nope. Just solid all around. Highly recommended.
It keeps the originals locally after upload, forever, unless you delete them. There's a one-click "free up space on this device" button to delete the local files. It's actually somewhat annoying to export in bulk; you pretty much have to use Takeout.
I keep Tailscale but switched over to Pangolin for accessing most of my self-hosted services.
Whoever gets that link can browse it in a web browser.
I've used this to share albums of photos with gatherings of folks; it works very well. It does assume you have your Immich installation publicly available, however. (Not open to the public, but on a publicly accessible web server)
I started looking for alternatives after Synology became more restrictive with their hardware. I'm curious if anyone else has had a similar experience.
Immich also has an app that can upload photos to your server automatically. You can store them there indefinitely. There are galleries, timelines, maps for geotagged photos, etc.
The app also allows you to browse your galleries from your phone, without downloading full-resolution pictures. It's wickedly fast, especially in your home network.
> Does it preserve all this additional data Android cameras add, like HDR, video fragments before photos, does it handle photospheres well, etc?
It preserves the information from sidecar files and the original RAW files. The RAW processing is a bit limited right now, and it doesn't support HDR properly. However, the information is not lost, and once they polish the HDR support, you'll just need to regenerate the thumbnails.
1. You get a mesh network out of the box without having to keep track of Wireguard peers. It saves a bunch of work once you’re beyond the ~5 node range.
2. You can quickly share access to your network with others - think family & friends.
3. You have the ability to easily define fine grained connectivity policies. For example, machines in the “untrusted” group cannot reach machines in the “trusted” group.
4. It “just works”. No need to worry about NAT or port forwarding, especially when dealing with devices in your home network.
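For what it's worth, point 3 looks roughly like this in the tailnet policy file (the group names and the Immich tag are made up for the example; the policy is default-deny, so "untrusted" never reaching "trusted" machines is just the absence of a rule). Tailscale's policy file is HuJSON, so the comments are legal:

```json
{
  "groups": {
    "group:trusted":   ["alice@example.com", "bob@example.com"],
    "group:untrusted": ["guest@example.com"]
  },
  "acls": [
    // Trusted members can reach everything on the tailnet.
    { "action": "accept", "src": ["group:trusted"], "dst": ["*:*"] },
    // Untrusted members only ever reach the Immich port; no rule grants
    // them access to trusted machines, so that traffic is denied.
    { "action": "accept", "src": ["group:untrusted"], "dst": ["tag:immich:2283"] }
  ]
}
```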
Face recognition in general just isn't as good as Google Photos.
It's still an amazing piece of software and I'd never go back, but it isn't perfect yet.
I'm curious to know which one would suit me best.
I have a daughter, and my family lives in another country, so I want to be able to share photos with them. These are the features I need:
- Sharing albums with people (read only). It sounds pretty simple, but even Google fucked it up somehow. I added family members to the album by their Google account, and somehow later I saw someone I didn't know was part of the album. Apparently adding people gives (or did?) them permission to share the album with other people, which is weird. I want to be able to control exactly who sees the photos, and not allow them to share them with others or download them. On the topic of features, I should note that zero of the other social features (comments/reactions) have ever been used.
- Shared album with my spouse (write). I take photos of the kid, she takes photos of the kid. We want to be able to both add our photos to the shared album.
- Automatic albums or grouping by faces. Being able to quickly see all the photos of our kid is really great, especially if it works with the other sharing features. On Google you could set up Live Albums that did this (automatic add and share between multiple people), but I can't see the option anymore on Android. I feel it could be a bit simpler though: just tag a specific face, so that all photos of them are shared within my Google One family.
- The way we use it is we have a shared album between us for all the photos, and then a curated album of the best photos shared with family members.
Other than that I just use it as a place to dump photos (automatically backed up from my phone) and search if needed. Ironically the search is not very good, but usually I can remember when the photo I need was taken roughly so can scroll through the timeline. In total my spouse and I have ~200GB of media on Google Photos, some of it is backed up elsewhere.
Immich is one of the only apps on iOS that properly does background sync. There is also PhotoSync, which is notable for working properly with background sync. I'll take a wild guess that Ente may have got this working right too (at least I'd hope). This works around the limitation that iOS apps can't really run as background apps (it appears to me that the app can wake up on some interval, run/sync for a little, and try again on the next interval). This is much more usable than, for example, the Synology apps for photo sync, which were, the last time I tried, for some reason insanely slow, and the phone needs to have the app open and the screen on for it to fully sync.
One issue I ran into is the Immich iOS app updating and then being incompatible with the older version of the server installed on my machine. You'd have to disable app updates for all apps, as iOS doesn't support disabling updates for individual apps.
In my specific scenario, the latest version of Immich for NixOS didn't perform a certain migration for my older version of Immich. I had to track down the specific commit that contained the version of Immich which had the migration, apply that, then I was able to get back to the latest version. Luckily, even though I probably applied a few versions before getting the right one, it didn't corrupt the Immich install.
Lesson learnt.
Yes, I don't recommend doing that. My experience is that people understand you are human because they know you. They don't expect nine nines of availability, but if they somehow do, that can be clarified from the start: "I'm hosting this free of charge for family members because (insert your reasons here; it's important to clarify WHY it's different, because Apple and Big Tech in general somehow still have a ton of goodwill), but as you know I also have a job and a family life. Consequently there will sometimes be downtime, e.g. an electricity outage or me having to update the server. Do not panic when this happens, as the files are always safe (backup details if you want), but please do be patient. Typically updates might take (insert your realistic expectation, do NOT be too optimistic) a day per month. If you do have a better solution, please do contribute."
... or something of the kind. I've been doing that for years and people are surprisingly understanding. IMHO it stems from the why.
The "way cheaper than the equivalent" argument reminds me of (and apologies, I know it's a bit rough) the Russian foreign minister a few days ago who criticized the EU for its plan to decouple from Russian oil & gas, saying something like "Well, if they do that they will pay a lot more elsewhere", and he's right. The point isn't the money though; the point is agency and sovereignty.
How's the offline app support? My full library (30k items) is available on my phone (not in high res). There are a lot more concessions I'm sure.
Got myself a 6800 Pro. It chewed through 98k photos, many of which are RAW, within 24h IIRC. Then came face recognition, text recognition, etc. Within 2-3 days all was done.
The performance is night and day. Photos and movies load instantly. Finally can watch home movies on my TV without stuttering (4k footage straight from a nikon).
The photos app is similar to the Synology one. Face recognition was better for me: I compared the number of photos tagged for a few people, and UGREEN found 15% more. I've seen photos of my grandma which I hadn't seen for years!
There's much more positive I could say. For the negatives: no native drive app (Nextcloud, which was supposedly an alternative, doesn't sync folders on Android), and no native security cam app.
I am now running 10 Docker containers without breaking a sweat. My DS920+ was so slow that I gave up on Docker entirely after a few attempts.
The photos app has some nice features which synology didn't have. Conditional albums. Baby albums.
We got to this stage of having to sync because Apple can’t stand putting more storage on client devices.
The reason is that I want to keep the services in a portable/distro-agnostic format and decoupled from the base system, so I'm not tied too much to a single distro and can manage them separately.
My understanding is that when using containers, updating is an ordeal, and you avoid the need by never exposing the services to the internet.
You build a new image with updated/patched versions of packages and then replace your vulnerable container with a new one, created from the new image.
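Sketched out (the image tag and paths here are illustrative, not Immich's official compose file), the pattern is: state in a volume, image pinned by tag, update by bumping the tag and recreating the container:

```yaml
# docker-compose.yml sketch: the container is disposable, the volume is not
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:v1.122.0  # bump this tag to patch
    ports:
      - "2283:2283"
    volumes:
      - upload:/usr/src/app/upload  # photos/state survive container replacement
volumes:
  upload:
```

Then `docker compose pull && docker compose up -d` swaps the vulnerable container for one created from the new image without touching the data in the volume.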
Assuming someone has added it to NixOS, yeah. There are plenty of platforms even easier than that where you can click "install" on "apps" that have already been configured.
"because a company that sells you Cloud storage has very few incentives to give away more local storage, or compress/optimize the files generated by its camera app." might be more accurate
As for not wanting most of your photos, Immich also includes AI search and facial recognition which both work really well. I can't remember if it detects near-duplicates, but I thought it did. I think you should play around with it before you leap into the giant project of making your own app.
The Android app is good but does quite often fail to open, just getting stuck on the splash screen indefinitely. Means I have to have another app for viewing photos on my phone.
One of the main reasons I wanted to install it is because my partner runs out of space on her iPhone and I don't want to pay Apple exorbitant amounts for piffling storage. Unfortunately it doesn't quite work for that; I can't find an option to delete local copies after upload.
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
How does one do it on nix? Bump version in a config and install? Seems similar
My partner isn't very technical, but having an Immich server we are both invested in has gotten her much more interested in self hosting and the skills to do it.
Offline support is alright, though I haven't worried about this much. I think it doesn't do any local deletion, so whatever stays in your DCIM folder is still on device.
Immich really is fantastic software, and their roadmap is promising. I hope they have enough funding to keep going.
In the context of having a phone stolen, it's possible to at least limit the damage and revoke accesses via the Tailscale control server. Then the files on device are still vulnerable, but not everything in Immich (or whatever other service is running).
You can still mount an object storage bucket to the filesystem, but it's not supported officially by Immich, and in any case you have extra latency caused by the fact that your device reaches out to your server, and your server reaches out to the bucket.
It would be amazing (and I've been working on that) to have an Immich that natively supports S3 and does everything with S3.
This, together with the performance issues of Immich, is what pushed me to create immich-go-backend (https://github.com/denysvitali/immich-go-backend) - a complete rewrite of Immich's backend in Go.
The project is not mature enough yet, but the goal is to reach feature parity + native S3 integration.
You can configure the storage template for the photos and include an "album" part, so if a photo is in some album it'll get sorted into that folder. Then the file tree on disk is as you wish.
I haven't tested what it does when a photo is in multiple albums, but it does handle the no album case fine as well.
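For reference, the storage template lives in the admin settings; a template using the album part might look like this (the variable names are from memory, so check the template editor in your version for the exact supported ones):

```
{{album}}/{{y}}/{{y}}-{{MM}}-{{dd}}/{{filename}}
```

As the parent says, assets outside any album still get filed, just without the album folder.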
I also use Tailscale, and use Cloudflare as the nameserver and Caddy in front of Immich to get a nice URL and HTTPS. For DNS redirects I use AdGuard on the tailnet, but (mostly for family) I also set some redirects in my Mikrotik hEX (E50UG). This way Immich is reachable from anywhere and not on the internet. Unfortunately it looks like the Immich app caches the IP address somewhere? Because it always reports as disconnected whenever Tailscale turns off when I'm at home, or the other way around, and takes some time/attempts/restarts to get going again. It's been pretty flaky that way...
Other than that: Best selfhosted app ever. It has reminded me that video > photos, for family moments. Regularly I go back through the years for that day, love that feature.
Immich, ente and photoprism all compete in a similar space?
Seems Immich has the most polished web UI, but which solution will become the next cloud for photos remains to be seen. Surely it's not Nextcloud anymore, considering the comments here.
Although I am sure I can back them up to my PC somehow. But having them just on the server is not my favourite solution.
I considered doing that too. My main problem with it is privacy. Let's say I set up some sort of dynamic DNS to point foo.bar.example.org to my home IP. Then, after some family event, I share an album link (https://foo.bar.example.org/share/long-base64-string) with friends and family. The album link gets shared on, and ends up on the public internet. Once somebody figures out foo.bar.example.org points to my home IP, they can look up my home IP at all times.
I like to self-host things, so I also self-host Headscale (a private tailnet) and private DERP relay nodes (they're like TURN). Since DERP uses HTTPS and can run on 443 using SNI, I get access to my network even at hotels and other shady places where most UDP and TCP traffic is blocked.
Tailscale ACLs are also great; it takes more work to achieve the same result with OpenVPN.
And Tailscale creates a wireguard mesh which is great since not everything goes through the central server.
You should give it a try.
Immich does require some CPU and also GPU for video transcoding and vector search embedding generation.
I had Immich (and many other containers) running successfully on AMD Ryzen 2400G for years. And recently I upgraded to 5700G since it was a cheap upgrade.
Immich's current integration solutions (like "External Libraries") treat the archive as a read-only view, which leads to a fragmented user experience:
- Changes, facial recognition, or tagging remain only within Immich’s database, failing to write metadata back to the archival files in their original directory structure (last time I checked; it might be better now).
- My established, meaningful directory structure is ignored or flattened in the Immich view, forcing the user to rely entirely on Immich’s internal date/AI-based organization.
My goal (am I the only one?) of having one app view all photos while maintaining the integrity and organizational schema of the archival files on disk is not yet fully met.
Immich needs a robust, bi-directional import/sync layer that respects and enhances existing directory structures, rather than just importing files into its own schema.
I know how much Adobe is hated around any creative circle, but tbf I find that Lightroom CC does this pretty well. Adobe has a well-done, simple helper app that does just that: downloads the entirety of your library locally, with all pictures, all edits, everything. For backup purposes it's perfect. Lightroom might be expensive for amateurs, but if you do even just a couple of photo jobs per year, it's worth every cent.
Apple Photos plays poorly when you want to put the library on an external drive (and even more poorly when you want to put it on a networked drive).
That's why I like how Photoprism just uses my files as they are without touching them (I think immich can do that as well now, but it wasn't so in the past). I can manage the filesystem encryption myself if I want to.
It is straightforward, but so is the NixOS module system, and I could describe writing a custom module the same way you described custom Docker images.
The fact that they don't support sub-albums makes it an absolute no-go for me.
I didn't know about Lychee prior to your comment, but given that they support what should be a basic feature of photo management software (unlike Immich), I'll give it a try.
Thanks for the suggestion!
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services.
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It’s not a fair comparison with Google, because Google has a much bigger target on their back. There are millions of users of Google, so the value of hacking Google is very high. The value of hacking a random Immich instance is extremely low.
The main reason: I don’t trust software NOT to delete my photos. (Yes, I have an off-site backup, but the restore would take time.)
It might not be as easy as rsync to transfer data out, but I would trust it way more than some of the folder based systems I've had with local apps that somehow get corrupted/modified between their database and the local filesystem. And I don't think ext4 is somehow magically more futureproof than Postgres. And if no-one else writes an export tool, and you feel unable to, your local friendly LLM will happily read the schema and write the SQL for you.
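To make that concrete, the escape-hatch query is genuinely small. Something like this (the table and column names are my guesses at the schema, so inspect the real one with `\d` in psql before trusting them):

```sql
-- Dump every asset's on-disk path together with the album(s) it belongs to
SELECT a."originalPath", al."albumName"
FROM assets a
LEFT JOIN albums_assets_assets aa ON aa."assetsId" = a.id
LEFT JOIN albums al ON al.id = aa."albumsId"
ORDER BY a."originalPath";
```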
People seem very happy about Immich, so I'm tempted to try. But people seemed very happy about Nextcloud as well, so it's difficult to tell.
Do you have to ever open the app though? On iOS/Android?
In my case I would need it to run on the phones of my family members, and they probably will never open the app.
The selling point for me is that it is NOT TooBigTech. It doesn't have to be as good as TooBigTech, but it has to be reliable enough. In my case it means that it should be able to sync from iOS/Android, in the background, even if the user never opens the app, and it should never get out of sync and require setting up everything again. Nextcloud fails at that.
This doesn't work properly on Nextcloud (it sometimes gets out of sync and then I'm screwed because I have to reset the app on my family member's phone and have them resync for hours).
One value of Tailscale for a ton of simple use-cases is that people don't have time / don't want to learn.
Everything works well, it's comparable in speed to Google Photos for me, and scrolling to specific dates works fine.
How long ago did you try it? I've only been using it for a few months so maybe it's improved over time.
I have been testing Nextcloud for backing up photos from my family members' phones. Wouldn't recommend.
The sync on iOS works well for a while, then it stops working: some files get "locked" and error messages appear, or it just silently stops syncing, and the only way I've found to recover is to essentially restart the sync from scratch. It will then reupload EVERYTHING for hours, even though 95% of the images are already on the server.
Note that in my use-case, the user never opens the app. It has to work in the background, always, and the user should not have to know about it.
Really looking for a system where I can install the app on my parents' iPhones and it backs up their photos to my server without them having to even know about the app. They won't open it, ever.
Nextcloud fails at that.
In my case I want to host on my personal server at home, so it feels actually nicer to not have E2EE. I basically would like to have the photos of all my family members on a hard disk that they could all access if needed (by plugging it into their computer).
It's not why I use sync services. All my photos fit on my devices (more or less). But I want to have seamless access to my files from both of my devices. And most importantly the sync is my first line of backup, i.e. if my phone gets obliterated I don't lose a day or two of files and photos, I only lose a couple of minutes.
And the jump from getting rid of people you hate who contribute to your project (and whom you can do little harm to) to getting rid of people you hate who are of no use to you (and to whom you can do genuine damage, e.g. by installing a Tor exit node) is a step down if you think you could get away with it.
I really would like something like this.
It’s not too bad. As others have said, AI makes it easy to get right.
> Why not encrypt your server?
I’d like to provide the service to my semi-extended family — not just me and my partner, but also my parents and siblings. And I respect their privacy, so I want to eliminate even the possibility of me, system administrator, accessing their photos.
Obligatory link: https://youtube.com/watch?v=SxdOUGdseq4
You can do the same with any configuration manager, such as Puppet, Salt or Chef.
You'll have plenty of time to write your export script before Postgres ever disappears completely from all the bytes stored on our planet.
Also, are you saying you don't do backups?
Tailscale makes it simple for the user - no need to set up and maintain complex configurations, just install it, sign in with your SSO and it does everything for you. Amazing!
If your solution to an issue is "just reset the Redis cache", this is when I am done.
Immich solves the wrong problem. I just want the household to share photos - I don't want to host a Google Photos for others.
Move to Shopify and LearnWorlds. Integrate the two. Stop self hosting. (They’re not large enough to do it well; and it already caused them a two week outage.)
I then have Proxmox back it up to Proxmox Backup Server running in a VM, and it has a cron job that uploads the whole backup of everything to Backblaze B2.
The backup script to B2 is a bit awful at the moment because it re-uploads the whole thing every night... I plan on switching to something better like Kopia at some point when I get the time.
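In the meantime, something file-incremental like rclone would already avoid the nightly full upload (the remote name, bucket, and paths here are placeholders):

```shell
# crontab sketch: nightly sync of the PBS datastore to B2.
# rclone sync only uploads files whose size/modtime changed.
0 3 * * * rclone sync /path/to/pbs-datastore b2:my-bucket/pbs --fast-list
```

PBS already deduplicates and prunes its chunk store locally, so a plain file-level sync of that directory stays incremental on the B2 side too.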
That is a totally reasonable view. But others have different preferences. I, for example, do not want to share all my photos with Google, govvies and anyone else they leak them to.
So I self host, back up and share my files with the family. I can always dump what I want to insta, etc. but it is my choice what to share, picture by picture, with default "off". And have no dark patterns trying to catch a finger accidentally hitting a "back up to cloud" for the full album.
That, to me, is a big deal, worth dealing with occasional IT hassles for. Which is just a personal preference.
Sure it could be easier/safer to manage, everything can be better.
Over the last couple of years hosting it I had a single issue with an upgrade, but that was because I simply ignored the upgrade instructions and YOLOed the docker compose update.
Again, is it perfect? No. Would I expect a non-tech-savvy user to manage their own instance? Again, no.
Our quickstart.sh[1] bundles Minio, but you can configure Ente to use RustFS[2] or Garage[3] instead.
[1]: https://ente.io/help/self-hosting/#quickstart
I love that the consumer space is getting this kind of attention. It’s one of the biggest opportunities for big tech to lock people into their ecosystem, as photos are something everyone cherishes. You can extort people with ever increasing subscription fees because over time they reach a scale with their own photos that makes it inconvenient to manage themselves. It’s nice to have multiple options that are not Google or Apple.
That sounds about 100x more difficult to me
I have been using it for about 1.5 years, and I haven't had a single problem, which is quite incredible for software that basically has all the features that Google Photos has.
Pixelfed may be what the parent wants then. I don't like that it is PHP, but as long as they adhere to the ActivityPub protocol, we can roll our own in whatever flavor.
Nextcloud uses the location permission for some reason, presumably to wake up the app in the background once in a while? At least it can be closed (and "swiped away") for 2 months and keep syncing. Until it breaks and stops working entirely.
Upgrades are frequent but no hassle.
I have been running this for half a year. It might have been more work earlier?
My household is using this for our shared photos repository and everyone can use it. Even the kids.
There is both direct web access and an iPhone app.
Having seen a lot of companies and startups doing exactly that: more or less everyone regrets it. Either you end up with so much traffic through these vendors that you regret it financially, or you want to change some specific part of your page or your purchase process, which Shopify doesn't let you change, and you'll end up needing to switch or be sad, or, as I regularly have to (because we don't get the resources and time to switch), try to manipulate the site through weird, hacky JavaScript snippets that modify the DOM after it loads.
It's literally always the same. They get you running in no time, and in no time you're locked into their ecosystem: No customization if they don't want it; pricing won't scale and just randomly changes without any justification; if you do something they don't like they'll just shut you down.
> Stop self hosting.
Worst mantra of the century, leading to huge dependencies, vendor lock-in, monopolies, and price gouging. This is only a good idea for a prototype, and only as long as you're not going to run the prototype indefinitely but will eventually replace it. And maybe for one-person companies who just want to get going and don't have resources for this.
Switching the ecosystem from something like Shopify to some other shop software requires a lot of manual work, and some of the stuff won't even be transferable 1:1.
Fixing some issue with your WordPress installation will require a person who can google and knows a bit about web servers, and maybe containers, and will usually go pretty fast, as WordPress is open source and runs almost half the internet, and almost every problem that comes up will have been solved in some StackOverflow thread or GitHub issue.
Usually though, if you run WordPress and you're not doing a lot of hacky stuff, you will not encounter problems. Vendors shutting you down, increasing their pricing, or shutting down vital features in their software, happens regularly though. And if it happens, shit hits the fan.
I think it's the best of every world. Self contained, with an install script. Can bring up every dependent service needed all in one command. Even your example of "a simple script" has 5 different expectations.
Also have you read some of the setup instructions for some of these things? I'd be churning out 1000 lines of ansible crap.
Either way, since Proxmox 9.1 added at least initial support for Docker-based containers, the whole argument's out the window anyway.
The problem was when I had to change some obscure .ini file in /etc for a dependency of something new I was setting up. Three days later I'd realise something unrelated had stopped working, and then I had to figure out which change over the last many days caused it.
For me this is at least 100x more difficult than writing a Nix module, because I'm simply not good at documenting my changes in parallel with making them
For others this might not be a problem, so then an imperative solution might be the best choice
Having used Nix and NixOS for the past 6-7 years, I honestly can't imagine myself using anything other than declarative configuration again - but again, it's just a good fit for me and how my mind works.
I updated the container for usual appliance maintenance. Entire thing is toast. Metadata files can't be read, mounted, permission issues and more. It's been four months since.
That’s all the author is trying to do. He isn’t trying to avoid or replace Google Photos - just have a local backup.
Even Apple has a Windows app that does that for iCloud Photos
Ente could go out of business tomorrow and I’d still have all my photos, neatly organized into folders.
And I don’t have to bother with self-hosting overhead. Or I could self host, too, if I wanted. But I still need an off-site backup so I might as well pay for the cloud service.
I’m asking because you spoke to me when you said “because I'm simply not good at documenting my changes in parallel with making them”, and I want to understand if NixOS is something I should look into. There are all kinds of things like immich that I don’t use because I don’t want the personal tech debt of maintaining them.
A much more responsive and clearer UI and the Golang backend are the two main subjective advantages.
Have you seen how bad the Nix documentation is and how challenging Nix (the language) is? Not to mention that you have to learn Yet Another Language just for this corner case, which you will not use for anything else. At least Guix uses a lisp variant so that some of the skills you gain are transferable (e.g. to Emacs, or even to a GP language like Common Lisp or Racket).
Don't get me wrong, I love the concept of Nix and the way it handles dependency management and declarative configuration. But I don't think we can pretend that it's easy.
You can do this with a few scripts and the Immich API - but that's not something the average user will do.
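As a hedged sketch of what such a script might look like (the /albums endpoint path and x-api-key header match my reading of the Immich API docs, but treat them as version-dependent assumptions):

```python
import json
import urllib.request


def make_request(base_url: str, api_key: str, path: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated Immich API request (x-api-key header)."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}{path}",
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def album_payload(name: str, asset_ids: list[str]) -> dict:
    """Payload for creating a shared album from a set of asset IDs."""
    return {"albumName": name, "assetIds": asset_ids}


# Example against a hypothetical local server (not run here):
# req = make_request("http://immich.local:2283/api", "MY_KEY", "/albums",
#                    album_payload("Best of 2025", ["asset-uuid-1"]))
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```

From there, a cron job or a phone shortcut hitting that script gets you most of the "share to one album quickly" flow.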
In non-containerized applications, the data & state live outside the application, stored in files, a database, a cache, S3, etc.
In fact, this is the only way containers can decouple programs from state — if it’s already done so by the application. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
Almost all my Google Photos "people" are mix-and-matched similar looking faces, so it's borderline useless. Immich isn't perfect, but it gives me the control to rerun face recognition and reassign faces when I want, even on my ancient GTX 1060.
... Why? I don't know what developer purge you're talking about, but getting rid of people running a project almost never means suddenly they'll start to get rid of users, I'm not sure why that assumption is there. Not to mention that they couldn't even "purge users" if they wanted to, unless they make the download URLs private and start including some licensing schema which, come on, hardly is realistic to be worried about...
* With NixOS, you define the configuration for the entire system in one or a couple .nix files that import each other.
* You can very easily put these .nix files under version control and follow a convention of never leaving the system in a state where you have uncommitted changes.
* See the NixOS/infra repo for an example of managing multiple machines' configurations in a single repo: https://github.com/NixOS/infra/blob/6fecd0f4442ca78ac2e4102c...
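For a concrete taste of the bullet points above: recent nixpkgs ships a services.immich module, so a minimal sketch might look like this (option names are assumptions based on current nixpkgs and may differ by release):

```nix
# configuration.nix (sketch; assumes the services.immich module in nixpkgs)
{ config, pkgs, ... }:
{
  services.immich = {
    enable = true;
    host = "0.0.0.0";
    port = 2283;
    mediaLocation = "/var/lib/immich";
  };
  # Only needed if you want LAN access without a reverse proxy
  networking.firewall.allowedTCPPorts = [ 2283 ];
}
```

Commit that file, run `nixos-rebuild switch`, and the diff in git is the complete record of what changed on the system.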
Flip a switch and then what, are you getting an isolated public URL to share? Or do you have your infrastructure exposed to the internet with the shared URL pointing to your actual server where the data is hosted?
Of course, you may have reasons to do that. But then you also own the maintenance.
I have never had to maintain any PG extensions. Whatever they put in the image, I just run. And so far it has just worked. Upgrades are frequent and nothing has broken on upgrade - yet at least
I decided to try Nextcloud exactly because of this. My problem with it is more that the whole thing is a bit unreliable. Like once in a while the app will get into a state where the only way I found to recover is to just erase everything and re-sync everything. And the app will resend ALL the pictures, even though they are already on the server.
And I can't do that with my family members' phones. It doesn't matter to me if the app takes a month to sync the photos, but it has to require zero maintenance. I can deal with the server side, but I need it to "just work" on the smartphones.
I totally disagree. You do need a tiny bit of command line experience to install and update it (nothing more than using a text editor and running `docker compose up`), but that's really it. All administration happens from the web UI after that. I've been using Immich for at least 2 years and I've never had to manually do something other than an update.
> Immich solves the wrong problem. I just want the household to share photos - I don't want to host a Google Photos for others.
Honestly, I can't understand what exactly you're expecting. If Google Photos suits your needs for sharing photos with others, that's great! As for Immich, have you read how it started[0]? I think it's solved the problem amazingly well and it still stays true to its initial ambitions.
[0]: https://v1.142.1.archive.immich.app/docs/overview/welcome
Immich may not be the pinnacle of all software development, but with the alternative being Google photos:
- Uploading too many photos won't clog my email and vice versa
- I'm not afraid of getting locked out of my photo account for unclear reasons and being unable to reach anyone to regain access
- If I upload family photos from the beach, then my account won't get automatically flagged/disabled for whatever
- Backups are trivially easy compared to Google takeout
- The devs are reachable and responsive. Encounter a problem? You'll at least reach a human being instead of getting stranded with a useless non-support forum
I would instead say that my (and my family's) photos are too important to me to pass their hosting on to a company known for its arbitrary decisions and then being an impenetrable labyrinth if there is an issue.
So you do pay some price, but it is an illusion to think that the price of Google photos (be that in cash, your data or your effort) is much lower.
Things that did break during this time:
- my hacky remote filesystem
- network connectivity of a too-cheap server
but these were on me and my stinginess.
Immich is the best end user focused app I've ever ran in a container.
Setup immich VM or docker container with a cloudflare tunnel
Front access with Cloudflare Access (ZeroTrust) for free.
Set "can only be accessed by users with email = xyz@myuser"
Done.
Now assuming this is the same user email as the one you shared photos with, there is a base level of security keeping the riffraff away.
Home IP is never exposed either, because it's proxied through the cf tunnel.
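The tunnel half of those steps can be sketched as a cloudflared config (the Access policy itself lives in the Zero Trust dashboard; the tunnel ID, hostname, and paths below are placeholders):

```yaml
# ~/.cloudflared/config.yml (sketch; IDs and hostnames are placeholders)
tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: photos.example.com
    service: http://localhost:2283   # Immich's default port
  - service: http_status:404         # required catch-all last rule
```

With a CNAME for photos.example.com pointing at the tunnel, nothing inbound ever reaches the home IP.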
In the beginning I was doing one change, writing that change down in some log, then doing another change (a process I'd mess up within about five minutes)
Now I'm creating a new commit, writing a description for it to help myself remember what I'm doing and then changing the Nix code. I can then review everything I've changed on the system by doing a simple diff. If something breaks I can look at my commit history and see every change I've ever made
It does still have some overhead in terms of keeping a clean commit history. I occasionally get distracted by other issues while working and I'll have to split the changes into two different commits, but I can do that after I've checked everything works, so it becomes a step at the end where I can focus fully on it instead of yet another thing I need to keep track of mentally
On the other hand, maybe AI can help remove some of that pain for me now. Just have Claude figure out what's wrong. (Until it decides to hallucinate something, and makes things worse)
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and mess with all potential issues if software builds ...
Yes, in the happy path it is just a "docker build" which updates stuff from a Linux distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, as all people write their Dockerfiles differently, handle build steps differently, use different base Linux distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
It's not perfect but it's great to be able to just search for things in a photo and find any matches across dozens of TBs of raws, without having to have some 3rd party cloud AI nonsense do all the work.
The only thing I wish they could get integrated is support for jxl compressed raws, which requires them to compile libraw with Adobe's SDK.
So you go from having to worry about one image + N services to up-to-N images + N services.
Just that state _can_ be outside the container, and in most cases should. It doesn't have to be outside the container. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you remove the container, that data is basically gone, which is why the state usually does live outside, like you are saying.
The quick answer is complexity and the amount of energy I have, since I'm mostly working on my homelab after a full work day
Some things also don't run that often or I don't check up on them for some time. Like hardware acceleration for my jellyfin instance stopped working at some point because I was messing around with OpenCL and I messed up something with the Mesa drivers. Didn't discover it until I noticed the fans going ham due to the added workload
I installed immich in a VM. And the VM is using GPU passthrough. I don't see how it's overkill: immich is a kitchen sink with hundreds if not thousands of dependencies and hardly a month goes by without yet another massive exploit affecting package managers.
I'm not saying VM escapes exploit aren't a thing but this greatly raises the bar.
When one installs a kitchen sink as gigantic as immich, anything that can help contain exploits is most welcome.
So: immich in a VM and if you want a GPU, just do GPU passthrough.
That said, I agree that immich's facial-recognition search is nice.
Searching for "nextcloud ios background sync" shows a whole bunch of forum posts and bug reports about it not working well unless you have the application open.
One issue (https://github.com/nextcloud/ios/issues/2225) has been open since 2022 and seems to still not be working properly. Another (https://github.com/nextcloud/ios/issues/2497) has been open since 2023.
For something that works well it seems like a ton of people have a lot of issues with it. Are you sure you're on the latest iOS version? Seems like people experience the issues when they're on a later version.
But it works on Ubuntu, it works on Debian, it works on Mac, it works on Windows, it works on a lot of things other than a Nix install.
And I have to know Docker for work anyhow. I don't have to know Nix for anything else.
You can't win on "it's net easier in Nix" than anywhere else, and a lot of us are pretty used to "it's just one line" and know exactly what that means when that one line isn't quite what we need or want. Maybe it easier after a rather large up-front investment into Nix, but I've got dozens of technologies asking me for large up-front investments.
Yeah, like TrueNAS, where they've decided it was a good idea to run Kubernetes on NAS hardware, with all the fun and speed that comes with it. You just hit "Install", wait five minutes, and you get something half-working but integrated with the rest of their "product".
I'll stick with configuration I can put in git, patch when needed and is easy to come back to after 6 months when you've forgotten all about the previous context you had.
Doing all that with containers is a spaghetti soup of custom scripts.
These things are a proxmox home lab user's lifeline. My only complaint is that you have to change your default host shell to bash to run them. You only have to do that for the initial container creation though.
For example, when they moved between Postgres container versions, it required a manual edit to the compose file to adjust the image. Even if you managed to get it set up initially in docker, it’s these sorts of concepts that are way more advanced than the vast majority of people who may even be interested in self-hosting.
For a hobbyist self-hoster it’s cool and fun, but not something at this point I’d trust my photos to alone. I have considered Ente for that but today it’s still iCloud Photos.
Afaict I can't use a tailnet address to talk to that (or is it magic dns I'm thinking about? it was a while since I dug in). I suppose I could have a different device be an exit node on my internal network, but at that point I figure I may as well just keep using my wireguard vpn into my home network. I'm not sure if tailscale wins me anything.
Do other people have a solution for this? (I definitely don't want to use tailscale funnel or anything. I still want all this traffic to be restricted like a vpn.)
Paying LearnWorlds + Shopify $30K a year, if it were even that extreme, is cheaper than an engineer and certainly cheaper than an outage over Giving Tuesday, as they found out the hard way. They got hacked and were down for the most high-traffic nonprofit donor day of the year in their effort to save a few bucks. It wasn’t even the plugins, but the instance underlying the shared hosting.
> It's literally always the same. They get you running in no time, and in no time you're locked into their ecosystem: No customization if they don't want it; pricing won't scale and just randomly changes without any justification; if you do something they don't like they'll just shut you down.
You’re also locked into an ecosystem. It’s called Stripe or PayPal. Almost all of that applies anyway. Don’t forget that a significant amount of customization is restricted to streamline PCI compliance; you can do illegal things very easily. Install an analytics script that accidentally captures their credit card numbers, and suddenly you’re in hot water.
> Leading to huge dependencies, vendor lock ins, monopolies, price gauging
Have you analyzed how many dependencies are in your self hosted projects? What happens to them if maintainers retire? How long did it take your self hosted projects to resolve the 10/10 CVE in NextJS? And as for price gouging, if it’s cheaper than an engineer to properly support a self-hosted solution, I’ll still make that trade as even $80K for software is cheaper than $120K to support it. If you’re at the scale where you don’t have a proper engineer to manage it, do not self host. Business downtime is always more expensive than software (in this case, 5 salaries for 2 weeks to do absolutely nothing + lost donations + reputational damage + customer damages, because “self hosting is easy and cheaper”).
I may try to package it, and if it proves to be easy to maintain, I might file an ITP.
OK, I'll stick with Ubuntu + KDE (so Kubuntu really) on all my machines.
as an un-solicited drive-by suggestion: see if they're owned by root? you may have sudo'd the original run.
since you're at least a few months behind though, do check for breaking changes: https://github.com/immich-app/immich/discussions?discussions... they've pretty consistently had instructions, but you unfortunately mostly have to know to look for it. not sure why the upgrade notification doesn't make it super incredibly painfully obvious.
On hardware that doesn't have docker, or is significantly more resource constrained somehow: yea, I completely believe it. I haven't tried that, but given the features it makes total sense that it'd be harder.
That’s true of docker too.
2) that's nice!
3) "it doesn't throw my data away" is the last selling point?! Isn't that just assumed?!
I didn't even realize this tool existed. I tried something like it a while back, but it didn't work to my satisfaction (I don't remember why), so my awful, awful, awful workflow is to use the Google Takeout functionality to generate something like 8 .tar.gz files (50 gigabytes each), manually download each one (being prompted for authentication each time), then rsync them over to my local server, and finally uncompress them.
It's very lovely how much Google doesn't want you to exfiltrate your own data.
I wonder at which point I'll get annoyed enough to go through the effort of setting up immich. Which, naturally, will probably involve me re-working my local server as well. The yak's hair grows faster than I can shave it.
Is this like "Band-Aid" that used to be a brand name but now people just use it generically?
The community developing nix had a falling out with a couple highly unsavory groups that basically consisted of the Palmer Luckey Slaughter Bot Co. and a couple guys who keep trying to monetize the project in extremely sleazy ways. This wasn't some sort of Stalinistic purge, it was people rejecting having their name attached to actual murder and sleazy profiteering.
Wait, other comments were saying that one of Immich's weak points is backups. Someone else replied that the postgres structure is sane so you can run sql queries to get your data out if needed. Now you're saying it's plain old files. I'm confused
If any kind of apt upgrade or similar command is run in a dockerfile, it is no longer reproducible. Because of this it's necessary to keep track of which dockerfiles do that and keep track of when a build was performed; that's more out-of-band logging. With NixOS I will get the exact same system configuration if I build the same commit (barring some very exotic edge cases)
Besides that, docker still needs to run on a system, which must also be maintained, so Docker only partly addresses a subset of the issue
If Docker works for you and you're not facing any issues with such a setup, then that's great. NixOS is the best solution for me
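The apt point can be made concrete with a sketch contrasting the two styles (the digest and package version below are placeholders, not real values):

```dockerfile
# Not reproducible: resolves whatever the apt mirror holds at build time,
# so two builds of the same Dockerfile can produce different images
FROM debian:bookworm
RUN apt-get update && apt-get -y upgrade

# Closer to reproducible: pin the base image by digest and pin exact
# package versions (both values here are placeholders for illustration)
FROM debian:bookworm@sha256:<base-image-digest>
RUN apt-get update && apt-get install -y --no-install-recommends \
    libexample1=1.2.3-1+deb12u1
```

Even the pinned variant only holds as long as the mirror still serves that version, which is part of the out-of-band bookkeeping being described.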
I think the previous commenter misunderstood your question, this is the answer (you can also put it behind something like cloudflared tunnels).
Immich is a service like any other running on your server, if you want it exposed to the internet you need to do it yourself (get a domain, expose the service to the internet via your home ip or a tunnel like cloudflared, and link that to your domain).
After that, Immich allows you to share public folders (anyone with the link can see the album, no auth), or private folders (people have to auth with your immich server, you either create an account for them since you're the admin, or set up oauth with automatic account creation).
> RAM: Minimum 4GB, recommended 6GB
Wow. When factoring in the OS, that's an entire system's worth of RAM dedicated to just hosting files!
What does it use all this for? Or is this just for when it occasionally (upon uploading new pictures) loads the image recognition neural net?
I'd have to stop Immich whenever I want to do some other RAM-heavy task. All my other services (including database, several web servers with a bunch of web services, a Windows VM, git server, email, redis...) + the host OS and any redundancy caused by using containers, use 4.6GB combined, peaking to 6GB on occasion
> CPU: Minimum 2 cores, recommended 4 cores
Would be good to know how fast those cores should be. My hardware is a mobile platform from 2012, and I've noticed each core is faster than a modern Pi as well as e.g. the "dedicated cores" you get from DigitalOcean. It really depends what you run it on, not how many of them you have
I actually thought about doing this with NixOS last year, but it seemed counterproductive compared to how I self-host: I don't want to manage configurations in multiple places. If I switched everything it would likely be just as much work, and then I'm reliant on Nix. Over the years I've gone from the OS being a mix of Arch and Ubuntu to mostly just Debian for my self-hosting LXCs or VMs. I already have the deployments templated, so there's nothing for me to do other than map an IP, give it a hostname, and start it.
To each their own, but I don't want to be beholden to NixOS for everything. I like the container abstraction on LXC and VMs and it's been very good to minimize the work of self-hosting over 40+ services both in my home lab and in the bare metal server I lease from Hetzner.
You're usually deep within a social bubble of some sort if you find yourself assuming otherwise.
I haven’t had any network volume issues. It’s an SMB volume provided by trueNAS mounted on a Windows machine.
I will say, if you mess up your volume like the time I took my NAS down for maintenance for a few days, the export failure wasn’t incredibly loud. I don’t think it notified and screamed at me that it wasn’t working. So I guess that is a significant risk.
Also according to https://immich.app/cursed-knowledge the notify issue was fixed July 2024.
LLM + Nix (ideally NixOS) changed everything imo.
After reading TFA last night, it was less work to tell Claude Code to get Immich running on my home server (NixOS), add the service to Tailscale, and then give me a todo list reminder of what I needed to do to mirror my Macbook iCloud/Photo.app gallery to it and then see it on my iPhone...
...than any of the times I've had to work around "black box says no", much like your example.
Just a couple years ago, this wasn't the case. I didn't have the energy to ssh into my server and remember how things are set up and then read a bunch of docs and risk having to go into a manual debug loop any time a service breaks. LLM does all that. I never even read Nix docs. LLM does that too.
In fact, it was fairly fun to finally get a good cross-platform setup working in general to divest from Apple thanks to LLM + Nix. I really like where things are going in this regard. I don't need any of this crap anymore that I used to use because it was the only way to get something that Just Worked.
By the time I lose my software job and have to compete with you lot, H1Bs, and teenagers to fold sweaters at Hollister, I won't need to use a single bit of proprietary software. It will be a huge consolation.
The main "weak point" is probably that it doesn't have S3 integration, which is entirely fair. But for my purposes, rcloning the library folder (or e.g. rsync to a btrfs for free deduplication if you reorganize) is more than good enough, because that folder provides enough data for it to restore everything I care about.
For DB backups for keeping everything, there are configurable auto-backups, but it's only a snapshot to a local filesystem. So you'd need to mirror that out somehow, but syncthing/rclone/etc exist and there are plenty of options.
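One way to mirror both the library and those local DB snapshots off-site is a systemd timer around rclone; all paths, the remote name, and the schedule here are assumptions for illustration:

```ini
# /etc/systemd/system/immich-backup.service (paths and remote are placeholders)
[Unit]
Description=Mirror Immich library and DB dumps off-site

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync /srv/immich/library remote:immich/library
ExecStart=/usr/bin/rclone sync /srv/immich/backups remote:immich/backups

# /etc/systemd/system/immich-backup.timer
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

`systemctl enable --now immich-backup.timer` and the mirroring runs nightly without a cron entry to remember.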
Google Photos isn't perfect either, but I never saw these kinds of issues when I was still using it.
> It's not simpler than a container option and creates a single point of issue. The container option is tested and supported by Immich, they recommend it. I don't want to be beholden to NixOS for everything.
I think there's a misunderstanding here. You aren't beholden to NixOS here. You don't have to use nixpkgs nor home-manager modules. You can make your own flakes and you can use containers, but the benefit is still that you set it up declaratively in config.
It's not incompatible with anything you've said, it's just cool that it has default configurations for things if you aren't opinionated.
> I don't want to manage configurations in multiple places.
I've accumulated one big Nix config that configures across all my machines. It's kind of insane that this is possible.
Of course, it would seem complicated looking at the end result, but I iterated there over time.
Example: https://github.com/johnae/world -- fully maintained by a clanker (https://github.com/johnae/world/pulls?q=is%3Apr+is%3Aclosed)
It's just declarative configuration, so you also get a much better deliverable at the end than running terminal commands in Arch Linux, and it ends up being less work.
Of what I selfhost, I've never felt I was having to concede on anything.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be at the lack of dockerism in my world, I'm equally confounded at the lack of compartmentalization in their browsing - using just one browser and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
But we live where we live.
> it is no longer reproducible
The problem I have with this is that most of the software I use isn’t reproducible, and reproducible isn’t something that is the be all and end all to me. If you want reproducible then yes nix is the only game in town, but if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
> docker still needs to run on a system
This is a fair point but very little of that system impacts the app you’re running in a container, and if you’re regularly breaking running containers due to poking around in the host, you’re likely going to do it by running some similar command whether the OS wants you to do it or not.
3) not compared to iCloud photos which I migrated from. You can export a whole album with Google at original quality with 1 click. With Apple you can only do 1000 at a time. For apple you can ask for a whole account export, but that takes a few days and gives you all photos. (Similar to Google Takeout).
For some I'm sure that's the case; it wasn't in my case.
I ran docker for several years before. First docker-compose, then docker swarm, finally Nomad.
Getting things running is pretty fast, but handling volumes, backups, and upgrades of anything in the stack (OS, scheduler, containers, etc) broke something almost every time. Doing an update to a new release of Ubuntu would pretty much always require backing up all the volumes and local state to external media, wiping the disk, installing the new version, and restoring from the backup
That's not to talk about getting things running after an issue. Because a lot of configuration can't be done through docker envs, it has to be done through the service. As a consequence that config is now state
I had an nvme fail on me six months ago. Recovering was as simple as swapping the drive, booting the install media, installing the OS, and transferring the most recent backup before rebooting
Took about 1.5 hours and everything was back up and running without any issues
I'd still want to go through any changes with a fine tooth comb to look for security issues and to make sure I know what it is adding and removing, but it's saner than letting an LLM run amok on a live system.
Annoyingly you can't create a person that way yet with immich, but that's where digikam helps.
I keep that running on a VPS, but with proper firewalling you could probably run it on the same machine.
And how your albums, instead of being metadata, are folders, into which files are duplicated. It's literally shittily coded malicious compliance so they can pretend to let you have your data.
(Oh, you view your folders as important metadata that should be attached to a single copy of the image? Cool cool, write a bunch of code.)
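If you do end up writing that "bunch of code", a rough sketch of the idea: collapse the duplicated Takeout copies by content hash and replace them with hardlinks, so each album folder keeps its entry while the bytes exist only once. Paths are placeholders, and this assumes a filesystem that supports hardlinks:

```python
import hashlib
import os
from pathlib import Path


def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def dedupe_tree(root: Path) -> int:
    """Replace duplicate files under root with hardlinks to the first copy.

    Returns the number of duplicates collapsed. Album membership survives
    as directory entries while each image is stored once.
    """
    seen: dict[str, Path] = {}
    collapsed = 0
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest = file_digest(path)
        if digest in seen:
            path.unlink()
            os.link(seen[digest], path)  # same inode, two directory entries
            collapsed += 1
        else:
            seen[digest] = path
    return collapsed
```

Run it over an unpacked Takeout tree and the "albums as duplicated folders" layout stops costing double the disk, at least.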
That’s my question. I’m sure it works fine on Android but I was under the impression that iOS/iPadOS restricts this unless the app is running in the foreground.
Android is more relaxed, but the vendors (like Samsung etc) will go around that and implement their own aggressive background-killing behavior. Sometimes this causes alarm apps to stop and not wake you up, etc.
The main reason is battery life. Tragically, this makes sense due to the cesspool of spam apps that plague their "curated" app stores. If you're an app developer who wants to use it responsibly, you're in for a world of trouble. I know because I am one of them (well, I consider myself responsible at least).
I think that the forum posts may be old, and/or a bunch of them may come from users who did not know that they had to set the location permission this way (which admittedly is unintuitive for photo syncing).
My issue is other bugs that make it painful, including the fact that I cannot trust that Nextcloud will eventually upload the whole photo gallery (it seems like some files regularly get "locked" w.r.t. "webdav", for some reason, and this never resolves).
There's actually no misunderstanding and this is exactly my point. With any Nix config you are beholden to that specific platform. What I'm saying is any other Linux distro can be dropped in with almost no changes in my existing implementation. I've already experienced breaking changes pre-Flakes with Nix and so I don't actually view it as stable as other options. Beyond that there's some politics surrounding Nix that I don't care to follow. So when you say "all you have to do is write your own Flake"... Why? I already have something that's able to be reliably reinstalled in minutes if need be. I don't need a specific declarative set of tooling to get there.
I like the idea of a declarative setup, but I don't think Nix is mature enough nor does it bring enough differentiation to the plate to be worthwhile as of yet.
If it just needs it on occasion (and I can control when by not uploading at times where I'm using it for other purposes), that would probably be worth it since I have the spare capacity 99% of the time
> How do we break the deadlock? That’s where STUN comes in. [...] In Tailscale, our coordination server and fleet of DERP (Designated Encrypted Relay for Packets) servers act as our side channel
We are so screwed.