So I’ve just installed PhotoStructure (going to be using Plus; currently on the trial), and for the past couple of days it’s been importing photos from my NAS. So far so good: it discovered over 400,000 items to import/process and had made it about 3/4 of the way through… however, I had to restart the host, which killed the LXC, and when I started everything back up, it seems to be starting the import from scratch. Is this expected?
For context, my setup is this: I have a Proxmox host, and PhotoStructure has its own LXC. I also have TrueNAS set up to serve over NFS, which is mounted on the Proxmox host and shared with the PhotoStructure LXC. I’ve double-checked permissions and looked at the access times in the .photostructure folder, and it seems to be updating the existing library just fine. But even though there are tons of already-imported files in the library, it’s carrying on as if it doesn’t see them… any ideas? It now shows about 6 days remaining, and I don’t mind waiting, but I’ve already lost about 4 days of progress, and if this is going to happen whenever I restart, it’s a bit of a blocker…
I unfortunately don’t know anything about Proxmox, LXC, etc. to provide any insight into why you’re experiencing this. But I can assure you that restarting the app (or container) should not restart a sync; it should pick up right where it left off. It does for me.
Howdy @yatesjr, welcome to PhotoStructure! Let’s get through these questions:
Does the app require safe shutdown?
PhotoStructure would prefer that you shut it down gracefully, as there are a bunch of cleanup operations at exit (like taking a database backup, validating that backup, and doing some other housecleaning), but these should only take a couple of seconds unless the storage is quite slow (like a spun-down remote HDD).
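If it helps to picture that exit cleanup, here’s a minimal sketch of a shutdown handler that backs up and then validates a SQLite library database. This is not PhotoStructure’s actual code: the `better-sqlite3` package and the file paths are assumptions for illustration only.

```typescript
// Hedged sketch only: graceful-shutdown cleanup (backup + validate the backup).
// "better-sqlite3" and all paths are assumptions, not PhotoStructure's real layout.
import Database from "better-sqlite3";

const LIBRARY_DB = "/photos/.photostructure/library.db"; // hypothetical path
const BACKUP_DB = `${LIBRARY_DB}.backup`;

async function shutdownCleanup(): Promise<void> {
  const db = new Database(LIBRARY_DB);
  try {
    // 1. Take a database backup (better-sqlite3 copies pages to the target file).
    await db.backup(BACKUP_DB);

    // 2. Validate the backup before trusting it.
    const check = new Database(BACKUP_DB, { readonly: true });
    const result = check.pragma("integrity_check", { simple: true });
    check.close();
    if (result !== "ok") throw new Error(`backup failed integrity_check: ${result}`);

    // 3. ...other housecleaning would go here...
  } finally {
    db.close();
  }
}

// Run the cleanup when the container manager asks the process to stop.
for (const signal of ["SIGTERM", "SIGINT"] as const) {
  process.on(signal, () => {
    shutdownCleanup()
      .catch((err) => console.error("shutdown cleanup failed:", err))
      .finally(() => process.exit(0));
  });
}
```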
On launch, PhotoStructure validates the library database, and if it’s not in a good state, it works through a series of strategies to recover the SQLite db. If none of them succeed, it should write an error to stderr and exit with a non-zero status code.
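To show the shape of that startup check (again, a hedged sketch, not the real recovery code; `better-sqlite3`, the paths, and the single fallback strategy are assumptions): validate the database, fall back to a known-good backup if validation fails, and bail out with a non-zero exit code if nothing works.

```typescript
// Hedged sketch only: launch-time validation with a fallback to the last backup.
import Database from "better-sqlite3";
import * as fs from "node:fs";

const LIBRARY_DB = "/photos/.photostructure/library.db"; // hypothetical path
const BACKUP_DB = `${LIBRARY_DB}.backup`;

function isHealthy(path: string): boolean {
  try {
    const db = new Database(path, { readonly: true, fileMustExist: true });
    const ok = db.pragma("integrity_check", { simple: true }) === "ok";
    db.close();
    return ok;
  } catch {
    return false;
  }
}

function openLibraryOrExit(): Database.Database {
  if (isHealthy(LIBRARY_DB)) return new Database(LIBRARY_DB);

  // One possible recovery strategy: restore the most recent validated backup.
  if (fs.existsSync(BACKUP_DB) && isHealthy(BACKUP_DB)) {
    fs.copyFileSync(BACKUP_DB, LIBRARY_DB);
    return new Database(LIBRARY_DB);
  }

  // Nothing worked: report to stderr and exit non-zero, as described above.
  console.error(`library database at ${LIBRARY_DB} could not be recovered`);
  process.exit(1);
}
```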
sync has a work queue (implemented as a secondary SQLite database) that should survive restarts so prior work isn’t redone, but the work queue is disregarded if too much time has passed since the last shutdown, on the theory that the disk contents may have changed enough to warrant rescanning the disk.
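A rough sketch of that restart-surviving queue with a staleness cutoff might look like this (illustrative only: the table names, columns, paths, and the one-day cutoff are all invented, not PhotoStructure’s actual schema or logic):

```typescript
// Hedged sketch only: a work queue that survives restarts unless it is stale.
import Database from "better-sqlite3";

const QUEUE_DB = "/photos/.photostructure/sync-queue.db"; // hypothetical path
const MAX_QUEUE_AGE_MS = 24 * 60 * 60 * 1000; // assumed cutoff: one day

const db = new Database(QUEUE_DB);
db.exec(`
  CREATE TABLE IF NOT EXISTS work_queue (path TEXT PRIMARY KEY, state TEXT NOT NULL);
  CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT NOT NULL);
`);

// On startup: if the last clean shutdown was too long ago, assume the disk may
// have changed underneath us and discard the queue so sync rescans everything.
const row = db
  .prepare("SELECT value FROM meta WHERE key = 'lastShutdownAt'")
  .get() as { value: string } | undefined;
const lastShutdownAt = row ? Number(row.value) : 0;

if (Date.now() - lastShutdownAt > MAX_QUEUE_AGE_MS) {
  db.exec("DELETE FROM work_queue");
}
// ...otherwise resume the pending entries in work_queue...

// On graceful shutdown: record the timestamp so the next launch can decide.
process.on("SIGTERM", () => {
  db.prepare(
    "INSERT INTO meta (key, value) VALUES ('lastShutdownAt', ?) " +
      "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
  ).run(String(Date.now()));
  db.close();
  process.exit(0);
});
```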
So, barring bugs, it should pick up from where it was last working.
Are you running v1.1 or a v2.1+ alpha build? v1.1 has some issues with larger libraries when running on machines with higher core counts (8+) due to SQLite write-log thrashing. v2.1+ should resolve this issue.
So it’s good to know what the expected behavior is. It’s been running solidly since the restart, and I haven’t had to restart the host or anything, so I let it continue in hopes it would finish. I’m running the v1.1.0 release and the instance is allocated 4 cores, but the host CPU is a 5950X, so I guess there could be enough activity even with fewer than the 8 you mentioned? The library is rather large too… around 300k assets processed so far, with about 100k left to go.
I’m going to see what happens when it completes the sync, assuming it finishes within a couple more days, and if it fails for whatever reason I’ll try out the alpha.
Okay, so I have an update: I noticed that it’s been going super slow for the past day or so, so I figured I’d try shutting down the server and restarting it to see if it picks back up where it left off. It does pick up where it left off, so there is some good news.
v2.1 is also dramatically faster than v1.1 on the initial scan. Since you’re using containers, it should be pretty trivial for you to try it. You might never look back at v1.1, for this and quite a few other reasons!