"photostructure info <file>" returns proper nativePath, but "photostructure sync <file>" does not

environment:
docker, using portainer

bind mount (host → container):
/mnt/back-01 → /back-01

on host, /mnt/back-01 is an nfs mount to: 192.168.86.246/export/back-01
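
for reference, the whole chain is roughly equivalent to this (a sketch; my actual setup is via portainer, so the fstab/docker syntax here is just illustrative):

```
# on the docker host (firsty): nfs mount from the OMV box, e.g. via /etc/fstab
192.168.86.246:/export/back-01  /mnt/back-01  nfs  defaults  0  0

# container bind mount (docker-run equivalent of the portainer volume mapping)
docker run -v /mnt/back-01:/back-01 ... photostructure/server
```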

so, uri from info looks good:
uri: `psnet://192.168.86.246/export/back-01/BACKUP/PICTURES/20140420/IMG_2405.JPG`

nativePath shown in the worker log when the sync command is issued manually:

```
"nativePath":"/back-01/back-01/BACKUP/PICTURES/20140420/IMG_2405.JPG"},"error":"p: build(): missing stat info for best file³ at (missing stack)"}}
```

it appears that `build()` hasn't computed the nativePath correctly, and has jammed in an additional folder, which looks like the mount folder name from the host.
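
to illustrate what i'm guessing at (just a sketch of the suspected mechanism, not photostructure's actual code): if the path relative to the mountpoint were computed against the host-side path instead of the container-side mountpoint, re-joining it would double the folder name:

```
# hypothetical reconstruction of the doubled folder, not PhotoStructure's actual code
mountpoint="/back-01"                                     # container-side mountpoint
hostpath="/mnt/back-01/BACKUP/PICTURES/20140420/IMG_2405.JPG"
relative="${hostpath#/mnt/}"                              # strips /mnt/ but leaves back-01/...
echo "$mountpoint/$relative"
# -> /back-01/back-01/BACKUP/PICTURES/20140420/IMG_2405.JPG
```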

i did the manual steps above after realizing that multiple hours of syncs produced 0 images in the database.
reviewing the log files showed this happening on every image.

weird that sync gets the uri right but gets the nativePath wrong, while info gets both right.

any thoughts? i can experiment with different mountpoints, trailing slashes, etc., but i do think there is a bug when info behaves differently from sync w.r.t. basic file info.


Welcome to PhotoStructure, @davecampbell !

Thanks for taking the time to report this! What version of PhotoStructure are you running? For docker, I suspect the latest :alpha image may be most stable.

For URI/path stuff to work properly, several things need to go right:

  1. PhotoStructure needs to extract mountpoints correctly. You can see what it thinks by running `photostructure info --mountpoints`.

  2. For each mountpoint, it needs to extract a reasonable volume uuid. You can see what it thinks by running `photostructure info --volumes`.

If you can share the results from those commands, that would help us sort this out. I don’t suspect there’s anything private in those results, but if you think there is, please DM me the result instead of adding the responses here.

Cheers!

thanks for the response - excited to help here.

no tag on the docker image, so i guess that is defaulting to :stable.

here is the result of those two commands:

```
/ps/app $ ./photostructure info --mountpoints
[ '/', '/back-01', '/back-02', '/ps/library' ]

/ps/app $ ./photostructure info --volumes
[
  {
    filesystem: 'overlay',
    mountpoint: '/',
    size: '40.9 GB',
    used: '11.6 GB',
    available: '27.2 GB',
    remote: false,
    updatedAt: 1702852382818,
    volsha: undefined
  },
  {
    filesystem: '192.168.86.246:/export/back-01',
    mountpoint: '/back-01',
    size: '983 GB',
    used: '890 GB',
    available: '93.7 GB',
    remote: true,
    remoteHost: '192.168.86.246',
    remoteShare: '/export/back-01',
    updatedAt: 1702852382818,
    volsha: undefined
  },
  {
    filesystem: '192.168.86.246:/export/back-02',
    mountpoint: '/back-02',
    size: '983 GB',
    used: '175 GB',
    available: '808 GB',
    remote: true,
    remoteHost: '192.168.86.246',
    remoteShare: '/export/back-02',
    updatedAt: 1702852382818,
    volsha: undefined
  },
  {
    filesystem: '/dev/mapper/ubuntu--vg-ubuntu--lv',
    mountpoint: '/ps/library',
    size: '40.9 GB',
    used: '11.6 GB',
    available: '27.2 GB',
    remote: false,
    updatedAt: 1702852382818,
    uuid: 'd62b5998-e92c-4484-8157-11abd564c5a3',
    volsha: '2SDNmKXgy'
  }
]
```

again - that all appears pretty kosher and correct.
and it's just odd to me that the info command on a specific file reports a nativePath that is reachable, yet the sync command on that same file doesn't result in the image being indexed, and the log files for that sync event show a nativePath that is unreachable (it has the additional /back-01 folder in the path).

but i'll let you review the results from those commands to see where something may be going awry.

here’s a diagram of the situation.
i looked through the UUID stuff, but it's not clear where all of that should be done.
on the docker host (in my case 'firsty')?

There are several bugs in mountpoint and volume parsing that I've fixed in the v2023 releases, so I'd suggest pulling the photostructure/server:alpha build.

@davecampbell that diagram looks fine. I suspect that after upgrading to :alpha, the /back-01 volume will have a volsha. If it still doesn't, I suspect the PUID userid that PhotoStructure is running as doesn't have sufficient permissions to write to /back-01/.uuid.

If you’re never going to move your library to another machine, and always have /back-01 point to the same NFS volume, you don’t need to fix the lack of a volsha, but if you want to, you can do it manually: https://photostructure.com/faq/what-is-a-volume/#-how-to-manually-add-uuid-files
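
For example, something like this on the docker host should work (a sketch: `uuidgen` and a PUID of 1000 are assumptions, so substitute your own values):

```
# on the docker host, at the root of the NFS mount:
uuidgen > /mnt/back-01/.uuid
cat /mnt/back-01/.uuid

# verify the userid PhotoStructure runs as (PUID, assumed 1000 here) can read it:
sudo -u "#1000" cat /mnt/back-01/.uuid
```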

i jammed a .uuid file onto /mnt/back-01, using the UUID of the USB HDD from the OMV host.

provided a specific folder to scan on that volume (as a path from inside the photostructure container) in the Settings, and we're OFF TO THE RACES.

images indexed immediately.

i did actually change the Setting to leave images where they were, as well, but i think adding the .uuid file was the trick.

let me know if you need any more diagnostics, but using the .uuid file seemed to be what solved this issue for me.

i still think there is something fishy with the situation as it was.
i’m under the impression the URI would be considered unique and could be used in place of UUID.
and the weird doubled folder in the malformed nativePath makes me think there is just some faulty path string manipulation or something.

and what are the requirements of the contents of that UUID file?
must it match the actual UUID of the drive?
or could it be totally random / made-up?

thanks - nothing critical here - able to now continue my evaluation.

the docker / portainer approach is pretty interesting, and crazy-easy to set up (aside from this nfs/uuid nuance).