Version 2.1.0-alpha.1 is ready for testing!

You need to place this into the settings.toml file of your library.

Just as a random mention, the ‘what’s new’ link points to version 1.0.

That’s where I placed them. But it doesn’t help: after some time I get no connection to the server, and nothing interesting shows up when I logcat.


Welcome to PhotoStructure, @mexusbg!

The site.webmanifest is served as a static file: you should see it in your photostructure-for-servers/public directory.

It’s odd that the webservice is returning a 401: PhotoStructure’s static file server router returns 404s for unknown routes, so I believe that status is coming from your reverse proxy. What do you have in front of PhotoStructure?
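One quick way to confirm which layer is returning the 401 (a sketch: `photos.example.com` is a placeholder for whatever hostname your proxy serves, and 1787 is PhotoStructure’s default port):

```shell
# Ask PhotoStructure directly, then via the proxy, and compare status codes.
st() { curl -s -o /dev/null -w '%{http_code}' "$1" || true; }
echo "direct:  $(st http://127.0.0.1:1787/site.webmanifest)"
echo "proxied: $(st https://photos.example.com/site.webmanifest)"
```

If the direct request returns 200 (or 404) but the proxied one returns 401, the proxy is what’s rejecting the request.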


I guess I’m blind, but where are the sync reports stored?

They live in .photostructure/sync-reports in your PhotoStructure library.

I will add a proper link and web view for that directory hierarchy to the UI after user auth is done.

BMP support can work with PhotoStructure for Node, but only via GraphicsMagick and recompiling libvips. I’d suggest converting them (with GraphicsMagick) to PNG: that’s a lossless conversion, and should save disk space. Native support would require pulling in a new library.
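A batch conversion could look like this sketch (assumes GraphicsMagick’s `gm` is installed; `/path/to/photos` is a placeholder):

```shell
# Convert every BMP to a PNG written alongside the original (lossless).
find /path/to/photos -iname '*.bmp' -print0 |
  while IFS= read -r -d '' f; do
    gm convert "$f" "${f%.*}.png"
  done
```

Spot-check a few of the PNGs before deleting the original BMPs.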

Can you email one of these .wmv files to support@photostructure.com so I can take a look? Those should be supported, but as WMV is a container format, there are a ton of different codec combinations, and ffmpeg might not be happy with whatever you’ve got (given the settings I currently feed it).

I don’t know if this is related to the new version or not, but I’m reporting it in case anyone else can correlate.

I was investigating a problem where the sync seemed to die, so I was logtailing inside the container. I was seeing lots of permissions errors with the app trying to delete items from both the imgcache and database copy in /ps/tmp.

I checked the permissions on the real filesystem and a bunch of directories were owned by root, instead of the container owner. I’ve been running the previous version for a while with no issue and the locations are the same (same docker-compose.yml with the UID set). I’ve changed the ownership of the whole mount for /ps/tmp back to my container user again and things seem happier.

There shouldn’t be any PhotoStructure process that doesn’t respect the provided UID/GID environment variables: the entrypoint ensures that.

So if there are any files not owned by your configured UID, something else is amiss.

Ok, that’s interesting. I hear you, but it’s happened again. Nothing else should be interested in this directory structure, so all I can think of is either an unexpected behaviour or something in the way I’m launching, which is just docker-compose up -d.

Any suggestions for where to look or process owners to check?

Here’s a sample:

drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 2r/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 2v/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 2w/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 2z/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 35/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 36/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 37/
drwxr-xr-x  3 root       root         16 May 14 01:22 38/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 3c/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:50 3d/
drwxr-xr-x  3 root       root         16 May 14 01:21 3j/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 3q/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 41/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 46/
drwxr-xr-x  3 root       root         16 May 14 01:22 4d/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 4f/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 4x/
drwxr-xr-x  3 root       root         16 May 14 01:20 53/
drwxr-xr-x  3 root       root         16 May 14 01:22 55/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:50 5k/
drwxr-xr-x  3 root       root         16 May 14 01:20 5r/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 5x/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 62/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 68/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 6b/
drwxr-xr-x  4 root       root         26 May 14 01:22 6n/
drwxr-xr-x  3 root       root         16 May 14 01:22 6r/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 6y/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 7b/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 7c/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 7e/
drwxr-xr-x  3 root       root         16 May 14 01:22 7z/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 83/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 8k/
drwxr-xr-x  3 root       root         16 May 14 01:23 8n/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 8r/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 8u/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:50 8y/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 95/
drwxr-xr-x  3 root       root         16 May 14 01:23 96/
drwxr-xr-x  3 root       root         16 May 14 01:19 9d/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 9f/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 9m/
drwxr-xr-x  3 root       root         16 May 14 01:19 9n/
drwxr-xr-x  3 root       root         16 May 14 01:21 b0/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 b2/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 b6/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 b8/
drwxr-xr-x  3 root       root         16 May 14 01:20 bb/
drwxr-xr-x  3 root       root         16 May 14 01:20 bc/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 bm/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 c9/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 ce/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 cu/
drwxr-xr-x  3 root       root         16 May 14 01:22 d0/
drwxr-xr-x  3 root       root         16 May 14 01:19 de/
drwxr-xr-x  3 root       root         16 May 14 01:19 dj/
drwxr-xr-x  4 containeruser containeruser   26 May 15 01:53 du/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 dx/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 e3/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 e6/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 e9/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 eh/
drwxr-xr-x  3 root       root         16 May 14 01:22 em/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 ez/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:54 f1/
drwxr-xr-x  3 root       root         16 May 14 01:23 f2/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:50 fc/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:50 fd/
drwxr-xr-x  4 containeruser containeruser   26 May 15 01:54 fg/
drwxr-xr-x  3 root       root         16 May 14 01:23 fk/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:52 fq/
drwxr-xr-x  3 root       root         16 May 14 01:19 fw/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 fx/
drwxr-xr-x  3 root       root         16 May 14 01:22 fz/
drwxr-xr-x  3 root       root         16 May 14 01:22 g7/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 g9/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:51 gp/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:50 gw/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 h4/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:53 jn/
drwxr-xr-x  3 containeruser containeruser   16 May 15 01:55 kb/
containeruser@host:/media/bulkstorage/photostructure/cache/imgcache$

some relevant processes:

$ ps -ef|grep photostructure
root        1993    1663  0 May14 ?        00:00:00 su --preserve-environment node --command /usr/local/bin/node /ps/app/photostructure
containeruser    2044    1993  0 May14 ?        00:00:00 /usr/local/bin/node /ps/app/photostructure
containeruser    2119    2044  0 May14 ?        00:00:28 photostructure main
containeruser    4820    2119 82 May14 ?        01:53:34 photostructure sync
containeruser   34960    2119  0 00:40 ?        00:00:23 photostructure web
containeruser   72190    4820 15 01:54 ?        00:00:34 photostructure worker
containeruser   72865    4820 11 01:54 ?        00:00:19 photostructure worker

and

ps -ef|grep ps
root        1663    1565  0 May14 ?        00:00:00 /sbin/tini -- /ps/app/docker-entrypoint.sh
root        1993    1663  0 May14 ?        00:00:00 su --preserve-environment node --command /usr/local/bin/node /ps/app/photostructure
containeruser   2044    1993  0 May14 ?        00:00:00 /usr/local/bin/node /ps/app/photostructure
containeruser   74760   72865  7 01:58 ?        00:00:02 /usr/bin/perl -w /ps/app/node_modules/exiftool-vendored.pl/bin/exiftool -stay_open True -@ -
containeruser   74762    4820  6 01:58 ?        00:00:01 /usr/bin/perl -w /ps/app/node_modules/exiftool-vendored.pl/bin/exiftool -stay_open True -@ -

docker-compose.yml extract:

      # The userid to run PhotoStructure as:
      - "PUID=1000" # < CHANGE THIS LINE or delete this line to run as root. See below for details.

      # The groupid to run PhotoStructure as:
      - "PGID=1000" # < CHANGE THIS LINE or delete this line to run as root.

uid confirmation:

$ cat /etc/passwd | grep 1000
containeruser:x:1000:1000:Name:/home/containeruser:/bin/bash

I had a look at the file you referenced above, and I see the handling for /ps/tmp is different from the rest: you don’t chown the structure in there:

  if [ -z "$PS_NO_PUID_CHOWN" ]; then
    # Always make sure the settings, opened-by, and models directories are
    # read/writable by node:
    for dir in /ps/library/.photostructure/settings.toml \
      /ps/library/.photostructure/opened-by \
      /ps/library/.photostructure/models \
      /ps/config \
      /ps/logs \
      /ps/default; do
      maybe_chown "$dir"
    done

    # Special handling so we don't do something terrible if someone bind-mounts /tmp to /ps/tmp
    if [ -d /ps/tmp ]; then
      mkdir -p "/ps/tmp/.cache-$UID"
      maybe_chown "/ps/tmp/.cache-$UID"
    fi
  fi

It seems that all the files/directories owned by root are from yesterday: do you have more than one way of launching the Docker container?
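If it helps, this sketch (run on the Docker host, using the bind-mount path and PUID from this thread) lists everything under the /ps/tmp mount that isn’t owned by your configured UID, and then narrows to the window when the drift appeared:

```shell
cache=/media/bulkstorage/photostructure/cache   # your /ps/tmp bind mount
if [ -d "$cache" ]; then
  # Anything not owned by UID 1000 (your PUID):
  find "$cache" -not -uid 1000 -ls
  # Only entries modified since the root-owned directories showed up:
  find "$cache" -not -uid 1000 -newermt '2022-05-14' -ls
fi
```

`-newermt` is GNU find; comparing those timestamps against cron schedules or other automation may point at whatever ran at that time.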

That’s what I’d expect: tini and su run as root, and everything else should run as the configured UID. All the actual work is done by node and its child processes.

Ah, nice sleuthing! I only added those chown commands so that when you change PUID (say, from root’s 0 to the default user id of 1000), the new user would still have access to the files it needs to spin up the library.

So… I didn’t get to play with the 2.1.0 alpha much, but after the initial upgrade I noticed that a lot of my pictures weren’t there. I tried a restart sync, and then a full rebuild, and it just ignored me…

Things were a bit “messy” on my install, and it’s gone through a bunch of testing updates and things, so I just started fresh: blew it all away (config and all) and re-pulled the Docker image (I’m on unRaid).

Configured as before…

I have 4 directories it’s supposed to scan. For three of them it pulls in thousands of pictures (I haven’t checked closely, but it looks like almost all of them), but those are the three “small” directories. Then there is the main archive: it started on it, chewed for just a few minutes, pulled in only a thousand or so, and stopped.

Log shows (for example):

1652886758215,/pictures/Archive/2009/2009-01_January/2009-01-04–2009-01-04_Disney – travel/2009-01-01T18.03.08_Disney-IMG_3849.jpg,enqueued,
1652886758215,/pictures/Archive/2009/2009-01_January/2009-01-04–2009-01-04_Disney – travel/2009-01-01T18.03.24_Disney-IMG_3850.jpg,enqueued,
1652886758226,/pictures/Archive/2009/2009-01_January,canceled,DirectoryWalker was ended,1131
1652886758226,/pictures/Archive/2009,canceled,DirectoryWalker was ended,1132
1652886758226,/pictures/Archive,canceled,DirectoryWalker was ended,111489
1652886760151,/pictures/Archive/1989/1989-10_October/19891007-Dillahunty_Wedding/020-29-Scan-101007-0001.jpg,failed,"BatchCluster has ended, cannot enqueue {""id"":438,""method"":""buildAssetPreviews_"",""args"":[{""assetId"":199,""assetFiles"":[{""$ctor"":""models.AssetFile"",""id"":219,""assetId"":199,""shown"":false,""uri"":""psfile://37Ajd4ybC/Archive/1989/1989-10_October/19891007-Dillahunty_Wedding/02.:false,""recountAllTags"":false}]}",http://127.0.0.1:1787/asset/199,2643

Where should I look from here? Permissions look right: for example, it pulled 4 pictures from one folder that has about 20, all with the same permissions.

I’m still seeing two problems. Scans don’t seem to populate new images into the database, even with small manually run single date folder scans.

Secondly, I’m still getting weird file-ownership problems that I didn’t get with the previous version. This time I carefully shut everything down, checked file ownership, and started again with a stock-standard sudo docker-compose up -d.

This is the result (the top level here is the bind mount point for /ps/tmp):

$ sudo docker logs photostructure
Please wait, setting up...
Your library is at <file:///ps/library>
PhotoStructure is ready:
  - <http://127.0.0.1:1787/>
  - <http://172.18.0.2:1787/>
  - <http://f4b1f198ce13:1787/>

Shutting down PhotoStructure...
Please wait, setting up...
Please wait, setting up...
Please wait, setting up...
Please wait, setting up...
Please wait, setting up...
Please wait, setting up...



$ ll
total 0
drwxr-xr-x 5 containeruser containeruser  48 Mar 17 09:38 ./
drwxr-xr-x 3 root       root        28 Mar 16 02:51 ../
drwxrwxr-x 2 containeruser containeruser  19 Mar 16 02:54 backup/
drwxrwxr-x 6 containeruser containeruser 135 May 17 08:47 cache/
drwxrwxr-x 4 containeruser containeruser  70 Mar 17 09:54 samples/


/cache$ ll
total 8
drwxrwxr-x  6 containeruser containeruser 135 May 17 08:47 ./
drwxr-xr-x  5 containeruser containeruser  48 Mar 17 09:38 ../
drwxr-xr-x  2 containeruser containeruser   6 May 12 09:20 .cache-1000/
drwxr-xr-x 15 containeruser containeruser 136 May 17 09:21 imgcache/
drwxr-xr-x  3 containeruser containeruser  20 May 18 21:21 local-db-0885-rqph-1u41-rutf/
-rw-r--r--  1 containeruser containeruser 126 Mar 16 02:57 .NoMedia
drwxr-xr-x  3 containeruser containeruser  37 May 12 09:21 sync-state-kt27ebt2j7/
-rw-r--r--  1 containeruser containeruser  37 Mar 16 03:01 .uuid


/cache/local-db-0885-rqph-1u41-rutf$ ll
total 0
drwxr-xr-x 3 containeruser containeruser  20 May 18 21:21 ./
drwxrwxr-x 6 containeruser containeruser 135 May 17 08:47 ../
drwxr-xr-x 4 root       root        64 May 18 03:57 models/


/cache/local-db-0885-rqph-1u41-rutf/models$ ll
total 225200
drwxr-xr-x 4 root       root              64 May 18 03:57 ./
drwxr-xr-x 3 containeruser containeruser        20 May 18 21:21 ../
drwxr-xr-x 2 root       root               6 May 18 03:57 backup/
-rw-r--r-- 1 root       root       230604800 May 18 03:57 db.sqlite3
drwxr-xr-x 2 root       root               6 May 18 03:57 .db.sqlite3.pslock/

/cache/imgcache$ ll
total 0
drwxr-xr-x 15 containeruser containeruser 136 May 17 09:21 ./
drwxrwxr-x  6 containeruser containeruser 135 May 17 08:47 ../
drwxr-xr-x  3 root       root        16 May 17 09:18 2z/
drwxr-xr-x  3 root       root        16 May 17 09:20 30/
drwxr-xr-x  3 root       root        16 May 17 09:19 38/
drwxr-xr-x  3 root       root        16 May 17 09:18 4n/
drwxr-xr-x  3 root       root        16 May 17 09:16 5d/
drwxr-xr-x  3 root       root        16 May 17 09:19 75/
drwxr-xr-x  3 root       root        16 May 17 09:20 7x/
drwxr-xr-x  3 root       root        16 May 17 09:18 82/
drwxr-xr-x  3 root       root        16 May 17 09:20 92/
drwxr-xr-x  3 root       root        16 May 17 09:19 9t/
drwxr-xr-x  3 root       root        16 May 17 09:18 cq/
drwxr-xr-x  3 root       root        16 May 17 09:18 g9/
drwxr-xr-x  3 root       root        16 May 17 09:18 gf/

If the DirectoryWalker ended, that means the error rate was too high and the “yikes, things are bad, I better not continue” circuit breaker blew.

If you set your logging to info and send me your logs I can take a look at what’s going on.

@devon can you shell into your PhotoStructure container and send me the result of env ; ps -ef, please?

$ sudo docker exec -ti photostructure sh
/ps/app # env ; ps -ef
NODE_VERSION=16.15.0
HOSTNAME=34a43ee66ea4
YARN_VERSION=1.22.18
SHLVL=1
PS_IS_DOCKER=true
HOME=/root
PS_LOG_LEVEL=info
PGID=1000
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PUID=1000
PWD=/ps/app
TZ=Pacific/Auckland
NODE_ENV=production
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 09:29 ?        00:00:00 /sbin/tini -- /ps/app/docker-entrypoint.sh
root           7       1  0 09:29 ?        00:00:00 su --preserve-environment node --command /usr/local/bin/node /ps/app/photostructure
node          18       7  0 09:29 ?        00:00:00 /usr/local/bin/node /ps/app/photostructure
node          25      18  0 09:29 ?        00:00:07 photostructure main
node          36      25  1 09:29 ?        00:00:14 photostructure sync
node          54      25  3 09:30 ?        00:00:26 photostructure web
node          77      36  0 09:30 ?        00:00:00 findmnt --poll
node         308      54  2 09:43 ?        00:00:00 /usr/bin/perl -w /ps/app/node_modules/exiftool-vendored.pl/bin/exiftool -stay_open True -@ -
root         310       0  1 09:43 pts/0    00:00:00 sh
root         318     310  0 09:43 pts/0    00:00:00 ps -ef

Dang (or good?): that’s what I hoped we’d see, with node owning all the PhotoStructure processes.

So… Docker does have usermap (user-namespace) functionality which we might be fighting. How are you configuring the bind mount for /ps/library? What version of Docker are you running, and on what host OS? What does the mount entry for that volume look like?
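To rule out user-namespace remapping, here’s a quick check to run on the host (a sketch; `docker info`’s SecurityOptions lists `name=userns` when remapping is active):

```shell
if command -v docker >/dev/null; then
  docker info --format '{{.SecurityOptions}}'   # look for "name=userns"
  # The daemon config is the other place remapping can be enabled:
  grep -s userns-remap /etc/docker/daemon.json || echo "no userns-remap in daemon.json"
fi
```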

(Also, feel free to DM me any response that has anything you consider private)

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.4 LTS"
$ docker --version
Docker version 20.10.16, build aa7e414
$ docker-compose --version
docker-compose version 1.29.2, build 5becea4c

Relevant parts of docker-compose.yml:

version: "3.7"
services:
  photostructure:
    # You can run alpha, beta, or stable builds. See
    # <https://forum.photostructure.com/t/274> for details.
    image: photostructure/server:alpha
    container_name: photostructure
    restart: on-failure
    stop_grace_period: 2m

    volumes:
      # This is where your PhotoStructure Library will be stored.
      # It must be readable, writable, and have sufficient free space.
      # If it is a remote volume, uncomment the PS_FORCE_LOCAL_DB_REPLICA
      # environment line below.

      - type: bind
        source: "/var/containers/photostructure/library/" # < CHANGE THIS LINE
        target: /ps/library/

      # This must be fast, local disk with many gigabytes free.
      # PhotoStructure will use this directory for file caching
      # and for storing a temporary database replica when your
      # library is on a remote volume.

      - type: bind
        source: "/media/bulkstorage/photostructure/cache/"
        target: /ps/tmp

      # This directory stores your "system settings"

      - type: bind
        source: "/var/containers/photostructure/config"
        target: /ps/config

      # This directory stores PhotoStructure logfiles.

      - type: bind
        source: "/var/containers/photostructure/logs"
        target: /ps/logs

<snip>

    ports:
      - 1787:1787/tcp

    environment:
      # PhotoStructure has _tons_ of settings. See
      # <https://photostructure.com/faq/environment-variables/>

      # This tells PhotoStructure to only log errors, which is the default:
      # - "PS_LOG_LEVEL=error"

      # If PhotoStructure is refusing to spin up, uncomment these lines to see what's going on:
      - "PS_LOG_LEVEL=info"
      # - "PS_LOG_STDOUT=true"

      # This is your local timezone. See <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>
      - "TZ=Pacific/Auckland" # < CHANGE THIS LINE

      # The userid to run PhotoStructure as:
      - "PUID=1000" # < CHANGE THIS LINE or delete this line to run as root. See below for details.

      # The groupid to run PhotoStructure as:
      - "PGID=1000" # < CHANGE THIS LINE or delete this line to run as root.


That’s exactly how my primary dev box is configured, so this should work.

What’s the fstype for /var/containers/photostructure and /media/bulkstorage/?
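If it’s handy, `findmnt` (from util-linux) will answer that directly:

```shell
for p in /var/containers/photostructure /media/bulkstorage; do
  findmnt -no FSTYPE,SOURCE -T "$p" || echo "$p: not present on this host"
done
```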

Containers is an NFSv3 mount from TrueNAS Core (ZFS).

Bulkstorage is XFS on a local drive.