Version 1.0.0-alpha.6 is out!

Alpha? What’s that?

Remember: alpha builds should be considered extremely experimental, and may not even launch on all platforms.

More information here.

Did you sign up for a subscription?

Thank you for your help in testing! I’ve extended everyone’s free trial period through to the end of April.

Minor general features

  • :sparkles: Added meta headers to support iOS homescreen.

  • :sparkles: Have scanned images of older photos? A new datesBeforeAreEstimated setting automatically considers all captured-at times before 1999 to be an “estimated” time, which requires files to have a tighter image correlation to be considered a duplicate of an existing asset variant. This addresses issues like this.

PhotoStructure for Desktops changes

  • :bug: Fixed the stripe checkout background

  • :bug: The link in the plans page is now clickable

PhotoStructure for Docker changes

  • :bug: Fixed a couple bogus “PS_LIBRARY_PATH must be set” errors in /settings

  • :package: If you are seeing file permission problems, temporarily set the environment variable PS_FIX_PERMISSIONS=1 and run docker-compose up (without --detach) or docker run -it photostructure/server:alpha. This will run chown -R $UID:$GID /ps as root from within docker, so make sure UID and GID are set appropriately. This chown should address any permission issues if you previously ran a PhotoStructure container as the root user.

  • :package: Set the environment variable UID=0 if you want to run PhotoStructure as the root user within your docker container, as it did in prior versions.

  • :bug:/:package: If either the /ps/tmp or /ps/cache directory is bind-mounted, whichever is mounted will be used as the cache directory. This should solve the spurious EACCES errors that some alpha testers saw.
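As a sketch of the one-time permission fix described above (PS_FIX_PERMISSIONS, UID, and GID are from this post; the library path is a placeholder you'd replace with your own):

```shell
# One-time fix for libraries previously written by a root-owned container.
# Run attached (no --detach) so you can watch the chown output.
docker run -it \
  -e PS_FIX_PERMISSIONS=1 \
  -e UID="$(id -u)" `# < files under /ps will be chowned to this user` \
  -e GID="$(id -g)" \
  -v /path/to/your/library:/ps/library \
  photostructure/server:alpha

# Afterwards, drop PS_FIX_PERMISSIONS and start the container normally.
```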

Installation instructions

See this for info about subscriptions (including a thank-you discount for alpha testers):

What’s next?

There will probably be a couple more alpha builds tomorrow as I switch from Docker Hub to GitHub Actions to build the docker images. GitHub Actions gives us two things:

  • multi-arch images (x64 + arm!)
  • better tagging (so I can tag all builds with the actual specific version number: helpful when people want to revert to a prior build if an alpha or beta has a show-stopping bug)
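For the curious, the multi-arch half of that workflow boils down to a docker buildx invocation along these lines (the platform list and version tag here are illustrative, not the actual CI config):

```shell
# Build one manifest that covers x64 and arm, push it with both a
# version-specific tag and the moving :alpha tag:
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag photostructure/server:1.0.0-alpha.7 \
  --tag photostructure/server:alpha \
  --push .
```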

After this is working, I’ll make a beta build, and if no show-stopping bugs are found, finally release v1.0.0. :tada:

(I’m committing the mortal devops sin of shipping and then going offline shortly afterwards, but the prior docker build was solidly borked, so things can only improve… I’ll be back online tomorrow morning).



I’m still getting an error.

From Docker log:

{"fatal":true,"exit":true,"status":12,"pid":21,"ppid":14,"error":"ChildService(web).onStdout(): Error: ChildService(web).onStdout()setup() failed: Error: Can't read /ps/app/public. Please visit <> for help.¹⁶"}

Shutting down PhotoStructure...

From PS log:

{"ts":1618311921814,"l":"info","ctx":"Service(main)","msg":"setupErrorHandling(): not adding stdin/stdout/stderr close handlers: we're daemonized or in docker."}
{"ts":1618311921814,"l":"warn","ctx":"Sentry","msg":"Failed to set up sentry: TypeError: Cannot read property 'init' of undefined"}
{"ts":1618311921832,"l":"info","ctx":"RpcServer","msg":"Setting up RPC..."}
{"ts":1618311921836,"l":"info","ctx":"rpc.Server","msg":"listening on 1807"}
{"ts":1618311921836,"l":"info","ctx":"RpcServer","msg":"RPC service serving port 1807"}
{"ts":1618311921848,"l":"info","ctx":"WatchedChild(web:34)","msg":"_start(): spawned pid 34"}
{"ts":1618311922809,"l":"info","ctx":"rpc.Server","msg":"Connection from IPv6:::ffff:"}
{"ts":1618311921832,"l":"debug","ctx":"Service(main)","msg":"setup done.","meta":{"reject":false}}
{"ts":1618311922103,"l":"debug","ctx":"Pids","msg":"addPid() wrote /ps/config/PhotoStructure/pids/34.json","meta":{"pid":34,"cmd":"node","maxAgeMs":-1,"ppid":21,"startTime":1618311921848}}
{"ts":1618311922818,"l":"error","ctx":"WatchedChild(web:34)","msg":"onError()","meta":{"src":"ChildService(web).onStdout()","fatal":true,"ignorable":false,"errToS":"setup() failed: Error: Can't read /ps/app/public. Please visit <> for help.¹"}}
{"ts":1618311922821,"l":"error","ctx":"Error","msg":"onError(): Error: ChildService(web).onStdout()setup() failed: Error: Can't read /ps/app/public. Please visit <> for help.¹⁶\nError: ChildService(web).onStdout()setup() failed: Error: Can't read /ps/app/public. Please visit <> for help.¹⁶\n    at k.onError (/ps/app/bin/main.js:9:134369)\n    at L.onStdout (/ps/app/bin/main.js:9:131279)\n    at s.onData (/ps/app/bin/main.js:9:138359)\n    at /ps/app/bin/main.js:9:245567\n    at Array.forEach (<anonymous>)","meta":{"event":"fatal","message":"ChildService(web).onStdout()"}}
{"ts":1618311922826,"l":"warn","ctx":"Service(main)","msg":"exit()","meta":{"status":12,"reason":"ChildService(web).onStdout(): Error: ChildService(web).onStdout()setup() failed: Error: Can't read /ps/app/public. Please visit <> for help.¹⁶","waitForJobs":false,"ending":false}}
{"ts":1618311922842,"l":"info","ctx":"rpc.Server","msg":"Closing connection from IPv6:::ffff:"}
{"ts":1618311922842,"l":"warn","ctx":"DirectoryEntry","msg":"children() failed to readdir(/ps/config/PhotoStructure/pids)","meta":{"errno":-13,"code":"EACCES","syscall":"mkdir","path":"/ps/tmp/readdircache","stack":["Error: EACCES: permission denied, mkdir '/ps/tmp/readdircache'"]}}
{"ts":1618311923136,"l":"info","ctx":"WatchedChild(web:34)","msg":"onExit(): finished setting up new child undefined"}

I’m guessing it’s due to permissions but haven’t checked yet. I have to get to work and can look into it more tonight.

I’m up and running. Did a fresh install and it’s scanning in my library now.

While it’s scanning, though, I can’t get rid of the “Your library is currently empty” message, even though there are now things in the library.

Did you pick up alpha.5, or alpha.6? I believe this is fixed in alpha.6.

Also: you may want to start using the UID/GID support I just added in alpha.4. Here’s my run script:

#!/bin/sh -x

mkdir -p "$PSLIBRARY"

# This must be a fast, local disk with many gigabytes free:
mkdir -p "$PSTMP"

# This directory stores your "system settings". This directory must not be
# the same as the one used by PhotoStructure for Desktops.
mkdir -p "$PSCONFIG"

# This directory stores PhotoStructure logfiles.
mkdir -p "$PSLOGS"

docker run \
  --stop-timeout 120 `# < gives PhotoStructure 2 minutes to shut down cleanly.` \
  --publish 1787:1787 \
  -e TZ="$(cat /etc/timezone)" \
  -e UID="$(id -u)" `# < makes PhotoStructure write library files as the current user (instead of using --user)` \
  -e GID="$(id -g)" \
  -e PS_LOG_LEVEL=debug \
  -v "$PSLIBRARY":/ps/library \
  -v "$PSTMP":/ps/tmp \
  -v "$PSCONFIG":/ps/config \
  -v "$PSLOGS":/ps/logs \
  photostructure/server:alpha

I don’t know how to tell which alpha I got. I deleted the image and container and re-pulled but get the same results. I can confirm I’m pulling from the :alpha channel and can tell you it was created 15 hours ago.

I still get the same error. Now I’m guessing it has nothing to do with PhotoStructure, but instead something to do with my environment.

./photostructure --version

or click the “burger” menu in the top-left corner of the UI and scroll to the third section, “About…”
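If you’re running in Docker, you can also ask the container directly. The container name (`photostructure`) and the binary’s location inside the container are assumptions here; adjust to your setup:

```shell
# Print the version from a running container:
docker exec photostructure ./photostructure --version

# Or check when the image you pulled was actually built:
docker image inspect photostructure/server:alpha --format '{{ .Created }}'
```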

I just tried out the “View by folder” function: Is it intentional that I see a tag “fs / Library”? This path seems to refer to the PhotoStructure library itself?

My docker bind mounts containing photos are:

      - type: bind
        source: /volume2/phstr-library/
        target: /ps/library

      - type: bind
        source: "/volume1/homes/lars/Fotos/"
        target: /media/fotos-lars-backup

      - type: bind
        source: "/volume1/photo/"
        target: /media/fotos-photostation

Yes. The library can be mounted at different directories, so the tag is relative to the library root (as are the AssetFile.URI references).

If you can think of a better or less surprising way to handle these paths, please suggest! :+1:


I think I was confused because I didn’t expect the library itself to be present in the fs hierarchy. But since you can move stuff around there, it makes sense.

One thought I had, which might or might not make sense on closer consideration: depending on how (dis)organized the source libraries are, the user might prefer not to see the mount points of the source libraries in the tags.

For example:

  • I have my private photos mounted under /media/photos and business related photos under /media/business. In this case, I’d probably like to see fs/photos/... and fs/business/..., as it is currently implemented.
  • If I like to organize my photos by year/month, but am a bit messy and have my library spread around various disks, I might have mountpoints /media/disk1 to /media/diskn, each with a year/month structure below the mountpoint. Then I’d probably like to see fs/year/month/....

So, a library option (or maybe both tags in parallel) might be an interesting feature.

Assets will be tagged with all the “asset file variants” associated with that asset:

So if you’ve got duplicates in /media/photos, /media/backups1, /media/backups2, … the asset will show up for all those tags (thanks to hierarchical inheritance).

You can see these tags in the asset info panel:

I was not thinking about duplicates but rather the mount point being an uninteresting part of the path.

To be a bit more concrete: in a hierarchy like

  • /media/disk1/2018/birthday-lars
  • /media/disk1/2018/vacation-vegas
  • /media/disk1/2019/walk-in-the-park
  • /media/disk2/2019/birthday-matthew
  • /media/disk2/2021/corona-vaccination

the mount point would make browsing more cumbersome (since I’m probably not interested in where the image was originally stored). One could imagine a separate top-level tag (folder?) for that.

That being said, this should probably be a (low prio) feature request instead of clogging up this thread.

Agreed: it’s stored as a “volume UUID SHA” (which is why your URL looks like “tag/fs/2NMQsMVCK/home”). PhotoStructure stores the last-seen mountpoint for that volume as the “display name” for the mountpoint.

I included the mountpoint/volsha in the fs tag because it was helpful for me to be able to browse only the assets on a specific drive. Do you want to perhaps collapse all the mountpoints into a single virtual “drive”?

I could add a setting that says “include volsha in filesystem tags” (which defaults to true, and you could set it to false). Would that address this? If so, feel free to add a new feature request!

Did you figure out what was going on?

The home page is supposed to detect progress and automatically re-fetch assets and display them as they are imported. Did you need to hit refresh on your browser?

I’m hoping to release alpha.7 tomorrow:

The message continued to appear even after the initial scan was done. I did everything from trying different browsers to refreshing the page.

Once I restarted the container, though, it went away and all is good. I’m not sure why it was initially stuck, but after a restart it cleared itself, so I wouldn’t worry about it too much.

It was on a fresh install though.

Ok, I’ll try to reproduce that today.

Is your library stored on a local volume or a remotely mounted filesystem?

It was in Docker, so I had it mounted via a path. Docker runs on the same unraid server that hosts the files.