Cloud Storage + PhotoStructure + Unraid

I have a weird setup: I run Unraid, yet I choose to keep my photos on cloud storage. It's convenient, but getting those photos into PhotoStructure has been a challenge when using Unraid. For a while I was running PhotoStructure for Node in a Windows VM on my Unraid server, pointed at my Dropbox folder, but this wasn't very clean. I wanted containerization, and getting my data from Dropbox into Unraid proved to be a challenge.
There are 3 ways to do this:

  1. Dropbox Docker container:

    • Two of these are published, but neither works well. Both seem very unstable, and I personally couldn't get either to work.
  2. Sharing the Dropbox folder from a VM over SMB and mounting that in Unraid using Unassigned Devices.

    • This nearly worked, but I ran into issues where folders randomly didn't show up. It was unreliable, it seemed to cause my server to hang on shutdown, and it requires that the VM be running at all times.
  3. rclone

    • This has its drawbacks: it's complicated, and automating it isn't easy. When mounting cloud storage with rclone and accessing it in Docker, the storage must be mounted before the container starts. To automate this we need a shell script, plus some error handling to detect when the mount has disconnected and remount it.

Rclone ended up being the best option; with the correct settings it's nearly perfect. With the incorrect settings, it will crash your server in minutes. That's why it's important to research all of the options and know what you are doing before running anything, and to thoroughly test everything prior to deployment.

The most important thing to know about rclone on Unraid is to specify a cache location for rclone that is on some sort of persistent Unraid share. Unraid's OS runs from RAM, so rclone's default cache location (under /root/.cache) lives in memory; left unchecked, the cache will fill your RAM and crash your server. With some time, trial and error, and GPT-4 Turbo, I was able to put together a bash script for the User Scripts plugin on Unraid that successfully mounts Dropbox and syncs the images in my photo library into PhotoStructure. While I use this with Dropbox, it should work with any cloud storage provider supported by rclone, perhaps with some tweaking to the arguments if using something like FTP or SFTP.
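
You can verify this on any Unraid box: the root filesystem is RAM-backed, so anything cached under /root lives in memory (the second command needs a reasonably recent rclone):

df -h /                # on Unraid this shows rootfs, i.e. RAM
rclone config paths    # prints the cache dir rclone will use by default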

To do this you will need three plugins installed: User Scripts, rclone (the plugin, not any of the Dockers), and Unassigned Devices, so that the /mnt/remotes location exists, as that is the ideal place to mount to. If you don't want to install Unassigned Devices, mkdir /mnt/remotes should be okay, but I haven't really tested that.

Two scripts are required, though one of them will be duplicated.

The first script is the big one; it should exist as two copies in User Scripts: one set to run at Array Start, and the other to rerun on a cron schedule (Custom in User Scripts). I chose 0 * * * *, which runs the script at the top of every hour. The script starts the specified docker containers and, on each rerun, checks whether they are running and starts any that aren't. Since the Appdata Backup plugin requires all dockers to be stopped while it runs, I schedule around it: the script runs at the top of every hour, and Appdata Backup runs once a week at 3:01 am on Sundays, so they shouldn't conflict.
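
In cron notation (the Appdata Backup time is just my own schedule restated for comparison, not something the plugin exports):

# Custom schedule for the mount/start script in User Scripts:
0 * * * *    # top of every hour

# Appdata Backup runs separately at:
1 3 * * 0    # 3:01 am every Sunday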

This script is as follows:

#!/bin/bash

# Define the mount directory
MOUNT_DIR="/mnt/remotes/your_mount_point_here"

# Define rclone remote name
RCLONE_REMOTE="your_remote_here"

# Define rclone cache directory location
RCLONE_CACHE_DIR="/mnt/cache/appdata/rclonecache"

# Define Docker containers as an array. Containers should be in quotes, separated by spaces.
DOCKER_CONTAINERS=("your_dockers_here")

# Check if the mount directory exists, create it if not
if [ ! -d "$MOUNT_DIR" ]; then
  echo "Mount directory does not exist, creating it..."
  mkdir -p "$MOUNT_DIR"
fi

# Check if rclone is already mounted at MOUNT_DIR
if mount | grep -q "$MOUNT_DIR"; then
  echo "rclone is already mounted at $MOUNT_DIR."
else
  # Mount the remote in the background
  echo "Mounting rclone at $MOUNT_DIR..."
  rclone mount \
    --allow-other \
    --allow-non-empty \
    --log-level INFO \
    --poll-interval 1s \
    --dir-cache-time 1m \
    --cache-dir="$RCLONE_CACHE_DIR" \
    --vfs-cache-mode full \
    --vfs-cache-max-size 100G \
    --vfs-cache-max-age 24h \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --vfs-read-ahead 128M \
    --uid 99 \
    --gid 100 \
    --umask 002 \
    "$RCLONE_REMOTE:" "$MOUNT_DIR" &
fi

# Ensuring the mount operation has enough time to initialize before starting the docker containers
sleep 10

# Loop through the Docker containers array
for container in "${DOCKER_CONTAINERS[@]}"; do
  # Check if the Docker container is already running (exact name match,
  # so one container's name being a substring of another's doesn't false-positive)
  if docker ps --format '{{.Names}}' | grep -qx "$container"; then
      echo "Docker container '$container' is already running."
  else
      echo "Starting Docker container '$container'..."
      docker start "$container"
  fi
done

First you'll need to define your mount point; this is where the storage will mount in the filesystem. Then define the name of the remote. This assumes you've already run rclone config and set up a remote. I'm not going to cover that here; there are plenty of resources out there explaining how to configure an rclone remote, and it's the easiest part of the process. Assuming you've done that, next set the cache directory location. I suggest /mnt/cache/appdata/any_folder as the best location for most people; you'll just need sufficient space on your cache drive. Finally, define any docker containers that depend on rclone and should be started after the mount is established. You'll want to disable docker autostart in Unraid for these containers, since the script needs to start them. Multiple containers can be started: just quote each name and separate them with spaces.
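
For example, a filled-in configuration might look like this (all of these values are hypothetical placeholders; substitute your own):

MOUNT_DIR="/mnt/remotes/dropbox"                    # where the remote appears in the filesystem
RCLONE_REMOTE="dropbox"                             # the name you gave the remote in rclone config
RCLONE_CACHE_DIR="/mnt/cache/appdata/rclonecache"   # on the cache pool, not in RAM
DOCKER_CONTAINERS=("PhotoStructure")                # add more names separated by spaces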

First the script checks whether the mount directory exists; if it doesn't, it creates it, and if it does, it moves on. Then it checks whether something is already mounted there: if mount | grep /path/to/mount returns anything, it assumes something is mounted and moves on to the docker containers; if not, it mounts the remote.
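
As an aside, grepping the mount table can match on partial paths. A stricter check, if you want it (my tweak, not part of the script above), is util-linux's mountpoint, which should be present on stock Unraid:

# Returns success only if the directory itself is a mount point
if mountpoint -q "$MOUNT_DIR"; then
  echo "rclone is already mounted at $MOUNT_DIR."
fi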

Now, these flags are optimized for my setup and for Unraid, particularly --uid 99, --gid 100, and --umask 002, which set file ownership and permissions to match Unraid's nobody/users defaults. Others, such as --vfs-cache-max-size 100G and --vfs-cache-max-age 24h, may need to be tweaked depending on your circumstances; they set the maximum size of the cache rclone will keep and how long it keeps it. I set --poll-interval 1s and --dir-cache-time 1m so new photos are picked up quickly. --vfs-cache-mode full is very important, as it caches both reads and writes, which we want because PhotoStructure does a lot of reading. --allow-other is needed so docker can see the files, and --allow-non-empty allows the mount to start if for some reason a folder was created in that location and not cleaned up. Just be careful with that one: if you mount over a location with actual data, the mount will hide that data until it's unmounted, hence using /mnt/remotes. The other options are just good baseline defaults I found online; I didn't put much effort into them and they seem to work well for me.
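
Once mounted, a couple of quick sanity checks (plain standard commands, nothing rclone-specific) will confirm the cache is landing where you expect:

du -sh /mnt/cache/appdata/rclonecache   # how much space the VFS cache is using
ls /mnt/remotes/your_mount_point_here   # confirm the remote's files are visible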

After mounting, the script checks whether each container is already running and starts it if it isn't. The container needs to start after the mount has been established at least once so the docker can see the mount; if it unmounts and remounts later, everything still seems to work. It's just a first-time thing, from my testing. It also seems to recover if the internet drops, although I've only tested that by rebooting my router. That's why we want this script to rerun every hour: to check whether rclone unmounted and remount it. But we don't want it running the mount command over and over, or running docker start when it doesn't need to, hence the checks I (GPT-4 Turbo) added.
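
One failure mode the hourly rerun doesn't explicitly handle is a stale FUSE mount, where the mount table still lists the path but reads fail with "Transport endpoint is not connected". A hedged addition (my assumption, untested) would clean that up near the top of the script so the remount logic can kick in:

# If the directory is listed as mounted but unreadable, lazily detach it
# so the mount check below sees it as unmounted
if mount | grep -q "$MOUNT_DIR" && ! ls "$MOUNT_DIR" >/dev/null 2>&1; then
  echo "Mount at $MOUNT_DIR looks stale; detaching..."
  fusermount -uz "$MOUNT_DIR"
fi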

The second script simply unmounts the remote. This needs to be set to run at Array Stop. If this isn't done, the server will not shut down cleanly and you'll be forced into a parity check.

#!/bin/bash

# Define the mount directory
MOUNT_DIR="/mnt/remotes/your_mount_point"

# Unmount the Dropbox directory (-u unmount, -z lazy, so a busy mount
# still detaches cleanly at array stop)
fusermount -uz "${MOUNT_DIR}"
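
If a container still has files open inside the mount, even a lazy unmount can leave the array stop hanging. A variation I'd consider (my own assumption, not something I've needed so far) stops the dependent containers first:

#!/bin/bash

# Define the mount directory
MOUNT_DIR="/mnt/remotes/your_mount_point"

# Assumption: stopping the dependent container first releases any open
# file handles inside the mount before the lazy unmount runs
docker stop PhotoStructure
fusermount -uz "${MOUNT_DIR}"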

That's pretty much it: once the remote is mounted, add it to any docker template like any other share in Unraid, and cloud storage will reliably behave as if it were local.
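
For example, the template's path mapping is equivalent to a bind mount like this (the host path and container path here are hypothetical; adjust to your own template):

# In the Unraid template this is a Path entry; as a plain docker command
# it would look roughly like:
docker run -d --name PhotoStructure \
  -v /mnt/remotes/dropbox:/photos \
  photostructure/server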


Nice! Thanks for taking the time to write this up, @mackid1993!


Evidently the main issue with SMB (folders not showing up) was a regression in Unraid 6.12.9 that was fixed in 6.12.10. This would be the ideal solution since it involves the fewest moving parts: just a CIFS mount that is native to Unraid (the plugin is only a GUI) and the official Dropbox client running on my VM, which is always running anyway for other things.

I’m going to let it run with this option for a few days and follow up with steps to optimize this setup if it does work.

A few things do have to be tweaked to get it working right and I’ll go into that if it ends up working out. At a high level however:

  • The bind mount had to be set to Read (/Write) Slave.
  • Unassigned Devices has a timer to delay automount. Setting this is vital, to allow time for the VM to start up before automount at array start.
  • The mount must start before the docker, just like with rclone, though this is only a one-time thing. Unassigned Devices has a built-in option to run a bash script at automount; dropping docker start PhotoStructure in there makes things seamless (a minimal version is sketched below).
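
Here's what that automount script looks like (a minimal sketch; it assumes PhotoStructure is the only container that depends on the share):

#!/bin/bash

# Runs when Unassigned Devices automounts the SMB share; start the
# container that depends on it
docker start PhotoStructure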

It does seem to make my server take longer to shut down, but it does shut down cleanly. I have my timeout in Disk Settings set very high to play it safe, which I suggest everyone do regardless; it's good practice on Unraid.

Can you help me understand the advantage of keeping photos on Dropbox? Am I right in guessing you’re paying at least $10/mo for their Plus plan and then using the features at Cloud photo storage and online backup - Dropbox?

I keep a lot of files on there regardless. I also use the storage for backups of data other than my photo library (the unique content on Dropbox gets backed up to Backblaze B2 separately as a failsafe). I'm on the annual professional plan.

I have encrypted backups of a few TV shows that I spend a lot of time organizing and naming for Plex and wouldn’t want to lose. My heavily curated local music library is backed up there too. I also keep my Unraid appdata backup there as well.

One plus is my photos appear quickly in PhotoStructure without having to remain connected to my home VPN in order for PhotoSync to run. Leaving a VPN always on interferes with wireless Android Auto. I also don’t have to worry about losing any photos in the event of a multi drive failure+backup corruption since my photos are on a cloud storage provider.

My backup process is multi-redundant. I keep what I stated above on Dropbox using Arq Backup. I also use Arq Backup to make a copy of my Dropbox data on Backblaze B2. Then I have a local 18 TB unassigned disk in my server that gets a nightly versioned backup of all shares on my Unraid array using Arq.

As a last resort I have all of my files and media backed up using Backblaze computer backup, that’s just there as a failsafe. It works great in a Windows VM on Unraid using VirtioFS.

If all of those backups fail and my entire apartment burns down, my most important memories still have an original offsite copy; hence the cloud storage. It's paranoia. I went with Dropbox because they are the only major cloud storage provider targeted at consumers (unlike Box.com, which is really meant for enterprise) that isn't run by GAFAM.

–
To update my original post: after playing with SMB on Unraid 6.12.10, I've decided to use rclone. SMB still causes my array to hang when stopping, and rclone works extremely well even though the files are cloud-based and need to be cached locally. It caches as soon as PhotoStructure accesses a file, so my cloud files are truly on demand and use very little space on my server when using rclone.