I have a weird setup: I run Unraid, yet I choose to keep my photos on cloud storage. It’s convenient, but getting those photos into PhotoStructure has been a challenge on Unraid. For a while I ran PhotoStructure for Node in a Windows VM on my Unraid server, pointed at my Dropbox folder, but this wasn’t very clean. I wanted containerization, but getting my data from Dropbox into Unraid proved to be a challenge.
There are three ways to do this:

- Dropbox Docker container: Two of these are published, but neither works well. They both seem very unstable, and I personally couldn’t get either to work.
- Sharing the Dropbox folder from a VM over SMB and mounting it in Unraid using Unassigned Devices: This nearly worked, but I ran into issues where folders randomly failed to show up, it was unreliable, it seemed to cause my server to hang on shutdown, and it requires that the VM be running at all times.
- rclone: This has its drawbacks. It’s complicated, and automating it isn’t easy. When mounting cloud storage with rclone for access inside Docker, the storage must be mounted before the container starts. Automating that requires a shell script, plus some error handling to detect whether the mount has disconnected and to remount it.
Rclone ended up being the best option: with the correct settings it’s nearly perfect. With the incorrect settings, it will crash your server in minutes. That’s why it’s important to research all of the options and know what you are doing before running anything, and to thoroughly test everything prior to deployment.
The most important thing to know about rclone on Unraid is that you must specify a cache location for rclone on some sort of Unraid share. Unraid runs in memory, and if a cache location isn’t specified, rclone will cache in memory too. This will crash your server. With some time, trial and error, and GPT-4 Turbo, I was able to put together a bash script for the User Scripts plugin on Unraid that successfully mounts Dropbox and syncs the images in my PhotoStructure photo library. While I use this with Dropbox, it should work with any cloud storage provider supported by rclone, perhaps with some tweaking to the arguments if using something like FTP or SFTP.
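To make that failure mode concrete, here’s a hedged sketch (remote name and paths are placeholders). Per rclone’s documentation, the default cache location is ~/.cache/rclone, which for root on Unraid sits on the RAM-backed root filesystem:

# DON'T do this on Unraid: with no --cache-dir, rclone caches under
# ~/.cache/rclone, which lives in RAM and will eventually exhaust it
rclone mount --vfs-cache-mode full remote: /mnt/remotes/example

# DO pin the cache to real storage instead (path is an example)
rclone mount --vfs-cache-mode full --cache-dir /mnt/cache/appdata/rclonecache remote: /mnt/remotes/example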
To do this you will need three plugins installed: User Scripts, rclone (the plugin, not any of the Dockers), and Unassigned Devices, so that the /mnt/remotes location exists, as this is the ideal place to mount to. If you don’t want to install Unassigned Devices, creating /mnt/remotes yourself should be okay, but I haven’t really tested that.
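If you go the manual route, a one-line sketch:

# Create the parent directory by hand. Note this won't survive a reboot
# (Unraid's root filesystem lives in RAM), but the mount script's mkdir -p
# recreates the full path at array start anyway.
mkdir -p /mnt/remotes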
Two scripts are required (though one of them will be duplicated).
The first script is the big one, and it should exist as two copies in User Scripts: one set to run at Array Start, and the other to rerun on a cron job (Custom in User Scripts). I chose 0 * * * *, which runs the script at the top of every hour. The script starts the specified Docker containers and periodically checks whether they’re running, starting them if they aren’t. Because the Appdata Backup plugin on my server (which runs once a week) requires all Dockers to be stopped, I schedule around it: the script runs at the top of every hour, and Appdata Backup runs at 3:01 am on Sundays, so they shouldn’t conflict.
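For reference, the two schedules side by side in cron syntax (the backup time is configured in Appdata Backup’s own UI; it’s shown here only for comparison):

0 * * * *    # mount-and-start script: minute 0 of every hour
1 3 * * 0    # Appdata Backup: 3:01 am every Sunday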
This script is as follows:
#!/bin/bash

# Define the mount directory
MOUNT_DIR="/mnt/remotes/your_mount_point_here"

# Define the rclone remote name
RCLONE_REMOTE="your_remote_here"

# Define the rclone cache directory location
RCLONE_CACHE_DIR="/mnt/cache/appdata/rclonecache"

# Define Docker containers as an array. Containers should be in quotes, separated by spaces.
DOCKER_CONTAINERS=("your_dockers_here")

# Check if the mount directory exists, create it if not
if [ ! -d "$MOUNT_DIR" ]; then
    echo "Mount directory does not exist, creating it..."
    mkdir -p "$MOUNT_DIR"
fi

# Check if rclone is already mounted at MOUNT_DIR
if mount | grep -q "$MOUNT_DIR"; then
    echo "rclone is already mounted at $MOUNT_DIR."
else
    # Mount the remote in the background
    echo "Mounting rclone at $MOUNT_DIR."
    rclone mount \
        --allow-other \
        --allow-non-empty \
        --log-level INFO \
        --poll-interval 1s \
        --dir-cache-time 1m \
        --cache-dir="$RCLONE_CACHE_DIR" \
        --vfs-cache-mode full \
        --vfs-cache-max-size 100G \
        --vfs-cache-max-age 24h \
        --vfs-read-chunk-size 128M \
        --vfs-read-chunk-size-limit 1G \
        --vfs-read-ahead 128M \
        --uid 99 \
        --gid 100 \
        --umask 002 \
        "$RCLONE_REMOTE:" "$MOUNT_DIR" &
fi

# Give the mount operation time to initialize before starting the Docker containers
sleep 10

# Loop through the Docker containers array
for container in "${DOCKER_CONTAINERS[@]}"; do
    # Check if the Docker container is already running (exact name match,
    # so e.g. "photo" doesn't also match "photostructure")
    if docker ps --format '{{.Names}}' | grep -qx "$container"; then
        echo "Docker container '$container' is already running."
    else
        echo "Starting Docker container '$container'..."
        docker start "$container"
    fi
done
First you’ll need to define your mount point; this is where the storage will mount in the filesystem. Then define the name of the remote. This assumes you’ve already run rclone config and set up a remote. I’m not going to cover that here, as there are plenty of resources explaining how to configure an rclone remote, and it’s the easiest part of the process. Assuming you’ve done that, next set the cache directory location. I suggest /mnt/cache/appdata/any_folder as the best location for most people; you’ll just need sufficient space on your cache drive. Finally, define any Docker containers that depend on rclone and should be started after the mount is established. You’ll want to disable Docker autostart in Unraid for these containers, since the script needs to start them. Multiple containers can be started; just quote each one and separate them with spaces, as in the example below.
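For example, a filled-in configuration block might look like this (all values are hypothetical; substitute your own):

MOUNT_DIR="/mnt/remotes/dropbox"
RCLONE_REMOTE="dropbox"
RCLONE_CACHE_DIR="/mnt/cache/appdata/rclonecache"
DOCKER_CONTAINERS=("photostructure" "some-other-container")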
The script first checks whether the mount directory exists: if it doesn’t, it creates it; if it does, it moves on. Then it checks whether something is already mounted there. If mount | grep /path/to/mount returns anything, it assumes something is mounted and moves on to the Docker containers; if not, it mounts the remote.
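If you want a stricter test, util-linux ships a mountpoint command (present on my Unraid install, but verify on yours) that only succeeds for an actual mount, so a leftover empty directory or a partial path match can’t fool the check:

# Stricter alternative to mount | grep: exits 0 only if MOUNT_DIR is a real mount
if mountpoint -q "$MOUNT_DIR"; then
    echo "rclone is already mounted at $MOUNT_DIR."
fi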
Now, these flags are optimized for my setup and for Unraid, particularly --uid 99, --gid 100, and --umask 002, which set file ownership and permissions. Others, such as --vfs-cache-max-size 100G and --vfs-cache-max-age 24h, may need to be tweaked depending on your circumstances; they set the maximum size of the cache rclone will keep and how long it keeps it. I set --poll-interval 1s and --dir-cache-time 1m so new photos are picked up quickly. --vfs-cache-mode full is very important, as it caches both reads and writes, which we want since PhotoStructure does a lot of reading. --allow-other is needed so Docker can see the files, and --allow-non-empty allows the mount to start even if rclone created a folder in that location and didn’t clean it up. Just be careful with that one: if you mount over a location with actual data, that data will be hidden for as long as the mount is active, hence using /mnt/remotes. The other options are just good baseline defaults I found online. I didn’t put much effort into them, and they seem to work well for me.
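To make the permission flags concrete: on Unraid, UID 99 is the nobody user and GID 100 is the users group, and a umask of 002 removes write permission from “other” only, so files under the mount appear roughly like this (illustrative listing, not real output):

# ls -l under the mount with --uid 99 --gid 100 --umask 002:
#   -rw-rw-r-- 1 nobody users  photo.jpg     (666 & ~002 = 664)
#   drwxrwxr-x 1 nobody users  some-folder   (777 & ~002 = 775)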
After mounting, the script checks whether each container is already running and starts it if it isn’t. The container needs to start after the mount has been established at least once so Docker can see the mount; if it unmounts and remounts at some point later, everything seems to still work. It’s just a first-time thing from my testing. It also seems to recover if the internet drops, although I’ve only tested that by rebooting my router. That’s why we want this script rerunning every hour: to see if rclone unmounted and then remount it. But we don’t want it running the mount command over and over, or running docker start over and over when it doesn’t need to, hence the checks I (GPT-4 Turbo) added.
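You can exercise the recovery path by hand before trusting the cron job (the mount point is the same placeholder as above):

# Simulate a dropped mount
fusermount -uz /mnt/remotes/your_mount_point_here
# Rerun the script (or wait for the hourly cron), then confirm it came back
mount | grep /mnt/remotes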
The second script simply unmounts the remote. It needs to be set to run at Array Stop; if it isn’t, the server will not shut down cleanly and you’ll be forced into a parity check.
#!/bin/bash
# Define the mount directory
MOUNT_DIR="/mnt/remotes/your_mount_point"
# Unmount the remote directory
fusermount -uz "${MOUNT_DIR}"
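If you want the stop script to be quieter when nothing is mounted, a small guarded variant (same behavior otherwise; -z detaches lazily, so the unmount succeeds even if a process still has files open):

#!/bin/bash
MOUNT_DIR="/mnt/remotes/your_mount_point"

# Only attempt the unmount when something is actually mounted there
if mount | grep -q "$MOUNT_DIR"; then
    fusermount -uz "$MOUNT_DIR"
fi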
That’s pretty much it. Once the remote is mounted, add it to any Docker template like any other share in Unraid, and cloud storage will reliably behave as if it were local.
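As a rough illustration, the template mapping for PhotoStructure is equivalent to something like this docker run (image name and container path are from memory and may differ for your setup; in practice you’d configure this through the Unraid Docker template UI):

docker run -d --name photostructure \
  -v /mnt/remotes/your_mount_point_here:/photos \
  photostructure/server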