First off, if you’ve already drawn up a pretty set of diagrams for exactly this, just point me there with a link, please!
I’m trying to better understand the photo processing pipeline PS takes, as it relates to hardware resource usage.
My understanding, largely from the image below, is that the sync process scans for assets in your non-PS photo library (I’ll call this the messy or unstructured library; mine is…). Then, for each asset, it computes a SHA and calculates a mean hash (used for comparisons between images/videos? I’m guessing a bit here). Next it creates the image’s (or video’s) preview sizes and/or transcodes it to be more web friendly, and then (or perhaps before the previous step) it pulls all the metadata, stores it in the db, and places the asset in the appropriate folder in the PS_Library.
Find asset >> hash asset >> convert asset/create preview sizes/transcode >> send to PS_Library
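To make sure I’m picturing the flow right, here’s a minimal sketch of those four steps. Everything here is hypothetical: the SHA-256 choice, the preview sizes, the `digest[:2]` folder layout, and the helper names are all my guesses, not PS’s actual implementation.

```python
import hashlib
import shutil
from pathlib import Path

def sha_of(path: Path) -> str:
    """Step 2: content hash (SHA-256 here; PS may use a different algorithm)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def make_previews(path: Path) -> list:
    """Step 3 stand-in: real code would resize/transcode; we just name the sizes."""
    return ["%s-%d.jpg" % (path.stem, w) for w in (160, 640, 1920)]

def ingest(asset: Path, library: Path) -> str:
    """Steps 1-4: hash the asset, make 'previews', file it into the library tree."""
    digest = sha_of(asset)
    previews = make_previews(asset)          # not persisted in this toy sketch
    dest = library / digest[:2] / asset.name  # toy layout, not PS's real one
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(asset, dest)
    return digest
```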
My main question:
Where does each operation happen on the filesystem? I know there is a temp space that should be on an SSD; is that where transcoding/converting/preview creation occurs? Does the hashing happen there too, as well as metadata inspection?
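For what it’s worth, here’s what I imagine the “mean hash” to be: a perceptual average hash, where each bit records whether a pixel is above the image’s mean brightness, so near-duplicate images end up with small Hamming distance. This is purely my guess at the technique; the sketch assumes the image has already been downscaled to an 8x8 grayscale grid (real code would do that resize first, e.g. with Pillow).

```python
def mean_hash(pixels) -> int:
    """Average hash: bit = 1 where the pixel is >= the grid's mean brightness.
    `pixels` is assumed to be an already-downscaled 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits  # 64-bit fingerprint

def hamming(a: int, b: int) -> int:
    """Similar images should give hashes with a small Hamming distance."""
    return bin(a ^ b).count("1")
```

So unlike the SHA (which changes completely if one byte changes), two slightly different exports of the same shot would compare as “close” here, assuming that’s actually what PS uses it for.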
The reason I’m asking is to understand how I might better architect my setup to maximize my hardware’s usage. For example: if tmp really is where all of the above occurs, and my messy library is coming off spinning disks and going TO spinning disks, then the temp space should spare my spinning disks the IOPS associated with hashing/converting, and there likely isn’t much I can do to speed things up beyond a faster SSD or maybe even a RAM disk (a little far-fetched, and it would require PS to offer a choice of where to conduct its operations: on an SSD or on the RAM disk).
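The staging pattern I have in mind looks roughly like this. To be clear, this is NOT how PS is documented to work internally; it’s just my mental model of what a temp-space-on-SSD design would buy you: the HDDs see one sequential read and one sequential write per asset, while the random-ish IOPS from hashing/transcoding land on the fast scratch device. The function and parameter names are mine.

```python
import shutil
import tempfile
from pathlib import Path

def stage_and_process(src: Path, dest_dir: Path, tmp_root=None) -> Path:
    """Copy the asset to fast scratch space (tmp_root, e.g. an SSD or RAM
    disk), do the read-heavy work there, then write the result once to the
    slow destination. Hypothetical sketch, not PS's actual pipeline."""
    with tempfile.TemporaryDirectory(dir=tmp_root) as scratch:
        staged = Path(scratch) / src.name
        shutil.copy2(src, staged)        # one sequential read off the HDD
        # ... hash / transcode / preview generation against `staged` here ...
        dest_dir.mkdir(parents=True, exist_ok=True)
        final = dest_dir / src.name
        shutil.copy2(staged, final)      # one sequential write to the HDD
    return final
```

If that’s roughly right, then pointing `tmp_root` at the fastest device available is the main knob, which is what prompts the question above.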
I also was unable to find answers to these questions in the docs, so I figured I’d ask and either get pointed to the answers or maybe this could become another resource for Googlers.
Thank you in advance. I do know that if it’s all on SSD it’ll be blazing fast, but that’s true for most things (look at the PS5’s SSD requirements, for instance). However, SSDs are also expensive, so I wanted to take another stab at seeing if I can tweak or move anything that might speed up ingests. I realize it will eventually finish and I hopefully shouldn’t have to ingest again… however, we DO have that ML feature coming in the future that might need a complete library scan.