GPU in Docker, possible?

Is there a way in version 0.9x to use a GPU in the Docker container (specifically on Unraid)? If I were to add it the way I would for Plex, would PhotoStructure just use it?

Interesting: ffmpeg should be able to avail itself of the GPU for encoding. The actual ffmpeg command is controlled by the ffmpegTranscodeArgs setting (in v1.0.0+), so that could be tweaked to pull in the GPU.
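
For what it's worth, here's a rough, untested sketch (my own illustration, not anything PhotoStructure ships) of what GPU-backed ffmpeg arguments could look like on an NVIDIA card, shown as a Node child-process call. The NVENC encoder name and preset are standard ffmpeg options, but whether the container's ffmpeg build includes NVENC, and the exact shape ffmpegTranscodeArgs expects, are assumptions that would need testing:

```ts
// Illustrative only: spawn ffmpeg with NVIDIA hardware decode/encode flags.
// Assumes an ffmpeg build compiled with NVENC and a GPU visible to the container.
import { spawn } from "child_process";

const args = [
  "-hwaccel", "cuda",      // decode on the GPU where the codec allows it
  "-i", "input.mov",       // hypothetical source file
  "-c:v", "h264_nvenc",    // NVIDIA hardware H.264 encoder
  "-preset", "p4",         // NVENC quality/speed preset
  "output.mp4",
];

spawn("ffmpeg", args, { stdio: "inherit" })
  .on("exit", (code) => console.log(`ffmpeg exited with code ${code}`));
```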

I looked into GPU-accelerated image downsampling, and it turns out most of the time is taken in decoding and compression, which the GPU can't help with, which is why packages like sharp don't support it :cry:


I was more interested in video transcoding, which is what predominantly takes up CPU power, so if I could have my GPU do that it would likely help my server.

Do you think v1.0 will include a way to experiment with GPU video transcoding in Docker?

And it appears that hashing takes a ton of CPU, or maybe I'm looking at it wrong. Do you think we could use blake2 hashing in the future, or are you using it already?

Yes, it seems like there won't be anything else I need to do to the Dockerfile: just that settings will need to get tweaked. See

PhotoStructure uses hashes everywhere: it even hashes files post-copy to verify the OS isn't lying to PhotoStructure (which it does sometimes on remote filesystems), so any improvement would be great.
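
As a tiny sketch of that verify-after-copy idea (my own illustration using Node's built-in modules, not PhotoStructure's actual code):

```ts
// Copy a file, then hash both sides and compare, so a lying filesystem is caught.
import { createHash } from "crypto";
import { copyFile, readFile } from "fs/promises";

async function sha512Of(path: string): Promise<string> {
  return createHash("sha512").update(await readFile(path)).digest("hex");
}

async function copyAndVerify(src: string, dest: string): Promise<void> {
  await copyFile(src, dest);
  if ((await sha512Of(src)) !== (await sha512Of(dest))) {
    throw new Error(`post-copy hash mismatch: ${src} -> ${dest}`);
  }
}
```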

I got excited about new hashing algos a while back, but I didn't test blake2 (it wasn't in the lowest Node version I was supporting). v1.0.0 is going to require Node 14.15.5+, though, and that includes 'blake2b512' and 'blake2s256'. :+1: I'll check that when I get a chance.
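
If you want to see how they compare on your own hardware, a quick micro-benchmark against Node's built-in crypto along these lines should work (buffer size and algorithm list are just for illustration):

```ts
// Hash the same in-memory buffer with each algorithm and print elapsed time.
// 'blake2b512' and 'blake2s256' are present in the Node 14.15.5+ builds noted above.
import { createHash, randomBytes } from "crypto";

const payload = randomBytes(64 * 1024 * 1024); // 64 MiB of pseudo-random data

for (const algo of ["sha512", "blake2b512", "blake2s256"]) {
  const start = process.hrtime.bigint();
  createHash(algo).update(payload).digest("hex");
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${algo}: ${ms.toFixed(1)} ms`);
}
```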

Switching secure hash algos would require a brand-new library, btw. Bad things would happen if the hash algo was changed mid-flight during an import.

Here were my old hashing notes:

// Secure hash research:

// SHA1 has known collisions. It should be expected for a nerd to have sample
// images that collide on their laptop.

// SHA2-224 and SHA2-256 use 32-bit operations. SHA-512/224 provides length
// protection and is 20-50% faster than SHA-224 on 64-bit hardware, but Node.js'
// crypto only supports SHA-512 (not the SHA-512/224 or SHA-512/256 variants),
// which are simply SHA-512's leftmost N bits with a different initialization
// vector.

// I don't see why these SHA values would need to be externally consumed, so
// people shouldn't care if the SHA in the db isn't a FIPS standard. I don't
// want to pull in another native library dependency if I can help it.

// ALSO: I don't need that many bits to ensure uniqueness! 160 was enough for
// SHA1, 192 should be plenty, and only takes 32 base64 characters (and doesn't
// waste chars on padding).

// HOWEVER: versions pre-v0.3.5 used the most significant 224 bits, so when we
// build SHAs of strings (like for volume UIDs), we maintain backward
// compatibility by slicing MSB 224 bits. If we slice 192 bits and we use a
// non-8-bit-divisible radix, the values change.

// See "SHA-2 is fine, and in fact the more conservative choice right now. SHA-3 didn't ..." | Hacker News

// shasum -a 512224 implements SHA-512/224.
// shasum -a 512256 implements SHA-512/256.
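
To make the truncation idea in those notes concrete, here's a minimal sketch (mine, not the shipped code) of slicing the most significant bits of a plain SHA-512 digest and base64-encoding them. Note this is truncated SHA-512, not the FIPS SHA-512/224 variant with its different initialization vector, which is exactly the distinction the notes draw:

```ts
// Truncate a SHA-512 digest to its leftmost N bits and base64-encode the result.
import { createHash } from "crypto";

function truncatedSha512(input: string, bits: number): string {
  const digest = createHash("sha512").update(input).digest(); // 64-byte Buffer
  return digest.subarray(0, bits / 8).toString("base64");     // assumes bits % 8 === 0
}

// 224 bits -> 28 bytes -> 40 base64 chars (including "==" padding)
// 192 bits -> 24 bytes -> 32 base64 chars, no padding wasted
console.log(truncatedSha512("example-volume-uid", 224));
console.log(truncatedSha512("example-volume-uid", 192));
```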

Fascinating! Thank you for sharing!

I'd be fine with a library rehash if it meant less time spent on hashes by using FPGA-accelerated hashing, or… maybe we could use the GPUs here? I mean, that's basically what cryptomining is, unless I've missed something. If you did it on the GPU, my only concern would be that it's just as good (accurate) as the CPU.

If you used OpenCL for hashing, then you could support discrete or integrated GPUs. Might make PhotoStructure seem futuristic in speed :slight_smile:

I do agree a whole rehash of my library would suck, but it may be worth it if there was an appreciable performance gain going forward.

File hashing is actually bound by file I/O speed.

Spitballing, the best way to get better performance from PhotoStructure is to

  1. get a faster CPU (my AMD 3900x is absolutely the best money I've spent on tech in the last 10 years),

  2. get your cache on an SSD, and

  3. make PhotoStructure use a cluster of webserver processes (which is something I want to add soon: it currently uses only one process to serve web requests, so it can be CPU bound).
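
For anyone curious, point 3 is just the standard Node.js cluster pattern; a generic sketch (not necessarily how PhotoStructure will implement it) looks like this:

```ts
// Fork one HTTP worker per CPU core so web requests aren't pinned to one process.
import cluster from "cluster";
import { createServer } from "http";
import { cpus } from "os";

if (cluster.isMaster) { // renamed to isPrimary in Node 16+
  for (let i = 0; i < cpus().length; i++) cluster.fork();
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} exited; restarting`);
    cluster.fork();
  });
} else {
  createServer((_req, res) => res.end(`served by pid ${process.pid}\n`)).listen(3000);
}
```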

That makes sense. What part of PS would be considered the "cache"? I can certainly move it to an SSD.

The previews and database, mostly: see https://photostructure.com/about/2021-release-notes/#more-storage-flexibility