Crash on initial scan

Expected Behavior

After a fresh reinstall/reinitialization of PhotoStructure, the initial scan should not crash.

Current Behavior

The container stopped. This is from the Docker log for the container:

Your library is at /ps/library
PhotoStructure is ready: <http://localhost:1787/>
{"fatal":true,"exit":true,"status":12,"pid":25,"ppid":18,"error":"ChildService(sync).onStdout(): Error: ChildService(sync).onStdout()unhandledRejection: Error: DbRequest {\"method\":\"run\",\"sql\":\"insert into `ProgressMeta` (`createdAt`, `name`, `progressId`, `updatedAt`, `value`) values (?, ?, ?, ?, ?) ON CONFLICT (progressId,name) DO UPDATE SET update…"}

Shutting down PhotoStructure...
Verifying & backing up /ps/tmp/local-db/models/db.sqlite3...
PhotoStructure library db has been backed up to /ps/library/.photostructure/models/backup.

Steps to Reproduce

I'm not sure it can be reproduced, but this is what I did before it happened:

  • deleted all the content of the library (including the .photostructure hidden folder)
  • deleted tmp, config, and logs
  • recreated the container
  • logged in to restore the Plus subscription
  • left all the defaults on the settings screen

After a few minutes of scanning/processing, this error occurred. I restarted the container and so far it has not recurred.

Environment

Operating system and version: Unraid 6.9.2

PhotoStructure edition: PhotoStructure for Docker

Version 1.0.0-beta.9
Edition PhotoStructure for Docker
Plan plus
OS Alpine Linux v3.13 on x64 (docker)
Node.js 16.3.0
ExifTool 12.26
Video support FFmpeg 4.3.1
HEIF support /usr/bin/heif-convert
Free memory 360 MB / 34 GB
CPUs 12 × Intel(R) Xeon(R) E-2246G CPU @ 3.60GHz
Concurrency Target system use: 75% (9 syncs, 2 threads/sync)
Web uptime 9 minutes, 33 seconds
Library metrics 3,526 assets
7,244 image files
718 tags
Library path /ps/library
Health checks
  • Library and support directories are OK
  • Used memory used by web (31 MB) is OK
  • RSS memory used by web (110 MB) is OK

I see the following foreign key constraint errors in the logs:

2021-06-25T01:55:41.420Z log warn  Transactions errorIsRetriable() { stack:
   [ 'SqliteError: FOREIGN KEY constraint failed',
     '    at /ps/app/bin/sync.js:9:674785',
     '    at Function.<anonymous> (/ps/app/bin/sync.js:9:695519)',
     '    at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)',
     '    at /ps/app/bin/sync.js:9:695529',
     '    at Object.t.handleDbRetries (/ps/app/bin/sync.js:9:676728)' ] }
2021-06-25T01:55:41.420Z log warn  Transactions tx(): raised { stack:
   [ 'SqliteError: FOREIGN KEY constraint failed',
     '    at /ps/app/bin/sync.js:9:674785',
     '    at Function.<anonymous> (/ps/app/bin/sync.js:9:695519)',
     '    at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)',
     '    at /ps/app/bin/sync.js:9:695529',
     '    at Object.t.handleDbRetries (/ps/app/bin/sync.js:9:676728)' ] }
2021-06-25T01:55:41.421Z log error DbRequest SqliteError: code SQLITE_CONSTRAINT_FOREIGNKEY: FOREIGN KEY constraint failed
SqliteError: FOREIGN KEY constraint failed
    at /ps/app/bin/sync.js:9:674785
    at Function.<anonymous> (/ps/app/bin/sync.js:9:695519)
    at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)
    at /ps/app/bin/sync.js:9:695529
    at Object.t.handleDbRetries (/ps/app/bin/sync.js:9:676728) { method: 'run',
  sql:
   'insert into `ProgressMeta` (`createdAt`, `name`, `progressId`, `updatedAt`, `value`) values (?, ?, ?, ?, ?) ON CONFLICT (progressId,name) DO UPDATE SET updatedAt=excluded.updatedAt,value=excluded.value',
  bindings: [ 1624586141418, 'processedImageCount', 1, 1624586141418, '1178' ] }
2021-06-25T01:55:41.424Z log warn  Transactions errorIsRetriable() { cause: {},
  retriable: false,
  fatal: false,
  stack:
   [ 'SqliteError: FOREIGN KEY constraint failed',
     '    at /ps/app/bin/sync.js:9:674785',
     '    at Function.<anonymous> (/ps/app/bin/sync.js:9:695519)',
     '    at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)',
     '    at /ps/app/bin/sync.js:9:695529',
     '    at Object.t.handleDbRetries (/ps/app/bin/sync.js:9:676728)' ] }
2021-06-25T01:55:41.424Z log warn  Transactions tx(): raised { cause: {},
  retriable: false,
  fatal: false,
  stack:
   [ 'SqliteError: FOREIGN KEY constraint failed',
     '    at /ps/app/bin/sync.js:9:674785',
     '    at Function.<anonymous> (/ps/app/bin/sync.js:9:695519)',
     '    at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)',
     '    at /ps/app/bin/sync.js:9:695529',
     '    at Object.t.handleDbRetries (/ps/app/bin/sync.js:9:676728)' ] }
2021-06-25T01:55:41.425Z log info  Error isFatalErrorAllowed() { result: false,
  mainService: false,
  postProbation: true,
  lowErrorRate: false,
  fatalErrorRatePerMinuteSetting: 10 }
2021-06-25T01:55:41.425Z log error Error onError(): Error: DbRequest {"method":"run","sql":"insert into `ProgressMeta` (`createdAt`, `name`, `progressId`, `updatedAt`, `value`) values (?, ?, ?, ?, ?) ON CONFLICT (progressId,name) DO UPDATE SET updatedAt=excluded.updatedAt,value=excluded.value","bindings":…
SqliteError: FOREIGN KEY constraint failed
    at /ps/app/bin/sync.js:9:674785
    at Function.<anonymous> (/ps/app/bin/sync.js:9:695519)
    at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)
    at /ps/app/bin/sync.js:9:695529
    at Object.t.handleDbRetries (/ps/app/bin/sync.js:9:676728) { event: 'fatal', message: 'unhandledRejection' }
2021-06-25T01:55:41.425Z log error Service(sync) exit() { status: 12,
  reason:
   'unhandledRejection: Error: DbRequest {"method":"run","sql":"insert into `ProgressMeta` (`createdAt`, `name`, `progressId`, `updatedAt`, `value`) values (?, ?, ?, ?, ?) ON CONFLICT (progressId,name) DO UPDATE SET updatedAt=excluded.updatedAt,value=excluded.value","bindings":…',
  waitForJobs: false,
  ending: false }

Thanks for reporting this!

Here’s the table in question, ProgressMeta:

CREATE TABLE ProgressMeta
-- Used to store state of a sync.
-- Prevents future sync jobs from re-doing prior work.
(
  id integer NOT NULL PRIMARY KEY,
  progressId integer NOT NULL,
  name varchar(255) NOT NULL,
  value varchar(255) NOT NULL,
  createdAt bigint NOT NULL,
  updatedAt bigint NOT NULL,
  FOREIGN KEY(progressId) REFERENCES Progress(id)
);

CREATE UNIQUE INDEX progressmeta_fk_name_udx 
  ON ProgressMeta (progressId, name);

There’s only one thing that can cause a foreign key issue, and that’s if the Progress row gets deleted. I’ll look at the update codepaths and see if something might be removing rows inappropriately.
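
To make the failure mode concrete, here's a minimal sketch you could run in the sqlite3 CLI. The Progress table shape is an assumption (pared down to just an id column); ProgressMeta and its unique index are copied from the schema above, and the bindings mirror the processedImageCount upsert from the log.

-- Minimal sketch of the failure mode (sqlite3 CLI).
-- The Progress table definition here is an assumption; ProgressMeta is
-- copied from the real schema above.
PRAGMA foreign_keys = ON;

CREATE TABLE Progress (id integer NOT NULL PRIMARY KEY);

CREATE TABLE ProgressMeta
(
  id integer NOT NULL PRIMARY KEY,
  progressId integer NOT NULL,
  name varchar(255) NOT NULL,
  value varchar(255) NOT NULL,
  createdAt bigint NOT NULL,
  updatedAt bigint NOT NULL,
  FOREIGN KEY(progressId) REFERENCES Progress(id)
);

CREATE UNIQUE INDEX progressmeta_fk_name_udx
  ON ProgressMeta (progressId, name);

-- The upsert from the log succeeds while the parent Progress row exists:
INSERT INTO Progress (id) VALUES (1);
INSERT INTO ProgressMeta (createdAt, name, progressId, updatedAt, value)
  VALUES (1624586141418, 'processedImageCount', 1, 1624586141418, '1178')
  ON CONFLICT (progressId, name) DO UPDATE
  SET updatedAt = excluded.updatedAt, value = excluded.value;

-- Simulate the parent row disappearing underneath a sync process that
-- still holds progressId = 1 in memory:
DELETE FROM ProgressMeta;
DELETE FROM Progress;

-- The same upsert now raises "FOREIGN KEY constraint failed", as in the log:
INSERT INTO ProgressMeta (createdAt, name, progressId, updatedAt, value)
  VALUES (1624586141418, 'processedImageCount', 1, 1624586141418, '1178')
  ON CONFLICT (progressId, name) DO UPDATE
  SET updatedAt = excluded.updatedAt, value = excluded.value;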

I'm noticing something else unexpected (to me; maybe it's expected in beta.9): no more asset counts??

Thanks for reporting this!

I found that bug yesterday while I was testing initial sync. Beta.10 should resolve this issue (sync now does the tag counting).

Well, now I am seeing lots of constraint errors on other tables. Also, the sync has stalled (see other thread), so I'm not sure what's going on. Should I scrap everything, start from scratch, and hope it doesn't recur? Or wait for beta.10 to try again?

{"ts":1624627263941,"l":"error","ctx":"DbRequest","msg":"SqliteError: code SQLITE_CONSTRAINT_UNIQUE: UNIQUE constraint failed: AssetFile.assetId, AssetFile.shown\nSqliteError: UNIQUE constraint failed: AssetFile.assetId, AssetFile.shown\n at /ps/app/bin/sync-file.js:9:672308\n at Function.<anonymous> (/ps/app/bin/sync-file.js:9:693042)\n at Function.sqliteTransaction (/ps/app/node_modules/better-sqlite3/lib/methods/transaction.js:65:24)\n at /ps/app/bin/sync-file.js:9:693052\n at Object.t.handleDbRetries (/ps/app/bin/sync-file.js:9:674251)","meta":{"method":"run","sql":"insert into AssetFile (aperture, assetId, capturedAtLocal, capturedAtOffset, capturedAtPrecisionMs, capturedAtSrc, createdAt, fileSize, focalLength, height, id, iso, lensId, make, meanHash, mimetype, mode0, mode1, mode2, mode3, mode4, mode5, mode6, model, mountpoint, mtime, rotation, sha, shown, shutterSpeed, updateCount, updatedAt, uri, version, width) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ON CONFLICT (uri) DO UPDATE SET aperture=excluded.aperture,assetId=excluded.assetId,capturedAtLocal=excluded.capturedAtLocal,capturedAtOffset=excluded.capturedAtOffset,capturedAtPrecisionMs=excluded.capturedAtPrecisionMs,capturedAtSrc=excluded.capturedAtSrc,fileSize=excluded.fileSize,focalLength=excluded.focalLength,height=excluded.height,iso=excluded.iso,lensId=excluded.lensId,make=excluded.make,meanHash=excluded.meanHash,mimetype=excluded.mimetype,mode0=excluded.mode0,mode1=excluded.mode1,mode2=excluded.mode2,mode3=excluded.mode3,mode4=excluded.mode4,mode5=excluded.mode5,mode6=excluded.mode6,model=excluded.model,mountpoint=excluded.mountpoint,mtime=excluded.mtime,rotation=excluded.rotation,sha=excluded.sha,shown=excluded.shown,shutterSpeed=excluded.shutterSpeed,updateCount=excluded.updateCount,updatedAt=excluded.updatedAt,version=excluded.version,width=excluded.width","bindings":[4.5,1483,2004040415561900,-240,0,"tags:DateCreated",1624586439760,1161668,"17.5 mm",1944,3052,100,"LensID:Unknown 7-21mm","Canon","BwhgYPDw8PJ5XNzMwMCAwHZczMzAwMDA","image/jpeg",1536,3181,658,1540,1568,3599,3177,"PowerShot S50","/",1624566429869,0,"Bsm8DeuNT2aR44SEMMM3DAn5Unol8hQH",1,"1/8",4,1624627263938,"pslib:/2004/2004-04-04/IMG_0078.JPG",11,2592]}}

OK: there's definitely something amiss here: I've only seen this happen on buggy POSIX-like filesystems (like earlier versions of Big Sur on M1).

This could happen if the database file is unexpectedly truncated: there are in-memory references (to Progress and Asset rows), work gets done, and then when we go to add a child row, the new, truncated DB no longer has those parent rows.

I’ll think about how that could possibly happen.

Thanks for reporting this!

@avdp what filesystem did you bind /ps/tmp and /ps/library to?

Make sure /ps/tmp is a fast local disk (SSD would be great).

If /ps/library is a local filesystem (either an SSD or HDD), you can skip the whole “support SQLite on remote filesystems” codepath by setting the environment variable PS_FORCE_LOCAL_DB_REPLICA=false. This dramatically reduces the number of moving DB parts.
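
For example, with a plain docker run invocation it would look something like the sketch below; the image name, host paths, and port mapping here are illustrative and should match your existing container setup (on Unraid you'd instead add it as an extra Variable in the container template):

docker run -d \
  -e PS_FORCE_LOCAL_DB_REPLICA=false \
  -v /mnt/cache/appdata/photostructure/library:/ps/library \
  -v /mnt/cache/appdata/photostructure/tmp:/ps/tmp \
  -p 1787:1787 \
  photostructure/server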

I’d nuke the /ps/library/.photostructure directory and re-import from scratch, btw: if key constraints are failing, assume the other database rows are garbage/invalid.

All the folders are mapped to an Unraid cache drive, which is an NVMe SSD. The file system is btrfs, so it doesn't get any faster than this…

I've already nuked everything 3 times today, and all 3 times I got the crash minutes after the scan started. All 3 times, when I restarted the container it would run without crashing, but it still showed SQL errors in the logs. Even though it didn't crash, the scan froze pretty quickly. The current run (with hwaccel set to no) hasn't frozen yet. 8 hours to go… We shall see…

Yeah, if all drives are local, definitely run with PS_FORCE_LOCAL_DB_REPLICA=false.

We may have a winner! Wiped everything out again and recreate the container with PS_FORCE_LOCAL_DB_REPLICA=false. No crashing so far, and the scan seems to be faster. I also noticed that pictures started showing in the UI immediately, whereas previously they did not. The scan also seems faster. We’ll see where we’re at in about 8 hours, but feeling optimistic.