What are the rules about concurrently accessing a persistent database?

Something worth noting is that SQLite has known locking problems when the database file is stored on NFS-like volumes (vboxsf, NFS, SMB, mvfs, etc.): on many systems these volumes implement fcntl() read/write locks incorrectly, which can cause SQLite to report the database as locked before you have even successfully opened it. ( http://www.sqlite.org/faq.html#q5 )

Assuming that’s not the issue, it’s also worth mentioning that SQLite doesn’t natively support concurrent “connections” the way a client/server database does ( http://www.sqlite.org/faq.html#q6 ): it uses file-system locks to ensure that two writes never occur at the same time. (See section 3.0 of http://www.sqlite.org/lockingv3.html)
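To make that concrete, here is a minimal sketch of a second connection trying to write while the first one holds the write lock. It uses the low-level Database.Sqlite module that persistent-sqlite ships; the file name demo.db and the helper `execute` are my own illustrative choices, and the exact signatures of open/prepare/step may differ between versions of the package:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Exception (try)
import Data.Text (Text)
import qualified Database.Sqlite as Sqlite

-- Prepare, step once, and finalize a single SQL statement.
execute :: Sqlite.Connection -> Text -> IO ()
execute conn sql = do
  stmt <- Sqlite.prepare conn sql
  _ <- Sqlite.step stmt
  Sqlite.finalize stmt

main :: IO ()
main = do
  a <- Sqlite.open "demo.db"
  b <- Sqlite.open "demo.db"
  execute a "CREATE TABLE IF NOT EXISTS t (x INTEGER)"
  -- Connection a acquires the write lock and holds it by not committing yet.
  execute a "BEGIN IMMEDIATE"
  execute a "INSERT INTO t VALUES (1)"
  -- Connection b cannot take the lock, so its write fails with a busy error
  -- instead of running concurrently with a's transaction.
  result <- try (execute b "INSERT INTO t VALUES (2)")
              :: IO (Either Sqlite.SqliteException ())
  case result of
    Left e   -> putStrLn ("second writer failed: " ++ show e)
    Right () -> putStrLn "second writer succeeded"
  execute a "COMMIT"
  Sqlite.close a
  Sqlite.close b
```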

Assuming all of this is known, it’s also worth checking which version of sqlite3 is available in your environment, since the way different kinds of locks are acquired changed over the course of the 3.x series: http://www.sqlite.org/sharedcache.html
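If you’re going through persistent-sqlite anyway, one quick way to see which SQLite version the embedded C library actually is (rather than whatever sqlite3 shell happens to be on your PATH) is to ask it directly. A minimal sketch, assuming a reasonably current persistent / persistent-sqlite (the runSqlite and rawSql signatures have shifted between releases):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.IO.Class (liftIO)
import Database.Persist.Sql (Single (..), rawSql)
import Database.Persist.Sqlite (runSqlite)

main :: IO ()
main = runSqlite ":memory:" $ do
  -- Ask the library persistent-sqlite was built against for its version.
  rows <- rawSql "SELECT sqlite_version();" []
  liftIO $ mapM_ (putStrLn . unSingle) rows
```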

Edit:
Some additional information from the persistent-sqlite library’s description:
“This package includes a thin sqlite3 wrapper based on the direct-sqlite package, as well as the entire C library.”

The mention of a “thin” wrapper made me take a look at the code to see just how thin it is. As far as I can tell, the persistent wrapper has no guards against a statement issued through the pool failing, beyond the minimum needed to translate the error, report it, and interrupt execution; the caveat being that I am not comfortable with Haskell.

It appears that you will either have to guard against a statement in the pool failing and retry it yourself, or limit the pool size at initialization to 1 (which seems less than ideal). A sketch of both approaches follows.
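A rough sketch of both options, assuming a reasonably current persistent-sqlite. The constraints on withSqlitePool, and the seError / ErrorBusy / ErrorLocked names from the bundled Database.Sqlite module, have varied between versions; example.db and retryOnBusy are purely illustrative names:

```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Concurrent (threadDelay)
import Control.Exception (catch, throwIO)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Logger (runNoLoggingT)
import Database.Persist.Sql (rawExecute, runSqlPool)
import Database.Persist.Sqlite (withSqlitePool)
import qualified Database.Sqlite as Sqlite

-- Rerun an action a limited number of times when SQLite reports the
-- database as busy or locked; rethrow anything else (or the final failure).
retryOnBusy :: Int -> IO a -> IO a
retryOnBusy attempts action = go attempts
  where
    go n =
      action `catch` \(e :: Sqlite.SqliteException) ->
        if n > 1 && Sqlite.seError e `elem` [Sqlite.ErrorBusy, Sqlite.ErrorLocked]
          then threadDelay 100000 >> go (n - 1)  -- back off 100 ms, try again
          else throwIO e

main :: IO ()
main =
  runNoLoggingT $
    -- Pool size 1: only one connection exists, so statements never contend
    -- with each other inside this process.
    withSqlitePool "example.db" 1 $ \pool ->
      liftIO . retryOnBusy 5 $
        runSqlPool (rawExecute "CREATE TABLE IF NOT EXISTS t (x INTEGER)" []) pool
```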
