We have a multi-threaded application in which we want to use UnQLite. It's a fairly simple setup: thread t1 receives events from an external source and, as it receives them, dumps each one into the DB with unqlite_kv_store() under a new key. The bulk of the data is a serialized JSON string.
Thread t2 reads these events one at a time from the top and, once it has read an event, tries to delete the entry.
t2's logic is something like following (in a loop):
unqlite_kv_cursor_init() << Init the cursor
unqlite_kv_cursor_first_entry() << get the first entry
unqlite_kv_cursor_valid_entry() << validate it
unqlite_kv_cursor_key() << get the key length (NULL buffer)
unqlite_kv_cursor_key() << get the key data
unqlite_kv_cursor_release() << release the cursor
unqlite_kv_fetch() << fetch based on key
unqlite_kv_delete() << based on the key
We built the library with UNQLITE_ENABLE_THREADS defined.
Under high event load, we often see a corrupt record (only partial data is retrieved). The buffer passed to unqlite_kv_fetch() is statically allocated but guaranteed to be big enough. We even tried adding our own locking on top of UnQLite to guarantee synchronization between t1 and t2, but no luck. The DB is opened with a backing file on Windows, and throughout its life we never issue a commit from the program.
Any suggestions on what might be going wrong, or how we could debug this further?