UnQLite Users Forum

Record getting corrupted / partially recorded

Atul

We have a multi-threaded application in which we want to use UnQLite. It's a fairly simple application: a thread t1 receives events from an external source and, as it receives them, dumps them into the DB under a key using unqlite_kv_store(). The bulk of the data is a serialized JSON string.
A second thread t2 reads these events one at a time from the top and, once it has read an event, deletes the entry.
t2's logic is something like following (in a loop):
unqlite_kv_cursor_init() << Init the cursor
unqlite_kv_cursor_first_entry() << get the first entry
unqlite_kv_cursor_valid_entry() << validate it
unqlite_kv_cursor_key() << get the key length
unqlite_kv_cursor_key() << get the key data
unqlite_kv_cursor_release() << release the cursor
unqlite_kv_fetch() << fetch based on key
unqlite_kv_delete() << based on the key
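The steps above, put together as code, look roughly like this (a sketch of t2's loop only; error handling is trimmed, and the key-buffer size is an assumption for illustration):

```c
#include "unqlite.h"

/* Sketch of t2's read-then-delete step. `db` is an already-open handle;
   the caller is assumed to hold whatever lock also guards t1's stores. */
int pop_first_record(unqlite *db, void *pData, unqlite_int64 *pnDataLen)
{
    unqlite_kv_cursor *cur;
    char key[256];            /* assumed big enough for our keys */
    int nKey, rc;

    rc = unqlite_kv_cursor_init(db, &cur);          /* init the cursor */
    if (rc != UNQLITE_OK) return rc;

    rc = unqlite_kv_cursor_first_entry(cur);        /* first entry */
    if (rc == UNQLITE_OK && unqlite_kv_cursor_valid_entry(cur)) {
        unqlite_kv_cursor_key(cur, NULL, &nKey);    /* key length */
        unqlite_kv_cursor_key(cur, key, &nKey);     /* key data */
        unqlite_kv_cursor_release(db, cur);         /* release cursor */

        rc = unqlite_kv_fetch(db, key, nKey, pData, pnDataLen);
        if (rc == UNQLITE_OK)
            rc = unqlite_kv_delete(db, key, nKey);
        return rc;
    }
    unqlite_kv_cursor_release(db, cur);
    return UNQLITE_NOTFOUND;                        /* queue is empty */
}
```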
We built the library with UNQLITE_ENABLE_THREADS defined.
Under high event load, we often see records getting corrupted (partial data is retrieved). The buffer passed to fetch the record is statically allocated but guaranteed to be big enough. We even tried adding our own locking on top of UnQLite to guarantee synchronization between t1 and t2, but no luck. The DB is opened with a backing file on Windows, and throughout its life we never issue a commit from the program.
Any suggestions on what might be going wrong, or what we could do to debug it further?



#1. devel


The UnQLite cursor interfaces (i.e. unqlite_kv_cursor_init(), unqlite_kv_cursor_key(), etc.) are not thread-safe, unlike the core interfaces, even when the library is compiled with UNQLITE_ENABLE_THREADS as you did.
The trick is to protect the cursor calls with some locking mechanism, such as a mutex: acquire it before calling them and release it once you have finished working with the cursor.

Finally, did you call unqlite_commit() manually in any of your threads, or is everything handled automatically by the engine?

#2. Atul

As I mentioned, I added my own locks to protect the UnQLite APIs across the two threads. The cursor APIs (along with all the others) are guarded.

No, commit is specifically not being called.

The corruption of records is observed under high-stress conditions, i.e. the writer is writing constantly, almost in a while(1) loop. The reader periodically reads from the DB by navigating records with the cursor, and once a record is read, it is deleted.

#3. chm


Under a high load of read/write operations, it would be safe to call unqlite_commit()[1] manually, say every 1000 iterations, to make sure that everything reaches the surface of the disk.
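A periodic commit can be as simple as a counter in the writer's store path. A sketch (the helper name and commit interval are assumptions; the caller is assumed to hold the same lock that guards the rest of the DB access):

```c
#include "unqlite.h"

#define COMMIT_EVERY 1000  /* assumed interval; tune for your load */

/* Hypothetical writer-side helper: store a record, and every
   COMMIT_EVERY stores flush the pending transaction to disk. */
static int store_with_periodic_commit(unqlite *db,
                                      const void *key, int nkey,
                                      const void *data, unqlite_int64 ndata)
{
    static int since_commit = 0;   /* fine while only t1 writes */
    int rc = unqlite_kv_store(db, key, nkey, data, ndata);
    if (rc != UNQLITE_OK) return rc;
    if (++since_commit >= COMMIT_EVERY) {
        rc = unqlite_commit(db);   /* commit the write transaction */
        since_commit = 0;
    }
    return rc;
}
```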

Don't forget to update to the latest version of the library.

[1]: https://unqlite.org/c_api/unqlite_commit.html

#4. Atul

Does it make a difference if the buffer being written is unaligned?

#5. Atul

Or let me rephrase: are there any alignment requirements for buffers written into the DB?

#6. chm

Could you provide any C/C++ code snippet for this particular case? I suspect a misuse of some of the library interfaces. Email me directly [chm at symisc dot net].

