App using QTreeView and QStandardItemModel does not catch up - qt

I'm working on a program (notifyfs) which handles caching of directory entries and watches the underlying filesystem for changes. The cache is stored in shared memory, so (gui) clients can make use of it very easily.
Communication between the server (notifyfs) and clients can go over a socket or via the shared memory itself, by sharing a mutex and condition variable.
When a client wants to load a directory it does the following:
a. select a "view", which is a data struct in shared memory consisting of a shared mutex, a condition variable and a small queue (array), used to communicate add/remove/change events with the client.
b. the client populates its model with what it already finds in the shared memory
c. send a message to the server with a reference to the view and an indication of the path whose contents it wants to load. This may be a path, but if possible the parent entry.
d. the server receives the message (does some checks), sets a watch on the directory, and syncs the directory. When the directory is not yet in the cache this means that every entry it detects is stored in the cache. While doing so it signals the view (the data in shared memory) that an entry has been added, and it stores this event in the array/queue.
e. the gui client has a special thread constantly watching this view in shared memory for changes using the pthread_cond_wait call. This thread is a special io thread which can emit three signals: entry added, entry removed and entry changed. It reads the right parameters from the array queue: a reference to the entry, and what the action is. These three signals are connected to three slots in my model, which is based upon a QStandardItemModel.
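The view mechanism of steps a-e can be sketched with Python's standard threading primitives (a hypothetical, single-process stand-in for the shared-memory view; the real notifyfs code uses process-shared pthread mutexes and condition variables):

```python
import threading
from collections import deque

ENTRY_ADDED, ENTRY_REMOVED, ENTRY_CHANGED = range(3)

class View:
    """Stand-in for the shared-memory view: a mutex, a condition
    variable and a small queue of (action, entry) events."""
    def __init__(self):
        self.cond = threading.Condition()   # owns its own lock
        self.events = deque()
        self.done = False

    def post(self, action, entry):
        # Server side: queue the event and wake the client's io thread.
        with self.cond:
            self.events.append((action, entry))
            self.cond.notify()

    def finish(self):
        with self.cond:
            self.done = True
            self.cond.notify()

def io_thread(view, emit):
    # Client side: block on the condition variable, then "emit a
    # signal" (here: call a handler) for each queued event.
    while True:
        with view.cond:
            while not view.events and not view.done:
                view.cond.wait()
            if not view.events:
                return
            action, entry = view.events.popleft()
        emit[action](entry)

seen = []
emit = {a: (lambda act: (lambda e: seen.append((act, e))))(a)
        for a in (ENTRY_ADDED, ENTRY_REMOVED, ENTRY_CHANGED)}
view = View()
t = threading.Thread(target=io_thread, args=(view, emit))
t.start()
view.post(ENTRY_ADDED, "docs")      # the server syncing a directory
view.post(ENTRY_CHANGED, "docs")
view.finish()
t.join()
```

In the Qt client, the three handlers in `emit` correspond to the three signals connected (with Qt::QueuedConnection) to the model's slots.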
This works perfectly, and it's very fast. When testing it I had a lot of debugging output. After removing that to test without the extra slow io, it looks like the QTreeView can't keep up with the changes. When loading a directory it shows only two thirds of it, and when going on to load another directory, this fraction gets smaller and smaller.
I've connected the different signals from the special thread to the model using Qt::QueuedConnection.
Adding a row at a certain row is done using the insertRow(row, list) call, where row is of course the row, and list is a QList of items.
I've been looking into this issue for some time now, and saw that all the changes are detected by the special io thread, and that the signals are received by the model. Only the signal to the QTreeView is somehow not received. I've been wondering: do I have to set the connection between the model's signal and the receiving slot of the treeview to "Qt::QueuedConnection" as well? Or something else?

Suggested in the reactions was to put the model in a separate thread. This was tempting, but it is not the right way to solve this: the model and the view should live in the same thread.
I solved this issue by doing as much of the work of providing the model with data as possible in the special io thread. I moved some functions which populate the model to this io thread and used the standard calls to insert or remove a row. That worked.
Thanks to everyone giving suggestions,
Stef Bon

OS dev: triple fault when trying to enable paging

I am building a simple OS for learning purposes. I followed different tutorials earlier and customized things myself; currently I am following this tutorial for enabling paging. I'm using QEMU instead of Bochs as my emulator.
If I keep paging disabled everything works fine (even the very basic kmalloc() I implemented), but as soon as I set the PG bit in the cr0 register (i.e. enable paging), everything crashes and QEMU reboots: I suspect that some of the structures (i.e. page directory, page tables, etc.) I have are not created or loaded properly, but I have no way of checking.
I've been trying to solve this problem for a while now, but haven't found a solution. Can anyone see where my mistake is?
Here you can find my complete code: https://github.com/davidedellagiustina/ScratchOS (commit 83b5c8c). Paging code is located in src/cpu/paging.*.
Edit: Setting up a super-basic page directory following exactly this tutorial results in working code. Based on this simple example, I'm trying to build up the more complex structures (i.e. page_t, page_table_t, page_directory_t) in order to understand my mistake.
In general:
pointers should be for virtual addresses only (and should never be used for physical addresses)
physical addresses should probably be using a typedef (e.g. like typedef uint32_t phys_address_t) so that later (when you want to support PAE/Physical Address Extensions) you can change the type (e.g. use typedef uint64_t phys_address_t instead) without breaking everything. This also means you get compile-time warnings/errors when you make silly mistakes (e.g. using a virtual address/pointer where you need a physical address/unsigned integer).
almost all of the kernel should be using pointers/virtual addresses for everything. Physical addresses are only used by some device drivers (for bus mastering/DMA) and for the physical memory management itself (to allocate physical pages for page tables, etc; before mapping them into a virtual address space). This includes high level memory management ("kmalloc()" should return a void * pointer and not a physical address).
during boot, there's a small period of time when none of the kernel's normal code can work because it uses virtual addresses and paging hasn't been initialized yet. To minimize the size of this period of time (and code duplication caused by having 2 versions of functions - one for "before paging initialized" and another for "after paging initialized") you want to initialize paging as soon as possible; either with a dedicated piece of assembly language startup code that's executed before "main()" (possibly using "statically allocated at compile time" memory in the kernel's ".bss" section for the page directory and page tables), or in the boot loader itself (which is cleaner and more powerful/flexible). Things like setting up a valid kernel stack, and initializing (physical, virtual then heap) memory management, can/should wait until after paging has been initialized.
for identity mapping; you'd only need 2 loops (one to create page directory entries and another to create all page table entries), where both loops can be like this (just with different initial values in eax, ecx and edi):
.nextEntry:
stosd
add eax,0x00001000
loop .nextEntry
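Each `stosd` loop just stores consecutive values stepped by 0x1000; in higher-level terms (a Python sketch of the same fill, with illustrative addresses and flag bits):

```python
PAGE_SIZE = 0x1000
FLAGS = 0x3  # present + writable, in the low bits of each entry

def fill(first_addr, count, flags=FLAGS):
    """Equivalent of the stosd loop: `count` entries, each one
    pointing one 4 KiB page past the previous."""
    return [(first_addr + i * PAGE_SIZE) | flags for i in range(count)]

# Loop 1: page directory entries pointing at 1024 consecutive page tables
# (assuming the tables live at the hypothetical physical address 0x00100000).
directory = fill(0x00100000, 1024)

# Loop 2: page table entries identity-mapping the first 4 MiB.
table0 = fill(0x00000000, 1024)
```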
identity mapping isn't great. Normally you want the kernel at a high virtual address (e.g. 0xC0000000) with an area that is "deliberately not used, to catch NULL pointers" at 0x00000000, and user-space (processes, etc.) using normal virtual addresses in between (e.g. maybe starting at virtual address 0x00400000). This makes things annoying for the code that initializes paging and for the kernel's linker script (which is why it's cleaner to initialize paging in the boot loader and avoid the mess in the kernel). For this case you will need to temporarily identity-map one page (the page containing the final "mov cr0" that enables paging and the "jmp kernel_entry" that transfers control to the kernel at its higher address), and you will want to delete that temporarily identity-mapped page after the kernel's main has started.
you will need to become "very familiar" with the debugging capabilities of your emulator. QEMU has a log that can provide very useful clues, and includes a built-in monitor that offers a variety of commands (see https://en.wikibooks.org/wiki/QEMU/Monitor ). You should be able to replace the "mov cr0" (that enables paging) with an endless loop (.die: jmp .die), then use the monitor to stop the emulator after it reaches the endless loop and inspect everything (contents of cr3, contents of physical memory) and find out what is wrong with the page directory or page table entries (and do something similar immediately after paging is enabled to inspect the virtual address space before your code does anything with it). QEMU also allows you to attach a remote debugger (GDB).
I found out I was missing all the flags in the page directory entries (and especially the read/write and the kernel mode ones), as I was putting there just the page table address. I will keep my repository public and I will continue the development from now on, in case anyone needs it in the future.
Edit: Also, I forgot to initialize all the pages (with address and presence bit) when I created a new page table.
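The bug described here (storing the bare table address) amounts to leaving the present and read/write bits clear; a sketch of the fix, with bit positions following the x86 32-bit paging format (names are illustrative):

```python
PRESENT    = 1 << 0   # entry is valid
READ_WRITE = 1 << 1   # writable (clear = read-only)
USER       = 1 << 2   # clear = supervisor/kernel-mode only

def make_pde(table_addr, writable=True, user=False):
    """Page directory entry: table address in bits 31..12, flags below."""
    assert table_addr % 0x1000 == 0, "page tables must be 4 KiB aligned"
    e = table_addr | PRESENT
    if writable:
        e |= READ_WRITE
    if user:
        e |= USER
    return e

buggy = 0x00104000            # address only: PRESENT is clear, so it faults
fixed = make_pde(0x00104000)  # present, writable, kernel-mode
```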

Qt: Catch external changes on an SQLite database

I'm developing a program using an SQLite database which I access via QSqlDatabase. I'd like to handle the (hopefully rare) case where changes are made to the database that are not caused by the program while it's running (e.g. the user could remove write access, move or delete the file, or modify it manually).
I tried to use a QFileSystemWatcher. I let it watch the database file, and in all functions writing something to it, I blocked its signals, so that only "external" changes would trigger the changed signal.
The problem is that the check by the QFileSystemWatcher and/or the actual writing to disk by QSqlDatabase::commit() does not seem to happen at the exact moment I call commit(). So in practice, first the QFileSystemWatcher's signals are blocked, then I change some stuff, then I unblock them, and only then does it report the file as changed.
I then tried to set a bool variable (m_writeInProgress) to true each time a function requests a change. The "changed" slot then checks whether a write action has been requested and if so, sets m_writeInProgress to false again and exits. This way, it would only handle "external" changes.
The problem remains that if the change happens at the exact moment the actual writing is going on, it's not caught.
So possibly, using a QFileSystemWatcher is the wrong way to implement this.
How could this be done in a safe way?
Thanks for all help!
Edit:
I found a way to solve part of the problem. Acquiring an exclusive lock on the database file prevents other connections from changing it. It's quite simple, I just have to execute
PRAGMA locking_mode = EXCLUSIVE
BEGIN EXCLUSIVE
COMMIT
and handle the error that emerges if another instance of my program tries to access the database.
What's left is to know if the user (accidentally) deleted the file during runtime ...
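The effect of that PRAGMA/BEGIN EXCLUSIVE sequence is easy to reproduce with Python's built-in sqlite3 module (same engine, so the behaviour carries over to QSqlDatabase; file name and table are made up):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

holder = sqlite3.connect(path, isolation_level=None)  # autocommit mode
holder.execute("CREATE TABLE t (x)")
holder.execute("PRAGMA locking_mode = EXCLUSIVE")
holder.execute("BEGIN EXCLUSIVE")   # takes the write lock right away

# A second connection (another instance of the program) fails immediately
# instead of waiting, because timeout=0:
other = sqlite3.connect(path, timeout=0)
try:
    other.execute("INSERT INTO t VALUES (1)")
    locked = False
except sqlite3.OperationalError:    # "database is locked"
    locked = True
```

The OperationalError here is exactly the error the edit says must be handled when a second instance tries to access the database.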
First of all, there's no SQLite support for this: SQLite only supports monitoring changes made over a database connection within your direct control. Whatever happens in a separate process concurrently with your process, or while your process is not running, is by design completely out of your control.
The canonical solution to this problem is to encrypt the database with a key specific to your application (and perhaps user, etc.). Then no third-party process can modify the database using SQLite. Of course any process can still corrupt your database, or get rid of it -- that's too bad. You can detect corruption trivially by using cryptographic signatures, perhaps even error-correcting codes so as to be able to restore the data should a certain amount of corruption happen. You don't need notifications of someone moving or deleting the database file: you will know when you attempt to open the database and the "file not found" error is given back to you.
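The signature idea can be as simple as keeping an HMAC of the database file, checked whenever the application opens it (a minimal sketch; key storage and when to re-sign are application decisions, and this only detects changes made while your process was not holding the file):

```python
import hashlib, hmac

def sign_file(path, key):
    """HMAC-SHA256 over the raw bytes of the database file."""
    with open(path, "rb") as f:
        return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

def verify_file(path, key, expected):
    # Constant-time comparison, as recommended for MAC checks.
    return hmac.compare_digest(sign_file(path, key), expected)
```

Sign on clean close, verify on startup; a mismatch means the file changed behind the application's back.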
Of course all of the above requires a custom VFS implementation. That's very much par for the course.

Hardware and Software saves during Context Switch in xv6

I'm studying the xv6 context switch in the Operating Systems: Three Easy Pieces book. I cannot fully understand the Saving and Restoring Context section of Chapter 6 (page 8).
Why are there two types of register saves/restores during the context switch protocol?
What is the difference between the mentioned user registers and kernel registers?
What is the meaning of:
By switching stacks, the kernel enters the call to the switch code in the context of one process (the one that was interrupted) and returns in the context of another (the soon-to-be-executing one).
Why are there two types of register saves/restores during the context switch protocol?
Assuming you are talking about p. 10: the text is a bit misleading (but not nearly as bad as I have seen in some books). They are comparing register saves in interrupt handling to those in context switches. It's really not a good comparison.
Register saving in interrupt handling is done the same way as in a function call (and not as it is done in a context switch). You have to preserve any register values you are going to muck with at the start of interrupt handling, then restore them before the interrupt handler returns. You are only dealing with general purpose registers as well (i.e. not process control registers).
Register saves in context switches are done en masse. All the process's registers get saved at once. An interrupt service routine might save 4 registers, while a context switch might save more than 30.
What is the difference between the mentioned user registers and kernel registers?
Some registers are accessible and modifiable in user mode. The general purpose registers would certainly be user registers. The processor status is a mixed bag: it can be read in user mode, and it can be modified in some limited ways by executing instructions, but it is mostly read-only in user mode. You might call that a user register or you might not.
There are other registers that are only accessible in kernel mode. For example, there will be registers that define the process's page table. Other registers will define the system dispatch table.
Note here that only some of the kernel mode registers are process registers (e.g. those setting up page tables) and need to be saved and restored with the process. Other kernel registers are system-wide (e.g. those for timers and the system dispatch table). Those do not change with the process.
By switching stacks, the kernel enters the call to the switch code in the context of one process (the one that was interrupted) and returns in the context of another (the soon-to-be-executing one).
This is a little bit misleading in the excerpt, but it might make more sense in the full context of the book.
A process context switch requires changing all the per-process registers using a block whose structure is defined by the CPU. What I find misleading in your excerpt is that the context switch involves more than just switching stacks.
Typically a context change looks something like:
SAVE_PROCESS_CONTEXT_INSTRUCTION address_of_the_current_process_context_block
LOAD_PROCESS_CONTEXT_INSTRUCTION address_of_the_next_process_context_block
As soon as you load a process context you are in the new process. That switch includes changing the kernel mode stack.
Some operating systems use terminology in their documentation implying that interrupt (and sometimes exception) handlers are not executed in the context of a process. In fact, the CPU ALWAYS executes in the context of a process.
As soon as you execute the context switch instruction you are in the new process, BUT in an exception or interrupt handler in kernel mode. The change of the kernel stack causes the return from the exception or interrupt to resume the new process's user mode code.
So you are already in the context of the new process after the PCB switch. The resulting change in the kernel mode stack pointer (i.e. establishing a new kernel mode stack) causes the return from the exception or interrupt to pick up where the new process was before it entered kernel mode (via exception or interrupt).

Qt and RTI DDS interaction---Need some guidance

I am making a GUI where I have multiple forms on a QStackedWidget. Now I want the data in these forms to be updated as and when it becomes available. The data will be received through RTI DDS. Can someone suggest some examples or links where the GUI data is updated from a non-GUI thread?
Thank You.
You have several options at your disposal. I will explain the one that seems to suit your situation best, as far as I can assess from your question.
First you need to know that on the subscriber side, there are three different possible kinds of interaction between your application and the DDS DataReaders: polling, listeners and waitsets. Polling basically means that your application queries the DataReader when it deems necessary, for example at a fixed rate. Using listeners means that your application provides the middleware with some callback functions which get invoked whenever new data has arrived. Waitsets are similar to a socket select, where your application thread is blocked until data arrives, or a time-out occurs -- typically followed by an action to access the DataReader.
For GUI applications, it is common to use a polling mechanism as opposed to the listener approach that you are probably using. Instead of reading the data as it arrives and immediately updating the GUI widgets, you can let your GUI read or take data from the DataReaders at a fixed rate, for example at 5 Hz.
With that approach, you take control over when you access DDS and you can do it at the exact rate required, no matter how fast the data gets updated inside your DataReader. Additionally, your question of data being updated by a non-GUI thread is resolved, because you access the DDS DataReader from your own context.
A potential disadvantage of using polling could be that the updating of the widgets happens with some delay, for example if you poll at 5 Hz, your maximum extra delay will be 200 msec. That is usually not a problem for GUI applications though.
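The polling pattern is independent of the GUI toolkit; the body of the fixed-rate timer slot just drains whatever the reader has accumulated. A sketch with a hypothetical stand-in for the DDS DataReader (real RTI Connext code would call the DataReader's take operation instead):

```python
import queue

class FakeReader:
    """Stand-in for a DDS DataReader: the middleware thread pushes
    samples in, the GUI thread takes them out."""
    def __init__(self):
        self._q = queue.Queue()

    def on_data(self, sample):          # middleware (non-GUI) thread
        self._q.put(sample)

    def take(self):                     # GUI thread, inside the timer slot
        samples = []
        while not self._q.empty():
            samples.append(self._q.get_nowait())
        return samples

def poll_once(reader, widgets):
    # What the 5 Hz timer slot does: take all pending samples and
    # update the form fields once, from the GUI thread's own context.
    for sample in reader.take():
        widgets.update(sample)

reader = FakeReader()
reader.on_data({"speed": 3.2})          # arrives from the DDS thread
reader.on_data({"altitude": 90.0})
widgets = {}
poll_once(reader, widgets)              # runs in the GUI thread
```

Because take() runs in the GUI thread's own context, no cross-thread widget access ever happens.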

How do I prevent SQLite database locks?

From the SQLite FAQ I've learned that:
Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however.
So, as far as I understand I can:
1) Read db from multiple threads (SELECT)
2) Read db from multiple threads (SELECT) and write from single thread (CREATE, INSERT, DELETE)
But, I read about Write-Ahead Logging that provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently.
Finally, I got completely muddled when I found this, which specifies:
Here are other reasons for getting an SQLITE_LOCKED error:
Trying to CREATE or DROP a table or index while a SELECT statement is still pending.
Trying to write to a table while a SELECT is active on that same table.
Trying to do two SELECTs on the same table at the same time in a multithreaded application, if sqlite is not set up to do so.
An fcntl(3, F_SETLK) call on the DB file fails. This could be caused by an NFS locking issue, for example. One solution for this issue is to mv the DB away, and copy it back so that it has a new inode value.
So, I would like to clarify for myself: when should I avoid locks? Can I read and write at the same time from two different threads? Thanks.
For those who are working with Android API:
Locking in SQLite is done on the file level, which guarantees locking of changes from different threads and connections. Thus multiple threads can read the database, but only one can write to it.
More on locking in SQLite can be read at SQLite documentation but we are most interested in the API provided by OS Android.
Writing from two concurrent threads can be done both from a single and from multiple database connections. Since only one thread can write to the database, there are two variants:
If you write from two threads over one connection, one thread will wait for the other to finish writing.
If you write from two threads over different connections, you will get an error: your data will not be written to the database and the application will be interrupted with a SQLiteDatabaseLockedException. It becomes evident that the application should always have only one copy of SQLiteOpenHelper (just one open connection), otherwise SQLiteDatabaseLockedException can occur at any moment.
Different Connections At a Single SQLiteOpenHelper
Everyone is aware that SQLiteOpenHelper has 2 methods providing access to the database, getReadableDatabase() and getWritableDatabase(), to read and write data respectively. However, in most cases there is one real connection. Moreover, it is one and the same object:
SQLiteOpenHelper.getReadableDatabase()==SQLiteOpenHelper.getWritableDatabase()
It means that there is no difference in which of the two methods the data is read through. However there is another, more important undocumented issue: the class SQLiteDatabase contains its own lock, the variable mLock. It locks writes at the level of the SQLiteDatabase object, and since there is only one copy of SQLiteDatabase for both reading and writing, reads are blocked as well. This is most prominently visible when writing a large volume of data in a transaction.
Let's consider an example of an application that should download a large volume of data (approx. 7000 rows containing BLOBs) in the background on first launch and save it to the database. If the data is saved inside one transaction, saving takes approx. 45 seconds, but the user cannot use the application since all reading queries are blocked. If the data is saved in small portions, the update process drags out for a rather lengthy period of time (10-15 minutes), but the user can use the application without any restrictions and inconvenience. "A double-edged sword" -- either fast or convenient.
Google has already fixed a part of issues related to SQLiteDatabase functionality as the following methods have been added:
beginTransactionNonExclusive() – creates a transaction in the “IMMEDIATE mode”.
yieldIfContendedSafely() – temporarily releases the transaction in order to allow completion of tasks by other threads.
isDatabaseIntegrityOk() – checks for database integrity
Please read in more details in the documentation.
However for the older versions of Android this functionality is required as well.
The Solution
First, locking should be turned off to allow reading the data in any situation.
SQLiteDatabase.setLockingEnabled(false);
cancels the internal query locking – at the logic level of the java class (not related to locking in terms of SQLite)
SQLiteDatabase.execSQL(“PRAGMA read_uncommitted = true;”);
Allows reading data from the cache. In fact, it changes the isolation level. This parameter has to be set anew for each connection. If there are a number of connections, it influences only the connection that executes this command.
SQLiteDatabase.execSQL(“PRAGMA synchronous=OFF”);
Changes the method of writing to the database – without "synchronization". When this option is activated, the database can be damaged if the system unexpectedly fails or the power supply is cut. However, according to the SQLite documentation, some operations are executed up to 50 times faster with synchronization off.
Unfortunately not all PRAGMAs are supported in Android: e.g. "PRAGMA locking_mode = NORMAL" and "PRAGMA journal_mode = OFF" and some others are not supported. Attempting to execute an unsupported PRAGMA makes the application fail.
In the documentation for the method setLockingEnabled it is said that this method is recommended only if you are sure that all the work with the database is done from a single thread. We should guarantee that only one transaction is held at a time. Also, instead of the default (exclusive) transactions, immediate transactions should be used. In older versions of Android (below API 11) there is no option to create an immediate transaction through the java wrapper, but SQLite itself supports this functionality. To initialize a transaction in immediate mode, the following SQLite query should be executed directly against the database, for example through the method execSQL:
SQLiteDatabase.execSQL(“begin immediate transaction”);
Since the transaction is initialized by a direct query, it should be finished the same way:
SQLiteDatabase.execSQL(“commit transaction”);
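This begin/commit pair behaves the same in any SQLite binding; with Python's built-in sqlite3 module one can see that an immediate transaction blocks other writers but, unlike an exclusive one, still lets them read (file name and table are made up):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "imm.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE t (x)")
writer.execute("INSERT INTO t VALUES (1)")

writer.execute("begin immediate transaction")  # reserves the write lock
writer.execute("INSERT INTO t VALUES (2)")     # not yet committed

other = sqlite3.connect(path, timeout=0)
rows = other.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # reads OK
try:
    other.execute("INSERT INTO t VALUES (3)")  # second writer: refused
    write_ok = True
except sqlite3.OperationalError:               # "database is locked"
    write_ok = False

writer.execute("commit transaction")
```

The reader sees only committed data (one row) and is never blocked, while the competing write fails immediately.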
Then the only thing left to implement is a TransactionManager, which will initiate and finish transactions of the required type. The purpose of the TransactionManager is to guarantee that all queries for changes (insert, update, delete, DDL queries) originate from the same thread.
Hope this helps future visitors!
Not specific to SQLite:
1) Write your code to gracefully handle the situation where you get a locking conflict at the application level, even if you wrote your code so that this is 'impossible'. Use transactional retries (i.e. SQLITE_LOCKED could be one of many codes that you interpret as "try again" or "wait and try again"), and coordinate this with application-level code. If you think about it, getting a SQLITE_LOCKED is better than simply having the attempt hang because it's locked - because you can go do something else.
2) Acquire locks. But you have to be careful if you need to acquire more than one. For each transaction at the application level, acquire all of the resources (locks) you will need in a consistent (ie: alphabetical?) order to prevent deadlocks when locks get acquired in the database. Sometimes you can ignore this if the database will reliably and quickly detect the deadlocks and throw exceptions; in other systems it may just hang without detecting the deadlock - making it absolutely necessary to take the effort to acquire the locks correctly.
Besides the facts of life with locking, you should try to design the data and in-memory structures with concurrent merging and rolling back planned in from the beginning. If you can design the data such that the outcome of a data race gives a good result for all orders, then you don't have to deal with locks in that case. A good example is incrementing a counter without knowing its current value, rather than reading the value and submitting a new value in the update. It's similar for appending to a set (i.e. adding a row, such that it doesn't matter in which order the row inserts happened).
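The counter example in concrete terms: a relative UPDATE commutes, so no increment can be lost, while the read-then-write version can lose one when two callers interleave (sketch with Python's built-in sqlite3 module; table and column names are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, n INTEGER)")
db.execute("INSERT INTO counters VALUES ('hits', 0)")

def bump(conn):
    # Order-independent: the database applies the increment atomically,
    # without the client ever reading the current value.
    conn.execute("UPDATE counters SET n = n + 1 WHERE name = 'hits'")

def bump_racy(conn, stale_n):
    # Read-modify-write: a caller writing back a value it read earlier
    # silently overwrites increments that happened in between.
    conn.execute("UPDATE counters SET n = ? WHERE name = 'hits'",
                 (stale_n + 1,))

bump(db)
bump(db)                  # n is now 2, regardless of ordering
bump_racy(db, stale_n=0)  # a caller that read n=0 before the two bumps
count = db.execute(
    "SELECT n FROM counters WHERE name = 'hits'").fetchone()[0]
```

After the stale write, the counter reads 1: two increments were lost, which the relative UPDATE can never do.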
A good system is supposed to transactionally move from one valid state to the next, and you can think of exceptions (even in in-memory code) as aborting an attempt to move to the next state; with the option to ignore or retry.
You're fine with multithreading. The page you link lists what you cannot do while you're looping on the results of your SELECT (i.e. your select is active/pending) in the same thread.
