In the KDE3 game ksirtet (a Tetris clone), when playing against the computer the human player cannot move the piece left or right. I'm trying to fix it but cannot debug it in gdb. After the line "kapp->exec()" gdb stops responding: the game runs and I cannot enter any command into gdb to see what's going on. So the question is about debugging the KDE event loop, and any help would be much appreciated.
Generally speaking, you wouldn't want to debug into the event loop itself unless necessary. That said, you probably want to set a few breakpoints at places of interest, especially where you think the code should be running after the key press. If you try to step through the event loop code from the beginning, you'll run into problems trying to interact with the program you want to debug.
Additionally, if I recall correctly, you can press Ctrl-C in gdb, and it will interrupt the program at its current point of execution and return control to you. If you really want to see what is going on, try queueing up some events in the game (mash a bunch of keys quickly), then interrupt the program in gdb and step through what it does in response to those events. You'll have to be very quick, though, as event loop processing on a modern computer is very fast.
Related
I would like to find some programming errors in a GUI application with a QGraphicsView. Testing manually requires a mouse click on the QGraphicsView component. Because my clicks are imprecise, slightly different pixel coordinates lead to a completely different execution path with many changed variable values.
In addition, clicking manually for debugging purposes is rather time-consuming due to nasty errors (infinite loops, SIGSEGVs, ...).
How can I automate a task like mouse clicking, similar to unit tests (with QTest), while still being able to debug (control the program flow through breakpoints, ...)? Many thanks in advance.
EDIT: I am able to simulate the required mouse clicks in unit tests. I am not interested in running a hopefully correct subprogram to completion and getting the values of some assertions as the result, but in automating a sequence of mouse clicks so that the program runs up to the first breakpoint (not to its end) and can then be stepped through carefully by hand. Perhaps I've missed the link.
The problem is solved. The mouse clicks led to the execution of an algorithm, and I was able to call that algorithm with fixed arguments directly during window initialization. This way I could intervene at the point of interest and did not need to emulate mouse clicks anymore.
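For anyone landing here with the same problem, here is a minimal sketch of that approach; the class and function names (View, runAlgorithm) and the fixed coordinates are placeholders, not taken from the original code:

    #include <QApplication>
    #include <QGraphicsScene>
    #include <QGraphicsView>
    #include <QMouseEvent>
    #include <QPointF>
    #include <QTimer>
    #include <QDebug>

    class View : public QGraphicsView {
    public:
        explicit View(QGraphicsScene *scene) : QGraphicsView(scene) {
            // Defer the call until the event loop runs, so the widget is fully set up.
            QTimer::singleShot(0, [this]() {
                runAlgorithm(QPointF(42.0, 17.0));   // fixed, reproducible input
            });
        }

        void runAlgorithm(const QPointF &scenePos) {
            // Put a breakpoint here; it is reached without any manual click.
            qDebug() << "algorithm called with" << scenePos;
        }

    protected:
        void mousePressEvent(QMouseEvent *event) override {
            runAlgorithm(mapToScene(event->pos()));  // the normal, manual path
            QGraphicsView::mousePressEvent(event);
        }
    };

    int main(int argc, char **argv) {
        QApplication app(argc, argv);
        QGraphicsScene scene;
        View view(&scene);
        view.show();
        return app.exec();
    }

A breakpoint inside runAlgorithm() is then hit right after startup, and the same coordinates give the same execution path on every run.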
In his Qt event loop, networking and I/O API talk, Thiago Macieira mentions that nesting QEventLoops should be avoided:
QEventLoop is for nesting event loops... Avoid it if you can, because it creates a number of problems: things might reenter, new activations of sockets or timers that you were not expecting.
Can anybody expand on what he is referring to? I maintain a lot of code that uses modal dialogs which internally nest a new event loop when exec() is called so I'm very interested in knowing what kind of problems this may lead to.
A nested event loop costs you 1-2 KB of stack. That is roughly 5% of the L1 data cache on a typical CPU with a 32 KB L1 data cache, give or take.
It has the capacity to reenter any code already on the call stack. There are no guarantees that any of that code was designed to be reentrant. I'm talking about your code, not Qt's code. It can reenter code that has started this event loop, and unless you explicitly control this recursion, there are no guarantees that you won't eventually run out of stack space.
In current Qt, there are two places where, due to long-standing API bugs or platform inadequacies, you have to use a nested exec: QDrag and platform file dialogs (on some platforms). You simply don't need it anywhere else. You do not need a nested event loop for non-platform modal dialogs.
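For completeness, here is a rough sketch of the non-nested pattern for an application-level modal dialog: QDialog::open() shows it window-modally and returns immediately, and the result arrives via the finished(int) signal instead of a blocking exec(). The dialog contents and names are made up for illustration:

    #include <QApplication>
    #include <QDialog>
    #include <QPushButton>
    #include <QVBoxLayout>
    #include <QDebug>

    static void showSettings(QWidget *parent)
    {
        auto *dialog = new QDialog(parent);
        dialog->setAttribute(Qt::WA_DeleteOnClose);       // clean up on dismissal

        auto *layout = new QVBoxLayout(dialog);
        auto *ok = new QPushButton("OK", dialog);
        layout->addWidget(ok);
        QObject::connect(ok, &QPushButton::clicked, dialog, &QDialog::accept);

        QObject::connect(dialog, &QDialog::finished, parent, [](int result) {
            if (result == QDialog::Accepted)
                qDebug() << "settings accepted";          // apply them here
        });

        dialog->open();   // window-modal, returns immediately - no nested exec()
    }

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);
        QPushButton button("Open settings");
        QObject::connect(&button, &QPushButton::clicked, &button, [&]() {
            showSettings(&button);
        });
        button.show();
        return app.exec();
    }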
Reentering the event loop is usually caused by writing pseudo-synchronous code, where one laments the supposed lack of yield() (co_yield and co_await have landed in C++ now!), hides one's head in the sand, and uses exec() instead. Such code typically ends up as barely palatable spaghetti, and it is unnecessary.
For modern C++, using C++20 coroutines is worthwhile; there are some Qt-based experiments around that are easy to build on.
There are Qt-native implementations of stackful coroutines: Skycoder42/QtCoroutings, a recent project, and the older ckamm/qt-coroutine. I'm not sure how fresh the latter code is; it looks like it all worked at some point.
Writing asynchronous code cleanly without coroutines is usually accomplished through state machines; see this answer for an example, and the QP framework for an implementation different from QStateMachine.
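As a small illustration of the state-machine style (not the linked answer's code; the timer here just stands in for any asynchronous operation, such as a network reply), the "waiting" becomes a state and the operation's completion signal drives the transition, so only the application's single event loop ever runs:

    #include <QCoreApplication>
    #include <QFinalState>
    #include <QState>
    #include <QStateMachine>
    #include <QTimer>
    #include <QDebug>

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        QTimer timer;               // stands in for any asynchronous operation
        timer.setSingleShot(true);

        QStateMachine machine;
        auto *waiting = new QState(&machine);
        auto *done = new QFinalState(&machine);
        waiting->addTransition(&timer, SIGNAL(timeout()), done);
        machine.setInitialState(waiting);

        QObject::connect(waiting, &QState::entered, [&]() {
            qDebug() << "waiting for the asynchronous operation...";
            timer.start(1000);
        });
        QObject::connect(&machine, &QStateMachine::finished, [&]() {
            qDebug() << "done";
            app.quit();
        });

        machine.start();            // runs inside the one and only event loop
        return app.exec();
    }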
Personal anecdote: I couldn't wait for C++ coroutines to become production-ready, and I now write asynchronous communication code in golang, and statically link that into a Qt application. Works great, the garbage collector is unnoticeable, and the code is way easier to read and write than C++ with coroutines. I had a lot of code written using C++ coroutines TS, but moved it all to golang and I don't regret it.
A nested event loop will lead to ordering inversion (at least on Qt 4).
Let's say you have the following sequence of things happening:
enqueued in outer loop: 1,2,3
processing 1 => spawn inner loop
enqueue 4 in inner loop
processing 4
exit inner loop
processing 2
So you see the processing order was: 1,4,2,3.
I speak from experience and this usually resulted in a crash in my code.
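The exact order depends on event types and priorities, but the basic inversion is easy to reproduce. In this minimal sketch, a 50 ms timer plays the role of "event 4" that quits the inner loop; handlers 2 and 3 are delivered while handler 1 is still on the stack, i.e. before 1 has finished:

    #include <QCoreApplication>
    #include <QEventLoop>
    #include <QTimer>
    #include <QDebug>

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        QTimer::singleShot(0, []() {
            qDebug() << "1: begin";
            QEventLoop inner;                                   // nested loop
            QTimer::singleShot(50, &inner, &QEventLoop::quit);  // "event 4"
            inner.exec();      // 2 and 3 are delivered in here, nested inside 1
            qDebug() << "1: end";
        });
        QTimer::singleShot(0, []() { qDebug() << "2"; });
        QTimer::singleShot(0, []() { qDebug() << "3"; });

        QTimer::singleShot(200, &app, []() { QCoreApplication::quit(); });
        return app.exec();
    }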
I am looking into various OS designs in the hopes of writing a simple multitasking OS for the DCPU-16. However, everything I read about implementation of preemptive multitasking is centered around interrupts. It sounds like in the era of 16-bit hardware and software, cooperative multitasking was more common, but that requires every program to be written with multitasking in mind.
Is there any way to implement preemptive multitasking on an interruptless architecture? All I can think of is an interpreter that would dynamically switch tasks, but that would come with a huge performance hit (possibly on the order of 10-20x or more, I imagine, if it had to decode every operation and never let anything run natively).
Preemptive multitasking is normally implemented by having interrupt routines post status changes/interesting events to a scheduler, which decides which tasks to suspend, and which new tasks to start/continue based on priority. However, other interesting events can occur when a running task makes a call to an OS routine, which may have the same effect.
But all that matters is that some event is noted somewhere and the scheduler decides who to run. So you can make all such event signalling/scheduling occur only on OS calls.
You can add extra calls to the scheduler at "convenient" points in the various tasks' application code to make your system switch more often. Whether it just switches, or uses some background information such as the elapsed time since the last call, is a scheduler detail.
Your system won't be as responsive as one driven by interrupts, but you've already given that up by choosing the CPU you did.
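Here is a compilable sketch of the policy side of this idea (all names invented; the actual context switch is left as a stub, since saving and restoring registers and stacks is platform-specific). The point is simply that schedule() is called from every OS entry point, so those calls are the only preemption points:

    #include <cstdio>

    struct Task { const char *name; /* saved registers/stack pointer would go here */ };

    static Task tasks[] = { {"shell"}, {"clock"}, {"spooler"} };
    static const int kTaskCount = 3;
    static int current = 0;
    static unsigned now = 0;            // stands in for a hardware tick counter
    static unsigned last_switch = 0;
    static const unsigned kQuantum = 10;

    static void context_switch(Task *from, Task *to)
    {
        // A real implementation would save 'from' and restore 'to' here.
        std::printf("switch %s -> %s at tick %u\n", from->name, to->name, now);
    }

    // Called from every OS call wrapper; this is the only place a switch can happen.
    static void schedule()
    {
        if (now - last_switch < kQuantum)
            return;                               // caller keeps its time slice
        int next = (current + 1) % kTaskCount;    // simple round robin
        last_switch = now;
        context_switch(&tasks[current], &tasks[next]);
        current = next;
    }

    // Example "OS call": do the work, then give the scheduler its chance.
    static void os_putchar(char c)
    {
        std::putchar(c);
        schedule();
    }

    int main()
    {
        for (int i = 0; i < 60; ++i) {            // simulate 60 ticks of work
            ++now;
            os_putchar('.');
        }
        std::putchar('\n');
        return 0;
    }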
Actually, yes. The most effective method is to simply patch run-times in the loader. Kernel/daemon stuff can have custom patches for better responsiveness. Even better, if you have access to all the source, you can patch in the compiler.
The patch can consist of a distributed scheduler of sorts. Each program can be patched to use a very low-latency timer: on load it records the timer value, and on each return from the scheduler it resets it. A simplistic method would let the code simply do a check like

    if (timer - start_timer > quantum) yield_to_scheduler();

which doesn't cost too big a performance hit. The main trouble is finding good points to drop these checks in. Between every function call is a start, and detecting loops and inserting them there is primitive but effective if you really need to preempt responsively.
It's not perfect, but it'll work.
The main issue is making sure that reading the timer is low latency, so that the check is just a comparison and a branch. You also need to handle, in some way, errors in the code that cause, say, infinite loops. You can technically use a fairly simple hardware watchdog timer and assert a reset on the CPU without clearing any of the RAM; the RESET vector would point at an in-RAM routine that inspects and unwinds the stack back to the program call (thus crashing the program but preserving everything else). It's sort of a brute-force, if-all-else-fails way to crash the program. Or you could potentially use this to multitask, treating RESET as an interrupt, but that is much more difficult.
So... yes. It's possible but complicated, using techniques from JIT compilers and dynamic translators (emulators use them).
This is a bit of a muddled explanation, I know, but I am very tired. If it's not clear enough I can come back and clear it up tomorrow.
By the way, asserting reset on a CPU mid-program sounds crazy, but it is a time-honored and proven technique. Early versions of Windows even did it to run compatibility mode properly on, I think, 286s, because there was no way to switch back from protected mode to real mode without a reset. Other processors and OSes have done it too.
EDIT: So I did some research on what the DCPU is, haha. It's not a real CPU. I have no idea if you can assert reset in Notch's emulator, I would ask him. Handy technique, that is.
I think your assessment is correct. Preemptive multitasking occurs if the scheduler can interrupt (in the non-inflected, dictionary sense) a running task and switch to another autonomously. So there has to be some sort of actor that prompts the scheduler to action. If there are no interrupting devices (in the inflected, technical sense) then there's little you can do in general.
However, rather than switching to a full interpreter, one idea that occurs is just dynamically reprogramming supplied program code. So before entry into a process, the scheduler knows full process state, including what program counter value it's going to enter at. It can then scan forward from there, substituting, say, either the twentieth instruction code or the next jump instruction code that isn't immediately at the program counter with a jump back into the scheduler. When the process returns, the scheduler puts the original instruction back in. If it's a jump (conditional or otherwise) then it also effects the jump appropriately.
Of course, this scheme works only if the program code doesn't modify itself dynamically. And in that case you can preprocess it so that you know in advance where the jumps are, without a linear search. You could technically allow well-written self-modifying code if it were willing to nominate all addresses that may be modified, so that your scheduler's dynamic modifications can reliably avoid them.
You'd end up sort of running an interpreter, but only for jumps.
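A toy, compilable sketch of the mechanics on an invented one-word "ISA" (this is not DCPU-16 encoding, and the while loop merely stands in for native execution): before resuming a task, the scheduler saves a word some distance ahead and overwrites it with a trap; when control comes back, the original word is restored:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    enum Op : uint16_t { ADD = 1, TRAP = 0xFFFF };   // invented encodings

    struct Patch { unsigned addr; uint16_t saved; };

    static Patch plant_trap(std::vector<uint16_t> &mem, unsigned pc, unsigned lookahead)
    {
        unsigned addr = pc + lookahead;      // e.g. twenty instructions ahead
        Patch p{addr, mem[addr]};
        mem[addr] = TRAP;                    // control comes back to the scheduler here
        return p;
    }

    static void remove_trap(std::vector<uint16_t> &mem, const Patch &p)
    {
        mem[p.addr] = p.saved;               // put the original instruction back
    }

    int main()
    {
        std::vector<uint16_t> mem(64, ADD);  // pretend program: straight-line code
        unsigned pc = 0;

        Patch p = plant_trap(mem, pc, 20);

        while (mem[pc] != TRAP)              // stands in for running natively
            ++pc;

        remove_trap(mem, p);
        std::printf("task preempted at word %u, original word restored\n", pc);
        return 0;
    }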
Another way is to keep to small tasks driven by an event queue (like current GUI apps).
This is also cooperative, but it has the advantage of not needing OS calls: you just return from the task, and the queue goes on to the next one.
If you then need to continue a task, you pass the next "function" and a pointer to the data it needs to the task queue.
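A small sketch of what that looks like (invented names): each task is a short function plus the data it needs, it runs to completion, and if there is more work it posts a continuation back onto the queue:

    #include <cstdio>
    #include <deque>
    #include <functional>

    struct TaskQueue {
        std::deque<std::function<void()>> q;
        void post(std::function<void()> fn) { q.push_back(std::move(fn)); }
        void run() {
            while (!q.empty()) {
                auto task = std::move(q.front());
                q.pop_front();
                task();                 // runs to completion, then returns
            }
        }
    };

    static void count_down(TaskQueue &tq, int n)
    {
        std::printf("tick %d\n", n);
        if (n > 0)
            tq.post([&tq, n] { count_down(tq, n - 1); });   // the "next function"
    }

    int main()
    {
        TaskQueue tq;
        tq.post([&tq] { count_down(tq, 3); });
        tq.post([] { std::puts("another task interleaves here"); });
        tq.run();
        return 0;
    }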
I'm currently writing a programme which has a function to hash a number of files in the background. I've read the Qt4 documentation a number of times over and I still can't really figure out which threading option is best for this.
http://doc.qt.io/qt-5/thread-basics.html
There's really no need to update the GUI when each file is done; I just don't want to block the GUI, and I really only need a single signal/slot connection upon completion. I'm thinking of extending QThread for a hashing thread. Does this sound reasonable/right?
I have this article bookmarked, as it nicely illustrates the use of QThread and highlights some common misconceptions about it. Sample code is available and runs without blocking the GUI. The sample is hosted on RapidShare, but they seem to have implemented some sort of timed waiting period since I last used it.
This sounds like a good place to use the QtConcurrent::map() function. The map function can apply the same operation to a container of objects, in your case, files. Once you start the map function, you can create a QFutureWatcher and connect to its finished signal to be notified when all of the work is done.
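A rough sketch of that approach (the file names, the FileEntry type and the choice of SHA-1 are placeholders): the thread pool hashes every file via QtConcurrent::map(), and a single finished() signal fires when all of them are done, without ever blocking the GUI thread:

    #include <QCoreApplication>
    #include <QCryptographicHash>
    #include <QFile>
    #include <QFutureWatcher>
    #include <QList>
    #include <QtConcurrent>
    #include <QDebug>

    struct FileEntry {
        QString    path;
        QByteArray hash;                          // filled in by a pool thread
    };

    static void hashFile(FileEntry &entry)        // runs in a worker thread
    {
        QFile f(entry.path);
        if (f.open(QIODevice::ReadOnly))
            entry.hash = QCryptographicHash::hash(f.readAll(),
                                                  QCryptographicHash::Sha1);
    }

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        QList<FileEntry> files;
        files.append({"a.bin", {}});              // placeholder file names
        files.append({"b.bin", {}});

        QFutureWatcher<void> watcher;
        QObject::connect(&watcher, &QFutureWatcher<void>::finished, [&]() {
            for (const FileEntry &e : files)
                qDebug() << e.path << e.hash.toHex();
            app.quit();
        });

        watcher.setFuture(QtConcurrent::map(files, hashFile));  // returns immediately
        return app.exec();
    }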
Another quick question: I want to make a simple console-based game, nothing too fancy, just a weekend project to get more familiar with C. Basically I want to make Tetris, but I've run into one problem:
How do I let the game engine run and at the same time wait for input? Obviously cin or scanf is useless for me.
You're looking for a library such as ncurses.
Many Rogue-like games are written using ncurses or similar.
There are two ways to do it:
The first is to run two threads; one waits for input and updates state accordingly while the other runs the game.
The other (more common in game development) way is to write the game as one big loop that executes many times a second, updating game state, redrawing the screen, and checking for input.
But instead of blocking when you read key input, you check for pending keypresses, and if nothing has happened, you just continue through your loop. If you have multiple input sources (keyboard, network, etc.), they all get checked in the loop, one after another.
Yes, it's called polling. No, it's not efficient. But high-end games are usually all about pulling the maximum performance and framerates out of the computer, not running cool.
For added efficiency, you can optionally block with a timeout -- saying "wait for a keypress, but no longer than 300 milliseconds" so you can continue on with your loop.
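Here's a minimal sketch of that loop using ncurses (build with something like g++ game.cpp -lncurses). The 50 ms timeout is the "block with a timeout" variant; timeout(0) would give pure polling:

    #include <ncurses.h>

    int main()
    {
        initscr();
        cbreak();                 // deliver keys immediately, no line buffering
        noecho();
        keypad(stdscr, TRUE);     // decode arrow keys
        timeout(50);              // getch() waits at most 50 ms, then returns ERR

        int x = 10;
        bool running = true;
        while (running) {
            int ch = getch();     // does not block longer than the timeout
            switch (ch) {
            case KEY_LEFT:  --x; break;
            case KEY_RIGHT: ++x; break;
            case 'q':       running = false; break;
            case ERR:       /* no input this tick */ break;
            }

            // Game state update and redraw go here.
            erase();
            mvaddch(5, x, '#');
            refresh();
        }

        endwin();
        return 0;
    }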
select() comes to mind, but there are other ways of waiting or checking for input as well.
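For example, a sketch with select() and a timeout on stdin (POSIX; note that for per-key input you would also have to take the terminal out of canonical mode with tcsetattr(), which is omitted here):

    #include <sys/select.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        for (int tick = 0; tick < 100; ++tick) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(STDIN_FILENO, &readfds);

            timeval tv = {0, 100 * 1000};   // wait at most 100 ms for input
            int ready = select(STDIN_FILENO + 1, &readfds, nullptr, nullptr, &tv);

            if (ready > 0 && FD_ISSET(STDIN_FILENO, &readfds)) {
                char c;
                if (read(STDIN_FILENO, &c, 1) == 1)
                    std::printf("got key %c\n", c);
            }
            // Game state update and redraw would happen here every tick.
        }
        return 0;
    }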
You could work out how to change stdin to non-blocking, which would enable you to write something like Tetris, but the game might be more naturally expressed in an event-driven paradigm. Maybe it's a good excuse to learn Windows programming.
Anyway, if you want to go the console route and you are using the Microsoft compiler, you should have kbhit() available (via conio.h), which can tell you whether a call to fgetc on stdin would block.
Actually, I should mention that the MinGW gcc compiler 3.4.5 also supports kbhit().
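A sketch of the same idea with conio.h on Windows/MinGW: kbhit() reports whether a key is waiting, so getch() is only called when it will not block. A real game would also limit the frame rate instead of spinning flat out:

    #include <conio.h>
    #include <cstdio>

    int main()
    {
        for (;;) {
            if (kbhit()) {                 // true if a keypress is pending
                int ch = getch();          // safe: will not block now
                if (ch == 'q')
                    break;
                std::printf("key: %c\n", ch);
            }
            // Advance the game state and redraw here.
        }
        return 0;
    }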