How to determine which window was moved/resized from WM_EXITSIZEMOVE message? - autoit

I've been processing all the numerous individual WM_MOVE, WM_SIZING, and WM_SIZE messages for a multiple-GUI/window application, but I've just learned of the WM_EXITSIZEMOVE message and would like to use it if it lets me avoid all those intermediate messages. But since no parameters are provided by that message, how can I determine which GUI/window has been moved or resized? Or is my thinking incorrect?

All of the messages you reference are only sent to the window that was affected by that operation, which is why there are no parameters provided that identify the window. If the window receives it, it is the window that was just moved, sized, or is exiting size/move.
In other words, if you have windows A and B, and B is resized or moved, then it will receive the messages and window A will not.
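To make that concrete, here is a minimal Win32-style sketch in C++ (the handler name is illustrative). The same idea carries over to AutoIt: the handler you register with GUIRegisterMsg receives the window handle as its first parameter, and that handle is what identifies the GUI.

#include <windows.h>

// Illustrative window procedure: wParam and lParam carry no data for
// WM_EXITSIZEMOVE, but hwnd already identifies the window that was
// just moved or resized.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_EXITSIZEMOVE)
    {
        RECT rc;
        GetWindowRect(hwnd, &rc);  // final position and size after the drag
        // ... react to the move/resize of *this* window here ...
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}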

Related

Vulkan: trouble understanding cycling of framebuffers

In Vulkan,
A semaphore(A) and a fence(X) can be passed to vkAcquireNextImageKHR. That semaphore(A) is subsequently passed to vkQueueSubmit, to wait until the image is released by the Presentation Engine (PE). A fence(Y) can also be passed to vkQueueSubmit. Client code can check when the submission has completed by checking fence(Y).
When fence(Y) signals, this means the PE can display the image.
My question:
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR? To me, it doesn't seem that it would be by checking fence(X), because that is for client code to know when the image can be written to by vkQueueSubmit, isn't it? After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done. Or, can the same fence(X) be used to query the image availability after the call to vkQueuePresentKHR?
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
The reason this is causing trouble for me is that in an asynchronous, 60fps, triple-buffered app (throwaway learning code), things get out of whack like this:
1. Send an initial framebuffer to the PE. This framebuffer is now unavailable for 16 milliseconds.
2. Within the 16 ms, acquire a second image/framebuffer and submit commands, but don't present.
3. Do the same as #2 for a third image. We submit it before the 16 ms are up.
4. 16 ms have gone by, so we vkQueuePresentKHR the second image.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired three images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR?
You usually do not need to know.
Either you need to acquire a new VkImage, or you don't. Whether the PE has finished or not doesn't even enter into that decision.
The only reason to want to know is if you want to measure presentation times. There's a special extension for that: VK_GOOGLE_display_timing.
After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done.
Well, you can reuse the fence. But the implementation stops using it as soon as it is signaled and will never change its state again, if that's what you are asking (so you are free to destroy it with vkDestroyFence, reset it with vkResetFences, or do other things with it).
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
Hopefully I cover it below, but I am not precisely sure what the problem here is. I don't know how to eat soup without a spoon either. Simply use a spoon, I mean, vkAcquireNextImageKHR.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired three images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How is that any different from acquiring images #2 and #3?
Yes, you may have already acquired all the images the swapchain has to offer, or the PE is "not ready" to give away an image even if it has two.
In the first case the spec advises against calling vkAcquireNextImageKHR with a timeout of UINT64_MAX. It is a simple matter of counting successful vkAcquireNextImageKHR calls against vkQueuePresentKHR calls. One way is to simply do one vkAcquireNextImageKHR and then one vkQueuePresentKHR.
In the second case you can simply call vkAcquireNextImageKHR and you will eventually get the image.
In order to use a swapchain image, you need to acquire it. After that, the actual availability of the image for rendering purposes is signaled by the semaphore (A) or the fence (X). You can either use the semaphore (A) during the submission as a wait semaphore, or wait on the CPU for the fence (X) and submit after that. For performance reasons the semaphore is the preferred way.
Now when you present an image, you give it back to the Presentation Engine. From then on you cannot use that image for any purpose. There is no way to check when that image is available again so you can render into it once more. If you want to render into a swapchain image again, you need to acquire another image, and during this operation you once again provide a semaphore or a fence (probably different from the ones you provided when you previously acquired a swapchain image). There is no other way to check when an image is available again than by calling the vkAcquireNextImageKHR() function.
And when you want to implement triple buffering, you should just select the appropriate presentation mode (mailbox mode is the closest match). You shouldn't wait for a specific time before you present an image; just present it when you are done rendering into it. Your synchronization should be based entirely on the acquire and present commands and on the semaphores or fences provided during these operations and during submission. The appropriate present mode will do the rest. A detailed explanation of the different present modes is available in Intel's tutorial.
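To tie the above together, here is a minimal sketch of the per-frame cycle in C++. It assumes the device, swapchain, queue, recorded commandBuffers, and per-frame imageAvailable/renderFinished semaphores and inFlight fences were created elsewhere; all names are illustrative, and error handling is omitted.

// Sketch only: VkResult checks and swapchain recreation omitted.
uint32_t frame = 0;
while (running) {
    // Don't reuse this frame's sync objects until its previous submission finished.
    vkWaitForFences(device, 1, &inFlight[frame], VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &inFlight[frame]);

    // The only sanctioned way to get a usable image back: acquire again.
    uint32_t imageIndex;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          imageAvailable[frame], VK_NULL_HANDLE, &imageIndex);

    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    submit.waitSemaphoreCount   = 1;
    submit.pWaitSemaphores      = &imageAvailable[frame]; // GPU waits until the PE releases the image
    submit.pWaitDstStageMask    = &waitStage;
    submit.commandBufferCount   = 1;
    submit.pCommandBuffers      = &commandBuffers[imageIndex];
    submit.signalSemaphoreCount = 1;
    submit.pSignalSemaphores    = &renderFinished[frame];
    vkQueueSubmit(queue, 1, &submit, inFlight[frame]);

    VkPresentInfoKHR present{VK_STRUCTURE_TYPE_PRESENT_INFO_KHR};
    present.waitSemaphoreCount = 1;
    present.pWaitSemaphores    = &renderFinished[frame];  // present only after rendering is done
    present.swapchainCount     = 1;
    present.pSwapchains        = &swapchain;
    present.pImageIndices      = &imageIndex;
    vkQueuePresentKHR(queue, &present);

    frame = (frame + 1) % MAX_FRAMES_IN_FLIGHT;
}

Note there is no explicit 16 ms timer anywhere: the pacing falls out of the acquire call and the chosen present mode.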

Qt: advantage of creating child window only once and then just show() and hide()

I'm building an application using Qt (on Linux). My application basically consists of two windows. To keep things simple, let's just call them "A" and "B".
A is the application's "main window", some kind of "idle" window which is displayed (maximized) when the app has "nothing to do". It contains a lot of pushbuttons.
If the user presses one of those PBs, Window B should be displayed (maximized too). On B, the user does some work, leaves ("closes") B and Window A is redisplayed.
Now, since window B needs a lot of data, some of it even requested from a server over the network, I wonder if it would be a good idea to create window B only once (at the end of window A's ctor), and later just show() it when needed and hide() it when the work is done?
Maybe one of you Qt gurus out there can give me some advice?
Thanks a lot!
Norbert
If you want to keep the data of window B, showing and hiding is the way to go. If you want to show a clean dialog every time a user requests window B, you should create and destroy it every time.
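A minimal sketch of the keep-alive approach, assuming hypothetical WindowA/WindowB classes; the expensive setup in B's constructor then runs exactly once:

#include <QApplication>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

// Hypothetical "work" window B: created once, then only shown/hidden,
// so whatever data it loaded (e.g. from the network) is kept alive.
class WindowB : public QWidget {
public:
    WindowB() {
        auto *layout = new QVBoxLayout(this);
        auto *done = new QPushButton("Done", this);
        layout->addWidget(done);
        connect(done, &QPushButton::clicked, this, &QWidget::hide);
        // ... expensive setup / network requests happen once, here ...
    }
};

class WindowA : public QWidget {
public:
    WindowA() {
        auto *layout = new QVBoxLayout(this);
        auto *open = new QPushButton("Open B", this);
        layout->addWidget(open);
        // B is built once, together with A, and only shown on demand.
        connect(open, &QPushButton::clicked, &b, &QWidget::showMaximized);
    }
private:
    WindowB b;  // lives as long as A does
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    WindowA a;
    a.showMaximized();
    return app.exec();
}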

What are the differences between event and signal in Qt

It is hard for me to understand the difference between signals and events in Qt. Could someone explain?
An event is a message encapsulated in a class (QEvent) which is processed in an event loop and dispatched to a recipient that can either accept the message or pass it along to others to process. They are usually created in response to external system events like mouse clicks.
Signals and Slots are a convenient way for QObjects to communicate with one another and are more similar to callback functions. In most circumstances, when a "signal" is emitted, any slot function connected to it is called directly. The exception is when signals and slots cross thread boundaries. In this case, the signal will essentially be converted into an event.
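For illustration, here is the classic minimal signal/slot example in C++, with a hypothetical Counter class (saved as main.cpp so the trailing moc include works with qmake/CMake automoc):

#include <QCoreApplication>
#include <QDebug>
#include <QObject>

// Hypothetical sender/receiver pair to show the mechanism.
class Counter : public QObject {
    Q_OBJECT
public:
    int value() const { return m_value; }
public slots:
    void setValue(int v) {
        if (v != m_value) {
            m_value = v;
            emit valueChanged(v);  // notify whoever is connected, if anyone
        }
    }
signals:
    void valueChanged(int newValue);
private:
    int m_value = 0;
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    Counter a, b;
    // Same thread, so the default connection is direct: b.setValue() runs
    // synchronously inside a's emit. Across threads it would be queued,
    // i.e. delivered as an event through b's thread's event loop.
    QObject::connect(&a, &Counter::valueChanged, &b, &Counter::setValue);
    a.setValue(12);
    qDebug() << "b's value is" << b.value();  // prints 12
    return 0;
}

#include "main.moc"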
Events are something that happened to or within an object. In general, you would treat them within the object's own class code.
Signals are emitted by an object. The object is basically notifying other objects that something happened. Other objects might do something as a result or not, but this is not the emitter's job to deal with it.
My impression of the difference is as follows:
Say you have a server device, running an infinite loop, listening to some external client Events and reacting to them by executing some code.
(It can be a CPU listening to interrupts from devices, client-side JavaScript browser code listening for user clicks, or server-side website code listening for users requesting web pages or data.)
Or it can be your Qt application, running its main loop.
I'll be explaining with the assumption that you're running Qt on Linux with an X-server used for drawing.
I can distinguish two main differences, although the second one is somewhat disputable:
Events represent your hardware and are a small finite set. Signals represent your Widgets-layer logic and can be arbitrarily complex and numerous.
Events are low-level messages, coming to you from the client. The set of Events is a strictly limited set (~20 different Event types), determined by hardware (e.g. mouse click/doubleclick/press/release, mouse move, keyboard key pressed/released/held etc.), and specified in the protocol of interaction (e.g. X protocol) between application and user.
E.g. at the time the X protocol was created there were no multitouch gestures; there were only mouse and keyboard, so the X protocol won't understand your gestures and send them to the application, it will just interpret them as mouse clicks. That is why extensions to the X protocol are introduced over time.
X events know nothing about widgets; widgets exist only in Qt. X events know only about X windows, which are very basic rectangles that your widgets consist of. Your Qt events are just a thin wrapper around X events/Windows events/Mac events, providing a compatibility layer between the different operating systems' native events for the convenience of widget-level logic authors.
Widget-level logic deals with signals, because they carry the widget-level meaning of your actions. Moreover, one signal can be fired by different events, e.g. either a mouse click on the "Save" menu button or a keyboard shortcut such as Ctrl-S.
Abstractly speaking (this is not exactly about Qt!), Events are asynchronous in their nature, while Signals (or hooks in other terms) are synchronous.
Say you have a function foo() that can either fire a signal or emit an event.
If it fires a signal, the connected slots are executed synchronously, in the same thread as the function that caused them, right at the point of the emit.
On the other hand, if it emits an event, the event is posted to the main loop, and it is up to the main loop when it delivers that event to the receiving side and what happens next.
Thus two consecutive events may even get delivered in reversed order, while two consecutively fired signals remain consecutive.
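In Qt terms, that contrast looks roughly like this (a sketch with a hypothetical Receiver class, not production code):

#include <QCoreApplication>
#include <QDebug>
#include <QEvent>

// Hypothetical receiver: customEvent() is called for posted QEvent::User events.
class Receiver : public QObject {
protected:
    void customEvent(QEvent *e) override {
        qDebug() << "event delivered, type" << e->type();  // runs from the event loop
    }
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    Receiver r;

    // Asynchronous: postEvent() only enqueues; delivery happens whenever
    // the event loop gets around to it.
    QCoreApplication::postEvent(&r, new QEvent(QEvent::User));
    qDebug() << "posted, but not yet delivered";

    // A directly connected signal, by contrast, would have run its slots
    // before the qDebug() line above was even reached.
    QCoreApplication::processEvents();  // now "event delivered" prints
    return 0;
}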
Though, the terminology is not strict. "Signals" in Unix, as a means of interprocess communication, would better be called events, because they are asynchronous: you raise a signal in one process and never know when the scheduler is going to switch to the receiving process and execute the signal handler.
P.S. Please forgive me, if some of my examples are not absolutely correct in terms of letter. They are still good in terms of spirit.
An event is passed directly to an event handler method of a class. They are available for you to overload in your subclasses and choose how to handle the event differently. Events also pass up the chain from child to parent until someone handles it or it falls off the end.
Signals on the other hand are openly emitted and any other entity can opt to connect and listen to them. They are handled directly when emitter and receiver live in the same thread; across threads they pass through the event loop and are processed from a queue.
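And the event side, for comparison: handling an event means overriding the handler in a subclass, and an unhandled event propagates to the parent widget (MyWidget here is hypothetical):

#include <QApplication>
#include <QKeyEvent>
#include <QWidget>

// Sketch: a subclass chooses how to handle one event type.
class MyWidget : public QWidget {
protected:
    void keyPressEvent(QKeyEvent *event) override {
        if (event->key() == Qt::Key_Escape) {
            close();  // handled here: the event stops with us
        } else {
            // Not ours: the base implementation ignores the event,
            // so Qt offers it to the parent widget next.
            QWidget::keyPressEvent(event);
        }
    }
};

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    MyWidget w;
    w.show();
    return app.exec();
}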

Flex Remote Object multiple parallel calls

I'm on Flash Builder 4.5, I'm using a remote object with amfphp, and when I call two methods (method1 and method2) at the same time, the response of method2 always arrives after method1's response, even though method2 is much faster to return its result.
Here's the scenario:
I set up a remote object which refers to a remote PHP class "Newsletter" containing the sendNewsletter and getProgress methods.
Here's what the two methods do:
- sendNewsletter() reads the email archive and sends the newsletter. After each email is sent, it writes a log entry into the database.
- getProgress() reads the log written by sendNewsletter, counts how many emails have been sent, compares that with the total number of emails to be sent, and returns the progress percentage.
From the Flex interface the user selects a newsletter to be sent and clicks a "send" button, which calls a function that calls sendNewsletter() and then starts a loop of calls to getProgress(): each time getProgress() returns something, it calls setProgress(), which updates a progress bar and calls getProgress() again, until the progress percentage reaches 100%.
So right after I call sendNewsletter() I call getProgress() on the same remoteClass().
sendNewsletter() can take several minutes to complete (in my tests sending 4 emails takes about 4 seconds, so I think sending thousands of emails will take much longer!!), and the trouble I'm encountering is that getProgress()'s result arrives only after sendNewsletter() has concluded its execution, while what I would like to achieve is:
- call sendNewsletter()
- while sendNewsletter() does its stuff, call getProgress() several times in order to get the progress percentage:
What I've got now:
call to sendNewsletter()----------------------->response
call to getProgress()------------------------------------->response after sendNewsletter()
What I want to achieve:
sendNewsletter()------------------------------------------------------------------>response()
getProgress()--->response, getProgress() again--->response-->getProgress()-->response-->etc...
I read many posts on how to work around this problem, but no solution worked for me.
I tried to "emulate" two different channels by creating two remote objects with the endpoint set once to gateway.php?parallel=0 and once to gateway.php?parallel=1, but Flash Builder still sends everything in one big request and gets the response in one big HTTP packet (I need two different packets, since sendNewsletter takes ages to complete compared to getProgress).
I also tried delaying the call to getProgress() after sendNewsletter() with a 500 ms Timer, and Flash Builder does make two different calls (I can see them in Firebug), but the getProgress() call still only gets its response after sendNewsletter() anyway.
I also tried to call sendNewsletter this way:
this.myNewsletter.getOperation("sendNewsletter").send(idNewsletter)
this.myNewsletter.getOperation("sendNewsletter").cancel()
in order to make Flash Builder forget about the response, but no luck!!!
So far the only workaround I've found is creating a plain HTTPService which refers to a PHP script that instantiates the Newsletter class and calls the getProgress method.
By using two different channels I can call the getProgress HTTPService while sendNewsletter is being executed. It works, but I don't like it, and I don't want to create an HTTPService for each method I need to call in the background, so I want to achieve this with remote objects only.
Has anyone addressed the same problem?
You Flash builder guru, I know you're around, please help me!!!!!
Thanks in advance!!!
Bye,
Luke
P.S.
sorry if this post is a little bit long, but the situation is quite complicated.
I don't know exactly what you want to do..
But when working with a RemoteObject, it is a best practice to use a Responder to handle the responses that arrive in parallel from a single RemoteObject.
So try to add a responder to your service calls, like:
remoteObject.methodCall().addResponder(new YourResponder(resultEvent, faultEvent));
So when a specific response comes back, it will be handled by its own custom responder.
That way you will be able to handle each response separately.

How to debug unresponsive async NSURLConnection

and by unresponsive I mean that after the first three successful connections, the fourth connection is initiated and nothing happens: no crashes, no delegate functions called, no data sent out (according to Wireshark)... it just sits there?!
I've been beating my head against this for a day and a half...
iOS 4.3.3
latest Xcode; it happens the same way on a real device as in the simulator.
I've read all the NSURLConnection posts in the Developer Forums... I'm at a loss.
From my application delegate, I kick off an async NSURLConnection according to Apple docs, using the App Delegate as the delegate for the NSURLConnection.
From my applicationDidFinishLaunching... I trigger the initial two queries, which successfully return XML that I then pass off to an NSOperationQueue to be parsed.
I can even loop, repeating these queries with no issues; I repeated them 10 times and they worked just fine.
The next series of five queries is triggered via user input. The first query runs successfully and returns the correct response; then the next query is created and, when used to create an NSURLConnection (just like all the others), it just sits there?!
The normal delegate calls I see on all the other queries are never seen.
Nothing goes over the wire according to Wireshark?
I've reordered the queries, and regardless of which query comes first, the next one fails (fails as in does nothing: no errors or aborts, it just sits there).
It's obviously in my code, but I am blind to it.
So what other tools can I use to debug the async NSURLConnection... how can I tell what it's doing, if anything?
Any suggestions for debugging an NSURLConnection, or other ways to accomplish the same thing an NSURLConnection does?
Thanks for any help you can offer...
OK tracked it down...
I was watching the stack dump in each thread as I was about to kick off each NSURLConnection; the first three were all in the main thread as expected... the fourth one ended up in a new thread?! In one of my NSOperation threads?!
As it turns out, I had inadvertently added logic that started one of my NSURLConnections from the last NSOperation callback, didFinishParsing:, so the NSURLConnection was started asynchronously on that worker thread, and then the NSOperation terminated... >.< Since an async NSURLConnection delivers its delegate callbacks via the run loop of the thread that started it, and that thread was gone, the callbacks never fired.
So I'll move the NSURLConnection out of didFinishParsing: so it is started from the main thread and stays on the main run loop, and I should be good!
