AutoIt: Run next command after finishing the previous one

I'm currently writing a macro that performs a series of control sends and control clicks.
They must be done in the exact order.
At first I didn't have any sleep statements, so the script would just go through each command regardless of whether the previous one had finished (i.e., it would click SUBMIT before it had finished sending the input string).
So I thought maybe I'd just put in some sleep statements, but then I'd have to figure out how best to tune them, AND I'd have to account for other computers' speeds, because a slow computer would need longer delays between commands. That would be impossible to optimize for everyone.
I was hoping there was a way to force each line to run only after the previous one has finished?
EDIT: To be more specific, I want the ControlSend command to finish executing before I click the buttons.

Instead of ControlSend, use ControlSetText. This is immediate (like GuiEdit).

My solution: use functions from the user-defined library "GuiEdit" to directly set the value of the textbox. It appears to be immediate, thus allowing me to avoid having to wait for the keystrokes to be sent.
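For illustration, a minimal sketch of the ControlSetText approach (the window title and control IDs below are placeholders for your own GUI):
; Sketch only: set the input text directly, then click the button.
; "My Application", "Edit1" and "Button1" are placeholder names.
WinWait("My Application")                                            ; make sure the target window exists
ControlSetText("My Application", "", "Edit1", "the input string")    ; text is set immediately, no keystrokes to wait for
ControlClick("My Application", "", "Button1")                        ; now it is safe to click SUBMIT
Because ControlSetText writes the control's text in one call rather than simulating keystrokes, the click on the next line cannot run before the text is in place.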

Related

How can I make a program wait in OCaml?

I'm trying to make a Tetris game in OCaml and I need to have a piece move through the graphics screen at a certain speed.
I think that the best way to do it is to make a recursive function that draws the piece at the top of the screen, waits half a second or so, clears that piece from the screen and redraws it 50 pixels lower. I just don't know how to make the program wait. I think that you can do it using the Unix module, but I don't know how.
Let's assume you want to take a fairly simple approach, i.e., an approach that works without multi-threading.
Presumably when your game is running it spends virtually all its time waiting for input from the user, i.e., waiting for the user to press a key. Most likely, in fact, you're using a blocking read to do this. Since the user can take any amount of time before typing anything (up to minutes or years), this is incompatible with keeping the graphical part of the game up to date.
A very simple solution then is to have a timeout on the read operation. Instead of waiting indefinitely for the user to press a key, you can wait at most (say) 10 milliseconds.
To do this, you can use the Unix.select function. The simplest way to do this is to switch over to using a Unix file descriptor for your input rather than an OCaml channel. If you can't figure out how to make this work, you might come back to StackOverflow with a more specific question.
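As a rough sketch (assuming input arrives on standard input and the unix library is linked), such a poll could look like this:
(* Sketch: wait at most 10 ms for a byte on stdin; return it if one arrived,
   otherwise return None so the caller can go update the graphics instead. *)
let poll_key () =
  match Unix.select [Unix.stdin] [] [] 0.01 with
  | [], _, _ -> None                                   (* timed out, no input *)
  | _ ->
      let buf = Bytes.create 1 in
      let n = Unix.read Unix.stdin buf 0 1 in
      if n = 0 then None else Some (Bytes.get buf 0)
The game loop would call poll_key once per frame; when it returns None, you simply check whether enough time has elapsed and, if so, clear and redraw the piece one step lower.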

Awesome WM - os.execute vs awful.spawn

I was wondering if there are any difference between lua's
os.execute('<command>')
and awesome's
awful.spawn('<command>')
I noticed that lain suggests using os.execute; is there any reason for that, or is it a matter of personal taste?
Do not ever use os.execute or io.popen. They are blocking functions and cause very bad performance and a noticeable increase in input (mouse/keyboard) and client repaint latency. Using these functions is an anti-pattern and will only make everything worse.
See the warning in the documentation https://awesomewm.org/apidoc/libraries/awful.spawn.html
Now, to really answer the question, you have to understand that Awesome is single threaded. It means it only does a single thing at a time. If you call os.execute("sleep 10"), your mouse will keep moving, but your computer will otherwise be frozen for 10 seconds. For commands that execute fast enough you may not notice this, but keep in mind that, due to the latency/frequency rules, any command that takes 33 milliseconds to execute will drop up to 2 frames in a 60 fps video game (and will drop at least one). If you have many commands per second, they add up and wreck your system performance.
But Awesome isn't doomed to be slow. It may not have multiple threads, but it has something called coroutines and also has callbacks. This means things that take time to execute can still do so in the background (using C threads or external processes + sockets). When they are done, they can notify the Awesome thread. This works perfectly and avoids blocking Awesome.
Now this brings us to the next part. If I do:
-- Do **NOT** do this.
os.execute("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all")
The label will display foo. But If I do:
-- Assumes /tmp/foo.txt does not exist
awful.spawn.with_shell("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all")
Then the label will be empty. awful.spawn and awful.spawn.with_shell will not block, and thus the io.popen will be executed way before sleep 1 finishes. This is why we have async functions to execute shell commands. There are many variants with different characteristics and complexity. awful.spawn.easy_async is the most common, as it is good enough for the general "I want to execute a command and do something with the output when it finishes" case.
awful.spawn.easy_async_with_shell("sleep 1; echo foo > /tmp/foo.txt", function()
    awful.spawn.easy_async_with_shell("cat /tmp/foo.txt", function(out)
        mylabel.text = out
    end)
end)
In this variant, Awesome will not block. Again, like the other spawn functions, you cannot add code outside of the callback function to use the result of the command. That code would be executed before the command is finished, so the result won't yet be available.
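For the common case without a shell, a minimal sketch (reusing the mylabel widget from above; date is just an example command) looks like:
-- Sketch: run a command asynchronously and use its output only inside the callback.
awful.spawn.easy_async({ "date" }, function(stdout, stderr, exitreason, exitcode)
    mylabel.text = stdout
end)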
There are also coroutines, which remove the need for callbacks, but they are currently hard to use within Awesome and would also be confusing to explain here.

Get non-blocking stdin data in Dart

I would really like to retrieve the entire line as it is typed, before it is submitted, so checks can be run before the user adds a line break. How would one accomplish this?
Thank you for reading.
You can achieve this by setting stdin.lineMode to false. In that case you get a stream event for every character typed instead of only one per line. If you want to handle the output of the entered characters yourself, you can also disable stdin.echoMode; in that case you have to print the entered characters yourself. You have to enable it again before your program exits, otherwise the terminal stays in that mode.
One problem is that you are unable to reactivate echoMode in the case of a program crash, as there is no global crash handler. See issue 17743 for that.
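A rough sketch of that setup (the per-line check is a placeholder comment):
import 'dart:convert';
import 'dart:io';

void main() {
  // Sketch only: switch the terminal to character-at-a-time input.
  stdin.echoMode = false;  // we print the characters ourselves
  stdin.lineMode = false;  // one event per keystroke instead of per line

  final buffer = StringBuffer();
  stdin.transform(utf8.decoder).listen((chunk) {
    for (final ch in chunk.split('')) {
      if (ch == '\n') {
        // run your checks on buffer.toString() here, then start a new line
        stdout.writeln();
        buffer.clear();
      } else {
        buffer.write(ch);
        stdout.write(ch); // manual echo, since echoMode is off
      }
    }
  }, onDone: () {
    // restore the terminal before the program exits
    stdin.lineMode = true;
    stdin.echoMode = true;
  });
}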

gtkProgressBar in RGtk2

I am trying to add a gtkProgressBar to a little interface I created for an R script (using the RGtk2 package).
If I do something simple, like:
for (i in 1:50)
{
  gtkProgressBarSetFraction(progress, i/50)
  Sys.sleep(1)
}
everything runs smoothly and the bar is updated every second.
However, when I go to my actual code, I have a loop in which I do something like
for (i in 1:1000)
{
  gtkProgressBarSetFraction(progress, i/1000)
  # do some heavy computation here
}
The problem here is that the interface "freezes" and the progress bar is only updated at the end of the loop, completely defeating its purpose...
Am I missing something here? How can I periodically "wake up" the interface so that it refreshes?
Thank you
nico
EDIT: OK, I solved the problem, but I still don't understand what is going on. I added a Sys.sleep call after gtkProgressBarSetFraction and now the interface updates happily. To reduce the "wasted time" I just did Sys.sleep(0.0001) (so for 1000 cycles I would only have ~0.1-1s more computing time, which is acceptable). Could anyone explain why this is happening?
To process one event: gtkMainIterationDo(FALSE). To process all pending events: while(gtkEventsPending()) gtkMainIteration().
This code is needed because of the way that the R and Gtk event loops interact - at every point, either R or Gtk is in control, and needs to manually hand off control to the other. Sys.sleep is one way of doing that, and these RGtk2 specific functions are another.
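Applied to the loop from the question, that looks roughly like this (the heavy computation is a placeholder):
for (i in 1:1000)
{
  gtkProgressBarSetFraction(progress, i/1000)
  # hand control back to Gtk so the pending redraw request is processed
  while (gtkEventsPending()) gtkMainIteration()
  # do some heavy computation here
}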
Almost all GUIs are built around a concept called an event loop. The program has a queue of messages and itself runs an infinite loop that picks new messages from the queue and executes them. The queue is populated by events received from the OS, like key strokes, mouse clicks, resizing windows, etc., and by messages posted by the program itself.
It may not seem like it, but R also has its own event loop (for graphics, but not only; it is somewhat extended by RGtk2, and as it is generally complicated I won't go into details).
Now, when you call gtkProgressBarSetFraction, the progress bar is not updated directly; instead, a message requesting a redraw is created and pushed onto the queue. So it has no effect until it is picked up by an event loop iteration, and this won't happen until R finishes executing your script OR the loop is run exceptionally (by an internal timeout or by some functions, like Sys.sleep()).

How to get IMediaControl.Run() to start a file playing with no delay

I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next.
I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames.
I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches:
Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop and wait for the current position to equal the stop time, and then call Run() on the second video.
Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file.
Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least).
Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time.
The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing.
Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable.
More info: the AVI files I'm playing are completely uncompressed (video and audio are uncompressed), so I don't think the lag is due to DirectShow's having to decompress the video ahead of play start, although it may still buffer ahead as a matter of course (and this may be the source of the problem). I would have thought that starting play, pausing and then rewinding to the beginning would fix this.
Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch.
Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
Still more: perhaps I could eliminate this delay by loading the AVIs from memory rather than from a file. Unfortunately, I have no idea how to do this. IMediaControl only has a RenderFile() method, not something like a RenderStream or RenderMemory method.
If you call IMediaControl::Run on a stopped graph, the graph manager will post the call to a worker thread (so there's some variability). On the worker thread, the graph will be paused. Render filters only complete a pause transition once they have received data, so once GetState() returns S_OK, the graph manager knows that the graph is fully cued. At this point, it picks a time roughly 10ms into the future, and calls Run on each filter with that time as the start point. Since it takes time to tell each filter to Run, the dshow Run method has a parameter which is the refclock time at which a sample timestamped zero should be played -- i.e. the time at which the actual transition to run mode should take place.
To synchronise this with another graph, you first have to ensure that both graphs have the same clock. Query the graph (not the filter) for IMediaFilter, and call GetSyncSource on one graph and SetSyncSource on the other. Then you need to pause the second graph, so that it is cued and ready. When you want to start it, call IMediaFilter::Run instead of IMediaControl::Run, and you can pass your own start time. This still has to be a few milliseconds into the future, so the best thing might be to set the start time of the second graph to be the first graph's start time plus its duration (for an indexed container of uncompressed streams, the duration should be accurate).
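A rough sketch of that sequence in C++ (error handling and Release calls omitted; pGraph1/pGraph2 are assumed to be the two already-built, already-paused graphs, and rtDuration1 the first file's duration in 100-ns units):
// Sketch only: share one reference clock and schedule both start times
// explicitly via IMediaFilter::Run.
IMediaFilter *pFilter1 = NULL, *pFilter2 = NULL;
pGraph1->QueryInterface(IID_IMediaFilter, (void**)&pFilter1);
pGraph2->QueryInterface(IID_IMediaFilter, (void**)&pFilter2);

// Use the first graph's clock for both graphs.
IReferenceClock *pClock = NULL;
pFilter1->GetSyncSource(&pClock);
pFilter2->SetSyncSource(pClock);

// Pick a start time a few milliseconds in the future on the shared clock.
REFERENCE_TIME rtNow = 0;
pClock->GetTime(&rtNow);
const REFERENCE_TIME rtStart1 = rtNow + 10 * 10000;  // now + 10 ms

// Run the first graph at rtStart1 and the second exactly one file-length later.
pFilter1->Run(rtStart1);
pFilter2->Run(rtStart1 + rtDuration1);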
Another approach is to use multiple graphs. Separating source from rendering would allow you to switch seamlessly between sources since they feed into a common render graph. There is sample source code for this approach at www.gdcl.co.uk/gmfbridge.
G
