How can I make a program wait in OCaml?

I'm trying to make a Tetris game in OCaml, and I need to have a piece move through the graphics screen at a certain speed.
I think the best way to do it is to write a recursive function that draws the piece at the top of the screen, waits half a second or so, clears that piece from the screen, and redraws it 50 pixels lower. I just don't know how to make the program wait. I think you can do it using the Unix module, but I don't know how.

Let's assume you want to take a fairly simple approach, i.e., an approach that works without multi-threading.
Presumably when your game is running it spends virtually all its time waiting for input from the user, i.e., waiting for the user to press a key. Most likely, in fact, you're using a blocking read to do this. Since the user can take any amount of time before typing anything (up to minutes or years), this is incompatible with keeping the graphical part of the game up to date.
A very simple solution then is to have a timeout on the read operation. Instead of waiting indefinitely for the user to press a key, you can wait at most (say) 10 milliseconds.
To do this, you can use the Unix.select function. The simplest way to make that work is to switch from an OCaml channel to a Unix file descriptor for your input. If you can't figure out how to make this work, you might come back to StackOverflow with a more specific question.
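In case it is useful, here is a minimal OCaml sketch of that idea, assuming input arrives on stdin (the function name poll_key is made up); for a plain unconditional pause, recent OCaml versions also have Unix.sleepf 0.5.
(* Wait at most 50 ms for a key; return None on timeout so the caller
   can carry on moving the piece down the screen. *)
let poll_key () : char option =
  match Unix.select [Unix.stdin] [] [] 0.05 with
  | [], _, _ -> None                      (* timed out: no key pressed *)
  | _ ->
    let buf = Bytes.create 1 in
    ignore (Unix.read Unix.stdin buf 0 1);
    Some (Bytes.get buf 0)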

Related

Vulkan: trouble understanding cycling of framebuffers

In Vulkan, a semaphore (A) and a fence (X) can be passed to vkAcquireNextImageKHR. That semaphore (A) is subsequently passed to vkQueueSubmit, to wait until the image is released by the Presentation Engine (PE). A fence (Y) can also be passed to vkQueueSubmit, and client code can check when the submission has completed by checking fence (Y).
When fence (Y) signals, this means the PE can display the image.
My question:
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR? To me, it doesn't seem that it would be by checking fence(X), because that is for client code to know when the image can be written to by vkQueueSubmit, isn't it? After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done. Or, can the same fence(X) be used to query the image availability after the call to vkQueuePresentKHR?
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
The reason this is causing trouble for me is that in an asynchronous, 60 fps, triple-buffered app (throwaway learning code), things get out of whack like this:
1. Send an initial framebuffer to the PE. This framebuffer is now unavailable for 16 milliseconds.
2. Within the 16 ms, acquire a second image/framebuffer, submit commands, but don't present.
3. Do the same as #2 for a third image. We submit it before the 16 ms are up.
4. The 16 ms have gone by, so we vkQueuePresentKHR the second image.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired three images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How do I know when the PE has finished using the image after a call to vkQueuePresentKHR?
You usually do not need to know.
Either you need to acquire a new VkImage, or you don't; whether the PE has finished or not does not enter into that decision.
The only reason to want to know is to measure presentation times. There is a special extension for that: VK_GOOGLE_display_timing.
After the image is sent to vkQueueSubmit, it seems the usefulness of fence(X) is done.
Well, you can reuse the fence. The implementation stops using it as soon as it has signaled and will not change its state again, if that is what you are asking (so you are free to vkDestroy it or do other things with it).
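For instance, a small C++ fragment of the reuse pattern (device and fence are assumed to already exist):
// Wait until the fence has signaled, then flip it back to unsignaled
// so the very same handle can be handed to the next operation.
vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
vkResetFences(device, 1, &fence);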
I don't know when the image is available again after a call to vkQueuePresentKHR, without having to call vkAcquireNextImageKHR.
Hopefully I cover it below, but I am not precisely sure what the problem here is. I don't know how to eat soup without a spoon either. Simply use a spoon; I mean, vkAcquireNextImageKHR.
Now, if I call vkAcquireNextImageKHR, the whole thing can fail if image #1 is not yet done being used, because I have acquired 3 images at this point.
How to know if image #1 is available again without calling vkAcquireNextImageKHR?
How is it any different than image #1 and #2?
Yes, you may have already acquired all the images the swapchain has to offer, or the PE is "not ready" to give away an image even if it has two.
In the first case, the spec advises against calling vkAcquireNextImageKHR with a timeout of UINT64_MAX. It is a simple matter of counting successful vkAcquireNextImageKHR calls against vkQueuePresentKHR calls; one way is to simply do one vkAcquireNextImageKHR and then one vkQueuePresentKHR. A sketch of the counting approach follows below.
In the second case you can simply call vkAcquireNextImageKHR and you will eventually get the image.
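For illustration, a minimal C++ sketch of that counting idea (every name below is hypothetical, error handling is elided, and a swapchain of 3 images is assumed):
uint32_t imagesAcquired = 0;            // ++ on successful acquire, -- after present
const uint32_t swapchainImageCount = 3; // assumed triple buffering
bool tryAcquire(VkDevice device, VkSwapchainKHR swapchain,
                VkSemaphore acquireSemaphore, uint32_t* imageIndex)
{
    // If we may already hold every image, an infinite wait could block
    // forever, so poll with a zero timeout and retry on the next pass.
    uint64_t timeout = (imagesAcquired == swapchainImageCount) ? 0 : UINT64_MAX;
    VkResult r = vkAcquireNextImageKHR(device, swapchain, timeout,
                                       acquireSemaphore, VK_NULL_HANDLE,
                                       imageIndex);
    if (r == VK_SUCCESS || r == VK_SUBOPTIMAL_KHR) {
        ++imagesAcquired;
        return true;
    }
    return false; // VK_NOT_READY or VK_TIMEOUT: no image free yet
}
// ...and after every successful vkQueuePresentKHR: --imagesAcquired;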
In order to use a swapchain image, you need to acquire it. After that, the actual availability of the image for rendering purposes is signaled by the semaphore (A) or the fence (X): you can either use the semaphore (A) as a wait semaphore during submission, or wait on the CPU for the fence (X) and submit after that. For performance reasons, the semaphore is the preferred way.
Now, when you present an image, you give it back to the Presentation Engine. From that moment you cannot use the image for any purpose, and there is no way to check when it becomes available to you again for rendering. If you want to render into a swapchain image again, you need to acquire another image, and during this operation you once again provide a semaphore or a fence (probably different from those provided when you previously acquired a swapchain image). There is no other way to check when an image is available again than by calling the vkAcquireNextImageKHR() function.
And when you want to implement triple buffering, you should just select an appropriate presentation mode (mailbox mode is the closest match). You shouldn't wait for a specific time before you present an image; just present it when you are done rendering into it. Your synchronization should be based entirely on the acquire and present commands and on the semaphores or fences provided during those operations and during submission. The appropriate present mode does the rest. A detailed explanation of the different present modes is available in Intel's tutorial.
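To make the present-mode point concrete, here is a small C++ sketch of preferring mailbox mode (choosePresentMode is an invented helper; FIFO is the fallback the spec guarantees to be available):
#include <vector>
#include <vulkan/vulkan.h>
VkPresentModeKHR choosePresentMode(VkPhysicalDevice gpu, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(gpu, surface, &count, nullptr);
    std::vector<VkPresentModeKHR> modes(count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(gpu, surface, &count, modes.data());
    for (VkPresentModeKHR m : modes)
        if (m == VK_PRESENT_MODE_MAILBOX_KHR) // newest frame wins; no tearing
            return m;
    return VK_PRESENT_MODE_FIFO_KHR;          // always supported per the spec
}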

How to automatically update time in CICS

I have two questions; the first is the main one.
1. I was able to display the date in a CICS map, but I need it to tick, i.e., the display should be updated every second.
2. I have a COBOL-DB2 program which automatically writes data from the database (DB2) to a file. I want this program to be invoked on a time basis, i.e., every hour, every two hours, or every day.
Thank you
You can do this, but you will need to modify the traditional pseudo-conversational approach. Instead of returning and waiting for a user event, you can START your transaction after some number of seconds with your current commarea and quit. If a user event occurs in that time, you cancel your START request; if it doesn't, you refresh the screen timestamp and repeat.
It is kind of a pain just to get a timestamp refreshed, and it doesn't make much sense to bother with unless you have a really good reason.
The DB2 stuff is plain easy. Start your transaction using interval control, the same START AFTER() described above, and you can have it run hourly, bihourly, or whatever.
I don't think you need to modify your pseudo-conversational approach to achieve this. Just issue an EXEC CICS START command with a one-second delay (just do this once) for a small program that simply issues a Send Map (or TC Write) to the terminal facility. Ideally, reserve a common area on the screen so all transactions can use a common program. At some point, when the updates are no longer required, CANCEL the START request. The way I see it, the timer-update transaction will mix in nicely with your user-initiated transaction flow. If a user transaction is active when the START timer pops, the timer-update program will just be delayed a little. A sketch of the command follows below.
While this should work, bear in mind that you might be driving 3,600 transactions per hour for each user. Is this feature really worth all that?
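A rough sketch of the one-second START described above (the transaction names TICK and XTRC are made up; INTERVAL is in hhmmss form):
* Start the clock-update transaction in one second, attached to this
* user's terminal so its program can SEND MAP DATAONLY to the screen.
EXEC CICS START
     TRANSID('TICK')
     INTERVAL(000001)
     TERMID(EIBTRMID)
END-EXEC.
* The hourly DB2-to-file extract is the same idea with a longer interval:
EXEC CICS START
     TRANSID('XTRC')
     INTERVAL(010000)
END-EXEC.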
This is not possible in standard CICS using maps. The 3270 protocol does not lend itself to continually updating screens. The majority of automatically updating screens, such as consoles and monitoring displays, use native VTAM methods and build their own data streams.
It might be possible to do this using unformatted data, but I would not recommend it in CICS. Pseudo-conversational CICS does not have a program in control during screen display, and conversational programming is highly discouraged.
You can't really do this in CICS, which was designed for pseudo-interactive responses at best. It was designed for use on mainframes where your terminal is sent a whole page or screen at a time; the program reads the screen as received (the screen has fields the user may update, and fields you don't change are not sent back by the terminal), and then the CICS transaction, having processed the parts of the screen containing changes, sends its response back and quits.
This makes for very efficient data-entry and inquiry programs. But realize that when the program has finished processing the screen, it has quit: it's gone, it's not even in memory any more, and all its resources have been reclaimed. This allows a company to run a mainframe with 300 terminals and maybe 10 megabytes of real memory, because when a program is waiting for you to respond, it's not using any resources at all. If 200 people are running a data-entry program, they are all running the same re-entrant copy of the same program, and the only thing each of them is using is maybe 1K of writable storage for the part that reads a screen or a file record and does some calculations. Think about that: 200 people are running the same program, and all of them, simultaneously, share one module that uses 20K of memory for the application code, plus 1K each of actual read/write data.
In other words, the first user to start that data-entry program uses 20K of memory for the application, plus 1K for writable data; each user after that adds only about 1K. When they're sitting there looking at the terminal, all they might be using is 4 bytes in a table to tell the system there's a terminal connected; essentially no resources are in use at all.
Being able to have a screen updated on a regular basis means that something has to keep running, which is not something CICS does very well. CICS is not intended for interactive processing the way a PC handles it, where your program really is running live the whole time.
EXEC CICS ASKTIME END-EXEC to update the timestamp.
EXEC CICS SEND MAP DATAONLY END-EXEC to update the screen.
However, using the suggested
EXEC CICS START
     TRANSID('name' | namefld)
     INTERVAL(time)
END-EXEC
is actually the better way.

How to get IMediaControl.Run() to start a file playing with no delay

I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next.
I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames.
I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches:
1. Check the first video's current position inside a timer that fires every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop, wait for the current position to equal the stop time, and then call Run() on the second video.
2. Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file.
Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least).
Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time.
The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing.
Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable.
More info: the AVI files I'm playing are completely uncompressed (both video and audio), so I don't think the lag is due to DirectShow having to decompress the video ahead of play start, although it may still buffer ahead as a matter of course (and this may be the source of the problem). I would have thought that starting play, pausing, and then rewinding to the beginning would fix this.
Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch.
Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
Still more: perhaps I could eliminate this delay by loading the AVIs from memory rather than from a file. Unfortunately, I have no idea how to do this. IMediaControl only has a RenderFile() method, not something like a RenderStream or RenderMemory method.
If you call IMediaControl::Run on a stopped graph, the graph manager posts the call to a worker thread (so there's some variability). On the worker thread, the graph is paused. Render filters only complete a pause transition once they have received data, so once GetState() returns S_OK, the graph manager knows that the graph is fully cued. At this point, it picks a time roughly 10 ms into the future and calls Run on each filter with that time as the start point. Since it takes time to tell each filter to Run, the DirectShow Run method takes a parameter: the reference-clock time at which a sample timestamped zero should be played, i.e., the time at which the actual transition to run mode should take place.
To synchronise this with another graph, you first have to ensure that both graphs have the same clock. Query each graph (not a filter) for IMediaFilter, and call GetSyncSource on one graph and SetSyncSource with that clock on the other. Then pause the second graph, so that it is cued and ready. When you want to start it, call IMediaFilter::Run instead of IMediaControl::Run, so that you can pass your own start time. This still has to be a few milliseconds into the future, so the best approach is probably to set the start time of the second graph to the first graph's start time plus its duration (for an indexed container of uncompressed streams, the duration should be accurate).
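By way of illustration, a hedged C++ sketch of that sequence (COM error handling and GetState() waits are elided; runBackToBack and its arguments are invented, and rtDuration is the first file's duration in 100 ns units, e.g. from IMediaSeeking::GetDuration):
#include <dshow.h>
void runBackToBack(IGraphBuilder* graph1, IGraphBuilder* graph2,
                   REFERENCE_TIME rtDuration)
{
    IMediaFilter *mf1 = NULL, *mf2 = NULL;
    graph1->QueryInterface(IID_IMediaFilter, (void**)&mf1);
    graph2->QueryInterface(IID_IMediaFilter, (void**)&mf2);
    // Give both graphs one clock so their start times are comparable.
    IReferenceClock* clock = NULL;
    mf1->GetSyncSource(&clock);
    mf2->SetSyncSource(clock);
    // Pause both graphs so every renderer is cued with its first sample.
    IMediaControl *mc1 = NULL, *mc2 = NULL;
    graph1->QueryInterface(IID_IMediaControl, (void**)&mc1);
    graph2->QueryInterface(IID_IMediaControl, (void**)&mc2);
    mc1->Pause();
    mc2->Pause();
    // Start graph 1 a little in the future; schedule graph 2 for exactly
    // one file-duration later, so the cut lands on the clock rather than
    // on whenever a completion event happens to arrive.
    REFERENCE_TIME now = 0;
    clock->GetTime(&now);
    const REFERENCE_TIME start = now + 200000; // 20 ms of cueing headroom
    mf1->Run(start);
    mf2->Run(start + rtDuration);
    mc2->Release(); mc1->Release();
    clock->Release(); mf2->Release(); mf1->Release();
}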
Another approach is to use multiple graphs. Separating source from rendering would allow you to switch seamlessly between sources since they feed into a common render graph. There is sample source code for this approach at www.gdcl.co.uk/gmfbridge.
G

QTimer firing issue in QGIS (Quantum GIS)

I have been involved in building a custom QGIS application in which live data is to be shown in the application's viewer.
The IPC mechanism being used is Unix message queues.
The data is to be refreshed at a specified interval, say 3 seconds.
Now the problem I am facing is that processing the data to be shown takes more than 3 seconds, so what I have done is this: before the app starts to process data for the next update, the refresh QTimer is stopped, and after the data is processed I restart the QTimer. The app should work in such a way that after an update/refresh (during which the app goes unresponsive) the user gets ample time to keep working in the app apart from seeing the data being updated. I am able to get acceptable pauses for the user to work in one scenario.
But on a different OS (RHEL 5.0 versus RHEL 5.2) the situation is different: the timer goes wild and keeps firing without any pause between successive updates, going into an infinite loop. Handling the update data definitely takes longer than 3 seconds, and for that very reason I stop and restart the timer around the processing, yet the same logic works in one scenario and not in the other. I have also observed that when this rapid firing happens, the refresh function exits very quickly, in about 300 ms, so the stop and start of the timer placed at the beginning and end of that function both happen within that short time. Before the actual data processing finishes, there are 3-4 starts of the timer queued up waiting to be executed, and from that point the infinite-loop problem gets worse with every successive update.
The important thing to note is that for the same code and the same amount of data, one OS shows a refresh time of around 4000 ms (the actual processing time) while the other shows 300 ms.
Maybe this has something to do with newer libraries on the updated OS, but I don't know how to debug it because I cannot find any clues as to why this happens. Maybe something related to pthreads changed between the OS versions?
So my query is: is there any way to guarantee that some processing in my app is driven by a timer, independent of the OS, without using QTimer, since I suspect QTimer cannot achieve what I want?
What options are there? pthreads or Boost threads? Which would be better if I were to use threads as an alternative? And how can I ensure a gap of at least 3 seconds between successive updates, no matter how long the update processing takes?
Kindly help.
Thanks.
If I were trying to get an acceptable, longer-term solution, I would investigate updating your display in a separate thread. In that thread, you could paint the display to an image, updating as often as you desire, although you might want to throttle the thread so it doesn't take all of the available processing time. Then in the UI thread, you could read that image and draw it to the screen. That could also improve your responsiveness to panning, since you could display different parts of the image. You could update the image every 3 seconds based on a timer (just redraw from the source), or you could have the other thread emit a signal whenever the new data is completely refreshed.
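A minimal Qt/C++ sketch of that idea (all names are invented, and the solid fill stands in for the real rendering work; queued signal delivery carries the finished frame to the GUI thread):
#include <QThread>
#include <QImage>
class RenderThread : public QThread
{
    Q_OBJECT
signals:
    void frameReady(const QImage &frame); // delivered queued to the GUI thread
protected:
    void run()
    {
        forever {
            QImage frame(800, 600, QImage::Format_RGB32);
            frame.fill(0xffffffff); // stand-in for the slow (>3 s) repaint work
            emit frameReady(frame); // QImage is implicitly shared: cheap to pass
            msleep(3000);           // throttle: at most one refresh every 3 s
        }
    }
};
// In the GUI thread, the viewer just stores the newest frame and repaints:
//   connect(thread, SIGNAL(frameReady(QImage)),
//           viewer, SLOT(setFrame(QImage)), Qt::QueuedConnection);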

Is it a bad idea to implement a timer loop in Flex?

In our game project we did have a timer loop set to fire about 20 times a second (the same as the application framerate). We use this to move some sprites around.
I'm wondering whether this could cause problems, and whether we should instead do our updates in an EnterFrame event handler.
I get the impression that having a timer loop run faster than the application framerate is likely to cause problems... is this the case?
As an update, trying to do it on EnterFrame caused very weird problems. Instead of a frame every 75 ms, suddenly it jumped to 25 ms. Note that it wasn't just that our calculations claimed the framerate was different; the animations actually sped up to a crazy rate.
I'd go for ENTER_FRAME. In some special cases it can be useful to have two "loops", one for logic and one for the visuals, but for most games I make I stick to the ENTER_FRAME event listener. Having a separate timer for moving your stuff around is a bit unnecessary, since setting it to anything other than the framerate would make the motion either jerky or simply invisible (because the frame is not redrawn).
One thing to consider, however, is decoupling your logic from the framerate. This is most easily accomplished by using getTimer (available in both AS2 and AS3) to calculate the time that has elapsed since the last frame, and adjusting the motion accordingly, as in the sketch below.
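For example, a small AS3 sketch of that pattern (the player variable and the 100 px/s speed are made up):
import flash.utils.getTimer;
import flash.events.Event;
var lastTime:int = getTimer();
function onEnterFrame(e:Event):void {
    var now:int = getTimer();
    var dt:Number = (now - lastTime) / 1000; // seconds elapsed since the last frame
    lastTime = now;
    player.x += 100 * dt; // hypothetical sprite moves 100 px/s at any framerate
}
addEventListener(Event.ENTER_FRAME, onEnterFrame);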
A timer is no more reliable than the enter-frame event: Flash will try to keep up with whatever rate you've set, but if you're doing heavy processing or complex graphics, both timers and the framerate will slow down.
Here's a rundown of how Flash handles framerates and why you saw your content play faster.
At the deepest level, whatever host application Flash is running in (usually the browser) polls Flash at some interval. That interval might be every 10 ms in one browser, or 50 ms in another. Every time that poll occurs, Flash does something like this:
Have (1000/framerate) milliseconds passed since the last frame update?
- If no: do nothing and return.
- If yes: execute a frame update:
  1. Advance all (playing) timelines one frame.
  2. Dispatch all events (including an ENTER_FRAME event).
  3. Execute all frame scripts and event handlers with pending events.
  4. Draw screen updates.
- Return.
However, certain kinds of external events (such as keypresses, mouse events, and timer events) are handled asynchronously to the above process. So if you have an event handler that fires when a key is pressed, the code in that handler might be executed several times between frame updates. The screen will still only be redrawn once per frame update, unless you use the updateAfterEvent() method (global in AS2, attached to events in AS3).
Note that the asynchronous behavior of these events does not affect the timing of frame updates. Even if you use timer events to, for example, redraw the screen 50 times per second, frame animations will still occur at the published framerate, and scripted animations will not execute any faster if they're driven by the enterFrame event (rather than the timer).
The nice thing about using enter-frame events is that your processing will degrade at the same pace as the rendering, and you'll get a screen update right after the code block finishes.
Neither method is guaranteed to fire at a specific time interval, so your event handler should determine how long it has been since it last executed and make decisions based on that, rather than purely on how many times it has run.
I think timer events and ENTER_FRAME are both good options; I have used both of them in my games. (Did you mean a timer event by "timer loop"?)
PS: note that on slow machines the timer may not fire quickly enough, so you may need to adjust your code to make the game run "faster" on slow machines.
I would suggest using a class such as TweenLite (http://blog.greensock.com/tweenliteas3/), which is lightweight at about 3 KB, or, if you need more power, TweenMax, which I believe is 11 KB. There are many advantages here. First off, this "engine" has been thoroughly tested and benchmarked and is well known as one of the most resource-friendly ways to animate few or even many things. I have seen a benchmark where, in AS3, 1,500 sprites are animated with TweenLite and it holds a strong 20 fps, whereas competitors like Tweener bog down to 9 fps (http://blog.greensock.com/tweening-speed-test/). The next advantage is ease of use, as I will demonstrate below.
//Make sure you have a class path pointed at a folder that contains the following.
import gs.TweenLite;
import gs.easing.*;
var ball_mc:MovieClip = new MovieClip();
var g:Graphics = ball_mc.graphics;
g.beginFill(0xFF0000,1);
g.drawCircle(0,0,10);
g.endFill();
addChild(ball_mc); // add the ball to the display list so it is actually visible
//Now we animate ball_mc
//Example: TweenLite.to(displayObjectName, totalTweeningTime, {someProperty:someValue,anotherProperty:anotherValue,onComplete:aFunctionCalledWhenComplete});
TweenLite.to(ball_mc, 1,{x:400,alpha:0.5});
So this takes ball_mc and moves it to 400 from its current position on the x axis and during that same Tween it reduces or increases the alpha from its current value to 0.5.
After importing the needed class, it is really only one line of code to animate each object, which is really nice. We can also affect the easing, which I believe by default is Expo.easeOut (a strong ease-out). If you want it to bounce or be elastic, such effects are available just by adding a property to the object, as follows.
TweenLite.to(ball_mc, 1,{x:400,alpha:0.5,ease:Bounce.easeOut});
TweenLite.to(ball_mc, 1,{x:400,alpha:0.5,ease:Elastic.easeOut});
The easing all comes from the gs.easing.* import; these are, I believe, Penner's easing equations utilized through TweenLite.
In the end we have no polling (open loops) to manage, such as a Timer, and we have very readable code that can be amended or removed with ease.
It is also important to note that TweenLite and TweenMax offer far more than I have shown here, and it is safe to say that I use one of the two classes in every single project. The animations are custom, they can have functionality attached to them (onComplete: functionCall), and again, they are optimal and resource-friendly.
