I am working on an application which requires video frame capture from different frame-grabber cards. I am using the DirectShow ISampleGrabberCB::SampleCB callback to receive a pointer to each new frame. Now I want to know when exactly this callback gets called. Is it guaranteed to be called every time the frame-grabber receives a new frame?
I was testing with 120 Hz signals at various resolutions, but this callback is only getting called 50-55 times per second. So it is possible that my frame-grabber cannot capture at that rate (although theoretically it's capable). I want to find out whether the bottleneck is this callback or the frame-grabber card.
Thank You
SampleCB is called synchronously from the streaming thread, and you get one call for every frame. While you are inside the callback, you block further streaming; you need to return control from your callback in order for streaming to resume (in particular, if your callback is slow, it can reduce the effective FPS).
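To rule the callback out as the bottleneck, a common pattern is to make SampleCB do as little as possible: copy the bytes out, hand them to a worker thread, and return. A minimal sketch (it assumes the deprecated qedit.h declaration of ISampleGrabberCB; the class and member names here are illustrative, not from your code):

#include <windows.h>
#include <qedit.h>   // deprecated header declaring ISampleGrabberCB

#include <condition_variable>
#include <deque>
#include <mutex>
#include <vector>

// Copies each frame out and returns immediately; a worker thread does
// the actual processing, so the streaming thread is never stalled.
class FrameGrabberCB : public ISampleGrabberCB
{
public:
    // Called on the streaming thread for every delivered frame.
    STDMETHODIMP SampleCB(double /*sampleTime*/, IMediaSample *pSample) override
    {
        BYTE *pData = nullptr;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            std::lock_guard<std::mutex> lock(m_lock);
            m_frames.emplace_back(pData, pData + pSample->GetActualDataLength());
            m_frameReady.notify_one();   // wake the processing thread
        }
        return S_OK;   // return fast: streaming is blocked until we do
    }

    STDMETHODIMP BufferCB(double, BYTE *, long) override { return E_NOTIMPL; }

    // Minimal IUnknown for a callback object owned by the application.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) { *ppv = this; return S_OK; }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return 2; }    // no real refcounting
    STDMETHODIMP_(ULONG) Release() override { return 1; }

private:
    std::mutex m_lock;
    std::condition_variable m_frameReady;
    std::deque<std::vector<BYTE>> m_frames;   // consumed by a worker thread
};

If the rate still stays at 50-55 calls per second with a callback this cheap, the limit is upstream of the callback (the card, its driver, or the negotiated media type).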
We are using SignalR to fetch data for 20 tiles.
We call FetchData() on all 20 tiles at the same time; each call then fires off a message over SignalR to request that data (each tile has subscribed to receive the answers).
We find that each tile populates its data one at a time, as if SignalR only fetches the next tile's response after the first tile has completed.
I know this is super high level, but in my mind it worked like an AJAX request: if I fired off 20 requests in a row, they would all return independently, possibly out of order.
By default, SignalR only allows a single hub method invocation from a client at a time. So if you fire off 20 concurrent calls to invoke a hub method, it'll run them in sequence (there are exceptions to this rule, but that's the idea). This article covers the relevant configuration: https://learn.microsoft.com/en-us/aspnet/core/signalr/configuration?view=aspnetcore-6.0&tabs=dotnet#configure-server-options. You'd want to increase MaximumParallelInvocationsPerClient to something other than 1.
Given an asynchronous IMFSourceReader connected to a synchronous-only IMFTransform.
Then, for the IMFSourceReaderCallback::OnReadSample() callback, is it a good idea not to call IMFTransform::ProcessInput directly within OnReadSample, but instead to push the produced sample onto another queue for another thread to call the transform's ProcessInput on?
Or would I just be replicating identical work the source reader typically does internally? Put another way, does work within OnReadSample run the risk of blocking further decoding work within the source reader that could otherwise have happened more asynchronously?
So I am suggesting something like:
WorkQueue transformInputs;
...
// Called back asynchronously on the source reader's thread
HRESULT OnReadSampleCallback(... IMFSample* sample)
{
    // AddRef the sample (it is kept beyond this call), queue it,
    // and return immediately
    sample->AddRef();
    Push(transformInputs, sample);
    return S_OK;
}
// Different worker thread, woken when transformInputs has samples
void OnTransformInputWork()
{
    // Transform object is not async capable
    IMFSample* sample = Pop(transformInputs);
    transform->ProcessInput(0, sample, 0);
    sample->Release(); // balances the AddRef taken in the callback
    ...
}
This is touched on, but not elaborated on, under 'Implementing the Callback Interface' here:
https://learn.microsoft.com/en-us/windows/win32/medfound/using-the-source-reader-in-asynchronous-mode
Or is it completely dependent on whatever the source reader sets up internally and not easily determined?
It is not a good idea to perform a long blocking operation in IMFSourceReaderCallback::OnReadSample. Nothing will be fatal or serious, but this is not the intended usage.
Taking your previous question about audio format conversion into consideration, though, audio sample data conversion is fast enough to happen in such a callback.
Also, it is not clearly documented (it depends on the actual implementation), but ProcessInput is often nearly instant and only references the input data; ProcessOutput is where the computationally expensive work would happen in this case. If you don't call ProcessOutput right there in the same callback, you might run into the situation where the MFT is no longer accepting input, and so you'd have to implement a queue anyway.
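If you do drain the MFT in the same place you feed it, the loop looks roughly like this (FeedAndDrain is a hypothetical helper; it assumes an MFT that allocates its own output samples via MFT_OUTPUT_STREAM_PROVIDES_SAMPLES, otherwise you must attach a sample to the output buffer yourself):

#include <mfapi.h>
#include <mferror.h>
#include <mftransform.h>

// Feed one input, then drain outputs until the MFT reports it is
// ready to accept input again.
HRESULT FeedAndDrain(IMFTransform *transform, IMFSample *input)
{
    HRESULT hr = transform->ProcessInput(0, input, 0);
    if (FAILED(hr))
        return hr;

    for (;;)
    {
        MFT_OUTPUT_DATA_BUFFER output = {};
        DWORD status = 0;
        hr = transform->ProcessOutput(0, 1, &output, &status);
        if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
            return S_OK;               // MFT is accepting input again
        if (FAILED(hr))
            return hr;
        if (output.pSample)
            output.pSample->Release(); // hand off to a consumer in real code
        if (output.pEvents)
            output.pEvents->Release();
    }
}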
With all this in mind, you would either do the processing right in the callback, neglecting the performance impact, provided your processing is not too heavy, or otherwise you would indeed implement the queue.
This MSDN page describes the need for some filters to return VFW_S_CANT_CUE from GetState() in the paused state if there's a possibility that the filter can't deliver samples while paused. That all seems clear enough. It seems that if there's any doubt for a particular filter, it's probably better to return VFW_S_CANT_CUE to make sure that Pause() doesn't hang.
Delivering Samples
Are there any downsides to returning VFW_S_CANT_CUE though? Is resuming streaming from the paused state likely to perform poorly or lose sync if a mux or demux filter in the graph returns VFW_S_CANT_CUE?
I've inherited source code for several filters that sometimes return VFW_S_CANT_CUE for reasons that aren't clear to me (for example, only returning VFW_S_CANT_CUE if no output samples have been delivered). I'm wondering if there are any risks from always returning VFW_S_CANT_CUE.
Returning VFW_S_CANT_CUE disables synchronization with renderers during the stopped-to-paused transition: the Filter Graph Manager does not wait for the renderers to report that they are ready, which in the case of the video renderer means waiting until it receives a banner frame and presents it (I suppose with an EC_PAUSED notification being sent). Disabled synchronization means that IMediaControl::Pause returns immediately and does not wait for a banner frame, which is what live sources might prefer.
The only downside I can think of is that once the Pause call has completed, you cannot be sure that the video renderer is presenting a valid frame rather than blackness. I suppose the unclear reasoning behind the VFW_S_CANT_CUE returns you are seeing is the developer's attempt to avoid deadlocks he stumbled on during debugging.
If a filter returns VFW_S_CANT_CUE from its GetState() method (i.e. it is a live source), the Pause() method will not wait for samples to be queued, and because of this the stream time starts as soon as the filter graph is started.
Otherwise, the filter graph will wait until several samples have been queued, and only after that will the stream time start (because after Pause(), the Run() method is called).
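For reference, the mechanics amount to a one-line decision in GetState(). A minimal sketch based on the DirectShow base classes (CMyLiveSource is a hypothetical filter class):

#include <streams.h>   // DirectShow base classes

// Hypothetical live source filter: report that it cannot deliver data
// while paused, so the graph does not wait for queued samples.
STDMETHODIMP CMyLiveSource::GetState(DWORD /*dwMSecs*/, FILTER_STATE *pState)
{
    CheckPointer(pState, E_POINTER);
    *pState = m_State;
    if (m_State == State_Paused)
        return VFW_S_CANT_CUE;   // Pause() returns without cueing
    return S_OK;
}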
It is hard for me to understand the difference between signals and events in Qt. Could someone explain?
An event is a message encapsulated in a class (QEvent) which is processed in an event loop and dispatched to a recipient that can either accept the message or pass it along to others to process. They are usually created in response to external system events like mouse clicks.
Signals and Slots are a convenient way for QObjects to communicate with one another and are more similar to callback functions. In most circumstances, when a "signal" is emitted, any slot function connected to it is called directly. The exception is when signals and slots cross thread boundaries. In this case, the signal will essentially be converted into an event.
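A rough side-by-side illustration (Counter is a hypothetical class; Qt's usual moc step applies):

#include <QEvent>
#include <QObject>

// Hypothetical QObject showing both mechanisms side by side.
class Counter : public QObject
{
    Q_OBJECT
public:
    void increment() { emit valueChanged(++m_value); }

signals:
    void valueChanged(int newValue);   // other objects may connect slots to this

protected:
    // Events are delivered *to* the object; handle them or pass them on.
    bool event(QEvent *e) override
    {
        if (e->type() == QEvent::User) {
            increment();
            return true;               // handled
        }
        return QObject::event(e);      // let the base class process it
    }

private:
    int m_value = 0;
};

// Same thread: the slot runs immediately when the signal is emitted.
// QObject::connect(&counter, &Counter::valueChanged,
//                  &receiver, &Receiver::onValueChanged);
// Across threads the connection is queued, i.e. the slot call is
// posted to the receiver's event loop as an event.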
Events are something that happened to or within an object. In general, you would treat them within the object's own class code.
Signals are emitted by an object. The object is basically notifying other objects that something happened. Other objects might do something as a result, or not, but it is not the emitter's job to deal with that.
My impression of the difference is as follows:
Say you have a server device, running an infinite loop, listening to some external client Events and reacting to them by executing some code.
(It can be a CPU listening to interrupts from devices, or client-side JavaScript browser code listening for user clicks, or server-side website code listening for users requesting web pages or data.)
Or it can be your Qt application, running its main loop.
I'll be explaining with the assumption that you're running Qt on Linux with an X-server used for drawing.
I can distinguish 2 main differences, although the second one is somewhat disputable:
Events represent your hardware and are a small finite set. Signals represent your Widgets-layer logic and can be arbitrarily complex and numerous.
Events are low-level messages coming to you from the client. The set of Events is strictly limited (~20 different Event types), determined by the hardware (e.g. mouse click/double-click/press/release, mouse move, keyboard key pressed/released/held, etc.) and specified in the protocol of interaction (e.g. the X protocol) between application and user.
E.g. at the time the X protocol was created there were no multitouch gestures; there were only mouse and keyboard, so the X protocol won't understand your gestures and send them to the application; it will just interpret them as mouse clicks. Thus, extensions to the X protocol are introduced over time.
X events know nothing about widgets; widgets exist only in Qt. X events know only about X windows, which are very basic rectangles that your widgets consist of. Your Qt events are just a thin wrapper around X events/Windows events/Mac events, providing a compatibility layer between the different operating systems' native events for the convenience of widget-level logic authors.
Widget-level logic deals with Signals, because they carry the widget-level meaning of your actions. Moreover, one Signal can be fired due to different events, e.g. either a mouse click on a "Save" menu button or a keyboard shortcut such as Ctrl-S.
Abstractly speaking (this is not exactly about Qt!), Events are asynchronous in their nature, while Signals (or hooks in other terms) are synchronous.
Say you have a function foo() that can fire a Signal OR emit an Event.
If it fires a signal, the Signal is executed on the same thread of code as the function that caused it, right after the function.
On the other hand, if it emits an Event, the Event is sent to the main loop, and it depends on the main loop when it delivers that event to the receiving side and what happens next.
Thus 2 consecutive events may even get delivered in reversed order, while 2 consecutively fired signals remain consecutive.
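A small runnable sketch of that ordering difference in Qt terms (Emitter is a hypothetical class; with a Q_OBJECT class defined in a .cpp file, moc requires including the generated file, as noted at the bottom):

#include <QCoreApplication>
#include <QDebug>
#include <QEvent>
#include <QObject>

// Hypothetical emitter: fires a signal and posts an event back to back.
class Emitter : public QObject
{
    Q_OBJECT
public:
    void foo()
    {
        emit fired();   // direct connection: the slot runs here, synchronously
        QCoreApplication::postEvent(this, new QEvent(QEvent::User)); // queued
        qDebug() << "foo() finished";
    }
signals:
    void fired();
protected:
    bool event(QEvent *e) override
    {
        if (e->type() == QEvent::User) {
            qDebug() << "event handled (after foo() finished)";
            QCoreApplication::quit();
            return true;
        }
        return QObject::event(e);
    }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    Emitter emitter;
    QObject::connect(&emitter, &Emitter::fired,
                     [] { qDebug() << "slot ran (before foo() finished)"; });
    emitter.foo();
    return app.exec();   // the event loop delivers the posted event here
}

// #include "main.moc"  // needed when the Q_OBJECT class lives in main.cpp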
Though, the terminology is not strict. "Signals" in Unix, as a means of interprocess communication, would better be called Events, because they are asynchronous: you send a signal in one process and never know when the event loop is going to switch to the receiving process and execute the signal handler.
P.S. Please forgive me if some of my examples are not absolutely correct to the letter. They are still good in spirit.
An event is passed directly to an event handler method of a class. These handlers are available for you to override in your subclasses to choose how to handle the event differently. Events also pass up the chain from child to parent until someone handles them or they fall off the end.
Signals, on the other hand, are openly emitted, and any other entity can opt to connect and listen to them. They pass through the event loop and are processed in a queue (they can also be handled directly if sender and receiver are in the same thread).
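A small sketch of that propagation rule (ClickArea is a hypothetical widget): an accepted event stops at the child, an ignored one continues to the parent:

#include <QMouseEvent>
#include <QWidget>

// Hypothetical child widget: handles left clicks itself and lets every
// other mouse press propagate up to its parent widget.
class ClickArea : public QWidget
{
protected:
    void mousePressEvent(QMouseEvent *e) override
    {
        if (e->button() == Qt::LeftButton) {
            e->accept();   // handled here; propagation stops
        } else {
            e->ignore();   // Qt forwards the event to the parent widget
        }
    }
};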
I'm on Flash Builder 4.5 and I'm using a remote object with amfphp, and when I call two methods (method1 and method2) at the same time, the response of method2 always arrives after method1's response, even though method2 is much faster to return its result.
Here's the scenario:
I set up a remote object which refers to a remote PHP class "Newsletter" which contains the sendNewsletter and getProgress methods.
Here's what the two methods do:
- sendNewsletter() reads the email archive and sends the newsletter. After each email is sent, it writes a log entry into the database.
- getProgress() reads the log written by sendNewsletter(), counts how many emails have been sent, compares that with the total number of emails to be sent, and returns the progress percentage.
From the Flex interface the user selects a newsletter to be sent and clicks a "send" button, which calls a function that calls sendNewsletter() and then starts a loop of calls to getProgress() (when getProgress() returns something, it calls setProgress(), which updates a progress bar and calls getProgress() again, until the progress percentage reaches 100%).
So right after I call sendNewsletter() I call getProgress() on the same remote object.
sendNewsletter() can take several minutes to complete (in my tests, sending 4 emails takes about 4 seconds, so I think sending thousands of emails will take much longer!), and the trouble I'm encountering here is that the getProgress() result arrives only after sendNewsletter() concludes its execution, while what I would like to achieve is:
- call sendNewsletter()
- while sendNewsletter() does its stuff, call getProgress() several times in order to get the progress percentage:
What I've got now:
call to sendNewsletter()----------------------->response
call to getProgress()------------------------------------->response after sendNewsletter()
What I want to achieve:
sendNewsletter()------------------------------------------------------------------>response()
getProgress()--->response, getProgress() again--->response-->getProgress()-->response-->etc...
I read many posts on how to work around this problem, but no solution worked for me.
I tried to "emulate" two different channels by creating two remote objects with the endpoint set once to gateway.php?parallel=0 and once to gateway.php?parallel=1, but Flash Builder still sends everything in one big request and gets the response in one big HTTP packet (I need two different packets, since sendNewsletter() takes ages to complete compared to getProgress()).
I also tried to delay the call to getProgress() after sendNewsletter() with a Timer of 500 ms, and Flash Builder then makes two different calls (I can see them in Firebug), but the getProgress() call still responds only after sendNewsletter() anyway.
I also tried to call sendNewsletter() this way:
this.myNewsletter.getOperation("sendNewsletter").send(idNewsletter)
this.myNewsletter.getOperation("sendNewsletter").cancel()
in order to make Flash Builder forget about the response, but no way!!!
So far the only workaround I have found is creating a plain HTTPService which refers to a PHP script that instantiates the Newsletter class and calls the getProgress method.
By using two different channels I can call the getProgress HTTPService while sendNewsletter() is being executed. It works, but I don't like it, and I don't want to create an HTTPService for each method I need to call in the background, so I want to achieve this with remote objects only.
Has anyone addressed the same problem?
You Flash builder guru, I know you're around, please help me!!!!!
Thanks in advance!!!
Bye,
Luke
P.S.
sorry if this post is a little bit long, but the situation is quite complicated.
I don't know exactly what you want to do...
But when working with a Remote Object, the best practice is to use a Responder to handle responses that arrive in parallel from a single Remote Object.
So try to add a responder to your service calls, like:
remoteObject.methodCall().addResponder(new YourResponder(resultEvent, faultEvent));
So when a specific response comes back, it will be handled by its own custom responder.
That way you will be able to handle each response separately.