Saving media streams based on size and time criteria in a DirectShow filter - directshow

I have a simple filter graph which takes media streams from an RTSP source [generally H.264 and MP4] and saves them to a file using an MP4 muxer:
RtspSourceFilter ---> MP4 Muxer ---> File Writer.
It works OK, but now I have a new requirement: I have to write files based on two criteria, their size and duration. Suppose the user can define rules such as:
if duration > 1 hour or size > 1 GB, then write the stream to a new file
To do this in my graph, I have to stop the graph whenever a condition is met, then create and start a new one with a new file name.
That is bad, since for every file I have to re-connect my source and will possibly lose some data.
What is the best way to deal with this?
My solution [but I am not satisfied with it]:
I have the source code of RtspSourceFilter and MP4Muxer [both open source], so I can drop the FileWriter: the MP4 muxer becomes a writer with the muxer inside it. I stop it internally, write out when necessary, then create a new file, and do some buffering so no data is lost.
RTSP Source Filter ---> New MP4 Writer [a writer with the MP4 muxer in it]
But this introduces unnecessary complexity: I now become the maintainer of the mux operation via the new MP4 writer. Since I have no time to really understand what the muxer does, I have to hack it into behaving the way I want. Analogy: I have a car and I am going to build a helicopter from it; it will be a very ugly and untrustworthy helicopter. My new MP4 writer code will probably turn out the same way [a Big Ball of Mud].

It sounds like GMFBridge may be of use to you. It allows you to create one source graph and multiple sink graphs; when your constraint is met, you bridge the source graph to a new sink graph.
If you put the bridge in buffer (non-discard) mode, you should not lose any samples.
However, you will have to investigate whether this solution works for you. Have a look at the sample applications for a quick overview.
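To give an idea of the shape of this, here is a rough sketch based on the GMFBridge sample applications. The method names follow the published IGMFBridgeController interface, but verify them against your copy of the library; pSourceGraph and pNewSinkGraph are placeholders for your own graph objects, and all error handling is omitted.
CComPtr<IGMFBridgeController> pBridge;
pBridge.CoCreateInstance(__uuidof(GMFBridgeController));
// One video stream, delivered in mux-input format; FALSE = buffer
// (do not discard) samples while no sink graph is connected.
pBridge->AddStream(TRUE, eMuxInputs, FALSE);
// Source graph, built once: RtspSourceFilter ---> bridge sink filter.
CComPtr<IUnknown> pSinkFilter;
pBridge->InsertSinkFilter(pSourceGraph, &pSinkFilter);
// ... connect RtspSourceFilter to the sink filter and run the graph ...
// Each time the size/duration rule fires: build a fresh sink graph
// (bridge source ---> MP4 Muxer ---> File Writer with the new name),
// run it, and switch the bridge over; the source graph keeps running.
CComPtr<IUnknown> pSourceFilter;
pBridge->InsertSourceFilter(pSinkFilter, pNewSinkGraph, &pSourceFilter);
pBridge->BridgeGraphs(pSinkFilter, pSourceFilter);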

Related

How do we continue to communicate with the assistant in the same stream?

https://github.com/googlesamples/assistant-sdk-cpp
In run_assistant_text.cc, the following steps let us make only a single exchange with the assistant:
std::shared_ptr<ClientReaderWriter<AssistRequest, AssistResponse>> stream(std::move(assistant->Assist(&context)));
stream->Write(request);
stream->Read(&response);
In the example we only write one request to the stream.
Can we write multiple requests on the stream, like the following?
std::shared_ptr<ClientReaderWriter<AssistRequest, AssistResponse>> stream(std::move(assistant->Assist(&context)));
stream->Write(request1);
stream->Read(&response1);
stream->Write(request2);
stream->Read(&response2);
Yes you can! You can call stream->Write multiple times and/or put stream->Write in a loop if you like.
It would be a good idea to check the return value of stream->Read to make sure you have read everything.
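For example, a minimal sketch of several request/response rounds on one stream, assuming the same assistant and context setup as run_assistant_text.cc (the requests vector is a hypothetical stand-in for however you build your AssistRequests):
std::shared_ptr<ClientReaderWriter<AssistRequest, AssistResponse>> stream(std::move(assistant->Assist(&context)));
for (const AssistRequest& request : requests) {
    if (!stream->Write(request)) break;   // Write returns false once the stream is closed
    AssistResponse response;
    if (!stream->Read(&response)) break;  // Read returns false when there is nothing left to read
    // ... handle response ...
}
stream->WritesDone();                     // half-close: tell the server we are done writing
grpc::Status status = stream->Finish();   // always check status.ok() at the end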

Understanding Elasticsearch circuit_breaking_exception

I am trying to figure out why I am getting this error when indexing a document from a Python web app.
The document in this case is a base64-encoded string of a file of size 10877 KB.
I post it to my web app, which then posts it via elasticsearch.py to my Elasticsearch instance.
My Elasticsearch instance throws this error:
TransportError(429, 'circuit_breaking_exception', '[parent] Data too large, data for [<http_request>] would be [1031753160/983.9mb], which is larger than the limit of [986932838/941.2mb], real usage: [1002052432/955.6mb], new bytes reserved: [29700728/28.3mb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=29700728/28.3mb, accounting=202042/197.3kb]')
I am trying to understand why my 10877 KB file ends up at a size of 983.9mb as reported by Elasticsearch.
I understand that increasing the JVM max heap size may allow me to send bigger files, but I am mostly wondering why the request size appears to be 10x what I am expecting.
Let us see what we have here, step by step:
[parent] Data too large, data for [<http_request>]
gives the name of the circuit breaker
would be [1031753160/983.9mb],
says how the heap usage would look if the request were executed
which is larger than the limit of [986932838/941.2mb],
tells us the current setting of the circuit breaker above
real usage: [1002052432/955.6mb],
this is the real usage of the heap
new bytes reserved: [29700728/28.3mb],
actually an estimation of the impact the request will have (the size of the data structures which need to be created in order to process the request). Your ~10 MB file will probably consume 28.3 MB.
usages [
request=0/0b,
fielddata=0/0b,
in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb
]
This last line tells us how the estimation is being calculated.
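Putting the numbers together: the 983.9mb is not the size of your request. It is the projected total heap usage if the request were let through: 1,002,052,432 bytes (real usage) + 29,700,728 bytes (new bytes reserved) = 1,031,753,160 bytes ≈ 983.9mb. Your ~10 MB upload only accounts for the 28.3mb of in_flight_requests; the breaker trips because the heap is already at 955.6mb, above the 941.2mb limit, before your request is even processed.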

How to check which SQL query is so CPU intensive

Is there any way to check which query is so CPU intensive in the _sqlsrv2 process?
Something which gives me information about the query being executed in that process at that moment.
Is there any way to terminate that query without killing the _sqlsrv2 process?
I cannot find any official materials on that subject.
Thank you for any help.
You could look into client database-request caching.
The code examples below assume you have ABL access to the environment. If not, you will have to use SQL instead, but it shouldn't be too hard to "translate" the code below.
I haven't used this a lot myself, but I wouldn't be surprised if it has some impact on performance.
You need to start caching in the active connection. This can be done in the connection itself, or remotely via VST tables (as long as your remote session is connected to the same database), so you need to be able to identify your connections. This can be done via the process ID.
Generally how to enable the caching:
/* "_myconnection" is your current connection. You shouldn't do this */
FIND _myconnection NO-LOCK.
FIND _connect WHERE _connect-usr = _myconnection._MyConn-userid.
/* Start caching */
_connect._Connect-CachingType = 3.
DISPLAY _connect WITH FRAME x1 SIDE-LABELS WIDTH 100 1 COLUMN.
/* End caching */
_connect._Connect-CachingType = 0.
You need to identify your process first, via top or another program.
Then you can do something like:
/* Assuming pid 21966 */
FIND FIRST _connect NO-LOCK WHERE _Connect._Connect-Pid = 21966 NO-ERROR.
IF AVAILABLE _Connect THEN
DISPLAY _connect.
You could also look at the _Connect-Type. It should be 'SQLC' for SQL connections.
FOR EACH _Connect NO-LOCK WHERE _Connect._connect-type = "SQLC":
DISPLAY _connect._connect-type.
END.
Best of all would be to do this in a separate environment. If you can't, at least try it in a test environment first.
Here's a good guide.
You can use a Select like this:
select
c."_Connect-type",
c."_Connect-PID" as 'PID',
c."_connect-ipaddress" as 'IP',
c."_Connect-CacheInfo"
from
pub."_connect" c
where
c."_Connect-CacheInfo" is not null
But first you need to enable the connection cache; follow the example above.

Asterisk: Record application is generating empty files

The user making the call is asked to dial an extension. This is done by (1) playing a prompt with Background and then (2) wait_for_digit. Based on the extension that has been dialed, the destination number is determined and the call is forwarded to that number.
If the called person doesn't answer, Playback is used to play a prompt that asks the user to record a voice message; recording the voice message is done with the Record application.
The Record application always generates empty WAV files of 44 bytes (just the WAV header, no audio data). If I remove step (1), playing the prompt with Background, the Record application generates proper files. If the Background is included, all recordings are empty.
I am using Perl Asterisk::AGI module.
$agi->exec('Answer');
....
.....
$agi->exec('Background', 'en/extra/please-enter-the-extension,n'); # this is the troubling part
my $my_extension = $agi->wait_for_digit(5000);
....
.....
$agi->exec('Playback', 'en/extra/the-party-you-are-calling&en/extra/is-curntly-busy,noanswer');
$agi->exec('Playback', 'en/vm-intro,noanswer');
my $file = 'xyz.wav';
$agi->exec('Record', "$file,0,10,k");
...
...
What should I do to make it work as I want it to?
Thank you.
UPDATE 1:
The same script is working without glitches now. Not sure if something unrelated to the script has changed.
Most likely you have to check your codecs. If you use G.729 or G.723 and have no transcoder, Asterisk just can't write in WAV format.

Flex: slow first HTTP request

When I use loader.load(request); for the first time, my Flex app freezes for 10 seconds before posting the data (I can see the web server result in real time).
However, if I redo a similar POST with other data but the same request.url, it's instantaneous.
// Multi form encoded data
variables = new URLVariables();
variables.user = "aaa";
variables.boardjpg = new URLFileVariable(data.boardBytes, "foo.jpg");
request = new URLRequestBuilder(variables).build();
request.url = "http://localhost:8000/upload/";
loader.load(request);
How can I see what is taking so long?
Thanks!
OK, this is an old question; anyway, I found it while searching for other things, so I'm quickly adding this.
Neither URLFileVariables nor URLRequestBuilder are core classes in AS3, so I guess you're using some custom library to build your request. I don't know which library you use, but it seems its purpose is to serialize some binary data to build a POST. Serializing usually takes some time the first time (lookup initialization and the like) and goes faster afterwards; a well-known example is Remoting in its different flavours.
