MediaRecorder API chunks as independent videos - webapi

I'm trying to build a simple app that streams video from the camera to a remote server using the browser.
For camera access from the browser I've found a wonderful WebRTC API: getUserMedia.
Now, for streaming it to the server, IIUC the best way would be to use some of the WebRTC API for transport and then some server-side library to deal with it.
However, at first I went with a bit different approach:
I've used MediaRecorder based on the stream from the camera, and set the timeslice for MediaRecorder.start() to a few hundred ms, e.g. 200. I had some assumptions wrt MediaRecorder which are not in sync with what I was observing:
I observed weird behaviour (wrt my assumptions about MediaRecorder):
If there is only 1 chunk uploaded to the server -> it opens just fine.
If there are multiple chunks -> none of them loads correctly; they give errors: Could not determine type of stream. But then if I use ffmpeg to concat all the chunks, the resulting file is fine. The same happens if I concatenate the blobs from MediaRecorder.ondataavailable on the client.
Thus the question:
Can the chunks in theory be independent video files? Or is that not what MediaRecorder was designed for? If it is not, then why do we even have the option to pass a timeslice parameter to its start() method?
Bonus question
If we set the timeslice comparatively small, e.g. 10 ms -> lots of the data blobs sent to MediaRecorder.ondataavailable are of size 0. Where can we find some sort of guarantees/specs on the minimal timeslice we can use, so that the data blobs are meaningful?
The documentation says the following:
If timeslice is not undefined, then once a minimum of timeslice milliseconds of data have been collected, or some minimum time slice imposed by the UA, whichever is greater, start gathering data into a new Blob blob, and queue a task, using the DOM manipulation task source, that fires a blob event named dataavailable at recorder with blob.
So my guess is that this is somehow related to some data blobs being of size 0. What does "some minimum time slice imposed by the UA" mean?
PS
Happy to provide code if needed. But the question is not about some specific code. It is about understanding the assumptions behind the MediaRecorder API and why they are there.

The timeslice parameter does not allow you to create independent media chunks; instead, it gives you an opportunity to save data (e.g. to the filesystem, or upload it to a server) on a regular basis, rather than holding potentially large media content in memory.

Related

ESP32 Partition and Data Storage

I am trying to write firmware for an RFID device which will have config data storage as well as temporary storage that can be read and then, if convenient, removed.
I am using the Arduino IDE to program this on an ESP32-WROOM-32. I have tried to understand how the storage actually works from various resources, one being the datasheet, which says that up to 4 MB of program code storage is possible, and that sounds fantastic. My question is: if, for example, I take the EEPROM library and save about 214 bytes of config which will rarely be touched, where exactly is it being stored? Is it simply in NVS? I can see that the default settings show me about 1310720 bytes of storage, and I know that I can utilise other partitions as well to store more, in case I ever need more sketch storage than 1310720 bytes.
My question is: if I am trying to store data such as config and real-time data, how much would I possibly be able to store? Is there a limit? Would it cause any kind of problems if I try to use the other such partitions to write the code? Will it be only NVS storing that data, or can I utilise the other partitions (app0, app1, spiffs, etc.) to store extra bytes? A lot of the resources are confusing me; here are the data I am referring to from online: 1 and 2. Any idea would help me proceed further.
P.S. I am aware that the EEPROM library has been deprecated and that I should use either Preferences or LittleFS for better management, but if I understand correctly I can still utilise it without much issue, since compatibility is still provided. I am also curious about using the built-in RTC SRAM via the RTC_DATA_ATTR attribute, since I hope to utilise deep sleep mode as well.
My question is: if I am trying to store data such as config and real-time data, how much would I possibly be able to store? Is there a limit?
It depends. First, on the module: there are ESP32-WROOM modules with 4 MB flash, but you could also order different flash sizes.
Then the question is: how big is your application (code)? Obviously this needs to be saved on the flash as well, reducing the total usable amount for data storage (by the size of the application). There is also a bootloader which needs a small amount of space.
Next, the ESP32 uses a partition scheme. Some flash is reserved for the bootloader and the partition table; the rest can be divided between one or more application partitions, NVS partitions, and possibly other utility partitions (e.g. otadata).
If you are using the OTA functions, there will be at least 3 application partitions of equal size, further reducing the total usable amount for data storage.
So the absolute upper limit of what you can store using the NVS functions is the size of your NVS partition. However, since it's a key-value store, you must take into account the size of the key, which can be considerably larger than the data you store (up to 12 times larger for a 12-character key and a uint8 value).
So there is no way to say exactly how much data you can put into the system without knowing exactly how you're going to use it. For example, you could store one very large "blob" value that could take "up to 97.6%" of the partition size. But you could not store 10 "blob" values of 1/10 (9.76%) that size, since you must take into account the keys and some flash metadata used internally.
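As a rough illustration of how you would actually use the NVS partition from Arduino code, here is a minimal sketch using the Preferences library (the namespace "rfid-cfg", the key "config" and the 214-byte record are made-up examples; freeEntries() is available in recent arduino-esp32 releases):

#include <Preferences.h>

Preferences prefs;

struct Config {            // example 214-byte config record
  uint8_t data[214];
};

void setup() {
  Serial.begin(115200);

  // "rfid-cfg" is an arbitrary example namespace, "config" an example key
  prefs.begin("rfid-cfg", /*readOnly=*/false);

  Config cfg = {};
  size_t stored = prefs.getBytes("config", &cfg, sizeof(cfg));
  if (stored != sizeof(cfg)) {
    // nothing (or a partial record) stored yet: write defaults once
    prefs.putBytes("config", &cfg, sizeof(cfg));
  }

  // rough check of how many key/value entries the NVS partition can still hold
  Serial.printf("free NVS entries: %u\n", (unsigned)prefs.freeEntries());

  prefs.end();
}

void loop() {}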
Would it cause any kind of problems if I try to use the other such partitions to write the code?
That depends on what these partitions are used for. If you overwrite the partition table, the bootloader, or your application code, yes, there will be problems. If there is "free space" then it won't be a problem, but then you should redefine this free space as NVS space. It's nice of Espressif to provide this NVS library; don't work around it, work with it.
Using Espressif's esptool you can create custom partition tables where you could minimize the size of the application partition to just barely fit your application, and maximize the NVS partition size. This way you will get the most storage out of your device without manually implementing a filesystem. If you are using OTA, you should leave some empty room in your application partition, in case your application code grows, as it usually does.
Will it be only NVS storing that data, or can I utilise the other partitions (app0, app1, spiffs, etc.) to store extra bytes?
You absolutely can, but you will destroy whatever data is on that partition. And you will have lots of work to do, because you'll have to implement all of this yourself (basically roll your own flash driver).
If you don't need OTA, you don't need the app0/app1 partitions at all.
Note that SPIFFS is also a way to store data, except it's not key-value but file-based. If you don't need it, remove that partition and fill the space with your NVS partition.
On the other hand, SPIFFS is probably a better alternative if you are really tight on flash space, since you can omit the key and do your own referencing.
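If you do keep a SPIFFS partition, using it from Arduino code is file-based rather than key-based; a minimal sketch (the file name /config.bin is just an example):

#include <FS.h>
#include <SPIFFS.h>

void setup() {
  Serial.begin(115200);

  // true = format the partition on first use if it cannot be mounted
  if (!SPIFFS.begin(true)) {
    Serial.println("SPIFFS mount failed");
    return;
  }

  // write a small example record to a file
  File f = SPIFFS.open("/config.bin", FILE_WRITE);
  if (f) {
    const uint8_t cfg[4] = {1, 2, 3, 4};
    f.write(cfg, sizeof(cfg));
    f.close();
  }

  Serial.printf("SPIFFS: %u of %u bytes used\n",
                (unsigned)SPIFFS.usedBytes(), (unsigned)SPIFFS.totalBytes());
}

void loop() {}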

How to speedup bulk importing into google cloud datastore with multiple workers?

I have an Apache Beam based Dataflow job that reads from a single text file (stored in Google Cloud Storage) using the VCF source, transforms the text lines into Datastore entities and writes them into the Datastore sink. The workflow works fine, but the cons I noticed are:
The write speed into datastore is at most around 25-30 entities per second.
I tried to use --autoscalingAlgorithm=THROUGHPUT_BASED --numWorkers=10 --maxNumWorkers=100 but the execution seems to prefer one worker (see graph below: the target workers once increased to 2 but reduced to 1 "based on the ability to parallelize the work in the currently running step").
I did not use an ancestor path for the keys; all the entities are of the same kind.
The pipeline code looks like below:
def write_to_datastore(project, user_options, pipeline_options):
  """Creates a pipeline that writes entities to Cloud Datastore."""
  with beam.Pipeline(options=pipeline_options) as p:
    (p
     | 'Read vcf files' >> vcfio.ReadFromVcf(user_options.input)
     | 'Create my entity' >> beam.ParDo(
         ToEntityFn(), user_options.kind)
     | 'Write to datastore' >> WriteToDatastore(project))
Because I have millions of rows to write into the datastore, it would take too long to write with a speed of 30 entities/sec.
Question: The input is just one huge gzipped file. Do I need to split it into multiple small files to trigger multiple workers? Is there any other way I can make the importing faster? Am I missing something in the num_workers setup? Thanks!
I'm not familiar with Apache Beam; this answer is from a general flow perspective.
Assuming there are no dependencies to be considered between the entity data in various input file sections, then yes, working with multiple input files should definitely help, as all these files could then be processed virtually in parallel (depending, of course, on the max number of available workers).
You might not need to split the huge gzipped file beforehand; it might be possible to simply hand off segments of the single input data stream to separate data segment workers for writing, if the overhead of such a handoff itself is negligible compared to the actual data segment processing.
The overall performance limitation would then be the speed of reading the input data, splitting it into segments and handing them off to the segment workers.
A data segment worker would further split the data segment it receives into smaller chunks of up to the equivalent of the 500-entity maximum that can be converted to entities and written to the datastore in a single batch operation. Depending on the datastore client library used, it may be possible to perform this operation asynchronously, allowing the splitting into chunks and conversion to entities to continue without waiting for the previous datastore writes to complete.
The performance limitation at the data segment worker would then be the speed at which the data segment can be split into chunks and the chunks converted to entities.
If async ops aren't available, or for even higher throughput, yet another handoff of each chunk to a chunk worker could be performed, with the chunk worker performing the conversion to entities and the datastore batch write.
The performance limitation at the data segment worker level would then be just the speed at which the data segment can be split into chunks and handed over to the chunk workers.
With such an approach the actual conversion to entities and batch writing them to the datastore (async or not) would no longer sit in the critical path of splitting the input data stream, which is, I believe, the performance limitation in your current approach.
I looked into the design of vcfio. I suspect (if I understand correctly) that the reason I always get one worker when the input is a single file is a limitation of _VcfSource together with a constraint of the VCF format: the format has a header part that defines how to translate the non-header lines, so each worker that reads the source has to work on an entire file. When I split the single file into 5 separate files that share the same header, I successfully get up to 5 workers (but not any more, probably for the same reason).
One thing I don't understand is that the number of workers that read can be limited to 5 (in this case), but why are we limited to only 5 workers to write? Anyway, I think I have found an alternative way to trigger multiple workers with the Beam Dataflow runner (use pre-split VCF files). There is also a related approach in the GCP variant transforms project, in which vcfio has been significantly extended. It seems to support multiple workers with a single input VCF file. I wish the changes in that project could be merged into the Beam project too.

Muxing non-synchronised streams to Haali

I have 2 input streams of data that are being passed to a Haali Muxer (mp4 format).
Currently I stream these to Haali directly in a DirectShow graph without a clock. I wondered if I should be trying to write these to the muxer synchronised, or whether it happily accepts a stream of audio data that stops before the video data stream stops. (I have issues with the output file not playing audio after seeking, and I'm not sure why this could occur)
I can't find much in the way of documentation for muxing with the Haali muxer, does anyone know the best place to look for info on this filter?
To have the streams multiplexed into a single MP4 file you need a single instance of the multiplexer (Haali, GDCL, a commercial one, a wrapper over the mp4v2 library, over a Media Foundation sink, etc.) with two (or more) input pins connected to the respective sources, which in turn are going to be written as tracks.
The filter graph clock does not matter. The clock is for presentation, and file writers accept incoming data and write it as soon as possible anyway. It is more accurate to remove the clock, as you seem to already be doing, but having the standard clock is not going to make a difference.
Data is synchronized using time stamps on the individual media samples that make up the media streams. The multiplexer builds internal queues for every stream and then consumes data from the streams to build a single file, in such a way that the original stream data is interleaved. If one stream supplies too much data, that is, if data is available too early while another stream supplies data slowly, the multiplexer blocks further data reception on that particular stream by not returning from the respective processing call (IMemInputPin::Receive), expecting that during this wait the slow stream provides additional input. Eventually, what the multiplexer looks at when matching data from different streams is the data time stamps.
To obtain synchronized data in the resulting MP4 file you thus need to make sure the payload data is properly time stamped. The multiplexer will take care of the rest.
This also means that the time stamps should be monotonically increasing within a stream, and that key frames/splice points are indicated accordingly. Otherwise some multiplexers might fail immediately, while others would produce an output file that might have playback issues (esp. with seeking).
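As a hedged sketch of what "properly time stamped" means at the code level (not Haali-specific; the frame duration and the way you obtain the sample and pin are placeholders for your own push source):

#include <dshow.h>

// Stamp a sample with start/stop times (in 100 ns units), mark key frames,
// and deliver it to the muxer's input pin. Receive may block if this stream
// is running ahead of the other one, as described above.
HRESULT StampAndDeliver(IMediaSample* pSample, IPin* pMuxInputPin,
                        REFERENCE_TIME frameStart, REFERENCE_TIME frameDuration,
                        BOOL isKeyFrame)
{
    REFERENCE_TIME start = frameStart;
    REFERENCE_TIME stop = frameStart + frameDuration;

    HRESULT hr = pSample->SetTime(&start, &stop);   // must increase monotonically
    if (FAILED(hr)) return hr;
    pSample->SetSyncPoint(isKeyFrame);              // splice/seek points

    IMemInputPin* pMemInput = NULL;
    hr = pMuxInputPin->QueryInterface(IID_IMemInputPin, (void**)&pMemInput);
    if (FAILED(hr)) return hr;

    hr = pMemInput->Receive(pSample);               // may block, see above
    pMemInput->Release();
    return hr;
}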

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab the webcam frames in my media application. It worked, but was slow due to certain additional buffer functions that were performed within the callback itself.
Now I am trying the other way to get frames. That is, call a method and grab the frame (instead of callback). I used a sample in CodeProject which makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render (only a black screen) on Windows 7, but the same camera rendered the preview on XP.
My doubt here is: are the VMR-specific functionalities dependent on camera drivers on different platforms? Otherwise, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure is 32.
Is this a value set by the application, or a driver setting for VMR operations? How is this configured?
Finally, which is the best method to grab the webcam frames: a callback approach, or a direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter
is in, whether running, stopped or paused. However, frequent calls to
this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because this format is the most suitable for non-streaming apps and gives fewer compatibility issues.
All in all, you can use it for periodic image grabbing; however, this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or the Sample Grabber with a callback.
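For reference, a minimal sketch of the Sample Grabber callback route (ISampleGrabberCB lives in qedit.h, which ships with older Windows SDKs; error handling and buffer management are omitted):

#include <dshow.h>
#include <qedit.h>   // ISampleGrabber/ISampleGrabberCB (older Windows SDKs)

class FrameGrabberCB : public ISampleGrabberCB {
public:
    // Minimal IUnknown; the object is expected to outlive the graph
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv) {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = static_cast<ISampleGrabberCB*>(this);
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

    // Called on the streaming thread for every sample; keep this fast
    STDMETHODIMP SampleCB(double sampleTime, IMediaSample* pSample) {
        BYTE* pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData))) {
            long len = pSample->GetActualDataLength();
            // copy or queue pData/len here instead of doing heavy work inline
        }
        return S_OK;
    }

    // Not used when SetCallback(pCallback, 0) selects SampleCB
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }
};

// Usage (assuming pGrabber is the ISampleGrabber interface of the filter in the graph):
//   FrameGrabberCB cb;
//   pGrabber->SetCallback(&cb, 0);   // 0 = SampleCB, 1 = BufferCB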

implementing a download manager that supports resuming

I intend to write a small download manager in C++ that supports resuming (and multiple connections per download).
From the info I've gathered so far, when sending the HTTP request I need to add a header field with the key "Range" and the value "bytes=startoff-endoff". Then the server returns an HTTP response with the data between those offsets.
So roughly what I have in mind is to split the file according to the number of allowed connections per file and send one HTTP request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.
Is this the right way to do this?
What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
When sending the HTTP requests, should I specify the entire split size in the range? Or maybe ask for smaller pieces, say 1024 KB per request?
When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it wise memory-wise? What if I have several downloads running simultaneously?
If I'm not using a memory-mapped file, should I open the file once per allowed connection? Or, when needing to write to the file, simply seek? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)
Note: I'll probably be using Qt, but this is a general question so I left code out of it.
Regarding the request/response:
For a ranged request, you could get three different responses:
206 Partial Content - resuming supported and possible; check Content-Range header for size/range of response
200 OK - byte ranges ("resuming") not supported, whole resource ("file") follows
416 Requested Range Not Satisfiable - incorrect range (past EOF etc.)
Content-Range usually looks like this: Content-Range: bytes 21010-47000/47022, that is, bytes start-end/total.
Check the HTTP spec for details, esp. sections 14.5, 14.16 and 14.35.
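Since you mention Qt, here is a hedged sketch of issuing one ranged request and reacting to those three status codes (the URL, offsets and file name are examples, not part of your project):

#include <QCoreApplication>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);
    QNetworkAccessManager manager;

    QNetworkRequest request(QUrl("http://example.com/bigfile.bin"));
    // Request bytes 0..1048575 inclusive (the first 1 MiB segment)
    request.setRawHeader("Range", "bytes=0-1048575");

    QNetworkReply* reply = manager.get(request);
    QObject::connect(reply, &QNetworkReply::finished, [&]() {
        const int status = reply->attribute(
            QNetworkRequest::HttpStatusCodeAttribute).toInt();
        if (status == 206) {
            // Partial content: write this segment; Content-Range says where it fits
            QFile out("bigfile.part0");
            if (out.open(QIODevice::WriteOnly))
                out.write(reply->readAll());
        } else if (status == 200) {
            // Server ignored the Range header: the whole file follows
        } else if (status == 416) {
            // Requested range is past the end of the resource
        }
        reply->deleteLater();
        app.quit();
    });
    return app.exec();
}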
I am not an expert on C++; however, I once did a .NET application which needed similar functionality (download scheduling, resume support, prioritizing downloads).
I used the Microsoft BITS (Background Intelligent Transfer Service) component, which was developed in C. Windows Update uses BITS too. I went for this solution because I don't think I am a good enough programmer to write something of this level myself ;-)
Although I am not sure you can get the code of BITS, I do think you should have a look at its documentation, which might help you understand how they implemented it, the architecture, interfaces, etc.
Here it is - http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx
I can't answer all your questions, but here is my take on two of them.
Chunk size
There are two things you should consider about chunk size:
The smaller they are, the more overhead you get from sending the HTTP requests.
With larger chunks you run the risk of downloading the same data twice if one download fails.
I'd recommend you go with smaller chunks of data. You'll have to do some tests to see what size is best for your purpose, though.
In memory vs. files
You should write the data chunks to an in-memory buffer, and then, when it is full, write it to disk. If you are going to hold entire large files in memory, it can be troublesome for your users if they run out of RAM. If I remember correctly, IIS stores requests smaller than 256 KB in memory and writes anything larger to disk; you may want to consider a similar approach.
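A minimal sketch of that buffering idea in plain C++ (the 256 KB threshold simply mirrors the figure mentioned above; in practice you would tune it):

#include <cstdio>
#include <vector>

// Accumulate downloaded chunks in RAM and flush to disk once the buffer
// grows past a threshold, so small network reads don't become small disk writes.
class BufferedWriter {
public:
    explicit BufferedWriter(std::FILE* out, std::size_t threshold = 256 * 1024)
        : out_(out), threshold_(threshold) {}

    void append(const char* data, std::size_t len) {
        buffer_.insert(buffer_.end(), data, data + len);
        if (buffer_.size() >= threshold_)
            flush();
    }

    void flush() {
        if (!buffer_.empty()) {
            std::fwrite(buffer_.data(), 1, buffer_.size(), out_);
            buffer_.clear();
        }
    }

    ~BufferedWriter() { flush(); }

private:
    std::FILE* out_;
    std::size_t threshold_;
    std::vector<char> buffer_;
};

int main() {
    std::FILE* f = std::fopen("download.part", "wb");
    BufferedWriter writer(f);
    writer.append("example chunk", 13);   // in practice this comes from the network
    writer.flush();
    std::fclose(f);
}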
Besides keeping track of the offsets marking the beginning of your segments and each segment's length (unless you want to compute that upon resume, which would involve sorting the offset list and calculating the distance between consecutive offsets), you will want to check the Accept-Ranges header of the HTTP response sent by the server to make sure it supports use of the Range header. The best way to specify the range is "Range: bytes=START_BYTE-END_BYTE"; the range you request includes both START_BYTE and END_BYTE, thus consisting of (END_BYTE - START_BYTE) + 1 bytes.
Requesting micro-chunks is something I'd advise against, as you might be blacklisted by a firewall rule meant to block HTTP floods. In general, I'd suggest you don't make chunks smaller than 1 MB and don't create more than 10 chunks.
Depending on how much control you plan to have over your download: if you've got socket-level control, you can consider writing only once every 32 KB at least, or writing the data asynchronously.
I couldn't comment on the MMF idea, but if the downloaded file is large that's not going to be a good idea as you'll eat up a lot of RAM and eventually even cause the system to swap, which is not efficient.
As for handling the chunks, you could just create several files, one per segment, optionally preallocating the disk space by filling each file with as many \x00 bytes as the size of the chunk (preallocating might save you some time when writing during the download, but will make starting the download slower), and then finally write all of the chunks sequentially into the final file.
One thing you should beware of is that several servers have a limit on concurrent connections, and you don't get to know it in advance, so you should be prepared to handle HTTP errors/timeouts and to change the size of the chunks, or to create a queue of chunks, in case you created more chunks than the maximum number of connections.
Not really an answer to the original questions, but another thing worth mentioning is that a resumable downloader should also check the last modified date on a resource before trying to grab the next chunk of something that may have changed.
It seems to me you would want to limit the size per download chunk. Large chunks could force you to repeat the download of data if the connection aborts close to the end of the data part. This is especially an issue with slower connections.
For pause/resume support, look at this simple example:
Simple download manager in Qt with pause/resume support
