How to retrieve application-specific data from MP4 file using DirectShow

I need to retrieve non-video, non-audio application data which is embedded in an MP4 file. The data consists of measurements taken at the same time as the MP4 was recorded, which need to be rendered as charts in sync with the video & audio. The charts won't be rendered using DirectShow.
The data can be written into the MP4 file in one of three ways:
1. as multiple top-level mdat boxes
2. as multiple top-level boxes with proprietary FourCC
3. as a third track.
Which of the above methods of embedding the data would be most appropriate for DirectShow? What would the steps be to retrieve the data?
I have sample MP4 files in all of the three above formats and I can play the video and audio using Haali splitter. Does it come down to whether the MP4 source filter supports the reading of data? I would like to avoid having to write my own MP4 source filter if possible!
Many thanks

As you may know, there is no stock DirectShow filter for MP4. Your best option is to check exactly what the filter you plan to use supports. For example, it is highly unlikely that third-party filters will make custom-format data available.
The good news is that decent MP4 multiplexer/demultiplexer filters are available in source form at http://www.gdcl.co.uk/mpeg4/. If the measurements are timestamped, then an additional track looks best to me. You can always put extra data into the track description box. Having the source code lets you add reasonable support for your custom format without much trouble.
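For options 1 and 2 in the question, the data can also be pulled out without involving DirectShow at all, since top-level MP4 boxes follow a simple length-prefixed layout. Below is a minimal sketch of walking the top-level boxes of a file and collecting those with a proprietary FourCC; the `b"meas"` FourCC is only a placeholder for whatever code your files actually use:

```python
import struct
from io import BytesIO

def iter_top_level_boxes(f):
    """Yield (fourcc, payload) for each top-level box in an ISO/MP4 stream."""
    while True:
        header = f.read(8)
        if len(header) < 8:
            return
        size, fourcc = struct.unpack(">I4s", header)
        if size == 1:                       # 64-bit largesize follows the header
            size = struct.unpack(">Q", f.read(8))[0]
            payload = f.read(size - 16)
        elif size == 0:                     # box extends to the end of the file
            payload = f.read()
        else:
            payload = f.read(size - 8)
        yield fourcc, payload

def extract_custom_boxes(f, fourcc=b"meas"):
    """Collect the payloads of every top-level box with the given FourCC."""
    return [p for cc, p in iter_top_level_boxes(f) if cc == fourcc]
```

This sidesteps the splitter entirely: the video and audio still play through Haali or the GDCL filters, while the measurement boxes are read separately for charting.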

Checking how many times a document has been read in Firestore?

I am working on a video based app that keeps track of how many views that video has received. I originally planned on having a field for view_count in my document that I would write to after someone watches a video.
However, knowing how many writes that could end up leading to, I started to wonder if it's possible to see a breakdown of how many reads have been made for each document in a collection and use that number instead. Since the videos are short, I figured this would be an accurate number for the view count.
Is it possible to access this kind of data?
Firestore does not expose any per-document access metrics. The available options are covered in the documentation on monitoring usage.
If you want something beyond that you'll have to build it yourself, as you originally intended.
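If you do build the counter yourself, Firestore's documented distributed-counter pattern avoids the write contention of a single `view_count` field by spreading writes across several shard documents. Below is a plain-Python sketch of the idea only, with no Firestore SDK involved; the shard count of 10 is an arbitrary assumption:

```python
import random

class ShardedCounter:
    """Sketch of the distributed-counter pattern: spread increments across
    N shards so no single document becomes a write hotspot, then sum the
    shards on read."""
    def __init__(self, num_shards=10):
        self.shards = [0] * num_shards       # each entry stands for a shard document

    def increment(self):
        # In Firestore this would be an Increment(1) write to one random shard doc.
        self.shards[random.randrange(len(self.shards))] += 1

    def total(self):
        # Reading the view count sums all shard documents.
        return sum(self.shards)
```

Since short videos mean frequent view events, sharding keeps each individual document well under Firestore's per-document write-rate limits.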

QR Code with multiple information type

I was wondering if there is a way to build a QR code with multiple kinds of data: one piece of text and two link URLs. Is it possible to do?
A QR Code is a two-dimensional barcode capable of storing (according to Wikipedia) up to 2,953 bytes of binary data or 4,296 simple alphanumeric characters. The data can contain whatever you like.
The difficulty with storing multiple URLs in a QR code is not that it is impossible, but that most scanner apps on smartphones and so on will only process a single URL. If you are writing the scanner app too then, yes, it is possible; otherwise it is possible but probably not advisable.
If you wish to store a single URL and some contact details you might look at storing a vCard in your QR code (here is a generator; I have no affiliation with this project).
It is indeed possible, but most scanner apps will not recognize all of the data and will only act on one item. This QR code generator has a Multi URL feature that can redirect based on different parameters such as time, location, and device.
It is possible: you can encode text, a URL, and a vCard in a single QR code.
Well, actually, a QR code "only" stores characters, so you could imagine an app or any other software that reads the QR code content containing text and two URLs, splits the string, and opens two tabs.
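Since the payload is just text, the vCard suggestion above can be sketched directly: a vCard 3.0 string can carry free text in a NOTE property plus several URL properties, and most scanners will present it as a contact. The helper function and field values below are illustrative only:

```python
def build_vcard_payload(name, note, urls):
    """Assemble a vCard 3.0 string suitable for encoding into a QR code.
    A vCard may carry multiple URL properties alongside free text (NOTE)."""
    lines = ["BEGIN:VCARD", "VERSION:3.0", f"FN:{name}", f"NOTE:{note}"]
    lines += [f"URL:{u}" for u in urls]       # one line per URL
    lines.append("END:VCARD")
    return "\r\n".join(lines)                 # vCard lines end with CRLF
```

The resulting string is then handed to any QR generator; whether a given scanner surfaces both URLs is up to that scanner, as the answers above note.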

Is there a benefit storing protobuf in sqlite?

In my mobile app (hybrid), I want to allow the user to take his data to another device. There will be no server side components from my end. The data user would carry would contain images, audio, video along with text and timestamps etc. My design evolved as below
1. Store each entry in a JSON file with image, audio and video as Data URI and export this file to cloud sync platforms. The problem with this approach is that, even though JSON is better than XML, there could be better options. See below
2. Store each entry in a BSON file with image, audio and video as Data URI and export this file to cloud sync platforms. The problem with this approach is that, as mentioned on its site, the field names are still repeated, so protobuf could be a better fit.
3. Store each entry in a protocol buffer file with image, audio and video as Data URI and export this file to cloud sync platforms.
Then when I stumbled across greenDAO they were mentioning
greenDAO lets you persist protocol buffer (protobuf) objects directly
into the database.
What benefit will I get by storing the protobuf objects in a sqlite DB? Will I be able to export the sqlite file instead of a file containing the objects in protobuf format?
Well, the data still has to be serialized somehow into the database; greenDAO just hides the serialization from you. Since you have specific needs, you are probably best off building your own solution, tailored to your needs.
If you don't anticipate the field names changing, why not just store the entries as database rows? This has a number of nice advantages, including the ability to have sortable and searchable entries.
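The row-per-entry suggestion above can be sketched with Python's built-in sqlite3 module: field names live once in the schema rather than being repeated in every record, media go into BLOB columns without Data URI inflation, and the single database file is what you export. The table and column names here are illustrative assumptions:

```python
import sqlite3

def make_db(path=":memory:"):
    """Create an entries table: text fields become columns (named once in
    the schema, unlike repeated JSON/BSON keys) and media become BLOBs."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE entries (
        id INTEGER PRIMARY KEY,
        created_at INTEGER NOT NULL,     -- unix timestamp, sortable
        text TEXT,
        image BLOB                       -- raw bytes, no Data URI overhead
    )""")
    return db

def add_entry(db, created_at, text, image_bytes):
    db.execute("INSERT INTO entries (created_at, text, image) VALUES (?, ?, ?)",
               (created_at, text, image_bytes))
    db.commit()
```

Sorting and searching then come for free via SQL (`ORDER BY created_at`, `WHERE text LIKE ...`), which a protobuf blob cannot offer without deserializing everything.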

How to use IMediaObject?

I just want to query the number of streams in a file. But an unimaginable difficulty has emerged from this simple task.
It seems the query involves using IMediaObject. I have searched the IMediaObject documentation in DirectShow; it only lists the interface's methods, with no samples or description of how to use it.
I have also searched the Windows 7 SDK. The only demonstration is in dmoenum.
The initialization is encapsulated in ShowSelectedDMOInfo(const GUID *pCLSID).
What values can pCLSID take? Are there any samples out there that illustrate how to use IMediaObject?
I just want to query the number of streams in a file
IMediaObject is of no help here. It only reports the number of streams a DMO is designed to accept on input and deliver on output. A typical DMO has one input and one output stream, which is completely unrelated to the streams in a file.
In DirectShow you can query streams from demultiplexing filter for the respective file format. These are rarely (if ever) packaged as DMOs.

How to combine live video streams using VideoMixingRenderer9?

I want to combine multiple live streams into a single stream. My code will take multiple streams as input and, after combining them, provide a single stream as output.
I have created an application which takes multiple video files as input and shows them in a single video.
Waiting for reply.
-Shashank
The only way you can get mixed output from the VMR is to implement a custom allocator/presenter. Then you will get access to the surface containing the mixed image for output.
