How should a DirectShow muxer filter with multiple input pins implement IMediaSeeking pass-through?

I'm working on a bug in a custom audio mixer filter (I have the source): with some input sources (which I don't have the source for), the input audio streams get out of sync after any seek when more than one input is connected.
After seeking, the timestamps etc. look correct, but the actual data in the streams is out of sync with the timestamps.
The audio mixer has a custom IMediaSeeking implementation that passes IMediaSeeking::SetPositions calls on to each input pin. This would seem to be the correct approach: if there's more than one source filter, the SetPositions calls need to be passed on to each source, and it's then up to each source filter to implement seeking on only one of its pins (as documented in MSDN).
Would it be better to inherit a pass-through implementation from CPosPassThru so that it supports IMediaPosition too? Some filters seem to use IMediaPosition calls rather than IMediaSeeking.
Is there anything specific a muxer filter has to do to pass seeking calls on to multiple input pins? Any good example source code out there? The Monogram blog on writing a muxer filter doesn't seem to cover seeking.

For the benefit of future readers, the following seems to work OK. The sync bug was elsewhere.
The audio mixer has a custom IMediaSeeking implementation that passes IMediaSeeking::SetPositions calls on to each input pin.
If there's more than one source filter, the SetPositions calls need to be passed on to each source. It's then up to each source filter to implement seeking on only one of its pins (as documented in MSDN).
It doesn't seem to be necessary to forward IMediaPosition upstream.
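To make that concrete, here is a rough sketch of the forwarding (not the actual filter code; CAudioMixerFilter and the m_inputPins container are placeholder names): in the mixer's IMediaSeeking::SetPositions, query the pin connected upstream of each input pin for IMediaSeeking and forward the call.

// Rough sketch only: forward SetPositions to whatever is connected upstream of each input pin.
// CAudioMixerFilter and m_inputPins are hypothetical names, not the real code.
STDMETHODIMP CAudioMixerFilter::SetPositions(LONGLONG *pCurrent, DWORD dwCurrentFlags,
                                             LONGLONG *pStop, DWORD dwStopFlags)
{
    HRESULT hrResult = E_NOTIMPL;
    for (CBasePin *pPin : m_inputPins)  // hypothetical container of the muxer's input pins
    {
        IPin *pUpstream = nullptr;
        if (SUCCEEDED(pPin->ConnectedTo(&pUpstream)) && pUpstream)
        {
            IMediaSeeking *pSeek = nullptr;
            if (SUCCEEDED(pUpstream->QueryInterface(IID_IMediaSeeking, (void**)&pSeek)))
            {
                // Each source filter decides which of its pins actually honours the seek.
                HRESULT hr = pSeek->SetPositions(pCurrent, dwCurrentFlags, pStop, dwStopFlags);
                if (SUCCEEDED(hr))
                    hrResult = hr;
                pSeek->Release();
            }
            pUpstream->Release();
        }
    }
    return hrResult;
}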

Related

SYCL DPC++ auto detect device

This question might be trivial; unfortunately, I haven't found the answer I was looking for.
I used the dpct migration tool to port some CUDA code to Intel DPC++, then further optimized everything I needed and eventually got rid of everything related to dpct except the super handy
dpct::get_current_device();
which basically solves all the pain I previously had setting compile options to select the appropriate device and controlling them with Makefiles and so on.
Is there any way to do this without using dpct?
I had a look at how dpct does this (here), but it looks pretty convoluted and it relies on other internal functions.
Is there any way to avoid this?
I'm not totally clear from your question whether you want to 1) grab a handle to your device or 2) select a device on which to run stuff, so I'll try to answer both. Note that dpct::get_current_device() isn't actually selecting a device; it just returns the device you have already selected earlier in your program.
Typically when using SYCL we start with a sycl::queue, which we use to submit kernels, memory copy operations etc. From a sycl::queue you can access your device with:
sycl::device d = q.get_device();
But it seems like you may instead be asking for the simplest way to select a device. In this case, the simplest approach is to construct your queue with one of SYCL's provided device selectors:
sycl::queue q{sycl::gpu_selector()};
sycl::queue q{sycl::cpu_selector()};
sycl::queue q{sycl::default_selector()};
Note that the last option (sycl::default_selector()) is probably what dpct is currently doing for you.
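Putting that together, here is a minimal sketch (using the pre-SYCL-2020 selector syntax shown above; newer compilers may prefer sycl::default_selector_v) that constructs a queue, grabs the device handle, and prints which device was chosen:

#include <CL/sycl.hpp>
#include <iostream>

int main() {
    // Let the runtime pick a device - roughly what dpct::get_current_device()
    // ends up giving you if you never selected anything else.
    sycl::queue q{sycl::default_selector()};

    // The device handle is available from the queue at any time.
    sycl::device d = q.get_device();
    std::cout << "Running on: "
              << d.get_info<sycl::info::device::name>() << std::endl;
    return 0;
}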

Use Julia to perform computations on a webpage

I was wondering if it is possible to use Julia to perform computations on a webpage in an automated way.
For example, suppose we have a 3x3 HTML form in which we input some numbers. These form a square matrix A, and finding its eigenvalues in Julia is pretty straightforward. I would like to use Julia to do the computation and then return the results.
In my understanding (which is limited in this direction) I guess the process should be something like:
collect the data entered in the form
send the data to a machine which has Julia installed
run the Julia code with the given data and store the result
send the result back to the webpage and show it.
Do you think something like this is possible? (I've seen some stuff using HttpServer which allows computation from the browser, but I'm not sure it's the right thing to use.) If so, what do I need to look into? Do you have any examples of such web-calculation implementations?
If you are using or can use Node.js, you can use node-julia. It has some limitations, but should work fine for this.
Coincidentally, I was already mostly done with putting together an example that does this. A rough mockup is available here, which uses express to serve the pages and plotly to display results (among other node modules).
Another option would be to write the server itself in Julia using Mux.jl and skip server-side javascript entirely.
Yes, it can be done with HttpServer.jl
It's pretty simple - you make a small script that starts your HttpServer, which then listens on the designated port. Part of configuring the web server is defining handlers (functions) that are invoked when certain events take place in your app's life cycle (new request, error, etc.).
Here's a very simple official example:
https://github.com/JuliaWeb/HttpServer.jl/blob/master/examples/fibonacci.jl
However, things can get complex fast:
you already need to perform 2 actions:
a. render your HTML page where you take the user input (by default)
b. render the response page as a consequence of receiving a POST request
you'll need to extract the data payload coming in through the form. Data sent via GET is easy to reach; data sent via POST, not so much.
if you expose this to users you need to set up some failsafe measures to respawn your server script - otherwise it might just crash and exit.
if you open your script to the world you must make sure it's not vulnerable to attacks - you don't want to empower a hacker to execute arbitrary Julia code on your server or access your DB.
So for basic usage on a small case, yes, HttpServer.jl should be enough.
If however you expect a bigger project, you can give Genie a try (https://github.com/essenciary/Genie.jl). It's still a work in progress, but it handles most of the low-level work, allowing developers to focus on the specific app logic rather than on the transport layer (Genie's author here, btw).
If you get stuck there's GitHub issues and a Gitter channel.
Try Escher.jl.
This enables you to build up the web page in Julia.

multipart/mixed support in Netty

By browsing the source code and playing with some toy examples, I came to the conclusion that Netty currently (as of 5.0.0 alpha2) supports only multipart/form-data, but not multipart/mixed, at least not as specified in RFC 1341 (sec. 7.2). It does look like mixed is supported inside a part of a multipart/form-data request, though.
Is that really the case or am I missing something?
Since I got the very same question, I'll post here what could be the beginning of an answer...
However, the current implementation seems to have 2 limitations:
1) it supports only multipart/form-data. I would like to also be able
to use multipart/mixed, which is very similar on the wire (see
http://www.w3.org/Protocols/rfc1341/7_2_Multipart.html ). I think that
the encoder/decoder could be extended to understand multipart/mixed
and still create the same kinds of HttpDatas.
Yes, the current codec is focused on multipart/form-data. It should be possible to extend it, or to propose a new one (probably based on it), to enable support for multipart/mixed.
The current codec was made based on user needs (mine in the beginning, others following). Since no one has yet requested support for multipart/mixed, it was not coded, except for the internal multipart/mixed handling.
The reference is RFC1867.
As Netty loves contributions, you are more than welcome to propose yours ;-)
2) it seems that it is only possible to use efficient HttpDatas like
FileUpload if you are in multipart/form-data. I would like to be able
to add a FileUpload to the request, and by this way make the contents
of the file be the body of the request, without making it a multipart
request. I think this could be done by extending the Standard Post
Encoder to understand FileUploads.
This could be a bit more complicated since it has to be done without multipart, which is where the FileUpload class currently lives.
Maybe a good direction would be to switch to ChunkedFile or ChunkedNioFile and combine it with "your" HttpCodec or your "HttpHandler" when building the request body, in order to pass the content through the ChunkedFile.
Hoping this helps you in the right direction...

Generating a waveform from an audio (or video) file?

I'm trying to understand how I can generate a waveform from an audio (or video) file to display to the user.
I've been googling around for quite a while now and can't determine if this is even possible in Qt without using something like FFmpeg. I've seen all of these classes: QMediaPlayer, QMediaContent, QMediaResource, QAudioProbe and experimented with the Qt Media Player Example but am just not seeing where I can access the actual audio buffer.
So I have 2 questions:
Is what I want to do even possible without 3rd party libraries?
If it is possible, can some kind soul outline what I need to read and understand in order to access the audio data?
I have tried the suggestions from this question (Audio visualization with QMediaPlayer) but the result of audioProbe->setSource(player) is always false and the method processBuffer never gets called.
audioProbe = new QAudioProbe(this);
bool success = audioProbe->setSource(player);
qDebug() << success;
connect(audioProbe, SIGNAL(audioBufferProbed(QAudioBuffer)), this, SLOT(processBuffer(QAudioBuffer)));
Update: Adding some additional detail in the hope of clarifying things.
For testing/learning I am using the Media Player Example which ships with Qt, so it is set up correctly with Q_OBJECT etc.
For audio, I tested with both .mp3 and .wav files. FWIW, the player example won't play video for some reason (.mp4, .avi were tested)
The player in the code is QMediaPlayer – which inherits from QMediaObject. The example code for the Player class is here. I added my code (in original comment above) right after the player is instantiated. I also tried adding it once media is loaded.
I tried declaring my slot first as private, then as public – either way, it is never called.
Frustrating that such a simple thing is so hard.
Going the "no external library" route will likely just lead to more of a headache and more work than is necessary. The other advantage of going with an established library is you won't be bound to one file format, as not all formats store their data the same way. If the audio format is uncompressed (wav or other) you can read the header until you get to the data chunk. An answer to this question here details this in C. You should be able to get an idea for the file format from this to apply it to another language.
You will want to understand how many channels are in the wav file, bit depth, and also the sampling rate before you can do anything worthwhile with the data. All this info can be grabbed from the header.
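As an illustration, here is a rough sketch that pulls those three fields out of a header. It assumes a canonical 44-byte PCM WAV layout with the fmt chunk at a fixed offset; real files can have extra chunks before "data", so a robust reader should walk the chunk list instead.

#include <cstdint>
#include <cstdio>

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;
    std::FILE *f = std::fopen(argv[1], "rb");
    if (!f) return 1;

    // Canonical PCM WAV layout: channels at bytes 22-23, sample rate at 24-27,
    // bits per sample at 34-35, all little-endian.
    unsigned char header[44];
    if (std::fread(header, 1, sizeof(header), f) != sizeof(header)) { std::fclose(f); return 1; }

    std::uint16_t channels      = header[22] | (header[23] << 8);
    std::uint32_t sampleRate    = header[24] | (header[25] << 8) | (header[26] << 16)
                                  | (static_cast<std::uint32_t>(header[27]) << 24);
    std::uint16_t bitsPerSample = header[34] | (header[35] << 8);

    std::printf("channels=%u rate=%u bits=%u\n", channels, sampleRate, bitsPerSample);
    std::fclose(f);
    return 0;
}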
It turns out that QAudioProbe is not supported on OSX – the platform I am working on. It took quite a while (a "Qt while...") to ferret that info out, so I am posting it here explicitly.
See this document for full details: Qt 5.5.0 Multimedia Backends

Providing raw MP3/AAC data to Flex/Flash from a custom container

Having had a quick look at the Flex docs, I can't seem to find any reference to providing audio content to be played from a custom (possibly encrypted - don't worry, it's not that evil) container format. Is this possible, and if so, could someone point me in the right direction?
Or if that's not possible, is there some way to hook into the disk/network I/O (disk is much more important in this case) of the sound-playing mechanism, so I can provide a supported container in memory from a custom wrapper?
Since Flash Player 10, it's possible to write PCM / raw audio data to a Sound object.
Basically, you call play() on an "empty" Sound object and it will start periodically dispatching a SampleDataEvent, requesting data. You can then write to the audio stream through the data ByteArray exposed by the event object.
http://help.adobe.com/en_US/FlashPlatform//reference/actionscript/3/flash/events/SampleDataEvent.html?filter_flex=4
http://www.adobe.com/devnet/flash/articles/dynamic_sound_generation/index.html
Also, if you're interested in good articles and references for audio programming in ActionScript, you might want to check out Andre Michelle's stuff:
http://blog.andre-michelle.com/
http://lab.andre-michelle.com/
A flash.media.Sound must either:
be constructed/loaded with a URLRequest, or
inherit its data through embedding.
There is currently no provision for directly piping MP3 (or AAC, or video) data to any "media" object, such as Sound. You can only get the Sound object to download the data for itself. There are people who are upset about this, including myself; you are not alone!
I say "currently" because it's not unthinkable that Adobe will update the API to make this possible in a future version. For now, your best bet is the decoding-to-a-dynamic-sound workaround mentioned by Juan, if you really need to be able to do this.
And post a feature request at Adobe's bug tracker, or vote on an existing one!