Media Foundation Transforms with Two Inputs - ms-media-foundation

Is it possible to insert a custom AsyncMFT, modified to accept multiple MFTs as inputs (e.g. MFTa and MFTb connecting to MFTc), into an IMFMediaSession (the media session object is created with MFCreateMediaSession)? I've seen references online stating that a custom media session is needed, but this seems like overkill.
I'm at the point where mftrace (and debugging in code) simply reports "Catastrophic failure" when the media session starts (it fails immediately after myMediaSession->Start(NULL, &startPos)). The topology loads fine, and both input MFTs work fine as long as they don't connect to the same node.

Under Windows 7 it is not possible.
Read this: Multi-input and multi-output.
Becky Weiss from Microsoft gives the answer:
"The MFv1 pipeline isn't going to support multi-input MFTs. It happens that the Beta 2 Media Session doesn't explicitly validate against this yet."
"I'd say that multi-input MFTs is something that future versions of Media Foundation can be expected to support; but for now, we don't yet have that support."
I suppose MFv1 refers to Vista, and MFv2 to Windows 7.
The links you provide ("About MFTs") just tell you that you can write a transform with multiple inputs, that's all. But the problem is not Media Foundation transforms.
The problem is that the native media session is not able to handle multiple input streams on a transform, whatever type of connection you use (source > transform, transform > transform).
The "Catastrophic failure" message you are getting is the same one I got using AudioMixerMFT with the native media session.
Can this be done on Windows 10 without a custom media session?
I don't know. Check the MSDN forum link above; someone asked about this just yesterday.
PS: If you choose to write a custom media session for your case, I can help. It would be a good exercise because of the use of an AsyncMFT.
I moved the project which contains a custom media session here: github/mofo7777 (under MFNode > MFNodePlayer).

Translate API - different result from the web service

When using the translation API, I get a different translation (and worse) than if I use translate.google.com.
I am working on a project for a client, and the client was dissatisfied with the translation and noticed the difference.
Do these two services use different engines? I read that the API now uses NMT mode, and that translate.google.com already uses the same engine.
Both set to translate from Norwegian to English.
Any more information that can clear this up?
Thanks!
Differences between translate.google.com results and Translation API results are considered expected behavior; they can arise from maintenance tasks and the logic used by the internal processes. However, the engines used by each service seem to be private information.
Based on this, it is normal to get some variance when using the API. You can use the model parameter as an available workaround if you want to specify which of the available models to use; take a look at the official "Specifying a model" documentation for detailed information about this alternative.
It's almost three years later and the problem still remains!
I was trying to translate a dataset with the Google Translate API, but in the end it failed to translate some texts into the target language (in my case, Persian/Farsi). So I decided to check them to see if there was a pattern, and maybe translate them using the web version of Google Translate.
As I was doing so, I noticed that the web version could in fact translate some of those untranslated texts, but not all. Trying to find a reason for this behaviour, I found that most of them were names, not sentences. Names can easily be written with the target language's characters as the translation, so why doesn't the API transliterate those names while the web version does? Perhaps this screenshot explains it:
verified translation
As can be seen, some translations have a badge indicating that the translation has been verified, while some others don't.
So to recap, my guess is that maybe the API is set to only use verified translations, but as for the web version, even unverified translations are allowed since you can edit or report them.
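As an aside, the "check which texts failed to translate" step described above can be automated by testing for target-script characters. A minimal sketch (the function name is mine, not part of any Google API; it assumes Persian output, which is written in the Arabic Unicode block U+0600–U+06FF):

```javascript
// Flag results the API left untranslated: if a result contains no characters
// from the Arabic Unicode block (U+0600-U+06FF), in which Persian (Farsi) is
// written, it was almost certainly returned unchanged.
function isUntranslated(result) {
  return !/[\u0600-\u06FF]/.test(result);
}

var results = ["سلام دنیا", "John Smith"]; // one translated, one left as-is
var untranslated = results.filter(isUntranslated);
console.log(untranslated); // → [ 'John Smith' ]
```

Entries flagged this way (mostly names, as observed above) can then be retried through the web version or transliterated separately.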

Web test recording: automatically insert assertions during recording?

I need to automate the recording of Web test scenarios as much as possible. Selenium IDE, or better, the Katalon plugin for Chrome, seems very effective for this. However, what's missing from the recording are the assertions. So far I've found no real alternative to adding them by hand after the recording is done.
Now, I know which parts of my pages contain relevant output text, i.e. are subject to test, for instance based on ID patterns, class names, tag hierarchy, etc.
So given that my web app is in a "known good state", I could theoretically grab the text content of the relevant tags during the recording and insert my assertions into the recorded scenario right there and then. My aim is to automate this.
Is there any way to do this in the Katalon plugin, Selenium IDE, or any other automated web recording tool? I've read about Katalon Extension Scripts, but as far as I understand them, they cannot do what I want.
-- edit -- trying to rephrase and be more concrete --
During my recording, on certain events (e.g. on page load) I want the tool to find all elements that match certain selectors, and for each match store an assertion in the scenario that asserts the actual current value (e.g. div.innerText or input.value) of the element on the page. I want to define the events and the selectors that should trigger the insertion of assertions and the expression that defines the asserted value.
example
Suppose my webapp has a search page. I enter data in input fields, and hit the "search" button. These actions are recorded by most tools like Katalon Recorder. Now on the next page, the search results will show. Each search result will be in a div class="result". Suppose while recording I got two search results "foo" and "bar". So I want the tool to store in the scenario, while recording, an assertion that the first result should be "foo" and the second should be "bar", based on my rule that all $("div.result") should have their "innerText" asserted upon page load.
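The rule in this example can be sketched as a small pure function, independent of any particular recorder (the name buildAssertions and the element-descriptor shape are mine, not part of Katalon or Selenium IDE):

```javascript
// Given descriptors of the elements matched by a selector rule at a trigger
// event (e.g. page load), emit one recorded assertion per element:
// form controls get assertValue on their current value, everything else
// gets assertText on their visible text.
function buildAssertions(elements) {
  return elements.map(function (el) {
    var isFormControl = el.tagName === "SELECT" || el.tagName === "INPUT";
    return {
      command: isFormControl ? "assertValue" : "assertText",
      target: el.locator,
      value: isFormControl ? el.value : el.innerText
    };
  });
}

// The search example above: two div.result elements holding "foo" and "bar".
var assertions = buildAssertions([
  { tagName: "DIV", locator: "css=div.result:nth-of-type(1)", innerText: "foo" },
  { tagName: "DIV", locator: "css=div.result:nth-of-type(2)", innerText: "bar" }
]);
console.log(JSON.stringify(assertions, null, 2));
```

A recording tool would call something like this on each trigger event and append the resulting commands to the scenario.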
Avoid using Selenium IDE: compatibility with Firefox was discontinued as of Firefox 55, so you will not be able to run your tests on recent versions of Firefox.
When performing actions in the browser, it is relatively easy to record those actions to re-run them later; it is 100% clear which button you just pressed.
But you can make a million different assertions on a page. It would be difficult for any tool to guess which things you would like to assert and then add those assertions automatically, so I would be surprised if you found a tool that does exactly what you want.
What is keeping you from writing your own automated tests in code from scratch? In my experience, coding your own tests is not much slower, and once you are used to doing it you will be able to tackle more complex problems with much more ease.
I have no experience with Katalon.
You can't add assertions at recording time, but you can use Selenese after recording too.
Check official reference here: https://docs.katalon.com/display/KD/Selenese+%28Selenium+IDE%29+Commands+Reference
For what it's worth, I've managed to get what I needed as follows:
locate the Extension directory of Katalon Recorder in my Chrome installation
copy the entire contents to Eclipse
modify the source content/recorder.js, method Recorder.attach(), by adding the following:
var self = this;
$(...).each(function(i, el) {
    var target = self.locatorBuilders.buildAll(el);
    // form controls keep their state in "value"; everything else in its text
    if (el.tagName == "SELECT" || el.tagName == "INPUT")
        recorder.record("assertValue", target, el.value, false);
    else
        recorder.record("assertText", target, el.innerText, false);
});
(Note: the ... stands for the jQuery selectors that define the areas I know will contain relevant data in the application. This could be tweaked either in this source, e.g. by adding more selectors, or in the application itself, e.g. by adding a signaling class to certain tags in the HTML just to trigger assertions.)
In Chrome, activate "developer mode" and load the modified plugin.
While recording, assertions are now automatically added for the relevant parts (the ... selectors above) of my web app, on each page load.
happy!

KDE Taskbar Progress

I am trying to show progress in the taskbar of the Plasma desktop using KDE Frameworks. In short, I want to do the same thing as Dolphin when it copies files:
I'm kind of stuck, because I don't even know where to get started. The only thing I found that could be useful is KStatusBarJobTracker, but I don't know how to use it, and I could not find any tutorials or examples of how to do this.
So, after digging around, and thanks to the help of #leinir, I was able to find out the following:
Since Plasma 5.6, KDE supports the Unity DBus Launcher API, which can be used, for example, to show progress.
I found a post on AskUbuntu that explains how to use the API with Qt.
The real catch is: this only works if you have a valid desktop file in one of the standard locations! You need to pass that file as a parameter of the DBus message to make it work.
Based on this information, I figured out how to use it and created a GitHub repository that provides cross-platform taskbar progress and uses this API for the Linux implementation.
However, here is how to do it anyway. It should work for KDE Plasma and the Unity desktop, maybe more (I haven't tried any others):
Create a .desktop file for your application. For test purpose, this can be a "dummy" file, that could look like this:
[Desktop Entry]
Type=Application
Version=1.1
Name=MyApp
Exec=<path_to>/MyApp
Copy that file to ~/.local/share/applications/ (or wherever user specific desktop files go on your system)
In your code, all you need to do is execute the following code, to update the taskbar state:
auto message = QDBusMessage::createSignal(QStringLiteral("/com/example/MyApp"),
                                          QStringLiteral("com.canonical.Unity.LauncherEntry"),
                                          QStringLiteral("Update"));

// you don't always have to specify all parameters, just the ones you want to update
QVariantMap properties;
properties.insert(QStringLiteral("progress-visible"), true); // enable the progress
properties.insert(QStringLiteral("progress"), 0.5);          // set the progress value (from 0.0 to 1.0)
properties.insert(QStringLiteral("count-visible"), true);    // display the "counter badge"
properties.insert(QStringLiteral("count"), 42);              // set the counter value

message << QStringLiteral("application://myapp.desktop") // assuming you named the desktop file "myapp.desktop"
        << properties;
QDBusConnection::sessionBus().send(message);
Compile and run your application. You don't have to start it via the desktop file; at least I did not need to. If you want to be sure your application is "connected" to that desktop file, just set a custom icon for the file; your application should show that icon in the taskbar.
And that's basically it. Note: the system remembers the last state when the application is restarted, so you should reset all those properties once on application startup.
Right, so as it turns out you are right, there is currently no tutorial for this. This ReviewBoard request, however, shows how it was implemented in KDevelop, and it should be possible for you to work it out from that :) https://git.reviewboard.kde.org/r/127050/
PS: the fact that there is no tutorial yet might be a nice way for you to hop in and help out, by writing a small, self-contained tutorial for it... something I'm sure would be very much welcomed :)

Concatenate Multiple media files into one output/listening to Media Foundation Events

I've written an application that will transcode and manipulate media files using Microsoft Media Foundation, but now I've got to make the same application concatenate/join media files together.
Is there any existing documentation on doing something like this? Any pointers/hints? Any existing code that does this?
If not, I figure I've got to write or find a custom media source, something like a ConcatenatingMediaSource (a source that wraps the series of sources it concatenates), but I'm unsure whether this is the best way to accomplish it.
EDIT:
It seems the relevant event I need to be concerned with is MEEndOfPresentation, which indicates that a source (or perhaps one of my embedded sources) has reached the end of all its streams.
The MSDN docs state that if a wrapped source fires this event, I have the ability to set a new PresentationDescriptor on my source. Perhaps I could just return the next embedded source's PresentationDescriptor?
Right now I'm held up on how to actually listen to an individual source's events. How to do this isn't exactly clear (at least to someone who mostly writes code for the JVM).
EDIT:
I think I want to use the Sequencer Source; it's part of the API but seems fairly undocumented.

Accessing the WebOS clipboard From an Enyo Application

When developing a WebOS application with Enyo, is it possible to access the clipboard contents? That is, if I copy a bit of text to the clipboard on a TouchPad or Pre device, can I programmatically grab that piece of text, or programmatically replace it?
From what I've read in the SDK documents, I assume I'd need a Service to do this. Is this correct?
If so, which service? Are there a list of services available, and/or is there a way to reflect into the framework to see which services are available?
(New to WebOS development, so err on the side of speaking loudly and slowly.)
I think you are looking for the getClipboard method on enyo.dom. However, when I try:
enyo.dom.getClipboard(enyo.bind(this, "gotClipboard"));

gotClipboard: function() {
    this.log(JSON.stringify(arguments));
}
I just get {"0":""}, even though I have text in the clipboard. It makes me wonder if this isn't fully baked yet. When it works, one argument should be the text in the clipboard.
If I try the companion enyo.dom.setClipboard, I get a NOT_FOUND_ERR: DOM Exception 8.
Found both of these functions in here: https://developer.palm.com/content/api/reference/enyo/enyo-api-reference.html
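For what it's worth, when getClipboard does start returning data, the clipboard text should arrive as the first positional argument of the callback. A defensive handler can be sketched in plain JavaScript (the fallback behaviour is my assumption, based on the empty result above):

```javascript
// Defensive clipboard callback: take the first argument as the clipboard
// text and fall back to an empty string when the platform hands us nothing.
function gotClipboard() {
  var args = Array.prototype.slice.call(arguments);
  return (args.length > 0 && args[0]) ? args[0] : "";
}

console.log(gotClipboard("hello")); // → hello
```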
