DirectShow: setting MPEG2VIDEOINFO extra data for MEDIASUBTYPE_HVC1

I've wrapped the x265 code in a DirectShow filter and can't get the video to play back with LAV Filters.
I can't find any documentation from Microsoft on how the MPEG2VIDEOINFO structure should be filled in when outputting MEDIASUBTYPE_HVC1, so I've tried an approach similar to the one used for H.264.
Connection is fine, but playback results in a black window.
In the GraphStudioNext property window I see a "HEVC Decoder Configuration Record" structure but the values output by my filter don't look right.
Does anyone know how to set the extra data of the MPEG2VIDEOINFO header? A link to some documentation on how this should be configured would be great as well.

Related

Extract audio stream from http site (online radio)

I'm new here, so first of all, sorry for any mistakes.
I'm trying to add one more radio station to my Raspberry Pi based online radio player (for private use only, of course). It is a Polish station, Radio Wawa. Here is the official site and stream: https://www.eskago.pl/radio/wawa Unfortunately, the official site plays some ads before the stream starts (and I can't see the stream URL there). I found an unofficial site with the stream: https://pl.radioonline.fm/sluchac/Radio-WAWA There are no ads there, but it's still somewhat complicated for me to extract a stream URL that I could play, e.g. in omxplayer.
I found that the stream URL is http://waw.ic.smcdn.pl/t050-1.aac, but the site appends a timestamp and a mystery hash to it. The full request looks like:
http://waw.ic.smcdn.pl/t050-1.aac?timestamp=1546208561&hash=25d2e0deebc354c9e9b5c37b74b64f21
Now the question: is it possible to play this from the command line only (ideally with omxplayer)? And how?
Thanks.

Token image in Google Authenticator or FreeOTP

For a project, I implemented OTP as a second authentication factor. Everything is working fine: I am able to generate a QR code for the encryption seed, read it on an Android smartphone, and use the generated 6-digit code to authenticate in my app.
I read that v1.5 of FreeOTP now supports adding an image to each service; quote:
On Android, we released a major release which brings many new features and UI refinements. The biggest of these is image support. Images can be selected for each token. Images can also be provisioned to the device via an undocumented OTP URI query parameter.
I see that some services did succeed in adding an image for their service (for example OVH), but I cannot find the proper URI syntax to do so.
To be more precise, I am not asking for the method to manually add an image to a token in the FreeOTP app; I'm looking for the correct URI to generate the QR code from, which would ideally include a link to the image to be displayed. I'm pretty sure I never manually added an image for OVH.
The correct URI to generate the QR code with reference to the image you want to use in FreeOTP includes a querystring parameter pointing to its publicly available location:
...&image=http<s>://<image-path>
The image should be a .png. Fully qualify path and protocol.
Add this to the existing string already created for the QR code. You have to UrlEncode the whole string before generating the QR code.
For clarity, the format of the data before URL encoding should be:
otpauth://totp/(<issuer>:)<accountnospaces>?secret=xxxxxxxxxx(&issuer=<issuer>)(&image=<imageuri>)
Parentheses denote optional elements. For example:
otpauth://totp/Google:SampleName?secret=MQ2TQNLEGMYTMOBXGY3Q&issuer=Google&image=http://google.com/image/logo.png
Then you URL-encode it:
otpauth%3A%2F%2Ftotp%2FGoogle%3ASampleName%3Fsecret%3DMQ2TQNLEGMYTMOBXGY3Q%26issuer%3DGoogle%26image%3Dhttp%3A%2F%2Fgoogle.com%2Fimage%2Flogo.png
Then you generate a QR code however you like. For example, Google Chart API:
https://chart.googleapis.com/chart?cht=qr&chs=400x400&chl=otpauth%3A%2F%2Ftotp%2FGoogle%3ASampleName%3Fsecret%3DMQ2TQNLEGMYTMOBXGY3Q%26issuer%3DGoogle%26image%3Dhttp%3A%2F%2Fgoogle.com%2Fimage%2Flogo.png
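For what it's worth, a minimal JavaScript sketch of those steps might look like the following (the issuer, account, secret and image URL are just the placeholder values from the example above):
// Build the otpauth URI, URL-encode the whole string, and embed it in a QR code URL.
// Placeholder values only; substitute your own issuer, account, secret and image.
var issuer  = 'Google';
var account = 'SampleName';
var secret  = 'MQ2TQNLEGMYTMOBXGY3Q';              // base32 seed
var image   = 'http://google.com/image/logo.png';  // publicly reachable .png

var otpauthUri = 'otpauth://totp/' + issuer + ':' + account +
                 '?secret=' + secret +
                 '&issuer=' + issuer +
                 '&image=' + image;

// Encode the whole string before handing it to the QR generator.
var encoded = encodeURIComponent(otpauthUri);
var qrUrl = 'https://chart.googleapis.com/chart?cht=qr&chs=400x400&chl=' + encoded;
console.log(qrUrl);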
While FreeOTP supports this feature, other apps do not. It's not part of the TOTP spec, although it should be.

DTM giving a different report suite for custom links and page calls?

I'm getting some very strange behavior in DTM. When our page loads (from a local instance of the website), we get the expected call going out with the proper dev report suite. When a custom link call is made from that page, for some reason DTM sends it with a production report suite. If I look in Adobe Analytics for the custom link name reported under the prod rsid, it does not show up there.
Any ideas on what is going on and how I can fix this issue?
This is my shot in the dark based on what you have said, and it is based on the assumption that your statements are true (e.g. you aren't seeing pink elephants, the request was indeed showing your prod rsid in the proper portion of the request URL, you did in fact check your prod rsid after an acceptable amount of time had passed, no segment or other filter shenanigans, etc.; in short, that you do know how to accurately perform the basic QA song and dance).
Under that assumption, the below is a scenario that can plausibly reproduce what you are describing. I could be partially right or totally off for your specific situation, but there's really no way for me to know for sure without having access to your DTM instance.
The Scenario
Long story short: it sounds like you have a blend of custom code and DTM automatic settings enabled, and DTM is overriding and/or ignoring your custom code for link tracking.
More specifically, it sounds to me like you have AA implemented as a tool in DTM, and in the config settings, you have your production and staging rsids specified in the text fields.
Then in the General section, you either do NOT have values specified for Tracking Server and Tracking Server Secure, or else they are set to the wrong values.
Then, in the Library Management section, you have either selected "Managed by Adobe" in which case DTM takes care of the library, or else you have selected "Custom" and you are adding the library yourself AND you have NOT checked "Set report suites using custom code below".
Then, somewhere in DTM (e.g. the Library Management > Custom code box, or Customize Page Code codebox) you have code that pops rsid stuff (e.g. s.account, s_account, dynamicAccountList stuff), and possibly also trackingServer and trackingServerSecure.
Finally, you (like most other people, because DTM's double script include for staging vs. prod is.. dumb) just use the prod script include on your page, and either use the debug/staging mode or rely on whatever rsid routing logic you've set up to route to dev.
So.. when the page is first loaded, DTM loads the AA library and it sets variables and stuff based on what you specified in the tool config. During this time, it is also popping any custom code blocks you have in the tool config, which may or may not override what you have specified in the tool config fields, depending on what you enabled. Then after that, it pops stuff you have in page load rules (if any), etc..
But then comes the link click.. As I have mentioned in other posts on SO, DTM has this caveat (IMO bug) about how it references the AA object after the initial page load/AA request: basically, it doesn't. Instead, it makes use of internal methods (the main one being a .getS() method) to create a new instance of the AA object, based on whatever things you have configured in the tool config section. Well here's the rub.. it does NOT account for or execute any custom coding you have done in code boxes in the tool config section.
So that basically happens whenever an event based or direct call rule is triggered, and it effectively screws you. Why does DTM do this? I do not know. IMO Adobe needs to change this feature caveat bug. Either they should refactor DTM to execute the code boxes, OR they could, you know.. just reference the original AA object created, like any normal script would do..
But in any case..
So for example, my theory here is that the page loads fine and points to the dev rsid based on your setup. But then you click a link and an event triggers, and DTM makes a new AA object without caring about your custom code, so all it has to go on is what you have in the tool's config fields.
Since DTM doesn't actually have any rules around the prod vs. dev rsids you specify in those fields (you have to write custom code in the custom code boxes - that DTM ignores!), it just pops the prod rsid, because that's the script include you have on your page.
Then, as far as not seeing the data actually show up in your prod rsid: again, since DTM ignores what you set in your custom code boxes, it's defaulting to what is specified in the trackingServer fields in the tool config, and my assumption here is that they are either blank or wrong (you should be able to look at the request URL to Adobe to verify this). This theory is based on you saying the prod rsid is right and that you see a request being made, so the next culprit would be a wrong tracking server.
So, that is my theory of what's going on. Maybe it's all right, maybe it's some right, hopefully it may point you in the right direction at least.
Edit:
If you can confirm that this is indeed how you have things setup, then you will naturally ask "Okay, well what do I do about that?". As I have said in a lot of my other SO answers.. basically, your only option is to uncheck all the settings that make DTM automate AA, and in all your rules, keep the AA section disabled and whatever AA vars you wanna set, set them yourself and make the s.t() or s.tl() call yourself in a 3rd party script code box, so that it continues to reference and pop based off the originally instantiated AA object.
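As a rough sketch of what that looks like in a third-party script code box (assuming the originally instantiated AA object is the usual window.s, and that the variable names and link name below are placeholders for whatever you actually want to send):
// Rough sketch only: set the link tracking vars on the original AA object
// yourself and fire the custom link call manually, instead of letting the
// rule's Adobe Analytics section do it.
var s = window.s;                   // the AA object created on page load
s.linkTrackVars   = 'prop10,eVar10,events';
s.linkTrackEvents = 'event10';
s.prop10 = 'some link name';        // placeholder values
s.eVar10 = 'some link name';
s.events = 'event10';
s.tl(true, 'o', 'some link name');  // 'o' = custom link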
Update
Based on your comments below, okay so yeah.. that sounds like what I described, and accounts for the prod rsid popping. As for the data not showing up in the report: if you are certain the tracking server is set correctly (the request URL looks good), then this isn't a DTM issue. Here are some other explanations for why the data wouldn't show up:
Are you sure the request is being sent to your prod rsid? I don't know what you are looking at to verify this, but this is where you should be looking: In the request URL to AA: "http://[trackingServer value]/b/ss/[s.account value]/1..."
Click request isn't making it to Omniture. Verify in a packet sniffer that the request is actually made and that you are getting a 200 OK or NS_Binding_Aborted response.
You aren't waiting long enough to check for the data. Even basic hit data and looking at "real time" reports takes a little bit of time to show up.
You have a segment/filter active that's not jiving with the data you are trying to look at. Make sure that you don't have anything applied. Or, if you are using those things to find your data (and aren't seeing it), ensure that you are correctly applying it.
You recently created the rsid and the "go live" date hasn't passed yet. Data will not show up in the report suite until up to 24 hours after the specified "go live" date.
You have a vista rule in place that's affecting data showing up. Some companies have a vista rule in place for a number of reasons and there are a million ways it could affect data (e.g. routing to a different report suite). For shits and grins, check your dev (or other rsids) to see if your data showed up there. Even if that doesn't make sense, at least it's a step forward.
You have a bots / ip exclusion rule in place that's catching data from your location.
The data sent in from the link click isn't relevant to the report. For example, maybe you are looking at e.g. prop10 report and prop10 isn't actually sent in the click request.
I know a lot of these are basic things to check, and no doubt you've checked, but check again. Have someone else check for you to be sure. I'm not questioning your abilities here, but even the best of coders forget to cross their t's and dot their i's sometimes, and manage to miss obvious things. If you are sure about all of these, then contact Adobe ClientCare, because I really can't think of anything else that wouldn't involve an issue with Adobe's backend.
I ran into a similar problem with my implementation. Essentially what I did was set the s.account variable directly inside doPlugins, so it would be set on all tracking calls. I wrote specifics here also: DTM Tracking Account
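A minimal sketch of that idea (the hostname test and report suite IDs are made up; adjust them to your own environments):
// Force the report suite on every call, including the s.tl() calls DTM fires
// from event based / direct call rules. Placeholder rsids and hostname check.
s.usePlugins = true;
s.doPlugins = function (s) {
  if (/^(localhost|dev\.)/.test(location.hostname)) {
    s.account = 'mydevsuite';
  } else {
    s.account = 'myprodsuite';
  }
};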

Why is EC_DISPLAY_CHANGED sent even though a monitor change / switch didn't occur?

On the initial graph start, approximately after 10 video samples, I keep receiving the EC_DISPLAY_CHANGED event from the graph manager, even though I didn't physically move the video from one monitor to another; I only started it on the secondary monitor.
I tried to search for additional information regarding what causes the graph manager to send it, but couldn't find any.
Additionally, I've used the following code snippet so that I can handle that particular event myself.
if (FAILED(hr = m_spMediaEventEx->CancelDefaultHandling(EC_DISPLAY_CHANGED)))
return hr;
Thanks for the help
EC_DISPLAY_CHANGED on MSDN:
If the display mode changes, the video renderer might need to choose another format. By sending this message, the renderer signals to the filter graph manager that it needs to be reconnected. During the reconnection, the renderer can select a new format.
The typical scenario is a video renderer expecting to be shown on the primary monitor, and then being positioned onto the secondary one. The renderer generates the event in order to update itself through a filter graph transition. You see the event after a few samples have already been streamed because the event is handled asynchronously. To work around this, use IVMRMonitorConfig::SetMonitor and friends to position the renderer on the correct monitor well in advance.
Note that under normal circumstances, the event and the reconnection it triggers amount to just a small delay and should be handled transparently.
By canceling the default handling, you are canceling exactly the following, and you are then expected to take care yourself of whatever the default action was trying to fix.
Default Action
The filter graph manager temporarily stops the graph, and then disconnects and reconnects the video renderer. It does not pass the event to the application.

Can Xively show a log of text messages from a device?

Is Xively ONLY capable of showing graphs of values, or sending responses to triggers, etc? Is there any way to create a little scrollable message window and see log-type messages as they happen?
Seems like most of the info on the Xively site is marketing hype and a formal API spec, along with some glossy examples and high prices.
Thanks
Xively datastreams can store string values. You will need to implement a log viewer yourself, which is quite simple to achieve using JavaScript.
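For example, a rough JavaScript sketch of such a viewer, polling one string-valued datastream and appending each new value to a scrollable <div id="log"> element (the feed ID, datastream ID, API key and endpoint details are assumptions; verify them against the Xively API docs):
// Rough sketch: poll the datastream and append new values to the page.
var FEED_ID = '123456789';          // placeholder feed ID
var STREAM  = 'log';                // placeholder datastream ID
var API_KEY = 'YOUR_XIVELY_API_KEY';
var lastValue = null;

function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://api.xively.com/v2/feeds/' + FEED_ID +
                  '/datastreams/' + STREAM + '.json');
  xhr.setRequestHeader('X-ApiKey', API_KEY);
  xhr.onload = function () {
    var data = JSON.parse(xhr.responseText);
    if (data.current_value !== lastValue) {
      lastValue = data.current_value;
      var line = document.createElement('div');
      line.textContent = data.at + '  ' + data.current_value;
      document.getElementById('log').appendChild(line);
    }
  };
  xhr.send();
}
setInterval(poll, 5000);            // check for new values every 5 seconds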
