Given two codecs with the same merit value, how does DirectShow decide which one to use through the 'intelligent connect' mechanism?
It rolls the dice.
Seriously, the behavior is undefined: either decoder may be tried first. If the first one is rejected, the filter graph tries the other.
The Intelligent Connect MSDN page sheds some light on this.
Starting with Windows 7 a new mechanism is used first, and the merit system only serves as a fallback when that mechanism finds no suitable filter.
Starting in Windows 7, DirectShow has a list of preferred filters for
certain media subtypes. If there is a preferred filter for the media
type that is being rendered, the Filter Graph Manager tries that
filter next. An application can modify the list of preferred filters
by using the IAMPluginControl interface. Changes to the list affect
the application's current process, and are discarded after the process
ends.
When the merit system is used, the MSDN page mentions only the following:
Then it tries them in order of merit, from highest to lowest. (It uses additional criteria to choose between filters with equal merit.)
I want to be able to "suppress" the default HotChocolate (server) behavior and ignore schema errors on queries.
Let me try to give an example.
Let's assume we have the following schema:
type Query {
  myTypes: [MyType]
}

type MyType {
  id: ID!
  name: String
}
If I make a query like:
{
  myTypes {
    id,
    name
  }
}
Everything is okay.
However, if I make something like:
{
  myTypes {
    id,
    name,
    notKnownProperty
  }
}
The query fails.
In general this is desired behavior (the whole benefit of a strict schema), but not in my particular case.
The use case: we have two or more services that expose the same schema (with different data), and those services evolve at different speeds. Unfortunately we cannot control the versions of those services, as some of them are on-premise, so we end up in a situation where some services have newer versions (with more fields on the same types) than others. This restricts us to using only the oldest clients available.
We would like to do the opposite: use the newest clients and handle missing fields in them (with default values, for example).
One possible solution is to implement some sort of versioning, either in the path (e.g. /graphql/v1) or in the object type names themselves (e.g. MyType1, MyType2, etc.).
But I personally don't like those approaches.
Any ideas and input are appreciated!
Thanks in advance!
If you want to use new clients, they need to be backwards compatible, i.e. able to work correctly against older versions of the API. For that, it's obviously not enough to treat non-existent fields as having default (null) values: there is a big difference between an end user who doesn't have a phone number and one who has a phone number that the old API version simply doesn't provide.
So you need a more powerful strategy for backwards compatibility. If you don't want to use versioning as such, there are at least two other strategies available with GraphQL in general (not only HotChocolate):
Your client apps could analyze the schema up front (via introspection) to check which fields are missing, and use that knowledge to decide how to build their requests to the API.
Your client apps could adjust their strategy on the fly, i.e. not query the schema but handle query errors instead. Since GraphQL errors are well structured, the client can easily tell which fields are missing and adjust future requests accordingly.
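As an illustration of the second strategy, here is a minimal sketch in Python, assuming a hypothetical /graphql endpoint exposing the schema above; the endpoint URL is made up and the error handling is deliberately naive:

import requests

ENDPOINT = "https://example.com/graphql"  # hypothetical endpoint

QUERY = """
{
  myTypes {
    id
    name
    notKnownProperty
  }
}
"""

def run_query(query):
    # GraphQL servers conventionally answer POSTed queries with {"data": ..., "errors": [...]}.
    return requests.post(ENDPOINT, json={"query": query}).json()

result = run_query(QUERY)
if "errors" in result:
    # Validation errors are structured and usually name the offending field,
    # so the client can drop that field and retry with a reduced query.
    print("Rejected:", [e["message"] for e in result["errors"]])
    result = run_query(QUERY.replace("notKnownProperty", ""))

print(result.get("data"))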
I am trying to get a user's availability, but in one use case, I want to ignore their actual availability rules and even their current schedule and calendar. Basically I am using Cronofy in this use case just to provide me a list of times.
According to the docs https://docs.cronofy.com/developers/api/scheduling/availability/
I should be able to specify participants.members.available_periods.start and participants.members.available_periods.end. I've read and re-read these params over and over and am sure I am sending them as specified, but Cronofy still returns only times when the user is not "busy".
Am I still misunderstanding this param? Is there another way to ignore a user's calendar, i.e. ignore their "busy" time slots?
The intent of the participants.members.available_periods parameter is to define a specific participant's availability hours for the query. It is useful in multi-person queries when one or more participants have ad-hoc shift patterns or other complicated working hours. You can choose to specify these periods here, or use our Available Periods endpoint along with Managed Availability to have the availability query consider the latest set of Available Periods when it runs.
The Availability query isn't designed to ignore all calendar events for an individual participant but there is another way you can achieve what you're looking for.
Application Calendars are designed as drop-in replacements for synced-calendar Accounts in Cronofy, so you can create one or more of these via the API and use them in the Availability query as a stand-in for the participant.
They still support Managed Availability and can have events created in them. So if you wanted to ensure that your application doesn't double book over the events it already knows about, you can just create the events as your application books them.
I hope this helps. Our support team (support@cronofy.com) is always happy to talk through the specifics of your use case if that would be helpful.
-- UPDATE --
We've decided to support this as a first class concept in our API.
You can now pass an empty array in the participants.members.calendar_ids attribute to indicate that you don't want any calendars included in the availability query, so only the Availability Rules or query periods will be considered. Thanks for the question.
More information here.
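For reference, a rough sketch of such a query using Python's requests library; the access token, sub, dates and duration are placeholders, and the exact field names (e.g. query_periods) should be double-checked against the current Availability API docs:

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

body = {
    "participants": [{
        "members": [{
            "sub": "acc_1234567890",  # placeholder account id
            "calendar_ids": []        # empty array: ignore this member's calendars
        }],
        "required": "all"
    }],
    "required_duration": {"minutes": 30},
    "query_periods": [
        {"start": "2023-05-01T09:00:00Z", "end": "2023-05-01T17:00:00Z"}
    ]
}

response = requests.post(
    "https://api.cronofy.com/v1/availability",
    json=body,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
print(response.json())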
I've created a topology for a video file which contains just one stream (no audio).
It contains three nodes which are connected in order:
a source stream node
an Mpeg4Part2VideoDecoder as transform node
an activate object for the EVR as output node
Calling SetTopology() and allowing a partial topology results in working playback. However, I am trying to resolve the full topology myself.
To do that, I first need to bind my output node to a media sink. I followed the guidelines specified in the documentation, and all the required calls seem to succeed. When setting the full topology, I receive the MESessionTopologySet event.
Unfortunately, playback doesn't work, although I don't get any errors either.
Is there anything else required when creating a full topology?
I recall reading somewhere in the MSDN docs that the topology loader used when setting a partial topology also sets the media types. Is this required, and if so, where can I find more information on it?
Matt Andrews answered this one for me on the MSDN forums.
You definitely need to negotiate your own media types if you are
bypassing the topology loader. This means obtaining the source's
media type from IMFMediaTypeHandler, setting it on the downstream
transform, and then for each node down the chain, querying the
available input and output types to find a compatible media type. It
is much easier to use the topoloader unless you have a specific need
to avoid it.
I am building a web application with multilingual support.
I am using variables for label text display, so that administrators can change a value in one place and have that change reflected throughout the application.
Please suggest which is better/less time consuming for displaying label text:
relational DB interaction
constant variables
XML interaction
How could I find/calculate the processing time of the above three?
"Less time consuming" is easy, and completely intuitive: constants will always be faster than retrieving the information from any external source, even another in-memory source. Probably even faster than retrieving it from a variable (which is where any of the other solutions would end up putting the data).
But I suspect there is more to it than that. If you need to support the ability to change that data (and even if not), you might also consider using Resource Files, which would let you replace all such resources based on language/culture.
But you could test the speed fairly easily using the .NET 4 Stopwatch class, or the system tick count (I'm not sure offhand which object that comes from) if you don't have 4.0.
DB interaction: the rate of DB calls would increase, unless you apply some caching logic.
Constants: manageability issues.
XML: parsing time plus a high rate of I/O, etc.
Create three unit tests, one for each choice.
Load test them and compare the results.
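For what it's worth, the shape of such a measurement is the same in any language. A rough Python illustration of the idea (the label data, table, and XML layout are all made up), timing the three lookups over many iterations:

import sqlite3
import time
import xml.etree.ElementTree as ET

# Stand-ins for the three options; all content is made up.
LABELS = {"welcome": "Welcome!", "logout": "Log out"}   # constants

db = sqlite3.connect(":memory:")                        # relational DB
db.execute("CREATE TABLE labels (key TEXT PRIMARY KEY, text TEXT)")
db.executemany("INSERT INTO labels VALUES (?, ?)", LABELS.items())

xml_doc = ET.fromstring(                                # XML source
    "<labels><label key='welcome'>Welcome!</label>"
    "<label key='logout'>Log out</label></labels>")

def from_constant(key):
    return LABELS[key]

def from_db(key):
    return db.execute("SELECT text FROM labels WHERE key = ?", (key,)).fetchone()[0]

def from_xml(key):
    return xml_doc.find("./label[@key='%s']" % key).text

for name, lookup in [("constant", from_constant), ("db", from_db), ("xml", from_xml)]:
    start = time.perf_counter()
    for _ in range(100000):
        lookup("welcome")
    print(name, time.perf_counter() - start)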
I've been playing with RSS feeds this week, and for my next trick I want to build one for our internal application log. We have a centralized database table that our myriad batch and intranet apps use for posting log messages. I want to create an RSS feed off of this table, but I'm not sure how to handle the volume- there could be hundreds of entries per day even on a normal day. An exceptional make-you-want-to-quit kind of day might see a few thousand. Any thoughts?
I would make the feed a static file (you can easily serve thousands of those), regenerated periodically. Then you have a much broader choice of how to generate it, because generation doesn't have to finish in under a second; it can even take minutes. And users still get perfect download speed and a reasonable update rate.
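A rough sketch of that approach in Python, meant to be run from cron every few minutes; the table, column names and paths are made up, and the pubDate handling is simplified:

import os
import sqlite3
from email.utils import formatdate
from xml.sax.saxutils import escape

def build_feed(db_path="applog.db", out_path="applog.xml", limit=100):
    db = sqlite3.connect(db_path)
    rows = db.execute(
        "SELECT app, message FROM log_messages "
        "ORDER BY created_at DESC LIMIT ?", (limit,)).fetchall()
    items = "".join(
        "<item>"
        "<title>%s: %s</title>"
        "<description>%s</description>"
        "<pubDate>%s</pubDate>"   # should really come from the row's timestamp
        "</item>" % (escape(app), escape(message[:80]), escape(message), formatdate())
        for app, message in rows)
    feed = (
        "<?xml version='1.0' encoding='utf-8'?>"
        "<rss version='2.0'><channel>"
        "<title>Application log</title>"
        "<link>http://feedserver/log</link>"
        "<description>Most recent log messages</description>"
        + items + "</channel></rss>")
    # Write to a temp file and rename so readers never see a half-written feed.
    with open(out_path + ".tmp", "w", encoding="utf-8") as f:
        f.write(feed)
    os.replace(out_path + ".tmp", out_path)

if __name__ == "__main__":
    build_feed()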
If you are building a system with notifications that must not be missed, then a pub-sub mechanism (using XMPP, one of the other protocols supported by ApacheMQ, or something similar) will be more suitable than a syndication mechanism. You need some measure of coupling between the system that generates the notifications and the ones that consume them, to ensure that consumers don't miss notifications.
(You can do this using RSS or Atom as a transport format, but it's probably not a common use case; you'd need to vary the notifications shown based on the consumer and which notifications it has previously seen.)
I'd split up the feeds as much as possible and let users recombine them as desired. If I were doing it I'd probably think about using Django and the syndication framework.
Django's models could probably handle representing the data structure of the tables you care about.
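For example, a minimal unmanaged model pointing at an existing log table might look like this (table and field names are made up for illustration):

from django.db import models

class LogMessage(models.Model):
    # Field and table names are illustrative; point them at your real log table.
    application = models.CharField(max_length=100)
    severity = models.CharField(max_length=20)
    message = models.TextField()
    created_at = models.DateTimeField()

    class Meta:
        managed = False        # the table already exists; Django shouldn't try to create it
        db_table = "app_log"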
You could have a URL pattern that captures everything after /rss/, like: r'^rss/(?P<feed_path>[\w/-]+)/$', and then split the captured path into feed names in the view (I can't test it right now, so it might not be perfect).
That way you could use URLs like:
http://feedserver/rss/batch-file-output/
http://feedserver/rss/support-tickets/
http://feedserver/rss/batch-file-output/support-tickets/ (the first two combined into one feed)
Then in the view:
def get_batch_file_messages():
    # Grab all the recent batch-file messages here.
    # Maybe cache the result and only regenerate every so often.
    return []

# Other feed functions here.

feed_mapping = {'batch-file-output': get_batch_file_messages}

def rss(request, feed_path):
    items_to_display = []
    for feed in feed_path.strip('/').split('/'):
        items_to_display += feed_mapping[feed]()
    # Process items_to_display into a feed and return it here.
Having individual, chainable feeds means that users can subscribe to one feed at a time, or merge the ones they care about into one larger feed. Whatever's easier for them to read, they can do.
Without knowing your application, I can't offer specific advice.
That said, it's common in these sorts of systems to have a level of severity. You could have a query string parameter that you tack onto the end of the URL to specify the severity. If set to "DEBUG" you would see every event, no matter how trivial; if set to "FATAL" you'd only see events of "system failure" magnitude.
If there are still too many events, you may want to subdivide your events into some sort of category system. Again, I would have this as a query string parameter.
You can then have multiple RSS feeds for the various categories and severities. This should allow you to tune the level of alerts you get to an acceptable level.
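As a rough sketch (reusing a model like the LogMessage sketched in the earlier answer; the severity ordering and parameter names are made up), a Django view filtering on those query string parameters could look like:

SEVERITY_ORDER = ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"]  # illustrative ordering

def filtered_rss(request):
    # e.g. /rss/?severity=ERROR&category=billing-batch
    minimum = request.GET.get("severity", "DEBUG").upper()
    if minimum not in SEVERITY_ORDER:
        minimum = "DEBUG"
    allowed = SEVERITY_ORDER[SEVERITY_ORDER.index(minimum):]
    entries = LogMessage.objects.filter(severity__in=allowed)
    category = request.GET.get("category")
    if category:
        entries = entries.filter(application=category)  # application stands in for "category" here
    # Render `entries` into an RSS feed and return it here.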
In this case, it's more of a manager's dashboard: how much work was put into support today, is there anything pressing in the log right now, and, when we first arrive in the morning, a measure of what went wrong with the batch jobs overnight.
Okay, I decided how I'm gonna handle this. I'm using the timestamp field on each row and grouping by day. It takes a little bit of SQL-fu to make it happen, since of course there's a full timestamp there and I need to be semi-intelligent about how I pick the log message to show from within each group, but it's not too bad. Further, I'm building it to let you select which application to monitor, and then showing every message (max 50) from a specific day.
That gets me down to something reasonable.
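For anyone curious, here's a rough sketch of that grouping done client-side in Python (database, table, and column names are made up; it assumes ISO-8601 timestamps, so the calendar date is the first ten characters):

import sqlite3
from itertools import groupby

db = sqlite3.connect("applog.db")
rows = db.execute(
    "SELECT created_at, message FROM log_messages "
    "WHERE app = ? ORDER BY created_at DESC LIMIT 1000", ("billing-batch",)).fetchall()

# Bucket by calendar day, then show at most 50 messages per day.
for day, day_rows in groupby(rows, key=lambda row: row[0][:10]):
    day_rows = list(day_rows)[:50]
    print(day, "(%d messages)" % len(day_rows))
    for created_at, message in day_rows:
        print("  ", created_at, message)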
I'm still hoping for a good answer to the more generic question: "How do you syndicate many important messages, where missing a message could be a problem?"