Question on PositioningManager Data Source - here-api

I'm working on integrating navigation into an Android app and had a customer report that it took 30 seconds for the app to detect that they went off route before re-routing them. This led me to look at the different data sources. I see the following options.
Does anyone have any details on which is going to be the most accurate and efficient? I am using GPS_NETWORK and the devices have cellular connections.
LocationDataSourceAutomotive
Abstract class that defines the interface for providing position updates from an automotive location data source (e.g. …).
LocationDataSourceDevice
Abstract class that defines the interface for providing position updates from a platform location data source.
LocationDataSourceGoogleServices
Abstract class that defines the interface for providing position updates from the Google location services data source.
LocationDataSourceHERE
Abstract class that defines the interface for providing position updates from a HERE hybrid location data source.

For now, we are not providing the option for advanced positioning over GSM, CDMA, Wi-Fi networks, etc. You can refer to our documentation guide to look into the features.

Related

How can we access NetworkMapCache in Contract-States library of CordApp

I am trying to implement a Validator class in the Contract-States library of a CorDapp, which has several validation methods that are inherited by Model classes and called in their init() fun, so that each time a Model class is initialized the validation happens on the spot.
Now I'm stuck at a point where I need to validate whether the incoming member name (through a Model class) matches the Organisation name of the node; I need to access the NetworkMap for that. How can I do that?
In the Work-Flow library each flow extends the FlowLogic class, which exposes the ServiceHub interface, and through that we can access the NetworkMap, but how can I do that in the Contract-States library?
P.S. - I'm trying to avoid any circular dependency (Contract-States lib should not depend on Work-Flow lib)
The short answer: you can't.
The long answer:
The difference between flow validation and contract validation is that the latter (contracts) must be deterministic, meaning that for the same input they must always give the same output, whether it's now or 100 years from now, on the current node or on any other node.
The reason is that any time (even in the future) a node receives a transaction, it must validate that transaction, which includes validating its inputs, which in turn requires validating the transactions that created those inputs, and so on, until you have a fully validated graph of all the transactions that produced the outputs now being used as inputs.
That's why the contract should return the same result any time, and that's why it should be deterministic, and that's why contracts (unlike flows) don't have access to external resources (like HTTP calls, or even the node's database).
Imagine if the contract relied on the node's database for some validation rule: as you know, states are distributed on a need-to-know basis (i.e. only to participants of the state), so one node might have the state that you're using as a validation source and another node won't, so the contract's output (transaction valid/invalid) would differ between nodes, and that breaks determinism.
Contracts only have access to the transaction components: inputs, outputs, attachments, signatures, time-windows, reference states.
Good news: there are other ways to implement your requirement:
- Using an attachment that has the list of nodes that are allowed to be part of the transaction; this method should be used if the blacklist is not updated frequently, and you can see the example here.
- Using reference states, where you can create a state that has the allowed parties and require the existence of that reference state in your transaction; this method should be used when the blacklist is updated more frequently. You can read about reference states here.
- Using Oracles; this option applies when there is a world organization (or, for instance, the Ministry of Trade of some country) that provides an Oracle which returns the list of blacklisted parties, and you use that Oracle in your transaction. You can read about Oracles here.

Managing a connection to an external component in an Aggregate

I just discovered the Axon framework and am very eager to use it, as it answers a lot of my questions and concerns about DDD.
The first project I'd like to use it on involves small cameras which are controlled via the GigEVision protocol (over TCP and UDP for the control and stream channels). I think my problem would be the same for any case where we maintain a connection to an external component or, more generally, where we want to link an external component's lifecycle to Axon's lifecycles.
I'd like to have an Aggregate named Camera to which I can send Commands to grab 1 image or start grabbing N images at a certain FPS.
What I'm not sure about is how to manage the connection to an external component in my Aggregate.
- Should I inject the client for my camera into my Camera Aggregate and consider connecting to it as part of my protocol / business commands? In that case, how would I link the camera lifecycle (a camera gets disconnected all of a sudden) to the aggregate lifecycle (create a corresponding CameraDisconnectedEvent)?
- Should the connection be handled in a sidecar Saga which gets the camera client injected, the Saga starting on a ConnectionRequestedEvent and stopping as soon as we get a connection error from the camera? I think I would get the same issue of linking the connection lifecycle to the lifecycle of the Saga.
- Am I leaking implementation details into the business layer, and should I manage the issue another way?
- Am I just using the wrong tool for this job and should not try to force it into Axon?
Thank you very much in advance, hope my message and issues make sense.
Best regards,
First and foremost, what you should do is ensure that the language spoken by the GigEVision protocol by no means leaks over into your other domain.
These two should be separate and remain so, as they cover different concerns.
This brings to light the necessity to have a translation layer of some sort.
More formally, this is called a context mapping. From a DDD perspective, you would take this even further by talking about an Anti-Corruption Layer.
The name already says it, you add a layer to ensure you are not corrupting your domain logic with that from another domain.
Another useful topic to read up on here would be the notion of a Bounded Context.
I digress though, let's go back to the problem at hand: where to position this anti-corruption layer.
What is currently unclear to me is what domain requirements are in place that require the connection to be maintained all the time when requesting images.
Is the command you want to send requesting for a live feed? Or just "some" images from a given time frame?
In both scenarios, to be honest, I am not immediately convinced that any of these operations requires validation through a single Camera aggregate.
You could still model this in a command and event format, as the messaging paradigm is very helpful to allow clean segregation of concerns.
But given the current description, I am uncertain whether you need DDD's idea of an Aggregate to model this as a single Aggregate.
I might be wrong on this note, but I just don't know enough about your domain at this stage.
That's my two cents to the situation, hoping this helps!

Specifying a full topology with MediaFoundation

I've created a topology for a video file which contains just one stream (no audio).
It contains three nodes which are connected in order:
a source stream node
an Mpeg4Part2VideoDecoder as transform node
an activate object for the EVR as output node
Calling SetTopology() and allowing for a partial topology results in working playback. However, I am trying to resolve the full topology myself.
Therefore, I first need to bind my output node to a media sink. I followed the guidelines specified in the manual, and all the required calls seem to succeed. When setting the full topology, I receive the MESessionTopologySet event.
Unfortunately, playback doesn't work, but I don't get any errors.
Is there anything else required when creating a full topology?
I recall reading somewhere in the MSDN docs that the topology loader, which is used when setting a partial topology, also sets media types. Is this required, and if so, where can I find more information on this?
Matt Andrews answered this one for me on the MSDN forums:
You definitely need to negotiate your own media types if you are bypassing the topology loader. This means obtaining the source's media type from IMFMediaTypeHandler, setting it on the downstream transform, and then, for each node down the chain, querying the available input and output types to find a compatible media type. It is much easier to use the topology loader unless you have a specific need to avoid it.

How to restrict users to subsets of data in ASP.Net 2.0+

Imagine an ASP.Net 2.0+ app that uses the built-in role-based security to restrict users to certain pages or actions.
Further suppose that rules exist that restrict individual users to subsets of data based on the user's attributes (however those are implemented). For example, a manager can only look at performance history for his or her own subordinates. A sales manager can only look at sales target achievement information for his or her own sales reps. A sales rep can only look at pending orders for his or her own customers.
These rules affect how dropdowns and other multi-record displays are filled, and also what values can be typed into textboxes for search and lookup purposes. There are many other possible functions and screen types that could potentially be affected, so this is a cross-application concern.
My question: what kind of patterns or techniques would make implementing such restrictions across an application easier?
You could implement the repository pattern. Then, when you make calls to it, you pass in the current user and restrict the data returned based on that user, or you pass in the user when you construct the repository.
Repository Pattern
Something like:

public class DataRepository
{
    private readonly User _user;

    public DataRepository(User user)
    {
        _user = user;
    }

    public IEnumerable<SalesData> GetMonthlySalesData()
    {
        // query the sales data here, filtered to what _user is allowed to see
        throw new NotImplementedException();
    }
}
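A minimal usage sketch follows (the CurrentUser helper and the salesGrid WebForms control are illustrative assumptions, not part of the original answer); the point is that the page never filters by user itself, the repository owns that rule:

// Resolve the authenticated user however the app does it (illustrative helper).
User current = CurrentUser.Get();
var repository = new DataRepository(current);

// Every query from this repository is already scoped to the current user's data.
IEnumerable<SalesData> sales = repository.GetMonthlySalesData();
salesGrid.DataSource = sales;   // e.g. a GridView on a WebForms page
salesGrid.DataBind();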
A lot of water has gone under the bridge since the OP's original question. The answers provided were great then but they all require coding.
Since then, attribute-based access control (ABAC), an access control model put forward by NIST, has matured considerably. ABAC helps you express your authorization logic as configurable policies which you define, maintain, and execute externally in a central policy decision point.
There are several solutions out there that implement ABAC. I recommend you check out Wikipedia for more information.
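To make the idea concrete, here is a rough, framework-free sketch of a central policy decision point (all names and rules below are illustrative assumptions based on the examples in the question, not any ABAC product's API):

using System.Collections.Generic;

// The subject (current user) described by attributes rather than just a role name.
public record Subject(string UserId, string Role, ISet<string> SubordinateIds);

// The request: what action is attempted and who owns the data being asked for.
public record AccessRequest(string Action, string ResourceOwnerId);

public static class PolicyDecisionPoint
{
    public static bool IsAuthorized(Subject subject, AccessRequest request) => request.Action switch
    {
        // "A manager can only look at performance history for his or her own subordinates."
        "ViewPerformanceHistory" => subject.Role == "Manager"
                                    && subject.SubordinateIds.Contains(request.ResourceOwnerId),
        // "A sales rep can only look at pending orders for his or her own customers"
        // (the order's owning rep must be the caller).
        "ViewPendingOrders" => subject.Role == "SalesRep"
                               && request.ResourceOwnerId == subject.UserId,
        _ => false
    };
}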
Consider using your own custom attribute for these cross-cutting concerns, possibly implemented with a claims-based identity system (e.g. IClaimsIdentity in Windows Identity Foundation) for the required attributes.
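For the claims part, a small sketch of reading user attributes from a claims-based identity (shown with the newer System.Security.Claims types for brevity; WIF's IClaimsIdentity works analogously, and the claim type URIs below are made-up examples):

using System.Security.Claims;

public static class UserAttributeChecks
{
    // Returns the user's sales region, if the identity provider issued one at sign-in.
    public static string GetRegion(ClaimsPrincipal user) =>
        user.FindFirst("http://example.com/claims/region")?.Value;

    // True if this manager was issued a "subordinate" claim for the given employee.
    public static bool CanViewSubordinate(ClaimsPrincipal manager, string employeeId) =>
        manager.HasClaim("http://example.com/claims/subordinate", employeeId);
}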
Since you are controlling data here based on users, I would also look into the Model View Presenter pattern for WebForms, since you are binding data, etc.
See: http://msdn.microsoft.com/en-us/library/ff647117.aspx
This allows you to better test your output based on whatever defined permissions you have and provides a better way to track your bindings to combo boxes, etc. than sticking a bunch of junk in your code behind.

MVVM Messaging vs RaisePropertyChanged<T>

What is the difference between MVVM messaging and RaisePropertyChanged?
I'm trying to run a function in View model A when a property in view model B changes, which approach is a better one to use - Messaging or RaisePropertyChanged broadcast?
Thanks,
Nikhil
Messaging decouples your view models. It's like a tweet: you send a message out into the air, and anyone who has registered to listen for it can read it. PropertyChanged is used by the UI to know that something changed and to redraw the values.
Messaging is definitely the best choice. MVVM Light has a built-in option to broadcast the message; you can use the mvvminpc code snippet.
It's surprising your post wasn't answered sooner. Maybe this answer will still be useful to someone out there.
To follow up on Kevin's post:
Messages are indeed used for decoupled communication. This means that once the message is sent, one or more recipients - who have registered their interest in a particular message type - are notified.
In general I use INotifyPropertyChanged when communication between the view and the view model is concerned (via data binding), and messages when I want to communicate between multiple view models, or upward from the view model to the view where data binding is not involved.
When receiving messages in the view model make sure that you call Cleanup to unregister from the Messenger. If you handle the message in the view - where no Cleanup is available - register for the Unloaded event and call Messenger.Unregister(this) from there.
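A minimal sketch of the broadcast approach with MVVM Light (ViewModelB and its Status property are made-up names; the broadcast overload of RaisePropertyChanged and PropertyChangedMessage<T> are the MVVM Light pieces the mvvminpc snippet relies on):

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Messaging;

public class ViewModelB : ViewModelBase
{
    public const string StatusPropertyName = "Status";
    private string _status = string.Empty;

    public string Status
    {
        get { return _status; }
        set
        {
            if (_status == value) return;
            var oldValue = _status;
            _status = value;
            // broadcast: true also sends a PropertyChangedMessage<string> through the Messenger
            RaisePropertyChanged(StatusPropertyName, oldValue, value, true);
        }
    }
}

public class ViewModelA : ViewModelBase
{
    public ViewModelA()
    {
        // Run a function whenever ViewModelB broadcasts a change to Status.
        Messenger.Default.Register<PropertyChangedMessage<string>>(this, m =>
        {
            if (m.PropertyName == ViewModelB.StatusPropertyName)
            {
                // react to m.NewValue here
            }
        });
    }

    public override void Cleanup()
    {
        base.Cleanup();   // unregisters this view model from the Messenger
    }
}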
Depends on how View model A and View model B relate to each other.
If View Model A already has a direct reference to B for a valid reason (e.g. A "owns" B, i.e. it is a parent-child relationship), then just subscribe to B's PropertyChanged event. That way B doesn't need to remember that something depends on it, which is in the spirit of reactive programming.
(Off topic: if B has a longer lifetime than A, don't forget to unsubscribe from the event, because a "long-living publisher / short-living subscriber" pair can result in memory leaks.)
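A small sketch of that parent-child case (ParentViewModel stands in for A and ChildViewModel for B; the Status property name is illustrative):

using System.ComponentModel;

public class ChildViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _status;
    public string Status
    {
        get => _status;
        set { _status = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Status))); }
    }
}

public class ParentViewModel
{
    private readonly ChildViewModel _child;   // "B", owned by this view model ("A")

    public ParentViewModel(ChildViewModel child)
    {
        _child = child;
        _child.PropertyChanged += OnChildPropertyChanged;
    }

    private void OnChildPropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        if (e.PropertyName == nameof(ChildViewModel.Status))
        {
            // run the function that depends on B's property here
        }
    }

    // If the child can outlive the parent, unsubscribe to avoid the leak mentioned above.
    public void Detach() => _child.PropertyChanged -= OnChildPropertyChanged;
}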
If A and B are unrelated / in very different parts of the app then messaging might be a better choice.
Messaging should be used when it solves a specific problem. If you use it everywhere, your application becomes difficult to maintain and the code harder to understand later.
