I am trying to set up a project with ngrx/store and normalizr to flatten the data. I am finding that if I denormalize the data in the selector, then I can't use OnPush change detection on the components (it makes no difference), because denormalize returns new instances.
Is there a way around this? At which level is it best to handle denormalization?
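For reference, here is a minimal sketch of the setup being described, assuming normalizr's denormalize and @ngrx/store selectors; the entity names and state shape are invented. Wrapping the denormalization in a memoized createSelector at least keeps the returned reference stable until the underlying normalized slices change.

```typescript
// Sketch only: memoized denormalizing selector (entity names are hypothetical).
import { createFeatureSelector, createSelector } from '@ngrx/store';
import { denormalize, schema } from 'normalizr';

const userSchema = new schema.Entity('users');

interface EntitiesState {
  users: { [id: string]: { id: string; name: string } };
  selectedUserId: string | null;
}

const selectEntitiesState = createFeatureSelector<EntitiesState>('entities');
const selectUsers = createSelector(selectEntitiesState, s => s.users);
const selectSelectedUserId = createSelector(selectEntitiesState, s => s.selectedUserId);

// createSelector memoizes the projector, so denormalize only runs (and only
// produces a new instance) when `users` or the selected id change by reference.
// In between, OnPush components keep receiving the same object.
export const selectSelectedUser = createSelector(
  selectUsers,
  selectSelectedUserId,
  (users, id) => (id ? denormalize(id, userSchema, { users }) : null)
);
```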
Structurally, there is one project with two organizations. Each organization "resides" in its own cartridges.
There are two gross calculations, each with its own rule set, and both are registered in the Component Framework.
With this configuration, the second defined calculation overrides the first one.
How can that be solved architecturally, so that basket calculation is separated by organization?
Or will I need to have one gross calculation with one rule set, put different rules in that set that analyze the site/app, and move these calculation classes to a common cartridge shared by both organizations?
With the described preconditions, you can go in the direction #johannes-metzner pointed you to.
The basket calculation resolves the RuleSet implementation by its name, which is resolved by a call to a pipeline extension point.
So you could try to provide your own implementation for the pipeline extension point ProcessBasketCalculation-GetRuleSet with a higher priority than the default implementation. The implementation has to return the RuleSetName specific to your organization. The calculation should then resolve the RuleSet behind it and use it for the calculation.
You can also provide different implementations per app. So for app X in org1 you can bind your RuleSet A, and for app Y in org2 you can bind RuleSet B.
I have an entity that represents a relationship between two entity groups, but the entity belongs to one of the groups. However, my queries for this data are mostly going to be in the context of the other entity group. To support these queries I see two choices: a) create a global index that has the other entity group's key as a prefix, or b) move the entity into the other entity group and create an ancestor index.
I saw a presentation which mentioned that ancestor indexes map internally to a separate table per entity group, while there is a single table for a global index. That makes me feel that ancestors are better than global indexes that include the ancestor key as a prefix, for this specific use case where I will always be querying in the context of some ancestor key.
Looking for guidance on this in terms of performance, storage characteristics, transaction latency and any other architectural considerations to make the final call.
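Purely as an illustration of the two options (using the Node.js client @google-cloud/datastore; the kind and property names are invented, not from the question):

```typescript
// Sketch of the two query styles; kind/property names are hypothetical.
import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

async function main() {
  const groupKey = datastore.key(['Team', 'team-1']);

  // (a) Keep the relationship entity where it is and query a global index on a
  // property that stores the other entity group's key.
  const byProperty = datastore
    .createQuery('Membership')
    .filter('team', '=', groupKey);
  const [viaGlobalIndex] = await datastore.runQuery(byProperty);

  // (b) Re-parent the entity under the other group and use an ancestor query
  // (strongly consistent, scoped to that entity group).
  const byAncestor = datastore
    .createQuery('Membership')
    .hasAncestor(groupKey);
  const [viaAncestor] = await datastore.runQuery(byAncestor);

  console.log(viaGlobalIndex.length, viaAncestor.length);
}

main().catch(console.error);
```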
From what I was able to find, I would say it depends on the type of work you'll be doing. I looked at this doc and it suggests you avoid writing to an entity group more than once per second. Indexing a property can also result in increased latency. It also states that if you need strong consistency for your queries, you should use an ancestor query. That doc has a lot of advice on how to avoid latency and other issues; it should help you make the call.
I ended up using a third option, which is to have another entity denormalized into the other entity group and run ancestor queries on it. This allows me to efficiently query data for either of the entity groups. Since I was already using transactions, denormalizing wouldn't cause any inconsistencies, and everything seems to work well.
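A rough sketch of that third option with the Node.js client (@google-cloud/datastore), with invented kind names: the denormalized copy is written in the same transaction as the original, so the two entity groups stay consistent and each side can be read with an ancestor query.

```typescript
// Sketch only: write the original and the denormalized copy in one transaction.
import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

async function saveMembership(userId: string, teamId: string, role: string) {
  const transaction = datastore.transaction();
  await transaction.run();
  try {
    // Original entity in the User entity group.
    transaction.save({
      key: datastore.key(['User', userId, 'Membership', teamId]),
      data: { role },
    });
    // Denormalized copy in the Team entity group, so queries from the "other"
    // side can also be cheap ancestor queries.
    transaction.save({
      key: datastore.key(['Team', teamId, 'Membership', userId]),
      data: { role },
    });
    await transaction.commit();
  } catch (err) {
    await transaction.rollback();
    throw err;
  }
}
```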
How can I define a global variable within a component that can be accessed by all runnables that belong to this component without using IRVs in a component model?
There are three possible ways to achieve this:
InternalBehavior.staticMemory: this kind of variable is typically defined if you want to make a variable in your code visible to a measurement and calibration system, i.e. it is possible to derive an A2L description of the variable for downstream processing in a M&C tool. This variant is only a viable option if the enclosing software-component isn’t multiply instantiated.
SwcInternalBehavior.arTypedPerInstanceVariable: here you define a variable that is also supported in multiply instantiated software-components. The variable has a modeled data type and is allocated by the RTE, which also provides a dedicated API for accessing the variable.
SwcInternalBehavior.perInstanceMemory: here you define a variable by directly using the C data type, i.e. there is no modeling of the data type. The variable is allocated by the RTE that also provides a dedicated API for accessing the variable.
None of the mentioned approaches provide any form of automatic consistency mechanism. Securing data consistency is entirely left to the application software with the help of mechanisms standardized by AUTOSAR.
The answer is: Per-Instance-Memory (PIM)
As we know, when saving data in a redux store, it's supposed to be transformed into a normalized state. So embedded objects should be replaced by their ids and saved within a dedicated collection in the store.
I am wondering if that should also be done when the relationship is a composition, meaning the embedded data isn't of any use outside of the parent object.
In my case the embedded objects are registrations, and the parent object is a (real life) event. Normalizing this data structure to me feels like a lot of boilerplate without any benefit.
State normalization is more than just how you access the data by traversing the object tree. It also has to do with how you observe the data.
Part of the reason for normalization is to avoid unnecessary change notifications. Objects are treated as immutable, so when one changes a new object is created; that way a quick reference check can indicate whether something in the object changed. If you nest objects and a child object changes, then you have to change the parent as well. If some code is observing the parent, it will get change notifications every time a child changes, even though it might not care. So depending on your scenario, you may end up with a bunch of unnecessary change notifications.
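As a hypothetical sketch (the nested event/registration shape is invented to match the question): an immutable update to a single registration forces new references for every ancestor, which is exactly what triggers those extra notifications.

```typescript
// Sketch: nested (composition) shape; updating one child creates new
// references for the parent event and the events map as well.
type NestedState = {
  events: Record<string, {
    id: string;
    title: string;
    registrations: { id: string; attendee: string }[];
  }>;
};

function renameAttendee(
  state: NestedState,
  eventId: string,
  registrationId: string,
  attendee: string
): NestedState {
  const event = state.events[eventId];
  return {
    ...state,
    events: {
      ...state.events, // new events map reference
      [eventId]: {
        ...event, // new event reference, even though only one child changed
        registrations: event.registrations.map(r =>
          r.id === registrationId ? { ...r, attendee } : r
        ),
      },
    },
  };
}
```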
This is also partly why you see lists of entities broken out into an array of identifiers and a map of objects. In relation to change detection, this allows you to observe the list (whether items have been added or removed) without caring about changes to the entities themselves.
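A minimal sketch of that shape for the events/registrations case (field names invented): the same rename now only touches the registrations slice, so observers of the event list see no change.

```typescript
// Sketch: normalized shape; the event list and event objects keep their
// references when a registration changes.
type NormalizedState = {
  events: {
    byId: Record<string, { id: string; title: string; registrationIds: string[] }>;
    allIds: string[]; // observe this for additions/removals only
  };
  registrations: {
    byId: Record<string, { id: string; eventId: string; attendee: string }>;
  };
};

function renameAttendee(state: NormalizedState, registrationId: string, attendee: string): NormalizedState {
  return {
    ...state,
    registrations: {
      byId: {
        ...state.registrations.byId,
        [registrationId]: { ...state.registrations.byId[registrationId], attendee },
      },
    },
  };
}
```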
So it depends on your usage. Just be aware of the cost of observing and the impact your state shape has on that.
I don't agree that data is "supposed to be [normalized]". Normalizing is a useful structure for accessing the data, but you're the architect to make that decision.
In many cases, the data stored will be an application singleton and a descriptive key is more useful than forcing some kind of id.
In your case I wouldn't bother unless there is excessive data duplication, especially because you would then have to denormalize for the object to function properly.
In my design I have to store a lot of properties (say 20 properties) in the same datastore table.
But usually most of the entities will only populate around 5 of those properties.
Is this design resource-consuming? Will the unused properties consume any storage or hurt performance?
Thanks,
Karthick.
If I understand your question correctly, you are envisioning a system where you have a Kind in your Datastore whose Entities can have differing subsets of a common property-key space W. Entity 1's property set might be {W[0], W[1]}, and Entity 2's property set might be {W[1], W[2], W[5]}. You want to know whether this polymorphism (or "schemalessness") will cost you space, and whether each Entity will pay for the full property set W the way a row with many NULL columns might in some naive MySQL implementations.
The short answer is no - due to the schemaless nature of Datastore, having polymorphic entities in a kind (the entities have all different names and combinations of properties) will not consume extra space. The only way to have these "unused" properties consume extra space is if you actually did set them on the entity but set them to "null". If you are using the low-level API, you are manually adding the properties to the entity before saving it. Think of these as properties on a JSON object. If they aren't there, they aren't there.
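For illustration only (shown here with the Node.js client @google-cloud/datastore; kind and property names invented), two entities of the same kind can simply carry different property sets:

```typescript
// Sketch: polymorphic entities of one kind; unset properties simply don't exist.
import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

async function main() {
  await datastore.save([
    {
      key: datastore.key('Item'), // incomplete key, Datastore assigns an id
      data: { name: 'widget', price: 10 }, // only these two properties are stored
    },
    {
      key: datastore.key('Item'),
      data: { name: 'gadget', color: 'red', weight: 1.2 }, // a different subset
    },
  ]);
}

main().catch(console.error);
```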
In MySQL, having a table with many NULL-able columns can be a bad idea, depending on the engine, indexes, etc., but take a look at this talk if you want to learn more about how the Datastore actually stores its data using BigTable. It's a different storage implementation underneath, and so there are different best practices and possibilities.