How can I define a global variable within a component that can be accessed by all runnables belonging to that component, without using IRVs (inter-runnable variables), in a component model?
There are three possible ways to achieve this:
InternalBehavior.staticMemory: this kind of variable is typically defined if you want to make a variable in your code visible to a measurement and calibration system, i.e. it is possible to derive an A2L description of the variable for downstream processing in an M&C tool. This variant is only a viable option if the enclosing software-component isn't multiply instantiated.
SwcInternalBehavior.arTypedPerInstanceVariable: here you define a variable that is also supported in multiply instantiated software-components. The variable has a modeled data type and is allocated by the RTE, which also provides a dedicated API for accessing the variable.
SwcInternalBehavior.perInstanceMemory: here you define a variable by directly using the C data type, i.e. there is no modeling of the data type. The variable is allocated by the RTE, which also provides a dedicated API for accessing the variable.
None of the mentioned approaches provide any form of automatic consistency mechanism. Securing data consistency is entirely left to the application software with the help of mechanisms standardized by AUTOSAR.
The answer is: Per-Instance-Memory (PIM)
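To make the last variant concrete, here is a rough C sketch of how a runnable might access a perInstanceMemory through the RTE-generated API; the header name, the PIM name and the exact signature all depend on your RTE generator and are assumed here:

    #include "Rte_MySwc.h"  /* generated application header; name assumed */

    void MySwc_Runnable(void)
    {
        /* Rte_Pim_<name>() returns a pointer to the RTE-allocated memory;
           "MyPim" is a hypothetical perInstanceMemory short name. */
        uint8 *counter = Rte_Pim_MyPim();

        /* No automatic consistency: guard concurrent access yourself,
           e.g. with an RTE exclusive area. */
        (*counter)++;
    }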
I need to write a QAbstractItemModel class that represents a hierarchy of different object types. I want to be able, at some point, to show a table/list containing only level 1 elements, only level 2 elements, and so on.
I am working on a network protocol analyzer tool, something like Wireshark. I am capturing socket.recv and socket.send events from a process. In my model those events are called NetworkEvent. Each network event may contain one or more Packet objects. Each packet has one or more Message objects, where a message, based on its code, is parsed as a defined struct.
(The question included an image of the program's class hierarchy, omitted here.)
The main window has a list and a tree. I expect to be able to show:
a table/list containing only network events, with their attributes;
a table/list containing only packets, with their attributes;
a table/list containing only the packets of a given network event;
a tree containing a packet/message hierarchy (with fields and sub-structures);
a table/list containing only messages;
a table/list containing only the messages of a given packet;
a tree containing a message hierarchy (with fields and sub-structures).
So I thought the best idea was to model the QAbstractItemModel as a tree. The first problem I encountered is that while each class has the concept of "children", each one has a different field that represents its children, so I have to take care of that inside the QAbstractItemModel.
Also, because a table/list of NetworkEvent doesn't have the same columns as a table/list of Packet, nor of Message, I can't properly use the same model to define all possible ways of showing the data. I suppose the correct way to do this would be to define proxy models for each kind of view.
Is there any better or easier way to approach this? Or is this the way to go?
So you create a common base family of polymorphic classes and use a base pointer as the data source for the model. A single role, the data object: from it, the individual delegates can access their specific data fields without having to implement everything through roles. The role-centric usage is really only applicable to isomorphic data sets.
Then you can customize the visual representation based on the actual individual object and the view type.
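A minimal sketch of that idea, assuming Python/PyQt5 (the question captures Python socket calls): the model hands out the node object itself through a single custom role, and views/delegates take whatever fields they need from it:

    from PyQt5.QtCore import Qt, QAbstractListModel, QModelIndex

    ObjectRole = Qt.UserRole + 1  # the single "data object" role

    class ObjectListModel(QAbstractListModel):
        """Flat model over a list of arbitrary node objects."""
        def __init__(self, objects, parent=None):
            super().__init__(parent)
            self._objects = objects  # e.g. NetworkEvent, Packet or Message instances

        def rowCount(self, parent=QModelIndex()):
            return 0 if parent.isValid() else len(self._objects)

        def data(self, index, role=Qt.DisplayRole):
            if not index.isValid():
                return None
            obj = self._objects[index.row()]
            if role == ObjectRole:
                return obj  # hand the object itself to the view/delegate
            if role == Qt.DisplayRole:
                return type(obj).__name__
            return None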
I wouldn't marry myself to any particular representation. I'd only implement a list interface; this gives more flexibility in how to represent the structure. You can draw simple lists as a list view or a table view, and you can also have tables or trees that are made of lists of lists.
In programming it is always a tree, which is very obvious if you structure your application well, so it is a simple matter of choosing how you visualize each node and its contents. It is common to even have different ways of visualizing the same data, both in terms of visual structure and actual delegates.
You will have a tremendously easier time implementing this in QML, especially the actual visual representation, using the generic object model outlined here. Although if your object count is very high, you might want to implement it as a single abstract item model rather than have every object be a list model to avoid the overhead. But unless you deal with millions and millions of items, the overhead is well worth the return.
I think you are on the right course with thinking of using multiple proxy models, one for each table. I would suggest starting with QSortFilterProxyModel and implementing your own filtering algorithm.
I will also suggest you might want to identify the type of data by using custom Qt::ItemDataRole values, one for each type. I think it will make the filtering easier.
You may want to experiment with List, Table, and Tree Views to see which suits your purpose the best. The beauty of the Model-View system is you can easily change the View once you have the model(s) down.
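For illustration, a sketch of such a filter, again assuming PyQt5; TypeRole is a hypothetical custom role that reports each node's type:

    from PyQt5.QtCore import Qt, QSortFilterProxyModel

    TypeRole = Qt.UserRole + 2  # hypothetical role exposing each node's type name

    class TypeFilterProxy(QSortFilterProxyModel):
        """Accepts only rows whose TypeRole matches the wanted type."""
        def __init__(self, wanted_type, parent=None):
            super().__init__(parent)
            self._wanted = wanted_type

        def filterAcceptsRow(self, source_row, source_parent):
            idx = self.sourceModel().index(source_row, 0, source_parent)
            return idx.data(TypeRole) == self._wanted

    # usage:
    # packets_only = TypeFilterProxy("Packet")
    # packets_only.setSourceModel(source_model)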
Moving resources is easy, but will the ARM template, which uses ‘uniqueString(resourceGroup().id)’, still update the moved resources or create new resources?
It will create new resources. From the documentation on uniqueString: it creates a deterministic hash from its parameters. You're feeding it the resource group id to create the hash; if that resource group id changes, so will the hash.
If you're appending a value to create uniqueness, a simple way is to change the template to refer to a variable called suffix and directly assign the previously created unique id to that suffix variable.
If you wanted to create a portable, reusable template that would allow you to move resources between groups, then you'll need a different value to seed your deterministic hash, and not something that's likely to change. I use subscription().id frequently.
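A sketch of the variable-based approach described above, as a fragment of a template's variables section (the storage account name is illustrative):

    "variables": {
        "suffix": "[uniqueString(subscription().id)]",
        "storageAccountName": "[concat('stg', variables('suffix'))]"
    }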
Another thing to look at is the template functions documentation, which allows you to define your own functions. You can encapsulate the unique-naming logic in there as well.
Probably not the answer you wanted to hear, though.
I wanted to modify my problem, so I broke it down into some groups.
None of the groups has solvers attached to it. They are just groups (composed of a few components) that make it easy for the user to differentiate the various parts of the product (each part being a group).
I am confused about IndepVarComp() and metadata (declared during initialization).
So far I have always used a single group and a single IndepVarComp() where the outputs were always the design variables.
If I continue this approach I could use metadata and pass large dictionaries, i.e. self.options.declare('aa', types=dict).
But looking at other codes based on OpenMDAO, they seem to use IndepVarComp as if it were the metadata (i.e. the values don't change during iterations, and it is used as a component inside that group).
Could you tell me whether one or the other is the right way to go?
IndepVarComp outputs should be used for any output that could conceivably be a design variable or a parameter that you might wish to change in your run script.
Metadata should be used for anything that is explicitly intended to be constant. It is intended to be set once during instantiation and then not changed after the fact.
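A minimal sketch of that split, assuming the OpenMDAO 2.x API and illustrative names: the potential design variable is an IndepVarComp output, while the true constant is declared as an option:

    import openmdao.api as om

    class Wing(om.ExplicitComponent):
        def initialize(self):
            # a true constant: set once at instantiation, never changed
            self.options.declare('num_sections', types=int, default=10)

        def setup(self):
            self.add_input('span', val=30.0)
            self.add_output('area', val=0.0)

        def compute(self, inputs, outputs):
            outputs['area'] = inputs['span'] * self.options['num_sections'] * 0.1

    prob = om.Problem()
    ivc = prob.model.add_subsystem('ivc', om.IndepVarComp())
    ivc.add_output('span', val=30.0)  # could be declared as a design variable
    prob.model.add_subsystem('wing', Wing(num_sections=20))
    prob.model.connect('ivc.span', 'wing.span')
    prob.setup()
    prob.run_model()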
Can somebody please point me to how we should separate out the models when splitting each service out to run on its own? A few of our models currently overlap among services. I went through some guidance that suggests using the canonical data model pattern, but I have also read that we should not use the canonical pattern.
One solution would be to keep all the models in a common place, which is what we are doing right now. But that seems problematic for managing the services as one repository per service.
Duplicated models are also fine with me if I can find a good rationale for it.
Vertical slicing and domain decomposition lead to each vertical slice having its own well-defined fields (not entities) that belong together (a bounded context/business function), defining a logical service boundary, and to decomposing the service boundary into business components and autonomous components (the smallest units of work).
Each component owns the data it modifies and is the only one in the system that can change the state of that data; you can have many copies of the data for readers, but only one logical writer.
Therefore it makes more sense not to share these data models, as they are internal to the component responsible for the data model's state (and are encapsulated).
Data readers can use view models, and these are dictated more by the consumer than by the producer. The view/read models reflect the "real"/"current" state of the transactional data (which is private to the data modifier); the read data can be updated by the data processor after any logical state change.
So it makes more sense to publish view/read models for read-only consumers...
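As a rough illustration of that ownership split, with entirely hypothetical names and shapes:

    from dataclasses import dataclass

    @dataclass
    class Order:                 # private write model, never shared across services
        order_id: str
        status: str
        internal_notes: str      # detail other services never need to see

    @dataclass
    class OrderSummary:          # published read model, shaped for consumers
        order_id: str
        status: str

    def publish_read_model(order: Order) -> OrderSummary:
        # emitted by the owning component after each logical state change;
        # readers keep and query their own copy
        return OrderSummary(order.order_id, order.status)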
Check out Udi Dahan's video
Does that make sense?
I am newly learning Java 8, and I see one definition related to functional programming: "A program created using only pure functions, no side effects allowed."
One of the side effects is "modifying a data structure in place".
I don't understand this, because in the end we somewhere need to talk to a database for storing, retrieving, or updating data.
If modifying the database is not functional, how do we talk to a database in functional programming?
"Modifying a data structure structure in place" means you directly manipulate the input datastructure (i.e. a List). "Pure functions" mean
the result is only a function of it's input and not some other hidden state
the function can be applied multiple times on the same input producing the same result. It will not change the input.
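A small Java sketch of the difference (the names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    class PureVsImpure {
        // impure: mutates its input in place, a side effect
        static void impureAppend(List<Integer> xs, int x) {
            xs.add(x);
        }

        // pure: input untouched; the same input always gives the same result
        static List<Integer> pureAppend(List<Integer> xs, int x) {
            List<Integer> copy = new ArrayList<>(xs);
            copy.add(x);
            return copy;
        }
    }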
In object-oriented programming, you define the behaviour of objects. Behaviour could be providing read access to the state of the object, write access to it, or both. When combining operations of different concerns, you can introduce side effects.
Take, for example, a Stack and its pop() operation. It will produce different results for every call because it changes the state of the stack.
In functional programming, you apply functions to immutable values. Functions represent a flow of data, not a change in state. So functions themselves are stateless, and the result of a function is either the original input or a value different from the input, but never a modified input.
OO also has functions, but those aren't pure in all cases, for example sorting: in non-functional programming you rearrange the elements of a list within the original data structure ("in place"). In Java, this is what Collections.sort() does.
In functional programming, you would apply the sort function to an input value (a List) and thereby produce a new value (a new List) with sorted values. The function itself has no state, and the state of the input is not modified.
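The same contrast in code, using Java 8 streams for the pure variant (class and variable names are illustrative):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import java.util.stream.Collectors;

    class SortExample {
        public static void main(String[] args) {
            List<Integer> data = new ArrayList<>(Arrays.asList(3, 1, 2));
            Collections.sort(data);               // in place: mutates data

            List<Integer> input = Arrays.asList(3, 1, 2);
            List<Integer> result = input.stream() // pure variant: builds a
                    .sorted()                     // new sorted list, the
                    .collect(Collectors.toList());// input stays unmodified
            System.out.println(input + " -> " + result);
        }
    }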
So to generalize: given the same input value, applying a function to this value produces the same result value.
Regarding the database operations: the contents of the database represent a state, namely the combination of all its stored values, tables etc. (a "snapshot"). Of course you could apply a function to this data, producing new data. Typically you store the results of operations back to the db, thus changing the state of the entire system, but that doesn't mean you change the state of the function or of its input data. Reapplying the function doesn't violate the pure-function constraints, because you apply it to new input data. But looking at the entire system as a "data structure" would violate the constraint, because the function application changes the state of the "input".
So the entire database system could hardly be considered functional, but of course you could operate on the data in a functional way.
But Java allows you to do both (OO and FP) and even mix both paradigms, so you could choose whatever approach fits your needs best.
Or, to quote from this answer:
If you have several needs intermixed, mix your paradigms. Do not restrict yourself to only using the lower right corner of your toolbox.