How can I define a global variable within a component that can be accessed by all runnables that belong to this component without using IRVs in a component model?
There are three possible ways to achieve this:
InternalBehavior.staticMemory: this kind of variable is typically defined if you want to make a variable in your code visible to a measurement and calibration system, i.e. it is possible to derive an A2L description of the variable for downstream processing in an M&C tool. This variant is only a viable option if the enclosing software-component isn't multiply instantiated.
SwcInternalBehavior.arTypedPerInstanceVariable: here you define a variable that is also supported in multiply instantiated software-components. The variable has a modeled data type and is allocated by the RTE, which also provides a dedicated API for accessing the variable.
SwcInternalBehavior.perInstanceMemory: here you define a variable by directly using the C data type, i.e. there is no modeling of the data type. The variable is allocated by the RTE, which also provides a dedicated API for accessing the variable.
None of the mentioned approaches provide any form of automatic consistency mechanism. Securing data consistency is entirely left to the application software with the help of mechanisms standardized by AUTOSAR.
The answer is: Per-Instance-Memory (PIM)
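For illustration, here is a minimal sketch of how a runnable could access such a per-instance memory through the RTE-generated Rte_Pim API. The component name "MySwc", the runnable name, and the PIM name "Counter" (of C type uint32) are assumptions made for the example; for a multiply instantiated component the access additionally takes the instance handle.

#include "Rte_MySwc.h"   /* RTE application header generated for the (hypothetical) SWC "MySwc" */

void MySwc_MainRunnable(void)
{
    /* "Counter" is an assumed PerInstanceMemory declared in the SwcInternalBehavior */
    uint32 *counter = Rte_Pim_Counter();

    /* Remember: there is no automatic consistency mechanism. If several runnables
       that can preempt each other use this PIM, protect the access yourself,
       e.g. with an exclusive area. */
    (*counter)++;
}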
We have several test components grouped together. I'd like to do some parameter validation at the beginning and skip the component altogether when certain conditions are met. I wanted to use ExitComponent for this, but I found that it leaves not only the component but the whole group.
I really do not want to use an extensive If-Else statement spanning my whole component, which is the only solution I can see right now.
Example:
'Skip component if value is empty
If Parameter("Par1") = "" Then
    'Cannot use ExitComponent as I do not want to leave the whole component group
    ?????
End If
'Start processing data in the component
Does anyone have an idea?
The BPT approach is to use the ALM wizards and forms to create and configure almost all aspects of your tests. If you select a flow or a test case, you can configure the Run Condition of each subcomponent/flow in the Test Script tab. As the linked documentation explains, you can do this based on parameters.
Here is the tutorial for setting Run Conditions.
P.S.: In case you have to check complex things and not simple parameters, well:
Create a component that checks the complex condition (the relation of stellar objects to the sun - just kidding, of course some AUT-specific condition) and shares the result with the world via an output parameter. The subsequent components can then react to that parameter, for example in their Run Conditions.
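To sketch the idea (the parameter names are assumptions; the output parameter "SkipRest" would have to be defined for the component in ALM):

'Precheck component (sketch): decide whether the rest of the flow should run
'and publish the decision via the assumed output parameter "SkipRest"
If Parameter("Par1") = "" Then
    Parameter("SkipRest") = "Y"
Else
    Parameter("SkipRest") = "N"
End If

The Run Conditions of the subsequent components can then be configured against the flow parameter that "SkipRest" is mapped to.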
I have set up a couple of problems in OpenMDAO, and I want to extract the "params" vector from one and use it to set the inputs of another. Basically, the first problem optimizes some stuff, then I want to use that solution in another problem to do something else (see Implementing AMMF within OpenMDAO).
I am trying to make this general, so that I do not have to explicitly name the variables that need to be exchanged. This way, if the two problems take the same variables as inputs, it should just work...
Now when I run the problem, I can access a params member from the group, but that params vector contains the default values, not the values from the last run. So how do I get the actual vector?
A second part to this question is: how can you "set" all the parameters in one operation?
A silly limitation of Stack Overflow is that I cannot use the word "problem" in the title. I get it, but what if I want to refer to an OpenMDAO object called Problem?
You should almost never need to access the params vector of a Problem. You should only need to interact with the unknowns vector, which you can do via the Problem itself (e.g. prob['some_var']).
In your case, to make something totally automatic based on naming only, you might actually need to get the unknowns vector itself from the root group (root.unknowns). You can loop over it like a dictionary and get (var_name, meta_data) pairs. You can use that to get each variable's value and then set the same variable name in whatever downstream problem you wish to use.
If the two problems are totally identical, you could just blindly loop over all the values in the unknowns dictionary. But if they are not the same and merely share SOME of the same variable names, you'll have to be a bit more cautious and check whether a variable from the first problem exists in the second.
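A rough sketch of that idea, assuming an OpenMDAO 1.x-style API and two hypothetical, already set up problems prob_a and prob_b, where prob_a has already been run:

# Copy every unknown of the first problem into the second one, by name.
for var_name, meta in prob_a.root.unknowns.items():
    # Only transfer variables that also exist in the second problem
    # (the exact membership test may differ between OpenMDAO versions).
    if var_name in prob_b.root.unknowns:
        prob_b[var_name] = prob_a[var_name]

prob_b.run()  # the downstream problem now starts from prob_a's solution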
I'm having trouble understanding this documentation for the search/6 predicate in the ECLiPSe constraint programming framework.
I understand that the choice parameter basically affects the value ordering.
It also seems like the selection method chooses the variable ordering, but I don't entirely understand all the options for it.
I don't really understand the other parameters so I was wondering if someone could explain them in words. I have a pretty good understanding of the theory of constraint logic programming so feel free to refer to those concepts. I just don't understand a lot of the CS lingo in that documentation (arity, etc.)
Thank you
I'll try to answer it as briefly as possible, since search/6 is one of the most complex predicates you can find in the ECLiPSe system.
Any more detailed follow-up questions would probably better be asked in the ECLiPSe user mailing list, though.
The search/6 predicate is a generic predicate for controlling the search for a solution of a CLP problem. It allows the user to control the shape of the search tree (the order of variables along the branches, the order of the branches, and the portion of the search tree that is visited). The predicate has 6 parameters: search(+L, ++Arg, ++Select, +Choice, ++Method, +Option). (+ and ++ denote the mode of the parameter)
The first two parameters go together. L is either a list of variables or a list of terms. If it's the former, then Arg must be 0; if it's the latter, then Arg denotes the argument position of the variables that should be instantiated during the search, e.g.:
search([A,B],0,input_order,indomain,complete,[]).
or
search([p(1,A),p(2,B)],2,input_order,indomain,complete,[]).
In both cases, the variables A and B are instantiated during search.
The third parameter is the selection method. This method is used by search/6 to select the next variable from the list L to instantiate.
The simplest option is input_order: the search simply iterates over the variables in the list. In the examples above, it would instantiate A first, then B. The other options consider the domain size and/or the number of constraints attached to the variables, and make the selection accordingly. E.g., first_fail chooses the variable with the smallest domain. If the current domain of A is [1,2,3] and B has the domain [1,3], then B will be selected and instantiated first. If more than one variable has the same smallest domain size, then the first of these in input order will be selected. Selection methods that take the domain size into account achieve a dynamic variable ordering, since the domain sizes will change (shrink) during search, depending on the amount of propagation that the constraints achieve.
The other selection methods should now be self-explanatory.
It is also possible to define your own selection method, provided that the predicate that implements it has arity 2, i.e., has two parameters. The predicate must take a variable as input and calculate some criterion value. The variable with the smallest criterion value will be selected.
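As a sketch (assuming lib(ic) is loaded; the predicate name is of course up to you), a selection method that prefers the variable with the largest domain could look like this:

% User-defined selection method (arity 2): receives a variable and returns a
% criterion value; search/6 selects the variable with the smallest criterion.
my_largest_first(Var, Criterion) :-
    get_domain_size(Var, Size),
    Criterion is -Size.

% used e.g. as: search(Vars, 0, my_largest_first, indomain, complete, []).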
The fourth parameter is the choice method. Once a variable is selected, the choice method controls the order in which the values in its domain are tried during search.
The simplest option is indomain, which chooses the values in the variable's current domain in ascending order. I.e., if variable A has the domain [1,3,5], then the search will initially bind A to 1, on backtracking bind it to 3, and finally to 5. indomain_middle will start with 3, then 1, then 5.
The more complex choice methods (i.e., other than indomain) will remove a tried value on backtracking, i.e., basically add additional constraints like A#\=1. This will cause additional propagation which may in turn allow earlier detection of infeasibilities. You can see the effect when running the n-queens example from the search/6 documentation that you linked to in your question.
Again, it is also possible to define your own choice method. The predicate must be of arity 1 or 3. If the arity is 1, then the predicate takes one variable as input and binds it to a value (or makes some other choice which alters the domain of the variable). If the arity is 3, then you can use the two additional parameters to pass along some state information which you can use to make the choice.
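For example, a user-defined choice method of arity 1 that mimics indomain with value removal on backtracking could look roughly like this (again assuming lib(ic)):

% User-defined choice method (arity 1): binds the selected variable to its
% smallest domain value, and removes that value on backtracking before
% trying the remaining values.
my_indomain_min(Var) :-
    get_min(Var, Min),
    ( Var = Min
    ;
        Var #\= Min,
        my_indomain_min(Var)
    ).

% used e.g. as: search(Vars, 0, first_fail, my_indomain_min, complete, []).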
The fifth parameter is the search method. This controls the size of the section of the search tree that the search should explore (whereas the selection method controls the order of the variables along the branches of the tree, and the choice method controls the order of the branches in the search tree).
The simplest option is complete, which searches the tree left-to-right until the tree is exhausted. All other options (apart from symmetry breaking) are incomplete search methods, i.e., there will be branches in the search tree that are left unexplored. If the solution is on the leaf of such an unexplored branch, then it will not be found. You have to make sure that selection and choice methods shape the search tree in a way that the incomplete search method is able to find the solution. The option bbs, for instance, restricts the number of backtracks that can be made during search. If that number is exhausted, then the search will stop.
Symmetry breaking will only exclude branches that are equivalent (symmetrical) to other branches, in some way.
The sixth parameter is a list of possible additional options, described in the search/6 documentation. Normally, you won't need them.
I have a stored procedure that produces a number (let's say 50), which is rendered as an anchor with the number as the text. When the user clicks the number, a popup opens, calls a different stored procedure, and shows 50 rows in an HTML table. The 50 rows are the disaggregation of the number the user clicked. In summary: two different aspx pages and two different stored procedures that need to show the same amount, one being the aggregate and the other the disaggregation of that aggregate.
Question: how do I test this code so that I know there is an error somewhere if the numbers do not match?
Note: this is a simplified example; in reality there are hundreds of anchor tags on the page.
This kind of testing falls outside the standard code-level testing paradigm. Here you are explicitly validating the data, and it sounds like you need a utility to achieve this.
There are plenty of environments and approaches you can take, but here are two possible candidates:
SQL Management Studio: here you can write a simple script that runs through the various combinations from the two stored procedures, ensuring that the number and the rows match up (a rough sketch follows this list). This will involve some inventive T-SQL but nothing particularly taxing. The main advantage of this approach is that you'll have bare-metal access to the data.
Unit Testing: as mentioned, your problem is somewhat outside the typical testing scenario where you would ordinarily mock the data and test into your business logic. However, that doesn't mean you cannot write the tests (especially if you are doing any DataSet manipulation prior to this processing). Check out this link and this one for various approaches (note: if you're using VS2008 or above, you get the Testing Projects built in from the Professional version up).
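To give an idea of the SQL Management Studio approach, here is a rough sketch for a single key value; the procedure names, parameters, and column lists are placeholders you would replace with your own:

-- Compare the aggregate from the summary procedure with the row count
-- returned by the drill-down procedure for the same key.
DECLARE @Summary TABLE (Amount INT);          -- must match the summary result set
DECLARE @Detail  TABLE (Id INT /* , ... */);  -- must match the detail result set

INSERT INTO @Summary EXEC dbo.GetSummaryAmount @SomeKey = 42;
INSERT INTO @Detail  EXEC dbo.GetDetailRows    @SomeKey = 42;

IF (SELECT TOP 1 Amount FROM @Summary) <> (SELECT COUNT(*) FROM @Detail)
    PRINT 'Mismatch for key 42';
ELSE
    PRINT 'OK for key 42';

In practice you would wrap this in a loop or cursor over all the key values that appear on the page, and log every mismatch.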
In order to test what happens when the numbers do not match, I would simply change one of the stored procedures temporarily to return the correct amount + 1, or to always return zero, etc.