I wanted to restructure my problem, so I broke it down into several groups.
None of the groups has a solver attached; they are just groups (each composed of a few components) that make it easy for the user to distinguish the various parts of the product.
I am confused about IndepVarComp() versus metadata (declared in the initialization).
So far I have always used a single group and a single IndepVarComp() where the outputs were always the design variables.
If I continue with this approach, I could use metadata and pass large dictionaries, e.g. self.options.declare('aa', types=dict).
But looking at other codes based on OpenMDAO, they seem to use IndepVarComp as if it were metadata (i.e. the values don't change during iterations, and the IndepVarComp is used as a component inside the group).
Could you tell me whether one approach or the other is the right way to go?
IndepVarComp outputs should be used for any output that could conceivably be a design variable or a parameter that you might wish to change in your run script.
Metadata should be used for anything that is explicitly intended to be constant. It is intended to be set once during instantiation and then not changed after the fact.
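A minimal sketch of the distinction, assuming OpenMDAO 2.x (the component, variable, and option names here are invented for illustration):

import openmdao.api as om

class Aero(om.ExplicitComponent):
    def initialize(self):
        # constant configuration data: set once at instantiation, never changed
        self.options.declare('geometry', types=dict)

    def setup(self):
        self.add_input('x', val=0.0)
        self.add_output('y', val=0.0)

    def compute(self, inputs, outputs):
        # options are only read here, never iterated on
        outputs['y'] = self.options['geometry']['span'] * inputs['x']

prob = om.Problem()
ivc = om.IndepVarComp()
ivc.add_output('x', val=2.0)  # could later be declared a design variable
prob.model.add_subsystem('des_vars', ivc, promotes=['x'])
prob.model.add_subsystem('aero', Aero(geometry={'span': 10.0}), promotes=['x'])
prob.setup()
prob.run_model()

Here 'x' lives in the IndepVarComp because you might want to vary it from a run script or hand it to a driver, while 'geometry' is an option because it is fixed for the lifetime of the component.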
I am testing a login system which stores a session in a variable. The variable is dynamic, meaning its value changes with every request. I followed this tutorial http://sjpknight.com/correlating-dynamic-values-in-jmeter/ for doing this.
Now, the thing is, this session variable (${session_store}) is auto-generated only the first time it is used in my HTTP samplers (I use it three times in my application). The second time I use it, it holds the default value I set in the Regular Expression Extractor element; that is, the value extracted on the first call is not carried over to the second. Right now I am getting an error since the values of the two variable calls are not the same. Any suggestions on how to fix this?
Add a View Results Tree listener to your Test Plan and inspect the responses in all 3 cases. As far as I understand, you expect the same or a similar response as in the 1st case, and most likely it is different. You can test your regular expressions against live responses directly in the View Results Tree listener using its RegExp Tester mode.
In general, when it comes to regular expressions and HTML, you can never tell whether your expression will produce a match, as it depends on multiple factors; even a slight change in the markup (e.g. the value you're looking for starts spanning 2 lines) will ruin your test. So I would recommend going for the CSS/JQuery Extractor or XPath Extractor instead.
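To illustrate that last point outside JMeter, here is a small Python sketch (the HTML and the session_store attribute are invented; BeautifulSoup stands in for a CSS-selector-based extractor):

import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html_one_line = '<input name="session_store" value="abc123"/>'
html_two_lines = '<input name="session_store"\n       value="abc123"/>'

pattern = re.compile(r'name="session_store" value="(.+?)"')

for html in (html_one_line, html_two_lines):
    m = pattern.search(html)
    print('regex:', m.group(1) if m else 'NO MATCH')  # breaks on the 2-line markup
    tag = BeautifulSoup(html, 'html.parser').select_one('input[name=session_store]')
    print('css  :', tag['value'])                     # matches in both cases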
I'm designing an API to query the history of a value over a time period. Think about a temperature value, and you want to query all the values for today.
I have a from and a to parameter to specify the boundaries of the query.
The values available may not exactly match the boundaries requested. For example, if from is 2016-02-17T00:00:00Z, the first value may be at 2016-02-17T00:04:30Z. To fully draw a graph of the period, it is necessary to retrieve one more value outside the given range: the value at 2016-02-16T23:59:30Z is useful, and it would be convenient for the user not to have to make another query to retrieve it.
So as the API designer I'm thinking about a parameter with a pair of boolean values that would say, for each boundary: give me one more value if there is no value exactly on the boundary.
My question is how to name this parameter as English is not my native language.
Here are a few ideas I have so far but with which I'm not totally satisfied:
overflow=true,true
overstep=true,true
edges=true,true
I would also appreciate any links to existing APIs with that feature, either web API or in programming languages.
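For what it's worth, the intended semantics are easy to pin down in code. Here is a minimal Python sketch of the server-side lookup, assuming samples is a list of (timestamp, value) pairs sorted by timestamp (all names are invented):

import bisect
from datetime import datetime

def query(samples, start, end, widen_start=True, widen_end=True):
    # Return samples in [start, end], optionally widened by one extra
    # sample on each side when no sample falls exactly on the boundary.
    times = [t for t, _ in samples]
    lo = bisect.bisect_left(times, start)
    hi = bisect.bisect_right(times, end)
    if widen_start and lo > 0 and (lo == len(times) or times[lo] != start):
        lo -= 1   # no sample exactly at `start`: take the one before it
    if widen_end and hi < len(times) and (hi == 0 or times[hi - 1] != end):
        hi += 1   # no sample exactly at `end`: take the one after it
    return samples[lo:hi]

samples = [(datetime(2016, 2, 16, 23, 59, 30), 4.2),
           (datetime(2016, 2, 17, 0, 4, 30), 4.5)]
print(query(samples, datetime(2016, 2, 17), datetime(2016, 2, 17, 12)))
# both samples are returned: 23:59:30 is included because no sample
# sits exactly on the 00:00:00 boundary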
Is it possible to make this more of a function/RPC than a traditional REST resource endpoint? So rather than requesting data for a resource between 2 dates like
/myResource?from=x&to=x
something more like
/getGraphData?graphFrom=x&graphTo=x
Whilst it's only a naming thing, it makes it a bit more acceptable to return results for a task wrapped with outer data, rather than bending the meaning of the resource's parameters, which could give unexpected or confusing results.
I have a list of algorithms that I want to run on a dataset. For example, say my dataset is a list of addresses. I need to check the validity of the addresses but I have several different algorithms for validating. Say I have validation_one and validation_two. But in the future I will need to add validation_three, validation_four, etc. I need ALL of the validations to run on the address list, even the new ones when they get added.
Is there a design pattern that fits into this? I know strategy is for selecting an algorithm but I specifically need a way to apply all the algorithms on the dataset.
You have not stated a language, but assuming it has generics:
Given a DataSet<T>
Assuming also that there is no cross validation required (i.e. each T can be validated entirely by its own data)
Declare a validation Strategy, with a single method.
IValidate<T> { bool validate(T item); }
validation_one, validation_two, etc. will implement this strategy.
Have a List<IValidate<T>> to which you can add and remove implementations.
For each item in the dataset, call each strategy in the list.
It's then your choice how you deal with failures.
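A minimal Python rendering of the same idea (the validator names and rules are invented):

from typing import Callable, Iterable

Validator = Callable[[str], bool]   # a validator takes an item, returns pass/fail

def validation_one(address: str) -> bool:
    return bool(address.strip())                 # e.g. non-empty

def validation_two(address: str) -> bool:
    return any(ch.isdigit() for ch in address)   # e.g. contains a house number

# the registry: append validation_three, validation_four, ... later
validators: list[Validator] = [validation_one, validation_two]

def validate_all(addresses: Iterable[str]) -> dict:
    # run every registered validator on every address,
    # collecting the names of the validators each address failed
    failures = {}
    for addr in addresses:
        failed = [v.__name__ for v in validators if not v(addr)]
        if failed:
            failures[addr] = failed
    return failures

print(validate_all(['10 Downing St', 'no number here', '   ']))
# {'no number here': ['validation_two'], '   ': ['validation_one', 'validation_two']}

Adding a new algorithm is then a one-line change: append it to validators and it automatically runs on the whole dataset.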
This sounds like the Chain of Responsibility where all handlers in the chain accept the request and pass it to the next handler.
Composite.
Which is basically what the two other answers equate to.
The first answer is an implementation of Composite; the second one is using Chain of Responsibility as a Composite.
I'm having trouble understanding this documentation for the search/6 predicate in the ECLiPSe constraint programming framework.
I understand that the choice parameter basically affects the value ordering.
It also seems like the selection method chooses the variable ordering, but I don't entirely understand all the options for it.
I don't really understand the other parameters so I was wondering if someone could explain them in words. I have a pretty good understanding of the theory of constraint logic programming so feel free to refer to those concepts. I just don't understand a lot of the CS lingo in that documentation (arity, etc.)
Thank you
I'll try to answer it as briefly as possible, since search/6 is one of the most complex predicates you can find in the ECLiPSe system.
Any more detailed follow-up questions would probably better be asked in the ECLiPSe user mailing list, though.
The search/6 predicate is a generic predicate for controlling the search for a solution of a CLP problem. It allows the user to control the shape of the search tree (the order of variables along the branches, the order of the branches, and the portion of the search tree that is visited). The predicate has 6 parameters: search(+L, ++Arg, ++Select, +Choice, ++Method, +Option). (+ and ++ denote the mode of the parameter)
The first two parameters go together. L is either a list of variables or a list of terms. If it's the former, then Arg must be 0; if it's the latter, then Arg denotes the argument position of the variables that should be instantiated during the search, e.g.:
search([A,B],0,input_order,indomain,complete,[]).
or
search([p(1,A),p(2,B)],2,input_order,indomain,complete,[]).
In both cases, the variables A and B are instantiated during search.
The third parameter is the selection method. This method is used by search/6 to select the next variable from the list L to instantiate.
The simplest option is input_order: the search simply iterates over the variables in the list. In the examples above, it would instantiate A first, then B. The other options consider the domain size and/or the number of constraints attached to the variables, and make the selection accordingly. E.g., first_fail chooses the variable with the smallest domain. If the current domain of A is [1,2,3] and B has the domain [1,3], then B will be selected and instantiated first. If more than one variable has the same smallest domain size, then the first of these in input order is selected. Selection methods that take the domain size into account achieve a dynamic variable ordering, since the domain sizes will change (shrink) during search, depending on the amount of propagation that the constraints achieve.
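Continuing that example (a sketch, assuming lib(ic) is loaded):

A :: 1..3, B :: [1,3],
search([A,B], 0, first_fail, indomain, complete, []).

Here B is instantiated before A, since its domain is smaller.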
The other selection methods should now be self-explanatory.
It is also possible to define your own selection method, provided that the predicate that implements it has arity 2, i.e., has two parameters. The predicate must take a variable as input and calculate some criterion value. The variable with the smallest criterion value will be selected.
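A minimal sketch of such a selection predicate (my_select is an invented name; get_domain_size/2 is from lib(ic)):

my_select(Var, Size) :-
    get_domain_size(Var, Size).

Passing my_select as the third argument would then mimic first_fail, since the variable with the smallest criterion value (here, the domain size) is selected.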
The fourth parameter is the choice method. Once a variable is selected, the choice method controls the order in which the values in its domain are tried during search.
The simplest option is indomain, which chooses the values in the variable's current domain in ascending order. I.e., if variable A has the domain [1,3,5], then the search will initially bind A to 1, on backtracking bind it to 3, and finally to 5. indomain_middle will start with 3, then 1, then 5.
The more complex choice methods (i.e., other than indomain) will remove a tried value on backtracking, i.e., basically add additional constraints like A#\=1. This will cause additional propagation which may in turn allow earlier detection of infeasibilities. You can see the effect when running the n-queens example from the search/6 documentation that you linked to in your question.
Again, it is also possible to define your own choice method. The predicate must be of arity 1 or 3. If the arity is 1, then the predicate takes one variable as input and binds it to a value (or makes some other choice which alters the domain of the variable). If the arity is 3, then you can use the two additional parameters to pass along some state information which you can use to make the choice.
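For instance, an arity-1 choice method that tries values in descending order might look like this sketch (my_choice is an invented name; get_max/2 is from lib(ic)):

my_choice(Var) :-
    get_max(Var, Max),
    ( Var = Max                      % try the largest value first
    ; Var #\= Max, my_choice(Var)    % on backtracking, exclude it and retry
    ).

Note that this variant also removes the tried value on backtracking, like the more complex built-in choice methods described above.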
The fifth parameter is the search method. This controls the size of the section of the search tree that the search should explore (whereas the selection method controls the order of the variables along the branches of the tree, and the choice method controls the order of the branches in the search tree).
The simplest option is complete, which searches the tree left-to-right until the tree is exhausted. All other options (apart from symmetry breaking) are incomplete search methods, i.e., there will be branches in the search tree that are left unexplored. If the solution is on the leaf of such an unexplored branch, then it will not be found. You have to make sure that selection and choice methods shape the search tree in a way that the incomplete search method is able to find the solution. The option bbs, for instance, restricts the number of backtracks that can be made during search. If that number is exhausted, then the search will stop.
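For example, mirroring the earlier calls (the limit of 50 backtracks is arbitrary):

search([A,B], 0, first_fail, indomain, bbs(50), []).

This explores the tree left-to-right as before, but gives up once 50 backtracks have been used.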
Symmetry breaking will only exclude branches that are equivalent (symmetrical) to other branches, in some way.
The sixth parameter is a list of possible additional options, described in the search/6 documentation. Normally, you won't need them.