For some of our components it would be useful to know whether the component is being executed as part of a finite difference calculation or not. One example could be a meshing component, where we'd want to maintain the same node count and distribution function during FD, and allow for remeshing during major iteration steps. In the old OpenMDAO we could detect this from a component's itername. Would it be possible to reintroduce this, or is that info already available to the Component class?
I can't think of any current way to figure out if you are inside an FD when solve_nonlinear is being called, but it's a good idea for the reasons that you mention.
We don't currently have that capability, but others have asked to be informed when solve_nonlinear is being run for complex step as well.
One way to do this would be to introduce an optional argument to solve_nonlinear, such as call_mode="fd", call_mode="cs", or call_mode="solve". The only problem with this approach is that it's very backwards-incompatible.
Another approach would be to add a regular Python attribute to the component that you could check, e.g. self.call_mode == "solve". This would be a pretty easy change, and I think it would serve the purpose.
One last possible way would be to put a flag into the unknowns/params vector, so you would check params.call_mode to see which mode you're in. This is somewhat sensible, since it's the param values that change when you're going to complex-step.
I think I like the last option the best. Both solve_nonlinear and apply_nonlinear need to know about this information. But none of the other methods do. So making it a component attribute seems a little out of place.
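For illustration, here is a minimal sketch of what that third option might look like from inside a component. Note that params.call_mode is a hypothetical attribute here, not current API, and the component and method names are just placeholders:

    # Hypothetical sketch: params.call_mode does NOT exist yet.
    class MeshingComponent(object):

        def __init__(self):
            self.mesh = None

        def _remesh(self, params):
            # placeholder for the real remeshing logic
            self.mesh = "mesh rebuilt from current params"

        def solve_nonlinear(self, params, unknowns, resids):
            mode = getattr(params, "call_mode", "solve")  # "solve", "fd", or "cs"
            if mode == "solve" or self.mesh is None:
                self._remesh(params)  # remesh only on real major iterations
            # under "fd"/"cs" the existing mesh is reused, so the node count
            # and distribution function stay fixed during the FD/CS pass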
As the title states, the get_val() function allows the user to retrieve the value of an input, output, or residual. Is there anything like a get_partial(of=..., wrt=...) that allows a user to retrieve a derivative? Or what would be the best way to go about retrieving that from the problem or model?
For getting a general derivative in a system, the recommended practice is to use the compute_totals method.
Even if you just want to look at a partial derivative, you can use the of and wrt arguments to point to just the specific partial. You'll get a total, but it should be equal to the partial.
The general debugging practice for looking at partials is to use check_partials. This will give you full values of all the partials to look at. But if you need an algorithmic approach as part of a run script, then use compute_totals.
OpenMDAO stores outputs, so obtaining those is a matter of getting a value that's already there (hence get_val).
For derivatives, depending on the way in which OpenMDAO is used, there's no guarantee that the totals are present in memory, so they must be computed when needed.
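For example, with a trivial ExecComp standing in for a real model:

    import openmdao.api as om

    prob = om.Problem()
    prob.model.add_subsystem('comp', om.ExecComp('y = 2.0*x'), promotes=['*'])
    prob.setup()
    prob.set_val('x', 3.0)
    prob.run_model()

    # outputs are stored, so they can simply be read back
    print(prob.get_val('y'))                          # [6.]

    # derivatives are computed on demand
    totals = prob.compute_totals(of=['y'], wrt=['x'])
    print(totals['y', 'x'])                           # [[2.]]

    # for debugging, dump every declared partial
    data = prob.check_partials(compact_print=True)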
I need to know whether the training data passed in the neuralnet call is randomized inside the routine, or whether the routine uses the data in the same order it is given. I really need this info for a project I am working on, and I have not been able to figure it out by looking at the source.
Thanks!
Look into the code - that's one of the most important advantages of FOSS: you can actually check what it is doing. (neuralnet is pure R, so you don't even need to fear digging into Fortran or C code, and you can use debug to step through the code with example data to get an overview.)
Moreover, if necessary, you can even introduce e.g. a new parameter that allows you to switch off randomization if needed.
Possibly the package maintainer (see maintainer("neuralnet")) would be willing to help you as well (and able to answer much faster than almost anyone else here on SE).
I'm writing some children's Math Education software for a class.
I'm going to try to present randomly generated math problems of different types, in fun ways, to students of varying skill levels.
One of the frustrations of using computer-based math software is its rigidity. If you have ever taken an online math class, you'll know all about the frustration of taking an online quiz and having your correct answer thrown out because it isn't formatted in exactly the expected form, or because of some weird spacing issue.
So, originally I thought, "I know! I'll use an expression parser on the answer box, so I'll be able to evaluate anything they enter, and even if it isn't in the same form I'll be able to check whether it is the same answer." So I fired up my IDE and started implementing the Shunting Yard Algorithm.
This would solve the problem of it not taking fractions in the smallest form and other issues.
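(For reference, a bare-bones sketch of such a shunting-yard evaluator, handling only numbers, parentheses, and the four binary operators, with no unary minus:)

    import operator

    OPS = {"+": (1, operator.add), "-": (1, operator.sub),
           "*": (2, operator.mul), "/": (2, operator.truediv)}

    def tokenize(expr):
        tokens, num = [], ""
        for ch in expr.replace(" ", ""):
            if ch.isdigit() or ch == ".":
                num += ch
            else:
                if num:
                    tokens.append(float(num))
                    num = ""
                tokens.append(ch)
        if num:
            tokens.append(float(num))
        return tokens

    def evaluate(expr):
        output, stack = [], []

        def apply_op(op):
            b, a = output.pop(), output.pop()
            output.append(OPS[op][1](a, b))

        for tok in tokenize(expr):
            if isinstance(tok, float):
                output.append(tok)
            elif tok == "(":
                stack.append(tok)
            elif tok == ")":
                while stack[-1] != "(":
                    apply_op(stack.pop())
                stack.pop()  # discard the "("
            else:  # binary operator: pop anything of >= precedence first
                while stack and stack[-1] != "(" and OPS[stack[-1]][0] >= OPS[tok][0]:
                    apply_op(stack.pop())
                stack.append(tok)
        while stack:
            apply_op(stack.pop())
        return output[0]

    print(evaluate("(4/5)*(1/2)"))  # 0.4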
However, it then hit me that a tricky student would simply be able to enter most of the problems verbatim into the answer box, and my expression parser would dutifully parse and evaluate them to the correct answer!
So, should I not be using an expression parser in this instance? Do I really have to generate a single form of the answer and do a string comparison?
One possible solution is to note how many steps your expression evaluator takes to evaluate the problem's original expression, and to compare this to the number of steps it takes for the optimal answer. If there's too much difference, then the problem hasn't been reduced enough, and you can suggest that the student keep going.
Don't be surprised if students come up with better answers than your own definition of "optimal", though! I was a TA/grader for several classes, and the brightest students routinely had answers on their problem sets that were superior to the ones provided by the professor.
For simple problems where you're looking for an exact answer, then removing whitespace and doing a string compare is reasonable.
For more advanced problems, you might do the Shunting Yard Algorithm (or similar) but perhaps parametrize it so you could turn on/off reductions to guard against the tricky student. You'll notice that "simple" answers can still use the parser, but you would disable all reductions.
For example, on a division question, you'd disable the "/" reduction.
This is a great question.
If you are writing an expression system and an evaluation/transformation/equivalence engine (isn't there one available somewhere? I am almost 100% sure that there is an open-source one somewhere), then it's more of an education/algebra problem: is the student's answer algebraically closer to the original expression or to the expected expression?
I'm not sure how to answer that, but here's an idea (not necessarily practical): perhaps your evaluation engine can count transformation steps to equivalence. If the answer takes fewer steps to reach the expected expression than it did from the original, it might be OK. If it's too close to the original, it's not.
You could use an expression parser, but apply restrictions on the complexity of the expressions permitted in the answer.
For example, if the goal is to reduce (4/5)*(1/2) and you want to allow either (2/5) or (4/10), then you could restrict the set of allowable answers to expressions whose trees take the form (x/y) and which also evaluate to the correct number. Perhaps you would also allow "0.4", i.e. expressions of the form (x) which evaluate to the correct number.
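A sketch of that shape restriction using Python's ast module (the helper name and the tolerance are just illustrative):

    import ast

    def is_reduced_fraction_answer(answer, expected_value):
        # accept only x/y with literal x and y, or a bare numeric literal,
        # that evaluates to the expected value
        tree = ast.parse(answer, mode="eval").body
        if isinstance(tree, ast.BinOp) and isinstance(tree.op, ast.Div):
            num, den = tree.left, tree.right
            if not (isinstance(num, ast.Constant) and isinstance(den, ast.Constant)):
                return False
            if den.value == 0:
                return False
            value = num.value / den.value
        elif isinstance(tree, ast.Constant):
            value = tree.value
        else:
            return False  # any other tree shape is rejected outright
        return abs(value - expected_value) < 1e-9

    print(is_reduced_fraction_answer("2/5", 0.4))          # True
    print(is_reduced_fraction_answer("4/10", 0.4))         # True
    print(is_reduced_fraction_answer("0.4", 0.4))          # True
    print(is_reduced_fraction_answer("(4/5)*(1/2)", 0.4))  # False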
This is exactly what you would (implicitly) be doing if you graded the problem manually -- you would be looking for an answer that is correct but which also falls into an acceptable class.
The usual way of doing this in mathematics assessment software is to allow the question setter to specify expressions/strings that are not allowed in a correct answer.
If you happen to be interested in existing software, there's the open-source Stack http://www.stack.bham.ac.uk/ (or various commercial options such as MapleTA). I suspect most of the problems that you'll come across have also been encountered by Stack so even if you don't want to use it, it might be educational to look at how it approaches things.
I'm a (Py)Qt newbie, porting C# GUI code to Qt for a couple of days now. One question that I keep asking myself is why QAbstractItemModel subclasses are required to supply a parent() method, and why they are required to supply, in the resulting QModelIndex, the row of a child in its parent.
This requirement forces me to add another layer over my tree data that remembers row indexes (because I don't want to call indexOf(item) in parent() - it wouldn't be very efficient).
I ask this because it's the first time I've seen a model-based view require this. For example, NSOutlineViewDataSource in Cocoa doesn't require it.
Trolltech devs are smart people, so I'm sure there's a good reason for this; I just want to know what it is.
The quick answer is, "they thought it best at the time." The Qt developers are people just like you and me -- they aren't perfect and they do make mistakes. They have learned from that experience and the result is in the works in the form of Itemviews-NG.
In their own words from the link above:
Let’s just say that there is room for improvement, lots of room!
By providing a parent that contains a row and column index, they provide one possible way to implement trees and support navigation. They could just as easily have used a more obvious graph implementation.
The requirement is primarily to support trees. I couldn't tell you the reason, since I'm not a Qt dev... I only use the stuff. However, if you aren't doing trees, you could probably use one of the more-tuned model classes and not have to deal with the overhead of supplying a parent. I believe that both QAbstractListModel and QAbstractTableModel handle the parent portion themselves, leaving you free to just worry about the data you want.
For trees, I suspect that one of the reasons they need the parent is that they try to ask only for the information they need in order to draw. Without knowing all of the items in a tree (if it wasn't expanded, for example), it becomes much harder to provide the absolute position of a given item in the tree.
As for the quandary of using indexOf(item) in the parent function, have you considered using QModelIndex's internalId or internalPointer? I'm assuming they are available in PyQt... they can be used by your model to track things about the index. You might be able to use that to shortcut the effort of finding the parent's index.
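To make that concrete, here is a sketch of the standard pattern in PyQt5 terms (illustrative only; each node caches a reference to its parent and its own row, so parent() needs no indexOf() search):

    from PyQt5.QtCore import QAbstractItemModel, QModelIndex, Qt

    class Node(object):
        def __init__(self, name, parent=None):
            self.name, self.parent, self.children = name, parent, []
            self.row = 0 if parent is None else len(parent.children)  # cached row
            if parent is not None:
                parent.children.append(self)

    class TreeModel(QAbstractItemModel):
        def __init__(self, root):
            super(TreeModel, self).__init__()
            self.root = root

        def index(self, row, column, parent=QModelIndex()):
            node = parent.internalPointer() if parent.isValid() else self.root
            # the node itself rides along in the index via internalPointer
            return self.createIndex(row, column, node.children[row])

        def parent(self, index):
            node = index.internalPointer()
            if node is None or node.parent is None or node.parent is self.root:
                return QModelIndex()
            # no searching: the node remembers its parent, the parent its row
            return self.createIndex(node.parent.row, 0, node.parent)

        def rowCount(self, parent=QModelIndex()):
            node = parent.internalPointer() if parent.isValid() else self.root
            return len(node.children)

        def columnCount(self, parent=QModelIndex()):
            return 1

        def data(self, index, role=Qt.DisplayRole):
            if role == Qt.DisplayRole:
                return index.internalPointer().name
            return None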
When refactoring my functions for new needs, I stumble from time to time over this crucial question:
Shall I add another parameter with a default value? Or shall I use a single array parameter, to which I can add an additional value without breaking the API?
Unless you need to support a flexible number of variables, I think it's best to explicitly identify each parameter. In most cases you can add an overloaded method with a different signature to support the extra parameter while still supporting the original signature. If you use an array for passing variables, it just makes things too confusing for users of your API. Obviously some inputs lend themselves to an array (a list of points in a polygon, a list of account IDs you wish to perform an action on, etc.), but if it's not a variable that you would reasonably expect to be an array or list, you should pass it into the method as a separate parameter.
Just like many questions in programming, the right answer is "it depends".
To take JavaScript/jQuery as an example, one good rule of thumb is whether the parameter will be required each time the function is called or whether it is optional. For example, the main jQuery function itself requires an expression to determine what element(s) the operation will affect:
jQuery(expression)
It makes no sense to try to pass this parameter as part of an array as it will be required every time this function is called.
On the other hand, many jQuery plugins require several miscellaneous parameters that may be optional. By convention, these are passed as parameters via an 'options' array. As you said, this provides a nice interface as new parameters can be added without affecting the existing API. This makes the API clean as well since the user can ignore those options that are not applicable.
In general, when several parameters are involved, passing them as an array is a nice convention, as many of them are likely to be optional. This would have helped clean up many Win32 APIs, although it is more difficult to deal with arrays in C/C++ than in JavaScript.
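Translated into Python terms (names purely illustrative), the pattern looks like this:

    # required parameters stay explicit; optional ones travel in one mapping
    DEFAULTS = {"color": "black", "width": 1, "dashed": False}

    def draw_line(start, end, options=None):
        opts = dict(DEFAULTS, **(options or {}))
        print("line from", start, "to", end, "with", opts)

    draw_line((0, 0), (1, 1))                    # all defaults
    draw_line((0, 0), (1, 1), {"color": "red"})  # new option, old calls unaffected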
It depends on the programming language used.
If you have a run-of-the-mill OO language, you should use an object that you can easily extend, if you are really concerned about API consistency.
If that doesn't matter that much, there is the option of changing the method signature and overloading the method with more / different parameters.
If your language doesn't support either and you want the API to be binary stable, use an array.
There are several considerations that must be made.
Where is the function used? - Only in code you created? One place or hundreds of places? The amount of work that will need to be done to maintain existing code is important. Remember to include the amount of time it will take to communicate to other programmers that may currently be using your function.
How critical is the new parameter? - Do you want to require it to be used? If it has a default value, will that default value break existing use of the function in any subtle ways?
Ease of comprehension - How many parameters are already passed into the function? The larger the number, the more confusing and error prone it will be. Code Complete recommends that you restrict the number of parameters to 7 or less. If you need more than that, you should try to abstract some or all of the related parameters into one object.
Other special considerations - Do you want to optimize your efforts for any special conditions such as code speed or size? Are there any special considerations that must be taken into account for your execution environment? Keep in mind your goals for the project and make sure you aren't working against them with whatever design choice you make.
In his book Code Complete, Steve McConnell decrees that a function should never have more than 7 arguments, and rarely even that many. He presents compelling arguments - that I can't cite from memory, alas.
Clean Code, more recently, advocates even fewer arguments.
So unless the number of things to pass is really small, they should be passed in an enveloping structure. If they're homogenous, an array. If not, then a reasonably lightweight object should be built for the purpose.
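In Python, for instance, such a lightweight object could be a dataclass (names illustrative):

    from dataclasses import dataclass

    @dataclass
    class RenderOptions:
        # heterogeneous settings get names, types, and defaults
        color: str = "black"
        width: int = 1
        dashed: bool = False

    def draw(shape, opts=None):
        opts = opts or RenderOptions()
        print("drawing", shape, "with", opts)

    draw("circle")                               # all defaults
    draw("square", RenderOptions(color="red"))   # extended without a new signature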
You should do neither. Just add the parameter and change all callers to supply the proper value. The reason is that parameters with default values can only be at the end, so you will not be able to add any more required parameters anywhere else in the parameter list without risking misinterpretation.
These are the critical steps to disaster (sketched in code after the list):
1. add one or two parameters with defaults
2. some callers supply them, and some rely on the defaults
[half a year passes]
3. add a required parameter (before the defaulted ones)
4. change all callers to pass the new required parameter
5. get a phone call, or some other event that makes you forget to change one of the callers from step 2
6. now your program compiles perfectly, but is invalid
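Sketched in Python, with positional call sites (the scenario above assumes the callers can't name their arguments):

    # before: callers rely on the defaults
    def save(data, path="out.txt", verbose=False):
        print("saving", data, "to", path, "verbose:", verbose)

    save("hello", "mine.txt")   # positional caller from step 2

    # after step 3: a required parameter is inserted before the defaulted ones
    def save(data, encoding, path="out.txt", verbose=False):
        print("saving", data, "as", encoding, "to", path, "verbose:", verbose)

    save("hello", "mine.txt")   # still runs, but "mine.txt" is now the encoding!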
Unfortunately, in function call semantics we usually don't have a chance to say, by name, which value goes where.
An array is also not a proper solution. An array should be used as a collection of similar objects, upon which some uniform activity is performed. As they say here, if it's worth refactoring, it's worth refactoring now.