I need the ability to specify an exit flag when something fails. The OpenMDAO documentation for the pyoptsparse_driver lists an option for an exit flag. However, when I run with an exit flag set as an option, it says that Option 'exit_flag' has not been added. I am also not sure how to actually indicate whether something failed. Would I need to pass the flag out of the failed component's solve_nonlinear() and somehow use it to set the option on the pyoptsparse_driver? I want to do something like the following, but I'm not sure of the syntax and I can't find an example:
def solve_nonlinear(self, params, unknowns, resids):
    unknowns['y'], exit_flag = function(params['x'])
    self.exit_flag = exit_flag
There are a number of issues here:
1) "How do I propagate failure information from a component up to the optimizer?"
We don't currently have a way of handling this. It's something we'll be working on in the near future, though.
2) If a component does fail, what is the proper response?
That depends on what you're doing. For a DOE, you should probably just log the failed case and keep going. For a gradient-free method, some kind of objective penalization is probably warranted. For a gradient-based algorithm, you likely need to back-track in the line search (or use some other similar walk-back mechanism).
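To make the gradient-free case a bit more concrete, here is a rough sketch of my own (not OpenMDAO-specific); run_model and the penalty value are made up for illustration, and the only point is that a failed evaluation gets mapped to a large objective so the optimizer steers away from it.
def evaluate(x):
    # run_model is a hypothetical analysis; it returns (objective, success_flag)
    try:
        y, success = run_model(x)
    except Exception:
        y, success = None, False
    if not success:
        # penalize the failed case so a gradient-free optimizer avoids this region
        return 1.0e30
    return y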
3) In the event that it all fails, can the driver report an overall exit status?
Again, we don't have this implemented in a general way yet. The option you found in the pyopt_sparse driver is a mistake in the doc-string. There is, however, an exit_flag attribute that gets set based on the internal pyopt state.
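For reference, a minimal sketch of reading that attribute, assuming the OpenMDAO 1.x API from this era and that the attribute is spelled exit_flag as described above (the problem setup itself is elided):
from openmdao.api import Problem
from openmdao.drivers.pyoptsparse_driver import pyOptSparseDriver

prob = Problem()
# ... define prob.root, design variables, objective, constraints ...
prob.driver = pyOptSparseDriver()

prob.setup()
prob.run()

# set internally from pyoptsparse's optimizer state after the run
print(prob.driver.exit_flag)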
First, I have a package called DataBaseLayer, and it has an S3 generic called LoadFromTable(data_request). Second, there is another package called RiskCalculator, which determines several types of risks and makes requests to the database by means of the package DataBaseLayer. Before "triggering" RiskCalculator (by means of an execute function defined in it), a connection to some schema of the database is set up, and the method LoadFromTable will refer to that particular schema.
For some tests that I need to perform, I have to switch schemas depending on the value of the data_request that enters LoadFromTable(data_request). Thus what I actually need is to insert a small check in LoadFromTable. As a note, currently only a default method is implemented, i.e. LoadFromTable.default, so it would suffice to insert that check in that specific method only.
My question is thus twofold:
1. Is there a general way to insert a piece of code before any LoadFromTable method is called? Naively put: to insert a piece of code just before UseMethod("LoadFromTable", data_request) is "called".
2. If there is no such way, can we at least insert a piece of code just before LoadFromTable.default is called? (In my case that would now suffice.)
As a final note, I can imagine you might say that the whole structure should be changed, and I agree; however, that is not an option, since I am not the owner of these packages.
Thanks for your help.
It’s strongly discouraged, and fundamentally the wrong approach, to change code in loaded packages, so I won’t discuss it here (but I’ll mention that this is done via the assignInNamespace function).
But your case can be solved much more easily: just override the LoadFromTable generic function in the RiskCalculator package as follows:
LoadFromTable = function (request) {
  # TODO: perform your check here.
  DataBaseLayer::LoadFromTable(request)
}
Now if you load your RiskCalculator package and call the function either explicitly (via RiskCalculator::LoadFromTable) or implicitly (via LoadFromTable after attaching the RiskCalculator package), your implementation will be called.
Try trace:
library(DataBaseLayer)
trace(LoadFromTable, quote(print("Hello")))
The library statement is important, even if you don't otherwise access that package yourself.
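For the schema check itself, a conditional tracer can inspect the argument before the method body runs. This is only a sketch of my own: it assumes the generic's argument is named data_request, the use_test_schema field is invented, and switch_schema() is a hypothetical helper standing in for whatever your connection code actually does.
library(DataBaseLayer)

# Hypothetical helper standing in for your schema-switching logic.
switch_schema <- function(name) message("switching to schema: ", name)

trace(LoadFromTable, tracer = quote({
  # 'data_request' is assumed to be the name of the generic's argument
  if (isTRUE(data_request$use_test_schema)) {
    switch_schema("test")
  } else {
    switch_schema("production")
  }
}))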
I am new to gRPC and am trying to use it in my existing system. However, I get this unused-parameter error while compiling:
server_grpc.cc:100:39: error: unused parameter ‘context’ [-Werror=unused-parameter]
Status MyFunc(ServerContext* context, const QueryRequest* request,
Probably the context parameter is used in some other cases, but in a simple hello-world type of example it is not used. Is there a way to compile the protocol buffer without generating the ServerContext parameter?
I know I can make the compiler ignore warning messages, but I'm just wondering if it can be done without affecting the way my system is currently compiled.
I would also like to know how the context is used. It would be great if anybody could give pointers on how to use this context; I might find a use for it in my work.
The ServerContext is provided to, well, add context for every RPC you get. It allows you to tweak certain aspects of the RPC, such as dealing with authentication or adding metadata to your response back to the client. You may or may not need that parameter, obviously, depending on your needs.
We didn't want to add an option for this specifically, because it would complicate the code and the tool for little benefit, so the code generator and the function signature force you to have that parameter at all times. This isn't really a big deal, because in C++ you can explicitly ask your compiler to ignore a parameter in a specific instance, for example with the following:
Status SayHello(ServerContext* context, const HelloRequest* request,
                HelloReply* reply) override {
  (void) context;  // ignore that variable without causing warnings
  std::string prefix("Hello ");
  reply->set_message(prefix + request->name());
  return Status::OK;
}
And that's how I'd suggest you take care of the warning in this specific instance, without disabling warnings for your whole project.
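To give a rough idea of how the parameter can be used, here is a sketch of my own (not from the generated code) built on the same SayHello example; client_metadata(), AddInitialMetadata() and IsCancelled() are methods of grpc::ServerContext, while the metadata keys are made up for illustration:
Status SayHello(ServerContext* context, const HelloRequest* request,
                HelloReply* reply) override {
  // Read metadata the client attached to this call (the key is just an example).
  const auto& metadata = context->client_metadata();
  if (metadata.find("x-request-id") != metadata.end()) {
    // Attach metadata of our own to the response headers.
    context->AddInitialMetadata("x-request-id-seen", "true");
  }

  // Stop early if the client has already cancelled the RPC.
  if (context->IsCancelled()) {
    return Status(grpc::StatusCode::CANCELLED, "client cancelled the call");
  }

  reply->set_message("Hello " + request->name());
  return Status::OK;
}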
What's the proper way to interrupt a long chain of compose or pipe functions?
Let's say the chain doesn't need to run after the second function because it found an invalid value, and it doesn't need to continue through the next five functions as long as the user-submitted value is invalid.
Do you return an undefined / empty parameter, so the rest of the functions just check whether there is no value and, in that case, keep passing the empty parameter along?
I don't think there is a generic way of dealing with that.
Often when working with algebraic data types, things are defined so that you can continue to run functions even when the data you would prefer is not present. This is one of the extremely useful features of types such as Maybe and Either for instance.
But most versions of compose or related functions don't give you an early escape mechanism. Ramda's certainly doesn't.
While you can't technically exit a pipeline, you can use pipe() within a pipeline, so there's no reason why you couldn't do something like the following to 'exit' (return from a pipeline or kick into another):
pipe(
  // ... your pipeline
  propOr(undefined, 'foo'), // <- where your value is missing
  ifElse(isNil, always(undefined), pipe(
    // ... the rest of your pipeline
  ))
)
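For concreteness, here is a small runnable sketch with the same shape (my own example, not the asker's actual pipeline); it uses the Ramda functions shown above, with trim/toUpper standing in for "the rest of your pipeline":
const { pipe, propOr, ifElse, isNil, always, trim, toUpper } = require('ramda')

const process = pipe(
  propOr(undefined, 'foo'),   // pull out the value, which may be missing
  ifElse(
    isNil,
    always(undefined),        // short-circuit: the remaining steps never run
    pipe(trim, toUpper)       // the rest of the pipeline, for valid input only
  )
)

process({ foo: '  hello  ' }) // => 'HELLO'
process({})                   // => undefined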
I'm writing a program in Scala and trying to remain as functionally pure as possible. The problem I am facing is not Scala-specific; it's more to do with trying to code functionally. The logic for the function that I have to write goes something like this:
Take some value of type A
Use this value to generate log information
Log this information by calling a function in an external library and evaluate the return status of the logging action (i.e. was it a successful log or did the log action fail)
Regardless of whether the log succeeded or failed, I have to return the input value.
The reason for returning the input value as the output value is that this function will be composed with another function which requires a value of type A.
Given the above, the function I am trying to write is really of type A => A, i.e. it accepts a value of type A and returns a value of type A, but in between it does some logging. The fact that I am returning the same value that I passed in makes this function boil down to an identity function!
This looks like a code smell to me, and I am wondering what I should do to make this function cleaner. How can I separate out the concerns here? Also, the fact that the log function goes away and logs information means that I should really wrap that call in an IO monad and call some unsafePerformIO function on it. Any ideas welcome.
What you're describing sounds more like debugging than logging. For example, Haskell's Debug.Trace.trace does exactly that and its documentation states: "These can be useful for investigating bugs or performance problems. They should not be used in production code."
If you're doing logging, the logging function should only log and have no further return value. As mentioned by @Bartek above, its type would be A -> IO (), i.e. returning no information () and having side effects (IO). For example, Haskell's hslogger library provides such functions.
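In Scala terms, one way to keep those concerns separate is a small "tap"-style combinator. This is a sketch of my own, not part of the answer above: the logging function keeps its A => Unit shape, and the pass-through of the value is handled in exactly one place, so the A => A plumbing no longer leaks into your logging code.
// Wraps a side-effecting log action into an A => A stage that hands the
// input value straight through, so it composes with whatever comes next.
def tap[A](log: A => Unit): A => A = { a =>
  log(a) // perform the logging side effect, ignore its result
  a      // return the original value unchanged
}

// Hypothetical stand-in for the external logging library call.
def externalLog(msg: String): Unit = println(s"LOG: $msg")

val pipeline: Int => Int =
  ((x: Int) => x + 1) andThen tap((a: Int) => externalLog(s"value = $a")) andThen (_ * 2)

// pipeline(3) == 8, logging "LOG: value = 4" along the way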
I found this in rtt-estimator.h: the constructor sets the value for m_initialEstimatedRtt, which I believe directly controls the Retransmit Timeout value.
I am not sure how to set the value for m_initialEstimatedRtt.
I see a method named SetCurrentEstimate that could be used to change that value, but I am not sure at what stage in the simulation I should call it, so I would prefer to control the initial estimate.
Also, I'm wondering what default value is set in the examples and where I can find it.
There are many ways to set that variable, chiefly through the attribute system. The attribute associated with that variable is ns3::RttEstimator::InitialEstimation (defined in rtt-estimator.cc).
If you have followed the standard script layout, all you need is to use the following command-line argument:
--ns3::RttEstimator::InitialEstimation=1.0s
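If you prefer to set it in the script itself, here is a minimal sketch of my own doing the same thing programmatically with Config::SetDefault (the topology and application setup are elided):
#include "ns3/core-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Equivalent to passing --ns3::RttEstimator::InitialEstimation=1.0s
  Config::SetDefault ("ns3::RttEstimator::InitialEstimation", TimeValue (Seconds (1.0)));

  CommandLine cmd;
  cmd.Parse (argc, argv); // command-line arguments can still override the default

  // ... build the topology, install applications, etc. ...

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}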
The tutorial gives a gentle introduction to the use of attributes through the command-line and environment variables:
http://www.nsnam.org/docs/release/3.19/tutorial/html/tweaking.html#using-command-line-arguments
There are more details here:
http://www.nsnam.org/docs/release/3.19/manual/html/attributes.html
You might find the ConfigStore useful too:
http://www.nsnam.org/docs/release/3.19/manual/html/attributes.html#configstore