Composing functions in Chainer

I wish to create a new Function object in Chainer by composing a number of existing functions. I haven't been able to find any method to do this in the docs. I could implement the composite function directly, which might be computationally more efficient. Or I could reuse the forward and backward methods of the existing functions.
What is the recommended approach?

If you can achieve the computation with existing Chainer functions, I think it is fine to create your function by just composing those existing functions at first.
You can consider optimizing the code later, once you find that it has become the computational bottleneck.
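For instance, here is a minimal sketch of that approach: a log-sum-exp built purely from existing chainer.functions calls (the particular functions are just an illustration). Because every call is recorded on the computation graph, backward() works without writing any custom gradient code.

    import numpy as np
    import chainer
    import chainer.functions as F

    def log_sum_exp(x, axis=1):
        # A composite built from existing functions; Chainer's autograd
        # provides the backward pass automatically.
        return F.log(F.sum(F.exp(x), axis=axis))

    x = chainer.Variable(np.random.rand(3, 4).astype(np.float32))
    y = F.sum(log_sum_exp(x))   # reduce to a scalar so backward() can run
    y.backward()                # gradients flow through the composed functions
    print(x.grad.shape)         # (3, 4)

Only if profiling later shows the composition to be a bottleneck is it worth hand-writing a single Function with fused forward and backward methods.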

Related

Replacing Component in OpenMDAO Group

In OpenMDAO 0.x there was a 'replace' method that let you swap components; it's not clear you can do that easily in 1.x. I have a Problem whose outer-loop algorithm has to run multiple times, and in some cases I want to swap the computationally expensive custom MDA component out for a MetaModel Component which has the same i/o. Is there a quick and dirty way to do this at runtime?
I would just define a custom group class that takes an argument telling it which one to use. Rather than use a replace, I suggest just re-instantiating the whole problem and calling setup again.
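A rough sketch of that idea, assuming the OpenMDAO 1.x-style Group.add API; MyExpensiveMDA and MyMetaModel are placeholders for your own components with identical i/o:

    from openmdao.api import Group

    class SwitchableAnalysis(Group):
        # Group that holds either the expensive MDA or its surrogate,
        # chosen at construction time.
        def __init__(self, use_metamodel=False):
            super(SwitchableAnalysis, self).__init__()
            # MyExpensiveMDA and MyMetaModel are hypothetical components
            # exposing the same inputs/outputs.
            if use_metamodel:
                self.add('analysis', MyMetaModel(), promotes=['*'])
            else:
                self.add('analysis', MyExpensiveMDA(), promotes=['*'])

In the outer loop you would then re-instantiate the Problem with the appropriate flag, e.g. Problem(root=SwitchableAnalysis(use_metamodel=True)), and call setup() again whenever you need to switch.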
If, for some reason, you don't want to call setup more than once (maybe it's a big model and setup is slow), then I suggest you just create two problem instances. One will have the custom MDA and the other will have the meta-model. In your outer loop you can just call whichever one is appropriate.
It sounds confusing to combine MetaModel and MDA problems. Having an MDA model suggests that the problem params and model are explicit, while the MetaModel suggests that the problem params and model are implicit, which are two very different problems. Running two different problems conditionally seems contrary to the MDAO paradigm of a single system of non-linear equations and could get very messy to implement, IMHO.
If the intent of the MetaModel is to refine the initial guess for the MDA, then perhaps the Component Implicit State concept in v1.7 is the built-in method to employ in the MDA problem, so the MetaModel can be abandoned and the problem reduced to one set of equations? Beware that I have not tested the Component Implicit State method.
Otherwise, all OpenMDAO classes are just Python classes and can have their solve_nonlinear methods modified to include conditional logic. Perhaps your project could create a parent problem and a parent group which conditionally control the execution of the solver and the data flows as needed between the MetaModel and the custom MDA?
Your thoughts?

Should the usage of reflection be avoided in Go?

I'm new to Go and also new to the concept of reflection. Should, and can, the use of the reflect package be avoided in Go? Is there a scenario where reflect is unavoidable?
There are a few problem domains where reflection makes it easier to write reusable libraries:
marshalling/unmarshalling, plenty of examples in the standard library, e.g. encoding/json, encoding/xml
formatting, e.g. text/template, html/template, fmt.Printf.
However, there is a price you pay for using reflection:
compile-time errors become runtime errors (e.g. fmt.Printf("%d", stringVariable))
performance becomes worse
Very often an alternative solution exists that does not require reflection, such as code generation, which is used by marshalling libraries like protobuf or thrift.
I agree with #volker that you should use reflection only when you know that it will simplify already existing code and you are aware of all the downsides.
You should avoid reflection.
Some packages (e.g. fmt) cannot be implemented without reflection as you cannot typeswitch on all existing and upcoming types.
If you are new to Go: Keep away from reflection.

Is Elixir's Module.register_attribute a form of mutability?

Is it a way to create mutable state with modules? How can using this be a good idea? Wouldn't that kind of break the immutability idea from functional programming?
No, because it's used at compile time. It's kind of like #define in C.
You can see an example at https://gist.github.com/mprymek/8379066 where the attribute "sensors" is used to accumulate functions defined with the macro "sensor". Once you have all these functions accumulated, you can automatically generate a function "run_all" which runs all of them. Of course, all of this must be done at compile time.
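A loose analogy in Python (with the caveat that Python fills the registry when the module is imported, whereas Elixir's module attributes are resolved at compile time): accumulate functions in a module-level list via a decorator, then expose a run_all that calls them.

    # Hypothetical analogy, not Elixir: a registry filled while the module
    # is being defined, then consumed by run_all().
    SENSORS = []

    def sensor(fn):
        # Accumulates each "sensor" function, roughly like pushing onto the
        # @sensors module attribute in the gist.
        SENSORS.append(fn)
        return fn

    @sensor
    def temperature():
        return 21.5

    @sensor
    def humidity():
        return 0.4

    def run_all():
        # Runs every accumulated sensor, like the generated run_all function.
        return {fn.__name__: fn() for fn in SENSORS}

    print(run_all())   # {'temperature': 21.5, 'humidity': 0.4}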

How can Object Oriented Programming and Functional Programming be used together?

Scala claims that OO and FP can be combined.
I wonder how this can be achieved in practice. I mean, objects can change, so making them immutable means I have to create a new object whenever something changes, right? This doesn't seem very efficient to me.
By the way, if I reference an object's property from an external function, doesn't that hurt referential transparency?
Don't think of this as one paradigm imposing restrictions on the other, but ask how one can take the best of both paradigms.
As a simple example:
Objects have functions, which can be internal to an object. These internal functions can be pure and immutable, and their results can then be used to change the state of the object.
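As a concrete illustration (written in Python for brevity, though the same pattern is idiomatic in Scala case classes), an object can be immutable and still model change: its methods are pure and return a new instance instead of mutating the receiver.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Account:
        balance: float

        def deposit(self, amount):
            # Pure method: no mutation, just a new Account value.
            return replace(self, balance=self.balance + amount)

    a = Account(100.0)
    b = a.deposit(50.0)   # a is unchanged; b is Account(balance=150.0)

Regarding the efficiency worry in the question: creating a new object per change is usually cheaper than it sounds, because unchanged parts can be shared between the old and new values (the structural sharing used by persistent data structures).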
Thinking at a different level, one can use functions to create a library that can be used by objects.
The way I like to get the best of both is to write libraries (modules) for the more abstract processing in a functional language, and then use OO languages for the layers closer to human interaction and external processing. This is not a hard and fast rule, but a guideline I start from.

Using generic functions in R: when and why?

I'm developing a major upgrade to my R package, and as part of the changes I want to start using S3 methods so I can use the generic plot, summary and print functions. But I'm not totally sure I understand why and when to use generic functions in general.
For example, I currently have a function called logLikSSM, which computes the log-likelihood of a state space model. Instead of this function, I could write a method logLik.SSM or something like that, as there is a generic function logLik in R. The benefit would be that logLik is shorter to write than logLikSSM, but is there really any other point in doing so?
A similar case: there is a generic function called simulate in the stats package, so in theory I could use that instead of simulateSSM. But the description of simulate says the function is used to "Simulate Responses", whereas my function actually simulates the hidden states, so it really doesn't fit the description of the simulate function. So in this case I probably shouldn't use the generic function, right?
I apologize if this question is too vague for here.
The advantages of creating methods for generics from the core of R include:
Ease of Use. Users of your package who are already familiar with those generics will have less to remember, making your package easier to use. They might even be able to do a certain amount without reading the documentation. If you come up with your own names, they must discover and remember new names, which is an added cognitive burden.
Leverage Existing Functionality. Any other functions that make use of the generics you write methods for can then automatically use yours as well; otherwise, they would have to be changed. For example, AIC uses logLik.
A disadvantage is that the generic involves an extra level of dispatch, and if logLik is in the inner loop of an optimization there could be a performance impact (although possibly not a material one). In that case you could compare the performance of calling the generic vs. calling the method directly, and use the latter if it makes a significant difference.
In the case where your function has a completely different purpose than the generic in the core of R, a method might be more confusing than helpful, so you might, in that case, not create a method but keep your own function name.
You might want to read the zoo Design vignette (see the link to zoo Design under Vignettes near the bottom of the zoo CRAN page), which discusses the design ideas that went into the zoo package. These include the idea being discussed here.
EDIT: Added disadvantages.
Good question.
I'll split your Question into two parts; here's the first one:
[I]s there really any other point in [making functions generic]?
Well, this pattern is usually invoked when the developer doesn't know the object class of every object he/she expects a user to pass in to the method under consideration.
Because of this uncertainty, this design pattern (which is called overloading in many other languages) is invoked; it requires R to evaluate the object's class and then dispatch the object to the appropriate method given its type.
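As a loose cross-language analogy (Python's functools.singledispatch rather than R's S3/S4 machinery; log_lik and SSM are made-up names), the generic is a single entry point and the concrete implementation is chosen by the class of the argument:

    from functools import singledispatch

    @singledispatch
    def log_lik(model):
        # Default method: the fallback when no specialised method matches.
        raise NotImplementedError("no log_lik method for %r" % type(model))

    class SSM:
        # Stand-in for a state space model class.
        pass

    @log_lik.register(SSM)
    def _(model):
        # Specialised method for SSM objects; the R analogue would be logLik.SSM.
        return -42.0   # placeholder value

    print(log_lik(SSM()))   # dispatch picks the SSM-specific method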
The second part of your Question: [i]n this case I shouldn't use [the generic function] right?
To try to give you an answer useful beyond the detail of your Question, consider what happens to the original method when you call setGeneric, passing that method in.
The original function body is replaced with code that performs a top-level dispatch based on the type of the object passed in. The original body slides down one level and becomes the default method that the top-level (generic) function dispatches to.
showMethods() will let you see all of the methods that are called by the newly created dispatch (generic) function.
And now for one huge disadvantage:
Ease of MISUse:
Users of your package already familiar with those generics might do a certain amount without reading the documentation.
And therein lies the fallacy that components, reusable objects, services, etc. are an easy panacea for all software challenges.
And why the overwhelming majority of software is buggy, bloated, and operates inconsistently, with little hope of tech support being able to diagnose your problem.
There WAS a reason for static linking and small executables back in the day. But this generation of "code now, get paid now, debug later (if ever, before the layoffs/IPO come)" has no memory of the days when code actually worked reliably and installation/integration didn't require $200/hr Big 4 consultants, or hackers spending a week trying to get some "simple" open source product installed and productively running.
But if you want to continue the tradition of writing ever shorter function/method names, be my guest.
