I wonder about the idea of representing and executing programs using graphs. Some kind of stackless model where each node in the graph represents a function and the edges represent arguments to the functions. In this way a function doesn't return its result to its caller, but passes it as an argument to another function node. Total nonsense? Or maybe it is just a state machine in disguise? Any actual implementations of this anywhere?
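For what it's worth, the model in the question can be sketched in a few lines of Python: a worklist loop plays the role of the missing call stack, and each node hands its result to its successor instead of returning. All node and edge names here are hypothetical:

```python
# Minimal sketch of a "graph of functions" executor: each node is a
# function, and instead of returning to a caller it forwards its result
# along an outgoing edge to the next node. A plain while-loop
# (a trampoline) keeps it stackless.

def double(x):
    return x * 2

def add_one(x):
    return x + 1

# nodes: name -> function; edges: name -> successor (None = terminal)
nodes = {"double": double, "add_one": add_one}
edges = {"double": "add_one", "add_one": None}

def run(start, value):
    current = start
    while current is not None:          # no call-stack growth
        value = nodes[current](value)   # node computes...
        current = edges[current]        # ...and passes the result onward
    return value

print(run("double", 5))  # 5 -> 10 -> 11
```

With a single successor per node this is exactly a state machine; the interesting cases start when edges carry multiple arguments from different nodes, which is the dataflow-graph territory.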
This sounds a lot like a state machine.
I think Dybvig's dissertation Three Implementation Models for Scheme does this with Scheme.
I'm pretty sure the first model is graph-based in the way you mean. I don't remember whether the third model is or not. I don't think I got all the way through the dissertation.
For JavaScript you might want to check out node-red (visual) or jsonflow (JSON).
I would like to test my recently created algorithm on large (50+ node) graphs. Preferably, they would specifically be challenging graphs, and known tours would exist (for at least most of them).
Problem sets for this problem do not seem as easy to find as for the TSP. I am aware of the Flinders challenge set available at http://www.flinders.edu.au/science_engineering/csem/research/programs/flinders-hamiltonian-cycle-project/fhcpcs.cfm
However, they seem to be directed. I can probably alter my algorithm to work for directed, but it will take time and likely induce bugs. I'd prefer to know if it can work for undirected first.
Does anyone know where problem sets are available? Thank you.
quick edit:
Now I am unsure whether the Flinders set is directed or not... it doesn't say. The examples make it seem like it may actually be undirected.
Check this video:
https://www.youtube.com/watch?v=G1m7goLCJDY
Also check the in-depth sequel to the video.
You can determine yourself how many nodes you want to add to the graph.
It does require you to construct the data yourself, which should be doable.
One note: the problem is about a path, not a cycle, but you can overcome this by connecting the start and end node.
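If no ready-made undirected set fits, one workaround (a hedged sketch, not from the video) is to generate your own undirected instances with a planted Hamiltonian cycle, so a known tour always exists:

```python
import random

def planted_hamiltonian_graph(n, extra_edges, seed=0):
    """Build an undirected graph on n nodes that is guaranteed to
    contain a Hamiltonian cycle: shuffle the nodes, link them in a
    ring (the planted cycle), then add random distractor edges."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    edges = set()
    for i in range(n):                       # the planted cycle
        a, b = order[i], order[(i + 1) % n]
        edges.add((min(a, b), max(a, b)))
    while len(edges) < n + extra_edges:      # random noise edges
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    return edges, order                      # graph + one known tour

edges, tour = planted_hamiltonian_graph(50, 100)
```

One caveat: planted instances tend to be easier than adversarial benchmark graphs, so treat them as sanity checks rather than stress tests.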
I know that it is possible to export/import an h2o model that was previously trained.
My question is - is there a way to transform h2o model to a non-h2o one (that just works in plain R)?
I mean that I don't want to launch the h2o environment (JVM), since I know that predicting with a trained model is simply multiplying matrices, applying activation functions, etc.
Of course it would be possible to extract weights manually etc., but I want to know if there is any better way to do it.
I do not see any previous posts on SE about this problem.
No.
Remember that R is just the client, sending API calls: the algorithms (those matrix multiplications, etc.) are all implemented in Java.
What they do offer is a POJO, which is what you are asking for, but in Java. (POJO stands for Plain Old Java Object.) If you call h2o.download_pojo() on one of your models you will see it is quite straightforward. It may even be possible to write a script to convert it to R code? (Though it might be better, if you were going to go to that trouble, to convert it to C++ code, and then use Rcpp!)
Your other option is to export the weights and biases (in the case of deep learning), implement your own activation function, and use them directly.
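The question asks about R, but the arithmetic just described (weights, biases, activation) is the same in any language. A minimal Python sketch with made-up parameters — the layer shapes, weights, and the ReLU-style activation here are all hypothetical, not taken from any actual h2o export:

```python
def forward(x, layers, activation=lambda v: max(0.0, v)):
    """Tiny dense forward pass. Each layer is (weights, biases), where
    weights[i][j] connects input j to output unit i. The default
    activation is ReLU-like; substitute tanh/sigmoid to match the
    activation your model was actually trained with."""
    for weights, biases in layers:
        x = [activation(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# Hypothetical exported parameters for a 2-input, 1-output network.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # hidden layer, 2 units
    ([[1.0, 1.0]], [0.1]),                    # output layer, 1 unit
]
print(forward([2.0, 1.0], layers))
```

Note that real networks usually apply the nonlinearity only on hidden layers and keep the output layer linear (for regression), so a faithful re-implementation needs to mirror that detail too.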
But, personally, I've never found the Java side to be a bottleneck, either from the point of view of dev ops (install is easy) or computation (the Java code is well optimized).
The Problem object in OpenMDAO is programmed to behave like a dictionary of all the Problem variables declared in the objects and what-not. Now I can iterate over normal dictionaries with for loops like:
for key, value in my_dict.items():  # .iteritems() on Python 2
    do_something(key, value)
Could something like this be done with OpenMDAO problems?
I have a bunch of helpful utilities for working with dictionaries. I would like to use those to work with OpenMDAO problems as well.
Thanks!
I'm not exactly sure what you want to do, but it sounds like you want to iterate over all variables in the model? One way you could do that is to iterate over prob.root.unknowns, which is the vector that contains all the connected variables in the top System of your model. It is recursive in the sense that it includes connections that are specified in sub-systems. However, it doesn't include anything that isn't relevant for data-passing, so any Component inputs that aren't at least hooked up to an IndepVarComp won't show up in it.
The Problem isn't really a dictionary; we just define the __getitem__ and __setitem__ methods on it as a convenience for the user (see code). If you want to access the underlying dict-like object, you can use prob.root.unknowns instead. This is still not actually a dictionary but a VecWrapper instance; it is dict-like, though, and has the necessary methods to be used like one in a duck-typing sense.
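Because of that duck typing, dictionary utilities written against the generic mapping protocol should work on it unchanged. A small sketch (the `prob.root.unknowns` usage is taken from the answer above and not executed here):

```python
def summarize(mapping):
    """Works on anything dict-like that supports .items():
    a plain dict or a dict-like vector wrapper alike."""
    return {key: value for key, value in mapping.items()}

# With a plain dict:
print(summarize({"x": 1.0, "y": 2.0}))

# With an OpenMDAO problem the same call would be (not run here):
#   summarize(prob.root.unknowns)
```

Utilities written this way need no changes as long as they stick to the mapping methods the wrapper actually implements.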
I need to know whether the training data passed in the neuralnet call is randomized inside the routine, or whether the routine uses the data in the same order it is given. I really need this information for a project I am working on, and I have not been able to figure it out by looking at the source.
Thnx!
Look into the code - that's one of the most important advantages of FOSS: you can actually check what it is doing (neuralnet is pure R, so you don't even need to fear digging into FORTRAN or C code, and you can use debug() to step through the code with example data to get an overview).
Moreover, if necessary, you can even introduce e.g. a new parameter that allows you to switch off randomization if needed.
Possibly the package maintainer (maintainer("neuralnet") in R) would be willing to help you as well (and able to answer much faster than almost anyone else here on SE).
I have a large sequence of data-maps and each map needs to be classified in a nested fashion.
i.e. a given item may be an A or a B (as determined by a function), if it is a B then it may be a C or a D (determined by another function) and so on down. At each stage more data relating to the classification may be added to each map. The functions to do the classification are themselves quite complex and may need to bring in additional data to make the determinations.
Would a self-recursive multimethod be a good way to structure the code to do this? I would dispatch on the most specific type so far determined for an item, or return the best current classification when nothing further can be done.
I could get the desired effect with nested ifs inside a single classification function but gosh is that ugly.
Is a multimethod a good fit here or am I over-complicating things and missing a simpler way of structuring the code?
Seems like multimethods might be useful here. I guess all the complexity is in the dispatch function? So once you classify the top level you fire the multimethod again with more info that triggers a different instance?
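For illustration only, here is that "fire it again with more info" idea transcribed into Python, with a plain dispatch table standing in for Clojure's defmulti; the classification rules (kind, score) are entirely made up:

```python
# Self-recursive dispatch: a table keyed by the most specific
# classification so far. Each handler refines the item and recurses;
# when no handler matches, the current classification is final.

def classify_root(item):
    # top level: A or B (hypothetical rule)
    item["class"] = "B" if item.get("kind") == "widget" else "A"
    return item

def classify_b(item):
    # refine B into C or D (another hypothetical rule)
    item["class"] = "C" if item.get("score", 0) > 10 else "D"
    return item

DISPATCH = {None: classify_root, "B": classify_b}

def classify(item):
    handler = DISPATCH.get(item.get("class"))
    if handler is None:              # nothing more specific to do
        return item
    return classify(handler(item))   # re-dispatch on the refined class

print(classify({"kind": "widget", "score": 42})["class"])  # -> "C"
```

In Clojure the dispatch function would return the current classification and each defmethod would refine it and call the multimethod again, which keeps the per-level rules as separate, testable pieces instead of nested ifs.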
Another way to think about this is to base it on traversing the decision tree instead of traversing your input. I wonder whether using clojure.zip to traverse a tree of classification functions might be an interesting solution. Your classification function at each node could tell you how to next traverse the tree (which child to go to). You wouldn't need clojure.zip necessarily, but it has the tree navigation in it already.
Multimethods are great because they allow this level of dispatch when the complexity of the problem calls for it. I say go for it if it does what you want.
Perhaps you could build an isa hierarchy to help.