I am trying to implement a simple case of deep Q learning in R, using the neuralnet package.
I have an initial network with random weights. I use it to generate some experience for my agent, and as a result I get states and targets. Then I fit the states to the targets and get a new network with new weights.
How should I combine the new weights and the initial weights? Do I simply keep the new weights and discard the initial weights?
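For context, here is a minimal sketch of the loop described above, with all data and variable names illustrative. Per the package docs, neuralnet's startweights argument takes starting values for the weights, so it is one way to start the next fit from the previous weights rather than from a fresh random initialization:

library(neuralnet)

# 'experience' is assumed to hold states (s1, s2) and the Q-targets
# computed from the current network
net <- neuralnet(target ~ s1 + s2, data = experience, hidden = 10)

# Next iteration: fit on fresh experience, warm-starting from the
# previous weights instead of a new random initialization
net <- neuralnet(target ~ s1 + s2, data = new_experience,
                 hidden = 10, startweights = net$weights)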
I clustered some data using FactoMineR::HCPC in R, and now I am trying to cluster new data using the model I have trained. However, I cannot find a function that will give me such a prediction. So far I have seen that I can use predict.PCA with my new data set and the PCA I previously fitted on the training data set, and then use that result in the HCPC function. I know that for kmeans there exists a predict method which takes the trained model and the new data as input. Does anybody know if there is an equivalent for HCPC?
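For reference, a minimal sketch of the workaround described above, assuming illustrative data frames train and newdata: project the new rows into the trained PCA space with predict.PCA, then assign each one to the nearest cluster centroid from the training HCPC result. This approximates, but is not identical to, rerunning HCPC:

library(FactoMineR)

# Fit PCA and HCPC on the training data
res.pca  <- PCA(train, ncp = 5, graph = FALSE)
res.hcpc <- HCPC(res.pca, nb.clust = -1, graph = FALSE)

# Project the new observations into the trained PCA space
new.coord <- predict(res.pca, newdata = newdata)$coord

# Cluster centroids in PCA space, from the training assignments
train.coord <- res.pca$ind$coord
clust       <- res.hcpc$data.clust$clust
centroids   <- apply(train.coord, 2, tapply, clust, mean)

# Assign each new point to the nearest centroid (Euclidean distance)
k <- nrow(centroids)
d <- as.matrix(dist(rbind(centroids, new.coord)))[-(1:k), 1:k, drop = FALSE]
pred.clust <- rownames(centroids)[apply(d, 1, which.min)]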
Hi, I am a newbie in the deep learning field.
I ran a neural network model (regression) with 2 hidden layers in R (neuralnet package). Then I used the compute function to get the predicted probabilities. Now I want to regenerate the predicted output using the equation the neural net uses. For example, the following are weights taken from the model object:
Intercept.to.1layhid1    4.55725020215
Var1.to.1layhid1       -13.61221477737
Var2.to.1layhid1         0.30686384857
Var1.to.1layhid2         0.23527690062
Var2.to.1layhid2         0.67345678
1layhid.1.to.target      1.95414397785
1layhid.2.to.target      3.68009136857
Can anyone help me derive an equation from the above weights so that I can replicate the output?
Thanks
To get the output for new data, you can always use the predict function on the fitted model, which is the object returned by the neuralnet function.
For instance, if your model is the following:
neuralFit <- neuralnet(y ~ x1 + x2, data = trainData)  # formula and data are required
Then you reproduce the output with the following:
predict(neuralFit, newdata)
Otherwise, you'll need to compute the result manually. But you need to understand your network architecture first.
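To illustrate the manual route with the weights posted above, here is a sketch assuming a single hidden layer with two neurons, neuralnet's default logistic activation, and a linear output layer for regression. The two intercepts not shown in the posted list (b2 and b.out) are hypothetical placeholders:

sigmoid <- function(x) 1 / (1 + exp(-x))

# Hidden-layer activations (logistic is neuralnet's default act.fct)
h1 <- sigmoid(4.55725020215 - 13.61221477737 * Var1 + 0.30686384857 * Var2)
h2 <- sigmoid(b2 + 0.23527690062 * Var1 + 0.67345678 * Var2)   # b2: hidden intercept, not in the list above

# Linear output layer (linear.output = TRUE for regression)
target <- b.out + 1.95414397785 * h1 + 3.68009136857 * h2      # b.out: output intercept, not in the list above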
Is it currently possible to create a custom R clustering model in AzureML, where you define your own clustering algorithm? AzureML does not let you connect the Create R Model module to the Train Clustering Model module.
This is a critical limitation of AzureML when it comes to clustering.
Note: I know that I can do the clustering in an Execute R Script module, but I want to be able to save the model, so that when new test data comes in, I can assign it to the respective clusters.
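One workaround sketch, assuming the AzureML Studio (classic) port-mapping API inside Execute R Script modules and kmeans as a stand-in for any custom clusterer: serialize the fitted model to a string in one module, then deserialize it and assign new data in another:

## Module 1: train the custom model and emit it as a base64 string
library(base64enc)
train <- maml.mapInputPort(1)
fit   <- kmeans(train, centers = 3)    # stand-in for any custom clusterer
out   <- data.frame(model = base64encode(serialize(fit, NULL)),
                    stringsAsFactors = FALSE)
maml.mapOutputPort("out")

## Module 2: rebuild the model and assign new rows to the nearest centroid
modeldf <- maml.mapInputPort(1)
newdata <- maml.mapInputPort(2)
fit <- unserialize(base64decode(modeldf$model[1]))
k <- nrow(fit$centers)
d <- as.matrix(dist(rbind(fit$centers, newdata)))[-(1:k), 1:k, drop = FALSE]
newdata$cluster <- apply(d, 1, which.min)
maml.mapOutputPort("newdata")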
I am using R along with the neuralnet package (docs: https://cran.r-project.org/web/packages/neuralnet/neuralnet.pdf). I have used the neuralnet function to build and train my model.
Now that I have built my model, I want to test it on real data. Could someone explain whether I should use the compute or the prediction function? I have read the documentation and it isn't clear; both functions seem to do similar things.
Thanks
The short answer is to use compute to do predictions.
You can see an example of using compute on the test set here. We can also see that compute is the right one from the documentation:
compute, a method for objects of class nn, typically produced by neuralnet. Computes the outputs of all neurons for specific arbitrary covariate vectors given a trained neural network.
The above says that you can pass in covariate vectors in order to compute the output of the neural network, i.e. make a prediction.
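For instance, a minimal sketch with illustrative variable names:

library(neuralnet)

nn <- neuralnet(y ~ x1 + x2, data = train, hidden = 3)

# compute takes the fitted nn and the covariate columns of the test set
pred <- compute(nn, test[, c("x1", "x2")])
pred$net.result   # the predicted outputs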
On the other hand, prediction does what the title of its documentation entry says:
Summarizes the output of the neural network, the data and the fitted values of glm objects (if available).
Moreover, it only takes two arguments, the nn object and a list of glm models, so there is no way to pass in the test set in order to make a prediction.
I am using the bnlearn package in R, which learns Bayesian networks from data. I am trying to get more connections between the nodes, and hence I am trying to decrease the weight threshold necessary to generate arcs between them. I am using the gs function in the bnlearn package, which implements the Grow-Shrink algorithm. So far I have tried modifying the alpha threshold, but that appears to change the error threshold of the conditional independence tests instead.
Ultimately, my goal is to have the algorithm create more arcs between the points.
Thanks
You might need to first find the weights of all the arcs and selectively filter them yourself. I don't think bnlearn has that built into gs.
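A sketch of one way to do that filtering, assuming bnlearn's boot.strength and averaged.network interface and an illustrative data frame df: estimate arc strengths by bootstrap resampling, then keep arcs above a threshold of your choosing (a lower threshold keeps weaker arcs, i.e. yields a denser network):

library(bnlearn)

# Estimate the strength of every possible arc over 200 bootstrap samples
strengths <- boot.strength(df, R = 200, algorithm = "gs")

# A lower threshold keeps weaker arcs, producing more connections
dense.net  <- averaged.network(strengths, threshold = 0.3)
sparse.net <- averaged.network(strengths, threshold = 0.7)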