How to initialize layers with a numpy array in Keras

I want to convert a pre-trained Caffe model to Keras, so I need to initialize the layers one by one.
I saved the weights and biases in a .mat file and loaded them into my Python workspace.
I know the weights parameter takes a numpy array, but I don't know in what format.
Thanks

You can find more information about how to set the weights of a model in the Keras Layers Documentation. Basically you use:
layer.set_weights(weights): sets the weights of the layer from a list of Numpy arrays (with the same shapes as the output of get_weights).
Or you can initialize them directly when you create the layer. Every layer has a weights parameter that you can set with a list of numpy arrays. Check each layer's documentation for the expected weights format. For example, Dense() layers accept this format for the weights parameter:
List of Numpy arrays to set as initial weights. The list should have 2 elements, of shape (input_dim, output_dim) and (output_dim,) for weights and biases respectively.
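A minimal sketch of the set_weights route, assuming TensorFlow's Keras and a single Dense layer; the arrays W and b are placeholders standing in for whatever you loaded from the .mat file:

```python
import numpy as np
from tensorflow import keras

# Placeholder arrays standing in for the weights/biases loaded from the .mat file
W = np.arange(12, dtype="float32").reshape(4, 3)  # shape (input_dim, output_dim)
b = np.zeros(3, dtype="float32")                  # shape (output_dim,)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(3),
])

# The layer is already built, so its variables exist and can be overwritten in place
model.layers[0].set_weights([W, b])

loaded = model.layers[0].get_weights()
assert np.allclose(loaded[0], W) and np.allclose(loaded[1], b)
```

The only thing that matters is that each array's shape matches the corresponding variable reported by get_weights(); for Dense that is (input_dim, output_dim) for the kernel and (output_dim,) for the bias.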

Related

How do I feed a TFRecord dataset into a keras model if the dataset has a dictionary with multiple inputs and a single target?

I have a TFRecord dataset I would like to feed to a keras model. It consists of a dictionary with multiple input features and a single output feature. In the tensorflow documentation it says for the input x: "A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights)."
If I simply feed it the way it is, I get "ValueError: Missing data for input "input_2". Expected the following keys: ['input_2']". How do I separate the inputs from the target in the dataset and create a tuple to feed the model, as stated in the docs? I've looked everywhere and only found cases where the features were not a time series or there was only a single input.
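One common way to do this is a Dataset.map that pops the target key out of each feature dictionary, leaving an (inputs, target) tuple. A sketch under assumed names: the keys "input_1", "input_2", and "target" are placeholders for whatever your TFRecord schema actually uses, and the remaining dict keys must match the model's input layer names.

```python
import tensorflow as tf

# Toy dataset standing in for the parsed TFRecord dataset: each element is a
# dict with two input features and one target feature. Key names are assumptions.
ds = tf.data.Dataset.from_tensor_slices({
    "input_1": tf.zeros([8, 5]),
    "input_2": tf.ones([8, 3]),
    "target": tf.zeros([8, 1]),
})

def split_features(features):
    # Pop the target out of the dict; what remains is the inputs dict.
    target = features.pop("target")
    return features, target

ds = ds.map(split_features).batch(4)

inputs, target = next(iter(ds))  # inputs is a dict, target is a separate tensor
```

A dataset shaped this way satisfies the (inputs, targets) contract quoted from the docs and can be passed directly to model.fit.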

I am confused about making an mlmodel updatable using coremltools 3

I have a regressor mlmodel trained using MobileNetV2. The last several layers are as follows:
I want to make this mlmodel updatable and train the innerProduct layer (a fully-connected layer in pytorch). I have converted the mlmodel following this blog:
https://machinethink.net/blog/coreml-training-part4/ . But I found that the updatable mlmodel's second training input is set by default to "score_true", and it is just a single value (datatype: int32).
However, the output of the softmax layer is a vector with 10 float values. So how can I set the second training input to a vector, given that the ground truth is a vector with 10 float values?
I also looked up the API of CrossEntropyLoss in coremltools 3.3. Its input parameter can accept a vector of length N. So how can I change the default generated score_true from an intVal to a vector?
Thanks very much.
What you pass into the score_true MLMultiArray is the index of the class. You don't need to one-hot encode this yourself, i.e. there is no need to turn it into a vector of length N.

package "fdapace" (R) - How to access the principal components of the functional principal component analysis

After applying the FPCA() function of the "fdapace" package to a dataset, the function returns an FPCA object with various values and fields. Unfortunately I don't know which of those fields contains the principal components, or how to access or plot them. I know that there is documentation for the package, but as a beginner it doesn't really help me (no criticism intended). You can find the documentation here: fdapace.pdf
The estimates of the functional principal components (FPCs) are saved in xiEst in the result list, a matrix in which each row contains the FPC scores for one subject in the data. You can make whatever plots you want from this information. See the following for an example.
res <- FPCA(Ly, Lt)
res$xiEst # This is the matrix containing the FPC score estimates.
Plotting the first eigenfunction:
workGrid <- res$workGrid
phi1 <- res$phi[, 1]
plot(workGrid, phi1)
Plotting the mean function:
mu <- res$mu
plot(workGrid, mu)

How can I extract the filters learned in a Convolution layer?

I have a Convolution layer in my R code, created as:
conv1 <- mx.symbol.Convolution(data=data, kernel=c(10,1), num_filter=10)
Once the net is fully trained, how can I extract the 10 filters?
The filter weights are in the weight parameter of the Convolution layer. Assuming you used the standard layout, as in your example, the weights will have shape (num_filter, channels, kernel[0], kernel[1]).
For example, in the Gluon (Python) API,
conv1.weight.data()[0]
accesses the weight tensor of the first filter in the current context. In the MXNet R API used in your example, the trained weights should be available in the fitted model's arg.params list.

Adding a transformed parameter to stanfit object

I have a stanfit object called fit returned by rstan::stan(...) to infer a parameter theta. I can now analyse theta using e.g. rstan::summary(fit, pars="theta").
I later realised that I am more interested in inference about the square of theta. I should have included a transformed parameters block in the Stan model to include theta_squared as a parameter in the output.
Is it possible to add the transformed parameter theta_squared <- theta^2 to the existing stanfit object, as if it had been calculated in a transformed parameters block?
I don't know if you can (or should) add a parameter to the stanfit object manually.
But you can at least get the MCMC samples with as.data.frame(fit) and then work with them as you wish, including computing theta^2.
You can get a lot of those same graphs (rhat, autocorrelation, etc.) using ShinyStan, which does allow you to add a quantity like this (if it's a scalar). For example,
library("shinystan")
# create shinystan object (sso)
sso <- as.shinystan(fit)
# add theta_squared to sso
sso <- generate_quantity(sso, fun = function(x) x^2,
                         param1 = "theta", new_name = "theta_squared")
# launch the shinystan interface
launch_shinystan(sso)
