Let's say, for example, I have built the following CNN model using Keras:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Dense(10, activation='softmax'))
I wish to be able to transform the above model into a mathematical formula.
I understand the basic structure of a CNN as follows:
where
However, I do not know how to go from the above recursive formula to something like this (the first two summations are weights and the second two are adjustable biases):
Do I need to trace each weight, each bias and each connection of every neuron? If so, how?
Furthermore, I would greatly appreciate it if someone could provide a generalized strategy for tackling such a problem (e.g., finding a mathematical formula for a different kind of classifier).
Lastly, is this an easy task and is it a worthwhile one?
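To make the target concrete, here is a rough sketch (using generic symbols $x$, $w$, $b$ and $f$, not anything extracted from the model above) of what the written-out expression looks like for just the two dense layers at the end, where $x_i$ denotes the flattened input to Dense(512):

$$
y_k = \operatorname{softmax}_k\!\left( \sum_{j=1}^{512} w^{(2)}_{kj}\, f\!\left( \sum_{i} w^{(1)}_{ji}\, x_i + b^{(1)}_j \right) + b^{(2)}_k \right),
\qquad
\operatorname{softmax}_k(z) = \frac{e^{z_k}}{\sum_{m=1}^{10} e^{z_m}},
\quad k = 1, \dots, 10.
$$

(In the model above, Dense(512) is given no activation argument, so $f$ would be the identity there.) The convolutional and pooling layers unfold in the same way in principle, except that each Conv2D output is itself a sum over a 3×3×(channels) window of the previous layer, so the fully expanded formula grows very large very quickly.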
Related
Colleagues, I have graph-1 and I need to predict the values of another graph, graph-2, based on its data.
The graphs are certainly correlated. Using machine learning I can predict graph-2 from graph-1, but I would like to have a mathematical formula on which the prediction is based.
My plan is simply to make an approximation: there are web sites where mathematical formulas are selected automatically; I would take the formula with the smallest average approximation error (in %), then use that formula and see how it performs.
Maybe there is a smarter way?
Please see the image
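For concreteness, here is a minimal sketch in R of the "just make an approximation" plan, with made-up example data and hypothetical vectors g1 and g2 standing in for the two graphs: fit a few candidate formulas of graph-2 as a function of graph-1 and keep the one with the smallest average percentage error.

# Minimal sketch (assumed data): g1, g2 are numeric vectors of equal length
# holding the values of graph-1 and graph-2 at the same points.
set.seed(1)
g1 <- seq(0, 10, length.out = 100)
g2 <- 3 + 2 * g1 + 0.5 * g1^2 + rnorm(100, sd = 2)   # made-up example data

# Candidate formulas: linear, quadratic, cubic in g1.
fits <- list(
  linear    = lm(g2 ~ g1),
  quadratic = lm(g2 ~ poly(g1, 2, raw = TRUE)),
  cubic     = lm(g2 ~ poly(g1, 3, raw = TRUE))
)

# Mean absolute percentage error of each candidate.
mape <- sapply(fits, function(f) mean(abs((g2 - fitted(f)) / g2)) * 100)
print(round(mape, 2))

# The coefficients of the best candidate give an explicit formula,
# e.g. g2 ≈ b0 + b1*g1 + b2*g1^2 for the quadratic fit.
coef(fits[[which.min(mape)]])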
I have started to work with GAM in R and I’ve acquired Simon Wood’s excellent book on the topic. Based on one of his examples, I am looking at the following:
library(mgcv)
data(trees)
ct1<-gam(log(Volume) ~ Height + s(Girth), data=trees)
I have two general questions to this example:
How does one decide when a variable in the model estimation should be parametric (such as Height) and when it should be smooth (such as Girth)? Does one hold an advantage over the other, and is there a way to determine what the optimal type for a variable is? If anybody knows of any literature on this topic, I'd be happy to hear of it.
Say I want to take a closer look at the coefficients of ct1: ct1$coefficients. Can I use them as the gam procedure outputs them, or do I have to transform them before analyzing them, given that I am fitting to log(Volume)? In the latter case, I guess I would have to use exp(ct1$coefficients).
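Not an answer, but a sketch of how one might probe question 1 empirically with mgcv itself: fit Height both ways and let the data decide (a smooth term whose effective degrees of freedom stay near 1 is essentially linear, so the parametric term is the simpler choice). Only the trees data used above is assumed.

library(mgcv)
data(trees)

# Height as a parametric (linear) term vs. as a smooth term
ct1 <- gam(log(Volume) ~ Height + s(Girth), data = trees)
ct2 <- gam(log(Volume) ~ s(Height) + s(Girth), data = trees)

summary(ct2)     # check the edf of s(Height): close to 1 means essentially linear
AIC(ct1, ct2)    # compare the two specifications

# Question 2: everything is on the log(Volume) scale, so back-transform
# predictions (rather than individual coefficients) with exp()
exp(predict(ct1, newdata = trees[1:3, ]))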
I am trying to understand how the GAM model works. I understand the optimization process and the leave-one-out process of developing the spline, but I do not know what R (or potentially all ways of fitting a GAM) uses for the starting values: mustart, etastart and start.
I looked at the glm.fit code, and it appears that the family and link function used in the gam model are involved in finding these starting values, but trying to reverse-engineer an answer has been quite frustrating. I was wondering if there is a better way to find out what these values are.
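One low-tech way to see where these values come from, at least on the glm.fit path, is to inspect the family object itself: each family carries an initialize expression that the fitting routine evaluates to set mustart whenever no starting values are supplied. A minimal sketch (base R only):

# Each family object carries an 'initialize' expression; glm.fit evaluates it
# (when mustart, etastart and start are all unset) to obtain the initial mustart.
gaussian()$initialize
binomial()$initialize
poisson()$initialize

# To watch the values being set, step through the fitting function directly:
# debug(glm.fit)    # then fit a small model and inspect mustart/etastart
# undebug(glm.fit)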
I was trying to understand how to fit a VAR model that is specific rather than general.
I understand that fitting a general model such as a VAR(1) is done by loading the "vars" package from CRAN.
For example, suppose y is a 10-by-2 matrix. Then, after loading the vars package, I did the following:
library(vars)
y = df[, 1:2]  # df is a data frame with a lot of columns (we only care about the first two)
VARselect(y, lag.max = 10, type = "const")
summary(fitHilda <- VAR(y, p = 1, type = "const"))
This works fine if no restrictions are imposed on the coefficients. However, if I would like to fit this restricted VAR model, how may I do so in R?
Please refer me to a page if you know of any. If anything is unclear from your perspective, please do not downvote; let me know what it is and I will try to make it as clear as I can.
Thank you very much in advance
I was not able to find out how to impose restrictions exactly the way I wanted to. However, I found a workaround, as follows.
First, find the number of lags using an information criterion, for example:
VARselect(y, lag.max=10, type="const")
This will give you the lag length; I found it to be one in my case. Then fit a VAR(1) model to your data (in my case, y):
t=VAR(y, p=1, type="const")
When I view the summary, I find that some of the coefficients may be statistically insignificant.
summary(t)
Then run the restrict() function built into the 'vars' package:
t1=restrict(t, method = "ser", thresh = 2.0, resmat = NULL)
This function re-estimates the VAR, imposing zero restrictions on coefficients whose t-statistics fall below the given threshold (method = "ser"). To see the result, run:
summary(t1)
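If you need specific restrictions rather than significance-based ones, restrict() in 'vars' also has a manual mode: you pass a 0/1 matrix with one row per equation and one column per regressor (1 = keep, 0 = force to zero). A sketch for a bivariate VAR(1) with a constant, where the particular zeros chosen below are only an illustration:

# Columns follow the regressor order of the fitted VAR: y1.l1, y2.l1, const.
# Here the first equation keeps only its own lag and the constant,
# while the second equation is left unrestricted.
res <- matrix(c(1, 0, 1,
                1, 1, 1), nrow = 2, byrow = TRUE)
t2 <- restrict(t, method = "manual", resmat = res)
summary(t2)
Bcoef(t2)   # coefficient matrix with the imposed zeros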
I want to estimate the forward-looking version of the Taylor rule equation using iterative nonlinear GMM:
I have data for all the variables in the model, namely the inflation rate, the unemployment gap and the effective federal funds rate, and what I am trying to estimate is the set of model parameters.
Where I need help is with the usage of the gmm() function in the {gmm} R package. I think the function arguments I need are:
gmm(g, x, type = "iterative", ...)
where g is the formula (so, the model stated above), x is the data vector (or matrix) and type is the type of GMM to use.
My problem is with the data matrix argument. I do not know how to construct it (it is not that I don't know about matrices in R), and all the examples I have seen on the internet are not similar to what I am attempting here. Also, this is my first time using the gmm() function in R. Is there anything else I need to know?
Your help will be much appreciated. Thank you :)
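Not a full answer, but a minimal sketch of the pattern gmm() expects for a nonlinear model, as far as I understand it: when g is a function rather than a formula, it takes the parameter vector and the data matrix and returns one column of moment conditions per instrument, and x is then just a numeric matrix whose columns you index inside g. The column names, instruments and simplified Taylor-rule specification below are placeholders, not taken from the question:

library(gmm)

# Assumed data matrix layout (placeholder column names):
#   ffr  = effective federal funds rate,  infl = inflation rate,
#   ugap = unemployment gap, plus one-period lags used as instruments.
g <- function(theta, x) {
  # residual of a simplified rule: ffr_t = a + b*infl_t + c*ugap_t + e_t
  e <- x[, "ffr"] - theta[1] - theta[2] * x[, "infl"] - theta[3] * x[, "ugap"]
  # instruments: constant and lagged variables
  z <- cbind(1, x[, "infl_lag"], x[, "ugap_lag"], x[, "ffr_lag"])
  e * z    # n x q matrix of moment conditions, E[e_t * z_t] = 0
}

# x is the numeric matrix holding those columns; t0 gives starting values.
# fit <- gmm(g, x = data_matrix, t0 = c(0, 1.5, -0.5), type = "iterative")
# summary(fit)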