What does InvalidArgumentError in tensorflow 2 mean? - python-3.6

I am new to TensorFlow. I am trying to implement linear regression with custom training, following this tutorial.
But when I try to compute W*x + b with
tf.add(tf.matmul(W,x),b)
I get this error:
InvalidArgumentError: cannot compute Add as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:Add]
I initialized W, b, and x as
W = tf.Variable(np.random.rand(1,9))
b = tf.Variable([1],dtype = tf.float32)
x = tf.Variable(np.random.rand(9,100))
But when I changed the initialisation of b to
b = tf.Variable(np.random.rand(1))
I did not get any error. What is the reason for this?

The result of np.random.rand(1,9) (and other initializations) is of type np.float64. Using this with tf.Variable gives a tensor of type tf.float64.
Both inputs to TensorFlow's add must have the same dtype. The result of tf.matmul(W,x) is of type tf.float64, while b is of type tf.float32, so you need to cast one to the other's type.
In TensorFlow, you can either do this (recommended, going by convention):
# Can be done in a single line too
matmul_result = tf.matmul(W,x)
matmul_result = tf.cast(matmul_result, tf.float32)
tf.add(matmul_result, b)
Or you can do this:
tf.add(tf.matmul(W,x), tf.cast(b, tf.float64))
You can also directly change the type of the NumPy array:
W = tf.Variable(np.random.rand(1,9).astype(np.float32))
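If you go that route for all of the inputs, a minimal sketch (using the shapes from the question) that keeps everything in tf.float32, so no cast is needed at all:
import numpy as np
import tensorflow as tf
# create every variable as float32 so matmul and add agree on dtype
W = tf.Variable(np.random.rand(1,9).astype(np.float32))
x = tf.Variable(np.random.rand(9,100).astype(np.float32))
b = tf.Variable([1], dtype=tf.float32)
y = tf.add(tf.matmul(W,x), b)  # shape (1, 100), dtype float32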

Related

Split output of LSTM to do computation on each vector in keras/tensorflow

I'm trying to implement a custom layer in Keras. In the end it should implement an attention layer. So what I want to do is take the output of an LSTM and perform a computation on every vector of the output.
I've got an LSTM with return_sequences=True, so I get an output with a shape like (batch_size, num_vectors, dim_vector).
How can I access a single vector in the call function of the custom layer? Or better, how can I split the input tensor to get a list of tensors with shape (dim_vector)?
So there should be batch_size * num_vectors vectors/tensors in this list.
What I want to do looks roughly like this:
for i in range(num_vectors):
    x_i = list_of_vectors[i]  # one vector/tensor of shape (dim_vector,)
    W = self.W.eval().transpose()
    W1 = self.W1.eval()
    b = self.b.eval()
    b1 = self.b1.eval()
    activated = self.kernel_activation(W1.dot(x_i) + b1)
    score = W.dot(activated) + b
    scores.append(score)
What looks kind of promising but is poorly documented is K.gather(). Maybe someone could explain how it works or has a better idea for dealing with my problem.
I also tried tf.unstack() to get a list, but this doesn't work because the dimensions of my input tensor are unknown except for dim_vector.
I'm working with Keras on the TensorFlow backend.
Thanks in advance
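A sketch of one possible approach (not from the question; the weight shapes and names below are assumptions): instead of splitting the tensor, the backend dot product can be applied to the whole (batch_size, num_vectors, dim_vector) output at once, reusing weights like the self.W, self.W1, self.b, self.b1 from the loop above.
from keras import backend as K
def attention_scores(x, W1, b1, W, b, activation=K.tanh):
    # x: (batch_size, num_vectors, dim_vector)
    # assumed shapes: W1 (dim_vector, units), b1 (units,), W (units, 1), b (1,)
    activated = activation(K.dot(x, W1) + b1)   # (batch, num_vectors, units)
    scores = K.dot(activated, W) + b            # (batch, num_vectors, 1)
    return K.squeeze(scores, -1)                # (batch, num_vectors)
Inside the custom layer's call(), this could be invoked with the layer's own weight tensors, so no unstacking and no .eval() is needed.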

passing dictionaries of parameters and initial values to scipy.integrate.odeint

I'm trying to integrate a system of differential equations using scipy.integrate.odeint.
First, parameters and initial conditions are sampled and returned in two dictionaries (x0 and p). Then the model is created and written as a function to a file, looking roughly as follows (with dummy equations):
def model(x, t, p):
    xdot = [
        p["a"],                                       # d(rate1)/dt
        p["b"] * x["state1"] - p["c"] * x["state2"],  # d(rate2)/dt
        p["c"] * x["state2"],                         # d(rate3)/dt
        x["rate1"] + x["rate2"],                      # d(state4)/dt
        -x["rate2"] + x["rate3"],                     # d(state5)/dt
    ]
    return xdot
This is so that I can easily generate different models from simple inputs. Thus, what might normally be hardcoded variables are now keys in a dictionary with the corresponding values. I do this because assigning variables dynamically is considered bad practice.
When I try to integrate the system using odeint, as follows
sol = odeint(model, x0, t, args=(p,),
             atol=1.0e-8, rtol=1.0e-6)
where x0 is a dictionary of initial conditions, p a dictionary of parameters, and t a list of floats, I get the following error:
TypeError: float() argument must be a string or a number, not 'dict'
Obviously, scipy is not happy with my attempt to pass a dictionary to parameterize and initialize my model. The question is whether there is a way for me to resolve this, or whether I am forced to assign all values in my dictionary to variables with the names of their corresponding keys. The latter does not allow me to pass the same set of initial conditions and parameters to all models, since they differ both in states and parameters. Thus, I wish to pass the same set of parameters to all models, regardless of whether the parameters are in the model or not.
For performance reasons, scipy functions like odeint work with arrays in which each quantity is associated with a fixed position.
A solution that lets you access values by name is to convert them to a namedtuple, which gives them both a name and a position. However, the conversion needs to be done inside the function, because odeint converts the initial state to a plain numpy array before passing it to the model function.
This example should convey the idea:
from scipy.integrate import odeint
from collections import namedtuple
params = namedtuple('params', ['a', 'b', 'c', 'd'])
def model(x, t0):
    x = params(*x)
    xdot = [1,
            x.a + x.b,
            x.c / x.a,
            1 / x.d**2]  # whatever
    return xdot
x0 = params(a=1, b=0, c=2, d=0.5)
t = [0, 0.5, 1]
sol = odeint(model, x0, t)
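To connect this back to the question's dictionaries, one possible sketch (the dictionaries and equations below are dummies, not the ones from the question): fix a key order once, convert the initial-condition dict to an array and the parameter dict to a namedtuple, and let the model rebuild the named view of the state inside the function.
import numpy as np
from scipy.integrate import odeint
from collections import namedtuple
x0 = {"state1": 1.0, "state2": 0.0}   # hypothetical initial conditions
p = {"a": 0.5, "b": 1.2, "c": 0.3}    # hypothetical parameters
States = namedtuple("States", sorted(x0))
Params = namedtuple("Params", sorted(p))
x0_arr = [x0[k] for k in States._fields]  # state values in a fixed order
p_nt = Params(**p)
def model(x, t, p):
    s = States(*x)                            # named view of the state array
    return [p.a - p.b * s.state1,             # d(state1)/dt (dummy equation)
            p.b * s.state1 - p.c * s.state2]  # d(state2)/dt (dummy equation)
t = np.linspace(0, 10, 101)
sol = odeint(model, x0_arr, t, args=(p_nt,), atol=1.0e-8, rtol=1.0e-6)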

R - Picard Method for General Polynomial

I am currently writing a program in R to find solutions of a general polynomial differential equation using Picard's method.
For insight into the mathematics behind it (as math mode isn't available here):
https://math.stackexchange.com/questions/2064669/picard-iterations-for-general-polynomials/2064732
Since then I've been trying to work with the Ryacas package for integration. However, I ran into trouble with the combination of the expression and integration functions.
library(Ryacas)
degrees = 3
a = c(3, 5, 4, 6)
x0 = -1
maxIterations = 10
iteration = vector('expression', length = maxIterations)
iteration[1] = x0
for (i in 2:maxIterations) {
  for (j in 1:degrees) {
    exp1 = expression(a[j] * iteration[i-1] ^ j)
  }
  iteration[i] = x0 + Integrate(exp1, t)
}
but this results in
"Error in paste("(", ..., ")") :
cannot coerce type 'closure' to vector of type 'character'"
and exp1 = expression(a[j] * iteration[i-1]^j) instead of an actual expression, as I tried to achieve. Is there any way I can make sure R reads this as a real expression (i.e., for example, 3 * (x0) ^ j for i = 2)?
Thanks in advance!
Edit:
I also found the Subst() function and am currently trying to see if anything is fixable using it. Now I am mainly struggling to actually set up an expression for the m coefficients of a, as I can't find a way to create e.g. a for loop inside the expression() command.

What are x1_step1_xoffset, x1_step1_gain and x1_step1_ymin in a neural network generated by genFunction in Matlab?

I'm working with Matlab's Neural Network toolbox and I have generated a neural network function with genFunction.
I would like to know what the mapminmax_apply function does, what these variables are used for, and their meaning in the neural network:
% Input 1
x1_step1_xoffset = [0.151979470539401;-89.4008362047824;0.387909026651698;0.201508462422352];
x1_step1_gain = [2.67439342164766;0.0112020512930696;3.56055585104964;4.09080417195814];
x1_step1_ymin = -1;
Here is the mapminmax_apply function:
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings_gain,settings_xoffset,settings_ymin)
  y = bsxfun(@minus,x,settings_xoffset);
  y = bsxfun(@times,y,settings_gain);
  y = bsxfun(@plus,y,settings_ymin);
end
And here is the call to the function with the above variables:
% Input 1
Xp1 = mapminmax_apply(X{1,ts},x1_step1_gain,x1_step1_xoffset,x1_step1_ymin);
I think:
the mapminmax function can also return the settings it uses (amongst others, offset, gain and ymin). For some reason, in the code spat out by the NN function, these settings are given at the beginning of the file, under Input 1, in the form of x1_step1_xoffset, etc.
mapminmax('apply',X,PS) will apply the settings in PS to the mapminmax algorithm.
So I think the code generated here has more steps than you necessarily need. You could get rid of the Input 1 steps and just use a simple xp1 = mapminmax(x1') instead of the mapminmax_apply.
Cheers
The Matlab NN toolbox automatically normalizes the features of the dataset.
The functions mapminmax_apply and mapminmax_reverse are related to normalizing the features.
The function mapminmax_apply normalizes the input range to [-1, 1].
Since the output also comes out as a normalized vector/value (between -1 and 1), it needs to be reverse-normalized using the function mapminmax_reverse.
Cheers
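As a plain NumPy sketch (not the generated Matlab code itself) of what mapminmax_apply computes with the x1_step1_* settings stored at the top of the generated file:
import numpy as np
def mapminmax_apply(x, gain, xoffset, ymin):
    # subtract the per-feature minimum (xoffset), scale by the per-feature gain,
    # then shift so the minimum maps to ymin (here -1), mirroring the bsxfun calls
    return (x - xoffset) * gain + ymin
mapminmax chooses gain = (ymax - ymin) / (xmax - xmin) and xoffset = xmin, so each feature ends up in the range [ymin, ymax] = [-1, 1].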

Rstudio - Error in user-created function - Object not found

First things first: my skills in R are somewhat lacking, so there is a chance I may be using something incorrectly in the following. If I go wrong somewhere, please let me know.
I've been having a problem in RStudio where I try to create two functions for formulae, then use nls() to create a model from them, with which I will make a plot. When I try to run the line that creates the model, I get an error message saying an object is missing. It is always the last object in the first "formula" function, in this case 'p'.
I'll provide my code here, then explain what I am trying to do for a little context:
DATA <- read.csv(file.choose(), as.is=T)
formula <- function(m, h, g, p){(2*m)/(m+(sqrt(m^2+1)))*p*g*(h^2/2)}
formula.2 <- function(P, V, g){P*V*g}
m = 0.85
p = 766.42
g = 9.81
P = 0.962
h = DATA$lithothick
V = DATA$Vol
fit.1 <- nls(formula (P, V, g) ~ formula(m, h, g, p), data = DATA)
If I run it as shown, I get the error:
Error in (2 * m)/(m + (sqrt(m^2 + 1))) * p : 'p' is missing
However, it will show h if I rearrange the objects in the formula to (m, g, p, h):
Error in h^2 : 'h' is missing
Now, what I'm trying to do is this; I have a .csv file with 3 thicknesses (0.002, 0.004, 0.006 meters) and 3 volumes (10, 25, 50 milliliters). I am trying to see how the rates of strength and buoyancy increase (in relation to each other) as the thickness and volume for each object (respectively) increases. I was hoping to come out with a graph showing the upward trend for each property (strength and buoyancy), as I believe them to be unequal (one exponential the other linear). I hope that isn't more confusing than clarifying, but any pointers would be GREATLY appreciated.
You cannot overload functions this way in R. What you can do is provide optional arguments (which is a kind of overloading) with the syntax function(mandatory, optional="").
For what you are trying to do, you have to use formula.2 if you want the three-argument formula.
A workaround could be to use one function with one optional argument and check whether that argument has been used. Something like:
formula = function(m, h, g, p = "") {
  if (is.numeric(p)) {
    (2*m)/(m+(sqrt(m^2+1)))*p*g*(h^2/2)
  } else {
    m*h*g
  }
}
This is ugly and a very bad way to do it (your variables do not really mean the same thing from one call to the other), but it works.
