THNN.lua:110: input and gradOutput shapes do not match: input [90 x 271], gradOutput [271] - torch

I have Torch code that runs fine on one machine but fails with the error "THNN.lua:110: input and gradOutput shapes do not match: input [90 x 271], gradOutput [271]" on another machine. The data is the same. I think it is a version problem, but I don't know how to fix it. Any help?
The problem occurs in the backward pass of the LogSoftMax module, but when I print the shapes, there is no mismatch.
stack traceback:
[C]: in function 'v'
/root/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'LogSoftMax_updateGradInput'
/root/torch/install/share/lua/5.1/nn/LogSoftMax.lua:12: in function 'updateGradInput'
/root/torch/install/share/lua/5.1/nngraph/gmodule.lua:420: in function 'neteval'
/root/torch/install/share/lua/5.1/nngraph/gmodule.lua:454: in function 'updateGradInput'

Related

Reset a function wrongly assigned as a string in Python using Jupyter

I was practicing plotting when by mistake I assigned ylabel and title as below:
plt.ylabel = "No. of Hospitals"
plt.title = 'Hospitals by State'
This changed the two functions to strings, as confirmed in the image below (first blue circle).
Then I changed the statements to call these correctly:
plt.ylabel("No. of Hospitals")
plt.title('Hospitals by State')
Now, I get the error
TypeError: 'str' object is not callable
In one Stack Overflow article, I learned that once a function is wrongly assigned, the only way to fix it is to restart the kernel. I don't want to restart the kernel and rerun the 500+ jobs above this mistake. I also tried importing matplotlib and sns again, aliasing pyplot as plt2, but that didn't work either (second blue circle in the image).
Is there a way to reset the functions back to normal from their string state?
I understand that I could write the dataframe of interest to a file and then read it back in a new notebook. However, I'm sure many will agree that knowing how to reset the functions back to normal would help many people in the future and avoid costly workarounds.
You can redefine them, for example:
plt.title = lambda *args, **kwargs: plt.gca().set_title(*args, **kwargs)
plt.ylabel = lambda *args, **kwargs: plt.gca().set_ylabel(*args, **kwargs)
See also the original code of matplotlib.pyplot.title and matplotlib.pyplot.ylabel.
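Another option (a sketch, assuming the notebook can tolerate re-running pyplot's import-time setup) is to reload the module in place, which rebinds the shadowed attributes to the original functions:

import importlib
import matplotlib.pyplot as plt

# Re-executes the pyplot module inside the same module object, so the
# attributes title and ylabel point at the real functions again.
importlib.reload(plt)

plt.ylabel("No. of Hospitals")   # callable again
plt.title('Hospitals by State')

Because the module object is reused, every existing plt reference in the notebook sees the restored functions.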

plothraw in PARI/GP (or similar) doesn't work (LaTeXiT crash)

I'm a new user of PARI/GP, and after writing my script, I wanted to make a graph of it. Since my function takes an integer and returns a number, it is closer to a sequence. I didn't know how to plot that, so I read the PARI/GP documentation and then ran some tests to obtain a graph from a list.
After reading an answer on Stack Overflow (Plotting multiple lists in Pari), I wanted to test the following code:
plothraw([0..200], apply(i->cos(i*3*Pi/200), [0..200]), 0);
But when I do, it tries to open something in LaTeXiT, which then crashes and gives me a problem report.
I didn't even know I had an app named LaTeXiT; maybe it was installed during the installation of PARI/GP. Anyway, how can I fix this?
PARI/GP definitely doesn't install LaTeXiT.
The way hi-res graphics work on the Win32 version of PARI/GP is to write an Enhanced Metafile (.EMF) to a temp directory and ask the system to "open" it. When you installed LaTeXiT, it probably created an association in the registry that lets it open .EMF files.
i3Pi does not mean what you think; it just creates a new variable with that name. You want i * 3 * Pi instead.
The following constructions both work in my setup:
plothraw([0..200], apply(i->cos(i*3*Pi/200), [0..200]), 0);
plothraw([0..200], apply(i->cos(i*3*Pi/200), [0..200]), 1);
(the second one being more readable because a red line is drawn between successive points; I have trouble seeing the few tiny blue dots)
Instead of apply, you can use a direct constructor as in
vector(201, i, cos((i-1) * 3 * Pi / 200))
which of course can be computed more efficiently as
real( powers(exp(3*I*Pi/200), 200) )
(of course, it doesn't matter here, but compare both commands at precision \p10000 or so ...)

How to feed data properly in TensorFlow

I have been learning TensorFlow, and understanding feed_dict has been a challenge. Take, for example, the following piece of code I am working on:
p=0
self.sequence_length=25
with tf.Session() as sess:
    init.run()
    char_to_ix={ch:ix for ix,ch in enumerate(self.words)}
    ix_to_char={ix:ch for ix,ch in enumerate(self.words)}
    words_in_input=self.data[p:p+self.sequence_length]
    inputs=[char_to_ix[ix] for ix in words_in_input]
    words_in_target=self.data[p+1:p+self.sequence_length+1]
    targets=[char_to_ix[ix] for ix in words_in_target]
    onex=sess.run([selected_next_letter],feed_dict={self.X:inputs,self.y:targets})
    p=p+1
This gives the error: Shapes of all inputs must match: values[0].shape = [25] != values[1].shape = []
However, when I edit the code to
with tf.Session() as sess:
    init.run()
    char_to_ix={ch:ix for ix,ch in enumerate(self.words)}
    ix_to_char={ix:ch for ix,ch in enumerate(self.words)}
    words_in_input=self.data[p:p+self.sequence_length]
    inputs=[char_to_ix[ix] for ix in words_in_input]
    words_in_target=self.data[p+1:p+self.sequence_length+1]
    targets=[char_to_ix[ix] for ix in words_in_target]
    for x,y in zip(inputs,targets):
        onex=sess.run([selected_next_letter],feed_dict={self.X:x,self.y:y})
It executes.
My question is: is it possible to feed whole lists such as inputs and targets in the feed_dict, or must I feed the values one by one in a loop? I ask because in the tutorials I have been reading, I see whole lists being passed in a feed_dict, such as:
loss_val = sess.run([train_op, loss_mean], feed_dict={
    images_batch: images_batch_val,
    labels_batch: labels_batch_val
})
Usually that error occurs because your input array (x) isn't the same size as your labels array (y). As the error states, it looks like your labels array is empty. Before doing anything TensorFlow-related, make sure both the x and y arrays contain values and that they are the same size.
To answer your question: yes, you can feed whole lists when training, and that is the preferred way of using TensorFlow.
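A minimal sketch of what that looks like (the placeholder definitions and the toy op below are illustrative assumptions, not the asker's actual graph): if a placeholder is declared with shape [None], a whole Python list can be fed in a single sess.run call. Conversely, if the placeholders were declared with a scalar shape, feeding a 25-element list would produce exactly this kind of shape-mismatch error.

import tensorflow as tf

# Placeholders with shape [None] accept a whole sequence of values at once.
X = tf.placeholder(tf.int32, shape=[None], name="inputs")
y = tf.placeholder(tf.int32, shape=[None], name="targets")

# Toy op standing in for the model; any op that consumes X and y would do.
combined = X + y

inputs = [1, 2, 3, 4, 5]
targets = [2, 3, 4, 5, 6]

with tf.Session() as sess:
    # Both full lists are fed in one call; no per-element loop is needed.
    print(sess.run(combined, feed_dict={X: inputs, y: targets}))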

TensorFlow: Can't invoke streaming_sparse_precision_at_k

Upon trying to calculate precision@k, I get an exception. What follows is a simple piece of code that reproduces the problem.
First the code defines the variable scope:
initializer = tf.random_uniform_initializer(-0.1, 0.1, seed=1234)
with tf.variable_scope("model", reuse=None, initializer=initializer):
Then it calls those lines:
predictions = tf.Variable(tf.ones([2, 10], tf.int64))
labels = tf.Variable(tf.ones([2, 1], tf.int64))
precision = tf.contrib.metrics.streaming_sparse_precision_at_k(predictions, labels, 5)
tf.initialize_all_variables().run()
(I know this code is meaningless, and tries to calculate the precision given 2 fixed matrices...)
Then I get the following exception:
W tensorflow/core/framework/op_kernel.cc:936] Failed precondition: Attempting to use uninitialized value model/precision_at_5/false_positive_at_5
[[Node: model/precision_at_5/false_positive_at_5/read = Identity[T=DT_DOUBLE, _class=["loc:@model/precision_at_5/false_positive_at_5"], _device="/job:localhost/replica:0/task:0/gpu:0"]]]
The same happens when I try to invoke streaming_sparse_recall_at_k instead of streaming_sparse_precision_at_k.
The installed version is r0.10 on Linux with Python 2.7.
Please help... Thanks in advance :)
Unfortunately, tf.initialize_all_variables() doesn't initialize "local" variables (which tend to be internal implementation details for ops like tf.contrib.metrics.streaming_sparse_precision_at_k() and tf.train.string_input_producer(), as opposed to variables used as model weights).
You'll need to add a line to your program that runs tf.initialize_local_variables() before running the evaluation op:
sess.run(tf.initialize_local_variables()) # or `tf.initialize_local_variables().run()`
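For completeness, here is a minimal sketch of the question's snippet with both initializations added (the unpacking into precision and update_op reflects the streaming metrics' convention of returning a value tensor plus an update op; the session handling is illustrative):

import tensorflow as tf

predictions = tf.Variable(tf.ones([2, 10], tf.int64))
labels = tf.Variable(tf.ones([2, 1], tf.int64))
# Streaming metrics return a (value, update_op) pair.
precision, update_op = tf.contrib.metrics.streaming_sparse_precision_at_k(
    predictions, labels, 5)

with tf.Session() as sess:
    # Initialize both the model variables and the metric's internal local variables.
    sess.run(tf.initialize_all_variables())
    sess.run(tf.initialize_local_variables())
    sess.run(update_op)         # accumulate counts for this batch
    print(sess.run(precision))  # current precision@5

Note that in later TensorFlow releases these initializers were renamed tf.global_variables_initializer() and tf.local_variables_initializer().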

Scilab Error 10000

Hi, I am new to Scilab and don't have much of a mathematical background.
I have been following code from another example and am getting error 10000 for the following code:
function [z]=f(x,y)
z=0.026*(1.0-(y/ym))*y;
endfunction;
ym=12000;
x0=1950;y0=2555;xn=5;h=10;
x=[x0:h:xn];
y=ode("rk",y0,x0,x,f);
disp("x y")
disp("--------")
disp([x'y']);
function z=fe(x)
z=ym/(1-(1-ym/y0)*e^(-k*(t-t0)));
endfunction;
xe=(x0:h/10:xn);
n=length(xe)
for i=1:n
ye(i)=fe(xe(i));
end;
plot (x,y,'ro',xe, ye,'-b');legend ('rk4','Exact',3);
xtitle('solving dy/dx=k(1-y/ym)y','x','y');
I have worked through several other error messages. I am lost and don't know if the problem is in the code or the way I set up the problem. The following is the current error message:
!--error 10000
plot: Wrong size for input argument #2: A non empty matrix expected.
at line 57 of function checkXYPair called by :
at line 235 of function plot called by :
plot (x,y,'ro',xe, ye,'-b');legend ('rk4','Exact',3);
at line 25 of exec file called by :
I would appreciate any help.
Thanks
Start by adding clear as the first statement; this will erase all variables before running your script. In the above script you don't declare ye.
Also, the statement x=[x0:h:xn]; is strange with those values of x0, h, and xn: you are trying to build a list of x-values starting at 1950 with positive steps of 10 until 5 is reached, which yields an empty vector.
I would recommend running each line and checking whether the outcome is as expected. You do not need to understand everything about the code, but x and y should probably contain at least some values. The error is telling you that it expected a non-empty matrix for argument #2; that argument is y, so essentially it is telling you that y is empty.
