Understanding pyinstaller

I write a few small Python scripts every so often, but sometimes PyInstaller does not build my code right, and I don't have a deep enough understanding to know why.
So I am sorry for posting my whole (I promise, short) code, but I have NO idea where the problem could be. The code works fine when run in PyCharm, so the focus is on why PyInstaller doesn't build it right.
When I run PyInstaller I get no errors, but when I then run the exe I get the following: Current thread 0x000027d0 (most recent call first):
So if anyone could have a look through and point me to where I went wrong, that would be great. Thanks in advance.
from math import *
from tkinter import *

def calc_outcome(k, N, n, r):
    a = factorial(k - 1) / (factorial(r - 1) * factorial(k - r))
    b = factorial(N - k) / (factorial(n - r) * factorial(N - k - n + r))
    c = factorial(N) / (factorial(N - n) * factorial(n))
    return (a * b) / c

def runner():
    N = int(PopulationSize.get())
    n = int(SampleSize.get())
    startHand = 7
    avgGameLenght = 15
    print("Draw:")
    string = " "
    for x in range(1, n + 1):
        string += '{:>10}'.format(str(x) + "/" + str(n))
    print(string + "\n")
    holder = []
    for x in range(N):
        holder.append(0)
    for k in range(N + 1):
        string = ""
        string += str('{:2}'.format(k)) + ": "
        if k == startHand + 1 or k == avgGameLenght + 1:
            print("-" * ((n * 10) + 4))
        for r in range(1, n + 1):
            try:
                holder[r - 1] += calc_outcome(k, N, n, r)
                string += str('{:>10}'.format('{:.1%}'.format(holder[r - 1])))
            except:
                string += '{:>10}'.format("-")
        if k > 0:
            print(string)

def changeEntry(PS, PM):
    if PS == "P":
        if PM == "P":
            holder = int(PopulationSize.get()) + 1
            PopulationSize.delete(0, END)
            PopulationSize.insert(0, holder)
        else:
            holder = int(PopulationSize.get()) - 1
            if holder < 0: holder = 0
            PopulationSize.delete(0, END)
            PopulationSize.insert(0, holder)
    else:
        if PM == "P":
            holder = int(SampleSize.get()) + 1
            SampleSize.delete(0, END)
            SampleSize.insert(0, holder)
        else:
            holder = int(SampleSize.get()) - 1
            if holder < 0: holder = 0
            SampleSize.delete(0, END)
            SampleSize.insert(0, holder)

vindue = Tk()
PopulationSize = Entry(vindue, width=3, font="Arial, 24", justify=CENTER)
SampleSize = Entry(vindue, width=3, font="Arial, 24", justify=CENTER)
PopulationSize.insert(0, 60)
SampleSize.insert(0, 4)
knap = Button(vindue, text="Run", command=runner, font="Arial, 16")
knapPP = Button(vindue, text="P+", font="Arial, 16", command=lambda: changeEntry("P", "P"))
knapPM = Button(vindue, text="P-", font="Arial, 16", command=lambda: changeEntry("P", "M"))
knapSP = Button(vindue, text="S+", font="Arial, 16", command=lambda: changeEntry("S", "P"))
knapSM = Button(vindue, text="S-", font="Arial, 16", command=lambda: changeEntry("S", "M"))
PopulationSize.grid(row=0, column=0)
SampleSize.grid(row=1, column=0)
knap.grid(row=2, column=0)
knapPP.grid(row=0, column=1)
knapPM.grid(row=0, column=2)
knapSP.grid(row=1, column=1)
knapSM.grid(row=1, column=2)
vindue.mainloop()
This is the output from PyInstaller (just in case it's needed):
C:\..\Scripts>pyinstaller.exe --onefile C:\..\Test.py
394 INFO: PyInstaller: 3.4
395 INFO: Python: 3.4.4
396 INFO: Platform: Windows-10-10.0.17134
401 INFO: wrote C:\..\Test.spec
404 INFO: UPX is not available.
405 INFO: Extending PYTHONPATH with paths
['C:\\..', 'C:\\..\\Scripts']
405 INFO: checking Analysis
446 INFO: checking PYZ
473 INFO: checking PKG
480 INFO: Building because C:\..\Test.exe.manifest changed
480 INFO: Building PKG (CArchive) PKG-00.pkg
2302 INFO: Building PKG (CArchive) PKG-00.pkg completed successfully.
2305 INFO: Bootloader c:\..\run.exe
2306 INFO: checking EXE
2312 INFO: Building because manifest changed
2312 INFO: Building EXE from EXE-00.toc
2314 INFO: Appending archive to EXE C:\..\Test.exe
2711 INFO: Building EXE from EXE-00.toc completed successfully.
C:\..\Scripts>

Related

ValueError: shapes (2,1000) and (2,2,1000) not aligned: 1000 (dim 1) != 2 (dim 1)

I'm implementing an MLP to test a simple NN architecture, hoping to scale up to a bigger network with a larger dataset. My end goal is making a working phone recognizer for TIMIT data, as part of my internship.
To build the MLP, I used the suggestions of this video: https://www.youtube.com/watch?v=Z97XGNUUx9o
And my teacher's proposal to use the following inputs:
X = np.random.rand(5,1000)
y = X[4:5,:]
The error message is the following:
ValueError                                Traceback (most recent call last)
Cell In [63], line 7
      5 build_model()
      6 mlp = MLP(1000, [1000], 1000)
----> 7 mlp.train(inputs,targets, 50, 0.1)
      8 output = mlp.forward_propagate(input)

Cell In [62], line 117, in MLP.train(self, inputs, targets, epochs, learning_rate)
    115 output = self.forward_propagate(input)
    116 error = target - output
--> 117 self.back_propagate(error)
    118 self.gradient_descent(learning_rate=1)
    119 sum_error += self._mse(target,output)

Cell In [62], line 96, in MLP.back_propagate(self, error)
     94 current_activations = self.activations[i]
     95 current_activations_reshaped = current_activations.reshape(current_activations.shape[0], -1)
---> 96 self.derivatives[i] = np.dot(current_activations, delta)
     97 error = np.dot(delta, self.weights[i].T)
     98 return error

File <__array_function__ internals>:180, in dot(*args, **kwargs)

ValueError: shapes (2,1000) and (2,2,1000) not aligned: 1000 (dim 1) != 2 (dim 1)
This is the relevant code:
class MLP(object):
    def __init__(self, num_inputs=3, hidden_layers=[3, 3], num_outputs=2):
        self.num_inputs = num_inputs
        self.hidden_layers = hidden_layers
        self.num_outputs = num_outputs
        layers = [num_inputs] + hidden_layers + [num_outputs]
        weights = []
        for i in range(len(layers) - 1):
            w = np.random.rand(layers[i], layers[i + 1])
            weights.append(w)
        self.weights = weights
        activations = []
        for i in range(len(layers)):
            a = np.zeros(layers[i])
            activations.append(a)
        self.activations = activations
        derivatives = []
        for i in range(len(layers) - 1):
            d = np.zeros((layers[i], layers[i + 1]))
            derivatives.append(d)
        self.derivatives = derivatives

    def forward_propagate(self, inputs):
        activations = inputs
        self.activations[0] = inputs
        for i in range(len(self.weights)):
            net_inputs = np.dot(activations, self.weights)
            activations = self._sigmoid(net_inputs)
            self.activations[i + 1] = activations
        return activations

    def back_propagate(self, error):
        for i in reversed(range(len(self.derivatives))):
            activations = self.activations[i + 1]
            delta = error * self._sigmoid_derivative(activations)
            delta_reshaped = delta.reshape(delta.shape[0], -1).T
            current_activations = self.activations[i]
            current_activations_reshaped = current_activations.reshape(current_activations.shape[0], -1)
            self.derivatives[i] = np.dot(current_activations, delta)
            error = np.dot(delta, self.weights[i].T)
        return error

    def _sigmoid_derivative(self, x):
        return x * (1.0 - x)

    def _sigmoid(self, x):
        y = 1.0 / (1 + np.exp(-x))
        return y

    def gradient_descent(self, learning_rate):
        for i in range(len(self.weights)):
            weights = self.weights[i]
            derivatives = self.derivatives[i]
            weights += derivatives + learning_rate

    def _mse(self, target, output):
        return np.average((target - output) ** 2)

    def train(self, inputs, targets, epochs, learning_rate):
        for i in range(epochs):
            sum_error = 0
            for input, target in zip(inputs, targets):
                output = self.forward_propagate(input)
                error = target - output
                self.back_propagate(error)
                self.gradient_descent(learning_rate=1)
                sum_error += self._mse(target, output)
            print("Error: {} at epoch {}".format(sum_error / len(inputs), i))
And this is how I ran it:
if __name__ == "__main__":
    X, y = load_dataset()
    inputs = X
    targets = y
    build_model()
    mlp = MLP(1000, [1000], 1000)
    mlp.train(inputs, targets, 50, 0.1)
    output = mlp.forward_propagate(input)
Thanks in advance!
I tried to do what the video said to set up the MLP, as my teacher suggested, but I don't know how to solve the shape error.

How to avoid this error (error: display Surface quit) when rendering OpenAI Gym?

I am trying to solve the mountain car problem in OpenAI Gym. When I use env.render() it works the first time, but when I try to render the simulation again after 2000 runs it gives the error below (error: display Surface quit). How can I avoid this error?
I am using Windows, and I am running the code in a Jupyter notebook.
import gym
import numpy as np
import sys

# Create gym environment.
discount = 0.95
Learning_rate = 0.01
episodes = 25000
SHOW_EVERY = 2000

env = gym.make('MountainCar-v0')
discrete_os_size = [20] * len(env.observation_space.high)
discrete_os_win_size = (env.observation_space.high - env.observation_space.low) / discrete_os_size
q_table = np.random.uniform(low=-2, high=0, size=(discrete_os_size + [env.action_space.n]))

# convert continuous state to discrete state
def get_discrete_state(state):
    discrete_State = (state - env.observation_space.low) / discrete_os_win_size
    return tuple(discrete_State.astype(int))

for episode in range(episodes):
    if episode % SHOW_EVERY == 0:
        render = True
        print(episode)
    else:
        render = False
    ds = get_discrete_state(env.reset())
    done = False
    while not done:
        action = np.argmax(q_table[ds])
        new_state, reward, done, _ = env.step(action)
        new_discrete_state = get_discrete_state(new_state)
        if episode % SHOW_EVERY == 0:
            env.render()
        if not done:
            max_future_q = np.max(q_table[new_discrete_state])
            current_q_value = q_table[ds + (action, )]
            new_q = (1 - Learning_rate) * current_q_value + Learning_rate * (reward + discount * max_future_q)
            q_table[ds + (action, )] = new_q
        elif new_state[0] >= env.goal_position:
            q_table[ds + (action, )] = 0
        ds = new_discrete_state
    env.close()
I faced the same problem: when you call env.close() it closes the environment, so in order to run it again you have to make a new environment. Just comment out env.close() if you want to keep rendering the same environment.
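For anyone hitting this in a notebook, a minimal sketch of the fix (the environment name comes from the question; the rest is illustrative):

import gym

env = gym.make('MountainCar-v0')
env.reset()
env.render()
env.close()  # destroys the render window (the "display surface")

# Rendering again now raises "error: display Surface quit",
# so create a fresh environment before rendering again:
env = gym.make('MountainCar-v0')
env.reset()
env.render()
env.close()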

Fastai v2 dataset has no show_batch method

I am having trouble with my DataBlock not having a show_batch method when customising it to my own use case.
I am trying to port some of my code from fastai v1 to v2, working through the DataBlock tutorial: https://docs.fast.ai/tutorial.datablock.html
My DataBlock & Datasets:
dblock = DataBlock(get_items = get_image_files,
                   get_y = parent_label,
                   splitter = RandomSplitter())
dsets = dblock.datasets("PlantVillage-Dataset/raw/color/")
dsets.train[0]  # this works
The error I get when I try dsets.show_batch():
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-56-5a2f74730596> in <module>
----> 1 dsets.show_batch()

~/.pyenv/versions/3.7.8/envs/fastai/lib/python3.7/site-packages/fastai/data/core.py in __getattr__(self, k)
    315         return res if is_indexer(it) else list(zip(*res))
    316
--> 317     def __getattr__(self,k): return gather_attrs(self, k, 'tls')
    318     def __dir__(self): return super().__dir__() + gather_attr_names(self, 'tls')
    319     def __len__(self): return len(self.tls[0])

~/.pyenv/versions/3.7.8/envs/fastai/lib/python3.7/site-packages/fastcore/transform.py in gather_attrs(o, k, nm)
    163     att = getattr(o,nm)
    164     res = [t for t in att.attrgot(k) if t is not None]
--> 165     if not res: raise AttributeError(k)
    166     return res[0] if len(res)==1 else L(res)
    167

AttributeError: show_batch
After initialising the DataBlock I needed to construct a DataLoaders object, which is what provides batch construction (and show_batch):

dls = dblock.dataloaders(path)
dls.show_batch()
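For reference, a minimal end-to-end sketch; the blocks argument and the star import are assumptions based on the linked tutorial rather than part of the original question:

from fastai.vision.all import *

path = "PlantVillage-Dataset/raw/color/"
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),  # assumed, so batches render as labelled images
                   get_items=get_image_files,
                   get_y=parent_label,
                   splitter=RandomSplitter())
dls = dblock.dataloaders(path)  # DataLoaders, not Datasets
dls.show_batch()                # show_batch is defined at the DataLoaders level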

Unexpected error while adding string in print() function

I have a simple function fact() that prints the factorial of a number entered at runtime.
Everything works fine with the code given below.
# Find factorial of a number...
def fact():
    number = int(input('Please enter a number: '))
    tmp = 1
    while number > 0:
        tmp *= number
        number -= 1
    print(tmp)
    ask = input('Do you want to try again... [y/n]: ')
    if ('y' or 'Y') in ask:
        fact()
    else:
        print('Thank you for using my tool. Good bye')

fact()
But if I add some string to the first print() function, I get a syntax error for the line ask = input.... Here is the code:
# Find factorial of a number...
def fact():
    number = int(input('Please enter a number: '))
    tmp = 1
    while number > 0:
        tmp *= number
        number -= 1
    print("Factorial of %d is %d" %(number, tmp)
    ask = input('Do you want to try again... [y/n]: ')
    if ('y' or 'Y') in ask:
        fact()
    else:
        print('Thank you for using my tool. Good bye')

fact()
I have one last problem. My program asks me if I want to try again. If I type y and press Enter, it works as it should. But if I type Y, it executes the else statement.
I am using Python 3.6.4rc1 on Debian.
This should work! The syntax error actually comes from the previous line: print("Factorial of %d is %d" %(number, tmp) is missing its closing parenthesis, so Python only notices the problem when it reaches the ask = input(...) line. Also, ('y' or 'Y') evaluates to just 'y' (or returns its first truthy operand), so typing Y can never match; test ask in ['y', 'Y'] instead:
def fact():
    number = int(input("Please enter a number: "))
    tmp = 1
    while number > 0:
        tmp *= number
        number -= 1
    print("Factorial of %d is %d" % (number, tmp))
    ask = input("Do you want to try again... [y/n]: ")
    if ask in ['y', 'Y']:
        fact()
    else:
        print('Thank you for using my tool. Good bye')

fact()
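A quick illustration of why the original condition misses Y: or returns its first truthy operand, so ('y' or 'Y') is just the string 'y'.

>>> ('y' or 'Y')
'y'
>>> 'Y' in ('y' or 'Y')  # membership test against 'y' only
False
>>> 'Y' in ['y', 'Y']    # checks both answers
True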
I had to preserve the value of the number variable, as it becomes 0 at the end of the loop; otherwise my program prints Factorial of 0 is some_number. Now I have the correct code...
def fact():
    number = int(input("Please enter a number: "))
    preserve_number = number
    tmp = 1
    while number > 0:
        tmp *= number
        number -= 1
    print("Factorial of %d is %d" % (preserve_number, tmp))
    ask = input("Do you want to try again... [y/n]: ")
    if ask in ['y', 'Y']:
        fact()
    else:
        print('Thank you for using my tool. Good bye')

fact()

TensorFlow: dimension error. how to debug?

I'm a beginner with TF.
I've tried to adapt code which works well with some other data (notMNIST) to some new data, and I have a dimensionality error that I don't know how to deal with.
To debug, I'm trying to use the tf.shape method, but it doesn't give me the info I need...
def reformat(dataset, labels):
    # dataset = dataset.reshape((-1, num_var)).astype(np.float32)
    # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
type(train_dataset)
Training set (790184, 29) (790184, 39)
Validation set (43899, 29) (43899, 39)
Test set (43899, 29) (43899, 39)
# Adding regularization to the 1 hidden layer network
graph1 = tf.Graph()
batch_size = 128
num_steps = 3001

import datetime
startTime = datetime.datetime.now()

def define_and_run_batch(beta):
    num_RELU = 1024
    with graph1.as_default():
        # Input data. For the training data, we use a placeholder that will be fed
        # at run time with a training minibatch.
        tf_train_dataset = tf.placeholder(tf.float32,
                                          shape=(batch_size, num_var))
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)
        # Variables.
        weights_RELU = tf.Variable(
            tf.truncated_normal([num_var, num_RELU]))
        print(tf.shape(weights_RELU))
        biases_RELU = tf.Variable(tf.zeros([num_RELU]))
        weights_layer1 = tf.Variable(
            tf.truncated_normal([num_RELU, num_labels]))
        biases_layer1 = tf.Variable(tf.zeros([num_labels]))
        # Training computation.
        logits_RELU = tf.matmul(tf_train_dataset, weights_RELU) + biases_RELU
        RELU_vec = tf.nn.relu(logits_RELU)
        logits_layer = tf.matmul(RELU_vec, weights_layer1) + biases_layer1
        # loss = tf.reduce_mean(
        #     tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels))
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels, name="cross_entropy")
        l2reg = tf.reduce_sum(tf.square(weights_RELU)) + tf.reduce_sum(tf.square(weights_layer1))
        # beta = 0.005
        loss = tf.reduce_mean(cross_entropy + beta * l2reg)
        # Optimizer.
        optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)
        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits_layer)
        print("ok")
        print(tf.shape(weights_RELU))
        valid_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
        test_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_test_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)

    with tf.Session(graph=graph1) as session:
        tf.initialize_all_variables().run()
        print("Initialized")
        for step in range(num_steps):
            # Pick an offset within the training data, which has been randomized.
            # Note: we could use better randomization across epochs.
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            # Generate a minibatch.
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            # Prepare a dictionary telling the session where to feed the minibatch.
            # The key of the dictionary is the placeholder node of the graph to be fed,
            # and the value is the numpy array to feed to it.
            feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
            _, l, predictions, logits = session.run(
                [optimizer, loss, train_prediction, logits_RELU], feed_dict=feed_dict)
            if (step % 500 == 0):
                print("Minibatch loss at step %d: %f" % (step, l))
                print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
                print("Validation accuracy: %.1f%%" % accuracy(
                    valid_prediction.eval(), valid_labels))
        test_acc = accuracy(test_prediction.eval(), test_labels)
        print("Test accuracy: %.1f%%" % test_acc)
        print('loss=%s' % l)
    x = datetime.datetime.now() - startTime
    print(x)
    return (test_acc, round(l, 5))

define_and_run_batch(0.005)
Tensor("Shape:0", shape=(2,), dtype=int32)
ok
Tensor("Shape_1:0", shape=(2,), dtype=int32)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
     94     return(test_acc,round(l,5))
     95
---> 96 define_and_run_batch(0.005)

 in define_and_run_batch(beta)
     54         print(tf.shape(weights_RELU) )
     55         valid_prediction = tf.nn.softmax(
---> 56             tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)),weights_layer1)+biases_layer1)
     57
     58

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc in matmul(a, b, transpose_a, transpose_b, a_is_sparse, b_is_sparse, name)
    949                 transpose_a=transpose_a,
    950                 transpose_b=transpose_b,
--> 951                 name=name)
    952
    953 sparse_matmul = gen_math_ops._sparse_mat_mul

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in _mat_mul(a, b, transpose_a, transpose_b, name)
    684   """
    685   return _op_def_lib.apply_op("MatMul", a=a, b=b, transpose_a=transpose_a,
--> 686                               transpose_b=transpose_b, name=name)
    687
    688

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.pyc in apply_op(self, op_type_name, name, **keywords)
    653         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    654                          input_types=input_types, attrs=attr_protos,
--> 655                          op_def=op_def)
    656         outputs = op.outputs
    657         return _Restructure(ops.convert_n_to_tensor(outputs), output_structure)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2040           original_op=self._default_original_op, op_def=op_def)
   2041     if compute_shapes:
-> 2042       set_shapes_for_outputs(ret)
   2043     self._add_op(ret)
   2044     self._record_op_seen_by_control_dependencies(ret)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in set_shapes_for_outputs(op)
   1526     raise RuntimeError("No shape function registered for standard op: %s"
   1527                        % op.type)
-> 1528   shapes = shape_func(op)
   1529   if len(op.outputs) != len(shapes):
   1530     raise RuntimeError(

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/common_shapes.pyc in matmul_shape(op)
     87   inner_a = a_shape[0] if transpose_a else a_shape[1]
     88   inner_b = b_shape[1] if transpose_b else b_shape[0]
---> 89   inner_a.assert_is_compatible_with(inner_b)
     90   return [tensor_shape.TensorShape([output_rows, output_cols])]
     91

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.pyc in assert_is_compatible_with(self, other)
     92     if not self.is_compatible_with(other):
     93       raise ValueError("Dimensions %s and %s are not compatible"
---> 94                        % (self, other))
     95
     96   def merge_with(self, other):

ValueError: Dimensions Dimension(29) and Dimension(30) are not compatible
The whole code is on my GitHub:
https://github.com/FaguiCurtain/Kaggle-SF
The Udacity Assignment 3 file is working.
The original data is here:
https://www.kaggle.com/c/sf-crime/data
In Udacity, the data were images, and each image was a 28x28 matrix which was reformatted into flattened vectors of size 784.
In the Kaggle-SF file, I am feeding vectors of size 29, and the labels can take 39 different values.
Thanks for your help.
In debug mode you can check the shapes of your Tensors.
By the way, your error is in the valid_prediction assignment. To make it easier to debug and to read, it's better to define each step on a separate line: you are using four operations in one line. In debug mode (for example in PyCharm) you can then inspect each element and check what is causing the problem.
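A sketch of what splitting that line up could look like (the intermediate names are mine, not from the question). It also shows how to read static shapes without running the graph; the traceback's Dimension(29) vs Dimension(30) suggests valid_dataset has 29 columns while num_var is 30:

# Same computation as the one-liner, one op per line, so the traceback
# points at exactly the matmul that fails (names are illustrative):
valid_hidden_logits = tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU
valid_hidden = tf.nn.relu(valid_hidden_logits)
valid_logits = tf.matmul(valid_hidden, weights_layer1) + biases_layer1
valid_prediction = tf.nn.softmax(valid_logits)

# Static shapes are available at graph-construction time:
print(tf_valid_dataset.get_shape())  # e.g. (43899, 29)
print(weights_RELU.get_shape())      # (num_var, 1024) -- the first dim must be 29 to match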
To check the dimensions, you can directly print the tensors; when you print a tensor, you can view its dimensions. If you are a beginner, I suggest using the tf.layers package, which contains high-level wrappers for the various layers one would need to build a CNN in TensorFlow. By using it, you can avoid having to deal with low-level operations like matmul and adding the bias by hand, and activations can be applied directly by the layers without implementing them manually. A sketch of this approach follows below.
As far as debugging is concerned, since you have merged the operations it is hard to see what is going on under the hood unless we can use a proper debugger. If you are not using an IDE, I suggest pudb.
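A minimal sketch of the tf.layers approach suggested above (TF 1.x API; the sizes come from the question's data, the variable names are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 29))          # 29 input features
hidden = tf.layers.dense(x, 1024, activation=tf.nn.relu)  # weights, bias and activation handled internally
logits = tf.layers.dense(hidden, 39)                      # 39 classes

# Static shapes can be inspected directly, no session needed:
print(hidden.get_shape())  # (?, 1024)
print(logits.get_shape())  # (?, 39)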
