Suppress Interactive.jl / Reactive.jl signal info in Jupyter Notebook - julia

I'm using Jupyter Notebook in combination with Interactive.jl / Reactive.jl.
When I use @manipulate with a slider and afterwards rerun the block of code, I see these info/warning messages highlighted in red:
[Input{Int64}] 180
[Input{Int64}] 210
[Input{Int64}] 255
[Input{Int64}] 180
[Input{Int64}] 210
[Input{Int64}] 135
Any idea what causes these messages and how to suppress them?
The specific code I used was:
function complex_round(complex_number, digits_after_comma)
    return round(real(complex_number), digits_after_comma) +
           round(imag(complex_number), digits_after_comma)*im
end

@manipulate for θ in 0:15:360, ϕ in 0:15:360 # let us manipulate the value of θ
    psi = [cosd(θ/2), e^(2*pi*im*ϕ/360)*sind(θ/2)]
    rho = complex_round(kron(psi', psi), 3)
    psi = complex_round(psi, 3)
    # Some very annoying hack required to make the print
    # I will look into Compose/Cairo.
    text = Array(Any, (2,6))
    # text[row,column]
    text[1,1] = "psi"
    text[2,1] = "="
    text[1:2,2] = psi
    text[1,3] = "|"
    text[2,3] = "|"
    text[1,4] = "rho"
    text[2,4] = "="
    text[1:2,5:6] = rho
    text
end
Help is much appreciated!

Related

SCILAB ode: How to solve 2nd order ODE

I am coming from ode45 in MATLAB and trying to learn ode in Scilab. I ran into an error I am not sure how to address.
function der = f(t, x)
    wn3 = 2800 * %pi/30;      // rad/s
    m = 868.1/32.174;         // slugs
    k = m*wn3^2;              // lbf/ft
    w = 4100 * %pi/30;        // rad/s
    re_me = 4.09/32.174/12;   // slug-ft
    F0 = w^2*re_me;           // lbf
    der(1) = x(2);
    der(2) = -k*x(1) + F0*sin(w*t);
endfunction
x0 = [0; 0];
t = 0:0.1:5;
t0 = t(1);
x = ode(x0,t0,t,f);
plot(t,x(1,:));
I get this error message that I don't understand:
lsoda-- at t (=r1), mxstep (=i1) steps
needed before reaching tout
where i1 is : 500
where r1 is : 0.1027287737654D+01
Excessive work done on this call (perhaps wrong jacobian type).
at line 35 of executed file C:\Users\ndomenico\Documents\Scilab\high_frequency_vibrator_amplitude_3d.sce
ode: lsoda exit with state -1.
Thank you!
Your ODE is particularly stiff (k = 2319733). To me, it makes no sense to use such a large final time. The time step you took (0.1) is also very large with respect to the driving frequency. If you replace the line
t = 0:0.1:5
by
t = linspace(0,0.1,1001)
i.e. request approximations of your solution for t in [0, 0.1] over 1000 time steps, the solver will finish without the lsoda error. [The answer showed a plot of the resulting solution here.]
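For reference, here is a minimal end-to-end sketch with only the time vector changed as suggested (the model itself is copied unchanged from the question):

// Same model as in the question; only the requested output times differ
function der = f(t, x)
    wn3 = 2800 * %pi/30;      // natural frequency, rad/s
    m = 868.1/32.174;         // mass, slugs
    k = m*wn3^2;              // stiffness, lbf/ft
    w = 4100 * %pi/30;        // driving frequency, rad/s
    re_me = 4.09/32.174/12;   // rotating unbalance, slug-ft
    F0 = w^2*re_me;           // forcing amplitude, lbf
    der(1) = x(2);
    der(2) = -k*x(1) + F0*sin(w*t);
endfunction

x0 = [0; 0];
t = linspace(0, 0.1, 1001);   // 1000 steps on [0, 0.1] instead of 0:0.1:5
x = ode(x0, t(1), t, f);
plot(t, x(1,:));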

TensorFlow: dimension error. How to debug?

I'm a beginner with TensorFlow.
I've tried to adapt code that works well with some other data (notMNIST) to some new data, and I get a dimensionality error that I don't know how to deal with.
To debug, I'm trying to use the tf.shape method, but it doesn't give me the info I need...
def reformat(dataset, labels):
    # dataset = dataset.reshape((-1, num_var)).astype(np.float32)
    # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
type(train_dataset)
Training set (790184, 29) (790184, 39)
Validation set (43899, 29) (43899, 39)
Test set (43899, 29) (43899, 39)
# Adding regularization to the 1 hidden layer network
graph1 = tf.Graph()
batch_size = 128
num_steps = 3001

import datetime
startTime = datetime.datetime.now()

def define_and_run_batch(beta):
    num_RELU = 1024
    with graph1.as_default():
        # Input data. For the training data, we use a placeholder that will be fed
        # at run time with a training minibatch.
        tf_train_dataset = tf.placeholder(tf.float32,
                                          shape=(batch_size, num_var))
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)

        # Variables.
        weights_RELU = tf.Variable(
            tf.truncated_normal([num_var, num_RELU]))
        print(tf.shape(weights_RELU))
        biases_RELU = tf.Variable(tf.zeros([num_RELU]))
        weights_layer1 = tf.Variable(
            tf.truncated_normal([num_RELU, num_labels]))
        biases_layer1 = tf.Variable(tf.zeros([num_labels]))

        # Training computation.
        logits_RELU = tf.matmul(tf_train_dataset, weights_RELU) + biases_RELU
        RELU_vec = tf.nn.relu(logits_RELU)
        logits_layer = tf.matmul(RELU_vec, weights_layer1) + biases_layer1
        # loss = tf.reduce_mean(
        #     tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels))
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels, name="cross_entropy")
        l2reg = tf.reduce_sum(tf.square(weights_RELU)) + tf.reduce_sum(tf.square(weights_layer1))
        # beta = 0.005
        loss = tf.reduce_mean(cross_entropy + beta*l2reg)

        # Optimizer.
        optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)

        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits_layer)
        print("ok")
        print(tf.shape(weights_RELU))
        valid_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
        test_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_test_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)

    with tf.Session(graph=graph1) as session:
        tf.initialize_all_variables().run()
        print("Initialized")
        for step in range(num_steps):
            # Pick an offset within the training data, which has been randomized.
            # Note: we could use better randomization across epochs.
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            # Generate a minibatch.
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            # Prepare a dictionary telling the session where to feed the minibatch.
            # The key of the dictionary is the placeholder node of the graph to be fed,
            # and the value is the numpy array to feed to it.
            feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
            _, l, predictions, logits = session.run(
                [optimizer, loss, train_prediction, logits_RELU], feed_dict=feed_dict)
            if (step % 500 == 0):
                print("Minibatch loss at step %d: %f" % (step, l))
                print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
                print("Validation accuracy: %.1f%%" % accuracy(
                    valid_prediction.eval(), valid_labels))
        test_acc = accuracy(test_prediction.eval(), test_labels)
        print("Test accuracy: %.1f%%" % test_acc)
        print('loss=%s' % l)
        x = datetime.datetime.now() - startTime
        print(x)
        return (test_acc, round(l, 5))

define_and_run_batch(0.005)
Tensor("Shape:0", shape=(2,), dtype=int32) ok Tensor("Shape_1:0",
shape=(2,), dtype=int32)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
     94     return(test_acc, round(l, 5))
     95
---> 96 define_and_run_batch(0.005)

 in define_and_run_batch(beta)
     54         print(tf.shape(weights_RELU))
     55         valid_prediction = tf.nn.softmax(
---> 56             tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
     57
     58

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc in matmul(a, b, transpose_a, transpose_b, a_is_sparse, b_is_sparse, name)
    949                 transpose_a=transpose_a,
    950                 transpose_b=transpose_b,
--> 951                 name=name)
    952
    953 sparse_matmul = gen_math_ops._sparse_mat_mul

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in _mat_mul(a, b, transpose_a, transpose_b, name)
    684   """
    685   return _op_def_lib.apply_op("MatMul", a=a, b=b, transpose_a=transpose_a,
--> 686                               transpose_b=transpose_b, name=name)
    687
    688

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.pyc in apply_op(self, op_type_name, name, **keywords)
    653         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    654                          input_types=input_types, attrs=attr_protos,
--> 655                          op_def=op_def)
    656         outputs = op.outputs
    657         return _Restructure(ops.convert_n_to_tensor(outputs), output_structure)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2040                     original_op=self._default_original_op, op_def=op_def)
   2041     if compute_shapes:
-> 2042       set_shapes_for_outputs(ret)
   2043     self._add_op(ret)
   2044     self._record_op_seen_by_control_dependencies(ret)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in set_shapes_for_outputs(op)
   1526     raise RuntimeError("No shape function registered for standard op: %s"
   1527                        % op.type)
-> 1528   shapes = shape_func(op)
   1529   if len(op.outputs) != len(shapes):
   1530     raise RuntimeError(

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/common_shapes.pyc in matmul_shape(op)
     87   inner_a = a_shape[0] if transpose_a else a_shape[1]
     88   inner_b = b_shape[1] if transpose_b else b_shape[0]
---> 89   inner_a.assert_is_compatible_with(inner_b)
     90   return [tensor_shape.TensorShape([output_rows, output_cols])]
     91

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.pyc in assert_is_compatible_with(self, other)
     92     if not self.is_compatible_with(other):
     93       raise ValueError("Dimensions %s and %s are not compatible"
---> 94                        % (self, other))
     95
     96   def merge_with(self, other):

ValueError: Dimensions Dimension(29) and Dimension(30) are not compatible
The whole code is on my GitHub:
https://github.com/FaguiCurtain/Kaggle-SF
The Udacity Assignment 3 file is working.
The original data is here:
https://www.kaggle.com/c/sf-crime/data
In Udacity, the data were images, and each 28x28 image was reformatted into a flattened vector of size 784.
In the Kaggle-SF file, I am feeding vectors of size 29, and the labels can take 39 different values.
Thanks for your help!
In debug mode you can check the shapes of your tensors.
By the way, your error is in the valid_prediction assignment. To make the code easier to debug and to read, it is better to define each step on a separate line; you are using four operations in one line. In debug mode (for example in PyCharm) you can then inspect each element and check what is causing the problem.
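For illustration, here is one way that one-liner could be split up (a sketch only; the intermediate variable names are mine, not from the question). On graph tensors, get_shape() reports the static shape without running a session:

valid_logits_RELU = tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU
valid_RELU_vec = tf.nn.relu(valid_logits_RELU)
valid_logits = tf.matmul(valid_RELU_vec, weights_layer1) + biases_layer1
valid_prediction = tf.nn.softmax(valid_logits)

# Now each intermediate tensor can be inspected on its own:
print(valid_logits_RELU.get_shape())  # static shape, known at graph-construction time
print(valid_logits.get_shape())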
To check the dimensions, you can directly print the tensors; when you print a tensor, you can view its dimensions. If you are a beginner, I suggest trying the tf.layers package, which contains high-level wrappers for the various layers one needs to build a CNN in TensorFlow. By using it, you can avoid dealing with low-level operations like matmul and adding the bias by hand. The activations can also be applied directly by the layers without having to implement them manually.
As far as debugging is concerned, since the operations are merged in your code, it is hard to see what is going on under the hood without a proper debugger. If you are not using an IDE, I suggest pudb.
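As a rough sketch of what this suggestion looks like (assuming a TensorFlow 1.x installation where tf.layers is available), the question's hidden layer could be written as:

# One hidden ReLU layer plus a linear output layer, via tf.layers;
# sizes mirror the question's network (1024 hidden units, num_labels outputs)
hidden = tf.layers.dense(tf_train_dataset, units=1024,
                         activation=tf.nn.relu, name="hidden")
logits = tf.layers.dense(hidden, units=num_labels, name="output")
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,
                                            logits=logits))

The matmul, the bias addition and the ReLU activation are all handled inside tf.layers.dense.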

BBC Basic Cipher Help Needed

I am doing a school project in which I need to make a "sort of" Vigenère cipher where the user inputs both the keyword and the plaintext. However, the Vigenère cipher assumes a=0, whereas I am to assume a=1, and I have changed my program accordingly. I am also required to make my cipher work for both lower and upper case. How could I make it work for lower case too? It may be a stupid question, but I'm very confused at this point and I'm new to programming. Thanks.
REM Variables
plaintext$ = ""
PRINT "Enter the text you would like to encrypt"
INPUT plaintext$
keyword$ = ""
PRINT "Enter the keyword you wish to use"
INPUT keyword$
encrypted$ = FNencrypt(plaintext$, keyword$)
REM PRINTING OUTPUTS
PRINT "Key = " keyword$
PRINT "Plaintext = " plaintext$
PRINT "Encrypted = " encrypted$
PRINT "Decrypted = " FNdecrypt(encrypted$, keyword$)
END

DEF FNencrypt(plain$, keyword$)
LOCAL i%, offset%, Ascii%, output$
FOR i% = 1 TO LEN(plain$)
  Ascii% = ASCMID$(plain$, i%)
  IF Ascii% >= 65 IF Ascii% <= 90 THEN
    output$ += CHR$((66 + (Ascii% + ASCMID$(keyword$, offset%+1)) MOD 26))
  ENDIF
  offset% = (offset% + 1) MOD LEN(keyword$)
NEXT
= output$

DEF FNdecrypt(encrypted$, keyword$)
LOCAL i%, offset%, n%, o$
FOR i% = 1 TO LEN(encrypted$)
  n% = ASCMID$(encrypted$, i%)
  o$ += CHR$(64 + (n% + 26 - ASCMID$(keyword$, offset%+1)) MOD 26)
  offset% = (offset% + 1) MOD LEN(keyword$)
NEXT
= o$
You can always convert between upper and lower case, and the STRINGLIB library contains functions for doing this.
First install STRINGLIB at the top of your program:
INSTALL @lib$+"STRINGLIB"
then convert strings using:
plaintext$ = FN_lower(plaintext$)
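Note that FNencrypt as written only encrypts codes 65-90 (A-Z), so if you normalise the input with FN_lower you would also need to change that range test (STRINGLIB provides FN_upper as well, for folding the other way). Alternatively, if lower-case letters should stay lower case in the output rather than everything being folded to one case, an extra branch can be added inside the FOR loop of FNencrypt. This is an untested sketch that simply mirrors the existing upper-case arithmetic (codes 97-122 are a-z):

IF Ascii% >= 97 IF Ascii% <= 122 THEN
  REM Reuse the A-Z arithmetic on Ascii%-32, then shift the result
  REM back into the lower-case range with +32
  output$ += CHR$((66 + (Ascii% - 32 + ASCMID$(keyword$, offset%+1)) MOD 26) + 32)
ENDIF

FNdecrypt would need the matching branch.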

How to print a complex number without percent sign in Scilab?

I tried this
a = 1+3*%i;
disp("a = "+string(a))
I got a = 1+%i*3, but what I want is a = 1. + 3.i
So is there any method in Scilab to print a complex number without the percent sign?
As in MATLAB, you can format the output string by including the real and imaginary parts separately:
mprintf('%g + %gi\n', real(a) , imag(a))
However, that looks pretty ugly when the imaginary part is negative. I suggest writing a formatting function:
function s = complexstring(a)
    if imag(a) >= 0 then
        s = sprintf('%g+%gi', real(a), imag(a))
    else
        s = sprintf('%g%gi', real(a), imag(a))
    end
endfunction
Examples:
disp('a = '+complexstring(1+3*%i))
disp('b = '+complexstring(1-3*%i))
Output:
a = 1+3i
b = 1-3i

How to display a polynomial function using printf in Scilab? (Without using disp)

Here is my code. It throws an error. I want to printf the polynomial.
clc
clear
printf("Example 4.4 | Page number 103 \n\n");
//find Cv and Cp
//Given data
t = poly(0,'t'); //°C //Temperature in °C
u = 196 + .718*t; //KJ/kg //specific internal energy
pv = 287*(t+273); //Nm/kg //p is pressure and v = specific volume
//Solution
Cv = derivat(u);
printf("Cv = %.3f KJ/kgK\n",Cv);
The return type of derivat is another polynomial. You can check with:
disp(typeof(Cv))
The Scilab types are listed in the documentation for typeof.
You can get the sole coefficient out by using coeff:
printf("Cv = %.3f KJ/kgK\n",coeff(Cv));
