I've been working on an RSA encryption script in Lua, with the assistance of BigNumbers (http://oss.digirati.com.br/luabignum/bn/index.htm), and I pretty much have working code. I'm stuck, however, because in a small percentage of cases the encrypted message is not decrypted back to the original message, and I cannot figure out why. Please note that this deals with very large numbers (1.08e107, for example). The entire code I've written is below, but here's a breakdown of what it should do.
print(rsa_getkey())
p: 83
q: 23
n: 1909
e: 19
d: 1899
phi: 1804
The above sets the key values, in which the public key is represented by [n, e] and the private key is represented by [n, d]. This is accomplished with the following code:
function rsa_getkey()
    rsa_e = 0
    local primes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97}
    math.randomseed(os.time())
    rsa_p = primes[math.random(5,#primes)]
    rsa_q = rsa_p
    while rsa_q == rsa_p do
        rsa_q = primes[math.random(5,#primes)]
    end
    rsa_n = rsa_p*rsa_q
    rsa_phi = (rsa_p-1)*(rsa_q-1)
    while rsa_e == 0 do
        local prime = primes[math.random(1,10)]
        if rsa_phi%prime > 0 then
            rsa_e = prime
        end
    end
    for i = 2, rsa_phi/2 do
        if ((i*rsa_phi)+1)%rsa_e == 0 then
            rsa_d = ((i*rsa_phi)+1)/rsa_e
            break
        end
    end
    return "p: ",rsa_p,"\nq: ",rsa_q,"\nn: ",rsa_n,"\ne: ",rsa_e,"\nd: ",rsa_d,"\nphi: ",rsa_phi,"\n"
end
After the keys have been determined, you can encrypt the message. In order to convert plain text ("Hello world") to a numeric system, I've created a function that isn't 100% complete, but works in the most basic form:
print(rsa_plaintext("Hello_world"))
1740474750625850534739
The following function is how that message is determined:
function rsa_plaintext(x)
    local alphanum = {A=10, B=11, C=12, D=13, E=14, F=15, G=16, H=17, I=18, J=19, K=20, L=21, M=22, N=23, O=24, P=25, Q=26, R=27, S=28, T=29, U=30, V=31, W=32, X=33, Y=34, Z=35, a=36, b=37, c=38, d=39, e=40, f=41, g=42, h=43, i=44, j=45, k=46, l=47, m=48, n=49, o=50, p=51, q=52, r=53, s=54, t=55, u=56, v=57, w=58, x=59, y=60, z=61, _=62}
    rsa_cipher = ""
    for i = 1, #x do
        local s = x:sub(i,i)
        rsa_cipher = rsa_cipher .. alphanum[s]
    end
    return rsa_cipher
end
Lastly, to make the message more manageable, I have to break it down into segments. To save time and code, I've combined the actual encryption with the conversion from plaintext to numeric format, and I've added decryption for debugging purposes. The code also appends 0s to the message so that each grouping has exactly 4 digits. This is where my problem comes in: the Msg and Decrypted values should be identical.
print(rsa_group("Hello world"))
Msg: 1740
Encrypted: 1560
Decrypted: 1740
Msg: 4747
Encrypted: 795
Decrypted: 929
Msg: 5062
Encrypted: 1659
Decrypted: 1244
Msg: 5850
Encrypted: 441
Decrypted: 123
Msg: 5347
Encrypted: 429
Decrypted: 1529
Msg: 3900
Encrypted: 1244
Decrypted: 82
This is done with the following two functions:
function rsa_group(str)
    local cipher = {}
    local str = rsa_plaintext(str:gsub(" ","_"))
    local len = #str
    local fillin = ""
    if len%4 ~= 0 then
        fillin = string.rep("0",(4-len%4))
    end
    str = str..fillin
    for i = 1, #str, 4 do
        local s,e = i, i+3
        local part = str:sub(s,e)
        print(rsa_encrypt(part))
    end
end
function rsa_encrypt(msg)
    bnrsa_e = BigNum.new(rsa_e)
    bnrsa_n = BigNum.new(rsa_n)
    bnmsg = BigNum.new(msg)
    result = 0
    quo = BigNum.new()
    rsa_c = BigNum.new()
    result = BigNum.pow(bnmsg, bnrsa_e)
    BigNum.div(result, bnrsa_n, quo, rsa_c)

    bnrsa_c = BigNum.new(rsa_c)
    bnrsa_d = BigNum.new(rsa_d)
    result = 0
    quo = BigNum.new()
    rsa_C = BigNum.new()
    result = BigNum.pow(bnrsa_c, bnrsa_d)
    BigNum.div(result, bnrsa_n, quo, rsa_C)
    return "Msg:",msg,"\nEncrypted:",rsa_c,"\nDecrypted:",rsa_C,"\n"
end
Now, I know this is a long question, and there are many components to the problem itself. I'm just at a loss how to figure out where my problem lies. Is there something I'm missing? A fresh set of eyes might be my solution.
Upon closer examination it looks like the message M has to be less than the product n of your two primes. In your above test cases, all the messages except the first failed to decrypt properly because they're greater than n = 1909.
For example, consider where M just exceeded n = 1909:
Msg: 1910
Encrypted: 1
Decrypted: 1
Msg: 1911
Encrypted: 1222
Decrypted: 2
Msg: 1912
Encrypted: 1179
Decrypted: 3
In a real-world example, n is of course significantly larger and so this problem is much less likely to arise.
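A quick way to see the requirement in isolation, sketched here in Python rather than Lua because modular exponentiation is built in, using the key printed above (n = 1909, e = 19, d = 1899):

# RSA round trip with the key from the question: n = 1909, e = 19, d = 1899.
n, e, d = 1909, 19, 1899

def round_trip(m):
    c = pow(m, e, n)       # encrypt: c = m^e mod n
    return pow(c, d, n)    # decrypt: m' = c^d mod n

print(round_trip(1740))    # 1740 -- block < n, so it decrypts correctly
print(round_trip(4747))    # 929  -- block >= n only survives modulo n (4747 % 1909 = 929)

On the Lua side this means either keeping each numeric block below n (for this n = 1909, at most 3 digits per block) or generating keys from much larger primes so that 4-digit blocks always fit.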
Related
I am training a model that takes tokenized strings, which are passed through an embedding layer and then an LSTM. However, there seems to be an error in the input, as it does not pass through the embedding layer.
class DrugModel(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_dim, drug_embed_dim,
                 lstm_layer, lstm_dropout, bi_lstm, linear_dropout, char_vocab_size,
                 char_embed_dim, char_dropout, dist_fn, learning_rate,
                 binary, is_mlp, weight_decay, is_graph, g_layer,
                 g_hidden_dim, g_out_dim, g_dropout):
        super(DrugModel, self).__init__()

        # Save model configs
        self.drug_embed_dim = drug_embed_dim
        self.lstm_layer = lstm_layer
        self.char_dropout = char_dropout
        self.dist_fn = dist_fn
        self.binary = binary
        self.is_mlp = is_mlp
        self.is_graph = is_graph
        self.g_layer = g_layer
        self.g_dropout = g_dropout
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        # For one-hot encoded SMILES
        if not is_mlp:
            self.char_embed = nn.Embedding(char_vocab_size, char_embed_dim,
                                           padding_idx=0)
            self.lstm = nn.LSTM(char_embed_dim, drug_embed_dim, lstm_layer,
                                bidirectional=False,
                                batch_first=True, dropout=lstm_dropout)

        # Distance function
        self.dist_fc = nn.Linear(drug_embed_dim, 1)

        if binary:
            # Binary Cross Entropy
            self.criterion = lambda x, y: y*torch.log(x) + (1-y)*torch.log(1-x)

    def init_lstm_h(self, batch_size):
        return (Variable(torch.zeros(
                    self.lstm_layer*1, batch_size, self.drug_embed_dim)).cuda(),
                Variable(torch.zeros(
                    self.lstm_layer*1, batch_size, self.drug_embed_dim)).cuda())

    # Set Siamese network as basic LSTM
    def siamese_sequence(self, inputs, length):
        # Character embedding
        inputs = inputs.long()
        inputs = inputs.cuda()
        self.char_embed = self.char_embed(inputs.to(self.device))
        c_embed = self.char_embed(inputs)
        # c_embed = F.dropout(c_embed, self.char_dropout)
        maxlen = inputs.size(1)

        if not self.training:
            # Sort c_embed
            _, sort_idx = torch.sort(length, dim=0, descending=True)
            _, unsort_idx = torch.sort(sort_idx, dim=0)
            maxlen = torch.max(length)

            # Pack padded sequence
            c_embed = c_embed.index_select(0, Variable(sort_idx).cuda())
            sorted_len = length.index_select(0, sort_idx).tolist()
            c_packed = pack_padded_sequence(c_embed, sorted_len, batch_first=True)
        else:
            c_packed = c_embed

        # Run LSTM
        init_lstm_h = self.init_lstm_h(inputs.size(0))
        lstm_out, states = self.lstm(c_packed, init_lstm_h)
        hidden = torch.transpose(states[0], 0, 1).contiguous().view(
            -1, 1 * self.drug_embed_dim)

        if not self.training:
            # Unsort hidden states
            outputs = hidden.index_select(0, Variable(unsort_idx).cuda())
        else:
            outputs = hidden

        return outputs

    def forward(self, key1, key2, targets, key1_len, key2_len, status, predict=False):
        if not self.is_mlp:
            output1 = self.siamese_sequence(key1, key1_len)
            output2 = self.siamese_sequence(key2, key2_len)
After instantiating the class I get the following error when passing the input through the embedding layer:
<ipython-input-128-432fcc7a1e39> in forward(self, key1, key2, targets, key1_len, key2_len, status, predict)
129 def forward(self, key1, key2, targets, key1_len, key2_len, status, predict = False):
130 if not self.is_mlp:
--> 131 output1 = self.siamese_sequence(key1, key1_len)
132 output2 = self.siamese_sequence(key2, key2_len)
133 set_trace()
<ipython-input-128-432fcc7a1e39> in siamese_sequence(self, inputs, length)
74 inputs = inputs.cuda()
75
---> 76 self.char_embed = self.char_embed(inputs.to(self.device))
77 set_trace()
78 c_embed = self.char_embed(inputs)
~/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/miniconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/miniconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
despite the fact that the input (e.g. key1) has already been passed to cuda and has been transformed into long format:
tensor([[25, 33, 30, ..., 0, 0, 0],
[25, 7, 7, ..., 0, 0, 0],
[25, 7, 30, ..., 0, 0, 0],
...,
[25, 7, 33, ..., 0, 0, 0],
[25, 33, 41, ..., 0, 0, 0],
[25, 33, 41, ..., 0, 0, 0]], device='cuda:0')
Setting model.device to cuda does not change your inner module devices, so self.lstm, self.char_embed, and self.dist_fc are all still on the CPU. The correct way of doing it is by using DrugModel().to(device).
In general, it's better not to feed a device to your model and to write it in a device-agnostic way instead. To make your init_lstm_h function device-agnostic you can use something like this:
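For example (a sketch adapted from the question's init_lstm_h, using plain tensors instead of Variable):

def init_lstm_h(self, batch_size):
    # Derive the device from the module's own parameters instead of hard-coding .cuda(),
    # so the hidden state always ends up wherever the model currently lives.
    device = next(self.parameters()).device
    shape = (self.lstm_layer * 1, batch_size, self.drug_embed_dim)
    return (torch.zeros(shape, device=device),
            torch.zeros(shape, device=device))

After moving the model once with model = DrugModel(...).to(device), every submodule and this hidden state share the same device.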
In order to make the nn module work with CUDA:
First step: set the initial model to CUDA:
device = torch.device('cuda:1')
model = esm.ProteinBertModel(
    args,
    alphabet,
).to(device)
Second step: set the loaded model to CUDA as well:
bert_model = bert_model.to(device)
I am a noob trying to build a network to classify 2 sequences of floats to one of 16450 different integers. I have 70408 samples and I have padded each sample to have 1400 values, so one sample has 2 column vectors, e.g. [104.243, 120.12, ...] and [125.25, 14.556, ...]. Both of my input arrays are of size (70408, 1400). I am trying to use Keras' functional API but can't seem to figure out the right input shape. Any help would be appreciated.
samples = 70408
mass_size = 1400
intensity_size = 1400
output_size = 16450
mass_input = Input(shape=(samples,mass_size), dtype='float32')
mass_net = layers.Conv1D(32,5,activation='relu')(mass_input)
mass_net = layers.AveragePooling1D(3)(mass_net)
mass_net = layers.Conv1D(16,5,activation='relu')(mass_net)
mass_net = layers.GlobalAveragePooling1D()(mass_net)
intensity_input = Input(shape=(samples,intensity_size), dtype='float32')
intensity_net = layers.Conv1D(32,5,activation='relu')(intensity_input)
intensity_net = layers.AveragePooling1D(3)(intensity_net)
intensity_net = layers.Conv1D(16,5,activation='relu')(intensity_net)
intensity_net = layers.GlobalAveragePooling1D()(intensity_net)
concatenated = layers.concatenate([mass_net,intensity_net],axis=-1)
output = layers.Dense(output_size,activation='softmax')(concatenated)
print(mass_data.shape, intensity_data.shape)
model = Model([mass_data,intensity_data],output)
model.compile(optimizer='rmsprop',loss='categorical_crossentropy',metrics=['acc'])
model.fit([mass_data,intensity_data],y_train,epochs=10,batch_size=128)
The error I keep getting is:
TypeError Traceback (most recent call last)
<ipython-input-18-aab93c439dd0> in <module>()
28
29 print(mass_data.shape, intensity_data.shape)
---> 30 model = Model([mass_data,intensity_data],output)
31 model.compile(optimizer='rmsprop',loss='categorical_crossentropy',metrics=['acc'])
32
~\Anaconda3\envs\deeplearning\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your `' + object_name +
90 '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper
~\Anaconda3\envs\deeplearning\lib\site-packages\keras\engine\topology.py in __init__(self, inputs, outputs, name)
1528
1529 # Check for redundancy in inputs.
-> 1530 if len(set(self.inputs)) != len(self.inputs):
1531 raise ValueError('The list of inputs passed to the model '
1532 'is redundant. '
TypeError: unhashable type: 'numpy.ndarray'
The problem seems to be here:
model = Model([mass_data,intensity_data],output)
You should use the input tensors you created, not numpy data:
model = Model([mass_input, intensity_input],output)
Another problem, related to my earlier comment, is the input shape.
Since you now have your data as (samples, length, features), you need input_shape=(length, features), that is, the shape without the samples axis.
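Putting both fixes together, a rough sketch (not tested on the real data; the random arrays below are just stand-ins for mass_data, intensity_data and y_train, and the real (70408, 1400) arrays would need a reshape to (70408, 1400, 1) to get the feature axis):

import numpy as np
from keras import layers
from keras.layers import Input
from keras.models import Model
from keras.utils import to_categorical

length, output_size = 1400, 16450
samples = 256   # small stand-in for the real 70408 samples

# Dummy stand-ins for the real arrays; note the explicit feature axis for Conv1D.
mass_data = np.random.rand(samples, length, 1).astype('float32')
intensity_data = np.random.rand(samples, length, 1).astype('float32')
y_train = to_categorical(np.random.randint(0, output_size, samples), num_classes=output_size)

mass_input = Input(shape=(length, 1), dtype='float32')       # (length, features), no samples axis
mass_net = layers.Conv1D(32, 5, activation='relu')(mass_input)
mass_net = layers.AveragePooling1D(3)(mass_net)
mass_net = layers.Conv1D(16, 5, activation='relu')(mass_net)
mass_net = layers.GlobalAveragePooling1D()(mass_net)

intensity_input = Input(shape=(length, 1), dtype='float32')
intensity_net = layers.Conv1D(32, 5, activation='relu')(intensity_input)
intensity_net = layers.AveragePooling1D(3)(intensity_net)
intensity_net = layers.Conv1D(16, 5, activation='relu')(intensity_net)
intensity_net = layers.GlobalAveragePooling1D()(intensity_net)

concatenated = layers.concatenate([mass_net, intensity_net], axis=-1)
output = layers.Dense(output_size, activation='softmax')(concatenated)

model = Model([mass_input, intensity_input], output)   # input tensors, not numpy arrays
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
model.fit([mass_data, intensity_data], y_train, epochs=1, batch_size=128)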
My coin change dynamic programming implementation is failing for some of the test cases, and I am having a hard time figuring out why:
Problem Statement: Given an amount and a list of coins, find the minimum number of coins required to make that amount.
Ex:
Target Amount: 63
Coin List: [1, 5, 10, 21, 25]
Output: [21, 21, 21]
def coin_change(change_list, amount, tried):
    if amount <= 0:
        return []
    if amount in change_list:
        return [amount]
    if amount in tried:
        return tried[amount]

    coin_count = []
    for change in change_list:
        if change < amount:
            changes = coin_change(change_list, amount-change, tried)
            changes.append(change)
            coin_count.append(changes)

    min_changes = coin_count[0][:]
    for x in coin_count[1:]:
        if len(min_changes) >= len(x):
            min_changes = x[:]

    tried[amount] = min_changes[:]
    return min_changes


def main():
    for amount in range(64):
        changes = coin_change([1, 5, 10, 21, 25], amount, {})
        if sum(changes) != amount:
            print "WRONG: Change for %d is: %r" % (amount, changes)
        else:
            # print "Change for %d is: %r" % (amount, changes)
            pass


if __name__ == "__main__":
    main()
Trinket: https://trinket.io/python/43fcff035e
You're corrupting the variable changes by appending to it inside the loop. Try this:
Replace these two lines:
changes.append(change)
coin_count.append(changes)
With:
_changes = changes[:] + [change]
coin_count.append(_changes)
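For reference, a sketch of coin_change with that fix folded in (Python 3 print syntax; the final selection loop is also collapsed into min(..., key=len), which likewise picks a minimum-length list). The point is that the lists cached in tried are never mutated afterwards:

def coin_change(change_list, amount, tried):
    if amount <= 0:
        return []
    if amount in change_list:
        return [amount]
    if amount in tried:
        return tried[amount]

    coin_count = []
    for change in change_list:
        if change < amount:
            changes = coin_change(change_list, amount - change, tried)
            coin_count.append(changes + [change])   # new list; `changes` stays untouched

    min_changes = min(coin_count, key=len)
    tried[amount] = min_changes
    return min_changes

for amount in range(64):
    assert sum(coin_change([1, 5, 10, 21, 25], amount, {})) == amount

print(coin_change([1, 5, 10, 21, 25], 63, {}))   # [21, 21, 21]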
I'm a beginner with TF.
I've tried to adapt code which works well with some other data (notMNIST) to some new data, and I have a dimensionality error that I don't know how to deal with.
To debug, I'm trying to use the tf.shape method, but it doesn't give me the info I need...
def reformat(dataset, labels):
    # dataset = dataset.reshape((-1, num_var)).astype(np.float32)
    # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
    labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
    return dataset, labels

train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
type(train_dataset)
Training set (790184, 29) (790184, 39)
Validation set (43899, 29) (43899, 39)
Test set (43899, 29) (43899, 39)
# Adding regularization to the 1 hidden layer network
graph1 = tf.Graph()
batch_size = 128
num_steps = 3001

import datetime
startTime = datetime.datetime.now()

def define_and_run_batch(beta):
    num_RELU = 1024
    with graph1.as_default():
        # Input data. For the training data, we use a placeholder that will be fed
        # at run time with a training minibatch.
        tf_train_dataset = tf.placeholder(tf.float32,
                                          shape=(batch_size, num_var))
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)

        # Variables.
        weights_RELU = tf.Variable(
            tf.truncated_normal([num_var, num_RELU]))
        print(tf.shape(weights_RELU))
        biases_RELU = tf.Variable(tf.zeros([num_RELU]))
        weights_layer1 = tf.Variable(
            tf.truncated_normal([num_RELU, num_labels]))
        biases_layer1 = tf.Variable(tf.zeros([num_labels]))

        # Training computation.
        logits_RELU = tf.matmul(tf_train_dataset, weights_RELU) + biases_RELU
        RELU_vec = tf.nn.relu(logits_RELU)
        logits_layer = tf.matmul(RELU_vec, weights_layer1) + biases_layer1
        # loss = tf.reduce_mean(
        #     tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels))
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels, name="cross_entropy")
        l2reg = tf.reduce_sum(tf.square(weights_RELU)) + tf.reduce_sum(tf.square(weights_layer1))
        # beta = 0.005
        loss = tf.reduce_mean(cross_entropy + beta*l2reg)

        # Optimizer.
        optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)

        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits_layer)
        print("ok")
        print(tf.shape(weights_RELU))
        valid_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
        test_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_test_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)

    with tf.Session(graph=graph1) as session:
        tf.initialize_all_variables().run()
        print("Initialized")
        for step in range(num_steps):
            # Pick an offset within the training data, which has been randomized.
            # Note: we could use better randomization across epochs.
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            # Generate a minibatch.
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            # Prepare a dictionary telling the session where to feed the minibatch.
            # The key of the dictionary is the placeholder node of the graph to be fed,
            # and the value is the numpy array to feed to it.
            feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
            _, l, predictions, logits = session.run(
                [optimizer, loss, train_prediction, logits_RELU], feed_dict=feed_dict)
            if (step % 500 == 0):
                print("Minibatch loss at step %d: %f" % (step, l))
                print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
                print("Validation accuracy: %.1f%%" % accuracy(
                    valid_prediction.eval(), valid_labels))
        test_acc = accuracy(test_prediction.eval(), test_labels)
        print("Test accuracy: %.1f%%" % test_acc)
        print('loss=%s' % l)
        x = datetime.datetime.now() - startTime
        print(x)
        return (test_acc, round(l, 5))

define_and_run_batch(0.005)
Tensor("Shape:0", shape=(2,), dtype=int32) ok Tensor("Shape_1:0",
shape=(2,), dtype=int32)
--------------------------------------------------------------------------- ValueError Traceback (most recent call
last) in ()
94 return(test_acc,round(l,5))
95
---> 96 define_and_run_batch(0.005)
in define_and_run_batch(beta)
54 print(tf.shape(weights_RELU) )
55 valid_prediction = tf.nn.softmax(
---> 56 tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)),weights_layer1)+biases_layer1)
57
58
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc
in matmul(a, b, transpose_a, transpose_b, a_is_sparse, b_is_sparse,
name)
949 transpose_a=transpose_a,
950 transpose_b=transpose_b,
--> 951 name=name)
952
953 sparse_matmul = gen_math_ops._sparse_mat_mul
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in _mat_mul(a, b, transpose_a, transpose_b, name)
684 """
685 return _op_def_lib.apply_op("MatMul", a=a, b=b, transpose_a=transpose_a,
--> 686 transpose_b=transpose_b, name=name)
687
688
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.pyc
in apply_op(self, op_type_name, name, **keywords)
653 op = g.create_op(op_type_name, inputs, output_types, name=scope,
654 input_types=input_types, attrs=attr_protos,
--> 655 op_def=op_def)
656 outputs = op.outputs
657 return _Restructure(ops.convert_n_to_tensor(outputs), output_structure)
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc
in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2040                     original_op=self._default_original_op, op_def=op_def)
   2041     if compute_shapes:
-> 2042       set_shapes_for_outputs(ret)
   2043     self._add_op(ret)
   2044     self._record_op_seen_by_control_dependencies(ret)
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc
in set_shapes_for_outputs(op)
   1526       raise RuntimeError("No shape function registered for standard op: %s"
   1527                          % op.type)
-> 1528   shapes = shape_func(op)
   1529   if len(op.outputs) != len(shapes):
   1530     raise RuntimeError(
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/common_shapes.pyc
in matmul_shape(op)
87 inner_a = a_shape[0] if transpose_a else a_shape[1]
88 inner_b = b_shape[1] if transpose_b else b_shape[0]
---> 89 inner_a.assert_is_compatible_with(inner_b)
90 return [tensor_shape.TensorShape([output_rows, output_cols])]
91
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.pyc
in assert_is_compatible_with(self, other)
92 if not self.is_compatible_with(other):
93 raise ValueError("Dimensions %s and %s are not compatible"
---> 94 % (self, other))
95
96 def merge_with(self, other):
ValueError: Dimensions Dimension(29) and Dimension(30) are not compatible
The whole code is on my GitHub:
https://github.com/FaguiCurtain/Kaggle-SF
The Udacity Assignment 3 file is working.
The original data is here:
https://www.kaggle.com/c/sf-crime/data
In Udacity, the data were images, and each image was a 28x28 matrix which was reformatted into flattened vectors of size 784.
In the Kaggle-SF file, I am feeding vectors of size 29, and labels can take 39 different values.
Thanks for your help.
In debug mode you can check the shapes of your Tensors.
By the way, your error is in the valid_prediction assignment. To make it easier to debug and read, it's better to define each step on a separate line; you are using 4 operations in 1 line. In debug mode (for example in PyCharm) you can then inspect each element and check what is causing the problem.
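For example, the valid_prediction line could be split into named intermediate tensors (valid_hidden, valid_relu, and valid_logits are names made up here for illustration):

# The original one-liner, broken into named steps so each intermediate tensor
# (and its shape) can be inspected on its own.
valid_hidden = tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU
valid_relu = tf.nn.relu(valid_hidden)
valid_logits = tf.matmul(valid_relu, weights_layer1) + biases_layer1
valid_prediction = tf.nn.softmax(valid_logits)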
To check the dimensions, you can directly print the tensors; when you print a tensor, you can view its dimensions. If you are a beginner, I suggest trying the 'tf.layers' package, which contains high-level wrappers for the various layers one would need to build a CNN in TensorFlow. By using this, you can avoid having to deal with low-level operations like 'matmul' and adding biases, for example. The activations can also be applied directly by the layers without having to implement them manually.
As far as debugging is concerned, since you have merged the operations, it's hard to see what is going on under the hood unless we can use a proper debugger. If you are not using an IDE, I suggest using 'pudb'.
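For illustration, a sketch in TF 1.x style (matching the question's code; the layer names are made up) of the same one-hidden-layer network expressed with tf.layers:

import tensorflow as tf

num_var, num_labels, num_RELU = 29, 39, 1024

tf_train_dataset = tf.placeholder(tf.float32, shape=(None, num_var))
# One hidden ReLU layer and a linear output layer; weights, biases and the
# activation are handled internally by tf.layers.dense.
hidden = tf.layers.dense(tf_train_dataset, num_RELU, activation=tf.nn.relu, name='relu_layer')
logits = tf.layers.dense(hidden, num_labels, name='output_layer')
train_prediction = tf.nn.softmax(logits)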
I'm trying to understand how CSS source maps work. I've created a very simple scss file.
#navbar {
color: black;
}
When I compile the above scss, I get the following map file.
{
"version": "3",
"mappings": "AAAA,OAAQ;EACP,KAAK,EAAE,KAAK",
"sources": ["test.scss"],
"file": "test.css"
}
when I decode "mappings", I get the following values.
0) [0,0,0,0], [7,0,0,8]
1) [2,0,1,-7], [5,0,0,5], [2,0,0,2], [5,0,0,5]
What are those values?
I found an example at http://www.thecssninja.com/javascript/source-mapping, under the section "Base64 VLQ and keeping the source map small".
The above diagram AAgBC once processed further would return 0, 0, 32, 16, 1 – the 32 being the continuation bit that helps build the following value of 16. B purely decoded in Base64 is 1. So the important values that are used are 0, 0, 16, 1. This then lets us know that line 1 (lines are kept count by the semi colons) column 0 of the generated file maps to file 0 (array of files 0 is foo.js), line 16 at column 1.
Even after reading the answers, the explanations were still not so clear to me. Here is an explanation in plain English in case it helps someone:
something like ;;AAAA,IAAM,WAAW,SAAX;... means <line0 info>;<line1 info>;...
so for ;;AAAA,IAAM,WAAW,SAAX;..., line 0 and line 1 don't have any important info (empty spaces etc.)
then for line 2 we have AAAA,IAAM,WAAW,SAAX
we convert each of these groups to binary using the base64 character mapping:
BASE64_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
so we basically find the index in this BASE64_ALPHABET above, and convert the index to 6-bit binary (6 bits because we use base64). e.g. the index of A is 0, so in 6-bit binary it's 000000. so AAAA would be 000000 000000 000000 000000
then if we do this with IAAM we get: 001000 000000 000000 001100.
this bit representation is the VLQ-encoded version of 4 numbers. for each number we take its first block, remove the continuation bit (the first bit) and the sign bit (the last bit), keep the remaining bits, and continue appending bits while the continuation bit is 1.
e.g. 001000 is (cont)0100(sign)
so cont = 0 (no other block will be added to this number)
sign = 0 (it's positive)
bits = 0100 --> so it is 4 in decimal
-- note that we only remove the sign bit from the first block. so if we had
101000 001000
we would say
0100 (cont=1, sign=0) 01000 (cont=0)
and since the later block carries the more significant bits, we would have had +010000100 = 132
when we keep doing this, we will get these 4 numbers (the continuation bit should be 0 exactly 4 times).
AAAA would map to (0,0,0,0)
IAAM would map to (4,0,0,6)
WAAW would map to (11,0,0,11)
...
now, each of these are relative offsets. so we correct those:
AAAA actually points to: (0,0,0,0)
IAAM actually points to: (0+4, 0+0, 0+0, 0+6) = (4,0,0,6)
WAAW actually points to: (4+11, 0+0, 0+0, 6+11) = (15,0,0,17) // we added to where IAAM was actually pointing
...
so numbers (n1, n2, n3, n4) here stand for
n1: column in generated code
n2: corresponding source file index in "sources" array of sourceMapping output
n3: line number in original code
n4: column number in original code
we already knew which line this referred to from the beginning. so using the information we have above, we learned:
AAAA: line 2, column 0 of generated code points to sources[0], line 0, column 0
IAAM: line 2, column 4 of generated code points to sources[0], line 0, column 6
WAAW: line 2, column 15 of generated code points to sources[0], line 0, column 17
...
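To make the steps above concrete, here is a small Python sketch that performs this decoding by hand (a standalone helper written for this answer, not taken from any source-map library):

BASE64_ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

def decode_vlq_segment(segment):
    # Decode one comma-separated group such as 'IAAM' into its list of numbers.
    values, current, shift = [], 0, 0
    for char in segment:
        digit = BASE64_ALPHABET.index(char)
        current |= (digit & 0b011111) << shift   # lower 5 bits carry data
        if digit & 0b100000:                     # bit 5 is the continuation bit
            shift += 5                           # next char holds more significant bits
        else:
            sign = -1 if current & 1 else 1      # last bit of the assembled value is the sign
            values.append(sign * (current >> 1))
            current, shift = 0, 0
    return values

print(decode_vlq_segment('AAAA'))   # [0, 0, 0, 0]
print(decode_vlq_segment('IAAM'))   # [4, 0, 0, 6]
print(decode_vlq_segment('WAAW'))   # [11, 0, 0, 11]
print(decode_vlq_segment('AAgBC'))  # [0, 0, 16, 1] -- the example from the first answer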
two good sources about this:
more on VLQ encoding
more on how to interpret/decode
Despite the examples I could find, it took me quite a while to understand how the coding/decoding really works. So I thought I'd learn best by trying to make something myself in a very explicit, step-by-step way. I started out with the explanation of VLQ at this blog.
I use the following Python functor to generate sourcemaps for Transcrypt.
The code is simple and I think gives good insight in how the coding/decoding works in principle.
To achieve speed despite its simplicity, it caches the first 256 numbers, which are used most often in generating a v3 sourcemap.
import math

class GetBase64Vlq:
    def __init__ (self):
        self.nBits32 = 5
        self.encoding = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
        self.prefabSize = 256
        self.prefab = [self (i, True) for i in range (self.prefabSize)]

    def __call__ (self, anInteger, init = False):
        if not init and 0 < anInteger < self.prefabSize:
            return self.prefab [anInteger]
        else:
            signed = bin (abs (anInteger)) [2 : ] + ('1' if anInteger < 0 else '0')
            nChunks = math.ceil (len (signed) / float (self.nBits32))
            padded = (self.nBits32 * '0' + signed) [-nChunks * self.nBits32 : ]
            chunks = [('1' if iChunk else '0') + padded [iChunk * self.nBits32 : (iChunk + 1) * self.nBits32] for iChunk in range (nChunks - 1, -1, -1)]
            return ''.join ([self.encoding [int (chunk, 2)] for chunk in chunks])

getBase64Vlq = GetBase64Vlq ()
Example of use:
while (True):
    print (getBase64Vlq (int (input ('Give number:'))))
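As a quick cross-check against the AAgBC example quoted earlier (where the pair gB encodes 16), assuming the class above is defined:

print (getBase64Vlq (16))   # gB -- matches the 'AAgBC' example above
print (getBase64Vlq (-3))   # H  -- negative numbers get the sign bit set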