I went through the astropy documentation and concluded that
there are no native ECEF (earth-centered, earth-fixed) frames with lon, lat coordinates that can be converted into equatorial coordinates (RA, Dec) when a time is given. Is this true?
Eventually, I'd like to create a map using:
map = HEALPix(nside=NSIDE, order='nested', frame=MY_REF_FRAME())
MY_REF_FRAME = ITRS is apparently not an option.
I'd be grateful if someone can help me to find if there is a way this can be defined with a recent version of astropy.
Thanks!
Eric
HEALPix itself, and https://astropy-healpix.readthedocs.io, are mostly about the HEALPix pixels in one given sky frame.
For transformations between different sky coordinate systems you should look to astropy.coordinates.
I'm not familiar with ECEF, but you might be able to do the computation you want using the existing ICRS and ITRS frames (see https://stackoverflow.com/a/49325584/498873).
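Not from the original thread; a minimal sketch of one way to do this with astropy.coordinates, assuming each Earth-fixed (lon, lat) is interpreted as the local zenith direction at the given time. The location and time values below are placeholders.

import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, ICRS, SkyCoord
from astropy.time import Time

t = Time('2018-12-03 14:00:00')
loc = EarthLocation(lon=0.0 * u.deg, lat=0.0 * u.deg, height=0.0 * u.m)

# RA/Dec of the local zenith for that Earth-fixed lon/lat at the given time.
zenith = SkyCoord(alt=90 * u.deg, az=0 * u.deg, frame=AltAz(obstime=t, location=loc))
radec = zenith.transform_to(ICRS())
print(radec.ra.deg, radec.dec.deg)

Whether the zenith interpretation is the right one depends on what the Earth-fixed map is supposed to represent; for other interpretations a different chain of frames may be needed.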
Let me post an answer to @Christoph's last comment here so that I can paste the full code. This is the code and the error I get:
In [1]: import astropy.coordinates as coord
In [2]: import astropy.units as u
In [3]: from astropy.time import Time
In [4]: coord.ITRS( coord.SphericalRepresentation(lon= 0.0 *u.deg ,lat = 0.0 * u.deg, distance = 1 * u.m), obstime=Time('2018-12-03 14:00:00')).transform_to(coord.ICRS)
ValueError Traceback (most recent call last)
<ipython-input-4-1d50da4d3855> in <module>
----> 1 coord.ITRS( coord.SphericalRepresentation(lon= 0.0 *u.deg ,lat = 0.0 * u.deg, distance = 1 * u.m), obstime=Time('2018-12-03 14:00:00')).transform_to(coord.ICRS)
~/lib/python3.5/site-packages/astropy/coordinates/baseframe.py in transform_to(self, new_frame)
1165 msg = 'Cannot transform from {0} to {1}'
1166 raise ConvertError(msg.format(self.__class__, new_frame.__class__))
-> 1167 return trans(self, new_frame)
1168
1169 def is_transformable_to(self, new_frame):
...
474 # In case we want to convert 1e20 to int.
475 try:
--> 476 fill_value = np.array(fill_value, copy=False, dtype=ndtype)
477 except OverflowError:
478 # Raise TypeError instead of OverflowError.
ValueError: invalid literal for int() with base 10: 'N'
In [6]: astropy.__version__
Out[6]: '3.0.5'
In [8]: numpy.__version__
Out[8]: '1.15.3'
Related
I have been able to successfully create a graph in OSMnx from edge and node shapefiles that I compiled from cadastral survey maps for an informal settlement in Dharavi, Mumbai, since OSM data is not good in such cases. However, while calculating basic stats of the graph I am getting ValueError: max() arg is an empty sequence. Can you please advise on what could be wrong? I have added a screenshot of the graph and the error.
[screenshot of the code snippet]
G7 = ox.utils_graph.graph_from_gdfs(dnodes_gdf, dedges_gdf, graph_attrs=None)

G7_projected = ox.project_graph(G7)
fig, ax = ox.plot_graph(G7_projected)

G7_proj = ox.project_graph(G7)
nodes_proj = ox.graph_to_gdfs(G7_proj, edges=False)
graph_area_m = nodes_proj.unary_union.convex_hull.area
graph_area_m

basic_stats = ox.basic_stats(G7)
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_11988/668476885.py in <module>
----> 1 basic_stats = ox.basic_stats(G7)
~\.conda\envs\ox\lib\site-packages\osmnx\stats.py in basic_stats(G, area, clean_int_tol, clean_intersects, tolerance, circuity_dist)
346 stats["edge_length_avg"] = stats["edge_length_total"] / stats["m"]
347 stats["streets_per_node_avg"] = streets_per_node_avg(G)
--> 348 stats["streets_per_node_counts"] = streets_per_node_counts(G)
349 stats["streets_per_node_proportions"] = streets_per_node_proportions(G)
350 stats["intersection_count"] = intersection_count(G)
~\.conda\envs\ox\lib\site-packages\osmnx\stats.py in streets_per_node_counts(G)
80 """
81 spn_vals = list(streets_per_node(G).values())
---> 82 return {i: spn_vals.count(i) for i in range(int(max(spn_vals)) + 1)}
83
84
ValueError: max() arg is an empty sequence
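There is no accepted answer in this thread; the sketch below is only a hedged guess at the cause, assuming an OSMnx 1.x version in which streets_per_node reads the "street_count" node attribute. A graph built from your own GeoDataFrames may lack that attribute, which would leave streets_per_node empty and make max() fail; count_streets_per_node can recompute it.

import networkx as nx
import osmnx as ox

# If the nodes carry no "street_count" attribute, recompute and attach it,
# then basic_stats should no longer see an empty sequence.
if not nx.get_node_attributes(G7, "street_count"):
    spn = ox.stats.count_streets_per_node(G7)
    nx.set_node_attributes(G7, values=spn, name="street_count")

basic_stats = ox.basic_stats(G7)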
I am trying to fine-tune a RoBERTa model after adding some special tokens to its tokenizer:
special_tokens_dict = {'additional_special_tokens': ['[Tok1]','[Tok2]']}
tokenizer.add_special_tokens(special_tokens_dict)
I get this error when I try to train the model (on CPU):
IndexError Traceback (most recent call last)
<ipython-input-75-d63f8d3c6c67> in <module>()
50 l = model(b_input_ids,
51 attention_mask=b_input_mask,
---> 52 labels=b_labels)
53 loss,logits = l
54 total_train_loss += l[0].item()
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
P.S. If I comment out add_special_tokens, the code works.
You also need to tell your model that it needs to learn the vector representations of the two new tokens:
from transformers import RobertaTokenizer, RobertaForQuestionAnswering
t = RobertaTokenizer.from_pretrained('roberta-base')
m = RobertaForQuestionAnswering.from_pretrained('roberta-base')
#roberta-base 'knows' 50265 tokens
print(m.roberta.embeddings.word_embeddings)
special_tokens_dict = {'additional_special_tokens': ['[Tok1]','[Tok2]']}
t.add_special_tokens(special_tokens_dict)
#we now tell the model that it needs to learn new tokens:
m.resize_token_embeddings(len(t))
m.roberta.embeddings.word_embeddings.padding_idx=1
print(m.roberta.embeddings.word_embeddings)
Output:
Embedding(50265, 768, padding_idx=1)
Embedding(50267, 768, padding_idx=1)
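Not part of the original answer; a small follow-up check, assuming the t and m objects defined above, that the new special tokens now map to ids inside the resized embedding matrix.

ids = t.convert_tokens_to_ids(['[Tok1]', '[Tok2]'])
print(ids)  # e.g. [50265, 50266]

# Both ids must be smaller than the embedding size, otherwise the
# "index out of range in self" IndexError from the question comes back.
assert max(ids) < m.roberta.embeddings.word_embeddings.num_embeddings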
I am looking for a way to quickly change a graph within an interactive session in Jupyter in order to test different structures. Initially I wanted to simply delete existing variables and recreate them with a different initializer. This does not seem to be possible [1].
I then found [2] and am now attempting to simply discard and recreate the default graph. But this does not seem to work. This is what I do:
a. Start a session
import tensorflow as tf
import math
sess = tf.InteractiveSession()
b. Create a variable in the default graph
IMAGE_PIXELS = 32 * 32
HIDDEN1 = 200
BATCH_SIZE = 100
NUM_POINTS = 30
images_placeholder = tf.placeholder(tf.float32, shape=(BATCH_SIZE, IMAGE_PIXELS))
points_placeholder = tf.placeholder(tf.float32, shape=(BATCH_SIZE, NUM_POINTS))
# Hidden 1
with tf.name_scope('hidden1'):
    weights_init = tf.truncated_normal([IMAGE_PIXELS, HIDDEN1], stddev=1.0 / math.sqrt(float(IMAGE_PIXELS)))
    weights = tf.Variable(weights_init, name='weights')
    biases_init = tf.zeros([HIDDEN1])
    biases = tf.Variable(biases_init, name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images_placeholder, weights) + biases)
c. Use the variable
# Add the variable initializer Op.
init = tf.initialize_all_variables()
# Run the Op to initialize the variables.
sess.run(init)
d. Reset the graph
tf.reset_default_graph()
e. Recreate the variable
with tf.name_scope('hidden1'):
    weights = tf.get_variable(name='weights', shape=[IMAGE_PIXELS, HIDDEN1],
                              initializer=tf.contrib.layers.xavier_initializer())
    biases_init = tf.zeros([HIDDEN1])
    biases = tf.Variable(biases_init, name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images_placeholder, weights) + biases)
However, I get an exception (see below). So my question is: is it possible to reset/remove the graph and recreate it as before? If so, how?
Appreciate any pointers.
TIA,
Refs
Change initializer of Variable in Tensorflow
Remove nodes from graph or reset entire default graph
Exception
ValueError Traceback (most recent call last)
<ipython-input-5-e98a82c45473> in <module>()
5 biases_init = tf.zeros([HIDDEN1])
6 biases = tf.Variable(biases_init, name='biases')
----> 7 hidden1 = tf.nn.relu(tf.matmul(images_placeholder, weights) + biases)
8
/home/hmf/my_py3/lib/python3.4/site-packages/tensorflow/python/ops/math_ops.py in matmul(a, b, transpose_a, transpose_b, a_is_sparse, b_is_sparse, name)
1323 A `Tensor` of the same type as `a`.
1324 """
-> 1325 with ops.op_scope([a, b], name, "MatMul") as name:
1326 a = ops.convert_to_tensor(a, name="a")
1327 b = ops.convert_to_tensor(b, name="b")
/usr/lib/python3.4/contextlib.py in __enter__(self)
57 def __enter__(self):
58 try:
---> 59 return next(self.gen)
60 except StopIteration:
61 raise RuntimeError("generator didn't yield") from None
/home/hmf/my_py3/lib/python3.4/site-packages/tensorflow/python/framework/ops.py in op_scope(values, name, default_name)
4014 ValueError: if neither `name` nor `default_name` is provided.
4015 """
-> 4016 g = _get_graph_from_inputs(values)
4017 n = default_name if name is None else name
4018 if n is None:
/home/hmf/my_py3/lib/python3.4/site-packages/tensorflow/python/framework/ops.py in _get_graph_from_inputs(op_input_list, graph)
3812 graph = graph_element.graph
3813 elif original_graph_element is not None:
-> 3814 _assert_same_graph(original_graph_element, graph_element)
3815 elif graph_element.graph is not graph:
3816 raise ValueError(
/home/hmf/my_py3/lib/python3.4/site-packages/tensorflow/python/framework/ops.py in _assert_same_graph(original_item, item)
3757 if original_item.graph is not item.graph:
3758 raise ValueError(
-> 3759 "%s must be from the same graph as %s." % (item, original_item))
3760
3761
ValueError: Tensor("weights:0", shape=(1024, 200), dtype=float32_ref) must be from the same graph as Tensor("Placeholder:0", shape=(100, 1024), dtype=float32).
Resetting the default graph does not remove the tensors you created earlier. Calling tf.reset_default_graph() simply creates a new graph and sets it as the default.
Here is an example to illustrate:
x = tf.constant(1)
print(tf.get_default_graph() == x.graph)  # prints True
tf.reset_default_graph()
print(tf.get_default_graph() == x.graph)  # prints False
The error you got indicates that two tensors must be from the same graph, which means you are still mixing tensors from the previous graph with tensors from the current default graph.
The easy fix is to recreate the two placeholders images_placeholder and points_placeholder after resetting the graph, as sketched below.
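Not from the original answer; a minimal sketch of that fix, reusing the constants and ops from the question (TF 1.x graph mode).

import math
import tensorflow as tf

IMAGE_PIXELS = 32 * 32
HIDDEN1 = 200
BATCH_SIZE = 100

tf.reset_default_graph()  # start from a fresh default graph

# Recreate *everything* in the new graph, including the placeholders.
images_placeholder = tf.placeholder(tf.float32, shape=(BATCH_SIZE, IMAGE_PIXELS))

with tf.name_scope('hidden1'):
    weights = tf.get_variable(name='weights', shape=[IMAGE_PIXELS, HIDDEN1],
                              initializer=tf.contrib.layers.xavier_initializer())
    biases = tf.Variable(tf.zeros([HIDDEN1]), name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images_placeholder, weights) + biases)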
I have downloaded a TensorFlow GraphDef that implements a VGG16 ConvNet, which I load like this:
Pl['images'] = tf.placeholder(tf.float32,
                              [None, 448, 448, 3],
                              name="images")  # batch x width x height x channels

with open("tensorflow-vgg16/vgg16.tfmodel", mode='rb') as f:
    fileContent = f.read()

graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
tf.import_graph_def(graph_def, input_map={"images": Pl['images']})
In addition, I have image features whose shape matches the output of "import/pool5/".
How can I tell my graph that I don't want to use its input "images", but instead take the tensor "import/pool5/" as input?
Thanks!
EDIT
OK I realize I haven't been very clear. Here is the situation:
I am trying to use this implementation of ROI pooling, using a pre-trained VGG16, which I have in the GraphDef format. So here is what I do:
First of all, I load the model:
tf.reset_default_graph()
with open("tensorflow-vgg16/vgg16.tfmodel",
mode='rb') as f:
fileContent = f.read()
graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
graph = tf.get_default_graph()
Then, I create my placeholders
images = tf.placeholder(tf.float32,
                        [None, 448, 448, 3],
                        name="images")  # batch x width x height x channels

boxes = tf.placeholder(tf.float32,
                       [None, 5],  # 5 = [batch_id, x1, y1, x2, y2]
                       name="boxes")
And I define the output of the first part of the graph to be conv5_3/Relu
tf.import_graph_def(graph_def,
input_map={'images':images})
out_tensor = graph.get_tensor_by_name("import/conv5_3/Relu:0")
So, out_tensor is of shape [None,14,14,512]
Then, I do the ROI pooling:
[out_pool,argmax] = module.roi_pool(out_tensor,
boxes,
7,7,1.0/1)
With out_pool.shape = N_Boxes_in_batch x 7 x 7 x 512, which matches the shape of pool5. I would then like to feed out_pool as the input to the op that comes just after pool5, so it would look like:
tf.import_graph_def(graph.as_graph_def(),
input_map={'import/pool5':out_pool})
But it doesn't work; I get this error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-89-527398d7344b> in <module>()
5
6 tf.import_graph_def(graph.as_graph_def(),
----> 7 input_map={'import/pool5':out_pool})
8
9 final_out = graph.get_tensor_by_name("import/Relu_1:0")
/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict)
333 # NOTE(mrry): If the graph contains a cycle, the full shape information
334 # may not be available for this op's inputs.
--> 335 ops.set_shapes_for_outputs(op)
336
337 # Apply device functions for this op.
/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
1610 raise RuntimeError("No shape function registered for standard op: %s"
1611 % op.type)
-> 1612 shapes = shape_func(op)
1613 if len(op.outputs) != len(shapes):
1614 raise RuntimeError(
/home/hbenyounes/vqa/roi_pooling_op_grad.py in _roi_pool_shape(op)
13 channels = dims_data[3]
14 print(op.inputs[1].name, op.inputs[1].get_shape())
---> 15 dims_rois = op.inputs[1].get_shape().as_list()
16 num_rois = dims_rois[0]
17
/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py in as_list(self)
745 A list of integers or None for each dimension.
746 """
--> 747 return [dim.value for dim in self._dims]
748
749 def as_proto(self):
TypeError: 'NoneType' object is not iterable
Any clue?
It is usually very convenient to use tf.train.export_meta_graph to store the whole MetaGraph. Then, upon restoring, you can use tf.train.import_meta_graph, because it turns out that it passes all additional arguments to the underlying import_scoped_meta_graph, which has the input_map argument and uses it when it gets to its own invocation of import_graph_def.
It is not documented, and took me waaaay toooo much time to find it, but it works!
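Not from the original answer; a minimal self-contained sketch of that MetaGraph round trip, assuming TF 1.x graph mode. The tensor names used here (x, y, new_input) are only illustrative.

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4], name='x')
y = tf.identity(2.0 * x, name='y')

meta = tf.train.export_meta_graph()  # capture the whole MetaGraph in memory

tf.reset_default_graph()
new_input = tf.constant([[1.0, 2.0, 3.0, 4.0]], name='new_input')

# import_meta_graph forwards input_map to import_scoped_meta_graph,
# which eventually hands it to import_graph_def.
tf.train.import_meta_graph(meta, input_map={'x:0': new_input})
y_restored = tf.get_default_graph().get_tensor_by_name('y:0')

with tf.Session() as sess:
    print(sess.run(y_restored))  # [[2. 4. 6. 8.]]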
What I would do is something along those lines:
-First retrieve the names of the tensors representing the weights and biases of the 3 fully connected layers coming after pool5 in VGG16.
To do that I would inspect [n.name for n in graph.as_graph_def().node].
(They probably look something like import/locali/weight:0, import/locali/bias:0, etc.)
-Put them in a python list:
weights_names=["import/local1/weight:0" ,"import/local2/weight:0" ,"import/local3/weight:0"]
biases_names=["import/local1/bias:0" ,"import/local2/bias:0" ,"import/local3/bias:0"]
-Define a function that looks something like this:
def pool5_tofcX(input_tensor, layer_number=3):
    flatten = tf.reshape(input_tensor, (-1, 7 * 7 * 512))
    tmp = flatten
    for i in range(layer_number):
        tmp = tf.matmul(tmp, graph.get_tensor_by_name(weights_names[i]))
        tmp = tf.nn.bias_add(tmp, graph.get_tensor_by_name(biases_names[i]))
        tmp = tf.nn.relu(tmp)
    return tmp
Then define the tensor using the function:
wanted_output=pool5_tofcX(out_pool)
Then you are done!
Jonan Georgiev provided an excellent answer here. The same approach was also described with little fanfare at the end of this git issue: https://github.com/tensorflow/tensorflow/issues/3389
Below is a copy/paste runnable example of using this approach to switch out a placeholder for a tf.data.Dataset get_next tensor.
import tensorflow as tf
my_placeholder = tf.placeholder(dtype=tf.float32, shape=1, name='my_placeholder')
my_op = tf.square(my_placeholder, name='my_op')
# Save the graph to memory
graph_def = tf.get_default_graph().as_graph_def()
print('----- my_op before any remapping -----')
print([n for n in graph_def.node if n.name == 'my_op'])
tf.reset_default_graph()
ds = tf.data.Dataset.from_tensors(1.0)
next_tensor = tf.data.make_one_shot_iterator(ds).get_next(name='my_next_tensor')
# Restore the graph with a custom input mapping
tf.graph_util.import_graph_def(graph_def, input_map={'my_placeholder': next_tensor}, name='')
print('----- my_op after remapping -----')
print([n for n in tf.get_default_graph().as_graph_def().node if n.name == 'my_op'])
Output, where we can clearly see that the input to the square operation has changed.
----- my_op before any remapping -----
[name: "my_op"
op: "Square"
input: "my_placeholder"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
----- my_op after remapping -----
[name: "my_op"
op: "Square"
input: "my_next_tensor"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
I'm a beginner with TF.
I've tried to adapt code that works well with some other data (notMNIST) to some new data, and I have a dimensionality error that I don't know how to deal with.
To debug, I'm trying to use the tf.shape method, but it doesn't give me the info I need...
def reformat(dataset, labels):
    # dataset = dataset.reshape((-1, num_var)).astype(np.float32)
    # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
type(train_dataset)
Training set (790184, 29) (790184, 39)
Validation set (43899, 29) (43899, 39)
Test set (43899, 29) (43899, 39)
# Adding regularization to the 1 hidden layer network
graph1 = tf.Graph()
batch_size = 128
num_steps=3001
import datetime
startTime = datetime.datetime.now()
def define_and_run_batch(beta):
    num_RELU = 1024
    with graph1.as_default():
        # Input data. For the training data, we use a placeholder that will be fed
        # at run time with a training minibatch.
        tf_train_dataset = tf.placeholder(tf.float32,
                                          shape=(batch_size, num_var))
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)
        # Variables.
        weights_RELU = tf.Variable(
            tf.truncated_normal([num_var, num_RELU]))
        print(tf.shape(weights_RELU))
        biases_RELU = tf.Variable(tf.zeros([num_RELU]))
        weights_layer1 = tf.Variable(
            tf.truncated_normal([num_RELU, num_labels]))
        biases_layer1 = tf.Variable(tf.zeros([num_labels]))
        # Training computation.
        logits_RELU = tf.matmul(tf_train_dataset, weights_RELU) + biases_RELU
        RELU_vec = tf.nn.relu(logits_RELU)
        logits_layer = tf.matmul(RELU_vec, weights_layer1) + biases_layer1
        # loss = tf.reduce_mean(
        #     tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels))
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels, name="cross_entropy")
        l2reg = tf.reduce_sum(tf.square(weights_RELU)) + tf.reduce_sum(tf.square(weights_layer1))
        # beta = 0.005
        loss = tf.reduce_mean(cross_entropy + beta * l2reg)
        # Optimizer.
        optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)
        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits_layer)
        print("ok")
        print(tf.shape(weights_RELU))
        valid_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
        test_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_test_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
    with tf.Session(graph=graph1) as session:
        tf.initialize_all_variables().run()
        print("Initialized")
        for step in range(num_steps):
            # Pick an offset within the training data, which has been randomized.
            # Note: we could use better randomization across epochs.
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            # Generate a minibatch.
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            # Prepare a dictionary telling the session where to feed the minibatch.
            # The key of the dictionary is the placeholder node of the graph to be fed,
            # and the value is the numpy array to feed to it.
            feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
            _, l, predictions, logits = session.run(
                [optimizer, loss, train_prediction, logits_RELU], feed_dict=feed_dict)
            if (step % 500 == 0):
                print("Minibatch loss at step %d: %f" % (step, l))
                print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
                print("Validation accuracy: %.1f%%" % accuracy(
                    valid_prediction.eval(), valid_labels))
        test_acc = accuracy(test_prediction.eval(), test_labels)
        print("Test accuracy: %.1f%%" % test_acc)
        print('loss=%s' % l)
    x = datetime.datetime.now() - startTime
    print(x)
    return (test_acc, round(l, 5))
define_and_run_batch(0.005)
Tensor("Shape:0", shape=(2,), dtype=int32) ok Tensor("Shape_1:0",
shape=(2,), dtype=int32)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in ()
     94     return(test_acc,round(l,5))
     95
---> 96 define_and_run_batch(0.005)

in define_and_run_batch(beta)
     54     print(tf.shape(weights_RELU) )
     55     valid_prediction = tf.nn.softmax(
---> 56         tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)),weights_layer1)+biases_layer1)
     57
     58

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc in matmul(a, b, transpose_a, transpose_b, a_is_sparse, b_is_sparse, name)
    949                            transpose_a=transpose_a,
    950                            transpose_b=transpose_b,
--> 951                            name=name)
    952
    953 sparse_matmul = gen_math_ops._sparse_mat_mul

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in _mat_mul(a, b, transpose_a, transpose_b, name)
    684   """
    685   return _op_def_lib.apply_op("MatMul", a=a, b=b, transpose_a=transpose_a,
--> 686                               transpose_b=transpose_b, name=name)
    687
    688

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.pyc in apply_op(self, op_type_name, name, **keywords)
    653         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    654                          input_types=input_types, attrs=attr_protos,
--> 655                          op_def=op_def)
    656         outputs = op.outputs
    657         return _Restructure(ops.convert_n_to_tensor(outputs), output_structure)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2040                     original_op=self._default_original_op, op_def=op_def)
   2041     if compute_shapes:
-> 2042       set_shapes_for_outputs(ret)
   2043     self._add_op(ret)
   2044     self._record_op_seen_by_control_dependencies(ret)

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in set_shapes_for_outputs(op)
   1526     raise RuntimeError("No shape function registered for standard op: %s"
   1527                        % op.type)
-> 1528   shapes = shape_func(op)
   1529   if len(op.outputs) != len(shapes):
   1530     raise RuntimeError(

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/common_shapes.pyc in matmul_shape(op)
     87   inner_a = a_shape[0] if transpose_a else a_shape[1]
     88   inner_b = b_shape[1] if transpose_b else b_shape[0]
---> 89   inner_a.assert_is_compatible_with(inner_b)
     90   return [tensor_shape.TensorShape([output_rows, output_cols])]
     91

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.pyc in assert_is_compatible_with(self, other)
     92     if not self.is_compatible_with(other):
     93       raise ValueError("Dimensions %s and %s are not compatible"
---> 94                        % (self, other))
     95
     96   def merge_with(self, other):

ValueError: Dimensions Dimension(29) and Dimension(30) are not compatible
The whole code is on my GitHub:
https://github.com/FaguiCurtain/Kaggle-SF
The Udacity Assignment 3 file is working.
The original data are here:
https://www.kaggle.com/c/sf-crime/data
In Udacity, the data were images, each a 28x28 matrix that was reformatted into a flattened vector of size 784.
In the Kaggle-SF file, I am feeding vectors of size 29, and the labels can take 39 different values.
Thanks for your help!
In debug mode you can check the shapes of your Tensors.
By the way, your error is in the valid_prediction assignment. To make it easier to debug and to read, it's better to define each step on a separate line; you are using 4 operations in 1 line. In debug mode (for example in PyCharm) you can then inspect each element and check what is causing the problem, as sketched below.
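Not from the original answer; a sketch of that advice applied to the line from the question, assuming the same variable names. Splitting the expression lets you inspect each intermediate shape and see exactly where the 29-vs-30 mismatch happens.

# Inspect the static shapes before wiring the ops; the first matmul is where
# the Dimension(29) vs Dimension(30) mismatch comes from.
print(tf_valid_dataset.get_shape(), weights_RELU.get_shape())

# One operation per line instead of four nested in one expression.
valid_hidden_logits = tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU
valid_hidden = tf.nn.relu(valid_hidden_logits)
valid_logits = tf.matmul(valid_hidden, weights_layer1) + biases_layer1
valid_prediction = tf.nn.softmax(valid_logits)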
To check the dimensions, you can print the tensors directly; the printed representation includes their static shapes. If you are a beginner, I suggest trying the 'tf.layers' package, which contains high-level wrappers for the various layers one would need to build a CNN in TensorFlow. By using it, you can avoid dealing with low-level operations like 'matmul' and adding biases by hand, and activations can be applied directly by the layers without implementing them manually.
As far as debugging is concerned, since the operations in the code are merged into single lines, it's hard to see what is going on under the hood unless you use a proper debugger. If you are not using an IDE, I suggest 'pudb'.
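Not part of the original answer; a minimal sketch of the tf.layers suggestion above, assuming TF 1.x and the shapes from the question (29 input features, 39 label classes, 1024 hidden units).

import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=(None, 29))   # 29 features per example
labels = tf.placeholder(tf.float32, shape=(None, 39))   # 39 one-hot classes

# tf.layers.dense bundles the weight matrix, bias and activation in one call,
# so shape mismatches surface with a clear layer-level error message.
hidden = tf.layers.dense(inputs, 1024, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 39)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))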