For my code I want every number under 70 to be deleted from a dictionary, along with the name associated with that number; alternatively, it would be enough to display only the numbers that are 70 or above. I'm unsure how to specify this.
Below is the code I have in its entirety:
name = []
number = []
name_grade = {}
counter = 0
counter_bool = True
num_loop = True

while counter_bool:
    stu = int(input("please enter the number of students: "))
    if stu < 2:
        print("value is too low, try again")
        continue
    else:
        break

while counter != stu:
    name_inp = str(input("Enter your name: "))
    while num_loop:
        number_inp = int(input("Enter your number: "))
        if number_inp < 0 or number_inp > 100:
            print("The value is too high or too low, please enter a number between 0 and 100.")
            continue
        else:
            break
    name_grade[name_inp] = number_inp
    name.append(name_inp)
    number.append(number_inp)
    counter += 1

print(name_grade)

sorted_numbers = sorted(name_grade.items(), key=lambda x: x[1])
print(sorted_numbers)

if number > 70:
    resorted_numbers = number < 70
print(resorted_numbers)
How would I go about this?
Also, if it's not too much trouble, could someone explain dictionary keys in detail and how the lambda function I've used works? I got help with it, but I would prefer to know the small details of how it's applied and formatted; don't worry if it's a pain to explain.
You can just iterate over the dictionary with a comprehension and keep only the entries whose value is 70 or above:

resorted_numbers = {k: v for k, v in name_grade.items() if v >= 70}
The dict.items method returns the dictionary's key-value pairs as tuples (a view object in Python 3), so the lambda function is telling sorted to sort by the second element of each tuple, i.e. the value.
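If it helps, here is a minimal sketch of both pieces, with made-up data:

grades = {"Ann": 91, "Bob": 64, "Cy": 78}

print(list(grades.items()))
# [('Ann', 91), ('Bob', 64), ('Cy', 78)]  <- (key, value) tuples

# x is one (key, value) tuple, so x[1] is the grade
print(sorted(grades.items(), key=lambda x: x[1]))
# [('Bob', 64), ('Cy', 78), ('Ann', 91)]

# the same filtering idea, keeping only grades of 70 or above
print({k: v for k, v in grades.items() if v >= 70})
# {'Ann': 91, 'Cy': 78}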
for i in range():
    for j in range(i):
        print("*", end=" ")
    print()
Error:

TypeError                                 Traceback (most recent call last)
<ipython-input-81-6bc40b03c32e> in <module>
----> 1 for i in range():
      2     for j in range(i):
      3         print("*", end=" ")
      4     print()
      5

TypeError: 'int' object is not callable
In Python, range() with no arguments is not valid; you need to provide at least one argument so it can create a range.
For example:
range(3) will give you a range object yielding the integers { 0, 1, 2 };
range(3, 20, 7) will generate { 3, 10, 17 };
range() will just generate an error :-)
You have to pass an int argument to range(), just the way you did in the second for loop, e.g. range(10):
for i in range(10):
    for j in range(i):
        print("*", end=" ")
    print()
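For what it's worth, the corrected version prints a growing triangle of stars; the very first line is blank, because range(0) is empty, and the rows then grow one star at a time:

*
* *
* * *
(... up to nine stars on the last row)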
Please help with this. I have an input like this:
a = """A|9578
C|547
A|459
B|612
D|53
B|6345
A|957498
C|2910"""
I want to print the numbers associated with each letter in sorted order, like this:
A_0|459
A_1|957498
A_2|9578
C_0|2910
C_1|547
B_0|612
B_1|6345
D_0|53
So far I was able to store the letters and numbers in the list b, but I'm stuck when I try to create a dictionary-like structure joining each letter with its values; I get this error.
b = [i.split('|') for i in a.split('\n')]
c = dict()
d = [c[i].append(j) for i,j in b]
>>> d = [c[i].append(j) for i,j in b]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
TypeError: list indices must be integers or slices, not str
I'm working on python 3.6 just in case. Thanks in advance.
We'll split the string into pairs, sort those pairs, then use groupby and enumerate to come up with the indices.
from itertools import groupby
from operator import itemgetter

def process(a):
    pairs = sorted(x.split('|') for x in a.split())
    groups = groupby(pairs, key=itemgetter(0))
    for _, g in groups:
        for index, (letter, number) in enumerate(g):
            yield '{}_{}|{}'.format(letter, index, number)

for i in process(a):
    print(i)
gives us
A_0|459
A_1|957498
A_2|9578
B_0|612
B_1|6345
C_0|2910
C_1|547
D_0|53
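As a side note on the error in the question: with c = dict(), the expression c[i].append(j) would raise a KeyError for a new key (the TypeError in your traceback suggests c was actually a list at that point in your session). If you do want an explicit letter-to-values mapping, collections.defaultdict is the usual tool; a minimal sketch building on your b:

from collections import defaultdict

c = defaultdict(list)        # a missing key starts out as an empty list
for letter, number in b:
    c[letter].append(number)
# c == {'A': ['9578', '459', '957498'], 'C': ['547', '2910'], 'B': ['612', '6345'], 'D': ['53']}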
I want to print out the elements in the list in reverse order recursively.
def f3(alist):
    if alist == []:
        print()
    else:
        print(alist[-1])
        f3(alist[:-1])
I know it works well, but I don't know the difference between
return f3(alist[:-1])
and
f3(alist[:-1])
Actually, both work.
My inputs are like these:
f3([1,2,3])
f3([])
f3([3,2,1])
There is a difference between the two, although it isn't noticeable in this program. Look at the following example, where all I am doing is passing a value as an argument and incrementing it, so that the function returns the value once it hits 10 or greater:
from sys import exit

a = 0

def func(a):
    a += 1
    if a >= 10:
        return a
        exit(1)
    else:
        # Modifications are made to the following line
        return func(a)

g = func(3)
print(g)
Here the output is 10.
Now, if I rewrite the code the second way, without the return keyword:
from sys import exit

a = 0

def func(a):
    a += 1
    if a >= 10:
        return a
        exit(1)
    else:
        # Modifications are made to the following line
        func(a)

g = func(3)
print(g)
The output is "None".
This is because the second version never returns the recursive call's value to the caller.
In short, return passes a value back up the chain of calls so the caller can handle it; without it, the function simply runs again and its result is discarded.
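Applied to f3 from the question: the two versions behave identically only because f3 prints as a side effect and its return value (None) is never used. Here is a minimal sketch where the value is consumed, so return becomes mandatory (f3_values is my name, not from the question):

def f3_values(alist):
    # builds and returns the reversed list instead of printing it
    if not alist:
        return []
    # without `return` here, the recursive result would be discarded
    # and the function would return None
    return [alist[-1]] + f3_values(alist[:-1])

print(f3_values([1, 2, 3]))  # [3, 2, 1]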
I have downloaded a TensorFlow GraphDef that implements a VGG16 ConvNet, which I use like this:
Pl['images'] = tf.placeholder(tf.float32,
                              [None, 448, 448, 3],
                              name="images")  # batch x width x height x channels

with open("tensorflow-vgg16/vgg16.tfmodel", mode='rb') as f:
    fileContent = f.read()

graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)

tf.import_graph_def(graph_def, input_map={"images": Pl['images']})
Besides, I have image features that have the same shape as the output of "import/pool5/".
How can I tell my graph that I don't want to use its input "images", but the tensor "import/pool5/" as input instead?
Thanks!
EDIT
OK I realize I haven't been very clear. Here is the situation:
I am trying to use this implementation of ROI pooling, using a pre-trained VGG16, which I have in the GraphDef format. So here is what I do:
First of all, I load the model:
tf.reset_default_graph()
with open("tensorflow-vgg16/vgg16.tfmodel", mode='rb') as f:
    fileContent = f.read()

graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
graph = tf.get_default_graph()
Then, I create my placeholders
images = tf.placeholder(tf.float32,
                        [None, 448, 448, 3],
                        name="images")  # batch x width x height x channels
boxes = tf.placeholder(tf.float32,
                       [None, 5],  # 5 = [batch_id, x1, y1, x2, y2]
                       name="boxes")
And I define the output of the first part of the graph to be conv5_3/Relu
tf.import_graph_def(graph_def, input_map={'images': images})
out_tensor = graph.get_tensor_by_name("import/conv5_3/Relu:0")
So out_tensor has shape [None, 14, 14, 512].
Then, I do the ROI pooling:
[out_pool, argmax] = module.roi_pool(out_tensor,
                                     boxes,
                                     7, 7, 1.0 / 1)
out_pool then has shape N_boxes_in_batch x 7 x 7 x 512, which matches pool5. I would then like to feed out_pool as input to the op that comes just after pool5, so it would look like:
tf.import_graph_def(graph.as_graph_def(),
                    input_map={'import/pool5': out_pool})
But it doesn't work; I get this error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-89-527398d7344b> in <module>()
      5
      6 tf.import_graph_def(graph.as_graph_def(),
----> 7                     input_map={'import/pool5':out_pool})
      8
      9 final_out = graph.get_tensor_by_name("import/Relu_1:0")

/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict)
    333     # NOTE(mrry): If the graph contains a cycle, the full shape information
    334     # may not be available for this op's inputs.
--> 335     ops.set_shapes_for_outputs(op)
    336
    337     # Apply device functions for this op.

/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
   1610     raise RuntimeError("No shape function registered for standard op: %s"
   1611                        % op.type)
-> 1612   shapes = shape_func(op)
   1613   if len(op.outputs) != len(shapes):
   1614     raise RuntimeError(

/home/hbenyounes/vqa/roi_pooling_op_grad.py in _roi_pool_shape(op)
     13   channels = dims_data[3]
     14   print(op.inputs[1].name, op.inputs[1].get_shape())
---> 15   dims_rois = op.inputs[1].get_shape().as_list()
     16   num_rois = dims_rois[0]
     17

/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py in as_list(self)
    745       A list of integers or None for each dimension.
    746     """
--> 747     return [dim.value for dim in self._dims]
    748
    749   def as_proto(self):

TypeError: 'NoneType' object is not iterable
Any clue?
It is usually very convenient to use tf.train.export_meta_graph to store the whole MetaGraph. Then, upon restoring, you can use tf.train.import_meta_graph, because it turns out that it passes all additional arguments down to the underlying import_scoped_meta_graph, which has the input_map argument and uses it when it gets to its own invocation of import_graph_def.
It is not documented and took me way too much time to find, but it works!
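A rough sketch of that approach (TF 1.x; the file name and tensor names below are placeholders of mine, not from the question):

import tensorflow as tf

# ... build or import the original VGG16 graph here ...
tf.train.export_meta_graph(filename='vgg16.meta')  # stores the whole MetaGraph

tf.reset_default_graph()
new_pool5 = tf.placeholder(tf.float32, [None, 7, 7, 512], name='new_pool5')

# import_meta_graph forwards input_map down to import_graph_def,
# so the restored graph reads from new_pool5 instead of the original pool5
tf.train.import_meta_graph('vgg16.meta', input_map={'import/pool5:0': new_pool5})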
What I would do is something along these lines:

- First, retrieve the names of the tensors representing the weights and biases of the 3 fully connected layers coming after pool5 in VGG16. To do that, I would inspect [n.name for n in graph.as_graph_def().node]. (They probably look something like import/local1/weight:0, import/local1/bias:0, etc.)

- Put them in a Python list:

weights_names = ["import/local1/weight:0", "import/local2/weight:0", "import/local3/weight:0"]
biases_names = ["import/local1/bias:0", "import/local2/bias:0", "import/local3/bias:0"]

- Define a function that looks something like:
def pool5_tofcX(input_tensor, layer_number=3):
    flatten = tf.reshape(input_tensor, (-1, 7 * 7 * 512))
    tmp = flatten
    for i in range(layer_number):
        tmp = tf.matmul(tmp, graph.get_tensor_by_name(weights_names[i]))
        tmp = tf.nn.bias_add(tmp, graph.get_tensor_by_name(biases_names[i]))
        tmp = tf.nn.relu(tmp)
    return tmp
Then define the tensor using the function:
wanted_output=pool5_tofcX(out_pool)
Then you are done!
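(For reference: in the standard VGG16 layout the three fully connected layers after pool5 have output sizes 4096, 4096 and 1000, so, assuming the imported graph follows it, wanted_output should come out with shape [N_boxes, 1000].)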
Jonan Georgiev provided an excellent answer here. The same approach was also described with little fanfare at the end of this git issue: https://github.com/tensorflow/tensorflow/issues/3389
Below is a copy/paste runnable example of using this approach to switch out a placeholder for a tf.data.Dataset get_next tensor.
import tensorflow as tf
my_placeholder = tf.placeholder(dtype=tf.float32, shape=1, name='my_placeholder')
my_op = tf.square(my_placeholder, name='my_op')
# Save the graph to memory
graph_def = tf.get_default_graph().as_graph_def()
print('----- my_op before any remapping -----')
print([n for n in graph_def.node if n.name == 'my_op'])
tf.reset_default_graph()
ds = tf.data.Dataset.from_tensors(1.0)
next_tensor = tf.data.make_one_shot_iterator(ds).get_next(name='my_next_tensor')
# Restore the graph with a custom input mapping
tf.graph_util.import_graph_def(graph_def, input_map={'my_placeholder': next_tensor}, name='')
print('----- my_op after remapping -----')
print([n for n in tf.get_default_graph().as_graph_def().node if n.name == 'my_op'])
Output, where we can clearly see that the input to the square operation has changed:
----- my_op before any remapping -----
[name: "my_op"
op: "Square"
input: "my_placeholder"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
----- my_op after remapping -----
[name: "my_op"
op: "Square"
input: "my_next_tensor"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
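As a quick sanity check (my addition, not part of the original snippet), you can evaluate my_op in a session; it now squares the dataset element instead of a fed placeholder:

with tf.Session() as sess:
    print(sess.run('my_op:0'))  # 1.0 -- the dataset value squared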