How to use the overwrite function in GRASS GIS?

I was trying to replicate this exercise by mikerspencer:
https://www.r-bloggers.com/spatial-networks-case-study-st-james-centre-edinburgh-13/
I added the data to QGIS and then opened the GRASS tools to do the first step:
# connect postcodes to streets as layer 2
v.net --overwrite input=roads points=postcodes output=roads_net1 operation=connect thresh=400 arc_layer=1 node_layer=2
However, I can't find the overwrite option.
Can you help me?
Thanks in advance.
Raquel

If you aim to overwrite the existing file, simply add --overwrite to your line of code.
In your case it is already in the first command:
v.net --overwrite input=roads points=postcodes output=roads_net1 operation=connect thresh=400 arc_layer=1 node_layer=2
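If you are driving GRASS from a script rather than the console, the same flag is available through the GRASS Python scripting API; a minimal sketch, assuming it runs inside a GRASS session where the roads and postcodes vectors already exist in the current mapset:

import grass.script as gs

# Minimal sketch using the GRASS Python scripting API.
# Assumes a running GRASS session with 'roads' and 'postcodes'
# already present in the current mapset.
gs.run_command(
    'v.net',
    input='roads',
    points='postcodes',
    output='roads_net1',
    operation='connect',
    thresh=400,        # snapping threshold (named 'threshold' in newer GRASS releases)
    arc_layer='1',
    node_layer='2',
    overwrite=True,    # equivalent to the --overwrite flag
)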


Colab not recognising an existing directory

I have been trying to run an OpenPose model on Colab but haven't been able to do so because Colab doesn't recognise the directory. I have provided a screenshot of the code in this message; any help or direction will be highly appreciated!
Edit 1: A modification based on the first answer
code:
!cd openpose && ./build/examples/openpose/openpose.bin -image_dir /drive/My\ Drive/research_project/Fall\ Detection/$category/testdata/video$video --render_pose 0 --disable_blending -keypoint_scale 3 --display 0 -write_json /drive/My\ Drive/research_project/Fall\ Detection/$category/jsondata/video$video
output:
Error:
Folder /drive/My Drive/research_project/Fall Detection/Coffee_room/testdata/video0/ does not exist.
I believe you need to remove the '..', as you are already in the '/content' folder from the os.chdir('/content') command.
If that's not it, you are also missing '/research_project' after '/My Drive' in the line before the last.
With the %cd operation you have already moved yourself to [...]/Coffee_room/testdata, so when you then try the os.chdir command, it throws an error. At least I think so; the screenshot doesn't let me copy the code to try to recreate the same situation, so it's a bit hard to tell.
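Before running the binary, it may also help to check the directory from Python; a minimal sketch, assuming the folder names from the error message (note that on Colab, Google Drive is usually mounted under /content/drive, not /drive):

import os

# Path pieces taken from the error message; adjust them to your setup.
base = '/content/drive/My Drive/research_project/Fall Detection'
category = 'Coffee_room'
video = '0'

image_dir = os.path.join(base, category, 'testdata', 'video' + video)
print(image_dir, 'exists?', os.path.isdir(image_dir))  # verify before calling openpose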
Try to put your code in the right format inside the question, like this:
print('Hello, this is my code')

Failed to create wallet for ton with Fift?

Right now I'm trying to create a wallet for TON.
I downloaded and built the Fift interpreter and was trying to create a new wallet with: ./crypto/fift new-wallet.fif
[ 1][t 0][1559491459.312618017][fift-main.cpp:147] Error interpreting standard preamble file `Fift.fif`: cannot locate file `Fift.fif`
Check that correct include path is set by -I or by FIFTPATH environment variable, or disable standard preamble by -n.
My path variable is set, though. Could anyone please help me with this?
First, locate {{lite-client-source-directory}}/crypto/fift
This is not the build directory; it is the directory with the source files (the lite-client that you downloaded). Verify that it contains the Fift.fif file.
If you installed it in the user working directory, it should be:
~/lite-client/crypto/fift/
Now, you should either set the FIFTPATH variable to point to this directory or run fift with the -I option:
export FIFTPATH=~/lite-client/crypto/fift/
./crypto/fift new-wallet.fif
Or
./crypto/fift -I~/lite-client/crypto/fift/ new-walelt.fif
Have you tried ./crypto/fift -I<source-directory>/crypto/fift new-wallet.fif instead of setting the environment variable? Are the Fift.fif and Asm.fif library files inside FIFTPATH?
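A quick way to check that last point; a minimal sketch, assuming FIFTPATH is already exported in the environment the script runs in:

import os
from pathlib import Path

# Check that FIFTPATH points at a directory containing the standard
# preamble libraries Fift.fif and Asm.fif.
fiftpath = os.environ.get('FIFTPATH')
if not fiftpath:
    print('FIFTPATH is not set')
else:
    for lib in ('Fift.fif', 'Asm.fif'):
        print(lib, 'found' if (Path(fiftpath) / lib).is_file() else 'MISSING')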
Make sure you have followed all the instructions written here:
https://test.ton.org/HOWTO.txt
It should work if you follow all the above instructions correctly. If not, it might be a bug. Remember that TON is in a very early beta stage. You can submit an issue here:
https://github.com/copperbits/TON/issues
You can also use this:
cd ~/liteclient-build
crypto/fift -I/root/lite-client/crypto/fift/lib -s /root/lite-client/crypto/smartcont/new-wallet.fif -1 wallet_name
Try this (it worked for me):
export FIFTPATH=~/lite-client/crypto/fift/lib
./crypto/fift new-wallet.fif

Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file

In TensorFlow, training from scratch produced the following six files:
events.out.tfevents.1503494436.06L7-BRM738
model.ckpt-22480.meta
checkpoint
model.ckpt-22480.data-00000-of-00001
model.ckpt-22480.index
graph.pbtxt
I would like to convert them (or only the ones needed) into a single graph.pb file so I can transfer it to my Android application.
I tried the freeze_graph.py script, but it requires the input.pb file as an input, which I do not have (I have only the six files mentioned above). How do I proceed to get this single frozen_graph.pb file? I saw several threads, but none worked for me.
You can use this simple script to do that. But you must specify the names of the output nodes.
import tensorflow as tf

meta_path = 'model.ckpt-22480.meta'  # Your .meta file
output_node_names = ['output']       # Output node names, without the ':0' tensor suffix

with tf.Session() as sess:
    # Restore the graph structure from the .meta file
    saver = tf.train.import_meta_graph(meta_path)

    # Load the weights from the latest checkpoint in the directory
    # that contains your .meta/.data/.index files
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/checkpoint/dir'))

    # Freeze the graph: convert all variables to constants
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
If you don't know the names of the output node or nodes, there are two options:
You can explore the graph and find the names with Netron or with the console summarize_graph utility.
You can use all the nodes as outputs, as shown below.
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
(Note that you have to put this line just before the convert_variables_to_constants call.)
But I think this is an unusual situation, because if you don't know the output node, you cannot actually use the graph.
As it may be helpful to others, I am also answering here after the answer on GitHub ;-).
I think you can try something like this (with the freeze_graph script in tensorflow/python/tools):
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_checkpoint=/path/to/model.ckpt-22480 --input_binary=false --output_graph=/path/to/frozen_graph.pb --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3 "
The important flag here is --input_binary=false, as the file graph.pbtxt is in text format. I think it corresponds to the required graph.pb, which is the equivalent in binary format.
Concerning output_node_names, that is really confusing for me, as I still have some problems with this part, but you can use the summarize_graph script in TensorFlow, which can take a pb or a pbtxt file as input.
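If you ever need to produce the binary equivalent of graph.pbtxt yourself, the conversion is a straightforward protobuf round-trip; a minimal sketch, with placeholder file names:

import tensorflow as tf
from google.protobuf import text_format

# Read the text-format GraphDef and rewrite it in binary format.
# The file names are placeholders; adjust them to your paths.
with open('graph.pbtxt') as f:
    graph_def = text_format.Parse(f.read(), tf.GraphDef())

with open('graph.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())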
Regards,
Steph
I tried the freeze_graph.py script, but the output_node_name parameter was totally confusing and the job failed.
So I tried the other one, export_inference_graph.py, and it worked as expected!
python -u /tfPath/models/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/your/config/path/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix=/your/checkpoint/path/model.ckpt-50000 \
--output_directory=/output/path
The TensorFlow models repository I used is from here:
https://github.com/tensorflow/models
First, use the following code to generate the graph.pb file.
import tensorflow as tf

with tf.Session() as sess:
    # Restore the graph from the .meta file; args.input comes from your
    # own argument parser (e.g. argparse)
    _ = tf.train.import_meta_graph(args.input)

    # Save the graph definition as a text-format file
    g = sess.graph
    gdef = g.as_graph_def()
    tf.train.write_graph(gdef, ".", args.output, True)
Then, use summarize_graph to get the output node names.
Finally, use
python freeze_graph.py --input_graph=/path/to/graph.pbtxt --input_checkpoint=/path/to/model.ckpt-22480 --input_binary=false --output_graph=/path/to/frozen_graph.pb --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3 "
to generate the frozen graph.
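To check that the freeze worked, you can load the resulting file back into a fresh graph; a minimal sketch, assuming the frozen_graph.pb name used above:

import tensorflow as tf

# Load the frozen GraphDef and import it into an empty graph.
with tf.gfile.GFile('frozen_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    print(len(graph.get_operations()), 'ops imported')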

Getting Started with SyntaxNet (start parsing text right away with Parsey McParseface)

I am new to SyntaxNet, and I recently tried to install it step by step from https://github.com/tensorflow/models/blob/master/syntaxnet/README.md#instalation.
After running bazel test it reported "Executed 12 out of 12 tests: 12 tests pass", but when I ran this command:
ubuntu@ubuntu-VirtualBox:~/Downloads/git-2.7.4/models/syntaxnet$
echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh
it gives me this error:
syntaxnet/demo.sh: line 31: bazel-bin/syntaxnet/parser_eval:
No such file or directory
syntaxnet/demo.sh: line 43: bazel-bin/syntaxnet/parser_eval:
No such file or directory
syntaxnet/demo.sh: line 55: bazel-bin/syntaxnet/conll2tree:
No such file or directory
I would really appreciate it if anyone could help me.
Thank you so much.
I had the same issue.
To fix it, modify the demo.sh file, lines 31 and 55.
The locations it points to for finding parser_eval and conll2tree are wrong; at least they were on my system.
Run a search for them: sudo find / -iname 'parser_eval'
For me the location of this file was "/home/jesus/.cache/bazel/_bazel_jesus/afbbfe6033ddfb6168467a72894e5682/syntaxnet/bazel-out/local-opt/bin/syntaxnet/parser_eval"
I then proceeded to point line 31 to this location instead of "bazel-bin/syntaxnet/parser_eval".
Then did the same for line 55 and conll2tree.
Saved the file, and got it running.
Hope it helps
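If you prefer not to edit demo.sh by hand, the same fix can be scripted; a minimal sketch, assuming the machine-specific bazel cache path found above (yours will differ):

from pathlib import Path

# Machine-specific path found via the 'find' command above; adjust it.
found = ('/home/jesus/.cache/bazel/_bazel_jesus/afbbfe6033ddfb6168467a72894e5682'
         '/syntaxnet/bazel-out/local-opt/bin/syntaxnet')

demo = Path('syntaxnet/demo.sh')
text = demo.read_text()
# Point both binaries at their real locations instead of bazel-bin
text = text.replace('bazel-bin/syntaxnet/parser_eval', found + '/parser_eval')
text = text.replace('bazel-bin/syntaxnet/conll2tree', found + '/conll2tree')
demo.write_text(text)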
I had a similar problem; posting in case it might be useful to anyone. If you rename or move the path where SyntaxNet is installed, you'll break half a dozen symbolic links it creates during installation (it uses absolute paths). In that case, you have to recreate the links with the new path.

Output file numbering in graphicsmagick

I'm trying to convert pdf to images using this command:
gm convert ./file.pdf -scene 1 thumbs/thumb%02d.jpg
Although I specify the -scene argument, it does nothing, as I get output files starting from thumb00.jpg, and I need them to start from thumb01.jpg.
I'm using GraphicsMagick 1.3.12.
What am I doing wrong here?
In order to ensure numbered output files, add the +adjoin option, like:
gm convert ./file.pdf -scene 1 +adjoin thumbs/thumb%02d.jpg
This additional requirement was added in GraphicsMagick 1.3.15. It is OK to use the same option with all earlier releases.
There is still no way to specify the starting scene number; this is a known bug.
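Until that bug is fixed, one workaround is to renumber the files after conversion; a minimal sketch, assuming the thumbs/thumbNN.jpg naming from the command above:

import os

# Shift thumb00.jpg, thumb01.jpg, ... up by one so numbering starts at 01.
# Iterate in reverse so a file is never renamed onto one that still exists.
files = sorted(f for f in os.listdir('thumbs')
               if f.startswith('thumb') and f.endswith('.jpg'))
for name in reversed(files):
    n = int(name[5:7])
    os.rename(os.path.join('thumbs', name),
              os.path.join('thumbs', 'thumb%02d.jpg' % (n + 1)))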
