Hi, does anybody know how to read a text file and put its contents between the @startuml and @enduml markers? The goal is to make printing the sequence diagram more automatic inside a Jupyter notebook, with something like:
@startuml
read(~/Doc/trace.txt)
@enduml
Thanks in advance for any help.
For me, having the file main.pu:
@startuml
!include file1.puml
@enduml
and file1.puml containing (with or without its own @startuml / @enduml, it makes no difference):
C -> C : stuff3
D -> D : stuff4
and running:
java -Djava.awt.headless=true -jar plantuml.jar main.pu
Results in a rendered diagram containing both messages (stuff3 and stuff4).
I used version:
PlantUML version 1.2020.17beta5
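If the goal is to drive this from a Jupyter notebook, here is a minimal sketch of the wrapping step (assuming plantuml.jar sits next to the notebook and ~/Doc/trace.txt holds plain sequence-diagram statements; both paths are assumptions):

import subprocess
from pathlib import Path

trace = Path.home() / "Doc" / "trace.txt"   # hypothetical trace file
Path("main.pu").write_text("@startuml\n" + trace.read_text() + "\n@enduml\n")

# Same invocation as above; writes main.png next to main.pu
subprocess.run(
    ["java", "-Djava.awt.headless=true", "-jar", "plantuml.jar", "main.pu"],
    check=True,
)
# In the notebook: from IPython.display import Image; Image("main.png")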
I have written a Snakemake workflow to run bwa_map. The FASTQ files are paired-end, with differing folder and sample names. It fails with the error that 'SAMPLES' is not defined. Please help.
Error:
$ snakemake --snakefile rnaseq.smk mapped_reads/EZ-123-B_IGO_08138_J_2_S101_R2_001.bam -np
NameError in line 2 of /Users/singhh5/Desktop/tutorial/rnaseq.smk:
name 'SAMPLES' is not defined
File "/Users/singhh5/Desktop/tutorial/rnaseq.smk", line 2, in <module>
# Sample directory layout
fastq/
    Sample_EZ-123-B_IGO_08138_J_2/
        EZ-123-B_IGO_08138_J_2_S101_R1_001.fastq.gz
        EZ-123-B_IGO_08138_J_2_S101_R2_001.fastq.gz
    Sample_EZ-123-B_IGO_08138_J_4/
        EZ-124-B_IGO_08138_J_4_S29_R1_001.fastq.gz
        EZ-124-B_IGO_08138_J_4_S29_R2_001.fastq.gz
# My code
expand("~/Desktop/{sample}/{rep}.fastq.gz", sample=SAMPLES)

rule bwa_map:
    input:
        "data/genome.fa",
        "fastq/{sample}/{rep}.fastq"
    conda:
        "env.yaml"
    output:
        "mapped_reads/{rep}.bam"
    threads: 8
    shell:
        "bwa mem {input} | samtools view -Sb -> {output}"
The specific error you are seeing is because the variable SAMPLES isn't set to anything before you use it in expand.
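A minimal way to define it (a sketch, assuming the fastq/ layout shown above; glob_wildcards is Snakemake's built-in for inferring wildcard values from existing files) is a line like this near the top of rnaseq.smk:

# Hypothetical pattern; adapt it to the real directory layout
SAMPLES, REPS = glob_wildcards("fastq/Sample_{sample}/{rep}.fastq.gz")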
Some other issues you may run into:
Output file is missing the {sample} wildcard.
The value of threads isn't passed into bwa or samtools
You should place your expand into the input directive of the first rule in your Snakefile, typically called all, to properly request the files from bwa_map (see the sketch after this list).
You aren't pairing your reads (R1 and R2) in bwa.
You should look around Stack Overflow or some GitHub projects for similar rules to give you inspiration on how to do this mapping.
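Putting those points together, here is a hedged sketch of the two rules (the R1/R2 filename pattern is an assumption about your naming, so adapt it to the real files):

rule all:
    input:
        expand("mapped_reads/{sample}.bam", sample=SAMPLES)

rule bwa_map:
    input:
        ref="data/genome.fa",
        # assumed naming; the real files also carry lane tags like _S101_
        r1="fastq/Sample_{sample}/{sample}_R1_001.fastq.gz",
        r2="fastq/Sample_{sample}/{sample}_R2_001.fastq.gz",
    conda:
        "env.yaml"
    output:
        "mapped_reads/{sample}.bam"
    threads: 8
    shell:
        "bwa mem -t {threads} {input.ref} {input.r1} {input.r2} "
        "| samtools view -Sb - > {output}"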
Is there any way to obtain a Jupyter notebook's entire output history?
IPython keeps two objects for this: In and Out. Here's how they work.
First, run some code:
import math
math.sin(2) # now replace with math.cos(2) and run again
Its output:
0.9092974268256817 # when replaced with cos(2), the output is -0.4161468365471424
Now to check input history:
In [4]: print(In)
['', 'import math', 'math.sin(2)', 'math.cos(2)', 'print(In)']
Now to check output history:
In [5]: Out
Out[5]: {2: 0.9092974268256817, 3: -0.4161468365471424}
Note: this will not recover history after the notebook is closed and freshly started; it only works within the current session.
Check this link for more explanation: https://jakevdp.github.io/PythonDataScienceHandbook/01.04-input-output-history.html
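One caveat worth adding: IPython does persist input history to a SQLite database on disk, so past inputs (though not their outputs) can be recovered after a restart. A sketch, assuming the default IPython profile:

%history -g   # IPython magic: print input history from all past sessions

# Or programmatically:
from IPython.core.history import HistoryAccessor
hist = HistoryAccessor()   # reads profile_default/history.sqlite
for session, lineno, source in hist.search("*", n=5):
    print(session, lineno, source)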
For example, if you do something like In [1]: 1+2, you get Out[1]: 3.
So, you can do this in the next line:
In[2]: print(In[1])
And likewise for the output:
print(Out[1])
Check this - Output History
In TensorFlow, training from scratch produced the following six files:
events.out.tfevents.1503494436.06L7-BRM738
model.ckpt-22480.meta
checkpoint
model.ckpt-22480.data-00000-of-00001
model.ckpt-22480.index
graph.pbtxt
I would like to convert them (or only the needed ones) into one file, graph.pb, to be able to transfer it to my Android application.
I tried the freeze_graph.py script, but it requires an input .pb file, which I do not have (I only have the six files mentioned above). How do I proceed to get this one frozen_graph.pb file? I saw several threads, but none worked for me.
You can use this simple script to do that, but you must specify the names of the output nodes.
import tensorflow as tf

meta_path = 'model.ckpt-22480.meta'  # Your .meta file
output_node_names = ['output']       # Output node names (no ':0' tensor suffix)

with tf.Session() as sess:
    # Restore the graph structure from the .meta file
    saver = tf.train.import_meta_graph(meta_path)

    # Load the weights from the latest checkpoint in that directory
    saver.restore(sess, tf.train.latest_checkpoint('path/of/your/checkpoint/dir'))

    # Freeze the graph: replace variables with constants
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
If you don't know the name of the output node or nodes, there are two ways:
You can explore the graph and find the name with Netron or with the console summarize_graph utility.
You can use all the nodes as output ones as shown below.
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
(Note that you have to put this line just before the convert_variables_to_constants call.)
But I think that's an unusual situation, because if you don't know the output node, you cannot actually use the graph.
As it may be helpful for others, I am also answering here after the answer on GitHub ;-).
I think you can try something like this (with the freeze_graph script in tensorflow/python/tools):
python freeze_graph.py \
  --input_graph=/path/to/graph.pbtxt \
  --input_checkpoint=/path/to/model.ckpt-22480 \
  --input_binary=false \
  --output_graph=/path/to/frozen_graph.pb \
  --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3"
The important flag here is --input_binary=false, as the file graph.pbtxt is in text format. I think it corresponds to the required graph.pb, which is the equivalent in binary format.
Concerning output_node_names, that's really confusing for me as I still have some problems with this part, but you can use the summarize_graph script in TensorFlow, which can take a .pb or a .pbtxt file as input.
Regards,
Steph
I tried the freeze_graph.py script, but its output_node_names parameter was totally confusing and the job failed.
So I tried the other one: export_inference_graph.py.
And it worked as expected!
python -u /tfPath/models/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/your/config/path/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix=/your/checkpoint/path/model.ckpt-50000 \
--output_directory=/output/path
The tensorflow installation package I used is from here:
https://github.com/tensorflow/models
First, use the following code to generate the graph.pbtxt file:
import tensorflow as tf

with tf.Session() as sess:
    # Restore the graph from the .meta file (args.input, e.g. from argparse)
    _ = tf.train.import_meta_graph(args.input)

    # Save the graph definition as a text protobuf (as_text=True)
    g = sess.graph
    gdef = g.as_graph_def()
    tf.train.write_graph(gdef, ".", args.output, True)
Then, use summarize_graph to get the output node names.
Finally, use
python freeze_graph.py \
  --input_graph=/path/to/graph.pbtxt \
  --input_checkpoint=/path/to/model.ckpt-22480 \
  --input_binary=false \
  --output_graph=/path/to/frozen_graph.pb \
  --output_node_names="the nodes that you want to output e.g. InceptionV3/Predictions/Reshape_1 for Inception V3"
to generate the frozen graph.
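Whichever route you take, a minimal sanity check (TF 1.x, as in the answers above; the path is a placeholder) is to load the frozen graph back and list some node names:

import tensorflow as tf

with tf.gfile.GFile('/path/to/frozen_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name='')
    # The output node you froze against should appear among these names
    print([n.name for n in g.as_graph_def().node][:10])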
Is there any tool that lets me input a list of tuples and shows me a graph representing them?
Example:
(root,a), (root,b), (b,c), (b,d)
This would be a tree that looks like this:
    root
    /  \
   a    b
       / \
      c   d
I need this to verify that the topology of a network I created in Mininet really looks the way I want. It has about 1000 links, and it is not possible to check that manually without a visualisation.
It does not matter if it is an online tool, a Python script, a command-line tool or something else.
I found a solution. Gephi works fine!
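For a scriptable route, here is a sketch with networkx and matplotlib (my suggestion, not what the poster used) that draws the example tuples:

import networkx as nx
import matplotlib.pyplot as plt

edges = [("root", "a"), ("root", "b"), ("b", "c"), ("b", "d")]
G = nx.DiGraph(edges)   # one directed edge per (parent, child) tuple

nx.draw(G, with_labels=True, node_color="lightblue", arrows=True)
plt.savefig("topology.png")   # or plt.show() in an interactive session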
I found this online and verified that the command:
echo "\033]0;Name\007"
changes my term name to "Name". I'm just wondering why and how this happens, so that I can tweak it and use it in my scripts accordingly.
Thanks for the help in advance.
Azeem
Found this (\033 is the sequence for ESC) :
ESC ] 0 ; txt ST Set icon name and window title to txt.
In the man page : http://man7.org/linux/man-pages/man4/console_codes.4.html
So, the Linux console implements:
a large subset of the VT102 and ECMA-48/ISO 6429/ANSI X3.64 terminal controls
However, this method does not seem to be portable, because it depends on the implementation of the terminal.
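For use in scripts, a minimal sketch (assuming an xterm-compatible terminal; the portability caveat above still applies):

import sys

def set_terminal_title(title):
    # OSC sequence ESC ] 0 ; txt BEL: set icon name and window title to txt
    sys.stdout.write(f"\033]0;{title}\007")
    sys.stdout.flush()

set_terminal_title("Name")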