Files as input and output in PlantUML - DOT

I am trying to describe the following flow:
The activity consists of two binaries. The first one takes one file as input and generates several (let's say two). These two files, plus another one from the environment, are fed to a second binary, which generates an output file.
I would like to use PlantUML to describe this, but the documentation doesn't really help - it doesn't cover the inputs/outputs of activities.
I can draw files with file myFile, but I didn't manage to link them to boxes. Should I rather use a use case diagram or an activity diagram for this? Can someone show me how to draw an arrow from a file to a (binary)?
For now, I have
@startuml
file myFile
(firstBinary)
@enduml
which doesn't really do what I want.

Should I rather use a use case diagram or an activity diagram for this?
The closest diagram associated with what you are trying to depict would be a process flow diagram with work product/artifact dependencies. In essence, your binaries are processes that depend on artifacts (files) and create new ones. However, not everything we want to describe fits neatly into a particular diagram type, nor should it have to.
Since PlantUML uses GraphViz to render diagrams, you can always use the DOT language to specify these relationships directly. For example,
@startuml
digraph a {
  InFile1 [shape=note]
  Binary1 [shape=ellipse]
  TmpFile1 [shape=note]
  TmpFile2 [shape=note]
  TmpFile3 [shape=note]
  Binary2 [shape=ellipse]
  EnvFile [shape=note]
  OutFile [shape=note]
  InFile1 -> Binary1
  Binary1 -> TmpFile1
  Binary1 -> TmpFile2
  Binary1 -> TmpFile3
  TmpFile1 -> Binary2
  TmpFile2 -> Binary2
  TmpFile3 -> Binary2
  EnvFile -> Binary2
  Binary2 -> OutFile
}
@enduml
would result in the following diagram.
DOT is no more complex than PlantUML's language, though when diagrams get large, a good understanding of it is certainly a benefit. You can find more information on the DOT language at Graphviz's documentation site.

Related

Tensorflow: How to convert .meta, .data and .index model files into one graph.pb file

In TensorFlow, training from scratch produced the following 6 files:
events.out.tfevents.1503494436.06L7-BRM738
model.ckpt-22480.meta
checkpoint
model.ckpt-22480.data-00000-of-00001
model.ckpt-22480.index
graph.pbtxt
I would like to convert them (or only the needed ones) into one file, graph.pb, to be able to transfer it to my Android application.
I tried the freeze_graph.py script, but it already requires an input.pb file as input, which I do not have (I only have the 6 files mentioned above). How do I proceed to get this single frozen_graph.pb file? I saw several threads, but none of them worked for me.
You can use this simple script to do that. But you must specify the names of the output nodes.
import tensorflow as tf

meta_path = 'model.ckpt-22480.meta'  # Your .meta file
output_node_names = ['output']       # Output node names (graph node names, without the ':0' tensor suffix)

with tf.Session() as sess:
    # Restore the graph
    saver = tf.train.import_meta_graph(meta_path)

    # Load weights (latest_checkpoint expects the directory that contains the checkpoint files)
    saver.restore(sess, tf.train.latest_checkpoint('path/to/checkpoint/dir'))

    # Freeze the graph
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names)

    # Save the frozen graph
    with open('output_graph.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
If you don't know the name of the output node or nodes, there are two ways:
You can explore the graph and find the names with Netron or with the console summarize_graph utility.
You can use all the nodes as output ones, as shown below.
output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
(Note that you have to put this line just before the convert_variables_to_constants call.)
But I think this is an unusual situation, because if you don't know the output node, you cannot actually use the graph.
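If you want to narrow that list down rather than exporting every node, a small sketch along these lines can help (assuming TensorFlow 1.x; the heuristic of keeping only nodes that nothing else consumes is my own addition, not part of the original answer):
import tensorflow as tf

# Restore only the graph structure from the .meta file
tf.train.import_meta_graph('model.ckpt-22480.meta')
graph_def = tf.get_default_graph().as_graph_def()

# Collect every node that is used as an input by some other node
consumed = set()
for node in graph_def.node:
    for inp in node.input:
        # Strip the '^' control-dependency prefix and the ':N' output index
        consumed.add(inp.lstrip('^').split(':')[0])

# Nodes that nothing consumes are the likely output nodes
for node in graph_def.node:
    if node.name not in consumed:
        print(node.name, node.op)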
As it may be helpful for others, I am also answering here, following up on the answer on GitHub ;-).
I think you can try something like this (with the freeze_graph script in tensorflow/python/tools):
python freeze_graph.py \
  --input_graph=/path/to/graph.pbtxt \
  --input_checkpoint=/path/to/model.ckpt-22480 \
  --input_binary=false \
  --output_graph=/path/to/frozen_graph.pb \
  --output_node_names="the nodes that you want to output, e.g. InceptionV3/Predictions/Reshape_1 for Inception V3"
The important flag here is --input_binary=false, as the file graph.pbtxt is in text format. I think it corresponds to the required graph.pb, which is its binary-format equivalent.
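If you ever need the binary form itself, a minimal sketch (assuming TensorFlow 1.x; the file names are placeholders) that converts the text-format graph.pbtxt into a binary graph.pb would be:
import tensorflow as tf
from google.protobuf import text_format

# Parse the text-format GraphDef
graph_def = tf.GraphDef()
with open('graph.pbtxt') as f:
    text_format.Merge(f.read(), graph_def)

# Write the same graph back out in binary format
tf.train.write_graph(graph_def, '.', 'graph.pb', as_text=False)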
Concerning output_node_names, that is still really confusing for me as I have some problems with this part, but you can use the summarize_graph tool in TensorFlow, which can take either the .pb or the .pbtxt file as input.
Regards,
Steph
I tried the freeze_graph.py script, but the output_node_names parameter was totally confusing to me, and the job failed.
So I tried the other one: export_inference_graph.py.
And it worked as expected!
python -u /tfPath/models/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/your/config/path/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix=/your/checkpoint/path/model.ckpt-50000 \
--output_directory=/output/path
The TensorFlow models repository I used is from here:
https://github.com/tensorflow/models
First, use the following code to generate the graph.pb file.
with tf.Session() as sess:
    # Restore the graph
    _ = tf.train.import_meta_graph(args.input)
    # Save the graph file
    g = sess.graph
    gdef = g.as_graph_def()
    tf.train.write_graph(gdef, ".", args.output, True)
Then, use summarize_graph to get the output node names.
Finally, use
python freeze_graph.py \
  --input_graph=/path/to/graph.pbtxt \
  --input_checkpoint=/path/to/model.ckpt-22480 \
  --input_binary=false \
  --output_graph=/path/to/frozen_graph.pb \
  --output_node_names="the nodes that you want to output, e.g. InceptionV3/Predictions/Reshape_1 for Inception V3"
to generate the frozen graph.

[LaTeX] - Create a weighted graph

I need to create a weighted graph in LaTeX.
I have 6k+ vertices.
I found the manual, but it is in French.
I found the script below, but there is something that I don't understand.
Is there a simple way that allows me to declare vertices/edges without writing the position of each vertex?
In this script, what do these lines mean?
\Vertex{P}
\NOEA(P){B} \SOEA(P){M} \NOEA(B){D}
\SOEA(B){C} \SOEA(C){L}
Full script
\documentclass[11pt]{scrartcl}
\usepackage{tkz-graph}
\begin{document}
\begin{tikzpicture}
\SetUpEdge[lw = 1.5pt,
color = orange,
labelcolor = white]
\GraphInit[vstyle=Normal]
\SetGraphUnit{3}
\tikzset{VertexStyle/.append style={fill}}
\Vertex{P}
\NOEA(P){B} \SOEA(P){M} \NOEA(B){D}
\SOEA(B){C} \SOEA(C){L}
\tikzset{EdgeStyle/.style={->}}
\Edge[label=$3$](C)(B)
\Edge[label=$10$](D)(B)
\Edge[label=$10$](L)(M)
\Edge[label=$10$](B)(P)
\tikzset{EdgeStyle/.style={<->}}
\Edge[label=$4$](P)(M)
\Edge[label=$9$](C)(M)
\Edge[label=$4$](C)(L)
\Edge[label=$5$](C)(D)
\Edge[label=$10$](B)(M)
\tikzset{EdgeStyle/.style={<->,relative=false,in=0,out=60}}
\Edge[label=$11$](L)(D)
\end{tikzpicture}
\end{document}
With such a huge number of vertices, I would recommend, as an alternative, using Graphviz DOT instead of the LaTeX package you're describing above. Note also that there is a LaTeX package (dot2tex) to include your Graphviz DOT code in LaTeX, wrapped in a PGF/TikZ environment, to create a neat vector graphics image (however, for a huge graph I would encourage you to render it externally using Graphviz and simply include the image in your .tex document).
The syntax of Graphviz DOT is really simple and can be generated quite easily programmatically (as I assume you won't write your 6k vertices manually...).
As an example, the following branch-and-price tree was generated programmatically using Graphviz DOT.
digraph BST {
  node [color = "black", shape = "point"];
  edge [arrowsize = "0.1"];
  1 -> 2;
  2 [color = "blue"];
  1 -> 3;
  3 [color = "blue"];
  1 [color = "black"];
  3 -> 4;
  4 [color = "blue"];
  3 -> 5;
  ...
}
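Since the graph is going to be generated programmatically anyway, a minimal sketch in Python (just an illustration; the edge list and file names are placeholders I made up) that writes such a weighted DOT file could look like this:
# Write a weighted directed graph in DOT format; render it externally with,
# e.g., "dot -Tpdf graph.dot -o graph.pdf"
edges = [("P", "B", 10), ("P", "M", 4), ("C", "B", 3)]  # (tail, head, weight) placeholders

with open("graph.dot", "w") as f:
    f.write("digraph G {\n")
    f.write('  node [shape = "circle"];\n')
    for tail, head, weight in edges:
        f.write('  "%s" -> "%s" [label = "%d"];\n' % (tail, head, weight))
    f.write("}\n")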
For details, see the Graphviz DOT documentation, and for an example of an autogenerated graph for a small instance of the shortest path problem, see:
Visualizing Undirected Graph That's Too Large for GraphViz?
Find a monotonic shortest path in a graph in O(E logV)

Issue while ingesting a Titan graph into Faunus

I have installed both Titan and Faunus, and each seems to be working properly (titan-0.4.4 & faunus-0.4.4).
However, after ingesting a sizable graph in Titan and trying to import it in Faunus via
FaunusFactory.open( )
I am experiencing issues. To be more precise, I do seem to get a Faunus graph from the call FaunusFactory.open(),
faunusgraph[titanhbaseinputformat->titanhbaseoutputformat]
but then, even asking a simple
g.v(10)
I do get this error:
Task Id : attempt_201407181049_0009_m_000000_0, Status : FAILED
com.thinkaurelius.titan.core.TitanException: Exception in Titan
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.getAdminInterface(HBaseStoreManager.java:380)
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.ensureColumnFamilyExists(HBaseStoreManager.java:275)
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.openDatabase(HBaseStoreManager.java:228)
My property file is taken straight from the Faunus page for Titan-HBase input, except, of course, changing the URL of the Hadoop cluster:
faunus.graph.input.format=com.thinkaurelius.faunus.formats.titan.hbase.TitanHBaseInputFormat
faunus.graph.input.titan.storage.backend=hbase
faunus.graph.input.titan.storage.hostname= my IP
faunus.graph.input.titan.storage.port=2181
faunus.graph.input.titan.storage.tablename=titan
faunus.graph.output.format=com.thinkaurelius.faunus.formats.titan.hbase.TitanHBaseOutputFormat
faunus.graph.output.titan.storage.backend=hbase
faunus.graph.output.titan.storage.hostname= IP of my host
faunus.graph.output.titan.storage.port=2181
faunus.graph.output.titan.storage.tablename=titan
faunus.graph.output.titan.storage.batch-loading=true
faunus.output.location=output1
zookeeper.znode.parent=/hbase-unsecure
titan.graph.output.ids.block-size=100000
Can anyone help?
ADDENDUM:
To address the comment below, here is some context: as I have mentioned, I have a graph in Titan and can perform basic Gremlin queries on it.
However, I need to run a global Gremlin query which, due to the size of the graph, needs Faunus and its underlying MapReduce capabilities. Hence the need to import it. The error I get doesn't look to me as if it points to some inconsistency in the graph itself.
I'm not sure that you have your "flow" of Faunus right. If your end result is to do a global query of the graph, then consider this approach:
pull your graph to a sequence file
issue your global query over the sequence file
More specifically, create hbase-seq.properties:
# input graph parameters
faunus.graph.input.format=com.thinkaurelius.faunus.formats.titan.hbase.TitanHBaseInputFormat
faunus.graph.input.titan.storage.backend=hbase
faunus.graph.input.titan.storage.hostname=localhost
faunus.graph.input.titan.storage.port=2181
faunus.graph.input.titan.storage.tablename=titan
# hbase.mapreduce.scan.cachedrows=1000
# output data (graph or statistic) parameters
faunus.graph.output.format=org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
faunus.output.location=snapshot
faunus.output.location.overwrite=true
In Faunus, do:
g = FaunusFactory.open('hbase-seq.properties')
g._()
That will read the graph from HBase and write it to a sequence file in HDFS. Next, create seq-noop.properties with these contents:
# input graph parameters
faunus.graph.input.format=org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
faunus.input.location=snapshot/job-0
# output data parameters
faunus.graph.output.format=com.thinkaurelius.faunus.formats.noop.NoOpOutputFormat
faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
faunus.output.location=analysis
faunus.output.location.overwrite=true
The above configuration will read your sequence file from the previous step without re-writing the graph (that's what NoOpOutputFormat is for). Now, in Faunus, do:
g = FaunusFactory.open('seq-noop.properties')
g.V.sideEffect('{it.degree=it.bothE.count()}').degree.groupCount()
This will compute a degree distribution, writing the results in HDFS to the 'analysis' directory. Obviously, you can run whatever Faunus-flavored Gremlin you want here - I just wanted to provide an example. I think this is a pretty standard "flow" or pattern for using Faunus from a graph analysis perspective.

Faunus graph not printing nodes without using side effect from gremlin shell

I'm trying to print the nodes of a graph in Faunus (v0.4.0) that have any edges (incoming or outgoing). From the Gremlin shell, I tried:
g = FaunusFactory.open('faunus.properties')
g.V.filter("{it.bothE.hasNext()}").sideEffect("{println it}")
When I do this, I get a printout of all the nodes, as I expected.
But without the println, I do not.
According to "How do I write a for loop in gremlin?", the Gremlin terminal should print this info for me, but it does not seem to.
Is there something specific I need to do to enable the printing from the console?
Faunus and Gremlin are close to each other in terms of purpose and functionality, but they are not identical. The filter isn't producing a side effect that would be written to HDFS. If you did:
g.V.filter("{it.bothE.hasNext()}").id
You could then view the list of ids matching that filter with something like:
hdfs.head('output',100)
to see the first 100 lines of the output. If you need more than just the element identifier, you could do a transform to get some of the element properties in there as well. You might find these hdfs helper tips helpful.

Command completion in Mathematica: suggest rules/options

In the current version of Mathematica, these keyboard shortcuts are quite handy:
Ctrl+K completes the current command:
GraphPl -> press Ctrl+K -> GraphPlot
Ctrl+Shift+K completes the current command and adds argument placeholders, which can be replaced with actual values using the Tab key:
GraphPl -> press Ctrl+Shift+K -> GraphPlot[{vi1->vj1,vi2->vj2,...}]
However, I couldn't find any keyboard shortcut to show the associated settings/options.
For instance, say I need to plot a graph with different layouts; I know I need to set Method to one of these possible settings:
"CircularEmbedding"
"RandomEmbedding"
"HighDimensionalEmbedding"
"RadialDrawing"
"SpringEmbedding"
"SpringElectricalEmbedding"
Two things:
First, how do I autocomplete these options? Is there a shortcut key for this?
GraphPlot[sg, Method -> <what keyboard shortcut to display all possible options>]
Second, how do I generate the following PopupMenu list programmatically?
list={
"CircularEmbedding"
, "RandomEmbedding"
, "HighDimensionalEmbedding"
, "RadialDrawing"
, "SpringEmbedding"
, "SpringElectricalEmbedding"
}
Manipulate[GraphPlot[sg, Method -> m], {m, list}, ControlType -> PopupMenu]
Is there any way to introspect Mathematica functions and access method metadata, similar to the way it can be done in other programming languages, like using reflection in Java?
I don't believe there is any included function to auto-complete a string. I also cannot recall a way to view all valid settings for a particular option, other than searching the help files.
You can expedite input with the Option Inspector settings InputAliases and InputAutoReplacements, allowing entry via Esc txt Esc or txt Space.
Draft: work in progress...
This is the nearest I could get so far, though it needs a lot of enhancement. I am adding it as it is, hoping to get some ideas from the community. If anyone could help enhance it further or suggest any ideas, it would really be appreciated.
ruleOfRule[list_] := Map[Rule[#, #] &, list];
Manipulate[
 GraphPlot @@ {{"A" -> "B", "B" -> "C", "C" -> "A"}, options},
 {{options, {}}, ruleOfRule[Options[GraphPlot]]},
 ControlType -> CheckboxBar]
