I read that with Frama-C we can generate a PDG.
Which free tools can I use to generate the program dependence graph for C code?
My question is: is there a way for it to generate an SDG (a system dependence graph, i.e. a set of PDGs that models interprocedural dependences)?
Could anybody help me, or give me tips about which tools can generate an SDG?
Thank you
I'm not completely sure that this answers your question, but Frama-C's PDG plug-in does have inter-procedural information, in the form of nodes for parameters and implicit inputs (globals that are read by the callee), as well as for the returned value and output locations (globals that are written). It uses the results of the From plug-in to compute dependencies.
If I understand the PDG's API in Db.Pdg correctly, you should be able to obtain all nodes corresponding to a given call with the Db.Pdg.find_simple_stmt_nodes function.
Related
I'm using igraph in academic research and I need to provide a proper citation for the algorithm used in the components() command. This algorithm returns the connected components of the graph. The command in question is documented here. It's part of the R/CRAN igraph library.
I think the algorithm used is the one below, which seems to be the canonical workhorse algorithm cited on the Wikipedia page for connected components.
Hopcroft, J.; Tarjan, R. (1973), "Algorithm 447: efficient algorithms for graph manipulation", Communications of the ACM, 16 (6): 372–378, doi:10.1145/362248.362272
Does anyone know what algorithm is used?
Note that igraph in R is actually written in C/C++. If you want to dig into the details of how components is implemented, you have to trace back to its C/C++ source code.
Here is a link to the source code for components
https://github.com/igraph/igraph/blob/f9b6ace881c3c0ba46956f6665043e43b95fa196/src/components.c
However, the algorithm used does not seem to be named in the source code. You could reach the authors by email and ask for help.
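For what it's worth, the canonical approach for connected components is a simple BFS/DFS labeling, which runs in time linear in vertices plus edges. Here is a minimal sketch in Python, purely illustrative of the technique; it is not igraph's actual C implementation:

```python
from collections import deque

def connected_components(n, edges):
    """Label the connected components of an undirected graph with
    vertices 0..n-1 using breadth-first search.  Returns a list
    mapping each vertex to a component id."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    comp = [-1] * n  # -1 marks an unvisited vertex
    c = 0
    for s in range(n):
        if comp[s] != -1:
            continue  # already labeled
        comp[s] = c
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if comp[w] == -1:
                    comp[w] = c
                    q.append(w)
        c += 1
    return comp

print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 1, 1]
```

Whether igraph uses BFS, DFS, or union-find internally would have to be confirmed from the source itself.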
In the H2O GBM model, the following parameters are used:
col_sample_rate
col_sample_rate_per_tree
col_sample_rate_change_per_level
I understand how the sampling works and how many variables get considered for splitting at each level of every tree. I am trying to understand how many times each feature gets considered when making a decision. Is there a way to easily extract the set of features used in each splitting decision from the model object?
Referring to the explanation provided by H2O, http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/algo-params/col_sample_rate.html, is there a way to know the 60 randomly chosen features for each split?
Thank you for your help!
If you want to see which features were used at a given split in a given tree, you can navigate the H2OTree object.
For R see documentation here and here
For Python see documentation here
You can also take a look at this blog post (if the link ever dies, just do a Google search for the H2OTree class).
I don't know if I would call this easy, but the MOJO tree visualizer emits a Graphviz DOT data file, which is then turned into a visualization. That file has the information you are interested in.
http://docs.h2o.ai/h2o/latest-stable/h2o-genmodel/javadoc/overview-summary.html#viewing-a-mojo
I am trying to find out whether two programs are gamma-isomorphic, for which I am using the JGraphT library. I have to generate the program dependence graphs of the programs and capture each one as a graph object. Using Frama-C we can generate PDGs: I used frama-c -pdg -pdg-dot graph -pdg-print program.c to generate the PDG of the program, and the output is in DOT format. I would have to parse the DOT output to get the graph. Instead of this, is there a way to get hold of a graph data structure, i.e. a graph object, rather than a DOT file?
Technically speaking, you should be able to extract the information you want with the functions exported in the Db.Pdg module of Frama-C. In particular, Db.Pdg.iter_nodes allows you to iterate over all nodes (for all functions) generated by the PDG, and the Db.Pdg.direct_*dpds family of functions will get you the list of direct children of a given node, either all of them or only those of a given kind. More information is available in the db.mli file inside Frama-C's sources.
That said, I have to ask why you'd want to do that. As far as my search engine can tell, JGraphT is a Java library, and last time I checked, OCaml/Java bindings weren't exactly painless to implement, if possible at all. Furthermore, it seems to me that JGraphT's DOTImporter class should allow you to use the output of the PDG plug-in more or less directly.
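If you do end up parsing the DOT output yourself, note that extracting edges from a simple quoted-identifier digraph is straightforward with a regex. Here is a rough sketch in Python; the node names and label attribute are made up for illustration, and real DOT files (subgraphs, unquoted ids, multi-line attributes) need a proper parser such as JGraphT's DOTImporter:

```python
import re

def parse_dot_edges(dot_text):
    """Pull ("src", "dst") pairs out of a DOT digraph whose node
    names are double-quoted.  This deliberately ignores node and
    edge attributes; it only recovers the graph's edge list."""
    edge_re = re.compile(r'"([^"]+)"\s*->\s*"([^"]+)"')
    return edge_re.findall(dot_text)

# Hypothetical fragment in the style of Graphviz output:
dot = '''digraph G {
  "n1" -> "n2";
  "n2" -> "n3" [label="data"];
}'''
print(parse_dot_edges(dot))  # [('n1', 'n2'), ('n2', 'n3')]
```

The resulting edge list can then be fed into whatever graph object your isomorphism check expects.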
I have a basic understanding of neural networks. I understand that there should be a y matrix (expected result) which stores 0 or 1 corresponding to the different category labels. As an example, for digit recognition, if the number to be identified is 6 then the y vector should be [0,0,0,0,0,0,1,0,0,0]. However, when I look at the MXNet example in the MXNet.jl repository on GitHub, I cannot identify any code which prepares this kind of result matrix. I think the magic lies in the get_mnist_providers() method, which returns 2 providers:
train_provider, eval_provider = get_mnist_providers(batch_size)
I have no idea what these providers (train_provider and eval_provider) are.
Please help me understand these providers. I am trying to write an algorithm with different classifications, so understanding these providers is vital.
You are right about providing a y vector corresponding to the labels. In MXNet there is the concept of data iterators, which are used to bind the data to the labels. What your get_mnist_providers method is most likely doing is returning data iterators that have the corresponding labels attached.
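To make the label-encoding part concrete: the y vector you describe is a one-hot encoding of the class label. Frameworks typically build it (or sidestep it by pairing the integer label with a softmax cross-entropy loss) inside the data iterator or loss layer, which is why the example never constructs the y matrix explicitly. A minimal Python sketch of the encoding itself:

```python
def one_hot(label, num_classes=10):
    """One-hot encode an integer class label: digit 6 becomes a
    length-10 vector with a single 1 at index 6."""
    v = [0] * num_classes
    v[label] = 1
    return v

print(one_hot(6))  # [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
```

The data iterator then yields (image batch, label batch) pairs, with the encoding handled for you.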
For a more detailed understanding on how data iterators fit into the whole picture of model optimization, you can try this tutorial (links to mxnet-notebooks Github repository):
linear-regression.ipynb
(You will need Jupyter Notebook to run the tutorial: just pip install jupyter and then run the command jupyter notebook in the folder where the tutorial file lives.)
I am currently testing various community detection algorithms in the igraph package to compare against my implementation.
I am able to run the algorithms on different graphs but I was wondering if there was a way for me to write the clustering to a file, where all nodes in one community are written to one line and so on. I am able to obtain the membership of each node using membership(communities_object) and write that to a file using dput() but I don't know how to write it the way I want.
This is the first time I am working with R as well. I apologize if this has been asked before.
This does not have much to do with igraph; the clustering is given by a simple numeric vector. See ?write.
write(membership(communities_object), file="myfile", ncolumns=1)
write(communities_object$membership, file="myfile", ncolumns=1) also works.
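Note that the lines above write one membership id per node, one per line, whereas the question asked for all nodes of one community on one line. That is just a grouping step over the membership vector; here is the idea sketched in Python for illustration (in R, the same effect can be had by grouping the vector with split() and writing each group with writeLines()):

```python
from collections import defaultdict

def communities_by_line(membership):
    """Given a membership vector (node i, 1-based as in R/igraph,
    belongs to community membership[i-1]), return one line per
    community listing its node ids, space-separated."""
    groups = defaultdict(list)
    for node, comm in enumerate(membership, start=1):
        groups[comm].append(str(node))
    return [" ".join(groups[c]) for c in sorted(groups)]

# A membership vector like the one membership() returns:
print(communities_by_line([1, 1, 2, 1, 2]))  # ['1 2 4', '3 5']
```

Each returned string can then be written to the output file as its own line.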