requires_grad relation to leaf nodes - torch

From the docs:
requires_grad – Boolean indicating whether the Variable has been
created by a subgraph containing any Variable, that requires it. Can
be changed only on leaf Variables
What does it mean by leaf nodes here? Are leaf nodes only the input nodes?
If it can be only changed at the leaf nodes, how can I freeze layers then?

Leaf nodes of a graph are those nodes (i.e. Variables) that were not computed directly from other nodes in the graph. For example:
import torch
from torch.autograd import Variable
A = Variable(torch.randn(10,10)) # this is a leaf node
B = 2 * A # this is not a leaf node
w = Variable(torch.randn(10,10)) # this is a leaf node
C = A.mm(w) # this is not a leaf node
If a leaf node requires_grad, all subsequent nodes computed from it will automatically also require_grad. Otherwise, you could not apply the chain rule to calculate the gradient of the leaf node that requires_grad. This is the reason why requires_grad can only be set for leaf nodes: for all others, it can be smartly inferred and is in fact determined by the settings of the leaf nodes used for computing these other Variables.
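For illustration, here is a small sketch (using the same old-style Variable API as above) of how requires_grad is inferred for non-leaf nodes; the variable names are made up:
import torch
from torch.autograd import Variable
x = Variable(torch.randn(5, 5))                      # leaf, requires_grad=False by default
w = Variable(torch.randn(5, 5), requires_grad=True)  # leaf, we want gradients for this one
y = x.mm(w)             # not a leaf: computed from x and w
print(y.requires_grad)  # True, inferred because w requires_grad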
Note that in a typical neural network, all parameters are leaf nodes. They are not computed from any other Variables in the network. Hence, freezing layers using requires_grad is simple. Here is an example taken from the PyTorch docs:
import torchvision
import torch.nn as nn
import torch.optim as optim

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
# Replace the last fully-connected layer
# Parameters of newly constructed modules have requires_grad=True by default
model.fc = nn.Linear(512, 100)
# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
Note that what you really do here is freeze the entire gradient computation (which is what you should be doing, as it avoids unnecessary computation). Technically, you could also leave the requires_grad flag on and only define your optimizer for the subset of the parameters that you would like to learn.

Related

Recursion in context with decision trees

So I am struggling with understanding recursion in the context of decision trees. I have looked at two different code examples from two different websites: Example 1 and Example 2. This is a part of the code from the first example:
class DecisionTreeClassifier():
    def __init__(self, min_samples_split=2, max_depth=2):
        ''' constructor '''
        # initialize the root of the tree
        self.root = None
        # stopping conditions
        self.min_samples_split = min_samples_split
        self.max_depth = max_depth

    def build_tree(self, dataset, curr_depth=0):
        ''' recursive function to build the tree '''
        X, Y = dataset[:,:-1], dataset[:,-1]
        num_samples, num_features = np.shape(X)
        # split until stopping conditions are met
        if num_samples>=self.min_samples_split and curr_depth<=self.max_depth:
            # find the best split
            best_split = self.get_best_split(dataset, num_samples, num_features)
            # check if information gain is positive
            if best_split["info_gain"]>0:
                # recur left
                left_subtree = self.build_tree(best_split["dataset_left"], curr_depth+1)
                # recur right
                right_subtree = self.build_tree(best_split["dataset_right"], curr_depth+1)
                # return decision node
                return Node(best_split["feature_index"], best_split["threshold"],
                            left_subtree, right_subtree, best_split["info_gain"])
        # compute leaf node
        leaf_value = self.calculate_leaf_value(Y)
        # return leaf node
        return Node(value=leaf_value)
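For reference, the Node class that these return statements assume is not shown in the question; a minimal sketch of what it typically looks like is:
class Node():
    ''' a single tree node: a decision node stores a split, a leaf node stores a value '''
    def __init__(self, feature_index=None, threshold=None, left=None, right=None, info_gain=None, value=None):
        # for decision nodes
        self.feature_index = feature_index
        self.threshold = threshold
        self.left = left          # left child Node, itself the root of a whole subtree
        self.right = right        # right child Node
        self.info_gain = info_gain
        # for leaf nodes
        self.value = value
Because every decision node keeps references to its left and right children, the single node returned by build_tree is simply the root, and the rest of the tree hangs off it.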
I can't wrap my head around how the information about the split decisions and thresholds is "saved" while performing the recursion. In both of the codes they return only one node after the recursion is done, but how can this node store the whole tree, when it is only one node? Shouldn't a tree be able to contain many nodes?
Since the recursion is performed before the node is created, how is the information about the different split decisions and thresholds "stored" before making the node? These might be stupid questions, but I am really struggling to understand this concept and was hoping someone has a good explanation that could help me visualize what happens during the recursion.

Efficient controlled random walks in gremlin

Goal
The objective is to efficiently generate random walks on a relatively large graph with uneven probabilities of going through edges depending on their type.
Configuration
Ubuntu VM, 23 GB RAM
JanusGraph 0.6.1 full
Local graph (default conf/remote.yaml file used)
~1.8m vertices (~28k will be start nodes for the random walks)
~21m relationships (they can all be used in the random walks)
What I am doing
I am currently generating random walks with the sample command:
g.V(<startnode_id>).
repeat( local( both().sample(1) ) ).
times(<desired_randomwalk_length>).
path()
What I tried
I tried using a gremlinpython script to create a random walk generator that would first get all edges connected to the current node, then randomly pick an edge to go through, and repeat this <desired_randomwalk_length> times.
from random import randint
from typing import List

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.structure.graph import Vertex

connection = DriverRemoteConnection(<URL>, "g")
g = traversal().withRemote(connection)

def get_next_node(start:Vertex) -> Vertex:
    next_vertices = g.V(start.id).both().fold().next()
    return next_vertices[randint(0, len(next_vertices)-1)]

def get_random_walk(start:Vertex, length:int=10) -> List[Vertex]:
    current_node = start
    random_walk = [current_node]
    for _ in range(length):
        current_node = get_next_node(current_node)
        random_walk.append(current_node)
    return random_walk
Issues
While testing on a subset of the total graph (400k vertices, 1.5m rel), I got these results
Sample query, <desired_randomwalk_length> of 10: 100k random walks in 1h10
Gremlinpython function, <desired_randomwalk_length> of 4: 2k random walks in 1h+
The sample command is really fast, but there are a few problems:
It doesn't seem to truly be a uniform pick amongst the edges (it looks like successive coin tosses), which could lead to certain paths being taken more often and thus diminishes the value of generating random walks. (I can't directly do what is recommended here as the node ids aren't in a sequence, so I have to acquire them first.)
I haven't found a way to give different probabilities to different types of relationships.
Is there a better way to do random walks with Gremlin?
If there is none, is there a way to modify the sample query to assign probabilities to types of edges? Maybe even a way to get a better distribution for the sampling?
As a last resort, is there a way to improve the queries to do this "by hand" with a gremlinpython script?
Thanks to everyone reading/replying!
EDIT
Is there a way to do the following:
Given r_type1, r_type2, r_type3, ... the acceptable relationship types for this random walk
Given proba1, proba2, proba3, ... the probabilities of going through these relationship types
For each step
Sample a node for each relationship type r_type1, r_type2, r_type3, ...
Keep only one according to the probabilities proba1, proba2, proba3, ...
I think the second step could be done by sampling multiple nodes for each relationship type, in accordance with the probabilities (which could be done by using a gremlinpython script to build the query). This still leaves the question of how to sample over multiple relationship types from a single node, and how to randomly pick one of the sampled nodes.
I hope this is clear!
Thanks to @Kelvin Lawrence's Practical Gremlin (especially the union section), I managed to do what I wanted (or close enough).
The Gremlin query I get is the following:
g.V(<vertex_id>).
  repeat(
    local(
      union(
        both(<relationship_type1>).sample(N1),
        both(<relationship_type2>).sample(N2),
        ...
      ).
      sample(1)
    )
  ).times(<walk_length>).
  path()
The N_ values are set independently of the node, such that the least probable transition yields exactly 1 sample. This also means that the probabilities are not exactly respected wherever the number of relationships of a given type is smaller than the corresponding N_ value.
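For illustration, one way the per-type sample counts (the nb_samples dictionary used below) could be derived from the desired probabilities; the probas values here are made up:
# hypothetical probabilities per relationship type
probas = {"r_type1": 0.6, "r_type2": 0.3, "r_type3": 0.1}
# scale so that the least probable type gets exactly 1 sample, then round
min_proba = min(probas.values())
nb_samples = {r_type: round(p / min_proba) for r_type, p in probas.items()}
# -> {"r_type1": 6, "r_type2": 3, "r_type3": 1}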
The union part is built in python using gremlinpython (nb_samples is the dictionary storing the number of samples needed for each relationship type)
from gremlin_python.process.graph_traversal import __, GraphTraversal

next_node_traversal:GraphTraversal = __.union(
    *[
        __.both(key).sample(nb_samples[key])
        for key in nb_samples
    ]
).sample(1)
(Here we are using the * operator to unpack the list when passing it as argument to the union method)
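The union traversal can then be plugged into the same repeat/times/path structure as the pure Gremlin query above. A sketch, where vertex_id and walk_length are placeholders:
from gremlin_python.process.graph_traversal import __

walk = (
    g.V(vertex_id)
     .repeat(__.local(next_node_traversal))
     .times(walk_length)
     .path()
     .next()
)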

Write a pseudo code for a Graph algorithm

Given a DAG and a function f which maps every vertex to a unique number from 1 to |V|, I need to write pseudo code for an algorithm that finds, for every vertex v, the minimal value of f(u) among all vertices u that are reachable from v, and saves it as an attribute of v. The time complexity of the algorithm needs to be O(|V| + |E|) (assuming that the time complexity of f is O(1)).
I thought about using DFS (or a variation of it) and/or topological sort, but I don't know how to use them to solve this problem.
In addition, I need to think about an algorithm that gets an undirected graph and the function f, and calculates the same thing for every vertex, and I don't know how to do that either.
Your idea of using DFS is right. Actually, the function f(v) is only given so that each node can be uniquely identified by a number between 1 and |V|.
Just a hint for solving it: you have to modify DFS so that it returns the minimum value of f(u) over all vertices u reachable from a node, and save it in another array, say minReach, indexed by f(v). The visited array vis is indexed in the same way.
I am also giving the pseudocode in case you are not able to solve it, but try on your own first.
The pseudocode is Python-like and assumes the graph and the function f(v) are available; 0-based indexing is used.
vis = [0, 0, 0, .. |V| times]        # visited array for dfs
minReach = [1, 2, 3, .. |V| times]   # minReach[f(v)-1] starts out as f(v) itself

function dfs(node):
    vis[f(node)-1] = 1                                              # mark node as visited
    for v in graph[node]:
        if vis[f(v)-1] != 1:                                        # adjacent node not visited yet
            minReach[f(node)-1] = min(minReach[f(node)-1], dfs(v))  # recurse and take the minimum
        else:
            minReach[f(node)-1] = min(minReach[f(node)-1], minReach[f(v)-1])  # reuse the minimum already computed for v
    return minReach[f(node)-1]

for vertex in graph:                  # apply dfs from every vertex that is still unvisited
    if vis[f(vertex)-1] != 1:
        dfs(vertex)
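To see the same idea end to end, here is a small runnable version in plain Python; the example graph and the labeling f are made up for illustration:
# graph: adjacency lists of a DAG; f maps each vertex to a unique number in 1..|V|
graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
f = {'a': 3, 'b': 1, 'c': 4, 'd': 2}

min_reach = {v: f[v] for v in graph}   # best value seen so far, starts at f(v) itself
visited = set()

def dfs(node):
    # returns the minimal f-value reachable from node (including node itself)
    visited.add(node)
    for nxt in graph[node]:
        if nxt not in visited:
            min_reach[node] = min(min_reach[node], dfs(nxt))
        else:
            # nxt is already finished (a DAG has no cycles), so reuse its answer
            min_reach[node] = min(min_reach[node], min_reach[nxt])
    return min_reach[node]

for v in graph:
    if v not in visited:
        dfs(v)

print(min_reach)   # {'a': 1, 'b': 1, 'c': 2, 'd': 2}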

Rejecting isomorphisms from collection of graphs

I have a collection of 15M (million) DAGs (directed acyclic graphs - directed hypercubes actually) that I would like to remove isomorphisms from. What is the common algorithm for this? Each graph is fairly small, a hypercube of dimension N where N is 3 to 6 (for now), resulting in graphs of 64 nodes each for the N=6 case.
Using networkx and python, I implemented it like this, which works fine for small sets like 300k (thousand) (runs in a few days' time).
import networkx as nx

def isIsomorphicDuplicate(hcL, hc):
    """checks if hc is an isomorphism of any of the hc's in hcL
    Returns True if hcL contains an isomorphism of hc
    Returns False if it is not found"""
    # for each cube in hcL, check if hc could be isomorphic
    # if it could be isomorphic, then check if it is
    # if it is isomorphic, then return True
    # if all comparisons have been made already, then it is not an isomorphism and return False
    for saved_hc in hcL:
        if nx.faster_could_be_isomorphic(saved_hc, hc):
            if nx.fast_could_be_isomorphic(saved_hc, hc):
                if nx.is_isomorphic(saved_hc, hc):
                    return True
    return False
One better way to do it would be to convert each graph to its canonical ordering, sort the collection, then remove the duplicates. This bypasses checking each of the 15M graphs in a binary is_isomorphic() test. I believe the above implementation is something like O(N!N) (not counting the isomorphism check itself), whereas a clean convert-all-to-canonical-ordering and sort should take O(N) for the conversion + O(log(N)N) for the search + O(N) for the removal of duplicates. O(N!N) >> O(log(N)N)
I found this paper on Canonical graph labeling, but it is very tersely described with mathematical equations, no pseudocode: "McKay's Canonical Graph Labeling Algorithm" - http://www.math.unl.edu/~aradcliffe1/Papers/Canonical.pdf
tldr: I have an impossibly large number of graphs to check via binary isomorphism checking. I believe the common way this is done is via canonical ordering. Do any packaged algorithms or published straightforward to implement algorithms (i.e. have pseudocode) exist?
Here is a breakdown of McKay's Canonical Graph Labeling Algorithm, as presented in the paper by Hartke and Radcliffe [link to paper].
I should start by pointing out that an open source implementation is available here: nauty and Traces source code.
Ok, let's do this! Unfortunately this algorithm is heavy in graph theory, so we need some terms. First I will start by defining isomorphic and automorphic.
Isomorphism:
Two graphs are isomorphic if they are the same, except that the vertices are labelled differently. The following two graphs are isomorphic.
Automorphic:
Two graphs are automorphic if they are completely the same, including the vertex labeling. The following two graphs are automorphic. This seems trivial, but turns out to be important for technical reasons.
Graph Hashing:
The core idea of this whole thing is to have a way to hash a graph into a string, then for a given graph you compute the hash strings for all graphs which are isomorphic to it. The isomorphic hash string which is alphabetically (technically lexicographically) largest is called the "Canonical Hash", and the graph which produced it is called the "Canonical Isomorph", or "Canonical Labelling".
With this, to check whether any two graphs are isomorphic you just need to check whether their canonical isomorphs (or canonical labellings) are equal (i.e. are automorphs of each other). Wow, jargon! Unfortunately this is even more confusing without the jargon :-(
The hash function we are going to use is called i(G) for a graph G: build a binary string by looking at every pair of vertices in G (in order of vertex label) and put a "1" if there is an edge between those two vertices, a "0" if not. This way the j-th bit in i(G) represents the presence or absence of that edge in the graph.
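As a tiny illustration of this hash for a networkx graph (my own sketch, not from the paper):
import networkx as nx

def i_of_G(G):
    # adjacency bitstring of G: one bit per vertex pair, in order of vertex label
    nodes = sorted(G.nodes())
    bits = []
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            bits.append("1" if G.has_edge(nodes[a], nodes[b]) else "0")
    return "".join(bits)

print(i_of_G(nx.path_graph(3)))   # "101": edges 0-1 and 1-2 exist, edge 0-2 does not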
McKay's Canonical Graph Labeling Algorithm
The problem is that for a graph on n vertices, there are O(n!) possible isomorphic hash strings based on how you label the vertices, and many, many more if we have to compute the same string multiple times (i.e. automorphs). In general we have to compute every isomorph hash string in order to find the biggest one; there's no magic shortcut. McKay's algorithm is a search algorithm that finds this canonical isomorph faster by pruning all the automorphs out of the search tree, forcing the vertices in the canonical isomorph to be labelled in increasing degree order, and a few other tricks that reduce the number of isomorphs we have to hash.
(1) Sect 4: the first step of McKay's is to sort vertices according to degree, which prunes out the majority of isomorphs to search, but is not guaranteed to be a unique ordering since there may be more than one vertex of a given degree. For example, the following graph has 6 vertices; verts {1,2,3} have degree 1, verts {4,5} have degree 2 and vert {6} has degree 3. Its partial ordering according to vertex degree is {1,2,3|4,5|6}.
(2) Sect 5: Impose artificial symmetry on the vertices which were not distinguished by vertex degree; basically we take one of the groups of vertices with the same degree, and in turn pick one at a time to come first in the total ordering (fig. 2 in the paper). So in our example above, the node {1,2,3|4,5|6} would have children { {1|2,3|4,5|6}, {2|1,3|4,5|6}, {3|1,2|4,5|6} } by expanding the group {1,2,3}, and also children { {1,2,3|4|5|6}, {1,2,3|5|4|6} } by expanding the group {4,5}. This splitting can be done all the way down to the leaf nodes, which are total orderings like {1|2|3|4|5|6} that describe a full isomorph of G. This allows us to take the partial ordering by vertex degree from (1), {1,2,3|4,5|6}, and build a tree listing all candidates for the canonical isomorph -- which is already WAY fewer than n! combinations since, for example, vertex 6 will never come first. Note that McKay evaluates the children in a depth-first way, starting with the smallest group first; this leads to a deeper but narrower tree which is better for online pruning in the next step. Also note that each total ordering leaf node may appear in more than one subtree: that's where the pruning comes in!
(3) Sect. 6: While searching the tree, look for automorphisms and use that to prune the tree. The math here is a bit above me, but I think the idea is that if you discover that two nodes in the tree are automorphisms of each other then you can safely prune one of their subtrees because you know that they will both yield the same leaf nodes.
I have only given a high-level description of McKay's, the paper goes into a lot more depth in the math, and building an implementation will require an understanding of this math. Hopefully I've given you enough context to either go back and re-read the paper, or read the source code of the implementation.
This is indeed an interesting problem.
I would approach it from the adjacency matrix angle. Two isomorphic graphs will have adjacency matrices where the rows / columns are in a different order. So my idea is to compute for each graph several matrix properties which are invariant to row/column swaps, off the top of my head:
numVerts, min, max, sum/mean, trace (probably not useful if there are no reflexive edges), norm, rank, min/max/mean column/row sums, min/max/mean column/row norm
and any pair of isomorphic graphs will be the same on all properties.
You could make a hash function which takes in a graph and spits out a hash string like
string hashstr = str(numVerts)+str(min)+str(max)+str(sum)+...
then sort all graphs by hash string and you only need to do full isomorphism checks for graphs which hash the same.
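For illustration, a sketch of such a hash string built from a few permutation-invariant matrix properties (my own example, using sorted row sums rather than the norms and ranks mentioned above):
import networkx as nx
import numpy as np

def invariant_hash(G):
    # properties that do not change when rows/columns are permuted together
    A = nx.to_numpy_array(G)
    row_sums = np.sort(A.sum(axis=1))
    return "|".join([
        str(A.shape[0]),               # numVerts
        str(A.min()), str(A.max()),    # min, max entries
        str(A.sum()),                  # sum of all entries
        str(row_sums.tolist()),        # sorted row sums (the degree sequence for 0/1 matrices)
    ])
Two isomorphic graphs are guaranteed to produce the same string, so only graphs with equal strings need a full isomorphism check.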
Given that you have 15 million graphs on 36 nodes, I'm assuming that you're dealing with weighted graphs, for unweighted undirected graphs this technique will be way less effective.
This is an interesting question which I do not have an answer for! Here is my two cents:
By 15M do you mean 15 MILLION undirected graphs? How big is each one? Any properties known about them (trees, planar, k-trees)?
Have you tried minimizing the number of checks by detecting false positives in advance? Something like computing and comparing numbers such as vertex and edge counts, degrees, and degree sequences? In addition to other heuristics to test whether two given graphs are NOT isomorphic. Also, check nauty. It may be your way to check them (and generate canonical orderings).
If all your graphs are hypercubes (like you said), then this is trivial: All hypercubes with the same dimension are isomorphic, hypercubes with different dimension aren't. So run through your collection in linear time and throw each graph in a bucket according to its number of nodes (for hypercubes: different dimension <=> different number of nodes) and be done with it.
Since you mentioned that smaller groups of ~300k graphs can be checked for isomorphism, I would try to split the 15M graphs into groups of ~300k graphs and run the test for isomorphism on each group.
say: each graph Gi := VixEi (Vertices x Edges)
(1) create buckets of graphs such that the n-th bucket contains only graphs with |V|=n
(2) for each bucket created in (1) create subbuckets such that the (n,m)-th subbucket contains only graphs such that |V|=n and |E|=m
(3) if the groups are still too large, sort the nodes within each graph by their degrees (meaning the nr of edges connected to the node), create a vector from it and distribute the graphs by this vector
example for (3):
assume 4 nodes V = {v1, v2, v3, v4}. Let d(v) be v's degree with d(v1)=3, d(v2)=1, d(v3)=5, d(v4)=4, then find < := transitive hull( { (v2,v1), (v1,v4), (v4,v3) } ) and create a vector depending on the degrees and the order, which leaves you with
(1,3,4,5) = (d(v2), d(v1), d(v4), d(v3)) = d( {v2, v1, v4, v3} ) = d(<)
now you have divided the 15M graphs into buckets where each bucket has the following characteristics:
n nodes
m edges
each graph in the group has the same 'out-degree-vector'
I assume this to be fine grained enough if you are expecting not to find too many isomorphisms
cost so far: O(n) + O(n) + O(n*log(n))
(4) now you can assume that members inside each bucket are likely to be isomorphic. You can run your isomorphism check on the bucket and only need to compare the currently tested graph against all representatives you have already found within this bucket. By assumption there shouldn't be too many, so I assume this to be quite cheap.
at step 4 you also can happily distribute the computation to several compute nodes, which should really speed up the process
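A compact sketch of steps (1)-(4) in Python with networkx; the invariants used for bucketing here are just |V|, |E| and the degree vector, as described above:
import networkx as nx
from collections import defaultdict

def bucket_key(G):
    # (1)-(3): number of nodes, number of edges, sorted degree vector
    return (G.number_of_nodes(), G.number_of_edges(),
            tuple(sorted(d for _, d in G.degree())))

def remove_isomorphic_duplicates(graphs):
    representatives = defaultdict(list)   # bucket key -> representatives found so far
    unique = []
    for G in graphs:
        key = bucket_key(G)
        # (4): compare only against representatives in the same bucket
        if not any(nx.is_isomorphic(G, H) for H in representatives[key]):
            representatives[key].append(G)
            unique.append(G)
    return unique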
Maybe you can just use McKay's implementation? It is found here now: http://pallini.di.uniroma1.it/
You can convert your 15M graphs to the compact graph6 format (or sparse6) which nauty uses and then run the nauty tool labelg to generate the canonical labels (also in graph6 format).
For example - removing isomorphic graphs from a set of random graphs:
#gnp.py
import networkx as nx

for i in range(100000):
    graph = nx.gnp_random_graph(10, 0.1)
    print(nx.generate_graph6(graph, header=False))
[nauty25r9]$ python gnp.py > gnp.g6
[nauty25r9]$ cat gnp.g6 |./labelg |sort |uniq -c |wc -l
>A labelg
>Z 10000 graphs labelled from stdin to stdout in 0.05 sec.
710

How can I check if two graphs with LABELED vertices are isomorphic?

For example, suppose I had a graph G that had all blue nodes and one red node. I also had a graph F that had all blue and one red node.
What is an algorithm that I can run to verify that these two graphs are isomorphic with respect to their colored nodes?
I have made a few attempts at trying to create a polynomial graph isomorphism algorithm, and while I have yet to create an algorithm that is proven to be polynomial for every case, one algorithm I came up with is particularly suited for this purpose. It's based on a DFA minimization algorithm (the specific algorithm is http://en.wikipedia.org/wiki/DFA_minimization#Hopcroft.27s_algorithm ; you may want to find a description from elsewhere, since Wikipedia's is difficult to follow).
The original algorithm was initialized by organizing the vertexes into distinct groups based on degree (one group for vertexes of degree 1, one for vertexes of degree 2, etc.). For your purposes, you will want to organize the vertexes into groups based upon both degree and label; this will ensure that no two nodes will be paired if they have different labels. Each graph should have its own structure containing such groups. Check the collection of groups for both graphs; there should be the same number of groups for the two graphs, and for each group in one graph, there should be a group in the other graph containing the same number of vertexes of the same degree and label. If this isn't the case, the graphs aren't isomorphic.
At each iteration of the main algorithm, you should generate a new data structure for each of the two graphs for the vertex groups that the next step will use. For each group, generate a list for each vertex of group indices/IDs that correspond to the vertexes that are adjacent to the vertex in question (include duplicate groups in this list). Check each group to see if the sorted group index/ID list for each contained vertex is the same. If this is the case, create a unmodified copy of this group in the next step's group structure. If this isn't the case, then for each unique list of group indices/IDs within that group, create a new group for vertexes within the original group that generated that list and add this new group to the next step's group structure. If you do not subdivide any of the groups of either graph in a given iteration, stop running the main portion of this algorithm. If you subdivide at least one group, you will need to once again check to make sure the group structures of the two graphs correspond to each other. This check will be similar to the one performed at the end of the algorithm's initialization (you may even be able to use the same function for both). If this check fails, then the graphs aren't isomorphic. If the check passes, then discard/free the current group structures and start the next iteration with the freshly created ones.
To make the process of determining "corresponding groups" easier, I would highly recommend using a predictable scheme for adding groups to the structure. For example, if you add groups during initialization in (degree, label) order, subdivide groups in ascending index order, and add subdivided groups to the new structure based on the order of the group index list (i.e., sorted by first listed index, then second, etc.), then corresponding groups between the two group structures will always have the same index, which makes the process of keeping track of which groups correspond to each other much easier.
If all groups contain 3 or fewer vertexes when the algorithm completes, then the graphs are isomorphic (for corresponding groups containing 2 or 3 vertexes, any vertex pairing is valid). If this isn't the case (this always happens for graphs where all nodes have equal degree and label, and sometimes happens for subgraphs with that property), then the graphs are not yet determined to be isomorphic or non-isomorphic. To differentiate between the two cases, choose an arbitrary node of the first graph's largest group and separate it into its own group. Then, for each node of the other graph's largest group, try running the algorithm again with that node separated into its own group. In essence, you are choosing an unpaired node from the first graph and pairing it by guess-and-check to every node in the second graph that is still a plausible pairing. If any of the forked iterations returns an isomorphism, the graphs are isomorphic. If none of them do, the graphs are not isomorphic.
For general cases, this algorithm is polynomial. In corner cases, the algorithm might be exponential. Whether this is the case or not is related to how frequently the algorithm can be forced to fork in the worst case of both graph input and node selection, which I have had difficulties trying to put useful bounds on. For example, although the algorithm forks at every step when comparing two full graphs, every branch of that tree produces an isomorphism; therefore, the algorithm returns in polynomial time in this case even though traversing the entire execution tree would require exponential time since traversing only one branch of the execution tree takes polynomial time.
Regardless, this algorithm should work well for your purposes. I hope my explanation of it was comprehensible; if not, I can try providing examples of the algorithm handling simple cases or expressing it as pseudocode instead.
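For concreteness, here is a much-simplified sketch of the grouping-and-subdivision idea (essentially color refinement seeded with labels; my own illustration, not the exact algorithm described above; labels are assumed to be strings or otherwise mutually comparable):
from collections import Counter

def refine(adj, labels):
    # adj: dict vertex -> list of neighbors; labels: dict vertex -> label
    # start from (degree, label) groups, then repeatedly split groups by the
    # multiset of neighboring group ids
    color = {v: (len(adj[v]), labels[v]) for v in adj}
    for _ in range(len(adj)):              # the partition stabilizes within |V| rounds
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v]))) for v in adj}
        # relabel signatures to compact ids; isomorphic graphs end up with the same ids
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: ids[sig[v]] for v in adj}
    return Counter(color.values())         # group id -> group size

def maybe_isomorphic(adj1, labels1, adj2, labels2):
    # necessary (but not sufficient) condition for labelled isomorphism
    return refine(adj1, labels1) == refine(adj2, labels2)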
Years ago, I created a simple and flexible algorithm for exactly this problem (graph isomorphism with labels).
I named it "Powerhash", and to create the algorithm it required two insights. The first is the power iteration graph algorithm, also used in PageRank. The second is the ability to replace power iteration's inside step function with anything that we want. I replaced it with a function that does the following on each iteration, and for each node:
Sort the hashes (from previous iteration) of the node's neighbors
Hash the concatenated sorted hashes
Replace node's hash with newly computed hash
On the first step, a node's hash is affected by its direct neighbors. On the second step, a node's hash is affected by the neighborhood 2-hops away from it. On the Nth step a node's hash will be affected by the neighborhood N-hops around it. So you only need to continue running the Powerhash for N = graph_radius steps. In the end, the graph center node's hash will have been affected by the whole graph.
To produce the final hash, sort the final step's node hashes and concatenate them together. After that, you can compare the final hashes to find if two graphs are isomorphic. If you have labels, then add them (on the first iteration) in the internal hashes that you calculate for each node.
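A compact sketch of the three steps above (my own reading of the description, not the madIS source code):
import hashlib

def powerhash(adj, labels, radius):
    # adj: dict node -> list of neighbors; labels: dict node -> label
    # first iteration mixes the node labels into the hashes
    h = {v: hashlib.sha1(str(labels[v]).encode()).hexdigest() for v in adj}
    for _ in range(radius):
        # each node's new hash = hash of its neighbors' sorted, concatenated hashes
        h = {v: hashlib.sha1("".join(sorted(h[u] for u in adj[v])).encode()).hexdigest()
             for v in adj}
    # final graph hash: sort all node hashes and concatenate them
    return hashlib.sha1("".join(sorted(h.values())).encode()).hexdigest()
Two labelled graphs can then be compared by checking whether their powerhash values are equal.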
For more on this you can look at my post here:
https://plus.google.com/114866592715069940152/posts/fmBFhjhQcZF
The algorithm above was implemented inside the "madIS" functional relational database. You can find the source code of the algorithm here:
https://github.com/madgik/madis/blob/master/src/functions/aggregate/graph.py
Just checking: do you mean strict graph isomorphism or something else? Isomorphic graphs have the same adjacency relations (i.e. if node A is adjacent to node B in one graph, then node g(A) is adjacent to node g(B) in another graph that is the result of applying the transformation g to the first one...). If you just wanted to check whether one graph has the same types and number of nodes as another, then you can just compare counts.
