Removal of sink node in pagerank - information-retrieval

Why don't we remove sink nodes altogether when computing the importance of pages with the PageRank algorithm? Why do we care about sink nodes and take the Z matrix into consideration to compensate for the all-zero columns (sinks) in the probability transition matrix M? If there is some important reason for retaining them, can anyone please tell me what it is?

A sink page might still contain genuine (or fake) information; we cannot decide that with an algorithm, so we cannot simply discard such pages from the ranking.
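For what it's worth, here is a minimal numpy sketch of the usual dangling-node correction the question describes: zero columns of M are replaced by a uniform column (the Z matrix), and teleportation is added on top. The function name and the tiny example graph are made up for illustration.

```python
import numpy as np

def pagerank_with_dangling_fix(M, d=0.85, tol=1e-10, max_iter=1000):
    """Power iteration on the 'Google matrix'.

    M : n x n link matrix whose columns sum to 1, except that columns
        of sink (dangling) pages are all zero.
    The zero columns are replaced by a uniform column (the Z matrix
    mentioned in the question), so probability mass cannot 'leak' away.
    """
    n = M.shape[0]
    dangling = (M.sum(axis=0) == 0)                # columns that are all zero
    Z = np.zeros_like(M)
    Z[:, dangling] = 1.0 / n                       # jump uniformly from a sink
    S = M + Z                                      # now truly column-stochastic
    G = d * S + (1 - d) / n * np.ones((n, n))      # teleportation for ergodicity

    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = G @ r
        if np.linalg.norm(r_next - r, 1) < tol:
            return r_next
        r = r_next
    return r

# Tiny example: page 2 is a sink (no outlinks) but still receives rank.
M = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
print(pagerank_with_dangling_fix(M))
```

Note that simply deleting sinks would not help much: removing them can turn other pages into new sinks, and a sink may still deserve a high rank because many important pages link to it. The Z correction keeps the matrix stochastic so the power iteration converges while retaining those pages.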

Related

How to choose an appropriate null-space velocity vector when solving the inverse kinematics problem of redundant robots?

Recently I have been reading "Motion Control of Redundant Robots under Joint Constraints: Saturation in the Null Space" and have a few questions.
1. At the end of Section 2, the authors mention that they can obtain the largest possible scaling factor s = 0.9091 with their algorithm. I don't understand why they get this result by using the null-space velocity vector Qndot = [-0.4913, 0.8537, -0.6093, -0.9007].
2. In the pseudocode of the SNS algorithm at the velocity level, how does one choose or calculate an appropriate null-space velocity vector? And what is the difference between this vector and the several objective functions?
I would really appreciate it if someone could help me get through this part; it has confused me a lot.
Actually, I think that one of the situations where we have to use the SNS algorithm is when the robotic arm is close to a singular configuration, because singularity of the Jacobian matrix may cause large joint velocities and violated constraints. Is this understanding correct?
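This is not the SNS algorithm itself, just a small numpy sketch of the standard null-space projection it builds on, so you can see what role the null-space velocity vector plays. The Jacobian and task velocity below are made-up placeholders; only the Qndot values are taken from the question.

```python
import numpy as np

# Generic redundancy resolution: qdot = J^+ xdot + (I - J^+ J) qn_dot
# The second term moves the joints without affecting the end-effector,
# which is the channel SNS-style methods use to handle saturated joints.

def redundant_ik_velocity(J, xdot, qn_dot):
    J_pinv = np.linalg.pinv(J)          # Moore-Penrose pseudoinverse
    n = J.shape[1]
    N = np.eye(n) - J_pinv @ J          # projector onto the null space of J
    return J_pinv @ xdot + N @ qn_dot

# Placeholder numbers (not from the paper): 2D task, 4-DOF arm.
J = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.0, 1.0, 0.5, 0.2]])
xdot = np.array([0.3, -0.1])
qn_dot = np.array([-0.4913, 0.8537, -0.6093, -0.9007])  # vector quoted in the question
print(redundant_ik_velocity(J, xdot, qn_dot))
```

If I understand the SNS idea correctly, the task velocity is additionally scaled by a factor s <= 1 whenever a joint bound would otherwise be violated; presumably the s = 0.9091 in the paper is the largest such factor that keeps all joints within their limits for that configuration.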

Graph partitioning optimization

The problem
I have a set of locations on a plane (actually they are pins in a KML file) and I want to partition the corresponding graph into subgraphs. Connectivity is pretty good - as with all real-world road networks - so I assume that if two locations are close they have some kind of connection. The resulting set of subgraphs should adhere to these constraints:
Every node has to be covered by a subgraph
Every node should be in exactly 1 subgraph
Nodes within a subgraph should be close to each other (in L2-norm distance)
Every subgraph should contain at least 5 locations
The number of subgraphs should be minimal
Right now the number of locations is no more than 100, so I thought about brute-forcing through every possibility, but this obviously won't scale well.
I thought about using some k-nearest-neighbours algorithm (e.g. using QuickGraph), but I can't get my head around where to start and how to extend/shrink the subgraphs along the way. Maybe it's possible to map this problem to another problem that can easily be solved with some numerical procedure (e.g. the simplex method)...
Maybe someone has experience with this kind of optimization problem and is willing to help me find a solution? I don't have access to Mathematica/Matlab or the like... but I do have sufficient .NET programming skills and, hmm, Excel :-)
Thanks a lot!
As soon as there are multiple criteria that need to be satisfied in the best possible way simultaneously, things usually start to get difficult.
A numerical solution could work as follows: you define a utility function that maps partitionings of your locations to positive real values, describing how "good" a partitioning is by assigning it a "rating" (good could be high, "bad" could be near zero).
Once you have such a function assigning partitionings their "values", you simply need to optimize it, and you will hopefully obtain a good solution if you defined your utility function reasonably. Evolutionary algorithms are good at this task, since your utility function is probably too complex to optimize analytically due to its discrete nature.
The remaining problem is then how you assign "values" to partitionings via this utility function. That is your task. It can be done, for example, by weighting each criterion with a factor and summing the results, or with more complex combinations (least squares, etc.). The factors you use in the definition of the utility function are tuning parameters and can be varied until the result looks good.
Some CAS software would help a lot for testing if you can get your hands on one, but I guess that to obtain a black-box solver for your partitioning problem, you will need to implement the complete procedure yourself in a language of your choice.
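As a concrete illustration of the utility-function idea (not a full evolutionary algorithm), here is a rough Python sketch that scores a partitioning with weighted penalty terms for the constraints listed in the question and improves it with a simple random-mutation hill climb. All weights, parameter names, and the example data are illustrative.

```python
import random
import math

def utility(partition, points, w_spread=1.0, w_count=5.0, w_small=50.0):
    """Score a partitioning; higher is better.

    partition : list of lists of point indices
    points    : list of (x, y) tuples
    The weights are illustrative tuning parameters.
    """
    score = 0.0
    for group in partition:
        # Penalise spatial spread: sum of pairwise L2 distances within a group.
        for i in group:
            for j in group:
                xi, yi = points[i]
                xj, yj = points[j]
                score -= w_spread * math.hypot(xi - xj, yi - yj)
        # Penalise groups smaller than the required minimum of 5 locations.
        if len(group) < 5:
            score -= w_small * (5 - len(group))
    # Prefer as few subgraphs as possible.
    score -= w_count * len(partition)
    return score

def hill_climb(points, n_groups, iters=20000, seed=0):
    """Random-mutation hill climb over assignments of points to groups."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_groups) for _ in points]

    def as_partition(a):
        groups = {}
        for idx, g in enumerate(a):
            groups.setdefault(g, []).append(idx)
        return list(groups.values())

    best = utility(as_partition(assign), points)
    for _ in range(iters):
        i = rng.randrange(len(points))
        old = assign[i]
        assign[i] = rng.randrange(n_groups)   # mutate one point's assignment
        cand = utility(as_partition(assign), points)
        if cand >= best:
            best = cand
        else:
            assign[i] = old                   # revert moves that make it worse
    return as_partition(assign), best

# Example with 100 random locations, as in the question.
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(100)]
groups, score = hill_climb(pts, n_groups=10)
print(len(groups), score)
```

Since every point always carries exactly one group label, the "every node in exactly one subgraph" constraint is satisfied by construction; the other criteria are traded off through the weights, which you tune until the result looks reasonable.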

Community detection with InfoMap algorithm producing one massive module

I am using the InfoMap algorithm in the igraph package to perform community detection on a directed and non-weighted graph (34943 vertices, 206366 edges). In the graph, vertices represent websites and edges represent the existence of a hyperlink between websites.
A problem I have encountered after running the algorithm is that the majority of vertices have a membership in a single massive community (32920 or 94%). The rest of the vertices are dispersed into hundreds of other tiny communities.
I have tried different settings with the nb.trials parameter (i.e. 50, 100, and now running 500). However, this doesn't seem to change the result much.
I am feeling rather exasperated because the run-time on the algorithm is quite high, so I have to wait each time for the results (with no luck yet!!).
Many thanks.
Thanks for all the excellent comments. In the end, I got it working by downloading and running the source code for Infomap, which is available at: http://www.mapequation.org/code.html.
Due to licence issues, the latest code has not been integrated with igraph.
This solved the problem of too many nodes being 'lumped' into a single massive community.
Specifically, I used the following options from the command line: -N 10 --directed --two-level --map
Kudos to Martin Rosvall from the Infomap project for providing me with detailed help to resolve this problem.
For the interested reader, here is more information about this issue:
When a network collapses into one major cluster, it is most often because of a very dense and random link structure ... In the code for directed networks implemented in igraph, teleportation is encoded. If many nodes have no outlinks, the effect of teleportation can be significant because it randomly connects nodes. We have made new code available here: http://www.mapequation.org/code.html that can cluster networks without encoding the random teleportation necessary to make the dynamics ergodic. For details, see this paper: http://pre.aps.org/abstract/PRE/v85/i5/e056107
I was going to put this in a comment, but it ended up being too long and hard to read in that format, so this is a tangentially related answer.
One thing you should do is assess whether the algorithm is doing a good job at finding community structure. You can try to visualise your network to establish:
Is the algorithm returning community structures that make sense? Maybe there is one massive community?
If not, does the visualisation provide insight as to why?
This will help inform your next steps. Maybe the structure of the network requires a different algorithm?
One thing I find useful for large networks is plotting your edges as a heatmap. This is simple to do if you have your edges stored in an adjacency matrix.
For this, you can use the image function in R, passing in your matrix of edges as the argument z. Hopefully this will allow you to see the community structure by eye.
However you also want to assess the correctness of your algorithm, so you want to sort the nodes (rows and columns of your adjacency matrix) by the community they've been assigned to.
Another thing to note is that if your edges are directed, it may be more difficult to assess by eye, as edges can appear on either side of the diagonal of the heatmap. One thing you can do instead is plot the underlying undirected graph -- that is, the adjacency matrix with your edges treated as undirected.
If your algorithm is doing a good job, you would expect to see square blocks along the diagonal, one for each detected community.
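The description above refers to R's image() function; here is an equivalent sketch with numpy/matplotlib, assuming you already have the adjacency matrix and the community membership vector as arrays (all names are illustrative).

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_sorted_adjacency(A, membership):
    """Heatmap of the adjacency matrix with rows/columns grouped by community.

    A          : n x n adjacency matrix (numpy array)
    membership : length-n array of community labels
    """
    order = np.argsort(membership)              # group nodes by community
    A_sorted = A[np.ix_(order, order)]
    # Symmetrise so directed edges show up regardless of direction,
    # as suggested above for easier visual inspection.
    A_undirected = np.maximum(A_sorted, A_sorted.T)
    plt.imshow(A_undirected, cmap="Greys", interpolation="nearest")
    plt.title("Adjacency matrix ordered by detected community")
    plt.show()
```

If the detected communities are real, the square blocks along the diagonal should be clearly visible after this reordering.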

Graph clustering for an almost clustered graph by removing nodes (vertices)

I want to carry out graph clustering on a huge undirected graph with millions of edges and nodes. The graph is almost clustered, with different clusters joined together only by a few nodes (ambiguous nodes, which can relate to multiple clusters). There are very few or almost no edges between two clusters. This problem is almost the same as finding a vertex cut set of a graph, with one exception: the graph needs to be partitioned into many components (their number being unknown). (Refer to this picture: https://docs.google.com/file/d/0B7_3zLD0XdtAd3ZwMFAwWDZuU00/edit?pli=1)
It is almost like different strongly connected components sharing a couple of nodes between them, and I am supposed to remove those nodes to separate the strongly connected components. Edges are weighted, but this problem is more about finding structure in the graph, so the edge weights won't be of relevance. (Another way to think about the problem is to visualize solid spheres touching each other at some points, the spheres being those strongly connected components and the touching points being those ambiguous nodes.)
I am prototyping something, so I am quite short of time to learn graph clustering algorithms myself and pick the best possible one. Plus, I need a solution that cuts nodes and not edges, since different clusters share nodes and not edges in my case.
Is there any research paper or blog that addresses this or a somewhat related problem? Or can anyone come up with a solution to this problem, however dirty?
Since millions of nodes and edges are involved, I would need a MapReduce implementation of the solution. Any inputs or links for that too?
Is there any existing open-source implementation in MapReduce that I can directly use?
I think this problem is analogous to finding communities in online social networks by removing vertices.
Your problem is not so simple. I am afraid it is related to the clique problem, which is NP-complete, so unless you somehow quantify the statement "there are almost no edges between the clusters", your problem might still be very difficult. But what I would do in your shoes would be to try one dirty, greedy approach, namely regarding the nodes as the following kind of quasi-neural net:
I would consider each vertex to have inputs, outputs and a sigmoid activation function which converts the input value (the sum of the inputs) into the output value. The output value, and I consider this important, would not be cloned and sent to all the neighbors, but rather divided evenly between the neighbors. In addition to this, I would define a logarithmic decay of activity in a neuron (self-suppression, a suppressive connection to itself), controlled by a decay parameter that is global for the net.
Now I would start the simulation with all neurons at activity 0.5 (the activity range is 0 to 1) and a very high decay parameter, which would lead to all neurons quickly stabilizing in the 0 state. I would then gradually decrease the decay parameter until the steady-state result yields the first clique with non-zero stable activity.
The question is what to do next. One possibility is to subtract the found clique from the graph and run the same process again until we find all the cliques. This greedy approach might succeed if your graph is indeed as well behaved (really almost clustered) as you say, but it might lead to unexpected results otherwise. Another possibility is to give the found clique a unique clique "smell" that is repulsive (mutual suppression) to other cliques and rerun the algorithm until the second clique is found, give it a different clique smell repulsive to all others, and so on, until each node has its own assigned smell.
I think these are as many big ideas as I have about this.
The key is that, since it is probably not possible to solve this problem in the general case (it is likely NP-complete), you need to make use of whatever special properties your graph has. That means you need to play with the parameters for a while until the algorithm solves 99% of the cases you encounter. I don't think it is possible to give a numerically precise answer to your question without long experimentation with the actual datasets you encounter.
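A rough sketch of the dynamics described above, to make the idea concrete. The exact update rule (in particular a concrete form for the "logarithmic decay", here folded into the sigmoid as a self-suppression term) is my own guess and would need tuning.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_dynamics(adj, decay, steps=200):
    """One simulation of the quasi-neural-net idea sketched above.

    adj   : n x n 0/1 adjacency matrix (numpy array)
    decay : global self-suppression parameter (larger = stronger decay)
    Returns the activity vector after `steps` iterations.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1).clip(min=1)
    # Each node's output is split evenly among its neighbours.
    share = adj / deg[:, None]
    a = np.full(n, 0.5)
    for _ in range(steps):
        incoming = share.T @ a              # sum of shares received from neighbours
        a = sigmoid(incoming - decay)       # decay acts as self-suppression
    return a

def find_first_active_group(adj, decays=np.linspace(5.0, 0.0, 51)):
    """Lower the decay parameter until some nodes keep non-trivial activity."""
    for d in decays:
        a = run_dynamics(adj, d)
        active = np.where(a > 0.5)[0]
        if len(active) > 0:
            return d, active
    return None, np.array([], dtype=int)
```

Whether the first group of nodes to "switch on" really corresponds to a dense cluster depends on the threshold and the decay schedule, which is exactly the parameter tuning mentioned above.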
Since millions of nodes and edges are involved, I would need a MapReduce implementation of the solution. Any inputs or links for that too?
In my experience, I doubt that using Map/Reduce here would be truly advantageous. First, on the order of 10^6 nodes isn't really that large [especially in a non-hyper-connected graph, since you are considering clustering], and the overhead of using Map/Reduce [unless you have already set up your hardware/software for it] for your problem will not be worth it.
Map/Reduce works much better once you have solved the clustering issue and then want to process each cluster with similar analysis -- basically when you can break your task into relatively isolated sub-tasks that can be performed in parallel. This, of course, can be cascaded over several layers.
In a relatively similar situation, I personally first modelled my graph in a graph database (I used Neo4j, and would recommend it highly) and then ran my analytics and queries on it. You will be surprised how whiteboard-friendly this solution is, and even massively joined and connected queries execute near-instantaneously, especially at the scale of only a few million nodes. For example, you can do a filtered analysis based on degrees of separation, followed by a listing of common nodes.
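If you go the graph-database route, a minimal sketch with the official Neo4j Python driver might look like this. The :Location label, LINKED_TO relationship type, connection details, and hop limit are all placeholders, not from the original post.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Connection details and the :Location / :LINKED_TO schema are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# A 'degrees of separation' style query: nodes reachable from a given
# node within at most 3 hops, regardless of edge direction.
query = """
MATCH (a:Location {id: $id})-[:LINKED_TO*1..3]-(b:Location)
RETURN DISTINCT b.id AS neighbour
"""

with driver.session() as session:
    result = session.run(query, id=42)
    neighbours = [record["neighbour"] for record in result]

driver.close()
print(neighbours)
```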

Dijkstra's algorithm cannot deal with negative weights, when do you see negative weights in the real world?

I can't think of a concrete instance in which you'd have a negative weight. You cannot have a negative distance between two houses, you cannot go back in time. When would you have a graph with a negative edge weight?
I found that the Bellman-Ford algorithm was originally used to deal with routing in ARPANET, but again, I can't imagine where you'd run into a route with a negative weight; it just doesn't seem possible. I could just be thinking too hard about this. What would be a simple example?
Suppose that walking a distance takes a certain amount of food. But along some paths there is food you can gather, so you might gain food by following those paths.
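To make the food example concrete, here is a small hand-rolled Bellman-Ford on a made-up graph where one path gains food, i.e. has a negative weight. The node names and costs are invented.

```python
# Edge weights = net food spent walking that path; a negative weight means
# you gather more food along the way than the walk costs you.
edges = [
    ("camp", "river", 2),
    ("camp", "orchard", 6),
    ("orchard", "river", -5),   # plenty of fruit to pick up en route
    ("river", "cabin", 3),
]

def bellman_ford(edges, source):
    nodes = {u for u, _, _ in edges} | {v for _, v, _ in edges}
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):            # relax all edges |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass to detect negative cycles (none in this example).
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative cycle")
    return dist

print(bellman_ford(edges, "camp"))
# camp -> orchard -> river costs 6 + (-5) = 1 food, cheaper than the direct
# camp -> river path (2). A textbook Dijkstra finalises 'river' at cost 2
# before it ever relaxes the orchard edge, so it would report the wrong value.
```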
When doing routing, a negative weight might be assigned to a link to make it the default path. You could use this if you have a primary and a backup line and for whatever reason you don't want to load balance between them.
I guess you might get negative weights where you've already got a system with non-negative weights and a path comes along that is cheaper than all existing paths, and for some reason it's expensive to reweight the network.
Even if there were an example, you could probably normalize it to be all positive. Any actual representation of a negative weight is relative to some zero. I guess what I'm saying is that there probably isn't an application of negative weights that can't be handled using exclusively positive weights.
EDIT: After thinking about this a little more, I suppose you could have situations where a given path has a negative weight. In this context, assuming the negative weight is bad, you would have to have a situation where the only possible way to reach your desired endpoint requires at least one point in the graph where you're REQUIRED to take the negative path (as in, no other option is available to reach your goal). But if the graph hasn't been traversed, how would you know that were true?
EDIT (AGAIN): @Jim, I think you're right. The choke point isn't really relevant. I guess I was too quick to assume that it was, because one question that pops into my mind when introducing negative edges is: if it is possible to traverse the graph without taking ANY negative edge, then what are the negative edges doing there in the first place? But this doesn't hold up very well, because -- outside of hindsight -- how would you ever know whether a graph could be traversed without going across a negative edge?
Also worth noting, according to the Wikipedia page for Dijkstra's algorithm:
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and published in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with nonnegative edge path costs, producing a shortest path tree. This algorithm is often used in routing and as a subroutine in other graph algorithms.
So, even though this conversation is useful and thought-provoking, maybe the title of the question should be "What is the proper algorithm to use for traversing a graph with negative edges?" Dijkstra's algorithm was intended to find the shortest path. But if you introduce positive and negative weights, doesn't the goal change from finding the shortest path to finding the most positive path, regardless of how many edges are on your chosen path? And if it does, what is your exit condition? The only way you could know you've reached the optimal solution would be if you happened across a path that included all positive edges without any negative edges, and wouldn't this scenario only occur by chance? So, if introducing positive and negative weights changes the goal to finding the most positive (or most negative, depending on how you want to frame it) path, wouldn't this problem be doomed to O(n!) and therefore be best solved by a decision-making algorithm (like alpha/beta) which would produce the best outcome given a restriction on the total number of edges you're allowed to take?
If you're trying to find the quickest way to swim across a series of linked pools in a water park, and it has flumes: riding a flume carries you along for free, so it could reasonably be modelled as a negative-weight edge.
