Visualizing distance between nodes according to weights - with R

I'm trying to draw a graph where the distance between vertices corresponds to the edge weights, and I've found that graphviz has a way to draw such a graph. Is there a way to do this in R with the igraph package (specifically with graph.adjacency)?
Thanks,
Noam
(To restate the question: draw a graph where the distance between vertices corresponds to the edge weights.)

This is not possible in general: the edge weights would have to satisfy the triangle inequality for every triangle before such an object could be drawn in the plane, so you can only approximate it. For this you can use force-directed layout algorithms, of which igraph has several. The one I often use is the Fruchterman-Reingold algorithm.
See for details:
library("igraph")
?layout.fruchterman.reingold
Edit:
Note that the distance between nodes will then correspond roughly to the inverse of the absolute edge weight.
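A minimal sketch of that approach, assuming the pre-1.0 igraph function names used above (the adjacency matrix is invented for illustration):
library("igraph")
# Toy weighted adjacency matrix; heavier edges should come out shorter
adj <- matrix(c(0, 3, 1,
                3, 0, 2,
                1, 2, 0), 3, 3, byrow = TRUE)
g <- graph.adjacency(adj, mode = "undirected", weighted = TRUE)
lay <- layout.fruchterman.reingold(g, weights = E(g)$weight)
plot(g, layout = lay, edge.label = E(g)$weight)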

As Sacha Epskamp mentioned, unless your data is perfect, you cannot draw a graph that does not violate some triangle inequalities. However, there is a family of techniques called multidimensional scaling (MDS) aimed at minimizing such violations.
One implementation in R is cmdscale from the stats package. I'd recommend the example at the bottom of ?cmdscale:
require(graphics)
loc <- cmdscale(eurodist)
x <- loc[, 1]
y <- -loc[, 2]
plot(x, y, type = "n", xlab = "", ylab = "", main = "cmdscale(eurodist)")
text(x, y, rownames(loc), cex = 0.8)
Of course, you can plot x and y using any graphics package (you were inquiring about igraph specifically).
Finally, I'm sure you'll find plenty of other implementations if you search for "multidimensional scaling" or "MDS". Good luck.
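If you want the result back in igraph, here is a hedged sketch that uses the MDS coordinates as a fixed layout (again with the pre-1.0 igraph function names from the question):
library("igraph")
loc <- cmdscale(eurodist)                      # 2-D coordinates from MDS
g <- graph.adjacency(as.matrix(eurodist), mode = "undirected",
                     weighted = TRUE)
# The complete graph is cluttered; the point is that vertex positions
# now approximate the original distances.
plot(g, layout = loc, vertex.size = 3, vertex.label = V(g)$name)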

Related

R - locate intersection of two curves

There are a number of questions in this forum on locating intersections between a fitted model and some raw data. However, in my case, I am at an early stage of a project and still evaluating the data.
To begin with, I have created a data frame that contains a ratio value whose ideal value should be 1.0. I have plotted the data frame and also used the abline() function to plot a horizontal line at y = 1.0. This horizontal line and the plot of the ratios intersect at some point.
plot(a$TIME.STAMP, a$PROCESS.RATIO,
     xlab = 'Time (5s)',
     ylab = 'Process ratio',
     col = 'darkolivegreen',
     type = 'l')
abline(h = 1.0, col = 'red')
My aim is to locate the intersection point, say x, and draw two vertical lines at x ± k, as abline(v = x - k) and abline(v = x + k), where k is a certain tolerance band.
Applying a grid to the plot is not really an option, because this plot will be part of a multi-panel figure, and because the ratio data is laid out very tightly the plot would not be very readable. Finally, the x ± k values will be quite valuable in my discussions with the domain experts.
Can you please guide me how to achieve this?
Here are two solutions. The first one uses locator() and will be useful if you do not have too many charts to produce:
x <- 1:5
y <- log(1:5)
df1 <- data.frame(x = 1:5, y = log(1:5))
k <- 0.5
plot(df1, type = "o", lwd = 2)
abline(h = 1, col = "red")
locator()
By clicking on the intersection (and then stopping locator()), you will get the intersection:
> locator()
$x
[1] 2.765327
$y
[1] 1.002495
You would then add abline(v=2.765327).
If you need a more programmatic way of finding the intersection, we will have to estimate a function for your data. Unfortunately, you haven't provided us with PROCESS.RATIO, so we can only guess what your data looks like; hopefully it is smooth. Here's a solution that should work with nonlinear data. As you can see in the previous chart, all R does is draw straight lines between the dots, so we have to fit a curve through them. Here I'm fitting the data with a polynomial of degree 2. If your data is less linear, you can try increasing the degree (2 here); if your data is linear, use a simple lm.
fit <- lm(y ~ poly(x, 2))
newx <- data.frame(x = seq(0, 5, 0.01))
fitline <- predict(fit, newdata = newx)
est <- data.frame(newx, fitline)
plot(df1, type = "o", lwd = 2)
abline(h = 1, col = "red")
lines(est, col = "blue", lwd = 2)
Using this fitted curve, we can then find the closest point to y=1. Once we have that point, we can draw vertical lines at the intersection and at +/-k.
cross <- est[which.min(abs(1 - est$fitline)), ]  # grid point closest to y = 1
plot(df1, type = "o", lwd = 2)
abline(h = 1)
abline(v = cross$x, col = "green")
abline(v = cross$x - k, col = "purple")
abline(v = cross$x + k, col = "purple")
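If the fitted curve crosses y = 1 only once in the plotted range, a hedged refinement is to solve for the intersection exactly with uniroot() instead of scanning the grid (reusing the fit object from above):
# f changes sign across the crossing, which is what uniroot() needs
f <- function(z) as.numeric(predict(fit, newdata = data.frame(x = z))) - 1
x_cross <- uniroot(f, interval = range(df1$x))$root
abline(v = x_cross, col = "green")
abline(v = x_cross - k, col = "purple")
abline(v = x_cross + k, col = "purple")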

Plotting ellipsoids / oblate spheroids in rgl

I have been using rgl to plot spheres, but now I need to plot ellipsoids.
The package includes ellipse3d; however, this seems to be for fitting ellipsoids to data, using matrices and stuff I'm not very good at.
What I want is a simple way to plot ellipsoids, in a similar way to spheres, using the centre coordinates and the scales in each direction. Can anyone help me out?
If you don't need the ellipsoid rotated around the axes, then you can just use a diagonal matrix for x (on its own this plots a sphere, with the virtual "axes" along the x, y and z axes) and use the centre and scale parameters to shift the location and change the proportions.
plot3d(ellipse3d(diag(3), centre = c(1, 2, 4), scale = c(1, 2, 5)))
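If you do want a rotation, a hedged sketch (the rotation here is invented for illustration) is to pass a non-diagonal positive-definite matrix whose eigenvectors set the axis directions:
library(rgl)
R <- rotationMatrix(pi/4, 0, 0, 1)[1:3, 1:3]  # 45-degree rotation about z
S <- R %*% diag(c(1, 4, 9)) %*% t(R)          # eigenvalues = squared semi-axes
plot3d(ellipse3d(S, centre = c(0, 0, 0)), col = "steelblue", alpha = 0.5)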
There's one in my cda package,
library(cda)
library(rgl)
## single ellipsoid
plot3d(rgl.ellipsoid(a=2,b=1,c=5))
## multiple ellipsoids, translated and rotated
cl <- helix(0.5, 1, 36, delta=pi/6, n.smooth=1e3)
sizes <- equal_sizes(0.04,0.02,0.02,NROW(cl$positions))
rgl.ellipsoids(cl$positions, sizes, cl$angles, col="gold")

Reciprocal edges in igraph in R

I am working with graphs in R. I am currently using igraph and I would like to be able to plot bidirectional edges ("reciprocal edges") of a graph. So far I've seen it is possible to plot "bidirectional" graphs but not reciprocal edges: for example, E(1,3) and E(3,1) could potentially be represented as one bidirectional edge (<-->), but instead I would like to plot two parallel edges, each pointing in the direction opposite to the other (||).
There exists in Rgraphviz an option when plotting, plot(rEG, recipEdges = "distinct"), that does this, but I prefer how the plots look in igraph. Thanks in advance.
In igraph, you can use the edge attribute curved to curve the edges you want.
For example, here is a graph based on a small adjacency matrix:
library("igraph")
adj <- matrix(c(
0,1,1,
1,0,1,
0,0,0),3,3,byrow=TRUE)
library("igraph")
G <- graph.adjacency(adj)
The edge between nodes 1 and 2 looks bidirectional (actually it isn't: it is two edges that merely appear to be one bidirectional edge because both are drawn straight):
plot(G)
To change this, we can use the edgelist:
# Sort each edge's endpoints so that reciprocal edges become duplicate rows
E <- t(apply(get.edgelist(G), 1, sort))
E(G)$curved <- 0
# Curve only the edges that occur twice, i.e. the reciprocal pairs
E(G)[duplicated(E) | duplicated(E, fromLast = TRUE)]$curved <- 0.2
plot(G)
Another option is my package, where this is the default behavior:
library("qgraph")
qgraph(adj)
which can be suppressed with the bidirectional argument.
Try plot(graph, edge.curved=TRUE). It definitely works in igraph 0.6, and it may also work in igraph 0.5.4 (not sure when it was added).
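For instance, with the G from the answer above (note this curves every edge, not just the reciprocal pair):
plot(G, edge.curved = TRUE)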

What techniques exist in R to visualize a "distance matrix"?

I wish to present a distance matrix in an article I am writing, and I am looking for good visualization for it.
So far I have come across balloon plots (I used them here, but I don't think they will work in this case), heatmaps (here is a nice example, but they don't allow presenting the numbers in the table; correct me if I am wrong. Maybe half the table in colors and half with numbers would be cool), and lastly correlation ellipse plots (here is some code and an example; using a shape is cool, but I am not sure how to apply it here).
There are also various clustering methods, but they aggregate the data (which is not what I want), whereas I want to present all of the data.
Example data:
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dist(nba[1:20, -1])
I am open for ideas.
You could also use force-directed graph drawing algorithms to visualize a distance matrix, e.g.
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dist_m <- as.matrix(dist(nba[1:20, -1]))
dist_mi <- 1/dist_m # one over, as qgraph takes similarity matrices as input
library(qgraph)
jpeg('example_forcedraw.jpg', width=1000, height=1000, unit='px')
qgraph(dist_mi, layout='spring', vsize=3)
dev.off()
Tal, this is a quick way to overlay text on a heatmap. Note that this relies on image rather than heatmap, as the latter offsets the plot, making it more difficult to place the text in the correct positions.
To be honest, I think this graph shows too much information, making it a bit difficult to read; you may want to print only specific values.
Another, quicker option is to save your graph as a PDF, import it into Inkscape (or similar software) and manually add the text where needed.
Hope this helps.
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dst <- dist(nba[1:20, -1])
dst <- data.matrix(dst)
dim <- ncol(dst)
image(1:dim, 1:dim, dst, axes = FALSE, xlab = "", ylab = "")
axis(1, 1:dim, nba[1:20, 1], cex.axis = 0.5, las = 3)
axis(2, 1:dim, nba[1:20, 1], cex.axis = 0.5, las = 1)
text(expand.grid(1:dim, 1:dim), sprintf("%0.1f", dst), cex = 0.6)
A Voronoi Diagram (a plot of a Voronoi Decomposition) is one way to visually represent a Distance Matrix (DM).
They are also simple to create and plot using R--you can do both in a single line of R code.
If you're not familiar with this aspect of computational geometry, the relationship between the two (VD & DM) is straightforward, though a brief summary might be helpful.
A distance matrix--i.e., a 2D matrix showing the distance between each point and every other point--is an intermediate output during kNN computation (i.e., k-nearest neighbors, a machine learning algorithm which predicts the value of a given data point based on the weighted average value of its 'k' closest neighbors, distance-wise, where 'k' is some integer, usually between 3 and 5).
kNN is conceptually very simple--each data point in your training set is in essence a 'position' in some n-dimensional space, so the next step is to calculate the distance between each point and every other point using some distance metric (e.g., Euclidean, Manhattan, etc.). While the training step--i.e., constructing the distance matrix--is straightforward, using it to predict the value of new data points is practically encumbered by the data retrieval--finding the closest 3 or 4 points from among several thousand or several million scattered in n-dimensional space.
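A toy illustration of that training step (the data is invented for illustration): compute the distance matrix with dist(), then find nearest neighbors by brute force.
train <- matrix(rnorm(20 * 2), ncol = 2)  # 20 points in 2-D
dm <- as.matrix(dist(train))              # Euclidean distance matrix
diag(dm) <- Inf                           # ignore each point's self-distance
nn <- order(dm[1, ])[1:3]                 # the 3 nearest neighbors of point 1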
Two data structures are commonly used to address that problem: kd-trees and Voronoi decompositions (aka "Dirichlet tessellation").
A Voronoi decomposition (VD) is uniquely determined by a distance matrix--i.e., there's a 1:1 map; so indeed it is a visual representation of the distance matrix, although again, that's not their purpose--their primary purpose is the efficient storage of the data used for kNN-based prediction.
Beyond that, whether it's a good idea to represent a distance matrix this way probably depends most of all on your audience. To most, the relationship between a VD and the antecedent distance matrix will not be intuitive. But that doesn't make it incorrect--if someone without any statistics training wanted to know if two populations had similar probability distributions and you showed them a Q-Q plot, they would probably think you haven't engaged their question. So for those who know what they are looking at, a VD is a compact, complete, and accurate representation of a DM.
So how do you make one?
A Voronoi decomp is constructed by selecting (usually at random) a subset of points from within the training set (this number varies by circumstances, but if we had 1,000,000 points, then 100 is a reasonable number for this subset). These 100 data points are the Voronoi centers ("VC").
The basic idea behind a Voronoi decomp is that rather than having to sift through the 1,000,000 data points to find the nearest neighbors, you only have to look at these 100; then, once you find the closest VC, your search for the actual nearest neighbors is restricted to just the points within that Voronoi cell. Next, for each data point in the training set, calculate the VC it is closest to. Finally, for each VC and its associated points, calculate the convex hull--conceptually, just the outer boundary formed by that VC's assigned points that are farthest from the VC. This convex hull around the Voronoi center forms a "Voronoi cell." A complete VD is the result of applying those three steps to each VC in your training set. This will give you a perfect tessellation of the surface (see the diagram below).
To calculate a VD in R, use the tripack package. The key function is voronoi.mosaic, to which you just pass the x and y coordinates separately--the raw data, not the DM--and then you can pass the result to plot:
library(tripack)
plot(voronoi.mosaic(runif(100), runif(100), duplicate="remove"))
You may want to consider looking at a 2-D projection of your matrix (multidimensional scaling). Here is a link to how to do it in R.
Otherwise, I think you are on the right track with heatmaps. You can add your numbers without too much difficulty. For example, building off of Learn R:
library(ggplot2)
library(plyr)
library(arm)
library(reshape2)
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
nba$Name <- with(nba, reorder(Name, PTS))
nba.m <- melt(nba)
nba.m <- ddply(nba.m, .(variable), transform,
               rescale = rescale(value))
(p <- ggplot(nba.m, aes(variable, Name)) +
   geom_tile(aes(fill = rescale), colour = "white") +
   scale_fill_gradient(low = "white", high = "steelblue") +
   geom_text(aes(label = round(rescale, 1))))
A dendrogram based on a hierarchical cluster analysis can be useful:
http://www.statmethods.net/advstats/cluster.html
A 2-D or 3-D multidimensional scaling analysis in R:
http://www.statmethods.net/advstats/mds.html
If you want to go into 3+ dimensions, you might want to explore ggobi / rggobi:
http://www.ggobi.org/rggobi/
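As a minimal sketch of the dendrogram suggestion, using the question's example data:
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
hc <- hclust(dist(nba[1:20, -1]))           # hierarchical clustering on the DM
plot(hc, labels = nba[1:20, 1], cex = 0.7)  # dendrogram labeled by player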
In the book "Numerical Ecology with R" by Borcard et al. (2011) they use a function called coldiss.R.
You can find it here: http://ichthyology.usm.edu/courses/multivariate/coldiss.R
It color-codes the distances and even orders the records by dissimilarity.
Another good package would be the seriation package.
Reference:
Borcard, D., Gillet, F. & Legendre, P. (2011) Numerical Ecology with R. Springer.
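A hedged sketch of calling it on the question's data (coldiss() depends on the gclus package, and the argument names below follow the script as published):
library(gclus)  # used internally by coldiss()
source("http://ichthyology.usm.edu/courses/multivariate/coldiss.R")
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
coldiss(dist(nba[1:20, -1]), diag = TRUE)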
A solution using Multidimensional Scaling
data = read.csv("http://datasets.flowingdata.com/ppg2008.csv", sep = ",")
# Build the (squared) Euclidean distance matrix from the Gram matrix
dst = tcrossprod(as.matrix(data[, -1]))
dst = matrix(rep(diag(dst), 50L), ncol = 50L, byrow = TRUE) +
      matrix(rep(diag(dst), 50L), ncol = 50L, byrow = FALSE) - 2 * dst
library(MASS)
mds = isoMDS(dst)
#remove {type = "n"} to see dots
plot(mds$points, type = "n", pch = 20, cex = 3, col = adjustcolor("black", alpha = 0.3), xlab = "X", ylab = "Y")
text(mds$points, labels = data[, 1], cex = 0.75)  # label with player names; rownames(data) would give only row numbers

How to make topographic map from sparse sampling data?

I need to make a topographic map of a terrain for which I have only fairly sparse samples of (x, y, altitude) data. Obviously I can't make a completely accurate map, but I would like one that is in some sense "smooth". I need to quantify "smoothness" (probably the reciprocal of the average of the squared surface curvature), and I want to minimize an objective function that is the sum of two quantities:
The roughness of the surface
The mean square distance between the altitude of the surface at the sample point and the actual measured altitude at that point
Since what I actually want is a topographic map, I am really looking for a way to construct contour lines of constant altitude, and there may be some clever geometric way to do that without ever having to talk about surfaces. Of course I want contour lines also to be smooth.
Any and all suggestions welcome. I'm hoping this is a well-known numerical problem. I am quite comfortable in C and have a working knowledge of FORTRAN. About Matlab and R I'm fairly clueless.
Regarding where our samples are located: we're planning on roughly even spacing, but we'll take more samples where the topography is more interesting. So for example we'll sample mountainous regions more densely than a plain. But we definitely have some choices about sampling, and could take even samples if that simplifies matters. The only issues are
We don't know how much terrain we'll need to map in order to find features that we are looking for.
Taking a sample is moderately expensive, on the order of 10 minutes. So sampling a 100x100 grid could take a long time.
Kriging interpolation may be of some use for smoothly interpolating your sparse samples.
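A hedged sketch of that idea with the gstat package (the sampled surface below is invented for illustration; any geostatistics package with ordinary kriging would do):
library(gstat)
library(sp)
# Stand-in for sparse (x, y, altitude) samples
samples <- data.frame(x = runif(80), y = runif(80))
samples$z <- with(samples, sin(3 * x) * cos(3 * y))
coordinates(samples) <- ~ x + y
# Regular grid to predict onto
grd <- expand.grid(x = seq(0, 1, 0.02), y = seq(0, 1, 0.02))
coordinates(grd) <- ~ x + y
gridded(grd) <- TRUE
# Fit a variogram model, krige onto the grid, and plot the smoothed surface
vfit <- fit.variogram(variogram(z ~ 1, samples), vgm("Sph"))
kriged <- krige(z ~ 1, samples, grd, model = vfit)
spplot(kriged["var1.pred"])  # smoothed surface; contour it as discussed below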
R has many different relevant tools. In particular, have a look at the spatial view. A similar question was asked in R-Help before, so you may want to look at that.
Look at the contour functions. Here's some data:
x <- seq(-3,3)
y <- seq(-3,3)
z <- outer(x,y, function(x,y,...) x^2 + y^2 )
An initial plot is somewhat rough:
contour(x,y,z, lty=1)
Bill Dunlap suggested an improvement: "It often works better to fit a smooth surface to the data, evaluate that surface on a finer grid, and pass the result to contour. This ensures that contour lines don't cross one another and tends to avoid the spurious loops that you might get from smoothing the contour lines themselves. Thin plate splines (Tps from library("fields")) and loess (among others) can fit the surface."
library("fields")
contour(predict.surface(Tps(as.matrix(expand.grid(x=x,y=y)),as.vector(z))))
This results in a very smooth plot, because it uses Tps() to fit the data first, then calls contour. It ends up looking like this (you can also use filled.contour if you want it to be shaded):
For the plot, you can use either lattice (as in the above example) or the ggplot2 package. Use the geom_contour() function in that case. An example can be found here (ht Thierry):
ds <- matrix(rnorm(100), nrow = 10)
library(reshape)
molten <- melt(data = ds)
library(ggplot2)
ggplot(molten, aes(x = X1, y = X2, z = value)) + geom_contour()
There is an excellent review of contouring algorithms; you might need to mesh the surface first in order to interpolate onto a grid.
Maybe you can use:
GEOMap
geomapdata
gtm
with
Matrix
SparseM
slam
in R
