I was wondering if there's a fast way to get an incidence matrix for a problem like this. I've got two data frames with three columns (the join keys):
df1 <- data.frame(K1=c(1,1,0,1,3,2,2),K2=c(1,2,1,0,2,0,1),K3=c(0,0,3,2,1,3,0))
df2 <- data.frame(K1=c(1,2,0,3),K2=c(0,1,2,0),K3=c(2,0,3,1))
and I need to obtain the corresponding incidence matrix
# IM:
# 1 2 3 4
# 1 1 1 0 0
# 2 1 0 1 0
# 3 0 1 1 0
# 4 1 0 0 0
# 5 0 0 1 1
# 6 0 1 1 0
# 7 0 1 0 0
where a cell is set to 1 if there's a match between the corresponding key (column value) of the rows of the two data frames.
I would do it with nested loops:
m <- matrix(0, nrow(df1), nrow(df2))
for (j in seq_len(nrow(df2)))
  for (k in seq_len(ncol(df2))) {
    if (df2[j, k])
      m[which(df1[, k] == df2[j, k]), j] <- 1
  }
but that's a C approach and maybe there's something faster in R. Do you have any other ideas? Besides, when the data frames are quite big (around 50k and 20k rows), I cannot even allocate the matrix, as it seems too big.
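For reference, here is a sketch of a join-based alternative that avoids both the loops and the dense allocation. It assumes the Matrix package; the pairs helper object is illustrative, not from the original post:
library(Matrix)
# for each key column, join the nonzero values of df2 against df1,
# collecting the (i, j) row-index pairs that match
pairs <- do.call(rbind, lapply(names(df1), function(k) {
  d1 <- data.frame(i = seq_len(nrow(df1)), v = df1[[k]])
  d2 <- data.frame(j = seq_len(nrow(df2)), v = df2[[k]])
  merge(d1, d2[d2$v != 0, ])[c("i", "j")]
}))
pairs <- unique(pairs)
# sparse 0/1 incidence matrix; only the matches are stored
M <- sparseMatrix(i = pairs$i, j = pairs$j, x = 1,
                  dims = c(nrow(df1), nrow(df2)))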
I want to create a Venn diagram in R that represents the intersection between three columns of a data frame, i.e. that shows how many observations are "1" for each pairwise comparison and also among all three groups.
Here is a reproducible example.
library(ggVennDiagram)
df <- as.data.frame(cbind(A = c(1,0,0,0,0,1,1,1,0,1),
                          B = c(0,0,1,1,0,0,0,1,1,0),
                          C = c(0,0,1,1,1,1,0,1,0,0)))
The resulting toy dataframe:
df
A B C
1 1 0 0
2 0 0 0
3 0 1 1
4 0 1 1
5 0 0 1
6 1 0 1
7 1 0 0
8 1 1 1
9 0 1 0
10 1 0 0
So, I want to produce a Venn diagram of the intersection of A, B and C, i.e. representing how many rows have "1" in common between A and B, B and C, A and C, and A, B and C at the same time.
However, if I run ggVennDiagram(df), the plot it produces (not reproduced here) is obviously not the output that I am expecting.
Does anyone know how I can solve this?
You need a named list of integer vectors, one per set, which here is which(x == 1) for each column in your data frame, so you can do:
ggVennDiagram(lapply(df, function(x) which(x == 1)))
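For the toy data above, the list it builds looks like this:
lapply(df, function(x) which(x == 1))
# $A
# [1]  1  6  7  8 10
#
# $B
# [1] 3 4 8 9
#
# $C
# [1] 3 4 5 6 8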
I have a vector called "combined" with 1's and 0's
combined
1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I sampled twice from this vector, each with a sample size of 3 and put it into a contingency table of counts as follows.
2 1
1 2
I want to repeat this sampling 10,000 times so that I end up with 10,000 contingency tables, each with counts of 1s and 0s from the sampling.
This is what I tried:
sample1 = as.vector(replicate(10000, sample(combined, 3)))
sample2 = as.vector(replicate(10000, sample(combined, 3)))
con_table = table(sample1,sample2)
but I ended up getting only one aggregated table instead of 10,000. Hoping to get some help:
8109 7573
7306 7012
You need to wrap the entire expression, both sample and table, inside replicate. Also add a conversion to a factor to ensure you always get a 2x2 table. E.g. a simple version with 2 replications:
combined <- rep(0:1,each=10)
combined <- as.factor(combined)
replicate(2, table(sample(combined,3), sample(combined,3)), simplify=FALSE)
#[[1]]
#
# 0 1
# 0 0 1
# 1 1 1
#
#[[2]]
#
# 0 1
# 0 1 1
# 1 0 1
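Scaling up is then just a matter of the replication count; the result is a list of 10,000 2x2 tables:
tables <- replicate(10000, table(sample(combined, 3), sample(combined, 3)),
                    simplify = FALSE)
length(tables)
# [1] 10000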
Say I've got a data.table (it can also be a data.frame; it doesn't matter to me) which has numeric columns a, b, c, d and e.
Each row of the table represents an article and a-e are numeric characteristics of the articles.
What I want to find out is which articles are similar to each other, based on columns a, b and c.
I define "similar" by allowing a, b and c to vary +/- 1 at most.
That is, article x is similar to article y if neither a, b nor c differs by more than 1. Their values for d and e don't matter and may differ significantly.
I've already tried a couple of approaches but didn't get the desired result. What I want to achieve is to get a result table which contains only those rows that are similar to at least one other row. Plus, duplicates must be excluded.
Particularly, I'm wondering if this is possible using the sqldf library. My idea is to somehow join the table with itself under the given conditions, but I don't get it together properly. Any ideas (not necessarily using sqldf)?
Suppose our input data frame is the built-in 11x8 anscombe data frame. Its first three column names are x1, x2 and x3. Then here are some solutions.
1) sqldf This returns the pairs of row numbers of similar rows:
library(sqldf)
ans <- anscombe
ans$id <- 1:nrow(ans)
sqldf("select a.id, b.id
from ans a
join ans b on abs(a.x1 - b.x1) <= 1 and
abs(a.x2 - b.x2) <= 1 and
abs(a.x3 - b.x3) <= 1")
Add the condition and a.id < b.id if each row should not be paired with itself and we also want to exclude the reverse of each pair, or add and not a.id = b.id to exclude just the self pairs.
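For example, to keep each unordered pair once:
sqldf("select a.id, b.id
  from ans a
  join ans b on abs(a.x1 - b.x1) <= 1 and
                abs(a.x2 - b.x2) <= 1 and
                abs(a.x3 - b.x3) <= 1 and
                a.id < b.id")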
2) dist This returns a matrix m whose i,j-th element is 1 if rows i and j are similar and 0 if not based on columns 1, 2 and 3.
# matrix of pairs (1 = similar, 0 = not)
m <- (as.matrix(dist(anscombe[1:3], method = "maximum")) <= 1) + 0
giving:
1 2 3 4 5 6 7 8 9 10 11
1 1 0 0 1 1 0 0 0 0 0 0
2 0 1 0 1 0 0 0 0 0 1 0
3 0 0 1 0 0 1 0 0 1 0 0
4 1 1 0 1 0 0 0 0 0 0 0
5 1 0 0 0 1 0 0 0 1 0 0
6 0 0 1 0 0 1 0 0 0 0 0
7 0 0 0 0 0 0 1 0 0 1 1
8 0 0 0 0 0 0 0 1 0 0 1
9 0 0 1 0 1 0 0 0 1 0 0
10 0 1 0 0 0 0 1 0 0 1 0
11 0 0 0 0 0 0 1 1 0 0 1
We could add m[lower.tri(m, diag = TRUE)] <- 0 to exclude self pairs and the reverse of each pair if desired or diag(m) <- 0 to just exclude self pairs.
We can create a data frame of similar row number pairs like this. To keep the output short we have excluded self pairs and the reverse of each pair.
# two-column data.frame of pairs excluding self pairs and reverses
subset(as.data.frame.table(m), as.integer(Var1) < as.integer(Var2) & Freq == 1)[1:2]
giving:
Var1 Var2
34 1 4
35 2 4
45 1 5
58 3 6
91 3 9
93 5 9
101 2 10
106 7 10
117 7 11
118 8 11
Here is the code for a network graph of the above (the plot itself is not reproduced here; the answer continues below):
# network graph
library(igraph)
g <- graph_from_adjacency_matrix(m)  # current name for the old graph.adjacency()
plot(g)
# raster plot
library(ggplot2)
ggplot(as.data.frame.table(m), aes(Var1, Var2, fill = factor(Freq))) +
geom_raster()
I am quite new to R, so don't expect too much.
What if you build, from your values (which are basically vectors), a matrix of the pairwise distances between them? Then you can find those combinations which differ by no more than 1 from each other. This way you can find the matching (a)-pairs. Repeat this with (b) and (c) and keep those pairs which are included in all three.
Alternatively, this can probably be done as a cube as well.
Just as a thought hint; a rough sketch of the idea follows.
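A sketch of that idea on the same anscombe columns used above, with outer() building the per-column distance matrices (the close_* names are just illustrative):
# pairwise absolute differences per column, TRUE where they differ by <= 1
close_a <- abs(outer(anscombe$x1, anscombe$x1, "-")) <= 1
close_b <- abs(outer(anscombe$x2, anscombe$x2, "-")) <= 1
close_c <- abs(outer(anscombe$x3, anscombe$x3, "-")) <= 1
# similar rows must be close in all three columns
similar <- close_a & close_b & close_c
which(similar & upper.tri(similar), arr.ind = TRUE)  # pairs without self/reverse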
I want to style the output of table(). Suppose I have the following:
dat$a <- c(1,2,3,4,4,3,4,2,2,2)
dat$b <- c(1,2,3,4,1,2,4,3,2,2)
table(dat$a,dat$b)
1 2 3 4
1 50 0 0 0
2 0 150 50 0
3 0 50 50 0
4 50 0 0 100
There are two problems with this. First, it doesn't give me the correct frequencies. Additionally, it has no row or column labels. I found this, and that table works for both frequency counts and axis labels. Is the issue that this approach subsets from a data frame? I would appreciate any tips on both fixing the frequency counts and adding style to the table.
The only problem is the way that you are inputting arguments to table. To get the desired output (with labels), use the data frame as argument, not 2 vectors (the columns). If you have a larger data frame, use only the subset that you want.
a <- c(1,2,3,4,4,3,4,2,2,2)
b <- c(1,2,3,4,1,2,4,3,2,2)
dat <- data.frame(a,b)
table(dat)
Gives me the output:
b
a 1 2 3 4
1 1 0 0 0
2 0 3 1 0
3 0 1 1 0
4 1 0 0 2
It shouldn't give the wrong frequencies, even with your approach. You could try restarting your R session to check this.
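If you do want to keep passing the columns as separate vectors, the dnn argument of table() supplies the dimension labels:
table(dat$a, dat$b, dnn = c("a", "b"))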
I have a quite big data frame (a few million records).
I need to filter it according to the following rule:
- For each product, delete all records which come before the fifth record after the first record with x > 0.
So we are interested in only two columns, ID and x. The data frame is sorted by ID.
It is fairly easy to do this using loops, but loops don't perform well on such a big data frame.
How to do it in 'vector style'?
Example:
BEFORE FILTERING
ID x
1 0
1 0
1 5 # First record with x>0
1 0
1 3
1 4
1 0
1 9 # Delete all earlier records of that product
1 0
1 0
1 6
2 0
2 1 # First record with x>0
2 0
2 4
2 5
2 8
2 0 # Delete all earlier records of that product
2 1
2 3
AFTER FILTERING
ID x
1 9
1 0
1 0
1 6
2 0
2 1
2 3
For these split-apply-combine problems, I like using plyr. There are alternatives if speed becomes an issue, but for most things plyr is easy to understand and use. I wrote a function that implements the logic you described above and then fed that to ddply() to operate on each chunk of the data based on ID.
fun <- function(x, column, threshold, numplus){
  # row index of the first record where the key column exceeds the threshold
  first <- which(x[[column]] > threshold)[1]
  # keep everything from `numplus` records after that point onwards
  rows <- seq(from = first + numplus, to = nrow(x))
  x[rows, ]
}
And then feed this to ddply(), where dat holds the example data from the question:
require(plyr)
ddply(dat, "ID", fun, column = "x", threshold = 0, numplus = 5)
#-----
ID x
1 1 9
2 1 0
3 1 0
4 1 6
5 2 0
6 2 1
7 2 3
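Since the real data has a few million records, a data.table version of the same logic may be worth trying. This is only a sketch, and it assumes (as in the example) that every ID has at least one record with x > 0:
library(data.table)
dt <- as.data.table(dat)
# for each ID, keep rows starting 5 records after the first x > 0
dt[, .SD[seq_len(.N) >= which(x > 0)[1] + 5], by = ID]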