I've got a DocumentTermMatrix that looks as follows:
      artikel naam product personeel loon verlof
doc 1       1    1       2         1    0      0
doc 2       1    1       1         0    0      0
doc 3       0    0       1         1    2      1
doc 4       0    0       0         1    1      1
In the package tm, it's possible to calculate the Hamming distance between 2 documents. But now I want to cluster all the documents that have a Hamming distance smaller than 3.
So here I would like cluster 1 to be documents 1 and 2, and cluster 2 to be documents 3 and 4. Is there a way to do that?
I saved your table to myData:
myData
     artikel naam product personeel loon verlof
doc1       1    1       2         1    0      0
doc2       1    1       1         0    0      0
doc3       0    0       1         1    2      1
doc4       0    0       0         1    1      1
Then I used the hamming.distance() function from the e1071 library. You can use your own distances (as long as they are in matrix form):
library(e1071)
distMat <- hamming.distance(as.matrix(myData)) # coerce to a matrix to be safe
This is followed by hierarchical clustering with the "complete" linkage method, so that the maximum distance within one cluster can be controlled later:
dendrogram <- hclust(as.dist(distMat), method="complete")
Select groups according to the maximum distance between points in a group (here, maximum = 5):
groups <- cutree(dendrogram, h=5)
Finally plot the results:
plot(dendrogram) # main plot
points(c(-100, 100), c(5,5), col="red", type="l", lty=2) # add cutting line
rect.hclust(dendrogram, h=5, border=c(1:length(unique(groups)))+1) # draw rectangles
Another way to see the cluster membership for each document is with table:
table(groups, rownames(myData))
groups doc1 doc2 doc3 doc4
     1    1    1    0    0
     2    0    0    1    1
So documents 1 and 2 fall into one group, while documents 3 and 4 fall into another.
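As a follow-up on the original requirement (Hamming distance smaller than 3), you can cut the same complete-linkage tree just below 3; a small sketch, assuming the dendrogram object from above:
# complete linkage guarantees every pair inside a cluster is within h,
# so cutting just below 3 enforces pairwise Hamming distance < 3
groups3 <- cutree(dendrogram, h=2.9)
groups3
# expected: doc1 doc2 doc3 doc4
#              1    1    2    2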
I have a vector called "combined" with 1's and 0's
combined
1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
I sampled twice from this vector, each time with a sample size of 3, and put the result into a contingency table of counts as follows:
2 1
1 2
I want to repeat this sampling 10,000 times so that I end up with 10,000 contingency tables, each with counts of 1s and 0s from the sampling.
This is what I tried:
sample1 = as.vector(replicate(10000, sample(combined, 3)))
sample2 = as.vector(replicate(10000, sample(combined, 3)))
con_table = table(sample1,sample2)
but I ended up with only 1 table instead of 10,000. Hoping to get some help.
       sample2
sample1    0    1
      0 8109 7573
      1 7306 7012
You need to wrap the entire expression, both the sample calls and table, inside replicate. Add a conversion to a factor to ensure you always get a 2x2 table, even when a sample happens to contain only one of the two values. E.g. a simple version with 2 replications:
combined <- rep(0:1,each=10)
combined <- as.factor(combined)
replicate(2, table(sample(combined,3), sample(combined,3)), simplify=FALSE)
#[[1]]
#
# 0 1
# 0 0 1
# 1 1 1
#
#[[2]]
#
# 0 1
# 0 1 1
# 1 0 1
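Applied to the original problem, the same pattern gives a list of 10,000 tables; a sketch, assuming combined is the vector printed above (13 ones, 14 zeros):
combined <- as.factor(rep(c(1, 0), c(13, 14))) # reconstructed from the printed vector
tables <- replicate(10000,
                    table(sample(combined, 3), sample(combined, 3)),
                    simplify=FALSE)
length(tables) # 10000
tables[[1]]    # the first 2x2 contingency table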
Suppose I have a matrix D which contains death counts per year by specific ages.
I want to fill this matrix with the appropriate death counts stored in the vector Age, but the following code gives me the wrong answer. How should I write the code without using a loop?
# Year and age grid for tables
Years=c(2007:2017)
Ages=c(60:70)
#Data.frame of deaths
D=data.frame(matrix(ncol=length(Years),nrow=length(Ages))); D[is.na(D)]=0
colnames(D)=Years
rownames(D)=Ages
Age=c(60,61,62,65,65,65,68,69,60)
year=2010
D[as.character(Age),as.character(year)]<-
D[as.character(Age),as.character(year)]+1
D[,'2010'] # 1 1 1 0 0 1 0 0 1 1 0
# Should be 2 1 1 0 0 3 0 0 1 1 0
You need to use table. The original code fails because R evaluates the right-hand side once: the duplicated "65" indices each write back the same old value + 1, so the cell ends at 1 instead of 3.
AgeTable = table(Age)                              # counts per age: 60 -> 2, 65 -> 3, ...
D[names(AgeTable), as.character(year)] = AgeTable  # one write per unique age
D[,'2010']
[1] 2 1 1 0 0 3 0 0 1 1 0
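Note that this assignment overwrites whatever was in those cells. If D may already hold counts (say, the same year arrives in several batches), a sketch of a variant that accumulates instead:
AgeTable <- table(Age)
# names(AgeTable) is duplicate-free, so the read-add-write round trip is safe
D[names(AgeTable), as.character(year)] <-
  D[names(AgeTable), as.character(year)] + AgeTable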
I was wondering if there's a fast way to get an incidence matrix for such a problem. I've got two data frames with three columns (the join keys):
df1 <- data.frame(K1=c(1,1,0,1,3,2,2),K2=c(1,2,1,0,2,0,1),K3=c(0,0,3,2,1,3,0))
df2 <- data.frame(K1=c(1,2,0,3),K2=c(0,1,2,0),K3=c(2,0,3,1))
and I need to obtain the corresponding incidence matrix
# IM:
# 1 2 3 4
# 1 1 1 0 0
# 2 1 0 1 0
# 3 0 1 1 0
# 4 1 0 0 0
# 5 0 0 1 1
# 6 0 1 1 0
# 7 0 1 0 0
where a cell is set to 1 if there's a match between the corresponding key (column value) of the rows of the two data frames.
I would do it using multiple loops:
m <- matrix(0, nrow(df1), nrow(df2)) # the original snippet assumes m already exists
for (j in seq_len(nrow(df2)))
  for (k in seq_len(ncol(df2))) {
    if (df2[j, k])
      m[which(df1[, k] == df2[j, k]), j] <- 1
  }
but it's a C-style approach and maybe there's something faster in R. Do you have any other ideas? Besides, when the data frames are quite big (around 50k and 20k rows), I cannot allocate the matrix as it seems too big.
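One vectorized sketch: it loops only over the three key columns and lets outer() do the row comparisons; the sparse-matrix note at the end is an untested suggestion for the memory problem:
# for each key column k, mark (i, j) when df1[i, k] equals a nonzero
# df2[j, k]; OR the three logical matrices together
m <- matrix(FALSE, nrow(df1), nrow(df2))
for (k in names(df2)) {
  m <- m | outer(df1[[k]], df2[[k]], function(a, b) a == b & b != 0)
}
IM <- 1 * m # logical -> 0/1, reproduces the matrix shown above
# for very large inputs, storing the result sparsely may avoid the
# allocation failure:
# library(Matrix); IM_sparse <- Matrix(m, sparse=TRUE)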
I have data where the X column identifies the review and the remaining columns are counts of words that most reviews have. Is it possible to create a graph where the nodes would be reviews and the edges would be words?
X action age ago amazing american art author back bad beautiful beginning
1 1 1 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 1 0 0
4 1 3 0 0 0 2 1 0 0 0 0
5 0 0 1 0 1 0 0 2 0 1 0
Another idea is to cluster the reviews in the graph according to the used words and their frequency.
Thank you very much. Any help is appreciated.
Here are three approaches to explore the relationships in your data:
par(mfrow=c(1,3))
# two mode network (reviews+words)
library(igraph)
set.seed(1)
g <- graph_from_data_frame(subset(reshape2::melt(df, 1), !!value, -value)[2:1])
V(g)$type <- bipartite.mapping(g)$type
plot(g, layout = layout_as_bipartite(g)[, 2:1], vertex.color = V(g)$type+1L)
# just the reviews:
library(reshape2)
lst <- with(subset(melt(df, 1), !!value)[2:1], split(X, variable))
lst <- lst[lengths(lst)>1]
lst <- lapply(lst, function(x) t(combn(x, m=2)))
g <- graph_from_edgelist(do.call(rbind, lst), directed = FALSE)
E(g)$label <- rep(names(lst), sapply(lst, nrow))
plot(g)
# review clustering
library(magrittr) # for %>%
df[-1] %>% dist(method="binary") %>% hclust %>% plot
Output: (three plots: the two-mode network, the review network with word-labeled edges, and the clustering dendrogram)
Data:
df <- read.table(header=T, text="
X action age ago amazing american art author back bad beautiful beginning
1 1 1 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 1 0 0
4 1 3 0 0 0 2 1 0 0 0 0
5 0 0 1 0 1 0 0 2 0 1 0")
PS: There may be a shortcut to no. 2 (reviews as nodes & words as edges) - feel free to add it.
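One possible shortcut, offered as an untested sketch: project the two-mode graph g from the first snippet onto the review mode with bipartite_projection(). The per-word edge labels are lost, but shared-word counts survive as edge weights:
# g is the two-mode graph built in the first snippet above
proj <- bipartite_projection(g, multiplicity=TRUE)
# one projection holds the reviews, the other the words; which is which
# depends on the type assignment from bipartite.mapping, so inspect both
plot(proj$proj2, edge.width=E(proj$proj2)$weight)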
With your example data one could create a graph consisting of nodes (each review) and edges (two reviews are connected when they use the same word). Moreover, you could weight the edges according to how many words two reviews have in common, and you could use different shapes/colors of the edges to represent the different words.
There are several ways to create a graph with your data. First, create an adjacency matrix, where rows and columns each represent a review. The adjacency matrix only records whether there is a common word between two reviews or not: if two reviews share a common word it takes the value 1, otherwise zero.
The adjacency matrix would look similar to this, where the letters denote column and row labels:
Review A B C D
A 0 1 1 1
B 1 0 0 1
C 1 0 0 1
D 1 1 1 0
With the command graph_from_adjacency_matrix() in the igraph package you could then create a graph and use the plot functions.
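A minimal sketch of this first approach, typing in the illustrative adjacency matrix above by hand:
library(igraph)
adj <- matrix(c(0, 1, 1, 1,
                1, 0, 0, 1,
                1, 0, 0, 1,
                1, 1, 1, 0),
              nrow=4, byrow=TRUE,
              dimnames=list(LETTERS[1:4], LETTERS[1:4]))
g <- graph_from_adjacency_matrix(adj, mode="undirected")
plot(g)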
You can find a good introduction to network analysis with the igraph package here: http://kateto.net/networks-r-igraph
Second, you could also create a weight matrix, which counts how many words are shared between two reviews:
Review A B C D
A 0 2 3 1
B 2 0 0 2
C 3 0 0 2
D 1 2 2 0
Using the same command with graph_from_adjacency_matrix( , weighted=TRUE) you could create a graph from that matrix.
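And a matching sketch for the weighted variant, again with the illustrative numbers from the table:
w <- matrix(c(0, 2, 3, 1,
              2, 0, 0, 2,
              3, 0, 0, 2,
              1, 2, 2, 0),
            nrow=4, byrow=TRUE,
            dimnames=list(LETTERS[1:4], LETTERS[1:4]))
gw <- graph_from_adjacency_matrix(w, mode="undirected", weighted=TRUE)
E(gw)$weight # shared-word counts stored as edge weights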
Third, you could specify the graph from edge and node data frames.
The nodes data frame would contain a short id for each node, and maybe the name and any other information you want to include about the nodes:
id long_review_name
R1 A
R2 B
R3 C
R4 D
The edges data frame collects all the information about the connections between two reviews. First, and most importantly, it records all edges in the columns from and to. Further, it can contain the frequency as a weight on the edges, and type denotes which shared word connects the two nodes:
from to weight type
R1 R2 1 american
R1 R2 1 age
R1 R3 2 american
R1 R3 1 age
R1 R4 1 age
R2 R4 2 american
To turn the edge and node data frames into a graph, you would use the command graph_from_data_frame(d=links, vertices=nodes).
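A self-contained sketch with the values from the two tables above (the ids, weights, and types are the illustrative examples, not derived from the real review data):
library(igraph)
nodes <- data.frame(id=c("R1", "R2", "R3", "R4"),
                    long_review_name=c("A", "B", "C", "D"))
links <- data.frame(from  =c("R1", "R1", "R1", "R1", "R1", "R2"),
                    to    =c("R2", "R2", "R3", "R3", "R4", "R4"),
                    weight=c(1, 1, 2, 1, 1, 2),
                    type  =c("american", "age", "american", "age", "age", "american"))
g <- graph_from_data_frame(d=links, vertices=nodes, directed=FALSE)
plot(g, edge.width=E(g)$weight, edge.label=E(g)$type)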
I am using the regular method to do a Hierarchical Clustering project.
mydata.dtm <- TermDocumentMatrix(mydata.corpus)
mydata.dtm2 <- removeSparseTerms(mydata.dtm, sparse=0.98)
mydata.df <- as.data.frame(as.matrix(mydata.dtm2)) # as.matrix() rather than inspect(), which prints
mydata.df.scale <- scale(mydata.df)
d <- dist(mydata.df.scale, method = "euclidean") # distance matrix
fit <- hclust(d, method="ward.D") # "ward" was renamed "ward.D" in R 3.1.0
groups <- cutree(fit, k=10)
groups
congestion cough ear eye fever flu fluzonenon medicare painpressure physical pink ppd pressure
1 2 3 4 5 6 5 5 5 7 4 8 5
rash screening shot sinus sore sports symptoms throat uti
5 5 6 1 9 7 5 9 10
And what I want is to put the group number back into a new column in the original data.
I've looked at approximate string matching within single list - r
Because the df here is a document matrix, what I got after df <- t(data.frame(mydata.df.scale, cutree(fit, k=10))) is a matrix like
df[1:5,1:5]
congestion cough ear eye fever
[1,] 0 0 0 0 0
[2,] 0 0 0 0 0
[3,] 0 0 0 0 0
[4,] 0 0 0 1 0
[5,] 0 0 0 0 0
Since eye has group number 4, I want to add the number 4 to the new column in the 4th row.
Note that in this case a single document can be mapped to two terms in the same group:
df[23,17:21]
sinus sore sports symptoms throat
0 1 0 0 1
Instead of putting the number back directly, I use the 0-1 matrix:
label_back <- t(data.frame(mydata.df, cutree(fit, k=10)))
row.names(label_back) <- NULL
#label_back <- label_back[1:(nrow(label_back)-1),] # the last row holds the group numbers
groups.df <- as.data.frame(groups)
groups.df$label <- rownames(groups.df)
for (i in 1:length(colnames(label_back))) {
  ind <- which(colnames(label_back)[i] == groups.df$label) # match names and return index
  label_back[,i] <- groups.df$groups[ind] * label_back[,i] # multiply the 0-1 entries by the group number
}
Then find the max value in each row, because some rows contain more than one value:
data_group <- rep(0, nrow(data))
for (i in 1:nrow(data)) {
  data_group[i] <- max(unique(label_back[i,]))
}
data$group <- data_group
I am looking for a more elegant way.
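Not the asker's method, but a more compact sketch of the same idea, assuming mydata.df (the raw term-by-document data frame), groups, and the original data frame data from above:
# document-by-term 0/1 occurrence matrix (terms are rows in mydata.df)
doc_term <- t(as.matrix(mydata.df)) > 0
# group number of each term, aligned with the columns of doc_term
term_groups <- groups[colnames(doc_term)]
# scale each term's occurrence column by its group number, then take the
# largest group number present in each document
label_mat <- sweep(doc_term, 2, term_groups, `*`)
data$group <- apply(label_mat, 1, max)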