I need to count the unique names of people in a string, while taking into account that there may be slight typos.
My idea was to treat strings below a certain threshold (e.g. a Levenshtein distance below 2) as equal. Right now I can calculate the string distances, but I don't know how to modify my input string so that I get the correct number of unique names.
library(stringdist)
library(stringr)
names<-"Michael, Liz, Miichael, Maria"
names_split<-strsplit(names, ", ")[[1]]
stringdistmatrix(names_split,names_split)
[,1] [,2] [,3] [,4]
[1,] 0 6 1 5
[2,] 6 0 7 4
[3,] 1 7 0 6
[4,] 5 4 6 0
(number_of_people<-str_count(names, ",")+1)
[1] 4
The correct value of number_of_people should be, of course, 3.
As I am only interested in the number of unique names, I am not concerned whether "Michael" gets replaced by "Miichael" or the other way round.
One option is to try to cluster the names based on their distance matrix:
library(stringdist)
# create a 'dist' object (=lower triangular part of distance matrix)
d <- stringdistmatrix(names_split,method="osa")
# use hierarchical clustering to group nearest neighbors
hc <- hclust(d)
# visual inspection: y-axis labels the distance value
plot(hc)
# decide what distance value you find acceptable for grouping.
cutree(hc, h=3)
Depending on your actual data you will need to experiment with the distance type (q-gram/cosine may be useful, or the Jaro-Winkler distance in the case of names).
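Applying this to the example above: cutting the tree at a height just below the typo threshold of 2 merges "Michael" and "Miichael" (which sit at distance 1) while keeping the other names apart, recovering the expected count. A small self-contained sketch using the question's data:

```r
library(stringdist)

names_split <- c("Michael", "Liz", "Miichael", "Maria")
d <- stringdistmatrix(names_split, method = "osa")
hc <- hclust(d)

# cut just below the typo threshold of 2
groups <- cutree(hc, h = 1.5)
length(unique(groups))  # 3 unique names
```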
I'll try not to overcomplicate things with my explanations, but I'm confused about how best to fill a triangular correlation matrix (with no repeated values) using correlation values derived from another package. This involves extracting specific values from a list of text files. This is what I have done so far:
# read in list of file names (they are named '1_1', '1_2' .. so on until '47_48' with no repeat values generated)
filenames <- read_table('/home/filenames.txt', col_names = 'file_id')
# create symmetrical matrix
M <- diag(48)
ct <- 1
for (sub in filenames$file_id) {
  subj <- read.table(paste0(dat_dir, '/ht_', sub, '.HEreg'), sep = "", fill = TRUE)
  ht <- as.character(subj$V2[grep("rG", subj$V1)]) # extract the specific value in that column for each text file
  M[ct,] <- as.numeric(ht) # input this value into the appropriate location
  ct <- ct + 1
}
This obviously does not give me the triangular output I envision. I know there is an error in how I insert the variable 'ht' into the matrix, but I am not sure how to fix it. Ideally, the correlation value from file 1_1 should be inserted in row 1, col 1, the value from file 1_2 in row 2, col 1, and so on, avoiding repeats (those should be 0's).
Should I turn to nested loops?
Much help would be appreciated from this R newbie here, I hope I didn't complicate things unnecessarily!
I think the easiest way would be to read in all your values into a vector. You can do this using a variation of your existing loop.
Let us assume that your desired correlation matrix is 5x5 (I know you have 48x48 judging by your code, but to keep the example simple I will work with a smaller matrix).
Let us assume that you have read all of your correlation values into the vector x in column-major order (the same order R uses), i.e. the first element of x is row 2, column 1; the second element is row 3, column 1; and so on. I am further assuming that you are creating a symmetric correlation matrix with ones on the diagonal (hence your use of the diag() function), which is why the indexing starts where it does. Let's assume your vector x contains the following values:
x <- 1:10
I know that these are not correlations, but they will make it easy to see how we fill the matrix, i.e. which vector element goes into which position in the resulting matrix.
Now, let us create the identity matrix and zero matrices for the upper and lower triangular correlations (off diagonal).
# Assuming 5x5 matrix
n_elements <- 5
m <- diag(n_elements)
m_upper <- m_lower <- matrix(0, n_elements, n_elements)
To quickly fill the lower triangular matrix, we can use lower.tri().
m_lower[lower.tri(m_lower, diag = FALSE)] <- x
This will yield the following output:
[,1] [,2] [,3] [,4] [,5]
[1,] 0 0 0 0 0
[2,] 1 0 0 0 0
[3,] 2 5 0 0 0
[4,] 3 6 8 0 0
[5,] 4 7 9 10 0
As you can see, we have successfully filled the lower triangular part. Also note the order in which the elements of the vector are filled into the matrix; this is crucial for your results to be correct. The upper triangular part is simply the transpose of the lower triangular part, so we can add our three matrices together to form your symmetric correlation matrix.
m_upper <- t(m_lower)
M <- m_lower + m + m_upper
Which yields the desired output:
[,1] [,2] [,3] [,4] [,5]
[1,] 1 1 2 3 4
[2,] 1 1 5 6 7
[3,] 2 5 1 8 9
[4,] 3 6 8 1 10
[5,] 4 7 9 10 1
As you can see, there is no need for nested loops to fill these matrices. The only loop you need is the one that reads in the results from the files (which it appears you have a handle on). If you only want the triangular output, you can simply stop at the lower triangular matrix above. If your vector of estimated correlations (x in my example) includes the diagonal elements, simply set diag = TRUE in the lower.tri() function and you are good to go.
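To illustrate that last remark, here is a sketch of the diag = TRUE variant, assuming the vector now carries 15 values (the 10 off-diagonal correlations plus the 5 diagonal entries) in the same column-major order:

```r
x <- 1:15          # hypothetical values, diagonal included
n <- 5
M <- matrix(0, n, n)
M[lower.tri(M, diag = TRUE)] <- x  # fill lower triangle and diagonal
M <- M + t(M) - diag(diag(M))      # mirror; subtract the double-counted diagonal
```

The subtraction is needed because M + t(M) adds the diagonal to itself; removing one copy restores it.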
I have two clustering results for the same variables but with different values each time. Let us create them with the following code:
set.seed(11)
a<-matrix(rnorm(10000),ncol=100)
colnames(a)<-(c(1:100))
set.seed(31)
b<-matrix(rnorm(10000),ncol=100)
colnames(b)<-colnames(a)
c.a<-hclust(dist(t(a)))
c.b<-hclust(dist(t(b)))
# clusters
groups.a<-cutree(c.a, k=15)
# take groups names
clus.a=list()
for (i in 1:15) clus.a[[i]] <- colnames(a)[groups.a==i]
# see the clusters
clus.a
groups.b<-cutree(c.b, k=15)
clus.b=list()
for (i in 1:15) clus.b[[i]] <- colnames(b)[groups.b==i]
# see the clusters
clus.b
What I get from that is two lists, clus.a and clus.b with the names (here just numbers from 1 to 100) of each cluster's variables.
Is there any way to examine if and which of the variables are clustered together in both clusterings? That is, how can I see whether I have variables (groups of 2, 3, 4, etc.) that land in the same cluster in both clus.a and clus.b (not necessarily with the same cluster number)?
If I understand your question correctly, you want to know if there are any clusters in a which have exactly the same membership as any of the clusters in b. Here's one way to do that.
Note: AFAICT in your example there are no matching clusters in a and b, so we create a few artificially to demo the solution.
# create artificial matches
clus.b[[3]] <- clus.a[[2]]
clus.b[[10]] <- clus.a[[8]]
clus.b[[15]] <- clus.a[[11]]
f <- function(a,b) (length(a)==length(b) & length(intersect(a,b))==length(a))
result <- sapply(clus.b,function(x)sapply(clus.a,f,b=x))
which(result, arr.ind=TRUE)
# row col
# [1,] 2 3
# [2,] 8 10
# [3,] 11 15
So this loops through all the clusters in b (sapply(clus.b,...)) and for each, loops through all the clusters in a looking for an exact match (in arbitrary order). For there to be a match, both clusters must have the same length, and the intersection of the two must contain all the elements in either - hence have the same length. This process produces a logical matrix with rows representing a and columns representing b.
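As an aside, since cluster membership vectors contain no duplicates, base R's setequal() expresses the same exact-match test more compactly; this is an equivalent sketch, not a change to the method above:

```r
# same test as the length-based f above, for duplicate-free vectors
f <- function(a, b) setequal(a, b)  # same elements, order ignored

f(c("x", "y"), c("y", "x"))  # TRUE
f(c("x", "y"), c("x", "z"))  # FALSE
```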
Edit: to reflect the fact that the OP has changed the question.
To detect clusters with two or more common elements, use:
f <- function(a,b) length(intersect(a,b))>1
result <- sapply(clus.b,function(x)sapply(clus.a,f,b=x))
matched <- which(result, arr.ind=TRUE)
matched
# row col
# [1,] 4 1
# [2,] 8 1
# [3,] 11 1
# [4,] 3 2
# ...
To identify which elements were present in both:
apply(matched,1,function(r) intersect(clus.a[[r[1]]],clus.b[[r[2]]]))
I am new to R and the clustering world. I am using a shopping dataset and extracting features from it in order to identify something meaningful.
So far I have managed to learn how to merge files, remove NAs, compute the sum of squared errors, work out mean values, summarise by group, run k-means clustering, and plot the X, Y results.
However, I am very confused about how to interpret these results or identify what would be a useful cluster. Am I repeating something or missing something? I also get confused when plotting the X and Y variables.
Below is my code; it might be wrong. Could you please help? Any help would be great.
# Read file
mydata = read.csv(file.choose(), TRUE)
#view the file
View(mydata)
#create new data set
mydata.features = mydata
mydata.features <- na.omit(mydata.features)
wss <- (nrow(mydata.features)-1)*sum(apply(mydata.features,2,var))
for (i in 2:20) wss[i] <- sum(kmeans(mydata.features, centers=i)$withinss)
plot(1:20, wss, type="b", xlab="Number of Clusters", ylab="Within groups sum of squares")
# K-Means Cluster Analysis
fit <- kmeans(mydata.features, 3)
# get cluster means
aggregate(mydata.features,by=list(fit$cluster),FUN=mean)
# append cluster assignment
mydata.features <- data.frame(mydata.features, fit$cluster)
results <- kmeans(mydata.features, 3)
plot(mydata[c("DAY","WEEK_NO")], col=results$cluster)
Sample data variables: below are all the variables within my dataset; it is a shopping dataset collected over 2 years.
PRODUCT_ID - uniquely identifies each product
household_key - uniquely identifies each household
BASKET_ID - uniquely identifies a purchase occasion
DAY - day when the transaction occurred
QUANTITY - number of products purchased during the trip
SALES_VALUE - amount of dollar retailers receive from sales
STORE_ID - identifies unique stores
RETAIL_DISC - discount applied due to a manufacturer coupon
TRANS_TIME - time of day when the transaction occurred
WEEK_NO - week when the transaction occurred (1-102)
MANUFACTURER - code that links products from the same manufacturer together
DEPARTMENT - groups similar products together
BRAND - indicates private or national label brand
COMMODITY_DESC - groups similar products together at the lower level
SUB_COMMODITY_DESC - groups similar products together at the lowest level
Sample Data
I put together some sample data, so I can help you better:
#generate sample data
sampledata <- matrix(data=rnorm(200,0,1),50,4)
#add ID to data
sampledata <-cbind(sampledata, 1:50)
#show data:
head(sampledata)
[,1] [,2] [,3] [,4] [,5]
[1,] 0.72859559 -2.2864943 -0.5408501 0.1564730 1
[2,] 0.34852943 0.3100891 0.6007349 -0.5985266 2
[3,] -0.04605026 0.5067896 -0.2911211 -1.1617171 3
[4,] -1.88358617 1.3739440 -0.5655383 0.9518367 4
[5,] 0.35528650 -1.7482304 -0.3871520 -0.7837712 5
[6,] 0.38057682 0.1465488 -0.6006462 1.3827544 6
I have a matrix with data points. Each data point has 4 variables (column 1 - 4) and an id (column 5).
Apply K-means
After that I apply the k-means function (but only to columns 1:4, since it doesn't make much sense to cluster the id):
#kmeans (4 centers)
result <- kmeans(sampledata[,1:4], 4)
Analyse output
If I want to see which data point belongs to which cluster, I can type:
result$cluster
The result will be for example:
[1] 4 3 2 2 1 2 4 4 3 3 3 3 2 1 4 4 4 2 4 4 4 1 1 1 3 3 3 3 1 3 2 2 4 4 2 4 2 3 1 2 2 2 1 2 1 1 4 1 1 1
This means that data point 1 belongs to cluster 4. The second data point belongs to cluster 3, and so on...
If I want to retrieve all data points that are in cluster 1, I can do the following:
sampledata[result$cluster==1,]
This will output a matrix with all the values and the data point id in the last column:
[,1] [,2] [,3] [,4] [,5]
[1,] 0.3552865 -1.748230422 -0.3871520 -0.78377121 5
[2,] 0.5806156 0.479576142 1.1314052 1.60730796 14
[3,] 1.1871472 1.280881477 -1.7227361 -0.89045074 22
[4,] 0.8482060 0.726470349 0.6851352 -0.78526581 23
[5,] -0.5324139 -1.745802580 0.6779943 0.99915708 24
[6,] 0.2472263 -0.006298136 -0.1457003 -0.44789364 29
[7,] 0.1412812 -0.247076976 0.9181507 -0.58570904 39
[8,] 0.1859786 -1.768692166 0.5681229 -0.80618157 43
[9,] -1.1577178 -0.179886998 1.5183880 0.40014071 45
[10,] 1.0667566 -1.602875994 0.6010581 -0.49514049 46
[11,] 0.2464646 1.226129859 -1.3628096 -0.37666716 48
[12,] 1.2660358 0.282688323 0.7650636 0.23442255 49
[13,] -0.2499337 0.855327072 0.2290221 0.03492807 50
If I want to know how many data points are in cluster 1, I can type:
sum(result$cluster==1)
This will return 13, and corresponds to the number of lines in the matrix above.
Finally some plotting:
First, let's plot the data. Since you have a multidimensional dataset and a standard plot can only show two dimensions, you have to pick the variables you want to plot, for example variables 2 and 3 (columns 2 and 3). This corresponds to:
sampledata[,2:3]
To plot this data, simply write:
plot(sampledata[,2:3], col=result$cluster ,main="Affiliation of observations")
The argument col (short for color) colors the data points according to their cluster affiliation via col=result$cluster.
If you also want to see the cluster centers in the plot, add the following line:
+ points(result$centers, col=1:4, pch="x", cex=3)
The plot should now look like this (for variable 2 vs variable 3):
(The dots are the data points; the X's are the cluster centers.)
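One caveat worth adding for the shopping data itself: k-means is distance-based, so it should only be run on numeric columns, and those should be scaled first so that large-valued variables (e.g. SALES_VALUE) don't dominate small-valued ones (e.g. DAY). A sketch with simulated stand-in data, since the real dataset isn't available; the column names and the nstart value are assumptions, not something from the question's code:

```r
# stand-in for two numeric columns of the shopping data
set.seed(42)
mydata.features <- data.frame(DAY = sample(1:730, 200, replace = TRUE),
                              SALES_VALUE = rexp(200, rate = 0.1))

scaled <- scale(mydata.features)                 # zero mean, unit variance per column
fit <- kmeans(scaled, centers = 3, nstart = 25)  # nstart restarts guard against poor local optima
table(fit$cluster)                               # cluster sizes
```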
I am not really familiar with the k-means function, and it's hard to help without any sample data. Here, however, is something that might help:
kmeans returns an object of class "kmeans" which has a print and a fitted method. It is a list with at least the following components:
cluster: A vector of integers (from 1:k) indicating the cluster to which each point is allocated.
centers: A matrix of cluster centres.
totss: The total sum of squares.
withinss: Vector of within-cluster sum of squares, one component per cluster.
tot.withinss: Total within-cluster sum of squares, i.e. sum(withinss).
betweenss: The between-cluster sum of squares, i.e. totss-tot.withinss.
size: The number of points in each cluster.
iter: The number of (outer) iterations.
ifault: integer: indicator of a possible algorithm problem – for experts.
(See ?kmeans for the full documentation.)
You can access these components like this:
I.e. if you want to have a look at the clusters:
results$cluster
Or have more details about the centers:
results$centers
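For example, a minimal self-contained sketch of reading those components off a fitted object (simulated data, since the question's dataset isn't available):

```r
set.seed(1)
results <- kmeans(matrix(rnorm(100), ncol = 2), centers = 3, nstart = 10)

results$size          # points per cluster
results$centers       # 3 x 2 matrix of cluster centres
results$tot.withinss  # total within-cluster sum of squares
```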
Sorry if this has been posted before. I looked for the answer both on Google and Stackoverflow and couldn't find a solution.
Right now I have two matrices of data in R. I am trying to loop through each row in one matrix and find the row in the other matrix that is most similar by some distance metric (for now, least squares). I figured out one method, but it is O(n^2), which is prohibitive for my data.
I think this might be similar to some dictionary learning techniques but I couldn't find anything.
Thanks!
Both matrices are just 30 by n matrices with a number at each entry.
distance.fun <- function(mat1, mat2){
  match <- c()
  for (i in 1:nrow(mat1)){
    if (!all(is.na(mat1[i,]))){
      dist <- c()
      for (j in 1:nrow(mat2)){
        dist[j] <- sum((mat1[i,] - mat2[j,])^2)
      }
      # take the index of the minimum once all distances for row i are known
      match[i] <- which.min(dist)
    }
  }
  return(match)
}
A better strategy would be to compute the distance matrix all at once first, then extract the mins. Here's an example using simulated data:
set.seed(15)
mat1<-matrix(runif(2*25), ncol=2)
mat2<-matrix(runif(2*25), ncol=2)
and here's a helper function that can calculate the distances between the rows of one matrix and the rows of another. It uses the built-in dist function; it does perform unnecessary within-group comparisons that we eventually have to filter out, but it may still perform better overall.
distab<-function(m1, m2) {
stopifnot(ncol(m1)==ncol(m2))
m<-as.matrix(dist(rbind(m1, m2)))[1:nrow(m1), -(1:nrow(m1))]
rownames(m)<-rownames(m1)
colnames(m)<-rownames(m2)
m
}
mydist<-distab(mat1, mat2)
Now that we have the between-group distances, we just need to find, for each row of mat1, the nearest row of mat2.
best <- apply(mydist, 1, which.min)
rr <- cbind(m1.row=seq.int(nrow(mat1)), best.m2.row = best)
head(rr) #just print a few
# m1.row best.m2.row
# [1,] 1 1
# [2,] 2 14
# [3,] 3 7
# [4,] 4 3
# [5,] 5 23
# [6,] 6 15
Note that with a strategy like this (as well as with your original implementation) it is possible for multiple rows from mat1 to match the same row in mat2, and for some rows in mat2 to be left unmatched.
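If the extra within-group comparisons ever become a bottleneck, an alternative sketch is to compute only the between-group squared distances directly, using the identity |a - b|^2 = |a|^2 + |b|^2 - 2 a.b (cross_dist2 is a made-up helper name, not part of any package):

```r
set.seed(15)
mat1 <- matrix(runif(2 * 25), ncol = 2)
mat2 <- matrix(runif(2 * 25), ncol = 2)

# squared distance between every row of m1 and every row of m2
cross_dist2 <- function(m1, m2) {
  outer(rowSums(m1^2), rowSums(m2^2), "+") - 2 * tcrossprod(m1, m2)
}

# closest mat2 row for each mat1 row
best <- apply(cross_dist2(mat1, mat2), 1, which.min)
```

Since we only need the argmin, working with squared distances avoids the square root entirely.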
How can I reduce the size of a vector to a lower dimension?
Say, for example, X := (1,2,3,4,5,6,7,8,9,10) is a 10-D vector, and suppose I want to reduce it to a 5-dimensional space. Is there any way to do this?
I have a situation where I need to compare an N-d vector with a corresponding vector of a lower dimension.
There are an infinite number of ways to convert a 10d vector into a 5d vector.
This is like saying "I want a function that takes two integer parameters and returns an integer; can I make such a function?" There are infinitely many such functions.
It really depends on what you want to do with the vector. What are the meanings of your 10d and 5d vectors?
If my assumption is right, the OP would like to convert a vector of 10 values to a matrix with 2 columns.
This could be done easily in R:
# make up the demo data
> v <- c(1,2,3,4,5,6,7,8,9,10)
# modify the dimensions of 'v' to have 2 columns
> dim(v) <- c(5,2)
# and check the result
> v
[,1] [,2]
[1,] 1 6
[2,] 2 7
[3,] 3 8
[4,] 4 9
[5,] 5 10
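The same reshaping can be written in one step with matrix(), which also fills column by column:

```r
v <- c(1,2,3,4,5,6,7,8,9,10)
m <- matrix(v, nrow = 5, ncol = 2)  # column-major fill, same result as dim(v) <- c(5,2)
m[1, 2]  # 6
```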