Confusion matrix using table() in k-means and hierarchical clustering - R

I have some problems with calculating a confusion matrix. I have created three sets of points from multivariate normal distributions:
library(MASS)
library(ggplot2)
library(reshape2)
library(ClusterR)
library(cluster)
library(dplyr)
library(factoextra)
library(dendextend)
library(circlize)
mu1<-c(1,1)
mu2<-c(1,-9)
mu3<-c(-7,-2)
sigma1<-matrix(c(1,1,1,2), nrow=2, ncol=2, byrow = TRUE)
sigma2<-matrix(c(1,-1,-1,2), nrow=2, ncol=2, byrow = TRUE)
sigma3<-matrix(c(2,0.5,0.5,0.3), nrow=2, ncol=2, byrow = TRUE)
simulation1<-mvrnorm(100,mu1,sigma1)
simulation2<-mvrnorm(100,mu2,sigma2)
simulation3<-mvrnorm(100,mu3,sigma3)
X<-rbind(simulation1,simulation2,simulation3)
colnames(X)<-c("x","y")
X<-data.frame(X)
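For a quick visual sanity check before clustering, the three simulated groups can be plotted with their true labels (a sketch; ggplot2 is loaded above):
ggplot(data.frame(X, true = factor(rep(1:3, each = 100))),
       aes(x, y, color = true)) + geom_point()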
I have also constructed clusters using k-means clustering and hierarchical clustering, with k = 3 in both cases:
# k-means clustering
k<-3
B<-kmeans(X, centers = k, nstart = 10)
x_cluster = data.frame(X, group=factor(B$cluster))
ggplot(x_cluster, aes(x, y, color = group)) + geom_point()
# hierarchical clustering
single<-hclust(dist(X), method = "single")
clusters2<-cutree(single, k = 3)
fviz_cluster(list(data = X, cluster = clusters2))
How can I calculate a confusion matrix for the full dataset (X) using table() in both of these cases?

Using your data, insert set.seed(42) just before you create sigma1 so that we have a reproducible example. Then, after you have created X:
X.df <- data.frame(Grp=rep(1:3, each=100), x=X[, 1], y=X[, 2])
k <- 3
B <- kmeans(X, centers = k, nstart = 10)
table(X.df$Grp, B$cluster)
#
#       1   2   3
#   1   1   0  99
#   2   0 100   0
#   3 100   0   0
Original group 1 is identified as cluster 3, with one specimen assigned to cluster 1. Original group 2 is assigned to cluster 2 and original group 3 to cluster 1. The cluster numbers themselves are irrelevant: the classification is perfect if each row/column contains all of its values in a single cell. In this case only 1 specimen was misplaced.
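Because the labels are arbitrary, the attained accuracy can be read off the table by matching each true group with its best cluster (a sketch; it assumes, as here, that the row maxima pick out distinct clusters; for a guaranteed optimal matching see clue::solve_LSAP):
tab <- table(X.df$Grp, B$cluster)
sum(apply(tab, 1, max)) / sum(tab)
# [1] 0.9966667   (299 of 300 points, matching the one misplaced specimen)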
single <- hclust(dist(X), method = "single")
clusters2 <- cutree(single, k = 3)
table(X.df$Grp, clusters2)
#    clusters2
#       1   2   3
#   1  99   1   0
#   2   0   0 100
#   3   0 100   0
The results are the same, but the cluster numbers are different. One specimen from the original group 1 was assigned to the same cluster as the group 3 specimens. To compare these results:
table(Kmeans=B$cluster, Hierarch=clusters2)
#         Hierarch
# Kmeans    1   2   3
#      1    0 101   0
#      2    0   0 100
#      3   99   0   0
Notice that each row/column contains only one cell that is nonzero. The two cluster analyses agree with one another even though the cluster designations differ.
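If you want that agreement as a single label-invariant number, the adjusted Rand index is 1 for identical partitions however the clusters are numbered; a sketch using ClusterR, which is loaded above:
external_validation(B$cluster, clusters2, method = "adjusted_rand_index")
# [1] 1   (given the perfect agreement shown in the table)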
D <- lda(Grp~x + y, X.df)
table(X.df$Grp, predict(D)$class)
#
#       1   2   3
#   1  99   0   1
#   2   0 100   0
#   3   0   0 100
Linear discriminant analysis tries to predict the group number given the values of x and y. Because of this, the group numbers are not arbitrary and the correct predictions all fall on the diagonal of the table. This is what is usually described as a confusion matrix.
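Note that this table uses resubstitution predictions, which are optimistic. MASS::lda can instead produce leave-one-out cross-validated predictions via CV = TRUE, giving a more honest confusion matrix (a sketch):
D.cv <- lda(Grp ~ x + y, X.df, CV = TRUE)
table(X.df$Grp, D.cv$class)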

Related

Find what values were changed to after normalization

How can I see the original values post normalization? Or change them in the final output?
I want to change my final output back to their original values, or at least close to it, considering I aggregate and take the mean.
I have a dataset that has 10 columns and 5,000 rows. After cleaning up the data and selecting which columns and rows I want, I run a normalization code.
Then I run k-means and get my output. How can I see what the values were changed to after normalization? For example, if I have Regions 1, 2, 3, 4, and 5, and post-normalization they change to 0.00, 0.25, 0.50, 0.75, and 1.00, is there a way to change them back to the originals in the k-means output?
normalize = function(X) {
    return(abs(X - min(X)) / (max(X) - min(X)))
}
df_age_norm = as.data.frame(lapply(df_age, normalize))
clusters = kmeans(df_age_norm, 9)[['cluster']]
df_age_norm$clusters = clusters
df_age_norm = aggregate(df_age_norm[, 1:4],
                        list(df_age_norm$clusters), FUN = mean)
Head of dataset before normalization:
Age HHIncome Region MaritalStatus group
 18    11000      5             0     1
 18    11000      5             1     1
 18    12000      2             0     1
 18    12000      4             0     1
 18    13000      1             0     1
Head of dataset after normalization:
Age    HHIncome Region MaritalStatus group
  0 0.001879699   1.00             0     0
  0 0.001879699   1.00             1     0
  0 0.002819549   0.25             0     0
  0 0.002819549   0.75             0     0
  0 0.003759398   0.00             0     0
This solution is inspired by the base R function scale, which centers and scales a numeric vector by subtracting its mean and dividing by its standard deviation. These two values, mean(x) and sd(x), are returned as attributes.
x <- -4:5
y <- scale(x)
attributes(y)
#$dim
#[1] 10 1
#
#$`scaled:center`
#[1] 0.5
#
#$`scaled:scale`
#[1] 3.02765
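Those attributes are what make the transform invertible; a quick sketch:
y * attr(y, "scaled:scale") + attr(y, "scaled:center")
# recovers -4:5 (as a one-column matrix)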
I have therefore rewritten function normalize to also set and return min(x) and max(x) as attributes. They will be used later to denormalize.
normalize <- function(X, na.rm = FALSE) {
    if (na.rm) X <- X[!is.na(X)]
    Min <- min(X)
    Max <- max(X)
    Y <- X - Min
    if (Min != Max) Y <- Y / (Max - Min)
    attr(Y, "scaled:min") <- Min
    attr(Y, "scaled:max") <- Max
    Y
}
denormalize <- function(X) {
    Min <- attr(X, "scaled:min")
    Max <- attr(X, "scaled:max")
    attr(X, "scaled:min") <- NULL
    attr(X, "scaled:max") <- NULL
    Y <- if (Min != Max) X * (Max - Min) else X
    Y <- Y + Min
    Y
}
df_age_norm <- as.data.frame(lapply(df_age, normalize))
df_age_2 <- as.data.frame(lapply(df_age_norm, denormalize))
df_age_2
#   Age HHIncome Region MaritalStatus group
# 1  18    11000      5             0     1
# 2  18    11000      5             1     1
# 3  18    12000      2             0     1
# 4  18    12000      4             0     1
# 5  18    13000      1             0     1
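If the goal is just cluster means on the original scale, a shorter route (a sketch; it assumes the full dataset, since 9 centers need at least 9 rows) is to cluster the normalized data but aggregate the original data by those cluster labels:
clusters <- kmeans(df_age_norm, 9)[["cluster"]]
aggregate(df_age, list(cluster = clusters), FUN = mean)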
Data.
df_age <- read.table(text = "
Age HHIncome Region MaritalStatus group
18 11000 5 0 1
18 11000 5 1 1
18 12000 2 0 1
18 12000 4 0 1
18 13000 1 0 1
", header = TRUE)

conditional which.min function

I have two sets of data, one is coordinates of machines, one is coordinates of the nearest repair shop.
I have a working model that assigns each machine to the nearest store. However, one store has only 1 machine assigned to it and another has 7.
What I want is to add a condition so that each store is assigned at least 2 machines but no more than 4.
library(geosphere)
library(ggplot2)
#machine Locations
machine.x <- c(-122.37, -111.72, -111.87, -112.05, -87.17, -86.57, -86.54, -88.04, -86.61, -88.04, -86.61)
machine.y <- c(37.56, 35.23, 33.38, 33.57, 30.36, 30.75, 30.46, 30.68, 30.42, 30.68, 30.42)
machines <- data.frame(machine.x, machine.y)
#store locations
store.x <- c(-121.98, -112.17, -86.57)
store.y <- c(37.56, 33.59, 30.75)
stores <- data.frame(store.x, store.y)
centers <- data.frame(x = stores$store.x, y = stores$store.y)
pts <- data.frame(x = machines$machine.x, y = machines$machine.y)
# allocate space
distance <- matrix(-1, nrow = length(pts$x), ncol = length(centers$x))
# calculate the distance matrix from the defined centers to each point;
# columns represent centers and rows represent the data points
dm <- apply(data.frame(1:length(centers$x)), 1, function(x) {
    replace(distance[, x], 1:length(pts$x), distGeo(centers[x, ], pts))
})
# find the column with the smallest distance
closestcenter <- apply(dm, 1, which.min)
# color code the original data for verification
colors <- c(stores)
# create a scatter plot of machines color coded by the store they belong to
plot(pts, col = closestcenter, pch = 9)
So what I want is for each group to have a minimum count of 2 and a maximum count of 4. I tried adding an if/else statement to the closest-center assignment, but it didn't come close to working the way I expected, and I haven't found any way online to add a counting condition to the which.min statement.
Note: My actual data set has several thousand machines and over 100 stores.
If M is an 11 x 3 zero-one matrix where M[i,j] = 1 if machine i is assigned to store j and 0 otherwise, then the rows of M must each sum to 1, the columns must each sum to between 2 and 4 inclusive, and we want the M that minimizes the total distance sum(M * dm). This gives the 0-1 linear program shown below. Here A is constructed so that A %*% c(M) is the same as rowSums(M), and B so that B %*% c(M) is the same as colSums(M).
library(lpSolve)
k <- 3
n <- 11
dir <- "min"
objective.in <- c(dm)
A <- t(rep(1, k)) %x% diag(n)
B <- diag(k) %x% t(rep(1, n))
const.mat <- rbind(A, B, B)
const.dir <- c(rep("==", n), rep(">=", k), rep("<=", k))
const.rhs <- c(rep(1, n), rep(2, k), rep(4, k))
res <- lp(dir, objective.in, const.mat, const.dir, const.rhs, all.bin = TRUE)
res
## Success: the objective function is 9025807
soln <- matrix(res$solution, n, k)
which gives this solution:
> soln
[,1] [,2] [,3]
[1,] 1 0 0
[2,] 1 0 0
[3,] 0 1 0
[4,] 0 1 0
[5,] 0 1 0
[6,] 0 0 1
[7,] 0 0 1
[8,] 1 0 0
[9,] 0 0 1
[10,] 0 1 0
[11,] 0 0 1
or in terms of the vector of store numbers assigned to each machine:
c(soln %*% (1:k))
## [1] 1 1 2 2 2 3 3 1 3 2 3
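As a quick sanity check of the constraints (a sketch), every machine should receive exactly one store and every store between 2 and 4 machines:
rowSums(soln)   # all 1: each machine assigned to exactly one store
colSums(soln)   # 3 4 4 here: every store within the 2-4 range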

Merge all possible combinations of multiple data frames

I would like to merge by columns all the possible pair combinations of these three data frames (i.e. nine combinations)
frame1 = data.frame(a=c(1,2,3), b=c(1,2,3), c=c(1,2,3))
frame2 = data.frame(a=c(2,1,3), b=c(2,1,3), c=c(2,1,3))
frame3 = data.frame(a=c(3,2,1), b=c(3,2,1), c=c(3,2,1))
which contain the same 3 rows each, but not in the same order, so I would also like the merging to match on the pair of values in columns a and b of the two frames being merged. Example:
a b c
1 1 1
2 2 2
3 3 3
+
a b c
2 2 2
1 1 1
3 3 3
=
a.x b.x c.x a.y b.y c.y
1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 3 3 3
I then want to obtain the difference between each pair of values in the columns c.x and c.y of each merged frame, in absolute value, and sum all these differences to obtain a "score" (which would of course be zero in this example). I would like to add this score to an empty 3x3 matrix in the corresponding cell (i.e., the score of frame1 vs. frame2 should be located in cell [2,1], etc.):
nframes = 3
frames = c(frame1, frame2, frame3)
matrix = matrix(, nrow = nframes, ncol = nframes)
matrix_scores = data.frame(matrix)
for (i in frames) {
    for (j in frames) {
        x = merge(i, j, by = c("a", "b"))
        score = sum(abs(x$c.x - x$c.y))
        matrix_scores[j, i] <- score
    }
}
However, when I run the loop I obtain the following message:
Error in fix.by(by.x, x) : 'by' must specify uniquely valid columns
Also, I understand that the line
matrix_scores[j,i] <- score
will give an error, too, but I do not know how to express that I want the score to be stored in cell [1,1], for the first iteration of the loop (frame1 vs. frame1).
The resulting matrix should be a 3x3 matrix containing all zeros:
        f1 f2 f3
frame1   0  0  0
frame2   0  0  0
frame3   0  0  0
The error occurs because c(frame1, frame2, frame3) flattens the three data frames into a single list of nine columns, so each i in your loop is a lone column with no a or b to merge by. Store the frames in a list instead. You can do:
# Put all frames in a list
d <- list(frame1, frame2, frame3)
# get all merge-combinations
gr <- expand.grid(1:length(d), 1:length(d))
# function to merge and get the sum diff:
foo <- function(i, x, gr){
tmp <- merge(x[[gr[i, 1]]], x[[gr[i, 2]]], by=c("a", "b"))
sum(abs(tmp$c.x - tmp$c.y))
}
# result matrix
matrix(sapply(1:nrow(gr), foo, d, gr), length(d), length(d), byrow = T)
[,1] [,2] [,3]
[1,] 0 0 0
[2,] 0 0 0
[3,] 0 0 0
# The scores are set as followed:
matrix(apply(gr, 1, paste, collapse="_"), 3, 3, byrow = T)
[,1] [,2] [,3]
[1,] "1_1" "2_1" "3_1"
[2,] "1_2" "2_2" "3_2"
[3,] "1_3" "2_3" "3_3"
# alternative using apply:
# function to merge and get the sum diff:
foo <- function(y, x){
tmp <- merge(x[[ y[1] ]], x[[ y[2] ]], by=c("a", "b"))
sum(abs(tmp$c.x - tmp$c.y))
}
# result matrix
matrix(apply(gr, 1, foo, d), length(d), length(d), byrow = T)
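If you also want the score matrix labeled like the expected output, dimnames can be attached directly (a sketch; scores is just an illustrative name):
scores <- matrix(apply(gr, 1, foo, d), length(d), length(d), byrow = TRUE,
                 dimnames = list(c("frame1", "frame2", "frame3"),
                                 c("f1", "f2", "f3")))
scores
#        f1 f2 f3
# frame1  0  0  0
# frame2  0  0  0
# frame3  0  0  0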

Computing pairwise distances between a set of intervals

Let's say I have a set of closed linear intervals represented by this matrix:
interval.mat = matrix(c(1,2,3,5,4,6,8,9), byrow = TRUE, ncol = 2)
where interval.mat[,1] are the interval start points and interval.mat[,2] are their corresponding end points.
I'm looking for an efficient way (this example matrix is a toy; in reality my matrix contains a few thousand intervals) to produce a matrix that holds all the pairwise positive distances between the intervals. The distance between a pair of intervals is the start of the interval with the larger end minus the end of the interval with the smaller end. For example, the distance between intervals c(1,2) and c(3,5) should be 3 - 2 = 1, since the second interval ends after the first one. If the intervals overlap, the distance is 0; so for c(3,5) and c(4,6) the distance is 0.
So, the pairwise distance matrix for the intervals above would be:
> matrix(c(0,1,2,6,1,0,0,3,2,0,0,2,6,3,2,0), byrow = TRUE, nrow = 4, ncol = 4)
[,1] [,2] [,3] [,4]
[1,] 0 1 2 6
[2,] 1 0 0 3
[3,] 2 0 0 2
[4,] 6 3 2 0
Here's an Rcpp solution. It will be fast and memory efficient (for details see below).
First let's define a helper function which calculates all the pairwise distances. If n is the number of intervals to consider, we have n*(n-1)/2 unique pairs of vectors (we don't take the same intervals into account, of course, as the distance between them is 0).
library('Rcpp')
library('inline')
cppFunction("
NumericVector distint_help(NumericMatrix x) {
int n = x.nrow(); // number of rows
NumericVector out(n*(n-1)/2); // result numeric vector
int k = 0;
for (int i=0; i<n-1; ++i) {
for (int j=i+1; j<n; ++j) {
if (x(i,0) >= x(j,1))
out[k++] = x(i,0)-x(j,1);
else if (x(j,0) > x(i,1))
out[k++] = x(j,0)-x(i,1);
else
out[k++] = 0.0;
}
}
return out;
}
")
The above function returns a numeric vector with the calculated distances. Let's mimic the output of the built-in dist function (check the result of x <- dist(interval.mat); unclass(x)).
Now the main function:
distint <- function(interval) {
    stopifnot(is.numeric(interval), is.matrix(interval), ncol(interval) == 2)
    res <- distint_help(interval)  # use Rcpp to calculate the distances
    # return the result in the same form as dist()
    structure(res, class = 'dist', Size = nrow(interval), Diag = FALSE, Upper = FALSE)
}
distint(interval.mat)
##   1 2 3
## 2 1
## 3 2 0
## 4 6 3 2
The above object may be converted to an "ordinary" square matrix:
as.matrix(distint(interval.mat))
## 1 2 3 4
## 1 0 1 2 6
## 2 1 0 0 3
## 3 2 0 0 2
## 4 6 3 2 0
Unless the distance matrix is sparse (i.e. it contains very many zeros), the above solution is storage efficient.
A benchmark:
test <- matrix(runif(1000), ncol=2)
library('microbenchmark')
library(proxy)
f <- function(x,y) max(min(x)-max(y),0)
microbenchmark(distint(test), as.matrix(dist(test, method=f)), times=10)
## Unit: milliseconds
##                               expr        min         lq     median         uq        max neval
##                      distint(test)   1.584548   1.615146   1.650645   3.071433   3.164231    10
##  as.matrix(dist(test, method = f)) 455.300974 546.438875 551.596582 599.977164 609.418194    10
You can use the proxy package, which has a dist(...) method that allows a user-defined distance function. Note that loading this package will mask the dist(...) function in base R:
library(proxy)
f <- function(x,y) max(min(x)-max(y),0)
as.matrix(dist(interval.mat,method=f))
# 1 2 3 4
# 1 0 1 2 6
# 2 1 0 0 3
# 3 2 0 0 2
# 4 6 3 2 0
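For completeness, the same matrix can be built in vectorized base R: with A[i,j] = start_i - end_j, the distance is max(0, A[i,j], A[j,i]), so the whole matrix is a single pmax call. A sketch (concise, but it materializes dense n x n matrices, so it is less memory efficient than the Rcpp approach for thousands of intervals):
starts <- interval.mat[, 1]
ends <- interval.mat[, 2]
A <- outer(starts, ends, "-")  # A[i, j] = start_i - end_j
pmax(A, t(A), 0)
#      [,1] [,2] [,3] [,4]
# [1,]    0    1    2    6
# [2,]    1    0    0    3
# [3,]    2    0    0    2
# [4,]    6    3    2    0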

Frequency table comparison using R

I have two frequency tables created using R's table() function:
freq1 <- table(unlist(strsplit(topic_list1, split=";")))
freq2 <- table(unlist(strsplit(topic_list2, split=";")))
topic_list1 and topic_list2 are strings that contain textual representations of topics separated by ;.
I want a way to compare the two frequencies, graphically if possible.
So if the two lists contain the same topic with different frequencies, I would like to be able to see it. The same goes for topics present in one frequency table, but not in the other.
There's probably a more elegant way to do this, but this ought to work:
# here I'm generating some example data
set.seed(5)
# collapse (not sep) joins the sampled topics into one ;-separated string
topic_list1 <- paste(sample(letters, 20, replace=T), collapse=";")
topic_list2 <- paste(sample(letters, 15, replace=T), collapse=";")
# I don't make the tables right away
tl1 <- unlist(strsplit(topic_list1, split=";"))
tl2 <- unlist(strsplit(topic_list2, split=";"))
big_list <- unique(c(tl1, tl2))
# this computes your frequencies
lbl <- length(big_list)
tMat1 <- matrix(rep(tl1, lbl), byrow=T, nrow=lbl)
tMat2 <- matrix(rep(tl2, lbl), byrow=T, nrow=lbl)
tMat1 <- cbind(big_list, tMat1)
tMat2 <- cbind(big_list, tMat2)
counts1 <- apply(tMat1, 1, function(x){sum(x[1]==x[2:length(x)])})
counts2 <- apply(tMat2, 1, function(x){sum(x[1]==x[2:length(x)])})
total_freqs <- rbind(counts1, counts2, counts1-counts2)
# this makes it nice looking & user friendly
colnames(total_freqs) <- big_list
rownames(total_freqs) <- c("topics1", "topics2", "difference")
total_freqs <- total_freqs[ ,order(total_freqs[3,])]
total_freqs
            d  l  a  z  b f s y m r x h n i g k c v o
topics1     0  0  0  0  0 2 1 1 1 1 2 2 1 1 1 1 2 2 2
topics2     2  2  2  1  1 2 1 1 1 0 1 1 0 0 0 0 0 0 0
difference -2 -2 -2 -1 -1 0 0 0 0 1 1 1 1 1 1 1 2 2 2
From there you could just use the straight numbers or visualize them however you want (e.g, dotplots, etc.). Here's a simple dotplot:
windows()  # opens a plot device on Windows; use x11() or quartz() elsewhere
dotchart(t(total_freqs)[,3], main="Frequencies of topics1 - topics2")
abline(v=0)
You can simply barplot them (with the beside=TRUE argument), which gives you a way to visually compare the counts per level. Below is an example:
counts <- table(mtcars$vs, mtcars$gear)
barplot(counts, col=c("darkblue","red"), legend=rownames(counts), beside=T)
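Applied to the two frequency tables from the question, the same idea works once both tables are aligned on the union of topics (a sketch; topics missing from one table index as NA and are zeroed out):
all_topics <- union(names(freq1), names(freq2))
m <- rbind(topics1 = freq1[all_topics], topics2 = freq2[all_topics])
m[is.na(m)] <- 0                 # a topic absent from one list has count 0
colnames(m) <- all_topics
barplot(m, beside = TRUE, legend = rownames(m), col = c("darkblue", "red"))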
