Visualization of clustering results for mixed data - R

I'm working with fuzzy clustering for mixed data, and I want to visualize the clustering result.
Here is my data
> head(x)
x1 x2 x3 x4
A C 8.461373 27.62996
B C 10.962334 27.22474
A C 9.452127 27.57246
B D 8.196687 27.29332
A D 8.961367 26.72793
B C 8.009029 27.97227
I followed the steps from https://www.r-bloggers.com/clustering-mixed-data-types-in-r/:
library(cluster)   # daisy() for the Gower dissimilarity
library(Rtsne)
library(ggplot2)

gower_dist <- daisy(x, metric = "gower")
# type = list(logratio = 1))

# embed the Gower dissimilarities in two dimensions with t-SNE
tsne_obj <- Rtsne(gower_dist, dims = 2, is_distance = TRUE)

# g1$clusters holds the fuzzy-cluster memberships
tsne_data <- data.frame(tsne_obj$Y, factor(g1$clusters))   # columns X1, X2, and the cluster factor
colnames(tsne_data)[3] <- "cluster"

ggplot(aes(x = X1, y = X2), data = tsne_data) +
  geom_point(aes(color = cluster))
Based on the website, the first step transforms the data into a Gower dissimilarity matrix (I guess), and then applies t-SNE.
So my questions are:
Is it good to use Rtsne for mixed data (as a 2D representation of the points)? I have doubts: using the Gower distance in the first step feels like forcing the categorical data to be numeric.
But one thing amazed me: based on the plots, my method always gives a better result than the classic method, so it is important for me to understand this better. Can I use the plot as a tool to measure the goodness of a clustering result? Because based on the plot it is not difficult to determine which method is better (by plotting the clustering result). I give the plot images below; I am really impressed by them.
[Plot: clustering result, classic method]
[Plot: clustering result, my method]
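As a more objective check than eyeballing the t-SNE plot, one common measure of clustering quality is the average silhouette width, which can be computed directly on the Gower dissimilarities. A minimal sketch, assuming g1$clusters from the code above holds the cluster memberships:
library(cluster)

# silhouette widths based on the Gower dissimilarities, not on the t-SNE coordinates
sil <- silhouette(as.integer(factor(g1$clusters)), gower_dist)
summary(sil)$avg.width   # higher average width = better separated clusters
plot(sil)
This gives a number to compare methods with, rather than relying on how well separated the clusters look after the t-SNE embedding (which can exaggerate separation).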

Related

How to generate a multivariate spline basis in R?

I want to obtain a multivariate spline basis using R, but I do not know how to do it properly or what the best approach is. From my limited research on the Internet, I think the package that can help me is mgcv, with the functions ti and smooth.construct.tensor.smooth.spec, but I am not sure.
The structure of my data is simple. I have two vectors, xdata and alphadata (called tau in the code below), generated as
n = 200
T = 2
xdata = as.matrix(rnorm(T*n), T*n, 1)
tau = seq(-2, 2, by = 0.1)
tau = as.matrix(tau, length(tau), 1)
So basically I have two vectors, xdata and alphadata, of dimension n*T and 41, respectively. My goal is then to obtain a spline basis (for example a cubic spline) which should be a function of both: b(alphadata, xdata).
What I have tried so far is something like this:
library(mgcv)

xdata_data <- data.frame("xdata" = xdata[, 1])
tau_data <- data.frame("tau" = tau[, 1])
basisobj1 <- ti(tau_data, xdata_data, bs = "cr", k = c(6, 6), fx = TRUE)  # cr: cubic regression splines
xdata_data <- data.frame("xdata_data" = xdata[, 1])
tau_data <- data.frame("tau_data" = tau[, 1])
basisobj2 <- smooth.construct.tensor.smooth.spec(basisobj1, data = c(tau_data, xdata_data), knots = NULL)
basis <- basisobj2[["X"]]
Note that I manipulated my data; otherwise I got errors from smooth.construct.tensor.smooth.spec.
My questions are:
(1) With the previous approach, am I doing what I want?
(2) Is this a smart approach for what I want?
(3) When I do the above, the number of rows of basis is 41, but shouldn't the number of rows of basis equal the product of the dimensions of xdata and alphadata, since the basis is a function of two vectors? (See the sketch below.)
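Regarding question (3): the basis matrix returned has one row per row of the data you pass in, so to get a basis value for every (tau, xdata) pair you would evaluate the smooth on the full grid of combinations. A minimal sketch using mgcv::smoothCon on an expand.grid of the two vectors (my own illustration of the dimensions, not necessarily the best way to build the basis):
library(mgcv)

# evaluate the tensor-product basis at every (tau, xdata) combination
grid <- expand.grid(tau = tau[, 1], xdata = xdata[, 1])
sm <- smoothCon(ti(tau, xdata, bs = "cr", k = c(6, 6), fx = TRUE),
                data = grid, knots = NULL)[[1]]
basis_grid <- sm$X
dim(basis_grid)   # 41 * (n*T) rows, one per row of `grid`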

Is there a way to simulate time series data with a specific rolling mean and autocorrelation in R?

I have an existing time series (1000 samples) and calculated the rolling mean using the filter() function in R, averaging across 30 samples at a time. The goal was to create a "smoothed" version of the time series. Now I would like to create artificial data that "look like" the original time series, i.e., are somewhat noisy, but would produce the same rolling mean if I applied the same filter() function to them. In short, I would like to simulate a time series with the same overall course as, but not exactly the same values as, an existing time series. The overall goal is to investigate whether certain methods can detect similarity of trends between time series even when the fluctuations around the trend are not the same.
To provide some data, my time series looks somewhat like this:
set.seed(576)
ts <- arima.sim(model = list(order = c(1,0,0), ar = .9), n = 1000) + 900
# save in dataframe
df <- data.frame("ts" = ts)
# plot the data
plot(ts, type = "l")
The filter function produces the rolling mean:
my_filter <- function(x, n = 30){filter(x, rep(1 / n, n), sides = 2, circular = T)}
df$rolling_mean <- my_filter(df$ts)
lines(df$rolling_mean, col = "red")
To simulate data, I have tried the following:
Adding random noise to the rolling mean.
df$sim1 <- df$rolling_mean + rnorm(1000, sd = sd(df$ts))
lines(df$sim1, col = "blue")
df$sim1_rm <- my_filter(df$sim1)
lines(df$sim1_rm, col = "green")
The problem is that a) the variance of the simulated values is higher than the variance of the original values, b) the rolling average, although quite similar to the original, sometimes deviates from it quite a bit, and c) there is no autocorrelation. Having an autocorrelation structure in the data would be good, since the simulated data are supposed to resemble the original.
Edit: Problem a) can be solved by using sd = sqrt(var(df$ts)-var(df$rolling_mean)) instead of sd = sd(df$ts).
I tried arima.sim(), which seems like an obvious choice to specify the autocorrelation that should be present in the data. I modeled the original data using arima(), using the model parameters as input for arima.sim().
ts_arima <- arima(ts, order = c(1,0,1))
my_ar <- ts_arima$coef["ar1"]
my_ma <- ts_arima$coef["ma1"]
my_intercept <- ts_arima$coef["intercept"]
df$sim2 <- arima.sim(model = list(order = c(1,0,1), ar = my_ar, ma = my_ma), n = 1000) + my_intercept
plot(df$ts)
lines(df$sim2, col = "blue")
The resulting time series is very different from the original. Maybe a higher order for ar and ma in arima.sim() would solve this, but I think a whole different method might be more appropriate.
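One possible way to combine the two attempts above (just a sketch of that idea, not necessarily the most appropriate method) is to keep the rolling mean as the trend and add AR(1) noise fitted to the fluctuations around it, so the simulated series has roughly the same rolling mean plus an autocorrelation structure resembling the original:
# fluctuations around the trend
resid_ts <- df$ts - df$rolling_mean

# fit an AR(1) to the fluctuations and simulate new noise with the same parameters
ar_fit <- arima(resid_ts, order = c(1, 0, 0), include.mean = FALSE)
noise <- arima.sim(model = list(ar = coef(ar_fit)["ar1"]), n = 1000,
                   sd = sqrt(ar_fit$sigma2))

df$sim3 <- df$rolling_mean + as.numeric(noise)
df$sim3_rm <- my_filter(df$sim3)

plot(df$ts, type = "l")
lines(df$rolling_mean, col = "red")
lines(df$sim3, col = "blue")
lines(df$sim3_rm, col = "green")
The rolling mean of the simulated series will still not match the original exactly, since the added noise does not average out completely over 30 samples.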

cluster::daisy drops labels

I am trying to cluster data using the cluster::daisy function and a dissimilarity matrix. The data look as shown below.
> head(score_mat_unique_3)
ID_1 ID_2 Score
1: 1000035849 1000532512 2.49e-60
2: 1000035849 1000682765 3.87e-08
3: 1000086994 1000658924 8.90e-18
4: 1000234640 1000535109 1.20e-87
5: 1000235015 1000754236 6.29e-34
6: 1000258002 1000598768 8.34e-36
Score shows how different the objects are (the larger the value, the more different the objects), so I use a dissimilarity matrix and the daisy function.
diss3 <- daisy(score_mat_unique_3, metric = "gower")
But when I try to plot the hclust result, some numbers are printed instead of the IDs.
fit3 <- hclust(diss3, method = "ward.D2")
plot(fit3)
Accordingly, information about the objects in the clusters is lost. How can I recover the original IDs and understand which IDs are in which clusters?
You just need to use the labels argument to plot. Since you don't provide your data, I will illustrate with the built-in iris data.
DAT = iris[sample(26), 1:4]   # 26 rows, so they can be labelled with LETTERS
fit3 = hclust(dist(DAT))
plot(fit3, labels = LETTERS)
In your case, try plot(fit3, labels=score_mat_unique_3$ID_1)

R - Compare performance of two types while controlling for interaction

I have been programming in R and have a dataset containing the results (success or not) of two machine learning algorithms which have been tried out using different amounts of parameters. An example is provided below:
type success parameter_amount
a1 0 15639
a1 0 18623
a1 1 19875
a2 1 12513
a2 1 10256
a2 0 12548
I now want to compare both algorithms to see which one has the best overall performance. But there is a catch: it is known that the higher the parameter_amount, the higher the chance of success. Looking at the parameter amounts both algorithms were tested on, one can also notice that a1 was tested with higher parameter amounts than a2. This would make simply counting the number of successes of each algorithm unfair.
What would be a good approach to handle this scenario?
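For reference, one conventional way to control for a covariate like parameter_amount is a logistic regression with both the algorithm type and the parameter amount as predictors. A minimal sketch on the toy rows above (the data-frame construction and the glm call are my own illustration, not part of the question or the answer below; with only six rows it is purely illustrative):
# hypothetical encoding of the toy example above
dat <- data.frame(
  type             = c("a1", "a1", "a1", "a2", "a2", "a2"),
  success          = c(0, 0, 1, 1, 1, 0),
  parameter_amount = c(15639, 18623, 19875, 12513, 10256, 12548)
)

# effect of algorithm type after adjusting for parameter amount
fit <- glm(success ~ type + parameter_amount, family = binomial, data = dat)
summary(fit)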
I will give you an answer, but without any guarantee about the truth of what I'm telling you; for more precision you should give more information about the algorithms and the setup. I also propose migrating this question to Cross Validated.
Your question is really a statistical one, because in statistics we look for sparsity: at a given level of performance we prefer a simpler model to a very complex one, since we worry about over-fitting: https://statisticsbyjim.com/regression/overfitting-regression-models/.
One way to do what you want is to compare the performance with respect to the complexity of the model, as in this toy example:
library(tidyverse)
library(ggplot2)

set.seed(123)

# number of estimates for each model
n <- 1000

performance_1 <- round(runif(n))
complexity_1  <- round(rnorm(n, mean = n, sd = 50))
performance_2 <- round(runif(n, min = 0, max = 0.6))
complexity_2  <- round(rnorm(n, mean = n, sd = 50))

df <- data.frame(performance = c(performance_1, performance_2),
                 complexity  = c(complexity_1, complexity_2),
                 models      = as.factor(c(rep(1, n), rep(2, n))))

# total successes at each level of complexity, per model
temp <- df %>% group_by(complexity, models) %>% summarise(perf = sum(performance))

ggplot(temp, aes(x = complexity, y = perf, group = models, fill = models)) +
  geom_smooth() +
  theme_classic()
This only works if you have many data points. Complexity, in your case, is the number of parameters fitted. In this toy example, the first model seems better because it performs better at every level of complexity.

How do I predict new data's cluster after clustering training data?

I have already trained my clustering model using hclust:
model = hclust(distances, method = "ward")
And the result (the dendrogram) looks good.
Now I have some new data records, and I want to predict which cluster each of them belongs to. How do I do that?
Clustering is not supposed to "classify" new data; that, as the name suggests, is the core concept of classification.
Some clustering algorithms (the centroid-based ones: k-means, k-medians, etc.) can "label" a new instance based on the model created. Unfortunately, hierarchical clustering is not one of them: it does not partition the input space, it just "connects" the objects given during clustering, so you cannot assign a new point to this model.
The only "solution" for using hclust to "classify" is to build another classifier on top of the labelled data produced by hclust. For example, you can train k-NN (even with k = 1) on the data with the labels from hclust and use it to assign labels to new points.
As already mentioned, you can use a classifier such as class::knn to determine which cluster a new individual belongs to.
The KNN or k-nearest neighbors algorithm is one of the simplest machine learning algorithms and is an example of instance-based learning, where new data are classified based on stored, labeled instances. More specifically, the distance between the stored data and the new instance is calculated by means of some kind of a similarity measure. This similarity measure is typically expressed by a distance measure such as the Euclidean distance.
Below is example code for the iris data.
library(scorecard)
library(factoextra)
library(class)
# split iris into training and test sets
df_iris <- split_df(iris, ratio = 0.75, seed = 123)

# hierarchical clustering of the scaled training features
d_iris <- dist(scale(df_iris$train[, -5]))
hc_iris <- hclust(d_iris, method = "ward.D2")
fviz_dend(hc_iris, k = 3, cex = 0.5, k_colors = c("#00AFBB", "#E7B800", "#FC4E07"),
          color_labels_by_k = TRUE, ggtheme = theme_minimal())

# cluster labels for the training data
groups <- cutree(hc_iris, k = 3)
table(groups)
Predict new data
knnClust <- knn(train = df_iris$train[,-5], test = df_iris$test[,-5] , k = 1, cl = groups)
knnClust
[1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 3 2 3 3 3 2 2 2 2 2 3 3 2 2 3 2 2 2 2 2 2 2 2 2
Levels: 1 2 3
# p1 <- fviz_cluster(list(data = df_iris$train[,-5], cluster = groups), stand = F) + xlim(-11.2,-4.8) + ylim(-3,3) + ggtitle("train")
# p2 <- fviz_cluster(list(data = df_iris$test[,-5], cluster = knnClust),stand = F) + xlim(-11.2,-4.8) + ylim(-3,3) + ggtitle("test")
# gridExtra::grid.arrange(p1,p2,nrow = 2)
pca1 <- data.frame(prcomp(df_iris$train[,-5], scale. = T)$x[,1:2], cluster = as.factor(groups), factor = "train")
pca2 <- data.frame(prcomp(df_iris$test[,-5], scale. = T)$x[,1:2], cluster = as.factor(knnClust), factor = "test")
pca <- as.data.frame(rbind(pca1,pca2))
Plot train and test data
ggplot(pca, aes(x = PC1, y = PC2, color = cluster, size = 1, alpha = factor)) +
geom_point(shape = 19) + theme_bw()
You can use this classification and then use LDA to predict which class the new point should fall into.
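A minimal sketch of that LDA idea, reusing df_iris and the hclust labels groups from the code above (the use of MASS::lda here is my own illustration):
library(MASS)

# train LDA on the hclust cluster labels, then predict clusters for the new data
lda_fit <- lda(x = df_iris$train[, -5], grouping = factor(groups))
lda_pred <- predict(lda_fit, newdata = df_iris$test[, -5])$class
table(lda_pred)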
I faced a similar problem and worked out a temporary solution.
In my R environment, the function hclust gives labels for the training data.
We can use a supervised learning model to reconnect the labels and the features, and then apply the same data processing we would use for any supervised learning model.
If it is a binary classification problem, we can use the KS statistic, the AUC and so on to assess the performance of the clustering.
Similarly, we can apply PCA to the features and extract PC1 as a label; by binning this label we get a new label suited to classification, and we then proceed as we would for any classification model.
In R, I find the PCA approach much faster than hclust (Mayank 2016).
In practice, I find this approach easy to deploy, but I am not sure whether this temporary solution introduces bias into the predictions.
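A sketch of the PC1-binning idea described above, using iris purely for illustration (the choice of three equal-frequency bins is arbitrary):
# PCA on the scaled features; take PC1 as a pseudo-label
X <- scale(iris[, 1:4])
pc1 <- prcomp(X)$x[, 1]

# bin PC1 into three equal-frequency classes to use as a classification label
label <- cut(pc1,
             breaks = quantile(pc1, probs = seq(0, 1, length.out = 4)),
             include.lowest = TRUE, labels = c("low", "mid", "high"))
table(label)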
Ref
Mayank (2016). "hclust() in R on large datasets." Stack Overflow.
Why not compute the centroid of the points in each hclust cluster, then assign a new point to the nearest centroid using the same distance function?
knn in class only looks at the k nearest neighbours and only allows Euclidean distance.
There is no need to run a classifier.
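A minimal sketch of that centroid idea, assuming numeric features and Euclidean distance (the iris split and the variable names are just for illustration):
set.seed(1)
idx <- sample(nrow(iris), 100)

# cluster the training rows
train <- scale(iris[idx, 1:4])
fit <- hclust(dist(train), method = "ward.D2")
labels <- cutree(fit, k = 3)

# one centroid per cluster: the column means of its members
centroids <- apply(train, 2, function(col) tapply(col, labels, mean))

# scale the new rows with the training centring/scaling, then assign each one
# to the cluster whose centroid is nearest in Euclidean distance
new_data <- scale(iris[-idx, 1:4],
                  center = attr(train, "scaled:center"),
                  scale  = attr(train, "scaled:scale"))
pred <- apply(new_data, 1, function(x) which.min(colSums((t(centroids) - x)^2)))
table(pred)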
