Here's what I can use to list the weights for all terminal nodes, but how can I add some code to get the response prediction as well as the weight for each terminal node ID?
Say I want my output to look like this:
--
Here below is what I have so far to get the weights:
nodes(airct, unique(where(airct)))
Thank you
The BinaryTree is a big S4 object, so it is sometimes difficult to extract the data.
However, the plot method for a BinaryTree object has an optional panel function of the form function(node) for plotting the terminal nodes, so when you plot you can collect node information.
Here I use the plot function to extract the information and, even better, I use the gridExtra package to render each terminal node as a table.
library(party)
library(gridExtra)
set.seed(100)
lls <- data.frame(N = gl(3, 50, labels = c("A", "B", "C")),
                  a = rnorm(150) + rep(c(1, 0, 150)),
                  b = runif(150))
pond <- sample(1:5, 150, replace = TRUE)
tt <- ctree(formula = N ~ a + b, data = lls, weights = pond)
output.df <- data.frame()
innerWeights <- function(node){
  dat <- data.frame(x = node$nodeID,
                    y = sum(node$weights),
                    z = paste(round(node$prediction, 2), collapse = ' '))
  grid.table(dat,
             cols = c('ID', 'Weights', 'Prediction'),
             h.even.alpha = 1,
             h.odd.alpha  = 1,
             v.even.alpha = 0.5,
             v.odd.alpha  = 1)
  output.df <<- rbind(output.df, dat) # note the use of <<-
}
plot(tt, type='simple', terminal_panel = innerWeights)
The collected data:
ID Weights Prediction
1 4 24 0.42 0.5 0.08
2 5 17 0.06 0.24 0.71
3 6 24 0.08 0 0.92
4 7 388 0.37 0.37 0.26
Here's what I found. It works fine and gives a bit of extra information, but I just want to post it here in case someone needs it in the future.
y <- do.call(rbind, nodes(tt, unique(where(tt))))
write.table(y, 'clipboard', sep='\t')
@agstudy, let me know what you think.
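For reference, here is a minimal non-plotting sketch of the same idea (assuming the ctree object tt from above and the node slots nodeID, weights and prediction already used in the panel function): it builds the same ID / weight / prediction table directly from nodes().
# Sketch: collect ID, summed weights and rounded prediction per terminal node
term_ids  <- unique(where(tt))   # terminal node IDs
node_info <- do.call(rbind, lapply(nodes(tt, term_ids), function(node) {
  data.frame(ID         = node$nodeID,
             Weights    = sum(node$weights),
             Prediction = paste(round(node$prediction, 2), collapse = " "))
}))
node_info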
I'm trying to make this function work, but I am failing.
What I need is a function that reads the names of a data frame's columns and uses them to perform a Wilcoxon test on each of those columns. "result" would be the main final product: a table with the genus names and their p-values, one per row. I've also added a plotting feature for visualizing the values among groups for each column; I would save the plots, naming them after the corresponding genus.
library("dplyr")
library("ggpubr")
library(PairedData)
library(tidyr)
process <- function(data, genus){
  group_by(data, group) %>% summarise(
    count = n(),
    median = median(genus, na.rm = TRUE),
    IQR = IQR(genus, na.rm = TRUE)
  )
  # Subset data before and after treatment
  T0 <- subset(data, group == "T0", genus, drop = TRUE)
  T2 <- subset(data, group == "T2", genus, drop = TRUE)
  # Wilcoxon test for paired data, I want a table of names and corresponding p-values
  res <- wilcox.test(T0, T2, paired = TRUE)
  res$p.value
  result <- spread(genus, res$p.value)
  # Plot paired data, with title depending on the data and its p-value (this last one could be optional)
  pd <- paired(T0, T2)
  tiff(genus".tiff", width = 600, height = 400)
  plot(pd, type = "profile") + labs(title=print(data[,genus]", paired p-value="res[,p.value]) + theme_bw()
  dev.off()
}
l <- length(my_data)
glist <- list(colnames(my_data[3:l])) #bacteria start at col 3
wilcoxon <- process(data = my_data, genus = glist)
A reproducible dataset could be
my_data
Patient group Subdoligranulum Agathobacter
pt_10T0 T0 0.02 0.00
pt_10T2 T2 10.71 19.89
pt_15T0 T0 29.97 0.28
pt_15T2 T2 16.10 7.70
pt_20T0 T0 2.39 0.44
pt_20T2 T2 20.48 3.35
pt_32T0 T0 12.23 0.17
pt_32T2 T2 37.11 1.87
pt_36T0 T0 0.64 0.03
pt_36T2 T2 0.02 0.08
pt_39T0 T0 0.04 0.01
pt_39T2 T2 0.36 0.05
pt_3t0 T0 13.23 1.34
pt_3T2 T2 19.22 1.51
pt_9T0 T0 11.69 0.57
pt_9T2 T2 34.56 3.52
I'm not very familiar with functions and haven't yet found a good tutorial on how to build them around a data frame... so this is my best attempt; I hope some of you can make it work.
Thank you for the help!
Simply return the needed value at the end of processing. The version below does not test the plot step (which relies on packages I do not know) but is adjusted for proper R grammar:
proc_wilcox <- function(data, genus){
  # Subset data before and after treatment
  T0 <- data[[genus]][data$group == "T0"]
  T2 <- data[[genus]][data$group == "T2"]
  # Wilcoxon test for paired data
  res <- wilcox.test(T0, T2, paired = TRUE)
  # Plot paired data, with title depending on the data and its p-value
  # pd <- paired(T0, T2)
  # tiff(paste0(genus, ".tiff"), width = 600, height = 400)
  # plot(pd, type = "profile") +
  #   labs(title = paste0(genus, " paired p-value= ", res$p.value)) +
  #   theme_bw()
  # dev.off()
  return(res$p.value)
}
Then call the function with an apply-family function such as sapply, or the slightly faster vapply; both iterate across elements and return a result of the same length.
# VECTOR OF RESULTS (USING sapply)
wilcoxon_results <- sapply(
  names(my_data)[3:ncol(my_data)],
  function(col) proc_wilcox(my_data, col)
)
# VECTOR OF RESULTS (USING vapply)
wilcoxon_results <- vapply(
  names(my_data)[3:ncol(my_data)],
  function(col) proc_wilcox(my_data, col),
  numeric(1)
)
wilcoxon_results
# Subdoligranulum Agathobacter
# 0.1484375 0.0078125
wilcoxon_df <- data.frame(wilcoxon_results)
wilcoxon_df
# wilcoxon_results
# Subdoligranulum 0.1484375
# Agathobacter 0.0078125
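If the goal is the two-column table described in the question (genus name plus p-value on each row), the named vector can be reshaped as below; this is a small optional follow-up, and the column names are just illustrative.
result <- data.frame(genus   = names(wilcoxon_results),
                     p.value = unname(wilcoxon_results),
                     row.names = NULL)
result
#             genus   p.value
# 1 Subdoligranulum 0.1484375
# 2    Agathobacter 0.0078125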
I am quite new to the use of lists, so I apologize if this problem sounds very dumb.
From an original set of 459,046 customers, I have created a function that splits the base and stores it in several elements of a list.
sampled_list <- baseSample(dataset = clv_df_cbs, sample.size = 10000, seed = 12345)
Executing this function (baseSample) gives you a new list object containing mutually exclusive groups of customers (each group is made of 10,000 customers, apart from the last one, which may be smaller depending on the initial volume).
> sampled_list <- baseSample(dataset = clv_df_cbs, sample.size = 10000, seed = 12345)
[1] "Seed: 12345"
[1] "Total groups created: 46"
[1] "Group size: 10000"
In this case, the output is a list of 46 elements stored in the object called sample_list.
Now I want to pass each of these 46 elements to a BTYD model that will forecast the number of transactions in the next 90 days (given what it learns from the input).
The reason I cannot pass the full dataset to the BTYD model is that the model relies heavily on MCMC, so the calculation takes so long that the model never provides any output. I have therefore decided to generate forecasts by running the same model several times (on samples that are big enough) until the whole base has been passed as model input.
The operations that need to be performed on each of the elements are the following:
# Estimate parameters for element1 of the list
pggg.draws1 <- pggg.mcmc.DrawParameters(element1,
                                        mcmc = 1000,  # number of MCMC steps
                                        burnin = 250, # number of initial MCMC steps which are discarded
                                        thin = 10,    # only every thin-th MCMC step will be returned
                                        chains = 2,   # number of MCMC chains to be run
                                        trace = 50)   # print logging step every trace iteration
# generate draws for holdout period
pggg.xstar.draws1 <- mcmc.DrawFutureTransactions(element1, pggg.draws1)
# conditional expectations
element1$xstar.pggg <- apply(pggg.xstar.draws1, 2, mean)
# P(active)
element1$pactive.pggg <- mcmc.PActive(pggg.xstar.draws1)
# P(alive)
element1$palive.pggg <- mcmc.PAlive(pggg.draws1)
# show estimates for first few customers
head(element1[, c("x", "t.x", "x.star",
                  "xstar.pggg", "pactive.pggg", "palive.pggg")], 50)
# report median cohort-level parameter estimates
round(apply(as.matrix(pggg.draws1$level_2), 2, median), 3)
# report mean over median individual-level parameter estimates
median.est1 <- sapply(pggg.draws1$level_1, function(draw) {
  apply(as.matrix(draw), 2, median)
})
round(apply(median.est1, 1, mean), 3)
Ideally, the output should be stored straight into a new data.frame, so I can retrieve the ID and the forecast (amongst other things originally included in the dataset).
Here below is some mock data to play with, taken from a publicly available dataset.
library(BTYDplus)
library(tidyverse)
data("groceryElog")
dataset<-elog2cbs(groceryElog, T.cal = "2006-12-01")
# FUNCTION baseSample ####
baseSample <- function(dataset, sample.size, seed = NULL) {
  seed.value <- if (is.null(seed)) {
    as.numeric(format(Sys.Date(), "%Y")) * 10000 +
      as.numeric(format(Sys.Date(), "%m")) * 100 +
      as.numeric(format(Sys.Date(), "%d"))
  } else {
    seed
  }
  set.seed(seed.value)
  # RE-ORDER DATA FRAME (SAME LENGTH)
  data <- with(dataset, dataset[order(sample(cust, nrow(dataset))), ])
  # BUILD A LIST OF DFs
  set.sample.size <- sample.size
  data$cycles_group <- paste0("sample_", ceiling(1:nrow(data) / set.sample.size))
  df_list <- split(data, data$cycles_group)
  print(paste0("Seed: ", seed.value))
  print(paste0("Total groups created: ", length(unique(data$cycles_group))))
  print(paste0("Group size: ", set.sample.size))
  return(df_list)
  #print(df_list)
}
# ** OUTPUT: Base split in lists ####
sampled_list <- baseSample(dataset = dataset, sample.size = 100, seed = 12345)
Thanks
In base R, you can use lapply to iterate a function over the elements of a list and return a new list with the results of those iterations. After using your example code to generate a list called sampled_list...
# turn the code for the operations you want to perform on each list element into a function,
# with a couple of minor tweaks
thingy <- function(i) {
  # Estimate parameters for this element of the list
  pggg.draws1 <- pggg.mcmc.DrawParameters(i,
                                          mcmc = 1000,  # number of MCMC steps
                                          burnin = 250, # number of initial MCMC steps which are discarded
                                          thin = 10,    # only every thin-th MCMC step will be returned
                                          chains = 2,   # number of MCMC chains to be run
                                          trace = 50)   # print logging step every trace iteration
  # generate draws for holdout period
  pggg.xstar.draws1 <- mcmc.DrawFutureTransactions(i, pggg.draws1)
  # conditional expectations
  i$xstar.pggg <- apply(pggg.xstar.draws1, 2, mean)
  # P(active)
  i$pactive.pggg <- mcmc.PActive(pggg.xstar.draws1)
  # P(alive)
  i$palive.pggg <- mcmc.PAlive(pggg.draws1)
  # show estimates for first few customers [commented out for this iterated version]
  # head(i[, c("x", "t.x", "x.star", "xstar.pggg", "pactive.pggg", "palive.pggg")], 50)
  # report median cohort-level parameter estimates
  round(apply(as.matrix(pggg.draws1$level_2), 2, median), 3)
  # report mean over median individual-level parameter estimates
  median.est1 <- sapply(pggg.draws1$level_1, function(draw) {
    apply(as.matrix(draw), 2, median)
  })
  # get the bits you want in a named vector
  z <- round(apply(median.est1, 1, mean), 3)
  # convert that named vector of results into a one-row data frame to make collapsing easier
  data.frame(as.list(z))
}
# now use lapply to iterate that function over the elements of your list
results <- lapply(sampled_list, thingy)
# now bind the results into a data frame
boundresults <- do.call(rbind, results)
Results (which took a while to get):
k lambda mu tau z
sample_1 4.200 0.174 0.091 102.835 0.27
sample_10 3.117 0.149 0.214 128.143 0.29
sample_11 4.093 0.154 0.115 130.802 0.30
sample_12 4.191 0.142 0.053 114.108 0.33
sample_13 2.605 0.155 0.071 160.743 0.35
sample_14 9.196 0.210 0.084 111.747 0.36
sample_15 2.005 0.145 0.091 298.872 0.40
sample_16 2.454 0.111 0.019 78731750.121 0.70
sample_2 2.808 0.138 0.059 812.278 0.40
sample_3 4.327 0.166 0.116 559.318 0.42
sample_4 9.266 0.166 0.038 146.283 0.40
sample_5 3.277 0.157 0.073 105.915 0.33
sample_6 9.584 0.184 0.086 118.299 0.31
sample_7 4.244 0.189 0.118 54.945 0.23
sample_8 4.388 0.147 0.085 325.054 0.36
sample_9 7.898 0.181 0.052 83.892 0.33
You can also combine those last two steps into a single line of do.call(rbind, lapply(...)). If you want to make the row names in the results table into a column, you could do boundresults$sample <- row.names(boundresults) after making that table. And if you don't like creating new objects in your environment, you could put that function inside the call to lapply, i.e., lapply(sampled_list, function(i) { [your code] }).
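Putting those suggestions together, a compact version might look like this (it only restates the variants described above, nothing new is assumed):
# One-liner: iterate and bind in a single step
boundresults <- do.call(rbind, lapply(sampled_list, thingy))
# Keep the sample name as a regular column instead of row names
boundresults$sample <- row.names(boundresults)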
I have this script:
library(plyr)
library(gstat)
library(sp)
library(dplyr)
library(ggplot2)
library(scales)
a<-c(10,20,30,40,50,60,70,80,90,100)
b<-c(15,25,35,45,55,65,75,85,95,105)
x<-rep(a,3)
y<-rep(b,3)
E<-sample(30)
freq<-rep(c(100,200,300),10)
data<-data.frame(x,y,freq,E)
data<-arrange(data,x,y,freq)
df <- ddply(data,"freq", function (h){
dim_h<-length(h$x)
perc_max <- 0.9
perc_min <- 0.8
u<-round((seq(perc_max,perc_min,by=-0.1))*dim_h)
dim_u<-length(u)
perc_punti<- percent(seq(perc_max,perc_min,by=-0.1))
for (i in 1:dim_u)
{ t<-u[i]
time[i]<-system.time(
for (j in 1:2)
{
df_tass <- sample_n(h, t)
df_residuo <- slice(h,-as.numeric(rownames(df_tass)))
coordinates(df_tass)= ~x + y
x.range <- range(h$x)
y.range <- range(h$y)
grid <- expand.grid(x = seq(from = x.range[1], to = x.range[2], by = 1), y = seq(from = y.range[1],
to = y.range[2], by = 1))
coordinates(grid) <- ~x + y
gridded(grid) <- TRUE
nearest = krige(E ~ 1, df_tass, grid, nmax = 1)
nearest_df<-as.data.frame(nearest)
names(nearest_df) <- c("x", "y", "E")
#Error of prediction
df_pred <- inner_join(nearest_df[1:3],select(df_residuo,x,y,E),by=c("x","y"))
names(df_pred) <- c("x", "y", "E_pred","E")
sqm[j] <- mean((df_pred[,4]-df_pred[,3])^2)
})[3]
sqmm[i]<-mean(sqm)
}
df_finale<-data.frame(sqmm,time,perc_punti)
})
df
I measured, at several points with coordinates (x, y), the value of the electromagnetic field (the E value) at different frequencies (the freq value). For each frequency value, I use once 90% of the points and once 80% (via the for loop over i) to interpolate the value of the electromagnetic field (E) inside the grid with nearest-neighbour interpolation (the krige function with nmax = 1), and I repeat this 2 times (the loop over j). The remaining points are then used to calculate the prediction error. I hope it's clear.
The script above is a simplified case. Unfortunately, in my real case the script takes too long because of the two for loops.
I want to ask if it's possible to simplify the code in some way, for instance by using the apply function family. Thanks.
Reply to @clemlaflemme: OK, it works! Thanks... Now I have a little problem with the final data frame; it looks like this:
freq 1 2
1 100 121.00 338.00
2 100 0.47 0.85
3 200 81.00 462.50
4 200 0.74 0.73
5 300 36.00 234.00
6 300 0.82 0.76
but I want something like this:
freq sqmm time
1 100 121.0 0.47
2 100 338.0 0.85
3 200 81.0 0.74
4 200 462.5 0.73
5 300 36.0 0.82
6 300 234.0 0.76
How can I do that?
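Without seeing how df_finale is assembled in the accepted version, one hedged guess, based only on the two tables shown above (odd rows holding sqmm, even rows holding time, and the two repetitions sitting in the unnamed value columns), is to interleave the rows like this:
# Sketch under the assumption stated above: odd rows = sqmm, even rows = time
sqmm_rows <- df[seq(1, nrow(df), by = 2), ]
time_rows <- df[seq(2, nrow(df), by = 2), ]
df_long <- data.frame(
  freq = rep(sqmm_rows$freq, each = 2),
  sqmm = as.vector(t(sqmm_rows[, -1])),   # 121, 338, 81, 462.5, 36, 234
  time = as.vector(t(time_rows[, -1]))    # 0.47, 0.85, 0.74, 0.73, 0.82, 0.76
)
df_long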
I am building a tree using the partykit R package, and I am wondering if there is a simple, efficient way to determine the depth of each internal node. For example, the root node would have depth 0, its two kid nodes depth 1, the next kid nodes depth 2, and so forth. This will eventually be used to calculate the minimal depth of a variable. Below is a very basic example (taken from vignette("constparty", package = "partykit")):
library("partykit")
library("rpart")
data("Titanic", package = "datasets")
ttnc<-as.data.frame(Titanic)
ttnc <- ttnc[rep(1:nrow(ttnc), ttnc$Freq), 1:4]
names(ttnc)[2] <- "Gender"
rp <- rpart(Survived ~ ., data = ttnc)
ttncTree<-as.party(rp)
plot(ttncTree)
#This is one of my many attempts which does NOT work
internalNodes<-nodeids(ttncTree)[-nodeids(ttncTree, terminal = TRUE)]
depth(ttncTree)-unlist(nodeapply(ttncTree, ids=internalNodes, FUN=function(n){depth(n)}))
In this example, I want to output something similar to:
nodeid = 1 2 4 7
depth = 0 1 2 1
I apologize if my question is too specific.
Here's a possible solution which should be efficient enough, as trees usually have no more than a few dozen nodes.
I'm ignoring node #1, as its depth is always 0 and hence there is no point in calculating or showing it (IMO). The idea is that nodeids(ttncTree, from = x) returns the ids of all nodes in the subtree rooted at x, so counting (with table()) how many of those subtrees a given internal node falls into gives its depth.
Inters <- nodeids(ttncTree)[-nodeids(ttncTree, terminal = TRUE)][-1]
table(unlist(sapply(Inters, function(x) intersect(Inters, nodeids(ttncTree, from = x)))))
# 2 4 7
# 1 2 1
I had to revisit this problem recently. Below is a function to determine the depth of each node. I count the depth based on the number of times a vertical bar | appears in the output of the print.party() function.
library(stringr)
idDepth <- function(tree) {
  outTree <- capture.output(tree)
  idCount <- 1
  depthValues <- rep(NA, length(tree))
  names(depthValues) <- 1:length(tree)
  for (index in seq_along(outTree)) {
    if (grepl("\\[[0-9]+\\]", outTree[index])) {
      depthValues[idCount] <- str_count(outTree[index], "\\|")
      idCount <- idCount + 1
    }
  }
  return(depthValues)
}
> idDepth(ttncTree)
1 2 3 4 5 6 7 8 9
0 1 2 2 3 3 1 2 2
There may well be a simpler, faster solution, but this is faster than using the intersect() approach. Below is an example of the computation time for a large tree (around 1,500 nodes):
# Compare computation time for large tree #
library(mlbench)
set.seed(470174)
dat <- data.frame(mlbench.friedman1(5000))
rp <- rpart(as.formula(paste0("y ~ ", paste(paste0("x.", 1:10), collapse=" + "))),
data=dat, control = rpart.control(cp = -1, minsplit=3, maxdepth = 10))
partyTree <- as.party(rp)
> length(partyTree) #Number of splits
[1] 1503
>
> # Intersect() computation time
> Inters <- nodeids(partyTree)[-nodeids(partyTree, terminal = TRUE)][-1]
> system.time(table(unlist(sapply(Inters, function(x) intersect(Inters, nodeids(partyTree, from = x))))))
user system elapsed
22.38 0.00 22.44
>
> # Proposed computation time
> system.time(idDepth(partyTree))
user system elapsed
2.38 0.00 2.38
I'm running k-means clustering on a data frame df1, and I'm looking for a simple approach to computing the closest cluster center for each observation in a new data frame df2 (with the same variable names). Think of df1 as the training set and df2 as the testing set; I want to cluster on the training set and assign each test point to the correct cluster.
I know how to do this with the apply function and a few simple user-defined functions (previous posts on the topic have usually proposed something similar):
df1 <- data.frame(x=runif(100), y=runif(100))
df2 <- data.frame(x=runif(100), y=runif(100))
km <- kmeans(df1, centers=3)
closest.cluster <- function(x) {
  cluster.dist <- apply(km$centers, 1, function(y) sqrt(sum((x - y)^2)))
  return(which.min(cluster.dist)[1])
}
clusters2 <- apply(df2, 1, closest.cluster)
However, I'm preparing this clustering example for a course in which students will be unfamiliar with the apply function, so I would much prefer if I could assign the clusters to df2 with a built-in function. Are there any convenient built-in functions to find the closest cluster?
You could use the flexclust package, which has an implemented predict method for k-means:
library("flexclust")
data("Nclus")
set.seed(1)
dat <- as.data.frame(Nclus)
ind <- sample(nrow(dat), 50)
dat[["train"]] <- TRUE
dat[["train"]][ind] <- FALSE
cl1 = kcca(dat[dat[["train"]]==TRUE, 1:2], k=4, kccaFamily("kmeans"))
cl1
#
# call:
# kcca(x = dat[dat[["train"]] == TRUE, 1:2], k = 4)
#
# cluster sizes:
#
# 1 2 3 4
#130 181 98 91
pred_train <- predict(cl1)
pred_test <- predict(cl1, newdata=dat[dat[["train"]]==FALSE, 1:2])
image(cl1)
points(dat[dat[["train"]]==TRUE, 1:2], col=pred_train, pch=19, cex=0.3)
points(dat[dat[["train"]]==FALSE, 1:2], col=pred_test, pch=22, bg="orange")
There are also conversion methods to convert the results from cluster functions like stats::kmeans or cluster::pam to objects of class kcca and vice versa:
as.kcca(cl, data=x)
# kcca object of family ‘kmeans’
#
# call:
# as.kcca(object = cl, data = x)
#
# cluster sizes:
#
# 1 2
# 50 50
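For a self-contained illustration of that conversion route (the object names cl, x and x_new here are placeholders of my choosing, not from the original answer): fit stats::kmeans, convert with as.kcca, then predict on new data.
library(flexclust)
set.seed(1)
x  <- matrix(rnorm(200), ncol = 2)    # 100 training points, 2 features
cl <- kmeans(x, centers = 2)          # ordinary stats::kmeans fit
cl_kcca <- as.kcca(cl, data = x)      # convert to a kcca object
x_new <- matrix(rnorm(20), ncol = 2)  # 10 new points
predict(cl_kcca, newdata = x_new)     # cluster assignment for the new points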
Something I noticed about both the approach in the question and the flexclust approach is that they are rather slow (benchmarked here on a training and a testing set with 1 million observations and 2 features each).
Fitting the original model is reasonably fast:
set.seed(144)
df1 <- data.frame(x=runif(1e6), y=runif(1e6))
df2 <- data.frame(x=runif(1e6), y=runif(1e6))
system.time(km <- kmeans(df1, centers=3))
# user system elapsed
# 1.204 0.077 1.295
The solution I posted in the question is slow at calculating the test-set cluster assignments, since it separately calls closest.cluster for each test-set point:
system.time(pred.test <- apply(df2, 1, closest.cluster))
# user system elapsed
# 42.064 0.251 42.586
Meanwhile, the flexclust package seems to add a lot of overhead regardless of whether we convert the fitted model with as.kcca or fit a new one ourselves with kcca (though the prediction at the end is much faster)
# APPROACH #1: Convert from the kmeans() output
system.time(km.flexclust <- as.kcca(km, data=df1))
# user system elapsed
# 87.562 1.216 89.495
system.time(pred.flexclust <- predict(km.flexclust, newdata=df2))
# user system elapsed
# 0.182 0.065 0.250
# Approach #2: Fit the k-means clustering model in the flexclust package
system.time(km.flexclust2 <- kcca(df1, k=3, kccaFamily("kmeans")))
# user system elapsed
# 125.193 7.182 133.519
system.time(pred.flexclust2 <- predict(km.flexclust2, newdata=df2))
# user system elapsed
# 0.198 0.084 0.302
It seems that there is another sensible approach here: using a fast k-nearest neighbors solution like a k-d tree to find the nearest neighbor of each test-set observation within the set of cluster centroids. This can be written compactly and is relatively speedy:
library(FNN)
system.time(pred.knn <- get.knnx(km$centers, df2, 1)$nn.index[,1])
# user system elapsed
# 0.315 0.013 0.345
all(pred.test == pred.knn)
# [1] TRUE
You can use the ClusterR::KMeans_rcpp() function, which uses RcppArmadillo. It allows for multiple initializations (which can be parallelized if OpenMP is available). Besides the optimal_init, quantile_init, random, and kmeans++ initializations, one can specify the centroids using the CENTROIDS parameter. The running time and convergence of the algorithm can be adjusted using the num_init, max_iters, and tol parameters.
library(scorecard)
library(ClusterR)
library(dplyr)
library(ggplot2)
## Generate data
set.seed(2019)
x = c(rnorm(200000, 0,1), rnorm(150000, 5,1), rnorm(150000,-5,1))
y = c(rnorm(200000,-1,1), rnorm(150000, 6,1), rnorm(150000, 6,1))
df <- split_df(data.frame(x,y), ratio = 0.5, seed = 123)
system.time(
kmrcpp <- KMeans_rcpp(df$train, clusters = 3, num_init = 4, max_iters = 100, initializer = 'kmeans++'))
# user system elapsed
# 0.64 0.05 0.82
system.time(pr <- predict_KMeans(df$test, kmrcpp$centroids))
# user system elapsed
# 0.01 0.00 0.02
p1 <- df$train %>% mutate(cluster = as.factor(kmrcpp$clusters)) %>%
ggplot(., aes(x,y,color = cluster)) + geom_point() +
ggtitle("train data")
p2 <- df$test %>% mutate(cluster = as.factor(pr)) %>%
ggplot(., aes(x,y,color = cluster)) + geom_point() +
ggtitle("test data")
gridExtra::grid.arrange(p1,p2,ncol = 2)
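As an aside on the CENTROIDS parameter mentioned above, here is a hedged sketch of warm-starting KMeans_rcpp() from previously computed centroids, reusing kmrcpp$centroids from the code above:
# Warm start: pass existing centroids instead of a fresh initialization
kmrcpp_warm <- KMeans_rcpp(df$train, clusters = 3, num_init = 1,
                           max_iters = 10, CENTROIDS = kmrcpp$centroids)
head(kmrcpp_warm$clusters)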