Finding the right cluster method based on data distribution - R

My data set has 821,050 rows and 18 columns. The rows represent different online users; the columns describe the users' browsing behavior in an online shop. The variables include shopping cart cancellations, number of items in the shopping cart, detailed item views, product list/multi-item views, detailed search views, etc. Half of the variables are discrete, half are continuous; 8 of them are dummy variables. Based on this data set, I want to apply different hard and soft clustering methods and analyze the shopping cart abandonment in my data more precisely. Using descriptive statistics, I analyzed the data set and obtained the following results.
# 1. WKA_ohneJB <- read.csv("WKA_ohneJB_PCA.csv", header=TRUE, sep = ";", stringsAsFactors = FALSE)
# 2. summary(WKA_ohneJB)
X BASKETS_NZ LOGONS PIS PIS_AP PIS_DV
Min. : 1 Min. : 0.000 Min. :0.0000 Min. : 1.00 Min. : 0.000 Min. : 0.000
1st Qu.:205263 1st Qu.: 1.000 1st Qu.:1.0000 1st Qu.: 9.00 1st Qu.: 0.000 1st Qu.: 0.000
Median :410525 Median : 1.000 Median :1.0000 Median : 20.00 Median : 1.000 Median : 1.000
Mean :410525 Mean : 1.023 Mean :0.9471 Mean : 31.11 Mean : 1.783 Mean : 4.554
3rd Qu.:615786 3rd Qu.: 1.000 3rd Qu.:1.0000 3rd Qu.: 41.00 3rd Qu.: 2.000 3rd Qu.: 5.000
Max. :821048 Max. :49.000 Max. :1.0000 Max. :593.00 Max. :71.000 Max. :203.000
PIS_PL PIS_SDV PIS_SHOPS PIS_SR QUANTITY WKA
Min. : 0.000 Min. : 0.00 Min. : 0.00 Min. : 0.000 Min. : 1.00 Min. :0.0000
1st Qu.: 0.000 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.000 1st Qu.: 1.00 1st Qu.:0.0000
Median : 0.000 Median : 0.00 Median : 2.00 Median : 0.000 Median : 2.00 Median :1.0000
Mean : 5.729 Mean : 2.03 Mean : 10.67 Mean : 3.873 Mean : 3.14 Mean :0.6341
3rd Qu.: 4.000 3rd Qu.: 2.00 3rd Qu.: 11.00 3rd Qu.: 4.000 3rd Qu.: 4.00 3rd Qu.:1.0000
Max. :315.000 Max. :142.00 Max. :405.00 Max. :222.000 Max. :143.00 Max. :1.0000
NEW_CUST EXIST_CUST WEB_CUST MOBILE_CUST TABLET_CUST LOGON_CUST_STEP2
Min. :0.00000 Min. :0.0000 Min. :0.0000 Min. :0.0000 Min. :0.0000 Min. :0.0000
1st Qu.:0.00000 1st Qu.:1.0000 1st Qu.:0.0000 1st Qu.:0.0000 1st Qu.:0.0000 1st Qu.:0.0000
Median :0.00000 Median :1.0000 Median :0.0000 Median :0.0000 Median :0.0000 Median :0.0000
Mean :0.07822 Mean :0.9218 Mean :0.4704 Mean :0.3935 Mean :0.1361 Mean :0.1743
3rd Qu.:0.00000 3rd Qu.:1.0000 3rd Qu.:1.0000 3rd Qu.:1.0000 3rd Qu.:0.0000 3rd Qu.:0.0000
Max. :1.00000 Max. :1.0000 Max. :1.0000 Max. :1.0000 Max. :1.0000 Max. :1.0000
The non-dummy variables noticeably have right-skewed distributions. Of the dummy variables, 5 are right-skewed and 3 are left-skewed.
I have also listed the range and quartiles for the 9 non-dummy variables:
# BASKETS_NZ
range(WKA_ohneJB$BASKETS_NZ) # 0 49
quantile(WKA_ohneJB$BASKETS_NZ, 0.5) # 1
quantile(WKA_ohneJB$BASKETS_NZ, 0.25) # 1
quantile(WKA_ohneJB$BASKETS_NZ, 0.75) # 1
# PIS
range(WKA_ohneJB$PIS) # 1 593
quantile(WKA_ohneJB$PIS, 0.25) # 9
quantile(WKA_ohneJB$PIS, 0.5) # 20
quantile(WKA_ohneJB$PIS, 0.75) # 41
# PIS_AP
range(WKA_ohneJB$PIS_AP) # 0 71
quantile(WKA_ohneJB$PIS_AP, 0.25) # 0
quantile(WKA_ohneJB$PIS_AP, 0.5) # 1
quantile(WKA_ohneJB$PIS_AP, 0.75) # 2
# PIS_DV
range(WKA_ohneJB$PIS_DV) # 0 203
quantile(WKA_ohneJB$PIS_DV, 0.25) # 0
quantile(WKA_ohneJB$PIS_DV, 0.5) # 1
quantile(WKA_ohneJB$PIS_DV, 0.75) # 5
#PIS_PL
range(WKA_ohneJB$PIS_PL) # 0 315
quantile(WKA_ohneJB$PIS_PL, 0.25) # 0
quantile(WKA_ohneJB$PIS_PL, 0.5) # 0
quantile(WKA_ohneJB$PIS_PL, 0.75) # 4
#PIS_SDV
range(WKA_ohneJB$PIS_SDV) # 0 142
quantile(WKA_ohneJB$PIS_SDV, 0.25) # 0
quantile(WKA_ohneJB$PIS_SDV, 0.5) # 0
quantile(WKA_ohneJB$PIS_SDV, 0.75) # 2
# PIS_SHOPS
range(WKA_ohneJB$PIS_SHOPS) # 0 405
quantile(WKA_ohneJB$PIS_SHOPS, 0.25) # 0
quantile(WKA_ohneJB$PIS_SHOPS, 0.5) # 2
quantile(WKA_ohneJB$PIS_SHOPS, 0.75) # 11
# PIS_SR
range(WKA_ohneJB$PIS_SR) # 0 222
quantile(WKA_ohneJB$PIS_SR, 0.25) # 0
quantile(WKA_ohneJB$PIS_SR, 0.5) # 0
quantile(WKA_ohneJB$PIS_SR, 0.75) # 4
# QUANTITY
range(WKA_ohneJB$QUANTITY) # 1 143
quantile(WKA_ohneJB$QUANTITY, 0.25) # 1
quantile(WKA_ohneJB$QUANTITY, 0.5) # 2
quantile(WKA_ohneJB$QUANTITY, 0.75) # 4
How can I tell from the distribution of my data which clustering methods are suitable for mixed-type clickstream data?
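For reference, a minimal sketch of one common mixed-type approach (Gower dissimilarity plus PAM, assuming the cluster package and the column names from the summary above; run on a subsample, since an n × n dissimilarity matrix for 821,050 rows is far too large for memory):

```r
library(cluster)

set.seed(42)
# Work on a subsample: daisy() builds a full n x n dissimilarity matrix.
samp <- WKA_ohneJB[sample(nrow(WKA_ohneJB), 2000), ]

# Declare the 8 dummies as factors so Gower treats them as categorical.
dummies <- c("LOGONS", "WKA", "NEW_CUST", "EXIST_CUST", "WEB_CUST",
             "MOBILE_CUST", "TABLET_CUST", "LOGON_CUST_STEP2")
samp[dummies] <- lapply(samp[dummies], factor)

# Gower dissimilarity handles mixed continuous/categorical columns;
# drop the row-index column X before computing it.
d <- daisy(samp[, setdiff(names(samp), "X")], metric = "gower")

# Hard clustering on the dissimilarity, e.g. PAM; k = 4 is arbitrary here
# and would need to be chosen via silhouette widths or similar.
fit <- pam(d, k = 4)
table(fit$clustering)
```

The choice of k and the subsample size are placeholders, not recommendations.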

Related

Summarize the same variables from multiple dataframes in one table

I have voter and party data from several datasets that I further separated into different data frames and lists to make them comparable. I could just run summary on each of them individually and then compare manually, but I was wondering whether there is a way to get them all together into one table?
Here's a sample of what I have:
> summary(eco$rilenew)
Min. 1st Qu. Median Mean 3rd Qu. Max.
3 4 4 4 4 5
> summary(ecovoters)
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
0.000 3.000 4.000 3.744 5.000 10.000 26
> summary(lef$rilenew)
Min. 1st Qu. Median Mean 3rd Qu. Max.
2.000 3.000 3.000 3.692 4.000 7.000
> summary(lefvoters)
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
0.000 2.000 3.000 3.612 5.000 10.000 332
> summary(soc$rilenew)
Min. 1st Qu. Median Mean 3rd Qu. Max.
2.000 4.000 4.000 4.143 5.000 6.000
> summary(socvoters)
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
0.000 3.000 4.000 3.674 5.000 10.000 346
Is there a way I can summarize these lists (ecovoters, lefvoters, socvoters etc) and the dataframe variables (eco$rilenew, lef$rilenew, soc$rilenew etc) together and have them in one table?
You could put everything into a list and summarize with a small custom function.
L <- list(eco$rilenew, ecovoters, lef$rilenew,
          lefvoters, soc$rilenew, socvoters)
t(sapply(L, function(x) {
  s <- summary(x)
  length(s) <- 7
  names(s)[7] <- "NA's"
  s[7] <- ifelse(!any(is.na(x)), 0, s[7])
  return(s)
}))
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
[1,] 0.9820673 3.3320662 3.958665 3.949512 4.625109 7.229069 0
[2,] -4.8259384 0.5028293 3.220546 3.301452 6.229384 9.585749 26
[3,] -0.3717391 2.3280366 3.009360 3.013908 3.702156 6.584659 0
[4,] -2.6569493 1.6674330 3.069440 3.015325 4.281100 8.808432 332
[5,] -2.3625651 2.4964361 3.886673 3.912009 5.327401 10.349040 0
[6,] -2.4719404 1.3635785 2.790523 2.854812 4.154936 8.491347 346
Data
set.seed(42)
eco <- data.frame(rilenew=rnorm(800, 4, 1))
ecovoters <- rnorm(75, 4, 4)
ecovoters[sample(length(ecovoters), 26)] <- NA
lef <- data.frame(rilenew=rnorm(900, 3, 1))
lefvoters <- rnorm(700, 3, 2)
lefvoters[sample(length(lefvoters), 332)] <- NA
soc <- data.frame(rilenew=rnorm(900, 4, 2))
socvoters <- rnorm(700, 3, 2)
socvoters[sample(length(socvoters), 346)] <- NA
You can use map from the tidyverse to get a list of summaries; then, if you want the result as a data frame, plyr::ldply can convert the list:
ll = map(L, summary)
ll
plyr::ldply(ll, rbind)
> ll = map(L, summary)
> ll
[[1]]
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.9821 3.3321 3.9587 3.9495 4.6251 7.2291
[[2]]
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-4.331 1.347 3.726 3.793 6.653 16.845 26
[[3]]
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.3717 2.3360 3.0125 3.0174 3.7022 6.5847
[[4]]
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-2.657 1.795 3.039 3.013 4.395 9.942 332
[[5]]
Min. 1st Qu. Median Mean 3rd Qu. Max.
-2.363 2.503 3.909 3.920 5.327 10.349
[[6]]
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
-3.278 1.449 2.732 2.761 4.062 8.171 346
> plyr::ldply(ll, rbind)
Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
1 0.9820673 3.332066 3.958665 3.949512 4.625109 7.229069 NA
2 -4.3312551 1.346532 3.725708 3.793431 6.652917 16.844796 26
3 -0.3717391 2.335959 3.012507 3.017438 3.702156 6.584659 NA
4 -2.6569493 1.795307 3.038905 3.012928 4.395338 9.941819 332
5 -2.3625651 2.503324 3.908727 3.920050 5.327401 10.349040 NA
6 -3.2779863 1.448814 2.732515 2.760569 4.061854 8.170793 346

Error in eval(predvars, data, env) : object 'Sewer' not found

I have a set of data containing species names and counts (spp_data), and I am trying to test how the species are influenced by different parameters such as pH, conductivity, and the Sewer position (Upstream/Downstream) (env_data1).
When I'm trying to run the lm() I get the following error:
lm1 <- lm(specnumber ~ Sewer + pH + Conductivity, data=spp_data,env_data1)
Error in eval(predvars, data, env) : object 'Sewer' not found
Is it because the column Sewer is non-numeric?
I also tried to exclude that column and run the lm() but it did not work.
species data
summary(spp_data)
Pisidium G_pulex C_pseudo A_aquatic V_pisc
Min. :0.000 Min. : 0.00 Min. : 0.000 Min. :0.0000 Min. :0.00000
1st Qu.:0.000 1st Qu.: 3.00 1st Qu.: 0.000 1st Qu.:0.0000 1st Qu.:0.00000
Median :0.000 Median : 8.00 Median : 3.000 Median :0.0000 Median :0.00000
Mean :1.429 Mean :16.86 Mean : 4.476 Mean :0.5714 Mean :0.04762
3rd Qu.:2.000 3rd Qu.:20.00 3rd Qu.:10.000 3rd Qu.:0.0000 3rd Qu.:0.00000
Max. :7.000 Max. :68.00 Max. :16.000 Max. :4.0000 Max. :1.00000
Taeniopt Rhyacoph Hydropsy Lepidost Glossos
Min. :0.00000 Min. :0.0000 Min. :0.000 Min. :0.000 Min. : 0.00
1st Qu.:0.00000 1st Qu.:0.0000 1st Qu.:0.000 1st Qu.:0.000 1st Qu.: 0.00
Median :0.00000 Median :0.0000 Median :0.000 Median :0.000 Median : 0.00
Mean :0.09524 Mean :0.2381 Mean :1.286 Mean :1.238 Mean : 1.81
3rd Qu.:0.00000 3rd Qu.:0.0000 3rd Qu.:3.000 3rd Qu.:2.000 3rd Qu.: 1.00
Max. :2.00000 Max. :2.0000 Max. :5.000 Max. :7.000 Max. :14.00
Agapetus Hydroptil Limneph S_person Tipula
Min. : 0.0000 Min. :0.00000 Min. :0.000 Min. :0.00000 Min. :0
1st Qu.: 0.0000 1st Qu.:0.00000 1st Qu.:0.000 1st Qu.:0.00000 1st Qu.:0
Median : 0.0000 Median :0.00000 Median :0.000 Median :0.00000 Median :0
Mean : 0.5714 Mean :0.04762 Mean :0.381 Mean :0.09524 Mean :0
3rd Qu.: 0.0000 3rd Qu.:0.00000 3rd Qu.:1.000 3rd Qu.:0.00000 3rd Qu.:0
Max. :12.0000 Max. :1.00000 Max. :2.000 Max. :2.00000 Max. :0
Culicida Ceratopo Simuliid Chrinomi Chrnomus
Min. :0.0000 Min. : 0 Min. : 0.0000 Min. : 0.000 Min. : 0.000
1st Qu.:0.0000 1st Qu.: 0 1st Qu.: 0.0000 1st Qu.: 0.000 1st Qu.: 1.000
Median :0.0000 Median : 1 Median : 0.0000 Median : 2.000 Median : 3.000
Mean :0.5714 Mean : 7 Mean : 0.5238 Mean : 7.286 Mean : 6.095
3rd Qu.:0.0000 3rd Qu.: 8 3rd Qu.: 0.0000 3rd Qu.: 8.000 3rd Qu.: 6.000
Max. :5.0000 Max. :31 Max. :10.0000 Max. :67.000 Max. :41.000
environmental data
summary(env_data)
Sample Sewer pH Conductivity
Length:21 Length:21 Min. :7.780 Length:21
Class :character Class :character 1st Qu.:7.850 Class :character
Mode :character Mode :character Median :8.100 Mode :character
Mean :8.044
3rd Qu.:8.270
Max. :8.280
Depth %rock %mud %sand
Min. : 7.00 Min. :10.00 Min. : 0 Length:21
1st Qu.: 8.00 1st Qu.:10.00 1st Qu.:20 Class :character
Median :11.00 Median :70.00 Median :30 Mode :character
Mean :17.14 Mean :57.14 Mean :40
3rd Qu.:28.00 3rd Qu.:80.00 3rd Qu.:90
Max. :40.00 Max. :90.00 Max. :90
Assuming that the rows of your spp_data match the rows of your environmental data ... I think if you do
lm1 <- lm(as.matrix(spp_data) ~ Sewer + pH + Conductivity,
          data = env_data1)
you will get the results of running 44 separate linear models, one for each species. (Be careful: with 44 regressions and only 21 observations, you may need to do some multiple comparisons corrections to avoid overstating your conclusions.)
There are R packages for more sophisticated multi-species analyses such as mvabund or gllvm, but they might not apply to a data set this size ...
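For completeness, a sketch of what the mvabund route would look like (the formula terms are assumptions carried over from the question, and the small-sample caveat above still applies):

```r
library(mvabund)

# Treat the species counts as a multivariate abundance response.
spp <- mvabund(spp_data)

# Negative binomial is the usual choice for overdispersed count data.
fit <- manyglm(spp ~ Sewer + pH + Conductivity,
               data = env_data1, family = "negative.binomial")

# Multivariate test plus adjusted per-species tests via resampling.
anova(fit, p.uni = "adjusted")
```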

Caret method = "rf" warning message: invalid mtry: reset to within valid range

I am working on a Coursera Machine Learning project. The goal is to perform a predictive modeling for the following dataset.
> summary(training)
roll_belt pitch_belt yaw_belt total_accel_belt gyros_belt_x
Min. :-28.90 Min. :-55.8000 Min. :-180.00 Min. : 0.00 Min. :-1.040000
1st Qu.: 1.10 1st Qu.: 1.7600 1st Qu.: -88.30 1st Qu.: 3.00 1st Qu.:-0.030000
Median :113.00 Median : 5.2800 Median : -13.00 Median :17.00 Median : 0.030000
Mean : 64.41 Mean : 0.3053 Mean : -11.21 Mean :11.31 Mean :-0.005592
3rd Qu.:123.00 3rd Qu.: 14.9000 3rd Qu.: 12.90 3rd Qu.:18.00 3rd Qu.: 0.110000
Max. :162.00 Max. : 60.3000 Max. : 179.00 Max. :29.00 Max. : 2.220000
gyros_belt_y gyros_belt_z accel_belt_x accel_belt_y accel_belt_z magnet_belt_x
Min. :-0.64000 Min. :-1.4600 Min. :-120.000 Min. :-69.00 Min. :-275.00 Min. :-52.0
1st Qu.: 0.00000 1st Qu.:-0.2000 1st Qu.: -21.000 1st Qu.: 3.00 1st Qu.:-162.00 1st Qu.: 9.0
Median : 0.02000 Median :-0.1000 Median : -15.000 Median : 35.00 Median :-152.00 Median : 35.0
Mean : 0.03959 Mean :-0.1305 Mean : -5.595 Mean : 30.15 Mean : -72.59 Mean : 55.6
3rd Qu.: 0.11000 3rd Qu.:-0.0200 3rd Qu.: -5.000 3rd Qu.: 61.00 3rd Qu.: 27.00 3rd Qu.: 59.0
Max. : 0.64000 Max. : 1.6200 Max. : 85.000 Max. :164.00 Max. : 105.00 Max. :485.0
magnet_belt_y magnet_belt_z roll_arm pitch_arm yaw_arm total_accel_arm
Min. :354.0 Min. :-623.0 Min. :-180.00 Min. :-88.800 Min. :-180.0000 Min. : 1.00
1st Qu.:581.0 1st Qu.:-375.0 1st Qu.: -31.77 1st Qu.:-25.900 1st Qu.: -43.1000 1st Qu.:17.00
Median :601.0 Median :-320.0 Median : 0.00 Median : 0.000 Median : 0.0000 Median :27.00
Mean :593.7 Mean :-345.5 Mean : 17.83 Mean : -4.612 Mean : -0.6188 Mean :25.51
3rd Qu.:610.0 3rd Qu.:-306.0 3rd Qu.: 77.30 3rd Qu.: 11.200 3rd Qu.: 45.8750 3rd Qu.:33.00
Max. :673.0 Max. : 293.0 Max. : 180.00 Max. : 88.500 Max. : 180.0000 Max. :66.00
gyros_arm_x gyros_arm_y gyros_arm_z accel_arm_x accel_arm_y
Min. :-6.37000 Min. :-3.4400 Min. :-2.3300 Min. :-404.00 Min. :-318.0
1st Qu.:-1.33000 1st Qu.:-0.8000 1st Qu.:-0.0700 1st Qu.:-242.00 1st Qu.: -54.0
Median : 0.08000 Median :-0.2400 Median : 0.2300 Median : -44.00 Median : 14.0
Mean : 0.04277 Mean :-0.2571 Mean : 0.2695 Mean : -60.24 Mean : 32.6
3rd Qu.: 1.57000 3rd Qu.: 0.1400 3rd Qu.: 0.7200 3rd Qu.: 84.00 3rd Qu.: 139.0
Max. : 4.87000 Max. : 2.8400 Max. : 3.0200 Max. : 437.00 Max. : 308.0
accel_arm_z magnet_arm_x magnet_arm_y magnet_arm_z roll_dumbbell pitch_dumbbell
Min. :-636.00 Min. :-584.0 Min. :-392.0 Min. :-597.0 Min. :-153.71 Min. :-149.59
1st Qu.:-143.00 1st Qu.:-300.0 1st Qu.: -9.0 1st Qu.: 131.2 1st Qu.: -18.49 1st Qu.: -40.89
Median : -47.00 Median : 289.0 Median : 202.0 Median : 444.0 Median : 48.17 Median : -20.96
Mean : -71.25 Mean : 191.7 Mean : 156.6 Mean : 306.5 Mean : 23.84 Mean : -10.78
3rd Qu.: 23.00 3rd Qu.: 637.0 3rd Qu.: 323.0 3rd Qu.: 545.0 3rd Qu.: 67.61 3rd Qu.: 17.50
Max. : 292.00 Max. : 782.0 Max. : 583.0 Max. : 694.0 Max. : 153.55 Max. : 149.40
yaw_dumbbell total_accel_dumbbell gyros_dumbbell_x gyros_dumbbell_y gyros_dumbbell_z
Min. :-150.871 Min. : 0.00 Min. :-204.0000 Min. :-2.10000 Min. : -2.380
1st Qu.: -77.644 1st Qu.: 4.00 1st Qu.: -0.0300 1st Qu.:-0.14000 1st Qu.: -0.310
Median : -3.324 Median :10.00 Median : 0.1300 Median : 0.03000 Median : -0.130
Mean : 1.674 Mean :13.72 Mean : 0.1611 Mean : 0.04606 Mean : -0.129
3rd Qu.: 79.643 3rd Qu.:19.00 3rd Qu.: 0.3500 3rd Qu.: 0.21000 3rd Qu.: 0.030
Max. : 154.952 Max. :58.00 Max. : 2.2200 Max. :52.00000 Max. :317.000
accel_dumbbell_x accel_dumbbell_y accel_dumbbell_z magnet_dumbbell_x magnet_dumbbell_y
Min. :-419.00 Min. :-189.00 Min. :-334.00 Min. :-643.0 Min. :-3600
1st Qu.: -50.00 1st Qu.: -8.00 1st Qu.:-142.00 1st Qu.:-535.0 1st Qu.: 231
Median : -8.00 Median : 41.50 Median : -1.00 Median :-479.0 Median : 311
Mean : -28.62 Mean : 52.63 Mean : -38.32 Mean :-328.5 Mean : 221
3rd Qu.: 11.00 3rd Qu.: 111.00 3rd Qu.: 38.00 3rd Qu.:-304.0 3rd Qu.: 390
Max. : 235.00 Max. : 315.00 Max. : 318.00 Max. : 592.0 Max. : 633
magnet_dumbbell_z roll_forearm pitch_forearm yaw_forearm total_accel_forearm
Min. :-262.00 Min. :-180.0000 Min. :-72.50 Min. :-180.00 Min. : 0.00
1st Qu.: -45.00 1st Qu.: -0.7375 1st Qu.: 0.00 1st Qu.: -68.60 1st Qu.: 29.00
Median : 13.00 Median : 21.7000 Median : 9.24 Median : 0.00 Median : 36.00
Mean : 46.05 Mean : 33.8265 Mean : 10.71 Mean : 19.21 Mean : 34.72
3rd Qu.: 95.00 3rd Qu.: 140.0000 3rd Qu.: 28.40 3rd Qu.: 110.00 3rd Qu.: 41.00
Max. : 452.00 Max. : 180.0000 Max. : 89.80 Max. : 180.00 Max. :108.00
gyros_forearm_x gyros_forearm_y gyros_forearm_z accel_forearm_x accel_forearm_y
Min. :-22.000 Min. : -7.02000 Min. : -8.0900 Min. :-498.00 Min. :-632.0
1st Qu.: -0.220 1st Qu.: -1.46000 1st Qu.: -0.1800 1st Qu.:-178.00 1st Qu.: 57.0
Median : 0.050 Median : 0.03000 Median : 0.0800 Median : -57.00 Median : 201.0
Mean : 0.158 Mean : 0.07517 Mean : 0.1512 Mean : -61.65 Mean : 163.7
3rd Qu.: 0.560 3rd Qu.: 1.62000 3rd Qu.: 0.4900 3rd Qu.: 76.00 3rd Qu.: 312.0
Max. : 3.970 Max. :311.00000 Max. :231.0000 Max. : 477.00 Max. : 923.0
accel_forearm_z magnet_forearm_x magnet_forearm_y magnet_forearm_z classe
Min. :-446.00 Min. :-1280.0 Min. :-896.0 Min. :-973.0 A:5580
1st Qu.:-182.00 1st Qu.: -616.0 1st Qu.: 2.0 1st Qu.: 191.0 B:3797
Median : -39.00 Median : -378.0 Median : 591.0 Median : 511.0 C:3422
Mean : -55.29 Mean : -312.6 Mean : 380.1 Mean : 393.6 D:3216
3rd Qu.: 26.00 3rd Qu.: -73.0 3rd Qu.: 737.0 3rd Qu.: 653.0 E:3607
Max. : 291.00 Max. : 672.0 Max. :1480.0 Max. :1090.0
For training the model, I did the following:
trainCtrl <- trainControl(method = "cv", number = 10, savePredictions = TRUE)
rfModel <- train(classe ~., method = "rf", trControl = trainCtrl, preProcess = "pca", data = training, prox = TRUE)
The model worked. However, I was rather annoyed by a warning message repeated up to 20 times: invalid mtry: reset to within valid range. A few searches on Google did not return any useful insights. Also, in case it matters: there were no NA values in the dataset; they were removed in a prior step.
I also ran system.time(); the processing time was painfully long, more than 1 hour.
> system.time(train(classe ~., method = "rf", trControl = trainCtrl, preProcess = "pca", data = training, prox = TRUE))
user system elapsed
6478.113 302.281 7044.483
If you can help decipher the what and why of this warning message, that would be super. I would also love to hear any comments regarding such a long processing time.
Thank you!
The caret rf method uses the randomForest function from the randomForest package. If you set the mtry argument of randomForest to a value greater than the number of predictor variables, you'll get the warning you posted (for example, try rf = randomForest(mpg ~ ., mtry=15, data=mtcars)). The model still runs, but randomForest sets mtry to a lower, valid value.
The question is, why is train (or one of the functions it calls) feeding randomForest an mtry value that's too large? I'm not sure, but here's a guess: Setting preProcess="pca" reduces the number of features being fed to randomForest (relative to the number of features in the raw data), because the least important principal components are discarded to reduce the dimensionality of the feature set. However, when doing cross-validation, it's possible that train nevertheless sets the maximum mtry value for randomForest based on the larger number of features in the raw data, rather than based on the pre-processed data set that's actually fed to randomForest. Circumstantial evidence for this is that the warning goes away if you remove the preProcess="pca" argument, but I didn't check any further than that.
Reproducible code showing that the warning goes away without pca:
trainCtrl <- trainControl(method = "cv", number = 10, savePredictions = TRUE)
rfModel <- train(mpg ~., method = "rf", trControl = trainCtrl, preProcess = "pca", data = mtcars, prox = TRUE)
rfModel <- train(mpg ~., method = "rf", trControl = trainCtrl, data = mtcars, prox = TRUE)
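If the guess above is right, one workaround (an assumption, not verified across caret versions) is to pin mtry yourself via tuneGrid, so it can never exceed the number of retained principal components:

```r
library(caret)

trainCtrl <- trainControl(method = "cv", number = 10, savePredictions = TRUE)

# Fix mtry at a small value; with preProcess = "pca" the forest only sees
# the retained components, so keep mtry at or below that count.
rfModel <- train(classe ~ ., method = "rf", trControl = trainCtrl,
                 preProcess = "pca", data = training,
                 tuneGrid = data.frame(mtry = 2))
```

Dropping prox = TRUE should also cut both memory use and runtime considerably, since the proximity matrix it builds is n × n.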

Restructure output of R summary function

Is there an easy way to change the output format for R's summary function so that the results print in a column instead of row? R does this automatically when you pass summary a data frame. I'd like to print summary statistics in a column when I pass it a single vector. So instead of this:
>summary(vector)
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.000 1.000 2.000 6.699 6.000 559.000
It would look something like this:
>summary(vector)
Min. 1.000
1st Qu. 1.000
Median 2.000
Mean 6.699
3rd Qu. 6.000
Max. 559.000
Sure. Treat it as a data.frame:
set.seed(1)
x <- sample(30, 100, TRUE)
summary(x)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 1.00 10.00 15.00 16.03 23.25 30.00
summary(data.frame(x))
# x
# Min. : 1.00
# 1st Qu.:10.00
# Median :15.00
# Mean :16.03
# 3rd Qu.:23.25
# Max. :30.00
For slightly more usable output, you can use data.frame(unclass(.)):
data.frame(val = unclass(summary(x)))
# val
# Min. 1.00
# 1st Qu. 10.00
# Median 15.00
# Mean 16.03
# 3rd Qu. 23.25
# Max. 30.00
Or you can use stack:
stack(summary(x))
# values ind
# 1 1.00 Min.
# 2 10.00 1st Qu.
# 3 15.00 Median
# 4 16.03 Mean
# 5 23.25 3rd Qu.
# 6 30.00 Max.
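A tidyverse-flavored variant (assuming the tibble package is available; wrapping in c() drops the summaryDefault class so enframe sees a plain named vector):

```r
# Turn the named summary vector into a two-column tibble.
tibble::enframe(c(summary(x)), name = "stat", value = "val")
```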

geeglm in R: Error message about variable lengths differing

I am trying to run a GEE model with a logit outcome, using the following code.
mod.gee <- geeglm(general_elec ~ activity_outside_home + econ_scale_6_pt,
                  data = D_work_small, id = "pidlink",
                  family = binomial(link = "logit"), corstr = "ar1")
But I keep getting this error:
Error in model.frame.default(formula = general_elec ~ activity_outside_home+ :
variable lengths differ (found for '(id)')
My data are longitudinal, from two waves of a survey. I have tried omitting responses with NAs and changing the data types of the variables, but nothing has worked. Any suggestions, or has anyone run into this problem before?
My data is structured as follows:
> head(D_work_small)
pidlink econ_scale_6_pt activity_outside_home general_elec wave age female java educ_level pid_unit
1 001220001 3 1 1 3 48 0 0 1 0012200013
10 001220002 3 1 1 3 47 1 0 1 0012200023
19 001220003 2 1 1 3 27 0 0 4 0012200033
77 001250003 2 1 1 3 27 0 0 1 0012500033
79 001290001 2 1 1 3 52 0 0 1 0012900013
88 001290002 2 1 1 3 49 1 0 1 0012900023
> summary(D_work_small)
pidlink econ_scale_6_pt activity_outside_home general_elec wave
Length:44106 Min. :1.000 Min. :0.0000 Min. :0.0000 Min. :3.000
Class :character 1st Qu.:2.000 1st Qu.:0.0000 1st Qu.:1.0000 1st Qu.:3.000
Mode :character Median :3.000 Median :1.0000 Median :1.0000 Median :4.000
Mean :2.894 Mean :0.7048 Mean :0.8304 Mean :3.608
3rd Qu.:3.000 3rd Qu.:1.0000 3rd Qu.:1.0000 3rd Qu.:4.000
Max. :6.000 Max. :1.0000 Max. :1.0000 Max. :4.000
age female java educ_level pid_unit
Min. : 14.00 Min. :0.0000 Min. :0.0000 Min. :1.0 Length:44106
1st Qu.: 26.00 1st Qu.:0.0000 1st Qu.:0.0000 1st Qu.:1.0 Class :character
Median : 35.00 Median :1.0000 Median :0.0000 Median :2.0 Mode :character
Mean : 37.35 Mean :0.5118 Mean :0.4171 Mean :2.2
3rd Qu.: 47.00 3rd Qu.:1.0000 3rd Qu.:1.0000 3rd Qu.:3.0
Max. :999.00 Max. :1.0000 Max. :1.0000 Max. :5.0
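Not a confirmed answer, but one likely culprit: geeglm evaluates id inside data, so it expects a bare column name. Passing the string "pidlink" supplies a length-1 character vector, which would explain the "variable lengths differ (found for '(id)')" message. A sketch of the fix:

```r
library(geepack)

# id is given as an unquoted column name so geeglm looks it up in the data.
mod.gee <- geeglm(general_elec ~ activity_outside_home + econ_scale_6_pt,
                  data = D_work_small, id = pidlink,
                  family = binomial(link = "logit"), corstr = "ar1")
```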
