In the following plot:
I would like the count in the legend to be shown as a percentage instead of a raw count. To generate the plot, I have written the following script:
library("ggplot2")
library("RColorBrewer")
df <- read.csv("/home/adam/Desktop/data_norm.csv")
d <- ggplot(df, aes(case1, case2)) + geom_hex(bins = 30) + theme_bw() +
  theme(text = element_text(face = "bold", size = 16)) + xlab("Case 2") + ylab("Case 1")
d <- d + scale_fill_gradientn(colors = brewer.pal(3, "Dark2"))
Using the dput function to generate a reproducible example:
df <- structure(list(ID = c(14L, 15L, 38L, 6L, 7L, 1L, 32L, 31L,
17L, 30L, 19L, 24L, 5L, 5L, 7L, 8L, 35L, 4L, 1L, 6L, 45L, 58L,
59L, 5L, 11L, 29L, 6L, 7L, 22L, 23L, 3L, 4L, 25L, 3L, 20L, 16L,
21L, 109L, 108L, 54L, 111L, 105L, 114L, 28L, 27L, 2L, 24L, 26L,
50L, 49L, 51L, 48L, 56L, 54L, 53L, 55L, 57L, 52L, 25L, 22L, 34L,
23L, 19L, 38L, 39L, 18L, 13L, 27L, 11L), case1 = c(2L, 0L,
0L, 0L, 4L, 17L, 11L, 7L, 9L, 11L, 14L, 5L, 1L, 0L, 0L, 0L, 1L,
0L, 0L, 0L, 0L, 0L, 0L, 26L, 0L, 16L, 0L, 0L, 6L, 4L, 1L, 10L,
3L, 13L, 13L, 12L, 6L, 0L, 0L, 11L, 0L, 0L, 0L, 0L, 3L, 16L,
4L, 3L, 0L, 0L, 0L, 11L, 0L, 0L, 0L, 0L, 0L, 8L, 5L, 7L, 8L,
7L, 4L, 0L, 1L, 15L, 2L, 19L, 2L), case2 = c(30L, 0L, 0L,
0L, 30L, 30L, 29L, 29L, 29L, 29L, 29L, 29L, 30L, 30L, 30L, 30L,
30L, 30L, 30L, 30L, 0L, 29L, 25L, 30L, 30L, 29L, 0L, 0L, 29L,
29L, 30L, 30L, 30L, 30L, 29L, 29L, 29L, 0L, 3L, 29L, 16L, 14L,
0L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 23L, 29L, 30L,
30L, 30L, 30L, 30L, 30L, 30L, 30L, 29L, 0L, 30L, 29L, 30L, 29L,
30L)), class = "data.frame", row.names = c(NA, -69L))
How can I change the script so that it shows the counts as percentages instead of raw counts?
To change the way that the labels for a scale are shown without changing the underlying values, you can pass a reformatting function to the labels= argument of any scale_* function:
plot <- ggplot(df, aes(case1, case2)) + geom_hex(bins = 30) +
theme_bw() +
theme(text = element_text(face = "bold", size = 16)) +
xlab("Case 2") +
ylab("Case 1")
To convert from number of cases to percentage of total cases, we just divide each value by the total number of cases in df:
plot + scale_fill_gradientn(colors = brewer.pal(3,"Dark2"),
labels = function(x) x/nrow(df))
This shows proportions rather than percentages (with the 69-row example, a hexagon holding 7 points is labelled roughly 0.1). The answers to How can I change the Y-axis figures into percentages in a barplot? provide a number of ways to convert these into proper percentages, but the easiest is to use percent from the scales package (which is installed along with ggplot2):
plot + scale_fill_gradientn(colors = brewer.pal(3,"Dark2"),
labels = function(x) scales::percent(x/nrow(df)))
If you want to specify breaks so that the scale lists specific round percentages, note that the listed breaks will need to reference the original values, not the transformed percentages. You can do that by reversing whatever conversion you used in labels:
plot + scale_fill_gradientn(colors = brewer.pal(3,"Dark2"),
labels = function(x) scales::percent(x/nrow(df)),
breaks = c(.05, .1, .15, .2, .25, .3) * nrow(df))
I have a dataset with a variable that has a left-skewed distribution (the tail is on the left).
variable <- c(rep(35, 2), rep(36, 4), rep(37, 16), rep(38, 44), rep(39, 72), rep(40, 30))
I want to make this data more normally distributed so I can perform an ANOVA, but transforming with log10 or log2 leaves it heavily left-skewed. What transformation can I use to make this data more normal?
EDIT: My model is: mod <- lme(response ~ variable*variable2, random = ~1|group, data = data), so Kruskal-Wallis would work except for the random effect and its single-predictor limitation. I did a Shapiro-Wilk test, and my data is definitely non-normal. If justifiable, I would like to transform my data to give the ANOVA a better chance of detecting a significant result. Either that, or a mixed-effects test for non-normal data.
@Ben Bolker: Thank you for your reply; I appreciate it. I did read your answer, but I'm still reading up on exactly what some of your suggestions mean (I'm very new to statistics). My p-value is fairly close to significant and I don't want to p-hack, but I also want to give my data the best chance I can of being significant. If I can't justify transforming my data or using something besides ANOVA, then so be it.
I've provided a dataframe snapshot below. My response variable is "temp.max", the maximum temperature at which a plant dies. My predictor variables are "growth.chamber" (either a 29- or 21-degree growth chamber) and "environment" (either field or forest). My random variable is "groupID" (the group the plants were raised in, consisting of 5-10 individuals). This is a reciprocal transplant experiment, so I raised both forest and field plants in both 21- and 29-degree chambers. What I want to know is whether "temp.max" differs between field and forest populations, whether "temp.max" differs between growth chambers, and whether there is any interaction between environment and growth chamber with regard to temp.max. I would very, very much appreciate any help. Thank you.
> dput(data)
structure(list(groupID = structure(c(12L, 12L, 12L, 12L, 12L,
12L, 12L, 12L, 12L, 12L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L,
14L, 14L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 16L, 16L,
16L, 16L, 16L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L,
18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 17L, 17L, 17L,
17L, 17L, 17L, 17L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 6L, 6L, 6L, 6L, 6L, 6L, 6L,
6L, 6L, 6L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 7L, 7L, 7L, 7L, 7L, 7L,
7L, 7L, 7L, 7L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 13L, 13L, 13L,
13L, 13L, 13L, 13L, 13L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L,
9L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 11L, 11L,
11L, 11L, 11L, 8L, 8L, 8L, 8L, 8L), .Label = c("GRP_104", "GRP_111",
"GRP_132", "GRP_134", "GRP_137", "GRP_142", "GRP_145", "GRP_147",
"GRP_182", "GRP_192", "GRP_201", "GRP_28", "GRP_31", "GRP_40",
"GRP_68", "GRP_70", "GRP_78", "GRP_83", "GRP_92", "GRP_98"), class = "factor"),
individual = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 1L,
2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 1L, 2L, 3L, 4L, 5L,
6L, 7L, 8L, 9L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L, 4L, 5L, 6L,
7L, 8L, 9L, 10L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L,
1L, 2L, 3L, 4L, 5L, 6L, 7L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L,
9L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 16L, 17L, 1L, 2L, 3L,
4L, 5L, 6L, 7L, 8L, 9L, 10L, 1L, 2L, 3L, 4L, 5L, 1L, 2L,
3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 1L, 2L, 3L, 4L, 5L, 6L,
7L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 15L, 16L, 20L, 1L, 2L, 3L,
4L, 5L, 6L, 7L, 16L, 1L, 2L, 3L, 4L, 5L, 11L, 12L, 14L, 1L,
2L, 3L, 4L, 5L, 6L, 7L, 18L, 19L, 20L, 1L, 2L, 3L, 4L, 5L,
16L, 17L, 18L, 19L, 20L, 1L, 2L, 3L, 4L, 5L, 1L, 2L, 3L,
4L, 5L), temp.max = c(39L, 35L, 39L, 39L, 35L, 40L, 40L,
40L, 40L, 39L, 39L, 39L, 39L, 39L, 39L, 39L, 39L, 38L, 38L,
38L, 39L, 39L, 40L, 38L, 40L, 39L, 39L, 40L, 40L, 39L, 39L,
39L, 39L, 39L, 39L, 39L, 39L, 39L, 39L, 39L, 39L, 40L, 38L,
40L, 40L, 40L, 40L, 40L, 40L, 39L, 40L, 39L, 39L, 40L, 39L,
39L, 39L, 39L, 38L, 38L, 38L, 38L, 40L, 39L, 39L, 38L, 38L,
39L, 39L, 37L, 39L, 39L, 37L, 39L, 39L, 39L, 39L, 37L, 39L,
39L, 38L, 37L, 38L, 38L, 38L, 36L, 36L, 36L, 37L, 37L, 40L,
39L, 40L, 39L, 39L, 37L, 37L, 38L, 38L, 38L, 37L, 38L, 38L,
38L, 37L, 38L, 38L, 37L, 38L, 40L, 38L, 38L, 38L, 38L, 37L,
38L, 39L, 38L, 38L, 38L, 38L, 38L, 40L, 38L, 40L, 39L, 39L,
39L, 39L, 39L, 39L, 39L, 39L, 39L, 40L, 40L, 39L, 39L, 38L,
37L, 39L, 37L, 39L, 39L, 39L, 39L, 39L, 39L, 40L, 39L, 39L,
40L, 40L, 38L, 40L, 40L, 36L, 38L, 38L, 38L, 38L, 37L, 37L,
38L, 38L, 38L, 39L, 39L), environment = structure(c(1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L), .Label = c("field", "forest"), class = "factor"), growth.chamber = c(29L,
29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 21L, 21L, 21L,
21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L,
21L, 21L, 21L, 21L, 29L, 29L, 29L, 29L, 29L, 21L, 21L, 21L,
21L, 21L, 21L, 21L, 21L, 21L, 21L, 29L, 29L, 29L, 29L, 29L,
29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L,
21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 29L, 29L, 29L,
29L, 29L, 29L, 29L, 29L, 29L, 29L, 21L, 21L, 21L, 21L, 21L,
21L, 21L, 21L, 21L, 21L, 29L, 29L, 29L, 29L, 29L, 21L, 21L,
21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 29L, 29L, 29L, 29L,
29L, 29L, 29L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L,
21L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 21L, 21L, 21L,
21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L,
21L, 21L, 21L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L,
29L, 21L, 21L, 21L, 21L, 21L, 29L, 29L, 29L, 29L, 29L)), .Names = c("groupID",
"individual", "temp.max", "environment", "growth.chamber"), row.names = c(1L,
2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 21L, 22L, 23L, 24L, 25L,
26L, 27L, 28L, 29L, 30L, 41L, 42L, 43L, 44L, 45L, 46L, 47L, 48L,
49L, 58L, 59L, 60L, 61L, 62L, 68L, 69L, 70L, 71L, 72L, 73L, 74L,
75L, 76L, 77L, 88L, 89L, 90L, 91L, 92L, 93L, 94L, 95L, 96L, 97L,
108L, 109L, 110L, 111L, 112L, 113L, 114L, 122L, 123L, 124L, 125L,
126L, 127L, 128L, 129L, 130L, 139L, 140L, 141L, 142L, 143L, 144L,
145L, 146L, 147L, 148L, 158L, 159L, 160L, 161L, 162L, 163L, 164L,
165L, 166L, 167L, 178L, 179L, 180L, 181L, 182L, 188L, 189L, 190L,
191L, 192L, 193L, 194L, 195L, 196L, 197L, 208L, 209L, 210L, 211L,
212L, 213L, 214L, 222L, 223L, 224L, 225L, 226L, 227L, 228L, 229L,
230L, 231L, 242L, 243L, 244L, 245L, 246L, 247L, 248L, 249L, 258L,
259L, 260L, 261L, 262L, 263L, 264L, 265L, 272L, 273L, 274L, 275L,
276L, 277L, 278L, 279L, 280L, 281L, 292L, 293L, 294L, 295L, 296L,
297L, 298L, 299L, 300L, 301L, 312L, 313L, 314L, 315L, 316L, 322L,
323L, 324L, 325L, 326L), class = "data.frame")
tl;dr you probably don't actually need to worry about the skew here.
There are a few issues here, and since they're mostly statistical rather than programming-related, this question is probably more relevant for CrossValidated.
If I copied your data correctly, they're equivalent to this:
dd <- rep(35:40,c(2,4,16,44,72,30))
plot(table(dd))
Your data are discrete; that's why the density plot that @user113156 posted has distinct peaks.
Here are the issues:
the most important is that for most statistical purposes you're not actually interested in the Normality of the marginal distribution, which is what you're showing here. Rather, you want to know whether the distribution of the residuals from a model is Normal or not; for an ANOVA, this is equivalent to asking whether the distribution of values within each group is Normal (and the groups have similar within-group variances).
Normality is not very important; ANOVA is robust to moderate degrees of non-Normality (e.g. see here).
Log transformation modifies your data in the wrong direction (i.e. it will tend to increase the left skewness). In general, fixing this kind of left-skewed data requires a transformation such as raising to a power > 1 (the opposite direction from a log or square-root transformation), but when the values are far from zero it doesn't usually help very much anyway (see the sketch below).
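To see that direction-of-transformation point concretely, here is a minimal sketch using the variable from the question (the skew helper is hand-rolled for illustration, not part of the original answer):
variable <- rep(35:40, c(2, 4, 16, 44, 72, 30))

# sample skewness: negative values indicate a left (lower) tail
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3

skew(variable)       # negative: left-skewed
skew(log(variable))  # more negative: log moves the skew the wrong way
skew(variable^3)     # closer to zero: a power > 1 reduces the left skew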
Some statistical options if you are worried:
a non-parametric, rank-based test like the Kruskal-Wallis test (the rank-based analogue of one-way ANOVA); see the sketch after this list
do an ANOVA, but use a permutation-based approach to test statistical significance.
use an ordinal model
use hierarchical bootstrapping (resample with replacement, both within and between clusters) to derive more robust confidence intervals on parameters
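A minimal sketch of the first two options, assuming a single grouping factor called group (a hypothetical stand-in, since kruskal.test() cannot accommodate the random effect or a second predictor):
set.seed(101)
dd2 <- data.frame(
  response = rep(35:40, c(2, 4, 16, 44, 72, 30)),
  group    = factor(sample(c("field", "forest"), 168, replace = TRUE))
)

# option 1: rank-based test (one factor only)
kruskal.test(response ~ group, data = dd2)

# option 2: permutation test of the one-way ANOVA F statistic
f_stat <- function(d) summary(aov(response ~ group, data = d))[[1]][["F value"]][1]
obs  <- f_stat(dd2)
perm <- replicate(999, f_stat(transform(dd2, response = sample(response))))
mean(c(obs, perm) >= obs)  # permutation p-value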
Your variable follows a discrete distribution: integer values ranging from 35 (n = 2) to 40 (n = 30). I think you need to carry out an ordinal analysis, collapsing the values from 35 to 37, which have fewer observations, into a single category. Otherwise you could perform a non-parametric analysis using the kruskal.test() function.
I have bad news and good news.
the bad news is that I don't see statistically significant patterns in your data.
the good news is that, given the structure of your experimental design, you can analyze your data much more simply (you don't need mixed models)
load packages, adjust defaults
library(ggplot2); theme_set(theme_bw())
library(dplyr)
check structure of data
Tabulating the data confirms that this is a nested design; each group occurs within a single environment/growth chamber combination.
tt <- with(dd,table(groupID,
interaction(environment,growth.chamber)))
## exactly one non-zero entry per group
all(rowSums(tt>0)==1)
aggregate data
Convert growth.chamber to a categorical variable; collapse each group to its mean temp.max value (and record the number of observations per group)
dda <- (dd
  %>% mutate(growth.chamber = factor(growth.chamber))
  %>% group_by(groupID, environment, growth.chamber)
  %>% summarise(n = n(), temp.max = mean(temp.max))
)
ggplot(dda,aes(growth.chamber,temp.max,
colour=environment))+
geom_boxplot(aes(fill=environment),alpha=0.2)+
geom_point(position=position_dodge(width=0.75),
aes(size=n),alpha=0.5)+
scale_size(range=c(3,7))
Analysis
Now that we've aggregated (without losing any information we care about), we can use a linear regression with weights specifying the number of samples per observation:
m1 <- lm(temp.max~growth.chamber*environment,weights=n,
data=dda)
Checking distribution etc. of residuals:
plot(m1)
This all looks fine; no indication of serious bias, heteroscedasticity, non-Normality, or outliers ...
summary(m1)
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 38.2558 0.2858 133.845 <2e-16 ***
## growth.chamber29 0.3339 0.4144 0.806 0.432
## environmentforest 0.2442 0.3935 0.620 0.544
## growth.chamber29:environmentforest 0.3240 0.5809 0.558 0.585
## Residual standard error: 1.874 on 16 degrees of freedom
## Multiple R-squared: 0.2364, Adjusted R-squared: 0.09318
## F-statistic: 1.651 on 3 and 16 DF, p-value: 0.2174
Or a coefficient plot (dotwhisker::dwplot(m1))
While the plot of the data doesn't look like it's just noise, the statistical analysis suggests that we can't really distinguish it from noise ...
Consider the following data set:
SimulatedDated <- structure(list(CustumerId = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 5L, 5L,
5L, 5L, 5L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 7L,
7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L, 8L, 8L, 8L,
8L, 8L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 10L, 10L, 10L, 10L, 10L,
10L, 10L, 10L, 10L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L,
11L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 13L, 13L,
13L, 13L, 13L, 13L, 13L, 13L, 13L, 14L, 14L, 14L, 14L, 14L, 14L,
14L, 14L, 14L, 14L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L,
15L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 17L, 17L, 17L,
17L, 17L, 17L, 17L, 17L, 17L, 18L, 18L, 18L, 18L, 18L, 18L, 18L,
18L, 18L, 18L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 20L, 20L,
20L, 20L, 20L, 20L, 20L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L,
22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 23L, 23L, 23L, 23L,
23L, 23L, 23L, 23L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L,
25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 26L, 26L, 26L,
26L, 26L, 26L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 28L,
28L, 28L, 28L, 28L, 28L, 28L, 29L, 29L, 29L, 29L, 29L, 29L, 29L,
29L, 29L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 31L, 31L,
31L, 31L, 31L, 31L, 31L, 31L, 31L, 32L, 32L, 32L, 32L, 32L, 32L,
32L, 32L, 32L, 32L, 33L, 33L, 33L, 33L, 33L, 33L, 33L, 33L, 33L,
34L, 34L, 34L, 34L, 34L), ProductId = c(6L, 3L, 4L, 9L, 8L, 10L,
1L, 5L, 7L, 1L, 5L, 3L, 4L, 2L, 7L, 6L, 10L, 8L, 7L, 4L, 10L,
5L, 1L, 3L, 8L, 6L, 2L, 9L, 6L, 1L, 2L, 4L, 7L, 8L, 5L, 9L, 10L,
3L, 2L, 5L, 9L, 4L, 10L, 3L, 6L, 1L, 8L, 8L, 10L, 2L, 4L, 3L,
9L, 5L, 6L, 5L, 6L, 4L, 9L, 10L, 8L, 2L, 7L, 1L, 3L, 10L, 3L,
2L, 8L, 9L, 7L, 5L, 4L, 1L, 7L, 1L, 3L, 2L, 4L, 8L, 9L, 6L, 5L,
10L, 1L, 9L, 2L, 4L, 7L, 3L, 8L, 7L, 9L, 8L, 4L, 10L, 3L, 5L,
1L, 6L, 2L, 6L, 4L, 9L, 3L, 10L, 1L, 8L, 7L, 5L, 2L, 9L, 5L,
7L, 4L, 10L, 1L, 3L, 2L, 6L, 5L, 9L, 2L, 4L, 3L, 8L, 1L, 10L,
6L, 7L, 10L, 9L, 2L, 1L, 5L, 8L, 6L, 4L, 7L, 3L, 9L, 8L, 3L,
5L, 6L, 10L, 1L, 7L, 4L, 1L, 6L, 9L, 10L, 3L, 4L, 2L, 8L, 7L,
10L, 8L, 1L, 6L, 4L, 5L, 9L, 3L, 7L, 2L, 4L, 8L, 3L, 7L, 10L,
1L, 6L, 5L, 5L, 6L, 4L, 7L, 1L, 10L, 3L, 10L, 8L, 3L, 1L, 4L,
5L, 6L, 2L, 9L, 5L, 6L, 4L, 8L, 2L, 10L, 3L, 1L, 8L, 4L, 10L,
6L, 9L, 7L, 2L, 3L, 8L, 3L, 6L, 7L, 9L, 4L, 5L, 2L, 10L, 1L,
5L, 9L, 3L, 7L, 6L, 10L, 8L, 2L, 4L, 8L, 7L, 1L, 4L, 2L, 10L,
10L, 3L, 8L, 1L, 7L, 5L, 4L, 6L, 2L, 10L, 6L, 1L, 2L, 5L, 4L,
8L, 1L, 10L, 8L, 3L, 2L, 9L, 5L, 6L, 4L, 9L, 10L, 6L, 2L, 1L,
7L, 4L, 8L, 5L, 1L, 5L, 9L, 10L, 3L, 8L, 7L, 2L, 4L, 10L, 1L,
5L, 7L, 6L, 2L, 3L, 4L, 9L, 8L, 1L, 5L, 2L, 7L, 3L, 6L, 10L,
4L, 9L, 9L, 5L, 10L, 8L, 2L), DaysSinceEpoch = c(7L, 20L, 31L,
40L, 105L, 146L, 162L, 169L, 212L, 10L, 18L, 31L, 65L, 84L, 122L,
156L, 202L, 206L, 1L, 4L, 7L, 11L, 14L, 24L, 25L, 100L, 148L,
149L, 3L, 10L, 12L, 14L, 18L, 26L, 35L, 41L, 96L, 147L, 9L, 22L,
66L, 80L, 102L, 104L, 170L, 199L, 234L, 10L, 24L, 36L, 38L, 75L,
122L, 163L, 169L, 9L, 16L, 35L, 39L, 54L, 58L, 79L, 116L, 133L,
224L, 27L, 35L, 37L, 49L, 73L, 91L, 105L, 141L, 252L, 16L, 28L,
51L, 73L, 76L, 83L, 126L, 202L, 97L, 105L, 150L, 172L, 203L,
207L, 223L, 256L, 259L, 25L, 28L, 38L, 40L, 63L, 100L, 120L,
176L, 186L, 191L, 7L, 22L, 36L, 37L, 40L, 41L, 53L, 67L, 114L,
233L, 1L, 16L, 17L, 23L, 40L, 52L, 125L, 184L, 186L, 12L, 42L,
53L, 65L, 67L, 69L, 83L, 149L, 154L, 265L, 10L, 14L, 33L, 47L,
67L, 106L, 133L, 181L, 247L, 258L, 6L, 21L, 26L, 41L, 49L, 68L,
89L, 112L, 119L, 9L, 34L, 88L, 91L, 102L, 110L, 132L, 171L, 200L,
6L, 14L, 21L, 36L, 40L, 60L, 64L, 88L, 109L, 208L, 8L, 17L, 21L,
55L, 77L, 85L, 97L, 168L, 18L, 28L, 42L, 44L, 70L, 77L, 101L,
14L, 23L, 33L, 84L, 107L, 123L, 124L, 125L, 25L, 29L, 33L, 57L,
79L, 83L, 98L, 112L, 119L, 5L, 31L, 64L, 91L, 102L, 131L, 222L,
234L, 27L, 46L, 48L, 60L, 61L, 64L, 72L, 103L, 161L, 8L, 24L,
27L, 50L, 60L, 62L, 92L, 99L, 147L, 159L, 16L, 19L, 20L, 84L,
175L, 202L, 17L, 21L, 25L, 46L, 69L, 121L, 161L, 175L, 267L,
10L, 14L, 20L, 39L, 58L, 90L, 229L, 32L, 35L, 39L, 40L, 60L,
66L, 98L, 153L, 173L, 2L, 3L, 25L, 46L, 51L, 80L, 96L, 166L,
202L, 43L, 70L, 76L, 77L, 115L, 160L, 183L, 202L, 223L, 25L,
33L, 61L, 72L, 74L, 77L, 85L, 91L, 152L, 265L, 16L, 62L, 63L,
64L, 66L, 82L, 104L, 126L, 181L, 47L, 49L, 55L, 58L, 67L), BoughtPAD = c(0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 1L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L)), .Names = c("CustumerId",
"ProductId", "DaysSinceEpoch", "BoughtPAD"), row.names = c(NA,
300L), class = "data.frame")
Then, doing
library(TraMineR)
SimSeq <- seqecreate(id = SimulatedDated$CustumerId,
timestamp = SimulatedDated$DaysSinceEpoch,
event = SimulatedDated$ProductId)
Cohort <- factor(SimulatedDated$BoughtPAD, labels = c("PAD", "NPAD"))
Fsubseq <- seqefsub(seq = SimSeq, pMinSupport = .01)
DiscrCohort <- seqecmpgroup(subseq = Fsubseq, group = Cohort)
produces:
Error in model.frame.default(formula = ww ~ group + seqmatrix[, index]) :
variable lengths differ (found for 'group')
and I was wondering what could be causing this problem.
The group variable should have a length equal to the number of sequences, i.e., the number of customers in your case. It is also supposed to remain constant along each sequence (which is not the case in your example).
The Cohort variable that you pass as the group argument has length equal to the total number of events (300), while you have only 34 customers. So you need to aggregate it by CustumerId.
Here is how you can do that (here by taking the max of the group value for each customer):
bylist <- list(id = SimulatedDated$CustumerId)
agg.PAD <- aggregate(SimulatedDated[,c("CustumerId","BoughtPAD")], by=bylist, FUN="max")
Cohort <- agg.PAD$BoughtPAD
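As a quick sanity check (my addition, not part of the original answer), the aggregated Cohort now has exactly one value per customer:
length(Cohort)                             # 34
length(unique(SimulatedDated$CustumerId))  # 34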
Now you can look for the subsequences that best discriminate the groups
DiscrCohort <- seqecmpgroup(subseq = Fsubseq, group = Cohort)
print(DiscrCohort[1:10])
Hope this helps.
I have an adjacency list, shown here via dput(data):
list(c(2L, 3L, 5L, 6L, 7L, 8L, 9L, 11L, 12L, 13L, 14L, 18L, 22L,
32L), c(1L, 3L, 4L, 8L, 14L, 18L, 20L, 31L), c(2L, 4L, 9L, 10L,
14L, 28L, 29L, 33L), c(1L, 2L, 3L, 8L, 13L), c(1L, 7L, 11L),
c(1L, 11L, 17L), c(5L, 6L, 17L), c(1L, 3L, 4L), c(1L, 3L,
31L, 34L), c(3L, 34L, 33L), c(1L, 5L, 6L), c(1L, 32L, 23L
), c(1L, 4L, 2L), c(1L, 2L, 4L, 34L), c(33L, 34L, 29L), c(33L,
34L, 33L), c(6L, 7L, 19L), c(1L, 2L, 10L), c(33L, 34L, 14L
), c(1L, 2L, 34L), c(33L, 34L, 26L), c(1L, 2L, 15L), c(33L,
34L, 6L), c(26L, 28L, 33L, 34L), c(26L, 28L, 32L), c(24L,
25L, 32L), c(30L, 34L, 20L), c(3L, 24L, 34L), c(3L, 32L,
34L), c(24L, 27L, 33L), c(9L, 33L, 34L), c(1L, 25L, 26L,
33L), c(3L, 9L, 15L, 16L, 19L, 21L, 24L, 30L, 31L, 32L, 34L
), c(9L, 10L, 14L, 15L, 16L, 19L, 21L, 23L, 24L, 28L, 29L,
30L, 31L, 32L, 33L))
The list is similar to an adjacency list, and I want to convert it into an igraph object in R. I have tried using
graph_from_data_frame(data, directed = FALSE, vertices = NULL)
but it is not working. I have also tried
graph_from_adj_list(data,mode="all",duplicate=FALSE)
but this gives the wrong graph. For example, 1 -> 2,3,5,6 should produce the edges 1->2, 1->3, 1->5, 1->6, but it gives seemingly arbitrary output such as 1->3, 2->5, 3->6, 2->6, and so on.
Any idea how this could be done in R?
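For reference, a minimal sketch (an illustrative addition, not from the original thread) that builds the graph explicitly, so that vertex i is connected exactly to the vertices listed in data[[i]]:
library(igraph)

# expand the adjacency list into a two-column edge matrix
edges <- do.call(rbind,
                 lapply(seq_along(data),
                        function(i) cbind(i, data[[i]])))
# duplicates from symmetric listings are dropped by simplify()
g <- simplify(graph_from_edgelist(edges, directed = FALSE))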
############ uncoded data
x10<- structure(c(0L, 0L, 0L, 0L, 1L, 1L, 1L, 5L, 8L, 9L, 31L, 1L,
0L, 0L, 0L, 1L, 0L, 1L, 2L, 7L, 2L, 10L, 0L, 2L, 0L, 2L, 2L,
5L, 2L, 4L, 6L, 8L, 4L, 1L, 1L, 3L, 2L, 2L, 6L, 1L, 12L, 18L,
7L, 29L, 8L, 4L, 6L, 8L, 6L, 19L, 3L, 9L, 12L, 3L, 12L, 14L,
1L, 2L, 1L, 3L, 1L, 0L, 4L, 6L, 3L, 11L, 0L, 0L, 0L, 1L, 3L,
7L, 5L, 8L, 21L, 26L, 51L, 0L, 1L, 0L, 3L, 5L, 10L, 9L, 29L,
55L, 60L, 125L, 3L, 0L, 1L, 1L, 3L, 10L, 1L, 6L, 18L, 17L, 13L,
6L, 3L, 4L, 13L, 6L, 33L, 17L, 48L, 84L, 54L, 103L, 34L, 11L,
20L, 27L, 26L, 50L, 29L, 30L, 54L, 28L, 34L, 31L, 5L, 7L, 3L,
4L, 20L, 8L, 16L, 16L, 8L, 41L, 1L, 0L, 0L, 3L, 1L, 3L, 3L, 11L,
19L, 16L, 56L, 0L, 0L, 0L, 0L, 3L, 11L, 3L, 18L, 25L, 21L, 62L,
3L, 0L, 1L, 4L, 2L, 7L, 8L, 15L, 22L, 12L, 19L, 5L, 2L, 8L, 9L,
9L, 42L, 18L, 51L, 70L, 45L, 103L, 29L, 15L, 23L, 34L, 25L, 57L,
23L, 38L, 55L, 30L, 33L, 36L, 5L, 5L, 6L, 6L, 16L, 6L, 10L, 17L,
9L, 35L, 2L, 0L, 1L, 1L, 2L, 4L, 6L, 8L, 22L, 33L, 73L, 0L, 0L,
0L, 1L, 2L, 7L, 7L, 15L, 27L, 21L, 56L, 1L, 2L, 2L, 0L, 2L, 9L,
4L, 8L, 24L, 13L, 17L, 14L, 2L, 8L, 10L, 16L, 51L, 16L, 51L,
69L, 29L, 99L, 44L, 18L, 25L, 34L, 19L, 49L, 26L, 43L, 63L, 15L,
30L, 42L, 9L, 17L, 7L, 3L, 16L, 8L, 13L, 22L, 18L, 45L, 0L, 0L,
1L, 3L, 0L, 7L, 4L, 14L, 15L, 20L, 47L, 0L, 1L, 0L, 1L, 1L, 3L,
3L, 5L, 6L, 11L, 21L, 1L, 0L, 0L, 4L, 2L, 3L, 8L, 7L, 17L, 3L,
13L, 5L, 2L, 6L, 13L, 15L, 34L, 19L, 42L, 62L, 37L, 83L, 52L,
16L, 26L, 26L, 29L, 53L, 28L, 45L, 45L, 15L, 22L, 26L, 8L, 12L,
11L, 5L, 12L, 5L, 7L, 17L, 10L, 28L), .Dim = c(11L, 6L, 5L), .Dimnames = structure(list(
c("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"),
c("I've changed for work/ a new job/ gone on a work plan",
"I want a phone that doesn't offer", "I want Best Mates/ Favourites",
"I was offered or saw a better offer on another network",
"Issues with the network (poor coverage)", "Other"
), YearQuarter = c("2011-09-01", "2011-12-01", "2012-03-01",
"2012-06-01", "2012-09-01")), .Names = c("", "", "YearQuarter"
)), class = "table")
############ recoded data
x10 <- structure(c(40L, 3L, 13L, 12L, 3L, 9L, 12L, 13L, 10L, 36L, 16L,
30L, 15L, 54L, 21L, 14L, 22L, 10L, 77L, 16L, 29L, 185L, 28L,
84L, 30L, 19L, 24L, 157L, 82L, 132L, 62L, 197L, 84L, 49L, 78L,
32L, 72L, 11L, 30L, 83L, 17L, 43L, 31L, 25L, 37L, 148L, 93L,
121L, 63L, 206L, 93L, 44L, 80L, 27L, 106L, 16L, 30L, 77L, 17L,
42L, 30L, 20L, 32L, 128L, 117L, 120L, 45L, 215L, 106L, 63L, 102L,
35L, 67L, 15L, 29L, 32L, 9L, 11L, 16L, 18L, 24L, 120L, 94L, 104L,
37L, 230L, 90L, 38L, 79L, 24L), .Dim = c(3L, 6L, 5L), .Dimnames = structure(list(
c("Promoters", "Detractors", "Passive"), c("I've changed for work/ a new job/ gone on a work plan",
"I want a phone that doesn't offer", "I want Best Mates/ Favourites",
"I was offered or saw a better offer on another network",
"Issues with the network (poor coverage)", "Other"
), YearQuarter = c("2011-09-01", "2011-12-01", "2012-03-01",
"2012-06-01", "2012-09-01")), .Names = c("", "", "YearQuarter"
)), class = "table")
x10.p <- round(prop.table(x10,c(3,2)),2)*100
Hi there
The Net Promoter Score comes from a question that asks consumers to rate 'the likelihood to recommend the product or the service' on a zero-to-ten scale. People who answer 9 or 10 are called 'Promoters', people who answer 7 or 8 are 'Passives', and people who answer 6 or below are 'Detractors'. The Net Promoter Score is the percentage of Promoters minus the percentage of Detractors (for example, 50% Promoters and 20% Detractors give a score of 30).
I summarised and recoded the answers into the table x10, covering Sep 2011 to Sep 2012. The numbers are actual people counts for each group (Promoter, Detractor and Passive). Apologies for the three-dimensional table; I am interested in the Net Promoter Score for each reason (e.g. what is the percentage difference between Promoters and Detractors for "I've changed for work/ a new job/ gone on a work plan" in Sep 2012).
Computing the Net Promoter Score before I can plot it requires a bit of manipulation. I wonder if anyone knows how to do it?
Cheers
First, don't round until you've done all your calculations (otherwise your percentages won't add up to 100). Working from the uncoded 0-10 table x10:
x10.p <- prop.table(x10,c(3,2))*100
# promoters are the two top ratings (9 and 10)
promoters <- apply(x10.p, 2:3, function(x) sum(tail(x, 2)))
# detractors are ratings 0 through 6 (the first seven rows)
detractors <- apply(x10.p, 2:3, function(x) sum(head(x, 7)))
# passive is everything else
passive <- 100 - (detractors + promoters)
# the net score
net <- promoters - detractors
net
YearQuarter
2011-09-01 2011-12-01 2012-03-01 2012-06-01 2012-09-01
I've changed for work/ a new job/ gone on a work plan 66.071429 50.00000 53.982301 59.210526 46.846847
I want a phone that doesn't offer 37.500000 52.86195 46.153846 44.117647 44.230769
I want Best Mates/ Favourites -2.857143 15.06849 6.451613 12.195122 -3.448276
I was offered or saw a better offer on another network 24.390244 20.21563 15.193370 3.013699 8.176101
Issues with the network (poor coverage) -43.333333 -39.35860 -39.502762 -46.448087 -54.061625
Other -17.391304 -18.23899 -23.841060 -19.500000 -29.078014
You want September 2012, so select just that column, with drop = FALSE to ensure the result is still a matrix with one column:
net[,'2012-09-01', drop = FALSE]
YearQuarter
2012-09-01
I've changed for work/ a new job/ gone on a work plan 46.846847
I want a phone that doesn't offer 44.230769
I want Best Mates/ Favourites -3.448276
I was offered or saw a better offer on another network 8.176101
Issues with the network (poor coverage) -54.061625
Other -29.078014