R Beginner: Loops on a Welch t-test

I am just starting to pick up R to help with a new retailing project, and although I can punch in some basic functions, I am looking for a way to do some sales comparisons more efficiently. The following is a condensed example.
I would like to compare the means for total purchases by six different types of customers (noted using the factor MemberType with 6 levels, one for each type of rewards membership enrollment).
Although I can certainly do something like this:
>m<-t.test(TotalPurchase[MemberType=='2'],TotalPurchase[MemberType=='4'])
for each pair, my objective here is to avoid running the test for each pair of factor levels manually.
At this early stage I do not understand conceptually how to go about this. Is it possible to use the function across a vector of unique factor levels, e.g.
>tp<-data.frame(levels(MemberType))
? If so, are there any insights on how/whether to construct a nested for-loop something like
>for(i in tp) function(i){
>##something like tt<-t.test(TotalPurchase[MemberType==i],##
>##+t.test(TotalPurchase[MemberType==i])##
>+}
with an additional layer? I have also monkeyed around with the 'apply' family of functions, but am stumped by 1) the need for two inputs into the two-sample t.test, and 2) the indexing syntax in the for() and lapply() arguments that tells R which vector of values to use in the t-test.
Any specific help on this problem or polite guidance on my formatting in R (or in Stack Overflow) will be greatly appreciated by this novice. Thanks!

The most straightforward way is to use pairwise.t.test() in the stats package.
But you need to be clear on what type of multiple-test adjustment you'd like to use to control your family-wise error rate. So it's really a statistics question and not a programming question. Do you have a preference between Bonferroni and other methods?
You're also unclear on whether you're using pooled variance or not.
Finally, your data description is unclear.
If you have a data frame where purchases is the measurement variable, MemberType is the customer category, and ItemType is the item category, and you want Bonferroni corrections and an unpooled SD, this will work for the example ItemType == "d":
df <- data.frame(purchases  = rnorm(100, 50, 20),
                 MemberType = factor(sample(c("a", "b", "c"), 100, replace = TRUE)),
                 ItemType   = factor(sample(c("d", "e", "f"), 100, replace = TRUE)))
df2 <- df[df$ItemType == "d", ]
pairwise.t.test(df2$purchases, df2$MemberType, p.adj = "bonf", pool.sd = FALSE)
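If you also want the raw, unadjusted Welch tests for every pair (closer to the loop/lapply idea in your question), here is a minimal sketch reusing df2 from above; t.test() defaults to the Welch (unpooled) version:
pairs <- combn(levels(df2$MemberType), 2, simplify = FALSE)
welch_tests <- lapply(pairs, function(p)
  t.test(df2$purchases[df2$MemberType == p[1]],
         df2$purchases[df2$MemberType == p[2]]))
names(welch_tests) <- sapply(pairs, paste, collapse = " vs ")
sapply(welch_tests, function(tt) tt$p.value)  # unadjusted p-values per pair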
Please provide a complete description of your problem and I can update this solution as needed.

Related

Generating data with a loop to use the predict function in R

I've built a model with numeric and factor variables to predict sales based on advertising, using weekly data from 2017 to 2019, and I am trying to run code that will predict the monthly sales for 2020 for each combination of variables.
For that, I need to input the right factor variables, and I was wondering what would be the best way to go about it. Here is what my regression looks like:
regads2 = lm(dolsales~flavour+brand+packaging+month+organization+manufacturer+region+displaydummie+addummie+maddummie+laddummie+discount5to10+discount10to15
+discount15to20+discount20to25+discount25to30+discount30to35+discount35to40+discount40to45+discount45to50+
discount50to55+discount55to60+discount60to65+discount65to70+discount70to75, na.action = na.exclude,data = df)
Each of the factor variables has many levels (from 5 to 30), while the "discount" variables are dummies. I tried to write a loop that would generate the data based on the levels of the variables and store them, but I haven't been able to fully get there and I am finding myself a little stuck. Here is what I wrote so far (it is working for one variable, but not for many):
input <- matrix(ncol = 1, nrow = nlevels(region))
for(i in 1:nlevels(region)) {
  input[i, ] <- levels(region)[i]
}
input
I imagine there is a simpler way to go about it.
Thanks so much, I've been stuck on that for a good week now.
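One common way to build such a grid of combinations (a sketch only, assuming the factor columns named in the lm() call above exist in df) is expand.grid(), whose output can then be handed to predict():
# every combination of a few of the model's factors
newdata <- expand.grid(flavour = levels(df$flavour),
                       brand   = levels(df$brand),
                       region  = levels(df$region),
                       month   = levels(df$month))
newdata$displaydummie <- 0   # the remaining predictors must be filled in too,
                             # e.g. dummies held at a fixed value
# predicted <- predict(regads2, newdata = newdata)  # once all predictors are present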

Apply a Fisher test in a large dataset that joins all contingency tables

I have a dataset like this:
contingency_table<-tibble::tibble(
x1_not_happy = c(1,4),
x1_happy = c(19,31),
x2_not_happy = c(1,4),
x2_happy= c(19,28),
x3_not_happy=c(14,21),
X3_happy=c(0,9),
x4_not_happy=c(3,13),
X4_happy=c(17,22)
)
In fact, there are many other variables that come from a poll applied in two different years.
Then I apply a Fisher test to each 2x2 contingency matrix, using this code:
matrix1_prueba <- contingency_table[1:2,1:2]
matrix2_prueba<- contingency_table[1:2,3:4]
fisher1<-fisher.test(matrix1_prueba,alternative="two.sided",conf.level=0.9)
fisher2<-fisher.test(matrix2_prueba,alternative="two.sided",conf.level=0.9)
I would like to run this task with shorter code, by means of a function or a loop. The output must be a vector with the p-values for each question.
Thanks,
Frederick
So this was a bit of fun to do. The main thing you need to recognize is that you want combinations of your data. There are a number of functions in R that can do that for you; the main workhorse is combn().
So, in the language of the problem, we want all combinations of the columns of your tibble taken 2 at a time.
From there, you just need some looping structure to get your tests to work and to extract the p-values from the result.
list_tables <- lapply(combn(contingency_table,2,simplify=F), fisher.test)
unlist(lapply(list_tables, `[`, 'p.value'))
This should produce your answer.
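Equivalently (a small variation on the extraction above), sapply() returns the p-values as a plain named numeric vector:
sapply(list_tables, function(tt) tt$p.value)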
EDIT
Given the updated requirement of using only adjacent data.frame columns, the following modifications should work.
full_list <- combn(contingency_table,2,simplify=F)
full_list <- full_list[sapply(
full_list, function(x) all(startsWith(names(x), substr(names(x)[1], 1,2))))]
full_list <- lapply(full_list, fisher.test)
unlist(lapply(full_list, `[`, 'p.value'))
This is approximately the same code as before, but now we have to find the subsets of the data that share the same question prefix. This only works if the prefixes match exactly (X3 != x3). I think this is a better solution than trying to work with column indexes, which come with no guarantee that related columns are always next to one another; the sapply() call does exactly that prefix filtering. The final output should be what you need for the problem.

Mixed Anova in R

I am trying to do an ANOVA analysis in R on a data set with one within factor and one between factor. The data are from an experiment to test the similarity of two testing methods. Each subject was tested in Method 1 and Method 2 (the within factor) as well as being in one of 4 different groups (the between factor). I have tried using the aov(), Anova() (in the car package), and ezANOVA() functions. I am getting wrong values for every method I try. I am not sure where my mistake is, or whether it is a lack of understanding of R or of the ANOVA itself. I included the code I used that I feel should be working. I have tried a ton of variations of this, hoping to stumble on the answer. This set of data is balanced, but I have a lot of similar data sets and many are unbalanced. Thanks for any help you can provide.
library(car)
library(ez)
#set up data
sample_data <- data.frame(Subject=rep(1:20,2),Method=rep(c('Method1','Method2'),each=20),Level=rep(rep(c('Level1','Level2','Level3','Level4'),each=5),2))
sample_data$Result <- c(4.76,5.03,4.97,4.70,5.03,6.43,6.44,6.43,6.39,6.40,5.31,4.54,5.07,4.99,4.79,4.93,5.36,4.81,4.71,5.06,4.72,5.10,4.99,4.61,5.10,6.45,6.62,6.37,6.42,6.43,5.22,4.72,5.03,4.98,4.59,5.06,5.29,4.87,4.81,5.07)
sample_data[, 'Subject'] <- as.factor(sample_data[, 'Subject'])
#Set the contrasts if needed to run type 3 sums of squares for unbalanced data
#options(contrasts=c("contr.sum","contr.poly"))
#With the aov method, which as I understand it 'should' work
anova_aov <- aov(Result ~ Method*Level + Error(Subject/Method), data=sample_data)
print(summary(anova_aov))
#ezAnova method,
anova_ez = ezANOVA(data=sample_data, wid=Subject, dv = Result, within = Method, between=Level, detailed = TRUE, type=3)
print(anova_ez)
Also, here are the values I should be getting, as output by SAS:
[screenshot: SAS ANOVA output]
Actually, your R code is correct in both cases. Running these data through SPSS yielded the same results. SAS, like SPSS, seems to require that the levels of the within factor appear in separate columns; you will end up with 20 rows instead of 40. An arrangement like the one below might give you the desired result in SAS:
Subject Level Method1 Method2
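If it helps, a minimal sketch of producing that wide arrangement with base R's reshape() (assuming the sample_data frame from the question):
wide <- reshape(sample_data,
                idvar     = c("Subject", "Level"),
                timevar   = "Method",
                v.names   = "Result",
                direction = "wide")
names(wide) <- sub("Result.", "", names(wide), fixed = TRUE)  # Subject, Level, Method1, Method2
head(wide)  # 20 rows, one per subject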

Frequency weighting in R, comparing results with Stata

I'm trying to analyze data from the University of Minnesota IPUMS dataset for the 1990 US census in R. I'm using the survey package because the data is weighted. Just taking the household data (and ignoring the person variables to keep things simple), I am attempting to calculate the mean of hhincome (household income). To do this I created a survey design object using the svydesign() function with the following code:
> require(foreign)
> ipums.household <- read.dta("/path/to/stata_export.dta")
> ipums.household[ipums.household$hhincome==9999999, "hhincome"] <- NA # Fix missing
> ipums.hh.design <- svydesign(id=~1, weights=~hhwt, data=ipums.household)
> svymean(ipums.household$hhincome, ipums.hh.design, na.rm=TRUE)
mean SE
[1,] 37029 17.365
So far so good. However, I get a different standard error if I attempt the same calculation in Stata (using code meant for a different portion of the same dataset):
use "C:\I\Hate\Backslashes\stata_export.dta"
replace hhincome = . if hhincome == 9999999
(933734 real changes made, 933734 to missing)
mean hhincome [fweight = hhwt] # The code from the link above.
Mean estimation Number of obs = 91746420
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
hhincome | 37028.99 3.542749 37022.05 37035.94
--------------------------------------------------------------
And, looking at another way to skin this cat, the author of survey has this suggestion for frequency weighting:
expanded.data<-as.data.frame(lapply(compressed.data,
function(x) rep(x,compressed.data$weights)))
However, I can't seem to get this code to work:
> hh.dataframe <- data.frame(ipums.household$hhincome, ipums.household$hhwt)
> expanded.hh.dataframe <- as.data.frame(lapply(hh.dataframe, function(x) rep(x, hh.dataframe$hhwt)))
Error in rep(x, hh.dataframe$hhwt) : invalid 'times' argument
Which I can't seem to fix. This may be related to this issue.
So in sum:
1) Why don't I get the same answers in Stata and R?
2) Which one is right (or am I doing something wrong in both cases)?
3) Assuming I got the rep() solution working, would that replicate Stata's results?
4) What's the right way to do it? Kudos if the answer allows me to use the plyr package for doing arbitrary calculations, rather than being limited to the functions implemented in survey (svymean(), svyglm(), etc.).
Update
So, after the excellent help I've received here and from IPUMS via email, I'm using the following code to properly handle survey weighting. I describe it here in case someone else has this problem in the future.
Initial Stata Preparation
Since IPUMS don't currently publish scripts for importing their data into R, you'll need to start from Stata, SAS, or SPSS. I'll stick with Stata for now. Begin by running the import script from IPUMS. Then before continuing add the following variable:
generate strata = statefip*100000 + puma
This creates a unique integer for each PUMA, of the form 240001, with the first two digits being the state FIPS code (24 in the case of Maryland) and the last four a PUMA id which is unique on a per-state basis. If you're going to use R, you might also find it helpful to run this as well:
generate statefip_num = statefip * 1
This will create an additional variable without labels, since importing .dta files into R applies the labels and loses the underlying integers.
Stata and svyset
As Keith explained, survey sampling is handled by Stata by invoking svyset.
For an individual level analysis I now use:
svyset serial [pweight=perwt], strata(strata)
This sets the weighting to perwt, the stratification to the variable we created above, and uses the household serial number to account for clustering. If we were using multiple years, we might want to try
generate double yearserial = year*100000000 + serial
to account for longitudinal clustering as well.
For household level analysis (without years):
svyset serial [pweight=hhwt], strata(strata)
Should be self-explanatory (though I think in this case serial is actually superfluous). Replacing serial with yearserial will take into account a time series.
Doing it in R
Assuming you're importing a .dta file with the additional strata variable explained above and analysing at the individual level:
require(foreign)
ipums <- read.dta('/path/to/data.dta')
require(survey)
ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=~perwt)
Or at the household level:
ipums.hh.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=~hhwt)
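For example (a brief usage sketch with the hhincome variable from above, using survey's formula interface):
svymean(~hhincome, ipums.hh.design, na.rm = TRUE)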
Hope someone finds this helpful, and thanks so much to Dwin, Keith and Brandon from IPUMS.
1&2) The comment you cited from Lumley was written in 2001 and predates any of his published work with the survey package, which has only been out a few years. You are probably using "weights" in two different senses. (Lumley describes three possible senses early in his book.) The survey function svydesign() uses probability weights rather than frequency weights. It seems likely that these are not really frequency weights but rather probability weights, given the massive size of that dataset, and that would mean that the survey package result is correct and the Stata result incorrect. If you are not convinced, the survey package offers the function as.svrepdesign(), and Lumley's book describes how to use it to create a replicate-weights design from a svydesign object.
3) I think so, but as RMN said ..."It would be wrong."
4) Since it's wrong (IMO) it's not necessary.
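For reference, a minimal sketch of the as.svrepdesign() conversion mentioned in 1&2, assuming the ipums.hh.design object created above:
ipums.hh.rep <- as.svrepdesign(ipums.hh.design)  # replicate-weights version (can be slow on data this large)
svymean(~hhincome, ipums.hh.rep, na.rm = TRUE)   # should agree closely with the svydesign result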
You shouldn't be using frequency weights in Stata. That is pretty clear. If IPUMS doesn't have a "complex" survey design, you can just use:
mean hhincome [pw = hhwt]
Or, for convenience:
svyset [pw = hhwt]
svy: mean hhincome
svy: regress hhincome `x'
What's nice about the second option is that you can use it for more complex survey designs (via options on svyset). Then you can run lots of commands without having to type [pw...] all the time.
A slight addition for people who don't have access to Stata or SAS (I would put this in the comments, but...):
The SAScii library can use the SAS code file to read the downloaded IPUMS data into R. The code to read in the data is from the documentation:
library(SAScii)
IPUMS.file.location <- "..\\usa_00007dat\\usa_00007.dat"
IPUMS.SAS.read.in.instructions <- "..\\usa_00007dat\\usa_00007.sas"
#store the IPUMS extract as an R data frame!
IPUMS.df <-
  read.SAScii(
    IPUMS.file.location,
    IPUMS.SAS.read.in.instructions,
    zipped = F)

Handling missing/incomplete data in R -- is there a function to mask but not remove NAs?

As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well, for instance:
Many R functions have an na.rm flag that, when set to TRUE, removes the NAs:
> v <- mean(c(5, NA, 6, 12, NA, 87, 9, NA, 43, 67), na.rm=TRUE)
> v
[1] 32.71429
But if you want to deal with NAs before the function call, you need to do something like this:
to remove each 'NA' from a vector:
vx = vx[!is.na(vx)]
to replace each 'NA' in a vector with a '0':
vx = ifelse(is.na(vx), 0, vx)
to remove each entire row that contains an 'NA' from a data frame:
dfx = dfx[complete.cases(dfx),]
All of these functions permanently remove 'NA' or rows with an 'NA' in them.
Sometimes this isn't quite what you want, though: making an 'NA'-excised copy of the data frame might be necessary for one step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column whose rows were dropped by a prior call to complete.cases(), even though the column itself has no 'NA' values in it).
To be as clear as possible about what I'm looking for: Python/NumPy has a masked array class, with a mask attribute, which lets you conceal, but not remove, NAs during a function call. Is there an analogous facility in R?
Exactly what to do with missing data -- which may be flagged as NA if we know it is missing -- may well differ from domain to domain.
To take an example related to time series, where you may want to skip, fill, interpolate, or interpolate differently: the (very useful and popular) zoo package has a whole set of functions for NA handling:
zoo::na.approx zoo::na.locf
zoo::na.spline zoo::na.trim
allowing you to approximate (using different algorithms), carry values forward or backward, use spline interpolation, or trim leading and trailing NAs.
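A quick sketch of a few of these (assuming the zoo package is installed):
library(zoo)
z <- zoo(c(1, NA, NA, 4, 5), order.by = 1:5)
na.locf(z)    # carry the last observation forward: 1 1 1 4 5
na.approx(z)  # linear interpolation:               1 2 3 4 5
na.trim(zoo(c(NA, 1, 2, NA)))  # drop leading/trailing NAs: 1 2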
Another example would be the numerous missing imputation packages on CRAN -- often providing domain-specific solutions. [ So if you call R a DSL, what is this? "Sub-domain specific solutions for domain specific languages" or SDSSFDSL? Quite a mouthful :) ]
But for your specific question: no, I am not aware of a bit-level flag in base R that allows you to mark observations as 'to be excluded'. I presume most R users would resort to functions like na.omit() et al or use the na.rm=TRUE option you mentioned.
It's good practice to look at the data and infer the type of missing values: are they MCAR (missing completely at random), MAR (missing at random), or MNAR (missing not at random)? Based on these three types, you can study the underlying structure of the missing values and conclude whether imputation is applicable at all (you're lucky if it's not MNAR, because in that case the missing values are considered non-ignorable and are related to some unknown underlying influence, factor, process, variable... whatever).
Chapter 3 of "Interactive and Dynamic Graphics for Data Analysis: With R and GGobi" by Di Cook and Deborah Swayne is a great reference on this topic.
You'll see the norm package in action in that chapter, but the Hmisc package also has data imputation routines. See also Amelia, cat (for categorical missing-data imputation), mi, mitools, VIM, and vmv (for missing data visualisation).
Honestly, I still don't quite understand whether your question is about statistics or about R's missing-data imputation capabilities. I reckon that I've provided good references on the second one, so about the first: you can replace your NAs with a measure of central tendency (mean, median, or similar), which reduces the variability, or with a random value "pulled out" of the observed (recorded) cases, or you can run a regression with the NA-containing variable as the response and the other variables as predictors and then fill the NAs from the model's predictions... it's an elegant way to deal with NAs, but quite often it will not go easy on your CPU (I have a Celeron at 1.1 GHz, so I have to be gentle).
This is an optimization problem... there's no definite answer; you have to decide which method you're sticking with and why. But it's always good practice to look at the data! =)
Be sure to check Cook & Swayne - it's an excellent, skilfully written guide. "Linear Models with R" by Faraway also contains a chapter about missing values.
So there.
Good luck! =)
The function na.exclude() sounds like what you want, although it's only an option for some (important) functions.
In the context of fitting and working with models, R has a family of generic functions for dealing with NAs: na.fail(), na.pass(), na.omit(), and na.exclude(). These are, in turn, arguments for some of R's key modeling functions, such as lm(), glm(), and nls(), as well as functions in the MASS, rpart, and survival packages.
All four generic functions basically act as filters. na.fail() will only pass the data through if there are no NAs, otherwise it fails. na.pass() passes all cases through. na.omit() and na.exclude() will both leave out cases with NAs and pass the other cases through. But na.exclude() has a different attribute that tells functions processing the resulting object to take into account the NAs. You could see this attribute if you did attributes(na.exclude(some_data_frame)). Here's a demonstration of how na.exclude() alters the behavior of predict() in the context of a linear model.
fakedata <- data.frame(x = c(1, 2, 3, 4), y = c(0, 10, NA, 40))
## We can tell the modeling function how to handle the NAs
r_omitted <- lm(x~y, na.action="na.omit", data=fakedata)
r_excluded <- lm(x~y, na.action="na.exclude", data=fakedata)
predict(r_omitted)
# 1 2 4
# 1.115385 1.846154 4.038462
predict(r_excluded)
# 1 2 3 4
# 1.115385 1.846154 NA 4.038462
Your default na.action, by the way, is determined by options("na.action"); it starts out as na.omit(), but you can change it.
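For example (a quick illustration of changing the default):
getOption("na.action")             # usually "na.omit"
options(na.action = "na.exclude")  # make na.exclude() the session default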
