Creating a summary table with comparisons of two groups in R

I frequently want to create summary tables for studies where I compare several variables between two groups, listing values for each variable along with the difference in that variable between the two groups.
For example, say I want to compare age groups (young and old) and the proportion of males between two groups, A and B. I’d like to end up with a table with a row for each variable (age group, proportion of males) and, repeated for each group, columns for the numerator, denominator, and rate, followed by the difference between the two rates, its 95% CI, and a p-value from a chi-squared test.
I’m looking for a general approach to this type of table.
Let’s say I have the following table:
library(dplyr)

# Toy data: 10 subjects with a random age group and gender
# (no seed is set, so the sampled values vary between runs)
AgeGroup <- sample(c("Young", "Old"), 10, replace = TRUE)
Gender <- sample(c("Male", "Female"), 10, replace = TRUE)
df <- data.frame(AgeGroup, Gender)
df
I can create a summary table without the comparison easily:
df1 <- df %>%
  group_by(AgeGroup) %>%
  summarise(num_M = sum(Gender == "Male"),  # male count
            den_M = n(),                    # group size
            prop_M = num_M / den_M)         # proportion male
df1
But I can’t figure out how to create additional columns comparing the different rows of the grouped data. Let’s say I want to run a chi-squared test on the proportion of males in each AgeGroup and add the p-value to the summary table above.
It would look like this (the numbers, obviously, are examples), with Y = Young and O = Old:
Any gentle nudges in the right direction would be greatly appreciated.
Thanks!

I like the finalfit package for summary tables. If you need to add custom summary functions, it might not be flexible enough, but its default stats cover everything you've asked for in your example, e.g. numbers in each group, proportions, and a chi-squared test. If you have continuous variables it will calculate means and SDs in each group.
library(finalfit)
finalfit::summary_factorlist(
  df,
  dependent = "Gender",
  explanatory = "AgeGroup",
  total_col = TRUE,
  p = TRUE
)
Output:
     label levels   Female      Male Total     p
1 AgeGroup    Old  0 (0.0) 6 (100.0)     6 0.197
2           Young 1 (25.0)  3 (75.0)     4
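If you'd rather stay in plain dplyr, here is a minimal sketch of bolting a chi-squared p-value onto the original summary (using the toy df from the question; with counts this small, chisq.test() will warn, and it will fail if a level is missing entirely):
library(dplyr)

# Overall chi-squared test of Gender by AgeGroup
p <- chisq.test(table(df$AgeGroup, df$Gender))$p.value

df1 <- df %>%
  group_by(AgeGroup) %>%
  summarise(num_M = sum(Gender == "Male"),
            den_M = n(),
            prop_M = num_M / den_M) %>%
  mutate(p_chisq = p)  # same test result repeated on each row
df1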

Related

Weighted dataset after IPTW using weightit?

I'm trying to get a weighted dataset after IPTW using weightit. Unfortunately, I'm not even sure where to start. Any help would be appreciated.
library(WeightIt)
library(cobalt)
library(survey)
W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATT", method = "ps")
bal.tab(W.out)
# pre-weighting dataset
lalonde
# post-weighting dataset??
The weightit() function produces balance weights. In your case, setting method = "ps" produces propensity scores that are then transformed into weights. More details on how those weights are computed can be found with ?method_ps. You can extract the weights from your output and store them as a column in a data.frame via data.frame(w = W.out[["weights"]]). The output is a vector of weights whose length equals the number of non-NA rows in your data (lalonde).
What you actually mean by "weighted dataset" is ambiguous for two reasons. First, analyses that use those weights typically do not produce a new dataset; rather, they weight each row's contribution to the likelihood. That is substantively different from analyzing a dataset in which each row's values have been multiplied by its weight, and the two will produce different results for many models. Second, you are asking how to weight a dataset that has character vectors in its columns. For example, lalonde$race is a character vector, and multiplying 5 * "black" doesn't make much sense.
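As a sketch of that first point, the usual workflow keeps the data as-is and passes the weights to the estimation routine, for example via the survey package you already load above (the outcome model here is illustrative, not from the question):
library(survey)

# Attach the estimated weights to the original data as a column
lalonde_w <- cbind(lalonde, w = W.out[["weights"]])

# Build a design object so each row's contribution is weighted
design <- svydesign(ids = ~1, weights = ~w, data = lalonde_w)
summary(svyglm(re78 ~ treat, design = design))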
If you are indeed intent on multiplying every value in every row of your data by the row's respective weight, you will need to convert your race variable to numeric indicators, remove it from your data, then you can apply sweep():
library(dplyr)
df <- lalonde %>%
  mutate(black = if_else(race == "black", 1, 0),
         hispan = if_else(race == "hispan", 1, 0),
         white = if_else(race == "white", 1, 0)) %>%
  select(-race)

# MARGIN = 1: multiply each row by its corresponding weight
sweep(df, MARGIN = 1, W.out[["weights"]], `*`)

T-test after running sentiment analysis

I am trying to run a t-test after doing the sentiment analysis.
I did the sentiment analysis, and grouped my data into two parts:
library(textdata)
library(tidytext)  # unnest_tokens() and get_sentiments()
library(dplyr)

afinn_dictionary <- get_sentiments("afinn")
news_tokenized <- full_data %>%
  unnest_tokens(word, full_article, to_lower = TRUE)
head(news_tokenized$word, 10)
full_data$full_article[2]
word_counts_senti <- news_tokenized %>%
  inner_join(afinn_dictionary)
head(word_counts_senti)
news_senti <- word_counts_senti %>%
  group_by(partisan_media) %>% # group by partisan media
  summarize(sentiment = sum(value))
head(news_senti)
# As a result, I got: c(1): -13194, c(2): -12321. Both groups were negative,
# but group 1's stories tend to use more negative words (greater negative sentiment).
table(full_data$partisan_media) #there are 1866 articles in group 1 and 2174 articles in group 2
I am trying to see whether the difference between groups 1 and 2 (two groups of partisan media) is statistically significant by running a t-test.
I'm using:
g1_senti = rnorm(1866, mean = -7.07074, sd = ) #group1
g2_senti = rnorm(2174, mean = -5.667433, sd = ) #group2
t.test(g1_senti, g2_senti)
The means come from dividing each group's total sentiment score by its number of articles.
But I wasn't sure what should be entered for the sd. Does anyone have an idea about this?
I am adding my data set here: https://www.mediafire.com/file/uei2e3tajvi7wao/eight.csv/file
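One possible direction (a sketch, not from the original thread): rather than simulating scores with rnorm(), compute one sentiment score per article and t-test those vectors directly. This assumes word_counts_senti retains an article identifier column, hypothetically named article_id:
library(dplyr)

# One sentiment score per article, keeping the group label
article_senti <- word_counts_senti %>%
  group_by(partisan_media, article_id) %>%  # article_id is hypothetical
  summarize(sentiment = sum(value), .groups = "drop")

# Two-sample t-test across the two partisan_media groups
t.test(sentiment ~ partisan_media, data = article_senti)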

Arranging tables in R

I am working with the twinData dataset in R. I have two questions related to data wrangling.
First, how would I go about listing only the combinations of cohort and zygosity where the twins’ heights are significantly similar?
My prior code was to create a new variable indicating whether the correlation coefficient between ht1 and ht2 in a particular subgroup is greater than 0.5, with 95 percent confidence.
library(OpenMx)  # provides the twinData dataset
library(dplyr)
library(broom)   # for tidy()

sig_twin_cor <- twinData %>%
  group_by(cohort, zygosity) %>%
  do(tidy(cor.test(~ ht1 + ht2, alternative = "greater", data = .))) %>%
  arrange(desc(estimate)) %>%
  mutate(Greater0.5 = ifelse(estimate > 0.5, "Yes", "No"))
sig_twin_cor
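Listing only the qualifying combinations is then a filter. Note that flagging estimate > 0.5 is not quite the same as "greater than 0.5 with 95 percent confidence"; for the latter, a sketch using the one-sided confidence bound that tidy() returns:
# Keep combinations whose 95% lower confidence bound exceeds 0.5
sig_twin_cor %>% filter(conf.low > 0.5)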
Second, I need to transform the dataset twinData into a narrow form using gather(). Can someone show me how to do this?
Thanks!
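For the second part, a minimal sketch with tidyr::gather(), keying on which twin each height belongs to (ht1 and ht2 as in the code above):
library(tidyr)

# Narrow form: one row per twin height instead of ht1/ht2 side by side
twin_long <- twinData %>%
  gather(key = "twin", value = "height", ht1, ht2)
head(twin_long)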

I need to find all predictors (p-value < 0.05) from my dataset using loops. Is there any way to do it?

I am new to R and I am using the glm() function to fit a logistic model. I have 5 columns. I need to find all possible predictors using a loop, based on their p-values (less than 0.05).
My dataset has 40,000 entries containing numerical and categorical variables, and it looks more or less like this:
"Age" "Sex" "Occupation" "Education" "Income"
50 Male Farmer High School False
30 Female Maid High School False
25 Male Engineer Graduate True
The target variable "Income" denotes whether the person earns more or less than 30K: True means they earn more than 30K, and vice versa. I would like to find the predictor variables that can be used to predict the target using loops. Also, can I find the best 3 predictors based on their p-values?
Thanks in Advance!
If I understood your question correctly, you are looking for a way to test univariable models given your data frame (I am actually in doubt whether you want to test every combination of these variables, including interactions).
My suggestion is to use the purrr::map() function and create a list for every column. Check the following example based on your information:
library(purrr)

## Sample data
df <- data.frame(
  Age = rnorm(n = 40000,
              mean = mean(c(50, 30, 25)),
              sd = sd(c(50, 30, 25))),
  Occupation = sample(x = c("Farmer", "Maid", "Engineer"),
                      size = 40000,
                      replace = TRUE),
  Education = sample(x = c("High School", "Graduate", "UnderGraduate"),
                     size = 40000,
                     replace = TRUE),
  Income = as.logical(rbinom(40000, 1, 0.5))
)
## Split the predictor columns into a list of one-column data frames
list_df <- Map(cbind, split.default(df[-4], names(df)[-4]))
## Append the target (Income) to each element
list_df <- lapply(list_df, cbind, "target" = df[4])

## Fit one univariable logistic model per predictor
list_models <- map(.x = list_df,
                   .f = ~glm(Income ~ ., data = .x, family = binomial))
You can inspect each model using list_models[[i]].
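If you do want the p-values themselves, here is a sketch of pulling them from each fit ("Pr(>|z|)" is the column name summary.glm() reports for a binomial model):
## Coefficient p-values per model, intercept dropped
p_vals <- map(list_models, ~ coef(summary(.x))[-1, "Pr(>|z|)"])

## Predictors where any coefficient has p < 0.05
names(p_vals)[map_lgl(p_vals, ~ any(.x < 0.05))]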
Now, addressing the second part of your question concerning p-values: given that every project is unique, and so are its metrics, I suggest you double-check your usage of p-values. Granted, they are important, but they give you a probability of acceptance under a specific statistical test and threshold, which depends on context. They are a fundamental tool of statistical quality control and decision-making (not only t-tests, but F-tests and so forth). But for ranking predictors? I would say that is a little odd. Just saying :)

Replacing outliers in whole data set (based on Tukey and each level of a categorical variable) in R

How can I detect the outliers in a whole data set (all continuous columns) based on a categorical variable, and replace them with NA? I want to use the Tukey technique, but applied within each level of a categorical variable. For example, replace the outliers of mtcars[, -c(8, 9)] with NA within each level of mtcars$am.
Or: how can I modify this code to work for all variables in each level of am?
lapply(mtcars, function(x) sort(boxplot.stats(x)$out))
EDIT: outliers are now 1.5*IQR, as specified in comment.
This replaces the outliers in the qsec column, per group in the am column, with NAs. It does so by first constructing a data frame called limits, which contains the lower and upper bounds per am group. That data frame is then joined with the original data frame, and the outliers are filtered out.
library(dplyr)
# Tukey fences per am group: Q1 - 1.5*IQR and Q3 + 1.5*IQR
limits <- data.frame(am = unique(mtcars$am))
limits$lower <- sapply(limits$am, function(x) quantile(mtcars$qsec[mtcars$am == x], 0.25) - 1.5 * IQR(mtcars$qsec[mtcars$am == x]))
limits$upper <- sapply(limits$am, function(x) quantile(mtcars$qsec[mtcars$am == x], 0.75) + 1.5 * IQR(mtcars$qsec[mtcars$am == x]))

# Join the bounds, blank out qsec values outside them, drop helper columns
df <- mtcars %>% left_join(limits)
df$qsec <- ifelse(df$qsec < df$lower | df$qsec > df$upper, NA, df$qsec)
df <- df %>% select(-upper, -lower)
The 1.5 multiplier can be adjusted to control how extreme a value must be before it counts as an outlier.
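To extend this to every numeric column within each level of am, as the question asks, a sketch using dplyr::across() (requires dplyr >= 1.0; grouping columns are excluded from across() automatically):
library(dplyr)

# Tukey rule as a reusable helper: NA out values beyond the k * IQR fences
replace_tukey <- function(x, k = 1.5) {
  q <- quantile(x, c(0.25, 0.75), na.rm = TRUE)
  ifelse(x < q[1] - k * (q[2] - q[1]) | x > q[2] + k * (q[2] - q[1]),
         NA_real_, x)
}

mtcars %>%
  group_by(am) %>%
  mutate(across(where(is.numeric), replace_tukey)) %>%
  ungroup()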
