How to use R for multiple select questions?

I am trying to figure out how to analyze multiple-select/multiple-response (i.e., 'select all that apply') questions in a survey I recently conducted.
SPSS has nice capabilities for analyzing online survey data and these types of questions, so I am guessing that R has that and more. Dealing with these survey answers is a bit tricky in Excel. For example: show me a histogram/distribution of everyone who likes strawberry and chocolate ice cream, by age.
How do I structure the data set, and what would be the commands to perform some basic tabulations of frequencies, Pareto charts, and logical AND/OR queries?

I've not found anything that is quite as convenient as the multiple response sets in SPSS. However, you can create groups relatively easily based on common column names, and then use any of the apply() family of functions to iterate through each group. Here's one approach using adply() from the plyr package:
library(plyr)
set.seed(1)
# Fake data with three "like" questions. 0 = not selected, 1 = selected
dat <- data.frame(resp = 1:10,
                  like1 = sample(0:1, 10, TRUE),
                  like2 = sample(0:1, 10, TRUE),
                  like3 = sample(0:1, 10, TRUE))

adply(dat[grepl("like", colnames(dat))], 2, function(x)
  data.frame(Count = as.data.frame(table(x))[2, 2],
             Perc = as.data.frame(prop.table(table(x)))[2, 2]))
#-----
     X1 Count Perc
1 like1     6  0.6
2 like2     5  0.5
3 like3     3  0.3
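For comparison, a minimal base-R sketch of the same tabulation (assuming the dat created above; since the columns are 0/1-coded, colSums() gives the counts and colMeans() the proportions):
# Base-R equivalent of the adply() call above
likes <- dat[grepl("like", colnames(dat))]
data.frame(Count = colSums(likes),
           Perc = colMeans(likes))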

I recently wrote a quick function to deal with these. You can easily modify it to add proportion of total responses too.
set.seed(1)
dat <- data.frame(resp = 1:10,
                  like1 = sample(0:1, 10, TRUE),
                  like2 = sample(0:1, 10, TRUE),
                  like3 = sample(0:1, 10, TRUE))
The function:
multi.freq.table = function(data, sep = "", dropzero = FALSE, clean = TRUE) {
  # Takes Boolean multiple-response data and tabulates it according
  # to the possible combinations of each variable.
  #
  # See: http://stackoverflow.com/q/11348391/1270695
  counts = data.frame(table(data))
  N = ncol(counts)
  counts$Combn = apply(counts[-N] == 1, 1,
                       function(x) paste(names(counts[-N])[x],
                                         collapse = sep))
  if (isTRUE(dropzero)) {
    counts = counts[counts$Freq != 0, ]
  }
  if (isTRUE(clean)) {
    counts = data.frame(Combn = counts$Combn, Freq = counts$Freq)
  }
  counts
}
Apply the function:
multi.freq.table(dat[-1], sep = "-")
#               Combn Freq
# 1                      1
# 2             like1    2
# 3             like2    2
# 4       like1-like2    2
# 5             like3    1
# 6       like1-like3    1
# 7       like2-like3    0
# 8 like1-like2-like3    1
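As noted above, adding a proportion column is a one-line modification (a sketch; the column name Pct is my own choice):
out <- multi.freq.table(dat[-1], sep = "-")
out$Pct <- out$Freq / sum(out$Freq) * 100 # each combination as a % of all cases
out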
Hope this helps! Otherwise, show some examples of desired output or describe some features, and I'll see what can be added.
Update
After looking at the output of SPSS for this online, it seems like the following should do it for you. This is easy enough to wrap into a function if you need to use it a lot.
data.frame(Freq = colSums(dat[-1]),
           Pct.of.Resp = (colSums(dat[-1]) / sum(dat[-1])) * 100,
           Pct.of.Cases = (colSums(dat[-1]) / nrow(dat[-1])) * 100)
#       Freq Pct.of.Resp Pct.of.Cases
# like1    6    42.85714           60
# like2    5    35.71429           50
# like3    3    21.42857           30
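And wrapped into a function, as suggested (a minimal sketch; assumes 0/1-coded columns, and the name multi.summary is my own):
multi.summary <- function(x) {
  data.frame(Freq = colSums(x),
             Pct.of.Resp = colSums(x) / sum(x) * 100,
             Pct.of.Cases = colSums(x) / nrow(x) * 100)
}
multi.summary(dat[-1])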

multfreqtable = function(data, question.prefix) {
  z = length(question.prefix)
  temp = vector("list", z)
  for (i in 1:z) {
    a = grep(question.prefix[i], names(data))
    b = sum(data[, a] != 0)
    d = colSums(data[, a] != 0)
    e = sum(rowSums(data[, a]) != 0)
    f = as.numeric(c(d, b))
    temp[[i]] = data.frame(question = c(sub(question.prefix[i], "", names(d)), "Total"),
                           freq = f,
                           percent_response = (f / b) * 100,
                           percent_cases = (f / e) * 100)
    names(temp)[i] = question.prefix[i]
  }
  temp
}
multfreqtable(data_set, "Banner")
This does a very good job of giving you counts, percentages at the case level, and percentages at the response level. Perfect for analyzing multiple-response questions.
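With the fake dat from the answers above, the equivalent call would be (a usage sketch; output omitted, as the result is a one-element list of data frames):
multfreqtable(dat, "like")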

Related

Efficient Monte Carlo simulation over a grid in R

I am running a Monte Carlo simulation of a multinomial logit. I have a function that generates the data and estimates the model. Additionally, I want to generate different datasets over a grid of values, varying both the number of individuals (n.indiv) and the number of answers by each individual (n.choices).
So far I have managed to solve it, but at some point I ended up with a nested for-loop structure over a grid of the possible values for the number of individuals (n.indiv_list) and the number of answers by each individual (n.choices_list). I am worried about the efficiency of the last bit of code, with the double for-loop running over the combinations of possible values. There is probably a vectorized way to do it that I am missing (or maybe not?).
Finally, and this is mostly a matter of style, I ended up with multiple objects containing the models from the combinations of the grid search, with informative names. It would also be great if I could collapse all of them into a list, but with the current structure I am not sure how to do that. Thank you in advance!
1) Function that generates data and estimates the model.
library(dplyr)
library(VGAM)
library(mlogit)

# Function that generates the data and estimates the model
mlogit_sim_data <- function(...) {
  # generate (n.alter) x (n.choices) rows
  df <- data.frame(id = rep(seq(1, n.choices), n.alter))
  # id per individual
  df <- df %>%
    group_by(id) %>%
    mutate(altern = sequence(n())) %>%
    arrange(id)
  # repeated scheme for each individual + id_ind
  df <- cbind(df[rep(1:nrow(df), n.indiv), ], id_ind = rep(1:n.indiv, each = nrow(df)))
  ## creating attributes
  df <- df %>%
    mutate(x1 = rlnorm(n.indiv * n.alter),
           x2 = rlnorm(n.indiv * n.alter)) %>%
    group_by(altern) %>%
    mutate(id_choice = sequence(n())) %>%
    group_by(id_ind) %>%
    mutate(z1 = rpois(1, lambda = 25),
           z2 = rlnorm(1, meanlog = 5, sdlog = 0.5),
           z3 = ifelse(runif(1, min = 0, max = 1) > 0.5, 1, 0))
  # observed utility
  df$V1 <- with(df, b1 * x1 + b2 * x2)
  #### Generate response variable ####
  fn_choice_generator <- function(V) {
    U <- V + rgumbel(length(V), 0, 1)
    1L * (U == max(U))
  }
  # use fn_choice_generator to generate the 'choice' columns
  df <- df %>%
    group_by(id_choice) %>%
    mutate(across(starts_with("V"),
                  fn_choice_generator, .names = "choice_{.col}")) %>% # generating choice(s)
    select(-starts_with("V")) %>% # drop V variables
    select(-c(id, id_ind))
  tryCatch({
    model_result <- mlogit(choice_V1 ~ 0 + x1 + x2 | 1,
                           data = df,
                           idx = c("id_choice", "altern"))
    return(model_result)
  },
  error = function(e) {
    return(NA)
  })
}
2) Grid search over possible combinations of the data
# Lists with the values that vary in the simulation
# number of individuals
n.indiv_list <- c(1, 15, 100, 500)
# number of choice situations
n.choices_list <- c(1, 2, 4, 8, 10)

# Values that remain constant across simulations
# number of alternatives
n.alter <- 3
## real parameters
b1 <- 1
b2 <- 2
# number of reps
nreps <- 10
# set seed
set.seed(777)

# iteration over the different values in the simulation
for (i in n.indiv_list) {
  for (j in n.choices_list) {
    n.indiv <- i
    n.choices <- j
    assign(paste0("m_ind_", i, "_choices_", j), lapply(X = 1:nreps, FUN = mlogit_sim_data))
  }
}
You can vectorize using the map2 function of the purrr package:
library(tidyverse)
n.indiv_list <- c(1, 15, 100, 500)
# number of choice situations
n.choices_list <- c(1, 2, 4, 8, 10)
l1 <- length(n.indiv_list)
l2 <- length(n.choices_list)
v1 <- rep(n.indiv_list, each = l2)
v2 <- rep(n.choices_list, l1) # v1, v2 generate all pairs
> v1
 [1]   1   1   1   1   1  15  15  15  15  15 100 100 100 100 100 500 500 500 500 500
> v2
 [1]  1  2  4  8 10  1  2  4  8 10  1  2  4  8 10  1  2  4  8 10
result <- map2(v1, v2, function(i, j) {
  n.indiv <<- i   # mlogit_sim_data reads these from the global environment
  n.choices <<- j
  lapply(X = 1:nreps, FUN = mlogit_sim_data)
})
names(result) <- paste0("m_ind_", v1, "_choices_", v2)
result will be a named list of your function outputs, so the informatively named objects are collapsed into a single list, as you wanted.
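Equivalently, a base-R sketch of the same idea (no purrr; expand.grid() builds all parameter combinations and Map() iterates over them):
grid <- expand.grid(n.indiv = n.indiv_list, n.choices = n.choices_list)
result <- Map(function(i, j) {
  n.indiv <<- i   # globals read by mlogit_sim_data
  n.choices <<- j
  lapply(1:nreps, mlogit_sim_data)
}, grid$n.indiv, grid$n.choices)
names(result) <- paste0("m_ind_", grid$n.indiv, "_choices_", grid$n.choices)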

Simple approach to assigning clusters for new data after k-modes clustering

I am using a k-modes model (mymodel), which was fitted to a data frame mydf1. I am looking to assign the nearest cluster of mymodel to each row of a new data frame mydf2.
This is similar to this question, just with k-modes instead of k-means. The predict function of the flexclust package only works with numeric data, not categorical data.
A short example:
require(klaR)
set.seed(100)
mydf1 <- data.frame(var1 = as.character(sample(1:20, 50, replace = T)),
                    var2 = as.character(sample(1:20, 50, replace = T)),
                    var3 = as.character(sample(1:20, 50, replace = T)))
mydf2 <- data.frame(var1 = as.character(sample(1:20, 50, replace = T)),
                    var2 = as.character(sample(1:20, 50, replace = T)),
                    var3 = as.character(sample(1:20, 50, replace = T)))
mymodel <- klaR::kmodes(mydf1, modes = 5)
# Get mode centers
mycenters <- mymodel$modes
# Now I would want to predict which of the 5 clusters each row
# of mydf2 would be closest to, e.g.:
# cluster2 <- predict(mycenters, mydf2)
Is there already a function which can predict with a k-modes model or what would be the simplest way to do that? Thanks!
We can use the distance measure that is used in the kmodes algorithm to assign each new row to its nearest cluster.
## From klaR::kmodes
distance <- function(mode, obj, weights) {
  if (is.null(weights))
    return(sum(mode != obj))
  obj <- as.character(obj)
  mode <- as.character(mode)
  different <- which(mode != obj)
  n_mode <- n_obj <- numeric(length(different))
  for (i in seq(along = different)) {
    weight <- weights[[different[i]]]
    names <- names(weight)
    n_mode[i] <- weight[which(names == mode[different[i]])]
    n_obj[i] <- weight[which(names == obj[different[i]])]
  }
  dist <- sum((n_mode + n_obj) / (n_mode * n_obj))
  return(dist)
}

AssignCluster <- function(df, kmodesObj) {
  apply(
    apply(df, 1, function(obj) {
      apply(kmodesObj$modes, 1, distance, obj, NULL)
    }),
    2, which.min)
}

AssignCluster(mydf2, mymodel)
[1] 4 3 4 1 1 1 2 2 1 1 5 1 1 3 2 2 1 3 3 1 1 1 1 1 3 1 1 1 3 1 1 1 1 2 1 5 1 3 5 1 1 4 1 1 2 1 1 1 1 1
Please note that this will likely produce many rows that are equally far away from multiple clusters, in which case which.min simply picks the lowest-numbered cluster.
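As a quick sanity check (a sketch, assuming the model above was fitted on mydf1): reassigning the training data should largely agree with the cluster vector that kmodes() itself returned, up to ties.
# Proportion of training rows assigned back to their original cluster
mean(AssignCluster(mydf1, mymodel) == mymodel$cluster)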

Find combinations of genome without any package

I want to find how many combinations of letters occur in a genome sequence. I mean, for two-letter combinations, AA, AT, AG, AC, ...: 16 combinations like that; or for three-letter combinations, ATG, ACG, ...: 64 combinations like that. I know how to do this with a package, and I will write that down here, but I want to create my own code to perform this.
The seqinr package is perfect for this job. This is the code that I used:
install.packages('seqinr')
library(seqinr)
m = read.fasta(file='sequence.fasta')
mseq = m[[1]]
count(mseq, 2) # counts of each 2-letter word found in the sequence
count(mseq, 3) # counts of each 3-letter word found in the sequence
This is a slow way to do it; I am certain it is faster in the Bioconductor packages.
# some practice data
mseq = paste(sample(c("A", "C", "G", "T"), 1000, rep = T), collapse = "")

# define a function called count
count = function(mseq, n) {
  # split the sequence into every possible (overlapping) subsequence of length n
  x = sapply(1:(nchar(mseq) - n + 1), function(i) substr(mseq, i, i + n - 1))
  # how many unique subsequences of length n are there?
  length(table(x))
}
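For example, with the practice data above (no seed was set, so exact values will vary):
count(mseq, 2) # number of distinct 2-letter words observed
count(mseq, 3) # number of distinct 3-letter words observed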
Actually just checked and this is pretty much how they did it:
function(seq, wordsize, start = 0, by = 1, freq = FALSE, alphabet = s2c("acgt"),
         frame = start)
{
  if (!missing(frame))
    start = frame
  istarts <- seq(from = 1 + start, to = length(seq), by = by)
  oligos <- seq[istarts]
  oligos.levels <- levels(as.factor(words(wordsize, alphabet = alphabet)))
  if (wordsize >= 2) {
    for (i in 2:wordsize) {
      oligos <- paste(oligos, seq[istarts + i - 1], sep = "")
    }
  }
  counts <- table(factor(oligos, levels = oligos.levels))
  if (freq == TRUE)
    counts <- counts / sum(counts)
  return(counts)
}
If you want to find the code for a function, use getAnywhere():
getAnywhere(count)
The simple thing to do is just something like this:
# Generate a test sequence
set.seed(1234)
testSeq <- paste(sample(LETTERS[1:3], 100, replace = T), collapse = "")
# Split string into chunks of size 2 and then count occurrences
testBigram <- substring(testSeq, seq(1, nchar(testSeq), 2), seq(2, nchar(testSeq), 2))
table(testBigram)
AA AB AC BA BB BC CA CB CC
10 10 14 3 3 2 2 5 1
Here is a way using a "function factory" (https://adv-r.hadley.nz/function-factories.html).
The 2-element and 3-element combinations are n-grams of size 2 and 3. So we make this n-gram function factory.
# Generate a function that creates a function
ngram <- function(size) {
  function(myvector) {
    substring(myvector, seq(1, nchar(myvector), size), seq(size, nchar(myvector), size))
  }
}
# Assign the functions names (optional)
bigram <- ngram(2)
trigram <- ngram(3)
# 2 element combinations
table(bigram(testSeq))
AA AB AC BA BB BC CA CB CC
10 10 14 3 3 2 2 5 1
# count of 2 element combinations
length(unique(bigram(testSeq)))
[1] 9
# counting function
count <- function(mseq, n) length(unique(ngram(n)(mseq)))
count(testSeq, 2)
[1] 9
# and if we wanted to do this with 3-element combinations
table(trigram(testSeq))
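Note that the factory above splits the string into non-overlapping chunks, whereas seqinr::count uses a sliding window by default (by = 1). A sketch of an overlapping variant (the name ngram_overlap is my own):
ngram_overlap <- function(size) {
  function(s) sapply(seq_len(nchar(s) - size + 1),
                     function(i) substr(s, i, i + size - 1))
}
# counts of all overlapping 2-letter words
table(ngram_overlap(2)(testSeq))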

Duplicating character strings in R

I am simulating some random DNA strings (sequences) in R. These sequences are composed of the letters A, C, G and T.
In a dataset of such sequences, some may be identical (duplicates).
Suppose I have 100 such sequences, which can be grouped in 5 unique sets.
Working code that I now have (shown below) will output something like
Unique groups: 1 2 3 4 5
Number of Sequences: 96 1 1 1 1
However, I would like the distribution to be something like
Unique groups: 1 2 3 4 5
Number of Sequences: 56 24 10 5 5
This can be accomplished through randomly allocating some sequences from group 1 (96 sequences) to all other groups containing only a single sequence.
My sequences are stored in a variable called 'res'.
The below code doesn't quite do what I would like.
res <- res[sample(unique(nrow(res)), size = nrow(res), replace = TRUE), ]
In the above code, 'size' is the number of DNA sequences in the dataset.
How can the above code be altered to accomplish what I am needing?
Note: one does not know the number of unique sets that will be generated a priori, so using the 'prob' argument in sample will not suffice.
Reproducible example:
library(pegas)
num.seqs <- 100    # number of DNA sequences to generate
length.seqs <- 500 # length of DNA sequences
mu.rate <- 1e-4    # mutation rate
set.seed(1234)     # for reproducibility

nucl <- as.DNAbin(c('a', 'c', 'g', 't')) # DNA alphabet
res <- sample(nucl, size = length.seqs, replace = TRUE, prob = rep(0.25, 4)) # generate a random DNA sequence

mu.set <- list('a' = as.DNAbin('c'),
               'a' = as.DNAbin('g'),
               'a' = as.DNAbin('t'),
               'c' = as.DNAbin('a'),
               'c' = as.DNAbin('g'),
               'c' = as.DNAbin('t'),
               'g' = as.DNAbin('a'),
               'g' = as.DNAbin('c'),
               'g' = as.DNAbin('t'),
               't' = as.DNAbin('a'),
               't' = as.DNAbin('c'),
               't' = as.DNAbin('g'))

muts <- function(res) {
  unlist(mu.set[as.character(res)])
}

duplicate.seq <- function(res) { # duplicate sequence
  num.muts <- rbinom(n = 1, size = length.seqs, prob = mu.rate) # add random mutations
  if (num.muts > 0) {
    idx <- sample(length.seqs, size = num.muts, replace = FALSE)
    res[idx] <- muts(res[idx])
  }
  res
}

res <- matrix(replicate(num.seqs, duplicate.seq(res)), byrow = TRUE, nrow = num.seqs) # generate num.seqs sequences
class(res) <- "DNAbin"

h <- sort(haplotype(res), decreasing = TRUE, what = "frequencies") # distribution of unique groups
rownames(h) <- 1:nrow(h)
h # print distribution
A simpler example (sans R packages):
num.seqs <- 10
length.seqs <- 10
lets <- letters[c(1, 3, 7, 20)] # DNA alphabet
lets
res <- t(replicate(num.seqs, sample(lets, length.seqs, replace = TRUE))) # generate sequences
res <- res[sample(nrow(res), size = nrow(res), replace = TRUE), ] # duplicate some sequences
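One possible direction (my own sketch, not from the thread): sample the row indices with unequal, decreasing weights, so a few sequences dominate the resampled data set and the group sizes become skewed rather than 96/1/1/1/1.
# Decreasing weights: earlier rows are picked much more often. (Assumption:
# any decreasing weight scheme would do; squaring just sharpens the skew.)
w <- rev(seq_len(nrow(res)))^2
res <- res[sample(nrow(res), size = nrow(res), replace = TRUE, prob = w / sum(w)), ]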

Creating mock data with natural decreasing numbers

I want to create random mock data that looks like this:
ID  Amount
1   20
1   14
1   9
1   3
2   11
2   5
2   2
Each ID starts from a random number, but the second number within the same ID should be smaller than the first one, and the third number has to be smaller than the second one. The maximum starting number should be 20.
You can just create the data first and then sort it as you need, using the tidyverse:
library(dplyr)
set.seed(0)
df <- data.frame(id = rep(1:3, 10), amt = sample(1:20, 30, replace = TRUE))
df %>%
  group_by(id) %>%
  arrange(id, desc(amt))
This is a tricky one. If you want the Amount column to contain truly random values, you can use a recursive call that samples repeatedly:
## Recursively sampling from a uniform distribution
recursive.sample <- function(start, end, length, results = NA, counter = 0) {
  ## To enter the recursion, the counter must be smaller than the length out
  ## and the last result must be greater than the start value (except the first time)
  if (counter < length && ifelse(counter != 0, results[counter] > start, TRUE)) {
    ## Increment the counter
    counter <- counter + 1
    ## Sample between start and the last result, or between start and end the first time
    results[counter] <- ifelse(counter != 1, sample(start:results[counter - 1], 1), sample(start:end, 1))
    ## Recursive call
    return(recursive.sample(start = start, end = end, length = length, results = results, counter = counter))
  } else {
    ## Exit the recursion
    return(results)
  }
}
## Example
set.seed(0)
recursive.sample(start = 1, end = 20, length = 3, results = NA, counter = 0)
#[1] 18 5 2
Alternatively (and way easier) you can use sort(sample()):
set.seed(0)
sort(sample(1:20, 3), decreasing = TRUE)
#[1] 18 7 6
Note that the results differ due to the lower probability of sampling higher values in the recursive function.
You can then easily create your table with your chosen function as follows:
set.seed(123)
## The ID column
ID <- c(rep(1, 4), rep(2,3))
## The Amount column
Amount <- c(recursive.sample(1, 20, 4, NA, 0), recursive.sample(1, 11, 3, NA, 0))
## The table
cbind(ID, Amount)
# ID Amount
#[1,] 1 18
#[2,] 1 5
#[3,] 1 2
#[4,] 1 2
#[5,] 2 10
#[6,] 2 3
#[7,] 2 3
Or, again, with the simple sort(sample()) function for a higher probability of picking larger numbers.
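For instance, the same table built with the sort(sample()) variant (a sketch; the values will differ from the recursive version):
set.seed(123)
data.frame(ID = c(rep(1, 4), rep(2, 3)),
           Amount = c(sort(sample(1:20, 4), decreasing = TRUE),
                      sort(sample(1:11, 3), decreasing = TRUE)))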
Two methods: one using dplyr and one using only base R functions. These are slightly different from the two previous solutions.
I used a sorted ID column, but this is not necessary.
Method 1
rm(list = ls())
library(dplyr)
set.seed(1)
df <- data.frame(ID = rep(1:3, each = 5))
df %>% group_by(ID) %>%
  mutate(Amount = sort(sample(1:20, n(), replace = T), decreasing = TRUE))
Method 2
rm(list = ls())
set.seed(1)
df <- data.frame(ID = rep(1:3, each = 5))
df$Amount <- NA
uniq_ID <- unique(df$ID)
index_lst <- lapply(uniq_ID, function(x) which(df$ID == x))
res <- lapply(index_lst, function(x) sort(sample(1:20, length(x)),
                                          decreasing = TRUE))
df$Amount[unlist(index_lst)] <- unlist(res)
Method 2.5
This is more convoluted than the 2nd method.
rm(list = ls())
set.seed(1)
df <- data.frame(ID = rep(1:3, each = 5))
df$Amount <- NA
tab <- as.data.frame(table(df$ID))
lapply(1:nrow(tab), function(x) df$Amount[which(df$ID == tab$Var1[x])] <<-
         sort(sample(1:20, tab$Freq[x]), decreasing = TRUE))
