Here is the outline of my data. There are 500 students. Each student has a final grade for math, physics, chemistry, music, and history. The final grade for each subject ranges from 0 to 100. For each subject, if a student's grade is below a cutoff, the student fails that subject. However, the teacher of each subject may change a few students' (less than 5%) assessments from fail to pass because of their good performance in class activities. If a student fails any subject, the overall assessment is supposed to be fail. If a student passes all 5 subjects, the overall assessment is pass.
Now suppose the cutoffs for math, physics, chemistry, music, and history are 45, 45, 45, 60, and 60, respectively. Then we will have the demo table below. The second student passed history because the history teacher was satisfied with his class performance.
ID  math  physics  chemistry  music  history  overall_assessment
 1    95       96         70     65       75  pass
 2    46       61         72     86       59  pass
 3    55       32         21     95       96  fail
Now my question is: given a table like the one above, how can I work out the cutoff for each subject? I have the data below in R.
set.seed(1)
math <- sample(30:100, 500, replace=T)
physics <- sample(30:100, 500, replace=T)
chemistry <- sample(30:100, 500, replace=T)
music <- sample(30:100, 500, replace=T)
history <- sample(60:100, 500, replace=T)
grade <- as.data.frame(cbind(math, physics, chemistry, music, history))
grade$assess <- ifelse(grade$math > 45 & grade$physics > 55 & grade$chemistry > 60 & grade$music > 50 & grade$history > 80, "pass", "fail")
grade$ID <- seq(1,500,1)
change_grade <- sample(1:500, 25, replace=F)
grade$assess[grade$ID %in% change_grade] <- "pass"
Because there is randomness in who is selected to pass for good class activity, it is not possible to recover the exact cutoff values. But we can find upper and lower bounds for each cutoff. Note that I slightly adjusted the data generation, but you can change it and confirm this method gives correct bounds no matter what the true cutoffs are.
library(tidyverse)
n <- 500
prop <- 0.05
set.seed(1)
math <- sample(30:100, n, replace = T)
physics <- sample(30:100, n, replace = T)
chemistry <- sample(30:100, n, replace = T)
music <- sample(30:100, n, replace = T)
history <- sample(30:100, n, replace = T)
grade <-
  as.data.frame(cbind(math, physics, chemistry, music, history))
grade$assess <- ifelse(
  grade$math >= 45 &
    grade$physics >= 45 &
    grade$chemistry >= 45 &
    grade$music >= 60 &
    grade$history >= 60,
  "pass", "fail")
grade$ID <- seq(1, n, 1)
change_grade <- sample(1:n, n * prop, replace = F)
grade$assess[grade$ID %in% change_grade] <- "pass"
grade$assess <- factor(grade$assess)
To find the upper bound for a subject, we consider all individuals who passed the overall assessment and look at their grades in that subject. We know that at most 25 of them (n * prop) were granted an exception for that subject, so the 26th-lowest grade among them is an upper bound for the cutoff score.
# upper bound
get_upper_bound <- function(var, n, prop) {
  var <- sort(var)
  var[ceiling(n * prop) + 1]
}
upper_bound <- grade %>%
  subset(assess == "pass") %>%
  summarise(
    math = get_upper_bound(math, n = n, prop = prop),
    physics = get_upper_bound(physics, n = n, prop = prop),
    chemistry = get_upper_bound(chemistry, n = n, prop = prop),
    music = get_upper_bound(music, n = n, prop = prop),
    history = get_upper_bound(history, n = n, prop = prop))
upper_bound
#> math physics chemistry music history
#> 1 57 53 58 68 67
Having found the upper bounds, we can look at the lower bounds. Consider all individuals who passed math, physics, chemistry, and music by achieving at least the upper bound in those subjects, but who nevertheless failed the assessment. Then we know that they must have failed history. The maximum history grade among those students, plus one, gives us a lower bound for the history cutoff. We can apply this to each subject in turn.
This code is inelegant, but I believe it works.
# lower bound
get_lower_bound <- function(varnum, data, upper_bound) {
  varnames <- c("math", "physics", "chemistry", "music", "history")
  vars_using <- (1:5)[-varnum]
  # students who reached the upper bound in every other subject ...
  indexes <- rep(TRUE, nrow(data))
  for (i in vars_using) {
    indexes <-
      indexes & (data[, varnames[i]] >= as.numeric(upper_bound[i]))
  }
  # ... but still failed, so they must have failed this subject
  indexes <- indexes & (data$assess == "fail")
  ifelse(is.finite(max(data[indexes, varnum])),
         max(data[indexes, varnum]) + 1,
         min(data[, varnum]))
}
lower_bound <- data.frame(
  "math" = get_lower_bound(1, grade, upper_bound),
  "physics" = get_lower_bound(2, grade, upper_bound),
  "chemistry" = get_lower_bound(3, grade, upper_bound),
  "music" = get_lower_bound(4, grade, upper_bound),
  "history" = get_lower_bound(5, grade, upper_bound))
lower_bound
#> math physics chemistry music history
#> 1 45 44 45 58 60
Then the final bounds for the cutoff scores are:
rbind("lower" = lower_bound,
"upper" = upper_bound)
#> math physics chemistry music history
#> lower 45 44 45 58 60
#> upper 57 53 58 68 67
Created on 2022-08-30 by the reprex package (v2.0.1)
Note that by increasing n and decreasing prop, the lower and upper bounds eventually become equal, and we have found the cutoff score exactly.
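For instance, a quick check (a sketch; it reuses get_upper_bound() and get_lower_bound() from above and overwrites the objects created so far) is to regenerate the data with a much larger n and a much smaller prop, after which the two bounds should meet at the true cutoffs:
n <- 10000
prop <- 0.001
set.seed(1)
grade <- data.frame(math      = sample(30:100, n, replace = TRUE),
                    physics   = sample(30:100, n, replace = TRUE),
                    chemistry = sample(30:100, n, replace = TRUE),
                    music     = sample(30:100, n, replace = TRUE),
                    history   = sample(30:100, n, replace = TRUE))
grade$assess <- ifelse(grade$math >= 45 & grade$physics >= 45 & grade$chemistry >= 45 &
                         grade$music >= 60 & grade$history >= 60, "pass", "fail")
grade$ID <- seq_len(n)
change_grade <- sample(1:n, n * prop, replace = FALSE)
grade$assess[grade$ID %in% change_grade] <- "pass"
upper_bound <- grade %>%
  subset(assess == "pass") %>%
  summarise(math = get_upper_bound(math, n = n, prop = prop),
            physics = get_upper_bound(physics, n = n, prop = prop),
            chemistry = get_upper_bound(chemistry, n = n, prop = prop),
            music = get_upper_bound(music, n = n, prop = prop),
            history = get_upper_bound(history, n = n, prop = prop))
lower_bound <- data.frame(math = get_lower_bound(1, grade, upper_bound),
                          physics = get_lower_bound(2, grade, upper_bound),
                          chemistry = get_lower_bound(3, grade, upper_bound),
                          music = get_lower_bound(4, grade, upper_bound),
                          history = get_lower_bound(5, grade, upper_bound))
# with this much data both rows should (very likely) read 45, 45, 45, 60, 60
rbind(lower = lower_bound, upper = upper_bound)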
Probably a stupid question but I have no idea how to do this.
Consider the following game, in which a fair die with six sides numbered from 1 to 6 is rolled. If a 1 or a 4 is rolled, you lose 50 euros. If you roll a 2 or a 3, nothing happens. If you roll a 5, you win 50 euros. If you roll a 6, you win 16×50 euros.
We would like to know how much money you can expect to win per game on average. Setting the seed to 990, simulate 5649 repetitions of the game.
Calculate the average of the winnings in these repetitions as an estimate of the expected winnings of the game. Report this value rounded to 2 decimal places.
Here is a base R way with a logical index on the die side.
set.seed(990)
rolls <- sample(6, 5649, TRUE)
win <- integer(5649)
win[rolls %in% c(1, 4)] <- -50
win[rolls == 5] <- 50
win[rolls == 6] <- 16*50
mean(win)
#> [1] 121.4728
Created on 2022-11-27 with reprex v2.0.2
A simpler way. Create a vector of prizes and index it with the rolls values.
prizes <- c(-50, 0, 0, -50, 50, 16*50)
win <- prizes[rolls]
mean(win)
#> [1] 121.4728
Created on 2022-11-27 with reprex v2.0.2
To output the result with 2 decimal places, just
round(mean(win), 2)
#> [1] 121.47
# Simulation of the dice roll
set.seed(990)
dice_roll <- sample(1:6, 5649, replace = TRUE)
library(dplyr)
df <- tibble(dice_roll = dice_roll)
df %>%
  mutate(
    # Setting each dice roll to its respective result
    result = case_when(
      dice_roll == 6 ~ 16 * 50,
      dice_roll == 5 ~ 50,
      dice_roll == 2 | dice_roll == 3 ~ 0,
      dice_roll == 1 | dice_roll == 4 ~ -50
    )
  ) %>%
  # The global average
  summarise(average = round(mean(result), 2)) %>%
  pull(average)
[1] 121.47
Could just get the analytical solution:
P(X=-50) = 1/3, P(X=0) = 1/3, P(X=50) = 1/6, P(X=16*50) = 1/6.
E[X] = -50/3 + 0/3 + 50/6 + 16*50/6 = 125.
-50/3 + 0/3 + 50/6 + 16*50/6
[1] 125
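As a sanity check (not part of the original answers), a much larger simulation pushes the empirical mean towards this analytical value:
# one million rolls; the sample mean converges towards 125
prizes <- c(-50, 0, 0, -50, 50, 16 * 50)
mean(prizes[sample(6, 1e6, replace = TRUE)])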
I have a large dataset with multiple categorical values that have different integer values (counts) in two different groups.
As an example
Element <- c("zinc", "calcium", "magnesium", "sodium", "carbon", "nitrogen")
no_A <- c(45, 143, 10, 35, 70, 40)
no_B <- c(10, 11, 1, 4, 40, 30)
elements_df <- data.frame(Element, no_A, no_B)
Element    no_A  no_B
Zinc         45    10
Calcium     143    11
Magnesium    10     1
Sodium       35     4
Carbon       70    40
Nitrogen     40    30
Previously I’ve just been using the code below and changing x manually to get the output values:
x = "calcium"
n1 = (elements_df %>% filter(Element== x))$no_A
n2 = sum(elements_df$no_A) - n1
n3 = (elements_df %>% filter(Element== x))$no_B
n4 = sum(elements_df$no_B) - n3
fisher.test(matrix(c(n1, n2, n3, n4), nrow = 2, ncol = 2, byrow = TRUE))
But I have a very large dataset with 4000 rows and I’d like the most efficient way to iterate through all of them and see which have significant p values.
I imagined I’d need a for loop and function, although I’ve looked through a few previous similar questions (none that I felt I could use) and it seems using apply might be the way to go.
So, in short, can anyone help me with writing code that iterates over x in each row and prints out the corresponding p values and odds ratio for each element?
You could get them all in a nice data frame like this:
`row.names<-`(do.call(rbind, lapply(seq(nrow(elements_df)), function(i) {
  f <- fisher.test(matrix(c(elements_df$no_A[i], sum(elements_df$no_A[-i]),
                            elements_df$no_B[i], sum(elements_df$no_B[-i])), nrow = 2))
  data.frame(Element = elements_df$Element[i],
             "odds ratio" = f$estimate, "p value" = scales::pvalue(f$p.value),
             "Lower CI" = f$conf.int[1], "Upper CI" = f$conf.int[2],
             check.names = FALSE)
})), NULL)
#> Element odds ratio p value Lower CI Upper CI
#> 1 zinc 1.2978966 0.601 0.6122734 3.0112485
#> 2 calcium 5.5065701 <0.001 2.7976646 11.8679909
#> 3 magnesium 2.8479528 0.469 0.3961312 125.0342574
#> 4 sodium 2.6090482 0.070 0.8983185 10.3719176
#> 5 carbon 0.3599468 <0.001 0.2158107 0.6016808
#> 6 nitrogen 0.2914476 <0.001 0.1634988 0.5218564
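If the goal is only to flag the significant elements, a small variation (a sketch, keeping the p value numeric instead of formatted) makes it easy to filter afterwards, optionally after a multiple-testing adjustment:
res <- do.call(rbind, lapply(seq(nrow(elements_df)), function(i) {
  f <- fisher.test(matrix(c(elements_df$no_A[i], sum(elements_df$no_A[-i]),
                            elements_df$no_B[i], sum(elements_df$no_B[-i])), nrow = 2))
  data.frame(Element = elements_df$Element[i],
             odds_ratio = unname(f$estimate),
             p_value = f$p.value)
}))
res$p_adj <- p.adjust(res$p_value, method = "BH")  # Benjamini-Hochberg adjustment
subset(res, p_adj < 0.05)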
I am visiting a bird sanctuary that has many different species of birds. Some species are more numerous, others less so. I have come back to the sanctuary 9 times, and after every visit I calculate the cumulative number of species I have observed. Unsurprisingly, there are diminishing returns across my visits, since I see the most numerous species on every visit, which does not increase the count of observed species. What is the best function in R to predict how many species I will have observed by my 20th visit?
Here is the data.frame
d <- structure(list(visit = 1:9,
totalNumSpeciesObserved = c(200.903, 296.329, 370.018, 431.59, 485.14, 533.233, 576.595, 616.536, 654)),
class = "data.frame", row.names = c(NA, 9L))
I expect to see a model that fits the data well and behaves in a "log-like" fashion, predicting diminishing returns.
In order to best ask a question, Stack Overflow has some good guidance: https://stackoverflow.com/help/how-to-ask
If you're trying to model this, I might take the approach of a regression on the square root of the independent variable, based on the data. It is kind of strange to think about it as a function of visits, though... maybe if they were evenly spaced time periods it would make more sense.
d <- structure(list(visit = 1:9,
totalNumSpeciesObserved = c(200.903, 296.329, 370.018, 431.59, 485.14, 533.233, 576.595, 616.536, 654)),
class = "data.frame", row.names = c(NA, 9L))
mod <- lm(totalNumSpeciesObserved ~ I(sqrt(visit)), d)
new.df <- data.frame(visit=1:13)
out <- predict(mod, newdata = new.df)
plot(d, type = 'o',pch = 16, xlim = c(1,13), ylim = c(200,800), lwd = 2, cex = 2)
points(out, type= 'o', pch = 21, col = "blue", cex = 2)
The I() wrapper allows you to transform the independent variable on the fly, hence the use of sqrt() without needing to save a new variable.
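Since the question specifically asks about visit 20, one could also extend the prediction there, and compare it with a log model, which has the "log-like" diminishing-returns shape the question mentions (a sketch, not part of the original answer):
# sqrt model from above, extended to visit 20
predict(mod, newdata = data.frame(visit = 20))
# a log model as an alternative with a similar diminishing-returns shape
mod_log <- lm(totalNumSpeciesObserved ~ log(visit), d)
predict(mod_log, newdata = data.frame(visit = 20))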
I also don't know if this helps, but you could build a simulator to test for asymptotic behaviour. For example, you could build a population:
population <- sample(size = 1e6, LETTERS[1:20],
                     replace = TRUE, prob = 1/(2:21)^2)
This says there are 20 species with decreasing abundance in your population (expand as you wish).
Then you could simulate visits and what you learn on each visit. For example, how large is the sample on each visit? Perhaps during a visit you only see 1% of the sanctuary, etc.
sim_visits <- function(visits, percent_obs, population){
  species_viewed <- vector()
  unique_views <- vector()
  for(i in 1:visits){
    # sample a fixed percentage of the population on each visit
    my_samp <- sample(x = population, size = round(percent_obs * length(population), 0),
                      replace = FALSE)
    species_viewed <- c(species_viewed, my_samp)
    # running count of distinct species seen so far
    unique_views[i] <- length(unique(species_viewed))
  }
  new_observed <- unique_views - dplyr::lag(unique_views, 1, 0)
  df <- data.frame(unique_views = unique_views, new_observed)
  # cumulative sum of the unique_views column (as reported in the output below)
  df$cummulative <- cumsum(unique_views)
  df
}
And then you could draw from the simulation many times and see what distribution of values you get.
sim_visits(9, percent_obs = .001, population = population)
unique_views new_observed cummulative
1 13 13 13
2 15 2 28
3 15 0 43
4 17 2 60
5 17 0 77
6 17 0 94
7 17 0 111
8 17 0 128
9 17 0 145
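To actually draw from the simulation many times (a sketch, assuming the population and sim_visits() defined above), you could repeat it and look at the spread of the species count after 9 visits:
set.seed(1)
final_counts <- replicate(100, sim_visits(9, percent_obs = .001,
                                          population = population)$unique_views[9])
summary(final_counts)
hist(final_counts, main = "Species seen after 9 visits, 100 simulations")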
And I don't know if this is helpful, but I find simulation a good way to conceptualise problems like these.
I'm working to implement an lpSolve solution to optimize a hypothetical daily fantasy baseball problem. I'm having trouble applying my last constraint:
position - Exactly 3 outfielders (OF), 2 pitchers (P), and 1 of everything else
cost - Total cost less than 200
team - Max number of players from any one team is 6
team - Minimum number of teams on a roster is 3
Say, for example, you have a dataframe of 1000 players with points, cost, position, and team, and you're trying to maximize average points:
library(tidyverse)
library(lpSolve)
set.seed(123)
df <- data_frame(avg_points = sample(5:45,1000, replace = T),
cost = sample(3:45,1000, replace = T),
position = sample(c("P","C","1B","2B","3B","SS","OF"),1000, replace = T),
team = sample(LETTERS,1000, replace = T)) %>% mutate(id = row_number())
head(df)
# A tibble: 6 x 5
# avg_points cost position team id
# <int> <int> <chr> <chr> <int>
#1 17 13 2B Y 1
#2 39 45 1B P 2
#3 29 33 1B C 3
#4 38 31 2B V 4
#5 17 13 P A 5
#6 10 6 SS V 6
I've implemented the first 3 constraints with the following code, but I'm having trouble figuring out how to implement the minimum number of teams on a roster. I think I need to add additional variables to the model, but I'm not sure how to do that.
#set the objective function (what we want to maximize)
obj <- df$avg_points
# set the constraint rows.
con <- rbind(t(model.matrix(~ position + 0,df)), cost = df$cost, t(model.matrix(~ team + 0, df)) )
#set the constraint values
rhs <- c(1,1,1,1,3,2,1, # 1. #exactly 3 outfielders 2 pitchers and 1 of everything else
200, # 2. at a cost less than 200
rep(6,26) # 3. max number from any team is 6
)
#set the direction of the constraints
dir <- c("=","=","=","=","=","=","=","<=",rep("<=",26))
result <- lp("max",obj,con,dir,rhs,all.bin = TRUE)
If it helps, I'm trying to replicate this paper (with minor tweaks), which has corresponding Julia code here.
This might be a solution for your problem.
This is the data I have used (identical to yours):
library(tidyverse)
library(lpSolve)
N <- 1000
set.seed(123)
df <- tibble(avg_points = sample(5:45, N, replace = T),
             cost = sample(3:45, N, replace = T),
             position = sample(c("P","C","1B","2B","3B","SS","OF"), N, replace = T),
             team = sample(LETTERS, N, replace = T)) %>%
  mutate(id = row_number())
You want to find x1...xn that maximise the objective function below:
x1 * average_points1 + x2 * average_points2 + ... + xn * average_pointsn
With the way lpSolve works, you will need to express the left-hand side of every constraint as a sum over x1...xn (and any additional variables) times the coefficient vector you provide.
Since you cannot express the number of teams with your current variables, you can introduce new ones (I will call them y1..yn_teams and z1..zn_teams):
# number of teams:
n_teams = length(unique(df$team))
Your new objective function (the ys and zs will not influence the overall objective function, since their coefficients are set to 0):
obj <- c(df$avg_points, rep(0, 2 * n_teams))
The first 3 constraints are the same, but with the added constants for y and z:
c1 <- t(model.matrix(~ position + 0, df))
c1 <- cbind(c1, matrix(0, ncol = 2 * n_teams, nrow = nrow(c1)))
c2 <- c(df$cost, rep(0, 2 * n_teams))
c3 <- t(model.matrix(~ team + 0, df))
c3 <- cbind(c3, matrix(0, ncol = 2 * n_teams, nrow = nrow(c3)))
Since you want to have at least 3 teams, you will first use y to count the number of players per team:
This constraint counts the number of players per team. You sum up all picked players of a team and subtract the corresponding y variable for that team; this should be equal to 0 (diag() creates the identity matrix; we do not worry about z at this point):
# should be x1...xn - y1...n = 0
c4_1 <- cbind(t(model.matrix(~ team + 0, df)),              # x
              -diag(n_teams),                               # y
              matrix(0, ncol = n_teams, nrow = n_teams)     # z
) # == 0
Since each y is now the number of picked players in a team, you can make sure that each z is at most 1 (i.e. effectively binary) with this constraint:
c4_2 <- cbind(t(model.matrix(~ team + 0, df)),  # x1+...+xn
              -diag(n_teams),                   # - (y1+...+yn)
              diag(n_teams)                     # + z, so z stays binary
) # <= 1
This is the constraint that ensures that at least 3 teams are picked:
c4_3 <- c(rep(0, nrow(df) + n_teams), # x and y
          rep(1, n_teams)             # z
) # >= 3
You also need to tie the z variables to the player counts, so that whenever a team has at least one picked player (y_i >= 1) its indicator z_i is forced to 1.
You can use the big-M method for that to create a constraint, which is: y_i <= M * z_i.
Or, in a more lpSolve friendly version: y_i - M * z_i <= 0.
In this case you can use 6 as a value for M, because it is the largest value any y can take:
c4_4 <- cbind(matrix(0, nrow = n_teams, ncol = nrow(df)),
              diag(n_teams),
              -diag(n_teams) * 6)
This constraint is added to make sure all x are binary:
#all x binary
c5 <- cbind(diag(nrow(df)),                                 # x
            matrix(0, ncol = 2 * n_teams, nrow = nrow(df))  # y + z
)
Create the new constraint matrix:
con <- rbind(c1,
             c2,
             c3,
             c4_1,
             c4_2,
             c4_3,
             c4_4,
             c5)
#set the constraint values
rhs <- c(1, 1, 1, 1, 3, 2, 1, # 1. exactly 3 outfielders, 2 pitchers and 1 of everything else
         200,                 # 2. at a cost less than 200
         rep(6, n_teams),     # 3. max number from any team is 6
         rep(0, n_teams),     # c4_1
         rep(1, n_teams),     # c4_2
         3,                   # c4_3
         rep(0, n_teams),     # c4_4
         rep(1, nrow(df))     # c5 binary
)
#set the direction of the constraints
dir <- c(rep("==", 7),       # c1
         "<=",               # c2
         rep("<=", n_teams), # c3
         rep("==", n_teams), # c4_1
         rep("<=", n_teams), # c4_2
         ">=",               # c4_3
         rep("<=", n_teams), # c4_4
         rep("<=", nrow(df)) # c5
)
The lp() call is almost the same, but I am using all.int instead of all.bin so that the y counts of players per team can be greater than 1:
result <- lp("max",obj,con,dir,rhs,all.int = TRUE)
Success: the objective function is 450
roster <- df[result$solution[1:nrow(df)] == 1, ]
roster
# A tibble: 10 x 5
avg_points cost position team id
<int> <int> <chr> <chr> <int>
1 45 19 C I 24
2 45 5 P X 126
3 45 25 OF N 139
4 45 22 3B J 193
5 45 24 2B B 327
6 45 25 OF P 340
7 45 23 P Q 356
8 45 13 OF N 400
9 45 13 SS L 401
10 45 45 1B G 614
If you change your data to
N <- 1000
set.seed(123)
df <- tibble(avg_points = sample(5:45, N, replace = T),
             cost = sample(3:45, N, replace = T),
             position = sample(c("P","C","1B","2B","3B","SS","OF"), N, replace = T),
             team = sample(c("A", "B"), N, replace = T)) %>%
  mutate(id = row_number())
It will now be infeasible, because the number of teams in the data is less than 3.
Going back to the original data, you can check that the z variables correctly flag the teams that end up on the roster:
sort(unique(df$team))[result$solution[1027:1052]==1]
[1] "B" "E" "I" "J" "N" "P" "Q" "X"
sort(unique(roster$team))
[1] "B" "E" "I" "J" "N" "P" "Q" "X"
I have a dataset with both missing-at-random (MAR) and censored data. The variables are correlated and I am trying to impute the missing data conditionally so that I can estimate the distribution parameters for a correlated multivariate normal distribution. I would like to use Gibbs MCMC, but am having difficulty implementing the procedure. My dataframe has 5 variables (denoted x1:x5), 1099 samples which contain some combination of MAR, censored and observed values. This is what I have tried so far:
# packages
library(msm)
library(tmvtnorm)
library(MCMCpack)
# priors
theta0<-c(rep(0, 5))
Sigma0<-S0<-diag(5)
nu0<-4
# initialize parameters
theta<-c(rep(0, 5))
Tau<-diag(5)
# initialize output matrix
n_samples <- 1000
mu_MCMC <- matrix(0, nrow = n_samples, ncol = 5)
mu_MCMC[1,] <- theta
cov_MCMC <- matrix(0, nrow = n_samples, ncol = 25)
cov_MCMC[1,] <- c(diag(5))
# detection limits
det_lim <- matrix(c(-1.7, 0, 0, 0, 0), nrow = 1, ncol = 5)
# function to detect NaN (i.e., below detection data)
is.nan.data.frame <- function(x)
do.call(cbind, lapply(x, is.nan))
for(i in 2:n_samples){
  imputedDF <- data.frame()
  for(r in 1:nrow(originalDF)){
    # variables that are MAR or censored
    mis <- r[, is.na(r) & is.nan(r)]
    # variables that are observed
    obs <- r[, !is.na(r)]
    # subset mu for missing, observed
    mu1 <- mu[, names(r) %in% names(mis)]
    mu2 <- mu[, names(r) %in% names(obs)]
    # calculate sigmas for MVN partitions of mis, obs
    sigma11 <- sigma[names(r) %in% names(mis), names(r) %in% names(mis)]
    sigma22 <- sigma[names(r) %in% names(obs), names(r) %in% names(obs)]
    sigma12 <- sigma[names(r) %in% names(obs), names(r) %in% names(mis)]
    sigma21 <- t(sigma12)
    # create matrix for detection limits based on missing values
    ## if NaN, use detection limit; if NA use Inf
    dl <- c(ifelse("x1" %in% names(is.nan(r)), det_lim[1, "x1"], Inf),
            ifelse("x2" %in% names(is.nan(r)), det_lim[1, "x2"], Inf),
            ifelse("x3" %in% names(is.nan(r)), det_lim[1, "x3"], Inf),
            ifelse("x4" %in% names(is.nan(r)), det_lim[1, "x4"], Inf),
            ifelse("x5" %in% names(is.nan(r)), det_lim[1, "x5"], Inf))
    # compute mu, sigma to use for conditional MVN
    ## if all values are missing
    if(length(names(obs)) == 0) {
      mu_mis <- mu1
      sigma_mis <- sigma11
    ## otherwise
    } else {
      mu_mis <- mu1 + sigma12 %*% solve(sigma22) * (obs - t(mu2))
      sigma_mis <- sigma11 - sigma12 %*% solve(sigma22) %*% sigma21
    }
    # imputation
    ## if all data are observed, missing is empty
    if(length(obs) == 0) {
      mis_impute <- data.frame()
    ## only need to impute a single value
    } else if(length(names(mis)) == 1) {
      mis_impute <- rtnorm(1, mean = mu_mis, sd = sigma_mis, lower = -Inf, upper = dl)
    ## have more than one missing value
    } else {
      mis_impute <- rtmvnorm(1, mean = mu_mis, sigma = sigma_mis,
                             lower = rep(-Inf, length = length(names(mis))), upper = dl)
    }
    # merge observed values with simulated
    ## if all values observed
    if(length(names(mis)) == 0) {
      sim_result <- obs
    } else {
      sim_result <- cbind(mis_impute, obs)
    }
    imputedDF <- rbind(imputedDF, sim_result)
  }
  # update theta
  v <- solve(solve(Sigma0) + nrow(sim_result) * Tau)
  m <- v %*% (solve(Sigma0) %*% theta0 + Tau %*% apply(sim_result, 2, sum))
  mu <- as.data.frame(rmvnorm(1, m, v))
  mu_MCMC[i,] <- mu
  # update Sigma
  tmp <- t(sim_result) - mu
  Tau <- rwish(nu0 + nrow(sim_result), solve(S0 + t(tmp) %*% tmp))
  sigma <- matrix(c(solve(Tau)), nrow = 5, ncol = 5, byrow = TRUE)
  cov_MCMC[i,] <- c(solve(Tau))
}
I keep running into errors because the imputation returns NaN and NA values, but I can't figure out what is going wrong because when I test it just using the inner loop to impute the data, it seems to work. Thus, the issue seems to be the parameter updating but I can't figure it out!
Preamble:
My sense is that part of the problem here is that we do not have a good example dataset to work from.
My feeling is we can address this by creating an example dataset to frame the solution discussion. A useful package to this end is the wakefield package, which allows for the creation of simulated datasets.
We might, for example, create a dataset of 2000 people, where some of the age, gender, employment status, education, and marital status data is missing.
Imputation
The core question is: can we impute the age or gender from the other data in the data set?
For example, if we do not know someone's age, can we impute it from their marital status, employment type, and/or their education level? At a very simplistic level, we might simply search for entries with NA for age and look at marital status. If the marital status is "married", we might assume our data set is for Americans, look up the average age at marriage, and replace the NA with an estimated age for a married person.
We can expand on this and make our estimates more accurate by taking more variables into account. For example, we might look at marital status, education level, and employment status together to further improve our age estimate. If a person is married, has a Ph.D., and is retired, we push the age estimate upwards. If a person is a single student, we push it lower. Further to this, we can look at the distribution of the ages in the data set when imputing the missing values.
Generate an Example Data Set.
# packages
requiredPackages <- c("wakefield", "dplyr", "BaylorEdPsych", "mice", "MCMCpack")
ipak <- function(pkg) {
  new.pkg <- pkg[!(pkg %in% installed.packages()[, "Package"])]
  if (length(new.pkg)) {
    install.packages(new.pkg, dependencies = TRUE)
  }
  sapply(pkg, require, character.only = TRUE)
}
ipak(requiredPackages)
# generate some data for Females with 5% missing values
set.seed(10)
f.df <- r_data_frame(n = 1000, age,
  gender(x = c("M", "F"), prob = c(0, 1), name = "Gender"),
  employment(x = c("Full Time", "Part Time", "Unemployed", "Retired", "Student"),
             prob = c(0.6, 0.1, 0.1, 0.1, 0.1), name = "Employment"),
  education(x = c("No Schooling Completed", "Nursery School to 8th Grade",
                  "9th Grade to 12th Grade, No Diploma", "Regular High School Diploma",
                  "GED or Alternative Credential", "Some College, Less than 1 Year",
                  "Some College, 1 or More Years, No Degree", "Associate's Degree",
                  "Bachelor's Degree", "Master's Degree", "Professional School Degree",
                  "Doctorate Degree"),
            prob = c(0.013, 0.05, 0.085, 0.246, 0.039, 0.064, 0.15, 0.075, 0.176,
                     0.072, 0.019, 0.012), name = "Education"),
  marital(x = c("Married", "Divorced", "Widowed", "Separated", "Never Married"),
          prob = NULL, name = "Marital")) %>%
  r_na(cols = 1 - 3, prob = 0.05)
# str(f.df)
summary(f.df)
set.seed(20)
# generate some data for Males with 3% missing values
m.df <- r_data_frame(n = 1000, age,
  gender(x = c("M", "F"), prob = c(1, 0), name = "Gender"),
  employment(x = c("Full Time", "Part Time", "Unemployed", "Retired", "Student"),
             prob = c(0.6, 0.1, 0.1, 0.1, 0.1), name = "Employment"),
  education(x = c("No Schooling Completed", "Nursery School to 8th Grade",
                  "9th Grade to 12th Grade, No Diploma", "Regular High School Diploma",
                  "GED or Alternative Credential", "Some College, Less than 1 Year",
                  "Some College, 1 or More Years, No Degree", "Associate's Degree",
                  "Bachelor's Degree", "Master's Degree", "Professional School Degree",
                  "Doctorate Degree"),
            prob = c(0.013, 0.05, 0.085, 0.246, 0.039, 0.064, 0.15, 0.075, 0.176,
                     0.072, 0.019, 0.012), name = "Education"),
  marital(x = c("Married", "Divorced", "Widowed", "Separated", "Never Married"),
          prob = NULL, name = "Marital")) %>%
  r_na(cols = 1 - 3, prob = 0.03)
summary(m.df)
all.df = rbind.data.frame(m.df, f.df)
summary(all.df)
Data Summary
> summary(all.df)
Age Gender Employment Education Marital
Min. :18.00 M:1000 Full Time :1142 Regular High School Diploma :459 Married :394
1st Qu.:35.00 F:1000 Part Time : 207 Bachelor's Degree :356 Divorced :378
Median :54.00 Unemployed: 193 Some College, 1 or More Years, No Degree:284 Widowed :411
Mean :53.76 Retired : 182 9th Grade to 12th Grade, No Diploma :156 Separated :379
3rd Qu.:72.00 Student : 196 Associate's Degree :145 Never Married:358
Max. :89.00 NA's : 80 (Other) :520 NA's : 80
NA's :80 NA's : 80
>
Is the data Missing Completely at Random (MCAR) or not?
# Test for MCAR - Missing Completely at Random...
test_mcar <- LittleMCAR(all.df)
print(test_mcar$amount.missing)
print(test_mcar$p.value)
Console Output
> # Test for MCAR - Missing Completely at Random...
> test_mcar <- LittleMCAR(all.df)
this could take a while
> print(test_mcar$amount.missing)
Age Gender Employment Education Marital
Number Missing 80.00 0 80.00 80.00 80.00
Percent Missing 0.04 0 0.04 0.04 0.04
> print(test_mcar$p.value)
[1] 0.02661428
Imputation of Data
OK, let us first look at the distribution of missing values. We can run the mice::md.pattern() function to show the distribution of the missing values over the other columns in the dataframe. The md.pattern() output is useful for suggesting which variables might be good candidates for imputing the missing values:
> md.pattern(all.df)
Gender Age Employment Education Marital
1696 1 1 1 1 1 0
73 1 1 1 1 0 1
73 1 1 1 0 1 1
2 1 1 1 0 0 2
71 1 1 0 1 1 1
3 1 1 0 1 0 2
2 1 1 0 0 1 2
71 1 0 1 1 1 1
2 1 0 1 1 0 2
3 1 0 1 0 1 2
4 1 0 0 1 1 2
0 80 80 80 80 320
OK, from this we can now move on to imputing the missing values:
imp <- mice(all.df, m = 5, maxit = 50, seed = 1234, printFlag = FALSE)
The m = 5 parameter specifies that you end up with five plausible imputations for each missing value.
The maxit = 50 parameter specifies that the algorithm runs up to 50 iterations before it converges to a solution; this can be adjusted upward or downward for the desired precision (see the convergence check sketched below).
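As an aside (not part of the original answer), one quick way to judge whether maxit was large enough is to look at the convergence plots that mice produces for a mids object:
# Trace plots of the mean and SD of the imputed values per iteration and chain;
# flat, well-mixed lines suggest the chosen maxit was sufficient.
plot(imp)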
The mice() function may take a while, depending on the number of iterations we specify. Upon completion, we can see some of the imputed values for Age using the head() function:
head(imp$imp$Age)
1 2 3 4 5
7 28 49 37 70 89
33 55 54 52 88 24
56 89 83 68 71 61
84 43 43 24 30 31
96 28 64 89 41 50
120 47 34 36 22 77
To actually complete the imputation, we have to run the complete() function and assign the results to a new dataframe. This call to complete() collects all imputations in the assigned dataframe via the "long" parameter:
all_imputed_df <- complete(imp, "long", include = TRUE)
table(all_imputed_df$.imp, is.na(all_imputed_df$Age))
Console:
> all_imputed_df <- complete(imp, "long", include = TRUE)
> table(all_imputed_df$.imp, is.na(all_imputed_df$Age))
FALSE TRUE
0 1920 80
1 2000 0
2 2000 0
3 2000 0
4 2000 0
5 2000 0
Now we have a long-format dataset of 12,000 rows: the original data (with its missing ages) plus the 5 imputed versions.
Let's try a regression with imputation #3.
First, extract imputation #3:
impute.3 <- subset(all_imputed_df,.imp=='3')
summary(impute.3)
Console:
> impute.3 <- subset(all_imputed_df, .imp == "3")
> summary(impute.3)
.imp .id Age Gender Employment
Min. :3 Min. : 1.0 Min. :18.00 M:1000 Full Time :1192
1st Qu.:3 1st Qu.: 500.8 1st Qu.:35.00 F:1000 Part Time : 211
Median :3 Median :1000.5 Median :54.00 Unemployed: 202
Mean :3 Mean :1000.5 Mean :53.89 Retired : 191
3rd Qu.:3 3rd Qu.:1500.2 3rd Qu.:72.00 Student : 204
Max. :3 Max. :2000.0 Max. :89.00
Education Marital
Regular High School Diploma :478 Married :416
Bachelor's Degree :376 Divorced :390
Some College, 1 or More Years, No Degree:295 Widowed :425
9th Grade to 12th Grade, No Diploma :168 Separated :393
Associate's Degree :150 Never Married:376
Master's Degree :141
(Other) :392
Now we can run a linear regression:
> lm(Age ~ Education + Gender + Employment + Marital, data = impute.3)
Call:
lm(formula = Age ~ Education + Gender + Employment + Marital,
data = impute.3)
Coefficients:
(Intercept) EducationNursery School to 8th Grade
51.6733 1.4100
Education9th Grade to 12th Grade, No Diploma EducationRegular High School Diploma
1.3675 0.7611
EducationGED or Alternative Credential EducationSome College, Less than 1 Year
1.0365 -2.6069
EducationSome College, 1 or More Years, No Degree EducationAssociate's Degree
0.3563 0.9506
EducationBachelor's Degree EducationMaster's Degree
1.2505 -1.6372
EducationProfessional School Degree EducationDoctorate Degree
1.1774 0.4936
GenderF EmploymentPart Time
-0.3190 1.1316
EmploymentUnemployed EmploymentRetired
3.1622 -0.6855
EmploymentStudent MaritalDivorced
3.0850 0.2934
MaritalWidowed MaritalSeparated
2.3162 1.6833
MaritalNever Married
1.6169
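As an aside (not part of the original answer), rather than analysing a single completed dataset, the standard mice workflow fits the model within each imputed dataset and pools the results with Rubin's rules:
# Fit the same model in each of the 5 imputed datasets and pool the estimates
fit <- with(imp, lm(Age ~ Education + Gender + Employment + Marital))
summary(pool(fit))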
MCMCregress
library(MCMCpack) # b0 = prior mean, B0 = prior precision = 1/variance
fitBayes <- MCMCregress(Age ~ Education + Gender + Employment + Marital, data = impute.3, mcmc = 10000, seed = 1234, b0 = 0, B0 = 0.01, drop.unused.levels = TRUE)
summary(fitBayes)
Console Output
> fitBayes <- MCMCregress(Age ~ Education + Gender + Employment + Marital, data = impute.3, mcmc = 10000, seed = 1234, b0 = 0, B0 = 0.01, drop.unused.levels = TRUE)
> summary(fitBayes)
Iterations = 1001:11000
Thinning interval = 1
Number of chains = 1
Sample size per chain = 10000
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
(Intercept) 48.67377 2.5337 0.025337 0.025337
EducationNursery School to 8th Grade 3.77088 3.0514 0.030514 0.030514
Education9th Grade to 12th Grade, No Diploma 3.81009 2.7794 0.027794 0.027794
EducationRegular High School Diploma 3.24531 2.4933 0.024933 0.025412
EducationGED or Alternative Credential 3.38733 3.2155 0.032155 0.032155
EducationSome College, Less than 1 Year -0.08419 2.9104 0.029104 0.029577
EducationSome College, 1 or More Years, No Degree 2.82889 2.6092 0.026092 0.026092
EducationAssociate's Degree 3.32932 2.8410 0.028410 0.028410
EducationBachelor's Degree 3.72272 2.5228 0.025228 0.025659
EducationMaster's Degree 0.87738 2.8611 0.028611 0.028611
EducationProfessional School Degree 3.27542 4.0199 0.040199 0.040199
EducationDoctorate Degree 2.43794 4.5996 0.045996 0.045996
GenderF -0.11321 0.9327 0.009327 0.009327
EmploymentPart Time 1.25556 1.5756 0.015756 0.016170
EmploymentUnemployed 3.27395 1.6213 0.016213 0.015708
EmploymentRetired -0.52614 1.6394 0.016394 0.016394
EmploymentStudent 3.17027 1.6058 0.016058 0.016889
MaritalDivorced 0.72379 1.4715 0.014715 0.014715
MaritalWidowed 2.73130 1.4394 0.014394 0.014706
MaritalSeparated 2.10423 1.4608 0.014608 0.014608
MaritalNever Married 2.00781 1.4960 0.014960 0.014960
sigma2 448.01488 14.0715 0.140715 0.140715
2. Quantiles for each variable:
2.5% 25% 50% 75% 97.5%
(Intercept) 43.75477 46.9556 48.6619 50.3967 53.609
EducationNursery School to 8th Grade -2.19290 1.7079 3.7701 5.8216 9.718
Education9th Grade to 12th Grade, No Diploma -1.59323 1.9586 3.8326 5.6676 9.349
EducationRegular High School Diploma -1.61001 1.5641 3.2474 4.9296 8.155
EducationGED or Alternative Credential -2.88523 1.2095 3.4173 5.5405 9.691
EducationSome College, Less than 1 Year -5.75364 -2.0617 -0.1009 1.8986 5.614
EducationSome College, 1 or More Years, No Degree -2.28754 1.0853 2.8608 4.5718 7.895
EducationAssociate's Degree -2.27611 1.4311 3.3285 5.2330 8.978
EducationBachelor's Degree -1.21780 2.0258 3.7275 5.4203 8.655
EducationMaster's Degree -4.61270 -1.0872 0.8601 2.8484 6.456
EducationProfessional School Degree -4.63027 0.5900 3.2767 5.9475 11.059
EducationDoctorate Degree -6.47767 -0.6371 2.4553 5.4188 11.705
GenderF -1.95673 -0.7298 -0.1067 0.4903 1.727
EmploymentPart Time -1.82784 0.1849 1.2597 2.3160 4.354
EmploymentUnemployed 0.09335 2.1988 3.2674 4.3557 6.433
EmploymentRetired -3.80162 -1.6316 -0.5147 0.5953 2.706
EmploymentStudent 0.03387 2.0713 3.1502 4.2227 6.342
MaritalDivorced -2.15073 -0.2732 0.7249 1.7266 3.602
MaritalWidowed -0.13488 1.7817 2.7367 3.6961 5.567
MaritalSeparated -0.76396 1.1177 2.1118 3.0700 5.001
MaritalNever Married -0.92230 0.9950 1.9976 3.0248 4.898
sigma2 420.98019 438.4621 447.7222 457.2730 476.481
Hopefully, the above observations point you in the right direction.
Citations:
R Package: Mice - Multivariate Imputation by Chained Equations Reference Manual By: Stef van Buuren
Flexible Imputation of Missing Data By Stef van Buuren (Online Book)
Practical Predictive Analytics by: Ralph Winters
Simulation for Data Science with R by: Matthias Templ
Bayesian Data Analysis, Third Edition, 3rd Edition By: Andrew Gelman; John B. Carlin; Hal S. Stern; David B. Dunson; Aki Vehtari; Donald B. Rubin