I am working on a meta-analysis and a sensitivity analysis for missing data. I want to replace censored data with either 0 or 1 according to a predefined probability.
I have a dataset with column x: timepoints and y: events (1 = event, 0 = censored). For the analysis I replaced some of the 0s with NAs. z is the indicator for the treatment arm. I want to replace the NAs with either 1 or 0 with a predefined probability.
This is my code (just an example):
library(mice)
x <- c(1:10)
y <- c(1,1,1,NA,NA,NA,1,1,0,NA)
z <- rep(2,10)
data <- data.frame(x,y,z)
str(data)
md.pattern(data)
mice.impute.myfunct <- function(y, ry, x, ...) {
  event <- sample(c(0, 1), size = 1, replace = TRUE, prob = c(0.5, 0.5))
  return(event)
}
data.imp <- mice(data, me = c("","myfunct",""), m = 1)
data.comp <- complete(data.imp)
I would expect each NA in y to be replaced with 0 or 1 according to the predefined probability (e.g. 20% zeros and 80% ones). But the NAs all get replaced with either only 0 or only 1.
I have to admit that I am quite a beginner with R and have never had to write my own little functions before.
Thank you very much for your help!
Here is a possible solution that just replaces the missing values with 0 and 1, with a probability varying between 0.1 and 0.9:
for (i in seq(0.1, 0.9, 0.1)) {
  data[[paste0("y_imp", i)]] <- data$y
  N <- sum(is.na(data$y))
  data[[paste0("y_imp", i)]][is.na(data[[paste0("y_imp", i)]])] <- sample(c(0, 1), size = N, replace = TRUE, prob = c(i, 1 - i))
}
data[[paste0("y_imp",i)]] <- data$y creates the column in which each missing value is replaced by 0 with probability i (and by 1 with probability 1 - i).
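For reference, the reason the mice attempt above replaces every NA with the same value is that the custom method returns a single draw, which mice then recycles across all missing entries. A minimal sketch of a fix, assuming the usual mice convention that a custom mice.impute.* method returns one imputed value per missing entry (the 0.2/0.8 probabilities are only an example):
library(mice)
# Draw one 0/1 value for every missing entry instead of a single recycled value
mice.impute.myfunct <- function(y, ry, x, ...) {
  n.mis <- sum(!ry)  # number of values to impute
  sample(c(0, 1), size = n.mis, replace = TRUE, prob = c(0.2, 0.8))
}
data.imp <- mice(data, method = c("", "myfunct", ""), m = 1)
data.comp <- complete(data.imp)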
For some reason, my model is not running. I created a model matrix to run a simple model with the package neuralnet. I know it might be challenging to debug other people's code, especially without the data, but in case you think you could assist me, here is the code:
library(tidyverse)
library(neuralnet)
#Activity 1 Load Data
featchannels <-read.csv("features_channel.csv")
trainTargets <-read.table("traintargets.txt")
#Activity 2 Normalize every column of the features dataset using min-max
#normalization to range [0-1].
normalized <- function(x) {
return((x-min(x)) /(max(x) -min(x)))
}
featchannels <- normalized(featchannels)
#Activity 3 Add a target feature named response to the features dataset
#with 0-1 values read from trainTargets.txt, with 1 indicating P300
#response and 0 otherwise.
colnames(trainTargets)[1] <- "State"
featchannels <- cbind(featchannels, trainTargets)
# Changing rows to P300 and others.
featchannels <- within(featchannels, State <- factor(State, labels = c("Other", "P300")))
featchannels$State <- as.factor(featchannels$State)
#4. Take the first 3840 rows of the dataset as the training data set, and
#the remaining 960 rows as the testing data set.
training <- featchannels[1:3840,]
testing <- featchannels[3841:4800,]
#Activity 6
#Creating the model matrix before running the model
df_comb_training <- training
y <- model.matrix(~ df_comb_training$State + 0, data = df_comb_training[, c('State'), drop = FALSE])
# fix up names for as.formula
y_feats <- gsub("^[^ ]+\\$", "", colnames(y))
colnames(y) <- y_feats
df_comb_training <- df_comb_training[, !(colnames(df_comb_training) == "State")]
feats <- colnames(df_comb_training)
df_comb_training <- cbind(y, df_comb_training)
# Concatenate strings
f <- paste(feats, collapse=' + ')
y_f <- paste(y_feats, collapse=' + ')
f <- paste(y_f, '~', f)
# Convert to formula
f <- as.formula(f)
model_h5 <- neuralnet(f, df_comb_training, stepmax = 1e+08, hidden = 5)
I have the following data frame:
library(dplyr)
set.seed(42)
df <- data_frame(x = sample(seq(0, 1, 0.1), 5, replace = T), y = sample(seq(0, 1, 0.1), 5, replace = T), z= sample(seq(0, 1, 0.1), 5, replace = T) )
For each row in df, I would like to find out whether there is a row in df2 that is close to it (a "neighbor") in all columns, where "close" means that it does not differ by more than 0.1 in any column.
So for instance, a proper neighbor to the row (1, 0.5, 0.5) would be (0.9, 0.6, 0.4).
The second data set is
set.seed(42)
df2 <- data_frame(x = sample(seq(0, 1, 0.1), 10, replace = T), y = sample(seq(0, 1, 0.1), 10, replace = T), z= sample(seq(0, 1, 0.1), 10, replace = T) )
In this case there is no "neighbor", so I'm supposed to get FALSE for all rows of df.
My actual data frames are much bigger than this (dozens of columns and hundreds of thousands of rows), so the naming has to be very general rather than "x", "y" and "z".
I have a sense that this can be done using mutate and funs; for example, I tried this line:
df <- df %>% mutate_all(funs(close = (. <= df2(, .)+0.1) & (. >= df2(, .)-0.1))
But got an error.
Any ideas?
You can use the package fuzzyjoin:
library(fuzzyjoin)
# adding two rows that match
df2 <- rbind(df2,df[1:2,] +0.01)
df %>%
  fuzzy_left_join(df2, match_fun = function(x, y) y < x + 0.1 & y > x - 0.1) %>%
  mutate(found = !is.na(x.y)) %>%
  select(-4:-6)
# # A tibble: 5 x 4
# x.x y.x z.x found
# <dbl> <dbl> <dbl> <lgl>
# 1 1 0.5 0.5 TRUE
# 2 1 0.8 0.7 TRUE
# 3 0.3 0.1 1 FALSE
# 4 0.9 0.7 0.2 FALSE
# 5 0.7 0.7 0.5 FALSE
Find more info here: Joining/matching data frames in R
The machine-learning approach to finding a close entry in a multi-dimensional dataset is Euclidean distance.
The general approach is to normalize all the attributes: make the range of each column the same, zero to one or minus one to one. That equalizes the effect of columns with large and small values. In more advanced approaches one would also center the adjusted column values on zero. The test criterion is scaled the same way.
The next step is to calculate the distance of each observation from its neighbors. If the data set is small or computing time is cheap, calculate the distance from every observation to every other one. The Euclidean distance from observation 1 (row 1) to observation 2 (row 2) is sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + ...). Choose your criterion and select.
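As a hedged illustration of that general recipe (not part of the original answer), a min-max normalization followed by all pairwise Euclidean distances could look like this, assuming df and df2 are purely numeric and no column is constant:
minmax <- function(v) (v - min(v)) / (max(v) - min(v))
a <- apply(as.matrix(df),  2, minmax)  # in practice, scale both sets with the same per-column min/max
b <- apply(as.matrix(df2), 2, minmax)
# ||a_i - b_j||^2 = ||a_i||^2 + ||b_j||^2 - 2 * (a_i . b_j)
sq    <- outer(rowSums(a^2), rowSums(b^2), "+") - 2 * a %*% t(b)
dists <- sqrt(pmax(sq, 0))  # pmax() guards against tiny negative rounding errors
dim(dists)                  # nrow(df) x nrow(df2)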
In your case, the selection criterion is simpler: two observations are close if no attribute differs by more than 0.1 between them. I assume that df and df2 have the same number of columns in the same order, and that close observations are relatively rare, so my approach stops investigating a pair as soon as it is found to be distant. If you have hundreds of thousands of rows, you will likely exhaust memory if you try to calculate all the combinations at the same time.
~~~~~
You have a big problem. If your data sets df and df2 are one hundred thousand rows each with four dozen columns, the machine needs to do about 4.8e+11 comparisons (1e5 * 1e5 * 48), and the scorecard at the end will have 1e+10 results (close or distant). I started with some subsetting to do the comparisons, with tearful results: R wanted matrices of the same size, and the kluge I devised was unsuccessful. So I regressed to the days of FORTRAN and did it with loops. With the loop approach, you can subset the problem and finish without smoking your machine.
From the sample data, I did the comparisons by hand, all 150 of them: nrow(df) * nrow(df2) * ncol(df). There were no close observations in the sample data by the definition you gave.
Here is how I intended to present the results before transferring the results to a new column in df.
dfclose <- matrix(TRUE, nrow = nrow(df), ncol = nrow(df2))
dfclose # Have a look
This matrix describes whether each observation in df (rows of dfclose) is close to each observation in df2 (columns of dfclose). If close, the entry is TRUE.
Here is the container for the results of the distance measures:
dfdist <- matrix(0, nrow = nrow(df), ncol = nrow(df2))
dfdist # have a look; it's the same format, but with numbers
We start with the assumption that every observation in df is close to every observation in df2: the total distance is zero. To that we add a penalty for each attribute whose absolute difference (its Manhattan distance) is greater than 0.1. As soon as the accumulated score is greater than zero, the pair is no longer close and we needn't evaluate it any further.
closeCriterion <- function(origin, dest) {
  manhattanDistance <- abs(origin - dest)
  # print(paste("manhattanDistance =", manhattanDistance))
  if (manhattanDistance < .1) ret <- 0 else ret <- 1
  ret
}
convertScore <- function(x) if (x>0) FALSE else TRUE
for (j in 1:ncol(df)) {
  print(paste("col =", j))
  for (i in 1:nrow(df)) {
    print(paste("df row =", i))
    for (k in 1:nrow(df2)) {
      # print(paste("df2 row (and dfdist column) =", k))
      distantScore <- closeCriterion(df[i, j], df2[k, j])
      # print(paste("df and dfdist row =", i, " df2 row (and dfdist column) =", k, " distantScore =", distantScore))
      dfdist[i, k] <- dfdist[i, k] + distantScore
    }
  }
}
dfdist # have a look at the numerical results
dfclose <- matrix(sapply(dfdist, convertScore), ncol = nrow(df2))  # sapply gives a plain logical matrix
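To then transfer the result into a new column of df, as intended above, something like the following should work (a sketch; the column name neighbor_found is only illustrative and assumes dfclose is a plain logical matrix):
df$neighbor_found <- apply(dfclose, 1, any)  # TRUE if any row of df2 is close to this row of df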
I wanted to see what the process would look like at scale.
set.seed(42)
df <- matrix(rnorm(3000), ncol = 30)
set.seed(42)
df2 <-matrix(rnorm(5580), ncol = 30)
dfdist <- matrix(0, nrow = nrow(df), ncol = nrow(df2))
Then I ran the code block to see what would happen.
~ ~ ~
You might reconsider the problem definition. I ran the model several times, changing the criterion for closeness. If the entry in each of three dozen columns in df2 has a 90% chance of matching its correspondent in df, the row as a whole has only about a 2.2% chance of matching (0.9^36 ≈ 0.022). The example data is not such a good test case for the algorithm.
Best of luck
Here's one way to calculate that column without fuzzyjoin
library(tidyverse)
found <-
  expand.grid(row.df = seq(nrow(df)),
              row.df2 = seq(nrow(df2))) %>%
  mutate(in.range = pmap_lgl(., ~ all(abs(df[.x, ] - df2[.y, ]) <= 0.1))) %>%
  group_by(row.df) %>%
  summarise_at('in.range', any) %>%
  select(in.range)
I have encountered a problem with the use of the mice function to do multiple imputation. I want to do imputation only on part of the missing data, which, looking at the help, seems possible and straightforward. But I can't get it to work.
Here is the example:
I have some missing data on x and y:
library(mice)
plouf <- data.frame(ID = rep(LETTERS[1:10],each = 10), x = sample(10,100,replace = T), y = sample(10,100,replace = T))
plouf[sample(100,10),c("x","y")] <- NA
I want to impute missing data only on y:
where <- data.frame(ID = rep(FALSE,100),x = rep(FALSE,100),y = is.na(plouf$y))
I do the imputation
plouf.imp <- mice(plouf, m = 1,method="pmm",maxit=5,where = where)
I look at the imputed values:
test <- complete(plouf.imp)
Here I still have NAs on y:
> sum(is.na(test$y))
[1] 10
If I use where to impute all missing values, it works:
where <- data.frame(ID = rep(FALSE,100),x = is.na(plouf$x),y = is.na(plouf$y))
plouf.imp <- mice(plouf, m = 1,method="pmm",maxit=5,where = where)
test <- complete(plouf.imp)
> sum(is.na(test$y))
[1] 0
but it does the imputation on x too, which I don't want in this specific case (for speed reasons in a statistical simulation study).
Does anyone have any idea?
This is happening because of the line below -
plouf[sample(100,10),c("x","y")] <- NA
Let's consider your 1st case, wherein you want to impute y only. Check its PredictorMatrix:
plouf.imp <- mice(plouf, m = 1, method="pmm", maxit=5, where = where)
plouf.imp
#PredictorMatrix:
# ID x y
#ID 0 0 0
#x 0 0 0
#y 1 1 0
It says that y's missing values will be predicted from ID and x, since those entries are 1 in row y.
Now check your sample data, where you populate NA in the x and y columns: you can notice that wherever y is NA, x has an NA as well.
So when mice consults the PredictorMatrix to impute the y column, it encounters NA in x and ignores those rows, because all predictors (i.e. ID and x) are expected to be non-missing in order to predict the outcome, i.e. the missing values in y.
Try this -
library(mice)
#sample data
set.seed(123)
plouf <- data.frame(ID = rep(LETTERS[1:10],each = 10), x = sample(10,100,replace = T), y = sample(10,100,replace = T))
plouf[sample(100,10), "x"] <- NA
set.seed(999)
plouf[sample(100,10), "y"] <- NA
#missing value imputation
whr <- data.frame(ID = rep(FALSE,100), x = rep(FALSE,100), y = is.na(plouf$y))
plouf.imp <- mice(plouf, m = 1, method="pmm", maxit=5, where = whr)
test <- complete(plouf.imp)
sum(is.na(test$y))
#[1] 1
Here only one value of y is left to be imputed, and in that case both x and y have NA, i.e. row number 39 (similar to your 1st case).
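If you do want y imputed even in the rows where x is missing as well, one option that follows from this mechanism is to drop x as a predictor of y. A hedged sketch (assuming a mice version that provides make.predictorMatrix; otherwise build the matrix by hand):
pred <- make.predictorMatrix(plouf)
pred["y", "x"] <- 0  # don't use x to predict y, so rows with a missing x are no longer skipped
plouf.imp <- mice(plouf, m = 1, method = "pmm", maxit = 5, where = whr, predictorMatrix = pred)
test <- complete(plouf.imp)
sum(is.na(test$y))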
I am working on calculating a new raster (output raster) based on 2 rasters (input rasters) and a 'stratum' raster. The stratum raster values (1 to 4) refer to the rows of the bias and weight data frames. Stratum value '4' was used to fill any NA in the stratum raster, otherwise the function would crash. The following input is required.
# load library
library(raster)
# reproducing the bias and weight data.frames
bias <- data.frame(
ras_1 = c(56,-7,-30,0),
ras_2 = c(29,18,-52,0),
ras_3 = c(44,4,-15,0)
)
rownames(bias) <- c("Strat 1","Strat 2","Strat 3","Strat 4")
weight <- data.frame(
ras_1 = c(0.56,0.66,0.23,0.33),
ras_2 = c(0.03,0.18,0.5,0.33),
ras_3 = c(0.41,0.16,0.22,0.34)
)
rownames(weight) <- c("Strat 1","Strat 2","Strat 3","Strat 4")
The following function (fusion) allows me to add a 'bias' value to the input rasters. After the bias has been added, the two corrected input raster cell values are multiplied by a weight value, depending on which stratum they belong to.
The two weighted input raster values are then summed and returned using calc.
## Create raster data for input
# create 2 rasters
r1 <- raster(ncol=10,nrow=10)
r2 <- raster(ncol=10,nrow=10)
r1[] <- sample(seq(from = 1, to = 500, by = 1), size = 100, replace = TRUE)
r2[] <- sample(seq(from = 1, to = 500, by = 1), size = 100, replace = TRUE)
r2[1:2] <- NA # include NA in input maps for example purpose
# Create strata raster (4 strata)
r3 <- raster(ncol=10,nrow=10)
r3[] <- sample(seq(from = 1, to = 4, by = 1), size = 100, replace = TRUE)
Strata.n <- 4 # number of strata values in this example
fusion <- function(x) {
result <- matrix(NA, dim(x)[1], 1)
for (n in 1:Strata.n) {
ok <- !is.na(x[,3]) & x[,3] == n
a <- x[ok,1] + bias[n,1] # add bias to first input raster value
b <- x[ok,2] + bias[n,2] # add bias to second input raster value
result[ok] <- a * weight[n,1] + b * weight[n,2] # Multiply values by weight
}
return(result)
}
s <- stack(r1,r2,r3)
Fused.map <- calc(s, fun = fusion, progress = 'text')
The problems with the above function are that:
It is only suited for 2 rasters
If one raster has NA, then the result will be NA for that cell
is.na(Fused.map@data@values) # check for NA in the fused map
What I would like to have is:
A function that takes any number of input rasters
It can work with NA values (ignores NA values in the rasters)
Re-adjusts the 'weight' if a raster has a NA value, so that the remaining weight values add up to 1
EDIT
The following function does what I need, but it is significantly slower than the function above on large rasters: fusion does it in 10 seconds, while the fusion2 function below needs 8 hours on large rasters...
fusion2 <- function(x) {
m <- matrix(x, nrow= 1, ncol=3) # Create matrix per stack of cells
n <- m[,3] # get the stratum
g <- m[1:(Strata.n-1)] + as.matrix(bias[n,]) # add bias to raster values
g[g < 0] <- 0 # set values below 0 to 0
w <- weight[n,1:(Strata.n-1)] # get correct strata weight values
w[is.na(g)]<- NA # set weight to NA if (g) raster values are NA
p <- sum(w, na.rm = T) # calculate sum of weight values
pp <- w/p # divide weight values by sum to get the proportion to == 1
pp <- as.numeric(pp)
result <- as.integer(round(sum(pp*g, na.rm = T))) # return raster value
return(result)
}
Fused.map <- calc(s, fun = fusion2, progress = 'text')
Is there any way to optimize the fusion2 function to something similar to the fusion approach?
> sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Thank you for your time!
There seem to be a lot of unnecessary format conversions going on, and using the simplest data structures available is fastest. The argument calc passes to your function is a numeric vector, so you can use numeric vectors everywhere. Also, rounding before casting to an integer is redundant.
fusion3 <- function(x) {
n <- x[3] # get the stratum
g <- x[1:(Strata.n-1)] + as.numeric(bias[n,]) # add bias to raster values
g[g < 0] <- 0 # set values below 0 to 0
w <- as.numeric(weight[n,1:(Strata.n-1)]) # get correct strata weight values
w[is.na(g)]<- NA # set weight to NA if (g) raster values are NA
p <- sum(w, na.rm = T) # calculate sum of weight values
pp <- w/p # divide weight values by sum to get the proportion to == 1
result <- as.integer(sum(pp*g, na.rm = T)+0.5) # return raster value
return(result)
}
On a 100x100 raster, your original functions take:
system.time(Fused.map <- calc(s, fun = fusion, progress = 'text'))
user system elapsed
0.015 0.000 0.015
system.time(Fused.map <- calc(s, fun = fusion2, progress = 'text'))
user system elapsed
8.270 0.078 8.312
The modified function is already 5 times faster:
system.time(Fused.map3 <- calc(s, fun = fusion3, progress = 'text'))
user system elapsed
1.970 0.026 1.987
Next, precompute matrices from the data frames so you don't need to do that for each pixel:
bias_matrix = as.matrix(bias)
weight_matrix = as.matrix(weight)
fusion3 <- function(x) {
n <- x[3] # get the stratum
g <- x[1:(Strata.n-1)] + bias_matrix[n,] # add bias to raster values
g[g < 0] <- 0 # set values below 0 to 0
w <- weight_matrix[n,1:(Strata.n-1)] # get correct strata weight values
w[is.na(g)]<- NA # set weight to NA if (g) raster values are NA
p <- sum(w, na.rm = T) # calculate sum of weight values
pp <- w/p # divide weight values by sum to get the proportion to == 1
result <- as.integer(sum(pp*g, na.rm = T)+0.5) # return raster value
return(result)
}
We get:
system.time(Fused.map3 <- calc(s, fun = fusion3, progress = 'text'))
user system elapsed
0.312 0.008 0.318
And finally, also precompute 1:(Strata.n-1):
bias_matrix = as.matrix(bias)
weight_matrix = as.matrix(weight)
Strata.minus1 = 1:(Strata.n-1)
fusion3 <- function(x) {
n <- x[3] # get the stratum
g <- x[Strata.minus1] + bias_matrix[n,] # add bias to raster values
g[g < 0] <- 0 # set values below 0 to 0
w <- weight_matrix[n,Strata.minus1] # get correct strata weight values
w[is.na(g)]<- NA # set weight to NA if (g) raster values are NA
p <- sum(w, na.rm = T) # calculate sum of weight values
pp <- w/p # divide weight values by sum to get the proportion to == 1
result <- as.integer(sum(pp*g, na.rm = T)+0.5) # return raster value
return(result)
}
We get:
system.time(Fused.map3 <- calc(s, fun = fusion3, progress = 'text'))
user system elapsed
0.252 0.011 0.262
That's not quite 0.015 yet, but you also have to take into consideration that your original function does not output integers, does not set values below 0 to 0, does not make the proportions sum to 1, and, as you mentioned, does not deal with NAs.
Mind you, this function still only works with two rasters, because you hardcode the stratum as layer 3. You should instead use raster::overlay with two arguments, the stratum raster and the input layers themselves (or use calc with the stratum raster as layer 1, but that's not what calc is designed for).
I am trying to simulate unequal sample sizes in a multilevel model. I have four groups, with sample sizes of 100, 200, 300, and 400, respectively, so the total sample size is 1000. w, u0, and u1 are level-2 variables; x and r0 are level-1 variables; y is the outcome.
nSubWithinGroup <- c(100,200,300,400)###the sample size in each group
nGroup <-4 ## 4 groups
gamma00 <- 1
gamma01 <- 1 ## b0 = gamma00+gamma01*w+u0
gamma10 <- 1 ## b1 = gamma10+gamma11*w+u1
gamma11 <- 1
dataLevel1 <- mat.or.vec(sum(nSubWithinGroup),4)
colnames(dataLevel1) <- c("Group","X","W","Y")
rowIndex <- 0
for (group in 1:nGroup) {
u0 <- rnorm(1,mean=0,sd=1)
u1 <- rnorm(1,mean=0,sd=1)
w <- rnorm(1,mean=0,sd=1)
for(i in 1:length(nSubWithinGroup)){
for (j in 1:nSubWithinGroup[i]){
r0 <- rnorm(1,mean=0,sd=1)
x <- rnorm(1,mean=0,sd=1)
y <- (gamma00+gamma01*w+u0)+(gamma10+gamma11*w+u1)*x+r0
rowIndex <- rowIndex + 1
dataLevel1[rowIndex,] <- c(group,x,w,y)
}
}
}
I ran the code, and the value in the "Group" column is always 1, never 2, 3, or 4. It also throws this error:
"Error in [<-(*tmp*, rowIndex, , value = c(2, -1.94476463667851, -0.153516782293473, :
subscript out of bounds"
Your original issue was a bit hard to find with all the for-loops, but you were looping twice over your grouping level (once in 1:nGroup and then again in 1:length(nSubWithinGroup)). This led to more combinations than you had allowed for in your matrix, and thus your error. (If you want to check, run your loop without assigning to dataLevel1 and see what value rowIndex has at the end.)
However, generating data like this in R can be notoriously slow, and every function you use with n = 1 can just as easily be used to generate nTotal numbers at once. I have rewritten your code into something that is (hopefully) more readable, but also more vectorized.
#set seed; you can never reproduce your result if you don't do this
set.seed(289457)
#set constants
gamma00 <- 1
gamma01 <- 1 ## b0 = gamma00+gamma01*w+u0
gamma10 <- 1 ## b1 = gamma10+gamma11*w+u1
gamma11 <- 1
#set size parameters
nSubWithinGroup <- c(100,200,300,400)###the sample size in each group
nGroup <-4
nTotal <- sum(nSubWithinGroup)
#simulate group-level data
level2_data <- data.frame(group = 1:nGroup,
                          size = nSubWithinGroup, #not really necessary here, but I like to have everything documented/accessible
                          u0 = rnorm(nGroup, mean = 0, sd = 1),
                          u1 = rnorm(nGroup, mean = 0, sd = 1),
                          w = rnorm(nGroup, mean = 0, sd = 1)
                          )
#simulate individual-level data (in the example code, x and r0 were generated in the same way for each individual)
level1_data <- data.frame(id = 1:nTotal,
                          group = rep(1:nGroup, nSubWithinGroup),
                          r0 = rnorm(nTotal, mean = 0, sd = 1),
                          x = rnorm(nTotal, mean = 0, sd = 1)
                          )
#several possibilities here: you can merge the two data frames together, or reference level2_data when calculating the outcome
#merging generates more data, but is also more readable
combined_data <- merge(level1_data,level2_data,by="group",all.x=T)
#calculate the outcome. This could be shortened, for instance by calculating some linear parts before
#merging, but I wanted to stay as close to the original code as possible.
combined_data$y <- (gamma00+gamma01*combined_data$w+combined_data$u0)+
(gamma10+gamma11*combined_data$w+combined_data$u1)*combined_data$x+combined_data$r0
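A sketch of the shortening mentioned in that comment (not in the original answer; the column names b0 and b1 are only illustrative): compute the group-level intercept and slope once per group before merging.
level2_data$b0 <- gamma00 + gamma01 * level2_data$w + level2_data$u0  # group-level intercept
level2_data$b1 <- gamma10 + gamma11 * level2_data$w + level2_data$u1  # group-level slope
combined_data <- merge(level1_data, level2_data, by = "group", all.x = TRUE)
combined_data$y <- combined_data$b0 + combined_data$b1 * combined_data$x + combined_data$r0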