I have the following data set:
Age<-c(2,2.1,2.2,3.4,3.5,4.2,4.7,4.8,5,5.6,NA, 5.9, NA)
R<-c(2,2.1,2.2,3.4,3.5,4.2,4.7,4.8,5,5.6,NA, 5.9, NA)
sex<-c(1,0,1,1,1,1,1,0,0,0,NA, 0,1)
df1<-data.frame(Age,R,sex)
# Second dataset:
Age2<-seq(2,20,0.25)
Mspline<-rnorm(73)
df2.F<-data.frame(Age2, Mspline)
# Third dataset:
Age2<-seq(2,20,0.25)
Mspline<-rnorm(73)
df2.M<-data.frame(Age2, Mspline)
I was wondering how I can include sex in the calculation and combine these two algorithms into a single function. What I need is:
If sex=1 then use the following function to calculate Time
last = dim(df2.F)[1]
fM.F<-approxfun(df2.F$Age2, df2.F$Mspline, yleft = df2.F$Mspline[1] , yright = df2.F$Mspline[last])
df1$Time<-fM.F(df1$Age)
and if sex=0 then use this function to calculate Time
last = dim(df2.M)[1]
fM.M<-approxfun(df2.M$Age2, df2.M$Mspline, yleft = df2.M$Mspline[1] , yright = df2.M$Mspline[last])
df1$Time<-fM.M(df1$Age)
I mean: read the first record in df1. If it is female (say its age is 4.1), then Time = fM.F(4.1); but if it is male, then Time = fM.M(4.1).
You can create a function that takes the Age vector, the sex value, and the male- and female-specific dataframes, and selects the frame to use based on the sex value.
f <- function(age, s, m, f) {
  # age: ages to interpolate; s: sex value; m/f: male- and female-specific tables
  if (is.na(s)) return(NA)
  df <- if (s == 0) m else f
  last <- dim(df)[1]
  fM <- approxfun(df$Age2, df$Mspline, yleft = df$Mspline[1], yright = df$Mspline[last])
  fM(age)
}
Now, just apply the function by group, using pull(cur_group(), sex) to get the sex value for the current group.
library(dplyr)

df1 %>%
  group_by(sex) %>%
  mutate(time = f(Age, pull(cur_group(), sex), df2.M, df2.F))
Output:
Age R sex time
<dbl> <dbl> <dbl> <dbl>
1 2 2 1 -0.186
2 2.1 2.1 0 1.02
3 2.2 2.2 1 -1.55
4 3.4 3.4 1 -0.461
5 3.5 3.5 1 0.342
6 4.2 4.2 1 -0.560
7 4.7 4.7 1 -0.114
8 4.8 4.8 0 0.247
9 5 5 0 -0.510
10 5.6 5.6 0 -0.982
11 NA NA NA NA
12 5.9 5.9 0 -0.231
13 NA NA 1 NA
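For completeness, since approxfun() returns a vectorized function, the same result is possible without grouping; a minimal base-R sketch, assuming fM.F and fM.M have been built as in the question (ifelse() picks a branch per row and propagates the NA sex values):

# reusing the fM.F and fM.M interpolators defined in the question
df1$Time <- ifelse(df1$sex == 1, fM.F(df1$Age), fM.M(df1$Age))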
I have a huge number of DFs in R (>50), which correspond to different filterings I've performed; here's an example of 7 of them:
Steps_Day1 <- filter(PD2, Gait_Day == 1)
Steps_Day2 <- filter(PD2, Gait_Day == 2)
Steps_Day3 <- filter(PD2, Gait_Day == 3)
Steps_Day4 <- filter(PD2, Gait_Day == 4)
Steps_Day5 <- filter(PD2, Gait_Day == 5)
Steps_Day6 <- filter(PD2, Gait_Day == 6)
Steps_Day7 <- filter(PD2, Gait_Day == 7)
Each of the dataframes contains 19 variables; however, I'm only interested in their speed (to calculate a mean) and their subjectID, as each subject has multiple observations of speed in the same DF.
An example of the data we're interested in, in dataframe - Steps_Day1:
Speed SubjectID
0.6 1
0.7 1
0.7 2
0.8 2
0.1 2
1.1 3
1.2 3
1.5 4
1.7 4
0.8 4
The data goes up to 61 participants, and each participant's number of observations is much larger than shown here.
Now what I want to do is create code that automatically cycles through each of the 50 dataframes (taking the 7 above as an example), calculates the mean speed for each participant, and saves the result in a new dataframe, alongside the corresponding means for each participant from the other DFs.
An example of Steps day 1 (Values not accurate)
Speed SubjectID
0.6 1
0.7 2
1.2 3
1.7 4
and so on... before I end up with a final DF whose columns contain the means for each participant from each of the other data frames, which may look something like:
Steps_Day1 StepsDay2 StepsDay3 StepsDay4 SubjectID
0.6 0.8 0.5 0.4 1
0.7 0.9 0.6 0.6 2
1.2 1.1 0.4 0.7 3
1.7 1.3 0.3 0.8 4
I could do this through some horrible, messy long code - but looking to see if anyone has more intuitive ideas please!
:)
To add to the previous answer, I agree that it is much easier to do this without creating a new data frame for each day. Using some generated data, you can achieve your desired results as follows:
# Generate some data
df <- data.frame(
day = rep(1:5, 1, 100),
subject = rep(5:10, 1, 100),
speed = runif(500)
)
library(dplyr)
library(tidyr)

df %>%
  group_by(day, subject) %>%
  summarise(avg_speed = mean(speed)) %>%
  pivot_wider(names_from = day,
              names_prefix = "Steps_Day",
              values_from = avg_speed)
# A tibble: 6 × 6
subject Steps_Day1 Steps_Day2 Steps_Day3 Steps_Day4 Steps_Day5
<int> <dbl> <dbl> <dbl> <dbl> <dbl>
1 5 0.605 0.416 0.502 0.516 0.517
2 6 0.592 0.458 0.625 0.531 0.460
3 7 0.475 0.396 0.586 0.517 0.449
4 8 0.430 0.435 0.489 0.512 0.548
5 9 0.512 0.645 0.509 0.484 0.566
6 10 0.530 0.453 0.545 0.497 0.460
You don't include an MCVE of your dataset so I can't test out a solution, but it seems like a pretty simple problem using tidyverse solutions.
First, why do you split PD2 into separate dataframes at all? If you skip that, you can just use group_by and summarize to get the average for each group:
PD2 %>%
  group_by(Gait_Day, SubjectID) %>%
  summarize(Steps = mean(Speed))
This will give you a "long-form" data.frame with 3 variables: Gait_Day, SubjectID, and Steps, which holds the mean speed for each subject and day. If you want it in the format you show at the end, just pivot into "wide-form" using pivot_wider, as sketched below. You can see this question for further explanation: How to reshape data from long to wide format
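A minimal sketch of that final step, assuming PD2 has the Gait_Day, SubjectID, and Speed columns shown above:

library(dplyr)
library(tidyr)

PD2 %>%
  group_by(Gait_Day, SubjectID) %>%
  summarize(Steps = mean(Speed), .groups = "drop") %>%
  pivot_wider(names_from = Gait_Day,
              names_prefix = "Steps_Day",
              values_from = Steps)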
I need to prepare a table that includes the means and standard deviations for each level of several demographic variables and for many variables.
Consider the following data:
library(tibble)

df <- tibble(
  place = c("London", "Paris", "London", "Rome", "Rome", "Madrid", "Madrid"),
  gender = c("m", "f", "f", "f", "m", "m", "f"),
  education = c(1, 1, 2, 3, 5, 5, 3),
  var1 = c(2.2, 3.1, 4.5, 1, 5, 1.4, 2.3),
  var2 = c(4.2, 2.1, 2.5, 4, 5, 4.4, 1.3),
  var3 = c(0.2, 0.1, 3.5, 3, 5, 2.4, 4.3)
)
I would like to get a dataframe that contains the grouping variables (place, gender, education) and their levels (e.g., London, Paris, etc.) in the first column and their means and standard deviations for each variable starting with var (var1, var2, var3) in additional columns.
I know how to do this for one group and several variables at a time. However, since I need to repeat this dozens of times I am looking for a way to automate this process. It would be great to have a function to which I simply need to pass (a) the names of the grouping variables (e.g., gender, education) and (b) the variables from which to get the M / SD (e.g. var1, var2).
The solution I look for should look like this (the stats are not correct in the example below):
my_results <- tibble(
  grouping_vars = c("place_London", "place_Paris", "place_Rome", "place_Madrid",
                    "gender_m", "gender_f", "last_element"),
  mean_var1 = c(1.3, 2.5, 4.5, 1.7, 2.5, 3.6, 4.0),
  sd_var1 = c(0.01, 0.41, 0.21, 0.12, 0.02, 0.38, 0.28),
  mean_var2 = c(4.3, 4.5, 4.0, 1.2, 2.5, 1.6, 2.3),
  sd_var2 = c(0.21, 0.1, 0.1, 0.32, 0.22, 0.18, 0.08),
  mean_var3 = c(2.3, 2.5, 2.0, 3.2, 3.5, 0.6, 5),
  sd_var3 = c(0.51, 0.15, 0.51, 0.52, 0.52, 0.15, 0.48)
)
grouping_vars mean_var1 sd_var1 mean_var2 sd_var2 mean_var3 sd_var3
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 place_London 1.3 0.01 4.3 0.21 2.3 0.51
2 place_Paris 2.5 0.41 4.5 0.1 2.5 0.15
3 place_Rome 4.5 0.21 4 0.1 2 0.51
4 place_Madrid 1.7 0.12 1.2 0.32 3.2 0.52
5 gender_m 2.5 0.02 2.5 0.22 3.5 0.52
6 gender_f 3.6 0.38 1.6 0.18 0.6 0.15
7 last_element 4 0.28 2.3 0.08 5 0.48
Since I typically work with tidyverse, I would particularly appreciate solutions that use these packages (probably dplyr or purrr?).
EDIT:
I thought there would be an elegant way to do this using map(). Maybe there is, but I haven't found it yet. In the meantime, I figured out a way that simply restructures the data into an appropriate long format and then computes the statistics.
# the grouping and test variables the pipeline below expects
grouping_vars <- c("place", "gender", "education")
test_vars <- c("var1", "var2", "var3")

df %>%
  # all grouping vars need to be of the same type, here "factor" is most appropriate
  mutate_at(grouping_vars, list(factor)) %>%
  # pivot longer, so that each row is a unique combination of grouping variable and grouping level
  pivot_longer(
    cols = one_of(grouping_vars),
    names_to = "group_var",
    values_to = "group_level"
  ) %>%
  # merge grouping variable and group level into a single column
  unite(var_level, group_var, group_level, sep = "_") %>%
  # group by group level
  group_by(var_level) %>%
  # compute means and sd for each test variable
  summarise_at(test_vars, list(~ mean(., na.rm = TRUE), ~ sd(., na.rm = TRUE)))
The result seems fine; e.g., the mean of var1 for the two people who live in London (2.2 and 4.5) is 3.35.
# A tibble: 10 x 7
var_level var1_mean var2_mean var3_mean var1_sd var2_sd var3_sd
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 education_1 2.65 3.15 0.15 0.636 1.48 0.0707
2 education_2 4.5 2.5 3.5 NA NA NA
3 education_3 1.65 2.65 3.65 0.919 1.91 0.919
4 education_5 3.2 4.7 3.7 2.55 0.424 1.84
5 gender_f 2.72 2.48 2.72 1.47 1.13 1.83
6 gender_m 2.87 4.53 2.53 1.89 0.416 2.40
7 place_London 3.35 3.35 1.85 1.63 1.20 2.33
8 place_Madrid 1.85 2.85 3.35 0.636 2.19 1.34
9 place_Paris 3.1 2.1 0.1 NA NA NA
10 place_Rome 3 4.5 4 2.83 0.707 1.41
Any thoughts on possible risks of this approach or how this could be improved?
One option is the describeBy function from psych:
library(psych)
describeBy(df, group = c("gender", "education"), mat = TRUE)
Then subset what you want from there.
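For example, a minimal sketch of that subsetting step (with mat = TRUE, describeBy() returns one row per variable and group combination, with mean and sd among its standard output columns):

library(psych)

stats <- describeBy(df, group = c("gender", "education"), mat = TRUE)
# keep the grouping levels plus the mean and sd; row names identify the variable
stats[, c("group1", "group2", "mean", "sd")]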
Another, surprisingly simple option with dplyr:
library(dplyr)
group.vars <- c("gender","education")
measure.vars <- c("var1","var2")
df %>%
  group_by_at(group.vars) %>%
  summarize_at(measure.vars,
               list(mean = ~ mean(.), sd = ~ sd(.)))
# A tibble: 5 x 6
# Groups: gender [2]
gender education var1_mean var2_mean var1_sd var2_sd
<chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 f 1 3.1 2.1 NA NA
2 f 2 4.5 2.5 NA NA
3 f 3 1.65 2.65 0.919 1.91
4 m 1 2.2 4.2 NA NA
5 m 5 3.2 4.7 2.55 0.424
You can continue adding additional functions to that list. For every element, the name will be appended to the variable name and the result will become the column values. Recall that ~ is shorthand for function(x).
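If you want this as a reusable function, as the question asks, the same idea wraps up directly; summarise_by is a hypothetical name for the wrapper:

library(dplyr)

# group.vars and measure.vars are character vectors of column names
summarise_by <- function(data, group.vars, measure.vars) {
  data %>%
    group_by_at(group.vars) %>%
    summarize_at(measure.vars, list(mean = ~ mean(.), sd = ~ sd(.)))
}

summarise_by(df, "place", c("var1", "var2", "var3"))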
I am trying to create a new vector by applying a transformation to a variable in one of my dataframe based on data from another dataframe.
I have two dataframes df1 and df2. df1 and df2 have different dimension, I have over 20,000 rows in df1 and 76 rows in df2.
df1 is my original dataset. I created df2 for Ag_ppm as follow:
df2 <- df1%>%
filter(!is.na(Ag_ppm)) %>%
group_by(Year,Zone, SubZone) %>%
summarise(
n = sum(!is.na(Ag_ppm)),
min = min(Ag_ppm),
max = max(Ag_ppm),
mean = mean(Ag_ppm),
sd = sd(Ag_ppm),
iqr = IQR(Ag_ppm),
Q1 = quantile(Ag_ppm, 0.25),
median = median(Ag_ppm),
Q3 = quantile(Ag_ppm, 0.75),
LW = min(Ag_ppm > (quantile(Ag_ppm, .25)-1.5*IQR(Ag_ppm))),
UF = quantile(Ag_ppm, .75) + 1.5*IQR(Ag_ppm))
Here is what the first rows of each data frames look like:
head(df1, n=5)
# A tibble: 5 x 12
Year Zone SubZone Au_ppm Ag_ppm Cu_ppm Pb_ppm Zn_ppm As_ppm Sb_ppm Bi_ppm Mo_ppm
<chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1990 BugLake BugLake 0.007 3.7 17 27 23 1 1 NA 1
2 1983 Johnny Mountain Johnny Mountain 0.01 1.6 71 63 550 4 NA NA NA
3 1983 Khyber Pass Khyber Pass 0.12 11.5 275 204 8230 178 7 60 NA
4 1987 Chebry Ridge Line Grid 0.05 2.2 35 21 105 16 6 NA NA
5 1987 Chebry Handel Grid 0.004 1.3 29 27 663 45 2 NA NA
head(df2, n=5)
# A tibble: 5 x 14
# Groups: Year, Zone [3]
Year Zone SubZone n min max mean sd iqr Q1 median Q3 LW UF
<chr> <chr> <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl>
1 1981 Chebry Handel 52 0.6 5.1 1.83 0.947 0.925 1.2 1.6 2.12 1 3.51
2 1981 Imperial Metals Handel 24 0.9 6.9 2.81 1.43 1.35 1.95 2.65 3.3 1 5.33
3 1983 Chebry Chebry 5 0.7 3.7 1.78 1.19 0.9 1.2 1.2 2.1 1 3.45
4 1983 Chebry Handel 17 0.1 0.7 0.318 0.163 0.2 0.2 0.3 0.4 1 0.7
5 1983 Chebry Handel Grid 225 0.1 16 0.892 1.33 0.7 0.3 0.6 1 1 2.05
I want to apply the following equation to my column Ag_ppm in df1 using the median and IQR calculated for each subgroup in df2:
Z = (X - median)/IQR
For that purpose, I wrote:
# Initialize Ag_std vector with NA values
Ag_std <- rep(NA, times = nrow(df1))
# Populate Ag_std vector with standardized Ag values
Ag_std <-
for (i in 1:nrow(df1)) {
if (!is.na(df1$Ag_ppm[i])) {
filter(df2, Zone == df1$Zone[i], Year == df1$Year[i],
SubZone == df1$SubZone[i])
Ag_std[i] <- (df1$Ag_ppm[i] - df2$median)/df2$iqr
}
}
But the loop does not work (it returns a NULL vector) and I have this warning:
1: In Ag_std[i] <- (df1$Ag_ppm[i] - df2$median)/df2$iqr :
number of items to replace is not a multiple of replacement length
I've looked at similar questions, and I did not find an answer that would work for me. Any help would be much appreciated!
If there are better ways of achieving the same without a loop (I'm sure there are, e.g. apply()), I would appreciate such comments as well. Unfortunately I'm not familiar enough with the alternatives to be able to implement them quickly.
This can be done relatively easily in data.table
library(data.table)
DT <- data.table(df1)
# function to apply
fun <- function(x) (x - median(x)) / diff(quantile(x, c(.25, .75)))
# create a new column with desired result
DT[, Ag_std := fun(Ag_ppm), by = list(Year, Zone, SubZone)]
Also, I think your loop can be fixed by assigning the result of 'filter' to a temporary object
for (i in 1:nrow(df1)) {
  if (!is.na(df1$Ag_ppm[i])) {
    temp.var <- filter(df2, Zone == df1$Zone[i], Year == df1$Year[i],
                       SubZone == df1$SubZone[i])
    Ag_std[i] <- (df1$Ag_ppm[i] - temp.var$median) / temp.var$iqr
  }
}
Since you have df2 as a separate dataframe, you can join and mutate:
df1 %>%
  left_join(df2, by = c("Year", "Zone", "SubZone")) %>%
  mutate(Z = (Ag_ppm - median) / iqr)
In fact, you could have generated the information in df2 within df1 itself, computing the group statistics in place, as sketched below.
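A minimal sketch of that idea, computing each subgroup's median and IQR directly in df1 with a grouped mutate (mutate keeps one row per observation, whereas summarise would collapse the groups):

library(dplyr)

df1 <- df1 %>%
  group_by(Year, Zone, SubZone) %>%
  mutate(Ag_std = (Ag_ppm - median(Ag_ppm, na.rm = TRUE)) /
                  IQR(Ag_ppm, na.rm = TRUE)) %>%
  ungroup()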
OK, so I have a pretty large data set of around 500 observations and 3 variables. The first column refers to time.
For a test data set I am using:
dat <- as.data.frame(matrix(c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                              1, 1.8, 3.5, 3.8, 5.6, 6.2, 7.8, 8.2, 9.8, 10.1,
                              2, 4.8, 6.5, 8.8, 10.6, 12.2, 14.8, 16.2, 18.8, 20.1),
                            10, 3))
colnames(dat) <- c("Time", "Var1", "Var2")
Time Var1 Var2
1 1 1.0 2.0
2 2 1.8 4.8
3 3 3.5 6.5
4 4 3.8 8.8
5 5 5.6 10.6
6 6 6.2 12.2
7 7 7.8 14.8
8 8 8.2 16.2
9 9 9.8 18.8
10 10 10.1 20.1
So what I need to do is create a new column in which each observation is the slope, with respect to time, over some number of past points. For example, taking 3 past points it would be something like:
slopeVar1[i] = slope(Var1[(i-2):i], Time[(i-2):i])  # not real code
slopeVar2[i] = slope(Var2[(i-2):i], Time[(i-2):i])  # not real code
Time Var1 Var2 slopeVar1 slopeVar2
1 1 1 2 NA NA
2 2 1.8 4.8 NA NA
3 3 3.5 6.5 1.25 2.25
4 4 3.8 8.8 1.00 2.00
5 5 5.6 10.6 1.05 2.05
6 6 6.2 12.2 1.20 1.70
7 7 7.8 14.8 1.10 2.10
8 8 8.2 16.2 1.00 2.00
9 9 9.8 18.8 1.00 2.00
10 10 10.1 20.1 0.95 1.95
I actually got as far as using a for() loop, but for really large data sets (>100,000 rows) it starts taking too long.
The for() loop that I used is shown below:
# CREATE DATA FRAME
rm(dat)
dat <- as.data.frame(matrix(c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                              1, 1.8, 3.333, 3.8, 5.6, 6.2, 7.8, 8.2, 9.8, 10.1,
                              2, 4.8, 6.5, 8.8, 10.6, 12.2, 14.8, 16.2, 18.8, 20.1),
                            10, 3))
colnames(dat) <- c("Time", "Var1", "Var2")
dat
plot(dat)

# CALCULATE SLOPE OF n POINTS FROM i TO i-n.
# In this case I am taking just 3 points, but it should
# be possible to change the number of points taken.
attach(dat)
n <- 3          # number of points to take the slope over
l <- nrow(dat)  # number of iterations
slopeVar1 <- NA
slopeVar2 <- NA
for (i in 1:l) {
  if (i < n) {
    slopeVar1[i] <- NA  # not enough previous observations, so output NA
  } else {
    y1 <- Var1[(i-n+1):i]  # y values for calculating the slope of Var1
    y2 <- Var2[(i-n+1):i]  # y values for calculating the slope of Var2
    x  <- Time[(i-n+1):i]  # x values for calculating the slopes of Var1 and Var2
    z1 <- lm(y1 ~ x)       # fit for Var1
    z2 <- lm(y2 ~ x)       # fit for Var2
    slopeVar1[i] <- coef(z1)[2]  # populate slopeVar1 with the fitted slope
    slopeVar2[i] <- coef(z2)[2]  # populate slopeVar2 with the fitted slope
  }
}
slopeVar1  # checking results
slopeVar2
(result <- cbind(dat, slopeVar1, slopeVar2))  # bind original data with the new slopes
This code actually outputs what I want; but again, for really large data sets it is quite inefficient.
This quick rollapply implementation seems to speed it up somewhat:
library("zoo")

slope_func <- function(period) {
  x  <- period[, 1]  # x values (Time) for both slopes
  y1 <- period[, 2]  # y values for calculating the slope of Var1
  y2 <- period[, 3]  # y values for calculating the slope of Var2
  z1 <- lm(y1 ~ x)   # fit for Var1
  z2 <- lm(y2 ~ x)   # fit for Var2
  c(slopeVar1 = coef(z1)[[2]],  # slope of Var1 in this window
    slopeVar2 = coef(z2)[[2]])  # slope of Var2 in this window
}
start = Sys.time()
rollapply(dat[1:3], FUN=slope_func, width=3, by.column=FALSE)
end=Sys.time()
print(end-start)
Time difference of 0.04980111 secs
The OP's previous implementation took Time difference of 0.2666121 secs on the same data.
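If the lm() calls remain the bottleneck on very large data, note that a simple-regression slope has the closed form cov(x, y) / var(x); a hypothetical drop-in variant of slope_func that avoids lm() entirely:

# hypothetical faster variant: closed-form OLS slope, no lm() overhead
slope_fast <- function(period) {
  x <- period[, 1]
  c(slopeVar1 = cov(x, period[, 2]) / var(x),
    slopeVar2 = cov(x, period[, 3]) / var(x))
}
rollapply(dat[1:3], FUN = slope_fast, width = 3, by.column = FALSE)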
How can I randomly add missing values to some or all columns (say, roughly 5% missing in each) of a simulated dataframe? Also, is there a more efficient way of simulating a dataframe with both continuous and factor columns?
# Simulate some data
N <- 2000
data <- data.frame(id = 1:2000, age = rnorm(N, 18:90), bmi = rnorm(N, 15:40),
                   chol = rnorm(N, 50:350), insulin = rnorm(N, 2:40), sbp = rnorm(N, 50:200),
                   dbp = rnorm(N, 30:150), sex = c(rep(1, 1000), rep(2, 1000)),
                   smoke = rep(c(1, 2), 1000), educ = sample(LETTERS[1:4]))
# Manually add some missing values
library(dplyr)

data <- data %>%
  mutate(age = "is.na<-"(age, age < 19 | age > 88),
         bmi = "is.na<-"(bmi, bmi > 38 | bmi < 16),
         insulin = "is.na<-"(insulin, insulin > 38),
         educ = "is.na<-"(educ, bmi > 35))
Best solution in my opinion would be using the mice package for this. It is an R package dedicated to imputation. It also has a function called ampute for introducing missing data into a data.frame:
ampute - Generate Missing Data For Simulation Purposes
This function generates multivariate missing data in a MCAR, MAR or MNAR manner.
The advantage of this solution is you can set multiple parameters for the simulation of your missing data.
ampute(data, prop = 0.5, patterns = NULL, freq = NULL, mech = "MAR",
       weights = NULL, cont = TRUE, type = NULL, odds = NULL,
       bycases = TRUE, run = TRUE)
As you can see, you can set the percentage of missing values, the missing-data mechanism (MCAR would be your choice for missing completely at random), and several other parameters. This solution is also quite clean, since it is only one line of code.
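A minimal usage sketch with the simulated data from above (note that ampute() expects complete, numeric input, so non-numeric columns such as educ are dropped here first):

library(mice)

# keep only numeric columns, since ampute() requires numeric data
num_data <- data[, sapply(data, is.numeric)]
# prop is the proportion of incomplete cases (rows) by default
amp <- ampute(num_data, prop = 0.05, mech = "MCAR")
data_missing <- amp$amp  # the amputed data frame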
Here's a tidyverse approach that will remove roughly 20% of your data for each column you specify:
set.seed(1)
# example data
N <- 20
data <- data.frame(id = 1:N,
age = rnorm(N,18:90),
bmi = rnorm(N,15:40),
chol = rnorm(N,50:350))
library(tidyverse)
# specify which variables should have missing data and prc of missing data
c_names = c("age","bmi")
prc_missing = 0.20
data %>%
  gather(var, value, -id) %>%  # reshape data to long form
  mutate(r = runif(nrow(.)),   # simulate a random number from 0 to 1 for each row
         value = ifelse(var %in% c_names & r <= prc_missing,
                        NA, value)) %>%  # set NA where the variable is one you specified and r is below the threshold
  select(-r) %>%               # remove the random number
  spread(var, value)           # reshape back to original format
# id age bmi chol
# 1 1 17.37355 15.91898 49.83548
# 2 2 19.18364 16.78214 50.74664
# 3 3 19.16437 17.07456 52.69696
# 4 4 NA 16.01065 53.55666
# 5 5 22.32951 19.61983 53.31124
# 6 6 22.17953 19.94387 54.29250
# 7 7 24.48743 NA 56.36458
# 8 8 25.73832 20.52925 57.76853
# 9 9 26.57578 NA 57.88765
# 10 10 26.69461 24.41794 59.88111
# 11 11 29.51178 26.35868 60.39811
# 12 12 NA 25.89721 60.38797
# 13 13 NA 27.38767 62.34112
# 14 14 28.78530 27.94619 61.87064
# 15 15 33.12493 27.62294 65.43302
# 16 16 32.95507 NA 66.98040
# 17 17 33.98381 30.60571 65.63278
# 18 18 35.94384 NA 65.95587
# 19 19 36.82122 34.10003 68.56972
# 20 20 37.59390 34.76318 68.86495
And this is an alternative that will remove exactly 20% of data for the columns you specify:
set.seed(1)
# example data
N <- 20
data <- data.frame(id = 1:N,
age = rnorm(N,18:90),
bmi = rnorm(N,15:40),
chol = rnorm(N,50:350))
library(tidyverse)
# specify which variables should have missing data and prc of missing data
c_names = c("age","bmi")
prc_missing = 0.20
n_remove = prc_missing * nrow(data)
data %>%
  gather(var, value, -id) %>%  # reshape data to long form
  sample_frac(1) %>%           # shuffle rows
  group_by(var) %>%            # for each variable
  mutate(value = ifelse(var %in% c_names & row_number() <= n_remove,
                        NA, value)) %>%  # set the top n_remove rows to NA for the variables you specified
  spread(var, value)           # reshape back to original format
# # A tibble: 20 x 4
# id age bmi chol
# <int> <dbl> <dbl> <dbl>
# 1 1 17.4 15.9 49.8
# 2 2 19.2 16.8 50.7
# 3 3 19.2 17.1 52.7
# 4 4 NA 16.0 53.6
# 5 5 22.3 NA 53.3
# 6 6 22.2 19.9 54.3
# 7 7 24.5 20.8 56.4
# 8 8 25.7 NA 57.8
# 9 9 26.6 NA 57.9
# 10 10 NA NA 59.9
# 11 11 NA 26.4 60.4
# 12 12 NA 25.9 60.4
# 13 13 29.4 27.4 62.3
# 14 14 28.8 27.9 61.9
# 15 15 33.1 27.6 65.4
# 16 16 33.0 29.6 67.0
# 17 17 34.0 30.6 65.6
# 18 18 35.9 31.9 66.0
# 19 19 36.8 34.1 68.6
# 20 20 37.6 34.8 68.9
Would this work?
n_rows <- nrow(data)
perc_missing <- 5 # percentage missing data
row_missing <- sample(1:n_rows, round(perc_missing/100 * n_rows)) # randomly sample x% of rows
col_missing <- 1 # define column
data[row_missing, col_missing] <- NA # assign missing values
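To get roughly 5% missing in each of several columns, as asked, the same idea can be repeated per column; a minimal sketch (the column names are just for illustration):

# hypothetical extension: repeat the sampling for each target column
for (col in c("age", "bmi", "chol")) {
  row_missing <- sample(1:n_rows, round(perc_missing/100 * n_rows))
  data[row_missing, col] <- NA
}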