I am working with spatiotemporal observations of temperatures, stored in arrays of size 100*100*504 (a 100*100 grid, for 504 hours representing 21 days). I compute various indicators from those observations, for different periods (3 to 21 days), which obviously takes some time, and I'm looking at improving computation efficiency. I am not very familiar with R, so I am not sure whether what I am doing is the most efficient way.
One of the things I want to do is to find (for each cell) the longest continuous period of time during which the temperature is above a certain threshold. This is what I'm doing at the moment:
First I compute a boolean array based on the threshold using the following function.
utci_test = array(runif(100*100*504, min = 18, max = 42), c(100,100,504))

to_hs = function(utci, period=1:length(utci[1,1,]), hs_threshold){
  utci_hs = utci*0
  utci_hs[which(utci > hs_threshold)] = 1
  utci_hs[is.na(utci)] = 0
  return(utci_hs)
}
Then I transform each vector of hourly values for each cell into an rle object, and I return the maximum length of the sequences of 1's (representing a continuous period over the threshold).
max_duration_hs = function(utci_hs, period=1:length(utci_hs[1,1,]) ){
  apply(utci_hs, MARGIN=c(1,2), FUN=function(x){
    r = rle(x)
    max(r$lengths[as.logical(r$values)], fill = 0)
  })
}
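For a single cell, the logic boils down to something like this (toy values, just to illustrate the rle mechanics):

x <- c(0, 1, 1, 1, 0, 1, 1, 0)
r <- rle(x)
r$lengths[as.logical(r$values)]      # lengths of the runs of 1's: 3 2
max(r$lengths[as.logical(r$values)]) # longest continuous period: 3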
Looking at the time required, I noticed that the second step is where most of the time goes (bear in mind that I have to repeat this operation ~8000 times in total):
system.time(to_hs(utci_test, hs_threshold=32.0))
#    user  system elapsed
#   0.051   0.004   0.055
system.time(to_hs(utci_test, hs_threshold=32.0))
#    user  system elapsed
#   0.053   0.000   0.052
utci_test_sh = to_hs(utci_test, hs_threshold=32.0)
system.time(max_duration_hs(utci_test_sh))
#    user  system elapsed
#   0.456   0.012   0.468
So I'm wondering if there is a more efficient way to do this, as I guess transforming into an rle object might be inefficient?
You can get a bit of a speed bump by writing your own version of the rle() function: because you know you only care about runs of 1's, it can do a little less comparison work. This gets you about 2x faster, down to a median time of about 250 milliseconds or so on my machine (a generic MacBook).
If you have to do this 8,000 times you'll save yourself the most time by parallelizing the code to run on a multicore machine, which is straightforward to do in R (check out e.g. the parallel package; a minimal sketch is included at the end of this answer).
Below is the code for the speedup.
# generate data
set.seed(123)
utci_test <- array(runif(100*100*504, min = 18, max = 42), c(100,100,504))

# original functions
to_hs = function(utci, period=1:length(utci[1,1,]), hs_threshold){
  utci_hs = utci*0
  utci_hs[which(utci > hs_threshold)] = 1
  utci_hs[is.na(utci)] = 0
  return(utci_hs)
}

max_duration_hs = function(utci_hs, period=1:length(utci_hs[1,1,]) ){
  apply(utci_hs, MARGIN=c(1,2), FUN=function(x){
    r = rle(x)
    max(r$lengths[as.logical(r$values)], fill = 0)
  })
}
# helper function: the longest run of 1's is the largest gap between
# consecutive 0's (with sentinels added at both ends), minus 1
rle_max <- function(v) {
  max(diff(c(0L, which(v==0), length(v)+1))) - 1
}

max_dur_hs_2 <- function(utci_hs) {
  apply(utci_hs, MARGIN=c(1,2), FUN= rle_max)
}
# Check equivalence
utci_hs <- to_hs(utci = utci_test, hs_threshold = 32)
all.equal(max_dur_hs_2(utci_hs),
          max_duration_hs(utci_hs))
#> [1] TRUE

# Test speed
library(microbenchmark)
microbenchmark(max_dur_hs_2(utci_hs),
               max_duration_hs(utci_hs))
#> Unit: milliseconds
#>                      expr      min       lq     mean   median       uq      max neval cld
#>     max_dur_hs_2(utci_hs) 216.1481 236.7825 250.9277 247.9918 262.4369 296.0146   100  a
#>  max_duration_hs(utci_hs) 454.5740 476.5710 501.5119 489.9536 509.8750 774.9963   100   b
Created on 2020-05-07 by the reprex package (v0.3.0)
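For completeness, here is a minimal sketch of the parallelisation idea mentioned above. It assumes your ~8000 repetitions can be expressed as a list of boolean arrays (utci_hs_list is a hypothetical name, not from the original post):

library(parallel)

# hypothetical: one boolean array per indicator/period combination
utci_hs_list <- list(utci_hs, utci_hs, utci_hs)

n_cores <- max(1, detectCores() - 1)
cl <- makeCluster(n_cores)
clusterExport(cl, c("max_dur_hs_2", "rle_max"))
results <- parLapply(cl, utci_hs_list, max_dur_hs_2)
stopCluster(cl)

# on Linux/macOS, mclapply() avoids the explicit cluster setup:
# results <- mclapply(utci_hs_list, max_dur_hs_2, mc.cores = n_cores)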
I have to calculate cosine similarity (a patient similarity metric) in R between 48k patients, each described by a set of predictor variables. Here is the equation: PSM(P1, P2) = (P1 . P2) / (||P1|| ||P2||)
where P1 and P2 are the predictor vectors corresponding to two different patients; for example, P1 is the index patient, P2 is compared against it, and the pairwise patient similarity metric PSM(P1, P2) is calculated.
This process will go on for all 48k patients.
I have added a sample data set for 300 patients in a .csv file. Please find the sample data set here: https://1drv.ms/u/s!AhoddsPPvdj3hVTSbosv2KcPIx5a
First things first: You can find more rigorous treatments of cosine similarity at either of these posts:
Find cosine similarity between two arrays
Creating co-occurrence matrix
Now, you clearly have a mixture of data types in your input, at least
decimal
integer
categorical
I suspect that some of the integer values are Booleans or additional categoricals. Generally, it will be up to you to transform these into continuous numerical vectors if you want to use them as input into the similarity calculation. For example, what's the distance between admission types ELECTIVE and EMERGENCY? Is it a nominal or ordinal variable? I will only be modelling the columns that I trust to be numerical dependent variables.
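If you do decide to keep a categorical column, one common (and deliberately order-free) transformation is one-hot (dummy) coding, sketched here with a hypothetical admission_type column rather than anything from your file:

# hypothetical example: expand a categorical column into 0/1 indicator columns
adm <- data.frame(admission_type = factor(c("ELECTIVE", "EMERGENCY", "URGENT")))
model.matrix(~ admission_type - 1, data = adm)
# each level becomes its own numeric column, so no artificial ordering is imposed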
Also, what have you done to ensure that some of your columns don't correlate with others? Using just a little awareness of data science and biomedical terminology, it seems likely that the following are all correlated:
diasbp_max, diasbp_min, meanbp_max, meanbp_min, sysbp_max and sysbp_min
I suggest going to a print shop and ordering a poster-size printout of psm_pairs.pdf. :-) Your eyes are better at detecting meaningful (but non-linear) dependencies between variables. Including multiple measurements of the same fundamental phenomenon may over-weight that phenomenon in your similarity calculation. Don't forget that you can derive variables like
diasbp_range <- diasbp_max - diasbp_min
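As a quick numeric complement to the pair plot, here is a sketch of how you could check those blood-pressure columns for redundancy (it assumes the psm_numerics data frame built in the code further down):

bp.cols <- c("diasbp_max", "diasbp_min", "meanbp_max",
             "meanbp_min", "sysbp_max", "sysbp_min")
round(cor(psm_numerics[, bp.cols], use = "pairwise.complete.obs"), 2)
# correlations near 1 (or -1) suggest near-duplicate measurements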
Now, I'm not especially good at linear algebra, so I'm importing a cosine similarity function from the lsa text analysis package. I'd love to see you write out the formula in your question as an R function. I would write it to compare one row to another, and use two nested apply loops to get all comparisons. Hopefully we'll get the same results!
After calculating the similarity, I try to find two different patients with the most dissimilar encounters.
Since you're working with a number of rows that's relatively large, you'll want to compare various algorithmic methodologies for efficiency. In addition, you could use SparkR/some other Hadoop solution on a cluster, or the parallel package on a single computer with multiple cores and lots of RAM. I have no idea whether the solution I provided is thread-safe.
Come to think of it, the transposition alone (as I implemented it) is likely to be computationally costly for a set of 1 million patient-encounters. More importantly, the number of pairwise comparisons grows quadratically with the number of rows in your input, so performance degrades quickly as the data set grows.
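To put a rough number on the scale issue (my arithmetic, not from the original post): a dense 48,000 x 48,000 matrix of doubles needs about

48000^2 * 8 / 2^30  # ~17 GiB of RAM, before any intermediate copies

so at that size you would want to keep only one triangle of the matrix, or work in chunks.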
library(lsa)
library(reshape2)

psm_sample <- read.csv("psm_sample.csv")

row.names(psm_sample) <-
  make.names(paste0("patid.", as.character(psm_sample$subject_id)), unique = TRUE)

temp <- sapply(psm_sample, class)
temp <- cbind.data.frame(names(temp), as.character(temp))
names(temp) <- c("variable", "possible.type")

numeric.cols <- (temp$possible.type %in% c("factor", "integer") &
                   (!(grepl(pattern = "_id$", x = temp$variable))) &
                   (!(grepl(pattern = "_code$", x = temp$variable))) &
                   (!(grepl(pattern = "_type$", x = temp$variable)))) |
  temp$possible.type == "numeric"

psm_numerics <- psm_sample[, numeric.cols]
row.names(psm_numerics) <- row.names(psm_sample)

psm_numerics$gender <- as.integer(psm_numerics$gender)

psm_scaled <- scale(psm_numerics)
pair.these.up <- psm_scaled
# checking for independence of variables
# if the following PDF pair plot is too big for your computer to open,
# try pair-plotting some random subset of columns
# keep.frac <- 0.5
# keep.flag <- runif(ncol(psm_scaled)) < keep.frac
# pair.these.up <- psm_scaled[, keep.flag]
# pdf device sizes are in inches
dev <-
  pdf(
    file = "psm_pairs.pdf",
    width = 50,
    height = 50,
    paper = "special"
  )
pairs(pair.these.up)
dev.off()
# transpose the dataframe to get the
# similarity between patients
cs <- lsa::cosine(t(psm_scaled))

# this is super inefficient, because cs contains
# two identical triangular matrices
cs.melt <- melt(cs)
cs.melt <- as.data.frame(cs.melt)
names(cs.melt) <- c("enc.A", "enc.B", "similarity")
extract.pat <- function(enc.col) {
  my.patients <-
    sapply(enc.col, function(one.pat) {
      temp <- (strsplit(as.character(one.pat), ".", fixed = TRUE))
      return(temp[[1]][[2]])
    })
  return(my.patients)
}

cs.melt$pat.A <- extract.pat(cs.melt$enc.A)
cs.melt$pat.B <- extract.pat(cs.melt$enc.B)

same.pat <- cs.melt[cs.melt$pat.A == cs.melt$pat.B ,]
different.pat <- cs.melt[cs.melt$pat.A != cs.melt$pat.B ,]

most.dissimilar <-
  different.pat[which.min(different.pat$similarity),]

dissimilar.pat.frame <- rbind(psm_numerics[rownames(psm_numerics) ==
                                             as.character(most.dissimilar$enc.A) ,],
                              psm_numerics[rownames(psm_numerics) ==
                                             as.character(most.dissimilar$enc.B) ,])

print(t(dissimilar.pat.frame))
which gives
patid.68.49 patid.9
gender 1.00000 2.00000
age 41.85000 41.79000
sysbp_min 72.00000 106.00000
sysbp_max 95.00000 217.00000
diasbp_min 42.00000 53.00000
diasbp_max 61.00000 107.00000
meanbp_min 52.00000 67.00000
meanbp_max 72.00000 132.00000
resprate_min 20.00000 14.00000
resprate_max 35.00000 19.00000
tempc_min 36.00000 35.50000
tempc_max 37.55555 37.88889
spo2_min 90.00000 95.00000
spo2_max 100.00000 100.00000
bicarbonate_min 22.00000 26.00000
bicarbonate_max 22.00000 30.00000
creatinine_min 2.50000 1.20000
creatinine_max 2.50000 1.40000
glucose_min 82.00000 129.00000
glucose_max 82.00000 178.00000
hematocrit_min 28.10000 37.40000
hematocrit_max 28.10000 45.20000
potassium_min 5.50000 2.80000
potassium_max 5.50000 3.00000
sodium_min 138.00000 136.00000
sodium_max 138.00000 140.00000
bun_min 28.00000 16.00000
bun_max 28.00000 17.00000
wbc_min 2.50000 7.50000
wbc_max 2.50000 13.70000
mingcs 15.00000 15.00000
gcsmotor 6.00000 5.00000
gcsverbal 5.00000 0.00000
gcseyes 4.00000 1.00000
endotrachflag 0.00000 1.00000
urineoutput 1674.00000 887.00000
vasopressor 0.00000 0.00000
vent 0.00000 1.00000
los_hospital 19.09310 4.88130
los_icu 3.53680 5.32310
sofa 3.00000 5.00000
saps 17.00000 18.00000
posthospmort30day 1.00000 0.00000
Usually I wouldn't add a second answer, but that might be the best solution here. Don't worry about voting on it.
Here's the same algorithm as in my first answer, applied to the iris data set. Each row contains four spatial measurements of a flower from one of three different varieties of iris plants.
Below that you will find the iris analysis, written out as nested loops so you can see the equivalence. But that's not recommended for production with large data sets.
Please familiarize yourself with starting data and all of the intermediate dataframes:
The input iris data
psm_scaled (the spatial measurements, scaled to mean=0, SD=1)
cs (the matrix of pairwise similarities)
cs.melt (the pairwise similarities in long format)
At the end I have aggregated the mean similarities for all comparisons between one variety and another. You will see that comparisons between individuals of the same variety have mean similarities approaching 1, and comparisons between individuals of different varieties have mean similarities approaching negative 1.
library(lsa)
library(reshape2)

temp <- iris[, 1:4]
iris.names <- paste0(iris$Species, '.', rownames(iris))
psm_scaled <- scale(temp)
rownames(psm_scaled) <- iris.names

cs <- lsa::cosine(t(psm_scaled))

# this is super inefficient, because cs contains
# two identical triangular matrices
cs.melt <- melt(cs)
cs.melt <- as.data.frame(cs.melt)
names(cs.melt) <- c("flower.A", "flower.B", "similarity")

class.A <-
  strsplit(as.character(cs.melt$flower.A), '.', fixed = TRUE)
cs.melt$class.A <- sapply(class.A, function(one.split) {
  return(one.split[1])
})

class.B <-
  strsplit(as.character(cs.melt$flower.B), '.', fixed = TRUE)
cs.melt$class.B <- sapply(class.B, function(one.split) {
  return(one.split[1])
})

cs.melt$comparison <-
  paste0(cs.melt$class.A , '_vs_', cs.melt$class.B)

cs.agg <-
  aggregate(cs.melt$similarity, by = list(cs.melt$comparison), mean)

print(cs.agg[order(cs.agg$x),])
which gives
# Group.1 x
# 3 setosa_vs_virginica -0.7945321
# 7 virginica_vs_setosa -0.7945321
# 2 setosa_vs_versicolor -0.4868352
# 4 versicolor_vs_setosa -0.4868352
# 6 versicolor_vs_virginica 0.3774612
# 8 virginica_vs_versicolor 0.3774612
# 5 versicolor_vs_versicolor 0.4134413
# 9 virginica_vs_virginica 0.7622797
# 1 setosa_vs_setosa 0.8698189
If you’re still not comfortable with performing lsa::cosine() on a scaled, numerical dataframe, we can certainly do explicit pairwise calculations.
The formula you gave for PSM, i.e. the cosine similarity of patients, is expressed in two formats at Wikipedia.
Remembering that vectors A and B represent the ordered list of attributes for PatientA and PatientB, the PSM is the dot product of A and B, divided by the product of [the magnitude of A] and [the magnitude of B].
The terse way of saying that in R is
cosine.sim <- function(A, B) { A %*% B / sqrt(A %*% A * B %*% B) }
But we can rewrite that to look more similar to your post as
cosine.sim <- function(A, B) { A %*% B / (sqrt(A %*% A) * sqrt(B %*% B)) }
I guess you could even re-write that (the calculation of similarity between a single pair of individuals) as a bunch of nested loops, but in the case of a manageable amount of data, please don't. R is highly optimized for operations on vectors and matrices. If you're new to R, don't second-guess it. By the way, what happened to your millions of rows? This will certainly be less stressful now that you're down to tens of thousands.
Anyway, let’s say that each individual only has two elements.
individual.1 <- c(1, 0)
individual.2 <- c(1, 1)
So you can think of individual.1 as a line that passes between the origin (0, 0) and the point (1, 0), and individual.2 as a line that passes between the origin and (1, 1).
some.data <- rbind.data.frame(individual.1, individual.2)
names(some.data) <- c('element.i', 'element.j')
rownames(some.data) <- c('individual.1', 'individual.2')
plot(some.data, xlim = c(-0.5, 2), ylim = c(-0.5, 2))
text(
  some.data,
  rownames(some.data),
  xlim = c(-0.5, 2),
  ylim = c(-0.5, 2),
  adj = c(0, 0)
)
segments(0, 0, x1 = some.data[1, 1], y1 = some.data[1, 2])
segments(0, 0, x1 = some.data[2, 1], y1 = some.data[2, 2])
So what’s the angle between vector individual.1 and vector individual.2? You guessed it, 0.785 radians, or 45 degrees.
cosine.sim <- function(A, B) { A %*% B / (sqrt(A %*% A) * sqrt(B %*% B)) }
cos.sim.result <- cosine.sim(individual.1, individual.2)
angle.radians <- acos(cos.sim.result)
angle.degrees <- angle.radians * 180 / pi
print(angle.degrees)
# [,1]
# [1,] 45
Now we can use the cosine.sim function I previously defined, in two nested loops, to explicitly calculate the pairwise similarities between each of the iris flowers. Remember, psm_scaled has already been defined as the scaled numerical values from the iris dataset.
cs.melt <- lapply(rownames(psm_scaled), function(name.A) {
  inner.loop.result <-
    lapply(rownames(psm_scaled), function(name.B) {
      individual.A <- psm_scaled[rownames(psm_scaled) == name.A, ]
      individual.B <- psm_scaled[rownames(psm_scaled) == name.B, ]
      similarity <- cosine.sim(individual.A, individual.B)
      return(list(name.A, name.B, similarity))
    })
  inner.loop.result <-
    do.call(rbind.data.frame, inner.loop.result)
  names(inner.loop.result) <-
    c('flower.A', 'flower.B', 'similarity')
  return(inner.loop.result)
})
cs.melt <- do.call(rbind.data.frame, cs.melt)
Now we repeat the calculation of cs.melt$class.A, cs.melt$class.B, and cs.melt$comparison as above, and calculate cs.agg.from.loops as the mean similarity between the various types of comparisons:
cs.agg.from.loops <-
  aggregate(cs.melt$similarity, by = list(cs.melt$comparison), mean)
print(cs.agg.from.loops[order(cs.agg.from.loops$x),])
# Group.1 x
# 3 setosa_vs_virginica -0.7945321
# 7 virginica_vs_setosa -0.7945321
# 2 setosa_vs_versicolor -0.4868352
# 4 versicolor_vs_setosa -0.4868352
# 6 versicolor_vs_virginica 0.3774612
# 8 virginica_vs_versicolor 0.3774612
# 5 versicolor_vs_versicolor 0.4134413
# 9 virginica_vs_virginica 0.7622797
# 1 setosa_vs_setosa 0.8698189
Which, I believe, is identical to the result we got with lsa::cosine.
So what I'm trying to say is... why wouldn't you use lsa::cosine?
Maybe you should be more concerned with
selection of variables, including removal of highly correlated variables
scaling/normalizing/standardizing the data
performance with a large input data set
identifying known similars and dissimilars for quality control
as previously addressed
I took a class a few years ago about power market optimization and tried building a small example in R that I have now worked up the courage to tackle again. However, I need some help.
I would like to take the constraints of ~4 power plants and try to satisfy a single location's demand for power, as well as the demand for 3 other types of ancillary services needed in the power market (called Reserves). I am looking to minimize the total cost of generating the electricity.
I've laid out a lot of information but can't seem to figure out how to start using any optimization packages (I have used optim before but couldn't quite work with these constraints). This will be a little lengthy, but I'm using comments in R code for an easy copy-paste-run for those interested in helping or viewing what I have.
Warning: You're going to learn more about power plants and electricity markets than you want to from this post.
#####-------------------------------
# G for GENERATOR No.
# For each generator we have the following data:
## Maximum Capacity: the most power it can be producing
## Technical Minimum Capacity: the smallest amount (other than being off)
## Cost per Megawatt: The cost of generating power ($/MW)
## Ramp Rate: The speed a plant can change to a higher or lower output (MW/min)
G1 <- c(200, 100, 50, 2)
G2 <- c(150, 10, 80, 10)
G3 <- c(200, 100, 55, 2)
G4 <- c(150, 10, 85, 10)
Gdat.1 <- rbind(G1,G2,G3,G4)
colnames(Gdat.1) = c("MWMax","MWMin","Cost","RampRate")
Gdat.1
n <- nrow(Gdat.1) # number of generators
#####-------------------------------
# System Requirements: Demand
## Supply (MW) must equal demand.
Demand <- 415
# System Requirements: Reserves
## Total Reserves of the system must be met.
### R1: Primary Reserves
##### 0.5 minute response time, bi-directional (Ramp UP/DOWN)
### R2: Secondary Reserves
##### 5 minute response time, bi-directional (Ramp UP/DOWN)
### R3: Tertiary Reserves
##### 15 minute response time, uni-directional (Ramp UP only)
# R for Reserve Type.
# For each Reserve Type we have the following data:
# Total: MW Needed
# minutes: within how much time the MW is needed by
# bid: amount the operator will pay for MW reserves $/MW)
R1 <- c(2, 0.5, 60) # Primary
R2 <- c(8, 5, 40) # Secondary
R3 <- c(20, 15, 0) # Tertiary
Reserves <- rbind(R1,R2,R3)
colnames(Reserves) = c("Total","Minutes","Bid")
Reserves
#####-------------------------------
## Ramp Rate constraint of generators
### For R1 (Primary Reserves) the system needs 2 MW that can be supplied within 30 seconds,
### so a Generator with a ramprate of 2 MW/min will only be able to supply
### 1 MW for primary reserves, while a Generator with a ramprate of 10 MW/min
### will be able to supply 5 MW.
# How much each Generator can supply in that time
R1max <- Gdat.1[,"RampRate"] * Reserves["R1","Minutes"]
R2max <- Gdat.1[,"RampRate"] * Reserves["R2","Minutes"]
R3max <- Gdat.1[,"RampRate"] * Reserves["R3","Minutes"]
R1min <- -R1max     # recall, bi-directional
R2min <- -R2max     # bi-directional
R3min <- 0 * R3max  # uni-directional, ramp up only (lower bound of zero)
# we no longer need RampRate since we have used it to calculate the reserve limits
Gdat <- cbind(Gdat.1[,-4], cbind(R1max,R2max,R3max,R1min,R2min,R3min))
#####-------------------------------
# Now we initialize each generator's commitments that can change during optimization
MW.Demand = rep(0,n) # general MW to satisfy demand
MW.R1 = rep(0,n) # MW to satisfy Primary Reserves
MW.R2 = rep(0,n) # MW to satisfy Secondary Reserves
MW.R3 = rep(0,n) # MW to satisfy Tertiary Reserves
Commit.orig <- cbind(MW.Demand,MW.R1,MW.R2,MW.R3)
rownames(Commit.orig) <- paste0("G",seq(1,n))
Commit <- Commit.orig
# Some initial guess (may be exactly the right answer...)
Commit <- matrix(c(200,0,0,0,
17.5,1,6.5,20,
197.5,1,1.5,0,
0,0,0,0), 4, 4, byrow = T, dimnames(Commit.orig))
#####-------------------------------
# Objective Function: cost per MW of each generator times its total MW output.
# I want to minimize the total cost; I'm not sure which of these formulations is
# the right way to express it, or whether any of them even works:
sum(Commit * Gdat[,"Cost"])
sum(Gdat[,"Cost"] %*% Commit)
sum(rowSums(Commit * Gdat[,"Cost"]))
#####-------------------------------
# Constraints
sum(Commit[,"MW.Demand"]) == Demand & # All generators together must sum to meet system demand requirements
sum(Commit[,"MW.R1"]) == Reserves["R1","Total"] & # Total Primary Reserves are met
sum(Commit[,"MW.R2"]) == Reserves["R2","Total"] & # Total Secondary
sum(Commit[,"MW.R3"]) == Reserves["R3","Total"] & # Total Tertiary
(rowSums(Commit) <= Gdat[,"MWMax"] | rowSums(Commit) == 0) & # Each generator must be at or below its max, or off
(rowSums(Commit) >= Gdat[,"MWMin"] | rowSums(Commit) == 0) & # Each generator must be at or above its min, or off
Commit[,"MW.R1"] <= Gdat[,"R1max"] & Commit[,"MW.R1"] >= Gdat[,"R1min"] & # Generators cannot exceed ramp-rate limitations
Commit[,"MW.R2"] <= Gdat[,"R2max"] & Commit[,"MW.R2"] >= Gdat[,"R2min"] & # - for the bi-directional
Commit[,"MW.R3"] <= Gdat[,"R3max"] & Commit[,"MW.R3"] >= Gdat[,"R3min"]   # - or uni-directional reserves
Thank you to anyone willing to take a look at this.
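One possible way to get started, offered as a sketch rather than a definitive formulation: the pieces above can be fed to a linear-programming solver such as lpSolve (my choice of package, not mentioned in the post) if we accept two simplifications, namely that every generator stays on (so the "at least MWMin or off" rule is dropped) and that reserve commitments are non-negative (no downward reserves, which would need a mixed-integer or variable-splitting formulation).

library(lpSolve)

n <- 4                            # generators
k <- 4                            # products: Demand, R1, R2, R3
cost <- Gdat.1[, "Cost"]          # $/MW for each generator
obj  <- rep(cost, times = k)      # variable order: G1..G4 for Demand, then R1, R2, R3

# Equality constraints: each product's total commitment equals its requirement
A.eq <- t(sapply(1:k, function(j) {
  v <- rep(0, n * k); v[((j - 1) * n + 1):(j * n)] <- 1; v
}))
b.eq <- c(Demand, Reserves[, "Total"])

# Capacity constraints: each generator's total output across products <= MWMax
A.cap <- t(sapply(1:n, function(g) {
  v <- rep(0, n * k); v[seq(g, n * k, by = n)] <- 1; v
}))
b.cap <- Gdat.1[, "MWMax"]

# Ramp-rate limits on the reserve commitments (upper bounds only in this sketch)
A.ramp <- diag(n * k)[-(1:n), ]   # skip the Demand block
b.ramp <- c(R1max, R2max, R3max)

res <- lp(direction    = "min",
          objective.in = obj,
          const.mat    = rbind(A.eq, A.cap, A.ramp),
          const.dir    = c(rep("=", k), rep("<=", n), rep("<=", n * (k - 1))),
          const.rhs    = c(b.eq, b.cap, b.ramp))

res$status   # 0 means an optimum was found
matrix(res$solution, nrow = n,
       dimnames = list(paste0("G", 1:n), c("MW.Demand", "MW.R1", "MW.R2", "MW.R3")))

The decision variables are the 16 entries of the Commit matrix, stacked column by column, which is why the reported solution can be reshaped back into that layout at the end.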
My problem:
If the sex ratio at birth (male to female) is 1.1, but people adopt the following
strategy: have children until you have one son, and then stop, unless you have 12
daughters (in which case you stop, too). What would be the average sex ratio in the
population? (Calculate by simulation. Suppose you randomly select 10,000 families.)
My code
pm=0.5238095 # Probability of Male
pw=0.4761905 # Female
w=0 # initial number of Female
n=1 # loop
p=0 # count of number
for(i in 1:n){
  s=rbinom(1,1,0.4761905)
  if(s==1){
    w=w+1
  }
  p=p+1
  while(w<=12){ ####1. How to count the number of female? ###
    while(s==1){
      s=rbinom(1,1,0.4761905)
      if(s==1){
        w=w+1
      }
      p=p+1
    }
  }
  f[i]=p
}
w/p
My question
How do I count the number of females? I'm using a loop to count the number of women (if(s==1){ w = w + 1 }). It seems inefficient; I think that counting TRUE/FALSE values might be more efficient.
How can I write the code more concisely?
The answer, of course, is that this strategy won't affect the sex ratio at all! At least as you've set this up, no matter what a couple's previous birth history is, the probability of a male arising from each birth is always the same.
Here's one way to confirm that with some calculations. (The code's offered without further explanation, at least for now.):
pm <- 0.5238095
m <- cbind(boys=c(rep(1, 12), 0), girls=0:12)
p <- c(dgeom(0:11, pm), 1-pgeom(11, pm))
## Calculate expected number of boys and girls for an immortal couple pursuing
## this "strategy"
(res <- p %*% m)
#           boys     girls
# [1,] 0.9998641 0.9089674
res[1] / sum(res)
# [1] 0.5238095   ## Look familiar
Yes, this is very inefficient. Perhaps I can address just a couple of things that almost make sense and it will give you your answer. In your code...
for(i in 1:n){
  s=rbinom(1,1,0.4761905)
  if(s==1){
    w=w+1
  }
can be rewritten as...
s = rbinom(n,1,0.4761905)
w = sum(s)
That's the same result. Keep in mind that rbinom is producing 0's and 1's (here a 1 is a daughter). You can just sum them to know how many 1's. Given that you define n, the number of 0's (the sons) is...
n - w
But, if you didn't it would be easy to find too...
length(s) - sum(s)
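Putting those two lines together on a small example (my illustration, reusing the question's convention that 1 means a daughter):

set.seed(1)
s <- rbinom(10, 1, 0.4761905)  # 10 births: 1 = daughter, 0 = son
sum(s)                         # number of daughters
length(s) - sum(s)             # number of sons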
It is still probably inefficient but at least it's correct for what you're trying to do:
# set.seed(1)
pw <- 0.4761905 # Initial sex ratio (probability of a daughter)
w <- 0          # number of daughters
n <- 10000      # number of families
p <- 0          # number of kids
f <- data.frame(Daughters=vector(length=n), Kids=vector(length=n))
for(i in 1:n){
  while(w < 12 & w==p){ # As long as you don't have 12 daughters or 1 son...
    s <- rbinom(1,1,pw)
    if(s==1){w <- w+1}
    p <- p+1
  }
  f[i,] <- c(w,p)   # Number of daughters and total kids in each family
  w <- p <- 0       # Reset number of kids and daughters for the next family
}
colSums(f)[1]/colSums(f)[2] # Final sex ratio
Daughters
0.4736842 # So as #JoshO'Brien pointed out, very close to the original sex ratio.
And you can check the data frame f to verify that there is never more than 1 son per family (number of kids minus number of daughters):
range(f[,2]-f[,1])
[1] 1 1 # Range of the number of boys per family
range(f[,1])
[1] 0 11 # Range of the number of daughters per family
nrow(f[f[,1]==0,])
[1] 5275 # Number of families having 1 son and no daughters (to be compared with 1-pw)
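For what it's worth, the whole simulation can also be vectorised in a few lines (my sketch, not part of the answers above): the number of daughters before the first son is geometric with success probability pm, capped at 12 when the family gives up.

set.seed(1)
pm <- 0.5238095                          # probability of a boy at each birth
n  <- 10000                              # number of families
daughters <- pmin(rgeom(n, pm), 12)      # daughters per family, capped at 12
sons      <- as.integer(daughters < 12)  # a son only if the family stopped before 12 daughters
sum(daughters) / (sum(daughters) + sum(sons))  # proportion of daughters, close to 1 - pm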
I have a dataset composed of values obtained from studies and experiments. Experiments are nested within studies. I want to subsample the dataset so that only 1 experiment is represented for each study. I want to repeat this procedure 10,000 times, randomly drawing the 1 experiment each time, and then calculate some summary statistics for the values. Here is an example dataset:
df=data.frame(study=c(1,1,2,2,2,3,4,4),expt=c(1,2,1,2,3,1,1,2),value=runif(8))
I wrote the following function to do the above, but it is taking forever. Does anyone have any suggestions for streamlining this code? Thanks!
library(plyr)  # for ddply() and ldply()

subsample = function(x, A) {
  subsample.list = sapply(1:A, function(m) {
    idx = ddply(x, c("study"), function(i) sample(1:nrow(i), 1)) # Sample one experiment from each study
    x[paste(x$study, x$expt, sep="-") %in% paste(idx$study, idx$V1, sep="-"), "value"] # Match the study-experiment combinations and retrieve values
  })
  means.list = ldply(subsample.list, mean) # Calculate the mean of 'values' for each iteration
  c(quantile(means.list$V1, 0.025), mean(means.list$V1), upper = quantile(means.list$V1, 0.975)) # Calculate overall means and 95% CIs
}
You can vectorise this way more (even using plyr), and go much much faster:
yoursummary = function(x) c(quantile(x, 0.025), mean(x), upper = quantile(x, 0.975))

subsampleX = function(x, M)
  yoursummary(
    aaply(
      daply(x, .(study),
            function(d) sample(d$value, M, replace = T),
            .drop_o = FALSE),
      1, mean
    )
  )
The trick here is to do all the sampling up front: if we want to sample M times, why not do it all at once while we have access to each study's rows?
Original code:
> system.time(subsample(df,20000))
user system elapsed
123.23 0.06 124.74
New vectorised code:
> system.time(subsampleX(df,20000))
user system elapsed
0.24 0.00 0.25
That's about 500x faster.
Here's a base R solution which avoids ddply for speed reasons:
df = data.frame(study=c(1,1,2,2,2,3,4,4), expt=c(1,2,1,2,3,1,1,2), value=runif(8))

sample.experiments <- function(df) {
  # assumes the rows are grouped/sorted by study, as in the example df
  r <- rle(df$study)
  samp <- sapply(r$lengths, function(x) sample(seq(x), 1))
  start.idx <- c(0, cumsum(r$lengths)[1:(length(r$lengths)-1)])
  df[samp + start.idx,]
}
> sample.experiments(df)
study expt value
1 1 1 0.6113196
4 2 2 0.5026527
6 3 1 0.2803080
7 4 1 0.9824377
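If it helps, here is a sketch (my addition) of how sample.experiments() could be plugged back into the original goal of 10,000 resamples with a mean and 95% interval:

iter.means <- replicate(10000, mean(sample.experiments(df)$value))
c(lower = quantile(iter.means, 0.025),
  mean  = mean(iter.means),
  upper = quantile(iter.means, 0.975))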
Benchmarks
> m <- microbenchmark(
+ ddply(df,.(study),function(i) i[sample(1:nrow(i),1),]) ,
+ sample.experiments(df)
+ )
> m
Unit: microseconds
expr min lq median uq max
1 ddply(df, .(study), function(i) i[sample(1:nrow(i), 1), ]) 3808.652 3883.632 3936.805 4022.725 6530.506
2 sample.experiments(df) 337.327 350.734 357.644 365.915 580.097