How to remove the loop from the following model function?

I'm rewriting some code and am currently creating a small population model. I have re-created the model function below from a book; it's a simple population model based on a few parameters. I've left them at their defaults and returned the data frame, and everything works well. However, I was wondering whether I could somehow eliminate the loop from the function.
I know R shines at vectorized calculation, but I'm not sure whether that is possible in this case. I thought of using something like lead/lag to do it, but would that work? Perhaps not, as things need to be calculated sequentially?
# Nt = numbers at start of time t
# Ct = numbers removed at the end of time t
# N0 = numbers at time 0
# r = intrinsic rate of population growth
# K = carrying capacity
mod_fun = function (r = 0.5, K = 1000, N0 = 50, Ct = 0, Yrs = 10, p = 1)
{
# sets yr1 to the Yrs value plus 1
yr1 <- Yrs + 1
# creates a sequence from year 1 to Yrs + 1
years <- seq(1, yr1, 1)
# uses yr1 to create a numeric vector of length Yrs + 1
pop <- numeric(yr1)
# sets population at time 0
pop[1] <- N0
# creates a loop that calculates model for each year after first year
for (i in 2:yr1) {
# sets starting value of population for step to one calculated previous step
# thus Nt is always the previous step pop size
Nt <- pop[i - 1]
pop[i] <- max((Nt + (r * Nt/p) * (1 - (Nt/K)^p) -
Ct), 0)
}
# pop2 is pop without its first element: the population at the end of each step
pop2 <- pop[2:yr1]
# binds together: years (1 to Yrs + 1; the last row is dropped below),
# pop, created in the loop, which is the population at the start of step t,
# and pop2, which is the population at the end of step t
out <- cbind(year = years, nt = pop, nt1 = c(pop2, NA))
# sets row names to years
rownames(out) <- years
out <- out[-yr1, ]
# returns the result (note that cbind() actually produces a matrix here)
return(out)
}
result = mod_fun()
This is what the output looks like. Row-wise, starting from row 1: given the starting population of 50, the loop calculates nt1, the next row's nt becomes lag(nt1), and things continue in the same fashion.
result
#> year nt nt1
#> 1 1 50.0000 73.7500
#> 2 2 73.7500 107.9055
#> 3 3 107.9055 156.0364
#> 4 4 156.0364 221.8809
#> 5 5 221.8809 308.2058
#> 6 6 308.2058 414.8133
#> 7 7 414.8133 536.1849
#> 8 8 536.1849 660.5303
#> 9 9 660.5303 772.6453
#> 10 10 772.6453 860.4776
Created on 2022-04-24 by the reprex package (v2.0.1)

You can replace the loop with Reduce(..., accumulate = TRUE), which feeds each step's population into the next calculation. Note that the recurrence is inherently sequential, so this removes the explicit loop rather than truly vectorizing it:
mod_fun = function (r = 0.5, K = 1000, N0 = 50, Ct = 0, Yrs = 10, p = 1)
{
years <- seq_len(Yrs)
pop <- Reduce(function(Nt, y) max((Nt + (r * Nt/p) * (1 - (Nt/K)^p) - Ct), 0),
years, init = N0, accumulate = TRUE)
data.frame(year = years, nt = head(pop,-1), nt1 = pop[-1])
}
mod_fun()
year nt nt1
1 1 50.0000 73.7500
2 2 73.7500 107.9055
3 3 107.9055 156.0364
4 4 156.0364 221.8809
5 5 221.8809 308.2058
6 6 308.2058 414.8133
7 7 414.8133 536.1849
8 8 536.1849 660.5303
9 9 660.5303 772.6453
10 10 772.6453 860.4776
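For what it's worth, the same pattern can be written with purrr::accumulate(), assuming the purrr package is available (mod_fun2 is just an illustrative name). As with Reduce(), this removes the explicit loop but not the sequential dependency, since each year's population feeds the next:
library(purrr)
mod_fun2 <- function(r = 0.5, K = 1000, N0 = 50, Ct = 0, Yrs = 10, p = 1) {
  # accumulate() threads each step's result into the next call,
  # returning all intermediate populations
  pop <- accumulate(seq_len(Yrs),
                    function(Nt, y) max(Nt + (r * Nt/p) * (1 - (Nt/K)^p) - Ct, 0),
                    .init = N0)
  data.frame(year = seq_len(Yrs), nt = head(pop, -1), nt1 = pop[-1])
}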

Related

How to make the loop faster?

My code is below; I am wondering if there is a better way to make it faster:
pos=NULL
row=data.frame(matrix(nrow=216,ncol=4))
colnames(row)=c("sub","subi","group","trial")
for (i in 1:100000){
row$sub="Positive"
row$subi=NA
row$group=NA
row$subi[1:144]=c(1:144)
row$group[1:144]=1
row$subi[145:216]=c(1:72)
row$group[145:216]=2
row$trial=i
pos=rbind(pos,row)
}
No loop needed. You can build the data.frame (or a tibble, as in my example) directly. Given that you may want to adjust the number of rows later:
library(dplyr)
n_rows <- 10000
tibble(
trial = 1:n_rows,
sub = "positive",
subi = c(1:144, 1:72, rep(NA, n_rows - 216)),
group = c(rep(1, 144), rep(2, 72), rep(NA, n_rows - 216))
)
Output is:
# A tibble: 10,000 × 4
trial sub subi group
<int> <chr> <int> <dbl>
1 1 positive 1 1
2 2 positive 2 1
3 3 positive 3 1
4 4 positive 4 1
5 5 positive 5 1
6 6 positive 6 1
7 7 positive 7 1
8 8 positive 8 1
9 9 positive 9 1
10 10 positive 10 1
# … with 9,990 more rows
The only thing different in each pass of the loop is trial. rep is your friend. For the other columns, R will automatically recycle to match the longest column (here, it is trial with 21.6M rows).
pos <- data.frame(
sub = "Positive",
subi = c(1:144, 1:72),
group = rep.int(1:2, c(144, 72)),
trial = rep(1:1e5, each = 216)
)
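A quick sanity check on the result, using the sizes from the question:
nrow(pos)                  # 21600000 = 216 * 100000
length(unique(pos$trial))  # 100000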
It looks like you are trying to replicate this data frame 100,000 times, with each iteration of the frame having a different trial number.
data.frame(sub = rep("Positive", 216),
subi = c(1:144, 1:72),
group = rep(c(1, 2), c(144, 72)))
The replicate function is great for running static code multiple times. So one option is to create your 100,000 copies and then update the trial number.
FrameList <-
replicate(n = 100000,
{
data.frame(sub = rep("Positive", 216),
subi = c(1:144, 1:72),
group = rep(c(1, 2), c(144, 72)),
trial = rep(NA_real_, 216))
},
simplify = FALSE)
To update the trial number, you can go with a for loop
for (i in seq_along(FrameList)){
FrameList[[i]]$trial <- i
}
or you can try something fancy-pants, but taking a lot more code
FrameList <- mapply(function(FL, i){
FL$trial <- i
FL
},
FrameList,
seq_along(FrameList),
SIMPLIFY = FALSE)
Whichever way you go, you can stack them all together with
Frame <- do.call("rbind", FrameList)
This certainly isn't the most elegant way to do this, so watch for others to give you other clever tricks. But this, I would guess, would be the basic process to follow.
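One small aside: if dplyr is available, bind_rows() is usually faster than do.call("rbind", ...) for stacking a long list of data frames:
Frame <- dplyr::bind_rows(FrameList)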

Looping the same actions in R

I have an issue here.
I want to loop my operations in R, but I don't know how to do it properly and efficiently.
I have several different-sized datasets, and performing the same block of code on each one is time-consuming.
Here is the code I need to apply to each of the datasets, writing the data and the model output into objects with different names.
##########################################################################################################################
#the combined list of separate data frames where the last letter is changing A, B, C...
z <- list(Data_A, Data_B, Data_C)
# need to loop these operations over the datasets; here is an example using data from the Data_A dataset
# TFP estimation by using ACF method
ACF_A <- prodest::prodestACF(Data_A$turn, fX = Data_A$cogs, sX = Data_A$tfa, pX = Data_A$cogs, idvar = Data_A$ID, timevar = Data_A$Year,
R = 100, cX = NULL, opt = 'DEoptim', theta0 = NULL, cluster = NULL)
omegaACF_A <- prodest::omega(ACF_A)
Data_A$omegaACF_A <- prodest::omega(ACF_A)
#########################################################################################################################
# Growth variables
Data_A <- Data_A %>%
arrange(ID, Year) %>%
group_by(ID) %>%
mutate(domegaACF_A = omegaACF_A - dplyr::lag(omegaACF_A),
debt = LOAN + LTD,
ddebt = debt - dplyr::lag(debt),
dsales = SALE - dplyr::lag(SALE)) %>%
ungroup
# Panel data frame
PData_A <- pdata.frame(Data_A, index = c("ID","Year"))
# Within estimator
within_2way_A <- plm(domegaACF_A ~ dplyr::lag(domegaACF_A, 1) + dplyr::lag(domegaACF_A, 2) + ddebt + lag(ff1, 1) + ddebt:lag(ff1, 1) + log(Age) + ta + dsales,
data = PData_A, effect = "twoways", model ="within", index = c("ID", "Year"))
The main problem is that I do not know how to store the results in separate objects with corresponding names. For example, in the following block of code, _A should change to _B, _C according to the dataset being used.
ACF_A <- prodest::prodestACF(Data_A$turn, fX = Data_A$cogs, sX = Data_A$tfa, pX = Data_A$cogs, idvar = Data_A$ID, timevar = Data_A$Year,
R = 100, cX = NULL, opt = 'DEoptim', theta0 = NULL, cluster = NULL)
omegaACF_A <- prodest::omega(ACF_A)
Data_A$omegaACF_A <- prodest::omega(ACF_A)
I know there are lapply and for loops but I do not know how to use them with changing names of storing variables:
z <- list (df1, df2, df3)
for (i in z){
ACF_[1 or 2 or 3] <- prodest::prodestACF(i$turn, fX = i$cogs, sX = i$tfa, pX = i$cogs, idvar = i$ID, timevar = i$Year,
R = 100, cX = NULL, opt = 'DEoptim', theta0 = NULL, cluster = NULL)
omegaACF_[1 or 2 or 3] <- prodest::omega(ACF_[1 or 2 or 3])
Data_[]$omegaACF_[1 or 2 or 3] <- prodest::omega(ACF_[1 or 2 or 3])
}
UPD: Here are several datasets: https://drive.google.com/drive/folders/1gBV2ZkywW6JqDjRICafCwtYhh2DHWaUq?usp=sharing
UPD2:
Data_A
turn cogs tfa SALE
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
Data_B
turn cogs tfa SALE
5 5 5 5
6 6 6 6
7 7 7 7
8 8 8 8
After running the loop I need:
ACF_A, ACF_B, etc.: storage variables holding the results of the prodest estimations
omegaACF_A, omegaACF_B, etc.: storage for the omega variable from prodest
The omegaACF_A, omegaACF_B results should be added to the Data_A, Data_B datasets accordingly as a new variable
After that, the growth variables should be created in the Data_A, Data_B datasets
The plm regressions should be stored in within_2way_A, within_2way_B accordingly
So in the end, I need:
Data_A
turn cogs tfa SALE omegaACF_A domegaACF_A debt ddebt dsales
1 1 1 1 0.1 NA 1 NA NA
2 2 2 2 0.3 0.2 2 1 1
3 3 3 3 0.6 0.3 3 1 1
4 4 4 4 0.9 0.3 4 1 1
Data_B
turn cogs tfa SALE omegaACF_B domegaACF_B debt ddebt dsales
5 5 5 5 1.1 NA 5 NA NA
6 6 6 6 1.5 0.4 6 1 1
7 7 7 7 1.7 0.2 7 1 1
8 8 8 8 2.0 0.3 8 1 1
One approach is to separate the ACF estimation and omega calculation from the summary creation with different lapply() commands. Since you did not supply any example data, it's a blind shot, but try the following. Note that I assumed that every dataset has the same column names! In case it doesn't solve your problem I will remove my answer.
data <- list(Data_A, Data_B, Data_C)
Estimates <- lapply(data, function(x){
prodest::prodestACF(x$turn, fX = x$cogs, sX = x$tfa, pX = x$cogs, idvar = x$ID, timevar = x$Year,
R = 100, cX = NULL, opt = 'DEoptim', theta0 = NULL, cluster = NULL)
})
Summaries_estimates <- lapply(Estimates, summary)
Omegas <- lapply(Estimates, function(x) prodest::omega(x))
Summaries_omega <- lapply(Omegas, summary)
Alternative using loops
Since you asked, it is also possible to write a loop that does everything together, though this is usually slower. For this, we define empty lists to hold the results (ACF estimates etc.) and loop over the list of data.frames we already created.
data <- list(Data_A, Data_B, Data_C)
Estimates <- list()
Summaries_estimates <- list()
Omegas <- list()
Summaries_omegas <- list()
for(i in seq_along(data)){
Estimates[[i]] <- prodest::prodestACF(data[[i]]$turn, fX = data[[i]]$cogs, sX = data[[i]]$tfa, pX = data[[i]]$cogs, idvar = data[[i]]$ID, timevar = data[[i]]$Year,
R = 100, cX = NULL, opt = 'DEoptim', theta0 = NULL, cluster = NULL)
Summaries_estimates[[i]] <- summary(Estimates[[i]])
Omegas[[i]] <- prodest::omega(Estimates[[i]])
Summaries_omegas[[i]] <- summary(Omegas[[i]])
}
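If you really do want the results as separately named objects (ACF_A, omegaACF_A, ...), one sketch, assuming the datasets can be gathered into a named list, is to name the results and push them into the global environment with list2env():
# name the list so results stay associated with each dataset
data <- list(A = Data_A, B = Data_B, C = Data_C)
Estimates <- lapply(data, function(x){
  prodest::prodestACF(x$turn, fX = x$cogs, sX = x$tfa, pX = x$cogs, idvar = x$ID, timevar = x$Year,
                      R = 100, cX = NULL, opt = 'DEoptim', theta0 = NULL, cluster = NULL)
})
Omegas <- lapply(Estimates, prodest::omega)
# creates ACF_A, ACF_B, ... and omegaACF_A, omegaACF_B, ... in the workspace
list2env(setNames(Estimates, paste0("ACF_", names(data))), envir = .GlobalEnv)
list2env(setNames(Omegas, paste0("omegaACF_", names(data))), envir = .GlobalEnv)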

Estimating Probabilities in Perudo

I am new to coding and using R. I am working on a project to simulate the game Liar's Dice, also known as Perudo, and have some questions about creating the simulation.
Basically, the game consists of two or more players rolling five dice in a cup, turning it over, and making bids on how many of a certain side they think are on the table. You can look at your own dice, but not anyone else's. To make a bid, on your turn you would say "two 5's," meaning there are at least two dice that landed on 5. Each bid must increase either the side or the amount. So if you said "two 5's," I could then say "two 6's" or "three 3's" on my turn.
When you believe the last bid is incorrect, you say "Liar" on your turn, and everyone reveals their dice. If you were wrong, you lose a die, but if you were right, the last bidder loses a die. This continues until only one player has dice left.
First, I decided to create a function called cup() which rolls a cup of five six-sided dice.
cup <- function(sides = 6, dice = 5){
sample(1:sides, size = dice, replace = TRUE)
}
Next, with a little assistance, I created a new function called cups() which rolls three cups for three players.
cups <- function(players = 3, sides = 6, dice = 5){
out <- cup(sides, dice)
for(i in 2:players){
out <- rbind(out, cup(sides, dice))
}
rownames(out) <- paste0("P", 1:players)
return(out)
}
What I want to accomplish next is to create a table of probabilities of possible dice outcomes. In other words, what's the probability of there being at least two of a side given fifteen dice (five for each player) in play? And then the probability of there being three, four, five, etc. all the way up to fifteen in this case.
My question is how would I go about doing this in R? And what direction should I go in after getting the probabilities in R?
Here is an empirical process for determining the percentage breakdown of rolls of 5 dice by the number of distinct values showing (from all the same to all different):
library(gtools) # package with permutations function
allcombos <- permutations(6, 5, repeats.allowed = TRUE) # all 6^5 ordered outcomes of 5 six-sided dice
alluniques <- apply(allcombos, 1, unique) # unique values in each outcome
alllengths <- sapply(alluniques, length) # number of distinct values per outcome
alllengths2 <- as.factor(alllengths) # convert to factor to count unique
allsum <- summary(alllengths2) # sum by num uniques
allsum
1 2 3 4 5 # number of distinct values: 1 = all same, 5 = all different
6 450 3000 3600 720
totsum <- sum(allsum)
allfrac <- allsum / totsum
allpercent <- allfrac * 100
allpercent
1 2 3 4 5
0.07716049 5.78703704 38.58024691 46.29629630 9.25925926 # percentage breakout
There is no doubt an analytical solution but I don't know what it is. You could use standard probability calculations to estimate specific outcomes among multiple players. E.g. P(at least 1 4-same | 3 players) or run some simulations.
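For example, a quick Monte Carlo sketch (assuming fair six-sided dice and 15 dice in play) for the probability that some side appears at least 4 times:
set.seed(1)
# for each simulated table of 15 dice, record the largest count of any one side
sims <- replicate(1e5, max(tabulate(sample(6, 15, replace = TRUE), nbins = 6)))
mean(sims >= 4)  # estimated P(at least one side appears 4+ times)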
Here's likely more than you asked for, but it parameterizes the number of sides on the dice, the total number of dice, and the probability of rolling Nrolled or more of one side:
dicegame <- function(Nsides = 6,
Ndice = 5,
Nrolled = 1,
verbose = FALSE)
{
total_possible_outcomes <- choose(Nsides + Ndice - 1, Ndice)
outcomes_matrix <- t(combn(Nsides + Ndice - 1,
Ndice,
sort)) - matrix(rep(c(0:(Ndice - 1)),
each = total_possible_outcomes),
nrow = total_possible_outcomes)
chances <- sum(apply(outcomes_matrix, 1, function(x) sum(x==2)) >= Nrolled) / total_possible_outcomes
if(verbose) {
cat(paste("Number of dice",
Ndice,
"each with", Nsides, "sides",
"chances of rolling", Nrolled,
"\n or more of any one side are:\n"))
}
return(chances)
# return(total_possible_outcomes)
# return(outcomes_matrix)
}
dicegame(verbose = TRUE)
#> Number of dice 5 each with 6 sides chances of rolling 1
#> or more of any one side are:
#> [1] 0.5
dicegame(6, 15, 10)
#> [1] 0.01625387
Using probability theory we can show that, with N = players * dice fair dice each having s = sides sides, the probability that a given value appears exactly n times is binomial:
P(X = n) = choose(N, n) * (1/s)^n * (1 - 1/s)^(N - n)
we can easily write this into an R function:
prob_get_n <- function(ntimes, players=3, dice=5, sides=6){
if(missing(ntimes)) ntimes <- 0:(players*dice)
choose(players*dice,ntimes)*(1-1/sides)^((players*dice)-ntimes)*sides^(-ntimes)
}
Notice that this function is vectorised by construction, i.e. it accepts inputs such as 1:2 or c(9, 5).
prob_get_n() -> probs
data.frame(ntimes = seq_along(probs) - 1, probs = probs, or_more = rev(cumsum(rev(probs))))
ntimes probs or_more
1 0 6.490547e-02 1.000000e+00
2 1 1.947164e-01 9.350945e-01
3 2 2.726030e-01 7.403781e-01
4 3 2.362559e-01 4.677751e-01
5 4 1.417535e-01 2.315192e-01
6 5 6.237156e-02 8.976567e-02
7 6 2.079052e-02 2.739411e-02
8 7 5.346134e-03 6.603585e-03
9 8 1.069227e-03 1.257451e-03
10 9 1.663242e-04 1.882242e-04
11 10 1.995890e-05 2.190005e-05
12 11 1.814445e-06 1.941153e-06
13 12 1.209630e-07 1.267076e-07
14 13 5.582909e-09 5.744548e-09
15 14 1.595117e-10 1.616385e-10
16 15 2.126822e-12 2.126822e-12
Edit
Or we can use R's built-in dbinom function to get the distribution and pbinom to get the cumulative probabilities:
probs <- function(ntimes, players=3, dice=5, sides=6){
if(missing(ntimes)) ntimes <- 0:(players*dice)
data.frame(ntimes=ntimes, probs=dbinom(ntimes, players*dice, 1/sides), or_more=1-pbinom(ntimes-1, players*dice, 1/sides))
}
probs()
ntimes probs or_more
1 0 6.490547e-02 1.000000e+00
2 1 1.947164e-01 9.350945e-01
3 2 2.726030e-01 7.403781e-01
4 3 2.362559e-01 4.677751e-01
5 4 1.417535e-01 2.315192e-01
6 5 6.237156e-02 8.976567e-02
7 6 2.079052e-02 2.739411e-02
8 7 5.346134e-03 6.603585e-03
9 8 1.069227e-03 1.257451e-03
10 9 1.663242e-04 1.882242e-04
11 10 1.995890e-05 2.190005e-05
12 11 1.814445e-06 1.941153e-06
13 12 1.209630e-07 1.267076e-07
14 13 5.582909e-09 5.744548e-09
15 14 1.595117e-10 1.616385e-10
16 15 2.126822e-12 2.126743e-12
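This answers the original question directly. For example, the probability of at least two dice showing a given side among fifteen dice is the or_more value at ntimes = 2:
probs(2)$or_more  # about 0.740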

R conditional lookup and sum

I have data on college course completions, with estimated numbers of students from each cohort completing after 1, 2, 3, ... 7 years. I want to use these estimates to calculate the total number of students outputting from each College and Course in any year.
The output of students in a given year will be the sum of the previous 7 cohorts outputting after 1, 2, 3, ... 7 years.
For example, the number of students outputting in 2014 from COLLEGE 1, COURSE A is equal to the sum of:
Output of 2013 cohort (College 1, Course A) after 1 year +
Output of 2012 cohort (College 1, Course A) after 2 years +
Output of 2011 cohort (College 1, Course A) after 3 years +
Output of 2010 cohort (College 1, Course A) after 4 years +
Output of 2009 cohort (College 1, Course A) after 5 years +
Output of 2008 cohort (College 1, Course A) after 6 years +
Output of 2007 cohort (College 1, Course A) after 7 years
So there are two dataframes: a lookup table that contains all the output estimates, and a smaller summary table that I'm trying to modify. I want to update dummy.summary$output with, for each row, the total output based on the above calculation.
The following code will replicate my data pretty well
# Lookup table
dummy.lookup <- data.frame(cohort = rep(1998:2014, each = 210),
college = rep(rep(paste("College", 1:6), each = 35), 17),
course = rep(rep(paste("Course", LETTERS[1:5]), each = 7),102),
intake = rep(sample(x = 150:300, size = 510, replace=TRUE), each = 7),
output.year = rep(1:7, 510),
output = sample(x = 10:20, size = 3570, replace=TRUE))
# Summary table to be modified
dummy.summary <- aggregate(x = dummy.lookup["intake"], by = list(dummy.lookup$cohort, dummy.lookup$college, dummy.lookup$course), FUN = mean)
names(dummy.summary)[1:3] <- c("year", "college", "course")
dummy.summary <- dummy.summary[order(dummy.summary$year, dummy.summary$college, dummy.summary$course), ]
dummy.summary$output <- 0
The following code does not work, but shows the approach I've been attempting.
dummy.summary$output <- sapply(dummy.summary$output, function(x){
# empty vector to fill with output values
vec <- c()
# Find relevant output for college + course, from each cohort and exit year
for(j in 1:7){
append(x = vec,
values = dummy.lookup[dummy.lookup$college==dummy.summary[x, "college"] &
dummy.lookup$course==dummy.summary[x, "course"] &
dummy.lookup$cohort==dummy.summary[x, "year"]-j &
dummy.lookup$output.year==j, "output"])
}
# Sum and return total output
sum_vec <- sum(vec)
return(sum_vec)
}
)
I guess it doesn't work because I was hoping to use 'x' in the anonymous function to index particular rows of the dummy.summary dataframe. But that clearly isn't happening, and it only returns zero for each row, presumably because the value of 'x' is zero each time. I don't know if it is possible to access the index position of each value that sapply loops over, and use that to index my summary dataframe.
Is this approach fixable or do I need a completely different approach?
Even if it is fixable, is there a more elegant/faster way to achieve what I'm trying to do?
Thanks in anticipation.
I've added an output.year2 column that, instead of holding a value from 1 to 7, holds the actual calendar year implied by the cohort.
The output information you want corresponds to the output year, but the intake information corresponds to the cohort. So I calculate them separately and then join the tables. This automatically creates empty output info for 1998 (NA, which I convert to 0).
# fix your random sampling
set.seed(24)
# Lookup table
dummy.lookup <- data.frame(cohort = rep(1998:2014, each = 210),
college = rep(rep(paste("College", 1:6), each = 35), 17),
course = rep(rep(paste("Course", LETTERS[1:5]), each = 7),102),
intake = rep(sample(x = 150:300, size = 510, replace=TRUE), each = 7),
output.year = rep(1:7, 510),
output = sample(x = 10:20, size = 3570, replace=TRUE))
library(dplyr)
# create result table for output info
dt_output =
dummy.lookup %>%
mutate(output.year2 = output.year+cohort) %>% # update output.year to get a year value
group_by(output.year2, college, course) %>% # for each output year, college, course
summarise(SumOutput = sum(output)) %>% # calculate sum of output
ungroup() %>%
arrange(college,course,output.year2) %>% # for visualisation purposes
rename(cohort = output.year2) # rename column
# create result for intake info
dt_intake =
dummy.lookup %>%
select(cohort, college, course, intake) %>% # select useful columns
distinct() # keep distinct rows/values
# join info
dt_intake %>%
full_join(dt_output, by=c("cohort","college","course")) %>%
mutate(SumOutput = ifelse(is.na(SumOutput),0,SumOutput)) %>%
arrange(college,course,cohort) %>% # for visualisation purposes
tbl_df() # for printing purposes
# Source: local data frame [720 x 5]
#
# cohort college course intake SumOutput
# (int) (fctr) (fctr) (int) (dbl)
# 1 1998 College 1 Course A 194 0
# 2 1999 College 1 Course A 198 11
# 3 2000 College 1 Course A 223 29
# 4 2001 College 1 Course A 198 45
# 5 2002 College 1 Course A 289 62
# 6 2003 College 1 Course A 163 78
# 7 2004 College 1 Course A 211 74
# 8 2005 College 1 Course A 181 108
# 9 2006 College 1 Course A 277 101
# 10 2007 College 1 Course A 157 109
# .. ... ... ... ... ...
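For what it's worth, the sapply() approach from the question is also fixable: iterate over row indices rather than over the output values, and assign the result of append(), which returns a new vector rather than modifying vec in place. A minimal sketch:
dummy.summary$output <- sapply(seq_len(nrow(dummy.summary)), function(x){
  vec <- c()
  for(j in 1:7){
    vec <- append(x = vec,
                  values = dummy.lookup[dummy.lookup$college==dummy.summary[x, "college"] &
                                        dummy.lookup$course==dummy.summary[x, "course"] &
                                        dummy.lookup$cohort==dummy.summary[x, "year"]-j &
                                        dummy.lookup$output.year==j, "output"])
  }
  sum(vec)
})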

Using R to remove data which is below a quartile threshold

I am creating correlations using R, with the following code:
Values<-read.csv(inputFile, header = TRUE)
O<-Values$Abundance_O
S<-Values$Abundance_S
cor(O,S)
pear_cor<-round(cor(O,S),4)
outfile<-paste(inputFile, ".jpg", sep = "")
jpeg(filename = outfile, width = 15, height = 10, units = "in", pointsize = 10, quality = 75, bg = "white", res = 300, restoreConsole = TRUE)
rx<-range(0,20000000)
ry<-range(0,200000)
plot(rx,ry, ylab="S", xlab="O", main="O vs S", type="n")
points(O,S, col="black", pch=3, lwd=1)
mtext(sprintf("%s %.4f", "pearson: ", pear_cor), adj=1, padj=0, side = 1, line = 4)
dev.off()
pear_cor
I now need to find the lower quartile for each set of data and exclude data that is within the lower quartile. I would then like to rewrite the data without those values and use the new column of data in the correlation analysis (because I want to threshold the data by the lower quartile). If there is a way I can write this so that it is easy to change the threshold by applying arguments from Java (as I have with the input file name) that's even better!
Thank you so much.
I have now implemented the answer below and it is working; however, I need to keep the pairs of data together for the correlation. Here is an example of my data (from csv):
Abundance_O Abundance_S
3635900.752 1390.883073
463299.4622 1470.92626
359101.0482 989.1609251
284966.6421 3248.832403
415283.663 2492.231265
2076456.856 10175.48946
620286.6206 5074.268802
3709754.717 269.6856808
803321.0892 118.2935093
411553.0203 4772.499758
50626.83554 17.29893001
337428.8939 203.3536852
42046.61549 152.1321255
1372013.047 5436.783169
939106.3275 7080.770535
96618.01393 1967.834701
229045.6983 948.3087208
4419414.018 23735.19352
So I need to exclude both values in the row if one does not meet my quartile threshold (0.25 quartile). So if the quartile for O was 45000 then the row "42046.61549,152.1321255" would be removed. Is this possible? If I read in both columns as a dataframe can I search each column separately? Or find the quartiles and then input that value into code to remove the appropriate rows?
Thanks again, and sorry for the evolution of the question!
Please try to provide a reproducible example, but if you have data in a data.frame, you can subset it using the quantile function as the logical test. For instance, in the following data we want to select only rows from the dataframe where the value of the measured variable 'Val' is above the bottom quartile:
# set.seed so you can reproduce these values exactly on your system
set.seed(39856)
df <- data.frame( ID = 1:10 , Val = runif(10) )
df
ID Val
1 1 0.76487516
2 2 0.59755578
3 3 0.94584374
4 4 0.72179297
5 5 0.04513418
6 6 0.95772248
7 7 0.14566118
8 8 0.84898704
9 9 0.07246594
10 10 0.14136138
# Now to select only rows where the value of our measured variable 'Val' is above the bottom 25% quartile
df[ df$Val > quantile(df$Val , 0.25 ) , ]
ID Val
1 1 0.7648752
2 2 0.5975558
3 3 0.9458437
4 4 0.7217930
6 6 0.9577225
7 7 0.1456612
8 8 0.8489870
# And check the value of the bottom 25% quantile...
quantile(df$Val , 0.25 )
25%
0.1424363
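Applied to the paired data in the question (assuming a data frame Values with columns Abundance_O and Abundance_S, as in the original script), you can keep a row only when both values clear their column's lower quartile:
keep <- Values$Abundance_O > quantile(Values$Abundance_O, 0.25) &
        Values$Abundance_S > quantile(Values$Abundance_S, 0.25)
Values <- Values[keep, ]
cor(Values$Abundance_O, Values$Abundance_S)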
Although this is an old question, I came across it during research of my own and I arrived at a solution that someone may be interested in.
I first defined a function which converts a numerical vector into its quantile groups. The parameter n determines the number of quantile groups (n = 4 for quartiles, n = 10 for deciles).
qgroup = function(numvec, n = 4){
qtile = quantile(numvec, probs = seq(0, 1, 1/n))
out = sapply(numvec, function(x) sum(x >= qtile[-(n+1)]))
return(out)
}
Function example:
v = 1:20
qgroup(v)
[1] 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4
Consider now the following data:
library(data.table)
dt = data.table(
A0 = runif(100),
A1 = runif(100)
)
We apply qgroup() across the data to obtain two quartile group columns:
cols = colnames(dt)
qcols = c('Q0', 'Q1')
dt[, (qcols) := lapply(.SD, qgroup), .SDcols = cols]
head(dt)
A0 A1 Q0 Q1
1: 0.72121846 0.1908863 3 1
2: 0.70373594 0.4389152 3 2
3: 0.04604934 0.5301261 1 3
4: 0.10476643 0.1108709 1 1
5: 0.76907762 0.4913463 4 2
6: 0.38265848 0.9291649 2 4
Lastly, we only include rows for which both quartile groups are above the first quartile:
dt = dt[Q0 > 1 & Q1 > 1]
