The concentration of germs on the hands after j surface contacts can be described by the following recursive relationship:
H[j+1] = H[j] + T[j] * (S[j] - H[j])
where S is the concentration on the surface the hand touches (assumed random for simplicity) and T is the transfer efficiency of each contact. I would like to calculate the eventual hand concentration, starting from a concentration of zero.
I have a data frame with a surface concentration and a transfer efficiency for each contact. There are two groups, a and b, and within each group I assume the surfaces are touched sequentially in row order (1:nrow(df)):
df <- data.frame(S = runif(10)*100, T = runif(10),g=rep(c("a","b"),each=5))
I would like to compute the running value of H by group, using dplyr where possible, with one special case:
If g == "a", the starting value of H is 0.
If g == "b", the starting value of H is the last value of H from when g == "a".
Here is an approach similar to the one shown by @AnilGoyal, for a generalized case:
library(dplyr)
library(purrr)
df %>%
  mutate(H = accumulate2(S, T * !lead(!duplicated(g), default = FALSE),
                         .init = 0, ~ ..1 + ..3 * (..2 - ..1))[-(n() + 1)])
For the sake of completeness, and taking cues from Arun and Onyambu (on a separate question), I am adding a base R answer here too.
transform(df, H = Reduce(function(.x, .y) .x + df$T[.y] * (df$S[.y] - .x) * !c(!duplicated(df$g)[-1], 0)[.y],
                         seq(nrow(df)),
                         init = 0,
                         accumulate = TRUE)[-(1 + nrow(df))])
S T g H
1 37.698250 0.8550377 a 0.00000
2 3.843585 0.4722659 a 32.23342
3 33.150788 0.3684791 a 18.82587
4 8.948116 0.8893603 a 24.10430
5 57.061844 0.5452377 a 10.62499
6 49.648827 0.7719067 b 10.62499
7 95.403697 0.5835950 b 40.74775
8 10.598677 0.1220491 b 72.64469
9 91.913365 0.2166443 b 65.07203
10 69.644200 0.2603413 b 70.88705
Earlier Answer
This is a slight variation of my friend's answer above; I hope it serves your purpose. The only assumption I am making is that your data is already sorted by group and that a precedes b (exactly as shown in the sample). Since you have not given a random seed, I am also using the same data my friend used.
Strategy/hack: I set T to 0 at the last row of group a inside the accumulate2 call, so that the last value of H in group a is carried over as the first value of group b.
library(tidyverse)
df <- read.table(header = TRUE, text = ' S T g
1 37.698250 0.8550377 a
2 3.843585 0.4722659 a
3 33.150788 0.3684791 a
4 8.948116 0.8893603 a
5 57.061844 0.5452377 a
6 49.648827 0.7719067 b
7 95.403697 0.5835950 b
8 10.598677 0.1220491 b
9 91.913365 0.2166443 b
10 69.644200 0.2603413 b')
df %>%
  mutate(H = accumulate2(S, replace(T, length(g[g == 'a']), 0),
                         .init = 0, ~ ..1 + ..3 * (..2 - ..1))[-(1 + n())])
S T g H
1 37.698250 0.8550377 a 0.00000
2 3.843585 0.4722659 a 32.23342
3 33.150788 0.3684791 a 18.82587
4 8.948116 0.8893603 a 24.10430
5 57.061844 0.5452377 a 10.62499
6 49.648827 0.7719067 b 10.62499
7 95.403697 0.5835950 b 40.74775
8 10.598677 0.1220491 b 72.64469
9 91.913365 0.2166443 b 65.07203
10 69.644200 0.2603413 b 70.88705
#check - formula
#H[j+1]=H[j]+T[j]*(S[j]-H[j])
# for j =2
# H[2] = H[1] + T[1] * (S[1] -H[1])
0 + 0.8550377 * (37.698250 - 0)
#> [1] 32.23342
#for j=7 (second row group b)
#H[6] + T[6] * (S[6] - H[6])
10.62499 + 0.7719067 * (49.648827 - 10.62499)
#> [1] 40.74775
Created on 2021-07-10 by the reprex package (v2.0.0)
Here is another generalized version I would use for this question:
df$H <- Reduce(function(x, y) {
  x + df$T[y] * (df$g[y] == df$g[y + 1]) * (df$S[y] - x)
}, seq_len(nrow(df))[-nrow(df)], init = 0, accumulate = TRUE)
df
S T g H
1 37.698250 0.8550377 a 0.00000
2 3.843585 0.4722659 a 32.23342
3 33.150788 0.3684791 a 18.82587
4 8.948116 0.8893603 a 24.10430
5 57.061844 0.5452377 a 10.62499
6 49.648827 0.7719067 b 10.62499
7 95.403697 0.5835950 b 40.74775
8 10.598677 0.1220491 b 72.64469
9 91.913365 0.2166443 b 65.07203
10 69.644200 0.2603413 b 70.88705
Related
I'm rewriting some code, and I am currently creating a small population model. I have re-created the current model function below from a book; it's a simple population model based on a few parameters. I've left them at their defaults and returned the result. Everything works well. However, I was wondering whether I could somehow eliminate the loop from the function.
I know R is great because of vectorized calculation, but I'm not sure whether that is possible in this case. I thought of using something like lead/lag to do it, but would that work? Perhaps not, as things need to be calculated sequentially?
# Nt = numbers at start of time t
# Ct = numbers removed at the end of time t
# N0 = numbers at time 0
# r  = intrinsic rate of population growth
# K  = carrying capacity
mod_fun = function (r = 0.5, K = 1000, N0 = 50, Ct = 0, Yrs = 10, p = 1)
{
  # number of time steps is the year value plus 1
  yr1 <- Yrs + 1
  # sequence of length yr1, from year 1 to Yrs + 1
  years <- seq(1, yr1, 1)
  # vector of length Yrs + 1 to hold the population
  pop <- numeric(yr1)
  # population at time 0
  pop[1] <- N0
  # loop that calculates the model for each year after the first
  for (i in 2:yr1) {
    # Nt is always the population size from the previous step
    Nt <- pop[i - 1]
    pop[i] <- max((Nt + (r * Nt/p) * (1 - (Nt/K)^p) - Ct), 0)
  }
  # pop2 is pop without the first element (population at the end of each step)
  pop2 <- pop[2:yr1]
  # bind together years (sequence from 1 to Yrs + 1),
  # pop, the population at the start of step t, and
  # pop2, the population at the end of step t
  out <- cbind(year = years, nt = pop, nt1 = c(pop2, NA))
  # set row names to the years and drop the extra final row
  rownames(out) <- years
  out <- out[-yr1, ]
  # return the result
  return(out)
}
result = mod_fun()
This is what the output looks like. Basically, working row-wise from row 1 with the starting population of 50, the loop calculates nt1, sets the next row's nt to lag(nt1), and continues in the same fashion.
result
#> year nt nt1
#> 1 1 50.0000 73.7500
#> 2 2 73.7500 107.9055
#> 3 3 107.9055 156.0364
#> 4 4 156.0364 221.8809
#> 5 5 221.8809 308.2058
#> 6 6 308.2058 414.8133
#> 7 7 414.8133 536.1849
#> 8 8 536.1849 660.5303
#> 9 9 660.5303 772.6453
#> 10 10 772.6453 860.4776
Created on 2022-04-24 by the reprex package (v2.0.1)
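Because each year's value depends on the value from the year before, the recursion can't be expressed with lead()/lag(), but the explicit loop can be replaced with Reduce() using accumulate = TRUE: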
mod_fun = function(r = 0.5, K = 1000, N0 = 50, Ct = 0, Yrs = 10, p = 1) {
  years <- seq_len(Yrs)
  pop <- Reduce(function(Nt, y) max((Nt + (r * Nt/p) * (1 - (Nt/K)^p) - Ct), 0),
                years, init = N0, accumulate = TRUE)
  data.frame(year = years, nt = head(pop, -1), nt1 = pop[-1])
}
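Calling it with the defaults reproduces the loop version's result:
mod_fun()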
year nt nt1
1 1 50.0000 73.7500
2 2 73.7500 107.9055
3 3 107.9055 156.0364
4 4 156.0364 221.8809
5 5 221.8809 308.2058
6 6 308.2058 414.8133
7 7 414.8133 536.1849
8 8 536.1849 660.5303
9 9 660.5303 772.6453
10 10 772.6453 860.4776
I am working with the R programming language.
I generated the following random data set that contains x and y points:
set.seed(123)
x_cor = rnorm(10,100,100)
y_cor = rnorm(10,100,100)
my_data = data.frame(x_cor,y_cor)
x_cor y_cor
1 43.95244 222.40818
2 76.98225 135.98138
3 255.87083 140.07715
4 107.05084 111.06827
5 112.92877 44.41589
6 271.50650 278.69131
7 146.09162 149.78505
8 -26.50612 -96.66172
9 31.31471 170.13559
10 55.43380 52.72086
I am trying to write a "greedy search" algorithm that finds which point is located the shortest distance from some starting point.
For example, suppose we start at the point (-26.50612, -96.66172) and define a distance function:
distance <- function(x1, x2, y1, y2) {
  dist <- sqrt((x1 - x2)^2 + (y1 - y2)^2)
  return(dist)
}
Then I calculated the distance between (-26.50612, -96.66172) and each point:
results <- list()
for (i in 1:10) {
  distance_i <- distance(-26.50612, my_data[i, 1], -96.66172, my_data[i, 2])
  index <- i
  my_data_i <- data.frame(distance_i, index)
  results[[i]] <- my_data_i
}
results_df <- data.frame(do.call(rbind.data.frame, results))
However, I don't think this is working because the distance between the starting point -26.50612, -96.66172 and itself is not 0 (see 8th row):
distance_i index
1 264.6443 1
2 238.7042 2
3 191.3048 3
4 185.0577 4
5 151.7506 5
6 306.4785 6
7 331.2483 7
8 223.3056 8
9 213.3817 9
10 331.6455 10
My Question:
Can someone please show me how to write a function that correctly finds the nearest point to an initial point,
(Step 1) then removes that nearest point and the initial point from "my_data",
(Step 2) then re-calculates the nearest point in "my_data" from the nearest point identified in Step 1 (i.e. with the removed data),
and, in the end, shows the path that was taken (e.g. 5, 7, 1, 9, 3, etc.).
Can someone please show me how to do this?
Thanks!
This could be helpful, and I think you can solve the further tasks yourself:
start <- c(x= -26.50612, y= -96.66172)
library(dplyr)
my_data <- data.frame(x_cor, y_cor) %>%
  rowwise() %>%
  mutate(dist = distance(start["x"], x_cor, start["y"], y_cor))
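If it helps, a rough iterative way to finish the greedy search could look something like the sketch below (greedy_path is just an illustrative name; ties are broken by the first minimum):
greedy_path <- function(start, pts) {
  pts$idx <- seq_len(nrow(pts))          # remember the original row numbers
  path <- integer(0)
  current <- start
  while (nrow(pts) > 0) {
    d <- sqrt((current["x"] - pts$x_cor)^2 + (current["y"] - pts$y_cor)^2)
    m <- which.min(d)                    # nearest remaining point
    path <- c(path, pts$idx[m])
    current <- c(x = pts$x_cor[m], y = pts$y_cor[m])
    pts <- pts[-m, ]                     # remove it and continue from there
  }
  path                                   # row indices in visiting order
}
greedy_path(start, data.frame(x_cor, y_cor))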
The solution is implemented as a recursive function distmin, which finds the point in a data frame Y closest to an input point x and then calls itself with that closest point and the data frame without it as arguments.
EDIT: I reimplemented distmin to use dataframes.
my_data = data.frame(x_cor, y_cor) |>
  mutate(idx = row_number())

distmin <- function(x, Y) {
  if (nrow(Y) == 0) {
    NULL
  } else {
    dst <- sqrt((x$x_cor - Y$x_cor)^2 + (x$y_cor - Y$y_cor)^2)
    m <- which.min(dst)
    res <- data.frame(x, dist = dst[m], nearest = Y[m, "idx"])
    rbind(res, distmin(Y[m, ], Y[-m, ]))
  }
}
N <- 5
distmin(my_data[N,], my_data[-N,])
##> x_cor y_cor idx dist nearest
##> 5 112.92877 44.41589 5 58.09169 10
##> 10 55.43380 52.72086 10 77.90211 4
##> 4 107.05084 111.06827 4 39.04847 2
##> 2 76.98225 135.98138 2 57.02661 9
##> 9 31.31471 170.13559 9 53.77858 1
##> 1 43.95244 222.40818 1 125.32571 7
##> 7 146.09162 149.78505 7 110.20762 3
##> 3 255.87083 140.07715 3 139.49323 6
##> 6 271.50650 278.69131 6 479.27176 8
The following plot shows the order in which the points are visited:
library(ggplot2)
distmin(my_data[N,], my_data[-N,]) |>
  mutate(ord = row_number()) |>
  ggplot(aes(x = x_cor, y = y_cor)) +
  geom_text(aes(label = ord))
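To report just the path that was taken (the row indices in visiting order), you can combine the starting index with the nearest column of the result:
res <- distmin(my_data[N, ], my_data[-N, ])
c(res$idx[1], res$nearest)
##>  [1]  5 10  4  2  9  1  7  3  6  8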
I am trying to create the adstock effect for some variable (adstock here is the current value plus the previous adstock value multiplied by an adstock rate). I have a table abc that has two columns: GEOG (A, B, C) and GRPs (1 to 6), for a total of 18 observations. I would like to create a variable b by taking the first observation of the first GEOG and adstocking it by, say, .5. Then, when I get to the first observation of the second GEOG, reinitialize b to GRPs and do it again. I have code that works really well with only one geography, but I cannot figure out how to do it BY geography. Coming from a different statistical language, I am still wrapping my head around the way R works. Can anyone help? Thanks in advance. Here is the code that works for one GEOG:
rate1 = .5
rate2 = 0
for (i in 1:nrow(abc)) {
  if (i == 1)
    abc[i, 3] <- abc[i, 2]
  else if (i == 2)
    # Effect = impression + last week's effect * decay rate
    abc[i, 3] <- abc[i, 2] + (abc[i - 1, 3] * rate1)
  else
    # Effect = impression + last week's effect * decay rate
    abc[i, 3] <- abc[i, 2] + (abc[i - 1, 3] * rate1) + (abc[i - 2, 3] * rate2)
}
Output:
GEOG a b
A 1 1
A 2 2.5
A 3 4.25
A 4 6.125
A 5 8.0625
A 6 10.03125
B 1 1
B 2 2.5
B 3 4.25
B 4 6.125
B 5 8.0625
C 1 1
C 2 2.5
C 3 4.25
C 4 6.125
C 5 8.0625
transfo <- function(df, rate1 = 0.5, rate2 = 0) {
  b <- df[["a"]]
  for (i in seq_along(b)) {
    if (i == 2) {
      b[i] <- b[i] + rate1 * b[i - 1]
    } else if (i > 2) {
      b[i] <- b[i] + rate1 * b[i - 1] + rate2 * b[i - 2]
    }
  }
  df[["b"]] <- b
  df
}

library(tidyverse)  # dplyr, tidyr and purrr for the grouped pipeline below

abc %>%
  group_by(GEOG) %>%
  nest() %>%
  mutate(data = map(data, transfo)) %>%
  unnest(data)
Learn more at http://r4ds.had.co.nz/many-models.html.
You can use the stats::filter function and dplyr.
library(tidyverse)
abc %>%
  group_by(GEOG) %>%
  # assuming the GRP column is named `a`, as in the output table above
  mutate(adstock = as.numeric(stats::filter(x = a, filter = 0.5, method = "recursive"))) %>%
  ungroup()
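As a quick sanity check, with rate2 = 0 the recursive filter reproduces the loop's output for a single geography, since method = "recursive" computes y[i] = x[i] + 0.5 * y[i - 1]:
as.numeric(stats::filter(1:6, filter = 0.5, method = "recursive"))
#> [1]  1.00000  2.50000  4.25000  6.12500  8.06250 10.03125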
I have written a function which takes a subset of the data based on the value of the name column. It computes the outliers for the column "mark" and replaces them. However, when I try to combine these different subsets, the order of my elements changes. Is there any way I can maintain the order of the elements in the column "mark"?
My data set is:
name mark
A 100.0
B 0.5
C 100.0
A 50.0
B 90.0
B 1000.0
C 1200.0
C 5000.0
A 210.0
The function which I have written is:
data.frame(do.call("rbind", as.list(by(data, data$name,
  function(x) {
    apply(x[, .(mark)], 2, function(y) {
      y[y > (quantile(x$mark, na.rm = TRUE)[[3]][[1]] + 1.5 * IQR(x$mark))] <- (quantile(x$mark, na.rm = TRUE)[[3]][[1]] + 1.5 * IQR(x$mark))
      y
    })
  }))))
The result of the above function is the first column below (I've manually added back name for illustrative purposes):
mark NAME
100.000 ----- A
50.000 ----- A
210.000 ----- A
0.500 ----- B
90.000 ----- B
839.625 ----- B
100.000 ----- C
1200.000 ----- C
4875.000 ----- C
In the above result, the order of the values in the mark column has changed. Is there any way I can maintain the original order of the elements?
Are you sure that code is doing what you think it is?
It looks like you're replacing any value greater than the median (third returned value of quantile) with the median + 1.5*IQR. Maybe that's what you intend, I don't know. The bigger problem is that you're doing that in an apply function, so it's going to re-calculate that median and IQR each iteration, updated with the previous rows already being changed. I'd wager that's not what you intend, but I suppose I've seen stranger.
A better option might be to create an external function to do the work, which takes in all of the data, does the calculation, then outputs all the data. I like dplyr for this simply because it's clean.
Reading your data in (why the "----"?)
scores <- read.table(text="
name mark
A 100.0
B 0.5
C 100.0
A 50.0
B 90.0
B 1000.0
C 1200.0
C 5000.0
A 210.0", header=TRUE)
and creating a function that does something a little more sensible: it replaces any value greater than the 75% quantile (referenced by name so you know what it is), or less than the 25% quantile, with that limiting value
scale_outliers <- function(data) {
  lim <- quantile(data, na.rm = TRUE)
  data[data > lim["75%"]] <- lim["75%"]
  data[data < lim["25%"]] <- lim["25%"]
  return(data)
}
Chaining this processing into dplyr::mutate is neat, and the result can then be passed on to ggplot. Here's the original data:
library(tidyverse)  # for dplyr and ggplot2
gg1 <- scores %>% ggplot(aes(x = name, y = mark))
gg1 <- gg1 + geom_point() + geom_boxplot() + coord_cartesian(ylim = range(scores$mark))
gg1
And if we alter it with the new function, we get the data back without the rows being reordered:
scores %>% mutate(new_mark = scale_outliers(mark))
#> name mark new_mark
#> 1 A 100.0 100
#> 2 B 0.5 90
#> 3 C 100.0 100
#> 4 A 50.0 90
#> 5 B 90.0 90
#> 6 B 1000.0 1000
#> 7 C 1200.0 1000
#> 8 C 5000.0 1000
#> 9 A 210.0 210
and we can plot that,
gg2 <- scores %>% mutate(new_mark = scale_outliers(mark)) %>% ggplot(aes(x=name, y=new_mark))
gg2 <- gg2 + geom_point() + geom_boxplot() + coord_cartesian(ylim=range(scores$mark))
gg2
Best of all, if you now want to do that quantile comparison group-wise (say, by the name column), it's as easy as adding dplyr::group_by(name):
gg3 <- scores %>% group_by(name) %>% mutate(new_mark = scale_outliers(mark)) %>% ggplot(aes(x=name, y=new_mark))
gg3 <- gg3 + geom_point() + geom_boxplot() + coord_cartesian(ylim=range(scores$mark))
gg3
A slightly refactored version of Hack-R's answer -- you can add an index to your data.table:
library(data.table)
data <- data.table(name = c("A", "B", "C", "A", "B", "B", "C", "C", "A"),
                   mark = c(100, 0.5, 100, 50, 90, 1000, 1200, 5000, 210))
data[, i := .I]
Then you perform your calculation but you keep the name and i:
df <- data.frame(do.call("rbind", as.list(
  by(data, data$name,
     function(x) cbind(i = x$i,
                       name = x$name,
                       apply(x[, .(mark)], 2, function(y) {
                         y[y > (quantile(x$mark, na.rm = TRUE)[[3]][[1]] + 1.5 * IQR(x$mark))] <- (quantile(x$mark, na.rm = TRUE)[[3]][[1]] + 1.5 * IQR(x$mark))
                         y
                       }))))))
And finally you order using the index:
df[order(df$i),]
i name mark
1 1 A 100
4 2 B 0.5
7 3 C 100
2 4 A 50
5 5 B 90
6 6 B 839.625
8 7 C 1200
9 8 C 4875
3 9 A 210
I am creating correlations using R, with the following code:
Values<-read.csv(inputFile, header = TRUE)
O<-Values$Abundance_O
S<-Values$Abundance_S
cor(O,S)
pear_cor<-round(cor(O,S),4)
outfile<-paste(inputFile, ".jpg", sep = "")
jpeg(filename = outfile, width = 15, height = 10, units = "in", pointsize = 10, quality = 75, bg = "white", res = 300, restoreConsole = TRUE)
rx<-range(0,20000000)
ry<-range(0,200000)
plot(rx,ry, ylab="S", xlab="O", main="O vs S", type="n")
points(O,S, col="black", pch=3, lwd=1)
mtext(sprintf("%s %.4f", "pearson: ", pear_cor), adj=1, padj=0, side = 1, line = 4)
dev.off()
pear_cor
I now need to find the lower quartile for each set of data and exclude the data that falls within the lower quartile. I would then like to rewrite the data without those values and use the new column of data in the correlation analysis (because I want to threshold the data by the lower quartile). If there is a way I can write this so that it is easy to change the threshold by passing arguments from Java (as I do with the input file name), that's even better!
Thank you so much.
I have now implemented the answer below and it is working; however, I need to keep the pairs of data together for the correlation. Here is an example of my data (from the csv):
Abundance_O Abundance_S
3635900.752 1390.883073
463299.4622 1470.92626
359101.0482 989.1609251
284966.6421 3248.832403
415283.663 2492.231265
2076456.856 10175.48946
620286.6206 5074.268802
3709754.717 269.6856808
803321.0892 118.2935093
411553.0203 4772.499758
50626.83554 17.29893001
337428.8939 203.3536852
42046.61549 152.1321255
1372013.047 5436.783169
939106.3275 7080.770535
96618.01393 1967.834701
229045.6983 948.3087208
4419414.018 23735.19352
So I need to exclude both values in a row if one of them does not meet my quartile threshold (the 0.25 quantile). For example, if the lower quartile for O were 45000, then the row "42046.61549, 152.1321255" would be removed. Is this possible? If I read both columns in as a data frame, can I search each column separately? Or should I find the quartiles and then feed those values into code that removes the appropriate rows?
Thanks again, and sorry for the evolution of the question!
Please try to provide a reproducible example, but if you have data in a data.frame, you can subset it using the quantile function as the logical test. For instance, in the following data we want to select only rows from the dataframe where the value of the measured variable 'Val' is above the bottom quartile:
# set.seed so you can reproduce these values exactly on your system
set.seed(39856)
df <- data.frame( ID = 1:10 , Val = runif(10) )
df
ID Val
1 1 0.76487516
2 2 0.59755578
3 3 0.94584374
4 4 0.72179297
5 5 0.04513418
6 6 0.95772248
7 7 0.14566118
8 8 0.84898704
9 9 0.07246594
10 10 0.14136138
# Now to select only rows where the value of our measured variable 'Val' is above the bottom 25% quartile
df[ df$Val > quantile(df$Val , 0.25 ) , ]
ID Val
1 1 0.7648752
2 2 0.5975558
3 3 0.9458437
4 4 0.7217930
6 6 0.9577225
7 7 0.1456612
8 8 0.8489870
# And check the value of the bottom 25% quantile...
quantile(df$Val , 0.25 )
25%
0.1424363
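Applied to the two-column data in the question, the same idea keeps the pairs together by requiring both values to be above their own lower quartile (a sketch using the column names from the question):
vals <- read.csv(inputFile, header = TRUE)
keep <- vals$Abundance_O > quantile(vals$Abundance_O, 0.25, na.rm = TRUE) &
        vals$Abundance_S > quantile(vals$Abundance_S, 0.25, na.rm = TRUE)
vals <- vals[keep, ]   # rows where either value falls in its bottom quartile are dropped
cor(vals$Abundance_O, vals$Abundance_S)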
Although this is an old question, I came across it during my own research and arrived at a solution that someone may find interesting.
I first defined a function which converts a numeric vector into its quantile groups. The parameter n determines the number of quantile groups (n = 4 for quartiles, n = 10 for deciles).
qgroup = function(numvec, n = 4){
  qtile = quantile(numvec, probs = seq(0, 1, 1/n))
  out = sapply(numvec, function(x) sum(x >= qtile[-(n+1)]))
  return(out)
}
Function example:
v = rep(1:20)
qgroup(v)
[1] 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4
Consider now the following data:
library(data.table)
dt = data.table(
  A0 = runif(100),
  A1 = runif(100)
)
We apply qgroup() across the data to obtain two quartile group columns:
cols = colnames(dt)
qcols = c('Q0', 'Q1')
dt[, (qcols) := lapply(.SD, qgroup), .SDcols = cols]
head(dt)
           A0        A1 Q0 Q1
1: 0.72121846 0.1908863 3 1
2: 0.70373594 0.4389152 3 2
3: 0.04604934 0.5301261 1 3
4: 0.10476643 0.1108709 1 1
5: 0.76907762 0.4913463 4 2
6: 0.38265848 0.9291649 2 4
Lastly, we only include rows for which both quartile groups are above the first quartile:
dt = dt[Q0 > 1 & Q1 > 1]
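Applied back to the two-column data from the question, this approach would look roughly as follows (a sketch; Q_O and Q_S are just illustrative column names):
library(data.table)
vals <- as.data.table(read.csv(inputFile, header = TRUE))
vals[, c("Q_O", "Q_S") := lapply(.SD, qgroup), .SDcols = c("Abundance_O", "Abundance_S")]
vals <- vals[Q_O > 1 & Q_S > 1]   # drop rows where either value is in the bottom quartile
cor(vals$Abundance_O, vals$Abundance_S)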