Problem passing extra arguments to lapply() with uniroot function

So I'm trying to find the roots for specific values of Y with uniroot(). I have them all in a column in a dataframe, and I want to create a new column with the root found for each of the Ys in the original column via lapply().
The way I create the function that uniroot takes as an argument to find its roots is by subtracting the Y value from the last coefficient of this function, and that Y value is passed as an extra argument to uniroot (as described on the uniroot help page).
After a couple of hours trying to figure out what was happening, I realized that the value lapply() feeds to the function is the Y, but it was being read as the "interval" argument inside uniroot, which gave me errors about that argument.
I think I could implement this another way, but it'd be much better and simpler if this approach has a solution.
pol_mod <- lm(abs_p ~ poly(patron, 5, raw = TRUE), data = bradford)
a <- as.numeric(coefficients(pol_mod)[6])
b <- as.numeric(coefficients(pol_mod)[5])
c <- as.numeric(coefficients(pol_mod)[4])
d <- as.numeric(coefficients(pol_mod)[3])
e <- as.numeric(coefficients(pol_mod)[2])
f <- as.numeric(coefficients(pol_mod)[1])
fs <- function(x, y) {a*x^5 + b*x^4 + c*x^3 + d*x^2 + e*x + f - y}
interpol <- function(y, fs) {
  return(uniroot(fs, y = y, interval = c(0, 2000)))
}
bradford$concentracion <- lapply(bradford$abs_m, interpol, fs=fs)
The error I'm getting:
Error in uniroot(fs, y = y, interval = c(0, 2000)) :
  f.lower = f(lower) is NA
Needless to say, everything works when applied outside of lapply().
I'd be really happy if someone could lend a hand! Thanks in advance!
EDIT: This is what the dataframe looks like.
bradford
# A tibble: 9 x 3
patron abs_p abs_m
<dbl> <dbl> <dbl>
1 0 0 1.57
2 25 0.041 1.27
3 125 0.215 1.59
4 250 0.405 1.61
5 500 0.675 0.447
6 750 0.97 0.441
7 1000 1.23 NA
8 1500 1.71 NA
9 2000 2.04 NA
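A note on reproducing the error (my own sketch, not part of the original question): abs_m contains NA in rows 7-9 above, so fs() evaluates to NA at the interval endpoints and uniroot() stops with exactly this "f.lower = f(lower) is NA" message. Guarding against NA, and extracting $root so the new column is a plain numeric vector rather than a list of uniroot results, avoids both problems:
# Hypothetical fix: skip NA inputs and return only the numeric root
interpol <- function(y, fs) {
  if (is.na(y)) return(NA_real_)  # uniroot() cannot bracket a root when y is NA
  uniroot(fs, y = y, interval = c(0, 2000))$root
}
bradford$concentracion <- sapply(bradford$abs_m, interpol, fs = fs)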

Related

R for loop vs lapply (performance) [duplicate]

It is often said that one should prefer lapply over for loops. There are some exceptions, as Hadley Wickham points out in his Advanced R book (http://adv-r.had.co.nz/Functionals.html): modifying in place, recursion, etc. The following is one of those cases.
Just for the sake of learning, I tried to rewrite a perceptron algorithm in functional form in order to benchmark relative performance.
Source: https://rpubs.com/FaiHas/197581.
Here is the code.
# prepare input
data(iris)
irissubdf <- iris[1:100, c(1, 3, 5)]
names(irissubdf) <- c("sepal", "petal", "species")
head(irissubdf)
irissubdf$y <- 1
irissubdf[irissubdf[, 3] == "setosa", 4] <- -1
x <- irissubdf[, c(1, 2)]
y <- irissubdf[, 4]
# perceptron function with for
perceptron <- function(x, y, eta, niter) {
  # initialize weight vector
  weight <- rep(0, dim(x)[2] + 1)
  errors <- rep(0, niter)
  # loop over number of epochs niter
  for (jj in 1:niter) {
    # loop through training data set
    for (ii in 1:length(y)) {
      # Predict binary label using Heaviside activation function
      z <- sum(weight[2:length(weight)] * as.numeric(x[ii, ])) + weight[1]
      if (z < 0) {
        ypred <- -1
      } else {
        ypred <- 1
      }
      # Change weight - the formula doesn't do anything
      # if the predicted value is correct
      weightdiff <- eta * (y[ii] - ypred) * c(1, as.numeric(x[ii, ]))
      weight <- weight + weightdiff
      # Update error function
      if ((y[ii] - ypred) != 0) {
        errors[jj] <- errors[jj] + 1
      }
    }
  }
  # weight to decide between the two species
  return(errors)
}
err <- perceptron(x, y, 1, 10)
### my rewriting in functional form: auxiliary function
faux <- function(x, weight, y, eta) {
  err <- 0
  z <- sum(weight[2:length(weight)] * as.numeric(x)) + weight[1]
  if (z < 0) {
    ypred <- -1
  } else {
    ypred <- 1
  }
  # Change weight - the formula doesn't do anything
  # if the predicted value is correct
  weightdiff <- eta * (y - ypred) * c(1, as.numeric(x))
  weight <<- weight + weightdiff
  # Update error function
  if ((y - ypred) != 0) {
    err <- 1
  }
  err
}
weight <- rep(0, 3)
weightdiff <- rep(0, 3)
f <- function() {
  t <- replicate(10, sum(unlist(lapply(seq_along(irissubdf$y), function(i) {
    faux(irissubdf[i, 1:2], weight, irissubdf$y[i], 1)
  }))))
  weight <<- rep(0, 3)
  t
}
I did not expect any consistent improvement due to the aforementioned issues, but I was nevertheless really surprised when I saw the sharp worsening using lapply and replicate.
I obtained these results using the microbenchmark function from the microbenchmark library.
What could the reasons possibly be?
Could it be some memory leak?
                                                      expr       min         lq       mean     median         uq        max neval
                                                       f() 48670.878 50600.7200 52767.6871 51746.2530 53541.2440 109715.673   100
 perceptron(as.matrix(irissubdf[1:2]), irissubdf$y, 1, 10)  4184.131  4437.2990  4686.7506  4532.6655  4751.4795   6513.684   100
perceptronC(as.matrix(irissubdf[1:2]), irissubdf$y, 1, 10)    95.793   104.2045   123.7735   116.6065   140.5545    264.858   100
The first entry is the lapply/replicate function,
the second is the function with for loops,
and the third is the same function in C++ using Rcpp.
According to Roland, here is the profiling of the function. I am not sure I can interpret it the right way, but it looks to me like most of the time is spent in subsetting.
Function profiling
First of all, it is a long-debunked myth that for loops are any slower than lapply. The for loops in R have been made a lot more performant and are currently at least as fast as lapply.
That said, you have to rethink your use of lapply here. Your implementation demands assigning to the global environment, because your code requires you to update the weight during the loop. And that is a valid reason not to consider lapply.
lapply is a function you should use for its results, not for its side effects: it combines the results in a list automatically and doesn't mess with the environment you work in, contrary to a for loop. The same goes for replicate. See also this question:
Is R's apply family more than syntactic sugar?
The reason your lapply solution is far slower is that your way of using it creates a lot more overhead:
replicate is nothing but sapply internally, so you actually combine sapply and lapply to implement your double loop. sapply creates extra overhead because it has to test whether or not the result can be simplified, so a for loop will actually be faster than using replicate.
Inside your lapply anonymous function, you have to access the dataframe for both x and y for every observation. This means that, contrary to your for loop, e.g. the function $ has to be called every time.
Because you use these high-end functions, your lapply solution calls 49 functions, compared to your for solution which only calls 26. These extra functions for the lapply solution include calls to functions like match, structure, [[, names, %in%, sys.call, duplicated, ...
None of these functions are needed by your for loop, as it doesn't do any of these checks.
If you want to see where this extra overhead comes from, look at the internal code of replicate, unlist, sapply and simplify2array.
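For instance, printing replicate at the console shows it is a thin wrapper around sapply (base R source, quoted from memory; check your own R version):
replicate
# function (n, expr, simplify = "array")
# sapply(integer(n), eval.parent(substitute(function(...) expr)),
#        simplify = simplify)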
You can use the following code to get a better idea of where you lose performance with lapply. Run it line by line!
Rprof(interval = 0.0001)
f()
Rprof(NULL)
fprof <- summaryRprof()$by.self

Rprof(interval = 0.0001)
perceptron(as.matrix(irissubdf[1:2]), irissubdf$y, 1, 10)
Rprof(NULL)
perprof <- summaryRprof()$by.self

fprof$Fun <- rownames(fprof)
perprof$Fun <- rownames(perprof)
Selftime <- merge(fprof, perprof,
                  all = TRUE,
                  by = 'Fun',
                  suffixes = c(".lapply", ".for"))

sum(!is.na(Selftime$self.time.lapply))
sum(!is.na(Selftime$self.time.for))
Selftime[order(Selftime$self.time.lapply, decreasing = TRUE),
         c("Fun", "self.time.lapply", "self.time.for")]
Selftime[is.na(Selftime$self.time.for), ]
There is more to the question of when to use for or lapply, and which one "performs" better. Sometimes speed is important, other times memory is. To complicate things further, the time complexity may not be what you expect; that is, different behavior can be observed at different scales, invalidating any blanket statement such as "faster than" or "at least as fast as". Finally, one performance metric that is often overlooked is thought-to-code; premature optimization, yada yada.
That said, in the Introduction to R the authors hint at some performance concerns:
Warning: for() loops are used in R code much less often than in compiled languages. Code that takes a ‘whole object’ view is likely to be both clearer and faster in R.
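As a tiny illustration of that 'whole object' view (my own sketch, not from the thread):
x <- rnorm(1e6)
# loop view: allocate, then fill element by element
y <- numeric(length(x))
for (i in seq_along(x)) y[i] <- x[i]^2
# whole-object view: one vectorized call, clearer and typically much faster
y <- x^2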
Given a similar use case, input and output, disregarding user preferences, is one clearly better than the other?
Benchmark - Fibonacci sequence
I compare approaches to compute the Fibonacci numbers from 1 to N (inspired by the benchmarkme package), shunning the 2nd Circle and ensuring that inputs and outputs for each approach are the same. Four additional approaches are included to throw some oil on the fire: a vectorized approach, purrr::map, and the *apply variants vapply and sapply.
fib <- function(x, ...){
  x <- 1:x ; phi = 1.6180339887498949 ; v = \() vector("integer", length(x))
  bench::mark(
    vector = {
      y = v(); y = ((rep(phi, length(x))^x) - ((-rep(phi, length(x)))^-x)) / sqrt(5); y},
    lapply = {
      y = v(); y = unlist(lapply(x, \(.) (phi^. - (-phi)^(-.)) / sqrt(5)), use.names = F); y},
    loop = {
      y = v(); `for`(i, x, {y[i] = (phi^i - (-phi)^(-i)) / sqrt(5)}); y},
    sapply = {
      y = v(); y = sapply(x, \(.) (phi^. - (-phi)^(-.)) / sqrt(5)); y},
    vapply = {
      y = v(); y = vapply(x, \(.) (phi^. - (-phi)^(-.)) / sqrt(5), 1); y},
    map = {
      y = v(); y <- purrr::map_dbl(x, ~ (phi^. - (-phi)^(-.)) / sqrt(5)); y
    }, ..., check = T
  )[c(1:9)]
}
Here is a comparison of the performance, ranked by median time.
lapply(list(3e2, 3e3, 3e4, 3e5, 3e6, 3e7), fib) # n iterations specified separately
N = 300
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time
1 vector 38.8us 40.9us 21812. 8.44KB 0 1000 0 45.8ms
2 vapply 500us 545us 1653. 3.61KB 1.65 999 1 604ms
3 sapply 518us 556us 1725. 12.48KB 0 1000 0 580ms
4 lapply 513.4us 612.8us 1620. 6KB 8.14 995 5 614.2ms
5 loop 549.9us 633.6us 1455. 3.61KB 8.78 994 6 683.3ms
6 map 649.6us 754.6us 1312. 3.61KB 9.25 993 7 756.9ms
N = 3000
1 vector 769.7us 781.5us 1257. 82.3KB 1.26 999 1 794.83ms
2 vapply 5.38ms 5.58ms 173. 35.2KB 0.697 996 4 5.74s
3 sapply 5.59ms 5.83ms 166. 114.3KB 0.666 996 4 6.01s
4 loop 5.38ms 5.91ms 167. 35.2KB 8.78 950 50 5.69s
5 lapply 5.24ms 6.49ms 156. 58.7KB 8.73 947 53 6.07s
6 map 6.11ms 6.63ms 148. 35.2KB 9.13 942 58 6.35s
N = 30 000
1 vector 10.7ms 10.9ms 90.9 821KB 0.918 297 3 3.27s
2 vapply 57.3ms 60.1ms 16.4 351.66KB 0.741 287 13 17.5s
3 loop 59.2ms 60.7ms 15.9 352KB 16.7 146 154 9.21s
4 sapply 59.6ms 62.1ms 15.7 1.05MB 0.713 287 13 18.2s
5 lapply 57.3ms 67.6ms 15.1 586KB 20.5 127 173 8.43s
6 map 66.7ms 69.1ms 14.4 352KB 21.6 120 180 8.35s
N = 300 000
1 vector 190ms 193ms 5.14 8.01MB 0.206 100 4 19.45s
2 loop 693ms 713ms 1.40 3.43MB 7.43 100 532 1.19m
3 map 766ms 790ms 1.26 3.43MB 7.53 100 598 1.32m
4 vapply 633ms 814ms 1.33 3.43MB 0.851 100 39 45.8s
5 lapply 685ms 966ms 1.06 5.72MB 9.13 100 864 1.58m
6 sapply 694ms 813ms 1.27 12.01MB 0.810 100 39 48.1s
N = 3 000 000
1 vector 3.17s 3.21s 0.312 80.1MB 0.249 20 16 1.07m
2 vapply 8.22s 8.37s 0.118 34.3MB 4.97 20 845 2.83m
3 loop 8.3s 8.42s 0.119 34.3MB 4.35 20 733 2.81m
4 map 9.09s 9.17s 0.109 34.3MB 4.91 20 903 3.07m
5 lapply 10.42s 11.09s 0.0901 57.2MB 4.10 20 909 3.7m
6 sapply 10.43s 11.28s 0.0862 112.1MB 3.58 20 830 3.87m
N = 30 000 000
1 vector 44.8s 45.94s 0.0214 801MB 0.00854 10 4 7.8m
2 vapply 1.56m 1.6m 0.0104 343MB 0.883 10 850 16m
3 loop 1.56m 1.62m 0.00977 343MB 0.366 10 374 17.1m
4 map 1.72m 1.74m 0.00959 343MB 1.23 10 1279 17.4m
5 lapply 2.15m 2.22m 0.00748 572MB 0.422 10 565 22.3m
6 sapply 2.05m 2.25m 0.00747 1.03GB 0.405 10 542 22.3m
# Intel i5-8300H CPU @ 2.30GHz / R version 4.1.1 / purrr 0.3.4
The for and lapply approaches perform similarly, but lapply is greedier when it comes to memory, and a bit slower when the size of the input increases (for this task). Note that purrr::map memory usage is equivalent to the for loop's and superior to lapply's, in itself a debated topic. However, when the appropriate *apply function is used, here vapply, the performance is similar. But the choice can have a large impact on memory use, sapply being noticeably less memory efficient than vapply.
A peek under the hood reveals the reason for the different performance of the approaches. The for loop performs many type checks, resulting in some overhead. lapply, on the other hand, suffers from a flawed language design where lazy evaluation, or the use of promises, comes at a cost; the source code confirms that the X and FUN arguments to .Internal(lapply) are promises.
Vectorized approaches are fast and probably preferable to a for or lapply approach. Notice how the vectorized approach grows irregularly compared to the other approaches. However, the aesthetics of vectorized code may be a concern: which approach would you prefer to debug?
Overall, I'd say the choice between lapply and for is not something the average R user should ponder over. Stick to what is easiest to write, think about, and debug, or what is less (silently?) error prone. What is lost in performance will likely be canceled out by time saved writing. For performance-critical applications, make sure to run some tests with different input sizes and to properly chunk the code.
Actually, I tested the difference with a problem that I solved recently.
Just try it yourself.
My conclusion: there is no real difference, though in my case the for loop was insignificantly faster than lapply.
PS: I mostly tried to keep the same logic in use.
library(rbenchmark)  # benchmark() comes from the rbenchmark package

ds <- data.frame(matrix(rnorm(1000000), ncol = 8))
n <- c('a','b','c','d','e','f','g','h')
names(ds) <- n  # presumably intended: the functions below query columns 'a' and 'b'
func <- function(ds, target_col, query_col, value){
  return(unique(as.vector(ds[ds[query_col] == value, target_col])))
}
f1 <- function(x, y){
  named_list <- list()
  for (i in y){
    named_list[[i]] <- func(x, 'a', 'b', i)
  }
  return(named_list)
}
f2 <- function(x, y){
  list2 <- lapply(setNames(nm = y), func, ds = x, target_col = "a", query_col = "b")
  return(list2)
}
benchmark(f1(ds, n))  # was f1(ds2, n); ds2 is never defined in the snippet
benchmark(f2(ds, n))
As you can see, I wrote a simple routine to build a named_list from a dataframe: the func function extracts the column values, f1 uses a for loop to iterate through the dataframe, and f2 uses lapply.
On my computer I got these results:
        test replications elapsed relative user.self sys.self user.child sys.child
1 f1(ds2, n)          100  110.24        1   110.112        0          0         0

        test replications elapsed relative user.self sys.self user.child sys.child
1 f1(ds2, n)          100  110.24        1   110.112        0          0         0

Fill missing values with linear regression

I have a dataframe that contains 7 columns.
str(df)
'data.frame': 8760 obs. of 7 variables:
$ G1_d20_2014.SE1_ : num 25.1 25.1 25 25 25.1 ...
$ G1_d20_2014.SE4_ : num 42.4 42.3 42.3 42.3 42.3 ...
$ G1_d20_2014.SE7_ : num 34.4 34.4 34.4 34.4 34.4 ...
$ G1_d20_2014.SE22_: num 42.5 42.4 42.3 42.4 42.3 ...
$ G1_d20_2014.SE14_: num 52.5 52.5 52.5 52.5 52.4 ...
$ G1_d20_2014.SE26 : num 40.8 40.8 40.8 40.8 40.8 ...
Each column represents a unique sensor, and the columns contain measurement data from the sensors. Some of the columns contain missing values. I want to fill the data gaps in each column by linear regression. I already did this manually, but there is one condition that is very important, and I'm looking for a function that handles it on its own, as it would take too much time to do this for all the columns. Here's the condition:
Let's say G1_d20_2014_SE1 contains missing data. Then I want to fill the data gaps of that sensor with a complete dataset from another sensor, choosing the sensor with the highest correlation coefficient.
Here is how I did that manually:
I created a function that builds an indicator variable: it turns to 1 if a value is not NA and to 0 if it is NA. Then I added this variable as a column to the dataset:
Indvar <- function(t) {
  x <- dim(length(t))
  x[which(!is.na(t))] = 1
  x[which(is.na(t))] = 0
  return(x)
}
df$I <- Indvar(df$G1_d20_2014.SE1_)
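(As an aside, the same indicator can be built in one step, since TRUE/FALSE coerce to 1/0; a sketch equivalent to Indvar above:)
df$I <- as.integer(!is.na(df$G1_d20_2014.SE1_))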
Next I looked at which sensor's correlation coefficient with sensor 1 is highest (in this case it is highest between SE1 and SE14). Then I computed the linear regression, took its equation, and put it into a for loop that fills in the NA values according to the equation whenever the indicator variable is 0:
lm(df$G1_d20_2014.SE1_ ~ df$G1_d20_2014.SE14_, data = df)
for (i in 1:nrow(df)) {
  if (df$I[i] == 0) {
    df$G1_d20_2014.SE1_[i] = 8.037 + 0.315 * df$G1_d20_2014.SE14_[i]
  }
}
This works perfectly fine, but it takes too much time because I have a lot of dataframes that look like the one above.
I already tried using impute_lm from the simputation package, but unfortunately it does not seem to care about where the correlation is highest before filling the data gaps. Here is what I wrote:
impute_fun <- impute_lm(df,
                        formula = SE1_ + SE4_ ~ SE14_ + SE26)
As I wrote SE14_ + SE26, I checked whether it uses the values from SE14 for imputing the values in SE1, but it doesn't, as the result differs from my manual result.
Is there any function that does what I want? I'm really frustrated because I've been looking for this for over 2 weeks now. I'd really really appreciate some help!
EDIT / Answer to @jay.sf
So I tried to make a function (see below) out of it, but there's something I struggle with:
I don't know how to specify in the function that I want to do this for every column, and that the name of the sensor I want to fill should be removed from the sapply(c("SE1_", "SE2_", ...)) vector. Obviously, if I do this for SE1_ and SE1_ is still in the code, the correlation will be 1 and nothing happens. As you can see, this is also problematic for the rest of the code, e.g. in the line cor(df$SE1_, df[, x], use = "complete.obs"), as it says df$SE1_ there. The same goes for the df$SE1_imp <- ... line.
Of course I could just delete the sensor from the sapply(...) code so the first problem does not occur. I'm just wondering if there's a nicer way to do this. The same goes for the df$SE1_ parts: if I want to impute the values for SE2_, then I'd have to change df$SE1_ to df$SE2_, and so on.
I tried to run the code like this (but without the SE1_ in the sapply(...), of course) and got the error: Error in df[, x] : incorrect number of dimensions.
Any ideas how to solve these issues?
impFUN <- function(df) {
  corr <- sapply(c("SE1_", "SE2_", "SE4_", "SE5_", "SE6_",
                   "SE7_", "SE12_", "SE13_", "SE14_", "SE15_",
                   "SE16_", "SE22_", "SE23", "SE24", "SE25",
                   "SE26", "SE33", "SE34", "SE35", "SE36",
                   "SE37", "SE46", "SE51", "SE52", "SE53",
                   "SE54", "SE59", "SE60", "SE61", "SE62",
                   "SE68", "SE69", "SE70", "SE71", "SE72",
                   "SE73", "SE74", "SE82", "SE83", "SE84",
                   "SE85", "SE86", "SE87", "SE99", "SE100",
                   "SE101", "SE102", "SE103", "SE104",
                   "SE106", "SE107", "SE121"),
                 function(x) cor(df$SE1_, df[, x], use = "complete.obs"))
  imp.use <- names(which.max(corr))
  regr.model <- lm(reformulate(imp.use, "SE1_"))
  df$SE1_imp <-
    ifelse(is.na(df$SE1_), lm.cf[1] + df[[imp.use]]*lm.cf[2], df$SE1_)
}
What about this? First check which sensor correlates most with sensor 1.
corr <- sapply(c("sensor.2", "sensor.3", "sensor.4"), function(x)
  cor(dat$sensor.1, dat[, x], use = "complete.obs"))
#   sensor.2   sensor.3   sensor.4
# 0.04397132 0.26880412 -0.06487781
imp.use <- names(which.max(corr))
# [1] "sensor.3"
Calculate the regression model,
lm.cf <- lm(reformulate(imp.use, "sensor.1"), dat)$coef
and to impute sensor 1 use the coefficients in an ifelse like this:
dat$sensor.1.imp <-
  ifelse(is.na(dat$sensor.1), lm.cf[1] + dat[[imp.use]]*lm.cf[2], dat$sensor.1)
Result
head(dat)
# sensor.1 sensor.2 sensor.3 sensor.4 sensor.1.imp
# 1 2.0348728 -0.6374294 2.0005714 0.03403394 2.0348728
# 2 -0.8830567 -0.8779942 0.7914632 -0.66143678 -0.8830567
# 3 NA 1.2481243 -0.9897785 -0.36361831 -0.1943438
# 4 NA -0.1162450 0.6672969 -2.84821295 0.2312968
# 5 1.0407590 0.1906306 0.3327787 1.16064011 1.0407590
# 6 0.5817020 -0.6133034 0.5689318 0.71543751 0.5817020
Toy data:
library('MASS')
set.seed(42)
M <- mvrnorm(n = 1e2, mu = c(0, 0, 0, 0),
             Sigma = matrix(c(1, .2, .3, .1,
                              .2, 1, 0, 0,
                              .3, 0, 1, 0,
                              .1, 0, 0, 1), nrow = 4),
             empirical = TRUE)
dat <- as.data.frame(`colnames<-`(M, paste0("sensor.", 1:4)))
dat[sample(1:nrow(dat), 30), "sensor.1"] <- NA  ## generate 30% missings
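To address the EDIT above, a sketch of my own (not from the original answer): the per-sensor steps can be wrapped in a function that drops the target column from the correlation candidates, which avoids both the self-correlation of 1 and the hard-coded df$SE1_ references. Using dat[[x]] instead of df[, x] may also sidestep the "incorrect number of dimensions" error when the data is a tibble.
# Hypothetical generalization: impute each column from its best-correlated peer
impute_best <- function(dat) {
  for (target in names(dat)) {
    others <- setdiff(names(dat), target)   # exclude the target itself
    corr <- sapply(others, function(x)
      cor(dat[[target]], dat[[x]], use = "complete.obs"))
    imp.use <- names(which.max(abs(corr)))  # strongest correlation in magnitude
    lm.cf <- lm(reformulate(imp.use, target), dat)$coef
    dat[[target]] <- ifelse(is.na(dat[[target]]),
                            lm.cf[1] + dat[[imp.use]] * lm.cf[2],
                            dat[[target]])
  }
  dat
}
dat.imp <- impute_best(dat)  # works on the toy data above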

lapply error - PGLS (caper) with multiples comparative.data

I need help with the following problem.
I generated a list containing 1000 comparative.data objects, and I want to run 1000 pgls models using each of these comparative.data. I tried to use the lapply function for this, with the following code:
pg <- lapply(obj, function(z){pgls(formula = y ~ x, cd[[z]], lambda = "ML")})
obj is a list of 1000 data frames with my data. cd is my list of 1000 comparative.data.
When I tried to run this code, the following error was returned:
Error in pgls(formula = y ~ x, cd[[z]], lambda = "ML") :
  object 'z' not found
I cannot see the source of the error.
Thanks in advance.
More information:
obj is used to generate the comparative.data. To generate the 1000 comparative.data from the 1000 data frames in the obj list, I used:
cd <- lapply(1:1000, function(x) comparative.data(phy = phylogeny,
                                                  data = as.data.frame(obj[[x]]),
                                                  names.col = species_name,
                                                  vcv = T, vcv.dim = 3))
To run one pgls for the hundredth comparative.data the code is:
mod <- pgls(formula = y ~ x, cd[[100]], lambda = "ML")
Calling the hundredth obj and the hundredth cd:
obj[[100]]
# A tibble: 136 x 3
# Groups: Binomial, herbivores [136]
Binomial herbivores tm
* <chr> <dbl> <dbl>
1 Abies_alba 30. 0.896
2 Abies_balsamea 2. 0.990
3 Abies_borisii-regis 1. 0.940
4 Alcea_rosea 7. 0.972
5 Amaranthus_caudatus 1. 0.173
6 Amaranthus_hybridus_subsp._cruentus 1. 0.310
7 Aquilegia_vulgaris 9. 0.365
8 Arabidopsis_thaliana 8. 0.00280
9 Arabis_alpina 2. 0.978
10 Ariocarpus_fissuratus 1. 0.930
# ... with 126 more rows
cd[[100]]
Comparative dataset of 136 taxa:
Phylogeny: tree
136 tips, 134 internal nodes
chr [1:136] "Mercurialis_annua" "Manihot_esculenta"
"Malpighia_emarginata" "Comarum_palustre" ...
VCV matrix present:
VCV.array [1:136, 1:136, 1:16] 61.9 189.3 189.3 189.3 189.3 ...
Data: as.data.frame(obj[[x]])
$ herbivores: num [1:136] 4 1 1 5 19 21 7 4 4 2 ...
$ tm : num [1:136] 0.516 0.915 1.013 0.46 0.236 ...
Since cd was created from obj, there is no need to reference obj in the lapply call; simply pass your list of comparative.data, which you can do by object:
# BELOW d IS THE comparative.data OBJECT PASSED INTO THE LAPPLY LOOP
pg_list <- lapply(cd, function(d) pgls(formula = y ~ x, d, lambda = "ML"))
Or by index:
# BELOW i IS INTEGER VALUE PASSED INTO LAPPLY LOOP
pg_list <- lapply(seq_along(cd), function(i) pgls(formula = y ~ x, cd[[i]], lambda = "ML"))
Alternatively, you can combine both lapply calls, assuming you do not need the intermediate object, cd list, for other purposes:
# BELOW x IS THE DATA FRAME PASSED INTO THE LAPPLY LOOP
pg_list <- lapply(obj, function(x) {
  cd <- comparative.data(phy = phylogeny,
                         data = as.data.frame(x),
                         names.col = species_name,
                         vcv = T, vcv.dim = 3)
  pgls(formula = y ~ x, cd, lambda = "ML")
})


Problems with nonlinear least squares fit nls: non-numeric argument to binary operator

I just started working with R and would like to get a nonlinear least squares fit nls(...) to the formula y = A(1-exp(-bL)) + R.
I define my function g by
> g <- function(x,y,A,b,R) {
+   y~A(1-exp(-bx))+R
+ }
and want to perform nls by
> nls((y~g(x,y,A,b,R)), data=Data, start=list(A=-2,b=0,R=-5))
And I end up with the following error message:
Error in lhs - rhs : non-numeric argument to binary operator
I guess it's just a stupid basic mistake by another beginner, but I'd be extremely glad if anyone could help me out.
The next question would be whether I can implement the fitted curve into my graph:
> plot(x,y,main="VI.20.29")
Thanks to everyone taking the time to read and hopefully answer my question!
Detailed information:
I have a table with the x values (Light.intensity) and y values (e.g. VI.20.29)
> photo.data<-read.csv("C:/X/Y/Z.csv", header=T)
> names(photo.data)
[1] "Light.intensity" "SR.8.6" "SR.8.7"
[4] "SR.8.18" "SR.8.20" "VI.20.1"
[7] "VI.20.5" "VI.20.20" "VI.20.29"
[10] "DP.19.1" "DP.19.15" "DP.19.33"
[13] "DP.19.99"
> x<-photo.data$Light.intensity
> x
[1] 0 50 100 200 400 700 1000 1500 2000
> y<-photo.data$VI.20.29
> y
[1] -2.76 -2.26 -1.72 -1.09 0.18 0.66 1.47 1.48 1.63
> plot(x,y,main="VI.20.29")
> Data<-data.frame(x,y)
> Data
x y
1 0 -2.76
2 50 -2.26
3 100 -1.72
4 200 -1.09
5 400 0.18
6 700 0.66
7 1000 1.47
8 1500 1.48
9 2000 1.63
> g<-function(x,y,A,b,R) {
+ y~A(1-exp(-bx))+R
+ }
> nls((y~g(x,y,A,b,R)),data=Data, start=list(A=-2,b=0,R=-5))
Error in lhs - rhs : non-numeric argument to binary operator
The problem is that you're defining a formula within a formula: you write y ~ g(...), while g(...) itself returns y ~ (other variables). So when nls computes lhs - rhs, the right-hand side is a formula object rather than a number, hence the "non-numeric argument to binary operator". It's kind of 'double counting', in a way.
Just do:
nls(y~A*(1-exp(-b*x))+R, data=Data, start=list(A=-2,b=0,R=-5))
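If you do want a reusable function, here is a sketch of my own (not from the thread): let g return the value of the right-hand side rather than a formula, and call it inside nls. The start values are borrowed from the answer below, since b = 0 makes the initial gradient singular.
# g returns numbers, not a formula, so nls can evaluate lhs - rhs
g <- function(x, A, b, R) A * (1 - exp(-b * x)) + R
fit <- nls(y ~ g(x, A, b, R), data = Data, start = list(A = 3, b = 0.005, R = -2))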
Your initial guess for the parameters was way off. I saved your data in 'data.csv' for this example, which converges and then does the plot. To get this, I adjusted the parameters until they were close, and then did the nls fit:
df <- read.csv('data.csv')
x <- df$x
y <- df$y
plot(x, y)
fit <- nls(y ~ A*(1 - exp(-b*x)) + R, data = df, start = list(A = 3, b = 0.005, R = -2))
s <- summary(fit)
A <- s[["parameters"]][1]
b <- s[["parameters"]][2]
R <- s[["parameters"]][3]
f <- function(z) {
  v <- A*(1 - exp(-b*z)) + R
  v
}
x.t <- 0:max(x)
y.c <- sapply(x.t, f)
lines(x.t, y.c, col = 'red')
print(s)
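An equivalent shortcut (again a sketch of my own): predict() on the nls fit avoids extracting the coefficients by hand.
x.t <- seq(0, max(x), length.out = 200)
lines(x.t, predict(fit, newdata = data.frame(x = x.t)), col = 'red')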
Computers do what you tell them:
y~A(1-exp(-bx))+R
Here R interprets A(...) as a function call and bx as a single variable name.
You want y~A*(1-exp(-b*x))+R.
