It is often said that one should prefer lapply over for loops.
There are some exceptions, as Hadley Wickham points out in his Advanced R book (http://adv-r.had.co.nz/Functionals.html): modifying in place, recursion, etc.
The following is one of those cases.
Just for the sake of learning, I tried to rewrite a perceptron algorithm in functional form in order to benchmark the relative performance.
Source: https://rpubs.com/FaiHas/197581.
Here is the code.
# prepare input
data(iris)
irissubdf <- iris[1:100, c(1, 3, 5)]
names(irissubdf) <- c("sepal", "petal", "species")
head(irissubdf)
irissubdf$y <- 1
irissubdf[irissubdf[, 3] == "setosa", 4] <- -1
x <- irissubdf[, c(1, 2)]
y <- irissubdf[, 4]
# perceptron function with for
perceptron <- function(x, y, eta, niter) {
  # initialize weight vector
  weight <- rep(0, dim(x)[2] + 1)
  errors <- rep(0, niter)
  # loop over number of epochs niter
  for (jj in 1:niter) {
    # loop through training data set
    for (ii in 1:length(y)) {
      # predict binary label using Heaviside activation function
      z <- sum(weight[2:length(weight)] * as.numeric(x[ii, ])) + weight[1]
      if (z < 0) {
        ypred <- -1
      } else {
        ypred <- 1
      }
      # change weight - the formula doesn't do anything
      # if the predicted value is correct
      weightdiff <- eta * (y[ii] - ypred) * c(1, as.numeric(x[ii, ]))
      weight <- weight + weightdiff
      # update error count
      if ((y[ii] - ypred) != 0) {
        errors[jj] <- errors[jj] + 1
      }
    }
  }
  # weight decides between the two species; errors per epoch are returned
  return(errors)
}
err <- perceptron(x, y, 1, 10)
### my rewriting in functional form: auxiliary function
faux <- function(x, weight, y, eta) {
  err <- 0
  z <- sum(weight[2:length(weight)] * as.numeric(x)) + weight[1]
  if (z < 0) {
    ypred <- -1
  } else {
    ypred <- 1
  }
  # change weight - the formula doesn't do anything
  # if the predicted value is correct
  weightdiff <- eta * (y - ypred) * c(1, as.numeric(x))
  weight <<- weight + weightdiff
  # update error count
  if ((y - ypred) != 0) {
    err <- 1
  }
  err
}
weight <- rep(0, 3)
weightdiff <- rep(0, 3)
f <- function() {
  t <- replicate(10, sum(unlist(lapply(seq_along(irissubdf$y), function(i) {
    faux(irissubdf[i, 1:2], weight, irissubdf$y[i], 1)
  }))))
  weight <<- rep(0, 3)
  t
}
I did not expect any consistent improvement due to the aforementioned issues, but I was really surprised by the sharp slowdown with lapply and replicate.
I obtained these results using the microbenchmark function from the microbenchmark library.
What could the reasons possibly be?
Could it be some memory leak?
                                                      expr       min         lq       mean     median         uq        max neval
                                                       f() 48670.878 50600.7200 52767.6871 51746.2530 53541.2440 109715.673   100
 perceptron(as.matrix(irissubdf[1:2]), irissubdf$y, 1, 10)  4184.131  4437.2990  4686.7506  4532.6655  4751.4795   6513.684   100
perceptronC(as.matrix(irissubdf[1:2]), irissubdf$y, 1, 10)    95.793   104.2045   123.7735   116.6065   140.5545    264.858   100
The first row is the lapply/replicate function, the second is the function with for loops, and the third is the same function in C++ using Rcpp.
Here, as suggested by Roland, is the profiling of the function. I am not sure I can interpret it the right way; it looks to me like most of the time is spent in subsetting.
[Function profiling screenshot]
First of all, it is an already long-debunked myth that for loops are any slower than lapply. The for loops in R have been made a lot more performant and are currently at least as fast as lapply.
That said, you have to rethink your use of lapply here. Your implementation demands assigning to the global environment, because your code requires you to update the weight during the loop. And that is a valid reason to not consider lapply.
lapply is a function you should use for its return value, not for its side effects: it combines the results in a list automatically and doesn't mess with the environment you work in, contrary to a for loop. The same goes for replicate. See also this question:
Is R's apply family more than syntactic sugar?
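To see the first point on a toy example, here is a minimal sketch (my own, not from the original post) comparing a for loop with preallocated output against an equivalent lapply/unlist version; both return identical results, and the loop is typically no slower:
library(microbenchmark)
x <- 1:10000
# for loop with a preallocated output vector
f_for <- function(x) {
  out <- numeric(length(x))
  for (i in seq_along(x)) out[i] <- sqrt(x[i])
  out
}
# equivalent lapply version
f_lapply <- function(x) unlist(lapply(x, sqrt), use.names = FALSE)
identical(f_for(x), f_lapply(x))  # TRUE
microbenchmark(f_for(x), f_lapply(x), times = 100)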
The reason your lapply solution is far slower is that your way of using it creates a lot more overhead.
replicate is nothing else but sapply internally, so you actually combine sapply and lapply to implement your double loop. sapply creates extra overhead because it has to test whether or not the result can be simplified. So a for loop will actually be faster than using replicate.
Inside your lapply anonymous function, you have to access the data frame for both x and y for every observation. This means that, contrary to your for loop, e.g. the function $ has to be called every time (see the sketch after these points).
Because you use these high-level functions, your lapply solution calls 49 functions, compared to your for solution, which only calls 26. These extra functions for the lapply solution include calls to functions like match, structure, [[, names, %in%, sys.call, duplicated, ...
All functions not needed by your for loop, as that one doesn't do any of these checks.
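The data frame access cost is easy to verify (a minimal sketch, assuming the irissubdf from the question is in the workspace): row-subsetting a data frame dispatches to the S3 method `[.data.frame`, which is far more expensive than primitive matrix indexing.
library(microbenchmark)
m <- as.matrix(irissubdf[, 1:2])
microbenchmark(
  dataframe = irissubdf[3, 1:2],  # dispatches to `[.data.frame`
  matrix    = m[3, ],             # primitive indexing
  times     = 1000
)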
If you want to see where this extra overhead comes from, look at the internal code of replicate, unlist, sapply and simplify2array.
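For instance, printing replicate at the console confirms it is a thin wrapper around sapply:
replicate
#> function (n, expr, simplify = "array")
#> sapply(integer(n), eval.parent(substitute(function(...) expr)),
#>     simplify = simplify)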
You can use the following code to get a better idea of where you lose your performance with lapply. Run this line by line!
Rprof(interval = 0.0001)
f()
Rprof(NULL)
fprof <- summaryRprof()$by.self
Rprof(interval = 0.0001)
perceptron(as.matrix(irissubdf[1:2]), irissubdf$y, 1, 10)
Rprof(NULL)
perprof <- summaryRprof()$by.self
fprof$Fun <- rownames(fprof)
perprof$Fun <- rownames(perprof)
Selftime <- merge(fprof, perprof, all = TRUE, by = 'Fun',
                  suffixes = c(".lapply", ".for"))
# number of distinct functions with measurable self time in each solution
sum(!is.na(Selftime$self.time.lapply))
sum(!is.na(Selftime$self.time.for))
# self time per function, ordered by the lapply solution
Selftime[order(Selftime$self.time.lapply, decreasing = TRUE),
         c("Fun", "self.time.lapply", "self.time.for")]
# functions that appear only in the lapply solution's profile
Selftime[is.na(Selftime$self.time.for), ]
There is more to the question of when to use for or lapply, and which one "performs" better. Sometimes speed is important, other times memory is. To further complicate things, the time complexity may not be what you expect - that is, different behavior can be observed at different scales, invalidating any blanket statement such as "faster than" or "at least as fast as". Finally, one performance metric often overlooked is thought-to-code; premature optimization, yada yada.
That said, in An Introduction to R the authors hint at some performance concerns:
Warning: for() loops are used in R code much less often than in compiled languages. Code that takes a ‘whole object’ view is likely to be both clearer and faster in R.
Given a similar use case, input and output, disregarding user preferences, is one clearly better than the other?
Benchmark - Fibonacci sequence
I compare approaches to compute the first N Fibonacci numbers (inspired by the benchmarkme package), shunning the 2nd Circle and ensuring that inputs and outputs for each approach are the same. Four additional approaches are included to add some fuel to the fire: a vectorized approach, purrr::map, and the *apply variants vapply and sapply.
fib <- function(x, ...) {
  x <- 1:x
  phi <- 1.6180339887498949
  v <- \() vector("integer", length(x))
  bench::mark(
    vector = {
      y = v(); y = ((rep(phi, length(x))^x) - ((-rep(phi, length(x)))^-x)) / sqrt(5); y },
    lapply = {
      y = v(); y = unlist(lapply(x, \(.) (phi^. - (-phi)^(-.)) / sqrt(5)), use.names = FALSE); y },
    loop = {
      y = v(); `for`(i, x, { y[i] = (phi^i - (-phi)^(-i)) / sqrt(5) }); y },
    sapply = {
      y = v(); y = sapply(x, \(.) (phi^. - (-phi)^(-.)) / sqrt(5)); y },
    vapply = {
      y = v(); y = vapply(x, \(.) (phi^. - (-phi)^(-.)) / sqrt(5), 1); y },
    map = {
      y = v(); y <- purrr::map_dbl(x, ~ (phi^. - (-phi)^(-.)) / sqrt(5)); y
    }, ..., check = TRUE
  )[c(1:9)]
}
Here is a comparison of the performance, ranked by median time.
lapply(list(3e2, 3e3, 3e4, 3e5, 3e6, 3e7), fib) # n iterations specified separately
N = 300
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time
1 vector 38.8us 40.9us 21812. 8.44KB 0 1000 0 45.8ms
2 vapply 500us 545us 1653. 3.61KB 1.65 999 1 604ms
3 sapply 518us 556us 1725. 12.48KB 0 1000 0 580ms
4 lapply 513.4us 612.8us 1620. 6KB 8.14 995 5 614.2ms
5 loop 549.9us 633.6us 1455. 3.61KB 8.78 994 6 683.3ms
6 map 649.6us 754.6us 1312. 3.61KB 9.25 993 7 756.9ms
N = 3000
1 vector 769.7us 781.5us 1257. 82.3KB 1.26 999 1 794.83ms
2 vapply 5.38ms 5.58ms 173. 35.2KB 0.697 996 4 5.74s
3 sapply 5.59ms 5.83ms 166. 114.3KB 0.666 996 4 6.01s
4 loop 5.38ms 5.91ms 167. 35.2KB 8.78 950 50 5.69s
5 lapply 5.24ms 6.49ms 156. 58.7KB 8.73 947 53 6.07s
6 map 6.11ms 6.63ms 148. 35.2KB 9.13 942 58 6.35s
N = 30 000
1 vector 10.7ms 10.9ms 90.9 821KB 0.918 297 3 3.27s
2 vapply 57.3ms 60.1ms 16.4 351.66KB 0.741 287 13 17.5s
3 loop 59.2ms 60.7ms 15.9 352KB 16.7 146 154 9.21s
4 sapply 59.6ms 62.1ms 15.7 1.05MB 0.713 287 13 18.2s
5 lapply 57.3ms 67.6ms 15.1 586KB 20.5 127 173 8.43s
6 map 66.7ms 69.1ms 14.4 352KB 21.6 120 180 8.35s
N = 300 000
1 vector 190ms 193ms 5.14 8.01MB 0.206 100 4 19.45s
2 loop 693ms 713ms 1.40 3.43MB 7.43 100 532 1.19m
3 map 766ms 790ms 1.26 3.43MB 7.53 100 598 1.32m
4 vapply 633ms 814ms 1.33 3.43MB 0.851 100 39 45.8s
5 lapply 685ms 966ms 1.06 5.72MB 9.13 100 864 1.58m
6 sapply 694ms 813ms 1.27 12.01MB 0.810 100 39 48.1s
N = 3 000 000
1 vector 3.17s 3.21s 0.312 80.1MB 0.249 20 16 1.07m
2 vapply 8.22s 8.37s 0.118 34.3MB 4.97 20 845 2.83m
3 loop 8.3s 8.42s 0.119 34.3MB 4.35 20 733 2.81m
4 map 9.09s 9.17s 0.109 34.3MB 4.91 20 903 3.07m
5 lapply 10.42s 11.09s 0.0901 57.2MB 4.10 20 909 3.7m
6 sapply 10.43s 11.28s 0.0862 112.1MB 3.58 20 830 3.87m
N = 30 000 000
1 vector 44.8s 45.94s 0.0214 801MB 0.00854 10 4 7.8m
2 vapply 1.56m 1.6m 0.0104 343MB 0.883 10 850 16m
3 loop 1.56m 1.62m 0.00977 343MB 0.366 10 374 17.1m
4 map 1.72m 1.74m 0.00959 343MB 1.23 10 1279 17.4m
5 lapply 2.15m 2.22m 0.00748 572MB 0.422 10 565 22.3m
6 sapply 2.05m 2.25m 0.00747 1.03GB 0.405 10 542 22.3m
# Intel i5-8300H CPU @ 2.30GHz / R version 4.1.1 / purrr 0.3.4
The for and lapply approaches perform similarly, but lapply is greedier when it comes to memory, and a bit slower when the size of the input increases (for this task). Note that purrr::map's memory usage is equivalent to the for loop's and superior to lapply's, in itself a debated topic. However, when the appropriate *apply is used, here vapply, the performance is similar. But the choice can have a large impact on memory use, sapply being noticeably less memory-efficient than vapply.
A peek under the hood reveals the reasons for the different performance of the approaches. The for loop performs many type checks, resulting in some overhead. lapply, on the other hand, suffers from a flawed language design where lazy evaluation, or the use of promises, comes at a cost; the source code confirms that the X and FUN arguments to .Internal(lapply) are promises.
Vectorized approaches are fast and probably preferable to a for or lapply approach. Notice how the vectorized approach grows irregularly compared to the other approaches. However, the aesthetics of vectorized code may be a concern: which approach would you prefer to debug?
Overall, I'd say a choice between lapply and for is not something the average R user should ponder over. Stick to what is easiest to write, think of, and debug, or what is less prone to (silent?) errors. What is lost in performance will likely be canceled out by time saved writing. For performance-critical applications, make sure to run some tests with different input sizes and to properly chunk code.
Actually, I did test the difference with a problem that I solved recently.
Just try it yourself.
My conclusion: there is no real difference, though in my case the for loop was insignificantly faster than lapply.
P.S.: I tried to mostly keep the same logic in use.
library(rbenchmark)

ds <- data.frame(matrix(rnorm(1000000), ncol = 8))
n <- c('a','b','c','d','e','f','g','h')
names(ds) <- n  # name the columns so func can index them by name

func <- function(ds, target_col, query_col, value){
  return(unique(as.vector(ds[ds[query_col] == value, target_col])))
}

f1 <- function(x, y){
  named_list <- list()
  for (i in y){
    named_list[[i]] <- func(x, 'a', 'b', i)
  }
  return(named_list)
}

f2 <- function(x, y){
  list2 <- lapply(setNames(nm = y), func, ds = x, target_col = "a", query_col = "b")
  return(list2)
}

benchmark(f1(ds, n))
benchmark(f2(ds, n))
As you can see, I wrote a simple routine to build a named list from a data frame: func extracts the unique values of a column, f1 builds the list with a for loop, and f2 uses lapply.
On my computer I get these results:
       test replications elapsed relative user.self sys.self user.child sys.child
1 f1(ds, n)          100  110.24        1   110.112        0          0         0

&&

       test replications elapsed relative user.self sys.self user.child sys.child
1 f1(ds, n)          100  110.24        1   110.112        0          0         0
Related
I want to efficiently solve a degree-7 polynomial in k.
For example, with the following set of 7 unconditional probabilities,
p <- c(0.0496772, 0.04584501, 0.04210299, 0.04026439, 0.03844668, 0.03487194, 0.03137491)
the overall event probability is approximately 25%:
> 1 - prod(1 - p)
[1] 0.2506676
And if I want to approximate a constant k that proportionally changes all elements of p so that the overall event probability is now approximately 30%, I can do this using an equation solver (such as Wolfram Alpha), which may use Newton's method or bisection to approximate k in
1 - prod(1 - k*p) = 0.3
Here, k is approximately 1.23:
> 1 - prod(1 - 1.23*p)
[1] 0.3000173
But what if I want to solve this for many different overall event probabilities? How can I do this efficiently in R?
I've looked at the function SMfzero in the package NLRoot, but it's still not clear to me how I can achieve it.
EDIT
I've benchmarked the solutions so far. On the toy data p above:
Unit: nanoseconds
expr min lq mean median uq max neval
approximation_fun 800 1700 3306.7 3100 4400 39500 1000
polynom_fun 1583800 1748600 2067028.6 1846300 2036300 16332600 1000
polyroot_fun 596800 658300 863454.2 716250 792100 44709000 1000
bsoln_fun 48800 59800 87029.6 85100 102350 613300 1000
find_k_fun 48500 60700 86657.4 85250 103050 262600 1000
NB, I'm not sure if it's fair to compare approximation_fun with the others, but I did ask for an approximate solution, so it does meet the brief.
The real problem is a degree-52 polynomial in k. Benchmarking on the real data:
Unit: microseconds
expr min lq mean median uq max neval
approximation_fun 1.9 3.20 7.8745 5.50 14.50 55.5 1000
polynom_fun 10177.2 10965.20 12542.4195 11268.45 12149.95 80230.9 1000
bsoln_fun 52.3 60.95 91.4209 71.80 117.75 295.6 1000
find_k_fun 55.0 62.80 90.1710 73.10 118.40 358.2 1000
This can be solved with the polynom library.
library(polynom)
library(purrr)
p <- runif(3, 0, 1)
p
#> [1] 0.1072518 0.5781922 0.3877427
# Overall probability
1 - prod(1 - p)
#> [1] 0.7694434
# Target overall probability
target_op <- 0.3
# calculate polynomial to solve for k
poly_list <- p %>%
map(~polynomial(c(1, -.))) %>%
as.polylist()
# List of linear polynomials to be multiplied:
poly_list
#> [[1]]
#> 1 - 0.1072518*x
#>
#> [[2]]
#> 1 - 0.5781922*x
#>
#> [[3]]
#> 1 - 0.3877427*x
# we want to solve this polynomial
poly <- 1 - prod(poly_list) - target_op
poly
#> -0.3 + 1.073187*x - 0.3277881*x^2 + 0.02404476*x^3
roots <- solve(poly)
good_roots <-
roots %>%
# keep only real values
keep(~Im(.) == 0) %>%
Re() %>%
# only positive
keep(~.>0)
good_roots
#> [1] 0.1448852
k <- good_roots[[1]]
1 - prod(1 - k*p)
#> [1] 0.3
Created on 2021-04-28 by the reprex package (v1.0.0)
Following @IaroslavDomin's solutions, but constructing the coefficients for this particular case by hand and then using polyroot():
Here's a sequence of three functions (compute individual coefficients, put them together into a vector, find positive real roots):
## construct the ith coefficient: the sum of the products of all
## i-element combinations (an elementary symmetric polynomial in p)
bcoef <- function(p, i) {
  sum(apply(combn(p, i), 2, prod))
}
## compute all the coefficients and put them together
## into the vector of coeffs for 1 - prod(1 - k*p)
mypoly <- function(p, target = 0.3) {
  c(-target, -1 * sapply(seq_along(p), bcoef, p = -p))
}
## compute real positive solutions
soln <- function(p, target = 0.3) {
  roots <- polyroot(mypoly(p, target))
  roots <- Re(roots[abs(Im(roots)) < 1e-16])
  roots <- roots[roots > 0]
  if (length(roots) > 1) warning(">1 solution")
  return(roots)
}
Try it out for a couple of cases:
p1 <- c(0.1072518,0.5781922, 0.3877427)
s1 <- soln(p1)
1-prod(1-s1*p1)
p2 <- c(0.0496772, 0.04584501, 0.04210299, 0.04026439, 0.03844668, 0.03487194, 0.03137491)
s2 <- soln(p2)
1-prod(1-s2*p2)
If you don't want to be clever, then brute force is perfectly adequate (56 microseconds on my machine when length(p) is 52):
bsoln <- function(p, target=0.3) {
f <- function(k) { (1-prod(1-k*p)) - target }
return(uniroot(f, c(0,20))$root)
}
And an approximate closed-form solution:
asoln <- function(p, target = 0.3) {
  return(-log(1 - target) / sum(p))
}
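This approximation rests on prod(1 - k*p) ≈ exp(-k * sum(p)) when the k*p values are small; setting 1 - exp(-k * sum(p)) = target and solving gives k = -log(1 - target) / sum(p). A quick sanity check (a sketch of mine, using the toy p from the question):
p <- c(0.0496772, 0.04584501, 0.04210299, 0.04026439, 0.03844668,
       0.03487194, 0.03137491)
c(approx = asoln(p), exact = bsoln(p))
1 - prod(1 - asoln(p) * p)  # close to 0.3, but not exact
1 - prod(1 - bsoln(p) * p)  # 0.3 up to uniroot's tolerance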
I started to run benchmarks and gave up; I don't like the format of microbenchmark output and the approximate solution is too fast for rbenchmark::benchmark() to time accurately. In any case, one run of bsoln() with length(p)==52 takes on the order of 50 microseconds, so you're going to have to run this a whole bunch of times before speed becomes problematic ...
Another option would be to just search for a root on a segment without specifically calculating polynomial coefficients. This can be done e.g. with the uniroot function.
The only not-so-trivial thing we need to do here is specify the segment. k is obviously >= 0, so that is the left endpoint. Then we know that all the k*p values should be probabilities, hence <= 1; therefore k <= 1/max(p), and that is the right endpoint. (At k = 0 the overall probability is 0 and at k = 1/max(p) it is 1, so the target is bracketed in between.)
And so the code is:
find_k <- function(p, target_op) {
  f <- function(x) 1 - prod(1 - x*p) - target_op
  max_k <- 1/max(p)
  res <- uniroot(f, c(0, max_k))
  res$root
}
p <- runif(1000, 0, 1)
target_op <- 0.3
k <- find_k(p, target_op)
k
#> [1] 0.000710281
1 - prod(1 - k*p)
#> [1] 0.2985806
Created on 2021-04-29 by the reprex package (v1.0.0)
This works pretty fast even for 1000 probabilities.
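For a rough sense of the speed, one can time repeated calls (a quick sketch; numbers will vary by machine):
p <- runif(1000, 0, 1)
system.time(for (i in 1:1000) find_k(p, 0.3))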
I'm often dealing with open intervals of real numbers, i.e. I want to exclude the borders, naturally to avoid things like 1/0:1.
I helped myself by writing this function,
oInt <- function(x, y, res = 1) seq(x, y, res)[!seq(x, y, res) %in% c(x, y)]
x1 <- oInt(0, 1, .001)
> x1[c(1:2, length(x1)-1, length(x1))]
[1] 0.001 0.002 0.998 0.999
> sample(x1, 7, replace = TRUE)
[1] 0.703 0.404 0.698 0.287 0.753 0.679 0.354
but no matter how small res= is defined, it stays an approximation. And I guess saving such sequences this way uses quite a lot of memory.
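For the sampling use case specifically, one workaround (a sketch, not a full open-interval type) is to draw from the continuous interval directly: per ?runif, runif will not generate either of the extreme values unless max equals min or max - min is tiny compared to min.
# draw 7 values from the open interval (0, 1);
# the endpoints are never returned
runif(7, min = 0, max = 1)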
Then I came across the sets package:
> (x2 <- sets::interval(0, 1, "open"))
(0, 1)
E.g. this works reasonably well:
f <- function(x) x*x
curve(f, unlist(x2))
But e.g. this won't:
> sample(unlist(x2), 7, replace=TRUE)
lc rc l rc rc rc l
0 0 0 0 0 0 0
Is there a neat solution in (base) R that I have overlooked? Maybe I'm not looking in the right places or am searching with the wrong words, but I found nothing. Or am I just on the wrong track again? Note: for the sake of brevity I excluded other cases, e.g. half-open intervals, but of course I'm dealing with those too.
I have a simple task to do. I have a 3D array (10, 1350, 1280) and I want to calculate the min over the first dimension. I can do it using aaply like the following:
minObs <- plyr::aaply(obs, c(2,3), min)  # min of observation
But it is extremely slow compared to when I just write a nested loop:
minObs <- matrix(nrow = dim(obs)[2], ncol = dim(obs)[3])
for (i in 1:dim(obs)[2]) {
  for (j in 1:dim(obs)[3]) {
    minObs[i, j] <- min(obs[, i, j], na.rm = TRUE)
  }
}
I am new to R, but I am guessing that I am doing something wrong with the aaply function. Any hint would be very much appreciated. How can I speed up using aaply?
Why not just use the base apply function?
apply(obs, c(2,3), min)
It's fast, doesn't require loading an additional package and gives the same result, as per:
all.equal(
  apply(obs, 2:3, min),
  plyr::aaply(obs, 2:3, min), check.attributes = FALSE)
#[1] TRUE
Timings with system.time() on a 10 x 1350 x 1280 array:
Loop
# user system elapsed
# 3.79 0.00 3.79
Base apply()
# user system elapsed
# 2.87 0.02 2.89
plyr::aaply()
#Timing stopped at: 122.1 0.04 122.24
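To reproduce timings like these yourself (a sketch with random data standing in for obs):
obs <- array(rnorm(10 * 1350 * 1280), dim = c(10, 1350, 1280))
system.time(apply(obs, 2:3, min))   # base apply
# time the nested loop and plyr::aaply(obs, c(2,3), min) the same way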