Why is R making a copy-on-modification after using str?

I was wondering why R makes a copy-on-modification after a call to str.
I create a matrix. I can change its dim, a single element, or even all elements, and no copy is made. But after I call str, R makes a copy during the next modification of the matrix. Why is this happening?
m <- matrix(1:12, 3)
tracemem(m)
#[1] "<0x559df861af28>"
dim(m) <- 4:3
m[1,1] <- 0L
m[] <- 12:1
str(m)
# int [1:4, 1:3] 12 11 10 9 8 7 6 5 4 3 ...
dim(m) <- 3:4 #Here after str a copy is made
#tracemem[0x559df861af28 -> 0x559df838e4a8]:
dim(m) <- 3:4
str(m)
# int [1:3, 1:4] 12 11 10 9 8 7 6 5 4 3 ...
dim(m) <- 3:4 #Here again after str a copy
#tracemem[0x559df838e4a8 -> 0x559df82c9d78]:
Also, I was wondering why a copy is made when a task callback is registered.
TCB <- addTaskCallback(function(...) TRUE)
m <- matrix(1:12, nrow = 3)
tracemem(m)
#[1] "<0x559dfa79def8>"
dim(m) <- 4:3 #Copy on modification
#tracemem[0x559dfa79def8 -> 0x559dfa8998e8]:
removeTaskCallback(TCB)
#[1] TRUE
dim(m) <- 4:3 #No copy
sessionInfo()
R version 4.0.3 (2020-10-10)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Debian GNU/Linux 10 (buster)
Matrix products: default
BLAS: /usr/local/lib/R/lib/libRblas.so
LAPACK: /usr/local/lib/R/lib/libRlapack.so
locale:
[1] LC_CTYPE=de_AT.UTF-8 LC_NUMERIC=C
[3] LC_TIME=de_AT.UTF-8 LC_COLLATE=de_AT.UTF-8
[5] LC_MONETARY=de_AT.UTF-8 LC_MESSAGES=de_AT.UTF-8
[7] LC_PAPER=de_AT.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=de_AT.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_4.0.3
This is a follow-up question to Is there a way to prevent copy-on-modify when modifying attributes?.
I start R with R --vanilla to have a clean session.

I have asked this question on R-help, as suggested by @sam-mason in the comments.
The answer from Luke Tierney solved the issue with str:
As of R 4.0.0 it is in some cases possible to reduce reference counts
internally and so avoid a copy in cases like this. It would be too
costly to try to detect all cases where a count can be dropped, but in
this case we can do better. It turns out that the internals of
pos.to.env were unnecessarily creating an extra reference to the call
environment (here in a call to exists()). This is fixed in r79528.
Thanks.
And related to Task Callback:
It turns out there were some issues with the way calls to the
callbacks were handled. This has been revised in R-devel in r79541.
This example will no longer need to duplicate in R-devel.
Thanks for the report.
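To see the reference-counting mechanism Luke Tierney describes at work, here is a minimal sketch (my own illustration, not from the R-help thread; it assumes R >= 4.0.0, where reference counting replaced the older NAMED scheme, so the exact behaviour can vary by version):
x <- 1:10
tracemem(x)
f <- function(v) sum(v) # uses its argument but keeps no reference to it
f(x)                    # the reference created for the call is dropped on return
x[1] <- 0L              # no tracemem output: x is still uniquely referenced
Under the pre-4.0 NAMED scheme, merely evaluating x inside a function call could mark it as shared for good, forcing a copy on the next modification.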

Related

Session aborts when debugging a function

I have a toy function, foo, that just adds 5 to a variable x. I have a second function, n_foo, that applies foo to a data.table n times. It works like so:
# Load library
library(data.table)
# Dummy function
foo <- function(x){
  x + 5
}
# Apply foo n times
n_foo <- function(x, n){
  Reduce(function(a, b) foo(a), 1:n, init = x)
}
# Dummy data
dt <- data.table(values = 1:10)
# Run foo 5 times
dt[, test := n_foo(.SD, 5)]
# See results
dt
#>     values test
#>  1:      1   26
#>  2:      2   27
#>  3:      3   28
#>  4:      4   29
#>  5:      5   30
#>  6:      6   31
#>  7:      7   32
#>  8:      8   33
#>  9:      9   34
#> 10:     10   35
Great! Now, suppose something was amiss and I wanted to debug n_foo; I'd pull out the trusty debug function.
WARNING: THE FOLLOWING CODE MIGHT CRASH YOUR SESSION.
# Load library
library(data.table)
# Dummy function
foo <- function(x){
  x + 5
}
# Apply foo n times
n_foo <- function(x, n){
  Reduce(function(a, b) foo(a), 1:n, init = x)
}
# Dummy data
dt <- data.table(values = 1:10)
debug(n_foo)
# Run foo 5 times
dt[, test := n_foo(.SD, 5)]
# See results
dt
produces a fatal error and the R session aborts.
Curiously, the session doesn't crash if this code is run using reprex. Why does this code lead to a fatal error?
Edit:
It turns out I can only reproduce this issue in RStudio and not at the CLI. RStudio tag added accordingly.
R version 4.0.0 (2020-04-24)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.5
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
locale:
[1] en_CA.UTF-8/en_CA.UTF-8/en_CA.UTF-8/C/en_CA.UTF-8/en_CA.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] data.table_1.12.8
loaded via a namespace (and not attached):
[1] compiler_4.0.0 tools_4.0.0
no crash... but goes into debugging...
debugging in: n_foo(.SD, 5)
debug at #1: {
Reduce(function(a, b) foo(a), 1:n, init = x)
}
Browse[2]>
Session info:
> sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19041)
other attached packages:
[1] data.table_1.12.8
RStudio 1.3.959
After upgrading to RStudio v. 1.3.959, I could no longer reproduce the error.
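A hedged workaround sketch (my own suggestion, not part of the original answer): stepping through n_foo at top level, outside of data.table's [ operator, keeps the browser out of the grouping internals that seemed to trigger the crash:
# Debug a single call at the top level rather than inside `:=`
library(data.table)
dt <- data.table(values = 1:10)
debugonce(n_foo)
res <- n_foo(dt[, .(values)], 5) # step through here
dt[, test := res$values]         # then attach the result by reference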

parLapply and Part of Speech tagging

I am trying to use parLapply along with the openNLP R package to do part-of-speech tagging of a corpus of ~600k documents. While I was able to successfully part-of-speech tag a different set of ~90k documents, I get a strange error after ~25 minutes of running the same code over the ~600k documents:
Error in checkForRemoteErrors(val) : 10 nodes produced errors; first error: no word token annotations found
The documents are simply digital newspaper articles, where I run the tagger over the body field (after cleaning). This field is nothing but raw text, which I save into a list of strings.
Here's my code:
# I set the Java heap size (memory) allocation - I experimented with different sizes
# (the option must not contain a space and uses a one-letter suffix, e.g. "-Xmx3g")
options(java.parameters = "-Xmx3g")
# Convert the corpus into a list of strings
myCorpus <- lapply(contentCleaned, function(x){x <- as.String(x)})
# tag Corpus Function
tagCorpus <- function(x, ...){
  s <- as.String(x) # This is a repeat and may not be required
  WTA <- Maxent_Word_Token_Annotator()
  a2 <- Annotation(1L, "sentence", 1L, nchar(s))
  a2 <- annotate(s, WTA, a2)
  a3 <- annotate(s, PTA, a2)
  word_subset <- a3[a3$type == "word"]
  POStags <- unlist(lapply(word_subset$features, `[[`, "POS"))
  POStagged <- paste(sprintf("%s/%s", s[word_subset], POStags), collapse = " ")
  list(text = s, POStagged = POStagged, POStags = POStags, words = s[word_subset])
}
# I have 12 cores in my box
library(parallel) # needed for makeCluster(), detectCores() and parLapply()
cl <- makeCluster(mc <- getOption("cl.cores", detectCores() - 2))
# I tried both exporting the word token annotator and not
clusterEvalQ(cl, {
  library(openNLP)
  library(NLP)
  PTA <- Maxent_POS_Tag_Annotator()
  WTA <- Maxent_Word_Token_Annotator()
})
# Each cluster node has the following description:
[[1]]
An annotator inheriting from classes
Simple_Word_Token_Annotator Annotator
with description
Computes word token annotations using the Apache OpenNLP Maxent tokenizer employing the default model for language 'en'.
clusterEvalQ(cl, sessionInfo())
# ClusterEvalQ outputs for each worker:
[[1]]
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.5 LTS
Matrix products: default
BLAS: /usr/lib/libblas/libblas.so.3.6.0
LAPACK: /usr/lib/lapack/liblapack.so.3.6.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=en_US.UTF-8 LC_NAME=en_US.UTF-8
[9] LC_ADDRESS=en_US.UTF-8 LC_TELEPHONE=en_US.UTF-8 LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] NLP_0.1-11 openNLP_0.2-6
loaded via a namespace (and not attached):
[1] openNLPdata_1.5.3-4 compiler_3.4.4 parallel_3.4.4 rJava_0.9-10
packageDescription('openNLP') # Version: 0.2-6
packageDescription('parallel') # Version: 3.4.4
startTime <- Sys.time()
print(startTime)
corpus.tagged <- parLapply(cl, myCorpus, tagCorpus)
endTime <- Sys.time()
print(endTime)
endTime - startTime
Kindly note that I have consulted many web forums; the one which stood out is:
parallel parLapply setup
However, this doesn't seem to address my issue. Furthermore, I am confused as to why the setup works with the ~90k articles but not the ~600k articles (I have a total of 12 cores and 64 GB of memory). Any advice is much appreciated.
I have managed to get this to work by directly using the qdap package (https://github.com/trinker/qdap) by Tyler Rinker. It took ~20 hours to run. Here's how the pos function from the qdap package does this in a one-liner:
corpus.tagged <- qdap::pos(myCorpus, parallel = TRUE, cores = detectCores() - 2)
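For completeness, a hedged alternative sketch that stays with the original openNLP setup (my own; it assumes, without confirmation from the post, that the error comes from a few problematic documents or from memory pressure on the workers): process the corpus in chunks and trap per-document errors, so one bad document cannot bring down all 10 nodes:
# make the tagging function visible on the workers
clusterExport(cl, "tagCorpus")
# split the ~600k documents into chunks of 10k and tag chunk by chunk
chunks <- split(myCorpus, ceiling(seq_along(myCorpus) / 10000))
corpus.tagged <- list()
for (k in seq_along(chunks)) {
  corpus.tagged[[k]] <- parLapply(cl, chunks[[k]], function(d) {
    tryCatch(tagCorpus(d), error = function(e) conditionMessage(e))
  })
  invisible(gc()) # release memory on the master between chunks
}
corpus.tagged <- unlist(corpus.tagged, recursive = FALSE)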

Different results when subsetting data.table columns with numeric indices in different ways

See the minimal example:
library(data.table)
DT <- data.table(x = 2, y = 3, z = 4)
DT[, c(1:2)] # first way
#    x y
# 1: 2 3
DT[, (1:2)] # second way
# [1] 1 2
DT[, 1:2] # third way
#    x y
# 1: 2 3
As described in this post, subsetting columns with numeric indices is now possible. However, I would like to know why the indices are evaluated to a plain vector in the second way rather than treated as column indices.
In addition, I updated data.table just now:
> sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.4 LTS
Matrix products: default
BLAS: /usr/lib/atlas-base/atlas/libblas.so.3.0
LAPACK: /usr/lib/atlas-base/atlas/liblapack.so.3.0
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] data.table_1.11.2
loaded via a namespace (and not attached):
[1] compiler_3.4.4 tools_3.4.4 yaml_2.1.17
By looking at the source code, we can simulate data.table's behaviour for different inputs:
if (!missing(j)) {
  jsub = replace_dot_alias(substitute(j))
  root = if (is.call(jsub)) as.character(jsub[[1L]])[1L] else ""
  if (root == ":" ||
      (root %chin% c("-","!") && is.call(jsub[[2L]]) && jsub[[2L]][[1L]]=="(" && is.call(jsub[[2L]][[2L]]) && jsub[[2L]][[2L]][[1L]]==":") ||
      ( (!length(av<-all.vars(jsub)) || all(substring(av,1L,2L)=="..")) &&
        root %chin% c("","c","paste","paste0","-","!") &&
        missing(by) )) {  # test 763. TODO: likely that !missing(by) iff with==TRUE (so, with can be removed)
    # When no variable names (i.e. symbols) occur in j, scope doesn't matter because there are no symbols to find.
    # If variable names do occur, but they are all prefixed with .., then that means look up in calling scope.
    # Automatically set with=FALSE in this case so that DT[,1], DT[,2:3], DT[,"someCol"] and DT[,c("colB","colD")]
    # work as expected. As before, a vector will never be returned, but a single column data.table
    # for type consistency with >1 cases. To return a single vector use DT[["someCol"]] or DT[[3]].
    # The root==":" is to allow DT[,colC:colH] even though that contains two variable names.
    # root == "-" or "!" is for tests 1504.11 and 1504.13 (a : with a ! or - modifier root)
    # We don't want to evaluate j at all in making this decision because i) evaluating could itself
    # increment some variable and not intended to be evaluated a 2nd time later on and ii) we don't
    # want decisions like this to depend on the data or vector lengths since that can introduce
    # inconsistency reminiscent of drop=TRUE in [.data.frame that we seek to avoid.
    with=FALSE
Basically, "[.data.table" catches the expression passed to j and decides how to treat it based on some predefined rules. If one of the rules is satisfied, it sets with=FALSE which basically means that column names were passed to j, using standard evaluation.
The rules are (roughly) as follows:
Set with=FALSE,
1.1. if the j expression is a call and the call is :, or
1.2. if the call is a combination of c("-","!") and ( and :, or
1.3. if some value (character, integer, numeric, etc.) or .. was passed to j, the call is in c("","c","paste","paste0","-","!"), and there is no by call;
otherwise set with=TRUE
So we can convert this into a function and see whether any of the conditions is satisfied. (I've skipped the conversion of . to list, as it is irrelevant here; we will just test with list directly.)
is_satisfied <- function(...) {
  jsub <- substitute(...)
  root = if (is.call(jsub)) as.character(jsub[[1L]])[1L] else ""
  if (root == ":" ||
      (root %chin% c("-","!") &&
       is.call(jsub[[2L]]) &&
       jsub[[2L]][[1L]]=="(" &&
       is.call(jsub[[2L]][[2L]]) &&
       jsub[[2L]][[2L]][[1L]]==":") ||
      ( (!length(av<-all.vars(jsub)) || all(substring(av,1L,2L)=="..")) &&
        root %chin% c("","c","paste","paste0","-","!"))) TRUE else FALSE
}
is_satisfied("x")
# [1] TRUE
is_satisfied(c("x", "y"))
# [1] TRUE
is_satisfied(..x)
# [1] TRUE
is_satisfied(1:2)
# [1] TRUE
is_satisfied(c(1:2))
# [1] TRUE
is_satisfied((1:2))
# [1] FALSE
is_satisfied(y)
# [1] FALSE
is_satisfied(list(x, y))
# [1] FALSE
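A practical follow-up (my addition, not part of the original answer): if the goal is for (1:2) to select columns, make the intent explicit instead of relying on the rules above:
DT[, (1:2), with = FALSE] # force j to be treated as column indices
#    x y
# 1: 2 3
DT[, .SD, .SDcols = 1:2]  # equivalent, via .SD
#    x y
# 1: 2 3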

Error using Apply Function in R on Tutorial Example

I am trying to learn about how to use the apply function and I came across this tutorial: http://nsaunders.wordpress.com/2010/08/20/a-brief-introduction-to-apply-in-r/ which seems clear and concise, but I'm running into a problem right away. The very first example they give to demonstrate apply is:
> # create a matrix of 10 rows x 2 columns
> m <- matrix(c(1:10, 11:20), nrow = 10, ncol = 2)
> # mean of the rows
> apply(m, 1, mean)
[1] 6 7 8 9 10 11 12 13 14 15
This seems very basic, but I thought I'd give it a try. Here is my result:
> # create a matrix of 10 rows x 2 columns
> m <- matrix(c(1:10, 11:20), nrow = 10, ncol = 2)
> # mean of the rows
> apply(m, 1, mean)
Error in FUN(newX[, i], ...) : unused argument(s) (newX[, i])
Needless to say, I'm lost on this one...
To provide some more information, I attempted another example provided in the tutorial and got the correct result. The difference in this case was that the function was specifically stated in the apply function:
apply(m, 1:2, function(x) x/2)
      [,1] [,2]
 [1,]  0.5  5.5
 [2,]  1.0  6.0
 [3,]  1.5  6.5
 [4,]  2.0  7.0
 [5,]  2.5  7.5
 [6,]  3.0  8.0
 [7,]  3.5  8.5
 [8,]  4.0  9.0
 [9,]  4.5  9.5
[10,]  5.0 10.0
sessionInfo() output is below:
R version 2.15.3 (2013-03-01)
Platform: x86_64-apple-darwin9.8.0/x86_64 (64-bit)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] tools_2.15.3
And the output for conflicts(details = TRUE)
$.GlobalEnv
[1] "edit" "mean"
$`package:utils`
[1] "edit"
$`package:methods`
[1] "body<-" "kronecker"
$`package:base`
[1] "body<-" "kronecker" "mean"
As others have identified, it's probably because you have a conflict on mean. When you call anything (functions, objects), R goes through the search path until it's found (and if it isn't found R will complain accordingly):
> search()
[1] ".GlobalEnv" "tools:RGUI" "package:stats"
[4] "package:graphics" "package:grDevices" "package:utils"
[7] "package:datasets" "package:methods" "Autoloads"
[10] "package:base"
If you're fairly new to R, note that when you create a function, unless you specify otherwise, it's usually going to live in ".GlobalEnv". R looks there first before going any further, so it's fairly important to name your functions wisely, so as not to conflict with common functions (e.g. mean, plot, summary).
It's probably a good idea to start with a clean session once in a while. It's fairly common in the debugging phase to name variables x or y (names picked for convenience rather than informativeness... we're only human after all), which can be unexpectedly problematic down the line. When you have a workspace that's fairly crowded, the probability of conflicts increases, so (a) pick names carefully and (b) restart without restoring would be my advice.
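To make that advice concrete, here is a small sketch (my addition) of diagnosing and removing the masking object:
find("mean")      # every environment on the search path that defines mean
# [1] ".GlobalEnv"   "package:base"
rm(mean)          # drop the masking object from the workspace
apply(m, 1, mean) # now resolves to base::mean
# [1]  6  7  8  9 10 11 12 13 14 15
# or bypass the search path without touching the workspace:
apply(m, 1, base::mean)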

Ubuntu R ForEach / DoMC not using multiple cores

I have built a function in R (running on Ubuntu 12.04 LTS 64-bit, a 4-core i7 server with multithreading and 6 GB RAM), where I've installed R using the standard packages:
sudo apt-get install r-base r-recommended r-base-dev
sudo apt-get install r-cran-multicore r-cran-iterators r-cran-foreach r-cran-domc
NB: I also installed foreach & doMC inside R (which didn't help either), in the same way I installed the deldir package:
install.packages(c("deldir"), dependencies = TRUE)
My function runs fine, but it does not use the parallel cores (it just maxes out 1 of the 8):
library(deldir)
library(foreach)
library(doMC)
registerDoMC(cores=8)
#getDoParWorkers()
#getDoParName()
#getDoParVersion()
# loop through files
inputfiles <- dir(path = "/home/geoadmin/data/objects/", pattern = '.txt')
for (inputfilenr in 1:length(inputfiles))
{
  # set file variables
  curinputfile = paste("/home/geoadmin/data/objects/", inputfiles[[inputfilenr]], sep = "")
  print(curinputfile)
  curoutputfile = paste("/home/geoadmin/data/objects/", substr(inputfiles[[inputfilenr]], start = 1, stop = 10), '.out', sep = "")
  # select the point x/y coordinates into a data frame...
  points <- read.csv(curinputfile, header = TRUE, sep = ",", dec = ".", fill = TRUE)
  # set calculation variables, precision on 3 digits only because of the RDW coordinate system
  voro = deldir(points$x, points$y, digits = 3, list(ndx = 2, ndy = 2),
                rw = c(min(points$x) - abs(min(points$x) - max(points$x)),
                       max(points$x) + abs(min(points$x) - max(points$x)),
                       min(points$y) - abs(min(points$y) - max(points$y)),
                       max(points$y) + abs(min(points$y) - max(points$y))))
  tiles = tile.list(voro)
  poly = array()
  # start loop
  poly <- foreach (i = 1:length(tiles), .combine = cbind) %dopar%
  {
    # load tile info
    tile = tiles[[i]]
    # start with EWKB notation
    curpoly = "POLYGON(("
    # add list of coordinates by looping through the points in tile
    for (j in 1:length(tiles[[i]]$x)) {
      curpoly = sprintf("%s %.6f %.6f,", curpoly, tile$x[[j]], tile$y[[j]])
    }
    # then again the first point to close the polygon and end the EWKB notation, adding that to the poly array
    sprintf("%s %.6f %.6f))", curpoly, tile$x[[1]], tile$y[[1]])
  }
  write.csv(t(poly), file = curoutputfile, row.names = FALSE)
}
So the results are good, but no parallelism...
doMC did register correctly:
> getDoParWorkers()
[1] 8
> getDoParName()
[1] "doMC"
> getDoParVersion()
[1] "1.2.5"
If I look at the usage (with top):
top - 01:03:19 up 9 min, 3 users, load average: 1.02, 0.86, 0.45
Tasks: 131 total, 2 running, 127 sleeping, 0 stopped, 2 zombie
Cpu(s): 12.5%us, 0.0%sy, 0.0%ni, 87.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 6104932k total, 1240512k used, 4864420k free, 16656k buffers
Swap: 6283260k total, 0k used, 6283260k free, 141996k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1553 zzzzzzzz 20 0 913m 850m 3716 R 100 14.3 8:22.03 R
So just maxing out one core. Does anyone have any idea what could cause foreach/doMC to not use multiple cores?
> sessionInfo()
R version 2.14.1 (2011-12-22)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=C LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] doMC_1.2.5 multicore_0.1-7 iterators_1.0.6 foreach_1.4.0
[5] deldir_0.0-19
loaded via a namespace (and not attached):
[1] codetools_0.2-8
To add the likely answer to the question:
As foreach/multicore does work on this machine with the standard examples, the problem lies in the specific code, and it is likely the voro = deldir(...) call that takes up the time, not the foreach loop after it. This, however, means that the deldir package itself would need to be adjusted. Looking at the deldir source, it seems I would need to adjust this snippet in the code:
# Call the master subroutine to do the work:
repeat {
  tmp <- .Fortran(
    'master',
    x=as.double(x),
    y=as.double(y),
    sort=as.logical(sort),
    rw=as.double(rw),
    npd=as.integer(npd),
    ntot=as.integer(ntot),
    nadj=integer(tadj),
    madj=as.integer(madj),
    ind=integer(npd),
    tx=double(npd),
    ty=double(npd),
    ilist=integer(npd),
    eps=as.double(eps),
    delsgs=double(tdel),
    ndel=as.integer(ndel),
    delsum=double(ntdel),
    dirsgs=double(tdir),
    ndir=as.integer(ndir),
    dirsum=double(ntdir),
    nerror=integer(1),
    PACKAGE='deldir'
  )
Not sure yet how I can turn this into something that would work with foreach, though...
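Since deldir() itself dominates the runtime, an alternative hedged sketch (my own, untested against the original data) is to leave each deldir() call serial but parallelise over the input files instead, so that every core processes a complete file:
# one file per task; every worker runs its own serial deldir() call
results <- foreach(f = inputfiles, .packages = "deldir") %dopar% {
  points <- read.csv(file.path("/home/geoadmin/data/objects", f),
                     header = TRUE, sep = ",", dec = ".", fill = TRUE)
  deldir(points$x, points$y, digits = 3)
}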
