parallel data.table -- what's the correct syntax?

Following up on some data.table parallelism questions (1) (2) (3), I'm trying to figure it out. What's wrong with this syntax?
library(data.table)
set.seed(1234)
dt <- data.table(id= factor(sample(1L:10000L, size= 1e6, replace= TRUE)),
                 val= rnorm(n= 1e6), key= "id")
foo <- function(l) sum(l)
dt2 <- dt[, foo(.SD), by= "id"]
library(parallel)
cl <- makeCluster(detectCores())
dt3 <- clusterApply(cl, x= parallel:::splitRows(dt, detectCores()),
                    fun= lapply, FUN= function(x, foo) {
                      x[, foo(data.table:::".SD"), by= "id"]
                    }, foo= foo)
stopCluster(cl)
# note that library(parallel) is annoying and you often have to do this kind of "::" / ":::" exporting when working with the parallel package
Error in checkForRemoteErrors(val) :
4 nodes produced errors; first error: incorrect number of dimensions
cl <- makeCluster(detectCores())
dt3 <- clusterApply(cl, x= parallel:::splitRows(dt, detectCores()),
                    fun= lapply, FUN= function(x, foo) {
                      x <- data.table::data.table(x)
                      x[, foo(data.table:::".SD"), by= "id"]
                    }, foo= foo)
stopCluster(cl)
Error in checkForRemoteErrors(val) :
4 nodes produced errors; first error: object 'id' not found
I've played around with the syntax quite a bit. These two seem to be the closest I can get. And obviously something's still not right.
My real problem is similarly structured but has many more rows, and I'm using a machine with 24 cores / 48 logical processors. So watching my computer use roughly 4% of its computing power (by using only 1 core) is really annoying.
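For reference, one pattern that should work with nothing but parallel and data.table is to split the rows so that no id straddles a chunk, run the grouped aggregation on each worker, and then rbindlist the pieces. This is only a sketch using the dt and foo objects defined above; the splitting and copying overhead is not addressed here.
library(data.table)
library(parallel)
n_cores <- detectCores()
cl <- makeCluster(n_cores)
clusterEvalQ(cl, library(data.table))   # load data.table on every worker
clusterExport(cl, "foo")                # ship the aggregation function
# cut on the integer codes of id so that every id lands in exactly one chunk,
# then materialise the chunks as data.tables
chunk  <- cut(as.integer(dt$id), n_cores, labels = FALSE)
pieces <- lapply(split(seq_len(nrow(dt)), chunk), function(i) dt[i])
res <- parLapply(cl, pieces, function(x) x[, foo(.SD), by = "id"])
dt3 <- rbindlist(res)
setkey(dt3, id)
stopCluster(cl)
# dt3 should now match dt2 (same 10000 groups, result column V1)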

You may want to evaluate an Rserve-based solution for parallelism.
See the example below, built on Rserve using 2 R nodes locally in parallel. It can also be distributed over remote instances.
library(data.table)
set.seed(1234)
dt <- data.table(id= factor(sample(1L:10000L, size= 1e6, replace= TRUE)),
                 val= rnorm(n= 1e6), key= "id")
foo <- function(l) sum(l)
library(big.data.table)
# start 2 R instances
library(Rserve)
port = 6311:6312
invisible(sapply(port, function(port) Rserve(debug = FALSE, port = port, args = c("--no-save"))))
# client side
rscl = rscl.connect(port = port, pkgs = "data.table") # connect and auto require packages
bdt = as.big.data.table(dt, rscl) # create big.data.table from local data.table and list of connections to R nodes
rscl.assign(rscl, "foo", foo) # assign `foo` function to nodes
bdt[, foo(.SD), by="id"][, foo(.SD), by="id"] # first query is run remotely, second locally
# id V1
# 1: 1 10.328998
# 2: 2 -8.448441
# 3: 3 21.475910
# 4: 4 -5.302411
# 5: 5 -11.929699
# ---
# 9996: 9996 -4.905192
# 9997: 9997 -4.293194
# 9998: 9998 -2.387100
# 9999: 9999 16.530731
#10000: 10000 -15.390543
# optionally with special care
# bdt[, foo(.SD), by= "id", outer.aggregate = TRUE]
session info:
R version 3.2.3 (2015-12-10)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 14.04.4 LTS
locale:
[1] LC_CTYPE=en_GB.UTF-8 LC_NUMERIC=C LC_TIME=en_GB.UTF-8 LC_COLLATE=en_GB.UTF-8 LC_MONETARY=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 LC_PAPER=en_GB.UTF-8
[8] LC_NAME=C LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] Rserve_1.8-5 big.data.table_0.3.3 data.table_1.9.7
loaded via a namespace (and not attached):
[1] RSclient_0.7-3 tools_3.2.3

Related

indirect indexing/subscripting inside %dopar%

I'm not understanding how to do indirect subscripting in %dopar% or in llply( .parallel = TRUE). My actual use-case is a list of formulas, then generating a list of glmer results in a first foreach %dopar%, then calling PBmodcomp on specific pairs of results in a separate foreach %dopar%. My toy example, using numeric indices rather than names of objects in the lists, works fine for %do% but not %dopar%, and fine for alply without .parallel = TRUE but not with .parallel = TRUE. [My real example with glmer and indexing lists by names rather than by integers works with %do% but not %dopar%.]
library(doParallel)
library(foreach)
library(plyr)
cl <- makePSOCKcluster(2) # tiny for toy example
registerDoParallel(cl)
mB <- c(1,2,1,3,4,10)
MO <- c("Full", "noYS", "noYZ", "noYSZS", "noS", "noZ",
"noY", "justS", "justZ", "noSZ", "noYSZ")
# Works
testouts <- foreach(i = 1:length(mB)) %do% {
# mB[i]
MO[mB[i]]
}
testouts
# all NA
testouts2 <- foreach(i = 1:length(mB)) %dopar% {
# mB[i]
MO[mB[i]]
}
testouts2
# Works
testouts3 <- alply(mB, 1, .fun = function(i) { MO[mB[i]]} )
testouts3
# fails "$ operator is invalid for atomic vectors"
testouts4 <- alply(mB, 1, .fun = function(i) { MO[mB[i]]},
.parallel = TRUE,
.paropts = list(.export=ls(.GlobalEnv)))
testouts4
stopCluster(cl)
I've tried various combinations of double brackets like MO[mB[[i]]], to no avail. mB[i] instead of MO[mB[i]] works in all 4 and returns a list of the numbers. I've tried .export(c("MO", "mB")) but just get the message that those objects are already exported.
I assume that there's something I misunderstand about evaluation of expressions like MO[mB[i]] in different environments, but there may be other things I misunderstand, too.
sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252 LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] parallel stats graphics grDevices utils datasets methods base
other attached packages:
[1] plyr_1.8.4 doParallel_1.0.13 iterators_1.0.9 foreach_1.5.0
loaded via a namespace (and not attached):
[1] compiler_3.5.1 tools_3.5.1 listenv_0.7.0 Rcpp_0.12.17 codetools_0.2-15 digest_0.6.15 globals_0.12.1 future_1.8.1 fortunes_1.5-5
The problem appears to be with version 1.5.0 of foreach on r-forge. Version 1.4.4 from CRAN works fine for both foreach %dopar% and llply(.parallel = TRUE). For anyone finding this post when searching for %dopar% with lists, here's the code, where mList is a named list of formulas and tList is a named list of pairs of model names to be compared.
tList <- list(Z1 = c("Full", "noYZ"),
              Z2 = c("noYS", "noYSZS"),
              S1 = c("Full", "noYS"),
              S2 = c("noYZ", "noYSZS"),
              A1 = c("noYSZS", "noY"),
              A2 = c("noSZ", "noYSZ")
              )
cl <- makePSOCKcluster(params$nCores) # value from YAML params:
registerDoParallel(cl)
# first run the models
modouts <- foreach(imod = 1:length(mList),
                   .packages = "lme4") %dopar% {
  glmer(as.formula(mList[[imod]]),
        data = dsn,
        family = poisson,
        control = glmerControl(optimizer = "bobyqa",
                               optCtrl = list(maxfun = 100000),
                               check.conv.singular = "warning")
        )
}
names(modouts) <- names(mList)
####
# now run the parametric bootstrap tests
nSim <- 500
testouts <- foreach(i = seq_along(tList),
                    .packages = "pbkrtest") %dopar% {
  PBmodcomp(modouts[[tList[[i]][1]]],
            modouts[[tList[[i]][2]]],
            nsim = nSim)
}
names(testouts) <- names(tList)
stopCluster(cl)
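For anyone hitting the same all-NA symptom, a quick way to confirm which foreach build is in use (the version numbers are those quoted above):
packageVersion("foreach")   # 1.4.4 from CRAN behaved correctly here;
                            # 1.5.0 from r-forge produced the NA results
# if needed, reinstall the CRAN release:
# install.packages("foreach")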

Slow dot product in R

I am trying to take the dot product of a 331x23152 and a 23152x23152 matrix.
In Python and Octave this is a trivial operation, but in R this seems to be incredibly slow.
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M,mean=0,sd=1), N, M)
mat_2 = matrix( rnorm(N*M,mean=0,sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is
user system elapsed
101.95 0.04 101.99
In other words, this dot product takes over 100 seconds to execute.
I am running R-3.4.0 64-bit, with RStudio v1.0.143, on an i7-4790 with 16 GB RAM. As such, I did not expect this operation to take so long.
Am I overlooking something? I have started looking into the packages bigmemory and bigalgebra, but I can't help but think there's a solution without having to resort to packages.
EDIT
To give you an idea of time difference, here's a script for Octave:
n = 331;
m = 23152;
mat_1 = rand(n,m);
mat_2 = rand(m,m);
tic
mat_3 = mat_1*mat_2;
toc
The output is
Elapsed time is 3.81038 seconds.
And in Python:
import numpy as np
import time
n = 331
m = 23152
mat_1 = np.random.random((n,m))
mat_2 = np.random.random((m,m))
tm_1 = time.time()
mat_3 = np.dot(mat_1,mat_2)
tm_2 = time.time()
tm_3 = tm_2 - tm_1
print(tm_3)
The output is
2.781277894973755
As you can see, these numbers are not even in the same ballpark.
EDIT 2
At Zheyuan Li's request, here are toy examples for dot products.
In R:
mat_1 = matrix(c(1,2,1,2,1,2), nrow = 2, ncol = 3)
mat_2 = matrix(c(1,1,1,2,2,2,3,3,3), nrow = 3, ncol = 3)
mat_3 = mat_1 %*% mat_2
print(mat_3)
The output is:
[,1] [,2] [,3]
[1,] 3 6 9
[2,] 6 12 18
In Octave:
mat_1 = [1,1,1;2,2,2];
mat_2 = [1,2,3;1,2,3;1,2,3];
mat_3 = mat_1*mat_2
The output is:
mat_3 =
3 6 9
6 12 18
In Python:
import numpy as np
mat_1 = np.array([[1,1,1],[2,2,2]])
mat_2 = np.array([[1,2,3],[1,2,3],[1,2,3]])
mat_3 = np.dot(mat_1, mat_2)
print(mat_3)
The output is:
[[ 3 6 9]
[ 6 12 18]]
For more information on matrix dot products: https://en.wikipedia.org/wiki/Matrix_multiplication
EDIT 3
The output for sessionInfo() is:
> sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=Dutch_Netherlands.1252 LC_CTYPE=Dutch_Netherlands.1252 LC_MONETARY=Dutch_Netherlands.1252
[4] LC_NUMERIC=C LC_TIME=Dutch_Netherlands.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached):
[1] compiler_3.4.0 tools_3.4.0
EDIT 4
I tried the bigalgebra package but this did not seem to speed things up:
library('bigalgebra')
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M,mean=0,sd=1), N, M)
mat_1 <- as.big.matrix(mat_1)
mat_2 = matrix( rnorm(N*M,mean=0,sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is:
user system elapsed
101.79 0.00 101.81
EDIT 5
James suggested altering my randomly generated matrices:
N <- 331
M <- 23152
mat_1 = matrix( runif(N*M), N, M)
mat_2 = matrix( runif(M*M), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is:
user system elapsed
102.46 0.05 103.00
This is a trivial operation?? Matrix multiplication is always an expensive operation in linear algebra computations.
Actually, I think it is quite fast. A matrix multiplication at this size requires
2 * 331 * 23152 * 23152 ≈ 354.8 GFLOP of work.
At 100 seconds, your performance is about 3.5 GFLOPS. Note that on most machines the performance is at most 0.8 to 2 GFLOPS, unless you have an optimized BLAS library.
If the implementations elsewhere are faster, check whether they use an optimized BLAS or parallel computing. R is doing this with a standard BLAS and no parallelism.
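Spelling out that arithmetic in R:
# FLOP count of a (331 x 23152) %*% (23152 x 23152) product
N <- 331; M <- 23152
flop <- 2 * N * M * M   # ~3.548e11, i.e. ~354.8 GFLOP of work
flop / 1e9 / 100        # ~3.5 GFLOPS when the product takes 100 seconds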
Important
From R 3.4.0, more tools are available for working with BLAS.
First of all, sessionInfo() now returns the full path of the linked BLAS library. And it points not to the symbolic link but to the final shared object! The other answer here shows exactly this: it has OpenBLAS.
The timing result in the other answer implies that parallel computing (via multi-threading in OpenBLAS) is in place. It is hard for me to tell the number of threads used, but it looks like hyperthreading is on, as the "system" slot is quite big!
Second, the matrix multiplication method can now be chosen via options(matprod = ...). Although this was introduced to deal with NA / NaN, it is useful for performance testing, too:
"internal" is a non-optimized triple-loop-nest implementation written in C; its performance matches that of the standard (reference) BLAS written in F77;
"default", "blas" and "default.simd" all use the linked BLAS for the computation, but differ in how NA and NaN are checked. If R is linked to the standard BLAS then, as said, they perform the same as "internal"; otherwise we see a significant boost. Also note that the R team says "default.simd" might be removed in the future.
Based on the replies from knb and Zheyuan Li, I started investigating optimized BLAS packages. I came across GotoBLAS, OpenBLAS, and MKL, e.g. here.
My conclusion was that MKL should outperform the default BLAS by far.
It seems R has to be built from source in order to incorporate MKL. Instead, I found R Open, which has MKL (optionally) built in, so installation is a breeze.
With the following code:
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M,mean=0,sd=1), N, M)
mat_2 = matrix( rnorm(N*M,mean=0,sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)
The output is:
user system elapsed
10.61 0.10 3.12
As such, one solution to this problem is to use MKL instead of default BLAS.
However, upon investigation, my real-life matrices are highly sparse. I was able to take advantage of that fact by using the Matrix package, e.g. Matrix(x = mat_1, sparse = TRUE), where mat_1 is a highly sparse matrix. This brought the execution time down to around 3 seconds.
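A sketch of that sparse route (this only pays off when the matrices really are sparse; the random dense matrices from the question would not benefit):
library(Matrix)
mat_1_sp <- Matrix(mat_1, sparse = TRUE)   # stored in compressed sparse form
mat_2_sp <- Matrix(mat_2, sparse = TRUE)
tm_sp <- system.time(mat_3_sp <- mat_1_sp %*% mat_2_sp)
print(tm_sp)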
I have a similar machine: Linux PC, 16 GB RAM, Intel 4770K.
Relevant output from sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.2 LTS
Matrix products: default
BLAS: /usr/lib/openblas-base/libblas.so.3
LAPACK: /usr/lib/libopenblasp-r0.2.18.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=de_DE.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=de_DE.UTF-8 LC_MESSAGES=en_US.UTF-8 LC_PAPER=de_DE.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] knitr_1.15.1 clipr_0.3.2 tibble_1.3.0 colorout_1.1-2
loaded via a namespace (and not attached):
[1] compiler_3.4.0 tools_3.4.0 Rcpp_0.12.10
On my machine, your code snippet takes ~5 seconds (started RStudio, created empty .R file, ran snippet, output):
user system elapsed
27.608 5.524 4.920
Snippet:
N <- 331
M <- 23152
mat_1 = matrix( rnorm(N*M,mean=0,sd=1), N, M)
mat_2 = matrix( rnorm(N*M,mean=0,sd=1), M, M)
tm3 <- system.time({
  mat_3 = mat_1 %*% mat_2
})
print(tm3)

Quantstrat WFA with intraday Data

I've been getting WFA to run on the full set of intraday GBPUSD 30-min data and have come across a couple of things that need addressing. The first is that I believe the save function needs changing to remove the time from the string (as shown here in a pull request on the R-Finance/quantstrat repo on GitHub). The walk.forward function throws this error:
Error in gzfile(file, "wb") : cannot open the connection
In addition: Warning message:
In gzfile(file, "wb") :
cannot open compressed file 'wfa.GBPUSD.2002-10-21 00:30:00.2002-10-23 23:30:00.RData', probable reason 'Invalid argument'
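To illustrate the file-name issue (this is an assumption about the cause, not the actual patch in the pull request): the audit file name embeds timestamps containing ":" characters, which Windows does not accept in file names, so gzfile() cannot create the file. Stripping or replacing those characters gives a valid path:
fname <- "wfa.GBPUSD.2002-10-21 00:30:00.2002-10-23 23:30:00.RData"
gsub("[: ]", "_", fname)
# [1] "wfa.GBPUSD.2002-10-21_00_30_00.2002-10-23_23_30_00.RData"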
The second is a rare-case scenario where it ends up calling runSum on a data set with fewer rows than the period you are testing (n). This is the traceback():
8: stop("Invalid 'n'")
7: runSum(x, n)
6: runMean(x, n)
5: (function (x, n = 10, ...)
{
ma <- runMean(x, n)
if (!is.null(dim(ma))) {
colnames(ma) <- "SMA"
}
return(ma)
})(x = Cl(mktdata)[, 1], n = 25)
4: do.call(indFun, .formals)
3: applyIndicators(strategy = strategy, mktdata = mktdata, parameters = parameters,
...)
2: applyStrategy(strategy, portfolios = portfolio.st, mktdata = symbol[testing.timespan]) at custom.walk.forward.R#122
1: walk.forward(strategy.st, paramset.label = "WFA", portfolio.st = portfolio.st,
account.st = account.st, period = "days", k.training = 3,
k.testing = 1, obj.func = my.obj.func, obj.args = list(x = quote(result$apply.paramset)),
audit.prefix = "wfa", anchored = FALSE, verbose = TRUE)
The extended GBPUSD data used in the creation of the Luxor demo includes an erroneous date (2002/10/27) with only 1 observation, which causes this problem. I can also foresee this being an issue when testing longer signal periods on instruments like Crude, which have only a few trading hours on Sunday evenings (UTC).
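A minimal reproduction of that failure mode outside quantstrat (assuming TTR's SMA, which is what the traceback shows):
library(TTR)
x <- rnorm(5)    # fewer observations than the indicator period
SMA(x, n = 25)
# Error in runSum(x, n) : Invalid 'n'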
Given that I have purely been following the Luxor demo with the same (extended) intraday data set, are these genuine issues, or have they been caused by package updates, etc.?
What is the preferred way to report these things to the authors of quantstrat, and to find out if/when fixes are likely to be made?
SessionInfo():
R version 3.3.0 (2016-05-03)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252 LC_MONETARY=English_Australia.1252 LC_NUMERIC=C LC_TIME=English_Australia.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] quantstrat_0.9.1739 foreach_1.4.3 blotter_0.9.1741 PerformanceAnalytics_1.4.4000 FinancialInstrument_1.2.0 quantmod_0.4-5 TTR_0.23-1
[8] xts_0.9.874 zoo_1.7-13
loaded via a namespace (and not attached):
[1] compiler_3.3.0 tools_3.3.0 codetools_0.2-14 grid_3.3.0 iterators_1.0.8 lattice_0.20-33
quantstrat is on github here:
https://github.com/braverock/quantstrat
Issues and patches should be reported via github issues.

R file connection when using parallel

I am trying to make a file connection within a cluster (using parallel).
While it works correctly in the global environment, it gives me an error message when used within the members of the cluster (see the script below).
Did I miss something?
Any suggestions?
Thanks,
# This part works
#----------------
cat("This is a test file" , file={f <- tempfile()})
con <- file(f, "rt")
# Doing what I think is the same thing gives an error message when executed in parallel
#--------------------------------------------------------------------------------------
library(parallel)
cl <- makeCluster(2)
## Exporting the object f into the cluster
clusterExport(cl, "f")
clusterEvalQ(cl[1], con <- file(f[[1]], "rt"))
#Error in checkForRemoteErrors(lapply(cl, recvResult)) :
# one node produced an error: cannot open the connection
## Creating the object f into the cluster
clusterEvalQ(cl[1],cat("This is a test file" , file={f <- tempfile()}))
clusterEvalQ(cl[1],con <- file(f, "rt"))
#Error in checkForRemoteErrors(lapply(cl, recvResult)) :
# one node produced an error: cannot open the connection
############ Here is my sessionInfo() ###################
# R version 3.3.0 (2016-05-03)
# Platform: x86_64-w64-mingw32/x64 (64-bit)
# Running under: Windows 7 x64 (build 7601) Service Pack 1
#
# locale:
# [1] LC_COLLATE=French_Canada.1252 LC_CTYPE=French_Canada.1252
# [3] LC_MONETARY=French_Canada.1252 LC_NUMERIC=C
# [5] LC_TIME=French_Canada.1252
#
# attached base packages:
# [1] stats graphics grDevices utils datasets methods base
#
Try changing the code to return a NULL rather than the created connection object:
clusterEvalQ(cl[1], {con <- file(f[[1]], "rt"); NULL})
Connection objects can't be safely sent between the master and workers, but this method avoids that.
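As a follow-up sketch under the same setup: keep the connection entirely on the worker and only return plain data from the evaluated expression.
clusterEvalQ(cl[1], {
  con <- file(f, "rt")            # f was exported to / created on the worker above
  txt <- readLines(con, warn = FALSE)
  close(con)
  txt                             # a character vector serialises back fine
})
# [[1]]
# [1] "This is a test file"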

Loading ffdf data takes a lot of memory

I am facing a strange problem:
I save ffdf data using
save.ffdf()
from the ffbase package, and when I load it in a new R session with
load.ffdf("data.f")
it takes up in RAM roughly 90% of the memory that the same data occupies as a data.frame object in R.
Given this issue, it does not make a lot of sense to use ffdf, does it?
I can't use ffsave because I am working on a server and do not have the zip application on it.
packageVersion("ff")     # 2.2.10
packageVersion("ffbase") # 0.6.3
Any ideas?
[edit] some code example to help to clarify:
data <- read.csv.ffdf(file = fn, header = T, colClasses = classes)
# file fn is a csv database with 5 columns and 2.6 million rows,
# with some factor cols and some integer cols.
data.1 <- data
save.ffdf(data.1 , dir = my.dir) # my.dir is a string pointing to the file. "C:/data/R/test.f" for example.
closing the R session... opening again:
load.ffdf(file.name) # file.name is a string pointing to the file.
#that gives me object data, with class(data) = ffdf.
Then I have a data object, an ffdf with 5 columns, and its memory size is almost as big as that of:
data.R <- data[,] # which is a data.frame.
[end of edit]
[ SECOND EDIT :: FULL REPRODUCIBLE CODE ]
As my question has not been answered yet and I still see the problem, here is a reproducible example:
dir1 <- 'P:/Projects/RLargeData';
setwd(dir1);
library(ff)
library(ffbase)
memory.limit(size=4000)
N = 1e7;
df <- data.frame(
  x = c(1:N),
  y = sample(letters, N, replace = T),
  z = sample(as.Date(sample(c(1:2000), N, replace = T), origin = "1970-01-01")),
  w = factor(sample(c(1:N/10), N, replace = T)))
df[1:10,]
dff <- as.ffdf(df)
head(dff)
#str(dff)
save.ffdf(dff, dir = "dframeffdf")
dim(dff)
# on disk, the directory "dframeffdf" is : 205 MB (215.706.264 bytes)
### resetting R :: fresh RStudio Session
dir1 <- 'P:/Projects/RLargeData';
setwd(dir1);
library(ff)
library(ffbase)
memory.size() # 15.63
load.ffdf(dir = "dframeffdf")
memory.size() # 384.42
gc()
memory.size() # 287
So we have 384 MB in memory, and after gc() there are 287 MB, which is around the size of the data on disk (checked also in the "Process Explorer" application for Windows).
> sessionInfo()
R version 2.15.2 (2012-10-26)
Platform: i386-w64-mingw32/i386 (32-bit)
locale:
[1] LC_COLLATE=Danish_Denmark.1252 LC_CTYPE=Danish_Denmark.1252 LC_MONETARY=Danish_Denmark.1252 LC_NUMERIC=C LC_TIME=Danish_Denmark.1252
attached base packages:
[1] tools stats graphics grDevices utils datasets methods base
other attached packages:
[1] ffbase_0.7-1 ff_2.2-10 bit_1.1-9
[END SECOND EDIT ]
In ff, when you have factor columns, the factor levels are always in RAM. ff character columns currently don't exist and character columns are converted to factors in an ffdf.
Regarding your example: your 'w' column in 'dff' contains more than 6 million levels. These levels are all in RAM. If you didn't have columns with a lot of levels, you wouldn't see the RAM increase, as shown below using your example.
N = 1e7;
df <- data.frame(
  x = c(1:N),
  y = sample(letters, N, replace = T),
  z = sample(as.Date(sample(c(1:2000), N, replace = T), origin = "1970-01-01")),
  w = sample(c(1:N/10), N, replace = T))
dff <- as.ffdf(df)
save.ffdf(dff, dir = "dframeffdf")
### resetting R :: fresh RStudio Session
library(ff)
library(ffbase)
memory.size() # 14.67
load.ffdf(dir = "dframeffdf")
memory.size() # 14.78
The ff/ffdf packages have mechanisms for segregating an object into 'physical' and 'virtual' storage. I suspect you are implicitly constructing items in physical memory, but since you offer no code for how this workspace was created, there's only so much guessing that is possible.
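If it helps to see that split, ff exposes it directly (a sketch assuming the dff object from the reproducible example above):
library(ff)
library(ffbase)
physical(dff)          # the on-disk ff vectors backing each column
virtual(dff)           # the lightweight in-RAM part (names, row names, ...)
length(levels(dff$w))  # the factor levels, which are held in RAM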
