I am working with a very large dataset and I would like to keep the data in H2O as much as possible without bringing it into R.
I noticed that whenever I pass an H2O Frame to a function, any modification I make to the Frame is not reflected outside of the function. Is there a way to pass the Frame by reference?
If not, what's the best way to modify the original frame inside a function without copying the entire Frame?
Another related question: does passing a Frame to other functions (read-only) make extra copies on the H2O side? My datasets are 30 GB to 100 GB, so I want to make sure passing them around does not cause memory issues.
mod = function(fdx) {
fdx[,"x"] = -1
}
d = data.frame(x = rnorm(100),y=rnorm(100))
dx = as.h2o(d)
dx[1,]
mod(dx)
dx[1,] # does not change the original value of x
> dx[1,]
x y
1 0.3114706 0.9523058
> dx[1,]
x y
1 0.3114706 0.9523058
Thanks!
H2O does a classic copy-on-write optimization. Thus:
No true copy is made unless you mutate the dataset.
Only changed/added columns are truly copied; all other columns are passed by reference.
Frames in R are pass-by-value, which H2O mimics.
Frames in Python are pass-by-reference, which H2O mimics.
In short, do as you would in R and you're fine.
No extra copies are made when you pass a frame to other functions read-only.
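If you do want a function's changes to be visible outside of it, the idiomatic approach under these pass-by-value semantics is to return the modified frame and reassign it. A minimal sketch based on the question's mod() function, assuming an H2O cluster is already running via h2o.init():
mod <- function(fdx) {
  fdx[, "x"] <- -1
  fdx  # return the modified frame
}

dx <- as.h2o(data.frame(x = rnorm(100), y = rnorm(100)))
dx <- mod(dx)  # reassign; only the changed "x" column is truly copied
dx[1, ]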
How is it possible that storing data into an H2O frame is slower than into a data.table?
# Packages used: h2o and data.table
library(h2o)
library(data.table)
h2o.init(nthreads = -1)

# Create the matrices
matrix1 <- data.table(matrix(rnorm(1000 * 1000), ncol = 1000, nrow = 1000))
matrix2 <- h2o.createFrame(1000, 1000)

# data.table element-wise store
for (i in 1:1000) {
  matrix1[i, 1] <- 3
}

# H2O frame element-wise store
for (i in 1:1000) {
  matrix2[i, 1] <- 3
}
Thanks!
H2O is a client/server architecture. (See http://docs.h2o.ai/h2o/latest-stable/h2o-docs/architecture.html)
So what you've shown is a very inefficient way to populate an H2O frame in H2O memory: every write turns into a network call. You almost certainly don't want this.
For your example, since the data isn't large, a reasonable approach is to do the initial assignment to a local data frame (or data.table) and then push it to H2O with as.h2o().
h2o_frame = as.h2o(matrix1)
head(h2o_frame)
This pushes an R data frame from the R client into an H2O frame in H2O server memory. (And you can do as.data.table() to do the opposite.)
data.table Tips:
For data.table, prefer the in-place := syntax. This avoids copies. So, for example:
matrix1[i, 3 := 42]
H2O Tips:
The fastest way to read data into H2O is to have the H2O server pull the file itself with h2o.importFile(). This ingest is parallel and distributed.
The as.h2o() trick shown above works well for small datasets that easily fit in the memory of one host.
If you want to watch the network messages between R and H2O, call h2o.startLogging().
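A minimal sketch of those last two tips, assuming a running cluster; the file path below is a placeholder, not from the original post:
library(h2o)
h2o.init(nthreads = -1)

h2o.startLogging()  # log the REST calls between the R client and the H2O server

# Parallel, distributed ingest straight into H2O server memory
big_frame <- h2o.importFile("/data/my_large_file.csv")  # placeholder path

h2o.stopLogging()
dim(big_frame)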
I can't answer your question because I don't know H2O. However, I can make a guess.
Your code to fill the data.table is slow because of R's copy-on-modify semantics. If you update your table by reference, you will speed up your code enormously.
# Original: copy-on-modify assignment (slow)
for (i in 1:1000) {
  matrix1[i, 1] <- 3
}

# Updated by reference with set() (fast)
for (i in 1:1000) {
  set(matrix1, i, 1L, 3)
}
With set(), my loop takes 3 milliseconds, while your loop takes 18 seconds (6000 times longer).
I suppose H2O works the same way, but with some extra work on top because this is a special object. Maybe some message-passing communication with the H2O cluster?
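For anyone who wants to reproduce the timing comparison above, a rough sketch (absolute timings will of course vary by machine):
library(data.table)
matrix1 <- data.table(matrix(rnorm(1000 * 1000), ncol = 1000, nrow = 1000))

system.time(for (i in 1:1000) matrix1[i, 1] <- 3)      # copy-on-modify: seconds
system.time(for (i in 1:1000) set(matrix1, i, 1L, 3))  # by reference: milliseconds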
After importing a relatively big table from MySQL into H2O on my machine, I tried to run a hashing algorithm (murmurhash from the R digest package) on one of its columns and save it back to H2O. As I found out, using as.data.frame on an H2OFrame object is not always advised: originally my H2OFrame is ~43k rows large, but the coerced data.frame usually contains only ~30k rows for some reason (the same goes for using base::apply/base::sapply/etc. on the H2OFrame).
I found out that there is an apply function for H2OFrames as well, but as far as I can see, it can only be used with built-in R functions.
So, for example, my code would look like this:
data[, "subject"] <- h2o::apply(data[, "subject"], 2, function(x)
digest(x, algo = "murmur32"))
I get the following error:
Error in .process.stmnt(stmnt, formalz, envs) :
Don't know what to do with statement: digest
I understand that only the predefined functions from the Java backend can be used to manipulate H2O data, but is there perhaps another way to use the digest package from the client side without converting the data to a data.frame? I was thinking that, in the worst case, I would have to use the R-MySQL driver to load the data first, manipulate it as a data.frame and then upload it to the H2O cloud. Thanks in advance for your help.
Due to the way H2O works, it cannot support arbitrary user-defined functions applied to H2OFrames the way that you can apply any function to a regular R data.frame. We already use the Murmur hash function in the H2O backend, so I have added a JIRA ticket to expose it to the H2O R and Python APIs. What I would recommend in the meantime is to copy just the single column of interest from the H2O cluster into R, apply the digest function and then update the H2OFrame with the result.
The following code will pull the "subject" column into R as a 1-column data.frame. You can then use the base R apply function to apply the murmur hash to every row, and lastly you can copy the resulting 1-column data.frame back into the "subject" column in your original H2OFrame, called data.
sub <- as.data.frame(data[, "subject"])
subhash <- apply(sub, 1, digest, algo = "murmur32")
data[, "subject"] <- as.h2o(subhash)
Since you only have 43k rows, I would expect that you'd still be able to do this with no issues on even a mediocre laptop since you are only copying a single column from the H2O cluster to R memory (rather than the entire data frame).
I have to manually collect some rows, and based on the R Cookbook's recommendation, I pre-allocate memory for a large data frame. Say my code is
dataSize <- 500000
shoesRead <- read.csv(file = "someShoeCsv.csv", header = TRUE, sep = ",")
shoes <- data.frame(size = integer(dataSize), price = double(dataSize),
                    cost = double(dataSize), retail = double(dataSize))
So now, I have some data about shoes which I imported via csv, and then I perform some calculation and want to insert into the data frame shoes. Let's say the someShoeCsv.csv has a column called ukSize and so
usSize <- ukSize * 1.05 #for example
My question is how do I do this? Running the code below, where usSize is the variable transformed from the ukSize column read from the csv file:
shoes <- rbind(shoes,
data.frame("size"=usSize, "price"=price,
"cost"=cost, "retail"=retail));
adds to the already large data frame.
I have experimented with building a list and then calling rbind, but I understand that is tedious, so I am trying this pre-allocation method instead, still to no avail.
I'm not quite sure what you're trying to do, but if you're trying to replace some of the pre-allocated rows with new data, you could do so like this:
Nreplace = length(usSize)
shoes$size[1:Nreplace] <- usSize
shoes$price[1:Nreplace] <- shoesRead$price
And so on, for the rest of the columns.
Here's some unsolicited advice. Looking at the code you've included, you reference ukSize and price etc without referencing the data frame, which makes it appear like you've done attach(shoesRead). Definitely never use attach(). If you want the price vector, for example, just do shoesRead$price. It's just a little bit more typing for the sake of much more readable code.
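Putting the pieces together, a minimal sketch of the pre-allocate-and-fill approach without attach(); column names other than ukSize in someShoeCsv.csv are assumed for illustration:
dataSize  <- 500000
shoesRead <- read.csv("someShoeCsv.csv", header = TRUE)
shoes <- data.frame(size = integer(dataSize), price = double(dataSize),
                    cost = double(dataSize), retail = double(dataSize))

# Derive the new column, referencing the source data frame explicitly
usSize   <- shoesRead$ukSize * 1.05
Nreplace <- length(usSize)

# Overwrite the pre-allocated rows instead of growing the frame with rbind()
shoes$size[1:Nreplace]   <- usSize
shoes$price[1:Nreplace]  <- shoesRead$price   # assumed column
shoes$cost[1:Nreplace]   <- shoesRead$cost    # assumed column
shoes$retail[1:Nreplace] <- shoesRead$retail  # assumed column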
Let's say I have the example code
kkk<-data.frame(m.mean=1:1000, m.sd=1:1000/20)
kkk[,3:502]<-NA
for (i in 1:nrow(kkk)){
kkk[i,3:502]<-rnorm(n=500, mean=kkk[i,1], sd=kkk[i,2])
}
I would like to convert this function to run in parallel with doMC. My problem is that foreach results in a list, whereas I need the result of each iteration to be a vector that can then be transferred to the data frame (which will later be exported as CSV for further processing).
Any ideas?
You don't need a loop for this, and putting a large matrix of numbers in a data frame only to treat it as a matrix is inefficient (although you may need to create a data frame at the end, after doing all your math, in order to write to a CSV file).
m.mean <- 1:1000
m.sd <- 1:1000/20
num.columns <- 500
x <- matrix(nrow=length(m.mean), ncol=num.columns,
data=rnorm(n=length(m.mean) * num.columns))
x <- x * cbind(m.sd)[,rep(1,num.columns)] + cbind(m.mean)[,rep(1,num.columns)]
kkk <- data.frame(m.mean=m.mean, m.sd=m.sd, unname(x))
write.csv(kkk, "kkk.txt")
To answer your original question about directly assigning results to an existing data structure from a foreach loop: that is not possible. The foreach package's parallel backends are designed to perform each computation in a separate R process, so each one has to return a separate object to the parent process, which collects them with the .combine function provided to foreach. You could write a parallel foreach loop that assigns directly to the kkk variable, but it would have no effect, because each assignment would happen in the separate processes and would not be shared with the main process.
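For completeness, here is a sketch of how the same computation could be written with foreach and the doMC backend, collecting each iteration's vector with .combine = rbind rather than assigning into kkk directly (the core count is arbitrary; doMC does not work on Windows):
library(doMC)
registerDoMC(cores = 4)  # arbitrary core count

m.mean <- 1:1000
m.sd   <- 1:1000 / 20

# Each iteration returns one vector of 500 draws; rbind stacks them into a matrix
x <- foreach(i = seq_along(m.mean), .combine = rbind) %dopar% {
  rnorm(n = 500, mean = m.mean[i], sd = m.sd[i])
}

kkk <- data.frame(m.mean = m.mean, m.sd = m.sd, unname(x))
write.csv(kkk, "kkk.txt")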
I can't find anything to the effect of an undo command in R (neither in An Introduction to R nor in R in a Nutshell). I am particularly interested in undoing/deleting when dealing with interactive graphs.
What approaches do you suggest?
You should consider a different approach which leads to reproducible work:
Pick an editor you like and which has R support
Write your code in 'snippets', i.e. short files for functions, and then use the facilities of the editor / R integration to send the code to the R interpreter
If you make a mistake, re-edit your snippet and run it again
You will always have a log of what you did
All this works tremendously well in ESS which is why many experienced R users like this environment. But editors are a subjective and personal choice; other people like Eclipse with StatET better. There are other solutions for Mac OS X and Windows too, and all this has been discussed countless times before here on SO and on other places like the R lists.
In general I do adopt Dirk's strategy. You should aim for your code to be a completely reproducible record of how you have transformed your raw data into output.
However, if you have complex code it can take a long time to re-run it all. I've had code that takes over 30 minutes to process the data (i.e., import, transform, merge, etc.).
In these cases, a single data-destroying line of code would require me to wait 30 minutes to restore my workspace.
By data destroying code I mean things like:
x <- merge(x, y)
df$x <- df$x^2
e.g., merges, replacing an existing variable with a transformation, removing rows or columns, and so on. In these cases it's easy, especially when first learning R, to make a mistake.
To avoid having to wait this 30 minutes, I adopt several strategies:
If I'm about to do something where there's a risk of destroying my active objects, I'll first copy the result into a temporary object. I'll then check that it worked with the temporary object and then rerun replacing it with the proper object.
E.g., first run temp <- merge(x, y); check that it worked with str(temp), head(temp), and tail(temp); and if everything looks good, run x <- merge(x, y).
As is common in psychological research, I often have large data frames with hundreds of variables and different subsets of cases. For a given analysis (e.g., a table, a figure, some results text), I'll often extract just the subset of cases and variables that I need into a separate object for the analysis and work with that object when preparing and finalising my analysis code. That way, I'm less likely to accidentally damage my main data frame. This assumes that the results of the analysis do not need to be fed back into the main data frame.
If I have finished performing a large number of complex data transformations, I may save a copy of the core workspace objects, e.g., save(x, y, z, file = 'backup.Rdata'). That way, if I make a mistake, I only have to reload these objects.
df$x <- NULL is a handy way of removing a variable you did not mean to create from a data frame.
However, in the end I still run all the code from scratch to check that the result is reproducible.