R language: saving SpatialPixelsDataFrame objects

Excuse me in advance for the basic question.
Converting SpatialGridDataFrame objects to SpatialPixelsDataFrame objects can be a time- and memory-demanding task, especially when big grids are involved.
I have been unsuccessfully googling for a way to create the SpatialPixelsDataFrame object once and save it in such a way that I can later load it back as... a SpatialPixelsDataFrame object.
Can anybody tell me how to do that?
Thank you
perep

You can save R objects in RDS files:
saveRDS(anything, file="anything.rds")
and then load it back:
anything = readRDS(file="anything.rds")
Someone may suggest you use save() to an RData file instead:
save(anything, file="mything.RData")
but that means that, unless you do a bit of fiddling, you will have to load it back into an object called anything:
rm(anything)
load(file="mything.RData")
summary(anything) # A magic "anything" has appeared!
So use RDS files; then you can load them back into any object name you like:
foo = readRDS("anything.rds")
bar = readRDS("anything.rds")
and so on.
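Applied to the original question, a minimal sketch (assuming the sp package, and that the expensive conversion starts from a SpatialGridDataFrame named grid; the object and file names are placeholders):
library(sp)
# do the expensive coercion once
pix <- as(grid, "SpatialPixelsDataFrame")
# save the result so the conversion never has to be repeated
saveRDS(pix, file = "pix.rds")
# later, in a fresh session, it comes back as a SpatialPixelsDataFrame
pix <- readRDS("pix.rds")
class(pix)  # "SpatialPixelsDataFrame"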

Related

How to load a single object from .Rdata file? [duplicate]

I have an Rdata file containing various objects:
New.Rdata
|_ Object 1 (e.g. data.frame)
|_ Object 2 (e.g. matrix)
|_...
|_ Object n
Of course I can load the data frame with load('New.Rdata'); however, is there a smart way to load only one specific object out of this file and discard the others?
.RData files don't have an index (the contents are serialized as one big pairlist). You could hack a way to go through the pairlist and assign only the entries you like, but it's not easy, since you can't do it at the R level.
However, you can simply convert the .RData file into a lazy-load database which serializes each entry separately and creates an index. The nice thing is that the loading will be on-demand:
# convert .RData -> .rdb/.rdx
e = local({load("New.RData"); environment()})
tools:::makeLazyLoadDB(e, "New")
Loading the DB then only loads the index but not the contents. The contents are loaded as they are used:
lazyLoad("New")
ls()
x # if you had x in the New.RData it will be fetched now from New.rdb
Just like with load(), you can specify an environment to load into, so you don't need to pollute the global workspace.
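For instance, a small sketch of loading into a separate environment (assuming the New.rdb/New.rdx pair created above):
e2 <- new.env()
lazyLoad("New", envir = e2)  # only the index is read at this point
ls(e2)                       # the names are visible...
e2$x                         # ...and x is fetched from New.rdb on first access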
You can use attach() rather than load(); this attaches the file's objects to the search path, from which you can copy the one object you are interested in before detaching the .Rdata file again.
This still loads everything, but it is simpler to work with than loading everything into the global workspace (possibly overwriting things you don't want overwritten) and then getting rid of everything you don't want.
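A minimal sketch of that approach (assuming New.RData contains an object named x; the names are placeholders):
attach("New.RData")        # adds "file:New.RData" to the search path
x_copy <- x                # copy just the object you want
detach("file:New.RData")   # drop everything else again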
Simon Urbanek's answer is very, very nice. A drawback is that it doesn't seem to work if an object to be saved is too large:
tools:::makeLazyLoadDB(
  local({
    x <- 1:1e+09
    cat("size:", object.size(x), "\n")
    environment()
  }), "lazytest")
size: 4e+09
Error: serialization is too large to store in a raw vector
I'm guessing that this is due to a limitation of the current implementation of R (I have 2.15.2) rather than running out of physical memory and swap. The saves package might be an alternative for some uses, however.
Here is a function that extracts a single object without loading everything else in the RData file:
extractorRData <- function(file, object) {
  #' Extract a single object from a .RData file created by R's save() command
  #' Inputs: RData file, object name
  E <- new.env()
  load(file = file, envir = E)
  get(object, envir = E, inherits = FALSE)
}
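Usage, assuming New.RData contains a data frame saved under the (hypothetical) name Object1:
df1 <- extractorRData("New.RData", "Object1")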
See the full answer here: https://stackoverflow.com/a/65964065/4882696
This blog post gives a neat practice that prevents this sort of issue in the first place. The gist of it is to use the saveRDS() and readRDS() functions instead of the regular save() and load() functions.

How to output a list of dataframes that can be used by another user

I have a list whose elements are several dataframes. [screenshot of the list structure omitted]
Because it is hard for another user to use these data by re-running my original code, I would like to export the list. As the screenshot shows, the dataframes in the list have different numbers of rows. Is there any method to export it as a file without losing any information, so that it can be used again in RStudio? I have tried to save it as RData, but I don't know how to preserve the information.
Thanks a lot
To output objects in R, here are 4 common methods:
dput() writes a text representation of an R object
This is very convenient if you want to let someone get your object by copying and pasting text (for instance on this site), without having to email or upload and download a file. The downside, however, is that the output is long, and re-reading the object into R (simply by assigning the copied text to an object) can hang R for large objects. This works best for creating reproducible examples. For a list of data frames, this would not be a very good option.
You can print an object to a .csv, .xlsx, etc. file with write.table(), write.csv(), readr::write_csv(), xlsx::write.xlsx(), etc.
While the file can then be used by other software (and re-imported into R with read.csv(), readr::read_csv(), readxl::read_excel(), etc.), the data can be transformed in the process, and some objects cannot be written to a single file without prior modification. So this is not ideal in your case either.
save.image() saves your entire workspace (objects + environment)
The workspace can then be recreated with load(). This can be useful, but here you are only interested in saving one object. In that case, it is preferable to use:
saveRDS(), which allows you to write one object to file
The object can then be re-created with readRDS(). This is the best option for saving an R object to file without any modification and re-creating it later.
In your situation, this is definitely the best solution; see the sketch after this list.
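A minimal sketch, assuming the list is named my_list (a made-up stand-in with data frames of different row counts):
# a small stand-in for the real list
my_list <- list(
  a = data.frame(x = 1:3, y = c("p", "q", "r")),
  b = data.frame(x = 1:5)
)
# one call writes the whole list, nested structure and all
saveRDS(my_list, file = "my_list.rds")
# the other user re-creates it, identical, under any name they like
their_list <- readRDS("my_list.rds")
str(their_list)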

"filename.rdata" file Exploring and Converting to CSV

I'm no R programmer (because of this problem I started learning it); I'm using Python. For a forecasting task I got a dataset signalList.rdata of a phenomenon called partial discharge.
I tried some commands to load, open, and view it, and hardly got a glimpse:
my_data <- get(load('C:/Users/Zack-PC/Desktop/Study/Data Sets/pdCluster/signalList.Rdata'))
but since I lack deep knowledge of R, I wanted to convert it into a csv file, or any format that I can deal with in Python,
or explore it and copy-paste manually.
So I'm asking for any solution, whether using R, Python, or any other tool, to get at what's in the .rdata file.
Have you managed to load the data successfully into your working environment?
If so, write.csv is the function you are looking for.
If not,
setwd("C:/Users/Zack-PC/Desktop/Study/Data Sets/pdCluster/")
signalList <- get(load("signalList.Rdata"))  # load() returns the object's name; get() fetches the object itself
write.csv(signalList, "signalList.csv")
should do the trick.
If you would like to remove signalList from your workspace afterwards,
rm(signalList)
will accomplish this.
Note: changing your working directory isn't necessary; I just feel it makes the code easier to read. You may also specify another path for saving your csv within the second argument of write.csv.
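If the .Rdata file turns out to hold several objects rather than one, a hedged sketch that writes each data frame it finds to its own csv (the object names are whatever the file actually contains):
e <- new.env()
load("signalList.Rdata", envir = e)  # load everything into a scratch environment
for (nm in ls(e)) {
  obj <- get(nm, envir = e)
  if (is.data.frame(obj)) {
    write.csv(obj, paste0(nm, ".csv"), row.names = FALSE)
  }
}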


How to save large output sufficiently fast in text or any other format?

My question is: how can I save the output, i.e. mydata, reasonably fast?
mydata = array(sample(100), dim = c(2, 100, 4000))
I used the reshape2 package as suggested here.
melt(mydata)
and
write.table(mydata,file="data_1")
But it is taking more than one hour to save the data to the file. I am looking for any faster way to do the job.
I strongly suggest referring to this great post, which helps make the issues around file saving clear.
Anyway, saveRDS could be the most adequate option for you. The most relevant difference in this case is that save() can save many objects to a file in a single call, whilst saveRDS(), being a lower-level function, works with a single object at a time.
save() and load() allow you to save a named R object to a file or other connection and restore that object again. But, when loaded, the named object is restored to the current environment with the same name it had when saved.
saveRDS() and readRDS(), instead, allow you to save a single R object to a connection (typically a file) and to restore the object, possibly under a different name. This lower-level operation probably makes the RDS functions more efficient for your case.
Read the help text for saveRDS using ?saveRDS. This will probably be the best way for you to save and load large dataframes.
saveRDS(yourdata, file = "yourdata.rds")
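Applied to the example above, a minimal sketch; note that compress = FALSE trades file size for write speed, which may help here:
mydata <- array(sample(100), dim = c(2, 100, 4000))
# binary, single object, no text conversion; skipping compression speeds up the write
saveRDS(mydata, file = "mydata.rds", compress = FALSE)
# restore later, possibly under a different name
mydata2 <- readRDS("mydata.rds")
identical(mydata, mydata2)  # TRUE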
