I googled but couldn't find any functions in Julia that can read and write R's RData (.rda) files. Is there any library/function/package that allows me to do this? There appears to be an RDA.jl file in the src directory of DataFrames, but I didn't find any mention of it in the DataFrames documentation.
The function you are looking for is read_rda, which comes with the DataFrames package. So,
read_rda(filename)
should work and return a Dict mapping variable names to data.
Related
I am doing my computations in Julia and have results stored in JLD files, and I want to know if there is some interface that makes R able to read them. Or is there a better file format for transferring data structures between R and Julia?
As mentioned in the comments, use the rhdf5 package to read .jld files into R.
The syntax is simply:
h5read("path/to/file.jld", "field_name")
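For completeness, a minimal sketch in R, assuming rhdf5 has been installed from Bioconductor and the .jld file contains a dataset saved under the name "x" (the file and dataset names here are illustrative):

```r
# rhdf5 is distributed via Bioconductor, not CRAN:
# if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
# BiocManager::install("rhdf5")
library(rhdf5)

h5ls("results.jld")               # inspect which datasets the file contains
x <- h5read("results.jld", "x")   # read the object Julia saved as "x"
```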
I have submitted an R package to CRAN. I need to include example .csv files, which I placed under the /data directory.
To get R CMD check to pass, the examples in the .Rd file have to refer to the data .csv files as below:
pkg-function(system.file("data", <csv file>, package = pkg-name),par1) -- (1)
Using this format passes R CMD check and also works after the package is installed, unlike hard-coded path names.
But I want the user to be able to refer to the .csv files in a simple way, as follows:
pkg-function(path-to-file, par1) -- (2)
Since the examples in the .Rd file will be in form (1), they will confuse the user.
Is there a clean way to call the package functions in the examples (.Rd) using format (2)?
I would recommend a change in expectations in your case. Instead of trying to write a clear and obvious example in as little code as possible, perhaps you can write a little more code, with comments, to illustrate what you are doing.
For example:
#* retrieve the file path of a data file installed with
#* [your package's name]
#* see '?system.file' for details.
Path <- system.file(...)
#* execute function
pkg-function(Path, par1)
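A concrete version of the same idea, with hypothetical names (package mypkg, file example.csv under /data) to show what the user would actually run:

```r
#* retrieve the path of example.csv as installed with mypkg
#* (the package and file names here are hypothetical)
Path <- system.file("data", "example.csv", package = "mypkg")

#* execute the function on that path, exactly as in form (2)
# pkg-function(Path, par1)
```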
While developing a package I encountered the problem of supplementary data import - this has been 'kind of' solved here.
Nevertheless, I need to use a function from another package, which needs a path to the file in question. Sadly, using global-environment variables is not an option here.
[By the way: the file needs to be .txt, while supplementary data should be .RData. The function is quite picky.]
So I need to know how to get the path to a supplementary data file of a package. Is this even possible to do?
I had the idea of reading the .RData into the global environment and then saving it to a temporary file for further processing. I would really like to know a clean way - the supplementary data is ~100 MB large...
Thank you very much!
Use system.file() to reliably find the path to the installed package and its sub-directories. Typically the files are created in your-pkg-source/inst/extdata/your-file.txt and then referenced as
system.file(package="your-pkg", "extdata", "your-file.txt")
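As a quick sanity check (package and file names are hypothetical), system.file() returns "" when the file cannot be found, so you can verify the path before handing it to the picky function:

```r
path <- system.file(package = "your-pkg", "extdata", "your-file.txt")
if (!nzchar(path)) stop("your-file.txt not found in your-pkg")
# other_pkg_function(path)   # hypothetical: the function that insists on a file path
```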
I'm compiling a knitr document from a .R file:
knit2pdf("example.Rnw", output = paste0(name, ".tex"))
But the document example.Rnw contains the call:
do.call(cbind,mget(as.character(rep_names)))
where rep_names holds the names of some data frames created by a loop, which I want to merge. The thing is, I don't know the exact number of data frames created.
If I compile the document directly with knitr it works perfectly, but when I execute it from the .R file, it doesn't find the objects named in rep_names. Example from the .pdf output:
## Error: value for 'Object_1' not found
where Object_1 is rep_names[1]. The problem is: in which session are the objects created?
It's hard to provide any assistance here without a reproducible example. Looking at what you have, I imagine you should change the way you're creating the dataframes to put them in a list rather than as named objects in the global environment. This way you can change the problematic line to do.call(cbind, listofdfs). There are numerous questions and answers here that recommend this strategy.
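A minimal sketch of that strategy (the object names and loop bounds are illustrative):

```r
# build the data frames in a list instead of assigning them
# as named objects in the global environment
rep_list <- list()
for (i in 1:3) {                      # the loop count need not be known in advance
  rep_list[[paste0("Object_", i)]] <- data.frame(x = rnorm(5))
}
combined <- do.call(cbind, rep_list)  # replaces do.call(cbind, mget(...))
```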
I am making my first attempt at writing an R package. I am loading one .csv file from my hard drive, and I hope to bundle my R code and my .csv files into one package later.
My question is: how can I load my .csv file once my package is generated? Right now the file path is something like c:\R\mydirectory....\myfile.csv, but after I send the package to someone else, how can I refer to that file with a relative path?
Feel free to correct this question if it is not clear to others!
You can put your csv files in the data directory or in inst/extdata.
See the Writing R Extensions manual - Section 1.1.5 Data in packages.
To import the data you can use, e.g.,
R> data("achieve", package="flexclust")
or
R> read.table(system.file("data/achieve.txt", package = "flexclust"))
Look at the R help for package.skeleton: this function
automates some of the setup for a new source package. It creates directories, saves functions, data, and R code files to appropriate places, and creates skeleton help files and a ‘Read-and-delete-me’ file describing further steps in packaging.
The directory structure created by package.skeleton includes a data directory. If you put your data here it will be distributed with the package.
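A minimal sketch, assuming the csv has already been read into an object (all names here are illustrative):

```r
mydata <- read.csv("myfile.csv")   # read once from your local path
package.skeleton(name = "mypkg", list = "mydata")
# this saves mydata under mypkg/data/; after the package is installed,
# users can load it with data("mydata", package = "mypkg")
```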