Using R to write a .mat file not giving the right output?

I had a .csv file that I wanted to read into Octave (I originally tried to use csvread). It was taking too long, so I tried to use R as a workaround: How to read large matrix from a csv efficiently in Octave
This is what I did in R:
forest_test=read.csv('forest_test.csv')
library(R.matlab)
writeMat("forest_test.mat", forest_test_data=forest_test)
and then I went back to Octave and did this:
forest_test = load('forest_test.mat')
This is not giving me a matrix, but a struct. What am I doing wrong?

To answer your exact question, you are using the load function wrong. You must not assign its output to a variable if you just want the variables in the file to be inserted into the workspace. From Octave's load help text:
If invoked with a single output argument, Octave returns data
instead of inserting variables in the symbol table. If the data
file contains only numbers (TAB- or space-delimited columns), a
matrix of values is returned. Otherwise, 'load' returns a
structure with members corresponding to the names of the variables
in the file.
Applied to our case:
## inserts all variables in the file in the workspace
load ("forest_test.mat");
## each variable in the file becomes a field in the forest_test struct
forest_test = load ("forest_test.mat");
But still, the link you posted about Octave being slow with CSV files makes reference to Octave 3.2.4, which is quite an old version. Have you confirmed this is still the case in a recent version (the last release was 3.8.2)?

There is a function designed to convert data frames to matrices:
?data.matrix
forest_test=data.matrix( read.csv('forest_test.csv') )
library(R.matlab)
writeMat("forest_test.mat", forest_test_data=forest_test)

Related

How to use a file modified by a R chunk in a Python one

I am working in R Markdown, primarily with R chunks, which I use to modify data frames. Now that they are ready, a colleague gave me Python code to process some of the data. But when transitioning from an R chunk to a Python one, the environment changes and I do not know how to use the previous objects directly.
reticulate::repl_python()
biodata_file = women_personal_data
NameError: name 'women_personal_data' is not defined
Ideally, I would like not to have to save the files on my computer between R and Python, and then back at R again, to avoid accumulating files that are not completely clean yet (because I figured it could be a solution).
I tried this solution, but it seems not to work with data frames.
Thanks!
biodata_file = r.women_personal_data
The r. prefix makes Python fetch the variable from R, where it was called women_personal_data.
TIP: to come back to R, the variable is now called py$women_personal_data
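A minimal R Markdown sketch of the full round trip (assuming reticulate and pandas are installed; women_personal_data here is a stand-in data frame):
```{r}
library(reticulate)
women_personal_data <- data.frame(id = 1:3, age = c(24, 31, 45))
```
```{python}
biodata_file = r.women_personal_data      # arrives as a pandas DataFrame
biodata_file["age_months"] = biodata_file["age"] * 12
```
```{r}
py$biodata_file  # the modified data frame, back in R
```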

What are the commands for viewing a ".RData" file's data in RStudio?

I am trying to find out how I can see the data within a dataset with a .RData extension.
I tried view(); it gave me one object present in the dataset, but I know that this is a large dataset (over 300 MB) consisting of a very large list of names. I need to view all of its contents and have been unsuccessful so far.
Should I convert it into a CSV instead in order to view all of the contents? If yes, how can I do that using RStudio?
The cross-platform function is View. (Capitalization matters in R.) If you did:
obj <- load("filename.Rdata") # assuming the file exists in your working directory
Then type:
obj
You should see a printed listing of the names of the objects created (or possibly overwritten) in your global environment. The RStudio aspect of this question does not affect the result.
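If you would rather not touch your global environment, a small sketch (some_object is hypothetical): load the file into its own environment, list what it contains, then view or export individual objects:
e <- new.env()
load("filename.Rdata", envir = e)
ls(e)                           # names of every object the file contained
View(e$some_object)             # inspect one of them in RStudio
write.csv(e$some_object, "some_object.csv", row.names = FALSE)  # if it is a data frame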

Importing data from Excel to vector in R

I am a novice in R and I have been having some trouble trying to get R and Excel to cooperate.
I have written a code that makes it able to compare two vectors with each other and determine the differences between them:
data.x<-read.csv(file.choose(), header=T)
data.y<-read.csv(file.choose(), header=T)
newdata.x<-grep("DAG36|G379",data.x,value=TRUE,invert=TRUE)
newdata.x
newdata.y<-grep("DAG36|G379",data.y,value=TRUE,invert=TRUE)
newdata.y
setdiff(newdata.x,newdata.y)
setdiff(newdata.y,newdata.x)
The data I want to transfer from Excel to R is a long row of numbers placed as so:
“312334-2056”, “457689-0932”, “857384-9857”,….,
There are about 350 of these numbers placed in their own separate cell along a single row.
I used the formula = """" & A1 & """" to put double quotes around every number so that R would read it properly.
At first I tried to simply copy/paste the data directly into a vector in R, but it's as if R won’t read it as a single row of data and therefore splits it up.
I also tried to save the excel file as a CSV file but that didn’t work either.
Lastly I tried to open it directly in to R using the command:
data.x<- read.csv(file.choose(), header=T)
But as I type in: data.x and press enter it simply says:
<0 rows> (or 0-length row.names)
I simply can’t figure out what I’m doing wrong. Any help would be greatly appreciated.
It's hard to assess without a reproducible example, but you should be able to transpose the Excel file into a single column. Then import it using read_csv from the readr package. Take a look at the tidyverse package, which contains some great tools for importing and working with this type of data.
I use https://github.com/tidyverse/readxl/. It makes it easy to maintain formatting from Excel into type-safe tibbles.
If you can share some sample data a working solution can be generated.
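As an illustration, a sketch with readxl (the file name ids.xlsx is hypothetical): read the single row without treating it as a header, then flatten it into a character vector. This also makes the quote-wrapping formula in Excel unnecessary:
library(readxl)
raw <- read_excel("ids.xlsx", col_names = FALSE)  # one row of ~350 cells
data.x <- as.character(unlist(raw[1, ]))
length(data.x)  # should be about 350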

How to export a dataset to SPSS?

I want to export a dataset in the MASS package to SPSS for further investigation; specifically, the EuStockMarkets data set.
As described in http://www.statmethods.net/input/exportingdata.html, I did:
library(foreign)
write.foreign(EuStockMarkets, "c:/mydata.txt", "c:/mydata.sps", package="SPSS")
I got a text file, but the .sps file is not a valid SPSS file. I'm really looking for a way to export the dataset to something that SPSS can open.
As Thomas has mentioned in the comments, write.foreign doesn't generate native SPSS data files (.sav). What it does generate is the data in comma-delimited format (the .txt file) and a basic syntax file for reading that data into SPSS (the .sps file). The EuStockMarkets data object's class is multivariate time series (mts), so when it is exported the metadata is lost, and the resulting .sps file, lacking variable names, throws an error when you try to run it in SPSS. To get around this you can export it as a data frame instead:
write.foreign(as.data.frame(EuStockMarkets), "c:/mydata.txt", "c:/mydata.sps", package="SPSS")
Now you just need to open mydata.sps as a syntax file (NOT as a datafile) in SPSS and run it to read in the datafile.
Rather than exporting it, use the STATS GET R extension command. It will take a specified data frame from an R workspace/dataset and convert it into a Statistics dataset. You need the R Essentials for Statistics and the extension command, which are available via the SPSS Community site (www.ibm.com/developerworks/spssdevcentral).
I'm not trying to answer a question that has already been answered; I just think there is something to add for other users looking into this.
In your SPSS syntax window, you just need to find the first line of code and edit it. It should be something like this:
"file-name.txt"
You need to find the folder path where you're keeping your file:
"C:\Users\DELL\Google Drive\Folder-With-Your-File"
Then you just need to add this path to your file's name:
"C:\Users\DELL\Google Drive\Folder-With-Your-File\file-name.txt"
Otherwise SPSS will not recognize the .txt file.
Sorry if I'm repeating some information here, I just wanted to make it easier to understand.
I suppose that EuStockMarkets is a (labelled) data frame.
This should work and even keep the variable and value labels:
require(sjlabelled)
write_spss(EuStockMarkets, "mydata.sav")
Or you try rio:
rio::export(EuStockMarkets, "mydata.sav")
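Note that EuStockMarkets is actually an mts object rather than a data frame, so (as with write.foreign above) coercing it first is the safer bet:
rio::export(as.data.frame(EuStockMarkets), "mydata.sav")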

How to put datasets into an R package

I am creating my own R package and I was wondering what are the possible methods that I can use to add (time-series) datasets to my package. Here are the specifics:
I have created a package subdirectory called data and I am aware that this is the location where I should save the datasets that I want to add to my package. I am also cognizant of the fact that the files containing the data may be .rda, .txt, or .csv files.
Each series of data that I want to add to the package consists of a single column of numbers (eg. of the form 340 or 4.5) and each series of data differs in length.
So far, I have saved all of the datasets into a .txt file. I have also successfully loaded the data using the data() function. Problem not solved, however.
The problem is that each series of data loads as a factor, except for the longest series. The series that load as factors contain missing values (of the form '.'). I had to add these missing values in order to make each column of data the same length. I tried saving the data as unequal columns, but I received an error message after calling data().
A consequence of adding missing values to get the data to load is that once the data is loaded, I need to remove the NA's in order to get on with my analysis of the data! So, this clearly is not a good way of doing things.
Ideally (I suppose), I would like the data to load as numeric vectors or as a list. In this way, I wouldn't need the NA's appended to the end of each series.
How do I solve this problem? Should I save all of the data into one single file? If so, in what format should I do it? Perhaps I should save the datasets into a number of files? Again, in which format? What is the best practical way of doing this? Any tips would be greatly appreciated.
I'm not sure if I understood your question correctly, but if you edit your data in your favorite format and save it with
save(myediteddata, file="data.rda")
the data should load exactly the way you saw it in R.
To lazy-load all files in the data directory, you should add
LazyData: true
to your DESCRIPTION file in your package.
If this doesn't help, you could post one of your files and a print of the format you want; this will help us to help you ;)
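Since your series have unequal lengths, one option (a sketch with hypothetical names and values) is to store them as a named list of numeric vectors, which sidesteps the NA padding entirely:
my_series <- list(
  series_a = c(340, 342.5, 351),
  series_b = c(4.5, 4.7)
)
save(my_series, file = "data/my_series.rda")
After the package is loaded, my_series$series_a then gives you the first series with no NAs to strip.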
In addition to saving as .rda files, you could also choose to load them as numeric with:
read.table( ... , colClasses="numeric")
Or as non-factor-text:
read.table( ..., as.is=TRUE) # which does pretty much the same as stringsAsFactors=FALSE
read.table( ..., colClasses="character")
It also appears that the data() function would accept these arguments, since it is documented to be a simple wrapper for read.table(..., header=TRUE).
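A concrete sketch of that route (the file name is hypothetical), using na.strings to map the '.' placeholders from the question to NA while keeping the column numeric:
x <- read.table("series1.txt", header = TRUE,
                colClasses = "numeric", na.strings = ".")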
The preferred location for saving your data depends on its format.
As Hadley suggested:
If you want to store binary data and make it available to the user,
put it in data/. This is the best place to put example datasets.
If you want to store parsed data, but not make it available to the
user, put it in R/sysdata.rda. This is the best place to put data
that your functions need.
If you want to store raw data, put it in inst/extdata.
I suggest you have a look at the linked chapter as it goes into detail about working with data when developing R packages.
You'll need to create the data file and include it in the R package, and you may want to also document it. Here's how to do both.
Create the data file and include it in R package
Create a directory inside the package called /data and place any data in it. Use only .rda and .RData files.
When creating the rda/RData file from an R object, make sure the R object is named what you want it to be named when it's used in the package and use save() to create it. Example:
save(river_fish, file = "data/river_fish.rda", version = 2)
Add this on a new line in the file called DESCRIPTION:
LazyData: true
Documenting the dataset
Document the dataset by placing a string with the dataset name after the documentation:
#' This is data to be included in my package
#'
#' @author My Name \email{blahblah@roxygen.org}
#' @references \url{data_blah.com}
"data-name"
Here and here are some nice examples from dplyr.
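A slightly fuller documentation sketch in the same style (all field contents hypothetical), reusing the river_fish example from above:
#' Fish counts from river surveys
#'
#' @format A data frame with 2 variables:
#' \describe{
#'   \item{river}{name of the river surveyed}
#'   \item{fish}{number of fish counted}
#' }
#' @source Hypothetical field survey
"river_fish"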
Notes
To access the data in the package, run river_fish or whatever the name of the dataset is. Nothing more is needed.
Using version = 2 when calling save() ensures your data object is available to older R versions (i.e. prior to 3.5.0); it will prevent this warning:
WARNING: Added dependency on R >= 3.5.0 because serialized objects in serialize/load version 3 cannot be read in older versions of R.
There is no need to use load() in the R package (just call the object directly instead; e.g. river_fish will be enough to yield the data from data/river_fish.rda), but in the event you do wish to load an rda/RData file for some reason (e.g. playing around or testing), this will do it:
load("data/river_fish.rda")
