read_excel is reading the data slowly - r

The read_excel function from the readxl package is reading Excel files with a large number of columns very slowly. The odd part is that before reinstalling the OS I didn't have this problem.
For example, reading and processing 10 files used to take about 10 seconds; now it takes 20-30 s to read a single file.
I also tried installing the same version of R. Does anyone know what the problem could be?

Related

Importing huge SAS database to Rstudio

I have a huge dataset in SAS (about 14 GB) that I am trying to import into RStudio for some analysis. However, nothing works for the import. I tried 'haven', 'sas7bdat' and 'foreign', but nothing. I converted the file to CSV (it became about 7 GB) and tried read.csv, fread... but nothing again.
RStudio takes a huge amount of time to process the file and in the end says that it doesn't work (something about not being able to allocate space for a vector of x size, around 60 MB depending on the method used).
Does anyone have any ideas about how to solve this situation?

Read a sample from sas7bdat file in R

I have a sas7bdat file of around 80 GB. Since my PC has only 4 GB of memory, the only option I can see is to read some of its rows. I tried the sas7bdat package in R, which gives the error "big endian files are not supported".
The read_sas() function in haven seems to work, but it only supports selecting specific columns, while I need to read an arbitrary subset of rows with all columns. For example, it would be fine if I could read 1% of the data just to understand it.
Is there any way to do this? Is there a package that can?
Later on I plan to read the file in parts, dividing it into 100 or so sections.
If you are on Windows you can use the SAS Universal Viewer, which is free, to export the dataset to CSV. Then you can import the CSV into R in more manageable chunks, for example along the lines sketched below.
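A minimal sketch of the chunked import, assuming the exported file is called export.csv and reading 100,000 rows at a time (both values are placeholders):
hdr <- names(read.csv("export.csv", nrows = 1))       # grab the column names once
chunk <- read.csv("export.csv", skip = 1, nrows = 100000,
                  header = FALSE, col.names = hdr)    # first 100,000 data rows
# increase skip by the chunk size on each pass to walk through the file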

Working with excel file in R

I still struggle every time I deal with Excel files in R.
What is the best way to do the following?
1- Import an Excel file into R as a "whole workbook" and be able to do analysis on any sheet in the workbook? If you are thinking of XLConnect, please bear in mind the "out of memory" problem with Java. I have files of over 30 MB, and dealing with the Java memory problem every time consumes more time (running -Xmx does not work for me; see the sketch after this question).
2- Not miss any data from any Excel sheet? Saving the file as CSV reports that some sheets are "out of range", i.e. beyond 65,536 rows and 256 columns. It also cannot deal with some formulas.
3- Not have to import each sheet separately? Importing sheets into SPSS, STATA or EViews, saving into their extension, and then working with the output file in R works fine most of the time. However, this method has two major problems: you have to have the software installed on the machine, and it imports only one sheet at a time. If I have over 30 sheets, it becomes very time consuming.
This might be an ongoing question that has been answered many, many times; however, each answer solves a part of the problem, not the whole issue. It is like putting out the fire rather than strategically solving the problem.
I am on Mac OS 10.10 with R 3.1.1
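A common reason -Xmx "does not work" is that the Java heap must be set before rJava is loaded, so the option has to come first in the script. A minimal sketch, with the 4 GB value purely as an example:
options(java.parameters = "-Xmx4g")  # must run before any rJava-based package is loaded
library(XLConnect)                   # the JVM now starts with the larger heap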
I have tried a few packages to open Excel files, and openxlsx is definitely the best route. It is much faster and more stable than the others. The function is openxlsx::read.xlsx. My advice is to use it to read a whole sheet and then work with the data within R, rather than reading parts of the sheet several times. I have used it a lot to open large Excel files (8,000+ columns by 1,000+ rows), and it has always worked well. I use the xlsx package to write to Excel, but it had numerous memory issues when reading (that's why I moved to openxlsx). A sketch of basic usage follows.
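A minimal sketch of that workflow (the file name is a placeholder):
library(openxlsx)
sheets <- getSheetNames("big_file.xlsx")        # names of all sheets in the workbook
first  <- read.xlsx("big_file.xlsx", sheet = 1) # read one whole sheet at once
all    <- lapply(sheets, function(s) read.xlsx("big_file.xlsx", sheet = s))  # every sheet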
Add-in
On a side note, if you want to use R with Excel, you sometimes need to execute VBA code from R. I found the procedure quite difficult to achieve. I fully documented the proper way of doing it in a previous Stack Overflow question: Apply VBA from R.
Consider using the xlsx package. It has methods for dealing with Excel files and worksheets. Your question is quite broad, but I think this can serve as an example:
library(xlsx)
wb <- loadWorkbook('r_test.xlsx')   # open the workbook
sheets <- getSheets(wb)             # list of all worksheets
sheet <- sheets[[1]]                # take the first one
df <- readColumns(sheet,
                  startColumn = 1, endColumn = 3,
                  startRow = 1, endRow = 6)   # read a rectangular block
df
## id name x_value
##1 1 A 10
##2 2 B 15
##3 3 C 20
##4 4 D 13
##5 5 E 17
As for the memory issue, I think you should check out the ff package:
The ff package provides data structures that are stored on disk but behave (almost) as if they were in RAM by transparently mapping only a section (pagesize) in main memory.
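A minimal sketch of that approach, assuming the data has been exported to a file called big_data.csv:
library(ff)
big <- read.csv.ffdf(file = "big_data.csv", header = TRUE)  # stored on disk, not in RAM
dim(big)               # dimensions work as usual
slice <- big[1:1000, ] # pull just a small slice into memory as a data.frame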
Another option (though it may be overkill) would be to load the data into a real database and work through database connections. If you are dealing with really big datasets, a database may be the best approach.
Some options would be:
The RSQLite package
If you can load your data into an SQLite database, you can use this package to connect directly to that database and handle the data there. That would "split" the workload between R and the database engine. SQLite is quite easy to use and (almost) "config free", and each SQLite database is stored in a single file.
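A minimal sketch of that pattern via the DBI interface (the file, table and column names are placeholders):
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "mydata.sqlite")  # one file = one database
dbWriteTable(con, "measurements", my_data)            # load the data once
res <- dbGetQuery(con, "SELECT grp, AVG(value) AS mean_value
                        FROM measurements GROUP BY grp")  # let SQLite do the aggregation
dbDisconnect(con)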
The RMySQL package
Even better than the option above: MySQL is great for storing large datasets. However, you will need to install and configure a MySQL server on your computer.
Remember: if you work with R and a database, delegate as much of the heavy workload as possible to the database (e.g. data filtering, aggregation), and use R to get the final results.

How to read and rbind large CSV file efficiently?

I have 20 large CSV files (100-150 MB each) that I would like to load into R, rbind into one large data set, and then analyse. Reading each CSV file uses only one core and takes about 7 minutes. I am on 64-bit, 8-core Linux with 16 GB RAM, so resources should not be an issue.
Is there any way to perform this process more efficiently? I am also open to other (open-source, Linux) software (for example, binding the CSV files in a different program and then loading the result into R) or anything else that could make this process faster.
Thank you very much.
Maybe you want the bash utility cat, which concatenates files line by line; you could stack the CSVs in the shell and load the single result into R.
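If you would rather stay in R, a common pattern uses data.table (the directory name is a placeholder):
library(data.table)
files <- list.files("csv_dir", pattern = "\\.csv$", full.names = TRUE)  # all CSVs in the folder
big <- rbindlist(lapply(files, fread))  # fread parses far faster than read.csv; rbindlist stacks the pieces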

Importing Excel files into R, xlsx or xls

Please can someone help me with the best way to import an Excel 2007 (.xlsx) file into R? I have tried several methods and none seems to work. I have upgraded to 2.13.1, Windows XP, xlsx 0.3.0, and I don't know why the error keeps coming up. I tried:
AB<-read.xlsx("C:/AB_DNA_Tag_Numbers.xlsx","DNA_Tag_Numbers")
OR
AB<-read.xlsx("C:/AB_DNA_Tag_Numbers.xlsx",1)
but I get the error:
Error in .jnew("java/io/FileInputStream", file) :
java.io.FileNotFoundException: C:\AB_DNA_Tag_Numbers.xlsx (The system cannot find the file specified)
Thank you.
For a solution that is free of fiddly external dependencies*, there is now readxl:
The readxl package makes it easy to get data out of Excel and into R.
Compared to many of the existing packages (e.g. gdata, xlsx,
xlsReadWrite) readxl has no external dependencies so it's easy to
install and use on all operating systems. It is designed to work with
tabular data stored in a single sheet.
Readxl supports both the legacy .xls format and the modern xml-based
.xlsx format. .xls support is made possible with the libxls C library,
which abstracts away many of the complexities of the underlying binary
format. To parse .xlsx, we use the RapidXML C++ library.
It can be installed like so:
install.packages("readxl") # CRAN version
or
devtools::install_github("hadley/readxl") # development version
Usage
library(readxl)
# read_excel reads both xls and xlsx files
read_excel("my-old-spreadsheet.xls")
read_excel("my-new-spreadsheet.xlsx")
# Specify sheet with a number or name
read_excel("my-spreadsheet.xls", sheet = "data")
read_excel("my-spreadsheet.xls", sheet = 2)
# If NAs are represented by something other than blank cells,
# set the na argument
read_excel("my-spreadsheet.xls", na = "NA")
* Not strictly true: it requires the Rcpp package, which in turn requires Rtools (for Windows) or Xcode (for OS X), which are dependencies external to R. But they don't require any fiddling with paths, etc., so that's an advantage over Java and Perl dependencies.
Update: there is now the rexcel package. This promises to get Excel formatting, functions and many other kinds of information from the Excel file into R.
You may also want to try the XLConnect package. I've had better luck with it than xlsx (plus it can read .xls files too).
library(XLConnect)
theData <- readWorksheet(loadWorkbook("C:/AB_DNA_Tag_Numbers.xlsx"), sheet = 1)
Also, if you are having trouble with your file not being found, try selecting it with file.choose().
I would definitely try the read.xls function in the gdata package, which is considerably more mature than the xlsx package. It may require Perl ...
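A minimal sketch of that approach (the sheet name is taken from the question above):
library(gdata)  # needs a Perl interpreter available on the system
AB <- read.xls("C:/AB_DNA_Tag_Numbers.xlsx", sheet = "DNA_Tag_Numbers")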
Update
As the answer below is now somewhat outdated, I'd just draw attention to the readxl package. If the Excel sheet is well formatted/laid out then I would now use readxl to read from the workbook. If sheets are poorly formatted/laid out then I would still export to CSV and then handle the problems in R, either via read.csv() or plain old readLines().
Original
My preferred way is to save individual Excel sheets as comma-separated value (CSV) files. On Windows, these files are associated with Excel, so you don't lose the double-click-open-in-Excel "feature".
CSV files can be read into R using read.csv(), or, if you are in a location or using a computer set up with some European settings (where , is used as the decimal place), using read.csv2().
These functions have sensible defaults that make reading appropriately formatted files simple. Just keep any labels for samples or variables in the first row or column.
Added benefits of storing files as CSV are that, since the files are plain text, they can be passed around very easily and you can be confident they will open anywhere; one doesn't need Excel to look at or edit the data.
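A minimal sketch (the file name is a placeholder):
dat <- read.csv("my-sheet.csv")    # "." as decimal mark, "," as field separator
dat <- read.csv2("my-sheet.csv")   # "," as decimal mark, ";" as separator (European settings)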
Example 2012:
library("xlsx")
FirstTable <- read.xlsx("MyExcelFile.xlsx", 1 , stringsAsFactors=F)
SecondTable <- read.xlsx("MyExcelFile.xlsx", 2 , stringsAsFactors=F)
I would try the 'xlsx' package, for it is easy to handle and seems mature enough. It worked fine for me and did not need any extras like Perl or whatever.
Example 2015:
library("readxl")
FirstTable <- read_excel("MyExcelFile.xlsx", 1)
SecondTable <- read_excel("MyExcelFile.xlsx", 2)
Nowadays I use readxl and have had good experiences with it:
no extra stuff needed
good performance
This new package looks nice: http://cran.r-project.org/web/packages/openxlsx/openxlsx.pdf
It doesn't require rJava and uses 'Rcpp' for speed.
If you are running into the same problem and R is giving you the error -- could not find function ".jnew" -- just install the rJava library. Or, if you already have it, just run library(rJava). That should solve the problem.
Also, it should be clear to everybody that CSV and txt files are easier to work with, but life is not easy and sometimes you just have to open an xlsx.
For me the openxlsx package worked in the easiest way.
install.packages("openxlsx")
library(openxlsx)
rawData <- read.xlsx("your.xlsx")
I recently discovered Schaun Wheeler's function for importing Excel files into R, after realising that the xlsx package hadn't been updated for R 3.1.0.
https://gist.github.com/schaunwheeler/5825002
The file name needs to have the ".xlsx" extension, and the file can't be open when you run the function.
This function is really useful for accessing other people's work. The main advantages over using the read.csv function are when:
Importing multiple excel files
Importing large files
Files that are updated regularly
Using the read.csv function requires manually opening and saving each Excel document, which is time consuming and very boring. Using Schaun's function to automate the workflow is therefore a massive help.
Big props to Schaun for this solution.
What's your operating system? What version of R are you running: 32-bit or 64-bit? What version of Java do you have installed?
I had a similar error when I first started using the read.xlsx() function and discovered that my issue (which may or may not be related to yours; at a minimum, this response should be viewed as "try this, too") was related to the incompatibility of the xlsx package with 64-bit Java. I'm fairly certain that the xlsx package requires 32-bit Java.
Use 32-bit R and make sure that 32-bit Java is installed. This may address your issue.
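A quick way to check which architecture your current R session uses (standard base R lookups):
R.version$arch            # e.g. "i386" for 32-bit, "x86_64" for 64-bit
.Machine$sizeof.pointer   # 4 on 32-bit builds, 8 on 64-bit builds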
You have checked that R is actually able to find the file, e.g. file.exists("C:/AB_DNA_Tag_Numbers.xlsx") ? – Ben Bolker Aug 14 '11 at 23:05
The comment above should have solved your problem:
require("xlsx")
read.xlsx("filepath/filename.xlsx",1)
should work fine after that.
I tried very hard with all the answers above; however, they did not actually help because I use a Mac. The rio library has an import function which can import basically any type of data file into RStudio, even files using languages other than English!
Try codes below:
library(rio)
AB <- import("C:/AB_DNA_Tag_Numbers.xlsx")
AB <- AB[, 1]  # keep only the first column
Hope this helps.
For more detailed reference: https://cran.r-project.org/web/packages/rio/vignettes/rio.html
You may be able to keep multiple tabs and more formatting information if you export to an OpenDocument Spreadsheet (.ods) file or an older Excel format and import it with an ODS reader or the Excel readers mentioned above.
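For the ODS route, one option is the readODS package; a minimal sketch, with the file name as a placeholder:
library(readODS)
dat <- read_ods("my-spreadsheet.ods", sheet = 1)  # read the first sheet of the .ods file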
As stated by many here, I am writing the same thing but with an additional point!
First we need to make sure that these two packages are installed in R:
"readxl"
"XLConnect"
To install and load these packages you can use the functions below:
install.packages(c("readxl", "XLConnect"))
library(XLConnect)
search()
search() will display the list of packages currently attached in your R session.
Now another catch: even though you have these two packages installed, you may still encounter a problem while reading an "xlsx" file, with an error like "more columns than column names".
To solve this issue you can simply re-save your Excel sheet ("xlsx") as
"CSV (Comma delimited)"
and your life will be super easy....
Have fun!!
Installing the xlsx package requires rJava and xlsxjars. Indirectly, they require the matching (32- or 64-bit) Java runtime environment on the system.
Pro of read.xlsx: the same package provides both read.xlsx and write.xlsx.
Con: very low speed.
As suggested above, the easy way is to save the file in .csv format from Excel.
Simple benchmark on a 5800x15 dataset (median times):
read.xlsx: >10,000 ms
read_xlsx: 70 ms
read.csv: 15 ms
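A sketch of how such a comparison could be reproduced with the microbenchmark package (the file names are placeholders; the timings above are the author's):
library(microbenchmark)
microbenchmark(
  xlsx   = xlsx::read.xlsx("bench.xlsx", sheetIndex = 1),  # Java-based reader
  readxl = readxl::read_xlsx("bench.xlsx"),                # C/C++-based reader
  csv    = read.csv("bench.csv"),                          # plain-text baseline
  times = 10
)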
