TL;DR: How do I get RStudio to import a CSV as a tibble exactly as Microsoft Excel does (RStudio for Mac version 1.3.959, Excel for Mac version 16.33, if that helps)? If this is not possible, or it should already behave the same, how do I read in a CSV file with no more than 8 columns and fill in blank values in rows so I can tidy it?
Long version:
I have a dozen CSV files (collected from archival animal tags) that are messy (inconsistent width, multiple blocks of data in one file) and need to be read in. For workflow reasons, I would like to take the raw data and bring it straight into R. The data has a consistent structure between files: a metadata block, a summary by day that is 6 columns wide, and 2 blocks of constant logging that are 2 columns wide. If you were to count the blank cells in each section, the dimensions would be:
Section         Width   Length
Metadata        8       37
Summary Block   7       N days
Block 1         2       N*72
Block 2         2       N*72
The last three blocks of data can be thousands of entries long. I am unable to get this data to load into R as anything other than a single 1x500,000+ dataframe. Using tag1 = read_csv('file', skip = 37) to just start with the data I want crashes R. It works with read.csv(), but that removes the metadata block that I would like to keep.
Attempting to read the file into Excel shows the correct format (width, length, etc.), but Excel will not load all of the data; it cuts off a good chunk of the last block. Reading the data in with a tabular reader like readxl::read_excel() presents the same issue.
Ultimately, I would like to either import the data as a nested tibble with these different sections, or, better yet, automate this process so it can read in an entire folder's worth of CSV files, automatically assign them to variables, and split them into sections. For now, however, I just want to get this data into a workable format intact, and I would appreciate any help you can give me.
Get the number of lines in the file, n, and from that derive N. Then read the blocks one by one. Use the same connection so each read starts off from where the prior one ended.
n <- length(count.fields("myfile", sep = ""))   # total number of lines in the file
N <- (n - 37) / (1 + 2 * 72)                    # number of days
con <- file("myfile", open = "r")
meta <- readLines(con, 37)                                  # metadata block
summary_block <- read.csv(con, header = FALSE, nrows = N)   # daily summary block
block1 <- read.csv(con, header = FALSE, nrows = N * 72)     # first logging block
block2 <- read.csv(con, header = FALSE, nrows = N * 72)     # second logging block
close(con)
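To handle an entire folder, as asked for above, one possible sketch is to wrap the block-by-block read in a function and lapply it over the files. The function name read_tag and the folder name tag_data are made up for illustration, and the layout (37 metadata lines followed by N + 2*N*72 data lines) is assumed to hold for every file:
read_tag <- function(path) {
  n <- length(count.fields(path, sep = ""))
  N <- (n - 37) / (1 + 2 * 72)
  con <- file(path, open = "r")
  on.exit(close(con))
  list(
    meta    = readLines(con, 37),
    summary = read.csv(con, header = FALSE, nrows = N),
    block1  = read.csv(con, header = FALSE, nrows = N * 72),
    block2  = read.csv(con, header = FALSE, nrows = N * 72)
  )
}
files <- list.files("tag_data", pattern = "\\.csv$", full.names = TRUE)
tags <- lapply(files, read_tag)    # one list element per file, rather than separate variables
names(tags) <- basename(files)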
Related
My problem is that write.csv - or something else in my work process - has increased my file size from 19 GB to more than 60 GB. I say more than 60 GB because the saving process was interrupted by memory problems. This happened when I added 250,000 rows to my database of nearly 3 million rows. Now I have two problems: 1) how to overcome the issue above, and 2) how to read this huge data set and save it at a coherent size. For those not familiar with storing tweets: my file is larger than it should be, even though I have been storing only 29 of the 91 standard columns that come with downloaded tweets.
Here is my process:
I'm downloading tweets using rtweet these days, in chunks of 250,000 per day. For each day, I load my old database using fread from data.table. Then I bind both data frames with rbind. Finally, I use write.csv to save my object. I repeat this process each time. Here is my code:
base <- fread("tweets.csv")
base$user_id <- as.character(base$user_id)
base$status_id <- as.character(base$status_id)
base$retweet_status_id <- as.character(base$retweet_status_id)
base$retweet_user_id <- as.character(base$mentions_user_id)
datos <- search_tweets2("keywords", n = 250000, include_rts = T, lang = "es", retryonratelimit = T)
datos <- datos[,c(1,2,3,4,5,6,12,14,30,31,32,48,49,50,51,54,55,56,57,61,62,73,74,75,78,79,83,84)]
datos$mentions_user_id <- as.character(datos$mentions_user_id)
datos$mentions_screen_name <- as.character(datos$mentions_screen_name)
datos$created_at <- as.character(datos$created_at)
datos$retweet_created_at <- as.character(datos$retweet_created_at)
datos$account_created_at <- as.character(datos$account_created_at)
base <- rbind(base,datos)
write.csv(base, "tweets.csv")
I should say that when I write the new file, I overwrite it; this is probably where the main problem with loading and overwriting lies, but I don't know. Otherwise, I've been reading and think the solution may be read.csv.sql, loading my database in small parts and saving it the right way. But read.csv.sql has problems with my number of columns. It says: "Error in connection_import_file(conn#ptr, name, value, sep, eol, skip) :
RS_sqlite_import: tweets.csv line 2 expected 29 columns of data but found 6".
I loaded 100 rows using read.csv("file.csv", nrows = 100) to check whether everything is still OK in my file, and it is.
Thank you in advance.
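One possible direction (a rough sketch, not part of the workflow above): append only each day's new rows with data.table::fwrite(append = TRUE) instead of re-writing the whole accumulated file; fwrite also skips the row-names column that write.csv adds by default.
library(data.table)
# Sketch only: assumes datos holds just the new day's rows, in the same column
# order as the existing tweets.csv, after the as.character() conversions above
if (!file.exists("tweets.csv")) {
  fwrite(datos, "tweets.csv")
} else {
  fwrite(datos, "tweets.csv", append = TRUE)   # the header is not repeated on append
}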
I'm currently trying to read a 20GB file. I only need 3 columns of that file.
My problem is that I'm limited to 16 GB of RAM. I tried using readr and processing the data in chunks with read_csv_chunked, and read_csv with the skip parameter, but both exceeded my RAM limits.
Even the read_csv(file, ..., skip = 10000000, nrow = 1) call that reads one line uses up all my RAM.
My question now is, how can I read this file? Is there a way to read chunks of the file without using that much ram?
The LaF package can read in ASCII data in chunks. It can be used directly, or, if you are using dplyr, the chunked package builds on it to provide a dplyr-style interface.
The readr package has read_csv_chunked and related functions.
The section of this web page entitled The Loop as well as subsequent sections of that page describes how to do chunked reads with base R.
It may be that if you remove all but the first three columns that it will be small enough to just read it in and process in one go.
vroom in the vroom package can read in files very quickly and also has the ability to read in just the columns named in the col_select= argument, which may make the data small enough to read in one go.
fread in the data.table package is a fast reading function that also supports a select= argument which can select only specified columns.
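For example, minimal sketches of both approaches (the column names col1, col2 and col3 are assumptions; adjust to your file):
library(vroom)
df <- vroom("myfile", col_select = c(col1, col2, col3))
library(data.table)
DT <- fread("myfile", select = c("col1", "col2", "col3"))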
read.csv.sql in the sqldf (also see github page) package can read a file larger than R can handle into a temporary external SQLite database which it creates for you and removes afterwards and reads the result of the SQL statement given into R. If the first three columns are named col1, col2 and col3 then try the code below. See ?read.csv.sql and ?sqldf for the remaining arguments which will depend on your file.
library(sqldf)
DF <- read.csv.sql("myfile", "select col1, col2, col3 from file",
dbname = tempfile(), ...)
read.table and read.csv in base R have a colClasses= argument which takes a vector of column classes. If the file has nc columns then use colClasses = rep(c(NA, "NULL"), c(3, nc-3)) to read only the first 3 columns.
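For example, a minimal sketch (assuming the file has a header row; reading a single row first is a cheap way to get nc):
# "NULL" drops a column, NA lets read.csv guess the class
nc <- ncol(read.csv("myfile", nrows = 1))
DF <- read.csv("myfile", colClasses = rep(c(NA, "NULL"), c(3, nc - 3)))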
Another approach is to pre-process the file using cut, sed or awk (available natively in UNIX and in the Rtools bin directory on Windows) or any of a number of free command line utilities such as csvfix outside of R to remove all but the first three columns and then see if that makes it small enough to read in one go.
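The same idea can also be driven from within R via a pipe, assuming a comma-separated file and that cut is available on the PATH:
# stream only the first three fields of the file into read.csv
DF <- read.csv(pipe("cut -d, -f1-3 myfile"))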
Also check out the High Performance Computing task view.
We can try something like this, first a small example csv:
X = data.frame(id = 1:1e5, matrix(runif(1e6), ncol = 10))
write.csv(X, "test.csv", quote = F, row.names = FALSE)
You can use the nrows argument of read.csv: instead of providing a file, you provide a connection, and you store the results inside a list, for example:
data = vector("list",200)
con = file("test.csv","r")
data[[1]] = read.csv(con, nrows=1000)
dim(data[[1]])
COLS = colnames(data[[1]])
data[[1]] = data[[1]][,1:3]
head(data[[1]])
id X1 X2 X3
1 1 0.13870273 0.4480100 0.41655108
2 2 0.82249489 0.1227274 0.27173937
3 3 0.78684815 0.9125520 0.08783347
4 4 0.23481987 0.7643155 0.59345660
5 5 0.55759721 0.6009626 0.08112619
6 6 0.04274501 0.7234665 0.60290296
In the above, we read the first chunk, collected the colnames and subsetted. If you carry on reading through the connection, the headers will be missing, and we need to specify that:
for(i in 2:200){
  data[[i]] = read.csv(con, nrows = 1000, col.names = COLS, header = FALSE)[, 1:3]
}
close(con)
Finally, we bind all of those into a data.frame:
data = do.call(rbind,data)
all.equal(data[,1:3],X[,1:3])
[1] TRUE
You can see that I specified a much larger list than required; this is to show that if you don't know how long the file is, you can specify something larger and it should still work. This is a bit better than writing a while loop.
So we wrap it into a function, specifying the file, the number of rows to read in one go, the number of times to read, and the column names (or positions) to subset:
read_chunkcsv = function(file, rows_to_read, ntimes, col_subset){
  data = vector("list", ntimes)
  con = file(file, "r")
  # first chunk: read the header and record the column names
  data[[1]] = read.csv(con, nrows = rows_to_read)
  COLS = colnames(data[[1]])
  data[[1]] = data[[1]][, col_subset]
  # remaining chunks: no header in the stream, so supply the column names
  for(i in 2:ntimes){
    data[[i]] = read.csv(con, nrows = rows_to_read,
                         col.names = COLS, header = FALSE)[, col_subset]
  }
  close(con)
  return(do.call(rbind, data))
}
all.equal(X[,1:3],
read_chunkcsv("test.csv",rows_to_read=10000,ntimes=10,1:3))
Similar to How can you read a CSV file in R with different number of columns, I have some complex CSV files. Mine are from SAP BusinessObjects and present challenges different from those of the quoted question. I want to automate the capture of an arbitrary number of datasets held in one CSV file. There are many CSV files, but let's start with one of them.
Given: One CSV file containing several flat tables.
Wanted: Several dataframes or other structure holding all data (S4?)
The method so far:
get line numbers of header rows by counting the number of columns per line
get headers by reading each line index held in the vector described above
read each dataset by calculating skip and nrows from the gap between consecutive header line indices
give the read data column names from the read header
I need help getting on the right track to avoid loops and make the code more readable/compact when reading headers and datasets.
These CSVs are formatted as normal CSVs, except that they contain a more or less arbitrary number of subtables. The structure is different for each dataset I export. In the current example, suppose there are five tables included in the CSV.
In order to give you an idea, here is some fictitious sample data with line numbers. Separators and quotes have been stripped:
1: n, Name, Species, Description, Classification
2: 90, Mickey, Mouse, Big ears, rat
3: 45, Minnie, Mouse, Big bow, rat
...
16835: Code, Species
16836: RT, rat
...
22673: n, Code, Country
22674: 1, RT, Murica
...
33211: Activity, Code, Descriptor
33212: running, RU, senseless activity
...
34749: Last update
34750: 2017/05/09 02:09:14
There are a number of ways going about reading each data set. What I have come up with so far:
filepath <- file.path(paste0(Sys.getenv("USERPROFILE"), "\\SAMPLE.CSV"))
# Make a vector with column number per line
fieldVector <- utils::count.fields(filepath, sep = ",", quote = "\"")
# Make a vector with unique number of fields in file
nFields <- base::unique(fieldVector)
# Make a vector with indices for position of new dataset
iHeaders <- base::match(nFields, fieldVector)
With this, I can do things like:
header <- utils::read.csv2(filepath, header = FALSE, sep = ",", quote = "\"", skip = iHeaders[4] - 1, nrows = 1)
data <- utils::read.csv2(filepath, header = FALSE, sep = ",", quote = "\"", skip = iHeaders[4], nrows = iHeaders[5] - iHeaders[4] - 1)
names(data) <- unlist(header)
As in the intro of this post, I have made a couple of functions which make it easier to get headers for each dataset:
Headers <- GetHeaders(filepath, iHeaders)
colnames(data) <- Headers[[4]]
I have two functions now - one is GetHeader, which captures one line from the file with utils::read.csv2 while ensuring safe header names (no æøå, %, etc.).
The other returns a list of string vectors holding all headers:
GetHeaders <- function(filepath, linenums) {
  # init an empty list of length(linenums)
  l.headers <- vector(mode = "list", length = length(linenums))
  for (i in seq_along(linenums)) {
    # read.csv2(filepath, skip = linenums[i] - 1, nrows = 1)
    l.headers[[i]] <- GetHeader(filepath, linenums[i])
  }
  l.headers
}
What I struggle with is how to read in all possible datasets in one go. The last set in particular is hard to wrap my head around if I write a common function, because I only know the line number of its header, not the number of lines of data that follow.
Also, what is the best data structure for holding data like this? The data in the subtables are all related to each other (they can be used to normalize parts of the data). I understand that I must do manual work for each CSV I read, but as I have to read TONS of these files, some common functions to structure them in a predictable manner on each pass would be excellent.
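To make it concrete, here is the rough (untested) direction I am imagining for reading everything in one go, treating the end of the file as one extra boundary and assuming the header lines appear in file order:
# fieldVector, iHeaders, Headers and filepath come from the code above
nLines <- length(fieldVector)
bounds <- c(iHeaders, nLines + 1)
datasets <- lapply(seq_along(iHeaders), function(i) {
  block <- utils::read.csv2(filepath, header = FALSE, sep = ",", quote = "\"",
                            skip = bounds[i], nrows = bounds[i + 1] - bounds[i] - 1)
  names(block) <- Headers[[i]]
  block
})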
Before you answer, please keep in mind that, no, using a different export format is not an option.
Thank you so much for any pointers. I am a beginner in R and haven't completely wrapped my head around all possible solutions in this particular domain.
I am running into a big problem with importing data into R. The original dataset is over 5 GB, which there is no way I can read on my laptop with only 4 GB of RAM in total. There is an unknown number of rows in the dataset (at least thousands of rows). I was wondering if I could select, say, the first 2000 rows to load into R so that I can still fit the data into my working memory?
As Scott mentioned, you can limit the number of rows read from a text file with the nrows argument to read.table (and its variants like read.csv).
You can use this in conjunction with the skip argument to read later chunks in the dataset.
my_file <- "my file.csv"
chunk <- 2000
first <- read.csv(my_file, nrows = chunk)
# for later chunks, also skip the header line and reuse the column names
second <- read.csv(my_file, nrows = chunk, skip = 1 + chunk, header = FALSE, col.names = names(first))
third <- read.csv(my_file, nrows = chunk, skip = 1 + 2 * chunk, header = FALSE, col.names = names(first))
You may also want to read the "Large memory and out-of-memory data" section of the high-performance computing task view.
I'm trying to input a large tab-delimited file (around 2GB) using the fread function in package data.table. However, because it's so large, it doesn't fit completely in memory. I tried to input it in chunks by using the skip and nrow arguments such as:
chunk.size = 1e6
done = FALSE
chunk = 1
while(!done)
{
  temp = fread("myfile.txt", skip = (chunk - 1) * chunk.size, nrow = chunk.size - 1)
  # do something to temp
  chunk = chunk + 1
  if(nrow(temp) < 2) done = TRUE
}
In the case above, I'm reading in 1 million rows at a time, performing a calculation on them, and then getting the next million, etc. The problem with this code is that after every chunk is retrieved, fread needs to start scanning the file from the very beginning since after every loop iteration, skip increases by a million. As a result, after every chunk, fread takes longer and longer to actually get to the next chunk making this very inefficient.
Is there a way to tell fread to pause every say 1 million lines, and then continue reading from that point on without having to restart at the beginning? Any solutions, or should this be a new feature request?
You should use the LaF package. It introduces a sort of pointer into your data, thus avoiding the (for very large data) annoying behaviour of reading the whole file. As far as I understand it, fread() in the data.table package needs to know the total number of rows, which takes time for GB-scale data.
Using the pointer in LaF you can go to whichever line(s) you want, read a chunk of data that you can apply your function to, and then move on to the next chunk. On my small PC I ran through a 25 GB csv file in steps of 10e6 lines and extracted the roughly 5e6 observations I needed - each 10e6-line chunk took 30 seconds.
UPDATE:
library('LaF')
huge_file <- 'C:/datasets/protein.links.v9.1.txt'
#First detect a data model for your file:
model <- detect_dm_csv(huge_file, sep=" ", header=TRUE)
Then create a connection to your file using the model:
df.laf <- laf_open(model)
Once that is done you can do all sorts of things without needing to know the size of the file, as you would with data.table. For instance, place the pointer at line 100e6 and read 1e6 lines of data from there:
goto(df.laf, 100e6)
data <- next_block(df.laf,nrows=1e6)
Now data contains 1e6 lines of your CSV file (starting from line 100e6).
You can read in chunks of data (size depending on your memory) and only keep what you need. E.g. the huge_file in my example points to a file with all known protein sequences and has a size of >27 GB - way too big for my PC. To get only the human sequences I filtered using the organism id, which is 9606 for humans and should appear at the start of the variable protein1. A dirty way is to put it into a simple for-loop and just read one data chunk at a time:
library('dplyr')
library('stringr')
res <- df.laf[1, ][0, ]   # empty data frame with the right columns
for(i in 1:10){
  raw <- next_block(df.laf, nrows = 100e6) %>%
    filter(str_detect(protein1, "^9606\\."))
  res <- rbind(res, raw)
}
Now res contains the filtered human data. But better - and for more complex operations, e.g. calculations on the data on-the-fly - the function process_blocks() takes a function as an argument, so inside that function you can do whatever you want with each piece of data. Read the documentation.
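For example, a rough sketch of how process_blocks() could replace the loop above. The calling convention assumed here (each call gets the current block plus the previous result, with a final call on an empty block) is from memory, so check ?process_blocks:
count_human <- function(block, result) {
  if (is.null(result)) result <- 0              # assumed: no previous result on the first call
  if (nrow(block) == 0) return(result)          # assumed: final call passes an empty block
  result + sum(grepl("^9606\\.", block$protein1))
}
n_human <- process_blocks(df.laf, count_human)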
You can use readr's read_*_chunked functions to read in data and, e.g., filter it chunkwise. For example:
# Cars with 3 gears
f <- function(x, pos) subset(x, gear == 3)
read_csv_chunked(readr_example("mtcars.csv"), DataFrameCallback$new(f), chunk_size = 5)
A related option is the chunked package. Here is an example with a 3.5 GB text file:
library(chunked)
library(tidyverse)
# I want to look at the daily page views of Wikipedia articles
# before 2015... I can get zipped log files
# from here: https://dumps.wikimedia.org/other/pagecounts-ez/merged/2012/2012-12/
# I get bz file, unzip to get this:
my_file <- 'pagecounts-2012-12-14/pagecounts-2012-12-14'
# How big is my file?
print(paste(round(file.info(my_file)$size / 2^30,3), 'gigabytes'))
# [1] "3.493 gigabytes" too big to open in Notepad++ !
# But can read with 010 Editor
# look at the top of the file
readLines(my_file, n = 100)
# to find where the content starts, vary the skip value,
read.table(my_file, nrows = 10, skip = 25)
This is where we start working in chunks of the file, we can use most dplyr verbs in the usual way:
# Let the chunked pkg work its magic! We only want the lines containing
# "Gun_control". The main challenge here was identifying the column
# header
df <-
read_chunkwise(my_file,
chunk_size=5000,
skip = 30,
format = "table",
header = TRUE) %>%
filter(stringr::str_detect(De.mw.De.5.J3M1O1, "Gun_control"))
# this line does the evaluation,
# and takes a few moments...
system.time(out <- collect(df))
And here we can work on the output as usual, since it's much smaller than the input file:
# clean up the output to separate into cols,
# and get the number of page views as a numeric
out_df <-
out %>%
separate(De.mw.De.5.J3M1O1,
into = str_glue("V{1:4}"),
sep = " ") %>%
mutate(V3 = as.numeric(V3))
head(out_df)
V1 V2 V3
1 en.z Gun_control 7961
2 en.z Category:Gun_control_advocacy_groups_in_the_United_States 1396
3 en.z Gun_control_policy_of_the_Clinton_Administration 223
4 en.z Category:Gun_control_advocates 80
5 en.z Gun_control_in_the_United_Kingdom 68
6 en.z Gun_control_in_america 59
V4
1 A34B55C32D38E32F32G32H20I22J9K12L10M9N15O34P38Q37R83S197T1207U1643V1523W1528X1319
2 B1C5D2E1F3H3J1O1P3Q9R9S23T197U327V245W271X295
3 A3B2C4D2E3F3G1J3K1L1O3P2Q2R4S2T24U39V41W43X40
4 D2H1M1S4T8U22V10W18X14
5 B1C1S1T11U12V13W16X13
6 B1H1M1N2P1S1T6U5V17W12X12
fread() can definitely help you read the data by chunks
The mistake you have made in your code is that you should keep nrows constant while you change the skip parameter during the loop.
Something like this is what I wrote for my data:
data = list()
for (i in 0:20){
  # keep nrows fixed; for later chunks, skip the header line plus the rows already read
  data[[i+1]] = fread("my_data.csv", nrows = 10000, select = c(1, 2:100),
                      header = (i == 0), skip = ifelse(i == 0, 0, 10000 * i + 1))
}
And you may insert the following code in your loop:
start_time <- Sys.time()
#####something!!!!
end_time <- Sys.time()
end_time - start_time
to check that each loop iteration takes a similar amount of time on average.
Then you can use another loop to combine your data by rows with R's default rbind function.
The sample code could be something like this:
new_data = data[[1]]
for (i in 1:20){
  new_data = rbind(new_data, data[[i+1]], use.names = FALSE)
}
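Equivalently, since fread returns data.tables, the list of chunks can be combined in a single call (assuming all chunks have the same number of columns):
new_data = data.table::rbindlist(data, use.names = FALSE)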
to unify into a large dataset.
Hope my answer helps with your question. I loaded 18 GB of data with 2k+ columns and 200k rows in about 8 minutes using this method.