Reading a CSV file, looping through the rows, using connections - r

So I have a large CSV file that my computer cannot handle opening without RStudio terminating.
To solve this I am trying to iterate through the rows of the file, doing my calculations on one row at a time, storing the result, and then moving on to the next row.
I can normally achieve this (e.g. on a smaller file) by simply reading and storing the whole CSV file within RStudio and running a simple for loop.
It is, however, that stored copy of the data that I am trying to avoid, hence I am trying to read one row of the CSV file at a time instead.
(I think that makes sense.)
This was suggested here.
I have managed to get my calculations to run quickly and correctly for the first row of my data file.
It is the looping over the rows that I am struggling with: I am trying to use a for loop (though it potentially should be a while/if statement), but I have nowhere for the "i" value to be drawn from within the loop. Part of my code is below:
con = file(FileName, "r")
for (row in 1:nrow(con)) {
  data <- read.csv(con, nrow = 1) # reading of file
  "insert calculations here"
}
So the "row" is not called upon so the loop only goes through once. I also have an issue with the "1:nrow(con)" as clearly the nrow(con) simply returns NULL
Any help with this would be great,
thanks.

read.csv() will generate an error if it tries to read past the end of the file. So you could do something like this:
con <- file(FileName, "rt")
repeat {
  data <- try(read.csv(con, nrows = 1, header = FALSE), silent = TRUE) # read one row
  if (inherits(data, "try-error")) break
  "insert calculations here"
}
close(con)
It will be really slow going one line at a time, but you can read larger batches if your calculation code supports that. I'd also recommend specifying the column types using colClasses in the read.csv() call, so that R doesn't guess them differently for different rows.
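For example (a sketch, assuming a hypothetical three-column file with an integer id, a character city and a numeric value):
# hypothetical column layout; adjust to the real file
data <- read.csv(con, nrows = 1, header = FALSE,
                 colClasses = c("integer", "character", "numeric"))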
Edited to add:
We've been told that there are 3000 columns of integers in the dataset. The first row only has partial header information. This code can deal with that:
n <- 1                           # desired batch size
col.names <- paste0("C", 1:3000) # desired column names
con <- file(FileName, "rt")
readLines(con, 1)                # skip over the bad header row
repeat {
  data <- try(read.csv(con, nrows = n, header = FALSE,
                       col.names = col.names,
                       colClasses = "integer"),
              silent = TRUE)     # read the next batch
  if (inherits(data, "try-error")) break
  "insert calculations here"
}
close(con)

You could read in your data in batches of, say, 10,000 rows at a time (change n to read as many rows as you want per batch), do your calculations, and then write the results to a new file, appending each batch to the end of that file.
Something like:
i <- 0
n <- 10000
# Read the header once so that every chunk keeps the right column names
col_names <- names(readr::read_csv('my_file.csv', n_max = 0))
while (TRUE) {
  # Skip the header row plus the i rows already processed
  df <- readr::read_csv('my_file.csv', col_names = col_names,
                        skip = i + 1, n_max = n)
  # If the number of rows in the file is divisible by n, the next pass
  # may return an empty data frame
  if (nrow(df) > 0) {
    # do your calculations
    # If you have performed calculations on df and want to save those results,
    # append the data frame to a file so prior results are not overwritten
    readr::write_csv(df, 'my_new_file.csv', append = TRUE)
  } else {
    break
  }
  # Stop after the last (partial) chunk, otherwise move on by n rows
  if (nrow(df) < n) {
    break
  } else {
    i <- i + n
  }
}
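Note that write_csv() with append = TRUE does not write a header row; if you want column names in the output file, one option is to write them once before the loop:
# write just the column names to the new file before appending the chunks
readr::write_csv(readr::read_csv('my_file.csv', n_max = 0), 'my_new_file.csv')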

Related

Searching for target in Excel spreadsheet using R

As an R noob, I'm currently rather stumped by what is probably a rather trivial problem. I have data that looks like the second image below: essentially a long sheet of rows with values in three columns. What I need is a way to scan the sheet looking for particular combinations of values in the first and second columns, combinations that are specified in a second spreadsheet of targets (see picture 1). When that particular combination is found, I need the script to extract the whole row in question from the data file.
So far, I've managed to read the files without problem:
library(xlsx)
folder <- 'C:\\Users\\...\\Desktop\\R EXCEL test\\'
target_file <- paste(folder,(readline(prompt = "Enter filename for target list:")),sep = "")
data_file <- paste(folder,(readline(prompt = "Enter data file:")),sep = "")
targetsDb <- read.xlsx(target_file, sheetName = "Sheet1")
data <- read.xlsx(data_file, sheetName = "Sheet1")
targets <- vector(mode = "list", length = 3)
for (i in 1:nrow(targetsDb)) {
  targets[[i]] <- c(targetsDb[i, 1], targetsDb[i, 2])
}
And with the last command I've managed to save the target combinations as items in a list. However, I run into trouble when it comes to iterating through the file looking for any of those combinations of cell values in the first two columns. My approach was to create a list with one item,
SID_IA <- vector(mode = "list", length = 1)
and to fill it with the values of column 1 and 2 iteratively for each row of the data file:
for (n in 1:nrow(data)) {
  SID_IA[[n]] <- c(data[n, 1], data[n, 2])
I would then nest another for loop here, which basically goes through every row in the targets sheet to check if the combination of values currently in the SID_IA list matches any of the target ones. Then at the end of the loop, the list is emptied so it can be filled with the following combination of data values.
  for (i in targets) {
    if (SID_IA[[n]] %in% targets) {
      print(SID_IA[[n]], "in sentence", data[n, 1], "is ", data[n, 3])
    } else {
      print(FALSE)
    }
    SID_IA[[n]] <- NULL
  }
}
However, if I try to run that last loop, it returns the following output and error:
[1] FALSE
Error in SID_IA[[n]] : subscript out of bounds
In addition: Warning message:
In if (SID_IA[[n]] %in% targets) { :
the condition has length > 1 and only the first element will be used
So, it seems to be doing something for at least one iteration, but then crashes. I'm sure I'm missing something very elementary, but I just can't see it. Any ideas?
EDIT: As requested, I've removed the images and made the test Excel sheets available here and here.
OK, I'm attempting an answer that should require minimal use of fancy tricks.
data<- xlsx::read.xlsx(file = "Data.xlsx",sheetIndex = 1)
target<- xlsx::read.xlsx(file = "Targets.xlsx",sheetIndex = 1)
head(data)
target
These values are already in data.frame format. If all you want to know is which rows appear exactly the same in data and target, then it is as simple as a merge:
merge(target,data,all = F)
If, on the other hand, you want to keep the full data table with the target rows marked, then the easiest way is to make an index column:
data$indx<- 1:nrow(data)
data
mrg<- merge(target,data,all = F)
data$test<- rep("test", nrow(data))
data$test[mrg$indx]<- "target"
data
This is like the original image you'd posted.
BTW, if you are on a graphical interface you can also use a file dialogue to open data files; check out file.choose().
(Posted on behalf of the OP).
Following from #R.S.'s suggestion that didn't involve vectors and loops, and after some playing around, I have figured out how to extract the target lines, and then how to remove them from the original data, outputting both results. I'm leaving it here for future reference and considering this solved.
extracted <- merge(targets, data, all = F)
write.xlsx(extracted, output_file1)
# combine the full data with the extracted target rows, then keep only the rows
# that appear exactly once, i.e. the rows that are not targets
combined <- rbind(data, extracted)
minus.target <- combined[!duplicated(combined, fromLast = FALSE) & !duplicated(combined, fromLast = TRUE), ]
write.xlsx(minus.target, output_file2)
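An alternative to the rbind/duplicated trick, if a dplyr dependency is acceptable, is an anti-join; a minimal sketch, assuming targets and data share their first two column names:
library(dplyr)
# keep the rows of data that have no match in targets on the common columns
minus.target <- anti_join(data, targets)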

Reading in, breaking a data frame in pieces, and running operations using for loop in R

I have a data frame of 6 million entries and 28 columns in a .csv file (let's call it rd.csv) that's far too big for my computer to handle. I'm trying to read it in manageable pieces of one million entries each, and run different analyses on them. At first I was doing this by simply writing a separate function call for each piece of the original data frame, but obviously it's better to do it with an algorithm.
I'm using a for loop, but the thing is that it only saves the last element of the loop to my global environment. So, it only saves the sixth iteration. Here's the code:
for (i in 3) {
  nam <- paste("cd", i, sep = "")
  if (i == 1) {
    assign(nam, read.csv("rd.csv", header = TRUE, nrows = 1000000))
    a <- sapply(nam, class)
    b <- names(nam)
    ## Stats functions here
    rm(nam)
  } else {
    assign(nam, read.csv("rd.csv", colClasses = a, col.names = b, nrows = 1000000, skip = i * 1000000))
    ## Stats functions here
    rm(nam)
  }
}
Can anybody help me with the part of saving each variable created after an iteration?
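A minimal sketch of one way around this, assuming six chunks of one million rows in rd.csv: keep each iteration's output in a list instead of creating and removing variables with assign():
chunk_size <- 1000000
n_chunks <- 6
results <- vector("list", n_chunks)
for (i in seq_len(n_chunks)) {
  if (i == 1) {
    # first chunk: read the header and remember the column classes/names
    piece <- read.csv("rd.csv", header = TRUE, nrows = chunk_size)
    a <- sapply(piece, class)
    b <- names(piece)
  } else {
    # later chunks: skip the header plus the rows already read
    piece <- read.csv("rd.csv", header = FALSE, colClasses = a, col.names = b,
                      nrows = chunk_size, skip = (i - 1) * chunk_size + 1)
  }
  ## Stats functions here; store whatever each iteration produces, e.g.
  results[[i]] <- summary(piece)
  rm(piece)
}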

Subset large .csv file at reading in R

I have a very large .csv file (~4GB) which I'd like to read, then subset.
The problem comes at reading time (memory allocation error). Because the file is that large, reading it crashes, so what I'd like is a way to subset the file before or while reading it, so that it only keeps the rows for one city (Cambridge).
f:
id  City       Value
1   London     17
2   Coventry   21
3   Cambridge  14
......
I've already tried the usual approaches:
f <- read.csv(f, stringsAsFactors = FALSE, header = T, nrows = 100)
f.colclass <- sapply(f, class)
f <- read.csv(f, sep = ",", nrows = 3000000, stringsAsFactors = FALSE,
              header = T, colClasses = f.colclass)
which seem to work for up to 1-2M rows, but not for the whole file.
I've also tried subsetting at the reading itself using pipe:
f<- read.table(file = f,sep = ",",colClasses=f.colclass,stringsAsFactors = F,pipe('grep "Cambridge" f ') )
and this also seems to crash.
I thought the sqldf or data.table packages would have something, but no success yet!
Thanks in advance, p.
I think this was alluded to already, but just in case it wasn't completely clear: the sqldf package creates a temporary SQLite DB on your machine based on the csv file and allows you to write SQL queries to subset the data before saving the results to a data.frame.
library(sqldf)
query_string <- "select * from file where City == 'Cambridge'"
f <- read.csv.sql(file = "f.csv", sql = query_string)

# or, rather than saving all of the raw data in f, you may want to perform a sum
f_sum <- read.csv.sql(file = "f.csv",
                      sql = "select sum(Value) from file where City == 'Cambridge'")
One solution to this type of error is to convert your CSV file to an Excel file first.
Then you can map the Excel file into a MySQL table using Toad for MySQL; it is easy, just check the data types of the variables.
Then, using the RODBC package, you can access such a large dataset from R.
I am working with datasets of more than 20 GB this way.
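A minimal sketch of the RODBC step, assuming an ODBC DSN named 'my_dsn' has already been configured for the MySQL database and the data sit in a table called big_table (both names are placeholders):
library(RODBC)
channel <- odbcConnect("my_dsn", uid = "user", pwd = "password")
# pull only the subset you need, so R never holds the full table in memory
cambridge <- sqlQuery(channel, "SELECT * FROM big_table WHERE City = 'Cambridge'")
odbcClose(channel)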
Although there's nothing wrong with the existing answers, they miss the most conventional/common way of dealing with this: chunks (Here's an example from one of the multitude of similar questions/answers).
The only difference is that, unlike most of the answers, which load the whole file, you read it chunk by chunk and only keep the subset you need at each iteration:
# open connection to file (mostly convenience)
file_location = "C:/users/[insert here]/..."
file_name = 'name_of_file_i_wish_to_read.csv'
con <- file(paste(file_location, file_name, sep = '/'), "r")

# set chunk size - basically want to make sure it's small enough that
# your RAM can handle it
chunk_size = 1000 # the larger the chunk the more RAM it'll take but the faster it'll go
i = 0             # set i to 0 as it'll increase as we loop through the chunks

# loop through the chunks and select rows that contain Cambridge
repeat {
  # things to do only on the first read-through
  if (i == 0) {
    # read in the header only on the first go
    grab_header = TRUE
    # load the chunk
    tmp_chunk = read.csv(con, nrows = chunk_size, header = grab_header)
    # subset only to desired criteria
    cond = tmp_chunk[, 'City'] == "Cambridge"
    # initiate container for desired data
    df = tmp_chunk[cond, ]  # save desired subset in initial container
    cols = colnames(df)     # save column names to re-use on next chunks
  }
  # things to do on all subsequent non-first chunks
  else if (i > 0) {
    grab_header = FALSE
    # read.csv() errors once the connection is exhausted, so wrap it in try()
    tmp_chunk = try(read.csv(con, nrows = chunk_size, header = grab_header,
                             col.names = cols),
                    silent = TRUE)
    # stopping criteria for the loop:
    # when it errors or reads in 0 rows, exit the loop
    if (inherits(tmp_chunk, "try-error") || nrow(tmp_chunk) == 0) {break}
    # subset only to desired criteria
    cond = tmp_chunk[, 'City'] == "Cambridge"
    # append to existing dataframe
    df = rbind(df, tmp_chunk[cond, ])
  }
  # add 1 to i so the first-read-through branch only runs once
  i = i + 1
}
close(con) # close connection

# check out the results
head(df)
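As an aside, the grep-style pre-filter attempted in the question can also be done through data.table::fread, which can read the output of a shell command; a sketch, assuming the file is called f.csv as in the sqldf answer, that grep is available, and that the header line starts with "id," (an assumption about the file):
library(data.table)
# keep the header line plus every line mentioning Cambridge
f <- fread(cmd = 'grep -E "^id,|Cambridge" f.csv')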

Trimming big data

I am working on a similar issue to the one stated in this other posting and tried adapting the code to select the columns I am interested in and to make it fit my data file.
My issue, however, is that the resulting file has become larger than the original one, and I'm not sure the code is working the way I intended.
When I open the result with SPSS, the dataset seems to have taken in the header line and then made endless copies of the second line (I had to force-stop the process).
I noticed there's no counter in the while loop specifying the line; might this be the cause? My background in programming with R is very limited. The file is a .csv, is 4.8GB, and has 329 variables and millions of rows. I only need to keep around 30 of the variables.
This is the code I used:
## Open separate connections to hold cursor position
file.in <- file('npidata_20050523-20130707.csv', 'rt')
file.out <- file('Mainoutnpidata.txt', 'wt')
line <- readLines(file.in, n = 1)
line.split <- strsplit(line, ',')
## Column picking, only column 1
cat(line.split[[1]][1:11], line.split[[1]][23:25], line.split[[1]][31:33], line.split[[1]][308:311], sep = ",", file = file.out, fill = TRUE)
## Use a loop to read in the rest of the lines
line <- readLines(file.in, n = 1)
while (length(line)) {
  line.split <- strsplit(line, ',')
  if (length(line.split[[1]]) > 1) {
    cat(line.split[[1]][1:11], line.split[[1]][23:25], line.split[[1]][31:33], line.split[[1]][308:311], sep = ",", file = file.out, fill = TRUE)
  }
}
close(file.in)
close(file.out)
One thing wrong that jumps out is that you are missing a line <- readLines(file.in, n=1) inside your while loop. You are now stuck in an infinite loop. Also, reading only one line at a time is going to be terribly slow.
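For concreteness, a minimal sketch of the corrected loop (keeping the question's column picks; the only change is the added readLines() call at the end of each pass):
line <- readLines(file.in, n = 1)
while (length(line)) {
  line.split <- strsplit(line, ',')
  if (length(line.split[[1]]) > 1) {
    cat(line.split[[1]][c(1:11, 23:25, 31:33, 308:311)],
        sep = ",", file = file.out, fill = TRUE)
  }
  line <- readLines(file.in, n = 1) # advance to the next line so the loop can end
}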
If in your file (unlike the one in the example you linked to) every row contains the same number of columns, you could use my LaF package. This should result in something along the lines of:
library(LaF)
m <- detect_dm_csv("npidata_20050523-20130707.csv", header = TRUE)
laf <- laf_open(m)
begin(laf)
con <- file("Mainoutnpidata.txt", 'wt')
while (TRUE) {
  d <- next_block(laf, columns = c(1:11, 23:25, 31:33, 308:311))
  if (nrow(d) == 0) break
  # write.csv() does not allow suppressing column names, so use write.table()
  # to append each block without repeating the header
  write.table(d, file = con, sep = ",", row.names = FALSE, col.names = FALSE)
}
close(con)
close(laf)
If your 30 columns fit into memory you could even do:
library(LaF)
m <- detect_dm_csv("npidata_20050523-20130707.csv", header=TRUE)
laf <- laf_open(m)
d <- laf[, c(1:11, 23:25, 31:33, 308:311)]
close(laf)
I couldn't test the code above on your file, so can't guarantee there are no errors (let me know if there are).

Convert R read.csv to a readLines batch?

I have a fitted model that I'd like to apply to score a new dataset stored as a CSV. Unfortunately, the new data set is kind of large, and the predict procedure runs out of memory if I do it all at once. So, I'd like to convert the procedure that worked fine for small sets below into a batch mode that processes 500 lines at a time, then outputs a file for each scored 500.
I understand from this answer (What is a good way to read line-by-line in R?) that I can use readLines for this. So, I'd be converting from:
trainingdata <- as.data.frame(read.csv('in.csv'), stringsAsFactors=F)
fit <- mymodel(Y~., data=trainingdata)
newdata <- as.data.frame(read.csv('newstuff.csv'), stringsAsFactors=F)
preds <- predict(fit,newdata)
write.csv(preds, file=filename)
to something like:
trainingdata <- as.data.frame(read.csv('in.csv'), stringsAsFactors=F)
fit <- mymodel(Y~., data=trainingdata)

con <- file("newstuff.csv", open = "r")
i = 0
while (length(mylines <- readLines(con, n = 500, warn = FALSE)) > 0) {
  i = i + 1
  newdata <- as.data.frame(mylines, stringsAsFactors=F)
  preds <- predict(fit, newdata)
  write.csv(preds, file = paste(filename, i, '.csv', sep = ''))
}
close(con)
However, when I print the mylines object inside the loop, it doesn't get split into columns the way read.csv's output is: the headers are still a mess, and the under-the-hood wrapping of the vector into an n-column object isn't happening.
Whenever I find myself writing barbaric things like cutting the first row and wrapping the columns, I generally suspect R has a better way to do things. Any suggestions for how I can get read.csv-like output from a readLines csv connection?
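One way to get read.csv-like output from the lines readLines returns is to feed each batch back through read.csv() via its text argument; a minimal sketch, reusing fit and filename from the code above and assuming the file has a single header row:
con <- file("newstuff.csv", open = "r")
headers <- strsplit(readLines(con, n = 1), ",")[[1]] # consume the header once
i = 0
while (length(mylines <- readLines(con, n = 500, warn = FALSE)) > 0) {
  i = i + 1
  # parse the batch of raw lines just as read.csv would parse the file
  newdata <- read.csv(text = paste(mylines, collapse = "\n"),
                      header = FALSE, col.names = headers,
                      stringsAsFactors = FALSE)
  preds <- predict(fit, newdata)
  write.csv(preds, file = paste(filename, i, '.csv', sep = ''))
}
close(con)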
You can read your data into memory in chunks using read.csv with the skip and nrows arguments. In pseudo-code:
read_chunk = function(start, n) {
  read.csv(file, skip = start, nrows = n)
}

start_indices = (0:no_chunks) * chunk_size + 1
lapply(start_indices, function(x) {
  dat = read_chunk(x, chunk_size)
  pred = predict(fit, dat)
  write.csv(pred)
})
Alternatively, you could put the data into an SQLite database and use the RSQLite package to query the data in chunks. See also this answer, or do some digging with [r] large csv on SO.
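A minimal sketch of that SQLite route, assuming the CSV has already been loaded into a table called newstuff in newstuff.db (for example with sqldf::read.csv.sql or the sqlite3 command-line tool), and reusing fit and filename from the question:
library(DBI)
con <- dbConnect(RSQLite::SQLite(), "newstuff.db")
res <- dbSendQuery(con, "SELECT * FROM newstuff")
i <- 0
while (!dbHasCompleted(res)) {
  newdata <- dbFetch(res, n = 500) # fetch 500 rows at a time
  if (nrow(newdata) == 0) break
  i <- i + 1
  preds <- predict(fit, newdata)
  write.csv(preds, file = paste(filename, i, '.csv', sep = ''))
}
dbClearResult(res)
dbDisconnect(con)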
