I am working on a similar issue as was stated on this other posting and tried adapting the code to select the columns I am interested in and making it fit my data file.
My issue, however, is that the resulting file has become larger than the original one, and I'm not sure the code is working the way I intended.
When I open the result in SPSS, the dataset seems to have taken in the header line and then copied the second line over and over, millions of times (I had to force-stop the process).
I noticed there is no counter in the while loop specifying the line; might this be the problem? My background in programming with R is very limited. The file is a .csv of 4.8 GB with 329 variables and millions of rows, and I only need to keep around 30 of the variables.
This is the code I used:
##Open separate connections to hold cursor position
file.in <- file('npidata_20050523-20130707.csv', 'rt')
file.out<- file('Mainoutnpidata.txt', 'wt')
line<-readLines(file.in,n=1)
line.split <-strsplit(line, ',')
##Column picking: keep only the columns of interest
cat(line.split[[1]][1:11],line.split[[1]][23:25], line.split[[1]][31:33], line.split[[1]][308:311], sep = ",", file = file.out, fill= TRUE)
##Use a loop to read in the rest of the lines
line <-readLines(file.in, n=1)
while (length(line)) {
  line.split <- strsplit(line, ',')
  if (length(line.split[[1]]) > 1) {
    cat(line.split[[1]][1:11], line.split[[1]][23:25], line.split[[1]][31:33], line.split[[1]][308:311], sep = ",", file = file.out, fill = TRUE)
  }
}
close(file.in)
close(file.out)
One thing wrong that jumps out is that you are missing a line <- readLines(file.in, n=1) inside your while loop. You are now stuck in an infinite loop, reading the same line over and over. Also, reading only one line at a time is going to be terribly slow.
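For reference, a minimal sketch of the corrected loop, reading lines in batches to cut down on the slowness (same caveat as your original code: splitting on plain commas ignores quoted fields):
file.in  <- file('npidata_20050523-20130707.csv', 'rt')
file.out <- file('Mainoutnpidata.txt', 'wt')

# Header row: keep only the columns of interest
line   <- readLines(file.in, n = 1)
fields <- strsplit(line, ',')[[1]]
cat(fields[c(1:11, 23:25, 31:33, 308:311)], sep = ",", file = file.out, fill = TRUE)

# Body: read in batches instead of one line at a time
lines <- readLines(file.in, n = 10000)
while (length(lines) > 0) {
  split <- strsplit(lines, ',')
  keep  <- vapply(split, function(x) {
    if (length(x) < 311) return(NA_character_)
    paste(x[c(1:11, 23:25, 31:33, 308:311)], collapse = ",")
  }, character(1))
  writeLines(keep[!is.na(keep)], file.out)
  lines <- readLines(file.in, n = 10000)   # the re-read that was missing
}

close(file.in)
close(file.out)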
If in your file (unlike the one in the example you linked to) every row contains the same number of columns, you could use my LaF package. This should result in something along the lines of:
library(LaF)
m <- detect_dm_csv("npidata_20050523-20130707.csv", header=TRUE)
laf <- laf_open(m)
begin(laf)
con <- file("Mainoutnpidata.txt", 'wt')
while (TRUE) {
  d <- next_block(laf, columns = c(1:11, 23:25, 31:33, 308:311))
  if (nrow(d) == 0) break
  # append this block as CSV rows without repeating column names
  write.table(d, file = con, sep = ",", row.names = FALSE, col.names = FALSE)
}
close(con)
close(laf)
If your 30 columns fit into memory you could even do:
library(LaF)
m <- detect_dm_csv("npidata_20050523-20130707.csv", header=TRUE)
laf <- laf_open(m)
d <- laf[, c(1:11, 23:25, 31:33, 308:311)]
close(laf)
I couldn't test the code above on your file, so can't guarantee there are no errors (let me know if there are).
Related
So I have a large csv file that my computer cannot handle opening without RStudio terminating.
To solve this I am trying to iterate through the rows of the file in order to do my calculations on each row at a time, before storing the value and then moving on to the next row.
This I can normally achieve (eg on a smaller file) through simply reading and storing the whole csv file within Rstudio and running a simple for loop.
It is, however, the size of this storage of data that I am trying to avoid, hence I am trying to read a row of the csv file one at a time instead.
(I think that makes sense)
This was suggested here:
I have managed to get my calculations to be read and work quickly for the first row of my data file.
It is the looping over this that I am struggling with, as I am trying to use a for loop (though perhaps I should be using a while/if statement), but I have nowhere for the "i" value to come from within the loop. Part of my code is below:
con = file(FileName, "r")
for (row in 1:nrow(con)) {
  data <- read.csv(con, nrow = 1) #reading of file
  "insert calculations here"
}
So the "row" is not called upon so the loop only goes through once. I also have an issue with the "1:nrow(con)" as clearly the nrow(con) simply returns NULL
Any help with this would be great,
thanks.
read.csv() will generate an error if it tries to read past the end of the file. So you could do something like this:
con <- file(FileName, "rt")
repeat {
  data <- try(read.csv(con, nrows = 1, header = FALSE), silent = TRUE)  # reading of file
  if (inherits(data, "try-error")) break
  "insert calculations here"
}
close(con)
It will be really slow going one line at a time, but you can do it in larger batches if your calculation code supports that. And I'd recommend specifying the column types using colClasses in the read.csv() call, so that R doesn't guess the types differently for different batches.
Edited to add:
We've been told that there are 3000 columns of integers in the dataset. The first row only has partial header information. This code can deal with that:
n <- 1 # desired batch size
col.names <- paste0("C", 1:3000) # desired column names
con <- file(FileName, "rt")
readLines(con, 1) # Skip over bad header row
repeat {
  data <- try(read.csv(con, nrows = n, header = FALSE,
                       col.names = col.names,
                       colClasses = "integer"),
              silent = TRUE)  # reading of file
  if (inherits(data, "try-error")) break
  "insert calculations here"
}
close(con)
You could read in your data in batches of, say, 10,000 rows at a time (but you can change n to do as much as you want), do your calculations and then write the changes to a new file, appending each batch to the end of the file.
Something like:
i = 0
n = 10000
while (TRUE) {
  df = readr::read_csv('my_file.csv', skip = i, n_max = n)
  # If the number of rows in the file is divisible by n, it may be the case
  # that the next pass will result in an empty data.frame being returned
  if (nrow(df) > 0) {
    # do your calculations
    # If you have performed calculations on df and want to save those results,
    # save the data.frame to a file, appending it to the file to avoid
    # overwriting prior results.
    readr::write_csv(df, 'my_new_file.csv', append = TRUE)
  } else {
    break
  }
  # Check to see if we need to keep going; if so, add n to i
  if (nrow(df) < n) {
    break
  } else {
    i = i + n
  }
}
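One caveat, assuming my_file.csv has a header row: read_csv treats the first unskipped line as column names, so every chunk after the first would swallow a data row as its header. A hedged variant that reads the header once and passes it along:
library(readr)

hdr <- names(read_csv('my_file.csv', n_max = 0))   # grab the column names once
i <- 1                                             # skip the header line itself
n <- 10000
while (TRUE) {
  df <- read_csv('my_file.csv', skip = i, n_max = n, col_names = hdr)
  if (nrow(df) == 0) break
  # do your calculations, then append the results
  write_csv(df, 'my_new_file.csv', append = TRUE)
  if (nrow(df) < n) break
  i <- i + n
}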
When I run this loop I can print the results, and I want to create a data frame with this data, but I can't. Until now I have this:
filenames <- list.files(path=getwd())
numfiles <- length(filenames)
for (i in 1:numfiles) {
  file <- read.table(filenames[i], header = TRUE)
  ts = subset(file, file$name == "plantNutrientUptake")
  tss = subset(ts, ts$path == "//plants/nitrate")
  tssc = tss[, 2:3]
  d40 = tssc[41, 2]
  print(d40)
  print(filenames[i])
}
This is not the most efficient way to do this, but it takes advantage of what code you've already written. First, you'll create an empty data frame with the columns you want, but filled with NA. Then, in each iteration of the loop, you'll fill one row of the data frame.
filenames <- list.files(path=getwd())
numfiles <- length(filenames)
# Create an empty data.frame
df <- data.frame(filename = rep(NA, numfiles), d40 = rep(NA, numfiles))
for (i in 1:numfiles){
  file <- read.table(filenames[i], header = TRUE)
  ts = subset(file, file$name == "plantNutrientUptake")
  tss = subset(ts, ts$path == "//plants/nitrate")
  tssc = tss[, 2:3]
  d40 = tssc[41, 2]
  # Fill row i of the data frame
  df[i, "filename"] = filenames[i]
  df[i, "d40"] = d40
}
Hope that does it! Good luck :)
There are a lot of ways to do what you are asking. Also, without a reproducible example it is difficult to validate that code will run. I couldn't tell what type of data was in each of your variables, so I guessed that the filename is character and d40 is numeric. You'll need to change the code if that's not true.
The following method is using base R (no other packages). It builds off of what you have done. There are other ways to do this using map, do.call, or apply. But it's important to be able to run through a loop.
As someone commented, your code is just re-writing itself every loop. Luckily you have the variable i that you can use to specify where things go.
filenames <- list.files(path = getwd())
numfiles <- length(filenames)

# Declare an empty dataframe for efficiency purposes
df <- data.frame(
  filename = rep(NA_character_, numfiles),
  d40      = rep(NA_real_, numfiles),
  stringsAsFactors = FALSE
)

# Loop through the files and fill in the data
for (i in 1:numfiles){
  file <- read.table(filenames[i], header = TRUE)
  ts   <- subset(file, file$name == "plantNutrientUptake")
  tss  <- subset(ts, ts$path == "//plants/nitrate")
  tssc <- tss[, 2:3]
  df$filename[i] <- filenames[i]
  df$d40[i]      <- tssc[41, 2]
  print(df$d40[i])
  print(filenames[i])
}
You'll notice a few things about this code that are extra.
First, I'm declaring the variable type for each column explicitly. You can use rep(NA, numfiles), but that leaves R to guess what the column should be. This may not be a problem if all of your variables are obviously of the same type. But imagine you have a variable a = c("1","A","B"), all characters. R will go through the first iteration of the loop and guess that the column is numeric, and then a later run of the loop can fail or silently coerce the column when it hits a character.
Next, I'm declaring the entire dataframe before entering the loop. When people tell you that loops in [modern] R are slow it is often because you are re-allocating memory every loop. By declaring the entire dataframe up front you speed up the loop significantly. This also allows you to reference any cell in the dataframe...which is exactly what you want to do in the loop.
Finally, I'm using the $ syntax to make things clear. Writing df[i,"d40"] <- d40 is the same as writing df$d40[i] <- d40; I just think the second method is clearer. This is a matter of personal preference.
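To make the re-allocation point concrete, this is the growing-data-frame pattern the code above avoids (shown only to illustrate the slow approach, reusing the same files):
# Anti-pattern: rbind() inside the loop copies the whole data frame every iteration
df_slow <- data.frame()
for (i in 1:numfiles) {
  file <- read.table(filenames[i], header = TRUE)
  ts   <- subset(file, file$name == "plantNutrientUptake")
  tss  <- subset(ts, ts$path == "//plants/nitrate")
  d40  <- tss[, 2:3][41, 2]
  df_slow <- rbind(df_slow, data.frame(filename = filenames[i], d40 = d40))
}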
I'm a newbie in R, pre-processing a big dataset of a million lines to label the connected components and send the output to a file. But it is taking an awful lot of time using a for loop and cat(). Is there a faster way to write the output file in R? I am sharing a sample of the code. Any alternative methods, or a rewrite using a function that makes it more efficient, would be highly appreciated.
library(igraph)

#Simple example of undirected graph
g <- graph_from_literal(a--b, a--c, b--c, d--e)
plot(g)
#Connected components
#The option, mode, is ignored for undirected graphs
comp <- components(g, mode = "weak")
#output to a file
fout <- file("output.txt", "w")
for (v in V(g)) {
  vn <- V(g)$name[v]
  comp_id <- comp$membership[vn][[1]]
  comp_size <- comp$csize[comp_id]
  cat(sprintf("%s\t%s\t%s\n", vn, comp_id, comp_size), file = fout)
}
close(fout)
It seems like everything is vectorized and no for loop is needed. This gives the same output and uses data.table::fwrite, which will be quite a bit faster than cat.
vv = V(g)
vn = vv$name
comp_id = comp$membership[vv$name]
comp_size = comp$csize[comp_id]
data.table::fwrite(data.table::data.table(vn, comp_id, comp_size), "output.txt", col.names = FALSE, sep = "\t")
If you don't want the data table dependency, you could use base::write.table, which would still be better than pasting together strings with tabs yourself.
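For instance, a hedged base-R version of the same vectorized write, reusing vn, comp_id, and comp_size from the block above:
# Build a plain data frame and let write.table handle the tab separation
out <- data.frame(vn, comp_id, comp_size)
write.table(out, "output.txt", sep = "\t",
            row.names = FALSE, col.names = FALSE, quote = FALSE)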
I faced a similar problem, i.e. how to write 3 million (short) lines to a text file. I found that using writeChar speeds up the file writing process considerably (from several minutes to seconds).
Below, I replaced cat by writeChar in your code:
g <- graph_from_literal(a--b, a--c, b--c, d--e)
plot(g)
#Connected components
#The option, mode, is ignored for undirected graphs
comp <- components(g, mode = "weak")
# first clean the file if it exists
fout <- file("output.txt", "wb")
close(fout)
# switch in appending mode
fout <- file("output.txt", "ab")
for (v in V(g)) {
  vn <- V(g)$name[v]
  comp_id <- comp$membership[vn][[1]]
  comp_size <- comp$csize[comp_id]
  # set eos = NULL to avoid NULL terminators
  writeChar(sprintf("%s\t%s\t%s\n", vn, comp_id, comp_size), con = fout, eos = NULL)
}
close(fout)
(Caveat emptor: I don't have any of your data, so this is untested.)
Instead of doing the write each time within your loop, generate a vector of the strings (one file line each) and write once at the end. This type of file I/O is much more efficient.
all_lines <- sapply(V(g), function(v) {
  vn <- V(g)$name[v]
  comp_id <- comp$membership[vn][[1]]
  comp_size <- comp$csize[comp_id]
  sprintf("%s\t%s\t%s", vn, comp_id, comp_size)  # no trailing newline; writeLines adds one per element
})
writeLines(all_lines, "output.txt")
The use of sapply is one of R's efficiencies: doing things as "vectors of things". Though it is not strictly necessary (this could be done with a for loop, though several precautions need to be taken in order to not be grossly inefficient, especially when dealing with a million lines), once one can "grok" the intent of vector-mechanics, it may become easier to understand and deal with.
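For completeness, a sketch of the for-loop equivalent with the usual precaution of pre-allocating the output vector instead of growing it inside the loop:
all_lines <- character(length(V(g)))          # pre-allocate one slot per vertex
for (v in seq_along(V(g))) {
  vn        <- V(g)$name[v]
  comp_id   <- comp$membership[vn][[1]]
  comp_size <- comp$csize[comp_id]
  all_lines[v] <- sprintf("%s\t%s\t%s", vn, comp_id, comp_size)
}
writeLines(all_lines, "output.txt")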
I'm trying to input a large tab-delimited file (around 2GB) using the fread function in package data.table. However, because it's so large, it doesn't fit completely in memory. I tried to input it in chunks by using the skip and nrow arguments such as:
chunk.size = 1e6
done = FALSE
chunk = 1
while (!done) {
  temp = fread("myfile.txt", skip = (chunk - 1) * chunk.size, nrow = chunk.size - 1)
  #do something to temp
  chunk = chunk + 1
  if (nrow(temp) < 2) done = TRUE
}
In the case above, I'm reading in 1 million rows at a time, performing a calculation on them, and then getting the next million, etc. The problem with this code is that after every chunk is retrieved, fread needs to start scanning the file from the very beginning since after every loop iteration, skip increases by a million. As a result, after every chunk, fread takes longer and longer to actually get to the next chunk making this very inefficient.
Is there a way to tell fread to pause every say 1 million lines, and then continue reading from that point on without having to restart at the beginning? Any solutions, or should this be a new feature request?
You should use the LaF package. This introduces a sort of pointer on your data, thus avoiding the (for very large data) annoying behaviour of reading the whole file. As far as I understand it, fread() in the data.table package needs to know the total number of rows, which takes time for GB-sized data.
Using the pointer in LaF you can go to any line you want, read a chunk of data, apply your function to it, and then move on to the next chunk. On my small PC I ran through a 25 GB csv file in steps of 10e6 lines and extracted the ~5e6 observations needed in total; each 10e6 chunk took 30 seconds.
UPDATE:
library('LaF')
huge_file <- 'C:/datasets/protein.links.v9.1.txt'
#First detect a data model for your file:
model <- detect_dm_csv(huge_file, sep=" ", header=TRUE)
Then create a connection to your file using the model:
df.laf <- laf_open(model)
Once done you can do all sorts of things without needing to know the size of the file, as you would with data.table. For instance, place the pointer at line 100e6 and read 1e6 lines of data from there:
goto(df.laf, 100e6)
data <- next_block(df.laf,nrows=1e6)
Now data contains 1e6 lines of your CSV file (starting from line 100e6).
You can read chunks of data (size depending on your memory) and keep only what you need. E.g. the huge_file in my example points to a file with all known protein sequences and has a size of >27 GB, way too big for my PC. To get only the human sequences I filtered using the organism id, which is 9606 for human and should appear at the start of the variable protein1. A dirty way is to put it in a simple for-loop and read one data chunk at a time:
library('dplyr')
library('stringr')

res <- df.laf[1, ][0, ]
for (i in 1:10) {
  raw <- next_block(df.laf, nrows = 100e6) %>%
    filter(str_detect(protein1, "^9606\\."))
  res <- rbind(res, raw)
}
Now res contains the filtered human data. Better still, for more complex operations such as calculating on the data on the fly, the function process_blocks() takes a function as an argument, so inside that function you can do whatever you want with each piece of data. Read the documentation.
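For example, a hedged sketch of process_blocks(), assuming it calls the supplied function with each block plus the previous result (NULL on the first call, and a final call with an empty block at end of file):
# Count the human (9606) rows without ever holding the raw data in memory
count_human <- function(block, result) {
  if (is.null(result)) result <- 0                 # first call
  if (nrow(block) == 0) return(result)             # final call: just return the tally
  result + sum(grepl("^9606\\.", block$protein1))
}

n_human <- process_blocks(df.laf, count_human, nrows = 1e6)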
You can use readr's read_*_chunked to read in data and e.g. filter it chunkwise. See here and here for an example:
library(readr)

# Cars with 3 gears
f <- function(x, pos) subset(x, gear == 3)
read_csv_chunked(readr_example("mtcars.csv"), DataFrameCallback$new(f), chunk_size = 5)
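A hedged variant of the same idea using readr's SideEffectChunkCallback (assuming that callback type and its pos argument behave as described in the readr documentation), so each filtered chunk is appended to a file instead of accumulated in memory; the output file name here is hypothetical:
library(readr)

# Write the gear == 3 rows chunk by chunk; pos is the row number of the
# first row in the chunk, so the header is written only for the first chunk
write_filtered <- function(x, pos) {
  write_csv(subset(x, gear == 3), "mtcars_gear3.csv", append = pos > 1)
}
read_csv_chunked(readr_example("mtcars.csv"),
                 SideEffectChunkCallback$new(write_filtered),
                 chunk_size = 5)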
A related option is the chunked package. Here is an example with a 3.5 GB text file:
library(chunked)
library(tidyverse)
# I want to look at the daily page views of Wikipedia articles
# before 2015... I can get zipped log files
# from here: https://dumps.wikimedia.org/other/pagecounts-ez/merged/2012/2012-12/
# I get a bz file, unzip to get this:
my_file <- 'pagecounts-2012-12-14/pagecounts-2012-12-14'
# How big is my file?
print(paste(round(file.info(my_file)$size / 2^30,3), 'gigabytes'))
# [1] "3.493 gigabytes" too big to open in Notepad++ !
# But can read with 010 Editor
# look at the top of the file
readLines(my_file, n = 100)
# to find where the content starts, vary the skip value,
read.table(my_file, nrows = 10, skip = 25)
This is where we start working in chunks of the file, we can use most dplyr verbs in the usual way:
# Let the chunked pkg work its magic! We only want the lines containing
# "Gun_control". The main challenge here was identifying the column
# header
df <-
read_chunkwise(my_file,
chunk_size=5000,
skip = 30,
format = "table",
header = TRUE) %>%
filter(stringr::str_detect(De.mw.De.5.J3M1O1, "Gun_control"))
# this line does the evaluation,
# and takes a few moments...
system.time(out <- collect(df))
And here we can work on the output as usual, since it's much smaller than the input file:
# clean up the output to separate into cols,
# and get the number of page views as a numeric
out_df <-
out %>%
separate(De.mw.De.5.J3M1O1,
into = str_glue("V{1:4}"),
sep = " ") %>%
mutate(V3 = as.numeric(V3))
head(out_df)
V1 V2 V3
1 en.z Gun_control 7961
2 en.z Category:Gun_control_advocacy_groups_in_the_United_States 1396
3 en.z Gun_control_policy_of_the_Clinton_Administration 223
4 en.z Category:Gun_control_advocates 80
5 en.z Gun_control_in_the_United_Kingdom 68
6 en.z Gun_control_in_america 59
V4
1 A34B55C32D38E32F32G32H20I22J9K12L10M9N15O34P38Q37R83S197T1207U1643V1523W1528X1319
2 B1C5D2E1F3H3J1O1P3Q9R9S23T197U327V245W271X295
3 A3B2C4D2E3F3G1J3K1L1O3P2Q2R4S2T24U39V41W43X40
4 D2H1M1S4T8U22V10W18X14
5 B1C1S1T11U12V13W16X13
6 B1H1M1N2P1S1T6U5V17W12X12
#--------------------
fread() can definitely help you read the data in chunks.
The mistake in your code is that nrow should be kept constant while the skip parameter changes during the loop.
Something like this is what I wrote for my data:
library(data.table)

data = NULL
for (i in 0:20) {
  data[[i + 1]] = fread("my_data.csv", nrow = 10000, select = c(1, 2:100), skip = 10000 * i)
}
And you may insert the following code in your loop:
start_time <- Sys.time()
#####something!!!!
end_time <- Sys.time()
end_time - start_time
to check the timing; each loop iteration should take a similar amount of time on average.
Then you could use another loop to combine your data by rows with R's default rbind function.
The sample code could be something like this:
new_data = data[[1]]
for (i in 1:20) {
  new_data = rbind(new_data, data[[i + 1]], use.names = FALSE)
}
to unify into a large dataset.
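As a hedged aside, data.table::rbindlist() would do that combining step in a single call instead of a loop:
library(data.table)
new_data <- rbindlist(data, use.names = FALSE)   # bind all chunks at once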
Hope my answer helps with your question.
I loaded 18 GB of data with 2k+ columns and 200k rows in about 8 minutes using this method.
I have a fitted model that I'd like to apply to score a new dataset stored as a CSV. Unfortunately, the new data set is kind of large, and the predict procedure runs out of memory on it if I do it all at once. So, I'd like to convert the procedure that worked fine for small sets below, into a batch mode that processes 500 lines at a time, then outputs a file for each scored 500.
I understand from this answer (What is a good way to read line-by-line in R?) that I can use readLines for this. So, I'd be converting from:
trainingdata <- as.data.frame(read.csv('in.csv'), stringsAsFactors=F)
fit <- mymodel(Y~., data=trainingdata)
newdata <- as.data.frame(read.csv('newstuff.csv'), stringsAsFactors=F)
preds <- predict(fit,newdata)
write.csv(preds, file=filename)
to something like:
trainingdata <- as.data.frame(read.csv('in.csv'), stringsAsFactors=F)
fit <- mymodel(Y~., data=trainingdata)
con <- file("newstuff.csv", open = "r")
i = 0
while (length(mylines <- readLines(con, n = 500, warn = FALSE)) > 0) {
  i = i + 1
  newdata <- as.data.frame(mylines, stringsAsFactors = F)
  preds <- predict(fit, newdata)
  write.csv(preds, file = paste(filename, i, '.csv', sep = ''))
}
close(con)
However, when I print the mylines object inside the loop, it doesn't get split into columns the way read.csv output is: the headers are still a mess, and the under-the-hood wrapping that turns the vector into an ncol object isn't happening.
Whenever I find myself writing barbaric things like cutting the first row and wrapping the columns, I generally suspect R has a better way to do things. Any suggestions for how I can get read.csv-like output from a readLines csv connection?
You can read your data into memory in chunks using read.csv with its skip and nrows arguments. In pseudo-code:
read_chunk = function(start, n) {
  read.csv(file, skip = start, nrows = n)
}

start_indices = (0:no_chunks) * chunk_size + 1
lapply(start_indices, function(x) {
  dat = read_chunk(x, chunk_size)
  pred = predict(fit, dat)
  write.csv(pred, file = paste0('preds', x, '.csv'))
})
Alternatively, you could put the data into an SQLite database and use the RSQLite package to query the data in chunks. See also this answer, or do some digging with [r] large csv on SO.
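For the SQLite route, a hedged sketch using DBI and RSQLite: the database and table names here are hypothetical, the CSV is assumed to have been imported into the table beforehand (e.g. with dbWriteTable() on a data frame, or the sqlite3 command-line importer), and fit is the fitted model from the question:
library(DBI)

con <- dbConnect(RSQLite::SQLite(), "newstuff.sqlite")
res <- dbSendQuery(con, "SELECT * FROM newstuff")
i <- 0
while (!dbHasCompleted(res)) {
  newdata <- dbFetch(res, n = 500)               # pull 500 rows at a time
  if (nrow(newdata) == 0) break
  i <- i + 1
  preds <- predict(fit, newdata)
  write.csv(preds, file = paste0("preds", i, ".csv"))
}
dbClearResult(res)
dbDisconnect(con)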