How to read part of the data from very large files?
The sample data is generated as:
set.seed(123)
df <- data.frame(replicate(10, sample(0:2000, 15 * 10^5, rep = TRUE)),
                 replicate(10, stringi::stri_rand_strings(1000, 5)))
head(df)
# X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X1.1 X2.1 X3.1 X4.1 X5.1 X6.1 X7.1 X8.1 X9.1 X10.1
# 1 575 1843 1854 883 592 1362 1075 210 1526 1365 Qk8NP Xvw9z OYRa1 8BGIV bejiv CCoIE XDKJN HR7zc 2kKNY 1I5h8
# 2 1577 390 1861 912 277 636 758 1461 1978 1865 ZaHFl QLsli E7lbs YGq8u DgUAW c6JQ0 RAZFn Sc0Zt mif8I 3Ys6U
# 3 818 1076 147 1221 257 1115 759 1959 1088 1292 jM5Uw ctM3y 0HiXR hjOHK BZDOP ULQWm Ei8qS BVneZ rkKNL 728gf
# 4 1766 884 1331 1144 1260 768 1620 1231 1428 1193 r4ZCI eCymC 19SwO Ht1O0 repPw YdlSW NRgfL RX4ta iAtVn Hzm0q
# 5 1881 1851 1324 1930 1584 1318 940 1796 830 15 w8d1B qK1b0 CeB8u SlNll DxndB vaufY ZtlEM tDa0o SEMUX V7tLQ
# 6 91 264 1563 414 914 1507 1935 1970 287 409 gsY1u FxIgu 2XqS4 8kreA ymngX h0hkK reIsn tKgQY ssR7g W3v6c
saveRDS is used to save the file.
saveRDS(df, 'df.rds')
The file size is checked with the commands below:
file.info('df.rds')$size
# [1] 29935125
utils:::format.object_size(29935125, "auto")
# [1] "28.5 Mb"
The saved file is read back as follows:
readRDS('df.rds')
However, some of my files are several GB in size, and I need only a few columns for certain processing. Is it possible to read selected columns from RDS files?
Note: I already have the RDS files, generated after a considerable amount of processing. Now I want to know the best possible way to read selected columns from these existing RDS files.
I don't think you can read only a portion of an rds or rda file.
An alternative would be to use feather. As an example, using a large-ish feather I'm working with:
library(feather)
file.info("../feathers/C1.feather")["size"]
# size
# ../feathers/C1.feather 498782328
system.time( c1whole <- read_feather("../feathers/C1.feather") )
# user system elapsed
# 0.860 0.856 5.540
system.time( c1dyn <- feather("../feathers/C1.feather") )
# user system elapsed
# 0 0 0
ls.objects()
# Type Size PrettySize Dim
# c1dyn feather 3232 3.2 Kb 2886147 x 36
# c1whole tbl_df 554158688 528.5 Mb 2886147 x 36
You can interact with both variables as full data.frames: c1whole is already in memory (so may be a little faster), while accessing c1dyn is still quite speedy.
NB: some functions (e.g., several within dplyr) do not work on feather as they do on data.frame or tbl_df. If your intent is solely to pick-and-choose specific columns, then you'll be fine.
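For instance, with the lazily-opened c1dyn above, subsetting with [ should pull only the requested columns into memory rather than the whole table. A minimal sketch (the column names here are hypothetical placeholders; substitute columns that exist in your file):
# read just two columns from the feather file on disk
subset_df <- c1dyn[, c("col_a", "col_b")]
head(subset_df)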
SQLite is another common way to store tabular/matrix/data-frame data on your hard drive, using an SQLite database. It also lets you interrogate the data with standard SQL commands or with dplyr. Just be warned that SQLite has no date type, so any dates need to be converted to character before writing them to the database.
set.seed(123)
df <- data.frame(replicate(10, sample(0:2000, 15 * 10^5, rep = TRUE)),
                 replicate(10, stringi::stri_rand_strings(1000, 5)))
library(RSQLite)
conn <- dbConnect(RSQLite::SQLite(), dbname="myDB")
dbWriteTable(conn,"mytable",df)
alltables <- dbListTables(conn)
# Use sql queries to query data...
oneColumn <- dbGetQuery(conn,"SELECT X1 FROM mytable")
library(dplyr)
library(dbplyr)
my_db <- tbl(conn, "mytable")
my_db
# Use dplyr functions to query data...
my_db %>% select(X1)
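Note that dbplyr queries are lazy: select(X1) only builds SQL, and nothing is pulled into R until you call collect(). A small sketch using the my_db handle from above:
# materialise the selected column as a tibble in R
oneColumnTbl <- my_db %>% select(X1) %>% collect()
# show_query() prints the SQL that dbplyr generates behind the scenes
my_db %>% select(X1) %>% show_query()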
Related
I am looping to load multiple xlsx files. That part works fine, but I have not managed to add the column names of the documents (the same names for all files).
library(dplyr)
library(readr)
library(openxlsx)
library(readxl)
setwd("C:/Users/MiguelAngel/Documents/R Miguelo/Guillermo Ahumada")
ldf <- list()
listxlsx <- dir(pattern = "*.xlsx")
for (k in 1:length(listxlsx)){
  ldf[[k]] <- as.data.frame(read.xlsx(listxlsx[k]))
}
The result:
355 1500 1100 43831
1 190 850 600 43832
2 93 4000 3000 43833
3 114 4000 3000 43834
4 431 1000 700 43835
5 182 1000 700 43836
6 496 500 300 43837
7 254 500 300 43838
8 174 600 300 43839
9 397 1500 945 43840
10 198 1500 900 43841
11 271 1500 900 43842
12 94 3000 2000 43843
13 206 400 230 43844
14 305 1500 1100 43845
15 184 850 600 43846
16 90 4000 3000 43847
17 70 4000 3000 43848
18 492 1000 700 43849
19 168 1000 700 43850
20 530 500 300 43851
All the files load fine, but without the column names. I need to add the column names:
list_file <- dir(pattern = "*.xlsx") %>%
  lapply(read.xlsx) %>% # I tried stringsAsFactors here, but it raises an error.
  bind_rows()
but this appears:
list_file
(Screenshot: the original column names, which are the same in all files.)
I need to put these column names back after running the for loop.
Thanks for your help.
I cannot check this since I don't have Excel files to load, but I think this should work:
listxlsx <- list.files(path = "C:/Users/MiguelAngel/Documents/R Miguelo/Guillermo Ahumada",
                       pattern = "*.xlsx", full.names = TRUE)
names(listxlsx) <- listxlsx
purrr::map_dfr(listxlsx, readxl::read_excel, .id = "Filename")
(The first line is better practice for getting the filenames than relying on setwd().)
When listxlsx is a named vector the function map_dfr gives a column named Filename where the values are taken from listxlsx.
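If the .xlsx files themselves have no header row, readxl::read_excel() can also be given the names directly through its col_names argument. A sketch (the names below are hypothetical placeholders for your real column names):
library(purrr)
library(readxl)
# hypothetical column names -- replace with the real ones for your files
my_col_names <- c("id", "quantity", "price", "date")
all_data <- map_dfr(listxlsx,
                    ~ read_excel(.x, col_names = my_col_names),
                    .id = "Filename")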
A VERY simplified example of my dataset:
HUC8 YEAR RO_MM
1: 10010001 1961 78.2
2: 10010001 1962 84.0
3: 10010001 1963 70.2
4: 10010001 1964 130.5
5: 10010001 1965 54.3
I found this code online which sort of, but not quite, does what I want:
#create a list of the files from your target directory
file_list <- list.files(path="~/Desktop/Rprojects")
#initiate a blank data frame, each iteration of the loop will append the data from the given file to this variable
allHUCS <- data.frame()
#I want to read each .csv from a folder named "Rprojects" on my desktop into one huge dataframe for further use.
for (i in 1:length(file_list)){
  temp_data <- fread(file_list[i], stringsAsFactors = F)
  allHUCS <- rbindlist(list(allHUCS, temp_data), use.names = T)
}
Question: I have read that one should not use rbindlist for a large dataset:
"You should never ever ever iteratively rbind within a loop: performance might be okay in the beginning, but with each call to rbind it makes a complete copy of the data, so with each pass the total data to copy increases. It scales horribly. Consider do.call(rbind.data.frame, file_list)." – #r2evans
I know this may seem simple but I'm unclear about how to use his directive. Would I write this for the last line?
allHUCS <- do.call(rbind.data.frame(allHUCS, temp_data), use.names = T)
Or something else? In my actual data, each .csv has 2099 objects with 3 variables (but I only care about the last two.) The total dataframe should contain 47,000,000+ objects of 2 variables. When I ran the original code I got these errors:
Error in rbindlist(list(allHUCS, temp_data), use.names = T) :
  Item 2 has 2 columns, inconsistent with item 1 which has 3 columns. To fill missing columns use fill=TRUE.
In addition: Warning messages:
1: In fread(file_list[i], stringsAsFactors = F) :
  Detected 1 column names but the data has 2 columns (i.e. invalid file). Added 1 extra default column name for the first column which is guessed to be row names or an index. Use setnames() afterwards if this guess is not correct, or fix the file write command that created the file to create a valid file.
2: In fread(file_list[i], stringsAsFactors = F) :
  Stopped early on line 20. Expected 2 fields but found 3. Consider fill=TRUE and comment.char=. First discarded non-empty line: <<# mv *.csv .. ; >>
Except for the setnames() suggestion, I don't understand what I'm being told. I know it says it stopped early, but I don't even know how to see the entire dataset or to tell where it stopped.
I'm now reading that rbindlist and rbind are two different things and that rbindlist is faster than do.call(rbind, data). But the suggestion was do.call(rbind.data.frame, file_list). Which is going to be fastest?
Since the original post does not include a reproducible example, here is one that reads data from the Pokémon Stats repository that I maintain on GitHub.
First, we download a zip file containing one CSV file for each generation of Pokémon, and unzip it to the ./pokemonData subdirectory of the R working directory.
download.file("https://raw.githubusercontent.com/lgreski/pokemonData/master/PokemonData.zip",
"pokemonData.zip",
method="curl",mode="wb")
unzip("pokemonData.zip",exdir="./pokemonData")
Next, we obtain a list of files in the directory to which we unzipped the CSV files.
thePokemonFiles <- list.files("./pokemonData",
full.names=TRUE)
Finally, we load the data.table package, use lapply() with data.table::fread() to read the files, combine the resulting list of data tables with do.call(), and print the head() and tail() of the resulting data frame with all 8 generations of Pokémon stats.
library(data.table)
data <- do.call(rbind,lapply(thePokemonFiles,fread))
head(data)
tail(data)
...and the output:
> head(data)
ID Name Form Type1 Type2 Total HP Attack Defense Sp. Atk Sp. Def Speed
1: 1 Bulbasaur Grass Poison 318 45 49 49 65 65 45
2: 2 Ivysaur Grass Poison 405 60 62 63 80 80 60
3: 3 Venusaur Grass Poison 525 80 82 83 100 100 80
4: 4 Charmander Fire 309 39 52 43 60 50 65
5: 5 Charmeleon Fire 405 58 64 58 80 65 80
6: 6 Charizard Fire Flying 534 78 84 78 109 85 100
Generation
1: 1
2: 1
3: 1
4: 1
5: 1
6: 1
> tail(data)
ID Name Form Type1 Type2 Total HP Attack Defense Sp. Atk
1: 895 Regidrago Dragon 580 200 100 50 100
2: 896 Glastrier Ice 580 100 145 130 65
3: 897 Spectrier Ghost 580 100 65 60 145
4: 898 Calyrex Psychic Grass 500 100 80 80 80
5: 898 Calyrex Ice Rider Psychic Ice 680 100 165 150 85
6: 898 Calyrex Shadow Rider Psychic Ghost 680 100 85 80 165
Sp. Def Speed Generation
1: 50 80 8
2: 110 30 8
3: 80 130 8
4: 80 80 8
5: 130 50 8
6: 100 150 8
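As a side note, data.table::rbindlist() combines such a list in one call and is typically faster than do.call(rbind, ...); its fill argument also helps when some files have extra or missing columns, as in the error quoted in the question. A sketch with the same file list:
library(data.table)
# bind the list of data.tables in one pass; fill = TRUE pads columns
# that are missing in some files with NA instead of erroring
data <- rbindlist(lapply(thePokemonFiles, fread),
                  use.names = TRUE, fill = TRUE)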
I have a data set with the Levels and Trends for, say, 50 cities under 3 scenarios. Below is the sample data:
City <- paste0("City",1:50)
L1 <- sample(100:500,50,replace = T)
L2 <- sample(100:500,50,replace = T)
L3 <- sample(100:500,50,replace = T)
T1 <- runif(50,0,3)
T2 <- runif(50,0,3)
T3 <- runif(50,0,3)
df <- data.frame(City,L1,L2,L3,T1,T2,T3)
Now, across the 3 scenarios I find the minimum Level and minimum Trend using the code below:
df$L_min <- apply(df[,2:4],1,min)
df$T_min <- apply(df[,5:7],1,min)
Now I want to check whether these minimum values are significantly different from the levels and trends respectively, i.e. compare L_min against columns 2-4 and T_min against columns 5-7. This needs to be done for each city (row), and if the difference is significant, return which column it differs from.
It would help if someone could show how this can be done.
Thank you!!
I'll put my idea here; nevertheless, I'm looking forward to ideas from others.
> head(df)
City L1 L2 L3 T1 T2 T3 L_min T_min
1 City1 251 176 263 1.162313 0.07196579 2.0925715 176 0.07196579
2 City2 385 406 264 0.353124 0.66089524 2.5613980 264 0.35312402
3 City3 437 333 426 2.625795 1.43547766 1.7667891 333 1.43547766
4 City4 431 405 493 2.042905 0.93041254 1.3872058 405 0.93041254
5 City5 101 429 100 1.731004 2.89794314 0.3535423 100 0.35354230
6 City6 374 394 465 1.854794 0.57909775 2.7485841 374 0.57909775
> df$FC <- rowMeans(df[,2:4])/df[,8]
> df <- df[order(-df$FC), ]
> head(df)
City L1 L2 L3 T1 T2 T3 L_min T_min FC
18 City18 461 425 117 2.7786757 2.6577894 0.75974121 117 0.75974121 2.857550
38 City38 370 117 445 0.1103141 2.6890014 2.26174542 117 0.11031411 2.655271
44 City44 101 473 222 1.2754675 0.8667007 0.04057544 101 0.04057544 2.627063
10 City10 459 361 132 0.1529519 2.4678493 2.23373484 132 0.15295194 2.404040
16 City16 232 393 110 0.8628494 1.3995549 1.01689217 110 0.86284938 2.227273
15 City15 499 475 182 0.3679611 0.2519497 2.82647041 182 0.25194969 2.117216
Now you have the most different rows based on columns 2:4 at the top. Columns 5:7 in analogous way.
And some tips for statistical tests:
Always use t.test (parametric, based on the mean) rather than the Wilcoxon / Mann-Whitney U test (non-parametric, based on the median), since it has more power; HOWEVER:
- The data sets should be big. Example hypothesis: Montreal has taller citizens than Quebec; t.test will work fine when you take 100 people from each city, so we have height measurements for 200 people, 100 vs 100.
- The distributions should be close to normal in all samples, or both samples should have a similar distribution far from normal (it may be binomial). Either way, we can't use this test when one sample has a normal distribution and the other doesn't.
- The sizes of both samples should be equal, so 100 vs 100 is OK, but 87 vs 234 is not; the p-value may come out below 0.05, yet it can be misleading.
If your data doesn't meet the above conditions, I prefer a non-parametric test: less power, but more robust.
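For illustration, here is a minimal sketch of the Montreal vs. Quebec height example with simulated data (the numbers are made up purely to show the calls):
set.seed(1)
# simulated heights in cm, 100 people per city (made-up values)
montreal <- rnorm(100, mean = 178, sd = 7)
quebec   <- rnorm(100, mean = 175, sd = 7)
# parametric test based on means
t.test(montreal, quebec)
# non-parametric alternative based on ranks
wilcox.test(montreal, quebec)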
I have a file which is like this :
"1943" 359 1327 "t000000" 8
"1944" 359 907 "t000000" 8
"1946" 359 472 "t000000" 8
"1947" 359 676 "t000000" 8
"1948" 326 359 "t000000" 8
"1949" 359 585 "t000000" 8
"1950" 359 1157 "t000000" 8
"2460" 275 359 "t000000" 8
"2727" 22 556 "t000000" 8
"2730" 22 676 "t000000" 8
"479" 17 1898 "t0000000" 5
"864" 347 720 "t000s" 12
"3646" 349 691 "t000s" 7
"6377" 870 1475 "t000s" 14
"7690" 566 870 "t000s" 14
"7691" 870 2305 "t000s" 14
"8120" 870 1179 "t000s" 14
"8122" 44 870 "t000s" 14
"8124" 870 1578 "t000s" 14
"8125" 206 870 "t000s" 14
"8126" 870 1834 "t000s" 14
"6455" 1 1019 "t000t" 13
"4894" 126 691 "t00t" 9
"4896" 126 170 "t00t" 9
"560" 17 412 "t0t" 7
"130" 65 522 "tq" 18
"1034" 17 990 "tq" 10
"332" 3 138 "ts" 2
"2063" 61 383 "ts" 5
"2089" 127 147 "ts" 11
"2431" 148 472 "ts" 15
"2706" 28 43 "ts" 21
.....................
The first column is a random row number (obtained after some sorting that I needed); the fourth column contains the pattern for which I actually want separate notepad files.
What I want is individual notepad files, named for example f1.txt, f2.txt, f3.txt, ..., each containing all the rows for one value of column 4. For example, I get one file for "t000000", then a different one for "t000s", then a separate one for "t00t", and so on...
I did this,
list2env(split(sort, sort[,4]),envir=.GlobalEnv)
Here sort is the name of my data set (read in from the text file) and 4 is the column I split on.
I could then use the write.table command, but since my file is huge I get hundreds of pieces like that, and running write.table manually for each one is very tedious. Is there any way I can automate it?
Using the excellent data.table package:
library(data.table)
# get your source file
the_file <- fread('~/Desktop/file.txt') #replace with your file path
# vector of unique values of column 4 & the roots of your output filename
fl_names <- unique(the_file$V4)
# dump all the relevant subsets to files
for (f in fl_names) write.table(the_file[V4==f, ], paste0(f, '.txt'), row.names=FALSE)
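The same can be written as a single grouped data.table call, without managing fl_names by hand. A sketch of the idiom, assuming the same the_file as above (note that fwrite writes comma-separated output by default, unlike write.table's space-separated default):
# for each distinct value of V4, write that group's rows to "<value>.txt";
# .BY$V4 is the group's value, .SD holds the group's rows without the V4 column
the_file[, fwrite(.SD, paste0(.BY$V4, ".txt")), by = V4]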
You've already figured out split, but instead of list2env, which will make more work for you just use lapply:
# Generally confusing to name a data.frame
# the same as a common function!
X <- split(sort, sort[, 4])
invisible(lapply(names(X), function(y)
write.csv(X[[y]], file = paste0(y, ".csv"))))
Proof of concept:
Dir <- getwd() # Won't be necessary in your actual script
setwd(tempdir()) # I just don't want my working directory filled
list.files(pattern=".csv") # with random csv files, so I'm using tempdir()
# character(0) # Note that there are no csv files presently
X <- split(sort, sort[, 4]) # You've already figured this step out
## invisible is just so you don't have to see an empty list
## printed in your console. The rest is pretty straightforward
invisible(lapply(names(X), function(y)
write.csv(X[[y]], file = paste0(y, ".csv"))))
list.files(pattern=".csv") # Check that the files are there
# [1] "t000000.csv" "t0000000.csv" "t000s.csv" "t000t.csv"
# [5] "t00t.csv" "t0t.csv" "tq.csv" "ts.csv"
setwd(Dir) # Won't be necessary for your actual script
I have the following my_data:
geneid chr acc_no start end size strand S1 S2 A1 A2
1 gene_010010 1 AC12345.1 3662 4663 1002 - 328 336 757 874
2 gene_010020 1 AC12345.1 5750 7411 1662 - 480 589 793 765
3 gene_010030 2 AC12345.1 9003 11024 2022 - 653 673 875 920
4 gene_010040 2 AC12345.1 12006 12566 561 - 573 623 483 430
5 gene_010050 3 AC12345.1 15035 17032 1998 - 2256 2333 1866 1944
6 gene_010060 3 AC12345.1 18188 18937 750 - 526 642 650 586
I am able to calculate sums for a given column, for example:
chr.sums <- data.frame(with (my_data, tapply(S1, INDEX=chr, FUN=sum)))
The problem is, I want chr.sums to have four columns (S1, S2, A1 and A2) and 30 rows corresponding to the unique chr numbers. I do not want to switch back and forth to Python, but looping over the columns and assigning the output to specific columns of a data.frame baffles me.
EDIT
Toy data set above.
You can use ddply from plyr. Here is some code:
library(plyr)
ddply(my_data, .(chr), summarize, S1 = sum(S1), S2 = sum(S2),
      A1 = sum(A1), A2 = sum(A2))
EDIT. A more compact solution would be:
plyr::ddply(my_data, .(chr), colwise(sum, .(S1, S2, A1, A2)))
Here is how it works. The data is first split into pieces based on chr. Then, the columns S1, S2, A1, A2 are summed up for each piece. Finally, they are assembled back into a single data frame.
Any place you have this kind of a split-apply-combine problem, think plyr as a solution.
tapply won't handle multiple columns but the formula version of aggregate will.
chr.sums <- aggregate(cbind(S1, S2, A1, A2) ~ chr, data = my_data, FUN = sum)
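For comparison, a dplyr equivalent of the same split-apply-combine step (a sketch assuming dplyr 1.0 or later for across()):
library(dplyr)
chr.sums <- my_data %>%
  group_by(chr) %>%
  summarise(across(c(S1, S2, A1, A2), sum))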