Exporting an ffdf without NA values in R

I need to share data sets that I've imported into R as ffdf objects. My aim is to be able to export my ffdf datasets to CSV easily, without having to worry about NA values, which just inflate the size of the output file.
If I were working with a simple dataframe, I would use the following syntax:
write.csv(df, "C:/path/data.csv", row.names=FALSE, na="")
But the write.csv.ffdf function doesn't seem to take na as an argument. Can anyone tell me the correct syntax so that I don't have to do post-processing on the output file to strip out the NA values?

I think you are making an inaccurate characterization of the behavior of write.csv.ffdf.
require(ff)
# What follows is a minor modification of the first example in the `write.*` help page.
> x <- data.frame(log=rep(c(FALSE, TRUE), length.out=26), int=c(NA, 2:26),
+                 dbl=c(1:25, NA) + 0.1, fac=factor(c(letters[2:26], NA)),
+                 ord=c(NA, ordered(LETTERS[2:26])), dct=Sys.time()+1:26,
+                 dat=seq(as.Date("1910/1/1"), length.out=26, by=1))
> ffx <- as.ffdf(x)
> write.csv(ffx, na="")
"","log","int","dbl","fac","ord","dct","dat"
"1",FALSE,,1.1,"b",,2012-12-18 12:18:23,1910-01-01
"2",TRUE,2,2.1,"c",1,2012-12-18 12:18:24,1910-01-02
"3",FALSE,3,3.1,"d",2,2012-12-18 12:18:25,1910-01-03
"4",TRUE,4,4.1,"e",3,2012-12-18 12:18:26,1910-01-04
"5",FALSE,5,5.1,"f",4,2012-12-18 12:18:27,1910-01-05
"6",TRUE,6,6.1,"g",5,2012-12-18 12:18:28,1910-01-06
"7",FALSE,7,7.1,"h",6,2012-12-18 12:18:29,1910-01-07
"8",TRUE,8,8.1,"i",7,2012-12-18 12:18:30,1910-01-08
"9",FALSE,9,9.1,"j",8,2012-12-18 12:18:31,1910-01-09
"10",TRUE,10,10.1,"k",9,2012-12-18 12:18:32,1910-01-10
"11",FALSE,11,11.1,"l",10,2012-12-18 12:18:33,1910-01-11
"12",TRUE,12,12.1,"m",11,2012-12-18 12:18:34,1910-01-12
"13",FALSE,13,13.1,"n",12,2012-12-18 12:18:35,1910-01-13
"14",TRUE,14,14.1,"o",13,2012-12-18 12:18:36,1910-01-14
"15",FALSE,15,15.1,"p",14,2012-12-18 12:18:37,1910-01-15
"16",TRUE,16,16.1,"q",15,2012-12-18 12:18:38,1910-01-16
"17",FALSE,17,17.1,"r",16,2012-12-18 12:18:39,1910-01-17
"18",TRUE,18,18.1,"s",17,2012-12-18 12:18:40,1910-01-18
"19",FALSE,19,19.1,"t",18,2012-12-18 12:18:41,1910-01-19
"20",TRUE,20,20.1,"u",19,2012-12-18 12:18:42,1910-01-20
"21",FALSE,21,21.1,"v",20,2012-12-18 12:18:43,1910-01-21
"22",TRUE,22,22.1,"w",21,2012-12-18 12:18:44,1910-01-22
"23",FALSE,23,23.1,"x",22,2012-12-18 12:18:45,1910-01-23
"24",TRUE,24,24.1,"y",23,2012-12-18 12:18:46,1910-01-24
"25",FALSE,25,25.1,"z",24,2012-12-18 12:18:47,1910-01-25
"26",TRUE,26,,,25,2012-12-18 12:18:48,1910-01-26
If your goal is minimizing the RAM footprint during write operations, then first look at:
getOption("ffbatchbytes")
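For instance, a minimal sketch of lowering that chunk size before writing (the value below is purely illustrative):
options(ffbatchbytes = 16 * 2^20)  # cap ff's processing batches at roughly 16 MB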

write.csv.ffdf does not have an na parameter, but write.table.ffdf passes the na argument on to the write.table function that it wraps.
Just use sep="," as well and you are good to go.
This will work even for large ff variables.
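For example, a minimal sketch (untested, reusing the ffx object from above and a placeholder path) of writing the file chunk-wise with blanks in place of NA:
library(ff)
# write.table.ffdf processes the ffdf in chunks and forwards extra arguments
# such as na=, sep= and row.names= to write.table, so NA cells become empty fields
write.table.ffdf(ffx, file = "C:/path/data.csv",
                 sep = ",", na = "", row.names = FALSE)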

Related

read_csv (readr, R) populates an entire column with NA if there are NAs in the first 1000 + x observations of a simple and clean CSV (parsing failure)

I was just going through a tremendous headache caused by read_csv messing up my data by replacing content with NA while reading simple and clean CSV files.
I'm iterating over multiple large CSV files that add up to millions of observations. Some columns contain quite a few NAs for some variables.
When reading a CSV that contains NA in a certain column for the first 1000 + x observations, read_csv populates the entire column with NA, and the data are lost for further operations.
The warning "Warning: x parsing failures" is shown, but as I'm reading multiple files I cannot check them file by file, and I don't know an automated fix for the parsing problem that problems(x) points to.
Using read.csv instead of read_csv does not cause the problem, but it is slow and I run into encoding issues (handling different encodings requires too much memory for large files).
One way to work around this bug is to add a first row to the data that contains something for each column, but I would still need to read the file first somehow.
See a simplified example below:
## create a data frame
df <- data.frame(id = numeric(), string = character(),
                 stringsAsFactors = FALSE)
## populate the columns
df[1:1500, 1] <- 1:1500
df[1500, 2] <- "something"
# variable string gets its first value in obs. 1500
df[1500, ]
## check the number of NAs in variable string
sum(is.na(df$string)) # 1499
##write the df
write_csv(df, "df.csv")
##read the df with read_csv and read.csv
df_readr <- read_csv('df.csv')
df_read_standard <- read.csv('df.csv')
## check the number of NAs in variable string
sum(is.na(df_readr$string))         # 1500
sum(is.na(df_read_standard$string)) # 1499
## the read_csv version is all NA for variable string
problems(df_readr) ## What should that tell me? How do I fix it?
Thanks to MrFlick for the answering comment on my question:
The whole reason read_csv can be faster than read.csv is because it can make assumptions about your data. It looks at the first 1000 rows to guess the column types (via guess_max) but if there is no data in a column it can't guess what's in that column. Since you seem to know what's supposed to be in the columns, you should use the col_types= parameter to tell read_csv what to expect rather than making it guess. See the ?readr::cols help page to see how to tell read_csv what it needs to know.
Also guess_max = Inf overcomes the problem, but the speed advantage of read_csv seems to be lost.
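For reference, a minimal sketch (using the df.csv written in the example above) of declaring the column types up front so read_csv does not have to guess:
library(readr)
# tell read_csv what each column contains instead of letting it guess from the first 1000 rows
df_readr <- read_csv("df.csv",
                     col_types = cols(id = col_double(),
                                      string = col_character()))
sum(is.na(df_readr$string))  # should now be 1499, matching read.csv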

R: column-delimited tree-ring dataset into a Tucson file (*.rwl)

The Tucson file is a standard format for tree-ring datasets (see http://www.cybis.se/wiki/index.php?title=Tucson_format for a precise description).
The aim is to convert Excel files with the 1st column as YEARS and the other columns as MEASUREMENTS into that RWL format, in order to run the dplR package in R.
Some clues are already out there (creating a .rwl object), but the chron() and detrend() functions don't handle column files, as they introduce NAs by coercion.
I've tried many ways to build a brute-force loop without success, but I'm wondering whether a smarter way is possible within the R environment?
Anyway, if somebody here can help with a loop I'll take it :)
Thanks a lot !
Alex,
OK, the dplR package has a write.tucson() function (o_O)
library("dplR")
dat <- read.table ("column.txt", header = T, row.names = 1)
write.tucson (dat, "tucson.txt", prec = 0.01, long.names = TRUE)
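If the measurements start life in Excel rather than a text file, a hedged sketch (assuming the readxl package and a hypothetical workbook rings.xlsx with YEARS in the first column and one series per remaining column) might look like:
library(readxl)
library(dplR)
raw <- read_excel("rings.xlsx")   # hypothetical workbook
dat <- as.data.frame(raw[-1])     # keep only the measurement series
rownames(dat) <- raw[[1]]         # years become row names, as write.tucson expects
write.tucson(dat, "tucson.rwl", prec = 0.01, long.names = TRUE)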

'x' must be numeric error in R when reading from a file

I am trying to do Hartigan's diptest in R, however, I get the following error: 'x' must be numeric.
Apologies for such a basic question, but how do I ensure that the data that I load is numeric?
If I make up a set of values as follows (code below), the diptest works without problems:
library(diptest)
x = c(1,2,3,4,4,4,4,4,4,4,5,6,7,8,9,9,9,9,9,9,9,9,9)
hist(x)
dip.test(x)
But when, for example, the same values are saved in an Excel file or a tab-delimited .txt file (as one column of values) and imported into R, running the dip test gives the 'x' must be numeric error.
x <- read.csv("x.csv") # comes back in as a data frame
hist(x)
dip.test(x)
Is there a way to check what format the data imported from an Excel/.txt file has in R, and subsequently change it to numeric? Any help will be much appreciated, thank you.
Here's what's happening: the code you know works is working because the data are numeric, as they should be. When you read the values back in from a CSV, however, you get a data.frame, so you need to point to the numeric column inside it:
library(diptest)
x = c(1,2,3,4,4,4,4,4,4,4,5,6,7,8,9,9,9,9,9,9,9,9,9)
write.csv(x, "x.csv", row.names=F)
x <- read.csv("x.csv") # comes back in as a data frame
hist(x$x)
dip.test(x$x)
Hartigans' dip test for unimodality / multimodality
data: x$x
D = 0.15217, p-value = 2.216e-05
alternative hypothesis: non-unimodal, i.e., at least bimodal
If you were to save the file to a .RDS instead of .csv then you could avoid this problem.
You could also check if your data frame contains any non-numeric characters as follows:
which(!grepl('^[0-9]',your_data_frame[[1]]))
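More generally, a quick hedged sketch for inspecting what read.csv returned and doing the extraction yourself:
str(x)                      # shows that x is a data.frame and the class of each column
x_num <- x[[1]]             # pull the first column out as a vector
# if it came in as character or factor, convert via as.numeric(as.character(x_num))
is.numeric(x_num)           # should be TRUE before calling dip.test
dip.test(x_num)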

Use ape to phase a FASTA file and create a DNAbin object as output, then test Tajima's D using pegas

I'm trying to complete the very simple task of reading in an unphased FASTA file and phasing it using ape, and then calculating Tajima's D using pegas, but my data don't seem to be reading in correctly. Input and output are as follows:
library("ape")
library("adegenet")
library("ade4")
library("pegas")
DNAbin8c18 <- read.dna(file="fasta8c18.fa", format="f")
I shouldn't need to attach any data since I've just generated the file, but since the data() command was in the manual, I executed
data(DNAbin8c18)
and got
Warning message: In data(DNAbin8c18) : data set ‘DNAbin8c18’ not found
I know that data() only works in certain contexts, so maybe this isn't a big deal. I looked at what had been loaded
DNAbin8c18
817452 DNA sequences in binary format stored in a matrix.
All sequences of same length: 96
Labels:
CLocus_12706_Sample_1_Locus_34105_Allele_0 [BayOfIslands_s08...
CLocus_12706_Sample_2_Locus_31118_Allele_0 [BayOfIslands_s08...
CLocus_12706_Sample_3_Locus_30313_Allele_0 [BayOfIslands_s09...
CLocus_12706_Sample_5_Locus_33345_Allele_0 [BayOfIslands_s09...
CLocus_12706_Sample_7_Locus_37388_Allele_0 [BayOfIslands_s09...
CLocus_12706_Sample_8_Locus_29451_Allele_0 [BayOfIslands_s09... ...
More than 10 million nucleotides: not printing base composition
so it looks like the data should be fine. Because of this, I tried what I want to do
tajima.test(DNAbin8c18)
and got
Error: cannot allocate vector of size 2489.3 Gb
Many people have completed this same test using as many or more SNPs than I have, also from FASTA files, but is it possible that mine is too big, or can you see another issue?
The data file can be downloaded at the following link
https://drive.google.com/open?id=0B6qb8IlaQGFZLVRYeXMwRnpMTUU
I have also sent an earlier version of this question, with the data, to the r-sig-genetics mailing list, but I have not heard back.
Any thoughts would be much appreciated.
Ella
Thank you for the comment. Indeed, you are correct. The developer just emailed me with the following very helpful comments.
The problem is that your data are too big (too many sequences) and tajima.test() needs to compute the matrix of all pairwise distances. You could check this by trying:
dist.dna(DNAbin8c18, "N")
One possibility is to randomly sample some of the sequences, and repeat this many times, e.g.:
tajima.test(DNAbin8c18[sample(n, size = 1000), ])
This could be:
n <- nrow(DNAbin8c18)  # number of sequences to sample from
N <- 1000              # number of repeats
RES <- matrix(NA_real_, nrow = N, ncol = 3)  # one row per repeat: D, Pval.normal, Pval.beta
for (i in 1:N)
    RES[i, ] <- unlist(tajima.test(DNAbin8c18[sample(n, size = 10000), ]))
You may adjust N and 'size =' so that the run does not take too long. Then you may look at the distribution of the columns of RES.

Does R produce warnings when it runs out of memory during a read.csv command?

This question is pretty simple and maybe even dumb, but I can't find an answer on google. I'm trying to read a .txt file into R using this command:
data <- read.csv("perm2test.txt", sep="\t", header=FALSE, row.names=1, col.names=paste("V", seq_len(max(count.fields("perm2test.txt", sep="\t"))), sep=""), fill=TRUE)
The reason I have the col.names argument is that every line in my .txt file has a different number of observations. I've tested this on a much smaller file and it works. However, when I run it on my actual dataset (which is only 48MB), I'm not sure whether it is working. The reason I'm not sure is that I haven't received an error message, yet it has been "running" for over 24 hours at this point (just the read.csv command above). Is it possible that it has run out of memory and just doesn't output a warning?
I've looked around and I know people say there are functions out there to reduce the size and remove lines that aren't needed, etc., but to be honest I don't think this file is THAT big, and unfortunately I do need every line in the file... (it's actually only 70 lines, but some lines contain as many as 100k entries, while others may have only, say, 100). Any ideas what is happening?
Obviously untested but should give you some code to modify:
datL <- readLines("perm2test.txt")  # one line per group
# may want to exclude some lines, but the question is unclear on that
listL <- lapply(datL, function(L) read.delim(text = L, header = FALSE, colClasses = "numeric"))
# This is a list of values by group
dfL <- data.frame(vals = unlist(listL),
                  # Now build a grouping vector that is associated with each bundle of values
                  # (numeric ids rather than LETTERS, since there are ~70 groups)
                  groups = factor(rep(seq_along(listL), sapply(listL, length))))
# Might have been able to do that last maneuver with `stack`.
library(lattice)
bwplot(vals ~ groups, data = dfL)
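As hinted in the comment above, a hedged alternative sketch using stack() to build the long data frame directly from a named list of numeric vectors:
vecL <- lapply(listL, function(d) unlist(d, use.names = FALSE))
names(vecL) <- paste0("grp", seq_along(vecL))  # hypothetical group labels
dfL2 <- stack(vecL)                            # gives columns: values, ind
bwplot(values ~ ind, data = dfL2)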
