R: Cannot create a formula from a zero-column data frame

I'm trying to do a redundancy analysis (RDA) on my data in R. The data frame I'm using was imported from a Microsoft Excel CSV file. The data frame looks something like this:
site biomass index
1 0.001 1.5
2 0.122 2.3
3 0.255 4.9
When trying to create a formula for the RDA, I constantly get the following message: "Error in formula.data.frame(object, env = baseenv()) :
cannot create a formula from a zero-column data frame"
Does anyone know how I can change my data frame so that I no longer get this error message?
Thanks in advance!

You are loading all of your data into row.names instead of into columns:
you used the default separator ',' while your data is actually separated by ';', and
you specified row.names = 1, so the first column (the only one, since the separator is wrong) ends up in row.names.
That is why your data.frame has no columns. To fix this, use
read.csv('data.csv', sep = ';', row.names = NULL)
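For illustration, a quick way to confirm this diagnosis (a sketch, assuming the file is the semicolon-separated data.csv described above) is to compare the two calls:
bad <- read.csv("data.csv", row.names = 1)    # wrong separator: each line is read as one field
ncol(bad)                                     # 0 columns; row.names = 1 consumed the only field
good <- read.csv("data.csv", sep = ";", row.names = 1)
str(good)                                     # site as row names, biomass and index as columns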

Problem solved :) Thanks for your answers. I tried the suggestions, but what ended up working was to import the data with read.csv2("data.csv", row.names = 1), i.e. to use read.csv2 instead of read.csv, since read.csv2 uses the conventions common in many European locales.
There is more info on the difference between csv and csv2 here: Difference between read.csv() and read.csv2() in R
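Roughly speaking, the two functions differ only in their defaults, so the call above is more or less equivalent to spelling the separator and decimal mark out explicitly (a sketch, not the full argument list):
# read.csv()  defaults: sep = ",", dec = "."
# read.csv2() defaults: sep = ";", dec = ","   (common in European locales)
data <- read.csv2("data.csv", row.names = 1)
data <- read.csv("data.csv", sep = ";", dec = ",", row.names = 1)   # essentially the same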

Related

R : Column delimited tree-rings dataset into TUCSON file (*.RWL)

The Tucson file is a standard format for tree-ring datasets (see http://www.cybis.se/wiki/index.php?title=Tucson_format for a precise description).
The aim is to convert Excel files, with the 1st column as YEARS and the other columns as MEASUREMENTS, into that RWL format in order to run the dplR package in R.
Some clues are already out there (creating a .rwl object), but the chron() and detrend() functions don't handle plain column files, as they introduce NAs by coercion.
I've tried many ways to build a "brute-force" loop without succeeding, but I'm wondering whether a smarter way is possible within the R environment?
Anyway, if somebody here is able to help with a loop, I'll take it :)
Thanks a lot!
Alex,
OK, the dplR package has a write.tucson() function (o_O)
library("dplR")
dat <- read.table ("column.txt", header = T, row.names = 1)
write.tucson (dat, "tucson.txt", prec = 0.01, long.names = TRUE)
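As an optional sanity check (a sketch, assuming dplR is still loaded and the measurements round-tripped correctly), you can read the Tucson file back in and run the usual chronology steps on it:
rwl <- read.tucson("tucson.txt")       # re-import the converted file
rwi <- detrend(rwl, method = "Spline") # detrend each series
crn <- chron(rwi)                      # build the mean chronology
head(crn)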

'x' must be numeric R error when reading from file

I am trying to run Hartigan's dip test in R; however, I get the following error: 'x' must be numeric.
Apologies for such a basic question, but how do I ensure that the data that I load is numeric?
If I make up a set of values as follows (code below), the diptest works without problems:
library(diptest)
x = c(1,2,3,4,4,4,4,4,4,4,5,6,7,8,9,9,9,9,9,9,9,9,9)
hist(x)
dip.test(x)
But when, for example, the same values are saved in an Excel file or a tab-delimited .txt file (as one column of values) and imported into R, running the dip test produces the 'x' must be numeric error.
x <- read.csv("x.csv") # comes back in as a data frame
hist(x)
dip.test(x)
Is there a way to check, in R, what format the data imported from an Excel/.txt file is in, and subsequently change it to numeric? Thanks again.
Any help will be much appreciated, thank you.
Here's what's happening: the code you know works does so because the data is of class numeric, as it should be. When you read the file back in, however, you get a data.frame. So you need to point to the numeric column of the data.frame:
library(diptest)
x = c(1,2,3,4,4,4,4,4,4,4,5,6,7,8,9,9,9,9,9,9,9,9,9)
write.csv(x, "x.csv", row.names=F)
x <- read.csv("x.csv") # comes back in as a data frame
hist(x$x)
dip.test(x$x)
Hartigans' dip test for unimodality / multimodality
data: x$x
D = 0.15217, p-value = 2.216e-05
alternative hypothesis: non-unimodal, i.e., at least bimodal
If you were to save the object to an .rds file instead of a .csv, you could avoid this problem.
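For example, a minimal sketch of that round trip:
saveRDS(x, "x.rds")        # stores the object itself, class included
x2 <- readRDS("x.rds")     # comes back as a numeric vector, not a data frame
dip.test(x2)               # works without the $x indexing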
You could also check if your data frame contains any non-numeric characters as follows:
which(!grepl('^[0-9]',your_data_frame[[1]]))
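Note that this regex only looks at the first character of each entry. A slightly stricter check (my own variant, not part of the original answer) is to flag everything that fails numeric conversion outright:
vals <- as.character(your_data_frame[[1]])
which(is.na(suppressWarnings(as.numeric(vals))) & !is.na(vals))   # indices of non-numeric entries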

R - write.table() creates table that isn't read properly by fread()

I have a data.table in R that I'm trying to write out to a .txt file, and then input back into R.
It's a sizeable table of 6.5M observations and 20 variables, so I want to use fread().
When I use
write.table(data, file = "data.txt")
a table of about 2.2GB is written to data.txt. On manually inspecting it, I can see that there are column names, that it is separated by " ", and that there are quotes on character variables. So everything should be fine.
However,
data <- fread("data.txt")
returns a data.table of 6.5M observations and 1 variable. OK, maybe for some reason fread() isn't automatically understanding the separator string:
data <- fread("data.txt", sep = " ")
All the data is in the proper variables now, but:
R has added an unnecessary row-number column,
in one (and only one) of my columns all NAs have been replaced by 9218868437227407266, and
all variable names are missing.
Maybe fread() isn't recognizing the header, somehow.
data <- fread("data.txt", sep = " ", header = T)
Now my first row of observations has become my column names. Not very useful.
I'm completely baffled. Does anyone understand what's happening here?
EDIT:
row.names = F solved the names problem, thanks Ananda Mahto.
Ran
datasub <- data[runif(1000,1,6497651), ]
write.table(datasub, file = "datasub.txt", row.names = F)
fread("datasub.txt")
fread() seems to work fine for the smaller dataset.
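As an aside (not something from the original thread), data.table's own writer avoids the row-name problem entirely, since fwrite() never emits row names; a minimal sketch:
library(data.table)
fwrite(data, "data.txt", sep = " ")   # header kept, no row-number column
data2 <- fread("data.txt")            # separator and header detected automatically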
EDIT:
Here is the subset of data I created above:
https://github.com/cbcoursera1/ExploratoryDataAnalysisProject2/blob/master/datasub.txt
This data comes from the National Emissions Inventory (NEI) and is made available by the EPA. More information is available here:
http://www.epa.gov/ttn/chief/eiinformation.html
EDIT:
I can no longer reproduce this issue. It may be that row.names = F solved the issue, or possibly restarting R/clearing my environment/something random fixed the problem.

Importing data from csv into R, column not populated

I am new to R programming. I have imported a csv file using the following command:
PivotTest <- read.csv(file.choose(), header=T)
The csv file has 7 columns: Meter_Serial_No, Reading_Date, Reading_Description, Reading_value, Entry_Processed, TS_Inserted, TS_LastUpdated.
After importing, the Meter_Serial_No column is filled with zeros, even though there is data in that column in the csv file. When I inspect that particular column (PivotTest$Meter_Serial_No), it returns NULL. Can anyone assist me, please?
Furthermore, the csv that I'm importing has more than 127,000 rows. When I test with only 10 rows of data, I don't get this problem of the Meter_Serial_No column being filled with zeros.
It depends on the class of the values in that column (PivotTest$Meter_Serial_No). I believe there is a problem with the type conversion; try the following, which reads the first column (Meter_Serial_No) as character and the remaining six columns as numeric:
PivotTest <- read.csv("test.csv", header = TRUE, colClasses = c("character", rep("numeric", 6)))

Importing csv file into R - numeric values read as characters

I am aware that there are similar questions on this site, however, none of them seem to answer my question sufficiently.
This is what I have done so far:
I have a csv file which I open in Excel. I manipulate the columns algebraically to obtain a new column "A". I import the file into R using read.csv() and the entries in column A are stored as factors - I want them to be stored as numeric. I found this question on the topic:
Imported a csv-dataset to R but the values becomes factors
Following that advice, I include stringsAsFactors = FALSE as an argument in read.csv(); however, as Hong Ooi suggested on the page linked above, this doesn't cause the entries in column A to be stored as numeric values.
A possible solution is to use the advice given in the following page:
How to convert a factor to an integer\numeric without a loss of information?
however, I would like a cleaner solution, i.e. a way to import the file so that the entries of column A are stored as numeric values.
Cheers for any help!
Whatever algebra you are doing in Excel to create the new column could probably be done more effectively in R.
Please try the following: read the raw file (before any Excel manipulation) into R using read.csv(..., stringsAsFactors = FALSE). [If that does not work, take a look at ?read.table (which read.csv wraps); however, there may be some other underlying issue.]
For example:
delim = "," # or is it "\t" ?
dec = "." # or is it "," ?
myDataFrame <- read.csv("path/to/file.csv", header=TRUE, sep=delim, dec=dec, stringsAsFactors=FALSE)
Then, let's say your numeric column is column 4:
myDataFrame[, 4] <- as.numeric(myDataFrame[, 4]) # you can also refer to the column by "itsName"
Lastly, if you need any help accomplishing in R the same tasks you've done in Excel, there are plenty of folks here who would be happy to help you out.
In read.table (and its relatives), it is the na.strings argument that specifies which strings are to be interpreted as missing values (NA). The default value is na.strings = "NA".
If missing values in an otherwise numeric column are coded as something other than "NA", e.g. "." or "N/A", those entries will be interpreted as character, and then the whole column is converted to character.
Thus, if your missing values are coded as anything other than "NA", you need to specify them in na.strings.
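For instance, if the missing values in your file are written as "." or "N/A" (hypothetical codes, adjust them to your file), the import could look like this:
myDataFrame <- read.csv("path/to/file.csv", stringsAsFactors = FALSE,
                        na.strings = c("NA", ".", "N/A"))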
If you're dealing with large datasets (i.e. datasets with a high number of columns), the solution above can be cumbersome to apply manually, and it requires you to know a priori which columns are numeric.
Try this instead.
char_data <- read.csv(input_filename, stringsAsFactors = F)
num_data <- data.frame(data.matrix(char_data))
numeric_columns <- sapply(num_data,function(x){mean(as.numeric(is.na(x)))<0.5})
final_data <- data.frame(num_data[,numeric_columns], char_data[,!numeric_columns])
The code does the following:
Imports your data as character columns.
Creates a copy of your data with all columns converted to numeric.
Identifies which columns are genuinely numeric (assuming that columns with less than 50% NAs after conversion to numeric are indeed numeric).
Merges the numeric and character columns into a final dataset.
This essentially automates the import of your .csv file by preserving the data types of the original columns (as character and numeric).
Including this in the read.csv command worked for me: strip.white = TRUE
(I found this solution here.)
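In other words, something along these lines (a sketch combining it with the arguments discussed above):
myDataFrame <- read.csv("path/to/file.csv", strip.white = TRUE,
                        stringsAsFactors = FALSE)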
A version for data.table, based on the code from dmanuge:
library(data.table)
convNumValues <- function(ds) {
  ds <- data.table(ds)
  dsnum <- data.table(data.matrix(ds))
  num_cols <- sapply(dsnum, function(x) { mean(as.numeric(is.na(x))) < 0.5 })
  nds <- data.table(dsnum[, .SD, .SDcols = names(num_cols)[which(num_cols)]],
                    ds[, .SD, .SDcols = names(num_cols)[which(!num_cols)]])
  return(nds)
}
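Usage would then look something like this (a sketch, reusing input_filename from the earlier answer):
dt <- fread(input_filename)     # or read.csv(...) as above
clean_dt <- convNumValues(dt)   # numeric columns first, character columns after
str(clean_dt)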
I had a similar problem. Based on Joshua's premise that Excel was the problem, I looked at the file and found that the numbers were formatted with commas between every third digit (as thousands separators). Reformatting them without commas fixed the problem.
So, I had a similar situation with my data file when I read it in as a csv: all the numeric values were turned into character. In my file there was a value with the word "Filtered" instead of NA. I converted "Filtered" to NA in the vim editor of a Linux terminal with the command %s/Filtered/NA/g, saved the file, and later read it into R; all the values were then of type num and no longer char.
It looks like the character value "Filtered" was forcing every value into character format.
Charu
Hello @Shawn Hemelstrand, here are the steps in detail:
Example: a matrix file file.csv containing the word 'Filtered'.
I opened file.csv in a Linux command terminal:
vi file.csv
then pressed Esc and Shift + : (to enter command mode)
and typed the following command at the bottom:
%s/Filtered/NA/g
pressed Enter,
then pressed Esc and Shift + : again
and typed "wq" at the bottom (this saves the file and quits the vim editor).
Then, in the R script, I read the file:
data <- read.csv("file.csv", sep = ",", header = TRUE)
str(data)
All columns that were previously of type char were now of type num.
If you need more help, it would be easier to share your txt or csv file.
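As an aside (my own suggestion, combining this with the na.strings answer earlier in this thread), the same substitution can be done at import time without editing the file at all:
data <- read.csv("file.csv", header = TRUE,
                 na.strings = c("NA", "Filtered"),
                 stringsAsFactors = FALSE)
str(data)   # the affected columns should come back as num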
