Why read.delim failed to skip blank lines in R

I have an input file (pipe-separated) with the following content:
name|val1|val2
foo|30|20
bar||20
qux|30|
ert|12|11
uyt||
What I want to do is extract only the lines where every field contains a value, yielding this:
foo|30|20
ert|12|11
Why doesn't the following work?
data <- read.delim("myinput.txt", sep="|", na.strings="",
                   header=TRUE, blank.lines.skip=TRUE, fill=FALSE)

If the file isn't too big to be read into memory at first, you can easily fix this with:
data <- data[complete.cases(data), ]
Also, you don't have any lines that are blank; you just have lines where some cells have missing values. That isn't quite the same thing, and it isn't covered by the blank.lines.skip argument.
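Putting the two together, a minimal end-to-end sketch (assuming the sample above is saved as myinput.txt):

data <- read.delim("myinput.txt", sep="|", na.strings="", header=TRUE)
data <- data[complete.cases(data), ]  # keep only rows with no NA in any field
data
#   name val1 val2
# 1  foo   30   20
# 4  ert   12   11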

Related

Loading CSV with fread stops because of too large string

This is the command I'm using:
dallData <- fread("data.csv", showProgress = TRUE, colClasses = c(rep("NULL", 2), "character", rep("NULL", 37)))
but I get this error when trying to load it: R character strings are limited to 2^31-1 bytes
Any way to skip those values?
Here's a strategy that may work, or at least narrow down the possible sources of error. It assumes you have enough working memory to hold the data and that your separators really are commas; if you actually have tabs as separators, you will need to modify accordingly. The plan is to read the file with readLines, which will basically ignore the quotes that are probably mismatched, and then figure out which line or lines are at fault using count.fields, table, and which.
input <- readLines("data.csv")  # ignores quotes
counts.def <- count.fields(textConnection(input),
                           sep=",")  # default quote characters are both ' and "
table(counts.def)  # might show a variety of field counts

# Second try with just double-quotes
counts.dbl <- count.fields(textConnection(input),
                           sep=",", quote="\"")  # just double-quotes
table(counts.dbl)  # if all the same, then all you need to do is change the quote argument
Depending on the results, you may need to edit certain lines, which can be identified using which(counts.def < 40), assuming most of them are 40, since your colClasses argument suggests that 40 is the expected number of fields per line.
(If the [ram] tag means you are memory-limited and getting warnings, or are using virtual memory, which slows things down horribly, then you should restart your OS and load only R before trying again. R needs a contiguous block of memory, and Windoze isn't very good at memory management.)
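In code, that inspection step might look like this (a sketch, assuming counts.dbl from above and the 40 fields implied by your colClasses):

bad <- which(counts.dbl != 40)  # lines with the wrong field count
input[bad]                      # print them for manual inspection/editing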
Here's a small test case to work with:
input <- readLines(textConnection(
"v1,v2,v3,v4,v5,v6
text, text, text, text, text, text
text, text, O'Malley, text,text,text
junk,junk, more junk, \"text\", tex\"t, nothing
3,4,5,6,7,8"))
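Running the count.fields checks on this test input shows the pattern to look for (a sketch; each textConnection can only be read once, so open a fresh one per call):

table(count.fields(textConnection(input), sep=","))                 # default quotes: both ' and "
table(count.fields(textConnection(input), sep=",", quote=""))       # quoting disabled
which(count.fields(textConnection(input), sep=",", quote="") != 6)  # flag lines with odd counts

With quoting disabled, every line here splits into 6 fields on the raw commas; with the default quote characters, the apostrophe in O'Malley and the stray quote in tex"t open quoted regions and distort the counts, which is exactly the signature that identifies the problem lines.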

R: Why am I getting an extra column titled "X.1" in my dataframe after reading my .txt file?

I have a .txt file, output by a microscope, that I need to process.
#read the .txt file generated by microscope, skipping the first 9 lines of garbage information
df <- read.csv("Objects_Population - AllCells.txt", sep="\t", skip = 9,header=TRUE, fill = T)
Then I started looking at the structure of the dataframe. Everything seems fine, except that I found an extra column at the end of the data frame named "X.1", whose rows are all NA values. I don't see this column when I open the .txt file in Excel. I suspect the problem has something to do with the column names generated by the microscope, since they contain quite a few special characters.
Below is the dataframe as read by Excel (only showing the last 2 columns, since I have 132 columns and their names are disgustingly long):
AllCells - Cell Contact Area with Neighbors [%] | AllCells - Nucleus Nearest Neighbor Distance [µm]
0       | 4.82083
21.9512 | 0
15.7895 | 0
29.4118 | 0.584611
0       | 4.21569
0       | 1.99599
0       | 3.50767
...
This has happened to me before, but I never took it too seriously as I was always interested in a subset of my data frame. Now that I'm looking at all the columns, it's starting to bother me.
Is there any way I can read the file correctly, without R attaching that additional "X.1" column at the end? Preferably without manually deleting or subsetting out the last column...
Cheers,
ML
If all other column names are correct, you probably have a trailing \t in the text file. R tries to include it as a column and gives it the generic name X.1.
You could first read the file as plain text, remove the trailing \t, and only then use read.csv:
file_connection <- file("Objects_Population - AllCells.txt")
content <- readLines(file_connection)
close(file_connection)
Now we try to get rid of the trailing \t characters (this might need some testing to fit your needs):
sanitized <- gsub("\\t$", "", content)
And then we read this sanitized string as if it were a file (using the text argument):
df <- read.csv(text=paste0(sanitized, collapse="\n"), sep="\t", skip=9, header=TRUE, fill=TRUE)
Had that problem too. Fixed it by saving the file as "CSV (MS-DOS) (*.csv)" instead of what I originally had, "CSV (Comma delimited) (*.csv)".
This is almost certainly because you've got an extra empty column in your spreadsheet.
In Excel, open your sheet and press Ctrl-End. If you end up in an empty cell outside the range of your data, there's the problem. Select the column (Ctrl-Space), right-click, and choose Delete.
I also encountered a similar problem: three extra columns (X, X.1, X.2) were created after I loaded a dataset from an Excel sheet into RStudio.
Steps I followed:
a) I went to the Excel sheet and selected the three extra columns after the last column with actual values (put the cursor on top of each column, then right-click and choose Delete).
b) Loaded the Excel sheet into R again. The three columns were gone.

Read in data from .txt file (no header, no separator)

I have a large dataset (~200MB) stored in a .txt file which I need to read into R. Unfortunately there are no separators (like " " or ",") between the values of the variables, and there is no header.
But there is a codebook, which gives the variable names and also specifies which columns belong to which variable. Some of the variables take one column of space, some take more (so read.fwf won't work); but their width is the same for all cases.
I possibly only have to read in a few of these variables, so I expect that I will just have to select the necessary columns and name the variables. What would be an elegant solution for this (ideally one that even preselects meaningful variable types)?
You can consider loading the data as is and then parsing each line using strsplit with an appropriate regular expression.
con <- file("yourfile.txt", open = "r")
lines <- readLines(con)
close(con)
Iterate over the lines, apply strsplit to each one, and add the results to your data table with rbind.
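Since the widths are fixed for all cases, the per-line parsing can also be done with substring() rather than strsplit — a minimal sketch reusing lines from above, with made-up column positions standing in for your codebook:

# hypothetical codebook positions: "id" in columns 1-4,
# "age" in columns 5-6, "score" in columns 7-10
df <- data.frame(
  id    = substring(lines, 1, 4),
  age   = as.integer(substring(lines, 5, 6)),
  score = as.numeric(substring(lines, 7, 10))
)

This vectorizes over all lines at once and avoids growing a table with rbind in a loop, which gets slow at ~200MB.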

duplicate 'row.names' are not allowed error

I am trying to load a csv file that has 14 columns like this:
StartDate, var1, var2, var3, ..., var14
when I issue this command:
systems <- read.table("http://getfile.pl?test.csv", header = TRUE, sep = ",")
I get this error message:
duplicate 'row.names' are not allowed
It seems to me that the first column name is causing the issue. When I manually download the file and remove the StartDate name from the file, R successfully reads the file and replaces the first column name with X. Can someone tell me what is going on? The file is a (comma separated) csv file.
Then tell read.table not to use row.names:
systems <- read.table("http://getfile.pl?test.csv",
header=TRUE, sep=",", row.names=NULL)
and now your rows will simply be numbered.
Also look at read.csv, which is a wrapper for read.table that already sets the sep="," and header=TRUE arguments, so your call simplifies to:
systems <- read.csv("http://getfile.pl?test.csv", row.names=NULL)
This related question points out a part of the ?read.table documentation that explains your problem:
If there is a header and the first row contains one fewer field
than the number of columns, the first column in the input is used
for the row names. Otherwise if row.names is missing, the rows are numbered.
Your header row likely has one fewer field than the rest of the file, and so read.table assumes that the first column holds the row.names (which must all be unique), rather than being a regular column (which can contain duplicated values). You can fix this using one of the following two solutions:
adding a delimiter (i.e. \t or ,) to the front or end of your header row in the source file, or
removing any trailing delimiters in your data
The choice will depend on the structure of your data.
Example:
Here the header row is interpreted as having one fewer column than the data because the delimiters don't match:
v1,v2,v3 # 3 items!!
a1,a2,a3, # 4 items
b1,b2,b3, # 4 items
This is how it is interpreted by default:
     v1  v2  v3
a1   a2  a3  NA    # a1 and b1 are consumed as row names;
b1   b2  b3  NA    # the empty trailing field is read as NA
The first column (with no header) values are interpreted as row.names: a1 and b1. If this column contains duplicates, which is entirely possible, then you get the duplicate 'row.names' are not allowed error.
If you set row.names = NULL, the shift doesn't happen, but you still have a mismatching number of items in the header and in the data because the delimiters don't match.
Solution 1
Add trailing delimiter to header:
v1,v2,v3, # 4 items!!
a1,a2,a3, # 4 items
b1,b2,b3, # 4 items
Solution 2
Remove excess trailing delimiter from non-header rows:
v1,v2,v3 # 3 items
a1,a2,a3 # 3 items!!
b1,b2,b3 # 3 items!!
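The failure mode is easy to reproduce in a self-contained way using read.csv's text= argument (a sketch mirroring the example above):

txt <- "v1,v2,v3\na1,a2,a3,\nb1,b2,b3,"
df <- read.csv(text = txt)
rownames(df)  # "a1" "b1" -- the first data column was consumed as row names

# with a duplicate value in that first column, the read fails:
txt2 <- "v1,v2,v3\na1,a2,a3,\na1,b2,b3,"
# read.csv(text = txt2)  # Error: duplicate 'row.names' are not allowed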
In my case there was a comma at the end of every line. Removing those fixed it.
I had this error when opening a CSV file in which one of the fields had commas embedded in it. The field had quotes around it, and I had cut and pasted a read.table call with quote="" in it. Once I took quote="" out, the default behavior of read.table took over and solved the problem. So I went from this:
systems <- read.table("http://getfile.pl?test.csv", header=TRUE, sep=",", quote="")
to this:
systems <- read.table("http://getfile.pl?test.csv", header=TRUE, sep=",")
I used read_csv from the readr package. In my experience, the parameter row.names=NULL in the read.csv function will lead to a wrong reading of the file if a column name is missing, i.e. every column will be shifted. read_csv solves this.
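A minimal sketch of that approach (readr's read_csv never assigns row names, so the header/data mismatch can't trigger this error):

library(readr)
systems <- read_csv("http://getfile.pl?test.csv")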
Another possible reason for this error is that you have entire rows duplicated. If that is the case, the problem is solved by removing the duplicate rows.
The answer here (https://stackoverflow.com/a/22408965/2236315) by @adrianoesch should help (e.g., it solves "If you know of a solution that does not require the awkward workaround mentioned in your comment (shift the column names, copy the data), that would be great." and "...requiring that the data be copied" raised by @Frank).
Note that if you open the file in a text editor, you should see that the number of header fields is less than the number of columns below the header row. In my case, the data set had a "," missing at the end of the last header field.
It seems the problem can arise from more than one reason. The following steps worked when I was having the same error:
a) I saved my file as an MS-DOS CSV (earlier it was saved as just CSV, in Excel Starter 2010).
b) I opened the CSV in Notepad++ and checked that no comma was inconsistent (consistency as described above by @Brian).
c) I noticed I was not using the argument sep=",". I added it and it worked (even though that is the default argument!).

R - read.table imports half of the dataset - no errors nor warnings

I have a csv file with ~200 columns and ~170K rows. The data has been extensively groomed and I know that it is well-formed. When read.table completes, I see that approximately half of the rows have been imported. There are no warnings or errors. I set options(warn = 2). I'm using the latest 64-bit version of R and I increased the memory limit to 10 GB. Scratching my head here... no idea how to proceed with debugging this.
Edit
When I said half the file, I don't mean the first half. The last observation read is towards the end of the file... so it's seemingly random.
You may have a comment character (#) in the file (try setting the option comment.char = "" in read.table). Also, check that the quote option is set correctly.
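In code, that advice might look like this (a sketch with a hypothetical filename; both settings turn off interpretation that can silently swallow rows):

df <- read.table("data.csv", sep = ",", header = TRUE,
                 comment.char = "",  # don't treat # as a comment character
                 quote = "")         # don't treat " or ' as quote characters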
I've had this problem before; how I approached it was to read in a set number of lines at a time and then combine them after the fact.
df1 <- read.csv(..., nrows=85000)                            # header row + first 85000 data rows
df2 <- read.csv(..., skip=85001, header=FALSE, nrows=85000)  # next 85000 rows, no header
colnames(df2) <- colnames(df1)
df <- rbind(df1, df2)
rm(df1, df2)
I had a similar problem when reading in a large txt file which had a "|" separator. Scattered about the file were text blocks that contained a double-quote character ("), which caused the read.xxx function to stop at the prior record without throwing an error. Note that these text blocks were not encased in double quotes; they just contained the odd stray quote character here and there, and that is what tripped it up.
I did a global search and replace on the txt file, replacing the double quote (") with a single quote ('), solving the problem (all rows were then read in without aborting).
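An alternative to editing the file (a sketch, assuming the "|"-separated file described above is named data.txt, which is hypothetical) is to disable quote handling at read time so that stray " characters are treated as ordinary text:

df <- read.table("data.txt", sep = "|", header = TRUE, quote = "")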
