R - my first row has a # character in the column names?

My test file is formatted very oddly.
The first row starts with a # character, followed by the column names.
If I ignore the first row and import the data using read.table, it works well, but then I do not have the column names. If I try to import the data with header = TRUE, it says "more columns than column names". I guess I could import the first row and the rest of the data separately, then attach the column names to the final output. But when I import the first row, R completely ignores the column names and jumps to the row with 0 0 0 0.... Is it because the first row has a # character? The # character also seems to create an extra empty column in the data.
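For concreteness, a space-delimited file of the kind described might look like this (hypothetical column names and values):
# A B C D
0 0 0 0
0 1 0 2
Note that read.table's default comment.char is "#", which strips everything from the # onward; that is likely why the header row appears to be ignored entirely.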

Here are a few possibilities:
1) Process twice. Read the file in as a character vector of lines, L, using readLines. Then remove the # and read L with read.table:
L <- sub("#", "", readLines("myfile.dat"))
read.table(text = L, header = TRUE)
2) Read the header separately. For smaller files the prior approach is short and should be fine, but if the file is large you may not want to process it twice. In that case, use readLines to read only the header line, fix it up, and then read in the rest, applying the column names:
File <- "myfile.dat"
col.names <- scan(text = readLines(File, 1), what = "", quiet = TRUE)[-1]  # [-1] drops the leading "#" token
read.table(File, col.names = col.names)
3) Pipe. Another approach is to make use of external commands:
File <- "myfile.dat"
cmd <- paste("sed -e 1s/.//", File)
read.table(pipe(cmd), header = TRUE)
On UNIX-like systems sed should be available. On Windows you will need to install Rtools and either ensure sed is on the PATH or else give the full path to the sed executable:
cmd <- paste("C:/Rtools/bin/sed -e 1s/.//", File)
read.table(pipe(cmd), header = TRUE)

One approach would be to just do a single separate read of the first line to sniff out the column names. Then, do a read.table as you were already doing, and skip the first line.
f <- "path/to/yourfile.csv"
con <- file(f, "r")
header <- readLines(con, n=1)
close(con)
df <- read.table(f, header=FALSE, sep = " ", skip=1) # skip the first line
names(df) <- strsplit(header, "\\s+")[[1]][-1] # assign column names
That said, I don't like this approach and would prefer that you fix the source of your flat files so they do not include that troublesome # symbol. If this is only a one-time requirement, you might also just edit the flat file manually to remove the # symbol.


How to import a CSV with a last empty column into R?

I wrote an R script to run some scientometric analyses of Journal Citation Reports (JCR) data, which I have been using and updating over the past few years.
Today, Clarivate introduced some changes in its database, and the exported CSV file now contains one last empty column, which breaks my script. Because of this last empty column, read.csv automatically assumes that the first column contains the row names.
As before, there is also one first useless row, which is automatically removed in my script with skip = 1.
One simple solution to this "empty column situation" would be to manually remove this last column in Excel, and then proceed with my script as usual.
However, is there a way to add this removal to my script using base R?
The beginning of my script is:
jcreco = read.csv("data/jcr ecology 2020.csv",
                  na = "n/a", skip = 1, header = T)
The original CSV file downloaded from JCR is available in my Dropbox.
Could you please help me? Thank you!
The real problem is that the empty column doesn't have a header. If they had only included the extra comma at the end of the header line, this probably wouldn't be as messy. But you can also do a bit of column shuffling with fill=TRUE. For example:
dd <- read.table("~/../Downloads/jcr ecology 2020.csv", sep=",",
                 skip=2, fill=T, header=T, row.names=NULL)
names(dd)[-ncol(dd)] <- names(dd)[-1]  # shift the column names one position left
dd <- dd[,-ncol(dd)]                   # drop the last (all-NA) column
This reads in the data but keeps the row names as a regular column and fills the last column with NA. Then you shift all the column names one position to the left and drop the last column.
Here is a way.
Read the data as text lines;
Discard the first line;
Remove the end comma with sub;
Create a text connection;
And read in the data from the connection.
The variable fl holds the file name; on my disk I had to set the working directory first.
fl <- "jcr_ecology_2020.csv"
txt <- readLines(fl)
txt <- txt[-1]
txt <- sub(",$", "", txt)
con <- textConnection(txt)
df1 <- read.csv(con)
close(con)
head(df1)
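As a side note, read.csv (like read.table) also accepts a text argument, so the explicit connection can be avoided:
df1 <- read.csv(text = txt)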

Read a CSV file with quotation marks and regex in R

ne,class,regex,match,event,msg
BOU2-P-2,"tengigabitethernet","tengigabitethernet(?'connector'\d{1,2}\/\d{1,2})","4/2","lineproto-5-updown","%lineproto-5-updown: line protocol on interface tengigabitethernet4/2, changed state to down"
These are the first two lines; the first one will serve as the column names. All fields are separated by commas, and the values are in quotation marks except for the first one, which I think is what creates the trouble.
I am interested in the columns class and msg, so this output will suffice:
class msg
tengigabitethernet %lineproto-5-updown: line protocol on interface tengigabitethernet4/2, changed state to down
But I can also import all the columns and drop the ones I don't want later; that's no problem.
The data comes in a .csv file that was given to me.
If I open this file in Excel, the columns are all in one.
I work in France, but I don't know in which locale or encoding the file was created (by the way, I'm not French, so I am not really familiar with those).
I tried with
df <- read.csv("file.csv", stringsAsFactors = FALSE)
and the data frame has the column names nicely separated, but the values are all in the first column.
then with
library(readr)
df <- read_delim('file.csv',
                 delim = ",",
                 quote = "",
                 escape_double = FALSE,
                 escape_backslash = TRUE)
but this way the regex column gets split into two columns, so I lose the msg variable altogether.
With
library(data.table)
df <- fread("file.csv")
I get the msg variable present but empty, as the ne variable contains both ne and class, separated by a comma.
This is the best output so far, as I can manipulate it to get the desired one.
Another option is to load the file as a character vector with readLines and fix it, but I am not an expert with regexes, so I would be clueless.
The file is also 300k lines, so it would be hard to inspect.
Both read_delim and fread give warning messages; I can include them if they might be useful.
Update:
using
library(data.table)
df <- fread("file.csv", quote = "")
gives me output that is easier to manipulate: it still splits the regex and msg columns in two, but ne and class are distinct.
I tried the input you provided with read.csv and had no problems; each column is accessible when subsetting. As for your other options, you're getting the quote option wrong: it needs to be "\"", i.e. the double quote character needs to be escaped: df <- fread("file.csv", quote = "\"").
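As a sketch, the corresponding readr call with the corrected quote argument (the double quote is in fact readr's default):
library(readr)
df <- read_delim("file.csv", delim = ",", quote = "\"")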
When using read.csv with your example I definitely get a data frame with 1 line and 6 columns:
df <- read.csv("file.csv")
nrow(df)
# > 1
ncol(df)
# > 6
df$ne
# > "BOU2-P-2"
df$class
# > "tengigabitethernet"
df$regex
# > "tengigabitethernet(?'connector'\\d{1,2}\\/\\d{1,2})"
df$match
# > "4/2"
df$event
# > "lineproto-5-updown"
df$msg
# > "%lineproto-5-updown: line protocol on interface tengigabitethernet4/2, changed state to down"

Filter CSV files for specific value before importing

I have a folder with thousands of comma-delimited CSV files, totaling dozens of GB. Each file contains many records, which I'd like to split up and process separately based on the value in the first field (for example, aa, bb, cc, etc.).
Currently, I'm importing all the files into a dataframe and then subsetting in R into smaller, individual dataframes. The problem is that this is very memory intensive - I'd like to filter the first column during the import process, not once all the data is in memory.
This is my current code:
setwd("E:/Data/")
files <- list.files(path = "E:/Data/",pattern = "*.csv")
temp <- lapply(files, fread, sep=",", fill=TRUE, integer64="numeric",header=FALSE)
DF <- rbindlist(temp)
DFaa <- subset(DF, V1 =="aa")
If possible, I'd like to move the "subset" process into lapply.
Thanks
1) read.csv.sql. This will read the file directly into a temporarily set up SQLite database (which it creates for you) and then read only the aa records into R. The rest of the file is never read into R at any time. The table is then deleted from the database.
File is a character string that contains the file name (or pathname if not in the current directory). Other arguments may be needed depending on the format of the data.
library(sqldf)
read.csv.sql(File, "select * from file where V1 == 'aa'", dbname = tempfile())
2) grep/findstr. Another possibility is to use grep (Linux) or findstr (Windows) to extract the lines with aa. That should get you the desired lines plus possibly a few others, and at that point the input is much smaller, so it can be subset in R without memory problems. For example:
fread("findstr aa File")[V1 == 'aa'] # Windows
fread("grep aa File")[V1 == 'aa'] # Linux
sed or gawk could also be used and are included with Linux and in Rtools on Windows.
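For example, a hypothetical sed filter (Linux shell quoting) that prints only lines whose first field is aa:
fread("sed -n '/^aa,/p' File")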
3) csvfix. The free csvfix utility is available on all platforms that R supports and can be used to select on field values; numerous similar utilities exist, such as csvkit, csvtk, miller and xsv.
The line below returns only lines whose first comma-separated field equals aa. It may need to be modified slightly depending on the command-line or shell processor used.
fread("csvfix find -if $1==aa File") # Windows
fread("csvfix find -if '$1'==aa File") # Linux bash
setwd("E:/Data/")
files <- list.files(path = "E:/Data/",pattern = "*.csv")
temp <- lapply(files, function(x) subset(fread(x, sep=",", fill=TRUE, integer64="numeric",header=FALSE), V1=="aa"))
DF <- rbindlist(temp)
Untested, but this will probably work - replace your function call with an anonymous function.
This could help, but you have to adapt the function to your needs:
#Function
myload <- function(x)
{
y <- fread(x, sep=",", fill=TRUE, integer64="numeric",header=FALSE)
y <- subset(y, V1 =="aa")
return(y)
}
#Apply
temp <- lapply(files, myload)
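As in the original code, the filtered pieces can then be stacked into one table:
DF <- rbindlist(temp)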
If you don't want to muck with SQL, consider using the skip argument in a loop. Slower, but that way you read in a block of lines, filter them, then read in the next block of lines (to the same temp variable so as not to take extra memory), etc.
Inside your lapply call, use either a second lapply or, equivalently, a loop; a sketch (do_something_to_filter is a placeholder for your filtering step):
mydata <- list()
for (jj in 0:N) {
  # read rows jj*1000+1 through (jj+1)*1000 of the file
  foo <- fread(filename, skip = jj*1000, nrows = 1000, sep=",",
               fill=TRUE, integer64="numeric", header=FALSE)
  mydata[[jj + 1]] <- do_something_to_filter(foo)
}

write.table unintentionally adds an X prefix to the header

I have a comma-delimited CSV document with predefined headers and a few rows. I just want to change the comma delimiter to a pipe delimiter. So my naive approach is:
myData <- read.csv(file="C:/test.CSV", header=TRUE, sep=",", check.names = FALSE)
Viewing myData gives me results without X prefixes in the header columns. If I set check.names = TRUE, the column headers get an X prefix.
Now I am trying to write a new csv with pipe-delimiter.
write.table(myData, file = "C:/test_pipe.CSV", row.names=FALSE, na="", col.names=TRUE, sep="|")
In the next step I am going to test my results:
mydata.test <- read.csv(file="C:/test_pipe.CSV", header=TRUE, sep="|")
The import seems fine, but unfortunately the X prefixes in the column headers appear again. Now my question is:
Is there something wrong with the original file, or is there an error in my naive approach?
The original CSV test.csv was created with Excel, of course without X prefixes in the column headers.
Thanks in advance
You have to keep using check.names = FALSE, also the second time.
Otherwise your header will be modified, because it apparently contains variable names that would not be considered syntactically valid column names for a data.frame. Special characters, for example, are replaced by dots (.), and names starting with numbers are prefixed with X.
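For illustration, check.names = TRUE passes the header through make.names:
make.names(c("my var", "2020 count"))
# > "my.var"      "X2020.count"
So the round trip works if the re-import also disables the checking:
mydata.test <- read.csv(file="C:/test_pipe.CSV", header=TRUE, sep="|", check.names = FALSE)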

Skip all leading empty lines in read.csv

I wish to import CSV files into R, with the first non-empty line supplying the names of the data frame columns. I know that you can supply the skip argument to specify how many lines to skip before reading. However, the row number of the first non-empty line can change between files.
How do I work out how many lines are empty, and dynamically skip them for each file?
As pointed out in the comments, I need to clarify what "blank" means. My csv files look like:
,,,
w,x,y,z
a,b,5,c
a,b,5,c
a,b,5,c
a,b,4,c
a,b,4,c
a,b,4,c
which means there are rows of commas at the start.
read.csv automatically skips blank lines (unless you set blank.lines.skip=FALSE). See ?read.csv
After writing the above, the poster explained that the blank lines are not actually blank but contain commas with nothing between them. In that case, use fread from the data.table package, which handles that. The skip= argument can be set to any character string found in the header:
library(data.table)
DT <- fread("myfile.csv", skip = "w") # assuming w is in the header
DF <- as.data.frame(DT)
The last line can be omitted if a data.table is ok as the returned value.
Depending on your file size, this may not be the best solution, but it will do the job.
The strategy here is: instead of reading the file with a delimiter, read it as lines, count the characters of each line, and store the counts in temp. The while loop then searches for the first line with non-zero character length, the file is read from that point, and the result is stored under a name of the form data_filename.
flist = list.files()
for (onefile in flist) {
  temp = nchar(readLines(onefile))   # character count per line
  i = 1
  while (temp[i] == 0) {             # advance past leading empty lines
    i = i + 1
  }
  temp = read.table(onefile, sep = ",", skip = (i - 1))
  assign(paste0("data_", onefile), temp)  # e.g. data_myfile.csv
}
If the file contains a header row, add header = TRUE to the read.table call.
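A more compact way to find the first non-empty line, under the same assumption that blank lines have zero characters (a sketch):
i <- which(nchar(readLines(onefile)) > 0)[1]
temp <- read.table(onefile, sep = ",", skip = i - 1)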
If the first couple of empty lines are truly empty, then read.csv should automatically skip to the first non-empty line. If they have commas but no values, then you can use:
df = read.csv(file = 'd.csv')  # first pass: only used to locate the first row with a non-empty first column
df = read.csv(file = 'd.csv', skip = as.numeric(rownames(df[which(df[,1]!=''),])[1]))
It's not efficient if you have large files (since you have to import twice), but it works.
If you want to import a tab-delimited file with the same problem (variable blank lines) then use:
df = read.table(file = 'd.txt',sep='\t')
df = read.table(file = 'd.txt',skip = as.numeric(rownames(df[which(df[,1]!=''),])[1]))
