Quotation issues reading data into R

I have some data that I am trying to load into R. It is in .csv files and I can view the data in both Excel and OpenOffice. (If you are curious, it is the 2011 poll results data from Elections Canada, available here.)
The data is coded in an unusual manner. A typical line is:
12002,Central Nova","Nova-Centre"," 1","River John",N,N,"",1,299,"Chisholm","","Matthew","Green Party","Parti Vert",N,N,11
There is a " at the end of "Central Nova" but not at the beginning. So in order to read in the data, I suppressed the quotes, which worked fine for the first few files, i.e.
test <- read.csv("pollresults_resultatsbureau11001.csv", header = TRUE, sep = ",", fileEncoding = "latin1", as.is = TRUE, quote = "")
Now here is the problem: in another file (eg. pollresults_resultatsbureau12002.csv), there is a line of data like this:
12002,Central Nova","Nova-Centre"," 6-1","Pictou, Subd. A",N,N,"",0,168,"Parker","","David K.","NDP-New Democratic Party","NPD-Nouveau Parti democratique",N,N,28
Because I need to suppress the quotes, the entry "Pictou, Subd. A" makes R want to split this into two variables. The data can't be read in, since R wants to add a column halfway through constructing the data frame.
Excel and OpenOffice can both open these files without a problem. Somehow, they know that quotation marks only matter if they appear at the beginning of a field.
Do you know what option I need to enable in R to get this data in? I have >300 files to load (each with ~1000 rows), so a manual fix is not an option...
I have looked all over the place for a solution but can't find one.

Building on my comments, here is a solution that would read all the CSV files into a single list.
# Deal with French properly
options(encoding="latin1")
# Set your working directory to where you have
# unzipped all of your 308 CSV files
setwd("path/to/unzipped/files")
# Get the file names
temp <- list.files()
# Extract the 5-digit code which we can use as names
Codes <- gsub("pollresults_resultatsbureau|.csv", "", temp)
# Read all the files into a single list named "pollResults"
pollResults <- lapply(seq_along(temp), function(x) {
  T0 <- readLines(temp[x])
  T0[-1] <- gsub('^(.{6})(.*)$', '\\1\\"\\2', T0[-1])
  final <- read.csv(text = T0, header = TRUE)
  final
})
names(pollResults) <- Codes
You can easily work with this list in different ways. If you wanted to just see the 90th data.frame, you can access it using pollResults[[90]] or pollResults[["24058"]] (in other words, either by index number or by district number).
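If you later want one big data.frame instead of a list, a sketch like this should work (an addition of mine, assuming every file ends up with identical columns; allResults is just an illustrative name):
# Stack all districts into one data.frame, carrying the district
# code along as a column (assumes identical columns in every file)
allResults <- do.call(rbind, Map(function(df, code) {
  df$District <- code
  df
}, pollResults, Codes))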
Having the data in this format means you can also do a lot of other convenient things. For instance, if you wanted to fix all 308 of the CSVs in one go, you can use the following code, which will create new CSVs with the file name prefixed with "Corrected_".
invisible(lapply(seq_along(pollResults), function(x) {
  NewFilename <- paste("Corrected", temp[x], sep = "_")
  write.csv(pollResults[[x]], file = NewFilename,
            quote = TRUE, row.names = FALSE)
}))
Hope this helps!

This answer is mainly a response to @AnandaMahto (see the comments on the original question).
First, it helps to set an option globally because of the French accents in the data:
options(encoding="latin1")
Next, read in the data verbatim using readLines():
temp <- readLines("pollresults_resultatsbureau13001.csv")
Following this, simply replace the first comma in each line of data with a comma plus a quotation mark. This works because the first field is always 5 characters long (the 5-digit district code), so the quote can be inserted right after character 6. Note that it leaves the header untouched.
temp[-1] <- gsub('^(.{6})(.*)$', '\\1\\"\\2', temp[-1])
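To see what the substitution does, here is a quick sanity check on a shortened sample line (the output is shown as R prints it, with escaped quotes):
x <- '12002,Central Nova","Nova-Centre"," 1","River John",N,N,""'
gsub('^(.{6})(.*)$', '\\1\\"\\2', x)
# [1] "12002,\"Central Nova\",\"Nova-Centre\",\" 1\",\"River John\",N,N,\"\""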
Penultimately, write over the original file.
fileConn <- file("pollresults_resultatsbureau13001.csv")
writeLines(temp, fileConn)
close(fileConn)
Finally, simply read the data back into R:
data <- read.csv(file = "pollresults_resultatsbureau13001.csv", header = TRUE, sep = ",")
There is probably a more parsimonious way to do this (and one that can be iterated more easily), but this process made sense to me.
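For what it's worth, if you don't need to overwrite the files on disk, the whole round trip can be collapsed into a small helper (a sketch of mine, assuming every file has the same 5-digit-code layout):
# Wrap the readLines/gsub/read.csv steps into one function
fixPollFile <- function(path) {
  txt <- readLines(path)
  txt[-1] <- gsub('^(.{6})(.*)$', '\\1\\"\\2', txt[-1])
  read.csv(text = txt, header = TRUE)
}
data <- fixPollFile("pollresults_resultatsbureau13001.csv")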

Related

sqldf returns zero observations

I have a number of large data files (.csv) on my local drive that I need to read into R, filter rows/columns, and then combine. Each file has about 33,000 rows and 575 columns.
I read this post: Quickly reading very large tables as dataframes and decided to use "sqldf".
This is the short version of my code:
Housing <- file("file location on my disk")
Housing_filtered <- sqldf('SELECT Var1 FROM Housing', file.format = list(eol = "/n"))  # I am using Windows
I see that the "Housing_filtered" data.frame is created with Var1, but with zero observations. This is my very first experience with sqldf, and I am not sure why zero observations are returned.
I also used "read.csv.sql", and I still see zero observations.
Housing_filtered <- read.csv.sql(file = "file location on my disk",
                                 sql = "select Var01 from file",
                                 eol = "/n",
                                 header = TRUE, sep = ",")
You never really imported the file as a data.frame like you think.
You've opened a connection to a file. You mentioned that it is a CSV. Your code should look something like this if it is a normal CSV file:
Housing <- read.csv("my_file.csv")
Housing_filtered <- sqldf('SELECT Var1 FROM Housing')
If there's something non-standard about this CSV file please mention what it is and how it was created.
Also, to another point that was made in the comments: if you do for some reason need to manually specify the line breaks, use \n where you were using /n. Any error is not being caused by that change; rather, you're getting past one problem and on to another, probably due to improperly handled missing data, spaces, or commas in text fields, etc.
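If you do want the one-step read.csv.sql route, a corrected call might look like the sketch below (the filename is a placeholder; on a normal CSV you can usually omit eol entirely):
library(sqldf)
# Filter to one column while reading; note "\n", not "/n"
Housing_filtered <- read.csv.sql(file = "housing.csv",
                                 sql = "select Var1 from file",
                                 eol = "\n",
                                 header = TRUE, sep = ",")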
If there are still data errors, can you please use R code to create a small file that is reflective of the relevant characteristics of your data and which produces the same error when you import it? This may help.
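For example, something as small as this (a sketch; adjust it to mimic whatever is unusual about your real file) is usually enough:
# Write a tiny two-row CSV that mirrors the structure of the real data
writeLines(c("Var1,Var2", "1,a", "2,b"), "sample.csv")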

Loop for extracting variable from each document and placing in appropriate column

My company documents summaries of policies/services for each client in a PDF-formatted file. These files are combined into a large dataset each year: one row per client, and columns are variables in the client's document. There are a couple thousand of these files, and each one has approximately 20-30 variables. I want to automate this process by creating a data.frame with each row representing a client, and then pulling the variables for each client from their PDF document. I'm able to create a list or data.frame of all the clients by the PDF filename in a directory, but I don't know how to create a loop that pulls each variable I need from each document. I currently have two different methods which I can't decide between, and I also need help with a loop that grabs the variables I need from each client document. My code and links to two mock files are provided below. Any help would be appreciated!
Files: Client 1 and Client 2
Method 1: pdftools
The benefit of the first method is that it extracts the entire PDF into a vector, with each page in a separate element. This makes it easier for me to pull strings/variables. However, I don't know how to loop it to pull the information from each client and appropriately place it in a column for each client.
library(pdftools)
library(stringr)
Files <- list.files(path="...", pattern=".pdf")
Files <- Files %>% mutate(FR =
  str_match(text, "\\$\\d+\\s+Financial Reporting"))  # Extract the first variable
Method 2:
The benefit of this approach is that it automatically creates a database of the client documents, with the file name as a row and each PDF's text in a variable. The downside is that an entire PDF in one variable makes it more difficult to match and extract strings compared to having each page in its own element. I don't know how to write a loop that will extract variables for each client and place them in their respective columns.
library(readtext)
DF <- readtext("directory pathway/*.pdf")
DF <- DF %>% mutate(FR =
  str_match(text, "\\$\\d+\\s+Financial Reporting"))
Here's a basic framework that I think solves your problem using your proposed Method 1.
library(pdftools)
library(stringr)
Files <- list.files(path = "pdfs/", pattern = "\\.pdf$")
lf <- length(Files)
client_df <- data.frame(client = rep(NA, lf), fr = rep(NA, lf))
for (i in 1:lf) {
  # extract the text from the pdf
  f <- pdf_text(paste0("pdfs/", Files[i]))
  # remove commas from numbers
  f <- gsub(',', '', f)
  # extract variables
  client_name <- str_match(f[1], "Client\\s+\\d+")[[1]]
  fr <- as.numeric(str_match(f[1], "\\$(\\d+)\\s+Financial Reporting")[[2]])
  # add variables to your data frame
  client_df$client[i] <- client_name
  client_df$fr[i] <- fr
}
I removed commas from the text under the assumption that any numeric variables you extract you'll want to use as numbers in some analysis. This removes all commas though, so if those are important in other areas you'll have to rethink that.
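If that's a concern, a more surgical option (assuming you only want commas stripped when they sit between digits, as in "1,234") would be:
# Remove commas only inside digit groups
f <- gsub("(\\d),(\\d)", "\\1\\2", f)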
Also note that I put the sample PDFs into a directory called 'pdfs'.
I would imagine that with a little creative regex you can extract anything else that would be useful. Using this method makes it easy to scrape the data if the elements of interest will always be on the same pages across all documents. (Note the index on f in the str_match lines.) Hope this helps!

Reading zipped folder containing non-traditional spreadsheet

I'm trying to read a zipped folder called etfreit.zip, contained in "Purchases from April 2016 onward".
Inside the zipped folder is a file called 2016.xls which is difficult to read as it contains empty rows along with Japanese text.
I have tried various ways of reading the xls from R, but I keep getting errors. This is the code I tried:
download.file("http://www3.boj.or.jp/market/jp/etfreit.zip", destfile="etfreit.zip")
unzip("etfreit.zip")
data <- read.csv(text=readLines("2016.xls")[-(1:10)])
I'm trying to skip the first 10 rows as I simply wish to read the data in the xls file. The code works only to the extent that it runs, but the data looks truly bizarre.
Would greatly appreciate any help on reading the spreadsheet properly in R for purposes of performing analysis.
There is more than one bizarre thing going on here, I think, but I had some success with the (somewhat older) gdata package:
data = gdata::read.xls("2016.xls")
By the way, treating an xls file as CSV seldom works. Actually, it shouldn't work at all :) Find the proper import function for your type of data and use it; don't assume that read.csv is going to take care of anything other than (proper) CSV.
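If you'd rather avoid gdata (its read.xls requires Perl), a hedged alternative is the readxl package; the skip value here is an assumption based on the header rows discussed below:
library(readxl)
# Skip the Japanese header rows at the top of the sheet
data <- read_excel("2016.xls", skip = 7)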
As per your comment: I'm not sure what you mean by "not properly aligned", but here is some code that cleans the data a bit and gives you numeric variables instead of factors (note that I'm using tidyr for that):
data2 = data[-c(1:7), -c(1, 6)]
names(data2) = c("date", "var1", "var2", "var3")
data2[, c(2:4)] = sapply(data2[, c(2:4)], tidyr::extract_numeric)
# Optionally convert the column with factor dates to POSIXct
data2$date = as.POSIXct(data2$date)
Also, note that I am removing only the 7 upper rows; this seems to be the portion of the data that contains the Japanese header.
"Odd" unusual Excel tables can be read with the jailbreakr package. It is still in development, but looks pretty ace:
https://github.com/rsheets/jailbreakr

Read, process and export analysis results from multiple .csv files in R

I have a bunch of CSV files and I would like to perform the same analysis (in R) on the data within each file. Firstly, I assume each file must be read into R (as opposed to running a function on the CSV and providing output, like a sed script).
What is the best way to input numerous CSV files to R, in order to perform the analysis and then output separate results for each input?
Thanks (btw I'm a complete R newbie)
You could go for Sean's option, but it's going to lead to several problems:
You'll end up with a lot of unrelated objects in the environment, with the same name as the file they belong to. This is a problem because...
For loops can be pretty slow, and because you've got this big pile of unrelated objects, you're going to have to rely on for loops over the filenames for each subsequent piece of analysis - otherwise, how the heck are you going to remember what the objects are named so that you can call them?
Calling objects by pasting their names in as strings - which you'll have to do, because, again, your only record of what the object is called is in this list of strings - is a real pain. Have you ever tried to call an object when you can't write its name in the code? I have, and it's horrifying.
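To make that concrete, the anti-pattern being warned against looks roughly like this:
# Objects named after files...
for (f in list.files(pattern = "\\.csv$")) {
  assign(f, read.csv(f))
}
# ...and every later step needs get() gymnastics to reach them:
summary(get(list.files(pattern = "\\.csv$")[1]))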
A better way of doing it might be with lapply().
# List files
filelist <- list.files(pattern = "\\.csv$")
# Now we use lapply to perform a set of operations
# on each entry in the list of filenames.
to_dispose_of <- lapply(filelist, function(x) {
  # Read in the file specified by 'x' - an entry in filelist
  data.df <- read.csv(x, skip = 1, header = TRUE)
  # Store the filename, minus .csv. This will be important later.
  filename <- substr(x = x, start = 1, stop = nchar(x) - 4)
  # Your analysis work goes here. You only have to write it out once
  # to perform it on each individual file.
  ...
  # Eventually you'll end up with a data frame or a vector of analysis
  # to write out. Great! Since you've kept the value of x around,
  # you can do that trivially.
  write.table(x = data_to_output,
              file = paste0(filename, "_analysis.csv"),
              sep = ",")
})
And done.
You can try the following code by putting all the CSV files in the same directory.
names <- list.files(pattern = "\\.csv$")  # csv file names
for (i in 1:length(names)) {
  assign(names[i], read.csv(names[i], skip = 1, header = TRUE))
}
Hope this helps!

Importing Multiple Text Files as Individual data.frames in R - 2.15.2 on Mac - 10.8.4

I have searched through this forum for most of the day trying to find the solution. I could not find it, so I am posting. If the answer is already out there, please point me in the right direction.
What I have -
A directory with 40 texts files called the following
test_63x_disc_z00.txt
*01.txt
*02.txt
...
*39.txt
In each of these files there are 10 columns of data with no header and a varying number of rows.
What I want -
I want to have an individual data.frame in R for each text file with names:
file00
file01
...
file39.
I then want to add a header (column names) to each of these data.frames.
I then want to be able to manipulate the data with ease (this last part I can sort out once I have input the data).
This is what I have accomplished (don't laugh now) -
I can input a single text file as a data frame and add a header, like so:
d <- read.delim("test_63x_disc_z00.txt", header = FALSE)
colnames(d) <- c("cell", "CentX", "CentY", "CountLabels", "AvgGreen", "DeviationsGreen", "AvgRed", "DeviationsRed", "GUI-ID", "Slice")
I am not sure how to set up a loop to perform each of the commands to all 40 files and maintain distinct file names.
To quickly read in a lot of data frames you can:
listy <- apply(data.frame(list.files()), 1, read.table, sep = "", header = FALSE)
Then, to name the list items, you can:
names(listy) <- paste0("file", seq_along(listy))
They are then called by listy$file1 etc.
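If you want the zero-padded names from the question (file00 ... file39) instead, sprintf can build them (a small variation on the above):
names(listy) <- sprintf("file%02d", seq_along(listy) - 1)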
Thanks Metrics for editing my input code... I wasn't sure how to give it the format you did.
So I figured it out (dirty version), but it still needs work -
stem <- c("/Users/stefanzdraljevic/Northwestern/2013/Carthew-Rotation/Sample-Images-Ritika/ritika-tes/statistics3/test_63x_disc_z")
The above is the stem for the file naming.
addition <- c("00","01","02","03","04","05","06","07","08","09","10","11","12","13","14","15","16","17","18","19","20","21","22","23","24","25","26","27","28","29","30","31","34","35","36","37","38","39")
This will add the number of each text file to the end of the stem. I am not sure how to incorporate the "00" numbering structure without writing them all out.
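For the record, sprintf can generate those zero-padded suffixes without typing them out (note this produces all of 00-39, so drop any numbers your files skip):
addition <- sprintf("%02d", 0:39)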
colnames <- c("cell","CentX","CentY","CountLabels","AvgGreen","DeviationsGreen","AvgRed","DeviationsRed","GUI-ID","Slice")
This will add column names to the data.frame
data <- list()
for (j in stem) {
  data[[j]] <- list()
  for (i in addition) {
    data[[j]][[i]] <- read.table(paste(j, i, ".txt", sep = ""), header = FALSE, col.names = colnames)
  }
}
This loop does the trick.
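Since there is only one stem, you can also pull the inner list out and rename it to match the file00-style names mentioned earlier (a sketch):
frames <- data[[1]]
names(frames) <- paste0("file", addition)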
