CSV file error in R

When trying to read a local CSV file I'm getting the error:
Error in xts(dat, order.by = as.Date(rownames(dat), "%m/%d/%Y")) :
'order.by' cannot contain 'NA', 'NaN', or 'Inf'
I'm trying out the example from https://rpubs.com/mohammadshadan/288218, which is the following:
tmp_file <- "test.csv"
# Create dat by reading tmp_file
dat <- read.csv(tmp_file,header=FALSE)
# Convert dat into xts
xts(dat, order.by = as.Date(rownames(dat), "%m/%d/%Y"))
# Read tmp_file using read.zoo
dat_zoo <- read.zoo(tmp_file, index.column = 0, sep = ",", format = "%m/%d/%Y")
# Convert dat_zoo to xts
dat_xts <- as.xts(dat_zoo)
The thing is, when I read the file from the server as in the example, it somehow works, but not when I read a local CSV file, even if it has the same contents as the file on the web.
I have tried creating the CSV file with Notepad, Notepad++, and Excel with no luck.
Any idea what I'm missing? I have also tried using read.table instead of read.csv, with the same results...
File can be found at: https://ufile.io/zfqje
If header=TRUE I get the following warnings:
Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
  incomplete final line found by readTableHeader on 'test.csv'
2: In read(file, ...) :
  incomplete final line found by readTableHeader on 'test.csv'

The problem is the header=FALSE argument in read.csv.
read.csv uses the first column as the row names when there is a header and the first row contains one fewer field than the number of columns. With header = FALSE it doesn't create those row names, so rownames(dat) are just "1", "2", ..., and as.Date() on them returns NA.
Here is an example of the problem:
dat <- read.csv(text = "a,b
1/02/2015,1,3
2/03/2015,2,4", header = F)
as.Date(rownames(dat), "%m/%d/%Y")
#> [1] NA NA NA
By removing header = F, the problem is fixed:
dat <- read.csv(text = "a,b
1/02/2015,1,3
2/03/2015,2,4")
as.Date(rownames(dat), "%m/%d/%Y")
#> [1] "2015-01-02" "2015-02-03"
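For completeness, here is a self-contained sketch of the same behaviour using a temporary file (file name and contents invented for illustration):

```r
# Write a small CSV whose header row has one fewer field than the data rows
tmp <- tempfile(fileext = ".csv")
writeLines(c("a,b",
             "1/02/2015,1,3",
             "2/03/2015,2,4"), tmp)

# header = TRUE is the default: the first column becomes the row names,
# so as.Date() on rownames() succeeds
dat <- read.csv(tmp)
as.Date(rownames(dat), "%m/%d/%Y")
#> [1] "2015-01-02" "2015-02-03"
```

With header = FALSE the same file would instead yield three plain rows with row names "1", "2", "3", which is exactly what produces the NAs above.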

Related

EOF within quoted string warning when merging csv files

I have more than 70 CSV files and I am trying to merge them row-wise (they all have the same columns). I tried to combine them using this code:
library(tidyverse)
library(plyr)
library(readr)
setwd("*\\data")
myfolder="test"
allfiles= list.files(path=myfolder, pattern="*.csv", full.names = T)
allfiles
combined_csv= ldply(allfiles, read.csv)
Once I run this code I get a warning message:
Warning message:
In scan(file = file, what = what, sep = sep, quote = quote, dec = dec, :
EOF within quoted string
It looks like I am losing some rows. How can I fix this?
It is possible that the same columns in different files are read as different types when some contain 'character' elements and others are purely numeric. Here is one method: read every column as "character", rbind the elements, then use type.convert to automatically convert the column classes based on the values they hold:
library(data.table)
out <- rbindlist(lapply(list.files(path=myfolder, full.names = TRUE),
fread, colClasses = "character"))
out <- type.convert(out, as.is = TRUE)
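As a toy illustration of what type.convert(as.is = TRUE) does in that last step (data invented, not the asker's files):

```r
# All columns start out as character, as if read with colClasses = "character"
chr <- data.frame(a = c("1", "2"), b = c("x", "y"),
                  stringsAsFactors = FALSE)
out <- utils::type.convert(chr, as.is = TRUE)
# column a is converted to integer; b stays character because of "x", "y"
sapply(out, class)
```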
Try this:
library(dplyr)
library(readr)
myfolder="test"
df <- list.files(path=myfolder, full.names = TRUE) %>%
lapply(read_csv) %>%
bind_rows
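Separately, the warning text itself usually points to an unmatched quote character somewhere in one of the files. A minimal, self-contained sketch (file contents invented) that reproduces it and one way around it, assuming quote characters carry no meaning in your data:

```r
# Reproduce the problem: an unmatched quote makes scan() read to end of file
tmp <- tempfile(fileext = ".csv")
writeLines(c("a,b",
             '1,"unterminated',
             "2,ok"), tmp)

# read.csv(tmp) would warn "EOF within quoted string" and silently drop
# the last row; quote = "" treats quotes as ordinary characters instead
dat <- read.csv(tmp, quote = "")
nrow(dat)
#> [1] 2
```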

NA introduced by coercion

I have a Notepad txt file, inflation.txt, that looks something like this:
1950-1 0.0084490544865279
1950-2 −0.0050487986543660
1950-3 0.0038461526886055
1950-4 0.0214293914558992
1951-1 0.0232839389540449
1951-2 0.0299121323429455
1951-3 0.0379293285389640
1951-4 0.0212773984472849
From a previous stackoverflow post, I learned how to import this file into R:
data <- read.table("inflation.txt", sep = "" , header = F ,
na.strings ="", stringsAsFactors= F, encoding = "UTF-8")
However, this code reads the values as character. When I try to convert the second column to numeric, all negative values are replaced with NA:
b=as.numeric(data$V2)
Warning message:
In base::as.numeric(x) : NAs introduced by coercion
> head(b)
[1] 0.008449054 NA 0.003846153 0.021429391 0.023283939 0.029912132
Can someone please show me what I am doing wrong? Is it possible to save the inflation.txt file as a data.frame?
I would read the file using space as a separator, then split out separate year and quarter columns in R:
data <- read.table("inflation.txt", sep = " ", header = FALSE,
                   na.strings = "", stringsAsFactors = FALSE, encoding = "UTF-8")
names(data) <- c("ym", "vals")
data$year    <- as.numeric(sub("-.*$", "", data$ym))
data$quarter <- as.numeric(sub("^\\d+-", "", data$ym))
data <- data[, c("year", "quarter", "vals")]
The issue is that the "−" in your data is not the ASCII minus sign but the Unicode minus sign (U+2212), hence the column is read as character.
You have two options.
Open the file in any text editor, find and replace all occurrences of "−" with the regular minus sign, and then read.table will work directly:
data <- read.table("inflation.txt")
If you can't change the data in the original file, replace the character with sub after reading the data into R:
data$V2 <- as.numeric(sub('−', '-', data$V2, fixed = TRUE))
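A small self-contained demonstration of the difference between the two characters (values invented):

```r
x <- c("0.0084", "\u22120.0050")   # second value uses U+2212, not "-"
suppressWarnings(as.numeric(x))    # the U+2212 value becomes NA
fixed <- as.numeric(sub("\u2212", "-", x, fixed = TRUE))
fixed
#> [1]  0.0084 -0.0050
```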

Error in type.convert when reading data from CSV

I am working on a basketball project. I am struggling to open my data in R:
https://www.basketball-reference.com/leagues/NBA_2019_totals.html
I have imported the data into Excel and then saved it as CSV (for Macintosh).
When I import the data into R I get an error message:
"Error in type.convert.default(data[[i]], as.is = as.is[i], dec = dec, : invalid multibyte string at '<e7>lex<20>Abrines' "
The following seems to work. The readHTMLTable function does give warnings due to the presence of null characters in the Player column.
library(XML)
uri <- "https://www.basketball-reference.com/leagues/NBA_2019_totals.html"
data <- readHTMLTable(readLines(uri), which = 1, header = TRUE)
i <- grep("Player", data$Player, ignore.case = TRUE)
data <- data[-i, ]
cols <- c(1, 4, 6:ncol(data))
data[cols] <- lapply(data[cols], function(x) as.numeric(as.character(x)))
Check if there are NA values. This is needed because the table at the link restarts the headers every now and then, and character strings become mixed with numeric entries. The grep above is meant to detect such cases, but maybe there are others:
sapply(data, function(x) sum(is.na(x)))
If everything is all right, write the data set out as a CSV file:
write.csv(data, "nba.csv")
Setting fileEncoding to latin1 can help.
For example, to read a CSV file while skipping the second row:
Test <- read.csv("IMDB.csv", header = TRUE, sep = ",", fileEncoding = "latin1")[-2, ]
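A self-contained sketch of the fileEncoding fix (the player name is taken from the error message; the file itself is invented):

```r
# Write a Latin-1 encoded file containing a non-ASCII player name
tmp <- tempfile(fileext = ".csv")
con <- file(tmp, open = "w", encoding = "latin1")
writeLines(c("Player,Pts", "\u00c1lex Abrines,316"), con)
close(con)

# Reading this without declaring the encoding can fail with
# "invalid multibyte string" in a UTF-8 locale; this reads it cleanly
dat <- read.csv(tmp, fileEncoding = "latin1")
dat$Player
#> [1] "Álex Abrines"
```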

Reading a dat file in R

I am trying to read a .DAT file with ";" as the separator. I want to read only the lines that start with certain characters, such as "B"; the other lines are not of interest. Can anyone guide me?
I have tried read_delim, read.table, and read.csv2, but since some lines are not of equal length, I am getting errors.
file <- read.table('~/file.DAT', header = FALSE, quote = "\"'",
                   dec = ".", numerals = "no.loss", sep = ';')
I am expecting an R data frame out of this file, which I can then write back to a CSV file.
You should be able to do that with readLines:
allLines <- readLines('~/file.DAT')
grepB <- function(x) grepl('^B', x)
BLines <- Filter(grepB, allLines)   # base R Filter, capital F
df <- as.data.frame(do.call(rbind, strsplit(BLines, ";")),
                    stringsAsFactors = FALSE)
And if your file contains a header, you can set the column names with:
names(df) <- strsplit(allLines[1], ";")[[1]]
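A self-contained usage sketch of this approach (file contents invented for illustration):

```r
tmp <- tempfile(fileext = ".DAT")
writeLines(c("H;x;y",   # header line
             "B;3;4",
             "C;5",     # shorter line, simply skipped by the filter
             "B;6;7"), tmp)

allLines <- readLines(tmp)
BLines <- Filter(function(x) grepl("^B", x), allLines)
df <- as.data.frame(do.call(rbind, strsplit(BLines, ";")),
                    stringsAsFactors = FALSE)
df
#>   V1 V2 V3
#> 1  B  3  4
#> 2  B  6  7
```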

Import data from excel but get warning messages

I import data from Excel, and since I have multiple Excel files, I read them all in at once.
Here is my code:
library(readxl)
library(data.table)
file.list <- dir(path = "path/", pattern='\\.xlsx', full.names = T)
df.list <- lapply(file.list, read_excel)
data <- rbindlist(df.list)
However, I get these warning messages from the df.list <- lapply(file.list, read_excel) step:
Warning messages:
1: In read_xlsx_(path, sheet, col_names = col_names, col_types = col_types, :
[3083, 9]: expecting date: got '2015/07/19'
2: In read_xlsx_(path, sheet, col_names = col_names, col_types = col_types, :
[3084, 9]: expecting date: got '2015/07/20'
What's going on? How can I check and correct this?
Following up on my comment, I submit this as an answer. Have you looked at the respective lines in your Excel sheet? It seems to me that something is going on there: maybe there is an empty cell before or after these lines, a stray space, or the date format in these cells differs from the other cells.
It is not an elegant solution, but passing the parameter guess_max = <number of rows in your data file> eliminates the warnings and their side effects.
