I ran into an issue opening a large file with fread:
df <- fread("userprofile.csv", encoding = "UTF-8")
In fread("userprofile.csv", encoding = "UTF-8") :
Stopped early on line 342637. Expected 77 fields but found 81. Consider fill=TRUE and comment.char=. First discarded non-empty line:
Edit1:
df <- fread("userprofile.csv", encoding = "UTF-8", fill=TRUE)
This gave me an "R Session Aborted" crash.
Is there an alternative way to open a large file, or to handle this error?
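Before reaching for fill=TRUE, it can help to locate the malformed rows yourself and decide whether to fix them or skip them. A minimal sketch with count.fields (the small sample file here is a stand-in for userprofile.csv):

```r
# Sketch: find rows whose field count differs from the header's,
# before deciding between fill=TRUE and repairing the file.
csv <- c("a,b,c",        # header: 3 fields
         "1,2,3",
         "4,5,6,7",      # malformed: 4 fields
         "8,9,10")
writeLines(csv, "profile_sample.csv")   # stand-in for userprofile.csv

nfields <- count.fields("profile_sample.csv", sep = ",")
bad <- which(nfields != nfields[1])
bad          # row 3 has the wrong number of fields
```

Once you know which lines are bad (line 342637 in your case), you can inspect them with readLines and either clean them or pass skip/nrows to fread to read around them.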
Related
I have two data sets containing rows of data where the last row is missing a CRLF. I am having to add it to the files in order to read them in. Is there a way I can read them in without modifying these files?
One of the final records looks like this:
surface NewYork Ave. 1259 1290   <- no final carriage return at end of record
Warning message:
In readLines(file, n = thisblock) : incomplete final line found on 'roadways.dat'
Thanks. MM
The only way I managed to reproduce your problem was with a Windows Unicode file, encoding = "UCS-2LE". Here are a couple of ways to approach the problem, with a caveat: test whether each produces the output you want. In most cases it is just a warning, which you can suppress using the available switches.
# Suppress the warning (assuming it is just a warning with no effect)
data <- readLines(con <- file("your_file", encoding = "UCS-2LE"), warn = FALSE, n=-1)
# Or see if other alternative encoding can solve your problem
A <- readLines(con <- file("your_file", encoding = "UTF-8"), n=-1)
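To confirm the warning really is harmless in your case, you can check that readLines still returns the final record even when the file lacks a trailing newline. A minimal check (the file name is illustrative):

```r
# Sketch: a file whose last line lacks a newline is still fully read by
# readLines; "incomplete final line" is only a warning, not data loss.
con <- file("no_crlf.dat", "wb")
writeBin(charToRaw("row1\nrow2\nrow3"), con)  # no trailing \n after row3
close(con)

x <- readLines("no_crlf.dat", warn = FALSE)   # warn = FALSE silences the message
length(x)   # 3 -- the final record is present
x[3]        # "row3"
```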
What is the proper way to add column names to the header of a CSV table generated by the write.table command?
For example, write.table(x, file, col.names = c("ABC","ERF")) throws an error saying invalid 'col.names' specification. Is there a way to get around the error while still using write.table?
Edit:
I am in the middle of writing a large program, so exact data replication is not possible; however, this is what I have done:
write.table(paste("A","B"), file="AB.csv", col.names=c("A1","B1")) — I am still getting this error: Error in write.table(paste("A","B"), file="AB.csv", col.names=c("A", : invalid 'col.names' specification.
Is this what you expect? Tried on my end:
df <- data.frame(condition_1sec=1)
df1 <- data.frame(susp=0)
write.table(c(df,df1),file="table.csv",col.names = c("A","B"),sep = ",",row.names = F)
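The reason for the error (my reading of write.table's behavior, worth verifying against your data): a character vector passed as col.names must match the number of columns being written. paste("A","B") is a single value ("A B"), i.e. one column, so two column names is an invalid specification. A minimal sketch:

```r
# Sketch: col.names as a character vector works only when its length
# matches the number of columns being written.
df <- data.frame(x = 1, y = 2)
write.table(df, "ab.csv", col.names = c("A", "B"), sep = ",", row.names = FALSE)
readLines("ab.csv")[1]   # the header row: "A","B"

# paste("A", "B") collapses to the single value "A B" (one column),
# so col.names = c("A1", "B1") has the wrong length and errors:
res <- try(write.table(paste("A", "B"), "ab2.csv", col.names = c("A1", "B1")),
           silent = TRUE)
inherits(res, "try-error")   # TRUE
```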
I am trying to read.table for a tab-delimited file using the following command:
df <- read.table("input.txt", header=FALSE, sep="\t", quote="", comment.char="",
encoding="utf-8")
There should be 30 million rows. However, after read.table() returns, df contains only ~6 million rows. And there is a warning message like this:
Warning message:
In read.table("input.txt", header = FALSE, sep = "\t", quote = "", :
incomplete final line found by readTableHeader on 'input.txt'
I believe read.table quits after encountering a special symbol (ASCII code 1A, Substitute) in one of the string columns. In the input file, the only special character should be the tab, since it separates the columns. Is there any way to ask read.table to treat every other character as not special?
If you have 30 million rows, I would use fread rather than read.table; it is faster. Learn more about it here: http://www.inside-r.org/packages/cran/data.table/docs/fread
fread(input, sep = "auto", encoding = "UTF-8")
Regarding your issue with read.table, I think the solutions here should solve it:
'Incomplete final line' warning when trying to read a .csv file into R
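If the 0x1A character really is what stops read.table (on Windows, a text-mode connection treats 0x1A as an end-of-file marker), one workaround is to read the file through a binary connection, strip that byte, and parse the cleaned text. A sketch with an illustrative file name:

```r
# Sketch: remove the SUB (0x1A) control character before parsing.
# Opening the connection in binary mode ("rb") avoids Windows treating
# 0x1A as end-of-file and truncating the read.
writeLines(c("a\tb", "c\x1A\td"), "sub_sample.txt")   # illustrative file

con <- file("sub_sample.txt", open = "rb")
raw_lines <- readLines(con, warn = FALSE)
close(con)

clean <- gsub("\x1A", "", raw_lines, fixed = TRUE)
df <- read.table(text = clean, header = FALSE, sep = "\t",
                 quote = "", comment.char = "")
nrow(df)    # 2 -- both rows survive
```

For a 30-million-row file this costs memory, so it may be worth cleaning the file once on disk and reading the cleaned copy with fread.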
I am trying to extract nouns in R. When I run the program, an error appears. I wrote the following code:
setwd("C:\\Users\\kyu\\Desktop\\1-1file")
library(KoNLP)
useSejongDic()
txt <- readLines(file("1_2000.csv"))
nouns <- sapply(txt, extractNoun, USE.NAMES = F)
and the error appears like this:
setwd("C:\\Users\\kyu\\Desktop\\1-1file")
library(KoNLP)
useSejongDic()
Backup was just finished!
87007 words were added to dic_user.txt.
txt <- readLines(file("1_2000.csv"))
nouns <- sapply(txt, extractNoun, USE.NAMES = F)
java.lang.ArrayIndexOutOfBoundsException
Error in `Encoding<-`(`*tmp*`, value = "UTF-8") : a character vector argument expected
Why is this happening? When I load the 1_2000.csv file, there are 2000 lines of data. Is this too much data? How do I extract nouns from a large data file? I use R 3.2.4 with RStudio, and Excel 2016 on Windows 8.1 x64.
The number of lines shouldn't be a problem.
I think that there might be a problem with the encoding. See this post. Your .csv file is encoded as EUC-KR.
I changed the encoding to UTF-8 using
txtUTF <- read.csv(file.choose(), encoding = 'UTF-8')
nouns <- sapply(txtUTF, extractNoun, USE.NAMES = F)
But that results in the following error:
Warning message:
In preprocessing(sentence) : Input must be legitimate character!
So this might be an error with your input. I can't read Korean, so I can't help you further.
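An alternative to converting after the fact (an assumption about your file's encoding, untested against KoNLP itself): declare the source encoding when reading, via fileEncoding, so R converts the text on input. A self-contained sketch with an ASCII sample; a real EUC-KR file works the same way:

```r
# Sketch: write a sample file in EUC-KR, then read it back by declaring
# fileEncoding so R converts it on input. File name is illustrative.
con <- file("sample_kr.csv", open = "w", encoding = "EUC-KR")
writeLines(c("word", "hello"), con)
close(con)

d <- read.csv("sample_kr.csv", fileEncoding = "EUC-KR",
              stringsAsFactors = FALSE)
d$word[1]   # "hello"
```

With the question's file that would be read.csv("1_2000.csv", fileEncoding = "EUC-KR") before calling extractNoun on the resulting character columns.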
I am trying to read the car.data file at this location - https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data - using read.table, as below. I tried various solutions listed earlier, but they did not work. I am using Windows 8, R version 3.2.3. I can save this file as a .txt file and then read it, but I am not able to read the .data file directly from the URL, or even after saving it, using read.table.
t <- read.table(
"https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data",
fileEncoding="UTF-16",
sep = ",",
header=F
)
Here is the error I am getting; it results in an empty data frame with a single cell containing "?":
Warning messages:
1: In read.table("https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data", : invalid input found on input connection 'https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data'
2: In read.table("https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data", :
incomplete final line found by readTableHeader on 'https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data'
Please help!
Don't use read.table when the data is not stored in a table format. The data at that link is clearly in comma-separated format, so use the RCurl package and read it as CSV:
library(RCurl)
x <- getURL("https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data")
y <- read.csv(text = x)
Now y contains your data.
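One caveat worth checking (my observation about the file, not something stated above): car.data has no header row, so read.csv with its default header = TRUE will consume the first record as column names. Passing header = FALSE keeps all the rows; a sketch using the file's first two records inline:

```r
# Sketch: car.data has no header row, so header = FALSE keeps the first
# record as data rather than promoting it to column names.
x <- "vhigh,vhigh,2,2,small,low,unacc\nvhigh,vhigh,2,2,small,med,unacc\n"
y <- read.csv(text = x, header = FALSE)
nrow(y)   # 2, not 1
ncol(y)   # 7 attribute columns
```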
Thanks to cory, here is the solution - just use read.csv directly:
x <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data")