Two problems.
I have sensor data exported in JSON format to large .txt files.
When I use jsonlite to parse it:
json1 <- fromJSON(txt = "temp.txt")
I receive:
Error in parse_con(txt, bigint_as_char) :
lexical error: invalid char in json text.
prm,{"event_id":"0d3eefe1-8f7e-
(right here) ------^
I have tried running some simple code to clean it:
test <- readLines("temp.txt", warn = FALSE)
test <- gsub("prm,", "", test)
This cleans the gunk out but then when I try to save it back as a text file:
write.table(test, "test.txt", sep= ",")
The file contains this at the beginning:
"x"
"1","{\"event_id\":\"0d3eefe1-8 etc
Any ideas?
I think what you are looking for is writeLines().
write.table() converts the character vector to a table. The "1" is the row name, which R writes out as a new column when saving the file, and "x" is the automatically created column name. (You could suppress both with row.names = FALSE and col.names = FALSE, but writeLines() is the better tool for plain text.)
What I think you wanted to do is:
writeLines(test, "test.txt", useBytes = TRUE)
Setting useBytes = TRUE makes sure the encoding is not changed when you save the file (which Windows otherwise annoyingly insists on doing).
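Putting the pieces together, here is a minimal sketch of the whole clean-and-reparse round trip (assuming the stray "prm," prefixes are the only non-JSON content in the file):
library(jsonlite)
## read the raw export, strip the "prm," junk, write the cleaned lines back
test <- readLines("temp.txt", warn = FALSE)
test <- gsub("prm,", "", test, fixed = TRUE)
writeLines(test, "test.txt", useBytes = TRUE)
## the cleaned file should now parse, assuming the remainder is valid JSON
json1 <- fromJSON("test.txt")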
Related
I have an issue where I'm reading in big (500+ MB) CSV files and then want to verify that all data has been read in correctly. To do so, I compare the length() of readLines() with the nrow() of read.csv2.
The following is my R-code:
df <- readFileFromServer(HOST, KEY,
                         paste0(SERVER_PATH, SERVER_FOLDER),
                         FILENAME,
                         FUN = read.csv2,
                         sep = ";",
                         quote = "", encoding = "UTF-8", skipNul = TRUE)
df_check <- readFileFromServer(HOST, KEY,
                               paste0(SERVER_PATH, SERVER_FOLDER),
                               FILENAME,
                               FUN = readLines, skipNul = TRUE)
Then I verify that all data was loaded, by checking:
if (nrow(df) != (length(df_check) - dif)) {
  stop("some error msg")
}
dif is set to 1 to account for the header line in the CSV files.
This check has worked as intended up to now, but it fails for one particular CSV file, and I cannot fully understand why.
The failing CSV file has "NULL" in the data, which I believe readLines interprets as a line delimiter, producing an extra line and making the check fail, but I'm really not sure.
I tried passing different parameters to my read functions, but the issue persists.
I expect the length() of the readLines() result minus 1 to equal the nrow() of the read.csv2 result, as shown in my code snippet.
This is not a proper answer, but it was too long for a comment. Here is the debugging strategy I would use.
Pick a file that fails. Slurp it with readLines().
Save the file locally using writeLines().
Your first job is to make sure that the check also fails when the file is loaded from disk. My first suspicion would be that the two transfers (your first and second calls to readFileFromServer) were not precisely identical.
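One quick way to test the identical-transfers hypothesis is to save two separate transfers of the same file and compare checksums (a sketch; transfer_1.txt and transfer_2.txt are hypothetical local file names):
## first transfer, saved to disk
df_check <- readFileFromServer(HOST, KEY, paste0(SERVER_PATH, SERVER_FOLDER),
                               FILENAME, FUN = readLines, skipNul = TRUE)
writeLines(df_check, "transfer_1.txt", useBytes = TRUE)
## second transfer of the same file, saved separately
df_check <- readFileFromServer(HOST, KEY, paste0(SERVER_PATH, SERVER_FOLDER),
                               FILENAME, FUN = readLines, skipNul = TRUE)
writeLines(df_check, "transfer_2.txt", useBytes = TRUE)
## identical transfers must produce identical MD5 sums
tools::md5sum(c("transfer_1.txt", "transfer_2.txt"))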
Now: if your problem persists for the given file when you read it locally with read.csv (a different number of rows than the number of lines in the readLines output), your job becomes much easier (and probably faster) to solve.
First, take a look at the beginning of the CSV file and at its end. Are they as they should be? Do they match the data in the head and tail of your data frame? If yes, then you need to find the missing lines systematically.
Since a CSV is just a comma-separated file, you can compare each line read from the file with readLines against the line as it should be based on the table you read with read.csv. How this should be done depends on what your original CSV file looks like (whether you need to insert quotes, etc.). Basically, you need to figure out a way of restoring the lines of the CSV file from the data in your data frame, and then look for the first line that differs.
Here is some code to give you an idea what I mean:
## first, prepare data (for this example only!)
f <- file("test.csv", "w")
writeLines(c("a,b,c", "1,what ever,42", "12,89,one"), f)
close(f)
## actual test
## first, read the file with readLines
f <- file("test.csv", "r")
rl <- readLines(f)
close(f)
## then, read it with read.csv
csv <- read.csv("test.csv")
## third, reconstruct the lines as they should look based on the data frame
rl_sim <- do.call(paste, c(csv, sep = ","))
## find the first mismatch (rl[1] is the header, hence the i + 1 offset)
for (i in seq_along(rl_sim)) {
  if (rl_sim[i] != rl[i + 1]) {
    message("Problems start at line ", i, "\n", rl_sim[i], "\n", rl[i + 1])
    break
  }
}
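If the real file quotes its character fields, you need to put the quotes back before comparing. A minimal sketch, assuming double quotes around character columns only:
## wrap character columns in double quotes to mirror the file's quoting
quote_chr <- function(x) if (is.character(x)) paste0('"', x, '"') else x
rl_sim <- do.call(paste, c(lapply(csv, quote_chr), sep = ","))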
I am trying to read a CSV file in R, but I am getting some errors.
This is what I have (and the path is set correctly):
mydata <- read.csv("food_poisioning.csv")
But I am getting this error
Error in make.names(col.names, unique = TRUE) :
invalid multibyte string at '<ff><fe>Y'
In addition: Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
line 1 appears to contain embedded nulls
2: In read.table(file = file, header = header, sep = sep, quote = quote, :
line 2 appears to contain embedded nulls
I believe I am getting this error because my CSV file is actually not separated by commas but by spaces. This is what it looks like:
I tried using sep=" ", but it didn't work.
If you're having difficulty with read.csv() or read.table() (or with writing other import commands), try the "Import Dataset" button on the Environment panel in RStudio. It is especially useful when you are not sure how to specify the table format or when the format is complex.
For your .csv file, use "From Text (readr)..."
A window will pop up and let you choose a file or URL to import. You will see a preview of the data table after you select it. You can click on the column names to change the column class or even "skip" columns you don't need. Use the Import Options to further manage your data.
Here is an example using CreditCard.csv from Vincent Arel-Bundock's GitHub projects:
You can also modify and/or copy and paste the code in Code Preview, or click Import to run the code when you are ready.
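For reference, the code generated in Code Preview looks roughly like this (a sketch; the URL follows the Rdatasets naming scheme for CreditCard.csv, and the read_csv() defaults would be replaced by whatever options you pick in the dialog):
library(readr)
## read_csv() guesses column types from the data; adjust via the import options
CreditCard <- read_csv("https://vincentarelbundock.github.io/Rdatasets/csv/AER/CreditCard.csv")
View(CreditCard)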
I am trying to read a data file into R with several delimited columns. Some columns have entries containing special characters (such as an arrow). read.table() comes back with an error:
incomplete final line found by readTableHeader
and does not read the file. I tried the UTF-8 and UTF-16 encoding options, which didn't help either. Here is a small example file.
I am not able to reproduce the arrow in this question box, hence I am attaching the image of the notepad screen of a small file (test1.txt).
Here is what I get when I try to open it.
test <- read.table("test1.TXT", header=T, sep=",", fileEncoding="UTF-8", stringsAsFactor=F)
Warning message: In read.table("test1.TXT", header = T, sep = ",",
fileEncoding = "UTF-8", : incomplete final line found by
readTableHeader on 'test1.TXT'
However, if I remove the second line (with the special character) and try to import the file, R imports it without problem.
test2.txt =
id, ti, comment
1001, 105AB, "All OK"
test <- read.table("test2.TXT", header=T, sep=",", fileEncoding="UTF-8", stringsAsFactor=F)
id ti comment
1 1001 105AB All OK
Although this is a small example, the file I am working with is very large. Is there a way I can import the file into R with those special characters in place?
Thank you.
I am having a problem using read.csv in R. I am trying to import a file that was saved as a .csv file in Excel. Missing values are blank, but I have a single entry in one column that looks blank yet is in fact a space. Using the standard command that I have been using for similar files produces this error:
raw.data <- read.csv("DATA_FILE.csv", header=TRUE, na.strings="", encoding="latin1")
Error in type.convert(data[[i]], as.is = as.is[i], dec = dec, na.strings = character(0L)) :
invalid multibyte string at ' floo'
I have tried a few variations, adding arguments to the read.csv() call such as na.strings=c(""," ") and strip.white=TRUE, but these result in exactly the same error.
It is similar to the error you get when you use the wrong encoding option, but I am pretty sure that is not the problem here. I have of course tried manually removing the space (in Excel), and that works, but since I'm trying to write generic code for a Shiny tool, this is not really optimal.
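For concreteness, the variations mentioned above look like this (same file as before; both additions fail with the identical error):
## attempted work-arounds: treat both "" and " " as NA, and strip whitespace
raw.data <- read.csv("DATA_FILE.csv", header = TRUE,
                     na.strings = c("", " "), strip.white = TRUE,
                     encoding = "latin1")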
NEI <- readRDS(unz(tf, filename = "summarySCC_PM25.rds", open = "", encoding = getOption("encoding")))
The variable tf is a temporary file with a very specific location saved on the hard drive. My understanding is that the signature of unz() is:
unz(description, filename, open = "", encoding = getOption("encoding"))
As I read the documentation, my use of the code is accurate, in that:
description is a specific zip file destination, which var tf holds as c://...//345du.zip
filename is summarySCC_PM25.rds, which is the file to be extracted from tf
open is already established in the call, so leaving it blank should be fine
encoding labels the language type.
Within the context of the code above, I receive "Error: unknown input format" from R 3.1.1. I need clarification on what might be happening, as I interpret my code to be equivalent to:
NEI <- readRDS("summarySCC_PM25.rds")
Am I misinterpreting this?
I found your data online so that I could read your file. It was available here:
https://www.linkedin.com/today/post/article/20140617173447-5576436-explore-n-analyze-data-assignment-2
> unzip("C:\\Users\\jmiller\\Downloads\\exdata_data_NEI_data.zip")
> NEI <- readRDS("summarySCC_PM25.rds")
> dim(NEI)
[1] 6497651 6
> colnames(NEI)
[1] "fips" "SCC" "Pollutant" "Emissions" "type" "year"
One last note: avoid unz() and use unzip() instead, since the temp file is a moving target.
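A minimal sketch of that unzip()-based approach (zip_url is a placeholder for wherever the archive lives; adjust paths as needed):
## download the archive to a temp file, extract only the .rds we need,
## then read it from a stable path on disk
tf <- tempfile(fileext = ".zip")
download.file(zip_url, tf, mode = "wb")   # zip_url: hypothetical download URL
unzip(tf, files = "summarySCC_PM25.rds", exdir = tempdir())
NEI <- readRDS(file.path(tempdir(), "summarySCC_PM25.rds"))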