Parsing Unicode string with nulls in R

I am having some trouble parsing a Unicode string from a JSON object pulled from an API. As the string has Encoding() "unknown", I need to parse it so that the system knows what it's dealing with. The string represents a decoded .png file in UTF-8, which I then need to decode back to latin1 before writing it to a .png file (I know, very backwards; it would be much better if the API pushed a base64 string).
I get the string from the API as a chr object and try to let fromJSON do the job, but no dice: it cuts the string at the first nul (\u0000).
> library(httr)
> library(jsonlite)
> library(tidyverse)
> m
Response [https://...]
Date: 2018-04-10 11:47
Status: 200
Content-Type: application/json
Size: 24.3 kB
{"artifact": "\u0089PNG\r\n\u001a\n\u0000\u0000\u0000\rIHDR\u0000\u0000\u0000\u0092\u0000\u0000\u0000\u00e3...
> x <- content(m, encoding = "UTF-8", as = "text")
> ## substring of the complete x:
> x <- "{\"artifact\": \"\\u0089PNG\\r\\n\\u001a\\n\\u0000\\u0000\\u0000\\rIHDR\\u0000\\u0000\\u0000\\u0092\\u0000\\u0000\\u0000\\u00e3\\b\\u0006\\u0000\\u0000\\u0000n\\u0005M\\u00ea\\u0000\\u0000\\u0000\\u0006bKGD\\u0000\\u00ff\\u0000\\u00ff\\u0000\\u00ff\\u00a0\\u00bd\\u00a7\\u0093\\u0000\\u0000\\u0016\\u00e7IDATx\\u009c\\u00ed\"}\n"
>
> ## the problem
> "\u0000"
Error: nul character not allowed (line 1)
> ## this works fine
> "\\u0000"
[1] "\\u0000"
>
> y <- fromJSON(txt = x)
> y # note how the string is cut!
$artifact
[1] "\u0089PNG\r\n\032\n"
When I replace the \\u0000 escapes with chr(0), everything parses fine. The problem is that the nuls seem to play an important role in the binary representation of the file that I write in the end, so the resulting image is corrupted in the viewer.
> x <- str_replace_all(string = x, pattern = "\\\\u0000", replacement = chr(0))
> y <- fromJSON(txt = x)
> y
$artifact
[1] "\u0089PNG\r\n\032\n\rIHDR\u0092ã\b\006n\005Mê\006bKGDÿÿÿ ½§\u0093\026çIDATx\u009cí"
> str(y$artifact)
chr "<U+0089>PNG\r\n\032\n\rIHDR<U+0092>ã\b\006n\005Mê\006bKGDÿÿÿ ½§<U+0093>\026çIDATx<U+009C>í"
> Encoding(y$artifact)
[1] "UTF-8"
> z <- iconv(y$artifact, from = "UTF-8", to = "latin1")
> writeBin(object = z, con = "test.png", useBytes = TRUE)
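Side note on the final write: writeBin() writes a character vector as a C-style nul-terminated string, so test.png gains a stray trailing \0 byte. Writing raw bytes avoids this; a sketch of that variant (iconv(..., toRaw = TRUE) returns a list of raw vectors, hence the [[1]]):
## sketch: convert straight to raw bytes and write those verbatim
z_raw <- iconv(y$artifact, from = "UTF-8", to = "latin1", toRaw = TRUE)[[1]]
writeBin(object = z_raw, con = "test.png")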
I have tried these commands with the original string, to no avail:
> library(stringi)
> stri_unescape_unicode(str = x)
Error in stri_unescape_unicode(str = x) :
embedded nul in string: '{"artifact": "<U+0089>PNG\r\n\032\n'
> ## and
> parse(text = x)
Error in parse(text = x) : nul character not allowed (line 1)
Is there no way for R to handle this nul character?
Any idea how I can get the complete encoded string and write it to a file?
The same story works just fine in Python, which uses \xhh escapes instead of \u00hh:
response = r.json()
artifact = response['artifact']
artifact
'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR....'
artifact_encoded = artifact.encode('latin-1')
artifact_encoded # note the binary form!
b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR....'
fh = open("abc.png", "wb")
fh.write(artifact_encoded)
fh.close()
FYI: I have cut most of the actual string out, but enough remains for testing purposes. The actual string contained other symbols, and it seemed impossible to copy-paste the string into a script and assign it to a new variable (e.g. y <- "{\"artifact\": \"\\u0089PNG\\..."). So I don't know what I would do if I had to read the string from e.g. a .csv file.
Any pointers in any of my struggles would be appreciated :)

Related

No encoding supplied: defaulting to UTF-8

I just wrote some code to extract some data.
The script works almost fine. By almost, I mean that I get the results I want, but I always have the same message in my console:
No encoding supplied: defaulting to UTF-8.
The problem is that I'm running a loop over 900 lines, so I get "No encoding supplied: defaulting to UTF-8." 900 times.
Is it possible to get rid of it? Does it have an impact on the processing speed?
Here is my code:
library(httr)      # GET(), user_agent()
library(XML)       # htmlParse(), xpathSApply()
library(magrittr)  # %>%

mykeywords <- readLines("mykeywords.txt", encoding = "UTF-8")
mykeywords <- as.character(mykeywords)
my_user_agent <- "XXXXXXXXXXX"
PAA <- list()
max_length <- 0
for (i in 1:length(mykeywords)) {
  url_to_check <- paste0("https://www.google.com/search?q=", mykeywords[i],
                         "&ie=utf-8&oe=utf-8&client=firefox-b")
  result_get <- GET(url_to_check, user_agent(my_user_agent)) %>%
    htmlParse(encoding = "UTF-8") %>%
    xpathSApply('//div[/*]/g-accordion-expander/div/div', xmlValue)
  if (is.null(result_get)) {
    PAA[[i]] <- NA
  } else {
    result_get <- result_get[result_get != "" & !is.na(result_get)]
    PAA[[i]] <- result_get
    if (length(result_get) > max_length) {
      max_length <- length(result_get)
    }
  }
}
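The message appears to come from httr when it extracts text from a response that doesn't declare a charset; it is informational only, so beyond printing 900 lines it shouldn't meaningfully slow the loop. One way to silence it is to extract the text yourself with an explicit encoding before parsing; a sketch, assuming the pages really are UTF-8:
result_get <- GET(url_to_check, user_agent(my_user_agent)) %>%
  content(as = "text", encoding = "UTF-8") %>%   # explicit encoding: no message
  htmlParse(asText = TRUE, encoding = "UTF-8") %>%
  xpathSApply('//div[/*]/g-accordion-expander/div/div', xmlValue)
Alternatively, wrapping the pipeline in suppressMessages() hides the message without changing the parsing.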

Error in gsub ("character value is too large") with Arabic language in R

I am doing text mining in R with Arabic text
and use the gsub function, but I get the error shown here:
Error in gsub("^\\x{0627}\\x{0644}(?=\\p{L})", "", x, perl = TRUE) :
invalid regular expression '^\x{0627}\x{0644}(?=\p{L})'
In addition: Warning message:
In gsub("^\\x{0627}\\x{0644}(?=\\p{L})", "", x, perl = TRUE) :
PCRE pattern compilation error
'character value in \x{} or \o{} is too large'
at '}\x{0644}(?=\p{L})'
Here is my code:
x <- "الوطن"
# Remove leading alef lam with optional leading waw
m <- gsub('^\\x{0627}\\x{0644}(?=\\p{L})', '', x, perl = TRUE)
Can anyone help me?
Finally I solved the problem. The issue was that when I imported the data in Arabic as a .csv and then applied gsub, I got the same error as above.
I figured out that I need to save the data with fileEncoding = "UTF-8",
then read it back with encoding = "UTF-8", and then change the locale,
like this:
> write.csv(x, file = "x.csv", fileEncoding = "UTF-8")
> y <- read.csv("C:/Users/Documents/x.csv", encoding = "UTF-8")
> Sys.setlocale("LC_CTYPE", "arabic")
[1] "Arabic_Saudi Arabia.1256"
It seems to me the only problem is your quotation marks:
> x <- "الوطن"
> gsub('^\\x{0627}\\x{0644}(?=\\p{L})', '', x, perl = TRUE)
[1] "وطن"
Also, check your OS locale; I've experienced similar issues when trying to process Hebrew text while my Windows locale was set to US.
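A quick way to inspect and switch the relevant locale category (the name "Arabic" is a Windows-style locale name; adjust for your OS):
Sys.getlocale("LC_CTYPE")            # inspect the current character-type locale
Sys.setlocale("LC_CTYPE", "Arabic")  # switch it, e.g. on Windows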

Source a script file that contains Unicode (Farsi) characters

Write the text below in a buffer and save it as an .R script:
letters_fa <- c('الف','ب','پ','ت','ث','ج','چ','ح','خ','ر','ز','د')
Then try these lines to source() it:
script <- "path/to/script.R"
# (assumes magrittr or tidyverse is loaded for %>%)
file(script, encoding = "UTF-8") %>%
  readLines()  # works fine
file(script, encoding = "UTF-8") %>%
  source()  # works fine
source(script)  # the Farsi letters in the environment are misrepresented
source(script, encoding = "UTF-8")  # gives an error
The last line throws an error. I tried to debug it, and I believe there is a bug in the source() function, in the following lines:
...
loc <- utils::localeToCharset()[1L]
...
The error occurs at the .Internal(parse( line:
...
exprs <- if (!from_file) {
    if (length(lines))
        .Internal(parse(stdin(), n = -1, lines, "?",
                        srcfile, encoding))
    else expression()
} else .Internal(parse(file, n = -1, NULL, "?", srcfile,
                       encoding))
...
The exact error is:
Error in source(script, encoding = "UTF-8") :
script.R:2:17: unexpected INCOMPLETE_STRING
1: #' @export
2: letters_fa <- c('
^
The solution to this problem is to either change the OS locale to a matching native locale (Persian in this case) or use the built-in function Sys.setlocale(locale = "Persian") to change the R session's locale.
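A minimal sketch of that route (the locale name "Persian" is Windows-specific; adjust for your OS):
Sys.setlocale(locale = "Persian")   # change the session locale first
source(script, encoding = "UTF-8")  # now parses without the INCOMPLETE_STRING error
letters_fa                          # Farsi letters display correctly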
Alternatively, use source() without specifying the encoding, and then fix the vector's encoding with Encoding():
source(script)
letters_fa
# [1] "الÙ\u0081" "ب" "Ù¾" "ت" "Ø«"
# [6] "ج" "چ" "ح" "خ" "ر"
# [11] "ز" "د"
Encoding(letters_fa) <- "UTF-8"
letters_fa
# [1] "الف" "ب" "پ" "ت" "ث" "ج" "چ" "ح" "خ" "ر" "ز" "د"

Reading binary file into R

I'm trying to read a binary file into R, but this file has rows of data written in binary code. It doesn't store one full column's data contiguously; instead the data is stored as rows (records). Here's what my data looks like:
Bytes 1-4: int ID
Byte 5: char response character
Bytes 6-9: int Resp Dollars
Byte 10: char Type char
Can anyone help me figure out how to read this file into R?
Here is the code I have tried so far. I tried a couple of things with limited success. Unfortunately, I can't post any of the data on public sites, apologies. I'm relatively new to R, so I need some help in terms of how to improve the code.
> binfile = file("File Location", "rb")
> IDvals = readBin(binfile, integer(), size=4, endian = "little")
> Responsevals = readBin(binfile, character (), size = 5)
> ResponseDollarsvals = readBin (binfile, integer (), size = 9, endian= "little")
Error in readBin(binfile, integer(), size = 9, endian = "little") :
size 9 is unknown on this machine
> Typevals = readBin (binfile, character (), size=4)
> binfile1= cbind(IDvals, Responsevals, ResponseDollarsvals, Typevals)
> dimnames(binfile1)[[2]]
[1] "IDvals" "Responsevals" "ResponseDollarsvals" "Typevals"
> colnames(binfile1)=binfile
Error in `colnames<-`(`*tmp*`, value = 4L) :
length of 'dimnames' [2] not equal to array extent
You could open the file as a raw (binary) connection, then issue readBin() or readChar() calls to get each field, appending each value to a column as you go.
my.file <- file('path', 'rb')
id <- integer(0)
response <- character(0)
...
Loop around this block:
id <- c(id, readBin(my.file, integer(), size = 4, endian = 'little'))
response <- c(response, readChar(my.file, nchars = 1))
...
readChar(my.file, nchars = 1)  # for UNIX newlines; use nchars = 2 for Windows newlines
Then create your data frame.
See here: http://www.ats.ucla.edu/stat/r/faq/read_binary.htm
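Putting that together, a self-contained sketch of the loop, assuming 10-byte records with no separators between them, little-endian integers, and the field layout described in the question (the file name and column names are illustrative):
con <- file("records.bin", "rb")   # placeholder path
ids     <- integer(0)
resp    <- character(0)
dollars <- integer(0)
type    <- character(0)
repeat {
  id <- readBin(con, integer(), n = 1, size = 4, endian = "little")
  if (length(id) == 0) break                        # readBin returns integer(0) at EOF
  ids     <- c(ids, id)                             # bytes 1-4: int ID
  resp    <- c(resp, readChar(con, nchars = 1))     # byte 5: response character
  dollars <- c(dollars, readBin(con, integer(), n = 1, size = 4, endian = "little"))  # bytes 6-9
  type    <- c(type, readChar(con, nchars = 1))     # byte 10: type character
}
close(con)
df <- data.frame(ID = ids, Response = resp, RespDollars = dollars, Type = type)
Growing vectors inside a loop is slow for large files; preallocating, or reading the whole file once with readBin(..., what = "raw") and slicing it, would scale better.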

How to use a non-ASCII symbol (e.g. £) in an R package function?

I have a simple function in one of my R packages, with one of the arguments symbol = "£":
formatPound <- function(x, digits = 2, nsmall = 2, symbol = "£") {
  paste(symbol, format(x, digits = digits, nsmall = nsmall))
}
But when running R CMD check, I get this warning:
* checking R files for non-ASCII characters ... WARNING
Found the following files with non-ASCII characters:
formatters.R
It's definitely the £ symbol that causes the problem. If I replace it with a legitimate ASCII character, like $, the warning disappears.
Question: How can I use £ in my function argument without incurring an R CMD check warning?
Looks like "Writing R Extensions" covers this in Section 1.7.1 "Encoding Issues".
One of the recommendations in this page is to use the Unicode encoding \uxxxx. Since £ is Unicode 00A3, you can use:
formatPound <- function(x, digits = 2, nsmall = 2, symbol = "\u00A3") {
  paste(symbol, format(x, digits = digits, nsmall = nsmall))
}
formatPound(123.45)
[1] "£ 123.45"
As a workaround, you can use the intToUtf8() function:
# this causes errors (non-ASCII chars)
f <- function(symbol = "➛") NULL
# this also causes errors in Rd files (non-ASCII chars)
f <- function(symbol = "\u279B") NULL
# this is OK
f <- function(symbol = intToUtf8(0x279B)) NULL
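For completeness, a tiny usage sketch of the workaround (the function name prefixArrow is hypothetical):
prefixArrow <- function(x, symbol = intToUtf8(0x279B)) paste(symbol, x)  # hypothetical helper
prefixArrow("done")
# [1] "➛ done"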
