I used FROM_GeoJson() to get GeoJSON data from a URL.
The content contains Korean text, and I set the default encoding in the RStudio global options to "UTF-8".
But the text comes out broken, like "湲몄뙂怨듭썝", which should read "길쌈공원".
Do you have any idea how to solve this problem?
(I'm not sure if it's an encoding problem.)
library(geojsonR)
url <- "https://icloudgis.incheon.go.kr/server/rest/services/ParkScore_Dynamic/MapServer/1/query?outFields=*&where=1%3D1&f=geojson"
file_json <- FROM_GeoJson(url_file_string = url)
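Would parsing the same response with jsonlite, which treats JSON as UTF-8 per the spec, sidestep a locale misreading? A sketch of what I mean (untested on my side):

library(jsonlite)

url <- "https://icloudgis.incheon.go.kr/server/rest/services/ParkScore_Dynamic/MapServer/1/query?outFields=*&where=1%3D1&f=geojson"
parks <- fromJSON(url)
# inspect the attribute table to check whether the Korean text survives
head(parks$features$properties)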
I am having trouble with the German umlauts (ä, ü, ö) and other special characters when using osmdata in R.
I can successfully get the data via a query (notice the ü in the bounding box in the first line; that part works fine):
# install.packages("osmdata")
library(magrittr)  # for the %>% pipe

bw <- osmdata::getbb("Baden-Württemberg") %>%
  osmdata::opq(timeout = 25 * 100) %>%
  osmdata::add_osm_feature(
    key = "admin_level",
    value = "4"
  ) %>%
  osmdata::osmdata_sf()
Having a look at the data, one can see that the umlauts aren't displayed correctly.
View(bw$osm_multipolygons)
Consequently, searching by "name" doesn't work anymore:
dplyr::filter(bw$osm_multipolygons, name == "Tirol")
dplyr::filter(bw$osm_multipolygons, name == "Baden-Württemberg")
Tirol is working (no umlaut), Baden-Württemberg (with the ü) isn't.
I'm running R on a German Windows 10; R itself is running in English.
Best regards
The current working solution for me is described here:
Importing Open Street Map data gives wrong encoding
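I don't know exactly which function that answer uses, but the general shape of the repair seems to be re-declaring the encoding of the affected columns; a sketch, assuming the name bytes really are UTF-8 that got tagged with the native latin1 encoding:

# Re-declare the bytes as UTF-8 instead of letting R read them as latin1.
fix_encoding <- function(x) {
  if (is.character(x)) Encoding(x) <- "UTF-8"
  x
}

bw$osm_multipolygons$name <- fix_encoding(bw$osm_multipolygons$name)
dplyr::filter(bw$osm_multipolygons, name == "Baden-Württemberg")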
It is really annoying because:
a) I am using a function which I don't understand, and
b) it seems to work for other people (#mrgrund).
On the other hand, it seems I am not the only person with this problem.
Best regards
In R, I am extracting data from PDF tables using the tabulizer library, and the names are in Nepali.
After extracting, I get this table:
[1]: https://i.stack.imgur.com/Ltpqv.png
Now I want to change the names in column 2 to their English equivalents.
Is there any way to do this in R?
The R code I wrote was:
library(tabulizer)

location <- "https://citizenlifenepal.com/wp-content/uploads/2019/10/2nd-AGM.pdf"
out <- extract_tables(location, pages = 113)
# write.table(out, file = "try.txt")
final <- do.call(rbind, out)
final <- as.data.frame(final)  # creating the data frame
col_name <- c("S.No.", "Types of Insurance", "Inforce Policy Count", "",
              "Sum Assured of Inforce Policies", "", "Sum at Risk", "",
              "Sum at Risk Transferred to Re-Insurer", "",
              "Sum At Risk Retained By Insurer", "")
names(final) <- col_name
final <- final[-1, ]
write.csv(final, file = "/cloud/project/Extracted_data/Citizen_life.csv",
          row.names = FALSE)
View(final)
It appears that the document is using a non-Unicode encoding. This web site, https://www.ashesh.com.np/preeti-unicode/, can convert some Nepali encodings to Unicode, which would display properly in R, assuming you have the right fonts loaded. When I tried it on the output of your code, it produced something that looked okay to me, but I don't know Nepali:
> out[[1]][1,2]
[1] ";fjlws hLjg aLdf"
When I convert the contents of that string, I get
सावधिक जीवन बीमा
which looks to me like the text on that page of the document. If it's actually rendered correctly, then converting it to English will need a Nepali speaker to do the translation: hopefully that's you, but if I use Google Translate, it gives
Term life insurance
So here's my suggestion: contact the owner of the www.ashesh.com.np website and find out if they can give you the conversion rules. Write an R function to implement them if you can't find one written by someone else. Then do the English translations manually.
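If you do get the rules, the converter itself can stay small. A sketch of the shape such a function might take; the substitution pairs below are made-up placeholders, not real Preeti rules (the real mapping is larger and also reorders some vowel signs):

preeti_to_unicode <- function(x) {
  # hypothetical rules for illustration only
  rules <- c("hLjg" = "जीवन", "aLdf" = "बीमा")
  for (from in names(rules)) {
    x <- gsub(from, rules[[from]], x, fixed = TRUE)
  }
  x
}

preeti_to_unicode(";fjlws hLjg aLdf")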
I'm trying to load a dataset into R using an API that lets me run a query and returns the data I need (I can't configure anything on the server side).
I know it has something to do with encoding. When I check the string from my dataframe in R, it shows ENC: UTF-8 "Cosmética".
When I copy the source string "Cosmética", it shows latin1.
How can I get the UTF-8 string properly formatted like the latin1 one?
I've tried this:
Sys.setlocale("LC_ALL", "Spanish")
and directly on the string:
Encoding(Description) <- "latin1"
Unfortunately I can't get it to work. Any ideas are welcome! Thanks.
You can use iconv to change to encoding of the string:
iconv(mystring, to = "ISO-8859-1")
# [1] "Cosmética"
ISO 8859-1 is the common character encoding in Western Europe.
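If iconv has to guess the source encoding on your platform, it may be safer to state it explicitly; here I'm assuming the string really is UTF-8, as your ENC check suggested:

iconv(mystring, from = "UTF-8", to = "ISO-8859-1")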
I'm currently building a Shiny application that needs to be translated into different languages. I have the whole structure in place, but I'm struggling to get values such as "Validació" that contain accents.
The structure I've followed is the following:
I have a dictionary, which is simply a CSV with the translations: there's a key and then a column for each language. Its structure is the following:
key, cat, en
"selecció", "selecció", "Selection"
"Diferències","Diferències", "Differences"
"Descarregar","Descarregar", "Download"
"Diagnòstics","Diagnòstics", "Diagnoses"
I have a script that, whenever dictionary.csv is modified, generates a .bin file that is later loaded by the code.
In strings.R I have all the strings that appear in the code, and I use a function to translate from the current language to the one I want. The function is the following:
Code:
tr <- function(text) {
  sapply(text, function(s) translation[[s]][["cat"]], USE.NAMES = FALSE)
}
When I translate something, since I'm doing it in another file, I assign it to another variable, something like:
str_seleccio <- tr('Selecció')
The problem I'm facing is that, for example, tr('Selecció') returns the correct answer if I execute it in the RStudio console, but in the Shiny application it comes out as NULL. If the word I translate has no accents, such as "Hello", tr("Hello") returns the correct answer in the Shiny application too, and I can see it through the code.
So tr(word) gets the correct value, but on assignment it "loses the value", and I'm a bit lost as to how to fix it.
I know you can do something like Encoding(str_seleccio) <- "UTF-8", but in this case it isn't working. It used to help for plain words, but since the lookup now returns NULL, there is nothing to re-encode.
Any idea? Any suggestion? What I would like is to add something to the tr function.
The main idea comes from this repository, which, if you take a look, is about the simplest version you can build, but its author has problems with UTF-8 too:
https://github.com/chrislad/multilingualShinyApp
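For what it's worth, the kind of change I have in mind for tr would be something like this, assuming (I have not verified this) that the keys coming from my other files are not marked as UTF-8 while the dictionary keys are:

tr <- function(text) {
  # normalize the lookup keys to UTF-8 before indexing into the list
  sapply(enc2utf8(text),
         function(s) translation[[s]][["cat"]],
         USE.NAMES = FALSE)
}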
As suggested in http://shiny.rstudio.com/articles/unicode.html, (re)save all your files with UTF-8 encoding.
Additionally, within updateTranslation.R, change:
translationContent <- read.delim("dictionary.csv", header = TRUE, sep = "\t", as.is = TRUE)
to:
translationContent <- read.delim("dictionary.csv", header = TRUE, sep = "\t", as.is = TRUE, fileEncoding = "UTF-8")
A warning: when you (re)save ui.R, your c-cedilla might get destroyed. Just re-insert it if that happens.
Happy easter :)
I am fairly new to R and am having trouble pulling data from the Forbes website.
My current code is:
library(XML)
url <- "http://www.forbes.com/global2000/list/#page:1_sort:0_direction:asc_search:_filter:All%20industries_filter:All%20countries_filter:All%20states"
data <- readHTMLTable(url)
However, when I change the page number in the URL from 1 to 2 (or to any other number), the data that is pulled is the same as for page 1. For some reason R does not pull the data from the requested page. If you manually paste the link with a specific page number into a browser, it works fine.
Does anyone have an idea as to why this is happening?
Thanks!
This appears to be an issue caused by the URL fragment, the part after the pound sign. A fragment creates an anchor on a page and directs your browser to jump to that particular location; crucially, the fragment is handled by the browser alone and is never sent to the server.
readHTMLTable() therefore receives the same page-1 HTML no matter what the fragment says. See if you can find a version of the same table that does not require # in the URL.
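One quick way to convince yourself of this is to strip the fragment and fetch again; since the part after # never reaches the server, the result should be the same (a sketch, assuming url is the string from your question):

base_url <- sub("#.*$", "", url)      # drop everything from "#" onward
page_with <- readHTMLTable(url)       # with the fragment
page_without <- readHTMLTable(base_url)  # without it
identical(page_with, page_without)    # expected TRUE: same HTML either way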
Here are some helpful links that might shed light on what you are experiencing:
What is it when a link has a pound "#" sign in it
https://support.microsoft.com/kb/202261/en-us
If I come across anything else that's helpful, I'll share it in follow-up comments.
What you might need to do is use the URLencode() function in R.
kdb.url <- "http://m1:5000/q.csv?select from data0 where folio0 = `KF"
kdb.url <- URLencode(kdb.url)
df <- read.csv(kdb.url, header=TRUE)
You might have meta-characters in your URL too. (Mine are the spaces and the backtick.)
> kdb.url
[1] "http://m1:5000/q.csv?select%20from%20data0%20where%20folio0%20=%20%60KF"
They think of everything, those R guys.