jsonlite::fromJSON failed to connect to port 443 in quantmod getFX() - R

I needed a function that automatically gets the exchange rate from a website (oanda.com), so I used the quantmod package and built a function based on it.
library(quantmod)
ForeignCurrency <- "MYR" # has to be a character
getExchangeRate <- function(ForeignCurrency){
  Conv <- paste("EUR/", ForeignCurrency, sep = "")
  getFX(Conv, from = Sys.Date() - 179, to = Sys.Date())
  Conv2 <- paste0("EUR", ForeignCurrency)
  Table <- as.data.frame(get(Conv2))
  ExchangeRate <- 1/mean(Table[, 1])
  ExchangeRate
}
ExchangeRate <- getExchangeRate(ForeignCurrency)
ExchangeRate
On my personal PC it works perfectly and does what I want. If I run this on my work PC, I get the following error:
Warning: Unable to import “EUR/MYR”.
Failed to connect to www.oanda.com port 443: Timed out
I have already googled a lot; it seems to be a firewall problem, but none of the suggestions I found there works. After checking the getFX() function, the problem seems to be in the jsonlite::fromJSON call that getFX() uses.
Has anyone faced a similar problem? I am quite familiar with R, but I have no expertise with firewalls/ports. Do I have to change something in the R settings, or is it a problem independent of R where something in the proxy settings needs to be changed?
Can you please help :-) ?

The code below shows a workaround for getFX() in an enterprise context, where you often have to go through a proxy to reach the internet.
library(httr)
# URL that you can find inside https://github.com/joshuaulrich/quantmod/blob/master/R/getSymbols.R
url <- "https://www.oanda.com/fx-for-business//historical-rates/api/data/update/?&source=OANDA&adjustment=0&base_currency=EUR&start_date=2022-02-17&end_date=2022-02-17&period=daily&price=mid&view=table&quote_currency_0=VND"
# Original call inside the quantmod library:
#   # Fetch data (jsonlite::fromJSON will handle connection)
#   tbl <- jsonlite::fromJSON(oanda.URL, simplifyVector = FALSE)
# Add use_proxy() with your proxy address and proxy port to get through the proxy
response <- httr::GET(url, use_proxy("XX.XX.XX.XX", XXXX))
status <- status_code(response)
if (status == 200) {
  content <- httr::content(response)
  # Use jsonlite to get the single attributes: quote currency, exchange rate (= average) and base currency
  exportJson <- jsonlite::toJSON(content, auto_unbox = TRUE)
  getJsonObject <- jsonlite::fromJSON(exportJson, flatten = FALSE)
  print(getJsonObject$widget$quoteCurrency)
  print(getJsonObject$widget$average)
  print(getJsonObject$widget$baseCurrency)
}
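If you need the same 1/mean() figure as the getExchangeRate() function in the question, you can compute it from the parsed object. This is only a sketch and assumes the payload exposes the averaged rate as widget$average, as printed above; the exact structure of the oanda response may change.
# Illustrative follow-up, mirroring the 1/mean() step from the question
avg <- as.numeric(unlist(getJsonObject$widget$average))
ExchangeRate <- 1/mean(avg)
ExchangeRate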

Related

R curl::has_internet() FALSE even though there is an internet connection

My problem arose when downloading data from EuroSTAT using the R package eurostat:
# Population data by NUTS3
pop_data <- subset(eurostat::get_eurostat("demo_r_pjangrp3", time_format = "num"),
                   (age == "TOTAL") & (sex == "T") &
                   (nchar(trimws(geo)) == 5))[, c("time","geo","values")]
# Error in eurostat::get_eurostat("demo_r_pjangrp3", time_format = "num") :
#   You have no internet connection, please reconnect!
Searching, I found out that it is this statement (in the eurostat package code) that causes the problem:
if (!curl::has_internet()) { stop("You have no internet connection, please reconnect!") }
However, I do have an internet connection and can, e.g., ping www.eurostat.eu.
I have tried curl::has_internet() on different computers, all with an internet connection. On some it works (returns TRUE); on others it doesn't.
I have talked with our IT department, and we checked whether it could be a firewall problem. Removing the firewall did not solve the problem.
Unfortunately, I am ignorant about network settings, so when trying to read the documentation for the curl package I am lost.
Downloading data from EuroSTAT using the command above has worked for at least the last two years; for me the problem arose at the start of 2020 (January 7).
I hope someone can help with this, as downloading population data from EuroSTAT is a mandatory part of much of my/our regular work.
In the special case of curl::has_internet, you don't need to modify the function to return a specific value. It has its own enclosing environment, from which it reads a state variable indicating whether a proxy connection exists. You can modify that state variable instead.
assign("has_internet_via_proxy", TRUE, environment(curl::has_internet))
curl::has_internet() # will always be TRUE
# [1] TRUE
It's difficult to tell without knowing your settings, but there are a couple of things to try. This issue has been noted and possibly addressed in a development version of curl, which you can install with:
install.packages("https://github.com/jeroen/curl/archive/master.tar.gz", repos = NULL)
You could also try updating libcurl, the C library that the R curl package wraps; you can check which version you are currently running as shown below. The problem you describe seems to be more common with older versions of libcurl.
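To see which libcurl version your R session is linked against, query the curl package directly:
# Version of libcurl that the R curl package is using
curl::curl_version()$version
# Full details (SSL backend, supported protocols, etc.)
str(curl::curl_version())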
If all else fails, you could overwrite the curl::has_internet function like this:
remove_has_internet <- function()
{
  unlockBinding(sym = "has_internet", asNamespace("curl"))
  assign("has_internet", function() return(TRUE), envir = asNamespace("curl"))
  lockBinding(sym = "has_internet", asNamespace("curl"))
}
Now if you run remove_has_internet(), any call to curl::has_internet() will return TRUE for the remainder of your R session. However, this will only work if other curl functionality is working properly with your network settings. If it isn't then you will get other strange errors and should abandon this approach.
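As a quick sanity check before relying on the override, you can fire a plain curl request through your current network settings; the host below is just an example, and any site you can normally reach will do.
# If this errors out, curl itself cannot get through your network/proxy,
# and forcing has_internet() to TRUE will only move the failure elsewhere.
res <- curl::curl_fetch_memory("https://ec.europa.eu")
res$status_code # 200 means the connection itself works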
If, for any reason, you want to restore the functionality of the original curl::has_internet without restarting an R session, you can do this:
restore_has_internet <- function()
{
  unlockBinding(sym = "has_internet", asNamespace("curl"))
  assign("has_internet",
         function() {!is.null(curl::nslookup("r-project.org", error = FALSE))},
         envir = asNamespace("curl"))
  lockBinding(sym = "has_internet", asNamespace("curl"))
}
I just ran into this problem, so here's an additional solution, blending both previous answers. It's reversible and checks whether we actually have internet access, to avoid bigger problems later.
# old value
op = get("has_internet_via_proxy", environment(curl::has_internet))
# check for internet
np = !is.null(curl::nslookup("r-project.org", error = FALSE))
assign("has_internet_via_proxy", np, environment(curl::has_internet))
Within a function, this line can be added to automatically revert the process:
on.exit(assign("has_internet_via_proxy", op, environment(curl::has_internet)))
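Putting the pieces together, a small wrapper could look like the sketch below; the helper name is made up, and the commented-out eurostat call only illustrates how you might use it with the code from the question.
with_proxy_internet <- function(expr) {
  # remember the old value and restore it when the function exits
  op <- get("has_internet_via_proxy", environment(curl::has_internet))
  on.exit(assign("has_internet_via_proxy", op, environment(curl::has_internet)))
  # only force TRUE if a DNS lookup actually succeeds
  np <- !is.null(curl::nslookup("r-project.org", error = FALSE))
  assign("has_internet_via_proxy", np, environment(curl::has_internet))
  expr # lazily evaluated here, i.e. after the override is in place
}
# Hypothetical usage with the call from the question:
# pop_data <- with_proxy_internet(
#   eurostat::get_eurostat("demo_r_pjangrp3", time_format = "num"))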

How to use proxies with URLs in R?

I am trying to use proxies with my request URLs in R. The site changes my requested URL from "www.abc.com/games" to "www.abc.com/unsupportedbrowser".
The proxies are working, since I tested them in Python. However, I would like to implement this in R.
I tried using the "httr" and "crul" libraries in R:
# Using the httr library
r <- GET(url, use_proxy("proxy_url", port = 12345, username = "abc", password = "xyz"))
text <- content(r, "text")
# Using crul
res <- HttpClient$new(
  url,
  proxies = proxy("http://proxy_url:12345", "abc", "xyz")
)
out <- res$get()
text <- out$parse("UTF-8")
Is there some other way to implement the above using proxies, or how can I avoid the requested URL changing from "www.abc.com/games" to "www.abc.com/unsupportedbrowser"?
I also tried using the "requestsR" package.
However, when I try something like this:
library(dplyr)
library(reticulate)
library(jsonlite)
library(requestsR)
library(rvest)
library(listviewer)
proxies <-
"{'http': 'http://abc:xyz#proxy_url:12345',
'https': 'https://abc:xyz#proxy_url:12345'}" %>%
convert_dictionary_to_list()
res <- Get(url, proxy=proxies)
It gives an error: "r$get : $ operator is invalid for atomic vectors"
I don't understand why it raises such an error. Please let me know if this can be resolved.
Thanks!
I was able to solve the above problem by passing the user_agent argument to my GET() call; see the sketch below.
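For reference, a rough sketch of what that looks like with httr; the user-agent string and the proxy details below are placeholders, not values from the original post.
library(httr)
# Present a browser-like user agent so the site does not redirect to /unsupportedbrowser
r <- GET(url,
         user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
         use_proxy("proxy_url", port = 12345, username = "abc", password = "xyz"))
text <- content(r, "text")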

R: RCurl 400 Bad Request

I am trying to send a request to an API using the RCurl library.
My code:
start = "2018-07-30"
end = "2018-08-15"
api_request <- paste("https://api-metrika.yandex.ru/stat/v1/data.csv?id=34904255&date1=",
start,
"&date2=",
end,
"&dimensions=ym:s:searchEngine&metrics=ym:s:visits&dimensions=ym:s:<attribution>SearchPhrase&filters=ym:s:<attribution>SearchPhrase!~'some|phrases|here'&limit=100000&oauth_token=OAuth_token_here", sep="")
s <- getURL(api_request)
Every time I try it, I get the response "Error 400", or "Bad Request" if I use getURLContent instead. When I just open this URL in my browser, it works correctly.
I still couldn't find any solution to this problem, so if somebody knows something about it, please help me =)
There are several approaches you can use, assuming the URL is right. First, you can add the following option to the getURL function. Setting followlocation to TRUE lets the request follow any "Location:" header that the server returns as part of the HTTP headers; processing is recursive, so libcurl will follow a chain of such redirects.
> s <- getURL(url1, .opts=curlOptions(followlocation = TRUE))
If this does not work, an alternative is to use the XML package and call the htmlParse method instead of getURL:
> library(XML)
> s <- htmlParse(api_request)
Another approach would be to use the httr package and call the GET function:
> library(httr)
> s <- GET(api_request)
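With httr you can also inspect what the server actually sends back, which often makes the cause of a 400 visible; a small diagnostic sketch continuing from the GET call above:
status_code(s)     # 400 means the server rejected the request itself
content(s, "text") # the API's error message usually explains why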

RCurl error: Connection reset by peer

I am scraping a website for links using the XML and RCurl packages of R. I need to make multiple calls (several thousand).
The script I use is in the following form:
raw <- getURL("http://www.example.com",encoding="UTF-8",.mapUnicode = F)
parsed <- htmlParse(raw)
links <- xpathSApply(parsed, "//a/@href")
...
...
return(links)
When used a single time, there is no problem.
However, when applied to a list of URLs (using sapply), I receive the following error:
Error in function (type, msg, asError = TRUE) : Recv failure:
Connection reset by peer
If I retry the same request later, it usually works.
I am new to curl and web scraping, and not sure how to fix or avoid this.
Thank you in advance
Try something like this:
curl <- getCurlHandle() # reuse one handle across requests
for (i in 1:length(links)) {
  WebPage <- try(getURL(links[[i]], ssl.verifypeer = FALSE, curl = curl), silent = TRUE)
  while (inherits(WebPage, "try-error")) {
    Sys.sleep(1)
    WebPage <- try(getURL(links[[i]], ssl.verifypeer = FALSE, curl = curl), silent = TRUE)
  }
}

R produces "unsupported URL scheme" error when getting data from https sites

R version 3.0.1 (2013-05-16) on Windows 8, knitr version 1.5, RStudio 0.97.551.
I am using knitr to write the markdown report of my R code.
As part of my analysis I download various data sets from the web. knitr is totally fine with getting data from http sites, but not from https ones, where it generates an "unsupported URL scheme" message.
I know that when using the download.file function on a Mac, the method parameter has to be set to "curl" to get data from an https URL; however, this doesn't help when using knitr.
What do I need to do so that knitr will gather data from https websites?
Edit:
Here is the code chunk that returns an error in knitr but runs without error in R.
```{r}
fileurl <- "https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv"
download.file(fileurl, destfile = "C:/Users/xxx/yyy")
```
You can use https with the download.file() function by passing "curl" to the method argument:
download.file(url,destination,method="curl")
Edit (May 2016): As of R 3.3.0, download.file() should handle SSL websites automatically on all platforms, making the rest of this answer moot.
You want something like this:
library(RCurl)
data <- getURL("https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv",
ssl.verifypeer=0L, followlocation=1L)
That reads the data into memory as a single string. You'll still have to parse it into a dataset in some way. One strategy is:
writeLines(data,'temp.csv')
read.csv('temp.csv')
You can also separate out the data directly without writing to file:
read.csv(text=data)
Edit: A much easier option is actually to use the rio package:
library("rio")
import("https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv")
This will read directly from the HTTPS URL and return a data.frame.
Use setInternet2(use = TRUE) before using the download.file() function. It works on Windows 7.
setInternet2(use = TRUE)
download.file(url, destfile = "test.csv")
I am sure you have already found a solution to your problem by now.
I was working on an assignment just now and ended up getting the same error. I tried some of the tricks, but they did not work for me, maybe because I am working on a Windows machine.
Anyhow, I changed the link to http: rather than https: and that did the trick.
Following is a chunk of my code:
if (!file.exists("./PeerAssesment2")) {dir.create("./PeerAssessment2")}
fileURL <- "http://d396qusza40orc.cloudfront.net/repdata%2Fdata%2FStormData.csv.bz2"
download.file(fileURL, dest = "./PeerAssessment2/Data.zip")
install.packages("R.utils")
library(R.utils)
if (!file.exists("./PeerAssessment2/Data")) {
bunzip2 ("./PeerAssessment2/Data.zip", destname = "./PeerAssessment2/Data")
}
list.files("./PeerAssessment2")
noaaData <- read.csv ('./PeerAssessment2/Data')
Hope this helps.
I had the same issue with knitr and download.file() with an https URL, on Windows 8.
You could try setInternet2(TRUE) before using the download.file() function. However, I'm not sure that this fix works on Unix-like systems.
setInternet2(TRUE) # set the R_WIN_INTERNET2 to TRUE
fileurl <- "https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv"
download.file(fileurl, destfile = "C:/Users/xxx/yyy") # now it should work
Source: R documentation (?download.file):
Note that https:// URLs are only supported if --internet2 or environment variable R_WIN_INTERNET2 was set or setInternet2(TRUE) was used (to make use of Internet Explorer internals), and then only if the certificate is considered to be valid.
I had the same problem with an https URL: the following code ran perfectly in R but produced "unsupported URL scheme" when knitting to HTML:
temp = tempfile()
download.file("https://d396qusza40orc.cloudfront.net/repdata%2Fdata%2Factivity.zip", temp)
data = read.csv(unz(temp, "activity.csv"), colClasses = c("numeric", "Date", "numeric"))
I tried all the solutions posted here and nothing worked; in my absolute desperation I just eliminated the "s" in the "https" in the URL, and everything worked fine...
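If you want to make that switch programmatically rather than editing the string by hand (keeping in mind that you lose the encrypted connection), a one-liner does it:
url  <- "https://d396qusza40orc.cloudfront.net/repdata%2Fdata%2Factivity.zip"
url2 <- sub("^https://", "http://", url) # drop the "s"
download.file(url2, temp) # temp <- tempfile(), as in the snippet above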
Using the R download package takes care of the quirky details typically associated with file downloads. For your example, all you would have needed to do is:
```{r}
library(download)
fileurl <- "https://dl.dropbox.com/u/7710864/data/csv_hid/ss06hid.csv"
download(fileurl, destfile = "C:/Users/xxx/yyy")
```
