RSocrata package with Chicago data ignores my token (R)

I cannot raise my download throttling limit by using the token issued to my app (on the data.cityofchicago.org portal, where I had to register).
Error 1:
token <- "___my_app_token__"
fdf <- read.socrata("https://data.cityofchicago.org/resource/7edu-s3u7.csv?$where=station_name=\"Foster Weather Station\"", token)
2016-10-06 10:39:53.685 getResponse:
Error in httr GET: 403 https://data.cityofchicago.org/resource/7edu-s3u7.csv?%24where=station_name%3D%22Foster%20Weather%20Station%22&app_token=%2524%2524app_token%3D___my_app_token_______
I have NO IDEA where the first token fragment (%2524%2524) came from. Does anyone know? Maybe the author of the package is here?
Non-error:
fdf <- read.socrata("https://data.cityofchicago.org/resource/7edu-s3u7.csv?$where=station_name=\"Foster Weather Station\"")
WITHOUT A TOKEN (and therefore without the higher rate limit) it works perfectly well!
And the 'open source' https://github.com/Chicago/RSocrata/blob/master/R/RSocrata.R doesn't answer the question either.

It looks like the syntax you're using to pass your app token is wrong. I'm no R expert, but I found this example in the documentation for the RSocrata package:
df <- read.socrata("http://soda.demo.socrata.com/resource/4334-bgaj.csv",
                   app_token = "__my_app_token__")
Try passing your app token as a named argument instead of a positional one, and see if that helps.
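As for the mysterious %2524%2524: that pattern is consistent with a string being URL-encoded twice ($ encodes to %24, and re-encoding turns the % into %25). A minimal base-R sketch of the effect; the $$app_token= prefix here is purely illustrative, not something RSocrata documents:

```r
# "$" encodes to "%24"; encoding the result again turns "%" into "%25",
# so "$" ends up as "%2524" -- the fragment seen in the 403 URL above
once  <- URLencode("$$app_token=___my_app_token__", reserved = TRUE)
twice <- URLencode(once, reserved = TRUE)
print(once)   # contains %24%24
print(twice)  # contains %2524%2524
```

This suggests the token argument was merged into the query string and then encoded a second time, which is why the server rejected it.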


academictwitteR - Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, : something went wrong. Status code: 403

I have just received Academic Twitter developer privileges and am attempting to scrape some tweets. I updated RStudio, regenerated a new bearer token once I got Academic Twitter access, and get_bearer() returns my new bearer token. However, I continue to get the following error:
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 403
In addition: Warning messages:
1: Recommended to specify a data path in order to mitigate data loss when ingesting large amounts of data.
2: Tweets will not be stored as JSONs or as a .rds file and will only be available in local memory if assigned to an object.
Additionally, I have tried specifying a data path, but I am confused as to what this means. I think that's where my issue lies. Does the data path mean a specific file path on my computer?
Below is the code I was attempting to use. This code worked previously with my professor's bearer token, which they used just to show the output:
tweets <- get_all_tweets(
  query = "#BlackLivesMatter",
  start_tweets = "2020-01-01T00:00:00Z",
  end_tweets = "2020-01-05T00:00:00Z",
  n = 100
)
Thanks in advance!
Status code 403 means "forbidden". You may want to check the error codes reference page of the Twitter API.
Perhaps your bearer token is misspelled?
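On the data_path warnings: academictwitteR can write each page of results to disk as JSON so a crash doesn't lose already-fetched data. A sketch, assuming the package's data_path and bind_tweets arguments as described in its warnings; the directory name is arbitrary:

```r
# create a local directory where academictwitteR can store raw JSON pages
data_path <- file.path(tempdir(), "blm_tweets")
dir.create(data_path, showWarnings = FALSE)

# not run: requires a valid Academic Research bearer token
# tweets <- academictwitteR::get_all_tweets(
#   query        = "#BlackLivesMatter",
#   start_tweets = "2020-01-01T00:00:00Z",
#   end_tweets   = "2020-01-05T00:00:00Z",
#   n            = 100,
#   data_path    = data_path,   # pages saved here as JSON files
#   bind_tweets  = FALSE        # read them back later from data_path
# )
```

So yes, the data path is just a directory on your machine; passing one makes the first warning go away and keeps a copy of the raw data outside of memory.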

How to create a user agent?

I'm trying out this function from the edgarWebR package:
x <- paste0("https://www.sec.gov/Archives/edgar/data/",
            "933691/000119312517247698/0001193125-17-247698-index.htm")
try(filing_information(x))
But it returns the following:
No encoding supplied: defaulting to UTF-8.
Error in check_result(res) :
EDGAR request blocked from Undeclared Automated Tool.
Please visit https://www.sec.gov/developer for best practices.
See https://mwaldstein.github.io/edgarWebR/index.html#ethical-use--fair-access for your responsibilities
Consider also setting the environment variable 'EDGARWEBR_USER_AGENT
So I went to the SEC site, and the information there tells me to declare my user agent in the request headers. I'm new to this. How do I create a user agent?
I use the edgar package, so I'm not sure this is helpful, but this is how I would do it with the edgar package:
library("edgar")
useragent <- "Your Name Contact@domain.com"
info <- getFilingInfo('933691', 2021, quarter = c(1, 2, 3, 4),
                      form.type = 'ALL', useragent)
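For edgarWebR itself, the error message above points at the EDGARWEBR_USER_AGENT environment variable. A sketch, assuming edgarWebR reads that variable and sends it as the User-Agent header (the name and address are placeholders you must replace with your own):

```r
# declare who you are, per SEC fair-access guidance; the assumption here
# (based on the error message) is that edgarWebR picks this variable up
# and sends it as the User-Agent header on each request
Sys.setenv(EDGARWEBR_USER_AGENT = "Your Name yourname@example.com")

# not run: requires network access to EDGAR
# x <- paste0("https://www.sec.gov/Archives/edgar/data/",
#             "933691/000119312517247698/0001193125-17-247698-index.htm")
# edgarWebR::filing_information(x)

Sys.getenv("EDGARWEBR_USER_AGENT")
```

A user agent is just a text string identifying your client; the SEC asks for a contact name and email so they can reach you if your traffic causes problems.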

Calling the Mastodon API in RStudio

Please can someone help me figure out what is going on? A while ago I successfully used the mastodon library in RStudio to extract some data from the fediverse. Here is the code I used:
tokens <- login("https://mastodon.social/", user = user, pass = password)
"user" is my email address.
It worked well initially, but trying it again, I am getting this annoying error message, which I do not understand:
Error in UseMethod("content", x) :
no applicable method for 'content' applied to an object of class "response"
Please can any good Samaritan out there who has used this library in RStudio help me figure out what is going on? I need to prepare a report on this project. Thanks in advance for your help.
It is possible that another package has masked the function. Calling this in a fresh R session worked:
tokens <- login("https://mastodon.social/", user = user, pass = password)
tokens$instance
[1] "https://mastodon.social/"
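One way to check for masking without restarting is find(), which lists every attached environment that defines a name. A generic sketch of the idea using a base function (mastodon itself is not loaded here; filter/stats::filter stand in for the masked function):

```r
# define a function whose name collides with stats::filter,
# simulating the masking that another package can cause
filter <- function(x) x

# find() reports every attached environment providing the name;
# here it includes both ".GlobalEnv" and "package:stats"
hits <- find("filter")
print(hits)
```

If find() shows more than one source for the function you are calling, use an explicit namespace, e.g. mastodon::login(...), to bypass the mask instead of restarting R.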

How to use proxies with urls in R?

I am trying to use proxies with my request URLs in R. The proxy changes my requested URL from "www.abc.com/games" to "www.abc.com/unsupportedbrowser".
The proxies are working since I tested them in python. However I would like to implement it in R
I tried the "httr" and "crul" libraries in R:
# Using httr
r <- GET(url, use_proxy("proxy_url", port = 12345, username = "abc", password = "xyz"))
text <- content(r, "text")
# Using crul
res <- HttpClient$new(
  url,
  proxies = proxy("http://proxy_url:12345", user = "abc", pwd = "xyz")
)
out <- res$get()
text <- out$parse("UTF-8")
Is there some other way to implement the above using proxies, or how can I avoid the requested URL changing from "www.abc.com/games" to "www.abc.com/unsupportedbrowser"?
I also tried using "requestsR" package
However when I try something like this:
library(dplyr)
library(reticulate)
library(jsonlite)
library(requestsR)
library(rvest)
library(listviewer)
proxies <-
  "{'http': 'http://abc:xyz@proxy_url:12345',
    'https': 'https://abc:xyz@proxy_url:12345'}" %>%
  convert_dictionary_to_list()
res <- Get(url, proxy = proxies)
It gives an error: "r$get : $ operator is invalid for atomic vectors"
I don't understand why it raises this error. Please let me know if this can be resolved.
Thanks!
I was able to solve the problem by adding the user_agent() argument to my GET() call.
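A sketch of combining use_proxy() with user_agent() in httr, assuming the /unsupportedbrowser redirect is triggered by the default (non-browser) User-Agent; the proxy host, port, and credentials below are placeholders from the question:

```r
# a browser-like User-Agent string; sites that sniff the UA may otherwise
# redirect non-browser clients to an "unsupported browser" page
ua_string <- "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"

# not run: needs a live proxy and the real target site
# library(httr)
# r <- GET("https://www.abc.com/games",
#          use_proxy("proxy_url", port = 12345,
#                    username = "abc", password = "xyz"),
#          user_agent(ua_string))
# text <- content(r, "text")

print(ua_string)
```

The two config objects are simply passed as additional arguments to GET(), so the proxy and the spoofed User-Agent apply to the same request.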

Yahoo geocoding API in R

I am trying to do a batch geocode with the Yahoo BOSS api from R.
It is currently throwing a credentials-related error. Any idea how I can get this to succeed?
myapp <- oauth_app("yahoo",
                   key = "my key",
                   secret = "my secret")
yahoo <- oauth_endpoint("get_request_token", "request_auth", "get_token",
                        base_url = "https://yboss.yahooapis.com/geo/placefinder")
token <- oauth1.0_token(myapp, yahoo)
sig <- sign_oauth1.0(myapp, token$oauth_token, token$oauth_token_secret)
GET("https://yboss.yahooapis.com/geo/placefinder", sig)
Unfortunately, Yahoo uses an unusual authentication strategy that isn't compatible with a simple oauth_endpoint() call. You can see the general flow I use in the rydn package that @Scott pointed out.
You might benefit from just using that package, or feel free to leverage the working example I have there in your own stuff.
