Failed to authenticate Google Translate in R

So I tried to use the gl_translate function on 500,000 characters in RStudio, which means I have to authenticate my Google Translate API. The problem is that I set this up about two months ago with my old Google account, and now I'm using a new one.
When I tried to authenticate a new client_id with my new Google account, I got an error message saying that my API had not been enabled yet, even though I had already enabled it. I restarted RStudio and now I get this error message:
2020-01-22 19:01:24 -- Translating html: 147 characters -
2020-01-22 19:01:24> Request Status Code: 403
Error: API returned: Request had insufficient authentication scopes.
It is very frustrating, because I then tried to re-enable the API on my old Google account, which required me to enter my credit card number. I did that as well, and now they are asking me to wait several days.
Can anyone figure out what the problem is?
Here is my R code for authentication:
install.packages("googleAnalyticsR", dependencies = TRUE)
library(googleAnalyticsR)
install.packages("googleLanguageR")
library(googleLanguageR)
install.packages("dplyr")
library(dplyr)
library(tidyverse)
install.packages("googleAuthR")
library(googleAuthR)
client_id <- "107033903887214478396"
private_key <- "-----BEGIN PRIVATE KEY-----\nMIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQChPmvib1v9/CFA\nX7fG8b8iXyS370ivkufMmX30C6/rUNOttA+zMhamS/EO0uaYtPw44B4QdNzRsSGq\nm+0fQ5Sp1SHJaVPkoImuUXZdMlLO73pvY48nMmEFg7deoOZI+CNWZYgIvPY8whMo\nk4vKE2iyuG+pl9MT7dT6dwWNmXDWr8kgxAfryfVEUIeqaf+57Z9g3FfVPLARz4iH\nCaTu55hhbmo/XknUx0hPsrwBMPaNLGl2+o5MU1ZuIkl47EJvdL8CdUuEmb9qJDtv\nUyKqANlwFa7cVW8ij2tjFpjJ7bigRVJsI8odbsEbwmx1b5SLckDOQ7t4l8zERtmo\nUv5fxyNNAgMBAAECggEAApeLyWyL2IXcjPnc7OxG68kGwJQuoW/lnQLcpPcpIUm/\n1Vt/IxzLg2nWGqxmO48xPMLRiOcwA4jq5yCxi56c/avo6qFwUU0JWY2CrxXXge8U\nk0TQ8MrdB2cqI/HHMeYXP1TLfoR3GtvtzemtRhbQyIqxdNL1eC0LDump47BTQYg0\nVPuCxU3zXVIj+Qt0FZa+Pa/nAOGHf5b4ye56L7vxL2BCeRncuHdDcE6Ilkpz79Gv\nkXP1K5j22uEVCmobe1qRlq3BLx2Qimj4h8MI8CKiNS40tGR/oewJ5uMgmeCePYKv\nqSFOwCDvRkw9V2KdGu40WQFEq21mczlv9gMWhp2/EQKBgQDRmBZZM7ugIpo64wx6\nDFYhZo05LmMiwymIfWs2CibzKDeXPuy3OSytvTPVFCkG+RlcYthxAIAn1Z/qJ4UI\n+8c8Zwfg+toYtEa2gTYM2185vmnqQwqmAsaK+4xKZzgfqxie/CBuPzUOZO41q6P8\ni7A2KqXHcDb4SMqnkdGGLk/7+QKBgQDE8dBesgx4DsHFYg1sJyIqKO4d2pnLPkDS\nAzx5xvQuUcVCNTbugOC7e0vGxWmQ/Eqal5b3nitH590m8WHnU9UiE4HciVLe+JDe\nDe5CWzBslnncBjpgiDudeeEubhO7MHv/qZyZXMh73H2NBdO8j0uiGTNbBFoOSTYq\nsFACiCZu9QKBgE2KjMoXn5SQ+KpMkbMdmUfmHt1G0hpsRZNfgyiM/Pf8qwRjnUPz\n/RmR4/ky6jLQOZe6YgT8gG08VVtVn5xBOeaY34tWgxWcrIScrRh4mHROg/TNNMVS\nRY3pnm9wXI0qyYMYGA9xhvl6Ub69b3/hViHUCV0NoOieVYtFIVUZETJRAoGAW/Y2\nQCGPpPfvD0Xr0parY1hdZ99NdRQKnIYaVRrLpl1UaMgEcHYJekHmblh8JNFJ3Mnw\nGovm1dq075xDBQumOBU3zEzrP2Z97tI+cQm3oNza5hyaYbz7aVsiBNYtrHjFTepb\nT1l93ChnD9SqvB+FR5nQ2y07B/SzsFdH5QbCO4kCgYBEdRFzRLvjdnUcxoXRcUpf\nfVMZ6fnRYeV1+apRSiaEDHCO5dyQP8vnW4ewISnAKdjKv/AtaMdzJ5L3asGRWDKU\n1kP/KDBlJkOsOvTkmJ4TxbIhgcSI62/wqDBi5Xqw1ljR2mh8njzRwqDRKs12EtQ0\n9VaUDm7LCNTAskn2SR/o4Q==\n-----END PRIVATE KEY-----\n"
options(googleAuthR.client_id = client_id)
options(googleAuthR.client_secret = private_key)
devtools::reload(pkg = devtools::inst("googleAnalyticsR"))
ga_auth()
In case you need to see what my translation code looks like:
translate <- function(tibble) {
  count <- data.frame(nchar = 0, cumsum = 0) # running character count to stay within API limits
  for (i in 1:nrow(tibble)) {
    des <- pull(tibble[i, 2]) # extract description as a single character string
    if (count$cumsum[nrow(count)] >= 80000) { # API limit check
      print("nearing 100,000 characters per 100 seconds limit, pausing for 100 seconds")
      Sys.sleep(100)
      count <- count[1, ] # reset count file
    }
    if (grepl("^\\s*$", des)) { # if the description is only whitespace, skip it
      trns <- tibble(translatedText = "", detectedSourceLanguage = "", text = "")
    } else { # otherwise request a translation from the API
      trns <- gl_translate(des, target = 'en', format = 'html') # html format to anticipate html descriptions
    }
    tibble[i, 3:4] <- trns[, 1:2] # add translation to the tibble
    nchar <- nchar(pull(tibble[i, 2])) # count the number of characters
    req <- data.frame(nchar = nchar, cumsum = nchar + sum(count$nchar))
    count <- rbind(count, req) # add to count file
    if (nchar > 20000) { # additional API limit safeguard for large descriptions
      print("large description (>20,000 characters), pausing to manage API limit")
      Sys.sleep(100)
      count <- count[1, ] # reset count file
    }
  }
  return(tibble)
}

I figured it out after 24 hours.
Apparently it is really easy; I just followed the steps from this link.
But yesterday I made a mistake: the JSON file I had downloaded was for an OAuth client ID, while what I actually needed was the JSON file for a service account.
Then I installed the googleLanguageR package with this code:
remotes::install_github("ropensci/googleLanguageR")
library(googleLanguageR)
and then just passed the file location of my downloaded Google project JSON file to gl_auth() like this:
gl_auth("G:/My Drive/0. Thesis/R-Script/ZakiServiceAccou***************kjadjib****.json")
and now I'm happy :)
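For reference, the whole working flow is just service-account authentication followed by the translation call. A minimal sketch, assuming the service account's project has the Cloud Translation API enabled (the file path below is only a placeholder, not the real key file):
library(googleLanguageR)
# authenticate with the downloaded service-account JSON key
gl_auth("path/to/service-account-key.json")
# one small test translation, same target and a text format for plain strings
gl_translate("Guten Morgen", target = "en", format = "text")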

Related

How to read R Plumber function parameters data from another script?

Let's say there is R code for a REST API based on the "plumber" package.
Here is a function from it.
#' Date of sale
#' @get /date_of_sale
#' @param auth_key Auth key
#' @param user_id User ID
#' @example https://company.com/date_of_sale?auth_key=12345&user_id=6789
function(auth_key, user_id) {
# ...
}
Let's say there is another R script that makes an API request to this server, like:
api_string <- "https://company.com/date_of_sale?auth_key=12345&user_id=6789"
date_of_sale <- jsonlite::fromJSON(api_string)
Is it possible to get a description of the parameters "auth_key" and "user_id" in the second script, so that there is a full explanation of what each parameter means? For example, getting the string "Auth key" for "auth_key"? Or how can I access the "date_of_sale" function's metadata at all?
Thanks for any ideas!
Using a file plumber.R with the content you provided, and assuming it is in the working directory, in R:
pr_read <- plumber::pr("plumber.R")
spec <- pr_read$getApiSpec()
spec$paths$`/date_of_sale`$get$parameters
spec is an R list with the same structure as an OpenAPI document.
If you do not have access to the plumber file, but your API is running somewhere you can reach:
spec <- jsonlite::fromJSON("{api_server}/openapi.json", simplifyDataFrame = FALSE)
spec$paths$`/date_of_sale`$get$parameters
Again, this follows the OpenAPI documentation standard.
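If you want the descriptions as a handy named vector, a small sketch on top of the spec list above (assuming each parameter entry carries name and description fields, as in a standard OpenAPI document):
params <- spec$paths$`/date_of_sale`$get$parameters
# map each parameter's name to its description string
setNames(
  vapply(params, function(p) if (is.null(p$description)) "" else p$description, character(1)),
  vapply(params, function(p) p$name, character(1))
)
# expected result, roughly: c(auth_key = "Auth key", user_id = "User ID")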
Since this is a GET request, you cannot access the description of a variable unless you explicitly include it in the API response.
I've learned some R purely for this question; my guess is that this is how you've prepared your JSON API response.
You can do something like this when building your JSON response:
library(rjson)

auth_key <- "some_key"
user_id <- "340"

# describe each parameter alongside its value
x <- list(
  auth_key = list(
    type = typeof(auth_key),
    length = length(auth_key),
    attributes = attributes(auth_key),
    string = auth_key
  ),
  user_id = list(
    type = typeof(user_id),
    length = length(user_id),
    attributes = attributes(user_id),
    string = user_id
  ),
  data = "your_data"
)

json <- toJSON(x, indent = 0, method = "C")
fromJSON(json)
You might want to look at these:
https://stat.ethz.ch/R-manual/R-devel/library/base/html/typeof.html
https://ramnathv.github.io/pycon2014-r/learn/structures.html
https://rdrr.io/cran/rjson/man/toJSON.html
https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/attributes

Access Coinbase API using HTTR and R

I am trying to pull a list of Coinbase accounts into R using their API. I receive an authentication error saying "invalid signature". Something is clearly wrong with how I create my sha256 signature, but I can't figure out what the issue is. I have not had this issue when accessing the GDAX API with a sha256 signature.
API Key Documentation
An API key is recommended if you only need to access your own account. All API key requests must be signed and contain the following headers:
CB-ACCESS-KEY The api key as a string
CB-ACCESS-SIGN The user generated message signature (see below)
CB-ACCESS-TIMESTAMP A timestamp for your request
All request bodies should have content type application/json and be valid JSON.
The CB-ACCESS-SIGN header is generated by creating a sha256 HMAC using the secret key on the prehash string timestamp + method + requestPath + body (where + represents string concatenation). The timestamp value is the same as the CB-ACCESS-TIMESTAMP header.
My Code
library(httr)
library(RCurl)
library(digest)

coinbase_url <- "https://api.coinbase.com"
coinbase_reqPath <- "/v2/accounts/"
coinbase_fullPath <- paste(coinbase_url, coinbase_reqPath, sep = "")
coinbase_key <- "XXXXMYKEYXXX"
coinbase_secret <- "XXXXXMYSECRETKEYXXXX"

cb_timestamp <- format(as.numeric(Sys.time()), digits = 10)
coinbase_message <- paste0(cb_timestamp, "GET", coinbase_reqPath)
coinbase_sig <- hmac(key = coinbase_secret, object = coinbase_message,
                     algo = "sha256", raw = F)

coinbase_acct <- content(GET(coinbase_fullPath,
                             add_headers(
                               "CB-ACCESS-KEY" = coinbase_key,
                               "CB-ACCESS-SIGN" = coinbase_sig,
                               "CB-ACCESS-TIMESTAMP" = cb_timestamp,
                               "Content-Type" = "application/json")))
Sorry for not updating this earlier. The answer is simple: I just needed to remove the final forward slash from "/v2/accounts/" when specifying my request path.
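For anyone hitting the same thing, a sketch of the corrected signing step (it reuses the key, secret, and libraries from the code above; only the request path changes):
coinbase_reqPath <- "/v2/accounts" # no trailing slash, so the signed path matches the requested path
cb_timestamp <- format(as.numeric(Sys.time()), digits = 10)
coinbase_message <- paste0(cb_timestamp, "GET", coinbase_reqPath)
coinbase_sig <- hmac(key = coinbase_secret, object = coinbase_message,
                     algo = "sha256", raw = FALSE)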

R Loop over multiple interdependent functions failing to loop

I am building a function to download Google Analytics data from a long list of profiles and need a loop that can tolerate a profile returning no data.
The problem is that several functions are needed between the start of the loop and the point where the error can occur.
A paste() call pulls an ID from idsvector, and the API query is then constructed in two successive steps before being sent to the API with GetReportData(). The second ID in the list returns no data from the API. Currently the code downloads the data from the first profile, merges it with the master dataset, and then stops.
for (v in idsvector) {
  view.id <- paste("ga:", v, sep = "") # the View ID parameter needs "ga:" in front of the ID
  sourcequery.list <- Init(
    start.date = start.date,
    end.date = end.date,
    dimensions = "ga:channelGrouping,ga:campaign,ga:source,ga:medium,ga:date",
    metrics = "ga:sessions,ga:bounces",
    table.id = view.id,
    max.results = 9999999
  )
  ga.sourcequery <- QueryBuilder(sourcequery.list)
  data <- GetReportData(ga.sourcequery, token)
  error = function(e) { dev.off(); return(NULL) }
  if (!is.null(data)) {
    data$Property <- view.id
    final.data <- rbind(sourcequery.data, data)
  } else {
    next
  }
}
How do I adapt this so that it loops back and tries the next ID?
Not sure if this will solve your problem, but a better approach for this is to use lapply.
It is not clear which library you are using to access GA, so I will make up some code for you:
library(data.table)
ga_load_property_data <- function(property) {
# here goes GA API wrapper magic
}
data <- lapply(properties, ga_load_property_data)
data <- rbindlist(data, idcol = "property")
This way you separate the load logic from your iterations.
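A sketch of what that loader could look like, assuming the same RGoogleAnalytics-style calls as in the question (Init, QueryBuilder, GetReportData) and wrapping them in tryCatch so that a profile with no data yields NULL instead of stopping the loop; rbindlist() silently skips NULL elements:
ga_load_property_data <- function(property) {
  tryCatch({
    query <- QueryBuilder(Init(
      start.date = start.date,
      end.date = end.date,
      dimensions = "ga:channelGrouping,ga:campaign,ga:source,ga:medium,ga:date",
      metrics = "ga:sessions,ga:bounces",
      table.id = paste("ga:", property, sep = ""),
      max.results = 9999999
    ))
    GetReportData(query, token)
  }, error = function(e) NULL) # a profile that errors or returns no data becomes NULL
}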

R Scripts - Use Output Message as if condition

I am using the RGoogleAnalytics library to get all the data from my Google Analytics account into R. However, complex queries return 0 results.
My code looks like:
query.list <- Init(start.date = paste(c(lastmonth.startdate)),
end.date = paste(c(lastmonth.enddate)),
metrics = "ga:goalCompletionsAll",
dimensions = "ga:countryIsoCode,ga:yearMonth",
filters = "ga:goalCompletionsAll>0",
max.results = 10000,
table.id = sprintf("ga:%s", sites$profile.id[i]))
# Create the Query Builder object so that the query parameters are validated
ga.query <- QueryBuilder(query.list)
# Extract the data and store it in a data-frame
ga.countriesConversions1 <- GetReportData(ga.query, token)
Everything is inside a for loop, and the script stops if one of the queries ends in 0 results, because GetReportData(ga.query, token) cannot create a data frame if there is no data.
I would like to know if there is a way to capture the warning message that the library prints to the console ("Your query matched 0 results. Please verify your query using the Query Feed Explorer and re-run it"), assign it to a variable, and use it as an if condition, so I could create a dummy data.frame before the next function runs.
Assuming GetReportData() is throwing an error, you can try:
ga.countriesConversions1 <- try(GetReportData(ga.query, token), silent = TRUE)
if (inherits(ga.countriesConversions1, "try-error")) {
  warning(geterrmessage())
  ... error handling logic ...
}
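If you would rather end up with a usable placeholder directly, a tryCatch() variant along the same lines (the dummy column names here are just a guess based on the query's dimensions and metric):
ga.countriesConversions1 <- tryCatch(
  GetReportData(ga.query, token),
  error = function(e) {
    warning(conditionMessage(e))
    # empty placeholder so downstream rbind/merge steps still work
    data.frame(countryIsoCode = character(0),
               yearMonth = character(0),
               goalCompletionsAll = numeric(0))
  }
)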

Creating a persistent connection to the Twitter stream API using R

I am presently using the streamR package in R to stream tweets from Twitter's filter stream. I have a handshaken ROAuth object that I use for this. My code looks like:
# load the Twitter auth object
load("twitter_oAuth3.RData")
load("keywords3.RData")
streamTweet = function() {
  require(streamR)
  require(ROAuth)
  stack = filterStream(file.name = "", track = keywords, timeout = 500, oauth = twitter_oAuth)
  return(stack)
}
I want to create a real-time application, which involves dumping these tweets into an ActiveMQ topic. My code for that is:
require(Rjms)

# Set logger properties
url = "tcp://localhost:61616"
type = "T"
name = "TwitterStream"

# initialize logger
topicWriter = initialize.logger(url, type, name)

topicWrite = function(input) {
  # print("writing to topic")
  to.logger(topicWriter, input, asString = TRUE, propertyName = 'StreamerID', propertyValue = '1')
  return()
}

logToTopic = function(streamedStack) {
  # print("inside stack-writer")
  stacklength = length(streamedStack)
  print(c("Length: ", stacklength))
  for (i in 1:stacklength) {
    print(c("calling for: ", i))
    topicWrite(streamedStack[i])
  }
  return()
}
Now my problem is the timeout that filterStream() needs. I looked under the hood and found this call that the function makes:
url <- "https://stream.twitter.com/1.1/statuses/filter.json"
output <- tryCatch(oauth$OAuthRequest(URL = url, params = params,
                                      method = "POST", customHeader = NULL,
                                      writefunction = topicWrite,
                                      cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")),
                   error = function(e) e)
I tried removing the timeout component, but it doesn't seem to work. Is there a way I can maintain a stream forever (until I kill it) that dumps each tweet into a topic as it arrives?
P.S. I know of a Java implementation that calls the twitter4j API. I, however, have no idea how to do this in R.
The documentation for the streamR package mentions that the default value of the timeout option in filterStream() is 0, which will keep the connection open permanently.
I quote:
"numeric, maximum length of time (in seconds) of connection to stream. The
connection will be automatically closed after this period. For example, setting
timeout to 10800 will keep the connection open for 3 hours. The default is 0,
which will keep the connection open permanently."
Hope this helps.
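So a minimal sketch of a permanent stream, based on the documentation quoted above (untested; it reuses the auth object and keywords from the question and blocks until you interrupt the process):
load("twitter_oAuth3.RData")
load("keywords3.RData")
library(streamR)
library(ROAuth)

# timeout = 0 keeps the connection open until the process is killed
filterStream(file.name = "", track = keywords, timeout = 0, oauth = twitter_oAuth)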
