I'm trying to connect to a database in Azure Data Explorer from R using the AzureKusto library. Following this documentation https://github.com/Azure/AzureKusto, after calling the kusto_database_endpoint(...) function I need to open a browser page and insert the printed code manually. Is there a way to skip this manual step and do it automatically? Or are there alternatives for connecting to an ADX database?
Thanks for the help!
Co-creator of the package here. Thank you for the question. Yes, you can use the get_kusto_token function to obtain a token and then pass it to kusto_database_endpoint as the .query_token argument. get_kusto_token supports the following authentication flows:
"authorization_code"
"device_code"
"client_credentials"
"resource_owner"
For example, if you have an AAD application service principal that has access to the Azure Data Explorer cluster, you can use its ID and secret to authenticate:
# authenticate using the client_credentials method: see ?AzureAuth::get_azure_token
token <- get_kusto_token("https://mycluster.kusto.windows.net",
                         tenant="mytenant",
                         authtype="client_credentials",
                         app="myappid",
                         password="myclientsecret")
endpoint <- kusto_database_endpoint(server = "mycluster.kusto.windows.net",
                                    database = "mydb",
                                    .query_token=token)
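You can then run queries against this endpoint with no browser prompt at all. A minimal sketch (the table name MyTable is a placeholder):

# query the database non-interactively using the token-based endpoint
res <- run_query(endpoint, "MyTable | take 10")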
The help page ?AzureKusto::get_kusto_token provides more detailed information on this. Also, please note that get_kusto_token is a wrapper around AzureAuth::get_azure_token. The README for the AzureAuth R package has more detailed examples of other methods of obtaining an Azure access token: https://github.com/Azure/AzureAuth
I've verified my API in RStudio after hours of trying, and now I've hit another error while trying to translate a sentence. I would be grateful for any help!
I'm just trying to translate "hello" into French using the googleLanguageR package -
> gl_translate("Hello", "fr")
The result I get is this -
2021-01-21 17:15:36 -- Translating text: 5 characters -
ℹ 2021-01-21 17:15:36 > Request Status Code: 403
Error: API returned: Request had insufficient authentication scopes.
I'm a literal beginner in the field of computing and do not understand what scopes mean here.
Thanks for the help!
Scopes are permissions that you grant to apps so they can access an API. For example, one app might have permission to read a user's private messages, whereas another doesn't. It's similar to when an app on your phone asks for permission to use the camera or access your contacts.
Your app is trying to do something that it doesn't have permission to do. You'll need to add the relevant scopes in whatever setting it is where you're generating keys etc. Presumably Google Data Studio?
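If your keys come through a googleAuthR-based flow (which googleLanguageR uses under the hood), one way to request a broader scope explicitly is something like this; a hedged sketch, assuming the Cloud Platform scope is the one that's missing:

# request the Cloud Platform scope before authenticating
# (assumption: this scope covers the Translation API)
options(googleAuthR.scopes.selected = "https://www.googleapis.com/auth/cloud-platform")
googleAuthR::gar_auth()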
Okay, I found an answer.
I needed to download a JSON version of my key and authorize with it using the code -
gl_auth("filename.json")
After doing this, I needed to make sure my API was enabled. Now it is working perfectly!
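Putting it together, a minimal sketch of the working flow (the key file name is a placeholder):

library(googleLanguageR)
# authenticate with the JSON key downloaded from the Cloud Console
gl_auth("filename.json")
# translate "Hello" into French
gl_translate("Hello", target = "fr")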
I'm trying to get data from my GA account using R. I've managed to do this with the RGoogleAnalytics package. Unfortunately, I have a problem with authentication on the production server. Everything works, but I have to refresh the token each day (and that is not acceptable for me). I've read about the need for a refresh_token, but I have no idea how to obtain it using this package... What I did is:
library("RGoogleAnalytics")
client.id <- "XXX"
client.secret <- "XXX"
token <- Auth(client.id, client.secret)
save(token, file = "./auth/token")
load("./auth/token")
ValidateToken(token)
It works only for a few hours (while the token is valid), and the next day I get the error: Error: Refresh token not available. How do I get this refresh token? The Auth function does not seem to give it to me, and after reading the whole Internet I still have no idea how to deal with it. Could you help?
The RGoogleAnalytics package gives an easy way to extract Google Analytics data, but I think you should consider other packages that are more up to date and a little bit 'easier' to deal with, like the googleAnalyticsR package (see the sketch below).
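For example, googleAnalyticsR can authenticate with a service account JSON key, which never needs an interactive browser refresh. A minimal sketch (the key file and view ID are placeholders, and the service account's email must be added as a user on the GA view):

library(googleAnalyticsR)
# authenticate non-interactively with a service account key
ga_auth(json_file = "my-service-account-key.json")
# pull last week's sessions from a given view
df <- google_analytics(viewId = 123456789,
                       date_range = c(Sys.Date() - 7, Sys.Date()),
                       metrics = "sessions")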
In my experience, practically all Google services need this 'authentication' when using the API, and generally the token expires; however, it is possible to generate a 'permanent token'.
Getting this 'permanent token' is a little 'hard', but once learned, you will be able to apply it in a similar way to most of Google's API services!
In summary, you need to create a project in GCP (the Google Cloud console) in order to create an authentication key:
I recommend reading this first:
https://cloud.google.com/docs/authentication
In the documentation link below, you can find a step-by-step guide to this authentication for the RGoogleAnalytics package.
https://cran.r-project.org/web/packages/RGoogleAnalytics/RGoogleAnalytics.pdf
I am having problems using bigrquery to connect to a GCP service account from within an R Markdown document that I knit. When I attempt it from the console, authentication works fine. Both
library(bigrquery)
bq_auth()
and
library(bigrquery)
bq_auth(email="my-service-account-email@myproject.iam.gserviceaccount.com")
launch a browser with a dialog that lets me pick and authenticate using the specified account as expected. But in the R Markdown, any attempt like
options("httr_oob_default" = TRUE)
bq_auth(email="my-service-account-email@myproject.iam.gserviceaccount.com")
or even using the full list like this
bq_auth(
  email = "my-service-account-email@myproject.iam.gserviceaccount.com",
  path = NULL,
  scopes = c("https://www.googleapis.com/auth/bigquery"),
  cache = gargle::gargle_oauth_cache(),
  use_oob = gargle::gargle_oob_default(),
  token = NULL
)
leads to the error
Error: Can't get Google credentials.
Are you running bigrquery in a non-interactive session? Consider:
* Call `bq_auth()` directly with all necessary specifics.
Can anyone see what I am missing? Thanks in advance.
You can download the JSON key file for your Google Cloud service account, then pass its location as the path argument that the bq_auth function recognizes. Here are the steps:
1. Go to the Google Cloud Console (console.cloud.google.com)
2. Open the Navigation Menu
3. Go to IAM & Admin > Service Accounts
4. Create Service Account (create one)
5. Create Key, and save it to "/path/to/jsonfilename.json"
6. Authenticate in your R Markdown code: bigrquery::bq_auth(path = "/path/to/jsonfilename.json")
Note: you'll need to make sure the service account has access to BigQuery. I set mine to "BigQuery Admin" and it worked, but that might be too broad.
Borrowed this answer from Elaine See's post on Medium: https://medium.com/@elaine.yl.see/easiest-way-to-use-bigquery-in-r-8af466cd55ca
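To check the setup end to end without any browser dialog, a minimal sketch (the project ID and query are placeholders):

library(bigrquery)
# authenticate with the service account key file; no interactive prompt needed
bq_auth(path = "/path/to/jsonfilename.json")
# run a query and download the result into a data frame
tb <- bq_project_query("my-project-id", "SELECT 1 AS x")
df <- bq_table_download(tb)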
I'm trying to use RGoogleDocs and get
Error: Forbidden
I have two-step verification on: is there a workaround?
sheets.con = getGoogleDocsConnection(getGoogleAuth(user, ps, service = "wise"))
Error: Forbidden
The getGoogleAuth function of the RGoogleDocs package is based on ClientLogin, an officially deprecated way of connecting to Google servers; see https://developers.google.com/identity/protocols/AuthForInstalledApps?csw=1
As a workaround, you may try a Google application-specific password.
Another way is to just use the URL of your Google doc to access certain contents directly, as sketched below; see http://www.r-bloggers.com/access-google-spreadsheet-directly-in-bash-and-in-r/
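For a spreadsheet that is shared publicly or published to the web, this can be as simple as reading its CSV export URL; a minimal sketch (the spreadsheet key is a placeholder):

# read a public Google Sheet via its CSV export URL
key <- "YOUR-SPREADSHEET-KEY"
url <- paste0("https://docs.google.com/spreadsheets/d/", key, "/export?format=csv")
df <- read.csv(url)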
Update:
In the source code of getGoogleAuth, the author used an application called 'R-GoogleDocs-0.1'. You may register a new application and get a token; then I think you could use the token and Google's API to access Google Docs directly in R. However, such a hack almost amounts to updating/rewriting the RGoogleDocs package.
I am interested to know what commands allow me to write and read data to and from Amazon ElastiCache using the AWS SDK for .NET. I've viewed the online documentation but couldn't figure out how it is done.
What I did in the code: I created two keys in web.config to store the ID and access password.
AmazonElastiCacheClient client = new AmazonElastiCacheClient(ElasticCache_Id, ElasticCache_Pass);
This initializes the AmazonElastiCacheClient object, passing in the credential strings.
I need sample code that demonstrates how to put data into and retrieve data from the ElastiCache cluster. Thanks.
It looks like you can only manage ElastiCache clusters (create, describe, delete them) through the AWS SDK; it doesn't read or write the cached data itself.
You can use any memcached client to read and write to ElastiCache, since memcached is the underlying technology.
Here is an example:
http://geekswithblogs.net/shaunxu/archive/2010/04/07/first-round-playing-with-memcached.aspx