I am using this to test whether I can retrieve the references of a paper by DOI with the rscopus package:
library(rscopus)
library(dplyr)
auth_token_header("please_add")
akey="please_add"
object_retrieval("10.1109/ISCSLP.2014.6936630", ref = "doi")
but I receive this error:
Error in get_api_key(api_key, error = api_key_error) :
API key not found, please set option('elsevier_api_key_filename') or option('elsevier_api_key') for general use or set environment variable Elsevier_API, to be accessed by Sys.getenv('Elsevier_API')
Why do I receive it?
Please follow the steps I outlined in the API-key section of https://github.com/muschellij2/rscopus#steps-to-get-api-key, which is posted below:
In order to use this package, you need an API key from https://dev.elsevier.com/sc_apis.html. You should log in from your institution and go to Create API Key. You need to provide a website URL and a label (the website can be your personal website), and agree to the terms of service.
Go to https://dev.elsevier.com/user/login. Login or create a free account.
Click "Create API Key". Put in a label, such as rscopus key. Add a website. http://example.com is fine if you do not have a site.
Read and agree to the TOS if you do indeed agree.
Add Elsevier_API = "API KEY GOES HERE" to your ~/.Renviron file, or add export Elsevier_API=API KEY GOES HERE to your ~/.bash_profile.
Alternatively, you can set the API key with rscopus::set_api_key or with options("elsevier_api_key" = api_key). You can access the API key using rscopus::get_api_key.
You should be able to test out the API key using the interactive Scopus APIs.
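For example, once you have a key, a minimal sketch of the original call would look like this (the key string below is a placeholder, not a real key):

library(rscopus)
set_api_key("YOUR_ELSEVIER_API_KEY")  # or: options("elsevier_api_key" = "YOUR_ELSEVIER_API_KEY")
object_retrieval("10.1109/ISCSLP.2014.6936630", ref = "doi")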
A note about API keys and IP addresses
The API key is bound to a set of IP addresses, usually those of your institution. Therefore, if you are using this for a Shiny application, you must host the Shiny application on your institution's servers in some way. Also, you cannot access the Scopus API with this key if you are offsite; you must VPN into the server or use a computing cluster with an institution IP.
See https://dev.elsevier.com/tecdoc_api_authentication.html
Related
I use SSO and a profile as defined in ~/.aws/config (MacOS) to access AWS services, for instance:
aws s3 ls --profile myprofilename
I would like to access AWS services from within R using the paws package. In order to do this, I need to set my credentials in the R code. I want to do this by referencing the profile in the ~/.aws/config file (as opposed to listing access keys in the code), but I haven't been able to figure out how.
I looked at the extensive documentation here, but it doesn't seem to cover my use case.
The best I've been able to come up with is:
library(paws)
x <- s3(config = list(credentials = list(profile = "myprofilename")))
x$list_objects()
... which throws an error: "Error in f(): No credentials provided", suggesting that the s3() call above does not pick up my profile as stored in ~/.aws/config.
An alternative is to generate a user/key pair with programmatic access to your S3 data. Then, assuming that ~/.aws/env contains the values of the generated key:
AWS_ACCESS_KEY_ID=abc
AWS_SECRET_ACCESS_KEY=123
AWS_REGION=us-east-1
insert the following line at the beginning of your R script:
readRenviron("~/.aws/env")
This AWS blog provides details on how to get temporary credentials for programmatic access. If you can get the credentials and set the appropriate environment variables, the code should work fine without the profile name.
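For example, a minimal sketch of that flow (the bucket name is a placeholder):

readRenviron("~/.aws/env")   # loads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION
library(paws)
svc <- s3()                  # no profile needed; credentials are read from the environment
svc$list_objects(Bucket = "my-bucket")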
Alternatively, you can try the following if you can get temporary credentials using the AWS CLI.
Check whether you can generate temporary credentials:
aws sts assume-role --role-arn <value> --role-session-name <some-meaningful-session-name> --profile myprofilename
If you can execute the above successfully, then you can use this method to automate the process of generating credentials before your code runs.
Put the above command in a bash script get-temp-credentials.sh and have it output JSON containing the temporary credentials, as per the documentation.
Add a new profile programmatic-access to ~/.aws/config:
[profile programmatic-access]
credential_process = "/path/to/get-temp-credentials.sh"
Finally, update the code to use programmatic-access as the profile name.
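A minimal sketch of that call with paws (the bucket name is a placeholder):

library(paws)
svc <- s3(config = list(credentials = list(profile = "programmatic-access")))
svc$list_objects(Bucket = "my-bucket")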
If you have AWS CLI credentials set up with a named profile, e.g. in ~/.aws/config:
[profile myprof]
region=eu-west-2
output=json
... and credentials, e.g. in ~/.aws/credentials:
[myprof]
aws_access_key_id = XXX
aws_secret_access_key = xxx
... paws will use these if you add a line to ~/.Renviron:
AWS_PROFILE=myprof
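With that in place, a minimal sketch (restart R so ~/.Renviron is re-read; the bucket name is a placeholder):

library(paws)
svc <- s3()                              # picks up the myprof profile via AWS_PROFILE
svc$list_objects(Bucket = "my-bucket")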
I am trying to connect to Azure Cognitive Services using the Roxford package. I get an error, probably due to a wrong endpoint (after Project Oxford was folded into Azure Cognitive Services, there are several region-specific endpoints).
I got the key from my personal account in the Azure Cognitive Services project:
library(Roxford)
library(plyr)
library(rjson)
facekey <- "xxx" #look it up on your subscription site
getFaceResponseURL("http://getwallpapers.com/wallpaper/full/5/6/4/1147292-new-women-faces-wallpaper-2880x1800-for-phone.jpg",key= facekey)
# I got this error:
# {"error":{"code":"Unspecified","message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."}}
How can I change the endpoint to "https://westcentralus.api.cognitive.microsoft.com/face/v1.0"?
If your Roxford lib is the one here: https://github.com/flovv/Roxford/blob/master/R/videoAnalysis_LIB.R#L182
Then you can add the region when you call the method. Cognitive Services keys are dedicated to an Azure region, so you should use the same region when you use the key. If you don't remember which region you chose when you generated the key, it is shown in the overview in the Azure portal.
getFaceResponseURL is defined with a region argument:
getFaceResponseURL <- function(img.url, key, region="westus")
Pass the region:
getFaceResponseURL("http://getwallpapers.com/wallpaper/full/5/6/4/1147292-new-women-faces-wallpaper-2880x1800-for-phone.jpg", key=facekey, region="theAzureRegionOfYourKey")
I'm using rga to get some data from Google Analytics. From the repo:
The principle of this package is to create an instance of the API Authentication, which is a S4/5-class (utilizing the setRefClass). This instance then contains all the functions needed to extract data, and all the data needed for the authentication and reauthentication. The class is in essence self sustaining.
The package creates and saves a local instance using:
rga.open(instance="ga", where="~/ga.rga")
When I try to knit, however, I get an error that the ga object (what would be the instance) is not found. The code works when I run the chunks interactively in RStudio. I believe the error is related to this aspect:
[The command above] will check if the instance is already created, and if it is, it'll prepare the token. If the instance is not created [...] it will redirect the client to a browser for authentication with Google.
My guess is that knitr can't perform that last step and so, the object is never created.
How can I make this work? I'm thinking that there might be a way to load the local ga.rga file to bypass browser authentication.
You can bypass browser authentication by passing the client ID and client secret, which you can get from the Google API console. Saving a local auth file in the dev environment is always risky. You can try this code; it uses the Google API and also saves the local instance:
rga.open(instance = "ga",
         client.id = "<your client id, ends with apps.googleusercontent.com>",
         client.secret = "<your secret key>",
         where = "~/ga.rga")
Also ensure that the desktop option is enabled in the Google API console.
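If the instance is created, extracting data in the knitted document would then look something like the sketch below; the getData arguments shown are illustrative, so check the rga documentation for the exact names, and the view ID is a placeholder:

ga$getData(ids = "ga:XXXXXXXX",        # your Google Analytics profile/view ID
           start.date = "2014-01-01",
           end.date = "2014-12-31",
           metrics = "ga:sessions")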
Context: We are trying to load CSV data into GCP BigQuery using GCP Dataflow (Apache Beam). As part of this, we create the BQ tables (the first time, for each table) through the BigQueryIO API. One of the customer requirements is that the data on GCP must be encrypted using customer-supplied/managed encryption keys.
Problem statement: We are not able to find any way to specify the custom encryption keys through the API while creating tables. The GCP documentation details how to specify custom encryption keys through the BigQuery console, but we could not find anything on specifying them through the API from within the Dataflow code.
Code Snippet:
String tableSpec = new StringBuilder().append(PipelineConstants.PROJECT_ID).append(":")
.append(dataValue.getKey().target_dataset).append(".").append(dataValue.getKey().target_table_name)
.toString();
ValueProvider<String> valueProvider = StaticValueProvider.of("gs://bucket/folder/");
dataValue.getValue().apply(Count.globally()).apply(ParDo.of(new RowCount(dataValue.getKey())))
.apply(ParDo.of(new SourceAudit(runId)));
dataValue.getValue().apply(ParDo.of(new PreProcessing(dataValue.getKey())))
.apply(ParDo.of(new FixedToDelimited(dataValue.getKey())))
.apply(ParDo.of(new CreateTableRow(dataValue.getKey(), runId, timeStamp)))
.apply(BigQueryIO.writeTableRows().to(tableSpec)
.withSchema(CreateTableRow.getSchema(dataValue.getKey()))
.withCustomGcsTempLocation(valueProvider)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
Query: Could anybody let us know:
Is it possible to provide an encryption key through the Beam API?
If it is not possible with the current version, what could be a possible workaround?
Kindly let us know if additional information is required.
Customer-supplied encryption keys are a new feature; not all libraries have been updated to support them yet.
If you know the table name in advance, you can use the UI/CLI or the API to create the table, then run your normal flow to load data into that table. That might be a workaround for you.
https://cloud.google.com/bigquery/docs/customer-managed-encryption#create_table
API to create table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/insert
You need to set this section on the table object:
"encryptionConfiguration": {
"kmsKeyName": string
}
More details on table: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
How do I create a database link in Oracle 11g to access tables?
You seem to have copied the example in the documentation without really understanding it.
The USING 'local' part of the statement is creating a link to 'the local database', where local is the service name of a database. (The example is a bit confusing, to be fair).
When the link is used it tries to interpret local as a service name, appending the current database's domain, as the docs say:
USING 'connect string'
Specify the service name of a remote database. If you specify only the database name, then Oracle Database implicitly appends the database domain to the connect string to create a complete service name. Therefore, if the database domain of the remote database is different from that of the current database, then you must specify the complete service name.
If you're trying to create a link back into the same database - which would be a bit odd, but I've seen it done in place of granting access across schemas, and that seems to be what the example is hinting at - then you can replace 'local' in the USING clause with the service name of your current database (e.g. USING 'orcl', or whatever).
You can also use a TNS alias: if your tnsnames.ora has an entry for SOME_DB which points to the SID or service name of another database, you can have USING 'some_db'. You should be able to use any connect string, I think; certainly Easy Connect is allowed. There's more in the net services admin guide.