I am using R and the bigrquery package to access BigQuery from an R session.
This works great as long as I am on my local machine.
However, when I try to access Bigquery from R on a remote server it does not work at all.
I tried to copy the .httr-oauth file into my home directory on the server but this does not work.
I get the error message:
Auto-refreshing stale OAuth token.
Error in refresh_oauth2.0(self$endpoint, self$app, self$credentials) :
client error: (400) Bad Request
I really have no idea where to store the necessary credentials, and unfortunately I was not able to find anything useful by searching for the topic.
By default httr, which bigrquery uses for OAuth, looks in the R session's current working directory for .httr-oauth. You can override this location with the following (perhaps putting it in your .Rprofile if you like):
options("httr_oauth_cache"="~/.httr-oauth")
But judging from the error message you received, the location does not seem to be the issue, and it might be easier to simply redo the OAuth flow on the remote server to cache a new credential. To trigger a new OAuth flow on the remote server (a combined sketch follows the note below):
ensure the .httr-oauth file does not exist
restart R
perform one query with bigrquery
Note that if httr tries to redirect to localhost, you can force it to do an out-of-band OAuth flow with:
options(httr_oob_default = TRUE)
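For example, a minimal sketch of the steps above, assuming the query_exec() interface that bigrquery exposed at the time (the project ID is a placeholder):
# force the out-of-band flow if a localhost redirect won't work remotely
options(httr_oob_default = TRUE)
# step 1: make sure no stale cached credential exists
if (file.exists(".httr-oauth")) file.remove(".httr-oauth")
# step 2: restart R, then load bigrquery again
library(bigrquery)
# step 3: any query triggers a fresh OAuth dance; httr prints a URL to
# open in a browser, and you paste the resulting code back into R
query_exec("SELECT 1", project = "your-project-id")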
I'm trying to implement an unattended script accessing files within OneDrive using Microsoft365R.
I've set everything up as in the docs, using the default app registration.
The interactive flow with auth_type="device_code" works without issues:
odb <- Microsoft365R::get_business_onedrive(auth_type="device_code")
But when trying auth_type="resource_owner" as shown in the docs here, I get the following error:
odb <- Microsoft365R::get_business_onedrive(tenant=tenant, app=app, username=user, password=getPass(), auth_type="resource_owner")
Error in process_aad_response(res) :
Bad Request (HTTP 400). Failed to obtain Azure Active Directory token. Message:
AADSTS50126: Error validating credentials due to invalid username or password.
My guess is that the default app is missing some privileges needed to use the "resource_owner" flow.
Can someone point me in the right direction on how to get the resource_owner flow working?
(Using service principals is not a solution for my setup, but I did also try it with a dedicated service account and it did not work either.)
I'm trying to use the AzureR family of R packages to interact with Outlook through the Graph API. Using Microsoft365R I have the following code:
outl <- get_business_outlook(
  tenant = tenant_id,
  app = client_id,
  password = client_secret
)
But this results in a 403 error:
Error in process_response(res, match.arg(http_status_handler), simplify) :
Forbidden (HTTP 403). Failed to complete operation. Message:
Insufficient privileges to complete the operation.
The app in question has the API permissions Mail.ReadWrite, Mail.ReadWriteShared, Mail.Send, Mail.Send.Shared, offline_access, openid, User.Read.
I also tried using the AzureGraph package directly:
login <- create_graph_login(
  tenant = tenant_id,
  app = client_id,
  password = client_secret
)
This works and I get a token. I then try to extract user information with me <- login$get_user(), but this throws the same 403 error as above. I suspect there is something I need to do to actually authenticate the user, but I can't really figure out what.
I am entirely new to the Graph API so it's very possible that I have missed something obvious. Any help appreciated!
Microsoft365R/AzureGraph author here. In the code you show, both with get_business_outlook() and create_graph_login(), you are authenticating as the app, not as the user. This means that there is no user account involved, hence you're unable to view user details or send email.
To authenticate as the user, run
# Microsoft365R
get_business_outlook("tenant_id", app="client_id")
# AzureGraph
create_graph_login("tenant_id", app="client_id")
i.e., without the password argument. You'll know it's working if R opens a browser window for you to log in to Azure (or to show that it has successfully logged in).
The latest revision of the AzureAuth package has a vignette that explains a bit more about the various authentication scenarios. AzureAuth::get_azure_token is the underlying function that Microsoft365R and AzureGraph use to obtain an OAuth token, and you can pass the arguments mentioned in the vignette down from get_business_outlook and create_graph_login.
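For example (a sketch, not taken from the original answer: auth_type is one of the get_azure_token arguments that can be passed down this way, here selecting the device-code flow for a session without a local browser):
# delegated user authentication on a headless machine
outl <- get_business_outlook("tenant_id", app="client_id", auth_type="device_code")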
I am trying to access the API to get information on http://github.com. I created an application in GitHub (under developer applications) for this URL and am trying to access it through R using the httr library. The following is the code:
library(httr)
oauth_endpoints("github")
myapp <- oauth_app("github",key = "#####################",secret = "########################" )
(key was replaced with the client ID and secret with the client secret)
github_token <- oauth2.0_token(oauth_endpoints("github"), myapp)
This prompted me with the following:
Use a local file to cache OAuth access credentials between R sessions?
1: Yes
2: No
I selected 2 (as I had tried option 1 earlier), and the following was displayed:
httpuv not installed, defaulting to out-of-band authentication
Please point your browser to the following url:
https://github.com/login/oauth/authorize?client_id=72939e1b6d499f4f1894&scope=&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code
Enter authorization code
Can anyone tell me what the authorization code is?
The authorisation code is the code that GitHub supplies after a correct OAuth 2.0 'dance' (to use Hadley Wickham's term). The easiest way of doing this is to use httpuv (install.packages("httpuv")). With that installed, a local web server is set up on port 1410, and, provided you've set up your GitHub application appropriately (with a redirect to http://localhost:1410), the authorisation code is captured and exchanged automatically, so you never have to enter it by hand.
If you don't have httpuv installed, then httr's OAuth 2.0 function defaults to out-of-band authorisation. This asks GitHub to redirect to urn:ietf:wg:oauth:2.0:oob, which should display the authorisation code in the browser so that it can be copied and pasted. However, you've almost certainly got something different set as your redirect URL, so GitHub complains that there is a redirect URI mismatch. I'm not sure whether GitHub can be configured to allow the oob redirect (I've just tried, and it doesn't seem to).
The only reasons not to use httpuv are if you are using R on a machine that won't let you set up a server on port 1410, or if you are using R on a remote machine via RStudio Server or an SSH session. In the latter case, the web server will be set up on the remote machine, but your browser will be trying to connect to port 1410 on your local machine. You could potentially get around this with SSH port forwarding from port 1410 on your local machine to port 1410 on the remote machine (e.g. ssh -L 1410:localhost:1410 user@remote-host).
Note also that the demo code at https://github.com/hadley/httr/blob/master/demo/oauth2-github.r, unlike the current CRAN version of the oauth2-github demo, includes the secret for Hadley's application, so you can run the demo as-is without setting up your own application first.
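Putting the pieces together, here is a minimal sketch of the httpuv-based flow, assuming an application of your own registered with callback URL http://localhost:1410 (the client ID and secret are placeholders):
library(httr)
myapp <- oauth_app("github",
                   key = "YOUR_CLIENT_ID",
                   secret = "YOUR_CLIENT_SECRET")
# with httpuv installed, this opens a browser, catches the redirect on
# port 1410 and exchanges the authorisation code for a token automatically
github_token <- oauth2.0_token(oauth_endpoints("github"), myapp)
# use the token to call the API
req <- GET("https://api.github.com/rate_limit", config(token = github_token))
stop_for_status(req)
content(req)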
Here's what worked for me:
Install the httpuv package from https://github.com/rstudio/httpuv
And maybe set permissions on your \R\library directory for the current user before running devtools::install_github("rstudio/httpuv")
I've tried everything possible to set up nJupiter.DataAccess.Ldap as the membership provider on our intranet-based web application built using ASP.NET 3.5.
Challenges I am facing:
Not able to authenticate the user using the default login webpart (it says "Your login attempt was not successful. Please try again").
I tried this code and receive a COMException: "There is no such object on the server."
var ldapMembershipUser = System.Web.Security.Membership.GetUser("username") as LdapMembershipUser;
if (ldapMembershipUser != null)
{
var givenName = ldapMembershipUser.Attributes["givenName"];
}
I have placed my web.config and the nJupiter.DataAccess.Ldap.config here:
web.config : http://pastebin.com/9XdDnhUH
nJupiter.DataAccess.Ldap.config : http://pastebin.com/WsSEhi98
I have tried all possible permutations and combinations of values in the XML and am unable to make progress. Please advise; I simply am not able to connect to the LDAP server to authenticate the user, or even to search for users.
Just looking at your config is unlikely to be enough, since I don't know your Domino server's configuration, so my answer isn't an attempt to fix your problem. It's an attempt to show you how I would approach it if it were my problem. Here's what I do to troubleshoot connections and queries from code to Domino LDAP:
Configure the Domino LDAP server for logging the highest level of debug information with the notes.ini setting LDAPDEBUG=7. See this IBM technote for more info.
Use an LDAP client and figure out how to successfully connect to the Domino LDAP server. I like the free Softerra client for this. Check the logs and save off the info from your successful connection.
Now run your code and compare what you see in the logs against the successful connection.
If the code is making it past authentication but failing on the query, then find the actual query in the log, go back to your LDAP client, figure out what the query should have been, and adjust your code's configuration appropriately.
I have 2 buckets for my application:
- gambify-dev-devil (for development)
- gambify-prod (for production)
I have set them up absolutely identically, but in production I have issues accessing some resources. My production environment is Pagoda Box. I use Gaufrette, LiipImagine and VichUploader for my file handling. The issue is that in my production environment either my application requests the wrong resources or there is an access issue, because I have a lot of logs indicating an AccessDenied error within my bucket:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>D90C05F182C91003</RequestId>
<HostId>
i7SkwNCbyUnCCBCnkyyrv7x9pOLGtr4sUgqWYkJMqk0X0lXYIW5zeu4688FCqBiA
</HostId>
</Error>
In order to investigate this issue further (I really have no idea where it is coming from, because it works fine in every other environment, and in production it was also working fine two weeks ago), I would like to see which resource was requested. Is there a way to find out which URL was requested, or who tried to request what, to cause this issue? If I provide a correct path to an existing resource, the bucket works fine:
e.g: https://s3-eu-west-1.amazonaws.com/gambify-prod/profile/default.png
Update:
Now I found the real error message that is causing me problems:
04fadbab7a82c23143855d5c918e1ba8fa32ef1d622c00a3daa9fcdc6daf5d90
gambify-prod [05/Aug/2013:19:03:57 +0000] 173.193.185.250 -
133EF43443891C63 REST.HEAD.OBJECT
profile_thumb_small/51e9a03453c80.jpeg "HEAD
/profile_thumb_small/51e9a03453c80.jpeg HTTP/1.1" 403
SignatureDoesNotMatch 1015 - 7 -
"https://gambify-prod.s3.amazonaws.com/profile_thumb_small/51e9a03453c80.jpeg"
"aws-sdk-php/1.5.17.1 PHP/5.3.23 Linux/2.6.32-042stab068.8 Arch/x86_64
SAPI/fpm-fcgi Integer/9223372036854775807 Build/20121126140000
simplexml/0.1 json/1.2.1 pcre/8.31 spl/0.2 curl/7.19.7 openssl/0.9.8k
apc/3.1.9 pdo/1.0.4dev pdo_sqlite/1.0.1 sqlite/2.0-dev sqlite3/0.7-dev
zlib/1.1 memory_limit/200M date.timezone/Europe.Berlin
open_basedir/off safe_mode/off zend.enable_gc/on" -
I still have no idea what is causing the initial issue.
Moved the discussion about the signature error to: Amazon S3 signature not working with SDK
If you haven't already done so, you can configure your production bucket to keep a log of all the requests made against it, similar to an Apache or other web server access log.
http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
Once you have logging enabled, you will be able to find out the URL of the request, who requested it and when it was requested.
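As a sketch only, enabling logging from R with the paws SDK might look like the following (the asker's stack used the PHP SDK; the bucket names and prefix are placeholders, and the target bucket must grant the S3 log delivery group write access):
library(paws)
s3 <- s3()
# turn on server access logging for the production bucket, delivering
# log objects to a separate logging bucket under a prefix
s3$put_bucket_logging(
  Bucket = "gambify-prod",
  BucketLoggingStatus = list(
    LoggingEnabled = list(
      TargetBucket = "gambify-prod-logs",
      TargetPrefix = "access-logs/"
    )
  )
)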
Update:
If an AccessDenied error is returned when trying to access the S3 server log files through the API or the AWS console, the problem is caused by missing permissions (ACLs) on the log files.
To access those log files, the Open/Download permission should be granted for the user that owns them. Having a bucket policy with public read enabled is not enough to get access to the server log files.
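For illustration only, granting that read permission on a single log object with the paws SDK might look like this (bucket, key and canonical user ID are placeholders; note that put_object_acl replaces the object's existing ACL, so include any grants you want to keep):
library(paws)
s3 <- s3()
# grant read ("Open/Download" in the console) on one log object
# to the user identified by their canonical user ID
s3$put_object_acl(
  Bucket = "gambify-prod-logs",
  Key = "access-logs/2013-08-05-00-00-00-EXAMPLE",
  GrantRead = 'id="CANONICAL_USER_ID"'
)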
These look like responses that S3 sends back when the ACL/Grant permissions aren't set correctly. I'd check those first. If your bucket is behind a CloudFront distribution, make sure you invalidate the CloudFront cache as well.