Automatic sign-in not working on server URL (localhost works) - google-signin

I'm trying to sign in automatically when possible using the following code (TypeScript, called from a React app):
google.accounts.id.initialize({
  client_id: envSettings.auth.google.clientId,
  callback: signInWithJwt,
  auto_select: true,
});
google.accounts.id.renderButton(domElement, {
  theme: "outline",
});
google.accounts.id.prompt();
I now have the following situation:
Signing in via the rendered button always works (locally and on my "Static Web App" hosted in Azure)
google.accounts.id.prompt(), however, only works on localhost and not on the server, even though the URLs are added in the "Authorized JavaScript origins" section of the Google console. I get the following message in the browser console: [GSI_LOGGER]: The given origin is not allowed for the given client ID.
The only difference I see between localhost and the server is that the server is running on https while localhost is using http.
For me this does not really make sense, as obviously it does work with the button. Any thoughts on what is wrong here?

You need to follow the message: "The given origin is not allowed for the given client ID." Go to the Google Cloud console and allow the origin that your server is on: go to your project > APIs and Services > Credentials > your OAuth 2.0 Client ID, and edit it to add your domain as an authorized origin.
This is for security purposes, so that a malicious actor cannot use your client ID to pose as your app on another domain, and access your users' data.
Google documentation
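The authorized-origins check is an exact match on the full origin (scheme + host + port), which is why an https page fails when only the http origin is listed. A rough sketch of the comparison (illustrative only, not Google's actual code; `isAuthorizedOrigin` is a hypothetical name):

```typescript
// Sketch: GIS-style origin checks compare the page's origin against the
// client ID's authorized list by exact match, so scheme, host, and port
// must all agree.
function isAuthorizedOrigin(pageOrigin: string, authorized: string[]): boolean {
  // Normalize by stripping trailing slashes and lowercasing;
  // the comparison is otherwise exact.
  const normalize = (o: string) => o.replace(/\/+$/, "").toLowerCase();
  return authorized.map(normalize).indexOf(normalize(pageOrigin)) !== -1;
}
```

So https://myapp.example.com and http://myapp.example.com are two different origins, and each must be listed separately if both are used.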

Found the issue thanks to this post: https://stackoverflow.com/a/70739451/4092115
I had to set the referrer policy in my index.html as follows:
<meta name="referrer" content="strict-origin-when-cross-origin" />
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
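Why this helps: with a stricter policy (or no referrer at all), Google's sign-in iframe cannot tell which origin is embedding it, while strict-origin-when-cross-origin still sends the origin on cross-origin requests. A simplified model of that policy's behavior (illustrative only, covering just the main cases; `refererFor` is a hypothetical helper):

```typescript
// Simplified model of the Referer value a browser sends under
// "strict-origin-when-cross-origin". Not an exhaustive implementation.
const originOf = (u: string): string => {
  const m = /^(https?):\/\/([^/]+)/.exec(u);
  if (!m) throw new Error("expected an absolute http(s) URL");
  return m[1] + "://" + m[2];
};

function refererFor(fromUrl: string, toUrl: string): string | null {
  // Never leak anything across an HTTPS -> HTTP downgrade.
  if (fromUrl.indexOf("https://") === 0 && toUrl.indexOf("http://") === 0) return null;
  // Same origin: the full URL is sent.
  if (originOf(fromUrl) === originOf(toUrl)) return fromUrl;
  // Cross-origin: only the origin is sent -- enough for GSI's origin check.
  return originOf(fromUrl) + "/";
}
```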

Related

Microsoft application - Redirect URI allows 'localhost' but not '127.0.0.1'

I have developed an application that allows MSA (Microsoft Account) authentication. I have registered my app here: https://apps.dev.microsoft.com.
When testing my app locally, I can access my app with no problem at my SSL URL of https://localhost:44300, and MSA works fine. When I registered my app, I used https://localhost:44300/signin-microsoft as the Redirect URI.
Problem: I can also access my app at https://127.0.0.1:44300, as one would expect. However, MSA here doesn't work. The error page says, We're unable to complete your request.
Microsoft account is experiencing technical problems. Please try again later. And the URL of the error page reveals that the error is with a mismatch in the Redirect URI: https://login.live.com/err.srf?lc=1033#error=invalid_request&error_description=The+provided+value+for+the+input+parameter+'redirect_uri'+is+not+valid.+The+expected+value+is+'https://login.live.com/oauth20_desktop.srf'+or+a+URL+which+matches+the+redirect+URI+registered+for+this+client+application.
In the Microsoft Apps page, when I try to update the Redirect URI from https://localhost:44300/signin-microsoft to https://127.0.0.1:44300/signin-microsoft, it doesn't allow me to save my change and it shows me this error: Your URL can't contain a query string or invalid special characters, and it provides a 'Learn More' link: https://learn.microsoft.com/en-us/azure/active-directory/active-directory-v2-limitations#restrictions-on-redirect-uris
After reading the info in this link, I see nowhere that a URI like mine (https://127.0.0.1:44300/signin-microsoft) would be an unacceptable URL, as I'm not breaking any of their rules: I have no invalid characters, no query strings, etc.
My research: Looking online, people are getting the Your URL can't contain a query string or invalid special characters because they are actually using a query string or invalid special characters, such as in this link: https://social.msdn.microsoft.com/Forums/en-US/4f638860-ea57-4f0e-85e0-b28e1e357fe2/office-365-app-authorization-redirect-uri-issue?forum=WindowsAzureAD. I couldn't find a case where someone has entered a valid URI and they weren't allowed to save it.
Why I need 127.0.0.1 to work: I need to expose this website, which is running on my local box. In order to have the website running without having an instance of Visual Studio open all the time, I'm using csrun to host my website in the Azure local fabric (by the way, my app is an Azure Cloud Service, with an ASP.NET MVC 5 app as a web role). I followed this instruction for csrun: http://www.bardev.com/2013/03/12/how-to-deploy-application-to-windows-azure-compute-emulator-with-csrun/. Using csrun, I was able to host my website at https://127.0.0.1:444 (but, as with https://127.0.0.1:44300, MSA doesn't work). My end goal is to expose this website with a public URL using ngrok (https://www.sitepoint.com/use-ngrok-test-local-site/), so that anyone can access my site.
Therefore, my main question is: how can I have the Redirect URI be https://127.0.0.1:44300/signin-microsoft instead of https://localhost:44300/signin-microsoft?
Make sure you access this portal through https://identity.microsoft.com as this is the only way the steps below will work.
You can get around this error right now by adding the reply URL through the manifest. Log in to the portal, select the app you want to configure, scroll down, and hit the Edit Application Manifest button. Then you can add https://127.0.0.1:44300/ to the replyUrls field.
There's some funny behavior that will only allow this right now if you only register other localhost reply URLs. If this is the only reply URL you need, then it shouldn't be a problem.
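As a sketch, the relevant part of the edited manifest might look like this (the URLs are this question's values; all other manifest fields are elided):

```json
{
  "replyUrls": [
    "https://localhost:44300/signin-microsoft",
    "https://127.0.0.1:44300/signin-microsoft"
  ]
}
```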

Google Tag Manager 403's every request even if CORS mapping is defined

When I moved to AMP, Google Tag Manager stopped working.
The problem occurs every time when I open my AMPed page, I can see some errors in browser console, e.g.
First error:
https://www.googletagmanager.com/amp.json?id=MY_GTM_TAG&gtm.url=MY_HTTP_URL
(403)
Second error:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
In my class that extends WebMvcConfigurerAdapter, I overrode the addCorsMappings method like this:
@Override
public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/**")
            .allowedOrigins("*")
            .allowedHeaders("*")
            .allowCredentials(true);
}
But it still doesn't work (this method is executed on startup, I checked it). Do you have any ideas / tips why?
EDIT 1 (22.12.2016):
Q: How are you loading Tag Manager? Are you using the AMP version of the script? (@Jim Jeffries)
A: Yes, in <head> I included the following piece of code:
<script async custom-element="amp-analytics" src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>
and in <body> there is:
<amp-analytics config="https://www.googletagmanager.com/amp.json?id=${googleTagId}&gtm.url=SOURCE_URL" data-credentials="include"></amp-analytics>
I was having the same issue, and it turns out you can't use your old GTM "Web" container for this, so you'll have to create a specific AMP container.
As per Google's instructions found here:
Create an AMP container
Tag Manager features an AMP container type. Create a new AMP container for your project:
On the Accounts screen, click More Actions (More) for the account you'd like to use. Select Create Container.
Name the container. Use a descriptive name, e.g. "example.com - news - AMP".
Under "Where to Use Container", select AMP.
Click "Create".
Based on this thread, maybe you are doing an XMLHttpRequest to a different domain than the one your page is on. The browser blocks it, since for security reasons it normally only allows requests from the same origin. You need to do something different when you want to make a cross-domain request. A tutorial on how to achieve that is Using CORS.
When you are using Postman, it is not restricted by this policy. Quoted from Cross-Origin XMLHttpRequest:
Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
Also, based on this forum, the app must authenticate as a full admin and POST the desired CORS configuration to /rest/system/config.
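One detail worth knowing here: per the CORS spec, a credentialed request (which data-credentials="include" produces) cannot be answered with Access-Control-Allow-Origin: *; the server must echo the exact requesting origin and also send Access-Control-Allow-Credentials: true. A sketch of the check the browser effectively performs (illustrative, not browser source; `corsAllowsCredentialed` is a hypothetical name):

```typescript
// Illustrative check mirroring what a browser enforces on a credentialed
// cross-origin response; the header names are the real CORS ones.
function corsAllowsCredentialed(
  headers: { [name: string]: string },
  requestOrigin: string
): boolean {
  const allowOrigin = headers["access-control-allow-origin"];
  const allowCreds = headers["access-control-allow-credentials"];
  // A wildcard is rejected outright when credentials are included;
  // the server must echo the exact requesting origin.
  return allowOrigin === requestOrigin && allowCreds === "true";
}
```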

Web API 2.1 Windows Authentication CORS Firefox

Here's the scenario:
I created a web api project and an mvc project, like so:
http://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api
I installed CORS support via nuget and added the EnableCorsAttribute
I ran the project and everything worked as expected (GET, PUT, and POST) across Chrome, IE, and Firefox.
I then enabled Windows Authentication in the Web API project (yes, I really need Windows auth in the API project). In order to get this to work, I added the xhrFields arg to my jQuery.ajax call:
$.ajax({
  type: method,
  url: serviceUrl,
  data: JSON.stringify(foo),
  contentType: 'application/json; charset=UTF-8',
  xhrFields: {
    withCredentials: true
  }
}).done(function (data) {
  $('#value1').text(data);
}).error(function (jqXHR, textStatus, errorThrown) {
  $('#value1').text(jqXHR.responseText || textStatus);
});
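For comparison, the same credentialed call can be sketched with fetch; `buildCredentialedRequest` is a hypothetical helper, and serviceUrl/foo stand in for the question's variables:

```typescript
// Sketch of the equivalent credentialed request options for fetch.
function buildCredentialedRequest(method: string, foo: unknown) {
  return {
    method: method,
    credentials: "include" as const, // same effect as xhrFields: { withCredentials: true }
    headers: { "Content-Type": "application/json; charset=UTF-8" },
    body: JSON.stringify(foo),
  };
}
// Usage (browser): fetch(serviceUrl, buildCredentialedRequest("PUT", foo));
```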
In addition, I set the EnableCorsAttribute.SupportsCredentials property = true
I tested everything out. Chrome and IE worked; Firefox did not. Firefox receives a 401 in response to its preflight (OPTIONS) request.
It seems as though Firefox is not making an attempt to authenticate with the service.
Has anyone found a solution to this problem?
I figured out a 2-part solution.
The issue is that when Firefox issues an OPTIONS request and is denied with a 401, it makes no further attempt to re-authenticate. This led me down the path of bypassing authentication on all OPTIONS requests. I couldn't find much information on the subject, but I did find this:
401 response for CORS request in IIS with Windows Auth enabled
(Original page content quoted below)
Enabling NTLM Authentication (Single Sign-On) in Firefox
This HowTo will describe how to enable NTLM authentication (Single Sign-On) in Firefox.
How many of you have noticed that when you are using Internet Explorer and you browse to your company's intranet page, it will automatically authenticate you, but when you use Firefox you will be prompted with a login box?
I recently, in searching for solutions to allow NTLM authentication with Apache, stumbled across how to set a preference in Firefox that will pass the NTLM authentication information to a web server. The preference is network.automatic-ntlm-auth.trusted-uris.
So how do you do it?
1) Open Firefox and type “about:config” in the address bar. (without the quotes of course)
2) In the ‘Filter’ field type the following “network.automatic-ntlm-auth.trusted-uris”
3) Double click the name of the preference that we just searched for
4) Enter the URLs of the sites you wish to pass NTLM auth info to in the form of:
http://intranet.company.com,http://email.company.lan
5) Notice that you can use a comma separated list in this field.
6) Updated: I have created a VBScript that can be used to insert this information into a user's prefs.js file via group policy, or standalone if for some reason you want to use it that way.
The script is available to be downloaded here.
After downloading the script you will want to extract it from the ZIP archive and then modify the line starting with strSiteList.
NOTE: This script will not perform its function if the user has Firefox open at the time the script is executed. Running the script through group policy will work without problem unless for some reason your group policy launches Firefox before the execution of this script.
You can read through the rest of the script for additional information. If you have questions, comments or concerns please let me know.
Based on that, I set Anonymous Authentication to Enabled in the API project's settings (I still also had Windows Authentication set to Enabled).
After running the projects (mvc and api), I was prompted for credentials when issuing a CORS request. After supplying my credentials, I was able to make GET/POST/PUTS with Firefox successfully.
To eliminate the prompting of credentials in Firefox, I received a tip from Brock Allen that led me down the path of enabling NTLM authentication. I found a post here that offers instructions on how to make the appropriate settings change.
After adding 'http://localhost' to the network.negotiate-auth.trusted-uris setting, I am now able to issue CORS requests against all verbs using Firefox without prompting for credentials.
I'm currently solving this problem, and I didn't really like the solution of enabling Anonymous authentication.
So, after struggling a bit, I found the right combination, described in this answer.
I'm still not 100% happy: I want to avoid the code in Global.asax, but I haven't yet succeeded through web.config alone.
I hope this may help.

GAE Federated Identity Login HTTP 204

I've searched the site but can't find anything that exactly matches this situation.
Cliff's Notes:
Trying to implement federated login on GAE, using the sample Python code at https://developers.google.com/appengine/docs/python/users/, with a custom OpenID Provider. GAE returns either an HTTP 500 or an HTTP 204 depending on the server setup. There are no entries in the application logs in the admin console. Most likely it is a problem with the XRDS file and the discovery process. I'd appreciate any suggestions as to a cause or possible debugging methods. Thanks in advance.
Problem Details:
The code works fine when using the following providers in the 'federated_identity' parameter of the users.create_login_url() function:
https://www.google.com/accounts/o8/id
yahoo.com
aol.com
myopenid.com
The issues start when trying to use our own custom OpenID Provider. We have set up the OpenID plugin on a couple of Wordpress installs on different hosts for testing purposes. The plugin makes use of XRDS-Simple to publish the XRDS document at domain.com/?xrds. Example document contents:
<?xml version="1.0" encoding="UTF-8" ?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)" xmlns:simple="http://xrds-simple.net/core/1.0" xmlns:openid="http://openid.net/xmlns/1.0">
  <XRD xml:id="main" version="2.0">
    <Type>xri://$xrds*simple</Type>
    <!-- OpenID Consumer Service -->
    <Service priority="10">
      <Type>http://specs.openid.net/auth/2.0/return_to</Type>
      <URI>https://goff.wpengine.com/index.php/openid/consumer</URI>
    </Service>
    <!-- OpenID Provider Service (0) -->
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/server</Type>
      <URI>https://goff.wpengine.com/index.php/openid/server</URI>
      <LocalID>http://specs.openid.net/auth/2.0/identifier_select</LocalID>
    </Service>
    <!-- AtomPub Service -->
    <Service priority="10">
      <Type>http://www.w3.org/2007/app</Type>
      <MediaType>application/atomsvc+xml</MediaType>
      <URI>https://goff.wpengine.com/wp-app.php/service</URI>
    </Service>
  </XRD>
</xrds:XRDS>
I have verified that the OpenID provider works by using it to log in to other OpenID-enabled sites, including other Wordpress installs with the OpenID plugin, and Stack Overflow.
When using the login link http://api.lighthouseuk.net/_ah/login_redir?claimid=https://goff.wpengine.com/?xrds&continue=http://api.lighthouseuk.net/ GAE returns an HTTP 500 error after several seconds. We haven't found any reason for this - there are no log entries in the admin console - but I suspect it may have something to do with the configuration on wpengine.com not returning the XRDS file, or caching an incorrect one.
We have semi-confirmed this by running an identical setup on our dev server, which has no caching enabled. Now when we visit the login link, GAE returns an HTTP 302 response followed by an HTTP 204 response: http://www.google.com/gen_204?reason=EmptyURL.
As far as I can tell, after requesting the XRDS file GAE makes no further requests to our server. This leads me to believe that there might be a problem with the XRDS file but I can't find any details in the documentation about required attributes.
Things tried:
Login on other systems
If you send an authentication request to the URI specified in the XRDS document, the OpenID server responds correctly by prompting the user to log in. Again this suggests that GAE takes issue with the XRDS file, because no authentication request is made to our server. I can't figure out how to debug it when there are no errors recorded in the logs.
e.g. https://goff.wpengine.com/openid/server?openid.ns=http://specs.openid.net/auth/2.0&openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select&openid.identity=http://specs.openid.net/auth/2.0/identifier_select&openid.return_to=http://api.lighthouseuk.net/checkauth&openid.realm=http://api.lighthouseuk.net/&openid.mode=checkid_setup
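A checkid_setup request like the one above is just a fixed set of query parameters. A sketch of assembling one (the parameter names are from the OpenID Authentication 2.0 spec; the endpoint and realm values passed in are this question's, and `buildCheckidSetupUrl` is a hypothetical helper):

```typescript
// Build an OpenID 2.0 checkid_setup URL in identifier_select mode.
function buildCheckidSetupUrl(endpoint: string, returnTo: string, realm: string): string {
  const idSelect = "http://specs.openid.net/auth/2.0/identifier_select";
  const params: [string, string][] = [
    ["openid.ns", "http://specs.openid.net/auth/2.0"],
    ["openid.mode", "checkid_setup"],
    ["openid.claimed_id", idSelect],
    ["openid.identity", idSelect],
    ["openid.return_to", returnTo],
    ["openid.realm", realm],
  ];
  const query = params
    .map(([k, v]) => k + "=" + encodeURIComponent(v))
    .join("&");
  return endpoint + "?" + query;
}
```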
SSL
Obviously, for a production environment we would be using SSL on both Wordpress and GAE, but currently this is just a proof of concept. cURL, by default I believe, attempts to check the validity of SSL certificates, so we've tried various combinations of SSL settings, including having none at all. Seemingly no effect.
Wordpress permalinks
As the XRDS document, by default, points to /index.php/openid/server/ we attempted different combinations of permalink setting in Wordpress to see if it had any effect. It didn't.
URL encode
URL encoding the claimid seemed to have no effect - we still received the HTTP 204 response.
After giving up for a while I revisited this issue and managed to solve it. Answering here in case anyone else faces the same issues. Ultimately it was down to my use of secure URLs.
TL;DR
It should have been the first thing I checked, but make sure you have an SSL certificate on your server so that the OpenID server is accessible via a secure URL. You will get an HTTP 500 error from GAE if the URL is not secure or if the SSL certificate does not validate (obvious in hindsight, but this caught me out on a different test site with a custom-generated SSL certificate).
In addition, make sure that the XRDS document contains said secure address in the <URI> element.
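A quick way to catch this in the future is to scan the XRDS document for non-https endpoints. A naive regex-based sketch (fine for a document this small; `insecureXrdsUris` is a hypothetical helper):

```typescript
// Return every <URI> value in an XRDS document that is not served
// over https (naive regex parse, no real XML parsing).
function insecureXrdsUris(xrds: string): string[] {
  const re = /<URI>([^<]+)<\/URI>/g;
  const bad: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(xrds)) !== null) {
    const uri = m[1].replace(/^\s+|\s+$/g, "");
    if (uri.indexOf("https://") !== 0) bad.push(uri);
  }
  return bad;
}
```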
Setup Details
Using OpenID plugin Version 3.3.4
Using XRDS-Simple plugin Version 1.1
Wordpress version 3.8
Hosted on WPEngine.com
Google App Engine instance running the gae-boilerplate code (federated identity enabled)
Modifications
I played around with fiddler2 to see if I could learn anything more about the requests made to and from GAE. I compared the access logs from my OpenID server on WPEngine with the data I could pull from fiddler2 about the stackexchange OpenID server (openid.stackexchange.com).
XRDS-Simple Plugin
I modified this plugin to include an additional filter for the Wordpress HTTP headers:
add_filter('wp_headers', 'xrds_add_xrds_location');
function xrds_add_xrds_location($headers) {
error_log('Adding XRDS header', 0);
$headers['X-XRDS-Location'] = get_bloginfo('url').'/?xrds';
return $headers;
}
After that I modified the xrds_write() function to simply return the following xml:
<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS
    xmlns:xrds="xri://$xrds"
    xmlns:openid="http://openid.net/xmlns/1.0"
    xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="10">
      <Type>http://specs.openid.net/auth/2.0/server</Type>
      <Type>http://openid.net/extensions/sreg/1.1</Type>
      <Type>http://axschema.org/contact/email</Type>
      <URI>http://goff.wpengine.com/index.php/openid/server</URI>
    </Service>
  </XRD>
</xrds:XRDS>
This got rid of the http://www.google.com/gen_204?reason=EmptyURL redirect and simply returned a HTTP 500 error.
Curious, I tried various different things to get any response out of GAE (remember, GAE does not show errors that occur in the /_ah/ handlers).
As a last resort I modified the <URI> element to be https instead of http. This did the trick! I was successfully redirected to goff.wpengine.com and was asked to verify that I wanted to login. Excited, I clicked verify. PHP Fatal error: Call to a member function needsSigning() on a non-object. Balls. At least now I could ascertain problems from the PHP error log.
OpenID Plugin
After some quick Googling I found a thread on Google Code for the OpenID plugin. People had had similar issues and the consensus was that it was due to a plugin conflict. Comment #55 from user infinite.alis mentioned that adding the Relying Party to the user's 'Trusted Sites' consistently solved the problem. Lo and behold, after adding the address to my trusted sites the entire authentication flow completed without error!
Conclusion
I have yet to do a post mortem to figure out which of the changes to XRDS-Simple made the difference. I suspect that simply changing the <URI> element in XRDS-Simple to https would solve the problem (My previous tests with SSL only focused on making sure the users.create_login_url() function was passed a secure address, not that the XRDS file described the OpenID server via a secure address). Possibly need to play around with the filters for get_bloginfo('url') in the xrds_write() function.

Facebook Graph API Explorer behind HTTP Basic Auth

I keep getting the following error when I try to test one of my pages with Graph API Explorer:
{
  "error": {
    "message": "(#3502) Object at URL https://example.com/place/123456-Something has og:type of 'website'. The property 'bar' requires an object of og:type 'example:bar'. (http response code: 401)",
    "type": "OAuthException",
    "code": 3502
  }
}
The problem is that this page is behind HTTP Basic Authentication, and it returns 401 Unauthorized even if I pass proper credentials for the page. I can't believe it, but it seems that Graph API Explorer does not support HTTP Basic Authentication. Has anyone had this issue before, and do you know how to force Graph API Explorer to authenticate?
If the scraper (https://developers.facebook.com/tools/debug) cannot reach your page then it's not possible.
Open Graph pages must be public and reachable.
Using self-hosted objects requires that you host them as pages on your own webserver and all self-hosted objects are public.
https://developers.facebook.com/docs/opengraph/using-objects/
You can either punch a hole in basic auth via user agent (not secure, since that is trivial to spoof) or via Facebook's published list of crawler IP addresses.
I've written a quick PHP script here to generate an htaccess that includes simple auth and those IPs. FB says they shift the crawler IPs, so you'd want to cron that script to regenerate the htaccess every so often.
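The generated htaccess boils down to basic auth plus an IP allow list with Satisfy Any (Apache 2.2 syntax: either a valid login OR an allowed IP is sufficient). A sketch of the generation step in TypeScript rather than PHP (`htaccessAllowingCrawler` is a hypothetical helper; the AuthUserFile path is a placeholder, and the current crawler IP ranges are passed in rather than hard-coded):

```typescript
// Emit an Apache 2.2-style .htaccess that keeps basic auth but lets the
// listed IP ranges through. "Satisfy Any" means auth OR allowed IP.
function htaccessAllowingCrawler(ipRanges: string[]): string {
  const lines = [
    "AuthType Basic",
    'AuthName "Restricted"',
    "AuthUserFile /path/to/.htpasswd", // placeholder path
    "Require valid-user",
    "Order Deny,Allow",
    "Deny from all",
  ]
    .concat(ipRanges.map(ip => "Allow from " + ip))
    .concat(["Satisfy Any"]);
  return lines.join("\n");
}
```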
