Getting "Correlation failed" exception while signing in to Auth0 inside an iframe

I can successfully log in with the standalone MVC app outside an iframe, but when I put the same app inside an iframe, I get the exception: Correlation failed.
When I tried with Postman, I got the following response:
I have also tried different SameSiteMode configurations, but to no avail. Is there any way or workaround to achieve this? Thanks.

I suspect the cookie is not sent by the browser.
You need to use HTTPS, together with the SameSite=None; Secure attributes added to the cookie, to get this to work; otherwise the cookie will be blocked by the browser.
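As a rough sketch of what that configuration can look like (assuming an ASP.NET Core MVC app using the OpenID Connect handler, which is where the correlation cookie comes from; the scheme name and options shown are illustrative, not your exact setup):

// Sketch for Startup/Program configuration; types come from Microsoft.AspNetCore.Http
// and the OpenID Connect authentication package.
services.AddAuthentication(/* default schemes... */)
    .AddOpenIdConnect("Auth0", options =>
    {
        // ... authority, client id, etc. ...
        // Mark the handler's correlation and nonce cookies as SameSite=None; Secure
        // so the browser will send them in a cross-site iframe context (HTTPS required).
        options.CorrelationCookie.SameSite = SameSiteMode.None;
        options.CorrelationCookie.SecurePolicy = CookieSecurePolicy.Always;
        options.NonceCookie.SameSite = SameSiteMode.None;
        options.NonceCookie.SecurePolicy = CookieSecurePolicy.Always;
    });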
You can diagnose why a cookie was not accepted or used in the Chrome DevTools:
Open the browser developer tools (F12)
Click on the Network tab and reload the page
Click on the request in question
Select the Cookies tab
Then hover your mouse over the (i) icon to see the browser's reasoning

Related

Prevent HttpClient logging 404 errors in console

I am using HttpClient in a client-side Blazor app, but when I make a call to an API resource that sometimes isn't there (because it's an image lookup against Azure Blob Storage), it returns a 404 error, which is fine. My issue is that the 404 error is then displayed in the browser console (Google Chrome for me). Is there a way of preventing this? I don't want the user to see that it is a 404; instead I want to act on that 404 myself by displaying a default image.
Here is my offending but simple code that then logs to the browser console:
await client.GetAsync("URL of Image");
Can you give a bit more context and code examples? Are you handling the HttpClient errors in a try/catch block?
As far as I know, the browser will always log any 400-500 errors in the console, and you cannot disable this output. It's called the developer tools for a reason. Also, what is keeping users from viewing the network tab in those same tools?
I honestly think this is a non-issue unless you are naming files something suspicious.
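As far as reacting to the 404 goes, here is a minimal sketch, assuming a Blazor WebAssembly component with an injected HttpClient; the blob URL and the fallback image path are placeholders, not your real ones. The browser will still log the failed fetch, but the app can quietly fall back to a default image:

var imageUrl = "https://example.blob.core.windows.net/images/product.png"; // hypothetical URL
var fallbackUrl = "images/default.png";                                    // hypothetical fallback

var response = await client.GetAsync(imageUrl);

// GetAsync does not throw on a 404, so check the status code
// instead of wrapping the call in try/catch.
var displayUrl = response.IsSuccessStatusCode ? imageUrl : fallbackUrl;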

Why would Facebook oauth try to access the http version of an https site?

When I try to use Facebook login on this site:
https://parlay.io
by clicking the button at the top of the page, I get a popup with the URL:
https://www.facebook.com/login.php?skip_api_login=1&api_key=501604519940587&signed_next=1&next=https://www.facebook.com/v2.2/dialog/oauth?redirect_uri=https%3A%2F%2Fparlay.io%2F_oauth%2Ffacebook%3Fclose&display=popup&state=eyJsb2dpblN0eWxlIjoicG9wdXAiLCJjcmVkZW50aWFsVG9rZW4iOiJxd01acHRSb3hGX0hDM1FEV25vSVVSVXlDZTZWcWVFNUhrUHZVcHA5ZWhUIiwiaXNDb3Jkb3ZhIjpmYWxzZX0%3D&scope=email%2Cuser_friends&client_id=501604519940587&ret=login&cancel_url=https://parlay.io/_oauth/facebook?close&error=access_denied&error_code=200&error_description=Permissions+error&error_reason=user_denied&state=eyJsb2dpblN0eWxlIjoicG9wdXAiLCJjcmVkZW50aWFsVG9rZW4iOiJxd01acHRSb3hGX0hDM1FEV25vSVVSVXlDZTZWcWVFNUhrUHZVcHA5ZWhUIiwiaXNDb3Jkb3ZhIjpmYWxzZX0%3D#=&display=popup
I enter my Facebook credentials and submit. In Safari, this works and login completes. In Chrome, the popup goes blank but stays open. The popup URL is
https://parlay.io/_oauth/facebook?close&code=...
The popup console says:
Uncaught SecurityError: Blocked a frame with origin "https://parlay.io" from accessing a frame with origin "http://parlay.io". The frame requesting access has a protocol of "https", the frame being accessed has a protocol of "http". Protocols must match.
The error occurs on line 23:
I don't know why this popup is trying to access http://parlay.io. I do not have http or http://parlay.io as a setting anywhere in my app.
This is using the 'popup' style OAuth. When I switch to the 'redirect' style in Chrome, the first time I log in, I get this error on the server:
{"line":"398","file":"oauth_server.js","message":"Error in OAuth Server: redirectUrl (http://parlay.io/) is not on the same host as the app (https://parlay.io/)","time":{"$date":1435164688847},"level":"warn"}[parlay.io]
and I get redirected to the same sign-in page. The second time I click login, it works. The second click can be automated with:
I had the exact same problem, under similar conditions (Meteor 1.3.x, ROOT_URL set to https, FB/Twitter apps set to https.)
What fixed the problem for me was to set up my site to always redirect HTTP requests to HTTPS. I am using Cloudflare, so I followed the instructions here:
https://support.cloudflare.com/hc/en-us/articles/200170536-How-do-I-redirect-all-visitors-to-HTTPS-SSL-
After making the change, sign-in worked like a charm across different machines. Final results here:
https://goodbyegunstocks.com

jQuery form plugin - uploading file to a different domain

I have an ASP.NET website which is supposed to upload files to a handler in a different application/domain, using the jQuery Form plugin. When I try an example on the same domain (uploading to the same domain), this setup works. However, when I try to upload a file from siteA to siteB, I can see in Firebug (in the Network tab) that a valid response has been returned from the handler, yet the code never enters the 'success' handler and instead I get these errors in the Firebug console:
[jquery.form] Server abort: Error: Permission denied to access property 'document' (Error) log:
[jquery.form] cannot access response document: Error: Permission denied to access property 'document'
[jquery.form] aborting upload... aborted
In Chrome it is:
Unsafe JavaScript attempt to access frame with URL http://domainB/handler.ashx from frame with URL domainA. Domains, protocols and ports must match.
Now, I am aware that there are policies restricting Ajax calls between domains, but it seems that the jQuery Form plugin simply tries to access a URL that is forbidden.
Does anyone have a workaround for it? Any solution, please! :)
UPDATE:
I ended up hacking jquery.form so that it doesn't throw the cross-site exception, and since I don't need the actual result of the upload, it works for me!
Check this, and yes, this is the same-origin policy. There are ways to work around it using Flash, iframes, JSONP etc., but this will require editing the plugin.

What does "pending" mean for request in Chrome Developer Window?

What does "Pending" mean under the status column in the "Network" tab of Google Chrome Developer window?
This happens when my page script issues a GET request whose response contains content-headers for downloading a CSV file:
Content-type: text/csv;
Content-Disposition: attachment; filename=myfile.csv
This works fine in FF and IE7, downloading a CSV file as expected and opening a file picker to save the file, but Chrome does nothing. I confirmed that the server responds to the request, so it appears that Chrome will not process the response.
Curiously, all works as expected if I type the URL into Chrome's address bar and hit <enter>.
FYI: Chrome 10.0.648.204 on Windows XP
In my case, I found that the "pending" status was caused by the AdBlock extension. The image that I couldn't get to load had the word "ad" in the URL, so AdBlock kept it from loading.
Disabling AdBlock fixes this issue.
Renaming the file so that it doesn't contain "ad" in the URL also fixes it, and is obviously a better solution. Unless it's an advertisement, in which case you should leave it like that.
I also get this when using the HTTPS Everywhere plugin.
This plugin has a list of sites that are also available over https instead of http, so I assume the request is cancelled somehow before it is actually made.
For example, when I go to http://stackexchange.com, in the developer tools I first see a request with status (terminated). This request has only a few headers (GET, User-Agent, and Accept) and no response.
Then there is a request to https://stackexchange.com with full headers etc.
So I assume this status is used for requests that aren't actually sent.
I had some problems with pending requests for mp3 files.
I had a list of mp3 files and one player to play them. If I picked a file that had already been downloaded, Chrome would block the request and show "pending request" in the network tab of the developer tools.
All versions of Chrome seem to be affected.
Here is a solution I found:
player[0].setAttribute('src','video.webm?dummy=' + Date.now());
You just add a dummy query string to the end of each URL. This forces Chrome to download the file again.
Another example with the Popcorn player (using jQuery):
url = $(this).find('.url_song').attr('url');
pop = Popcorn.smart( "#player_", url + '?i=' + Date.now());
This works for me. In fact, the resource is then not stored in the cache at all. This should also work the same way for .csv files.
I had the same issue on OS X Mavericks; it turned out that Sophos anti-virus was blocking certain requests. Once I uninstalled it, the issue went away.
If you think it might be caused by an extension, one easy way to test this is to open Chrome with the --disable-extensions flag and see if that fixes the problem. If it doesn't, consider looking beyond the browser to see whether any other application might be causing it, specifically security software, which can interfere with requests.
I had a similar issue with application/json Ajax calls. In Firefox/IE they were fine, but in Chrome the status in the developer tools Network window was always (pending) because a different status code was being returned.
In my case I changed my JSON response to send an HttpStatusCode of 200; then Chrome was fine and the status text changed to 200 OK.
For example, using ASP.NET Web API:
return new HttpResponseMessage(HttpStatusCode.OK) {
    Content = request.Content
};
A Pending entry in the Time column simply means your request is still in progress; as soon as the response arrives, the column is updated with the total elapsed time.
The fix, for me, was to add the following to the top of the PHP file that was being requested:
header("Cache-Control: no-cache,no-store");
Same problem with Chrome: I had the following code in my HTML page:
<body>
...
<script src="http://myserver/lib/load.js"></script>
...
</body>
But load.js was always in pending status when looking in the Network panel.
I found a workaround by loading load.js asynchronously:
<body>
...
<script>
setTimeout(function(){
    var head, script;
    head = document.getElementsByTagName("head")[0];
    script = document.createElement("script");
    script.src = "http://myserver/lib/load.js";
    head.appendChild(script);
}, 1);
</script>
...
</body>
Now it's working fine.
Encountered a similar issue recently.
My app is in Angular 11, and we have a form with validators that use regexes to check the data. One data element contained a special character which the regex wasn't handling, and it made the entire browser hang. Even though all network calls were successful with 200 OK, Chrome was not showing any response returned by the backend and was also showing the requests in a pending state; there were no console errors or anything. Fixing the regex resolved the issue.
After I found the issue, I googled more about it. Here is more explanation:
https://javascript.info/regexp-catastrophic-backtracking
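For illustration only (this is not the actual Angular validator), here is a small sketch in C# of the same catastrophic-backtracking effect, using the nested-quantifier pattern from the article linked above; adding a match timeout turns the hang into a catchable exception:

using System;
using System.Text.RegularExpressions;

class BacktrackingDemo
{
    static void Main()
    {
        // Nested quantifiers make this pattern prone to catastrophic backtracking
        // when the input almost matches but ultimately fails.
        var validator = new Regex(@"^(\w+\s?)*$",
                                  RegexOptions.None,
                                  TimeSpan.FromMilliseconds(250)); // guard with a match timeout

        var input = new string('a', 40) + "!"; // the trailing '!' forces the match to fail

        try
        {
            Console.WriteLine(validator.IsMatch(input));
        }
        catch (RegexMatchTimeoutException)
        {
            // Without the timeout this call could run for an extremely long time,
            // which in a browser shows up as a completely unresponsive page.
            Console.WriteLine("Regex timed out - the pattern needs rewriting.");
        }
    }
}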
I came across this issue when I was debugging a local web application. The issue turned out to be AVG Antivirus and Firewall restrictions. I had to allow an exception through the firewall to get rid of the "Pending" status.
In my case, a simple restart of my browser (Chrome) made it work straight away afterwards, like magic!
A little bit of context: I happened to refresh my frontend web page and straight away went on to make changes to my API, which caused it to restart. During that time the frontend was making calls to the API, which ended up "pending" because the API was reloading. The browser cached that pending state. To get out of it I could either set no-cache (which I didn't want to do) or simply restart the browser; I chose the restart.
A little background
I encountered this issue when requesting a URL in my Django project. The server is set up using the Apache HTTP web server with basic auth for user authentication.
The URL I was accessing required no authentication, i.e. in my Apache config I had set Require all granted on the URL using the LocationMatch directive.
The issue
The URL I was trying to access returned a 200 status (in the Network tab in Chrome), but the static assets used for styling the requested webpage (CSS, JavaScript, font files etc.) were not loading and stayed in pending status.
Meanwhile, the page loaded partially and kept on loading, with the basic-auth dialog shown in the browser even though my URL was granted full access.
What worked for me
Interestingly, once I entered my credentials and logged in, the requested page loaded all the static assets. This made it very clear to me that the static assets directory did NOT have the necessary access permissions.
Then I granted access to the static assets directory by updating my Apache config, and the requested URL and webpage loaded fine (200 status) without any basic-auth dialog or pending status.
In my case, there was a pending Chrome update, and pages wouldn't load until I restarted the browser. Cheers
I encountered the same problem when requesting certain images from a page. I use JavaScript to set the src attribute of an img element, and if the network is poor, pending is displayed in the Network panel of the Chrome developer window. I think it's due to the poor network.

session lost on redirect

I have a web app that is being hit by Facebook. The login page retrieves the keys that I need and sets some session variables. When the server then redirects the user to the next page, the session information is lost. I'm running the IIS engine on Vista Ultimate at the moment; app pools don't matter because I'm using a state service, and I'm still losing the session state. I've tried both the overloaded Response.Redirect method and adding a header to the page to force the redirect, and none of this seems to work. Does anyone have any idea what I'm missing?
I’ve tried both of these:
Response.Headers.Add("refresh", "3;url=Dashboard.aspx")
And
Response.Redirect("Dashboard.aspx", False)
[EDIT]
So I just did a little experiment, and it turns out that when I hit the URL directly from the Facebook page I get the problem, but when I copy the URL for the iframe into a new browser window and try it, it works fine.
[EDIT]
So I found an article on this, and after adding the header the problem was solved (for now):
http://support.microsoft.com/kb/323752
Response.AddHeader("P3P: CP", "CAO PSA OUR")
when I hit the URL directly from the Facebook page I get the problem, but when I copy the URL for the iframe into a new browser window and try it, it works fine.
If you're in an iframe, any cookies you set are “third-party cookies”. Third-party cookies may be subject to more stringent conditions than the normal “first-party” cookies you are setting when the user is directly on your site. This can be due to different browser default cookie handling or because the user has deliberately configured it like that. (And for good reason: many third-parties are unpleasant privacy-invading advertisers.)
In particular, in IE6+ with the default settings, you cannot set a third-party cookie unless you write a P3P policy promising that you will be a good boy and not flog your users' data to the nearest identity thief.
(In practice of course P3P is a dead loss, since there's nothing stopping the site owner from just lying. Another worthless complication that provides no actual security. Yay.)
I'd try running Fiddler and see if your session cookie is being sent properly with the response when interacting with your app via Facebook.
The session also depends on cookie support on the client. When you say the app "is being hit by Facebook", are you sure that, by whatever means they are "hitting" you, cookies are supported?
Response.Redirect and a refresh header don't carry the session. Server.Transfer() can, but it loses the ability to transfer to other servers/sites.
