I've been using Postman in my app development for some time and never had any issues. I typically use it with Google Chrome while I debug my ASP.NET API code.
About a month or so ago, I started having problems where Postman doesn't seem to send the cookie my site issued.
Through Fiddler, I inspect the call I'm making to my API and see that Postman is NOT sending the cookie issued by my API app. It's sending other cookies, but not the one it's supposed to send (screenshot omitted).
Under "Cookies", I do see the cookie I issue, i.e. .AspNetCore.mysite_cookie (screenshot omitted).
Any idea why this might be happening?
P.S. I think this issue started after I made some changes to my code to name my cookie. My API app uses social authentication, and I decided to name both cookies, i.e. the one I receive from Facebook/Google/LinkedIn once the user is authenticated and the one I issue to authenticated users. I named the cookie I get from the social sites social_auth_cookie, and the one I issue mysite_cookie. I suspect this renaming has something to do with the issue.
The cookie in question cannot legally be sent over an HTTP connection because its secure attribute is set.
For some reason, mysite_cookie has its secure attribute set differently from social_auth_cookie, either because you are setting it in code...
// Classic ASP.NET (System.Web): the Secure flag makes the cookie HTTPS-only
var cookie = new HttpCookie("mysite_cookie", cookieValue);
cookie.Secure = true;
...or because the service is configured to automatically set it, e.g. with something like this in web.config:
<httpCookies httpOnlyCookies="true" requireSSL="true"/>
The flag could also potentially be set by a network device (e.g. an SSL offloading appliance) in a production environment, but that's not very likely in your dev environment.
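Since the cookie is named .AspNetCore.mysite_cookie, the app is presumably ASP.NET Core, where the Secure flag is controlled through CookieSecurePolicy. Here's a minimal sketch of where to look, assuming cookie authentication is configured in Startup.ConfigureServices (the cookie name is taken from the question; the rest is not the OP's code):

// Startup.ConfigureServices excerpt (a sketch, not the OP's actual code);
// uses the Microsoft.AspNetCore.Authentication.Cookies namespace.
services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.Name = "mysite_cookie";
        // Always        => Secure is set unconditionally, so the cookie is only
        //                  sent back over HTTPS (which would explain the symptom).
        // SameAsRequest => Secure mirrors the scheme of the issuing request.
        options.Cookie.SecurePolicy = CookieSecurePolicy.SameAsRequest;
    });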
I suggest you try the same code base over an HTTPS connection. If you are working on code that affects authentication mechanisms, you really ought to set up your development environment with SSL anyway; otherwise you are going to miss a lot of bugs, and you won't be able to perform any meaningful pen testing or app scanning for potential threats.
You don't need to manage cookies manually if you already have them in your browser.
You can reuse your browser's cookies by installing the Postman Interceptor extension (to the left of the "In Sync" button).
I have been running into this issue recently with ASP.NET Core 2.0. ASP.NET Core 1.1, however, seems to work just fine, and the cookies get set in Postman.
From what you have described, it seems Postman is not picking up the cookie you want because it doesn't recognize the new name, or because it is still holding on to the old cookie.
Things you can try:
Undo all the name changes and see if it works (just to get to the root of the issue).
Rename one cookie and see if it still works, then proceed with the other.
I hope debugging this way will take you to the root cause of the issue.
What does "Pending" mean under the status column in the "Network" tab of Google Chrome Developer window?
This happens when my page script issues a GET request whose response contains content-headers for downloading a CSV file:
Content-type: text/csv;
Content-Disposition: attachment; filename=myfile.csv
This works fine in FF and IE7, downloading a CSV file as expected and opening a file picker to save the file, but Chrome does nothing. I confirmed that the server responds to the request, so it appears that Chrome will not process the response.
Curiously, everything works as expected if I type the URL into Chrome's address bar and hit Enter.
FYI: Chrome 10.0.648.204 on Windows XP
In my case, I found that the "pending" status was caused by the AdBlock extension. The image that I couldn't get to load had the word "ad" in the URL, so AdBlock kept it from loading.
Disabling AdBlock fixes this issue.
Renaming the file so that it doesn't contain "ad" in the URL also fixes it, and is obviously a better solution. Unless it's an advertisement, in which case you should leave it like that.
I also get this when using the HTTPS everywhere plugin.
This plugin has a list of sites that are also available over https instead of http, so I assume the original http request is cancelled before it is actually sent.
So for example, when I go to http://stackexchange.com, in the Developer tools I first see a request with status (terminated). This request has some headers, but only the GET, User-Agent, and Accept, and no response.
Then there is a request to https://stackexchange.com with full headers etc.
So I assume this status shows up for requests that are never actually sent.
I had some problems with pending requests for mp3 files.
I had a list of mp3 files and one player to play them. If I picked a file that had already been downloaded, Chrome would block the request and show "pending request" in the network tab of the developer tools.
All versions of Chrome seem to be affected.
Here is a solution I found:
player[0].setAttribute('src','video.webm?dummy=' + Date.now());
You just add a dummy query string to the end of each URL. This forces Chrome to download the file again instead of serving the blocked cached copy.
Another example with the Popcorn player (using jQuery):
url = $(this).find('.url_song').attr('url');
pop = Popcorn.smart( "#player_", url + '?i=' + Date.now());
This works for me; the resource is never stored in the cache. The same approach should also work for .csv files.
I had the same issue on OS X Mavericks; it turned out that Sophos anti-virus was blocking certain requests. Once I uninstalled it, the issue went away.
If you think it might be caused by an extension, one easy way to test this is to open Chrome with the --disable-extensions flag and see if that fixes the problem. If it doesn't, look beyond the browser to see whether any other application might be causing the problem, particularly security apps, which can affect requests.
I had a similar issue with application/json Ajax calls. In FF/IE they were fine; in Chrome, the status in the Developer tools Network window was always (pending) because a different status code was being returned.
In my case I changed my JSON response to send an HttpStatusCode of 200, and then Chrome was fine and the status text changed to 200 OK.
For example, using ASP.NET Web API:
return new HttpResponseMessage(HttpStatusCode.OK)
{
    Content = request.Content
};
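For context, a complete action built around that return statement might look like the sketch below (the EchoController name is made up, not from the original answer; Web API 2 binds an HttpRequestMessage parameter to the current request):

using System.Net;
using System.Net.Http;
using System.Web.Http;

// Hypothetical controller: returning an explicit 200 OK gives Chrome a final
// status to display instead of leaving the request stuck at (pending).
public class EchoController : ApiController
{
    public HttpResponseMessage Post(HttpRequestMessage request)
    {
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = request.Content
        };
    }
}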
A pending state can also simply mean your request is still in progress; as soon as the response arrives, the Time column is updated with the total elapsed time. (The original answer illustrated this with two screenshots: one showing a network call still processing, i.e. pending, and one showing the elapsed time once the call completed.)
The fix, for me, was to add the following to the top of the PHP file being requested.
header("Cache-Control: no-cache,no-store");
Same problem with Chrome: I had the following code in my HTML page:
<body>
...
<script src="http://myserver/lib/load.js"></script>
...
</body>
But load.js was always stuck in pending status when looking in the Network panel.
I found a workaround by loading load.js asynchronously:
<body>
...
<script>
  setTimeout(function () {
    var head, script;
    head = document.getElementsByTagName("head")[0];
    script = document.createElement("script");
    script.src = "http://myserver/lib/load.js";
    head.appendChild(script);
  }, 1);
</script>
...
</body>
Now it's working fine.
Encountered a similar issue recently.
My app is in Angular 11, and we have a form with validators that use regexes to validate the data. One data element contained a special character that the regex didn't handle, and it hung the entire browser. In fact, even though all network calls succeeded with 200 OK, Chrome showed no response from the backend and listed the requests as pending, with no console errors or anything. Fixing the regex resolved the issue.
After I found the issue, I googled more about it. Here is more explanation of it:
https://javascript.info/regexp-catastrophic-backtracking
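To show the gist in a hedged C# sketch (an illustration of the kind of pattern the linked article discusses, not the OP's actual Angular validator): nested quantifiers like (\d+)* force the engine to try exponentially many ways to split a near-matching input, which looks exactly like a hang. .NET at least lets you cap the match time; a browser's JavaScript regex engine has no such escape hatch.

using System;
using System.Text.RegularExpressions;

class BacktrackingDemo
{
    static void Main()
    {
        // Nested quantifiers: (\d+)* can split a digit run in exponentially
        // many ways, and a trailing '!' guarantees every split fails.
        var regex = new Regex(@"^(\d+)*$", RegexOptions.None,
                              TimeSpan.FromSeconds(1)); // .NET match timeout
        var input = new string('1', 30) + "!";

        try
        {
            Console.WriteLine(regex.IsMatch(input));
        }
        catch (RegexMatchTimeoutException)
        {
            // Without the timeout this call would effectively hang,
            // which is what happened to the browser here.
            Console.WriteLine("Catastrophic backtracking detected.");
        }
    }
}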
I came across this issue when I was debugging a local web application. The issue turned out to be AVG Antivirus and Firewall restrictions. I had to allow an exception through the firewall to get rid of the "Pending" status.
In my case, a simple restart of my browser (Chrome) fixed it straight away, like magic!
A little context: I happened to refresh my front-end web page and immediately went on to make a change to my API, which caused it to restart. During that window the front end was making calls to the API, which went pending because the API was reloading, and the browser cached that pending state. To get out of it I could either set no-cache (which I didn't want to do) or simply restart the browser; I chose the restart.
A little background
I encountered this issue when requesting a URL in my Django project. The server is set up with the Apache HTTP web server and basic auth for user authentication.
The URL I was accessing required no authentication, i.e. in my Apache config I had set Require all granted on the URL using the LocationMatch directive.
The issue
The URL I was trying to access returned a 200 status (in the Network tab in Chrome), but the static assets used for styling the requested webpage (CSS, JavaScript, font files, etc.) were not loading and returned a pending status.
Meanwhile, the page loaded partially and kept on loading. All this was happening with a basic-auth dialog showing in the browser, even though my URL was granted open access.
What worked for me
Interestingly, as I entered my credentials and logged in, the requested page loaded all the static assets. This made it very clear to me that the static assets directory might NOT have the necessary access permissions.
Then I granted access to the static assets directory as well (another Require all granted block) by updating my Apache config, and the requested URL and webpage loaded fine (200 status) without any basic-auth dialog or pending status.
In my case, there was a pending Chrome update that kept pages from loading until I restarted the browser. Cheers.
I encountered the same problem when requesting certain images from a page. I use JavaScript to set the src attribute of an img object, and if the network is poor, pending is displayed in the Network panel of the Chrome developer window. I think it's simply due to the poor network.
I am streaming a PDF to the browser in ASP.NET 2.0. This works in all browsers over HTTP and all browsers except IE over HTTPS. As far as I know, this used to work (over the past 5 years or so) in all versions of IE, but our clients have only recently started to report issues. I suspect the 'Do not save encrypted pages to disk' security option used to be disabled by default and at some point became enabled by default (Internet Options -> Advanced -> Security). Turning this option off helps, as a work-around, but is not viable as a long-term solution.
The error message I am receiving is:
Internet Explorer cannot download OutputReport.aspx from www.sitename.com.
Internet Explorer was not able to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later.
The tool used to create the PDF is ActiveReports from DataDynamics. Once the PDF is created, here is the code to send it down:
Response.ClearContent()
Response.ClearHeaders()
Response.AddHeader("cache-control", "max-age=1")
Response.ContentType = "application/pdf"
Response.AddHeader("content-disposition", "attachment; filename=statement.pdf")
Response.AddHeader("content-length", mem_stream.Length.ToString)
Response.BinaryWrite(mem_stream.ToArray())
Response.Flush()
Response.End()
Note: If I don't explicitly specify cache-control then .NET sends no-cache on my behalf, so I have tried setting cache-control to: private or public or maxage=#, but none of those seem to work.
Here is the twist: when I run Fiddler to inspect the response headers, everything works fine. The headers that I receive are:
HTTP/1.1 200 OK
Cache-Control: max-age=1
Date: Wed, 29 Jul 2009 17:57:58 GMT
Content-Type: application/pdf
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
content-disposition: attachment; filename=statement.pdf
Content-Encoding: gzip
Vary: Accept-Encoding
Transfer-Encoding: chunked
As soon as I turn Fiddler off and try again, it fails again. One other thing that I noticed is that when Fiddler is running I get a There is a problem with this website's security certificate warning message, and I have to click Continue to this website (not recommended) to get through. When Fiddler is off, I do not encounter this security warning and it fails right away.
I am curious what is happening between Fiddler and the browser so that it works when Fiddler is running but breaks when it's not, but more importantly, does anyone have any ideas how I could change my code so streaming PDFs to IE will work without making changes to the client machine?
Update: The Fiddler issues are resolved, thank you very much EricLaw, so now it behaves consistently (broken, with or without Fiddler running).
Based on Google searching, there seem to be plenty of reports of this same issue all over the web, each with its own specific combination of response headers that seems to fix the problem for that individual case. I've tried many of these suggestions, including adding an ETag, a LastModified date, removing the Vary header (using Fiddler), and dozens of combinations of the Cache-Control and/or Pragma headers. I tried "Content-Transfer-Encoding: binary" as well as "application/force-download" for the ContentType. Nothing has helped so far. There are a few Microsoft KB articles, all of which indicate that Cache-Control: no-cache is the culprit. Any other ideas?
Update: By the way, for completeness, this same issue occurs with Excel and Word outputs as well.
Update: No progress has been made. I emailed the .SAZ file from Fiddler to EricLaw and he was able to reproduce the problem when debugging IE, but there are no solutions yet. Bounty is going to expire...
Your Cache-Control header is incorrect. It should be Cache-Control: max-age=1 with the dash in the middle. Try fixing that first to see if it makes a difference.
Typically, I would say that the most likely culprit is your Vary header, as such headers often cause problems with caching in IE: http://blogs.msdn.com/ieinternals/archive/2009/06/17/9769915.aspx. You might want to try adding an ETag to the response headers.
Fiddler should have no impact on cacheability (unless you've written rules), and it sounds like you're saying that it does, which suggests that perhaps there's a timing problem of some sort.
> "Do not save encrypted pages to disk" security option used to be disabled by default
This option is still disabled by default (in IE6, 7, and 8), although IT Administrators can turn it on via Group Policy, and some major companies do so.
Incidentally, the reason you see the certificate error while running Fiddler is that you haven't elected to trust the Fiddler root certificate; see http://www.fiddler2.com/fiddler/help/httpsdecryption.asp for more on this topic.
After two weeks on a wild goose chase, I have not been able to find any combination of code changes that will allow this method of streaming PDF, Excel or Word documents when the 'Do not save encrypted pages to disk' option is turned on.
Microsoft has said this behavior is by design in a number of KB articles and private emails. It appears that when the 'Do not save encrypted pages to disk' option is turned on, IE is behaving correctly and doing what it is told to do. This post is the best resource I have found so far that explains why this setting would be enabled and the pros and cons of enabling it:
"The 'Do not save encrypted pages to disk' comes into play when dealing with SSL (HTTPS) connections. Just like a web server can send done information about how to cache a file one can basically set Internet Explorer up to not save files to the cache during an SSL (HTTPS) connection regardless if the web server advises you can.
What is the upside of turning this feature on? Security is the number one reason: pages are not stored in the Temporary Internet Files cache.
What is the downside? Slow performance: since nothing is saved to the cache, even that 1-byte GIF image used a dozen times on the page must be fetched from the web server each time. To make matters worse, some user actions may fail; for example, downloaded files will be deleted and an error presented, or opening PDF documents will fail, to name a few scenarios."
The best solution we can find at this point is to communicate to our clients and users that alternatives exist to using this setting:
"Use 'Empty Temporary Internet Files folder when browser is closed'. Each time the browser closes all files will be purged from the cache assuming there is not a lock on a file from another instance of the browser or some external application.
A lot of consideration needs to be given before utilizing 'Do not save encrypted pages to disk'. It sounds like a great security feature, and it is, but using it may cause your help desk calls to go up due to download failures or slow performance."
I found that this seemed to work for me:
Dim browser As System.Web.HttpBrowserCapabilities = Request.Browser
If (browser.Browser = "IE") Then
    Response.AppendHeader("cache-control", "private") ' IE only
Else
    Response.AppendHeader("cache-control", "no-cache") ' all others (FF/Chrome tested)
End If
I had a similar problem with PDF files I wanted to stream. Even with Response.ClearHeaders() I saw Pragma and Cache-Control headers added at runtime. The solution was to clear the headers in IIS (Right-click -> Properties on the page loading the PDF, then "Http headers" tab).
SOLVED: This is an IE problem, not an application problem.
Fix it with this: http://support.microsoft.com/kb/323308
It worked perfectly for me, after trying for a long time.
We faced a similar problem a long time back (this is Java EE). In the web application config we added:
<mime-mapping>
    <extension>PDF</extension>
    <mime-type>application/octet-stream</mime-type>
</mime-mapping>
This makes any PDF coming from your web application download instead of the browser trying to render it.
EDIT: It looks like you are streaming it. In that case, use a MIME type of application/octet-stream in your code rather than in the config. So here, instead of
Response.ContentType = "application/pdf"
you will use
Response.ContentType = "application/octet-stream"
What version of IE? I recall that Microsoft released a Hotfix for IE6 for this issue. Hope that is of some use?
I read about your Cache-Control goose chase, but I'll share what met my needs, in case it helps:
Try disabling gzip compression.
Adding it here hoping that someone might find this useful instead of going through the links.
Here is my code
byte[] bytes = LoadPdfFromDatabase(); // hypothetical helper: get the byte array from the DB

Response.Clear();
Response.ClearContent();
Response.ClearHeaders();
Response.Buffer = true;

// Prevent this page from being cached.
// NOTE: we cannot use the CacheControl property, or set the PRAGMA header value, due to a flaw re: PDF/SSL/IE
Response.Expires = -1;
Response.ContentType = "application/pdf";

// Specify the number of bytes to be sent
Response.AppendHeader("content-length", bytes.Length.ToString());
Response.BinaryWrite(bytes);

// Wrap up
Response.Flush();
Response.Close();
Response.End();
Like the OP I was scratching my head for days trying to get this to work, but I did it in the end so I thought I'd share my 'combination' of headers:
if (System.Web.HttpContext.Current.Request.Browser.Browser == "InternetExplorer"
    && System.Web.HttpContext.Current.Request.Browser.Version == "8.0")
{
    System.Web.HttpContext.Current.Response.Clear();
    System.Web.HttpContext.Current.Response.ClearContent();
    System.Web.HttpContext.Current.Response.ClearHeaders();
    System.Web.HttpContext.Current.Response.ContentType = "application/octet-stream";
    System.Web.HttpContext.Current.Response.AppendHeader("Pragma", "public");
    System.Web.HttpContext.Current.Response.AppendHeader("Cache-Control", "private, max-age=60");
    System.Web.HttpContext.Current.Response.AppendHeader("Content-Transfer-Encoding", "binary");
    System.Web.HttpContext.Current.Response.AddHeader("content-disposition", "attachment; filename=" + document.Filename);
    System.Web.HttpContext.Current.Response.AddHeader("content-length", document.Data.LongLength.ToString());
    System.Web.HttpContext.Current.Response.BinaryWrite(document.Data);
}
Hope that saves someone somewhere some pain!
I was running into a similar issue with attempting to stream a PDF over SSL and put this inside an iframe or object. I was finding that my aspx page would keep redirecting to the non-secure version of the URL, and the browser would block it.
I found that switching from an ASPX page to an ASHX handler fixed my redirection problem.
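For reference, a minimal generic handler along those lines might look like this sketch (PdfBytes is a made-up placeholder for however the PDF is actually produced):

using System.Web;

// Sketch of a generic handler (.ashx) that streams a PDF, bypassing the
// ASPX page lifecycle that was causing the redirect.
public class PdfHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        byte[] pdf = PdfBytes(); // hypothetical: load or generate the PDF
        context.Response.ContentType = "application/pdf";
        context.Response.AppendHeader("Content-Disposition",
            "attachment; filename=statement.pdf");
        context.Response.AppendHeader("Content-Length", pdf.Length.ToString());
        context.Response.BinaryWrite(pdf);
    }

    public bool IsReusable
    {
        get { return false; }
    }

    private byte[] PdfBytes()
    {
        return new byte[0]; // placeholder only
    }
}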
This problem has been solved thanks to your suggestions. See the bottom for details. Thanks very much for your help!
Our ASP.NET website is accessed from several specific and highly secure international locations. It has been operating fine, but we have added another client location which is exhibiting very strange behaviour.
In particular, when the user enters search criteria and clicks the search button the result list returns empty. It doesn't even show the '0 results returned' text, so it is as if the Repeater control did not bind at all. Similar behaviour appears in some, but not all, other parts of the site. The user is able to log in to the site fine and their profile information is displayed.
I have logged in to the site locally using exactly the same credentials as them and the site works well from here. We have gone through the steps carefully so I am confident it is not a user issue.
I bind the search results in the Page_Load of the search results page the first time it is loaded (the criteria are in the query string), i.e.
if (!IsPostBack) {
    BindResults();
}
I can replicate exactly the same behaviour locally by commenting out the BindResults() method call.
Does anybody know how the value of IsPostBack is calculated? Is it possible that their highly-secure firewall setup would cause IsPostBack to always return true, even when it is a redirect from another page? That could be a red herring as the problem might be elsewhere. It does exactly replicate the result though.
I have no access to the site, so troubleshooting is restricted to giving them instructions and asking them to tell me the result.
Thanks for your time!
Appended info: The client is behind a Microsoft ISA 2006 firewall running default rules. The site has been added to the Internet Explorer trusted sites list and tried in Firefox and Google Chrome, all with the same result.
SOLUTION: The winner for me was the suggestion to use Fiddler. What an excellent tool that no web developer should be without. Using this I was able to strip various headers from the request until I reproduced the problem. There were actually two factors that caused this bug, as is so often the case with such confusing issues.
Factor one – Where possible the web application uses GZIP compression as supported by all major browsers. The firewall was stripping off the header that specifies GZIP decompression support (Accept-Encoding: gzip, deflate).
Factor two – A bug in my code meant that some processing was bypassed when the content was being sent uncompressed. This problem was not noticed before because the application is used by a limited audience, all of which supported GZIP decompression.
If they're at all tech-savvy, I would have them download Fiddler or something similar, capture the entire HTTP session, and then send you the saved session. Maybe something in there will stick out.
Meanwhile, see if you can get an install of ISA Server (an evaluation install, if you have to, or one from MSDN if you have or know anyone with a sub) and see if you can replicate it locally.
Is it possible the client has disabled JavaScript and it's not picking up the __EVENTTARGET form value?
It might be some sort of proxy which creates a GET request out of a given POST request...
I am not sure how IsPostBack is calculated, but my guess would be that it checks the HTTP request to see if it's a POST or a GET...
Oh, and by the way, it's definitely NOT __EVENTTARGET...
I know this because Ra-Ajax does NOT pass any of those parameters to the server, and Ra-Ajax requests are still processed as IsPostBack requests...
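To make that guess concrete: a rough approximation (NOT the actual framework source) is that ASP.NET treats a request as a postback when it arrives as a POST carrying the page's own hidden state fields, so a firewall or proxy that rewrites the method or strips form fields could plausibly skew IsPostBack:

using System.Web;

static class PostBackHeuristic
{
    // Rough approximation only; Page.IsPostBack's real logic is more involved.
    public static bool LooksLikePostBack(HttpRequest request)
    {
        return request.HttpMethod == "POST"
            && request.Form["__VIEWSTATE"] != null;
    }
}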
Location, location, location: check the user's culture settings. That often causes issues like this.
Could you create a test POST page that passes the same things your search page does, and in the Page_Load write back all of the posted values to make sure they are getting through, particularly the __VIEWSTATE? For example:
foreach (string key in Request.Form)
{
    Response.Write("<br>" + key + "=" + Request.Form[key]);
}
Then ask one of the users to forward back what they see on that test page.
EDIT: There is documentation that some firewalls can corrupt the VIEWSTATE and some methods to get around it: View State Overview
Check the IIS logs to see if the request even makes it to your server. The ISA setup might be caching the initial request and serving that up in the succeeding requests.
Getting sporadic errors from users of a CMS; Ajax requests sometimes result in a "501 Method not implemented" response from the server. Not all the time; usually works.
The application has been stable for months. Users seem to be getting it with Firefox 3. I've seen a couple of references via Google to such problems being related to having "charset=UTF-8" in the Content-type header, but these may be spurious.
Has anyone seen this error or have any ideas about what the cause could be?
You may want to check the logs of the server to see what's causing the issue. For example, it might be that these requests are garbled, say, because of a flaw in the HTTP 1.1 persistent connection implementation.
Try this:
1. Clear your cookies and your cache.
2. Type about:config into the URL bar to open the list of configuration settings for Firefox.
3. Locate the setting 'network.automatic-ntlm-auth.trusted-uris' and set its value to the names of the servers to use NTLM with.
4. Locate the setting 'network.negotiate-auth.trusted-uris' and set its value to the names of the servers to use NTLM with.
5. Set network.automatic-ntlm-auth.allow-proxies = True.
6. Restart Firefox and test the URL to the application.
The problem occurs because your app is not running on the same domain as your service. You need to configure your server to accept those calls by adding the 'Access-Control-Allow-Origin' header, as in the sketch below.
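A minimal sketch of one way to add that header in classic ASP.NET, in Global.asax (the origin value is a placeholder, not from the original answer):

// Global.asax.cs excerpt (a sketch): emit the CORS header on every response
// so the browser will accept the cross-origin Ajax call.
protected void Application_BeginRequest(object sender, System.EventArgs e)
{
    System.Web.HttpContext.Current.Response.AppendHeader(
        "Access-Control-Allow-Origin", "https://app.example.com"); // placeholder origin
}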