How to deliver big files in ASP.NET Response?

I am not looking for an alternative to streaming file contents from the database; I am looking for the root of the problem. This ran fine under IIS 6, where we ran our app in classic mode; now we have upgraded to IIS 7 and run the app pool in integrated pipeline mode, and this problem started.
I have a handler in which I have to deliver big files in response to client requests, and I face the following problems.
Files are of average size 4 to 100 MB, so let's consider the case of an 80 MB file download.
Buffering On, Slow Start
Response.BufferOutput = true;
This results in a very slow start to the download: the progress bar does not even appear for several seconds, typically 3 to 20. The reason is that IIS reads the entire file first, determines the Content-Length, and only then begins the transfer. When the file is played in a video player it runs very, very slowly; the iPad, however, only downloads a fraction of the file first, so it works fast.
Buffering Off, No Content-Length, Fast Start, No Progress
Response.BufferOutput = false;
This results in an immediate start; however, the end client (a typical browser like Chrome) does not know the Content-Length, since IIS does not know it either, so it shows no progress and instead just says X KB downloaded.
Buffering Off, Manual Content-Length, Fast Start, Progress and Protocol Violation
Response.BufferOutput = false;
Response.AddHeader("Content-Length", file.Length.ToString());
This results in a correct, immediate file download in Chrome and others; however, in some cases the IIS handler fails with a "Remote Client Closed Connection" error (this is very frequent), and other WebClient consumers get a protocol violation. This happens on 5 to 10% of requests, not every request.
My guess is that IIS does not send an HTTP 100 Continue when we don't buffer, and the client may disconnect because it is not expecting any output. Reading the file from its source may take longer; I have increased the timeout on the client side, but IIS seems to time out anyway, and I have no control over that.
Is there any way I can force the Response to send 100 Continue and not let anyone close the connection?
UPDATE
I found the following headers in Firefox/Chrome; nothing here looks unusual enough to explain a protocol violation or a bad header.
Access-Control-Allow-Headers:*
Access-Control-Allow-Methods:POST, GET, OPTIONS
Access-Control-Allow-Origin:*
Access-Control-Max-Age:1728000
Cache-Control:private
Content-Disposition:attachment; filename="24.jpg"
Content-Length:22355
Content-Type:image/pjpeg
Date:Wed, 07 Mar 2012 13:40:26 GMT
Server:Microsoft-IIS/7.5
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
UPDATE 2
Turning off recycling still did not help much, but I have increased MaxWorkerProcesses to 8, and I now get fewer errors than before.
On average, out of 200 requests per second, 2 to 10 requests fail, and this happens almost every other second.
UPDATE 3
About 5% of requests keep failing with "The server committed a protocol violation. Section=ResponseStatusLine". I have another program that downloads content from the web server using WebClient, and it hits this error 4-5 times a second; on average 5% of its requests fail. Is there any way to trace the WebClient failure?
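A minimal sketch of how such a WebClient consumer could be instrumented to capture more detail about the failures; the URL and file name here are made up, not the actual application:

using System;
using System.Net;

class DownloadProbe
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            try
            {
                // hypothetical endpoint; substitute the real handler URL
                client.DownloadFile("http://myserver/download.ashx?id=24", "24.jpg");
            }
            catch (WebException ex)
            {
                // Status distinguishes protocol violations from timeouts etc.
                Console.WriteLine("Status: {0}", ex.Status);
                Console.WriteLine("Message: {0}", ex.Message);
                var response = ex.Response as HttpWebResponse;
                if (response != null)
                    Console.WriteLine("HTTP status: {0}", (int)response.StatusCode);
            }
        }
    }
}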
Problems Redefined
Zero Byte File Received
IIS closes the connection for some reason; on the client side, the WebClient receives 0 bytes for a file that is not zero bytes. We do an SHA1 hash check, which is how we caught it; on the IIS web server, no error is recorded.
This was my mistake, and it's resolved. We are using Entity Framework, and it was reading dirty (uncommitted) rows because the read was not in a transaction scope; putting it in a transaction scope has resolved this issue.
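As an illustrative sketch of that fix (MyDbContext, FileContents and the entity shape are placeholder names, not our real code), the read is wrapped in a TransactionScope so it no longer sees uncommitted rows:

using System.Linq;
using System.Transactions;

public class FileReader
{
    public byte[] ReadFileBytes(int fileId)
    {
        var options = new TransactionOptions
        {
            // ReadCommitted prevents the query from seeing uncommitted rows.
            IsolationLevel = IsolationLevel.ReadCommitted
        };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        using (var db = new MyDbContext()) // placeholder EF context
        {
            byte[] data = db.FileContents
                            .Where(f => f.Id == fileId)
                            .Select(f => f.Data)
                            .First();
            scope.Complete();
            return data;
        }
    }
}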
Protocol Violation Exception Raised
WebClient throws a WebException saying "The server committed a protocol violation. Section=ResponseStatusLine".
I know I can enable unsafe header parsing, but that is not the point. When it is my HTTP handler that is sending proper headers, I don't know why IIS would send anything extra (I checked in Firefox and Chrome: nothing unusual). This happens only 2% of the time.
UPDATE 4
I found sc-win32 error 64 in the IIS logs, and I read somewhere that MinBytesPerSecond in WebLimits must be changed from 240 to 0, but everything is still the same. However, I have noticed that whenever IIS logs the sc-win32 64 error, it records an HTTP status of 200 even though there was some error. I can't turn on Failed Request Tracing for status 200 because it would produce massive files.
Both of the above problems were solved by changing MinBytesPerSecond as well as disabling sessions; I have added a detailed answer summarizing every point.

When you set the Content-Length with BufferOutput set to false, the likely reason for the failures is that IIS tries to gzip the file you send; having set the Content-Length yourself, IIS cannot change it to match the compressed size, and the errors start (*).
So keep BufferOutput set to false, and disable gzip in IIS for the files you send - or disable IIS gzip for all files and handle the gzip part programmatically, keeping the files you send out of gzip.
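As a sketch of the programmatic route (this reuses the Accept-Encoding trick shown in the last answer on this page; treat the handler context as an assumption):

// Inside the handler's ProcessRequest, before writing anything:
// clearing Accept-Encoding makes the IIS dynamic-compression filter
// skip this response, so a manually set Content-Length stays valid.
context.Request.Headers["Accept-Encoding"] = "";
context.Response.BufferOutput = false;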
Some similar questions for the same reason:
ASP.NET site sometimes freezing up and/or showing odd text at top of the page while loading, on load balanced servers
HTTP Compression: Some external scripts/CSS not decompressing properly some of the time
(*) Why not change it back again? Because once a header is set you cannot take it back, unless you have enabled that option in IIS and the headers have not already been sent to the browser.
Follow up
If it is not gzip, the next thing that comes to mind is that the file is being sent, the connection gets delayed for some reason, hits a timeout, and is closed. That is when you get the "Remote Host Closed The Connection" error.
This can be solved depending on the cause:
The client really closed the connection.
The timeout is from the page itself, if you use a handler (in that case, though, the message would probably be "Page Timed Out").
The timeout comes from idle waiting: the page takes longer than the execution timeout, gets a timeout, and the connection is closed.
The pool recycles at the moment you send the file. Disable all pool recycles! These are the most likely causes I can think of right now.
If it is coming from IIS: go to the web site properties and make sure you set the largest possible "Connection Timeout" and enable "HTTP Keep-Alives".
Change the page timeout in web.config (programmatically you can change it only for one specific page):
<httpRuntime executionTimeout="43200" />
Also have a look at:
http://weblogs.asp.net/aghausman/archive/2009/02/20/prevent-request-timeout-in-asp-net.aspx
Session lock
One more thing to examine: do not use session in the handler you use to send the file, because the session locks all of that user's requests until the download finishes, so if a user takes a long time to download a file, a second request may time out.
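As a minimal sketch (assuming a plain IHttpHandler; the class name is made up): a handler that does not implement IRequiresSessionState, so ASP.NET never takes the session lock for it. If you only need read access, implementing IReadOnlySessionState avoids the exclusive lock as well:

using System.Web;

// Because this handler does NOT implement IRequiresSessionState,
// ASP.NET acquires no session lock, so one user's long download
// cannot block that same user's other requests.
public class SessionlessDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // ... stream the file here ...
    }
}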
Some related questions:
call aspx page to return an image randomly slow
Replacing ASP.Net's session entirely
Response.WriteFile function fails and gives 504 gateway time-out

The correct way to deliver big files in IIS is the following:
Set MinBytesPerSecond to zero in WebLimits. (This will certainly help improve performance, as IIS otherwise chooses to close clients holding Keep-Alive connections with smaller-size transfers.)
Allocate more worker processes to the application pool; I have set it to 8. This should be done only if your server is mainly distributing large files: it will certainly make other sites perform more slowly, but it ensures better delivery. We set 8 because this server has only one website, and it just delivers huge files.
Turn off App Pool Recycling
Turn off Sessions
Leave Buffering On
Before each of the following steps, check whether Response.IsClientConnected is true; otherwise give up and don't send anything (see the sketch after this list).
Set Content-Length before sending the file.
Flush the Response.
Write to the output stream, and flush at regular intervals.
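Here is a minimal sketch combining the steps above; the handler name and file path are illustrative, not the poster's actual code:

using System;
using System.IO;
using System.Web;

public class BigFileHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath("~/files/big.bin"); // hypothetical file
        var response = context.Response;

        if (!response.IsClientConnected)
            return; // give up and send nothing

        response.ContentType = "application/octet-stream";
        response.AddHeader("Content-Length",
            new FileInfo(path).Length.ToString());
        response.Flush();

        using (var source = File.OpenRead(path))
        {
            var buffer = new byte[64 * 1024];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                if (!response.IsClientConnected)
                    return; // client went away; stop writing
                response.OutputStream.Write(buffer, 0, read);
                response.Flush(); // flush at regular intervals
            }
        }
    }
}

Note that buffering is left on and the response is flushed manually, so the Content-Length header goes out immediately while IIS still avoids reading the whole file up front.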

What I would do is use the not-so-well-known ASP.NET Response.TransmitFile method, as it's very fast (and possibly uses the IIS kernel cache) and takes care of all the header stuff. It is based on the unmanaged Windows TransmitFile API.
But to be able to use this API, you need a physical file to transfer. So here is pseudo-C# code that explains how to do it, with a fictional myCacheFilePath physical file path. It also supports client caching. Of course, if you already have a file at hand, you don't need to create the cache:
if (!File.Exists(myCacheFilePath))
{
    LoadMyCache(...); // saves the file to disk. don't do this if your source is already a physical file (not stored in a db for example).
}

// we suppose user-agent (browser) caching is enabled:
// check the If-Modified-Since header sent by the browser
DateTime ifModifiedSince = DateTime.MaxValue;
string ifm = context.Request.Headers["If-Modified-Since"];
if (!string.IsNullOrEmpty(ifm))
{
    try
    {
        ifModifiedSince = DateTime.Parse(ifm, DateTimeFormatInfo.InvariantInfo);
    }
    catch
    {
        // do nothing; keep DateTime.MaxValue so the check below fails
    }

    // file has not changed: just send this information, but truncate milliseconds first
    if (ifModifiedSince == TruncateMilliseconds(File.GetLastWriteTime(myCacheFilePath)))
    {
        ResponseWriteNotModified(...); // HTTP 304
        return;
    }
}

Response.ContentType = contentType; // set your file content type here
Response.AddHeader("Last-Modified", File.GetLastWriteTimeUtc(myCacheFilePath).ToString("r", DateTimeFormatInfo.InvariantInfo)); // tell the client to cache that file

// this API uses lower Windows levels directly and is not memory/CPU intensive for sending one file. It also caches files in the kernel.
Response.TransmitFile(myCacheFilePath);
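The snippet relies on a few fictional helpers (LoadMyCache, ResponseWriteNotModified, TruncateMilliseconds). As a hedged guess, the latter two could look something like this; these are assumptions, not part of the original answer:

// Hedged sketches of the fictional helpers used above.
static void ResponseWriteNotModified(HttpResponse response)
{
    response.StatusCode = 304;       // Not Modified
    response.SuppressContent = true; // a 304 must carry no body
}

static DateTime TruncateMilliseconds(DateTime value)
{
    // If-Modified-Since has one-second resolution, so drop sub-second ticks.
    return new DateTime(value.Year, value.Month, value.Day,
                        value.Hour, value.Minute, value.Second, value.Kind);
}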

This piece of code works for me.
It starts streaming data to the client immediately.
It shows progress during the download.
It doesn't violate HTTP: the Content-Length header is specified, and chunked transfer encoding is not used.
protected void PrepareResponseStream(string clientFileName, HttpContext context, long sourceStreamLength)
{
    context.Response.ClearHeaders();
    context.Response.Clear();
    context.Response.ContentType = "application/pdf";
    context.Response.AddHeader("Content-Disposition", string.Format("attachment; filename=\"{0}\"", clientFileName));
    // set cacheability to private to allow IE to download it via HTTPS. Otherwise it might refuse it.
    // see the reason for HttpCacheability.Private at http://support.microsoft.com/kb/812935
    context.Response.Cache.SetCacheability(HttpCacheability.Private);
    context.Response.Buffer = false;
    context.Response.BufferOutput = false;
    context.Response.AddHeader("Content-Length", sourceStreamLength.ToString(System.Globalization.CultureInfo.InvariantCulture));
}

protected void WriteDataToOutputStream(Stream sourceStream, long sourceStreamLength, string clientFileName, HttpContext context)
{
    PrepareResponseStream(clientFileName, context, sourceStreamLength);
    const int BlockSize = 4 * 1024 * 1024;
    byte[] buffer = new byte[BlockSize];
    int bytesRead;
    Stream outStream = context.Response.OutputStream;
    while ((bytesRead = sourceStream.Read(buffer, 0, BlockSize)) > 0)
    {
        outStream.Write(buffer, 0, bytesRead);
    }
    outStream.Flush();
}
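A possible call site from the same handler class could look like this; the file path is hypothetical:

// Illustrative usage; the PDF path is made up.
public void ProcessRequest(HttpContext context)
{
    string path = context.Server.MapPath("~/files/report.pdf");
    using (FileStream sourceStream = File.OpenRead(path))
    {
        WriteDataToOutputStream(sourceStream, sourceStream.Length, "report.pdf", context);
    }
}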

Related

The server receives an HTTP request body that is different from what was sent

FOREWORD
This may well be the weirdest problem I have ever witnessed in 15 years. It is 100% reproducible on a specific machine that sends a specific request when authenticated as a specific user, if the request is sent from Chrome (it doesn't happen from Edge, it doesn't happen from cURL or Postman). I can't expect an exact solution to my disturbingly specific issue, but any pointers about what could theoretically cause it are more than welcome.
WHAT HAPPENS
We have several PCs in our factory, that communicate with a central HTTP server (hosted on premise, if that even matters: they're on the same LAN). Of course, we have users who could work on any of these machines.
When a certain user does a specific action on a certain one of those machines, she gets a message about an "HTTP error". The server responds with a 400, specifying that the JSON in the request is ill-formed. Fine, let's look at the JSON: it's an 80-character string, and it looks very well-formed. I check its length: it is in fact an 80-character string, and the request has a Content-Length of 80. All is fine, but the server responds with the 400.
The same user on a different machine, or a different user on the same machine, or any other user on any other machine can do the very same action and the very same corresponding HTTP request. The same user, on that machine, can do the action fine using Edge instead of Chrome (despite both being Chromium-based). If I "export" the request from the browser's Dev Tools into any format (cURL bash, cURL cmd, JS fetch...), the request in Chrome and the one in Edge look the same.
Our UI sends the request using Axios. If I send it with fetch, I still get the error. If I serialize the JSON myself and send the string (instead of letting Axios/fetch handle the serialization), I still get the error. If I send that same request using any other client (cURL from command line, Postman...) I don't get the error - same as in Edge.
WHAT I FINALLY NOTICED (and how I hacked the issue into submission)
The server is ASP.NET Core (using .Net 5), so I added a middleware to record the received request. Apparently, in the specified conditions, the server receives a request body that is different from what was sent by the client. Say the client sends:
{"key1":"value1","key2":"value2"}
Well, the server receives:
{"key1":"value1","key2":"value2"
Notice the newline at the beginning and the missing closing brace at the end. The body apparently gets an extra character at the start, and the final character is lost - either because it is never actually sent/received, or because the Content-Length dictated that it be truncated.
This clearly explains the failed deserialization (the string is in fact invalid JSON) and the resulting 400 response.
Since this bug had been blocking or hindering production for several days, I wrote a "healer" middleware that tries to deserialize the received JSON string (if the Content-Type indicates JSON, of course); if deserialization fails, it looks for a single non-opening-brace character at the start of the string, and if it finds one, it rewrites the body by removing that character and appending a closing brace. It lets the healed request continue down the pipeline and notifies me via e-mail.
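For illustration, a stripped-down sketch of what such a healer middleware could look like in ASP.NET Core on .NET 5; the class name and details are mine, not the production code (which also sends the notification e-mail):

using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical sketch of the "healer" middleware described above.
public class JsonHealerMiddleware
{
    private readonly RequestDelegate _next;

    public JsonHealerMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        if (context.Request.ContentType?.Contains("application/json") == true)
        {
            // Buffer the body so it can be read here and again further down the pipeline.
            context.Request.EnableBuffering();
            using var reader = new StreamReader(context.Request.Body, Encoding.UTF8, leaveOpen: true);
            string body = await reader.ReadToEndAsync();
            context.Request.Body.Position = 0;

            if (!IsValidJson(body) && body.Length > 0 && body[0] != '{')
            {
                // Drop the stray leading character and restore the lost brace.
                string healed = body.Substring(1) + "}";
                if (IsValidJson(healed))
                {
                    byte[] bytes = Encoding.UTF8.GetBytes(healed);
                    context.Request.Body = new MemoryStream(bytes);
                    context.Request.ContentLength = bytes.Length;
                    // ...send the notification e-mail here...
                }
            }
        }
        await _next(context);
    }

    private static bool IsValidJson(string text)
    {
        try { using (JsonDocument.Parse(text)) return true; }
        catch (JsonException) { return false; }
    }
}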
THE AFTERMATH
All has been working fine since I released the fix, and we even asked our system managers to replace the PC that was causing problems, since we could only think of some vicious issue with the OS/browser setup or configuration.
However, when they replaced it, I started getting the notification e-mail again... this time from two other users, always on that same machine, each of them having the same issue (which is being healed, by the way), each of them on a different request (but always the same request for each user). The requests point to different URLs and their bodies have different lengths and complexity (JSON-wise). I haven't re-run all the tests I did before (different browser, cURL, fetch...), but the diagnosis of the problem is the same, and it is being handled by the healer middleware.
A colleague reported that they had a similar problem several months ago, which they didn't investigate back then. They're not sure it was the very same workstation, but they replaced the PC and the error didn't happen any more. It seems to be pretty much random, and I still have no idea what could cause such behaviour.
Here is some more info about the platform, if any of this is relevant:
clients: Windows 10 PCs, using Chrome in kiosk mode, launched by a batch that is located on a network share;
UI: React, sending HTTP requests with Axios;
server: .Net 5 ASP.NET Core service.
UPDATE
I've recorded the network traffic using Wireshark on the client PC. (The capture screenshot is not reproduced here.) Apparently the request is already modified when it leaves the client host.

Size of the request headers is too long

I'm currently working on an ASP.NET MVC website and it works fine.
But I have a problem that I don't understand at all. When I launch my website from Visual Studio with Chrome, for example, there is no problem; but when I stop it and try another test with Firefox, for example, my URL keeps growing and then I get this error:
HTTP 400. The size of the request headers is too long.
Can someone explain why this is happening? Is it something in my code, or does it come from IIS Express or something else?
Thanks in advance
You can probably increase the size of the requests your web server will allow. However, take a look at the number and size of the cookies your browser is sending to the server. Clear your cookies and try again, and see if you can reduce the size and number of cookies your app uses. The less, the better! Mobile browsers can hit these errors sooner, as they may not allow headers as large as desktop browsers do.
The error can also mean the query string is getting too long.
.NET MVC SOLUTION FOR ME
In my case, it was my claims that were multiplying my session cookies, so that they looked like this in my browser's cookies:
.AspNet.ApplicationCookie
.AspNet.ApplicationCookieC1
.AspNet.ApplicationCookieC2
.AspNet.ApplicationCookieC3
.AspNet.ApplicationCookieC4
.AspNet.ApplicationCookieC5
.AspNet.ApplicationCookieC6
.AspNet.ApplicationCookieC7
__RequestVerificationToken
I simply went to the AspNetUserClaims table in SQL Server Management Studio and cleared it, then cleared the browser cookies for the project.
Refreshed the page. Done!
I believe it happened because I was switching from one database connection string to another, which caused the claims manager to recreate the session and add to my cookie. At saturation, everything exploded.
Check the MSDN article:
Cause
This issue may occur when the user is a member of many Active
Directory user groups. When a user is a member of a large number of
active directory groups the Kerberos authentication token for the user
increases in size. The HTTP request that the user sends to the IIS
server contains the Kerberos token in the WWW-Authenticate header, and
the header size increases as the number of groups goes up. If the
HTTP header or packet size increases past the limits configured in
IIS, IIS may reject the request and send this error as the response.
Resolution
To work around this problem, choose one of the following options:
A) Decrease the number of Active Directory groups that the user is a
member of.
OR
B) Modify the MaxFieldLength and the MaxRequestBytes registry settings
on the IIS server so the user's request headers are not considered too
long. To determine the appropriate settings for the MaxFieldLength
and the MaxRequestBytes registry entries, use the following
calculations:
Calculate the size of the user's Kerberos token using the formula described in the following article:
New resolution for problems with Kerberos authentication when users belong to many groups
http://support.microsoft.com/kb/327825
Configure the MaxFieldLength and the MaxRequestBytes registry keys on the IIS server with a value of 4/3 * T, where T is the user's token
size, in bytes. HTTP encodes the Kerberos token using base64 encoding
and therefore replaces every 3 bytes in the token with 4 base64
encoded bytes. Changes that are made to the registry will not take
effect until you restart the HTTP service. Additionally, you may have
to restart any related IIS services.
Try this:
<system.web>
    <httpRuntime maxRequestLength="2097151" executionTimeout="2097151" />
</system.web>
The maxRequestLength default size is 4096 KB (4 MB).
If the browser requests some resource again and again, at some point the request header length can grow past the limit, so we may try to extend the allowed request length to its maximum.
I hope this is useful.
On Windows systems this error generally occurs because of the default header size limits set in the http.sys service. This service acts as a protective layer in front of the application, preventing it from being overwhelmed by invalid requests.
You can override the default max header limit by modifying the Windows registry.
Follow the steps :
Run regedit
From the address bar go to the address : Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters or drill down manually.
Right click on "Parameters" > New > DWORD
Rename the new entry to MaxFieldLength
Right click the newly created MaxFieldLength, modify it and set the value to desired max individual header size in bytes, make sure base is set to decimal.
Do the same for MaxRequestBytes. Make it sufficiently higher to match value set in MaxFieldLength.
Open command prompt as administrator
Enter the command "net stop http" (make sure visual studio or other interfering programs are closed)
Enter the command "net start http"
Resources:
Enabling logging
Supported parameters
In my case, I had large cookies from a number of different apps served on my localhost. Firefox differentiates by host name, so clearing my cookies for localhost fixed it.
Following Ifeanyi Chukwu's answer: in my case, I tried private mode (Incognito) and it worked fine. Then I went to the browser settings and deleted the cookies for my site (localhost). That fixed the issue.
As you may have already figured out, a simple temporary solution is to switch browsers while debugging.

Restlet Server not returning proper responses

I have a ServerResource object that is running within a component. Its purpose is to act in many ways like a basic HTTP server. It uses a Representation to acquire a file and return the file's contents to a browser.
The active function for this application is provided below:
public Representation showPage()
{
    Representation rep = null;
    if (fileName != null)
    {
        File path = new File("pages/" + fileName);
        rep = new FileRepresentation(path, MediaType.ALL);
    }
    return rep;
}
Note that "fileName" is the name of an HTML file (or index.html) which was previously passed in as an attribute. The files that this application serves are all in a subdirectory called "pages" as shown in the code. The idea is that a browser sends an HTTP request for an HTML file, and the server returns that file's contents in the same way that Apache would.
Note also that the restlet application is deployed as a JSE application. I am using Restlet 2.1.
An interesting problem occurs when accessing the application. Sometimes, when the request comes from a Firefox browser, the server simply does not send a response at all. The log output shows the request coming in, but the server does not respond, not even with a 404. The browser waits for a response for a time, then times out.
When using Internet Explorer, sometimes the browser times out waiting for a response from the server, but sometimes the server also returns a 304 response. My research into this response indicates that it should not be returned at all, especially if the HTML files include no-caching tags.
Is there something in the code that is causing these non-responses? Is there something missing that is causing the ServerResource object to handle responses so unreliably? Or have I found a bug in Restlet's response mechanism?
Someone please advise...

Flex: HTTP request error #2032

In Flex 3 application I use HTTPService class to make requests to the server:
var http:HTTPService = new HTTPService();
http.method = 'POST';
http.url = hostUrl;
http.resultFormat = 'e4x';
http.addEventListener(ResultEvent.RESULT, ...);
http.addEventListener(FaultEvent.FAULT, ...);
http.send(params);
The application has a Comet architecture, so it makes long-running requests; while waiting for the response to such a request, other requests can be made concurrently.
The application works in most cases, but sometimes some clients get an HTTP request error on the long-running request:
faultCode:Server.Error.Request
faultString:'HTTP request error'
faultDetail:'Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032"]. URL: http://example.com/ws'
I think it depends on user's browser.
Any ideas?
I experienced the same problem when I sent a longer (3-4 KB!) parameter in the HTTP request. As soon as I sent smaller ones, it worked (without refresh, reload or anything). I don't know whether the limit on the length of the parameters you can send is on the client side or the web server side, but this definitely causes the issue.
URL length limitations may cause it.
This error appears very generic, and I would suggest collecting more information about the issue and sharing it.
This post appears similar to your situation.
This post might help you find more debugging information which would be helpful.
Which clients are affected?
Can you capture the http status code or the traffic being sent using Charles, Wireshark, or similar?
Try listening for the HTTP status of the request, using flash.events.HTTPStatusEvent.HTTP_STATUS.
That might give you some more info about what's going wrong.
I was going to open another question on essentially the same topic, but I figure two unanswered questions are worse than one.
I get a similar intermittent issue from some users of a Flex application we have, but with some slightly different symptoms. The full range of information I can provide is:
It occurs on short (10ms) requests as well.
It appears to occur randomly.
The connection is over SSL.
It only occurs for users of IE, not for users using FireFox.
Once it occurs, users inform me they need to shut down IE and restart it (some users say they need to reboot, but I think that's less likely than just an IE restart). It appears to require a few minutes to reset itself.
It does not appear to affect the rest of the user's internet connection -they can continue to use other IE windows.
Once it occurs, it appears that no HTTPService request from the flex application will work.
It occurs (apparently) only for a small subset of users. Initially it seemed to be due to their physical distance from the main server, but this no longer appears to be necessarily the case (though it could be connection quality).
I'm not clear on what version of Adobe Flash the users are running.
Code was built with Adobe Flex 3.4 (linux)
The application does a wide range of requests, many in parallel though I've not been able to reproduce the problem.
Users do suggest this error occurs after they have come back to the application after a few minutes.
There appears to be no related server side request entry in the server logs, suggesting the request never reaches the server (possibly never leaving the client).
The server responds to all requests with the relevant cache headers to turn off IE caching.
The current workaround we have is to request users run the application in FireFox.
Full dump of the error is:
HTTP Status Code: null
Fault Code: Server.Error.Request
Fault Error ID: null
Fault Detail: Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032"]. URL: https://my.server/url
Fault String: HTTP request error
Fault Name: Error
Fault Message: faultCode:Server.Error.Request faultString:'HTTP request error' faultDetail:'Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032"]. URL: https://my.server/url'
Root Cause: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032"]
Many people have mentioned error 2032, a few even mentioning intermittent errors under IE only, but there are no mentions of the solution. If I ever find one, I'll update my post here.
Update
After talking to a user as this occurred, we found the following:
The user could open a new tab in IE, and load the same flex application up fine - so no internet connectivity problems.
The user could, in the tab with the flex app where the error occurs, load up google.com - so there appears to be no connectivity issue related to that specific tab.
The user could copy the address from the tab with the broken app into another tab, and the flex application would load.
The user could, after loading google.com in the tab where the flex app broke, copy in the flex app URL again, and immediately get the problem.
It appears that in my particular application, my flex app manages to break the flash plugin/VM to such an extent that after the break, no further requests to the URL are allowed.
I am so completely stumped by this I'm at the point of suggesting users use FireFox, or wrapping the application in an Air package.
I had somewhat the same problem here, but with a Flash (web - Flex 4.1 SDK) application.
After trying out a huge assortment of solutions, we finally came up with one that works pretty reliably for all systems, including newly installed machines.
A. Add global event listeners at the root (or stage) of the application, at the Flex preinitialize stage:
IOErrorEvent.IO_ERROR
IOErrorEvent.NETWORK_ERROR
HTTPStatusEvent.HTTP_STATUS
ErrorEvent.ERROR
SecurityErrorEvent.SECURITY_ERROR
If an error is caught - event.preventDefault();
B. Add event listeners on every loader used in the app for the following errors:
IOErrorEvent.IO_ERROR
SecurityErrorEvent.SECURITY_ERROR
HTTPStatusEvent.HTTP_STATUS
* to attempt recovery, e.g. by falling back to an external interface call...
C. Place all the SWZ files from the bin-release folder together with the SWF file in the same path on the server you use to deliver your app.
In my case these are the files needed:
sparkskins_4.5.1.21328.swz
spark_4.5.1.21328.swz
textLayout_2.0.0.232.swz
rpc_4.5.1.21328.swz
osmf_1.0.0.16316.swz
framework_4.5.1.21328.swz
* To discover this, I used the Chrome developer console to see which errors occurred on the page, and found a chain of 404s when the app tried to download these files.
D. Have a properly configured crossdomain.xml policy file that includes the allow-http-request-headers-from tag:
<allow-http-request-headers-from domain="*" headers="*"/>
Replace the * as needed for your particular case.
Cheers
Sounds like you might have more connections going out than the browser supports. Do you know exactly how many open connections exist at the time of the error?
Different browsers allow different numbers of simultaneous open connections. IE 6, 7 and 8 all allow different amounts: http://support.microsoft.com/kb/282402
Firefox: http://www.speedguide.net/faq_in_q.php?qid=231
I've had this exact issue happening in my AIR app. I eventually realized that I had accidentally set urlrequest.idleTimeout to 10. This timeout is in milliseconds, and my web server is local, so if I sent no parameters (no GET or POST) to my local server it would work. Whenever I sent any parameters with the request, of course it would fail, because my script took longer than 10 ms to run and return the data.
You may want to pay attention to slow-loading scripts. You can debug by forcing some static output and then stopping the page from executing further. In my PHP page I put:
<?php
echo "hello=hi";
die();
?>
Also, make sure to debug within the sandbox limitations. I am using a self-signed SSL cert, and there are a lot of warnings when trying to connect to my local test web server.
Hope that helps!

Why is Response.BufferOutput = False, not working?

This problem started on a different board, but Dave Ward, who was very prompt and helpful there, is also here, so I'd like to pick up here for what is hopefully the last remaining piece of the puzzle.
Basically, I was looking for a way to push constant updates to a web page from a long-running process. I thought AJAX was the way to go, but Dave has a nice article about using JavaScript. I integrated it into my application and it worked great on my client, but NOT on my server at WebHost4Life. I have another server at Brinkster and decided to try it there, and it DOES work. All the code is the same on my client, WebHost4Life, and Brinkster, so there's obviously something going on with WebHost4Life.
I'm planning to write to them or request technical support, but I'd like to be proactive and try to figure out what could be going on at their end to cause this difference. I did everything I could in my code to turn off buffering, like Page.Response.BufferOutput = False. What server settings could they have implemented to cause this difference? Is there any way I could circumvent it on my own without their help? If not, what would they need to do?
For reference, a link to a working, simpler version of my application is at http://www.jasoncomedy.com/javascriptfun/javascriptfun.aspx and the same version that isn't working is at http://www.tabroom.org/Ajaxfun/Default.aspx. You'll notice that in the working version you get updates with each step, but in the one that doesn't work, it sits there for a long time until everything is done and then sends all the updates to the client at once... and that makes me sad.
Hey, Jason. Sorry you're still having trouble with this.
What I would do is set up a simple page like:
protected void Page_Load(object sender, EventArgs e)
{
    for (int i = 0; i < 10; i++)
    {
        Response.Write(i + "<br />");
        Response.Flush();
        Thread.Sleep(1000);
    }
}
As we discussed before, make sure the .aspx file is empty of any markup other than the @Page declaration. Extra markup can sometimes trigger page buffering when it wouldn't normally have happened.
Then, point the tech support guys to that file and describe the desired behavior (10 updates, 1 per second). I've found that giving them a simple test case goes a long way toward getting these things resolved.
Definitely let us know what it ends up being. I'm guessing some sort of inline caching or reverse proxy, but I'm curious.
I don't know that you can force buffering off - but a reverse proxy server between you and the server would affect buffering (since the buffer then affects the proxy's connection, not your browser's).
I've done some fruitless research on this one, but I'll share my line of thinking in the dim hope that it helps.
IIS is one of the things sitting between client and server in this case, so it might be useful to know which version of IIS is involved in each case - and to investigate whether IIS can perform its own buffering on an open connection.
Though it's not quite on the money, this article about IIS 6 vs IIS 5 is the kind of thing I'm thinking of.
You should make sure that neither IIS nor any other filter is trying to compress your response. It is very possible that your production server has IIS compression enabled for dynamic pages such as those with the .aspx suffix, and your development server does not.
If this is the case, IIS may be waiting for the entire response (or a sizeable chunk) before it attempts to compress and send any result back to the client.
I suggest using Fiddler to monitor the response from your production server and figure out if responses are being gzip'd.
If response compression does turn out to be the problem, you can instruct IIS to ignore compression for specific responses via the Content-Encoding:Identity header.
The issue is that IIS will further buffer output (beyond ASP.NET's buffering) if you have dynamic gzip compression turned on (it is by default these days).
Therefore to stop IIS buffering your response there's a little hack you can do to fool IIS into thinking that the client can't handle compression by overwriting the Request.Headers["Accept-Encoding"] header (yes, Request.Headers, trust me):
Response.BufferOutput = false;
Request.Headers["Accept-Encoding"] = ""; // suppresses gzip compression on output
As it sends the response, the IIS compression filter checks the request headers for Accept-Encoding: gzip... and if it's not there, it doesn't compress (and therefore doesn't further buffer the output).
