I'm using ASP.NET 3.5 to build a website. One area of the website shows 28 video thumbnail images, which are JPEGs hosted on another webserver. If one or more of these JPEGs does not exist, I want to display a locally hosted default image to the user, rather than a broken image link in the browser.
The approach I have taken to implement this is whenever the page is rendered it will perform an HTTP HEAD request to each of the images. If I get a 200 OK status code back, then the image is good and I can write out <img src="http://media.server.com/media/123456789.jpg" />. If I get a 404 Not Found, then I write out <img src="/images/defaultthumb.jpg" />.
Of course I don't want to do this every time for all requests, so I've implemented a list of cached image status objects stored at application level, so that each image is only checked once every 5 minutes across all users; but this doesn't really have any bearing on my issue.
This seems to work very well. My problem is that for specific images, the HTTP HEAD request fails with Request Timed Out.
I have set my timeout value very low, to only 200 ms, so that it doesn't delay the page rendering too much. This timeout seems to be fine for most of the images; I've tried increasing it during debugging, but it makes no difference even at 10 s or more.
I write out a log file to see what's happening, and this is what I get (edited for clarity and anonymity):
14:24:56.799|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/505C3080-EB4F-6CAE-60F8-B97F77A43A47/videothumb.jpg]]
14:24:57.356|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/66E2C916-EEB1-21D9-E7CB-08307CEF0C10/videothumb.jpg]]
14:24:57.914|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/905C3D99-C530-46D1-6B2B-63812680A884/videothumb.jpg]]
...
14:24:58.470|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/1CE0B04D-114A-911F-3833-D9E66FDF671F/videothumb.jpg]]
14:24:59.027|DEBUG|[HTTP HEAD CHECK OK [http://media.server.com/adpm/C3D7B5D7-85F2-BF12-E32E-368C1CB45F93/videothumb.jpg]]
14:25:11.852|ERROR|[HTTP HEAD CHECK ERROR [http://media.server.com/adpm/BED71AD0-2FA5-EA54-0B03-03D139E9242E/videothumb.jpg]] The operation has timed out
Source: System
Target Site: System.Net.WebResponse GetResponse()
Stack Trace: at System.Net.HttpWebRequest.GetResponse()
at MyProject.ApplicationCacheManager.ImageExists(String ImageURL, Boolean UseCache) in d:\Development\MyProject\trunk\src\Web\App_Code\Common\ApplicationCacheManager.cs:line 62
14:25:12.565|ERROR|[HTTP HEAD CHECK ERROR [http://media.server.com/adpm/92399E61-81A6-E7B3-4562-21793D193528/videothumb.jpg]] The operation has timed out
Source: System
Target Site: System.Net.WebResponse GetResponse()
Stack Trace: at System.Net.HttpWebRequest.GetResponse()
at MyProject.ApplicationCacheManager.ImageExists(String ImageURL, Boolean UseCache) in d:\Development\MyProject\trunk\src\Web\App_Code\Common\ApplicationCacheManager.cs:line 62
14:25:13.282|ERROR|[HTTP HEAD CHECK ERROR [http://media.server.com/adpm/7728C3B6-69C8-EFAA-FC9F-DAE70E1439F9/videothumb.jpg]] The operation has timed out
Source: System
Target Site: System.Net.WebResponse GetResponse()
Stack Trace: at System.Net.HttpWebRequest.GetResponse()
at MyProject.ApplicationCacheManager.ImageExists(String ImageURL, Boolean UseCache) in d:\Development\MyProject\trunk\src\Web\App_Code\Common\ApplicationCacheManager.cs:line 62
As you can see, the first 25 HEAD requests work, and the final 3 do not. It's always the last three.
If I paste one of the failed HEAD request URLs into a web browser: http://media.server.com/adpm/BED71AD0-2FA5-EA54-0B03-03D139E9242E/videothumb.jpg, it loads the image with no problems.
To try to work out what is happening here, I used Wireshark to capture all of the HTTP requests that are sent to the webserver hosting the images. For the log example I've given, I can see 25 HEAD requests for the 25 that were successful, but the 3 that failed do NOT appear in the Wireshark trace.
Other than the images having different visual content, there is no difference from one image to the next.
To eliminate any problems with the URL itself (even though it works in a browser) I changed the order by switching one of the first images with one of the last failed three. When I do this, the problem goes away for the one that used to fail, and starts failing for the one that was bumped down to the end of the list.
So I think I can deduce from the above that when more than 25 HEAD requests occur in quick succession, subsequent HEAD requests fail regardless of the specific URL. I also know that the issue is on the IIS server rather than the remote image hosting server, due to the lack of requests in the Wireshark trace beyond the first 25.
The code snippet I'm using to perform the HEAD requests is shown below. Can anyone give me any suggestions as to what might be the problem? I've tried various combinations of request header values, but none of them seem to make any difference. My gut feeling is that there is some IIS setting somewhere that limits the number of concurrent HttpWebRequest instances to 25 in any one request to an ASP.NET page.
try {
    HttpWebRequest hwr = (HttpWebRequest)WebRequest.Create(ImageURL);
    hwr.Method = "HEAD";
    hwr.KeepAlive = false;
    hwr.AllowAutoRedirect = false;
    hwr.Accept = "image/jpeg";
    hwr.Timeout = 200;
    hwr.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.Reload);
    //hwr.Connection = "close";
    HttpWebResponse hwr_result = (HttpWebResponse)hwr.GetResponse();
    if (hwr_result.StatusCode == HttpStatusCode.OK) {
        Diagnostics.Diags.Debug("HTTP HEAD CHECK OK [" + ImageURL + "]", HttpContext.Current.Request);
        // EXISTENCE CONFIRMED - ADD TO CACHE
        if (UseCache) {
            _ImageExists.Value.RemoveAll(ie => ie.ImageURL == ImageURL);
            _ImageExists.Value.Add(new ImageExistenceCheck() { ImageURL = ImageURL, Found = true, CacheExpiry = DateTime.Now.AddMinutes(5) });
        }
        // RETURN TRUE
        return true;
    } else if (hwr_result.StatusCode == HttpStatusCode.NotFound) {
        throw new WebException("404");
    } else {
        throw new WebException("ERROR");
    }
} catch (WebException ex) {
    if (ex.Message.Contains("404")) {
        Diagnostics.Diags.Debug("HTTP HEAD CHECK NOT FOUND [" + ImageURL + "]", HttpContext.Current.Request);
        // NON-EXISTENCE CONFIRMED - ADD TO CACHE
        if (UseCache) {
            _ImageExists.Value.RemoveAll(ie => ie.ImageURL == ImageURL);
            _ImageExists.Value.Add(new ImageExistenceCheck() { ImageURL = ImageURL, Found = false, CacheExpiry = DateTime.Now.AddMinutes(5) });
        }
        return false;
    } else {
        Diagnostics.Diags.Error(HttpContext.Current.Request, "HTTP HEAD CHECK ERROR [" + ImageURL + "]", ex);
        // ASSUME IMAGE IS OK
        return true;
    }
} catch (Exception ex) {
    Diagnostics.Diags.Error(HttpContext.Current.Request, "GENERAL CHECK ERROR [" + ImageURL + "]", ex);
    // ASSUME IMAGE IS OK
    return true;
}
I have solved this myself. The problem was indeed the number of allowed connections, which was set to 24 by default.
In my case, I am going to perform the image check only if MyHttpWebRequest.ServicePoint.CurrentConnections is less than 10.
To increase the maximum limit, set ServicePointManager.DefaultConnectionLimit to the number of concurrent connections you require.
An alternative which may help some people would be to reduce the idle time, that is, the time a connection waits until it destroys itself. To change this, set MyHttpWebRequest.ServicePoint.MaxIdleTime to the desired timeout in milliseconds.
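For completeness, here is a minimal sketch of all three knobs together (the limit of 50 and the 2-second idle time are illustrative values, not recommendations; the 10-connection threshold is the one from my workaround above):

using System;
using System.Net;

public static class HeadCheckHelper
{
    // Call once at application start-up (e.g. in Application_Start),
    // before any requests to the media host are created.
    public static void RaiseConnectionLimit()
    {
        ServicePointManager.DefaultConnectionLimit = 50;
    }

    public static bool ImageLooksOk(string imageUrl)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(imageUrl);
        request.Method = "HEAD";
        request.Timeout = 200;

        // Skip the check entirely when the service point is already busy,
        // and assume the image is fine rather than block the page.
        if (request.ServicePoint.CurrentConnections >= 10)
            return true;

        // Let idle connections tear themselves down sooner (milliseconds).
        request.ServicePoint.MaxIdleTime = 2000;

        try
        {
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK;
            }
        }
        catch (WebException)
        {
            return false; // 404s and timeouts land here
        }
    }
}

Note the using block around the response: the snippet in the question never disposes of its HttpWebResponse, and every undisposed response holds its connection open against the same limit, so the pool fills up even faster.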
Related
I'm facing a problem with Kestrel server's performance. I have the following scenario:
TestClient(JMeter) -> DemoAPI-1(Kestrel) -> DemoAPI-2(IIS)
I'm trying to create a sample application that could get the file content as and when requested.
TestClient (100 threads) makes requests to DemoAPI-1, which in turn makes requests to DemoAPI-2. DemoAPI-2 reads a fixed XML file (1 MB max) and returns its content as a response (in production, DemoAPI-2 is not going to be exposed to the outside world).
When I tested direct access from TestClient -> DemoAPI-2, I got the expected (good) result, which is the following:
Average : 368ms
Minimum : 40ms
Maximum : 1056ms
Throughput : 40.1/sec
But when I tried to access it through DemoAPI-1, I got the following result:
Average : 48232ms
Minimum : 21095ms
Maximum : 49377ms
Throughput : 2.0/sec
As you can see, there is a huge difference. I'm not getting even 10% of DemoAPI-2's throughput. I was told that Kestrel is more efficient and faster compared to traditional IIS. Also, because there is no problem with direct access, I think we can eliminate the possibility of a problem in DemoAPI-2.
※ Code of DemoAPI-1:
string base64Encoded = null;
var request = new HttpRequestMessage(HttpMethod.Get, url);
var response = await this.httpClient.SendAsync(request, HttpCompletionOption.ResponseContentRead).ConfigureAwait(false);
if (response.StatusCode.Equals(HttpStatusCode.OK))
{
    var content = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
    base64Encoded = Convert.ToBase64String(content);
}
return base64Encoded;
※ Code of DemoAPI-2:
[HttpGet("Demo2")]
public async Task<IActionResult> Demo2Async(int wait)
{
try
{
if (wait > 0)
{
await Task.Delay(wait);
}
var path = Path.Combine(Directory.GetCurrentDirectory(), "test.xml");
var file = System.IO.File.ReadAllText(path);
return Content(file);
}
catch (System.Exception ex)
{
return StatusCode(500, ex.Message);
}
}
Some additional information:
Both APIs are async.
Both APIs are hosted on different EC2 instances (C5.xlarge, Windows Server 2016).
DemoAPI-1 (Kestrel) is a self-contained API (without a reverse proxy).
TestClient (JMeter) is set to 100 threads for this testing.
No other configuration has been done for the Kestrel server as of now.
There are no action filters, middleware, or logging that could affect the performance as of now.
Communication is done using SSL on port 5001.
The wait parameter for DemoAPI-2 is set to 0 as of now.
The CPU usage of DEMOAPI-1 is not over 40%.
The problem was due to HttpClient's port exhaustion issue.
I was able to solve this problem by using IHttpClientFactory.
The following article might help someone who faces a similar problem:
https://www.stevejgordon.co.uk/httpclient-creation-and-disposal-internals-should-i-dispose-of-httpclient
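For reference, a minimal sketch of the IHttpClientFactory approach, assuming ASP.NET Core's built-in DI (the client name and the service class are made up for illustration):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// In Startup.ConfigureServices: register a named client once. The factory
// pools and recycles the underlying handlers, which avoids the socket/port
// exhaustion caused by creating (or mis-reusing) HttpClient per request.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("demoApi2"); // name is illustrative
    services.AddControllers();
}

// Consumer: take IHttpClientFactory from DI instead of new-ing up HttpClient.
public class Demo1Service
{
    private readonly IHttpClientFactory factory;

    public Demo1Service(IHttpClientFactory factory)
    {
        this.factory = factory;
    }

    public async Task<string> GetContentAsBase64Async(string url)
    {
        HttpClient client = factory.CreateClient("demoApi2");
        using (HttpResponseMessage response = await client.GetAsync(url).ConfigureAwait(false))
        {
            response.EnsureSuccessStatusCode();
            byte[] bytes = await response.Content.ReadAsByteArrayAsync().ConfigureAwait(false);
            return Convert.ToBase64String(bytes);
        }
    }
}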
DEMOAPI-1 performs a non-asynchronous read of the stream:
var bytes = stream.Read(read, 0, DataChunkSize);
while (bytes > 0)
{
    buffer += System.Text.Encoding.UTF8.GetString(read, 0, bytes);
    // Replace with ReadAsync
    bytes = stream.Read(read, 0, DataChunkSize);
}
That can be an issue with throughput on a lot of requests.
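A sketch of the asynchronous equivalent, reusing stream, read, and DataChunkSize from the snippet above (this has to live in an async method). It also buffers raw bytes and decodes once at the end, instead of rebuilding the string on every iteration:

// Asynchronous read: no thread-pool thread is blocked per chunk,
// and the bytes are decoded in a single pass at the end.
using (var buffered = new System.IO.MemoryStream())
{
    int bytes = await stream.ReadAsync(read, 0, DataChunkSize).ConfigureAwait(false);
    while (bytes > 0)
    {
        buffered.Write(read, 0, bytes);
        bytes = await stream.ReadAsync(read, 0, DataChunkSize).ConfigureAwait(false);
    }
    string buffer = System.Text.Encoding.UTF8.GetString(buffered.ToArray());
    // use buffer...
}

Decoding once at the end also avoids corrupting multi-byte UTF-8 characters that straddle a chunk boundary, which the per-chunk GetString call above can do.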
Also, I'm not sure why you are not testing the same code on both IIS and Kestrel; I would assume you only need to make environmental changes, not code changes.
I am trying to play Widevine-encrypted content in an Android TV application using ExoPlayer. I have my video URL, which is served from a CDN and acquired with a ticket. I have my Widevine license URL, a ticket, and an auth token for the license server.
I am creating a drmSessionManager, putting the necessary headers needed by the license server as follows:
UUID drmSchemeUuid = C.WIDEVINE_UUID;
mediaDrm = FrameworkMediaDrm.newInstance(drmSchemeUuid);
static final String USER_AGENT = "user-agent";
HttpMediaDrmCallback drmCallback = new HttpMediaDrmCallback("my-license-server", new DefaultHttpDataSourceFactory(USER_AGENT));
// keyRequestProperties is a HashMap<String, String>; its declaration is omitted here
keyRequestProperties.put("ticket-header", ticket);
keyRequestProperties.put("token-header", token);
drmCallback.setKeyRequestProperty("ticket-header", ticket);
drmCallback.setKeyRequestProperty("token-header", token);
new DefaultDrmSessionManager(drmSchemeUuid, mediaDrm, drmCallback, keyRequestProperties)
After this, ExoPlayer handles most of the work, and the following breakpoints are hit:
response = callback.executeKeyRequest(uuid, (KeyRequest) request);
in class DefaultDrmSession
return executePost(dataSourceFactory, url, request.getData(), requestProperties) in HttpMediaDrmCallback
I can observe that everything is fine till this point, the URL is correct, the headers are set fine.
In the following piece of code, I can observe that the dataSpec is fine and it tries to POST a request to the license server with the correct data, but when the connection is made the response code returned is 405.
In class: DefaultHttpDataSource
In method: public long open(DataSpec dataSpec)
this.dataSpec = dataSpec;
this.bytesRead = 0;
this.bytesSkipped = 0;
transferInitializing(dataSpec);
try {
    connection = makeConnection(dataSpec);
} catch (IOException e) {
    throw new HttpDataSourceException("Unable to connect to " + dataSpec.uri.toString(), e,
        dataSpec, HttpDataSourceException.TYPE_OPEN);
}
try {
    responseCode = connection.getResponseCode();
    responseMessage = connection.getResponseMessage();
} catch (IOException e) {
    closeConnectionQuietly();
    throw new HttpDataSourceException("Unable to connect to " + dataSpec.uri.toString(), e,
        dataSpec, HttpDataSourceException.TYPE_OPEN);
}
When using Postman to make a request to the URL, a GET request returns the following body with a response code of 405:
{
    "Message": "The requested resource does not support http method 'GET'."
}
A POST request also returns response code 405, but with an empty body.
In both cases the following header is also returned, which suggests the endpoint should be accepting both GET and POST requests:
Access-Control-Allow-Methods: GET, POST
I have no access to the configuration of the DRM server, and my contacts responsible for the DRM server tell me that POST requests must be working fine, since there are clients that have managed to get content to play from the same DRM server.
I am quite confused at the moment and think maybe I am missing some sort of configuration in ExoPlayer, since I am quite new to the concept of DRM.
Any help would be greatly appreciated.
We figured out the solution: the ticket supplied for the DRM license server was wrong. This works as it is supposed to now and the content plays. Just in case anyone gets the same problem, or is in need of basic Widevine playback code, the code above works fine at the moment.
Best regards.
I have image URLs, and I need to check whether each URL is responding or not.
For example: below I have written four image URLs. Only the third URL is valid, but the second and fourth URLs respond as if they were valid images, even though there is no actual image there.
http://media.expedia.com/hotels/1000000/90000/84900/84853/84853_744_b.jpg
http://www.iceportal.com/brochures/media/show.aspx?brochureid=ICE19044&did=3073&mtype=3073&type=pic&lang=en&publicid=4175749&resizing=X
http://images.trvl-media.com/hotels/1000000/30000/20400/20313/20313_166_b.jpg
http://www.iceportal.com/brochures/ice/ErrorPages/404.htm?aspxerrorpath=/brochures/media/show_A.aspx
Here is my code:
public static bool CheckUrlExists(string url)
{
    try
    {
        Uri u = new Uri(url);
        WebRequest w = WebRequest.Create(u);
        w.Method = WebRequestMethods.Http.Head;
        using (StreamReader s = new StreamReader(w.GetResponse().GetResponseStream()))
        {
            return (s.ReadToEnd().Length >= 0);
        }
    }
    catch
    {
        return false;
    }
}
With this code I am only catching URLs that return a 404 error, not URLs that show 'Sorry, requested brochure is temporarily un-published' or any other type of message.
You will need more complex logic to validate that the URL points to an image. If a resource is missing from the server or is otherwise unavailable, you may get an HTTP error like the infamous 404, which will trigger a WebException. However, that is only part of the story.
Your second URL returns HTTP 200, confirming that the resource is there when in fact the resource is missing. What you really get there is a HTML document explaining the resource is not available. This is bad practice, but not without example.
At the very least, you should examine the MIME type (Content-Type header, see WebResponse.ContentType) of the resource you test. A content type of image/* suggests an image-type resource. If you fail to detect a known MIME type (e.g. if you receive application/octet-stream), you can actually HTTP GET the resource and run image-type detection on the downloaded content.
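A minimal sketch of that idea (the loose image/ prefix check is deliberate; anything stricter depends on which formats you accept):

using System;
using System.Net;

public static class ImageUrlChecker
{
    public static bool LooksLikeImage(string url)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "HEAD";
            request.AllowAutoRedirect = false; // a redirect here is already suspicious

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // A genuine image should report image/jpeg, image/png, etc.
                return response.StatusCode == HttpStatusCode.OK
                    && response.ContentType != null
                    && response.ContentType.StartsWith("image/", StringComparison.OrdinalIgnoreCase);
            }
        }
        catch (WebException)
        {
            return false; // 404 and other protocol errors land here
        }
    }
}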
I would suggest using HttpWebRequest and HttpWebResponse to do this. They are subclasses of WebRequest and WebResponse, and as such are more granular for what you're trying to achieve. The following code works with the example URIs provided:
public static bool CheckUrlExists(string url)
{
    try
    {
        Uri u = new Uri(url);
        HttpWebRequest w = (HttpWebRequest)WebRequest.Create(u);
        w.AllowAutoRedirect = false;
        w.Method = WebRequestMethods.Http.Head;
        HttpWebResponse response = (HttpWebResponse)w.GetResponse();
        return response.StatusCode == HttpStatusCode.OK; // check HTTP status code
    }
    catch (WebException ex)
    {
        return false;
    }
}
What's important here is that I'm checking the HTTP status code. Your catch will already catch the 404s, but the problem URIs ultimately lead to a 200 (OK). By setting AllowAutoRedirect to false, the HttpWebRequest instance returns a 302 (redirect) status code instead of following the redirect through to the "Sorry, requested brochure is temporarily un-published." page, which returns 200 (OK). This should serve your purpose.
Also: catching a WebException will allow you to examine the status code (400+, 500+, etc.).
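A small sketch of pulling the status code out, for what it's worth; note that WebException.Response is null for pure network failures (DNS, timeout), so guard against that:

using System.Net;

public static class WebExceptionHelper
{
    // Returns the HTTP status code carried by a WebException, if any.
    public static HttpStatusCode? GetStatusCode(WebException ex)
    {
        // Protocol errors (4xx/5xx) carry the failed response with them;
        // DNS and timeout failures do not, so Response is null there.
        var response = ex.Response as HttpWebResponse;
        return response != null ? (HttpStatusCode?)response.StatusCode : null;
    }
}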
Be aware, however, that you may be redirected to a new location for the image you're requesting. Given that, you might want to use PeterK's MIME type check.
I receive error emails from my website whenever an exception occurs. I am getting this error:
The remote host closed the connection. The error code is 0x800704CD
and don't know why. I get about 30 a day. I can't reproduce the error either so can't track down the issue.
Website is ASP.NET 2 running on IIS7.
Stack trace:
at System.Web.Hosting.IIS7WorkerRequest.RaiseCommunicationError(Int32 result, Boolean throwOnDisconnect)
at System.Web.Hosting.IIS7WorkerRequest.ExplicitFlush()
at System.Web.HttpResponse.Flush(Boolean finalFlush)
at System.Web.HttpResponse.Flush()
at System.Web.HttpResponse.End()
at System.Web.UI.HttpResponseWrapper.System.Web.UI.IHttpResponse.End()
at System.Web.UI.PageRequestManager.OnPageError(Object sender, EventArgs e)
at System.Web.UI.TemplateControl.OnError(EventArgs e)
at System.Web.UI.Page.HandleError(Exception e)
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
at System.Web.UI.Page.ProcessRequest()
at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
at System.Web.UI.Page.ProcessRequest(HttpContext context)
at ASP.default_aspx.ProcessRequest(HttpContext context)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
I get this one all the time. It means that the user started to download a file, and then it either failed, or they cancelled it.
To reproduce the exception, try doing this yourself; however, I'm unaware of any way to prevent it (other than handling this specific exception).
You need to decide what the best way forward is depending on your app.
As m.edmondson mentioned, "The remote host closed the connection." occurs when a user or browser cancels something, the network connection drops, etc. It doesn't necessarily have to be a file download, however; it can be any request for any resource that results in a response to the client. Basically, the error means that the response could not be sent because the server can no longer talk to the client (browser).
There are a number of steps that you can take to stop it happening. If you are manually sending something in the response with Response.Write, Response.Flush, returning data from a web service/page method, or something similar, then you should consider checking Response.IsClientConnected before sending the response. Also, if the response is likely to take a long time or require a lot of server-side processing, you should check this property periodically until Response.End is called. See the following for details on this property:
http://msdn.microsoft.com/en-us/library/system.web.httpresponse.isclientconnected.aspx
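A sketch of that guard inside a long-running write loop (GetNextChunk is a hypothetical stand-in for however you produce the response data):

// Inside a page or handler: stop writing an expensive response as soon
// as the browser has gone away, instead of letting a later Flush/End
// raise 0x800704CD.
byte[] chunk = GetNextChunk(); // hypothetical data producer
while (chunk != null)
{
    if (!Response.IsClientConnected)
    {
        break; // client disconnected; abandon the response quietly
    }
    Response.OutputStream.Write(chunk, 0, chunk.Length);
    chunk = GetNextChunk();
}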
Alternatively, and I believe this is most likely in your case, the error is being caused by something inside the framework. The following link may be of use:
http://blog.whitesites.com/fixing-The-remote-host-closed-the-connection-The-error-code-is-0x80070057__633882307305519259_blog.htm
The following stack-overflow post might also be of interest:
"The remote host closed the connection" in Response.OutputStream.Write
One can reproduce the error with the code below:
public ActionResult ClosingTheConnectionAction(){
try
{
//we need to set buffer to false to
//make sure data is written in chunks
Response.Buffer = false;
var someText = "Some text here to make things happen ;-)";
var content = GetBytes( someText );
for(var i=0; i < 100; i++)
{
Response.OutputStream.Write(content, 0, content.Length);
}
return View();
}
catch(HttpException hex)
{
if (hex.Message.StartsWith("The remote host closed the connection. The error code is 0x800704CD."))
{
//react on remote host closed the connection exception.
var msg = hex.Message;
}
}
catch(Exception somethingElseHappened)
{
//handle it with some other code
}
return View();
}
Now run the website in debug mode and put a breakpoint in the loop that writes to the output stream. Go to that action method and, after the first iteration has passed, close the browser tab. Hit F10 to continue the loop. When it hits the next iteration you will see the exception. Enjoy your exception :-)
I was getting this on an ASP.NET 2.0 / IIS7 / Windows 2008 site. The same code on IIS6 worked fine. It was causing an issue for me because it was messing up the login process. A user would log in and get a 302 to default.aspx, which would get through Page_Load, but not as far as PreRender before IIS7 would send a 302 back to login.aspx without the auth cookie. I started playing with app pool settings, and for some reason 'Enable 32-bit applications' seems to have fixed it. No idea why, since this site isn't doing anything special that should require any 32-bit drivers. We have some sites that still use Access that require 32-bit, but not our straight SQL sites like this one.
I got this error when I dynamically read data from a WebRequest and never closed the Response.
protected System.IO.Stream GetStream(string url)
{
    try
    {
        System.IO.Stream stream = null;
        var request = System.Net.WebRequest.Create(url);
        var response = request.GetResponse();
        if (response != null)
        {
            stream = response.GetResponseStream();
            // I never closed the response thus resulting in the error
            response.Close();
        }
        response = null;
        request = null;
        return stream;
    }
    catch (Exception) { }
    return null;
}
I too got this same error in an image handler that I wrote. I got it about 30 times a day on a site with heavy traffic, and managed to reproduce it as well. You get it when a user cancels the request (closes the page, or their internet connection is interrupted, for example); in my case, on the following row:
myContext.Response.OutputStream.Write(buffer, 0, bytesRead);
I can't think of any way to prevent it, but maybe you can handle it properly. For example:
try
{
    …
    myContext.Response.OutputStream.Write(buffer, 0, bytesRead);
    …
}
catch (HttpException ex)
{
    if (ex.Message.StartsWith("The remote host closed the connection."))
        ; // do nothing
    else
    {
        // handle other errors
    }
}
catch (Exception e)
{
    // handle other errors
}
finally
{
    // close streams etc.
}
We have a very simple ASP.Net page for uploading a file to our webserver. The page has no controls - a client uses it to automatically send us a file each night.
On occasion, the file seems to not get to us, but the client reports that they have sent it.
We added some logging statements to the page, and discovered something quite odd. The page ceases to execute right in the middle of a log statement. No exceptions, just up and dies.
Here is the code-behind:
protected void Page_Load(object sender, EventArgs e) {
    try {
        // record that request came in at all
        log.Debug("Update Inventory page requested through HTTP {2} on {0} {1}", DateTime.Now.ToShortDateString(), DateTime.Now.ToLongTimeString(), IsPostBack ? "POST" : "GET");
        // make sure directory exists
        string basePath = Server.MapPath("~/admin/uploads/");
        log.Debug("Saving to folder {0}", basePath);
        if (!Directory.Exists(basePath)) {
            log.Debug("Creating folder {0}", basePath);
            Directory.CreateDirectory(basePath);
        }
        // generate a unique file name
        string fileName = DateTime.Now.Ticks.ToString() + ".dat";
        string path = basePath + fileName;
        log.Debug("Filename to save is {0}", fileName);
        // record initial bytes of stream/file
        // ('stream' is not declared in this snippet; presumably the request body)
        StreamReader reader = new StreamReader(stream);
        string fileContents = reader.ReadToEnd();
        log.Debug("File received by GET is " + fileContents.Length + " characters long and begins with: "
            + Environment.NewLine + fileContents.Substring(0, Math.Min(fileContents.Length, 1000)));
        // write out file
        File.WriteAllText(path, fileContents);
        log.Debug("Update Inventory page processing finished.");
        // trap for and record any and all exceptions
    }
    catch (Exception ex) {
        log.Debug(ex);
    }
}
The processing seems to die in the middle of the log statement that outputs the length and first portion of the fileContents variable. The logging that occurs when the process fails looks like this:
2010-08-02 02:46:01.7342|DEBUG|UpdateInventory|Update Inventory page requested through HTTP GET on 8/2/2010 2:46:01 AM
2010-08-02 02:46:01.7655|DEBUG|UpdateInventory|Saving to folder c:\hosting\sites\musicgoround.com\wwwroot\admin\uploads\
2010-08-02 02:46:01.7811|DEBUG|UpdateInventory|Filename to save is 634163139617811250.dat
2010-08-02 02:48:02.3905|DEBUG|UpdateInventory|
I really don't understand what to make of this.
I would assume that if there were an error in the transmission of the file, an exception would be thrown from the reader.ReadToEnd() line. And if not an exception, I would expect the page processing to continue, with perhaps only part of the file received (in which case it should still log something).
The logging statement is only accessing a string variable, and it's inside a try-catch. NLog is the logging component we use, and we access that through the facade provided by the Simple Logging Facade project on Codeplex. So, we trust the logging component to be more or less bulletproof - we certainly don't see anything in our usage of it here that should be causing problems.
So, what's the deal? Why on earth could this page just up and stop processing like this?
The fact that we get a half-finished logging statement seems to point towards an error being swallowed in the logging system - but that just seems so unlikely - and we have NLog's internal logging on and it is not reporting any problems.
The most likely candidate is that this line:
2010-08-02 02:48:02.3905|DEBUG|UpdateInventory|
Is caused by this:
log.Debug(ex);
I.e. it is throwing an exception, but the logger is not recording anything useful. Why don't you try switching the log levels around a bit, e.g. change the exception logging level to error:
log.Error(ex);
That way you can see if it is actually throwing an exception and it is just the logger not recording the exception string properly.
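If it does turn out to be the logger, one defensive tweak worth trying (an assumption on my part, not something your snippet confirms is needed) is to hand the logger a pre-rendered string in that catch block, so no exception-formatting work is left to fail silently inside the logging pipeline:

catch (Exception ex)
{
    // Error level, and ex.ToString() rendered up front: the logger has
    // no exception-formatting work left that could fail silently.
    log.Error("Update Inventory page failed: " + ex.ToString());
}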