Pre-Load Web Application Pages After Deployment to Prevent Slow Loading - asp.net

We build and deploy our web application to our dev environment automatically every night (using VSTS). When we come into the office in the morning, the first person to access the application has to wait an extended period for each page to load the first time. Subsequent loads are very fast.
The problem has a greater impact in our live environment where, after a deployment, it is potentially an end-user who is the first person to access the application and complains of slowness. To mitigate this, a member of the team currently accesses every page of the application manually after deployment to the live environment so that they 'pre-load' every page, which works, but is obviously time-consuming!
I've done a fair bit of searching on the subject, and have configured the appropriate Application Pool in our IIS server (IIS 8.5) so that its Start Mode is set to "AlwaysRunning". I've also edited our applicationHost file and set the appropriate Sites with the preloadEnabled="true" attribute. I did this after reading the instructions in this very helpful Microsoft documentation.
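For reference, those two settings end up in applicationHost.config roughly like this (pool and site names are placeholders, not the actual configuration):

<applicationPools>
  <add name="MyAppPool" managedRuntimeVersion="v4.0" startMode="AlwaysRunning" />
</applicationPools>

<sites>
  <site name="MySite" id="1">
    <application path="/" applicationPool="MyAppPool" preloadEnabled="true">
      <virtualDirectory path="/" physicalPath="D:\Sites\MySite" />
    </application>
  </site>
</sites>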
However, if I'm reading that documentation correctly, any pre-loading of the website which might alleviate the issue we're having (and I'm not even certain that this is the kind of pre-loading that I'm thinking of) only takes place when the server, the IIS service, or the Application Pool is restarted. This isn't happening in our case. We need the pre-loading to take place following a deployment of the application to the IIS server.
Is there a way to automate this pre-loading?

One way of doing this would be to perform an HTTP request automatically:
As soon as the app was deployed (by running a task from the deploying machine)
Before the application pool has the chance to shut itself down (using Task Scheduler for instance)
Personally, I use a tool that is run in both cases to keep the site warmed up.
Advantages
Robust control over how and when this warm-up is executed.
It's completely independent from any IIS or web.config setup.
Disadvantages
Generates "bogus" log information.
Keeps the app permanently in memory (the Pool would never time-out, essentially wasting server resources for sites with a low # of visitors).
Sample
Such a tool could be a simple console app written as follows:
var taskInfo = new {
    Url = "http://www.a-website-to-keep-warm.url",
    UseHostHeader = true,
    HostHeader = "www.a-website-to-keep-warm.url",
    HttpMethod = "head"
};

HttpStatusCode statusCode = HttpStatusCode.Unused;
long contentLength = 0;

try
{
    Dictionary<string, string> headers = new Dictionary<string, string>();
    HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(taskInfo.Url);
    webRequest.Method = taskInfo.HttpMethod.ToUpper();

    if (taskInfo.UseHostHeader)
        webRequest.Host = taskInfo.HostHeader;

    using (HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse())
    {
        //did we warm-up the site successfully?
        statusCode = webResponse.StatusCode;
        contentLength = webResponse.ContentLength;

        //optionally read response headers
        foreach (string header in webResponse.Headers)
        {
            headers.Add(header, webResponse.Headers[header]);
        }
    }

    decimal kilobytes = Math.Round(contentLength / 1024M, 1);
    Debug.WriteLine($"Got {kilobytes:F1} kB with statuscode: \"{statusCode}\" ...");
}
catch (Exception ex)
{
    Debug.WriteLine($"taskInfo failed with exception: {ex.Message}");
}
In my case, I read a bunch of taskInfo objects from a json file and execute them asynchronously every X minutes, making sure X is lower than the Pool-timeout value. It is also run immediately after every deploy.
Because we're not interested in getting the entire content, it uses an HTTP HEAD request instead of GET. Lastly, it supports multiple sites on the same host by adding a Host header to the request.
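As an illustration of that setup, here is a minimal sketch of such a JSON-driven runner; the file name warmup-tasks.json, the WarmupTask class and the 15-minute interval are placeholders, not the actual tool:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Shape of one entry in the (hypothetical) warmup-tasks.json file.
public class WarmupTask
{
    public string Url { get; set; }
    public bool UseHostHeader { get; set; }
    public string HostHeader { get; set; }
    public string HttpMethod { get; set; }
}

public static class WarmupRunner
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task Main()
    {
        var tasks = JsonSerializer.Deserialize<List<WarmupTask>>(
            File.ReadAllText("warmup-tasks.json"));

        // Keep the interval below the application pool's idle time-out.
        var interval = TimeSpan.FromMinutes(15);

        while (true)
        {
            try
            {
                await Task.WhenAll(tasks.Select(SendWarmupRequestAsync));
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Warm-up pass failed: {ex.Message}");
            }
            await Task.Delay(interval);
        }
    }

    private static async Task SendWarmupRequestAsync(WarmupTask task)
    {
        // HEAD keeps the response small; the Host header supports multiple sites on one IP.
        var request = new HttpRequestMessage(new HttpMethod(task.HttpMethod.ToUpper()), task.Url);
        if (task.UseHostHeader)
            request.Headers.Host = task.HostHeader;

        using (var response = await client.SendAsync(request))
        {
            Console.WriteLine($"{task.Url}: {(int)response.StatusCode} {response.StatusCode}");
        }
    }
}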


When is an ASP.NET Web App recycled in Azure?

We have an ASP.NET web app with "Always on" enabled that runs a long task. To avoid running this task two or more times simultaneously, a flag is set in the database at the beginning of the task. If the task is forced to shut down, the flag is not removed, and the task will not run again without manual intervention.
I've been looking into whether the concept of recycling a website exists in Azure, but I didn't find much about it. I found, for example, https://stackoverflow.com/a/21841469/1081568, which suggests it is never recycled, but I've seen some people complaining about web apps with "Always on" set that recycle randomly.
I would like to know under which circumstances an app could be recycled/shut down in Azure. Just for maintenance? Does Azure recycle ASP.NET web apps at all, or is this a concept exclusive to on-premise servers?
And another question: is there a way to capture this shutdown/recycle from Azure and stop my running task gracefully if it's running?
Thanks
As far as I know, Azure will not normally recycle your web app's resources if the web app has "Always on" enabled.
If the web app's "Always On" setting is off, the site will be recycled after a period of inactivity (20 minutes).
And another question: is there a way to capture this shutdown/recycle from Azure and stop my running task gracefully if it's running?
According to your description, I suggest you send a Kudu REST API request to get the current web app's process id.
If the application is restarted, the process id will change. By comparing process ids, you can detect that the web app has been recycled.
For more details about how to get the current web app's process id, refer to the steps below:
1. Set deployment credentials in your Azure web application as below:
Notice: remember the user name and password, we will use them to generate the access token.
2. Send a request to the URL below to get the process information.
URL: https://yourwebsitename.scm.azurewebsites.net/api/processes
Code sample:
string url = @"https://yourwebsitename.scm.azurewebsites.net/api/processes";
var httpWebRequest = (HttpWebRequest)WebRequest.Create(url);
httpWebRequest.Method = "GET";
httpWebRequest.ContentLength = 0;

// Basic authentication with the deployment credentials ("username:password")
string loginInformation = "username:password";
byte[] byt = System.Text.Encoding.UTF8.GetBytes(loginInformation);
string encode = Convert.ToBase64String(byt);
httpWebRequest.Headers.Add(HttpRequestHeader.Authorization, "Basic " + encode);

using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse())
{
    using (System.IO.StreamReader r = new System.IO.StreamReader(response.GetResponseStream()))
    {
        string jsonResponse = r.ReadToEnd();
        dynamic result = JsonConvert.DeserializeObject(jsonResponse);
        dynamic resultList = result.Children();
        foreach (var item in resultList)
        {
            Console.WriteLine(item.name + " : " + item.id);
        }
    }
}
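To detect a recycle automatically, a small poller could compare the process id returned by this endpoint against the last value seen and report when it changes. Below is a rough sketch only; the URL, credentials, "w3wp" process name and polling interval are assumptions:

// Requires Newtonsoft.Json.Linq (JArray, JObject), System.Net and System.Threading.
static void WatchForRecycle()
{
    string url = @"https://yourwebsitename.scm.azurewebsites.net/api/processes";
    string credentials = Convert.ToBase64String(
        System.Text.Encoding.UTF8.GetBytes("username:password"));

    int? lastProcessId = null;
    while (true)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.Headers.Add(HttpRequestHeader.Authorization, "Basic " + credentials);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
        {
            JArray processes = JArray.Parse(reader.ReadToEnd());
            foreach (JObject process in processes)
            {
                // The main site process is typically named "w3wp".
                if ((string)process["name"] == "w3wp")
                {
                    int currentId = (int)process["id"];
                    if (lastProcessId.HasValue && lastProcessId.Value != currentId)
                        Console.WriteLine("Process id changed - the web app was recycled.");
                    lastProcessId = currentId;
                }
            }
        }

        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(15));
    }
}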
You could also find the processid in the portal.
Select your web app --> Process explorer

"An existing connection was forcibly closed by the remote host" sporadic error from HttpWebRequest

I'm having an odd, sporadic problem with a simple test app (Visual Studio console application) that acts as a watchdog for a hosted aspx web site (not app) by sending it simple http requests and timing the responses. I've used this for many weeks in the past without this particular problem. At seemingly random times during the day, my http requests start failing with the above error. They appear to be timeouts, since the requests that fail are all taking 60 seconds. After a period of consistent errors as per above (random time periods, from a few minutes to 90 minutes in one case), the errors stop and http responses start coming back with no errors at the normal speed (usually about .25s). The watchdog client request triggers a very simple database lookup, just 1-2 lines of code on the server side. This is hosted at a shared Windows hosting web host.
I can also trigger this behavior at will by updating any .cs file on my host site, which, among other things, causes the app pool to recycle. Immediately my watchdog app starts timing out again with the above error.
It smells like some kind of recycled connection problem, since if I simply restart the watchdog app, it works fine and responses start coming back at the normal delay.
I've tried setting request.KeepAlive = false and request.ServicePoint.ConnectionLimit = 1, those did not help.
Another clue, I cannot connect with IIS manager to either of two different websites hosted on this server, which has always worked fine. I'm getting "The underlying connection was closed" trying to connect via IIS Manager. Every time. I have not updated either of the sites in a while, so it wasn't any change of mine.
This is an asp.net 4 website on the backend running on IIS7 with a dedicated app pool in integrated pipeline mode.
Also, if I change the sleeptime variable in the watchdog app to something like 30 seconds, the problem doesn't show up. There seems to be some magic number, I believe in the range of 10-20 seconds: if the requests pause longer than that, they never fail.
I think the fact that IIS Manager can't connect is good evidence that something is wrong on the host side, independent of my test app, but I wanted to cover my bases before opening a support incident... especially since a simple restart of my console app fixes the problem... at least for a while.
using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading;

class Program
{
    //make a simple web request and just return the status code
    static string SendHttpMsg(string url, string postData)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(url));
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        ASCIIEncoding encoding = new ASCIIEncoding();
        byte[] byte1 = encoding.GetBytes(postData);
        request.ContentLength = byte1.Length;
        //request.KeepAlive = false; //no effect
        //request.ServicePoint.ConnectionLimit = 1; //no effect

        Stream requestStream = request.GetRequestStream();
        requestStream.Write(byte1, 0, byte1.Length);
        requestStream.Close();

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        response.Close();
        return ((int)response.StatusCode).ToString();
    }

    static void Main(string[] args)
    {
        int sleeptime = 5000;
        string result = "";

        while (true)
        {
            DateTime start = DateTime.Now;
            try
            {
                //this is a very simple call that results in a very simple, fast query in the database
                result = SendHttpMsg("http://example.com/somepage.aspx", "command=test");
            }
            catch (Exception ex)
            {
                //fancy exception handling here, removed
            }
            DateTime end = DateTime.Now;
            TimeSpan calltime = end - start;
            Console.WriteLine(end.ToString() + ", " + calltime.TotalSeconds.ToString("F") + " seconds " + result);
            Thread.Sleep(sleeptime);
        }
    }
}
You could have dangling connections, and with HTTP 1.1 you are by default limited to 2 connections per host.
Try changing the HTTP Protocol Version used in the request:
request.ProtocolVersion = HttpVersion.Version10;
If that doesn't work, it could be taking a long time to resolve the proxy settings, which can be fixed by disabling the proxy settings in your application by adding the following to the .config file:
<system.net>
  <defaultProxy enabled="false">
    <proxy/>
    <bypasslist/>
    <module/>
  </defaultProxy>
</system.net>
If either of the above fixes the problem, I'd recommend adding a try...catch...finally block around your request code, to ensure that each request is properly closed and disposed of (try setting request = null in your finally statement inside the method).
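As a rough sketch (not the original poster's code), the watchdog's SendHttpMsg could be rewritten with using blocks so the request stream and response are always disposed, even when an exception is thrown; it assumes the same usings as the console app above:

// Deterministic cleanup: each stream and response is disposed even on failure,
// so connections are returned to the pool instead of dangling.
static string SendHttpMsg(string url, string postData)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(url));
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";

    byte[] body = Encoding.ASCII.GetBytes(postData);
    request.ContentLength = body.Length;

    using (Stream requestStream = request.GetRequestStream())
    {
        requestStream.Write(body, 0, body.Length);
    }

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        return ((int)response.StatusCode).ToString();
    }
}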

ASP.NET HttpModule RewritePath virtual directory cache not refreshing

I have an ASP.NET IHttpModule implementation designed to rewrite paths for serving files. The module handles only one event, PostAuthenticateRequest, as follows:
void context_PostAuthenticateRequest(object sender, EventArgs e)
{
    if (HttpContext.Current.Request.Path.ToLower().Contains("foobar"))
    {
        HttpContext.Current.RewritePath("virtdir/image.png");
    }
}
The path "virtdir", is a virtual directory child of the application. The application itself runs in a typical location: C:\inetpub\wwwroot\IisModuleCacheTest\ The virtual directory "virtdir" is mapped to C:\TestVirtDir\
A request to http://myserver/iismodulecachetest/foobar will, as expected, return image.png from the virtual directory. Equally, a request to http://myserver/iismodulecachetest/virtdir/image.png will return the same image file.
I then perform the following:
Request http://myserver/iismodulecachetest/foobar
Directly modify C:\testvirtdir\image.png (change its colour in paint and re-save).
Repeat.
After anywhere between 1 and 20 repeats spaced a few seconds apart, the image returned will be an out of date copy.
Once upset, the server will only return the current version after an unknown amount of time elapses (from 10 seconds up to a few minutes). If I substitute the URL in step 1 with http://myserver/iismodulecachetest/virtdir/image.png, the problem doesn't appear to arise. But strangely enough, after the problem has arisen by using the "foobar" URL, the direct URL also starts returning an out of date copy of the image.
Pertinent Details:
A recycle of the app-pool resolves the issue.
Waiting a while resolves the issue.
Repeatedly re-saving the file doesn't appear to have an effect. I'd wondered if a "file modified" event was getting lost, but once stuck I can save half a dozen modifications and IIS still doesn't return a new copy.
Disabling cache in web.config made no difference. <caching enabled="false" enableKernelCache="false" />
The fact that this is a virtual directory seems to matter, I could not replicate the issue with image.png being part of the content of the application itself.
This is not a client-cache, it is definitely the server returning an out of date version. I have verified this by examining request headers, Ctrl+F5 refreshing, even using separate browsers.
I've replicated the issue on two machines. Win7 Pro 6.1.7601 SP1 + IIS 7.5.7600.16385 and Server 2008 R2 6.1.7601 SP1 + IIS 7.5.7600.16385.
Edit - More Details:
Disabling cache and kernel cache at the server level makes no difference.
Adding an extension to the URL makes no difference: http://myserver/iismodulecachetest/foobar.png.
Attaching a debugger to IIS shows the context_PostAuthenticateRequest event handler is being triggered each time and behaving the same way whether or not the cache is stuck.
Edit2 - IIS Logs:
I enabled "Failed Request Tracing" in IIS (interesting how this works for non-failed requests also if configured appropriately. The pipeline is identical up until step 17 where the request returning the out of date version clearly shows a cache hit.
The first request looks just fine, with a cache miss:
But once it gets stuck, it repeatedly shows a cache hit:
The events after the cache hit are, understandably, quite different than the cache miss scenario. It really just looks like IIS is perfectly content to think its file cache is up to date, when it is definitely not! :(
A little further down the stack we see first request:
And then subsequent (faulty) cache-hit request:
Also note that the directory is apparently monitored, as per FileDirmoned="true".
You can do something like below.
void context_PostAuthenticateRequest(object sender, EventArgs e)
{
    if (HttpContext.Current.Request.Path.ToLower().Contains("foobar"))
    {
        Random rnd = new Random();
        int randomNumber = rnd.Next(int.MinValue, int.MaxValue);
        HttpContext.Current.RewritePath("virtdir/image.png?" + randomNumber);
    }
}
I had the same problem using the RewritePath method to address static resources in a virtual directory.
I don't have a solution for that method, but in the end I opted to use the Server.TransferRequest method instead, which shows no caching problems.
HttpContext.Current.Server.TransferRequest(newUrl);
The transferred request is processed again by the IHttpModule, so you need to be careful not to produce loops.
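For illustration, a guarded version of the module from the question could look like this (the app-relative path and the exact guard are assumptions, not from the original answer):

void context_PostAuthenticateRequest(object sender, EventArgs e)
{
    string path = HttpContext.Current.Request.Path.ToLower();

    // The transferred request re-enters the pipeline and hits this module again,
    // so the target path must not itself satisfy the rewrite condition.
    if (path.Contains("foobar") && !path.EndsWith("/virtdir/image.png"))
    {
        // "~/..." (app-relative) is assumed here; adjust to your virtual directory layout.
        HttpContext.Current.Server.TransferRequest("~/virtdir/image.png");
    }
}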

Web Garden and Static Objects difficult to understand

I am in the process of investigating converting our web application to a web farm, so I started with a web garden by setting 'Maximum Worker Processes = 3'. Following is a simplified version of my problem.
Following is my Static Object class.
public static class MyStaticObject
{
    public static string MyProperty { get; set; }
}
Then on Page Load I initialized the Static Objects as follows -
MyStaticObject.MyProperty = "My Static Property";
Then, using an ASP.NET AJAX [WebMethod], I created this page method on my web page:
[WebMethod()]
public static string getStaticProperty()
{
    return MyStaticObject.MyProperty;
}
// Then I call this Ajax method using Javascript and set the return to the text box.
This test is not working as I expected. Following are my assumptions and the (unexpected) outcomes of the test.
I thought that when we set the virtual directory to be a web garden, each request to the virtual directory is handled by a different process in the web garden, so my next few requests to the server should return null, since I initialized the static object in only one worker process. But even if I click the ajax button 20 times in a row (meaning 20 requests), the static object still returns the value.
Am I right in assuming that restarting IIS should kill all the static objects?
Static objects are not shared in web gardens/web farms.
I am surprised by the behaviour of IIS, static objects and web garden.
Am I assuming wrong, or is my way of testing wrong?
Thanks.
Your assumptions about the way static objects are managed in AppPools / web gardens are correct.
However, your assumption about the way that web requests are distributed is not. HTTP requests are round-robined by the http.sys driver to IIS worker processes only when a new TCP connection is established, not when a new request arrives. Since keepalives are enabled by default, even though you made 20 requests in a row, they probably were all served by the same IIS worker process.
You can have IIS disable keepalives for testing purposes from the HTTP Response Headers section in IIS Manager, under Set Common Headers. That should cause your browser to open a new connection for each request.
To test with keepalives enabled, you can use the Web Capacity Analysis Tool (WCAT), available with the IIS 6 Resource Kit, to generate a multi-threaded load that accesses both IIS processes in your web garden.
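Alternatively, if you want to drive the test from code rather than a browser, a small client can disable keep-alive itself so every request opens a new connection; the URL below is a placeholder, and this is only a sketch of the idea:

using System;
using System.IO;
using System.Net;

class KeepAliveTest
{
    static void Main()
    {
        // Hypothetical page URL - replace with a page in the web-garden application.
        const string url = "http://localhost/MyWebGardenApp/Default.aspx";

        for (int i = 0; i < 20; i++)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.KeepAlive = false;   // sends "Connection: close", so each request uses a new TCP connection

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                // With keep-alive off, http.sys can round-robin each new connection
                // to a different worker process, so per-process static state may differ.
                reader.ReadToEnd();
                Console.WriteLine($"Request {i + 1}: {(int)response.StatusCode}");
            }
        }
    }
}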

Starting Process from ASP.NET

I'm running an ASP.NET web application that should start a PowerShell script on the server. Running this PowerShell script requires a lot of domain rights, so I run the app pool under a user that has all the necessary rights.
But when I start the PowerShell script I always get an error saying that access is denied.
Does anyone have an idea how to solve the problem?
When I start a process as described, does the process run under the user context of the app pool or under the user context of the user who is logged in to the ASP.NET web application?
I've tried two methods:
1.
string cmdArg = "C:\\Scripts\\test.ps1 " + username;
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();

Pipeline pipeline = runspace.CreatePipeline();
pipeline.Commands.AddScript(cmdArg);
pipeline.Commands[0].MergeMyResults(PipelineResultTypes.Error, PipelineResultTypes.Output);

Collection<PSObject> results = pipeline.Invoke();
runspace.Close();

StringBuilder stringBuilder = new StringBuilder();
foreach (PSObject obj in results)
{
    stringBuilder.AppendLine(obj.ToString());
    string test = Environment.UserName;
}
return results[0].ToString();
2.
string cmdArg = "C:\\Scripts\\test.ps1 " + username;

Process myProcess = new Process();
ProcessStartInfo myProcessStartInfo = new ProcessStartInfo("powershell.exe", cmdArg);
myProcessStartInfo.UseShellExecute = false;
myProcessStartInfo.RedirectStandardOutput = true;
myProcess.StartInfo = myProcessStartInfo;
myProcess.Start();

StreamReader myStreamReader = myProcess.StandardOutput;
myProcess.WaitForExit();
string myString = myStreamReader.ReadLine();
return myString;
OK, you think running the app pool with such broad permissions is not best practice.
What about putting a web service in between? The web service would live in an app domain that is only reachable from localhost.
Update
OK, I've written an ASP.NET web service. The web service runs in an application pool with all the rights but is only reachable from localhost. The web service contains the code to start the script. The ASP.NET MVC 3 web application runs in an application pool with nearly no rights.
But when the web method is executed I always get an error telling me that I haven't enough rights. I tried setting impersonation to false in the web.config, but without success.
Does anyone know how to solve this problem?
Update:
I've read out the current user who executes the PowerShell script when I start it from the web service. It says it is the user who has all the rights. But the PowerShell script throws errors like: you can't call a method on a null value.
Then I tried to run the script with runas as a low-privileged user. I got the same errors.
Then I tried to run the script with runas as the same user as the web service, and everything worked!
Can anyone explain this phenomenon?
And what is the difference between my code above and a runas (same user context)?
thanks a lot!
Starting a new process in an HTTP request is not great for performance and it may also be a security risk. However, before ASP.NET and other modern web servers were available, the only way to serve content (besides static files) was to execute a program in a separate process.
The API for doing this is called the Common Gateway Interface (CGI) and it is still supported by IIS. If configured correctly you can execute a program on each request.
I'm not sure that you can use CGI to execute a script directly, but you can create an ISAPI filter that will execute PowerShell for files having the extension .ps1. This is basically how, for instance, php.exe is executed in a new process when a file with the extension .php is requested.
Enabling executable content can and should be done on a folder-by-folder basis to limit the security risk. In general you should avoid mixing different kinds of content, i.e. it should not be possible to both view a script and also execute it.
If your intention is to be able to remotely run PowerShell scripts and not much else, it would also be easy to write a small HTTP server in PowerShell, completely removing IIS and ASP.NET from the equation.
I suppose this merely depends on the impersonation settings: if impersonation is enabled, then the currently logged-in user is used; otherwise the app pool user is.
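The impersonation setting referred to here lives in web.config; a minimal illustration (not specific to the code above):

<configuration>
  <system.web>
    <!-- impersonate="true": ASP.NET code runs under the authenticated user's identity.
         impersonate="false" (the default): code runs under the application pool identity. -->
    <identity impersonate="true" />
  </system.web>
</configuration>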
