ASP.NET Web API framework slowness issue

I have implemented a web service using the ASP.NET Web API framework.
The response time is fast on my local system (target framework 4.5) but much slower after deploying to the server.
After deployment to Windows Server 2012 R2 with IIS 8.5, a response takes about 30 seconds for each request. I have tried a sample web service without any data retrieval from the database; this is just as slow. I installed Fiddler and identified that the TTFB (time to first byte) took about 26 seconds.
I created the web service using this url:
https://www.c-sharpcorner.com/article/create-simple-web-api-in-asp-net-mvc/
Even displaying "welcome" message itself takes about 30 seconds.
Any thoughts on where to look to investigate the performance issue?
Here is the sample code from that URL:
namespace Demo1.Controllers
{
    [System.Web.Mvc.SessionState(System.Web.SessionState.SessionStateBehavior.Disabled)]
    public class DemoController : ApiController
    {
        public string Get()
        {
            return "Welcome To PARIS Web API";
        }

        public List<string> Get(int Id)
        {
            return new List<string> {
                "Data1",
                "Data2"
            };
        }
    }
}

Performance of compiled code on two different servers is almost always related to the physical resources available at the time the code runs.
Keep in mind that we as developers usually code on PCs with far better specs than the servers we deploy to!
For API deployments, it is common to deploy to cloud or minimal servers that have reduced CPU or core counts, or where the resources available to your deployed app have been restricted so the server can run multiple apps.
Sometimes, to reduce costs, we deploy to servers with only the resources we think our app needs, forgetting that the server's OS has baseline requirements of its own that must be met.
You have already isolated the issue from the database by testing without any data retrieval; for other readers, keep the bandwidth between the deployed server and the database in mind as a potential bottleneck, and keep your database close to your APIs.
If every request to a simple site is slow, check the logs on the slow server: look for errors, indications of slow starts, or signs that the app is restarting for every request. Slow cold starts are common and accepted, but the web app should remain alive between requests, even if it is session/state-less.
It would help to know whether you are deploying to a physical server, a virtual machine, a cloud app service, or a container host; whether the server is in the cloud or on the ground; and whether you know the general bandwidth available to the server.
Just spit-balling, since this was an old .NET gotcha: check that your server or firewall is not blocking outbound internet calls!
Because of low-level calls in .NET that perform SSL certificate revocation checks, your code may wait at startup while trying to make external calls. This post discusses options to disable that. Something to try if all else fails.

Related

ASP.Net API App - continual HTTP 502.3 errors

My team and I have been at this for four full days now, analyzing every log available to us, Azure Application Insights, you name it, we've analyzed it. And we cannot get to the root cause of this issue.
We have a customer who is integrated with our API to make search calls and they are complaining of intermittent but continual 502.3 Bad Gateway errors.
Here is the flow of our architecture:
All resources are in Azure. The endpoint our customers call is a .NET Framework 4.7 Web App Service in Azure that acts as the stateless handler for all the API calls and responses.
This API app sends the calls to an Azure Service Fabric cluster; that cluster load-balances on the way in and distributes the API calls to our Search Service Application. The Search Service Application then generates an Elasticsearch query from the API call and sends that query to our Elasticsearch cluster.
Elasticsearch then sends the results back to Service Fabric, and the process reverses from there until the results are sent back to the customer from the API endpoint.
What may separate our process from a typical API is that our response payload can be relatively large, depending on the search. On average these last several days, the payload of a single response has been anywhere from 6 MB to 12 MB. Our searches simply return a lot of data from Elasticsearch. In any case, a normal search is typically executed and returned in 15 seconds or less. As of right now, we have already increased our timeout window to 5 minutes just to try to handle what is happening and reduce timeout errors, given how long their searches are taking. We increased the timeout via the following code in Startup.cs:
services.AddSingleton<HttpClient>(s => {
    return new HttpClient() { Timeout = TimeSpan.FromSeconds(300) };
});
I've read in some places that you actually have to set this in web.config rather than here, or at least in addition to it. I'm not sure if this is true?
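(As an aside, a per-request timeout can also be enforced with a CancellationTokenSource instead of the single global HttpClient.Timeout. The following is just an illustrative sketch, not our production code; SearchClient and searchUrl are made-up names.)

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class SearchClient
{
    public static async Task<HttpResponseMessage> SearchAsync(HttpClient client, string searchUrl)
    {
        // Cancels the call if the complete response has not arrived within 5 minutes.
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(300)))
        {
            return await client.GetAsync(searchUrl, cts.Token);
        }
    }
}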
So the customer who is getting the 502.3 errors has significantly increased the volume they send us over the last week, but we believe we are fully scaled to handle it. They are still trying to put the issue on us, but after many days of research, I'm starting to wonder if the problem is actually on their side. Could it be that they are not equipped to take the increased payload on their side? Could their integration architecture not be scaled enough to take the return payload from the increased volumes? When we observe our resource usage (CPU/RAM/IO) on all of the above applications, everything is normal, all below 50%. This also makes me wonder if this is on their side.
I know it's a bit of a subjective question, but I'm hoping for insight from someone who may have experienced this before, and even more importantly, from someone who has experience with a .NET API app in Azure that returns large datasets in its responses.
Any code blocks of our API app, or screenshots from Application Insights are available to post upon request - just not sure what exactly anyone would want to see yet as I type this.

Types of services for long-running ASP.NET processes

Our website has a long-running calculation process that keeps the client waiting for a few minutes until it's finished. We've decided we need a design change: farm the processing out to a Windows or WCF service, and present the client with another page while we do all the calculations.
What's the best way of implementing the service though?
We've looked at background worker processes, but these look problematic because IIS can periodically shut down threads.
It seems the best thing to use is either a Windows service or a WCF service. Does anyone have a view on which is better for this purpose?
If we host the service on another machine, would it have to be a WCF service?
It looks like it's difficult for the service (whatever type it is) to communicate back to the website; maybe the service could instead write its results to a database, and the website could poll that for the required results later on.
It's an open ended question I know, but does anyone have any ideas?
thanks
I don't think that the true gain in terms of performance will come from the design change.
If I had to choose between a Windows service and WCF, I would go with the Windows service, because I would be able to set processor affinity and prioritize as I want. However, I would have to implement the logic for serving multiple clients at the same time myself (which, in the WCF approach, is handled by IIS).
So in terms of performance, if you use the .NET Framework for both the WCF service and the Windows service, the difference will not be major. A Windows service would be more "controllable"; WCF would be more straightforward and carries no big performance penalties.
For these types of tasks I would focus on highly optimizing the single-threaded calculation. If you have a complex calculation, can it be written in native code (C or C++)? You could make a highly optimized .DLL file that is used by either the Windows service or the WCF service. This approach lets you select the best compiler options and make the best use of your machine's resources. Also, nothing stops you from creating multiple threads in the .DLL function.
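A hypothetical sketch of what calling such a native DLL from the service could look like ("FastCalc.dll" and its export are made-up names, not a real library):

using System.Runtime.InteropServices;

internal static class NativeCalc
{
    // Matches a native export such as:
    // extern "C" __declspec(dllexport) double RunCalculation(const double* inputs, int length);
    [DllImport("FastCalc.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern double RunCalculation(double[] inputs, int length);
}

The service would then call NativeCalc.RunCalculation(data, data.Length) like any other static method.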
The link between the website and the service can be ensured in both cases: through sockets for the Windows service (extra code for creating the protocol) or directly through SOAP for WCF. If you push the results into a database, the difficulty is letting the website know that the data is there (and knowing which particular user session it belongs to).
So that's what I would do.
Hope it helps.
Cheers!
One way to do this is:
1) The client submits the calculation request via a call to a WCF service (which can be hosted in IIS).
2) The calculation request is stored in a database with a unique ID.
3) The ID is returned to the client.
4) A Windows service (or several, on several different machines) polls the database for new requests.
5) The Windows service performs the calculation and stores the result in a result table with the ID.
6) The client polls the result table (through a WCF service) with the ID.
7) When the calculation is finished, the result is returned to the client.
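A minimal sketch of the WCF contract these steps imply (all names are hypothetical):

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICalculationService
{
    // Steps 1-3: store the request and return the unique ID immediately.
    [OperationContract]
    Guid SubmitCalculation(string requestPayload);

    // Steps 6-7: returns null until the Windows service has written a result.
    [OperationContract]
    string GetResult(Guid id);
}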

Fixing slow initial load for IIS

IIS has an annoying feature for low-traffic websites: it recycles unused worker processes, causing the first user to hit the site after some idle time to experience an extremely long delay (30+ seconds).
I've been looking for a solution to the problem and I've found these potential solutions.
A. Use the Application Initialization plugin
B. Use Auto-Start with .NET 4
C. Disable the idle-timeout (under IIS Reset)
D. Precompile the site
I'm wondering which of these is preferred, and more importantly, why are there so many solutions to the same problem? (My guess is they aren't, and I'm just not understanding something correctly).
Edit
Performing C seems to be enough to keep my site warmed up, but I've discovered that the real root of my site's slowness has to do with Entity Framework, and I couldn't figure out why it was going cold. See this question, which has since been answered!
I eventually just had to make a warm up script to hit my site occasionally to make sure it stayed speedy.
Options A, B and D are in the same category: they only influence the initial start time, doing website warm-up work such as compilation and loading libraries into memory.
Using C, disabling the idle timeout, should be enough so that subsequent requests to the server are served fast (restarting the app pool takes quite some time, on the order of seconds).
As far as I know, the timeout exists to save memory that other websites running in parallel on that machine might need. The price being that one time slow load time.
Besides being shut down on user inactivity, the app pool will also recycle by default every 1740 minutes (29 hours).
From TechNet:
"Internet Information Services (IIS) application pools can be periodically recycled to avoid unstable states that can lead to application crashes, hangs, or memory leaks."
As long as app pool recycling is left on, it should be enough.
But if you really want top notch performance for most components, you should also use something like the Application Initialization Module you mentioned.
Web Hosting Challenge
You have to remember that none of the machine configuration options are available if you are hosted on a shared server as many of us (smaller companies and individuals) are.
ASP.NET MVC Overhead
My site takes at least 30 seconds when it hasn't been hit in over 20 minutes (and the web app has been stopped). It is terrible.
Another Way to Test Performance
There's another way to test if it is your ASP.NET MVC start up or something else. Drop a normal HTML page on your site where you can hit it directly.
If the problem is related to ASP.NET MVC start up then the HTML page will render almost immediately even when the web app hasn't been started.
That's how I first recognized that the problem was in the ASP.NET MVC startup.
I could load an HTML page at any time and it would load blazingly fast. Then, after hitting that HTML page, I'd hit one of my ASP.NET MVC URLs and get the Chrome message "Waiting for raddev.us..."
Another Test With Helpful Script
After that I wrote a LINQPad script (check out http://linqpad.net for more) that would hit my web site every 8 minutes (less than the time it takes for the app to unload, which should be 20 minutes), and I let it run for hours.
While the script was running I hit my web site and every time my site came up blazingly fast. This gives me a good idea that most likely the slowness I was experiencing was because of ASP.NET MVC startup times.
Get LINQPad and you can run the following script; just change the URL to your own, let it run, and you can test this easily.
Good luck.
NOTE: In LINQPad you'll need to press F4 and add a reference to System.Net to bring in the library that retrieves your page.
ALSO: make sure you change the URL string variable to point at a URL that loads a route from your ASP.NET MVC site, so the engine actually runs.
System.Timers.Timer webKeepAlive = new System.Timers.Timer();
Int64 counter = 0;

void Main()
{
    webKeepAlive.Interval = 5000;
    webKeepAlive.Elapsed += WebKeepAlive_Elapsed;
    webKeepAlive.Start();
}

private void WebKeepAlive_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    webKeepAlive.Stop();
    try
    {
        // ONLY the first time it retrieves the content will it print the string
        String finalHtml = GetWebContent();
        if (counter < 1)
        {
            Console.WriteLine(finalHtml);
        }
        counter++;
    }
    finally
    {
        webKeepAlive.Interval = 480000; // every 8 minutes
        webKeepAlive.Start();
    }
}

public String GetWebContent()
{
    try
    {
        String URL = "http://YOURURL.COM";
        WebRequest request = WebRequest.Create(URL);
        WebResponse response = request.GetResponse();
        Stream data = response.GetResponseStream();
        string html = String.Empty;
        using (StreamReader sr = new StreamReader(data))
        {
            html = sr.ReadToEnd();
        }
        Console.WriteLine(String.Format("{0} : success", DateTime.Now));
        return html;
    }
    catch (Exception ex)
    {
        Console.WriteLine(String.Format("{0} -- GetWebContent() : {1}", DateTime.Now, ex.Message));
        return "fail";
    }
}
Writing a ping service/script to hit your idle website is really the best way to go, because you have complete control. The other options mentioned would only be available if you have leased a dedicated hosting box.
In a shared hosting space, warm-up scripts are the best first line of defense (self-help is the best help). Here is an article that shares an idea for doing it from your own web application.
I'd use B, because in conjunction with worker-process recycling it means there'd only be a delay while the pool is actually recycling. This avoids the delay normally associated with initialization in response to the first request after idle, and you keep the benefits of recycling.
A good option to ping the site on a schedule is to use Microsoft Flow, which is free for up to 750 "runs" per month. It is very easy to create a Flow that hits your site every hour to keep it warm. You can even work around their limit of 750 by creating a single flow with delays separating multiple hits of your site.
https://flow.microsoft.com
See this article for tips on dealing with performance issues, including startup-related ones under its "cold start" section. Most of this matters no matter what type of server you are using, locally or in production.
https://blogs.msdn.microsoft.com/b/mcsuksoldev/2011/01/19/common-performance-issues-on-asp-net-web-sites/
If the application deserializes anything from XML (and that includes web services...), make sure SGEN is run against all binaries involved in deserialization, and place the resulting DLLs in the Global Assembly Cache (GAC). This precompiles all the serialization objects used by the assemblies SGEN was run against and caches them in the resulting DLL, which can give huge time savings on the first deserialization (loading) of config files from disk and on initial calls to web services.
http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx
If any IIS servers do not have outgoing access to the internet, turn off Certificate Revocation List (CRL) checking for Authenticode binaries by adding <generatePublisherEvidence enabled="false"/> under <runtime> in machine.config. Otherwise every worker process can hang for over 20 seconds during start-up while it times out trying to connect to the internet to obtain a CRL.
http://blogs.msdn.com/amolravande/archive/2008/07/20/startup-performance-disable-the-generatepublisherevidence-property.aspx
http://msdn.microsoft.com/en-us/library/bb629393.aspx
Consider using NGEN on all assemblies. However, without careful use this doesn't give much of a performance gain, because the base load addresses of all the binaries loaded by each process must be carefully set at build time so they do not overlap. If the binaries have to be rebased when they are loaded because of address clashes, almost all the performance gains of using NGEN will be lost.
http://msdn.microsoft.com/en-us/magazine/cc163610.aspx
I was getting a consistent 15-second delay on the first request after 4 minutes of inactivity. My problem was that my app was using Windows Integrated Authentication to SQL Server, and the service profile was in a different domain than the server. This caused a cross-domain authentication from IIS to SQL Server upon app initialization, and that was the real source of my delay. I changed to a SQL login instead of Windows authentication, and the delay was immediately gone. I still have all the app-initialization settings in place to help improve performance, but they may not have been needed at all in my case.
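A sketch of that connection-string change (server, database, and login names are placeholders):

// Before: Windows Integrated Authentication, which triggered the
// cross-domain handshake during app initialization.
const string windowsAuth = "Server=DBSERVER;Database=AppDb;Integrated Security=SSPI;";

// After: a SQL login, which avoids the cross-domain authentication entirely.
const string sqlLogin = "Server=DBSERVER;Database=AppDb;User Id=appUser;Password=<secret>;";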

Using a remote, external web service instead of a database

I am building an ASP.NET web application that will be deployed to a 4-node web farm.
My web application's farm is located in California.
Instead of a database for back-end data, I plan to use a set of web services served from a data center in New York.
I have a page /show-web-service-result.aspx that works like this:
1) User requests page /show-web-service-result.aspx?s=foo
2) The page's code-behind queries a web service hosted by the third party in New York.
3) When the web service returns, the returned data is formatted and displayed to the user in the page response.
Does this architecture have potential scalability problems? Suppose I am getting hundreds of unique hits per second, e.g.
/show-web-service-result.aspx?s=foo1
/show-web-service-result.aspx?s=foo2
/show-web-service-result.aspx?s=foo3
etc...
Is it typical for web servers in a farm to be using web services for data instead of database? Any personal experience?
What change should I make to the architecture to improve scalability?
You most definitely have a scalability problem: the third-party web service. Unless you have a service-level agreement with that service (agreeing on the number of requests you can submit per second), chances are real that you will overload it with your anticipated load. Having four nodes yourself doesn't help you then.
So you should (a) come to an agreement with the third party, and (b) test what load they can actually take.
In addition, you need to make sure that your framework can use parallel connections for accessing the remote service. Suppose you have a round-trip time of 20 ms from California to New York (which would be fairly good); that means you cannot make more than 50 requests per second over a single TCP connection. Likewise, starting a new TCP connection for every request will also kill performance, so you want pooling on these parallel connections.
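In .NET Framework clients, for example, the per-host connection limit is low by default; a minimal sketch of raising it (24 is an illustrative value, not a recommendation):

using System.Net;

public static class ClientConfig
{
    public static void Configure()
    {
        // The default limit for non-interactive client apps is 2 parallel
        // connections per host, which would throttle calls to the remote service.
        ServicePointManager.DefaultConnectionLimit = 24;
    }
}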
I don't see a problem with this approach, we use it quite a bit where I work. However, here are some things to consider:
Is your page rendering going to be blocked while waiting for the web service to respond?
What if the response never comes, i.e. the service is down?
For the first problem I would look into using AJAX to update the page after you get a response back from the web service. You'll also want to consider how to handle the no response or timeout condition.
Finally, you should really think about how you could cache the web service data locally. For example if you are calling a stock quoting service then unless you have a real-time feed, there is no reason to call the web service with every request you get. Store the data locally for a period of time and return that until it becomes stale.
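A minimal sketch of that idea using ASP.NET's built-in cache (the quote lookup and the 60-second lifetime are hypothetical):

using System;
using System.Web;
using System.Web.Caching;

public static class QuoteCache
{
    public static string GetQuote(string symbol)
    {
        // Serve from the local cache until the entry goes stale.
        string cached = HttpRuntime.Cache[symbol] as string;
        if (cached != null)
            return cached;

        string fresh = GetQuoteFromService(symbol); // the remote call to New York
        HttpRuntime.Cache.Insert(symbol, fresh, null,
            DateTime.UtcNow.AddSeconds(60), Cache.NoSlidingExpiration);
        return fresh;
    }

    private static string GetQuoteFromService(string symbol)
    {
        // Placeholder for the real web service call.
        return "42.00";
    }
}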
You may have scalability problems but most of these can be carefully engineered around.
I recommend you use ASP.NET's asynchronous tasks so that the web service is queued up, the thread is released while the request waits for the web service to respond, and then another thread picks up when the web service is done to finish off the request.
MSDN Magazine - Wicked Code - Asynchronous Pages in ASP.NET 2.0
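Roughly what that pattern looks like (a sketch only; the page class and URL are illustrative, and the page's @Page directive must set Async="true"):

using System;
using System.Net;
using System.Web.UI;

public partial class ShowWebServiceResult : Page
{
    private WebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        _request = WebRequest.Create("http://example.com/service?s=foo");

        // The begin handler releases the ASP.NET thread while the remote call
        // is in flight; the end handler runs when the response arrives.
        AddOnPreRenderCompleteAsync(
            (s, args, cb, state) => _request.BeginGetResponse(cb, state),
            ar =>
            {
                using (WebResponse response = _request.EndGetResponse(ar))
                {
                    // Format the returned data and bind it to the page here.
                }
            });
    }
}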
Local caching is an absolute must. The fewer times you have to go from California to New York, the better. You might want to look into Microsoft's Velocity (although that's still in CTP), NCache, or another distributed cache, so that your four web servers don't each have to fetch and cache the same data from the web service; once one server gets it, it should be available to all.
Microsoft Project Code Named "Velocity"
NCache
Other things that can go wrong that you should engineer around:
The web service goes down (obviously) and data falls out of cache, and you can't get it back. Try to make it so that the data is not actually dropped from cache until you're sure you have an update available; then the only risk is the service being down while your application pool resets, so don't reset the pool as a first-line troubleshooting maneuver!
There are two different timeouts on web requests: a connect timeout and an overall timeout. Make sure both are set extremely low and that you handle both of them timing out (see the sketch after this list). If the service's DNS goes down, this can look like quite a different failure.
Watch perfmon for ASP.NET Queued Requests. This number will rise rapidly if the service goes down and you're not handling it properly.
Research and adjust the ASP.NET performance registry settings so you have a highly optimized ASP.NET thread pool. I don't remember the specifics, but I seem to remember that there's a limit on I/O completion ports and something else of that nature that is absurdly low for the powerful hardware I'm assuming you have on hand.
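A sketch of setting aggressive outbound timeouts with HttpWebRequest (the 2-second values and URL are illustrative):

using System;
using System.Net;

public static class ServiceClient
{
    public static HttpWebResponse Call(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = 2000;          // ms; covers connecting and waiting for the response
        request.ReadWriteTimeout = 2000; // ms; covers reading the response stream
        return (HttpWebResponse)request.GetResponse();
    }
}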
The trendy answer is REST. Any GET request can be HTTP-response cached (with lots of options on how that is configured), and it will be cached by the internet itself (your ISP, essentially).
Your project has an architecture that reflects the direction that Microsoft and many others in the SOA world want to take us. That said, many people try to avoid this type of real-time risk introduced by the web service.
Your system will have a huge dependency on the web service working in an efficient manner. If it doesn't work, or is slow, people will just see that your page isn't working properly.
At the very least, I would get a web stress tool and performance-test your web service to at least the traffic levels you expect at peak, and likely beyond. When does it break (if ever)? When does it start to slow down? These are good metrics to know.
Other options to look at: perhaps you can get daily batches of data from the web service to a local database and hit the database for your web site. Then, if for some reason the web service is down or slow, you could use the most recently obtained data (if this is feasible for your data).
Overall, it should be doable, but you want to understand and measure the risks, and explore any potential options to minimize those risks.
It's fine. There are some scalability issues, primarily around the number of calls you are allowed to make to the external web service per second. Some web services (Yahoo Shopping, for example) limit how often you can call their service and will lock out your account if you call too often. If you have a large farm and lots of traffic, you might have to throttle your requests.
Also, it's typical in these situations to use an interstitial page that forks off a worker thread to make the web service call and redirects to the results page when the call returns. (Think of a travel site: when you do a search, you get an interstitial page while they call out to an external source for flight data, and you are redirected to a results page when the call completes.) This may be unnecessary if your web service call returns quickly.
I recommend you be certain to use WCF, and not the legacy ASMX web services technology as the client. Use "Add Service Reference" instead of "Add Web Reference".
One other issue you need to consider, depending on the type of application and/or data you're pulling down: security.
Specifically, I'm referring to authentication and authorization, both of your end users and of the web application itself. Where are these things handled? All in the web app? By the WS? Or maybe the front-end app authenticates the users and flows the user's identity to the back-end WS, allowing it to verify that the user is allowed? How do you verify this? Since many other responders here mention a local data cache on the front-end app (an EXCELLENT idea, BTW), this gets even MORE complicated: do you cache data that is allowed for userA but not for userB? If so, how do you verify that userB cannot access data from the cache? And if the authorization is checked by the WS, how do you cache the permissions?
On the other hand, how are you verifying that only your web app is allowed to access the WS (so that an attacker cannot directly access your WS data over the internet, for instance)? For that matter, how do you ensure that your web app contacts the CORRECT WS server, and not a bogus one? And of course I assume that all connections to the WS are over TLS/SSL... (but also programmatically verify that the cert applies to the accessed server...)
In short, it's complicated, and there are many elements to consider here... but it is NOT insurmountable.
(as far as input validation goes, that's actually NOT an issue, since this should be done by BOTH the front end app AND the back end WS...)
Another aspect here, as mentioned by @Martin, is the need for an SLA with whatever provider/hosting service you have for the NY WS, covering not just performance but also availability: what happens if the server is inaccessible, how quickly they commit to getting it back up, what happens if it is down for extended periods of time, and so on. That's the only way to legitimately transfer the risk of your availability being controlled by an externality.

How to create an ASP.NET web farm?

I am looking for information on how to create an ASP.NET web farm - that is, how to make an ASP.NET application (initially designed to work on a single web server) work on 2, 3, 10, etc. servers?
We created a web application that works fine when there are, say, 500 simultaneous users. But now we need to make it work for 10,000 users (all working with the web app at the same time).
So we need to set up 20 web servers and arrange things so that 10,000 users can work with the web app by typing "www.MyWebApp.ru" in their web browsers, with their requests handled by the 20 web servers without their knowing it.
1) Is there special standard software to create an ASP.NET web farm?
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
I have found very little information on ASP.NET web farms and scalability on the web: in most cases, articles on scalability explain how to optimize an ASP.NET app and make it run faster, but I found no example of a "Hello world"-style ASP.NET web app running on two web servers.
Would be great if someone could post a link to an article or, better, tell about one's own experience in ASP.NET "web farming" and addressing scalability issues.
Thank you,
Mikhail.
1) Is there special standard software to create an ASP.NET web farm?
No.
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
No.
To build a web farm, you will need some form of load balancing. For up to 8 servers or so, you can use Network Load Balancing (NLB), which is built into Windows. For more than 8 servers, you should use a hardware load balancer.
However, load balancing is really just the tip of the iceberg. There are many other issues that you should address, including things like:
State management (cookies, ViewState, session state, etc)
Caching and cache invalidation
Database loading (managing round-trips, partitioning, disk subsystem, etc)
Application pool management (WSRM, pool resets, partitioning)
Deployment
Monitoring
In case it might be helpful, I cover many of these issues in my book: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
I'd say you should configure an NLB (Network Load Balancing) cluster, which basically splits all requests between cluster nodes (and, as an added benefit, detects when nodes are down and stops sending them requests). There are features built into Windows for this, but they don't compare to a hardware device for performance or scalability. If you're using Windows 2008 it really is simple to set one up. If you do this, make sure you have a shared machine key, or you'll start getting exceptions about ViewState being invalid (when one server submits the form and it posts to another server that uses a different key to encode the data).
You could also use DNS round-robin, but with 20 servers presumably in one datacenter, I wouldn't see the point of going to such lengths. If you've got multiple datacenters, though, this is definitely worth considering (as NLB won't really work well between datacenters).
You'll also want to be sure that if a user swaps servers they don't lose their session. The simplest way is to use a session state database (configurable in web.config, or server-wide in IIS's configs). If you don't use sessions, just turn them off in the pages directive of web.config and call it a day. You could also use a session state server, but I don't have any experience with it.
It may also be worth spending some time optimizing the code or adding caching directives to static content; it can be very cost-effective even if it only trims the need for a few of those servers.
Hope that helps.
If you keep your server stateless, this is easy with a good router that implements a round-robin protocol (sending each call to the single published server IP on to a different web server).
If it is not stateless (e.g., if a login is required, or SSL is used), then you need to pin each session to the same server.
Here is some info about MS Application Request Routing; you will find everything you need there:
IIS Load balancing
I would not recommend #2. You will be much better off with a load balancer.
Pay attention to session state management. Unless you configure the load balancer to keep each user on the same web server, you will have to use the session state server or database.
Also, check your code's usage of Application and Cache variables. These will be different on every web server. If those values are static, you may not have a problem. But if they can change, you can end up with different values on each web server.
There used to be a problem with ViewState in 1.x, as explained here. I'm not sure if this problem still exists.
Then, there are some changes that you need to make to the Machine Key in web.config, as explained here.
