We have the same EAR that installs and works fine on both WAS 8.5 and WAS 7.0. The way it is coded, it can connect to multiple domains such as cert, train, and mock, or a separate one for prod; it uses the subject to figure out which domain to connect to. The problem is that the first transaction to a domain was a little slower on WAS 7.0, close to 5-6 seconds, but on WAS 8.5 it now takes somewhere around 10-25 seconds, and we are not sure what is going on.
After more testing on WAS 8.5: all requests need authentication and the client always sends the authentication header. We were thinking it was JPA, but we saw that the first request to reach the server takes a good 10-25 seconds to respond. After that, all other requests are relatively fast. When monitoring WAS 8.5 we see almost one core constantly consumed by the Java process, plus it is constantly pinging the ODR, which keeps the CPU for that process high.
Just wondering if anybody has had a similar problem with this upgrade, where the initial call to JPA is comparatively slower than on WAS 7.0.
After further investigation we found that WebSphere was scanning the JARs for resource annotations, and this happens on the first request. There is a configuration property in WebSphere that prevents this scanning. Once that is set up correctly, the application works as expected.
The property is documented as:
com.ibm.ws.webcontainer.SkipMetaInfResourcesProcessing = true
We got this from http://pic.dhe.ibm.com/infocenter/wasinfo/v8r0/index.jsp?topic=%2Fcom.ibm.websphere.nd.multiplatform.doc%2Finfo%2Fae%2Fae%2Frweb_jsp_staticfile.html
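For reference, this goes in as a web container custom property; the navigation below is from memory, so double-check it against the linked page for your exact WAS version. In the admin console go to Servers > Server Types > WebSphere application servers > your_server > Web Container Settings > Web container > Custom properties and add

    Name:  com.ibm.ws.webcontainer.SkipMetaInfResourcesProcessing
    Value: true

then restart the server so the web container picks it up.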
Related
I have a simple web app with one endpoint which makes 2 other HTTP calls and 2 database accesses. When I run a naïve load test with siege on my laptop I observe a surprising result: clearly low throughput and, when profiling, a high level of lock contention. If I remove all the HTTP calls the throughput increases, but not dramatically, and I still observe the same high level of lock contention.
Here is the result of my profiling:
What I don't understand is that the majority of my lock contention happens outside my user code. How can I diagnose this? Any ideas on how to find the root cause?
We use .NET 6, EF 6, and SqlClient 3.
At first I suspected this: https://github.com/dotnet/SqlClient/issues/422, so I upgraded to SqlClient 5 and disabled MARS, but I see the same behavior. My environment is macOS.
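A minimal sketch of the kind of isolated repro that can narrow this down (not the application code from above - SqlClient is used directly so EF is out of the picture, and every name and connection value is a placeholder):

    // Sketch only: MARS off and an explicit pool size, with the runtime's lock
    // contention counter watched from outside while siege runs.
    using Microsoft.Data.SqlClient;

    var cs = new SqlConnectionStringBuilder
    {
        DataSource = "localhost",
        InitialCatalog = "AppDb",
        UserID = "app",
        Password = "example",
        Encrypt = false,                    // SqlClient 4+ enables encryption by default
        MultipleActiveResultSets = false,   // MARS disabled, as already tried above
        MaxPoolSize = 200                   // default is 100; pool waits can look like lock contention
    }.ConnectionString;

    using var conn = new SqlConnection(cs);
    conn.Open();

    // In another shell while the load test runs:
    //   dotnet-counters monitor --process-id <pid> System.Runtime
    // and watch "Monitor Lock Contention Count" and the ThreadPool queue length.

If the contention disappears in this stripped-down form, the culprit is more likely in the EF/SqlClient layering than in the web stack itself.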
I have an ASP.NET web service that I am hosting on IIS 6 (may change to IIS 7 in the future). The .asmx page may receive many requests at the same time. It takes approximately 3s per request per CPU to return a response after a request is received (so two requests will also come back in 3s on a dual-core). However, when multiple requests come in at once (or close enough), the service seems to try to make it look like it is processing all of them at the same time. For example, if 6 requests come in, they all return in around 9s instead of the first two coming back in 3s, the next 2 in 6s, and the final 2 in 9s. My questions are: What is going on (briefly or elaborately if you have the patience :)), and how can I limit the number of requests or threads created from the service point-of-view, preferably without making any changes to IIS or machine.config?
Thanks in advance!
EDIT:
Just to clarify, I'm trying to make the web service behave in a first-in-first-out sort of way, where the first n requests are processed (n = number of processors), then the next ones. Right now, if I send 10 requests at once, the service gathers all of them together and splits the processing up among all processors. It seems to me that, in theory, if I can tell the service to limit the number of requests processed simultaneously to n (corresponding to the number of processors), then I will achieve my goal. But I don't know how to do this.
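One illustrative way to get that cap from inside the service itself, without touching IIS or machine.config (a sketch only - the service and method names are invented, not the real ones), is to gate the expensive work with a semaphore sized to the processor count:

    using System;
    using System.Threading;
    using System.Web.Services;

    public class CalcService : WebService
    {
        // One slot per CPU; requests beyond that block here until a slot frees up.
        private static readonly Semaphore Gate =
            new Semaphore(Environment.ProcessorCount, Environment.ProcessorCount);

        [WebMethod]
        public string DoWork(string input)
        {
            Gate.WaitOne();
            try
            {
                return ExpensiveComputation(input); // the ~3s CPU-bound part
            }
            finally
            {
                Gate.Release();
            }
        }

        private static string ExpensiveComputation(string input)
        {
            Thread.Sleep(3000); // stand-in for the real calculation
            return input;
        }
    }

A Semaphore does not guarantee strict FIFO wake-up order, but it does keep the number of requests being computed at n while the rest wait, so the first ones should come back in roughly 3s instead of everything finishing together.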
From the book Pro ASP.NET Web API: HTTP Web Services in ASP.NET:
With .NET Framework 4.0, the default configuration settings are suitable for most scenarios. For example, in ASP.NET 4.0 and 4.5, the MaxConcurrentRequestsPerCPU value is set to 5000 by default (it had been a very low number in earlier versions of .NET). According to team members of IIS, there’s nothing special about the value 5000. It was set because it is a very large number and will allow plenty of async requests to execute concurrently. This setting should be fine, so there is no need to change it.

Tip: Windows 7, Windows Vista, and all other Windows client operating systems handle a maximum of 10 concurrent requests. You need to have a Windows Server operating system or use IIS Express to see the benefits of asynchronous methods under high load.
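If the value ever does need changing, it can be overridden through configuration; a sketch of the appSettings form, assuming the documented aspnet:MaxConcurrentRequestsPerCPU key (the same knob also exists as an attribute in Aspnet.config):

    <appSettings>
      <!-- Per-CPU concurrent request limit; 5000 is already the .NET 4+ default -->
      <add key="aspnet:MaxConcurrentRequestsPerCPU" value="5000" />
    </appSettings>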
I am building an ASP.NET application. I'm experiencing some slow load times so I checked out the traffic using Fiddler. It seems that the page itself is loading in around 3 seconds.
OK, it's kind of slow, but what baffles me is how long the js, css, and image assets take to load, even when they're being cached. Each of those requests takes about 3 seconds before the server responds with a 304. Now, I've done my share of web development, and my understanding is that a 304 response should not take 3 seconds.
My suspicion is that the server the app is hosted on is too weak. It is a VM running Windows Server 2K8 SP2 with about 2GB of memory, and the physical machine has at least one other VM running concurrently. Before I go and get myself a new server, does a lack of power in the machine sound like a possible cause of this problem?
Note: Latency should not be a problem; I'm accessing it through an intranet.
My environment: Windows Server 2008, IIS 7.0, ASP.NET
I developed a Silverlight client.
This client gets updates from the ASP.NET host through a WCF web service.
We get 100% CPU usage and connection drops when we have a very low number of users (~50).
The server should clearly be able to handle a lot more than that.
I ran some tests on our DEV server and indeed 100 requests / s maxes out the CPU.
What's odd is that even if the service is replaced by a dummy sending back hardcoded data the service still maxes out the CPU.
The thread count looked very low at about 20 so I thought there was some contention somewhere.
I changed all configuration options I could find to increase the worker threads (processModel, httpRuntime and the MaxRequestsPerCPU registry entry).
Nothing changed.
Then I stopped the IIS server and ran the web service as a console (removing all the ASP authentication references).
The service maxed out the CPU as well.
Then comes the magic part: I killed the console app and restarted IIS, and now the service runs at 5-60% CPU with 100 requests / s and I can see 50+ worker threads.
I did the same thing on our preprod machine and had the same magic effect.
Rebooting the machines keeps the good behaviour.
So my question is: what happened to fix my IIS server? I really can't understand what fixed it.
Cheers.
Find out the root cause of the high CPU usage, and then you can find a fix:
http://support.microsoft.com/kb/919791
I have run into strange behavior of my ASP.NET application on the server (IIS 7 on Windows Server 2008 x64, quad-core Xeon processor).
The web application is a simple page that spends about one second calculating some math and then displays the result. That is, it consumes almost no network, disk, or memory, but fully uses processor resources.
The following phenomenon appears under load testing: IIS 7 utilizes the processor at no more than 25% and will not, for anything in the world, utilize it any further. This 25% is equal to one core, but it is spread out across all four cores according to the Task Manager performance tab. On another computer (IIS 7, Windows 7, quad-core) everything works as it should: the processor is utilized at a full 100%.
I have found 2 computers for each of the behavior variants (peak load of 25% and of 100% on 4-core processors). A similar situation is described here. What can cause such behavior?
This 25% is equal to one core, but spread out across all four according to the Task Manager performance tab.
Reality check: before 2008 R2, when you use up one core, the CPU scheduler will move the load between cores. Starting with 2008 R2 it will keep it on one core so it can actually move the other cores into deep sleep.
So, what you see is basically an application that uses one CPU core. Period.
What can cause such behavior?
Either your code, or your request generation (well, together with your code), ensures that the requests are serialized and not handled in parallel.
During load testing, do you accept/keep the session cookie (i.e. ONE session), and do you have session state enabled on your ASP.NET page? That would serialize all page requests to the one session in memory and is a very likely culprit. Another one is doing "stupid" things in code that result in a block and make the algorithm effectively single-threaded - but this cannot be evaluated without a lot more information from you on how you program and what you actually do. For example, I once saw a bunch of monkeys code an online shop using ONE database connection (so as not to overload the database) that was kept in the application object, and they used the lock/unlock methods there to effectively turn their ASP application into a single-threaded thing. That one was obvious - but there are a lot of other things that can go wrong. The questions basically are:
Are you, by configuration or test scenario, doing something to force IIS to serialize requests (which would be among the web farm settings, or bad usage of session state)?
Do you do anything within the pages that effectively locks them to single-threaded execution?
IIS per se answers requests through work items (i.e. it uses a LOT of threads) unless it has to serialize them (a session is only ever assigned to ONE thread at a time, so a second request for the same session is serialized).
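To illustrate the session-state point: a request that only reads the session can opt out of that exclusive per-session lock, so concurrent requests from one session are no longer serialized. A minimal sketch (the handler name and the math are made up):

    using System.Web;
    using System.Web.SessionState;

    // IReadOnlySessionState gives read access to the session without taking the
    // exclusive lock that serializes requests belonging to the same session.
    public class ReportHandler : IHttpHandler, IReadOnlySessionState
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            var user = (string)context.Session["UserName"]; // read-only use is fine
            context.Response.Write(DoHeavyMath(user));
        }

        private static string DoHeavyMath(string user)
        {
            // stand-in for the ~1 second CPU-bound calculation from the question
            double x = 0;
            for (int i = 1; i < 50000000; i++) x += 1.0 / i;
            return user + ": " + x;
        }
    }

For a Web Forms page the equivalent is EnableSessionState="ReadOnly" in the @Page directive; with writable session state enabled, two requests for the same session always run one after the other.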
I doubt it's spread out. More likely the algorithm is not parallelised and so the code runs on a single core.
I have realized that on the 2 computers where the load reached 100%, 32-bit Windows was installed, while on the 2 computers where peak load was 25%, it was 64-bit. But changing the setting "Enable 32-bit Applications" = true has not helped.
If your server is using multiple worker processes and you are sure that your load testing software is issuing requests in parallel, then something in your application is likely becoming serial.
This is actually pretty common (we do a lot of load testing for our customers) - it could be as simple as a database pool with a size of one or as complex as some shared resource being locked at some level deep within the application or within a library the application is using. We've seen cases where the first step in serving an application page opens a transaction that is not committed until the page is done. If that transaction is locking a table that is needed for the same purpose by every other page, then only one page request can be serviced at a time.
Good luck hunting down the problem - be sure to let us know what you find!
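To make the "database pool with a size of one" case concrete, this is the sort of setting to hunt for (illustrative connection strings and names only; "Max Pool Size" is a standard SqlClient connection-string keyword that defaults to 100):

    using System.Data.SqlClient;

    class PoolSizeExample
    {
        // A pool capped at one connection serializes every page that touches the
        // database - exactly the "one busy core" symptom described above.
        const string SerializedPool =
            "Server=db1;Database=Shop;Integrated Security=true;Max Pool Size=1";

        static void Main()
        {
            using (var conn = new SqlConnection(SerializedPool))
            {
                conn.Open(); // with a pool of 1, every other concurrent request waits here
                // ... the page's queries would run here ...
            }
        }
    }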
The problem was solved after installing a newer OS: "Windows Server 2008 Enterprise SP1 (c) 2009" instead of "Windows Server 2008 Standard SP2 (c) 2007".