Below are some of the counter values observed while doing load testing with 250 users:
> Gen2 heap size : 1124196
> #bytes in all heaps : 2172104
> #GC Handles: 926
> # of pinned objects: 11
> Large Object Heap size: 87128
> # total committed bytes: 3350528
> # total reserved bytes: 33546240
They kept increasing and increasing until they reached that limit.
After the test finished, the memory shown in Task Manager for w3wp.exe was not released until an IIS reset was applied.
Also, the application was not accessible until an IIS reset was applied (I was getting a COM+ activation failed error).
Has anyone been in this situation before?
Thanks
Yes, you need a tool like ANTS Memory Profiler. Aside from that, make sure you are closing MemoryStreams, anything IO related, SqlConnections, etc. Try to use using statements on anything that implements IDisposable. Check for static references to objects tied to your Page instances.
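For example (a minimal sketch, not taken from the original post; the connection string and query are placeholders), wrapping disposables in using blocks releases them deterministically instead of leaving them for the garbage collector:

using System.Data.SqlClient;
using System.IO;

// The connection and command are disposed at the end of the block, even if an exception is thrown.
using (var connection = new SqlConnection("<your connection string>"))
using (var command = new SqlCommand("SELECT 1", connection))
{
    connection.Open();
    command.ExecuteScalar();
}

// The same pattern applies to streams and anything else IO related.
using (var stream = new MemoryStream())
using (var writer = new StreamWriter(stream))
{
    writer.Write("example");
}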
I am trying to set up the Azure Cosmos DB Emulator to work locally with integration tests but I found that it is very slow.
I am reading a ~1KB JSON document with the container.ReadItemAsync<T> method and awaiting the result. I am calling this method in a loop, 100 times.
The execution time is consistently around 9.5-10 seconds, so one request takes around 100 milliseconds, which is very slow given that the service is running locally.
Why is this so slow and how can I make it faster?
I expect at most 1 ms / request considering it is all disk I/O.
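The measurement loop looks roughly like this (a sketch based on the description above; the endpoint, key, database, container and document ids are placeholders):

using System;
using System.Diagnostics;
using Microsoft.Azure.Cosmos;

var container = new CosmosClient("https://localhost:8081", "<emulator key>")
    .GetContainer("TestDb", "TestContainer");

var sw = Stopwatch.StartNew();
for (var i = 0; i < 100; i++)
{
    // Point read of a ~1 KB document; each call is awaited before the next one starts.
    await container.ReadItemAsync<dynamic>("doc-1", new PartitionKey("doc-1"));
}
sw.Stop();
Console.WriteLine($"100 reads took {sw.ElapsedMilliseconds} ms");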
I tried the following but they didn't work:
Turning Rate Limiting on/off
Creating the database/collection with various provisioning settings; it has zero effect on performance (even 100k RU)
Creating the db and collection manually vs. with the client SDK
Using "Reset Data" from the emulator tray menu
Further information:
The emulator version is 2.14.6.0 (68d4ca59)
I start the emulator from the start menu, but starting it from the command line doesn't change anything
I am using the Microsoft.Azure.Cosmos NuGet package, version 3.22.1
My CPU is an i7-8565U, but it isn't even fully used while the test is running
My system has 16 GB RAM
My system is running on a fast enough SSD ("NVMe SK hynix BC501 H"), but while running the test the SSD usage is between 0 and 2%
The performance is the same if I increase the document size to 100 KB or even 1 MB
Creating your CosmosClientOptions with the AllowBulkExecution = true setting can cause this.
The SDK will construct batches and group operations; when a batch is full, it gets dispatched, but if the batch doesn't fill up, there is a timer that will dispatch it to make sure the operations complete. This timer is currently 100 milliseconds. So if the batch does not get filled up (for example, you are just sending 50 concurrent operations), then the overall latency might be affected.
Source: Introducing Bulk support in the .NET SDK
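For illustration (a sketch with a placeholder endpoint and key, not the exact code from the blog post): make sure the client used for these sequential point reads is not created with bulk execution turned on; the default is off.

using Microsoft.Azure.Cosmos;

var client = new CosmosClient(
    "https://localhost:8081",      // emulator endpoint (placeholder)
    "<emulator key>",
    new CosmosClientOptions
    {
        // Bulk mode batches operations and dispatches incomplete batches on a ~100 ms
        // timer, which adds latency to low-concurrency workloads. Leave it off here.
        AllowBulkExecution = false
    });

Only enable AllowBulkExecution when you are actually dispatching a large number of concurrent operations.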
I would like to update the memory allocation limit of a Firebase Cloud Function to 4GB. I updated the memory value with the runWith parameter:
functions.runWith({memory: "4GB"})
This, however, is being ignored and the deployed function does not have 4 GB of memory allocated.
If I try with 3GB, I get this error:
Error: The only valid memory allocation values are: 128MB, 256MB, 512MB, 1GB, 2GB, 4GB
So it seems 4 GB is a valid value.
What am I doing wrong? Am I missing something?
It seems to work just fine if I use 2GB, 1GB... It only ignores the 4 GB value.
I had the exact same issue and reached out to Firebase Support to report the bug. They acknowledged that it is a bug and said a fix will be released in their next release.
At the time of reading, if the issue is not yet fixed, this is the workaround:
Go to the GCP Console and select your project
Select Cloud Functions from the menu on the left
Click "edit"
Expand the section "VARIABLES, NETWORKING AND ADVANCED SETTINGS"
Change the "memory allocated" field.
Click "Next" and then click on "Deploy"
EDIT: The fix has been released in firebase-functions#3.13.1
As of July 2022, the documentation states:
memory: amount of memory to allocate to the function, possible values are: '128MB', '256MB', '512MB', '1GB', '2GB', '4GB', and '8GB'.
functions.runWith({
  memory: '4GB',
}).firestore.document('items/{id}')   // example trigger path; chain your own trigger here
  .onWrite((change, context) => { /* function body */ });
ASP.NET WebAPI2 application on .NET 4.6.2, hosted on IIS on Windows Server 2016. From time to time, there are a lot (hundreds) of requests stuck for hours (despite the fact that I have a 60-second request timeout set) with no CPU usage.
So I took a memory dump of the w3wp process, along with sos.dll, clr.dll and mscordacwks.dll and all my project's DLLs and PDBs from the bin directory on the server, and used WinDbg as described in many blogs and tutorials. But in all of them, the authors are able to see the CLR stack directly by calling ~*e !clrstack. I can see the CLR stack trace for some Redis and ApplicationInsights workers, but for all other managed threads I can see only:
OS Thread Id: 0x1124 (3)
Child SP IP Call Site
GetFrameContext failed: 1
0000000000000000 0000000000000000
!dumpstack for any of these gives just this:
0:181> !dumpstack
OS Thread Id: 0x1754 (181)
Current frame: ntdll!NtWaitForSingleObject+0x14
Child-SP RetAddr Caller, Callee
000000b942c7f6a0 00007fff33d63acf KERNELBASE!WaitForSingleObjectEx+0x8f, calling ntdll!NtWaitForSingleObject
000000b942c7f740 00007fff253377a6 clr!CLRSemaphore::Wait+0x8a, calling kernel32!WaitForSingleObjectEx
000000b942c7f7b0 00007fff25335331 clr!GCCoop::GCCoop+0xe, calling clr!GetThread
000000b942c7f800 00007fff25337916 clr!ThreadpoolMgr::UnfairSemaphore::Wait+0xf1, calling clr!CLRSemaphore::Wait
000000b942c7f840 00007fff253378b1 clr!ThreadpoolMgr::WorkerThreadStart+0x2d1, calling clr!ThreadpoolMgr::UnfairSemaphore::Wait
000000b942c7f8e0 00007fff253d952f clr!Thread::intermediateThreadProc+0x86
000000b942c7f9e0 00007fff253d950f clr!Thread::intermediateThreadProc+0x66, calling clr!_chkstk
000000b942c7fa20 00007fff37568364 kernel32!BaseThreadInitThunk+0x14, calling ntdll!LdrpDispatchUserCallTarget
000000b942c7fa50 00007fff3773e821 ntdll!RtlUserThreadStart+0x21, calling ntdll!LdrpDispatchUserCallTarget
So I have no idea where to look for the bug in my code.
(here is the full result:
https://gist.github.com/rouen-sk/eff11844557521de367fa9182cb94a82
and here are the results of !threads:
https://gist.github.com/rouen-sk/b61cba97a4d8300c08d6a8808c4bff6e)
What can I do? A Google search for "GetFrameContext failed" gives nothing helpful.
As mentioned, this is not trivial; however, you can find a case study of a similar problem here: https://blogs.msdn.microsoft.com/rodneyviana/2015/03/27/the-case-of-the-non-responsive-mvc-web-application/
In a nutshell:
Download NetExt. It is the zip file here:
https://github.com/rodneyviana/netext/tree/master/Binaries
Open your dump and load NetExt
Run !windex to index the heap
Run !whttp -order -running to see a list of running requests
If a request shows a thread number, you can go to that thread to see what is happening
If a request shows --- instead of a thread number, it is waiting for a thread, and this is a sign that some throttling is happening
If it is a WCF service, run !wservice to see the services
Run !wruntime to see runtime information
Run !wapppool to see Application Pool information
Run !wdae to list all errors
... And so it goes. When you do this again and again, you will be able to spot issues easily.
I am trying to play my QTP11 scripts in UFT14 (trial), but for some reason .Exist doesn't wait for the given timeout. Rather, if the object doesn't exist, it waits according to the project's object synchronization timeout setting. Any reason why?
For example, my project's object synchronization timeout is set to 60 seconds. When I use something like If ErrorObject.Exist(10) Then ErrorObject.Close, this should wait for only 10 seconds, but UFT14 is instead waiting the full 60 seconds. Is it a bug, or is there some extra setting I have to apply in UFT14 for Exist to wait only for the given timeout?
Edit - On further inspection I found out that this is an issue with Java objects only, so it might be a bug in the Java add-in. Can anyone verify or provide a workaround?
Edit - HP acknowledged that this is an issue. Here is the link if anyone is interested:
https://softwaresupport.hpe.com/group/softwaresupport/search-result/-/facetsearch/document/KM02764499
This is because of the default timeout in UFT. You can change that default timeout as below:
Test Settings -> Run -> Object synchronization timeout
Change the "Object synchronization timeout" value (in seconds).
Or you can do this directly through VBScript code:
Setting("DefaultTimeout") = 5000 ' this value is in milliseconds
I have an ASP.NET MVC website that gets about 6500 hits a day, on a shared hosting platform at Server Intellect. I keep seeing app restarts in the logs and I cannot figure out why.
I've read Scott Gu's article here: http://weblogs.asp.net/scottgu/archive/2005/12/14/433194.aspx
and implemented the technique, and here's what shows up in my log:
Application Shutdown:
_shutDownMessage=HostingEnvironment initiated shutdown
HostingEnvironment caused shutdown
_shutDownStack=
   at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
   at System.Environment.get_StackTrace()
   at System.Web.Hosting.HostingEnvironment.InitiateShutdownInternal()
   at System.Web.Hosting.HostingEnvironment.InitiateShutdown()
   at System.Web.Hosting.PipelineRuntime.StopProcessing()
It seems to occur about every five minutes.
Are there any other ways to debug this?
UPDATE: Here are the application pool settings mentioned by Softion:
CPU
Limit : 0
Limit Action : no action
Limit Interval : 5 Minutes
Process Model
Idle Timeout : 20 Minutes
Ping Maximum Response Time : 90 Seconds
Startup Time Limit : 90 Seconds
Rapid-Fail Protection
Enabled : True
Failure Interval : 5 Minutes
Recycling
Private Memory Limit : 100 MB
Regular Time Interval : 1740 Minutes (29 Hours)
Request Limit : 0
Specific Times : none
Virtual Memory Limit : 0
You can easily grab the reason for the shutdown from HostingEnvironment.
You read Scott Gu's article, but you missed its comments.
var shutdownReason = HostingEnvironment.ShutdownReason;
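For example, in Global.asax.cs (a minimal sketch; swap the Trace call for whatever logging you use):

using System;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_End(object sender, EventArgs e)
    {
        // ApplicationShutdownReason values include IdleTimeout, HostingEnvironment,
        // ConfigurationChange, BinDirChangeOrDirectoryRename, and so on.
        var shutdownReason = HostingEnvironment.ShutdownReason;
        System.Diagnostics.Trace.TraceWarning("Application shutting down: " + shutdownReason);
    }
}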
If the reason is HostingEnvironment, check the IIS application pool parameters that control recycling. Check the description in the help box at the bottom of the settings dialog in your own copy for full info on each one.
You can ask your provider to give you the applicationHost.config file where all these parameters are set. They will find it in C:\Windows\System32\inetsrv\config. I'm sure you can also get them using some .NET API.
For 6500 hits a day, which is a very low hit rate, I'm betting the "Idle Time-out" is set to 5 minutes.
Update (moved comments to here //jgauffin)
CPU Limit: 0 = disabled.
Process Model Idle Timeout: 20 minutes (20 minutes without a request recycles your app).
Rapid-Fail Protection: enabled (5 minutes). You need to know the maximum failure count. If your app throws more than that many exceptions in 5 minutes, it will be recycled.
Private Memory Limit: 100 MB. Yes, you should profile; this is a low limit.
Regular Time Interval: 1740 minutes (29 hours): it will recycle every 29 hours.
Request Limit: 0 (disabled).
Virtual Memory Limit: 0 (disabled).
Rapid-Fail Protection is enabled (5 minutes), and you need the maximum failure count. If your app throws more than that many exceptions in 5 minutes, it recycles. Since your app seems to recycle every 5 minutes, this should be the first thing to check. There should be zero unhandled exceptions in secondary worker threads; wrap your code in a try/catch there.
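For example (a minimal sketch; DoBackgroundWork stands in for whatever your worker actually does):

using System;
using System.Threading;

ThreadPool.QueueUserWorkItem(_ =>
{
    try
    {
        DoBackgroundWork(); // hypothetical worker method
    }
    catch (Exception ex)
    {
        // Log and swallow: an unhandled exception escaping a worker thread kills the
        // whole w3wp process and counts toward Rapid-Fail Protection.
        System.Diagnostics.Trace.TraceError("Background work failed: " + ex);
    }
});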
Re update:
The settings you got from the provider help, but it is much better to ask them for information on the reason for the restarts, as I mentioned in my original answer, i.e. the actual log entries of the restarts. From those you can know specifically what was triggered; I've seen cases where one app hits several different limits.
You really have to profile your application with a realistic amount of test data.
My money is on hitting resource limits set by your hosting provider.
Before going crazy with optimization without a target, contact your provider and ask them to give you information on the restarts.
Typical recycles:
idle x amount of time / like 15 mins
more than x amount of memory / like 200 MB
more than x% processor over y time / like 70% over 1 minute
a daily recycle
Once you know the cause, you have to find out what is taking those resources. For this you have to profile your application with a realistic amount of test data. Knowing whether it is memory or processor can help you know what to look for.
Is IIS set to recycle the app pool frequently?
Is there some kind of runaway memory leak in the app pool?
It requires a bit of know-how about what your app does. Here's a list of things that can cause the app to restart/reset or even shut down:
StackOverflowException
OutOfMemoryException
Any unhandled exception that crashes a thread
CodeContracts use Environment.FailFast when a contract violation occurs
Exceptions are quite easy to track: if you can reproduce the issue with a debugger attached, you can go into Visual Studio and enable breaking on all exceptions when they are thrown, not only when they are unhandled by user code. It will sometimes reveal interesting stuff that is otherwise hidden away.