I have a Dell PowerEdge T410 server (dual quad-core Xeon 5500-series CPUs with 16 GB RAM) running Windows Server 2003.
I wrote a program in C# that works with a large amount of numbers; after certain calculations, the results are stored in a 6000 x 6000 matrix. Finally it writes this matrix (36 million entries) to a text file (172 MB).
When I run this program on my laptop, CPU utilization goes to 100% and it takes about 40 hours to complete.
When I run this program on my server, CPU utilization goes to just 10% and it takes almost the same 40 hours to complete.
Now my problem: obviously the server should utilize more CPU, at least 70%, and should complete the task in a shorter time. How can I achieve this?
Rewrite the code to take advantage of the server's greater capabilities, in particular the additional cores.
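A minimal sketch of the idea (in Python for brevity rather than C#; in C#, the analogous building block is Parallel.For from System.Threading.Tasks). The matrix size is shrunk for the demo; the point is that independent rows can be computed on separate cores, with the file written in one buffered pass at the end:

```python
from concurrent.futures import ProcessPoolExecutor

N = 200  # stand-in for the 6000 x 6000 case

def compute_row(i):
    # placeholder for the real per-entry calculation
    return [i * N + j for j in range(N)]

def main():
    # one worker process per core; rows are independent, so they
    # parallelise cleanly across all cores
    with ProcessPoolExecutor() as pool:
        matrix = list(pool.map(compute_row, range(N), chunksize=20))
    # write the whole matrix once at the end, outside the hot loop
    with open("matrix.txt", "w") as f:
        for row in matrix:
            f.write(" ".join(map(str, row)) + "\n")
    return matrix

if __name__ == "__main__":
    main()
```

If each entry is independent of the others, this kind of row-wise split typically pushes CPU utilization close to 100% across all cores; the speedup is then bounded by whatever fraction of the work (e.g. the final file write) remains serial.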
I have an interesting problem with how Windows and .NET manage memory for ASP.NET applications that I can't explain. I have a big ASP.NET application that, after starting up, can take about 1 GB of memory according to Resource Monitor. We tried to test how many instances of the application we could run at the same time on a single machine with 14-16 GB of memory.
First test is with an Azure Windows 2016 server with 8 vCPUs, 14 GB RAM, HDD.
After a few instances:
After 30 instances:
As you can see, the private bytes and working set of some instances were reduced a lot. Based on what I have read about how memory is managed (working set, physical memory, virtual memory, page files...), I can understand how the OS can take physical memory away from idle processes for others that need it. So far so good.
Then we tested the same scenario with another Azure Windows 2016 server with 4 vCPUs, 16 GB RAM, but this one uses SSD.
After about 20 instances, we got OutOfMemoryException:
The key difference I could see is that the memory of all those w3wp processes was still high. In other words, it was not reduced as in the test above.
My question is: why was the behavior different? What prevented the second case from saving memory to the page file (my guess!) and thus caused the OutOfMemoryException?
Checking the pagefile setting showed us that it was still enabled in "System managed size" mode, but somehow Windows refused to use it for the w3wp processes. We changed it to a custom size of 20 GB and everything started working again as expected. I must admit that I still don't know why Windows 2016 behaves like that when an SSD is used, though.
Environment:
machines: 2.1 GHz Xeon, 128 GB RAM, 32 CPUs
OS: CentOS 7.2 (1511)
Cassandra version: 2.1.15
OpsCenter version: 5.2.5
3 keyspaces: Opscenter (3 tables), OpsCenter (10 tables), and the application's keyspace (485 tables)
2 datacenters: one for Cassandra (5 machines) and another, DCOPS, to store the OpsCenter data (1 machine)
Right now the agents on the nodes consume on average ~1300 CPU (out of 3200 available, i.e. roughly 13 of the 32 cores), with the only transactional load being ~1500 writes/s on the application keyspace.
Is there any relation between the number of tables and OpsCenter? Is this expected behaviour, eating a lot of CPU because the agents are trying to write metrics for too many tables, or is it some kind of bug?
Note: the behaviour was the same on the previous OpsCenter version, 5.2.4. For this reason I first tried upgrading OpsCenter to the newest version available.
From opscenter 5.2.5 release notes :
"Fixed an issue with high CPU usage by agents on some cluster topologies. (OPSC-6045)"
Any help/opinion much appreciated.
Thank you.
Observing with the awesome tool you provided, Chris: on a specific agent's PID I noticed that heap utilisation was constantly above 90%, which triggered a lot of GC activity with huge GC pauses of almost 1 second. During these pauses I suspect the polling threads had to wait, blocking my CPU a lot. Anyway, I am not a specialist in this area.
I decided to enlarge the agent's heap from the default 128 MB to a nicer value of 512 MB, and all the GC pressure went away; thread allocation is now behaving nicely.
Overall, CPU utilization for the OpsCenter agent dropped from 40-50% down to 1-2%. I can live with 1-2%, since I know for sure that CPU is consumed by the JMX metrics.
So my advice is to edit the file datastax-agent-env.sh and raise the default Xmx value from 128 MB to:
-Xmx512M
Then save the file, restart the agent, and monitor for a while.
http://s000.tinyupload.com/?file_id=71546243887469218237
Thank you again Chris.
Hope this will help other folks.
Here is my problem. I am working on a batch process that spawns multiple tasks; each task does some journal postings, and the tasks run in parallel. The journal is a counting journal with close to 10k lines, and the process runs for hours, as there are around a hundred journals to be posted. The process runs fine on physical dev boxes with AOS and SQL on the same box, but on a virtual server its behaviour is different: the AOS starts consuming all the memory while the lines are being added, and at some point memory hits 100% and the AOS throws an out-of-memory exception and dies; other times the process just hangs and waits for memory to be released, which takes a long time. The journal posting is a standard AX process and is not customised. The AX environment is 2012 R1 with the latest kernel hotfixes applied (KB2962510). I explored a property called MaxMemLoad, which lets you restrict the memory an AOS can consume on a server, but it did not help at all.
The AX environment is composed of three AOSs in a cluster.
How can I restrict this runaway memory consumption?
EDIT:
Thanks to Matej I made some progress. The SQL Server version was 2008 R2 SP1, and I applied the latest SP3. Interestingly, out of the three AOSs in the cluster, two now have a much better memory graph, under 45%, but the third still shows weird memory usage. All three AOSs run the same version of AX on similar system configs (Windows 2008 R2, 24 GB RAM, 4 cores), and I had applied the latest kernel hotfix on all of them. At the moment I am doing a full CIL compile on this particular server and will run the batch again to see if that helps. I am attaching three graphs for CPU and memory, generated with Performance Monitor; as you can see, the memory on Server 01 is very erratic and is not released on time, while the other two are more stable. Any ideas?
Say I have written a program that takes 30 seconds to execute on a dual-core processor. How long would it take on a 16-core processor? The same, or different?
Two cases:
One: the program is written with multiple cores in mind.
Two: the program is written without regard to the number of cores.
Viewed in isolation[1], unless you have explicitly written multithreaded code, the runtime should be the same. Of course, it might be faster if you have other applications running simultaneously, because they can now run on the other cores.
If you have written multithreaded code, then the speedup you see will be based on all sorts of factors (memory bandwidth, IO bandwidth, memory access patterns, cache coherency, synchronisation, etc.), as well as Amdahl's law. It will always be some number less than N (where N is the number of cores).
[1] Assuming we're talking about a conventional platform.
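As a worked example of Amdahl's law, with hypothetical numbers (assume 90% of the program's work parallelises; the 30 s figure on 2 cores is from the question):

```python
def amdahl_speedup(p, n):
    """Speedup predicted by Amdahl's law for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9                                  # assumed parallel fraction
t2 = 30.0                                # observed runtime on 2 cores (s)
t1 = t2 * amdahl_speedup(p, 2)           # implied single-core time, ~54.5 s
t16 = t1 / amdahl_speedup(p, 16)         # predicted 16-core time, ~8.5 s
t_inf = t1 * (1.0 - p)                   # lower bound as n -> infinity, ~5.5 s
print(round(t1, 1), round(t16, 1), round(t_inf, 1))  # prints: 54.5 8.5 5.5
```

So even with infinitely many cores, the serial 10% caps the runtime at ~5.5 s: the speedup approaches 1/(1 - p), well below N.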
We have a vanilla ASP.NET application (ASP.NET Web Forms, Entity Framework, SQL Server 2005) without any explicit multithreading in code. It has been deployed to the staging environment (OS: Windows Server 2008 R2 64-bit; CPU: Intel Xeon E5507 @ 2.27 GHz; RAM: 7.5 GB). This environment consists of a web server, a database server and a reporting server, each a separate instance in the cloud (Amazon EC2). When testing for concurrency, the observations are as follows:
1 user - CPU usage ~25%, Response time 2-4 seconds
2 users - CPU usage 40-50%, Response time 3-6 seconds
4 users - CPU usage 60-80%, Response time 4-8 seconds
8 users - CPU usage 80-100%, Response time 4-10 seconds
My questions are:
Is CPU usage proportional to the number of concurrent users? And can the response time vary to such a great extent, as seen in the observations above?
Going by the observations above, the CPU will be maxed out at roughly 10 concurrent users. Shouldn't the CPU handle many more concurrent users seamlessly, without a drastic increase in response time? Ideally, how many concurrent users can a CPU handle for a basic ASP.NET application?
If so, what could be causing the high CPU usage and long response times here? What would be an effective way to debug this and find the bottlenecks in the code or IIS settings?
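One quick sanity check for numbers like these is Little's law (concurrency = throughput x response time). A back-of-envelope sketch, treating 8 users at a ~7 s average response time as illustrative figures taken from the observations above, not measured throughput:

```python
def littles_law_throughput(concurrent_users, avg_response_s):
    """Little's law: L = X * R, so throughput X = L / R (requests/sec)."""
    return concurrent_users / avg_response_s

# loosely based on the observations above (8 users, 4-10 s responses)
x = littles_law_throughput(8, 7.0)
print(round(x, 2), "requests/sec at saturation")
# ~1.1 req/s saturating 4 cores implies several CPU-seconds of work per
# request, so the bottleneck is heavy per-request computation (or blocking
# that looks like it), not the user count itself
```

If each request really needs 2-4 s of CPU at 1 user, then ~10 users maxing out this box is consistent with the numbers; profiling would confirm where that per-request time goes.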
PS: IIS settings (i.e. in machine.config) which have been changed:
maxWorkerThreads = 100
minWorkerThreads = 50
maxIOThreads = 100
minIOThreads = 50
minFreeThreads = 176
maxConnections = 100
The high CPU usage could be caused by a wide variety of things. The easiest way to find out what's going on is to use a profiling tool:
ANTS Memory Profiler:
http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/
ANTS Performance Profiler:
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/
They're very affordable, but you should be able to work unrestricted with the trial versions. These tools do a fantastic job of identifying bottlenecks and memory leaks.