Recently we upgraded from Tridion 5.3 to Tridion 2011 SP1. Initially the performance of the Tridion GUI was very good, but after converting all VBScript templates to compound templates we had to publish the entire site (around 1,600 pages and 5,000 components). After publishing the site we found that the Tridion GUI became very slow, and the Publish Queue also takes a long time to load. Can anyone suggest performance optimization tips for the CMS servers as well as the SQL database?
Clearly the performance impact comes from your Publisher rather than the CME/IIS, since you state:
After publishing the site we found that Tridion GUI became very slow
What happened after all pages were published?
The CME (Content Manager Explorer) and the Publisher are separate processes, and we tend to install them on different machines precisely to avoid this type of resource bottleneck.
Have you considered that perhaps the process of converting to Modular Templates wasn't very successful or sufficiently tested from a performance point of view?
Anyway, there is no quick solution. If CME responsiveness is more important to you than publishing throughput, reduce publisher throughput by limiting the number of render threads. If publishing performance matters enough, add a dedicated machine to handle publishing.
For Publish Queue performance, use the Purge tool on the server to delete successful publish transactions. This is important when bulk publishing in both the older and newer versions of Tridion.
We have 4 servers load balanced:
4 cores # 2.6Ghz (E5-2650 v2)
14GB RAM
Windows 2012 R2
High Performance power setting
IIS 8.5
ASP 5.3
EF 6.1
They each have a single application pool with one worker process and a single website. Each server has its own copy of the site (DLLs and views), running on a local disk. We are using IIS virtual directories to point to shares on a clustered file server for log files, common images, etc. (content only). The application pools are set to never shut down when idle (idle timeout of 0), and we have also disabled the default 1740-minute periodic recycle.
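For reference, those two application-pool settings map to appcmd roughly as follows (a sketch only; "MyAppPool" is a placeholder for the real pool name):

```shell
rem Run from an elevated prompt; "MyAppPool" is a placeholder.
rem Never shut the worker process down when idle:
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
rem Disable the periodic (default 1740-minute) recycle:
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00
```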
We have New Relic's .NET agent installed on all servers, and looking through our slow transaction log, I can see that many requests are taking 15 seconds or so to complete. Looking into the trace, I can see a common call to System.Web.Compilation.AssemblyBuilder.Compile() and System.Web.Compilation.BuildManager.CompileWebFile().
As far as I know or understand, ASP.NET compiles these views upon first request, caches the result (in the Temporary ASP.NET Files folder under C:\Windows\Microsoft.NET), and then loads it from there for subsequent requests.
I'm confused how this is happening so often - when I visit these URLs, the TTFB is about 400ms, and due to constant load I can't see the websites "losing" their cache and needing to compile the views again. These pages are frequently hit - it's an e-commerce store and I can see that it happens often, and on our most popular pages: catalogue (category/brand/gender etc) listings and product details.
I've configured each application pool to log an event when it recycles, and no such events have been logged when I check the WAS entries in the event viewer. We also have New Relic's server monitoring installed, and looking over the past 6 hours' worth of data, I can't see any dip in RAM usage on any of the servers, which would indicate an application pool recycle. This has really baffled me!
I'm thinking of moving towards pre-compiling our views as part of our release process - it makes sense really. But it feels like that is working around, or masking an issue which as far as I can see should not be happening. We build our site in Release mode and have <compilation debug="false" /> on all web.config files.
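A hedged sketch of what that precompilation step might look like with aspnet_compiler (the paths are placeholders; -v sets the virtual path, -p the source directory, and the final argument is the precompiled output directory):

```shell
rem Placeholders: adjust paths to your build and deploy locations.
rem -fixednames produces stable assembly names, useful for incremental deployments.
aspnet_compiler -v / -p C:\build\MySite -fixednames C:\deploy\MySite
```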
Can anyone think of any causes for this?
It is because of how JIT (Just-In-Time) compilation works.
When you build your application, it is converted into .NET Microsoft Intermediate Language (MSIL) or Intermediate Language (IL).
As your application is accessed, the Common Language Runtime (CLR) converts only the executed IL parts of your code into native instructions.
Just-In-Time compilation process converts IL to native machine instructions and it is a part of CLR.
In simplified terms: when you run a .NET application and your program calls a method for the first time, the JIT compiler reads the IL from metadata, compiles it into native instructions, and runs it. The next time your program calls the same method, the CLR executes the cached native CPU instructions directly. This process adds some overhead to the first call of each method. You can go with the other option of pre-compiling your application using NGEN, which is usually not recommended because you lose some optimizations that only the JIT can perform thanks to its awareness of the underlying hardware platform. These two articles have more details:
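The compile-on-first-call-then-cache behavior can be sketched in Python (purely illustrative; the "IL" here is just a string expression and the "native code" a compiled Python callable, not the actual CLR mechanism):

```python
# Minimal sketch of JIT-style "compile on first call, then cache" behavior.
# The method bodies below stand in for IL; compile()/eval stand in for the
# JIT compiler producing native code. Names are made up for illustration.

il_methods = {
    "square": "lambda x: x * x",
    "double": "lambda x: x + x",
}

native_cache = {}  # method name -> "compiled" callable

def call(name, arg):
    func = native_cache.get(name)
    if func is None:
        # First call: "JIT-compile" the IL into executable code and cache it.
        func = eval(compile(il_methods[name], "<il>", "eval"))
        native_cache[name] = func
    # Subsequent calls hit the cache and skip compilation entirely.
    return func(arg)

print(call("square", 4))  # first call: compiles, then runs -> 16
print(call("square", 5))  # second call: served from the cache -> 25
```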
http://geekswithblogs.net/ilich/archive/2013/07/09/.net-compilation-part-1.-just-in-time-compiler.aspx and https://msdn.microsoft.com/en-us/library/ms366723.aspx
There are also other things you can try that might help speed up your application. You can use the IIS Application Warm-Up module (see "How to warm up an ASP.NET MVC application on IIS 7.5?"), implement distributed caching, etc., to alleviate some of your application's bottlenecks.
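The warm-up idea can also be done with a simple post-deployment script that hits the hottest URLs once, so the first real visitor doesn't pay the compilation cost. A minimal Python sketch (the paths and base URL are placeholders; the fetch callable is injectable so the logic can be exercised without a live site):

```python
# Hypothetical warm-up script: request each hot page once after deployment.
from urllib.request import urlopen

HOT_PATHS = ["/", "/catalogue", "/product/example"]  # placeholder paths

def warm_up(base_url, fetch=lambda url: urlopen(url).status):
    """Hit each hot path once; a failed request is recorded, not fatal."""
    results = {}
    for path in HOT_PATHS:
        try:
            results[path] = fetch(base_url + path)
        except Exception as exc:
            results[path] = str(exc)
    return results
```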
I'm considering migrating to SQL CE 4.0 for my website projects. All of my sites use Umbraco, but they aren't seriously busy websites (up to 15,000 visits a month).
My main concern is performance, so I was wondering if anyone had any experience or knowledge on what sort of performance limitations I can expect.
Also, if running in a managed host environment do I need to be concerned about application pool memory limits?
Thanks
The performance of the public website should be no problem, because it serves everything from cache.
But I'm having trouble with the CMS performance, on my local environment as well as on my VPS and on a shared hosting environment. There are long waits before items are created and edited. That's a big issue for my customer right now.
I do run the latest version of Umbraco (4.7.2.1) which should include a lot of performance improvements for SQL CE.
It might get even worse as the database size grows (mine is about 92 MB).
I need to point my customer to an official record of the IIS 7.5 / ASP.NET 4.0 bug or limitation (the 2 GB upload limit), since I constantly get complaints about it from customers.
Is there any bug-tracking database or description of limitations for IIS 7.5 and ASP.NET 4.0 that is officially supported by Microsoft and publicly available?
Microsoft has one official website where customers can report bugs and suggestions: Microsoft Connect. You can join the site with a Windows Live ID and report a bug. However, I should warn that not all reports are escalated; oftentimes reports are closed as "By Design", which may not be of much help to you or your customers. But at least the response is official from Microsoft, so it may save you from having to bear the burden of proof yourself.
Unfortunately IIS is not listed in the directory of products and programs. In that case your best bet is iis.net as the official source for documentation. I did a search on your particular problem, and it turns out it's not so much a bug as a documented limitation. This particular limitation is documented on MSDN:
Memory Limits for Windows Releases
Edit:
I did find a report on Microsoft Connect about this particular issue, it's in the Visual Studio and .NET Framework site:
Maximum upload size of a file limited to 2GB
Per your comments, we have decided to limit the maxRequestLength value
of the httpRuntime element to under 2Gb. Larger uploads will need to
be split into multiple payloads in order to be transmitted to the
server. Thanks for reporting this issue, Web Platform and Tools team.
It is important to note that although this report is marked "Closed as Fixed", that is because the report is about validating the configuration setting so it can never be set larger than 2 GB, not because they enabled uploads larger than 2 GB.
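For reference, the limit is enforced in two places with different units, and both values must stay under 2 GB. An illustrative web.config fragment (values here are ~1 GB, chosen only as an example):

```xml
<configuration>
  <system.web>
    <!-- maxRequestLength is in KILOBYTES: 1048576 KB = 1 GB -->
    <httpRuntime maxRequestLength="1048576" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in BYTES: 1073741824 B = 1 GB -->
        <requestLimits maxAllowedContentLength="1073741824" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```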
Working with one of our partners, we have now developed two separate sets of web services for their use. The first one was a simple "post to an https URL" style web service, which we facilitated by building a web page in ASP.NET that inspected the arguments in the URL and then acted accordingly. This "web service" (if you can call it that) has been very stable.
At some point, the partner asked us to begin using SOAP based web services. At their request, we built them a new set of web services largely based on the previous objects, reimplemented as an actual "Web Service". This web service has not been very stable: around once a week, Nagios will alert us that our web service is not responding - and a quick iisreset does the trick.
Analyzing the log output and working in a debugger has not led us to anything concrete. The volume on this new web service is actually much lower than the HTTP web service. I think this could be a code problem or a platform problem, or of course something in between.
We've tried, with little improvement:
To duplicate the behavior in the lab
Debugging in the Visual Studio debugger
Tinkering with IIS options to give it its own application pool
My question, what are the next steps for troubleshooting?
Environment:
Windows Server 2003 Standard Edition R2 Service Pack 2 32 bit, Visual Studio 2005, MS SQL 2005, .NET Framework 2.0.50727
You may get some answers by profiling your web services and understanding how they are using their resources. Perfmon and Procmon are both very useful tools in this regard.
EDIT: Since you say errors happen after about a week, the only thing I can think of is resource usage. Ensure your DB connections are being cleaned up, and that any opened files (or handles from system calls to external executables) are being closed.
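The cleanup discipline described above can be sketched in Python with sqlite3 (illustrative only; in .NET the moral equivalent is a `using` block around the connection). The point is deterministic release of the handle, rather than waiting for garbage collection:

```python
# Always release DB connections deterministically, even if a query throws.
import sqlite3
from contextlib import closing

def count_rows(db_path):
    # closing() guarantees conn.close() runs on exit from the with-block.
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER)")
            cur.execute("INSERT INTO t VALUES (1), (2)")
            cur.execute("SELECT COUNT(*) FROM t")
            return cur.fetchone()[0]

print(count_rows(":memory:"))  # fresh in-memory DB each call -> 2
```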
Also, if your web services can tolerate it, IIS has a setting that triggers a periodic recycle of an app pool, which handles cases where performance degrades over time. It's dirty, but it may work well in your case.
Since there isn't much to go on - here's another odd issue we came up against regarding our web services.
When the web service stops responding, how is memory utilization? We have experienced issues with memory and memory fragmentation relating to busy web services on a system (there were also other things running that caused additional fragmentation). When we refactored the web services to load from smaller DLLs that depend on other libraries (instead of one large library), we were able to resolve the memory fragmentation.
To identify what was occurring we would take a dump from the offending iis worker process where the app pool resided and then reviewed that using WinDbg.
http://www.microsoft.com/whdc/devtools/debugging/default.mspx
Additionally we used DebugDiag to take the postmortem dumps.
http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1286
Hope this provides another direction to look at.
After deploying a new build (mostly changed DLL's) of an ASP.NET web app the CPU on the server is now jumping to 100% every few seconds and the culprit is lsass.exe. Do you think that the deployment of the asp.net web app to the server and this problem are related? (or a coincidence that it happened at the same time?)
More info:
This is the first time that I've done the build on a Server 2008 x64 machine. Previously the builds were done on a Server 2003 x86 machine. The target is "Any CPU", so it should work on either. The deployment target is Server 2003 x86.
I've searched the web for more info on this and have confirmed that the process is lsass.exe (the first character a lowercase L, not an uppercase i), so I've ruled out the similarly named virus. I found some docs relating to a Server 2000 bug, but they don't apply here.
I eventually isolated the problem to an ASP forum running "under" that ASP.NET web app. Using the admin page on the forum I took the forum down and then brought it back up again and the problem disappeared. I find this very frustrating because the problem has now gone but I don't know what caused it and as such it could easily return.
I also installed this Microsoft Hotfix and rebooted this server but that didn't work.
Have you checked the System and Application event logs for anything unusual?
Have you updated to use Active Directory role provider? I've seen issues where enumerating groups to do role checking pegs the CPU and really slows down the app. I actually implemented a customized provider that allowed me to specify a particular OU and set of groups that I actually care about to get around this issue.
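The allow-list idea behind that customized provider can be sketched as follows (Python, purely illustrative; the group names are made up, and a real provider would query AD for the user's groups first):

```python
# Check roles only against a small allow-list instead of enumerating
# every AD group the user belongs to, which is what pegs the CPU.
CARE_ABOUT = {"WebAdmins", "Editors"}  # hypothetical groups we actually use

def effective_roles(all_user_groups):
    """Return only the user's groups that the application cares about."""
    # One set intersection; everything outside the allow-list is ignored.
    return sorted(CARE_ABOUT.intersection(all_user_groups))

print(effective_roles(["Domain Users", "Editors", "VPN", "WebAdmins"]))
# -> ['Editors', 'WebAdmins']
```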
The xperf tools distributed in the Windows Performance Toolkit will tell you exactly what is using CPU time or disk bandwidth. These tools are free and work on any retail build of WS2008 or Vista. Here is a series of my posts on the xperf tools.