My local Drupal site (under development) has become very slow, how do I solve it? - drupal

I am developing a site locally with Drupal and it suddenly became very slow. The last thing I did was install the Internationalization module.
Now when I try to reach the administration panel I receive:
Fatal error: Maximum execution time of 60 seconds exceeded...
What should I do now? Should I increase the maximum execution time allowed, or could it be that I have too many modules installed?
EDIT: I forgot to mention that I am working on a PC with 2 GB RAM and a 2.9 GHz CPU, running Windows XP + XAMPP.

Exceeding 60 seconds of execution time is quite something; it indicates that something is going badly wrong.
I'd start troubleshooting by disabling modules (physically moving them out of your modules directory) one at a time until the problem goes away. Then, add them back one at a time, until the problem returns (you'll need to re-enable them through the Modules page as you go). You should be able to quickly isolate exactly which module is causing the problem.
Since the last thing you did was to install internationalization, I'd start by disabling that module.
Once you've isolated the module, you can try to work out what's going wrong.
Some things to look into:
Is your database running out of space?
Are you missing any indexes?
Do you need to "update statistics" (rebuild metrics on table contents and column distributions)?

The Devel module can be useful for logging performance statistics, to help you track down the bottleneck.

A PHP accelerator may help you get the time down a bit. There are also a number of caching options your site can use (look in admin under Performance); these may make development more difficult but can make pages load faster.
I wouldn't increase your maximum execution time; at some stage you will want to put your site live, and if people don't get a page within a second or so they will think the site is down.
To have too many modules installed you would need a lot of modules; it is more likely that one particular module is causing a performance bottleneck, or that something on your site, like a view, is slowing things down. mattv's answer helps with that.

Also try activating the cache system under Site settings / Performance. It could be helpful.

There is, apparently, a known and documented problem with massive queries being dynamically built by the Views module when rebuilding the dynamic menu.
Unfortunately, no simple and definitive answer has been found yet.
You can find more information here (please be aware that some answers relate to version 5).
I would really like to know how to fix this in a definitive and efficient manner.

Use Zend Server. For detailed information check this out: http://drupal.org/node/348202#comment-3349704

Related

Do you have any suggestions on how to speed up precompiling a Kentico site? AKA: the aspnet_compiler is VERY slow

I have a Kentico 6 site that we have squeezed better performance out of by precompiling. We have a Team City Continuous Integration (CI) server that runs the build automatically on check in to the subversion repository.
However, the build takes 40 min! About 33 min of that is the aspnet_compiler step! This is a pretty long time for a build that SHOULD take less than 10 min. I've gone through some of the various perf improvements, such as moving the asp.temp files to a fast SSD. During the aspnet_compiler step, the server uses very little CPU, disk and RAM; CPU use seems to average about 1%! In researching aspnet_compiler, I have found many pages complaining about the speed, but a dramatic silence from Microsoft about it.
https://aspnet.uservoice.com/forums/41199-general-asp-net/suggestions/4417181-speed-up-the-aspnet-compiler http://programminglife.wordpress.com/2009/04/16/aspnet_compiler-compilation-speed-part-1/
I've come to the (perhaps erroneous) conclusion that if I can't speed up the aspnet_compiler, then perhaps I can reduce its workload. Since Kentico has lots of controls in the project, are there any savings I can glean by removing extraneous controls? I.e., if I run the aspnet_compiler for a project with a single page, it's quick.
I've also thought about maybe making the cmsdesk a separate application that only needs to be compiled after a new hotfix has been applied.
To recap, my two concerns are: #1 - can I speed up the aspnet_compiler somehow? #2 - If I can't, then I'm guessing I can reduce its workload.
In relation to #1, maybe I can do incremental compilation, so that I only precompile files that have changed since the last build? I haven't found much info about doing this; there are a few unanswered questions on StackOverflow about this very topic, e.g. "aspnet_compiler incremental precompile" and "Incremental Build aspnet_compiler".
FYI - for those of you unfamiliar with Kentico CMS, it's a Web Site Project with LOTS of controls - maybe hundreds of them.
Any ideas?
PS - I have a reply on the Kentico forums: http://devnet.kentico.com/questions/do-you-have-any-suggestions-on-how-to-speed-up-precompiling-a-kentico-site-aka-the-aspnet_compiler-is-really-slow
One thing our team did try in order to improve the compilation time was to remove the modules and sets of controls that were not necessary to the project (forum, e-commerce, etc.).
Also, which approach do you use for development, portal or ASPX? We've noticed that compilation proves to be faster with the portal approach than with ASPX.
Everything I've read says that precompiling with aspnet_compiler is super difficult to achieve with Kentico.
However, I managed to find this post on Kentico's forums where someone appears to have figured out a way to make it work for them (search for EHUGGINS-PINNACLEOFINDIANA on the page and you'll find it).
I have heard that using MSDeploy or MSBuild are a much more efficient way to precompile than directly calling aspnet_compiler (although I'm pretty sure both of those either still call aspnet_compiler or do the same thing it does, only faster).
I've personally never tried using any of these methods (and I'm pretty new to ASP.NET myself), but I figured I could at least give you some leads:
http://odetocode.com/blogs/scott/archive/2006/10/18/what-can-aspnet_compiler-exe-do-for-me.aspx
http://therightstuff.de/2010/02/06/How-We-Practice-Continuous-Integration-And-Deployment-With-MSDeploy.aspx
http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity_26.html

How do you profile a production ASP.NET application?

We have some performance problems with one of our applications. I thought about using something like dotTrace to find out where the problems are, but dotTrace would probably degrade performance even more.
What's the best way to profile an application that's in a production environment without affecting performance too much?
The general answer is "don't do it".
Other than that, you can gain a lot by using performance counters. If the built-in counters don't help, you can create your own.
Among other things, the performance counters may give you an idea of how to reproduce the performance problems through load testing.
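For illustration, a bare-bones custom counter might look something like the sketch below (not from the original answer; the category and counter names are made up, and the standard System.Diagnostics API is used directly):

    using System.Diagnostics;

    class OrderMetrics
    {
        const string Category = "MyApp";                // hypothetical category name
        const string CounterName = "Orders Processed";  // hypothetical counter name

        public static void EnsureCategoryExists()
        {
            // Creating a category needs admin rights, so do it at install time, not per request.
            if (!PerformanceCounterCategory.Exists(Category))
            {
                var counters = new CounterCreationDataCollection
                {
                    new CounterCreationData(CounterName,
                        "Total orders processed since the app pool started.",
                        PerformanceCounterType.NumberOfItems64)
                };
                PerformanceCounterCategory.Create(Category, "Counters for MyApp.",
                    PerformanceCounterCategoryType.SingleInstance, counters);
            }
        }

        public static void RecordOrderProcessed()
        {
            // Writable instance; watch the value live in perfmon alongside the built-in ASP.NET counters.
            using (var counter = new PerformanceCounter(Category, CounterName, readOnly: false))
            {
                counter.Increment();
            }
        }
    }

Watching a counter like this in perfmon under real load is often enough to tell you which part of the app to dig into, without touching the production code path again.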
The next idea is to narrow down the area you're interested in. There's no sense impacting performance for the entire application if it turns out to be your web service access that's slow.
Next, be sure to have instrumented your application, preferably by using configuration. The Enterprise Library Logging Application Block is great for that, as it allows you to add the logging to your application, but have it configured off. Then, you can configure what kind of information to log, and where to log it to.
This gives you choices about how expensive the logging should be, from logging to the event log to logging to an XML file. And you can decide all of this at runtime.
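As a rough sketch of that approach (assuming the Enterprise Library Logging Application Block is referenced; the "Timing" category name here is made up), the call site stays the same while the filters and listeners live entirely in config:

    using System.Diagnostics;
    using Microsoft.Practices.EnterpriseLibrary.Logging;

    public static class Instrumentation
    {
        public static void TraceTiming(string operation, long elapsedMs)
        {
            // What happens to this entry (event log, XML file, or nothing at all) is decided
            // by the category's filters and trace listeners in web.config, so the cost can be
            // dialed up or down at runtime without redeploying.
            var entry = new LogEntry
            {
                Message = operation + " took " + elapsedMs + " ms",
                Severity = TraceEventType.Information,
                Priority = 2
            };
            entry.Categories.Add("Timing"); // hypothetical category name
            Logger.Write(entry);
        }
    }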
Finally, you're not going to be able to use dotTrace or anything else that requires restarting IIS and adding code to your running application. Not in production. The ideas above are there precisely so you don't need to.
Profiling memory or CPU?
Memory: the best way would be to create a memory dump of the w3wp process (launch Task Manager, right-click the process, then "Create dump"), copy the dump to your local machine, analyse it with WinDbg, and look at which classes consume the most memory. There are lots of questions/answers here on Stack Overflow on how to do that (how to use WinDbg and analyse the .NET heap).
CPU: we use a short command-line profiler by Sam Saffron (woohoo, one of the creators of Stack Overflow!). His project is abandoned, but we forked it and maintain it here: https://github.com/jitbit/cpu-analyzer. Everyone's welcome to contribute. It attaches to your threads using Microsoft's DbgManager and finds the call stacks that take the longest time to execute.
Did you load-test the application on a number of sessions that's anywhere near the actual load of the production environment?
The first thing that comes to mind is that your app is not scaling well under load, or that your db is not scaling well with an increase in size (which would cause problems even with a very limited number of concurrent sessions), but it could be anything, really.
My suggestion is to replicate the production environment, run proper load testing, and then look at the data; it will give you some clues.
You don't want to play games with your production environment, but if you don't have it already you could use logging to keep track of the sequence and duration of key events, and take it from there.
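A minimal sketch of that kind of lightweight timing (the helper name is made up; any logging sink you already have would do instead of Trace):

    using System;
    using System.Diagnostics;

    // Wrap a key event in a using block so its duration is logged when the scope ends.
    public sealed class TimedScope : IDisposable
    {
        private readonly string _name;
        private readonly Stopwatch _watch = Stopwatch.StartNew();

        public TimedScope(string name) { _name = name; }

        public void Dispose()
        {
            _watch.Stop();
            // Trace listeners are configured in web.config, so this stays cheap (or silent)
            // in production unless you switch a listener on.
            Trace.WriteLine(string.Format("{0}: {1} ms", _name, _watch.ElapsedMilliseconds));
        }
    }

    // Usage at a suspected hot spot:
    // using (new TimedScope("LoadCustomerOrders"))
    // {
    //     // ... the code being measured ...
    // }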
You could use ANTS Profiler:
http://www.red-gate.com/products/ants_performance_profiler/index2.htm
They claim that "the overhead was hardly noticeable".
There is a 14-day free trial, so you could give it a try.
Edit: I agree with John's comment; it will be disruptive and require some downtime to get it started and stopped. It's best to use it on a test environment to identify the bottlenecks.
You can use ANTS Profiler as well as the system's performance counters. They will help you determine what the problem is.
Here are some details about performance counters:
http://msdn.microsoft.com/en-us/library/fxk122b4.aspx
http://msdn.microsoft.com/en-us/library/ms979204.aspx
http://www.codeproject.com/KB/dotnet/perfcounter.aspx
I would recommend taking several memory dumps of the process in production, looking at all the stack traces, and seeing if you find a pattern.

How can we improve our deployment and build systems?

We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When they finish a feature, the developers individually upload their changes to Staging. If the site is stable (determined by really loose testing), we upload changes to Dev, then UserAcceptance and then live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If it would be good for you, you could become the Continuous Integration champion of your company. You could do some research on a good process for CI with TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
I.e. If you are just a pleb developer and your managers are the senior developers then find another job. If you are a Senior developer and your managers are the CIO types, i.e. actually running the business... then it is your job to change it.
Tell them that if you were using a key feature of very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out when. That would mean in the event of a subtle bug getting introduced that gets passed user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
One of the most important reasons for using TAGS is so you can roll back to a specific point in time. Think of it as an image backup. If something bad gets deployed you can safely assume you can "roll" back to a previous working version.
Also, developers can quickly grab a TAG (dev, prod or whatever) and deploy to their development PC...a feature I use all the time to debug production problems.
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
You also need to tell management that you believe the level of testing done is not sufficient. This is not a unique problem for an organisation and they'll probably say they already know. No harm in mentioning it though rather than waiting for a major problem to arrive.
As for individuals doing builds versus an automated build process, whether you really need that depends on how many developers there are and how often you do builds.
What is the problem? As you said, you can't tell if management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The problem has to be of the nature of "our current process has failed 3 out of 10 times, and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of: reduced costs, increased profits, reduced time, reduced use of resources. "Because it's widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Have you encountered times when someone made a change that went to production but never got into source control, and it was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It seems like you guys are moving changes between environments rather than "builds". The key distinction is that if two changes work great in UAT because of how they interact, if only one change is promoted to production it could break there. Promoting consistent code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would have from using tags, without actually having to go around tagging things. The system just records the time stamp, and every push of the code through the testing environments is tied to a known snapshot of code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done I added the ability to auto-deploy to dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and have it set to auto deploy the good dev build to stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in Visual Studio. EVERYTHING goes through the CI build and deployments. When moving to prod, our production group pulls the appropriate copy off of the build server. I even trained our QA group on how to do web testing and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.

Slow solution loading in Visual Studio 2008

I am working on an ASP.NET 3.5 project which has 55 projects in a solution. When opening the solution in Visual Studio 2008, it takes over a minute to open - about 1 second for each project. However, if I disconnect the network cable before opening the solution, it only takes about 15 seconds! Any ideas about what could be causing the slowdown?
I had this happen to me back in the days when we were using Visual Source Safe.
Could be your source control plugin asking for updates if you have the solution under source control.
You should do some investigation, fire up Wireshark, start a capture on the interface in question and see what traffic is flowing over the wire.
Can I answer a question with a question? What is the secret to getting VS to not just die with that many projects, let alone load in a phenomenally quick 60 seconds?
At about 10-12 projects the compile time in Visual Studio becomes unbearable; at about 5-8 projects ReSharper will crash. The IDE is such a memory pig that even opening more projects by using multiple instances of VS usually isn't an option.
Anyhow, it's all about memory usage, and the odd-one-out project is probably doing it, e.g. the one with the most files.
I had the same problem this week (5 years later!!). It was caused by a huge .suo file (almost 400 MB); deleting it fixed the problem.
A few years ago I remember a colleague having some similar problem (with a lot smaller solution, and in VS2003). Can't remember the details, but I think it was related to the local ASPNET user account (or rather, that it did not exist). Not sure though...
As a side note: I usually find it more efficient to have around a handful of projects in each solution (usually one solution produces one or two assemblies used in production code), and then have a few Visual Studio instances running at the same time. 50+ projects in the same solution feels like asking for problems.
Might be that you have other dependencies though, just wanted to share my thoughts.
which has 55 projects in a solution
WOW. I can't imagine what type of solution needs that many projects. The answer is probably that your source control provider needs to refresh the status of each of the items, all of which takes time.
For edit-merge-commit style version control systems, such as Subversion, this operation doesn't take place. Try temporarily removing source control from the entire solution to see if this is the culprit.
If your solution is attached to source control, then it is trying to load up the symbols and verify which items you have checked out. So, if you have a slow connection, it is oftentimes faster to take the solution offline.
http://www.tmgirvin.com/2009/03/working-offline-with-visual-studio-2008-and-tfs.html
EDIT
Another solution I've seen used: create a
<ProjectName>_webTier.sln
<ProjectName>_database.sln
<ProjectName>_build.sln
(where <ProjectName> is your project name)
Each of those solutions is a self-sufficient part of the entire project; that way, if you are working on the web tier and don't need the database project or the mobile project parts to load, you can just open the web tier solution.
The build solution contains the entire package that needs to be built, and takes a very long time to load.
I had this problem on a development machine with no internet connection and it turned out that the problem was related to a setting in IE's internet options:
Control Panel -> Internet Options -> Advanced -> Security -> Check for publisher's certificate revocation
After making sure this was unchecked my solutions started loading quickly again.

What are you using for Distributed Caching in web farms running ASP.NET?

I am curious as to what others are using in this situation. I know a couple of the options that are out there like a memcached port or ScaleOutSoftware. The memcached ports don't seem to be actively worked on (correct me if I'm wrong). ScaleOutSoftware is too expensive for me (I don't doubt it is worth it). This is not to say that I don't want to hear about people using memcached or ScaleOutSoftware. I'm just stating what I "know" at this point.
So my question is basically this: for those of you ACTIVELY using distributed caching, what are you using, are you happy with it, and what should I look out for?
I am moving to two servers very soon...both will be at the same location. I use caching fairly heavily (but carefully) to reduce the load on my database server.
Edit: I downloaded Scaleout Software's solution. I've coded for it and it seems to work really well. I just have to decide if my wallet will part with the cash for it. :) Anyone have experiences, good or bad, with ScaleoutSoftware?
Edit Again: It's been a little while since I asked this. Any more thoughts on it? We ended up buying the solution from ScaleOutSoftware and have been happy with it, but I'm curious what others are doing.
Microsoft has an upcoming product code-named Velocity. It's still in CTP and is moving slowly, but it looks like it will be pretty good. We'll be beating it up in the near future to see how it handles what we want it to do (> 2 million reads/writes per hour). Will post back with results.
There is a 100% native .NET, well-documented, open source (LGPL) project called Shared Cache. It looks like it is not yet mentioned on SO, but it's promising and should be able to do what most people expect from a distributed cache. It even supports different strategies like distributed or replicated caching, etc.
I will update this post with more details as soon as I have had a chance to try it on a real project.
We're currently using an incredibly simple cache that I wrote in a couple of hours, based on re-hosting the ASP.NET cache in a Windows Service (more info and source code here). I won't pretend it's anywhere near as optimised as something like Memcached but we were just looking for something simple and free until Velocity came along, and it's held up extremely well even under fairly heavy load.
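To illustrate the idea (this is not the linked project's actual code, and the WCF contract here is made up), the trick is simply that the ASP.NET cache object works in any .NET process, so a Windows Service can host it and expose it to the web farm:

    using System;
    using System.ServiceModel;
    using System.Web;          // HttpRuntime.Cache is usable outside IIS if you reference System.Web
    using System.Web.Caching;

    [ServiceContract]
    public interface ICacheService
    {
        [OperationContract]
        void Put(string key, string value, int minutesToLive);

        [OperationContract]
        string Get(string key);
    }

    public class CacheService : ICacheService
    {
        // Re-hosting the ASP.NET cache: the same Cache class the web app uses,
        // but living in a single Windows Service that all the web servers talk to.
        private static readonly Cache Store = HttpRuntime.Cache;

        public void Put(string key, string value, int minutesToLive)
        {
            Store.Insert(key, value, null,
                DateTime.UtcNow.AddMinutes(minutesToLive), Cache.NoSlidingExpiration);
        }

        public string Get(string key)
        {
            return (string)Store.Get(key);
        }
    }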
It comes down to our personal preference for core components - i.e. ones that affect whether the site is available or not - which is that they are either (a) supported by a vendor with a history of rapid and high-quality support, or (b) written by us so that if something goes wrong we can fix it quickly. Open source is all well and good, and indeed we do use some OSS, but if your site is offline then unfortunately newsgroups et al don't have a 1-hour SLA, and just because it's OSS doesn't mean you have the necessary understanding or ability to fix it yourself.
We are using the memcached port for Windows and we are very pleased with it. The enyim.com memcached client API is great and easy to work with. It's also open source, which is a big advantage, if you ask me.
We are now using this setup in a production web-app and it has helped a lot in improving its performance.
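For reference, basic usage of the enyim client looks roughly like this (a sketch, not our production code; the server list is read from the enyim.com/memcached section of web.config, and the key scheme is made up):

    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    public static class ProductCache
    {
        // MemcachedClient is relatively expensive to create and is thread-safe,
        // so keep a single shared instance for the application.
        private static readonly MemcachedClient Client = new MemcachedClient();

        public static string GetDescription(int productId)
        {
            string key = "product-desc-" + productId;   // hypothetical key scheme

            var cached = Client.Get<string>(key);
            if (cached != null)
                return cached;

            string fresh = LoadDescriptionFromDatabase(productId);
            Client.Store(StoreMode.Set, key, fresh);    // values are serialized here, which costs some client CPU
            return fresh;
        }

        private static string LoadDescriptionFromDatabase(int productId)
        {
            // Stand-in for the real data access.
            return "description for product " + productId;
        }
    }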
There's a great .NET wrapper/port found here on Codeplex. Awesomesauce!
We use memcached with the enyim library in a production environment (www.funda.nl). It works fine and we're very pleased with it, but we did notice a substantial rise in CPU use on the clients, presumably due to the serializing/deserializing going on. We do around 1000 reads per second.
One product tried and tested by hundreds of customers worldwide is NCache. It's a feature-rich product that lets you store session state in a redundant and highly available manner, lets you share data within the enterprise as well as bridge for WAN communication, essentially acting as a data fabric, and lastly lets you build an elastic caching tier so that when your application scales, you can add servers to the cache and actually boost performance further.
