I have a site that runs on DotNetNuke with a lot of customization. In production, this site runs fine and speed is relatively optimal. In development, it's PAINFULLY slow (anywhere from 10 to 30 seconds per action). Can anyone recommend any tools/ideas on how to diagnose this issue? The environments are very similar (the dev database is not as powerful as the production one, but not enough to warrant this kind of delay). I'm looking for something that can help determine all the points of contact for the requests, etc.
Any thoughts?
Try out the following tools:
YSlow: YSlow analyzes web pages and explains why they're slow, based on Yahoo!'s rules for high-performance web sites.
PageSpeed: The PageSpeed family of tools is designed to help you optimize the performance of your website. PageSpeed Insights products will help you identify performance best practices that can be applied to your site, and PageSpeed optimization tools can help you automate the process.
Firebug and Network Monitoring: Look at detailed measurements of your site's network activity.
Fiddler: a web debugging proxy that logs all HTTP(S) traffic between your browser and the server.
YSlow, PageSpeed, and Firebug are great tools you should definitely use, but the fact that you're only seeing the issue in the development environment implies it's not the site that's the problem but something in the development environment. Generally, I find most slowness in these cases is related to disk and/or RAM issues. Use Task Manager to verify the machine has enough RAM for its current load, and make sure there's sufficient free disk space for proper caching to occur. You may need a faster hard drive.
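If you want numbers you can log over time rather than watching Task Manager, the built-in typeperf utility can sample the relevant counters from the command line (a minimal sketch; counter names assume an English-language Windows install):

    typeperf "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 5 -sc 60

That samples available memory and the disk queue length every 5 seconds for 5 minutes; a consistently high disk queue length points at the disk rather than the site.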
Run the site locally in release mode and see if that changes anything.
If you can, run the live site in debug mode and see if it slows down as much as it does in the local environment.
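For an ASP.NET/DotNetNuke site the debug switch lives in web.config; make sure it is off when comparing against production (a minimal sketch):

    <configuration>
      <system.web>
        <!-- debug="true" slows compilation, disables some caching, and bloats memory use -->
        <compilation debug="false" />
      </system.web>
    </configuration>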
I'm using Google's PageSpeed Insights tool, and on production it works fine.
But is it possible to use it on localhost? Or is there an equivalent tool for testing local pages?
I know the Lighthouse tab is also an option, but the metrics are somehow different! I need the same API used in that service!
You can use the Lighthouse Command Line Interface (CLI) (or run it from Node.js if you are familiar with that); this is the engine that powers PageSpeed Insights.
That way you can configure CPU slowdown and network latency to closely match what you experience with PageSpeed Insights.
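For example, a minimal sketch of a local run (the URL and flag values here are illustrative; see lighthouse --help for the full list):

    npm install -g lighthouse
    lighthouse http://localhost:8080 --preset=desktop --throttling-method=simulate --output=html --output-path=./report.html --view

Simulated throttling is what PageSpeed Insights uses, which keeps the numbers comparable.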
With regards to Lighthouse in the browser, the metrics should be the same (in terms of what is measured).
If you are getting vastly different performance numbers, there may be several causes, such as:
plugins (so run in incognito mode)
latency difference (if your website is in India or Australia for example then the latency will be high using Google's servers in America so you will get better scores from localhost)
settings (not running "simulated" throttling).
You can find a bit more info on the Lighthouse CLI advantages in this answer I gave.
I have shared web hosting and sometimes I go over the maximum allowed CPU usage once a day, sometimes two or three times, but I can't really narrow it down to anything specific.
I have the following scripts installed:
WordPress, Joomla, ownCloud, DokuWiki, and Feng Office
Before, I was just running Joomla on this hosting package and everything was fine, but I upgraded to have more domains available and also hosted other scripts, like WordPress, ownCloud, and so on.
But no site has high traffic or hits; most of the stuff is only used by me anyway.
I talked to the HostGator support team and they told me there is an SSH command to monitor or watch the server and see what's causing the problem.
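(The commands they most likely meant are the standard Linux process monitors; a minimal sketch, and exactly what is available varies by host:)

    top                              # live per-process CPU and memory usage
    ps aux --sort=-%cpu | head -15   # snapshot of the biggest CPU consumers right now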
The high CPU load happens only in very short peaks; every time I check the percentage of CPU usage in cPanel it's super low. The graph shows me the spike, but it looks worse than it really is because the graph only gets updated every hour, and that makes it hard to narrow down...
I am new to all this. Can somebody help me figure this out?
BTW:
I hope this question is fine here now; I kinda don't really understand this platform yet...
Just so you have more information, I too host many websites with HostGator using a reseller/shared account. The performance of your site is most likely not the issue; it is related more to HostGator's new servers and their poor MySQL performance. None of my WordPress sites had issues for years, despite high traffic, plugins, etc. Fast forward to late 2013, after EIG purchased HostGator (and others like BlueHost), and the performance on the "new more powerful" servers is anything but. Limits on CPU and processes are more aggressive, and while outright downtime isn't an issue, the performance during peak hours is exceedingly poor. Sites which rely on MySQL databases all suffer from poor performance, and no amount of caching or plugin optimization will help (I should know, as I spent months reviewing my sites and trying many optimizations).
My advice: Find another web host and/or upgrade your hosting to a VPS that can scale based on your needs.
I moved my higher-traffic/important clients to WPEngine. The difference in speed and quality of support is massive.
Management has decided to go for Windows 2008 64 bit with IIS7 to service our main website.
They want to have it staged on a Windows 2003 server with IIS6. [Edit] Yes, 32-bit is what they are planning for staging. [End Edit]
I want to know what issues, beyond the security issues, I should put forward to suggest that we should opt for the same server in staging as in the live environment.
I have read great posts like this one, but I want something I can state in a few bullet points.
That staging and live environments should be the same is easy for any seasoned developer to understand; my problem is that I am trying to explain this to upper-level management people who seem to have already made up their minds...
[Edit]
@Luke:
It's basically a website that gets updated quite often; the whole site is to be staged and tested before deploying to the live environment.
The site is to be left in the hands of the Marketing department (non-developers), who will verify that the site has no issues before deployment.
[Edit++]
The code is ASP.NET, used in three important customer ordering pages.
Thanks,
Ric
I hope that's not a 32-bit Windows 2003 staging server you're using to test functionality for a 64-bit Windows 2008 production server, or you are in for a world of pain.
The staging server should be, as far as possible, the equivalent of the production server because what you are using it for is to answer the question "Does this software work on the production environment?" before actually committing to loading it on the production environment.
Answering the question "Does this software work on a server that is almost totally unlike our production server?" is not useful. In reality, all you are doing is committing to testing and debugging the software in yet another environment, but one that you won't actually use. It's more work, and in the end you still don't know if it works on your production environment, which is the entire point of having a staging server in the first place.
The more the staging environment matches live, the more issues can be found in test. If you have only a poor match, like what you have here, that limits the kinds of bugs that might be uncovered. For example, suppose there is an incompatibility between 2008 64-bit and some component of the site? You will not find it until you have gone live, and that could be too late.
Perhaps you should ask them what they believe a staging environment is. Explain to them that the entire point of a staging environment is to mimic the production environment as well as possible. Explain that if the staging environment is to be drastically different, you might as well not have it. Then if you do not have it, your production site will be used for testing. Tell them that it's really not that big of a deal, just that the site will break a couple times, and possibly have some major security leaks before you get everything fixed due to the lack of proper staging. I'm sure they'll understand.
The general rule is that you can only validate changes that use common subsystems between stage and live. If you are only validating HTML copy changes, and can guarantee that only HTML is being rolled from stage to live, it will probably give you high confidence that the site will work on live.
You have so many differences between stage and live that you can not validate any coding or IIS configuration changes. It will be "push and pray" going to live.
Preferably live and staging should be the same technologies of course (same box?). But what are you staging here, technology or content? If the staging environment is mainly for content then you might get away with both servers not being the same. However, if you're staging technology then you will definitely run into issues where you put stuff live that doesn't work properly. I guess, if the guy with the wallet is willing to be responsible for that, go ahead...
Explain it to the business in terms of risk and money.
The risk of your site encountering issues upon production deploy is known and non-trivial.
The cost of your site going down because of an unforeseen issue is extremely high.
The potential cost of the time it takes your support staff and developers to pinpoint issues each time they're encountered in production because your staging environment isn't answering the right question ("Will my software work in production?") is high, and exacerbates the former.
The late nights and high stress that repeated failed deployments can incur will lead to an unhappy, unproductive team, which can lead to unacceptably high turnover rates.
The cost of mitigating all of this via the purchase of hardware is relatively low, and many reputable engineers recommend it as a best practice.
We're running a custom application on our intranet and we have found a problem after upgrading it recently where IIS hangs with 100% CPU usage, requiring a reset.
Rather than subject users to the hangs, we've rolled back to the previous release while we determine a solution. The first step is to reproduce the problem -- but we can't.
Here's some background:
Prod has a single virtualized (VMware) web server with two CPUs and 2 GB of RAM. The database server has 4 GB of RAM and 2 CPUs as well. It's also on VMware, but on separate physical hardware.
During normal usage the application runs fine. The w3wp.exe process normally uses between 5% and 20% CPU and around 200 MB of RAM. CPU and RAM fluctuate slightly under normal use, but nothing unusual.
However, when we start running into problems, the RAM climbs dramatically and the CPU pegs at 98% (or as much as it can get). The site becomes unresponsive, necessitating an IIS restart. Recycling the app pool does nothing in this situation; a full IIS restart is required.
It does not happen during the night (no usage). It happens more when the site is under load, but it has also happened during non-peak periods.
The first step to solving this problem is reproducing it. To simulate the load, we started using JMeter to simulate usage. Our load script is based on actual usage around the time of the crash. Using JMeter, we can ramp the usage up quite high (2-3 times the load during the crash) but the site behaves fine. CPU is up high, and the site does become sluggish, but memory usage is reasonable and nothing is hanging.
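(For what it's worth, JMeter's non-GUI mode makes baseline runs easy to repeat and compare; a minimal sketch where plan.jmx and results.jtl are placeholder names:)

    # -n non-GUI, -t test plan, -l results log, -e/-o build the HTML report afterwards
    jmeter -n -t plan.jmx -l results.jtl -e -o report/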
Does anyone have any tips on how to reproduce a problem like this in a non-production environment? We'd really like to reproduce the error, determine a solution, then test again to make sure we've resolved it. During the process we've found a number of small things that we've improved that might solve the problem, but I'd really feel a lot more confident if we could reproduce the problem and test the improved version.
Any tools, techniques or theories much appreciated!
You can find some information about troubleshooting this kind of problem at this blog entry. Her blog is generally a good debugging resource.
I have an article about debugging ASP.NET in production which may provide some pointers.
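One concrete technique worth adding (not from the linked articles, but standard practice for this symptom): have Sysinternals ProcDump capture full memory dumps of w3wp automatically when the CPU pegs, then inspect the offending threads offline. A minimal sketch:

    rem write up to 3 full dumps when w3wp's CPU stays above 90% for 10 seconds
    procdump -ma -c 90 -s 10 -n 3 w3wp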
Is your test environment really the same as live?
I.e., two separate VM instances on two physical servers, with the same network connections and account types?
Are there any other instances on the database server?
Are there any other web applications in IIS?
Is the .NET config right?
Is the app pool config right for the service accounts?
Try looking at this: MS article on IIS 6 Optimizing for Performance.
Lots of tricks.
I am tasked with improving the performance of a particular page of the website that has an extremely high response time, as reported by Google Analytics.
Doing a few Google searches reveals a product that came with VS2003 called ACT (Application Center Test) that did load testing. It no longer seems to be distributed.
I'd like to be able to get a baseline test of this page before I try to optimize it, so I can see what my changes are doing.
Profiling applications such as dotTrace from JetBrains may play into it, and I have already isolated some operations within the page that are taking a while, using trace.
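(For reference, that kind of per-page trace output can also be collected site-wide from web.config so the timings survive across requests; a minimal sketch:)

    <system.web>
      <!-- keeps the first 40 request traces; browse /trace.axd to read them -->
      <trace enabled="true" requestLimit="40" pageOutput="false" localOnly="true" />
    </system.web>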
What are the best practices and tools surrounding performance and load testing? I'm mainly looking to be able to see results, not how to accomplish them.
Here is an article showing how to profile using the VSTS profiler.
If broken it is, fix it you should
Also, apart from all the tools, why not try enabling the "Health Monitoring" feature of ASP.NET?
It provides some good information for analysis. It emits essential information related to the process, memory, disk usage, counters, etc. Health Monitoring combined with VSTS load testing gives you a good platform for analysis.
Check out the link below:
How to configure HealthMonitoring?
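A minimal sketch of switching it on in web.config; the rule below uses the stock "All Events" mapping and EventLogProvider that ship in the root web.config:

    <system.web>
      <healthMonitoring enabled="true">
        <rules>
          <!-- send every ASP.NET web event to the Windows event log -->
          <add name="All Events To Event Log"
               eventName="All Events"
               provider="EventLogProvider"
               profile="Default" />
        </rules>
      </healthMonitoring>
    </system.web>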
Also, for a reference checklist, have a look at the following rules/tips from Yahoo:
High performance website rules/tips
HttpWatch is also a good tool for identifying specific performance issues.
HttpWatch - Link
Also have a look at some of the tips here:
10 ASP.NET Performance and Scalability Secrets
Take a look at the ANTS Profiler from Red Gate. I use a whole slew of the Red Gate products and am very satisfied!
There are a lot of different paths you can go down. Assuming an MS environment, you can leverage some of the Team System tools, such as MS Team Tester, to record tests and perform load testing against your site. These can be set to run as part of an automated build process.
A list of tools is located at: http://www.softwareqatest.com/qatweb1.html#LOAD
Now, you might start off simple. In this case, install two Firefox plugins: Firebug and YSlow for Firebug. These will give stats and point out issues such as page size, the number of requests made to get the page, etc. They will also make recommendations on some things to fix.
Further, you can use unit tests to execute a lot of the code behind to see what functions are hurting you.
You can do all sorts of testing if you have a full MS dev system with TFS and Visual Studio Team Edition, based on what I see here.
I recently had a nice .NET bug that was running rampant. This tool sorta helped, but in your case I could see it working nicely:
http://www.jetbrains.com/profiler/
Most of the time we've used WCAT from Microsoft. If your searches were bringing up ACT, then this is probably the tool you want to grab if you are looking for requests per second and the like. Mike Volodarsky has a good post pointing the way on how to grab this.
We use it quite a bit internally when it comes to testing our network infrastructure or new web applications, and it is incredibly flexible once you get going with it. And in seemingly every demo Microsoft has done for us with new web tech, they bust out WCAT to show off the improvements.
It's command line driven so it's kinda old school, but if you want power and customization it can't be beat. Especially for free.
Now, we use DotTrace also on our own applications when trying to track down performance issues, and the RedGate tools are also nice. I'd definitely recommend a combination of the two of them. They both give you some pretty solid numbers to track down which part of your app is the slowdown and I can't imagine life without DotTrace.
Visual Studio Test Edition (2008 or 2010) comes with a very good load testing component for ASP.NET apps.
It allows you to collect all the perfmon statistics for a server (from basics like CPU and disk waits to garbage collection and SQL locks).
Create a load test for the page and run it, storing the stats in a database for the baseline. Subsequent runs can be compared against it.