I have a website running in ASP.NET WebForms. I am planning to run performance tests using the Visual Studio 2013 Ultimate edition test tools. I have a very basic question.
Where is the performance test supposed to be run?
From a client machine or from server?
Could you please point me to a good article on this?
It depends on what exactly you want to measure.
To measure the user experience, run it on a computer located similarly to those of your website's users. To measure the raw performance of the website, run it on a computer located where network delays and bandwidth will not be an issue.
Another point to consider is how much monitoring of the servers you want to perform when running the test. Visual Studio load tests can collect performance counters from other computers during the test run. These might be the web servers, database servers, application servers, etc. Getting access to the performance counters of these servers generally requires firewall and other permission changes. The counters also consume network bandwidth as they are transferred to the test computer(s). If these performance counters must be collected, then the test computer(s) may need to be within the company's firewalls.
If your test does collect performance counters from the various servers then one interesting test variation is a "do nothing" test that just collects the counters over a period of time during real use of the system. That provides a way of validating that the tests were representative of real use.
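If it helps to picture what collecting counters from another machine involves, here is a minimal C# sketch (separate from the Visual Studio tooling itself) that samples a CPU counter on a remote web server. The machine name is a placeholder, and the account running it needs remote performance-monitoring rights plus the relevant firewall openings.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        // "WEBSRV01" is a placeholder machine name; reading it remotely requires
        // membership in the Performance Monitor Users group on that machine and
        // the usual firewall exceptions for remote management.
        using (var cpu = new PerformanceCounter(
            "Processor", "% Processor Time", "_Total", "WEBSRV01"))
        {
            cpu.NextValue();            // the first read always returns 0, so prime it
            for (int i = 0; i < 10; i++)
            {
                Thread.Sleep(1000);     // sample once per second
                Console.WriteLine("{0:T}  CPU {1,5:F1} %", DateTime.Now, cpu.NextValue());
            }
        }
    }
}
```

Visual Studio does this collection for you once counter sets are mapped to each machine in the load test; the sketch just shows the kind of access that has to be in place.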
I am using a VPS for my website, so I don't believe I can access it from the local network.
I am using DigitalOcean as the VPS provider.
So where should I install tools like ab, siege, JMeter, etc.: locally on the VPS, on my own computer (the client), or on another droplet (VPS) in the same region, connecting to the web server droplet over the private network?
From my understanding, if I run those tools on the VPS itself, they might use too much of the CPU and RAM (the same CPU and RAM the web server uses) for the test to be valid.
On the other hand, testing remotely might produce bad values because of a network bottleneck. Is that still the case if I use another VPS on the same subnet (via the DigitalOcean private networking feature, for example)?
I am lost; both options seem wrong, so what am I missing?
The best option is to install the load generator on another VPS residing in the same subnet as the application under test. That way you will get "cleaner" results that are not skewed by connect times and network latency.
Having both the application under test and the load generator on the same machine is not recommended, as load testing tools are themselves very resource intensive. You can end up in a situation where both applications are fighting for resources, so the load generator cannot send requests fast enough and the application under test cannot handle requests properly. In general it is recommended to keep an eye on resource consumption by the application under test and the load generators, to ensure that both have enough headroom; that also lets you correlate an increasing number of virtual users with increased resource consumption. You can use an APM tool, or the JMeter PerfMon Plugin if you don't have anything else in place.
As a fallback you can use your local machine for testing. However, make sure that you have enough bandwidth (you can check it using a service such as https://www.speedtest.net/) and that your ISP is aware of your plans and won't block you for suspicious activity (the traffic might be mistaken for a DoS attack).
We get good results using Unix machines from Amazon Web Services as load generators. You do not get results as clean as when the load generator is located in the same network, as Dimitri mentioned, but you get a realistic result, close to what the end user will experience. In our scenario we evaluate some key values during execution, such as CPU, DB connections, and the number of data sets changed in the DB during the test. We repeat the test several times because there is always some variance in the results. A load test in the same network delivers more stable results and is comparable to a measurement in a laboratory, but I think it is very valuable to know how your application behaves in reality.
I am not talking about application profilers or debuggers, but more specifically about managing applications in a production environment: essentially monitoring, identifying bottlenecks, and deploying fixes.
For monitoring that the application is up and running, we use Nagios.
We also use good old Performance Monitor for tracking database connections, memory consumption, and CPU usage.
We use IPMonitor to verify uptime, and it has a lot of options for pinging the site for keyword validation, HTTP response validation, and response time. You can also use SNMP to check processor and RAM utilization and remaining hard disk space, among many other options. It supports multiple servers and types of servers, not just web or database servers.
Additionally, we test basic uptime and response speed with AlertSite.
A 3rd party, Keynote, tests our sites to verify that they are navigable like a human would browse. They have scripts to mimic clicks and interactions.
We use Spotlight for SQL Server management, and also good old perfmon for the granular problem fixing.
We recently purchased WildMetrix to monitor and troubleshoot performance issues for our ASP.NET applications. It's nice because you can easily aggregate IIS, ASP.NET, and SQL Server information into a single graph or dashboard that allows you to pinpoint possible trouble spots. We currently use it as our primary performance reporting and tracking tool, along with ELMAH for exception tracking.
How do you automate integration testing that requires 2 or more PCs (a distributed app)? What's your strategy for performing integration testing (or performance testing) in cases where multiple machines are involved?
Example
We need to integration-test our client/server app. To mimic the live system, we need to deploy the client on one machine and the server on another, and then measure the TCP transfer speed.
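As a rough illustration of the measurement itself (not of any particular framework), a minimal C# sketch for measuring TCP transfer speed between two machines might look like the following. The port number and payload size are arbitrary placeholders, and a real harness would add warm-up runs and repeat the transfer several times.

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

class TcpThroughput
{
    const int Port = 9100;                       // arbitrary test port
    const int PayloadBytes = 100 * 1024 * 1024;  // 100 MB test payload

    // Run on the server machine: accept one connection, read the whole
    // payload, then send a one-byte acknowledgement.
    static void RunServer()
    {
        var listener = new TcpListener(IPAddress.Any, Port);
        listener.Start();
        using (var client = listener.AcceptTcpClient())
        using (var stream = client.GetStream())
        {
            var buffer = new byte[64 * 1024];
            int total = 0;
            while (total < PayloadBytes)
                total += stream.Read(buffer, 0, buffer.Length);
            stream.WriteByte(1);                 // tell the client everything arrived
        }
        listener.Stop();
    }

    // Run on the client machine: send the payload, wait for the ack,
    // and report the effective transfer rate in MB/s.
    static void RunClient(string serverHost)
    {
        using (var client = new TcpClient(serverHost, Port))
        using (var stream = client.GetStream())
        {
            var chunk = new byte[64 * 1024];
            var timer = Stopwatch.StartNew();
            for (int sent = 0; sent < PayloadBytes; sent += chunk.Length)
                stream.Write(chunk, 0, chunk.Length);
            stream.ReadByte();                   // wait until the server has read it all
            timer.Stop();
            Console.WriteLine("{0:F1} MB/s",
                PayloadBytes / (1024.0 * 1024.0) / timer.Elapsed.TotalSeconds);
        }
    }

    static void Main(string[] args)
    {
        if (args.Length == 0) RunServer();       // no argument: act as the server
        else RunClient(args[0]);                 // argument: server host name or IP
    }
}
```

Start the server side first, then run the client with the server's host name as the argument; the hard part, as the answers show, is automating the deployment of the two halves.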
There are ways to do this, but none of them are built into any frameworks that I am aware of.
Below are the three ways I have addressed it in the past:
Use VMware Server/ESX - What we have done most recently is to actually build VM images for the server and client machines, each with a mountable second drive (data drive). We build and unit test our software; before the performance test we spin up the VMs, then deploy the code to the data drives. After that we deploy a set of test scripts to the machines and kick them off (via PowerShell). This works pretty well, has good repeatability, and allows us to give the test servers to other teams/customers for their evaluation. The downside is that it is very resource intensive.
Dedicated Server & Client Test Sets - We had two separate source repositories, one for the server and one for the client. We then went through the build as above, but one component at a time: deploying the server (and testing it against the old client), deploying the client (and testing it against the old server), and then deploying both and testing the combination. This worked fairly well, but it required some manual testing for certain scenarios and could get cumbersome if we needed to test multiple server or client changes at the same time.
Test against production only - We only ever updated the client OR the server, then deployed that part and tested it against the current production setup. The downside of this, of course, is that we had to deploy much more slowly and make incremental changes in one system or the other: deploy, test, and release, then make changes in the other component. Rinse and repeat.
If you have the resources I highly recommend #1. It's harder to set up initially, but it pays for itself very quickly, and once it's set up it's repeatable for other products as well (as long as they follow a relatively similar deployment pattern).
It depends on your setup. For example, I needed to test a group of web services that my team created or modified. During the test we deployed the app to one machine as the producer and used SoapUI to generate a few thousand transactions across many threads (from 1 to 100 threads, as I remember). That way we verified the response times and the SLA (service level agreement).
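SoapUI handles the thread management for you, but if you wanted to script a similar check yourself, a rough C# sketch along the same lines might look like this. The URL, worker count, and calls-per-worker values are placeholders, and a real check would compare the reported percentiles against the agreed SLA numbers.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class SlaCheck
{
    static void Main()
    {
        const string url = "http://test-server/MyService/Ping";  // placeholder endpoint
        const int workers = 50;            // concurrent callers
        const int callsPerWorker = 100;    // requests each caller issues

        using (var http = new HttpClient())
        {
            var tasks = Enumerable.Range(0, workers).Select(async _ =>
            {
                var timings = new double[callsPerWorker];
                for (int i = 0; i < callsPerWorker; i++)
                {
                    var sw = Stopwatch.StartNew();
                    using (var response = await http.GetAsync(url))
                        response.EnsureSuccessStatusCode();   // fail fast on errors
                    timings[i] = sw.Elapsed.TotalMilliseconds;
                }
                return timings;
            }).ToArray();

            // Flatten and sort all timings, then report simple percentiles.
            var all = Task.WhenAll(tasks).Result.SelectMany(t => t).OrderBy(t => t).ToArray();
            Console.WriteLine("requests : {0}", all.Length);
            Console.WriteLine("median   : {0:F0} ms", all[all.Length / 2]);
            Console.WriteLine("95th pct : {0:F0} ms", all[(int)(all.Length * 0.95)]);
        }
    }
}
```

On the .NET Framework you would also want to raise ServicePointManager.DefaultConnectionLimit; otherwise the 50 callers are throttled to a couple of connections per host and the measurement reflects the client rather than the service.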
I would like to ask you what is the best setup for a following application:
ASP.NET 3.5 Web site - used as the presentation layer, with a lot of AJAX and JS. It will not hit the server a lot.
ASP.NET WCF - a service providing all data to the application. It is responsible for validation, data modeling/preparation, and communication with the DB server.
Database - SQL Server 2005 Std; some logic is coded on the server side as stored procedures, and some of it can be a bit time consuming. In my opinion it's the most resource-consuming part of the app.
The website can have up to 1000 users per minute. We can have up to 4 servers in the following configuration: Intel Bi Xeon Quad 8x 2.00+ GHz, 16 GB RAM, SSD or RAID drives.
What is the best way to place parts of the application on the physical servers? Will they handle this kind of load?
The least scalable part of any application is the database server: you can add more web and application servers, but you can't replicate the DB with the same ease, so in the long run you will benefit if the DB does not contain any logic, especially long-running logic. In a lot of applications the limiting factor is not CPU but memory. Think about user sessions: if you store 1 MB of data per user, your four machines (roughly 64 GB of RAM in total) can hold on the order of 64,000 simultaneous user sessions, which may or may not be sufficient. Both problems can be mitigated by application-level caching, but that brings its own set of problems, because now you are faced with stale data. To scale session-based sites you will need a load balancing solution that supports sticky sessions; for your load you will most likely need a hardware load balancer.
In the application you describe, I suspect that thread management is going to be a big issue. Throwing hardware at the problem may not be the best approach.
In terms of partitioning, it depends on whether you can leverage things like caching and cache notifications. If every call to the app has to hit the DB and run a lengthy stored procedure, then you may want to have more DB machines and fewer front-end web servers.
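If caching is on the table, here is a minimal sketch of what it can look like in ASP.NET 3.5 against SQL Server 2005 using query-notification-based cache invalidation. The connection string name, table, and query are placeholders, and this approach also requires Service Broker to be enabled on the database and SqlDependency.Start to be called once at application startup.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public static class ReportCache
{
    // Placeholder connection string name.
    static readonly string ConnectionString =
        System.Configuration.ConfigurationManager
              .ConnectionStrings["MainDb"].ConnectionString;

    // Returns the cached result of an expensive query; the cache entry is
    // invalidated automatically when rows in dbo.Orders change.
    public static DataTable GetOrderSummary()
    {
        var cache = HttpRuntime.Cache;
        var table = cache["OrderSummary"] as DataTable;
        if (table != null) return table;

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT OrderId, Total FROM dbo.Orders", connection))
        {
            // The dependency must be created before the command is executed.
            // Query notifications need explicit column lists and two-part
            // table names, and SqlDependency.Start(ConnectionString) must
            // have been called in Application_Start.
            var dependency = new SqlCacheDependency(command);

            table = new DataTable();
            new SqlDataAdapter(command).Fill(table);

            cache.Insert("OrderSummary", table, dependency);
        }
        return table;
    }
}
```

With this kind of caching in front of the expensive calls, most requests never touch the database, which changes how many front-end web servers versus DB machines you actually need.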
This is a big subject. In an attempt to provide a reasonably comprehensive answer to exactly this kind of question, I ended up writing a book about it: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
I have a large ASP.NET website on a hosted platform. It shares the machine with a lot of other applications. We do not have access to the machine itself (only an FTP account).
Our client is complaining that it is starting to perform rather badly, particularly around peak hours. I've run some remote measurements (using a JMeter-like tool) that tell me that, yes, it does indeed perform rather badly during peak hours. They don't tell me why, though. The client is resisting a move to a dedicated server without some hard facts.
As I see it, what I need is hard data about the machine itself. Setting up a local performance test environment would be extremely time-consuming, and I have no way to estimate the hosted server's performance from it.
My question: is there a good way to collect (a lot of) performance measurements when I have limited access to the machine, and certainly no access to Performance Monitor? Any code would have to run in the ASP.NET application itself, without screwing it up too much.
We had a similar problem with our ASP.NET application hosted on a shared server, which also started to perform badly during peak hours.
Although I don't know of an elegant solution to your question, this is what we did:
Talk to your host providers to see what additional information they can give you - it's in their best interest to keep their clients happy. Our host providers were able to give us some time with one of their network engineers who provided us with some decent CPU and memory utilization stats.
Take your own performance measurements by dumping information to a log file (using log4net) and/or the database - for example, user sessions, search times, page hits, and timing measurements around key functionality (a sketch of this kind of request timing follows these steps). From this information we were able to establish what our system's normal behavior was for a set of automated tests.
Set up a local server (not necessarily with the same specs as the hosted/production server) with your application loaded and run full load/performance/capacity tests against it (we used Red Gate's ANTS Profiler). The stats that you gather from that will give you and your client a good indication of how the system should behave under certain loads in a known environment. Yes, this can be time consuming, but it will give you a great performance measuring tool so that you can catch and fix bottlenecks locally rather than in production.
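For the in-application measurements in the second step, a minimal sketch of an IHttpModule that logs per-request timings through log4net is shown below. The logger name is arbitrary, logging every request is just one option (you may prefer to log only slow requests), and the module still needs to be registered in web.config with log4net configured as usual.

```csharp
using System;
using System.Diagnostics;
using System.Web;
using log4net;

// Logs the verb, URL, and elapsed time of every request. Register it in
// web.config under <httpModules> (classic pipeline) or
// <system.webServer><modules> (integrated pipeline).
public class RequestTimingModule : IHttpModule
{
    private static readonly ILog Log = LogManager.GetLogger("RequestTiming");

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            // Stash a stopwatch for this request.
            HttpContext.Current.Items["RequestTimer"] = Stopwatch.StartNew();
        };

        application.EndRequest += (sender, e) =>
        {
            var timer = HttpContext.Current.Items["RequestTimer"] as Stopwatch;
            if (timer == null) return;
            timer.Stop();

            // In production you may want to log only requests slower than
            // some threshold to keep the log files manageable.
            Log.InfoFormat("{0} {1} took {2} ms",
                HttpContext.Current.Request.HttpMethod,
                HttpContext.Current.Request.RawUrl,
                timer.ElapsedMilliseconds);
        };
    }

    public void Dispose() { }
}
```

On a shared host, make sure the directory the log4net appender writes to is one the application pool account can actually write to; a rolling file appender with a size cap is usually the safest choice there.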
Good luck.