Should load testing be done locally or remotely? - nginx

I am using a VPS for my website, so I don't believe I can access it from a local network.
I am using DigitalOcean as the VPS provider.
So where should I install tools like ab, siege, JMeter, etc.: on the VPS itself, on my own computer (the client), or on another droplet (VPS) in the same region, connecting to the web server droplet via the private network?
From my understanding, if I use those tools on the VPS itself, they might consume too much of the CPU and RAM (the same CPU and RAM the web server uses) for the test to be valid.
On the other hand, testing remotely might produce skewed values because of a network bottleneck. Is that still the case if I use another VPS on the same subnet (using DigitalOcean's private networking, for example)?
I am lost; both solutions seem wrong, so what am I missing?

The best option is to install the load generator on another VPS residing in the same subnet as the application under test - this way you will get "cleaner" results that are not skewed by connect times and network latency.
Having both the application under test and the load generator on the same machine is not recommended, as load testing tools are themselves very resource-intensive; you may end up in a situation where both applications are "struggling" for resources, so the load generator cannot send requests fast enough and the application under test cannot handle requests properly. In general it is recommended to keep an eye on resource consumption by the application under test and the load generators to ensure that both have enough headroom; this also lets you correlate an increasing number of virtual users with increased resource consumption. You can use an APM tool, or alternatively the JMeter PerfMon Plugin if you don't have anything else in place.
As a fallback you can use your local machine for testing; however, make sure that you have enough bandwidth (you can check it using e.g. https://www.speedtest.net/) and that your ISP is aware of your plans and won't block you for suspicious activity (sustained load can be mistaken for a DoS attack).
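To make that concrete, here is a minimal, hypothetical sketch of the kind of script the load-generator droplet could run against the web server's private-network address. The URL, request count and concurrency are placeholder values, not anything from the question; tools like ab or siege do the same job out of the box.

```python
# Minimal concurrent load-generation sketch (hypothetical values throughout).
# Run it from the load-generator droplet against the web server's private IP.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://10.0.0.2/"   # placeholder: web server's private-network address
REQUESTS = 500                # placeholder request count
CONCURRENCY = 20              # placeholder number of parallel workers

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```

While it runs, keep an eye on CPU/RAM on both droplets, as noted above, so you know neither side is the bottleneck.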

We get good results using Unix machines from Amazon Web Services as load generators. You do not get as clean a result as Dimitri mentioned for the case where the load generator sits in the same network, but you get a realistic result, close to what the end user will actually experience. In our scenario we evaluate some key values during execution, such as CPU, DB connections and the number of data sets changed in the DB during the test. We repeat the test several times because there is always some variance in the results. A load test in the same network will deliver more stable results and can be compared to a measurement in a laboratory, but I think it is very valuable to know how your application behaves in reality.
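As a hedged illustration of that repeat-and-compare idea (the numbers below are invented): collect one headline figure per run, for example the 95th-percentile response time, and look at the spread across runs before drawing conclusions.

```python
# Hypothetical per-run p95 latencies (ms) from several repeated load tests.
import statistics

runs_p95_ms = [412.0, 398.5, 441.2, 405.9, 420.3]  # invented example values

print(f"mean p95: {statistics.mean(runs_p95_ms):.1f} ms")
print(f"stdev:    {statistics.stdev(runs_p95_ms):.1f} ms")  # run-to-run variance
```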

Related

Application server hosting

I'm writing a Qt/C++ application and I plan to add a network part with a socket connection to a server, also implemented in Qt.
If I host the server locally there is no real problem.
But if I want to share my application (the client part) with some people and be sure my server is always running, the best way would be to have a remote server.
Could you give me some clues on how to do this? The steps to follow in this case are still not clear to me.
What is the best way to go about it?
Can I find free hosting?
Thanks a lot! :-)
There are generally 3 options:
1. Local hosting
This is a server running at your physical location. You can set it up exactly as you want and the server will do whatever you want, but it must stay turned on the whole time; when there is no other work it will just consume power. You must also provide all the hardware (server components), the software to run it (operating system), the network devices and connection (a router, which needs special set-up [NAT, port-forwarding, ...], plus sufficient speed and reachability of the internet connection) and most likely also some security device/software (firewalls and so on).
This is the best option for basic development and testing, but once the service should serve a public audience, it is not really worth running the server yourself.
2. Remote hosting (virtualized or dedicated server)
This has been the go-to option for the last 20-30 years, where web and application developers put their software on a pre-provisioned server. A dedicated server is a physical server running at a provider's location; the provider lends you the hardware (and maybe a licence for the OS/other software). A virtualized machine is one piece of hardware (a server) hosting multiple virtual servers (several clients running on the same hardware).
The general benefit is that the networking/security/hardware issues are handled by the hosting owner. You are just renting some disk space and compute time/performance. Normally the company will provide a whole server on which you can set up several services, run multiple protocols, etc.
This is an ideal solution for websites and a single or a few (not many) instances of server application(s).
3. Cloud hosting
This is the newest approach (around for roughly 10-15 years, e.g. AWS running since 2006, Azure since 2010). The datacenter owners (from point 2) went a step further and built applications on top of their servers which do most of the work for you (largely automatically). In a few clicks the servers are running and you can deploy applications, database engines, web pages, IoT hubs, ... quite a lot of stuff. The benefit is clearly that you have to spend only a minimum of time to set things up and they will run, with high uptime (e.g. 99.9995%).
Difference between dedicated & cloud: on a dedicated server you can install almost any OS that fits your needs, run only the services you want and keep full control. In a cloud solution you don't have as much "physical" control and the data more or less lives somewhere in datacenters all over the world. But it is generally a more scalable solution, and once your app is used by a lot of users from the public, it is the best way to go.
Common approach:
The most common workflow is that during development you run a local server on which you deploy, test and improve. Once the application is stable, you order a server, either in the cloud or as a dedicated/virtual machine, and deploy it there. Some developers know from the very beginning that their app will run on cloud services, so they order the cloud environment and develop against it, but in most cases there is no need for that.

Planning server infrastructure when hosting duplicated web-product over multiple servers

We have a web-application product that we sell to companies that is hosted at our servers.
The product consists of a couple of web applications, Windows services and a SQL Server DB.
Right now we have only one client that uses our product. We have two servers - one for the web apps and services and other for the db.
In order to add the product for another client, we have to 'duplicate' all the apps and the DB and run them separately.
As we started expanding, and some companies will require more server power than others, I need to plan the server infrastructure.
Having two servers for each client sounds ridiculous; hosting costs will be huge. What will happen when I have 10 clients? And probably some servers will need more power than others, leaving some servers at 30% of their capacity while others run at 70%.
One thing I really care about is separating the DB from each product so in case of server compromise, only one db will be at risk.
So... I thought about Virtual Machines...
Does that sound right?
Do I need two super servers to hold the virtual machine instances (one for the web and the other for the DB)?
What about load balancing, etc.?
Will it require more maintenance time just because I use virtual machines?
Are there any hardware recommendations?
Any help will be appreciated
Many thanks
Virtual machines are definitely the safest way to separate clients and will give you the flexibility to allocate a specific percentage of resources to specific clients.
However, using separate processes on the same physical machine will perform better (though not always significantly) and will allow more dynamic use of resources (e.g., if one spikes, it will use the resources it needs). This setup will not allow you to control the resource allocation nearly as easily, though. You'll also have to build your own monitoring tools to see and analyze which processes (clients) are using which resources (piggyback on perfmon).
Using separate processes is also dangerous if your application wasn't designed for it. Anywhere the application caches data on the file system, or accesses anything besides memory and the database, needs to be thoroughly scrubbed to make sure data from different clients is not co-mingled or shared.
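One way to guard against that, sketched here as an illustration rather than anything from the original answer: derive every on-disk path from an explicit client identifier, so nothing one client's process writes can collide with another client's data. The directory layout and helper names below are hypothetical.

```python
# Hypothetical per-client cache isolation: every on-disk artifact lives under
# a directory derived from the client id, so separate processes cannot
# co-mingle cached data.
from pathlib import Path

CACHE_ROOT = Path("/var/cache/myproduct")   # placeholder root directory

def client_cache_path(client_id: str, name: str) -> Path:
    # Keep only filesystem-safe characters from the client id.
    safe_id = "".join(c for c in client_id if c.isalnum() or c in "-_")
    path = CACHE_ROOT / safe_id
    path.mkdir(parents=True, exist_ok=True)
    return path / name

# Usage: each client's process only ever writes below its own directory.
report_cache = client_cache_path("acme-corp", "reports.cache")
```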
Separate virtual machines are more work to manage - each one is pretty much its own computer, so you have to manage all the VMs plus the physical machine.
You may also want to consider hosting in a more dynamic environment like Amazon AWS or Microsoft's Azure which will allow you to more easily scale up/down as necessary than a VM at a traditional host.

How to automate integration testing that requires multiple computers?

How do you automate integration testing that requires 2 or more PCs (a distributed app)? What's your strategy for performing integration testing (or performance testing) in cases where multiple machines are involved?
Example
We need to integration-test our client/server app. To mimic the live system, we need to deploy the client on one machine and the server on another. Then we measure the TCP transfer speed.
There are ways to do this, but none of them are built into any frameworks that I am aware of.
Below are the three ways I have addressed it in the past:
Use VMware Server/ESX - What we have done most recently is to build VM images for the server and client machines with a mountable second drive (data drive). We build and unit test our software, and before the performance test we spin up the VMs and deploy the code to the data drive. After that we deploy a set of test scripts to the machines and kick them off (via PowerShell; a rough scripted sketch of this step appears below). This works pretty well, has good repeatability and allows us to give the test servers to other teams/customers for their evaluation. The downside is that it is very resource-intensive.
Dedicated server & client test sets - We had two different source repositories, one for the server and one for the client. We then went through the build as above, but one at a time: deploying the server (and testing it against the old client), deploying the client (and testing it against the old server), and then deploying both and testing the combination. This worked fairly well, but required some manual testing for certain scenarios and could get cumbersome if we needed to test multiple server or client changes at the same time.
Test against production only - We only ever updated the client OR the server; we updated that part and tested it against the current production setup. The downside of this, of course, is that we had to deploy much more slowly and make incremental changes in one system or the other: deploy, test and release, then make changes in the other component. Rinse and repeat.
If you have the resources I highly recommend #1. It's harder to set up initially but it pays for itself very quickly, and once it's set up it is repeatable for other products as well (as long as they follow a relatively similar deployment pattern).
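For reference, here is a rough sketch of what the "kick them off" step could look like if you script it yourself; the host names, script paths and the ssh/scp-based approach are assumptions, not what the answer actually used (that was PowerShell against VMware images).

```python
# Hypothetical orchestration of a two-machine test: start the server-side
# script, run the client-side script, then pull back a results file.
import subprocess

SERVER = "testuser@server-vm"    # placeholder hosts
CLIENT = "testuser@client-vm"

def ssh(host, command):
    return subprocess.run(["ssh", host, command],
                          capture_output=True, text=True, check=True)

ssh(SERVER, "/opt/tests/start_server.sh")                # assumed script path
result = ssh(CLIENT, "/opt/tests/run_transfer_test.sh")  # assumed script path
print(result.stdout)

subprocess.run(["scp", f"{CLIENT}:/opt/tests/results.csv", "./results.csv"],
               check=True)
```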
It depends on your setup. For example, I needed to test a group of web services that my team created/modified. During the test we deployed the app to one machine as the producer and used SoapUI to generate a few thousand transactions across many threads (from 1 to 100 threads, as I remember). That way we verified the responses and that we met the SLA (service level agreement).
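If you want to turn that SLA check into an automated pass/fail, the check itself is tiny; here is a sketch with invented numbers (the threshold and sample response times are placeholders).

```python
# Hypothetical SLA check: given the response times recorded during the run,
# fail the test if the 95th percentile exceeds the agreed limit.
SLA_P95_SECONDS = 2.0                                        # invented threshold

response_times = [0.41, 0.38, 1.92, 0.55, 0.47, 0.62, 0.58]  # invented sample

p95 = sorted(response_times)[int(len(response_times) * 0.95)]
assert p95 <= SLA_P95_SECONDS, f"SLA violated: p95 = {p95:.2f}s"
```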

ASP.NET Hosting on Virtual Servers running on VMWare

My Company is running several international websites for selling insurance products.
Our current setup is a web farm with multiple load-balanced web servers hosting our ASP.NET applications. The backend is a single - yet powerful - SQL Server. (All in one data center.)
Our network admins want to move to virtual servers running on VMWare.
Scenarios could be:
1. Webfarm: multiple standard web servers, load-balanced (current setup), session state on SQL Server
2. Virtual webfarm: multiple virtual servers, load-balanced on one physical VMware host, session state on SQL Server
2a. Same as above, but with multiple physical hosts
3. Single virtual web server: one big, powerful virtual web server, no load balancing required, session state can be kept in-process
There is a lot of hype around virtualization and I can see the benefits, but I have no experience with it. I cannot tell what issues we will face or what we should pay special attention to.
Does anyone have experience with such a virtual setup?
What are general recommendations?
I tend towards 2a. I am afraid of having all webservers on one single physical machine.
Many thanks in advance to share your thoughts.
There are three reasons to use more than one webserver for an application:
Scaling - More grunt is required than one machine can provide
Reliability - Website should keep running in case of failure (a. hardware b. software)
Prioritization - One of the webservers takes on heavy work (perhaps scheduled tasks) leaving the other to respond to client requests quickly.
Marrying that up to your scenarios:
Scenario 1 provides 1, 2, 3
Scenario 2 provides 2b (perhaps 2a if the host is fully hardware redundant, which I doubt)
Scenario 2a provides 1, 2
Scenario 3 provides none of the above
Advantages of Virtual Hosting:
Lower Total Cost of Ownership (TCO): a big cluster serving multiple purposes is cost-effective
New servers can be created quickly if needed
Redundant hardware is easier to justify if the cost is shared among many applications
Disadvantages:
Other virtual machines may suck away your CPU/Disk IO capacity
IMHO there is little point in load balancing multiple virtual machines on the same virtualization host.
Robert's pretty much covered it all, I'm mostly just adding a note to say that at least one of our clients is currently running with option 2a.
So we have multiple loadbalanced web servers running on a couple of VM hosts, talking to a non-virtualised SQL cluster - this works quite well for them.
One other advantage of virtualisation is that it allows you to more fully utilise your hardware - however, you need to be aware that if you're running your virtual host at around 90% capacity with multiple VMs, you've not got a lot of spare capacity for any traffic spikes - if you're not expecting any, then great, but if you are, you'll need to have something in place to cope.
I agree with all of the above answers, and I actually work at a web host. :-) If you're using multiple load-balanced web servers now then I can only assume the reason for it is either:
Hardware Redundancy: If a single app server fails then those sessions are lost, but the app keeps running on the other servers and users can immediately re-connect.
or
Application Load Distribution (it's late so I can't think of a better name): Your traffic dictates that you have multiple app servers since all of your users would crash a single app server.
If #1 is the reason, then going to VMware defeats the purpose, since you only have one physical server supporting everything, and in case of a hard drive crash, etc., you are down while it is repaired. If #2 is the reason then a VMware-based solution MAY work; however, keep in mind that the hardware you'd use would almost necessarily be of a higher caliber than what you're currently using. So you may get more bang for your buck, but you STILL lose the redundancy that multiple physical machines gave you.
Now, you could always combine the two by having multiple physical machines all running VMWare, but that adds a level of complexity to things that you may not necessarily want either.
It doesn't sound like there would be any tangible benefit from running multiple virtual servers on the same physical host; you're just adding overhead. Unless I'm missing something with the way you've described the setup, there wouldn't be any benefit at all from moving to VMware - unless you're looking at taking advantage of features such as VMotion.
VMware is most useful for consolidating underutilized hardware. If your hardware is running at near-capacity during peak periods then you don't want to run multiple VMs on the one machine.
There are benefits to virtualization, but your network admins need to prove that there is a benefit for your company before you even consider switching. I would say if you have multiple apps running on dedicated servers with low traffic (i.e. each app has its own physical server) then sure, virtualize. If you have one app spread over many servers, then don't.
You should be able to use virtual machine hosts with multiple vm per host and load balance across all of them.
Microsoft is doing this with MSDN and TechNet: http://virtualization.info/en/news/2008/05/microsoft-migrates-msdn-and-technet-on.html

ASP.NET performance measurements on a hosted platform

I have a large ASP.NET website on a hosted platform. It shares the machine with a lot of other applications. We do not have access to the machine itself (only an FTP account).
Our client is complaining that it is starting to perform rather badly, particularly around peak hours. I've run some remote measurements (using a JMeter-like tool) that tell me that, yes, it does indeed perform rather badly during peak hours. They don't tell me why, though. The client is resisting a move to a dedicated server without some hard facts.
As I see it, what I need are hard data about the machine itself. Setting up a local performance test environment would be extremely time-consuming, and I have no way to estimate the server performance.
My question: is there a good way to collect (a lot) of performance measurements when I have limited access to the machine, and certainly no access to the performance monitor? Any code would have to run in the asp.net application itself, without screwing it up too much.
We had a similar problem with our asp.net application hosted on a shared server, which also started to perform badly during peak hours.
Although I don't know of an elegant solution to your question, this is what we did:
Talk to your host providers to see what additional information they can give you - it's in their best interest to keep their clients happy. Our host providers were able to give us some time with one of their network engineers, who provided us with some decent CPU and memory utilization stats.
Take your own performance measurements by dumping information to a log file (using log4net) and/or the database - for example, user sessions, search times, page hits and timing measurements around key functionality (a rough sketch of this kind of instrumentation follows after this list). From this information we were able to establish what our system's normal behavior was for a set number of automated tests.
Set up a local server (not necessarily with the same specs as the hosted/production server) with your application loaded and give it full load/performance/capacity testing (we used Red Gate's ANTS Profiler). The stats that you gather from that will give you and your client a good indication of how the system should behave under certain loads in a known environment. Yes, this can be time consuming, but it will give you a great performance measuring tool so that you can catch/fix bottlenecks locally rather than in production.
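A rough sketch of the kind of instrumentation described in the second point, shown here in Python for brevity (the original used log4net inside an ASP.NET app; the function and operation names below are hypothetical):

```python
# Rough sketch of in-app timing instrumentation written to a log file.
# Hypothetical names; the original used log4net inside an ASP.NET app.
import functools
import logging
import time

logging.basicConfig(filename="perf.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def timed(operation_name):
    """Decorator that logs how long the wrapped operation took."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                logging.info("%s took %.1f ms", operation_name, elapsed_ms)
        return wrapper
    return decorator

@timed("product_search")          # hypothetical key operation
def search_products(term):
    ...                           # real work would go here
```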
Good luck.
