How to automate integration testing that requires multiple computers?

How do you automate integration testing that requires two or more PCs (a distributed app)? What is your strategy for performing integration testing (or performance testing) in cases where multiple machines are involved?
Example
We need to integration-test our client/server app. To mimic the live system, we deploy the client on one machine and the server on another, then measure the TCP transfer speed.
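To make the example concrete, here is a minimal, hypothetical Python sketch of the throughput measurement itself; the port, payload size, and transfer volume are assumptions, not values from the original setup.

```python
# Minimal TCP throughput sketch (port, payload and volume are assumptions).
# Run "python tcp_speed.py server" on the server machine and
# "python tcp_speed.py client <server-host>" on the client machine.
import socket
import sys
import time

PORT = 5001                       # assumed test port
CHUNK = 64 * 1024                 # 64 KiB per send
TOTAL_BYTES = 256 * 1024 * 1024   # transfer 256 MiB in total

def server():
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            received = 0
            while chunk := conn.recv(CHUNK):
                received += len(chunk)
        print(f"received {received / 1e6:.1f} MB from {addr[0]}")

def client(host):
    payload = b"x" * CHUNK
    start = time.time()
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        while sent < TOTAL_BYTES:
            sock.sendall(payload)
            sent += len(payload)
    elapsed = time.time() - start
    print(f"sent {sent / 1e6:.1f} MB in {elapsed:.2f}s "
          f"({sent * 8 / elapsed / 1e6:.1f} Mbit/s)")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```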

There are ways to do this, but none of them are built into any frameworks that I am aware of.
Below are the three ways I have addressed it in the past:
Use VMware Server/ESX - What we have done most recently is to build VM images for the server and client machines, each with a mountable second drive (a data drive). We build and unit test our software; before the performance test we spin up the VMs and deploy the code to the data drives. After that we deploy a set of test scripts to the machines and kick them off (via PowerShell; a rough sketch of that kick-off step appears below). This works pretty well, has good repeatability, and lets us hand the test servers to other teams/customers for their evaluation. The downside is that it is very resource intensive.
Dedicated Server & Client Test Sets - We had two separate source repositories, one for the server and one for the client. We then went through the build as above, but one component at a time: deploying the server (and testing it against the old client), deploying the client (and testing it against the old server), and then deploying both and testing the combination. This worked fairly well, but it required some manual testing for certain scenarios and could become cumbersome if we needed to test multiple server or client changes at the same time.
Test against production only - We only ever updated the client OR the server, and then tested the updated part against the current production setup. The downside, of course, is that we had to deploy much more slowly: make incremental changes in one component, deploy, test, and release, then make changes in the other component. Rinse and repeat.
If you have the resources, I highly recommend #1. It's harder to set up initially, but it pays for itself very quickly, and once it's set up it is repeatable for other products as well (as long as they follow a relatively similar deployment pattern).
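The original kick-off step was driven by PowerShell scripts; the snippet below is only a hypothetical Python sketch of the same idea, starting the deployed test scripts on the two VMs over SSH and collecting their output. The host names, user, and script paths are placeholders, not part of the original setup.

```python
# Hypothetical orchestration sketch: kick off the deployed test scripts on the
# server and client VMs, then collect their output. Hosts, user and script
# paths are placeholders; the original setup used PowerShell remoting.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SERVER_HOST = "perf-server.example.local"   # assumed server VM
CLIENT_HOST = "perf-client.example.local"   # assumed client VM
USER = "testrunner"                         # assumed test account

def run_remote(host, command):
    """Run a command on a remote machine via ssh and return its result."""
    result = subprocess.run(
        ["ssh", f"{USER}@{host}", command],
        capture_output=True, text=True, timeout=3600,
    )
    return host, result.returncode, result.stdout

with ThreadPoolExecutor(max_workers=2) as pool:
    # In practice the server-side script usually needs a head start.
    server_job = pool.submit(run_remote, SERVER_HOST, "powershell -File D:/tests/start_server.ps1")
    client_job = pool.submit(run_remote, CLIENT_HOST, "powershell -File D:/tests/run_client.ps1")
    for job in (server_job, client_job):
        host, code, output = job.result()
        print(f"{host} exited with {code}")
        print(output)
```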

It depends on your setup. For example, I needed to test a group of web services that my team created/modified. During the test we deployed the app to one machine as the producer and used SoapUI to generate a few thousand transactions across many threads (from 1 to 100 threads, as I remember). That way we verified the response times and the SLA (service level agreement).
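The load generation in that case was done with SoapUI, but the basic idea of ramping up concurrent callers can be sketched in a few lines; the endpoint URL, thread count, and request volume below are illustrative assumptions, not the original test plan.

```python
# Illustrative multi-threaded load sketch (the original answer used SoapUI).
# The endpoint, thread count and request volume are assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://test-host.example/service/endpoint"  # placeholder endpoint
THREADS = 50                                        # the original ramped from 1 to 100
REQUESTS = 2000                                     # "a few thousand transactions"

def call_service(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=THREADS) as pool:
    latencies = sorted(pool.map(call_service, range(REQUESTS)))

print(f"avg {sum(latencies) / len(latencies):.3f}s, "
      f"p95 {latencies[int(len(latencies) * 0.95)]:.3f}s")  # compare against the SLA
```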

Related

Should load testing be done locally or remotely?

I am using a VPS for my website, so I don't believe I can access it from a local network.
I am using DigitalOcean as the VPS provider.
So where should I install tools like ab, siege, jmeter, etc.: on the VPS itself, on my own computer (the client), or on another droplet (VPS) in the same region, connecting to the web server droplet via the private network?
From my understanding, if I use those tools on the VPS itself, they might use too much of the CPU and RAM (the same CPU and RAM the web server uses) for the test to be valid.
On the other hand, testing remotely might give bad values because of a network bottleneck. Is that the case if I use another VPS on the same subnet (DigitalOcean's private network feature, for example)?
I am lost; both solutions seem wrong, so what am I missing?
The best option is to install the load generator on another VPS residing in the same subnet as the application under test. This way you will get more "clean" results that are not affected by connect times and latency.
Having both the application under test and the load generator on the same machine is not recommended, as load testing tools are themselves very resource intensive; you can end up in a situation where both applications are "struggling" for resources, so the load generator cannot send requests fast enough and the application under test cannot handle requests properly. In general, it is recommended to keep an eye on resource consumption by the application under test and the load generators to ensure that both have enough headroom; this also lets you correlate an increasing number of virtual users with increased resource consumption. You can use an APM tool, or the JMeter PerfMon Plugin if you don't have anything else in place.
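If no APM tool is available, even a small script that samples CPU and memory on the machine under test gives you the headroom check described above. This is a minimal sketch assuming the psutil package; the sampling interval and output file are arbitrary choices.

```python
# Minimal resource-sampling sketch (assumes the psutil package is installed).
# Run it on the application-under-test machine for the duration of the load
# test and stop it with Ctrl+C when the test finishes.
import csv
import time
import psutil

INTERVAL_SECONDS = 5            # assumed sampling interval
OUTPUT_FILE = "resources.csv"   # assumed output location

with open(OUTPUT_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "mem_used_mb"])
    while True:
        cpu = psutil.cpu_percent(interval=INTERVAL_SECONDS)  # averaged over the interval
        mem_mb = psutil.virtual_memory().used / (1024 * 1024)
        writer.writerow([time.strftime("%H:%M:%S"), cpu, round(mem_mb, 1)])
        f.flush()
```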
As a fallback you can use your local machine for testing; however, make sure that you have enough bandwidth (you can check it using a service like https://www.speedtest.net/) and that your ISP is aware of your plans and won't block you for fraudulent activity (the traffic might otherwise be considered a DoS attack).
We get good results using Unix machines from Amazon Web Services as load generators. You don't get as "clean" a result as Dimitri mentioned you would with a load generator located in the same network, but you get a realistic result, close to what the end user will experience. In our scenario we evaluate some key values during execution, such as CPU, DB connections, and the number of changed data sets in the database during the test. We repeat the test several times because there is always some variance in the results. A load test in the same network delivers more stable results and can be compared to a measurement in a laboratory, but I think it is very valuable to know how your application behaves in reality.
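Because the results vary from run to run, it helps to report the spread alongside the average; a tiny bookkeeping sketch with made-up sample numbers follows.

```python
# Toy sketch: summarize repeated load-test runs. The numbers are invented.
from statistics import mean, stdev

# e.g. requests/second measured in five repeated runs against the same build
runs = [412.0, 398.5, 405.2, 421.7, 401.3]

avg = mean(runs)
spread = stdev(runs)
print(f"avg {avg:.1f} req/s, stdev {spread:.1f} ({spread / avg:.1%} of the mean)")
```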

Application server hosting

I'm writing a Qt/C++ application and I plan to add a networking part, with a socket connection to a server also implemented in Qt.
If I host the server locally there is no real problem.
But if I want to share my application (the client part) with some people and be sure my server is always running, the best approach would be a remote server.
Could you give me some clues on how to do this? The steps to follow in this case are still not clear to me.
Is there a better way to do this?
Can I find free hosting?
Thanks a lot! :-)
There are generally 3 options:
1. Local hosting
This is a server running at your physical location. You can set it up exactly as you want and the server will do whatever you want, but it must stay turned on the whole time; when there is no other work it will just consume power. You must also provide all the hardware (server components), the software to run it (an operating system), the network equipment and connection (a router, which may need special setup [NAT, port forwarding, ...], plus sufficient speed and reachability of the internet connection), and most likely some security device or software (firewalls and so on).
This is the best option for basic development and testing, but once the service should serve a public audience, it is not really worth running the server yourself.
2. Remote hosting (virtualized or dedicated server)
This has been the dominant option over the last 20-30 years, with web and app developers putting their software on a prepared server. A dedicated server is a physical server running at a provider's location; the provider rents you the hardware (and possibly licenses for the OS and other software). A virtualized machine is one piece of hardware (a server) running multiple virtual servers, i.e. several clients sharing the same hardware.
The general benefit is that the networking, security, and hardware issues are handled by the hosting owner; you are just renting disk space and compute time/performance. Normally the company will provide a whole server on which you can set up several services, run multiple protocols, and so on.
An ideal solution for websites and a single instance (or a few instances) of a server application.
3. Cloud hosting
This is the newest of the three options (around for 10-15 years; e.g. AWS has been running since 2006, Azure since 2010). The datacenter owners (from point 2) built applications on top of their servers that do most of the work for you, largely automatically. In a few clicks the servers are running and you can deploy applications, database engines, web pages, IoT hubs, ... quite a lot of things. The clear benefit is that you spend a minimum of time setting things up and they just run, with high uptime (e.g. 99.9995%).
The difference between dedicated and cloud: on a dedicated server you can install almost any OS that fits your needs, run only the services you want, and keep full control. In a cloud solution you don't have as much "physical" control, and the data more or less lives in datacenters all over the world, but it is generally a more scalable solution, and once your app is used by lots of public users, it is the best way to go.
Common approach:
The most common approach is to develop against a local server on which you deploy, test, and improve. Once the application is stable, order a server (either in the cloud or as a dedicated/virtual machine) and deploy it there. Some developers know from the very beginning that their app will run on cloud services, so they order them and develop against them from the start, but in most cases there is no need for that.

How to estimate load for a live server and tailor it to a less powerful QA server?

I understand that a test environment should match the live environment as much as possible. Unfortunately in my case (the project is a Web CMS) a QA server with the same specifications as the live one cannot be supplied.
Does it make sense to define the load metrics and scale them down proportionally to account for a server with reduced specifications? Would the margin of error of the results be reasonable?
Otherwise what is a sensible approach and can you point me to specific literature that might address this problem?
You cannot step into the same river twice. Even if your live server and test server are perfectly identical in configuration, usage, data, etc., you might still see differences in the results.
If in your case the test environment is weaker than the live one, you can benefit from the situation: you might detect scalability problems in the test environment before they occur on the live version. However, to maximize the chance of this, you need to implement some stress tests simulating many users using costly features at the same time.
Running a load test against a scaled-down environment won't do any harm and may be very helpful; among the things you can test are:
Checking for memory leaks. If you conduct a long-running test and there are issues in your application, such as file handles that are never closed or memory that is not freed when an object is no longer required, they will be easier to detect on a deployment on the low-spec server.
Checking component integration and configuration, such as web/application server settings, database server settings, database connectivity parameters, etc.
If your application scales, you can test that functionality as well, determine the scaling factor, and project it to the live system (a rough projection sketch follows this answer).
You can run your test under a profiler tool's telemetry to detect performance-related bugs that would otherwise surface in production.
So even if you cannot use the live server in its dead time (i.e. nights or weekends), you can still add value by running tests on a lower-spec server. See the Performance Testing in a Scaled Down Environment. Part Two: 5 Things You Can Test article for a more detailed explanation of some of the above points and extra information.
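As a rough illustration of the scaling-factor idea mentioned in the list above, the toy calculation below projects a QA result to the production hardware; the core counts and measured throughput are invented numbers, and real scaling is rarely this linear.

```python
# Toy projection of QA load-test results to production hardware.
# All numbers are invented; treat the result as a rough upper bound only.
qa_cores, prod_cores = 4, 16    # assumed hardware difference
qa_peak_rps = 180.0             # assumed throughput sustained on the QA server

scaling_factor = prod_cores / qa_cores
projected_rps = qa_peak_rps * scaling_factor

print(f"scaling factor: {scaling_factor:.1f}x")
print(f"projected production peak: ~{projected_rps:.0f} req/s")
```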

Performance test done on client or server?

I have a website running on ASP.NET Web Forms. I am planning to run performance tests using the Visual Studio 2013 Ultimate edition test tools. I have a very basic question.
Where is the performance test supposed to be run?
From a client machine or from the server?
Could you please point me to a good article on this?
It depends on what exactly you want to measure.
To measure the user experience, run the test on a computer located similarly to those of your website's users. To measure the raw performance of the website, run it on a computer located where network delays and bandwidth will not be an issue.
Another point to consider is how much monitoring of the servers you want to perform when running the test. Visual Studio load tests can collect performance counters from other computers during the test run; these might be the web servers, database servers, application servers, etc. Getting access to the performance counters of these servers generally requires firewall and other permission changes. The counters also consume network bandwidth as they are transferred to the test computer(s). If these performance counters must be collected, the test computer(s) may need to be inside the company's firewalls.
If your test does collect performance counters from the various servers then one interesting test variation is a "do nothing" test that just collects the counters over a period of time during real use of the system. That provides a way of validating that the tests were representative of real use.
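Outside of the Visual Studio tooling, that "do nothing" baseline can be approximated by a small script that only samples counters for a while; the sketch below shells out to the Windows typeperf utility, and the counter names, interval, and duration are illustrative assumptions.

```python
# Hypothetical baseline sketch: sample a few Windows performance counters for
# ten minutes without generating any load, using the built-in typeperf tool.
# The counter names, interval and duration are illustrative choices.
import subprocess

counters = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\ASP.NET Applications(__Total__)\Requests/Sec",
]

subprocess.run(
    ["typeperf", *counters,
     "-si", "5",                          # sample every 5 seconds
     "-sc", "120",                        # 120 samples = 10 minutes
     "-f", "CSV",
     "-o", "baseline_counters.csv",
     "-y"],                               # overwrite the output file if present
    check=True,
)
```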

BizTalk EDI test and production environment setup

Currently, I have set up BizTalk Server for a few parties for EDI communication in production.
Note: there is a third-party tool in place that transfers the EDI over the network (i.e. Datatrans).
Now, I would like to set up a test environment where I can have separate locations for sending and receiving test EDI.
What is the best way to set up a test environment in the above case?
You haven't mentioned whether you have a separate test environment available, so I would suggest one of the two following options:
Establish a separate test environment and deploy your (current production) solution to it, to be used purely for testing. EDI messages can be received from and sent to the local file system to mimic your third-party Datatrans software, or via any other protocol you see fit (e.g. FTP). Having a test environment is good practice full stop, and reduces the risk of breaking your production environment while testing a change.
Set up test trading partners in your production environment and route their messages to a pickup location that Datatrans isn't monitoring.
I would strongly suggest option 1, as testing in your production environment is never a good thing (apart from a small subset of cases).
