BizTalk EDI test and production environment setup

Currently, I have set up BizTalk Server for a few parties for EDI communication in production.
Note: there is a third-party tool in place (Datatrans) which transfers the EDI over the network.
Now, I would like to set up a test environment with separate locations for sending and receiving test EDI.
What is the best way to set up a test environment in this case?

You haven't mentioned whether you have a separate test environment available, so I would suggest one of the two following options:
1. Establish a separate test environment and deploy your (current production) solution to this environment, to be used purely for testing. EDI messages can be received from and sent to the local file system to mimic your third-party Datatrans software, or via any other protocol you see fit (e.g. FTP). Having a test environment is good practice, full stop, and reduces the risk of breaking your production environment while testing a change.
2. Set up test trading partners in your production environment and route these messages to a pickup location that Datatrans isn't monitoring.
I would strongly recommend option 1, as testing in your production environment is almost never a good idea (apart from a small subset of cases).
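For option 1, a small script is often enough to stand in for Datatrans: something that watches the folder your send port writes to and moves files into the folder your test receive location polls. A minimal sketch, where all folder paths and the polling interval are assumptions for illustration:

```python
# Minimal stand-in for a third-party EDI transport (e.g. Datatrans):
# move files dropped by the BizTalk send port into the folder that the
# test receive location polls. All paths here are illustrative.
import shutil
import time
from pathlib import Path

OUTBOUND = Path(r"C:\EDI\Test\Outbound")   # BizTalk send port target
INBOUND = Path(r"C:\EDI\Test\Inbound")     # BizTalk receive location

while True:
    for edi_file in OUTBOUND.glob("*.edi"):
        # Move rather than copy so each interchange is delivered once.
        shutil.move(str(edi_file), str(INBOUND / edi_file.name))
        print(f"delivered {edi_file.name}")
    time.sleep(5)
```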

Related

Should load testing be done locally or remotely?

I am using a VPS for my website, so I don't believe I can access it from the local network.
I am using DigitalOcean as the VPS provider.
So where should I install tools like ab, siege, JMeter, etc.: locally on the VPS, on my own computer (the client), or on another droplet (VPS) in the same region, connecting to the web server droplet via the private network?
From my understanding, if I use those tools on the VPS itself, they might consume too much of the CPU and RAM (the same CPU and RAM the web server uses) for the test results to be valid.
On the other hand, testing remotely might yield bad values because of network bottlenecks. Is this still the case if I use another VPS on the same subnet (via DigitalOcean's private network feature, for example)?
I am lost; both solutions seem wrong, so what am I missing?
The best option is to install the load generator on another VPS residing in the same subnet as the application under test. This way you will get "cleaner" results, not affected by connect times and latency.
Having both the application under test and the load generator on the same machine is not recommended, as load testing tools are themselves very resource-intensive; you may run into a situation where both applications are "struggling" for resources, so the load generator cannot send requests fast enough and the application under test cannot handle requests properly. In general, it is recommended to keep an eye on resource consumption by the application under test and the load generators, in order to ensure that both have enough headroom; you will also be able to correlate an increasing number of virtual users with increased resource consumption. You can use an APM tool, or the JMeter PerfMon Plugin if you don't have anything else in place.
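As a rough illustration of that headroom check, a small script polling CPU and memory on the load generator (using the third-party psutil package) might look like this; the 80% thresholds and 5-second interval are arbitrary examples:

```python
# Rough headroom check for a load generator (or application server):
# poll CPU and memory with psutil and warn when either runs hot.
# The 80% thresholds and 5-second interval are arbitrary examples.
import psutil

while True:
    cpu = psutil.cpu_percent(interval=5)   # averaged over the interval
    mem = psutil.virtual_memory().percent
    print(f"cpu={cpu:.0f}% mem={mem:.0f}%")
    if cpu > 80 or mem > 80:
        print("WARNING: results may be skewed - not enough headroom")
```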
As a fallback you can use your local machine for testing; however, make sure that you have enough bandwidth (you can check it using e.g. the https://www.speedtest.net/ service) and that your ISP is aware of your plans and won't block you for fraudulent activity (the traffic might be mistaken for a DoS attack).
We get good results using Unix machines from Amazon Web Services as load generators. You do not get as clean a result as Dimitri mentioned when the load generator is located in the same network, but you do get a realistic result, like the one the end user will see. In our scenario we evaluate some key values during execution, such as CPU, DB connections, and the number of changed data sets in the DB during the test. We repeat the test several times because there is always some variance in the results. A load test in the same network will deliver more stable results and can be compared to a measurement in a laboratory, but I think it is very valuable to know how your application behaves in reality.

How to estimate load for a live server and tailor it to a less powerful QA server?

I understand that a test environment should match the live environment as closely as possible. Unfortunately, in my case (the project is a web CMS), a QA server with the same specifications as the live one cannot be supplied.
Does it make sense to define the load metrics and scale them down proportionally to account for a server with reduced specifications? Would the margin of error in the results be reasonable?
Otherwise what is a sensible approach and can you point me to specific literature that might address this problem?
You cannot step into the same river twice. Even if your live server and test server are perfectly identical in configuration, usage, data, etc., you might see differences in the results.
If in your case the test environment is weaker than the live one, you can benefit from the situation: you might detect scalability problems in the test environment before they occur in the live version. However, to maximize the chance of this, you need to implement some stress tests simulating the case when many users are using costly features at the same time, along the lines of the sketch below.
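A hedged sketch of such a stress test; the URL, user count, and notion of a "costly feature" are placeholders, not recommendations:

```python
# Sketch of a stress test: many concurrent users hitting an assumed
# costly endpoint at once. URL and user count are placeholders.
from concurrent.futures import ThreadPoolExecutor
import time
import requests

COSTLY_URL = "http://qa-server/cms/search?q=*"   # assumed costly feature
USERS = 50

def one_user(i):
    start = time.monotonic()
    r = requests.get(COSTLY_URL, timeout=30)
    return r.status_code, time.monotonic() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(one_user, range(USERS)))

errors = sum(1 for status, _ in results if status >= 500)
slowest = max(elapsed for _, elapsed in results)
print(f"{USERS} users: {errors} server errors, slowest {slowest:.1f}s")
```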
Running a load test against a scaled-down environment won't do any harm, and it may be very helpful as well. Among the things you can test are:
Checking for memory leaks. If you conduct a long-running test and there are issues with your application, such as file handles that are never closed or memory that is not freed when an object is no longer required, it will be easier to detect them on the low-spec server.
Checking component integration and configuration, such as web/application server settings, database server settings, database connectivity parameters, etc.
If your application scales, you can test this functionality as well, determine the scaling factor, and project it to the live system (a rough calculation is sketched after this list).
You can run your test under a profiler tool's telemetry to detect performance-related bugs before they occur in production.
So if you cannot use the live server even during quiet periods (i.e. nights or weekends), you can still add value by running tests on a lower-spec server. See the Performance Testing in a Scaled Down Environment. Part Two: 5 Things You Can Test article for a more detailed explanation of some of the above points and extra information.
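On the scaling-factor point, one crude way to project numbers between environments is a simple proportion. Treat this as a first approximation only, since throughput rarely scales linearly with hardware; all numbers here are made up for illustration:

```python
# Crude projection of a QA result onto the live server by hardware
# ratio. Assumes roughly linear scaling, which is optimistic; all
# numbers are made up for illustration.
qa_cores, live_cores = 4, 16
qa_throughput_rps = 120          # measured on the QA server

scaling_factor = live_cores / qa_cores
projected_live_rps = qa_throughput_rps * scaling_factor
print(f"naive projection: {projected_live_rps:.0f} requests/sec")
# Validate the factor by measuring at two or more QA load levels
# before trusting any projection to production.
```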

Should performance tests be run on the client or the server?

I have a website running on ASP.NET Web Forms. I am planning to run performance tests using the Visual Studio 2013 Ultimate edition test tools. I have a very basic question:
Where is the performance test supposed to be run?
From a client machine or from the server?
Could you please point me to a good article on this?
It depends on what exactly you want to measure.
To measure the user experience, run it on a computer located similarly to those of your website's users. To measure the raw performance of the website, run it on a computer located where network delays and bandwidth will not be an issue.
Another point to consider is how much monitoring of the servers you want to perform while running the test. Visual Studio load tests can collect performance counters from other computers during the test run; these might be the web servers, database servers, application servers, etc. Getting access to the performance counters of these servers generally requires firewall and other permission changes. The counters also consume network bandwidth as they are transferred to the test computer(s). If these performance counters must be collected, that may require the test computer(s) to be inside the company's firewalls.
If your test does collect performance counters from the various servers, then one interesting variation is a "do nothing" test that just collects the counters over a period of time during real use of the system. That provides a way of validating that the tests were representative of real use.
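If you wanted to approximate that "do nothing" baseline outside Visual Studio, the Windows typeperf utility can log counters to a CSV file for a fixed period. A minimal sketch; the counter paths, server name, interval, and sample count are all placeholders:

```python
# Approximate a "do nothing" baseline outside Visual Studio: log a few
# Windows performance counters to CSV with typeperf. Counter paths,
# server name, interval and sample count are placeholders.
import subprocess

subprocess.run([
    "typeperf",
    r"\\WEBSERVER01\Processor(_Total)\% Processor Time",
    r"\\WEBSERVER01\Memory\Available MBytes",
    "-si", "15",            # sample every 15 seconds
    "-sc", "240",           # 240 samples = one hour of real use
    "-f", "CSV",
    "-o", "baseline.csv",
], check=True)
```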

How to automate integration testing that requires multiple computers?

How do you automate integration testing that requires two or more PCs (a distributed app)? What's your strategy for performing integration testing (or performance testing) in cases where multiple machines are involved?
Example
We need to integration-test our client/server app. To mimic the live system, we need to deploy the client on one machine and the server on another, then measure the TCP transfer speed (a minimal measurement sketch follows).
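For the TCP transfer-speed measurement itself, a small script deployed to both machines can suffice; the port, payload size, and total volume below are arbitrary illustrative choices:

```python
# Minimal TCP throughput check between two machines. Run "recv" mode
# on the server box, and pass the server's address on the client box.
# Port, chunk size and total volume are arbitrary.
import socket
import sys
import time

PORT, CHUNK, TOTAL_MB = 5001, 64 * 1024, 100

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.monotonic() - start
    print(f"{received / 1e6:.0f} MB in {secs:.1f}s "
          f"= {received * 8 / secs / 1e6:.0f} Mbit/s")

def sender(host):
    sock = socket.create_connection((host, PORT))
    payload = b"x" * CHUNK
    for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
        sock.sendall(payload)
    sock.close()

if __name__ == "__main__":
    # usage: "python tcptest.py recv" or "python tcptest.py <server-ip>"
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[1])
```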
There are ways to do this, but none of them are built into any framework that I am aware of.
Below are the three ways I have addressed it in the past:
Use VMware Server/ESX - What we have done most recently is build VM images for the server and client machines, each with a mountable second drive (data drive). We build and unit-test our software; before the performance test we spin up the VMs, then deploy the code to the data drives. After that we deploy a set of test scripts to the machines and kick them off (via PowerShell). This works pretty well, has good replayability, and allows us to give the test servers to other teams/customers for their evaluation. The downside is that it is very resource-intensive.
Dedicated server and client test sets - We had two different source repositories, one for the server and one for the client. We then went through the build as above, but one at a time: deploying the server (and testing it against the old client), deploying the client (and testing it against the old server), and then deploying both and testing the combination. This worked fairly well, but required some manual testing for certain scenarios and could get cumbersome if we needed to test multiple server or client changes at the same time.
Test against production only - We only ever updated the client OR the server, then tested the updated part against the current production setup. The downside, of course, is that we had to deploy much more slowly and make incremental changes in one system or the other: deploy, test, and release, then make changes in the other component. Rinse and repeat.
If you have the resources, I highly recommend #1. It's harder to set up initially, but it pays for itself very quickly, and once it's set up it is repeatable for other products as well (as long as they follow a relatively similar deployment pattern).
It depends on your setup. For example, I needed to test a group of web services that my team created/modified. During the test we deployed the app to one machine as the producer and used SoapUI to generate a few thousand transactions across many threads (from 1 to 100 threads, as I remember). That way we verified the response and the SLA (service level agreement).

Test harness software for networking failures

During integration testing it is important to simulate various kinds of low-level networking failures to ensure that the components involved handle them properly. Some socket connection examples (from the Release It! book by Michael Nygard) include:
connection refused
remote end replies with SYN/ACK but never sends any data
remote end sends only RESET packets
connection established, but remote end never acknowledges receiving packets, causing endless retransmissions
and so forth.
It would be useful to simulate such failures for integration testing involving web services, database calls and so forth.
Are there any tools available able to create failure conditions of this specific sort (i.e. socket-level failures)? One possibility, for instance, would be some kind of dysfunctional server that exhibits different kinds of failure on different ports.
EDIT: After some additional research, it looks like it's possible to handle this kind of thing using a firewall. For example, iptables has options that allow you to match packets (either randomly, according to some configurable probability, or on an every-nth-packet basis) and then drop them. So I am thinking that we might set up our "nasty server" with firewall rules configured on a port-by-port basis to create the kinds of nastiness we want to test our apps against. I would be interested to hear thoughts on this approach.
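For illustration, the iptables statistic match supports both modes mentioned above. A minimal sketch applying per-port loss rules from a script (run as root on the test server; the ports and probabilities are examples, not recommendations):

```python
# Illustrative sketch: apply per-port packet-loss rules with the
# iptables "statistic" match, via subprocess. Must run as root.
# Port numbers and probabilities are examples only.
import subprocess

RULES = [
    # Drop ~20% of inbound TCP packets to port 8081, at random.
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "8081",
     "-m", "statistic", "--mode", "random", "--probability", "0.2",
     "-j", "DROP"],
    # Drop every 5th inbound TCP packet to port 8082.
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "8082",
     "-m", "statistic", "--mode", "nth", "--every", "5", "--packet", "0",
     "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(rule, check=True)
```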
bane is built for this purpose, described as:
Bane is a test harness used to test your application's interaction with other servers. It is based upon the material from Michael Nygard's "Release It!" book as described in the "Test Harness" chapter.
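Bane itself is a Ruby tool, but the core idea is simple. As a rough Python analogue of one of its behaviours, here is a listener that completes the TCP handshake and then never sends any data, so clients without read timeouts hang forever (the port is an arbitrary choice):

```python
# Hedged Python analogue of one Bane-style behaviour: accept TCP
# connections but never respond, exercising client read timeouts.
# Port 10001 is arbitrary.
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 10001))
listener.listen(5)

open_connections = []
while True:
    conn, addr = listener.accept()   # complete the handshake...
    print(f"accepted {addr}, now staying silent")
    open_connections.append(conn)    # ...then never write or close
```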
(Edit 2021): a more fully developed tool for testing behaviour under different network conditions is Toxiproxy:
Toxiproxy is a framework for simulating network conditions. It's made specifically to work in testing, CI and development environments, supporting deterministic tampering with connections, but with support for randomized chaos and customization. Toxiproxy is the tool you need to prove with tests that your application doesn't have single points of failure.
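A sketch of driving Toxiproxy through its HTTP API (which listens on port 8474 by default), assuming a Toxiproxy server is already running; the proxy name, ports, and latency values are illustrative:

```python
# Sketch: drive a running Toxiproxy server via its HTTP API (default
# port 8474). Names, ports and latency values are illustrative.
import requests

API = "http://localhost:8474"

# Create a proxy that forwards localhost:26379 to the real service.
requests.post(f"{API}/proxies", json={
    "name": "mysql_test",
    "listen": "127.0.0.1:26379",
    "upstream": "127.0.0.1:3306",
}).raise_for_status()

# Add 1000 ms of latency (+/- 250 ms jitter) to downstream traffic.
requests.post(f"{API}/proxies/mysql_test/toxics", json={
    "name": "slow_link",
    "type": "latency",
    "stream": "downstream",
    "toxicity": 1.0,
    "attributes": {"latency": 1000, "jitter": 250},
}).raise_for_status()

# Point the application under test at 127.0.0.1:26379, observe how it
# copes, then delete the proxy to restore normal traffic.
requests.delete(f"{API}/proxies/mysql_test")
```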
Take a look at dummynet.
You can do it with iptables, or you can do it without actually sending the packets anywhere with ns-3, possibly combined with your favourite virtualisation solution, or you can do all sorts of strange things with scapy.
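As one example of the "strange things" scapy allows, here is a hedged sketch simulating "remote end sends only RESET packets": it answers any SYN to a chosen port with a RST. It requires root, and the port is an assumption for illustration (in practice you would also firewall the port so the kernel's own stack stays out of the way):

```python
# Hedged scapy sketch: simulate "remote end sends only RESET packets"
# by answering every SYN to port 9999 with a RST. Requires root; the
# port is an arbitrary choice for illustration.
from scapy.all import IP, TCP, send, sniff

def reset_handshake(pkt):
    if pkt.haslayer(TCP) and pkt[TCP].flags == "S":  # inbound SYN
        ip, tcp = pkt[IP], pkt[TCP]
        rst = IP(src=ip.dst, dst=ip.src) / TCP(
            sport=tcp.dport, dport=tcp.sport,
            flags="RA", seq=0, ack=tcp.seq + 1)
        send(rst, verbose=False)

sniff(filter="tcp dst port 9999 and tcp[tcpflags] & tcp-syn != 0",
      prn=reset_handshake)
```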
