Monitoring integration points

Our company is working on integrating Guidewire (a claims processing system) into our existing claims system. We will be executing performance tests on the integrated system shortly. I wanted to know whether there is some way to monitor the integration points specific to Guidewire.
The system is connected through Web Services. We have access to LoadRunner and SiteScope, and are comfortable using other open source tools as well.
I realize monitoring WSDL files is an option. Could you suggest additional methods to monitor the integration points?

Look at the architecture of Guidewire. You have OS monitoring points and you have application monitoring points. The OS side is straightforward using SiteScope, SNMP (with SiteScope or LoadRunner), Hyperic, native OS tools, or a tool like Splunk.
You likely have a database involved as well; that monitoring case is well known and well understood.
Monitoring the services? Ask the application experts inside your organization what they look at to determine whether the application is healthy and running well. You might end up implementing a set of terminal users (RTE) with data points, log monitoring through SiteScope, or custom monitors scheduled to run on the host, piping their output through sed into a standard form that can be imported into Analysis at the end of the test.
Think architecturally. Decompose each host in the stack into OS and services. Map your known monitors to the hosts and layers. Where you run into gaps, grab the application experts and have them write down the monitors they use (they will have more faith in your results and analysis as a result).
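As a rough illustration of the "custom monitor piped into a standard form" idea, the sketch below (a Python stand-in for a sed pipeline) runs a host-side command on a schedule and normalizes its output into timestamped CSV rows that can later be imported into Analysis. The command and the "name value" output format are placeholder assumptions, not anything Guidewire-specific.

```python
# Hedged sketch: sample a host-side monitor command periodically and emit
# timestamped CSV rows (timestamp, metric, value) for later import into
# Analysis. The command and its output format are illustrative placeholders.
import csv
import subprocess
import sys
import time

MONITOR_CMD = ["ps", "-eo", "comm,%cpu", "--no-headers"]  # placeholder monitor
INTERVAL_S = 15

def sample(writer):
    out = subprocess.run(MONITOR_CMD, capture_output=True, text=True).stdout
    ts = time.strftime("%Y-%m-%d %H:%M:%S")
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            # first token = metric name, last token = numeric value
            writer.writerow([ts, parts[0], parts[-1]])

if __name__ == "__main__":
    writer = csv.writer(sys.stdout)
    writer.writerow(["timestamp", "metric", "value"])
    while True:
        sample(writer)
        sys.stdout.flush()
        time.sleep(INTERVAL_S)
```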


Load testing should be done locally or remotely?

I am using a VPS for my website, so I don't believe I can access it from a local network.
I am using DigitalOcean as the VPS provider.
So where should I install tools like ab, siege, JMeter, etc.: locally on the VPS, on my own computer (the client), or on another droplet (VPS) in the same region that connects to the web server droplet via the private network?
From my understanding, if I use those tools on the VPS itself, they might use too much of the CPU and RAM (the same CPU and RAM the web server uses) for the test to be valid.
On the other hand, testing remotely might produce bad values because of a network bottleneck. Is this the case if I use another VPS on the same subnet (using DigitalOcean's private network feature, for example)?
I am lost; both solutions seem wrong, so what am I missing?
The best option is to install the load generator on another VPS residing in the same subnet as the application under test; this way you will get "cleaner" results that are not impacted by connect times and latency.
Having both the application under test and the load generator on the same machine is not recommended, as load testing tools are themselves very resource intensive and you may run into a situation where both applications are "struggling" for resources; the load generator then cannot send requests fast enough and the application under test cannot handle requests properly. In general it is recommended to keep an eye on resource consumption by the application under test and the load generators in order to ensure that both have enough headroom; you will also be able to correlate an increasing number of virtual users with increased resource consumption. You can use an APM tool, or the JMeter PerfMon Plugin if you don't have any alternative in place.
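As a minimal sketch of that headroom check (not a replacement for an APM tool or the PerfMon Plugin), the following snippet, assuming the psutil package is installed on the host being watched, logs CPU and memory utilization and warns when either exceeds an arbitrary 80% threshold:

```python
# Hedged sketch: watch resource headroom on the load generator or the
# application under test during a run. Requires the third-party psutil
# package; the 80% threshold is an arbitrary illustration, not a hard rule.
import psutil

def log_headroom(interval_s=5):
    while True:
        cpu = psutil.cpu_percent(interval=interval_s)  # averaged over interval
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:.0f}% mem={mem:.0f}%")
        if cpu > 80 or mem > 80:
            print("WARNING: under 20% headroom - results may be skewed")

if __name__ == "__main__":
    log_headroom()
```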
As a fallback you can use your local machine for testing; however, make sure that you have enough bandwidth (you can check it using a service like https://www.speedtest.net/) and that your ISP is aware of your plans and won't block you for fraudulent activity (the test might be mistaken for a DoS attack).
We get good results using Unix machines from Amazon Web Services as load generators. You do not get as clean a result as Dimitri mentioned you would when the load generator is located in the same network, but you get a realistic result, like the one the end user will get. In our scenario we evaluate some key values during execution, such as CPU, DB connections, and the number of changed data sets in the DB during the test. We repeat the test several times because there is always some variance in the results. A load test in the same network will deliver more stable results and can be compared to a measurement in a laboratory, but I think it is very valuable to know how your application behaves in reality.

How does the Realm Mobile Platform scale?

You could say I am a fan of the Realm Mobile Platform. I'm using it and it seems to be working well.
However, I am confused about how to operate it in production. It seems to be deployed to only one server, and even the Professional and Enterprise editions are running on my single server.
Assuming Realm have thought of this (as the Enterprise edition supports 'enterprise scaling'), how does this work if all clients point to my own server URL?
Another question is how to monitor the load on that server.
Thanks!
The Professional Edition and the Enterprise Edition emit statsd-compatible metrics which allow you to track the usage of, and load on, each node in a Realm Object Server cluster. These metrics are also used internally inside the cluster to display statistics about the health of the cluster.
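For a quick look at what a node actually emits, a throwaway statsd sink can help. This is only a sketch: it assumes the default statsd UDP transport on port 8125, and the metric names you will see depend on the Realm Object Server version, so treat the output as exploratory.

```python
# Hedged sketch: a toy statsd sink. Point the server's statsd endpoint at this
# host/port and print whatever arrives. Assumes plain UDP statsd datagrams of
# the form "metric.name:value|type[|@rate]" on the default port 8125.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 8125))

while True:
    data, addr = sock.recvfrom(4096)
    for metric in data.decode("utf-8", errors="ignore").splitlines():
        name, _, rest = metric.partition(":")
        value, _, mtype = rest.partition("|")
        print(f"{addr[0]}  {name} = {value} ({mtype or '?'})")
```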
We are obviously still adding metrics as we understand more about our customers' use cases, and fine-tuning the ones that we have.
With regard to the way the clustering works, we are implementing it iteratively, adding more features and more resilience to the system with every passing day.
Basically, we have a logical load balancer process which receives the incoming client connections and then dispatches them to a node inside the cluster. This logical load balancer can itself be made highly available and load balanced, just like any regular WS connection handler. Handling many connections these days is easy; it's the quadratic merge algorithms that are expensive on the Realm Object Server, which is why clustering is required for deployments at scale.

How can I get data from a scale into a web application?

(If you think I should ask this question elsewhere, please let me know.)
Background:
I need to build an application for converting weights into piece counts. The weights currently come from scales that are connected to PCs via serial ports. I am replacing PC-based applications that connect to the scales over a serial connection, and I am considering the feasibility of making the next generation of these applications a web-based solution. However, I do not want to do this if it is not a better solution than building an application that runs on the client. In addition, I do not want to use any sort of browser-specific technology (ActiveX).
FYI, we currently run a Windows based environment.
What I have so far:
I am currently thinking that I will need some sort of client-side "service" to allow the scale data to be retrieved by the web application. I have looked into creating a WCF service for this task and have determined that it would probably work. This would require that the scale be connected to some sort of Windows-based computer that is on the network. I would then call the WCF service (running as a Windows service on the PC) from an ASP.NET web application running on an IIS web server. This would minimize the footprint on the client and allow us to use a web application.
I am looking for any constructive thoughts and ideas. I am open to reviewing any feasible option that would make this solution as simple and reliable as possible.
Answering my own question, per request.
I discovered two viable options for this purpose. Following are high-level overviews of the techniques we leveraged.
Develop a scale reader to be run on a PC connected to the weigh scale device via an RS-232 connection. This reader forwards any information received from the scale into a database. Combined with technologies like change notifications and server-side push notifications, this option allows data from a weigh scale to be pushed into a web page with little effort and no additional cost (this option has performed well during testing but is not yet in production). A rough sketch of such a reader appears at the end of this answer.
Invest in converting the weigh scale devices to use Ethernet connections and connect them to the network. Use an OPC server with a driver that can connect to the weigh scales you are using to read the data from these devices; consider KEPWare's offering for this purpose. Use KEPWare's tools to forward this data to a database or wherever it is needed. Once again, you can leverage change notifications and server-side push technologies to push this data into web applications in near real time without polling. (This option is currently working in a critical production environment.)
The second option is probably better in the long term, but this may vary based on your specific situation. It has some up-front costs and is better suited to new implementations. For my system, I am using the first option because it will ease the transition between the new and old systems.
Note: I am not in any way associated with KEPWare. I am only suggesting their product because it is the only one I am aware of that supports this functionality. I am sure there are other OPC servers that support this type of device.
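For what it's worth, here is a minimal sketch of the first option's reader. It assumes a scale that streams bare ASCII weight readings line by line over RS-232 and a web backend that accepts JSON posts; the port name, baud rate, and endpoint URL are placeholders, and real scales vary in framing and units.

```python
# Hedged sketch of option 1: read ASCII weight lines from a serial-attached
# scale and forward them to a web backend, which can then push the value to
# browsers via change notifications. Requires the pyserial and requests
# packages; port, baud rate, and endpoint are illustrative placeholders.
import serial
import requests

PORT = "COM3"                                    # hypothetical serial port
BAUD = 9600
ENDPOINT = "https://example.local/api/weights"   # hypothetical backend URL

def run():
    with serial.Serial(PORT, BAUD, timeout=1) as scale:
        while True:
            line = scale.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue                         # read timed out; poll again
            try:
                weight = float(line)             # assumes a bare numeric reading
            except ValueError:
                continue                         # skip status or garbage lines
            requests.post(ENDPOINT, json={"weight": weight}, timeout=5)

if __name__ == "__main__":
    run()
```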

How to test Network monitoring?

I'm currently building a network monitoring system that will notify me of any interface errors or network issues. After building it we would like to be able to test whether it works before deploying it to our network, so I need a way of simulating network interface errors on a switch or other networking device.
I was thinking about cutting Ethernet cables or terminating them incorrectly, but ideally I need something that can create lots of different types of interface errors.
Any help would be much appreciated.
Sean
You could download Nagios, which is a powerful, enterprise-class host, service, application, and network monitoring program. It is designed to be fast, flexible, and rock-solid stable. Nagios runs on *NIX hosts and can monitor Windows, Linux/Unix/BSD, NetWare, and network devices.
You can also download other network monitoring systems from SourceForge; they host many different network tools written in different languages, and most of them are open source. You can take notes on their design and maybe add ideas to the application you are building.
If you want to test your application, the best thing to do is to test it in a real environment; I believe there might be one or two virtual labs that could help.
But ideally I would test it on real interfaces.
One of the ways to simulate network failures is to dynamically change the firewall settings. You can make packets drop, make hosts disappear, etc. This doesn't require any physical damage to anything. :)
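A closely related trick on a Linux test host, instead of firewall rules, is traffic shaping with tc/netem. The sketch below is only illustrative: it assumes a Linux box with root access and the sch_netem module, uses a placeholder interface name, and injects loss, corruption, and delay for a couple of minutes so the monitoring system has something to alert on.

```python
# Hedged sketch: temporarily inject packet loss, corruption and delay with
# tc/netem, then restore the interface. Requires root on a Linux host; "eth0"
# is a placeholder interface name.
import subprocess
import time

IFACE = "eth0"

def run(cmd, check=True):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=check)

try:
    # Drop 10% of packets, corrupt 5%, and delay everything by 200 ms.
    run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         "loss", "10%", "corrupt", "5%", "delay", "200ms"])
    time.sleep(120)   # give the monitoring system time to notice and alert
finally:
    # Restore the interface; don't fail if the qdisc was never added.
    run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"], check=False)
```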

What tools does your company use to manage application performance of asp.net applications?

I am not talking about application profilers or debuggers, but more specifically about managing the applications in a production environment: essentially monitor, identify bottlenecks, and deploy fixes.
For monitoring that the application is up and running we use Nagios.
We also use good old performance monitor for monitoring database connections, memory consumption and CPU usage.
We use IPMonitor to verify uptime, and it has a lot of options for pinging the site for keyword validation, HTTP response validation, and response time. You can also use SNMP to check the responsiveness of the processor and RAM and the remaining space on hard disks, among many other options. It supports multiple servers and types of servers, not just web or database servers.
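To make the kind of check these tools run concrete, here is a rough sketch of a keyword/response-time probe. The URL and keyword are placeholders, and a real setup would alert (email, pager, etc.) rather than just print.

```python
# Hedged sketch of an uptime probe: fetch a URL, confirm an expected keyword
# is present, and record the response time. URL and keyword are placeholders.
import time
import requests

URL = "https://example.com/"   # hypothetical site to watch
KEYWORD = "Welcome"            # hypothetical keyword expected in the page

def check():
    start = time.monotonic()
    try:
        resp = requests.get(URL, timeout=10)
    except requests.RequestException as exc:
        print(f"DOWN: {exc}")
        return False
    elapsed = time.monotonic() - start
    ok = resp.status_code == 200 and KEYWORD in resp.text
    print(f"status={resp.status_code} time={elapsed:.2f}s "
          f"keyword={'found' if ok else 'missing'}")
    return ok

if __name__ == "__main__":
    check()
```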
Additionally, we test basic uptime and response speed with AlertSite.
A third party, Keynote, tests our sites to verify that they are navigable the way a human would browse them. They have scripts that mimic clicks and interactions.
We use Spotlight for SQL server management, and also good old perfmon for the granular problem fixing.
We recently purchased WildMetrix to monitor and troubleshoot performance issues in our ASP.NET applications. It's nice because you can easily aggregate IIS, ASP.NET, and SQL Server information into a single graph or dashboard that allows you to pinpoint possible trouble spots. We currently use it as our primary performance reporting and tracking tool, along with ELMAH for exception tracking.
