Strategy for automated testing of a third-party service error

I'm trying to write an automated test of my app's response to a third-party service being down.
Generally the service is always up, so I'm looking for a reliable way to simulate it being down, notably without requiring root access. To put another wrinkle in it: the application under test runs in a separate process. I thought of just altering the configuration that points at the service, but that's not going to work.
This is all happening in a unix environment (Linux, OS X), so I'd like it to work there, but I don't care about Windows. Is there a quick way to block an outgoing port, or something like that? It also has to be temporary, as this happens in the middle of a larger test suite.
Hopefully there is a fairly standard way of doing this that I just haven't found yet.
Clarification: This is a functional test to make sure the GUI responds correctly when the service is down. The unit tests are already in place.

Make a proxy for the service. Point at the proxy for the service. Shoot down the proxy for the service.
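If you go that route, a killable pass-through proxy is only a page of code, and because it listens on an unprivileged localhost port, no root is needed. A rough C# sketch, where the class name, ports, and target are invented for illustration; any language with sockets works the same way:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

// A pass-through TCP proxy the test suite can shoot down on demand.
public sealed class KillableProxy : IDisposable
{
    private readonly TcpListener listener;
    private readonly string targetHost;
    private readonly int targetPort;
    private readonly CancellationTokenSource cts = new CancellationTokenSource();

    public KillableProxy(int listenPort, string targetHost, int targetPort)
    {
        // Loopback + unprivileged port: no root required.
        listener = new TcpListener(IPAddress.Loopback, listenPort);
        this.targetHost = targetHost;
        this.targetPort = targetPort;
    }

    public void Start()
    {
        listener.Start();
        Task.Run(() => AcceptLoopAsync(cts.Token));
    }

    private async Task AcceptLoopAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            TcpClient client;
            try { client = await listener.AcceptTcpClientAsync(); }
            catch (SocketException) { return; }         // listener was stopped
            catch (ObjectDisposedException) { return; } // ditto

            _ = Task.Run(async () =>
            {
                using (client)
                using (var upstream = new TcpClient())
                {
                    await upstream.ConnectAsync(targetHost, targetPort);
                    // Pump bytes both ways until either side closes.
                    var up = client.GetStream().CopyToAsync(upstream.GetStream());
                    var down = upstream.GetStream().CopyToAsync(client.GetStream());
                    await Task.WhenAny(up, down);
                }
            });
        }
    }

    // "Shoot down the proxy": new connections are refused immediately,
    // exactly as if the service itself were down.
    public void Dispose()
    {
        cts.Cancel();
        listener.Stop();
    }
}

The suite starts the proxy, points the app's configuration at the localhost port, and disposes the proxy right before driving the "service down" GUI scenario; since it's a process-local socket, the outage is temporary by construction.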

If your application has been designed with unit-testing in mind, you should be able to replace the 3rd party service with a different implementation.
The idea is that you replace the service with a mock object that returns dummy data for testing. You could also replace the service with an object that throws an exception, or times out.
With the mock service in place, the tester can use the application and see how it responds to service failures.
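A minimal sketch of that seam, with all names invented; the failing implementation is what the "service down" functional test would inject:

using System.Net;

// The app codes against this interface instead of the concrete client.
public interface IQuoteService
{
    string GetQuote(string symbol);
}

// Test double that simulates the third-party service being unreachable.
public sealed class DownQuoteService : IQuoteService
{
    public string GetQuote(string symbol)
    {
        throw new WebException(
            "Unable to connect to the remote server",
            WebExceptionStatus.ConnectFailure);
    }
}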

Can I set up a mock server by cloning real server responses?

TL;DR
Is there a tool that can record all the network activity as I visit a website and create a mock server that responds to those requests with the same responses?
I'm investigating ways of mocking the complex backend for our React application. We're currently developing against the real backend (plus test/staging environments). I've looked around a bit and it looks like there are a number of tools for mocking individual endpoints/features and sending the rest through to the real API (Mirage is leading the pack at the moment).
However, the Platonic ideal would be to mock the entire server so that a front-end dev can work without an internet connection (again: Platonic ideal). It's a crazy lofty goal, I know. And of course it would require mocking not only our backend but also requests to any 3rd-party data sources. And of course the data would be thin and dumb and stale. But this is just for ultra-speedy front-end development; it's just mocking. The data doesn't need to be rich, it'll be up to us to make it as useful/realistic as we need it to be.
Probably the quickest way would be to recreate the responses the backend is already sending, and then modify them as needed for new features or features under test, etc.
To do this, we might go into Chrome DevTools and recreate everything on the Network tab: mock every request that was made by hardcoding the response it returned. Taking it from there, we could do smart things like use URL pattern matching to return a simple placeholder image for any request for a user's avatar.
What I want to know is: is there any tool out there that does this automatically? That can watch as I load the site, click a bunch of stuff, take a bunch of actions, and spit out or set up a mock that recreates all the responses? And then we could edit any of them as we saw fit to simplify.
Does something like this exist? Maybe it's a browser tool. Maybe it's webpack middleware. Maybe it's a magic rooster.
PS. I imagine this may not be a specific, actionable enough question for SO. I'll understand if it's closed, but I'd really appreciate being directed somewhere where such questions/discussions would fit? I'm new enough to this world that SO is all I know!
There is a practice called service virtualization - a subset of the test double family.
Wikipedia has a list of tools you can use to do that. Here are a couple of examples from that list:
Open source: WireMock will let you record the mocks and edit the responses programmatically.
Commercial: Traffic Parrot will let you record the mocks and edit the responses via a UI and/or programmatically.
https://mswjs.io/ can mock all the requests for you. It intercepts all your client's requests and returns your defined mock data.
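The same record-and-stub model exists outside the JS world too; for instance WireMock's .NET port, WireMock.Net, lets you hardcode responses in code. A rough sketch, with the endpoint and payload invented:

using System;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

class MockBackend
{
    static void Main()
    {
        // Starts an in-process HTTP server on a free port.
        var server = WireMockServer.Start();

        // One hardcoded response, matched by URL path.
        server
            .Given(Request.Create().WithPath("/api/users/1").UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithHeader("Content-Type", "application/json")
                .WithBody("{\"id\":1,\"name\":\"stub user\"}"));

        Console.WriteLine("Mock backend at " + server.Urls[0]);
        Console.ReadLine();
    }
}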

Background task polling external resource at certain intervals

Requirement: I need to create a background worker/task that will get data from an external source (a message queue) at certain intervals (e.g. every 10 s) and update a database. It needs to run non-stop, 24 hours a day. An ASP.NET application is placing the data on the message queue.
Possible solutions:
Windows service with timer
Pros: Takes load away from web server
Cons: Separate deployment overhead, Not load balanced
Use one of the methods described here: background task
Pros: No separation deployment required, Can be load balanced - if one server goes down another can pick it up
Cons: Overhead on web server (however, in my case with max 100 concurrent users and seeing the web server resources are under-utilized, I do not think it will be an issue)
Question: What would be a recommended solution and why?
I am looking for a .net based solution.
You shouldn't go with the second option unless there's a really good reason for it. Decoupling your background jobs from your web application brings a number of advantages:
Scalability - It's up to you where to deploy the service. It can share the same server with the web application or you can easily move it to a different server if you see the load going up.
Robustness - If there's a critical bug in either the web application or the service this won't bring the other component down.
Maintenance - Yes, there's a slight overhead, as you will have to adjust your deployment process, but it's as simple as copying all binaries from the output folder, and you'll only have to do it once. On the other hand, you won't have to redeploy the whole application (thus bringing it down for some time) if you just need to fix a small bug in the service.
etc.
Though I recommend you go with the first option, I don't like the idea of a timer. There's a much simpler and more robust solution: I would implement a WCF service with MSMQ binding, as it provides a lot of nice features out of the box:
You won't have to implement polling logic. On start-up the service will connect to the queue and sit waiting for new messages.
You can easily use transactions to process queue messages. For example, if there's something wrong with the database and you can't write to it, the message being processed at that moment won't get lost; it will go back to the queue to be processed later.
You can deploy as many services listening to the same queue as you wish, to ensure scalability and availability. WCF will make sure that the same queue message is not processed by more than one service; that is, if a message is being processed by service A, service B will skip it and get the next available message.
Many other features you can learn about here.
I suggest reading this article for a WCF + MSMQ service sample, to see how simple it is to implement one and use the features mentioned above. As soon as you are done with the WCF service, you can easily host it in a Windows service.
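To give a feel for how little code is involved, here is a rough sketch of the idea; the contract, queue name, and hosting code are illustrative (not taken from that article), and the queue itself must exist and be transactional:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderProcessor
{
    // Queued operations must be one-way: the sender drops a message and moves on.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}

public class OrderProcessor : IOrderProcessor
{
    // Ties the queue receive and the database write into one transaction:
    // if the write fails, the message rolls back onto the queue.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitOrder(string orderXml)
    {
        // ... update the database here ...
    }
}

class Host
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(OrderProcessor)))
        {
            host.AddServiceEndpoint(
                typeof(IOrderProcessor),
                new NetMsmqBinding(NetMsmqSecurityMode.None),
                "net.msmq://localhost/private/orders"); // hypothetical queue
            host.Open();
            Console.WriteLine("Waiting for queued messages; press Enter to stop.");
            Console.ReadLine();
        }
    }
}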
Hope it helps!

Who is calling my WebService?

I have a web service that is on an internal server. It can be called from any website on our network.
More and more developers are starting to use it. Currently probably 20+ pages use this service, and the number is growing fast. I can see someone a year from now asking which pages are using this service and which methods.
I would like to log the URL of the pages that use my web service as the requests come in.
It would also be nice to know the method they are calling. I need to do this in such a way that it does not affect the client web sites. My first thought was that I could write some code in Global.asax.
I have added some code to Application_BeginRequest to log the request object details, but there does not appear to be anything about the requesting URL.
What am I missing? Should I be looking at a different object?
Thanks.
Without disrupting existing users, this is going to be difficult. HttpContext.Current.Request.Url will just return the URL used to call your web service, not the web page that called it.
The closest you can get, without disrupting existing apps and forcing developers to change them, is to grab HttpContext.Current.Request.UserHostAddress, so you can at least get the IP of the machine calling your service.
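To make that concrete, here is about everything you can capture in Global.asax without the callers changing anything; a sketch, with the trace sink chosen for illustration:

using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    // Runs for every incoming call to the service.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        HttpRequest request = HttpContext.Current.Request;
        Trace.TraceInformation(
            "{0} called {1}",
            request.UserHostAddress, // IP of the calling machine, not the page
            request.Url);            // your own service URL, not the caller's page
    }
}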
Beyond this, what you might want to consider is adding a parameter to your functions for "CallingApp" and then logging that in your code. That's pretty much what we did once we realized we needed to know which apps were calling our service. We actually have an application monitoring service that uses a GUID for every new app we develop, and we pass that GUID to every web service. It's extra work, but to us it was critical because it allows us to know which apps will be affected when we need to perform updates or take the app server down for maintenance.
Edit - added
As a side note, at the point we realized we needed to track this, we had already been using web services for about a year. When faced with the same problem, we created a new set of web services, and included the extra field for the calling app in all of the new services, and then slowly went back and changed the older programs to point to the new services.
In retrospect, we wish we had known we would need this up front, because it created a lot of extra work. I'm guessing you'll be facing something similar if you really want to know exactly who is calling your services.
Without changing your interface, the only thing you can probably retrieve from the consumer is the IP address.
If you can change it, you could do this e.g. by adding authentication and logging who is calling what, or by having some simple "token" principle.
However, both methods require you to change the interface and therefore break backwards compatibility - which you should never do.
By always ensuring both back and forward compatibility you should not need to know exactly who is calling your service, but only that it is actually used.
@David Stratton
Thanks for your help. I think your suggestions were great. I actually did something very different, after your answer gave me some new ideas.
I should have mentioned that I was generating the web proxy that most of my users were using to make calls against my web service. My clients in general do NOT use the proxy that Visual Studio creates.
Here is what I did:
I generated my web proxy client again and added calls to log the HttpContext of the client before every call. Because the proxy runs on the client, it has access to everything I needed. That allowed me to record everything about the client and the specific call they were making. I realize this would not work in most cases, but all of my clients are internal web sites.
It also had the advantage that the clients did not have to modify their code at all. I just gave them all a new DLL. Problem solved: I get all the tracking data I want, and they did not have to modify their code.
I was stuck trying to solve the problem from the web service's point of view.
I realize that there is still a hole in this implementation, because someone doesn't have to use my client proxy to call my service. I guess I could force that at some point in the future. For now, they could let Visual Studio generate a web proxy for my service. However, if they do that, I guess I don't care; that is not the recommended way to call my service. I think the only one doing that is an ASP.NET 1.1 web site. When they upgrade, they will probably switch to my generated proxy.
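For anyone wanting the same trick: since the generated proxy classes are partial, one way to hang logging off a regenerated proxy without editing the generated file is to override GetWebRequest in a second file. A sketch; the class and header names are invented, and the generated half of the partial class supplies the SOAP base class:

using System;
using System.Net;
using System.Web;

// GetWebRequest runs before every web service call the client makes.
public partial class MyServiceProxy
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        WebRequest request = base.GetWebRequest(uri);

        // The proxy executes inside the calling web site, so HttpContext here
        // describes the calling page - the thing the service alone can't see.
        HttpContext caller = HttpContext.Current;
        if (caller != null)
        {
            request.Headers["X-Calling-Page"] = caller.Request.Url.AbsoluteUri;
        }
        return request;
    }
}

Whether you log locally in the proxy or, as here, send the page URL along as a header for the service to log is a matter of taste.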
Without implementing some sort of authentication, there isn't a guaranteed way of knowing exactly who is calling your service - web metrics are the only way you can gauge what volume of traffic is hitting it.
I'm sure you already know this but the whole point of a web service isn't to know or care who is calling it.
I have successfully used ...
Dim strReferrer As String = HttpContext.Current.Request.UrlReferrer.AbsoluteUri
to get the calling page that called my Web API 2 web service.

Connect to web service fails

I have a web application which fetches information from a web service. It works fine in our development environment.
Now I'm trying to get it to work in a customer's environment instead, where the web service is from a third party. The problem is that the first time the application tries to fetch information it cannot connect to the web service. When it tries again just seconds later it works fine. If I wait a couple of hours and try again, the problem occurs again.
I'm having a hard time believing this is a programming error, as our customer and the maker of the web service think. I suspect it has to do with IIS or some security in the network, but I don't have much to go on and can't reproduce the error in our development environment.
Is it failing with a timeout exception when you try to connect the first time? If yes, this could be the result of the start-up time of the service.
I have a rule: "Always assume it's your fault until you can demonstrate otherwise". After over 20 years, I still stick to it.
So there are therefore two cases:
The code is broken
There is a specific issue with the live environment
Since you want to demonstrate that the problem is (2) you need to test calls to the service, from the live environment, using something other than your application. Exactly what will depend on the nature of the web service but we've found SoapUI to be helpful.
The other thing that's not clear is whether you are making calls to the live service from your development environment - if, in testing, you're not communicating with the same instance of the service then that's an additional variable that will need to be considered (and I appreciate that you're not always given the option).
Lastly, @Krishna is right - there may be a spin-up issue with the remote service (hence my question about whether you're talking to the same service from your dev environment) and - horrible as it is - the solution in the first instance may simply be to find a way to allow for this!
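If spin-up does turn out to be the culprit, a blunt allowance might look like the sketch below; the helper name, exception type, attempt count, and delays are all invented for illustration:

using System;
using System.Net;
using System.Threading;

static class Resilient
{
    // Retry a slow-starting remote call a few times with a growing delay.
    public static T CallWithRetry<T>(Func<T> call, int attempts = 3)
    {
        for (int i = 0; ; i++)
        {
            try { return call(); }
            catch (WebException) when (i < attempts - 1)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5 * (i + 1)));
            }
        }
    }
}

Usage would be something like var info = Resilient.CallWithRetry(() => client.GetInformation(id)); where GetInformation stands in for whatever your first call happens to be.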
The error was the web service from the third party. The test stub we got to develop against was made in C# and returned only dummy answers. The web service in the customer environment actually connected to a COM object. The first communication with the COM object after a longer wait took almost a minute.
Good for me that the third party developers left the source code on the customer servers...

Is it wrong to switch client logic in the service tier?

We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/app layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services will share a lot of code or call a single internal service with different parameterization depending on what outer service was called.
From a design perspective, this is no different than having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread - it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one, I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right for feeling that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self-explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system, which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request: something like "attended"/"secure" (agent client) vs "unattended"/"not secure" (web client). Now you must exchange a session key (an HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless - especially if you want to scale out without session replication. If you have that requirement, send such a structure in every request.
Think of requests like functions in your code. You wouldn't add a magic parameter that changes the behaviour of a function; you would create multiple functions that each behave differently. Whoever is using the function decides which one to call.
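In service terms, that might look like the following sketch, where every name is invented:

using System.ServiceModel;

// Expose intent as separate operations rather than one method that
// branches on a client-type enum.
[ServiceContract]
public interface IReportService
{
    [OperationContract]
    string GetSummaryReport(int reportId);   // what the web client needs

    [OperationContract]
    string GetDetailedReport(int reportId);  // what the agent client needs
}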
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings and they may change over time. It's simply informational which is why it is a handy thing to log. An operating environment does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different, in a way that would break compatibility with the original request? Now you have two requests: 2 clients use Request A and 1 client uses Request B. If you pass in a client type to each request, the server is expected to work for every possible client type. Much harder to test and maintain!
