How to write ASP.NET API Integration tests

To everyone who takes the time to read my question: I want to point out that I'm writing integration tests, NOT unit tests.
I'm using the definition of integration test provided by the sites listed at the bottom of the question:
Integration tests do not use mock objects to substitute
implementations for service dependencies. Instead, integration tests
rely on the application's services and components. The goal of
integration tests is to exercise the functionality of the application
in its normal run-time environment.
My question is: what is the best practice for writing integration tests for an ASP.NET Web API? At the moment I'm using the in-memory host approach from Filip W.'s blog post.
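A minimal sketch of what that in-memory hosting approach looks like (the route template, the products URL, and the use of NUnit below are illustrative assumptions, not details from the post):

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using NUnit.Framework;

[TestFixture]
public class ProductsApiIntegrationTests
{
    [Test]
    public async Task Get_ExistingProduct_ReturnsSuccess()
    {
        // Build the same configuration (routes, filters, formatters) the real app uses.
        // The controllers come from the referenced Web API project, so the real
        // services and components run - no mocks.
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        // HttpServer + HttpClient exercise the full Web API pipeline in memory,
        // without IIS or self-hosting.
        using (var server = new HttpServer(config))
        using (var client = new HttpClient(server))
        {
            var response = await client.GetAsync("http://localhost/api/products/1");
            Assert.IsTrue(response.IsSuccessStatusCode);
        }
    }
}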
My second question is: how do you ensure that your test data is there and is correct when you're not mocking (MSDN and other sites clearly say that integration tests do not mock databases)? The internet is filled with examples of how to write extremely simple integration tests, but has zero examples for a more complex API (anything that goes further than returning 1).
Reference Sites:
https://msdn.microsoft.com/en-us/library/ff647876.aspx
https://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx
http://www.codeproject.com/Articles/44276/Unit-Testing-and-Integration-Testing-in-Business-A
http://blog.stevensanderson.com/2009/06/11/integration-testing-your-aspnet-mvc-application/
Filip W. in-memory hosting:
http://www.strathweb.com/2012/06/asp-net-web-api-integration-testing-with-in-memory-hosting/

Have you seen my answer over at this other SO question here? I will pad this out with the additional information below.
In our release pipeline (using Visual Studio Release Management 2013) we provision a nightly integration database from a known test script, creating the database from scratch (all scripted). Initially we cloned production, but as the data grew this became too time consuming as part of the nightly integration build. After the database is provisioned we do the same with the integration VM web servers and deploy the latest build to that environment. Once these come up, we run our unit tests again from the command line as part of the release pipeline, this time including the tests decorated with the custom action filter I described in the linked answer.
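A rough sketch of how such opt-in integration tests and the command-line run can look (the [TestCategory] attribute and the vstest.console filter are common MSTest conventions, assumed here rather than taken from the linked answer):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrdersApiIntegrationTests
{
    // Only selected when the pipeline explicitly asks for the "Integration"
    // category, so ordinary CI builds skip tests that need the provisioned
    // database and integration web servers.
    [TestMethod, TestCategory("Integration")]
    public void GetOrders_AgainstIntegrationEnvironment_ReturnsSeededData()
    {
        // ...call the deployed API and assert against the known seed data...
    }
}

The release pipeline can then run only that category from the command line, for example:

vstest.console.exe Api.IntegrationTests.dll /TestCaseFilter:"TestCategory=Integration"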

Related

Deploying microservice to be tested within the test [duplicate]

Maybe this is not possible to do generically in a test framework, but I would like to be able to deploy the microservice I am testing within the test itself. I have looked at Citrus, RestAssured, and Karate, and listened to countless talks and read countless blogs, but I never see how to do this first stage. It always seems to be assumed that the microservice is pre-deployed.
Honestly, it depends on how your microservice is deployed and which infrastructure you are targeting. I prefer to integrate the deployment into the Maven build, as Maven provides pre- and post-integration-test phases.
If you can use Kubernetes or Docker, I would recommend integrating the deployment with the fabric8 Maven plugins (fabric8-maven-plugin, docker-maven-plugin). That would automatically create/start/stop the Docker container deployment within the Maven build.
If you can use Spring Boot, the official Maven plugin can do the same thing.
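For illustration, a minimal sketch of binding a Spring Boot deployment to those phases (the plugin coordinates are the official ones; the execution ids and the surrounding pom are assumed):

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <!-- Boot the application before the integration tests run -->
        <execution>
            <id>start-app</id>
            <phase>pre-integration-test</phase>
            <goals><goal>start</goal></goals>
        </execution>
        <!-- Shut it down again afterwards -->
        <execution>
            <id>stop-app</id>
            <phase>post-integration-test</phase>
            <goals><goal>stop</goal></goals>
        </execution>
    </executions>
</plugin>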
Another possibility would be to use build pipelines, where the continuous build (with Jenkins, for example) would deploy the system under test first and then execute the tests.
I personally prefer to always separate deployment and testing tasks. If you really want to do a deployment within your test, Citrus as a framework is able to start/stop Docker containers and/or Kubernetes pods within the test. Citrus can also integrate with before/after test-suite phases for these deployment tasks.
I found a way to do it using docker-compose.
https://github.com/djangofan/karate-test-prime-example
Basically, make a docker-compose.yml that runs your service container and then also runs e2e tests after a call to wait-for-it.sh.
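A rough sketch of that layout (service names, images, and ports are placeholders, not taken from the linked repo):

version: "3"
services:
  my-service:
    image: my-org/my-service:latest        # the microservice under test
    ports:
      - "8080:8080"
  e2e-tests:
    image: my-org/my-service-e2e:latest    # container with the test suite baked in
    depends_on:
      - my-service
    # Block until the service answers on its port, then run the end-to-end tests.
    command: ["./wait-for-it.sh", "my-service:8080", "--", "mvn", "test"]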
Two points:
1. The karate-demo is a Spring Boot example that is deployed by the JUnit test-runner. Here is the code that starts the server.
2. The karate-mock-servlet takes things one step further: you can run HTTP integration tests within your project without booting an app-server. This saves time, and code-coverage reports are easier to produce.
If you have any requirements beyond these, I'd be happy to hear them. One of the things we would like to implement is a built-in server-side mocking framework (think embedded WireMock, but with the ease of Karate's DSL), but there is no concrete timeline yet.
EDIT: Karate has mocking now: https://github.com/intuit/karate/tree/master/karate-netty

Test automation for microservices architecture

I am in charge of implementing QA processes and test automation for a project using microservices architecture.
The project has one public API that makes some data available, so I will automate API tests. The tests will live in one repository. This part is clear to me; I did it before in other, monolithic projects: one repo for API tests, and possibly another repo for Selenium tests.
But here the whole product consists of many microservices that communicate via RESTful APIs and/or RabbitMQ queues. How would I go about automating tests for each of these individual services? Would tests for each individual service be in a separate repo? Note: the services are written in Java or PHP; I will automate the tests with Python. It seems to me that I will end up with a lot of repos for tests/stubs/mocks.
What suggestions or good resources can the community offer? :)
- Keep unit and contract tests with the microservice implementation.
- Component tests make sense in the context of composite microservices, so keep them together.
- Have the integration and E2E tests in a separate repo, grouped by use cases.
For this kind of testing I like to use Pact. (I know you said Python, but I couldn't find anything similar in that space, so I hope you (or other people searching) will find this excellent Ruby gem useful.)
For testing from outside in, you can just use the proxy component - hope this at least gives you some ideas.
Give each microservice its own code repository, and add one for the cross-service end-to-end tests.
Inside a microservice's repository, keep everything that relates to that service, from code over tests to documentation and pipeline:
root/
    app/
        source-code/
        unit-tests/ (also: integration-tests, component-tests)
    acceptance-tests/
    contract-tests/
Keep everything that your build step uses in one folder (here: app), probably with sub-folders to distinguish source code from unit tests, integration tests, and component tests.
Put tests like acceptance tests and contract tests that run in later stages of the delivery pipeline in their own folders. This keeps them visually separate. It also simplifies creating separate build/test steps for them, for example by including their own pom.xml files when using Maven.
If a developer changes a feature, he will need to change the tests at the exact same time to ensure the two fit together. Keeping code and tests in the same repository keeps the two in sync in a natural way.

SpecFlow, Webdriver and Mocks - is it possible?

The question, in short, is that we keep stumbling upon BDD definitions that more or less require different states, which leads to the need for a mock of sorts for ASP.NET/MVC. I know of none, which is why I ask here.
Details:
We are developing a project in ASP.NET (MVC3/Razor engine) and are using SpecFlow to drive our development.
We quite often stumble into situations where we need the webpage under test to perform in a certain manner so that we can verify the behavior, for example:
Scenario: Should render alternatively when backend system is down
    Given that the backend system is down
    And there are no channels for the page to display
    When I inspect the webpage under test
    Then the page renders alternative HTML indicating that there is a problem
For a unit test this is less of an issue: mock the controller's dependencies and verify that it delivers the correct results. For a SpecFlow test, however, this more or less requires alternate configurations.
So is it possible at all, or are there some known software patterns for developing webpages using BDD that I've missed?
Even when using SpecFlow, you can still use a mocking framework. What I would do is use the [BeforeScenario] attribute to set up the mocks for the test, e.g.:
[BeforeScenario]
public void BeforeShouldRenderAlternatively()
{
    // Do mock setups.
}
This SO question might come in handy for you also.
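To make that a bit more concrete, here is a hedged sketch of what such a setup could look like, assuming Moq, a scenario tagged @BackendDown, and a hypothetical IBackendService that the controllers resolve through the MVC dependency resolver (none of these names come from the question, and this only works when the app under test runs in the same process as the tests):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using Moq;
using TechTalk.SpecFlow;

// Hypothetical dependency of the page's controllers; stands in for whatever
// the real backend gateway interface is called.
public interface IBackendService
{
    IList<string> GetChannels();
}

[Binding]
public class BackendDownSteps
{
    [BeforeScenario("BackendDown")]
    public void GivenBackendIsDown()
    {
        // Simulate the "backend system is down" precondition with a mock
        // instead of actually taking a backend offline.
        var backend = new Mock<IBackendService>();
        backend.Setup(b => b.GetChannels())
               .Throws(new InvalidOperationException("backend down"));

        // Swap the dependency the controllers resolve for this scenario.
        DependencyResolver.SetResolver(
            type => type == typeof(IBackendService) ? backend.Object : null,
            type => Enumerable.Empty<object>());
    }
}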
You could use Deleporter
Deleporter is a little .NET library that teleports arbitrary delegates into an ASP.NET application in some other process (e.g., hosted in IIS) and runs them there.
It lets you delve into a remote ASP.NET application’s internals without any special cooperation from the remote app, and then you can do any of the following:
- Cross-process mocking, by combining it with any mocking tool. For example, you could inject a temporary mock database or simulate the passing of time (e.g., if your integration tests want to specify what happens after 30 days or whatever).
- Test different configurations, by writing to static properties in the remote ASP.NET appdomain or using the ConfigurationManager API to edit its entries.
- Run teardown or cleanup logic such as flushing caches. For example, recently I needed to restore a SQL database to a known state after each test in the suite. The trouble was that the ASP.NET connection pool was still holding open connections on the old database, causing connection errors. I resolved this easily by using Deleporter to issue a SqlConnection.ClearAllPools() command in the remote appdomain; the ASP.NET app under test didn't need to know anything about it.
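The connection-pool cleanup from that last point, as a rough sketch (the static Deleporter.Run(...) entry point is assumed from the project's samples, and its namespace/using directive is omitted because it depends on the library version):

using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class IntegrationTestCleanup
{
    [TestCleanup]
    public void ResetDatabaseState()
    {
        // The delegate is teleported into the remote ASP.NET appdomain, so it
        // clears the connection pools held by the app under test rather than
        // by the test process, allowing the database to be restored cleanly.
        Deleporter.Run(() => SqlConnection.ClearAllPools());

        // ...then restore the SQL database to its known state from the test host...
    }
}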

Testing Analysis Services

We are looking to build a cube in Microsoft SQL Server Analysis Services, but we would like to be able to use some of the automated testing infrastructure we already have, such as CruiseControl for automated builds, deployments, and tests.
I am looking for anyone who can give me pointers on building tests against Analysis Services, and also any experience with adding these to a build pipeline.
Also, if automation is not possible, some manual test methods would be welcome.
Recently I came upon the BI.Quality project on CodePlex, and from what I can tell it's very easy to learn and to integrate into an existing deployment process.
There is another framework named NBi. It has additional features compared to BI.Quality, such as checking the existence of a measure, dimension, or attribute, the order of members, and the count of members. Also, when comparing two result sets it's often easier to spot the difference between them with NBi. The test suites are edited in a single XML file validated by an XSD (a better user experience).

How do I automate build and testing for an ASP.NET AJAX application in Team Foundation Server?

Q. We're looking for a way to automate the build process, run test cases, and store build results.
A problem could arise, as the application on which we want to set up this process is an AJAX application: a single-page application that relies heavily on JavaScript. The QA team is using QTP to automate their testing.
Q. Now that we've moved to Team Foundation Server, we would like to stay in the box instead of using some other tool for functions that can also be done in Team Foundation. Would it be a good choice to use Team Foundation instead of another tool for defining test cases? Once they adopt it, they will generate test cases for the app.
Q. We would like to attach the test cases to the daily build and would also like to have logs/reports for monitoring build progress.
This, I assume, is possible, but you can also suggest a practice that would make the aforementioned process more effective and quicker.
Thanks.
1. TFS has a built-in test runner, but it is aimed at MSTest. What test framework are you using? TFS uses MSBuild in the background and has a template build script with hooks that allow you to customize the process. Read up more about it here.
2. There is a TFS Web Test, but I haven't looked into it much; there is nothing stopping you from hooking an open-source framework like Selenium into the build process.
3. TFS keeps a log of all the builds done, much the same as CruiseControl would.
I would recommend "Team Foundation Server 2008 in Action", as it is a very good book that explains a lot about TFS.
