I'm trying to write test cases for my web services in a way that lets me roll back any database changes they make. I could surround them with a transaction scope, but how do I specify a context for the transaction? In other words, how does the transaction know which database and server to roll back on? In my case, the SQL Server instance runs locally, as do the web services.

Before you tell me to call the web services directly without a client, please understand that the web services have very specific runtime environment settings that would be a royal pain to reproduce in my test cases.

Perhaps the transaction scope isn't what I want to use. Is there an alternative? Is there a database function I could call to start a transaction? Thanks.
First, you are not doing unit testing. A unit test exercises a single small unit of code (a function); when you test a function, you create a unit test for each execution path so that you have full coverage of the tested code. Your system under test, however, spans client-to-service communication and service-to-database communication: several tiers of code plus configuration. That is called integration testing.
The key question is how you designed your service: does it flow transactions? Transaction flow lets you start a transaction at the client and propagate it to the service (a distributed transaction). It is not the default behavior and it requires special configuration of the WCF bindings. If you used this approach, you can do the same in your test: start a transaction in the test and roll it back at the end. If you did not design the service to flow transactions, you simply can't use this technique, because a transaction started in the test will not affect the service. In that case you have several choices (a configuration sketch for the transaction-flow approach follows the list):
Create manual compensation. At the end of each test, run custom SQL to move the data back to its initial state. This simulates a rollback. I don't recommend this approach.
Recreate the database at the beginning of each test. This is slow but perfectly acceptable, because integration tests usually run only on the build server a few times per day.
Don't test at the WCF service level. The service should be only a thin wrapper over business logic or data access logic, so test the wrapped layer instead; you can probably use transactions there. This approach combines well with the previous one: keep a small set of complex integration tests that require recreating the database, and a larger set of tests that can roll back and reuse the same database.
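If your service does flow transactions, the test-side rollback can look like the sketch below. This is only an illustration, not your actual code: the contract, operation, client proxy, and Order type are made-up names, and the binding (e.g. wsHttpBinding) must have transactionFlow enabled in configuration.

using System.ServiceModel;
using System.Transactions;
using NUnit.Framework;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]     // allow the client's transaction to flow in
    void SaveOrder(Order order);
}

public class OrderService : IOrderService
{
    [OperationBehavior(TransactionScopeRequired = true)] // enlist the service's DB work in that transaction
    public void SaveOrder(Order order)
    {
        // database access here runs inside the flowed transaction
    }
}

[TestFixture]
public class OrderServiceIntegrationTests
{
    [Test]
    public void SaveOrder_changes_are_rolled_back()
    {
        using (new TransactionScope())
        {
            var client = new OrderServiceClient();        // hypothetical generated WCF client proxy
            client.SaveOrder(new Order());

            // assert against the database here while the data is still visible
        } // Complete() is never called, so the scope - and the service's changes - roll back
    }
}

[System.Runtime.Serialization.DataContract]
public class Order
{
    [System.Runtime.Serialization.DataMember]
    public int Id { get; set; }
}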
I am new to PACT and trying to use pact-net for contract testing for a .net microservice. I understand the concept of consumer test which generates a pact file.
There is the concept of a provider state middleware which is responsible for making sure that the provider's state matches the Given() condition in the generated pact.
I am a bit confused about the following and how to achieve it:
The provider tests are run against the actual service. So we start the provider service before tests are run. My provider service interacts with a database to store and retrieve records. PACT also mentions that all the dependencies of a service should be stubbed.
So do we run the actual provider API against the actual DB?
If we are running the API against the actual DB, how do we inject data into it? Should we use the provider API's own endpoints to add the Given() data?
If the above is not the correct approach then what is?
All the basic blog articles I have come across do not explain this and usually have examples with no provider states or states that are just some text files on the file system.
Help appreciated.
I'm going to add to Matt's comment. You have three options:
Run your provider tests against a connected environment. You will have to do some cleanup manually afterwards and make sure your data is always available in your DB and/or that the external APIs are always up and running. Simple to write, but can be very hard to maintain.
You mock your API calls but call the real database.
You mock all your external dependencies: the API and the DB calls.
For 2) or 3) you will need test routes and to inject the provider state middleware in your provider test fixture. You can then configure provider states that generate in-memory data (solution 3) or run some data initialisation (solution 2).
You can find an example here: https://github.com/pact-foundation/pact-net/tree/master/Samples/EventApi/Provider.Api.Web.Tests
The provider tests are run against the actual service
Do you mean against a live environment, or the actual service running locally alongside the unit test? (The former is not recommended, because of (2) above.)
This is one of the exceptions to that rule. You can choose to use a real DB or an in-memory one - whatever is most convenient. It's common to use Docker and tools like that for testing.
In your case, I'd have a specific test-only set of routes that respond to the provider state handler endpoints, and that also have access to the repository code and can manipulate the state of the system.
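As a rough illustration of that idea, a test-only provider state endpoint in an ASP.NET Core provider might look like the sketch below. The route, controller, repository, and state descriptions are all made up, and the exact payload the verifier posts depends on the pact-net version you use.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Registered only in the test host, never in production.
[ApiController]
[Route("provider-states")]
public class ProviderStatesController : ControllerBase
{
    private readonly IOrderRepository _repository; // hypothetical repository shared with the real API

    public ProviderStatesController(IOrderRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public IActionResult Post([FromBody] ProviderState providerState)
    {
        // One branch per Given() used in the consumer pacts.
        switch (providerState.State)
        {
            case "an order with id 1 exists":
                _repository.Add(new Order { Id = 1, Status = "pending" });
                break;
            case "no orders exist":
                _repository.Clear();
                break;
        }
        return Ok();
    }
}

public class ProviderState
{
    public string State { get; set; }
    public IDictionary<string, object> Params { get; set; }
}

public interface IOrderRepository
{
    void Add(Order order);
    void Clear();
}

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}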
We have two micro-services, Provider and Consumer, and both are built independently. The Consumer micro-service makes a mistake in how it consumes the Provider service (for whatever reason), and as a result an incorrect pact is published to the Pact Broker.
The Consumer build succeeds (and can go all the way to release!), but the next Provider build will fail for the wrong reason. So we end up with a broken Provider build and a broken release of the Consumer.
What is the best practice to guard against situations like this?
I was hoping that the Pact Broker could trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but that doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change, there is no need to run the Provider build, although there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions, you could use one or more of the strategies below.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also check the master pact files into source control, your CI build can act conditionally: if the contract has changed, you must wait for a green provider build; if not, you can safely deploy!
Store in separate repository
If you really want the provider to maintain control, you could store the contracts in an intermediate repository or file location managed by the provider. I'd recommend this as a last resort, since it negates much of the collaboration pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using webhooks on the Pact Broker. You could trigger a build on the Provider as soon as a new contract is submitted to the server.
You could envisage this step working with options 1 and 2.
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
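For illustration, a webhook is created by POSTing a definition to the broker that names the event to react to and the HTTP request to fire. The shape below is an assumption based on common broker versions - field names and event names may differ in yours, and the CI URL is a placeholder - so check your broker's API documentation before relying on it.

{
  "events": [
    { "name": "contract_content_changed" }
  ],
  "request": {
    "method": "POST",
    "url": "https://ci.example.org/job/provider-verification/build",
    "headers": { "Content-Type": "application/json" }
  }
}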
You're spot on; that is one of the things currently lacking in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That being said, this doesn't solve your current problem in the meantime, so I'm going to suggest a potential workaround in your process. Instead of running the consumer tests, having them pass, and releasing straight away, you could run the tests on the consumer, then wait for the provider verification to come back green before releasing the consumer and provider together. Another way would be to version your provider/consumer interactions (API versioning) so that the consumer can be released beforehand but isn't "turned on" until the correct version of the provider is released.
None of these solutions are great and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon to fix the developer experience with pact broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts however they want, but in the end the producer's build suffers because of incorrect contracts generated by consumers.
Is there any way for contracts to be defined by the producer? I think the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract it is recommended to have contracts defined in the producer's sources (e.g. in the same git repo as the producer source code) or in a separate SCM repo that can be managed by producer and consumer together.
I want to design a bullet-proof, fault-tolerant, DB-driven web application and was wondering how to architect it.
The system will have an ASP.NET UI, a web services middle tier, and a SQL Server 2005 back end. The UI and services will communicate using JSON calls.
I was wondering how to ensure transactions are committed and, if they aren't, how to make the error bubble up and be logged, ideally with the action retried a couple of times at 5-minute intervals, the way an email app retries sending.
I was planning to use TRY...CATCH blocks in SQL and was wondering what the interface (or contract, if you will) between the SQL stored procs and the services that call them would look like and how it would function. This interface plays two roles: one is to pass parameters to the proc and return the expected results; the other is for the proc to return error information, maybe something like an error number and an error message.
My quagmire: how do I structure this intelligently so that the services expect both data and error info returned from the procs and handle each accordingly?
Is there a framework for this? It seems like very boilerplate code.
You might consider looking into SQL Server Service Broker:
http://msdn.microsoft.com/en-us/library/ms345108%28v=sql.90%29.aspx
The unique features of Service Broker and its deep database integration make it an ideal platform for building a new class of loosely coupled services for database applications. Service Broker not only brings asynchronous, queued messaging to database applications but significantly expands the state of the art for reliable messaging.
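If you stay with plain stored procedure calls rather than Service Broker, the service-side half of the contract described in the question can be as simple as catching SqlException, whose Number and Message properties carry whatever your proc's CATCH block raises with RAISERROR, then logging and retrying. A rough sketch with made-up proc, parameter, and logger names:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class OrderWriter
{
    // Calls a stored proc inside a transaction; on failure, logs the SQL error
    // number/message and retries a couple of times at a fixed interval.
    public static void SaveOrderWithRetry(string connectionString, int orderId, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (var transaction = connection.BeginTransaction())
                    using (var command = new SqlCommand("dbo.usp_SaveOrder", connection, transaction))
                    {
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.AddWithValue("@OrderId", orderId);
                        command.ExecuteNonQuery();
                        transaction.Commit();
                    }
                }
                return; // success
            }
            catch (SqlException ex)
            {
                // ex.Number / ex.Message come from RAISERROR in the proc's CATCH block
                Log(string.Format("Attempt {0} failed: {1} ({2})", attempt, ex.Message, ex.Number));
                if (attempt == maxAttempts) throw;       // bubble up after the final attempt
                Thread.Sleep(TimeSpan.FromMinutes(5));   // crude 5-minute back-off
            }
        }
    }

    private static void Log(string message) { /* write to your log table/file */ }
}

In practice you would hand the retry to a queue or scheduled job rather than blocking a request thread for minutes at a time, which is precisely the gap Service Broker is designed to fill.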
I've just encountered an error in my app that probably could have been caught with some integration tests, so I think it's high time I wrote some!
My question relates to the setup of these tests, and to what layer of the code the tests should run against.
Setup
Considering I should have lots of integration tests, I don't want to be creating and dropping a test database for every test; this would be epically slow (even if it's a SQLite in-memory one). My thoughts are:
Have a test db which sits next to my dev db
Before testing, run some reset script which will set up my schema correctly and insert any necessary data (not test case specific)
Simply use this test db as if it's the real db.
However, it seems very wasteful that I have to run my Fluent NHibernate configuration in every [SetUp]. Is this just tough? What are my options here?
My session is currently wrapped in a UoW pattern, with creation and destruction performed on begin_request and end_request (MVC web application) respectively. Should I modify this to play well with the tests to solve this problem?
Testing
When it comes to actually writing some tests, how should I do it?
Should I test from the highest level possible (my MVC controller actions) or from the lowest (repositories)?
If I test at the lowest level, I'll have to manually hard-code all the data. This makes my tests brittle to changes in the code, and also not representative of what will really happen at runtime. If I test at the highest level, I have to run all my IoC container setup so that dependencies get injected and the whole thing functions (again, repeating this in every [SetUp]?).
Meh! I'm lost, someone point me in the right direction!
Thanks
As regards creating the session factory, I create a class called _AssemblyCommon in my test project and expose the session factory as a static from there. A setup method inside a class marked with the NUnit [SetUpFixture] attribute configures the session factory.
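A minimal sketch of that arrangement (the connection string key and mapping assembly here are assumptions, not the poster's actual configuration):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NUnit.Framework;

[SetUpFixture]
public class _AssemblyCommon
{
    // Built once per test run and shared by every integration test.
    public static ISessionFactory SessionFactory { get; private set; }

    [SetUp] // runs once before any test in this namespace (use [OneTimeSetUp] in NUnit 3)
    public void Configure()
    {
        SessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("TestDb")))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<_AssemblyCommon>())
            .BuildSessionFactory();
    }
}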
In general integration tests should cover CRUD operations in the repositories. To do this, I have one test method per object (or aggregate root) and do an insert, retrieve, update, and delete all within that method. I also test any cascade deletes that I've defined. Keeping these operations in a single method doesn't leave any traces behind in the database. I do have some integration tests that leave behind test data but that hasn't been a problem.
Your higher-level operations should be unit tested, mocking the repositories (I use Moq) where possible.
In my current MVC app I've found that it's enough to test the interaction between the repositories and the database. More than anything, this is to iron out any wrinkles in the NHibernate mapping. Everything (ok I'm exaggerating when I say everything) above that layer is unit tested in isolation. I did have some integration tests from the controllers all the way down the stack to the database and these used an IoC container (StructureMap) to build the controllers and the dependencies, but I found these tests were really not adding anything and they were quite an overhead to maintain, so I've removed them from the 'integration' tests for now - I may find a reason to put them in but so far I haven't.
Anyway, the test process I use works like this:
The build process for the data access layer test assembly creates the test database via a FluentNHibernate configuration ExposeSchema() call. The build process then runs some NHibernate repository level code to populate reference tables in the database.
Each integration test is then wrapped in a System.Transactions.TransactionScope using() statement, and Complete() is never called on the TransactionScope, so each test runs in isolation and the results can be set up and verified within the using() scope by an ISession without altering the state of any other test data, e.g.:
using (new TransactionScope())
{
    // This is an NHibernate ISession - set up any dependencies for the repository tests here
    var session = NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting();

    // Repository test code goes here - the repository is using a different ISession
    var testRepository = new NHibernateGenericRepository<NewsItem>(NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting());

    // Validate the results here directly with the ISession
}
// at this point the transaction is rolled back and we haven't changed the test data
This means that I don't have to modify the UnitOfWork implementation I'm using - the transaction is rolled back at a higher level.
How do I write unit tests to test the Web methods of a Web service using NUnit?
The web methods in this application add, update, and delete a record in the database.
The unit test should verify whether a web method has inserted a record into the database; the web method calls a method in the data access layer to perform this action.
I do not think it's appropriate to be testing the end result of your web service with a unit test. Also, what you are trying to do is called an "integration test", and not a unit test.
What you can do, however, is to:
Write unit tests to check if your data access layer (DAL) is working properly
Write unit tests to see if your web method is properly accessing your DAL
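For the second point, a minimal sketch using a mocking library such as Moq might look like this (all the types here are stand-ins for your own DAL interface and the class behind the web method, not code from the question):

using Moq;
using NUnit.Framework;

[TestFixture]
public class OrderWebMethodTests
{
    [Test]
    public void AddOrder_delegates_to_the_data_access_layer()
    {
        var dal = new Mock<IOrderDal>();
        var service = new OrderService(dal.Object);

        service.AddOrder(new Order { Id = 42 });

        // Verify the web method handed the record to the DAL exactly once.
        dal.Verify(d => d.Insert(It.Is<Order>(o => o.Id == 42)), Times.Once());
    }
}

public class Order { public int Id { get; set; } }

public interface IOrderDal { void Insert(Order order); }

public class OrderService
{
    private readonly IOrderDal _dal;
    public OrderService(IOrderDal dal) { _dal = dal; }
    public void AddOrder(Order order) { _dal.Insert(order); }
}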
You might also want to look at a question I raised before, "How do I unit test persistence?", for more insight.
If you really are adamant about doing this, however, it is possible to create such tests using MbUnit, which has a Rollback attribute.
[Test]
[Rollback]
public void Test_database_persistence()
{
    // any database access you perform here will be put inside a transaction
    // and rolled back afterwards
}
MbUnit is totally compatible with NUnit, so you could still use tests you've already written with NUnit.