My database calls are mocked, and the actual methods that invoke the DB are annotated with @Transactional. When I execute the test cases, it internally tries to create a transaction and fails because no driver is found. The database is Neo4j, and it is not running while the test cases execute. How can I test transactional methods in a non-transactional way? I tried annotating the integration test method with @Transactional(propagation = Propagation.NEVER), but that makes no difference.
I am new to PACT and am trying to use pact-net for contract testing of a .NET microservice. I understand the concept of a consumer test, which generates a pact file.
There is the concept of a provider state middleware which is responsible for making sure that the provider's state matches the Given() condition in the generated pact.
I am a bit confused about the following and how to achieve it:
The provider tests are run against the actual service. So we start the provider service before tests are run. My provider service interacts with a database to store and retrieve records. PACT also mentions that all the dependencies of a service should be stubbed.
So do we run the actual provider API against the actual DB?
If we are running the API against the actual DB, how do we inject the data into the DB? Should we be using the provider API's own endpoints to add the Given() data?
If the above is not the correct approach then what is?
All the basic blog articles I have come across do not explain this; they usually have examples with no provider states, or states that are just some text files on the file system.
Help appreciated.
I'm going to add to Matt's comment. You have three options:
Run your provider tests against a connected environment, but then you will have to do some manual cleanup afterwards and make sure your data is always available in your DB and/or that the external APIs are always up and running. Simple to write, but it can be very hard to maintain.
You mock your API calls but call the real database.
You mock all your external dependencies: the API and the DB calls.
For options 2) and 3) you will need test routes and will have to inject the provider state middleware into your provider test fixture. Then you can configure provider states to be called: they generate in-memory data in option 3), or run some data initialization against the real DB in option 2).
You can find an example here: https://github.com/pact-foundation/pact-net/tree/master/Samples/EventApi/Provider.Api.Web.Tests
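To make the provider state middleware idea concrete, here is a minimal sketch of a test-only ASP.NET Core middleware. This is an illustration, not pact-net's actual implementation: the `/provider-states` route, the request body shape, and helpers like `SeedEvent` are assumptions you would adapt to your own setup (newer pact-net versions also pass a `params` object, which this sketch ignores).

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Registered only in the host used for provider verification,
// never in the production pipeline.
public class ProviderStateMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IDictionary<string, Action> _providerStates;

    public ProviderStateMiddleware(RequestDelegate next)
    {
        _next = next;
        _providerStates = new Dictionary<string, Action>
        {
            // Each key must match a Given() clause in the consumer's pact file.
            ["there is an event with id 45"] = () => SeedEvent(45)
        };
    }

    public async Task InvokeAsync(HttpContext context)
    {
        if (!context.Request.Path.StartsWithSegments("/provider-states"))
        {
            await _next(context);
            return;
        }

        // Body is JSON such as {"consumer":"MyConsumer","state":"there is an event with id 45"}
        using var reader = new StreamReader(context.Request.Body);
        var body = await reader.ReadToEndAsync();
        var request = JsonSerializer.Deserialize<Dictionary<string, string>>(body);

        if (request != null
            && request.TryGetValue("state", out var state)
            && _providerStates.TryGetValue(state, out var setUp))
        {
            setUp(); // seed the in-memory store or test database
        }

        context.Response.StatusCode = StatusCodes.Status200OK;
    }

    private static void SeedEvent(int id)
    {
        // Hypothetical: insert the record the pact expects, e.g. through
        // the repository layer or a SQL script against the test DB.
    }
}
```

The verifier calls this endpoint before replaying each interaction, so the setup action runs against whichever store (in-memory or real DB) the test host is wired to.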
The provider tests are run against the actual service
Do you mean against a live environment, or against the actual service running locally alongside the unit tests? (The former is not recommended, because of (2) above.)
This is one of the exceptions to that rule. You can choose to use a real DB or an in-memory one - whatever is most convenient. It's common to use Docker and tools like that for testing.
In your case, I'd have a specific test-only set of routes that respond to the provider state handler endpoints, that also have access to the repository code and can manipulate state of the system.
I have a requirement to report the time elapsed for each partner link invocation in a BPEL process, given a particular composite instance ID.
For example, in my code I am using three partner links:
BIService
PGP encryption
DB procedure call.
I need to show, in a dashboard, how much time was required to invoke the BI service, the PGP service, and the DB procedure call separately.
I am using the Java facade API for accessing SOA instance details, but I am not able to find exactly which classes and methods to use for this requirement.
I am using SOA 11g.
I'm trying to write test cases for my web services in a way that lets me roll back any database changes they make. I could surround them with a transaction scope, but how do I specify a context for the transaction? In other words, how does the transaction know which database and server to roll back on? In my case, both SQL Server and the web services run locally. Before you tell me to call the web services directly without a client, please understand that the web services have very specific runtime environment settings that would be a royal pain to reproduce in my test cases. Perhaps the transaction scope isn't what I want to use; is there an alternative? Is there a database function I could call to start a transaction? Thanks.
First, you are not doing unit testing. A unit test is about testing a single small unit of code (a function). When you test a function, you create a unit test for each execution path so that you have full coverage of the tested code. But your system under test includes client-to-service communication and service-to-database communication: several tiers of code plus configuration. That is called integration testing.
The problem here is how you designed your service. Does your service flow transactions? Transaction flow allows starting a transaction at the client and passing it to the service (a distributed transaction). It is not the default behavior, and it requires special configuration of the WCF bindings. If you use this approach, you can do the same in your test: start the transaction in the test and roll it back at the end of the test. If you didn't design the service to flow transactions, you simply can't use this, because the transaction started in the test will not affect the service. In that case you have several choices:
Create manual compensation: at the end of each test, run custom SQL to move the data back to its initial state. This simulates a rollback. I don't recommend this approach.
Recreate the database at the beginning of each test. This is slow but fully acceptable, because integration tests are usually run only on the build server a few times per day.
Don't test at the WCF service level. A WCF service should be only a thin wrapper on top of business logic or data access logic, so test the wrapped layer instead of the service level. You can probably use transactions there. This approach combines well with the previous one: keep a small set of complex integration tests that require database recreation, and a bigger set of tests that can roll back and reuse the same database.
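The transaction-flow approach described above can be sketched as follows. The contract and behavior attributes (`TransactionFlow`, `OperationBehavior`) are real WCF APIs; the `IOrderService` contract and `PlaceOrder` operation are hypothetical names for illustration, and the binding must also enable flow (e.g. `wsHttpBinding` with `transactionFlow="true"`).

```csharp
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    // Allow a transaction started by the caller to flow into this operation.
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void PlaceOrder(int orderId);
}

public class OrderService : IOrderService
{
    // Enlist the service work in the flowed (or a new) transaction.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void PlaceOrder(int orderId)
    {
        // Database writes here participate in the caller's transaction.
    }
}

// In the integration test: start a scope, call the service through the
// client proxy, and never call Complete(), so the service's writes are
// rolled back when the scope is disposed.
//
// using (var scope = new TransactionScope())
// {
//     client.PlaceOrder(42);
// } // rolled back here
```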
I've just encountered an error in my app that probably could have been caught with some integration tests, so I think it's high time I wrote some!
My question relates to the setup of these tests, and at what layer of the code you run the tests against.
Setup
Considering I should have lots of integration tests, I don't want to be creating and dropping a test database for every test; this would be epically slow (even if it's an in-memory SQLite one). My thoughts are:
Have a test db which sits next to my dev db
Before testing, run some reset script which will set up my schema correctly and insert any necessary data (not test case specific)
Simply use this test DB as if it's the real DB.
However, it seems very wasteful that I have to run my Fluent NHib configuration in every [SetUp]. Is this just tough? What are my options here?
My session is currently wrapped in a UoW pattern, with creation and destruction performed on begin_request and end_request (MVC web application) respectively. Should I modify this to play well with the tests to solve this problem?
Testing
When it comes to actually writing some tests, how should I do it?
Should I test from the highest level possible (my MVC controller actions) or from the lowest (repositories)?
If I test at the lowest level, I'll have to hard-code all the data manually. That makes my tests brittle to changes in the code, and also not representative of what will really happen at runtime. If I test at the highest level, I have to run all my IoC container setup so that dependencies get injected and the whole thing functions (again, repeating this in every [SetUp]?).
Meh! I'm lost, someone point me in the right direction!
Thanks
In regards to creating the session factory, I create a class called _AssemblyCommon in my test project and expose the session factory as a static from there. The class is marked with NUnit's [SetUpFixture] attribute, and a setup method inside it configures the session factory.
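A minimal sketch of that _AssemblyCommon idea, assuming Fluent NHibernate with SQLite: NUnit runs a [SetUpFixture] once for the enclosing namespace, so the expensive configuration happens a single time instead of in every [SetUp]. The mapping assembly and connection string are placeholders (NUnit 3 uses [OneTimeSetUp]; in NUnit 2.x the method would be marked [SetUp]).

```csharp
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NUnit.Framework;

[SetUpFixture]
public class _AssemblyCommon
{
    // Tests grab the factory from this static property.
    public static ISessionFactory SessionFactory { get; private set; }

    [OneTimeSetUp]
    public void ConfigureSessionFactory()
    {
        // Built once per test run, not once per test.
        SessionFactory = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.UsingFile("test.db"))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<_AssemblyCommon>())
            .BuildSessionFactory();
    }
}
```

Individual tests then open sessions with `_AssemblyCommon.SessionFactory.OpenSession()` in their own [SetUp].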
In general integration tests should cover CRUD operations in the repositories. To do this, I have one test method per object (or aggregate root) and do an insert, retrieve, update, and delete all within that method. I also test any cascade deletes that I've defined. Keeping these operations in a single method doesn't leave any traces behind in the database. I do have some integration tests that leave behind test data but that hasn't been a problem.
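The insert/retrieve/update/delete round trip described above might look like this sketch. The `NHibernateGenericRepository<T>` type appears elsewhere in this thread, but the `NewsItem` properties and the `Save`/`GetById`/`Delete` method names are assumptions standing in for your own repository API.

```csharp
using NUnit.Framework;

[TestFixture]
public class NewsItemRepositoryTests
{
    [Test]
    public void NewsItem_crud_round_trip()
    {
        // Hypothetical repository wired to the test session factory.
        var repository = new NHibernateGenericRepository<NewsItem>(/* ISession */);

        var item = new NewsItem { Title = "created" };
        repository.Save(item);                        // insert

        var loaded = repository.GetById(item.Id);     // retrieve
        Assert.That(loaded.Title, Is.EqualTo("created"));

        loaded.Title = "updated";
        repository.Save(loaded);                      // update

        repository.Delete(loaded);                    // delete: no trace left behind
        Assert.That(repository.GetById(item.Id), Is.Null);
    }
}
```

Because the method ends by deleting what it created, the database is back in its starting state even without a surrounding transaction.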
Your higher-level operations should be unit tested, mocking the repositories (I use Moq) where possible.
In my current MVC app I've found that it's enough to test the interaction between the repositories and the database. More than anything, this is to iron out any wrinkles in the NHibernate mapping. Everything (ok I'm exaggerating when I say everything) above that layer is unit tested in isolation. I did have some integration tests from the controllers all the way down the stack to the database and these used an IoC container (StructureMap) to build the controllers and the dependencies, but I found these tests were really not adding anything and they were quite an overhead to maintain, so I've removed them from the 'integration' tests for now - I may find a reason to put them in but so far I haven't.
Anyway, the test process I use works like this:
The build process for the data access layer test assembly creates the test database via a Fluent NHibernate ExposeConfiguration()/SchemaExport call. The build process then runs some NHibernate repository-level code to populate the reference tables in the database.
Each integration test is then wrapped in a System.Transactions.TransactionScope using() statement, and Complete() is never called on the TransactionScope, so each test runs in isolation: the results can be set up and verified within the using() scope through an ISession without altering the state of any other test's data. e.g.
using (new TransactionScope())
{
    // This is an NHibernate ISession - set up any dependencies for the repository tests here
    var session = NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting();

    // Repository test code goes here - the repository uses a different ISession
    var testRepository = new NHibernateGenericRepository<NewsItem>(NHibernateTestSessionCreator.GetNHibernateSessionForIntegrationTesting());

    // Validate the results here directly with the first ISession
}
// at this point the transaction is rolled back and we haven't changed the test data
This means that I don't have to modify the UnitOfWork implementation I'm using - the transaction is rolled back at a higher level.
How do I write unit tests for the web methods of a web service using NUnit?
The web methods in this application add, update, and delete records in the database.
The unit test should verify that a record has been inserted; the web method calls a method in the data access layer to perform this action.
I do not think it's appropriate to test the end result of your web service with a unit test. What you are trying to do is called an "integration test", not a unit test.
What you can do, however, is to:
Write unit tests to check if your data access layer (DAL) is working properly
Write unit tests to see if your web method is properly accessing your DAL
You might also want to look at a question I raised before: How do I unit test persistence? to provide you more insight.
If you really are adamant about doing this, however, it is possible to create such tests using MbUnit, which has a Rollback attribute.
[Test, Rollback]
public void Test_database_persistence()
{
    // any database access you perform here will be put inside a transaction
    // and rolled back afterwards
}
MbUnit is totally compatible with NUnit, so you could still use tests you've already written with NUnit.