Sharing a database connection within a Symfony2 functional test between test code and code under test - SQLite

I'm trying to use SQLite in-memory storage to improve the speed of my functional test suite. At the moment it's not possible to take advantage of this feature, as explained here:
https://github.com/doctrine/dbal/issues/2901
So I'm trying to find a workaround. The way I see it, I should get satisfactory results if I can ensure a single connection is used for the DB set-up (loading fixtures), within the app being tested (for writing the actual data into the DB), and for asserting the DB writes. I control the connection for the set-up and for the assert-reads manually, but I need a way to ensure my client uses the same connection as well. I'm also using Liip\FunctionalTestBundle. Is there a way to inject a connection into the client created by this line:
$this->client = $this->makeClient();
without any hacking?
Doctrine\DBAL\Driver\PDOSqlite\Driver::connect
is being called multiple times; I need it to run just once, without changing any of the production code.

Related

Isolating database operations for integration tests

I am using NHibernate and ASP.NET/MVC. I use one session per request to handle database operations. For integration testing I am looking for a way to have each test run in an isolated mode that does not change the database or interfere with other tests running in parallel: something like a transaction that can be rolled back at the end of the test. The main challenge is that each test can make multiple requests. If one request changes data, the next request must be able to see those changes, and so on.
I tried binding the session to the auth cookie to create child sessions for the subsequent requests of a test, but that does not work well, as neither sessions nor transactions are thread-safe in NHibernate (it results in trying to open multiple DataReaders on the same connection).
I also checked whether TransactionScope could be a way, but could not figure out how to use it from multiple threads/requests.
What could be a good way to make this happen?
I typically do this by operating on different data.
For example, say I have an integration test which checks a basket total for an e-commerce website.
I would create a new user, activate it, add some items to a basket, calculate the total, assert on whatever I need, and delete all the created data.
So the flow is: create the data you need, operate on it, check it, delete it. This way all the tests can run in parallel and don't interfere with each other, plus the data is always cleaned up.
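To make that concrete, here is a minimal MSTest-style sketch of the create / operate / check / delete flow. UserService, BasketService and their methods are hypothetical placeholders, not part of any real code base mentioned above.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BasketTotalIntegrationTests
{
    [TestMethod]
    public void Total_reflects_added_items()
    {
        // Hypothetical services that talk to the real database.
        var users = new UserService();
        var baskets = new BasketService();

        // Create: a dedicated user, so parallel tests never touch each other's data.
        var user = users.CreateAndActivate("test-" + Guid.NewGuid());
        try
        {
            // Operate: add some items to this user's basket.
            baskets.AddItem(user.Id, "ABC", quantity: 2, unitPrice: 10m);
            baskets.AddItem(user.Id, "XYZ", quantity: 1, unitPrice: 5m);

            // Check: the total should reflect exactly what this test created.
            Assert.AreEqual(25m, baskets.GetTotal(user.Id));
        }
        finally
        {
            // Delete: clean up everything this test created.
            baskets.Clear(user.Id);
            users.Delete(user.Id);
        }
    }
}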

Automated testing of an Orchestration

I have an orchestration which polls data from a database (which is actually used by an ERP, so I am not able to manipulate data in this database). Once the polling port finds matching data, the orchestration executes and sends the data to a third-party web service.
The logic used in this orchestration is complicated and prone to change, so it's important to cover it with a proper set of tests. I have been thinking about this for a while and have even considered splitting it into three components, so that:
The first part (it could be just two ports) reads the data from the database and puts it into a folder
The second one (the current orchestration) uses a file port to read the data dumped by the first component and dumps the resultant file to another folder
The third component reads the file dumped by the second component and sends it to the web service
However, I have a few concerns:
Is this a frowned-upon practice when it comes to BizTalk, or is it a normal way to do things?
Performance - would it be significantly slower compared to the current solution?
We are currently using one of the servers to run the tests and do the build with BTDF and Jenkins. Is there a way to disable components 1 and 3, run the tests, and re-enable them once the build is completed so that everything can function normally?
You can avoid the overhead of writing to and reading from files by using the built-in functionality of the MessageBox. The first place to start is here: https://msdn.microsoft.com/en-us/library/aa949234.aspx
There is an excellent BizTalk sample which shows how you can use this approach to modularise your functionality into a set of orchestrations that independently read from and write to the MessageBox. It's referenced at the bottom of the previous page and is called "Direct Binding to the MessageBox Database in Orchestrations".
I'd recommend against the file-based approach. You'd be better off making the three orchestrations direct-bound to the MessageBox and subscribing to the messages published by the previous orchestration. You could also create send ports that subscribe to these messages, or just use the management console to debug the messages.
You can also write unit tests for your various tasks. If you're doing some work in a .NET helper library, you can have a plain old unit test project. You might also want to look into the BizUnit framework (https://bizunit.codeplex.com/) - it takes a little doing to get used to, but it's a great resource for writing BizTalk unit tests.
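To illustrate that last point, here is a minimal sketch of a plain unit test for a .NET helper library called from an orchestration. OrderDiscountCalculator and its discount rule are hypothetical placeholders for whatever logic your orchestration delegates to a helper assembly.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderDiscountCalculatorTests
{
    [TestMethod]
    public void Orders_above_the_threshold_get_a_ten_percent_discount()
    {
        // Hypothetical helper that the orchestration calls from an expression shape.
        var calculator = new OrderDiscountCalculator();

        decimal total = calculator.ApplyDiscount(1200m);

        Assert.AreEqual(1080m, total);
    }
}

Plain tests like this run on the build server without touching BizTalk at all; BizUnit comes in when you want to exercise the orchestration end to end.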

Ruby-on-Rails - Conditional database connection depending on action

I am no master, but I have been using Ruby on Rails for quite a few years now and consider myself well-versed in it. Additionally, I have been working as a web developer for the last 10 years, starting with .NET.
In .NET we used to create a database connection manually before firing any query or opening a transaction. Rails, on the other hand, while spawning a new thread for a request, runs a bag of initialization steps which include setting up a database connection.
Now we are working on a project where we may not need a DB connection for every action. Is it somehow possible to override the default DB connection behaviour and set it up per action (in a before_filter maybe)?
PS: Another way I thought of is creating an additional Sinatra web application which houses all such actions, and using it instead to do the work or get the data.
Ehm, where did you read that Rails sets up a database connection for every request? My understanding is that a connection is checked out from the connection pool when needed.
Also, I'm surprised this is a big issue! If you don't need to hit the database (which implies no authentication, right?), then you should be caching the entire response, server-side and client-side.
Check out the guide on caching: http://guides.rubyonrails.org/caching_with_rails.html and Dalli https://github.com/mperham/dalli
Separating the client app from the data layer (so Rails on top of an API) is a nice architecture I've used for a project with success. I'd suggest Grape instead of Sinatra, however.

Entity Framework listening to SQL Server changes

I'm working on the following scenario:
I have a console app that populates a SQL Server database with some data. I also have a web app that reads the same database and displays the data on a front-end. Both applications use Entity Framework to communicate with the database (they have the same connection string).
I wonder how the web app can be notified of any changes that occur in the database. Bear in mind that the two applications do not reference each other whatsoever.
Is there an event provided by EF that fires when something has changed? In essence, I would like to know when a change has happened, as well as the nature of that change.
I had a similar requirement and I solved it using the EF function:
[context].Database.CompatibleWithModel(throwIfNoMetadata: true)
It will return whether your model matches the underlying database structure, using the metadata table.
Note that I was using a Code First approach.
The MSDN definition is here:
http://msdn.microsoft.com/en-us/library/system.data.entity.database.compatiblewithmodel(v=vs.103).aspx
Edit:
Just found an amazing article with a demonstration:
http://blog.oneunicorn.com/2011/04/08/code-first-what-is-that-edmmetadata-table/
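For illustration, here is a minimal sketch of that check. MyContext is a hypothetical Code First DbContext pointing at the database in question; only CompatibleWithModel itself comes from the answer above.

using System;
using System.Data.Entity;

// Hypothetical Code First context for the shared database.
public class MyContext : DbContext { }

public static class SchemaCheck
{
    public static void Run()
    {
        using (var context = new MyContext())
        {
            // Compares the Code First model against the metadata stored in the database;
            // throws if no metadata is found because throwIfNoMetadata is true.
            bool matches = context.Database.CompatibleWithModel(throwIfNoMetadata: true);
            Console.WriteLine(matches ? "Model matches the database." : "Model and database differ.");
        }
    }
}

Note, though, that this detects model/schema drift rather than row-level data changes.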
This is not something that is related to EF at all. EF is just a library that makes SQL calls and maps them to objects. It has no inside knowledge of the database. As such, when data changes in one application, another application doesn't know unless it queries to see whether that data has changed (and you're not going to be constantly running queries for that; it's too impractical).
There are, potentially some ways to do this, such as adding triggers to the database, which then call extended stored procs to send messages to the app, but this is a lot of work to go through, and it can possibly compromise the robustness of the database.
There used to be something called Notification Services, but that was deprecated. There's now something called SqlDependency objects, which may help you in some cases... but it all depends on what you're trying to do exactly.
In any event, it's usually easier to find a different way to do what you want. This is a complex topic, and really requires a lot of SQL Server knowledge.
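If SqlDependency does fit your case, here is a minimal sketch of how it might be wired up. The connection string, table, and column names are placeholders, and the monitored database needs Service Broker enabled; none of this comes from the answer above.

using System;
using System.Data.SqlClient;

class ChangeListener
{
    // Placeholder connection string for the shared database.
    const string ConnectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

    static void Main()
    {
        SqlDependency.Start(ConnectionString);
        RegisterDependency();

        Console.WriteLine("Listening for changes, press Enter to quit...");
        Console.ReadLine();

        SqlDependency.Stop(ConnectionString);
    }

    static void RegisterDependency()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            // The query must follow the notification rules: two-part table name, explicit column list.
            "SELECT Id, Name, Price FROM dbo.Products", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                Console.WriteLine("Change detected: {0}", e.Info);
                RegisterDependency(); // a dependency fires only once, so re-subscribe
            };

            connection.Open();
            command.ExecuteReader().Dispose(); // executing the command registers the notification
        }
    }
}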

Unit Testing an MVC4 Application Service Layer

I've spent the past two days starting and restarting this learning process because I really don't know how to get started.
I have an MVC4 application with three layers: Web, Services, and Core. Controllers send requests to the Service layer, which provides info that the controllers use to hydrate the viewModels for the view.
I have the following methods in my service layer:
public interface ISetServices
{
    List<Set> GetBreadcrumbs(int? parentSetId);
    Set GetSet(int? setId);
    Set CreateSet(string name, string details, int? parentSetId);
    void DeleteSet(int? setId);
    Card GetCard(int? cardId);
    Card CreateCard(List<string> sides, string details, int? parentSetId);
    void DeleteCard(int? cardId);
    Side GetSide(int? sideId);
    List<String> GetSides(Card card);
    Side CreateSide(Card card, string content);
    void DeleteSide(int? sideId);
}
I'm trying to figure out how I can create a Unit Test Class Library to test these functions.
When the tests are run, I would like a TestDatabase to be dropped (if it exists) and recreated, and seeded with data. I have a "protected" seed method in my Core project along with a - can I use this? If so, how?
Everything I read says never to use a database in your tests, but then I can't quite figure out what the point of a test is. These services are for accessing and updating the database... don't I need a database to test?
I've created a Project.Services.Tests unit testing project, but don't know how to wire everything up. I'd like to do it with code and not configuration files if possible... any examples or pointers would be MUCH appreciated.
There are many aspects to this problem, let me try to approach some:
Unit testing is about testing a unit of code, the smallest possible testable piece of code; testing a unit of code together with its interaction with the database is an integration-testing problem.
One approach to this problem is the repository pattern: an abstraction layer over your data access layer. Your service interface looks more like a repository pattern implementation; google around about it more.
Some people do not test the internals of the repository pattern; they just assert calls against its interface (see the sketch after this list). Database tests are considered an integration-testing problem.
Some people hit their database directly by writing SetUp and TearDown steps in their unit tests, where usually you insert the appropriate data in SetUp and TearDown cleans it all up back to the previous state. Be aware, though: such tests can get pretty slow and make your unit testing a pain.
Another approach is to configure your tests to use a different database, for example SQL CE. For some ORMs database swapping can be quite easy. This is faster than hitting the 'full' database and seems cleaner, but database implementations have differences that sooner or later will surface and make your unit testing painful...
Currently, with the rise of NoSQL solutions, accessing the database directly can be very easy, because they quite often have in-memory counterparts (like RavenDB).
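As an illustration of asserting calls against the interface (rather than hitting a database), here is a minimal sketch using Moq. Only ISetServices comes from the question; SetsController and its Details action are hypothetical.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class SetsControllerTests
{
    [TestMethod]
    public void Details_asks_the_service_for_the_requested_set()
    {
        // Mock the service interface; no database is involved anywhere.
        var services = new Mock<ISetServices>();
        services.Setup(s => s.GetSet(42)).Returns(new Set());

        // Hypothetical controller that takes ISetServices in its constructor.
        var controller = new SetsController(services.Object);

        controller.Details(42);

        // Assert the call against the interface instead of checking database state.
        services.Verify(s => s.GetSet(42), Times.Once());
    }
}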
I realize it might be a bit overwhelming at the beginning, but again, this problem has many aspects. How about you post your source code to GitHub and share it here?
