Unit Testing an MVC4 Application Service Layer - asp.net

I've spent the past two days starting and restarting this learning process because I really don't know how to get started.
I have an MVC4 application with three layers: Web, Services, and Core. Controllers send requests to the Service layer, which provides the info the controllers use to hydrate the view models for the view.
I have the following methods in my service layer:
public interface ISetServices
{
    List<Set> GetBreadcrumbs(int? parentSetId);
    Set GetSet(int? setId);
    Set CreateSet(string name, string details, int? parentSetId);
    void DeleteSet(int? setId);
    Card GetCard(int? cardId);
    Card CreateCard(List<string> sides, string details, int? parentSetId);
    void DeleteCard(int? cardId);
    Side GetSide(int? sideId);
    List<string> GetSides(Card card);
    Side CreateSide(Card card, string content);
    void DeleteSide(int? sideId);
}
I'm trying to figure out how I can create a Unit Test Class Library to test these functions.
When the tests are run, I would like a test database to be dropped (if it exists), recreated, and seeded with data. I have a "protected" Seed method in my Core project - can I use this? If so, how?
Everywhere I read says never to use a database in your tests, but then I can't quite figure out what the point of a test is. These services exist to access and update the database... don't I need a database to test them?
I've created a Project.Services.Tests unit testing project, but don't know how to wire everything up. I'd like to do it with code and not configuration files if possible... any examples or pointers would be MUCH appreciated.

There are many aspects to this problem; let me try to address a few:
Unit testing is about testing a unit of code - the smallest testable piece of code possible. Testing a unit of code together with its database interaction is an integration testing problem.
One approach to this problem is the repository pattern - an abstraction layer over your data access layer. Your service interface already looks a lot like a repository implementation; it's worth reading up on the pattern.
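To make that concrete, here is a minimal sketch in C# (all type and member names here are hypothetical, not from your post): the service depends on a small repository interface instead of talking to the database directly, which is what makes it testable in isolation.

using System;

public class Set
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical abstraction over the data access layer.
public interface ISetRepository
{
    Set GetById(int id);
    void Remove(Set set);
    void SaveChanges();
}

// The service depends on the abstraction, so a test can substitute
// a fake or a mock instead of a real database.
public class SetServices
{
    private readonly ISetRepository _sets;

    public SetServices(ISetRepository sets)
    {
        _sets = sets;
    }

    public void DeleteSet(int? setId)
    {
        if (setId == null) throw new ArgumentNullException("setId");
        _sets.Remove(_sets.GetById(setId.Value));
        _sets.SaveChanges();
    }
}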
Some people do not test the internals of the repository pattern; they just assert calls against its interface, since database tests are considered an integration testing problem.
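For example, with a mocking library such as Moq (reusing the hypothetical types from the sketch above), a unit test never touches a database at all; it only verifies that the service makes the right calls:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class SetServicesTests
{
    [TestMethod]
    public void DeleteSet_RemovesTheSetAndSaves()
    {
        // Arrange: a mock of the repository interface, no database involved.
        var set = new Set { Id = 42 };
        var repo = new Mock<ISetRepository>();
        repo.Setup(r => r.GetById(42)).Returns(set);

        var services = new SetServices(repo.Object);

        // Act
        services.DeleteSet(42);

        // Assert: the service made the expected calls against the interface.
        repo.Verify(r => r.Remove(set), Times.Once());
        repo.Verify(r => r.SaveChanges(), Times.Once());
    }
}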
Some people hit their database directly by writing SetUp and TearDown steps in their tests: SetUp inserts the appropriate data, and TearDown cleans everything back up to the previous state. Be aware, though - these tests can get pretty slow and make your test runs a pain.
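If you do go that route with EF Code First, the drop/recreate/seed cycle you asked about can be done in code with a database initializer - a sketch, with hypothetical context and entity names (your existing Seed method from the Core project could be called from the Seed override):

using System.Data.Entity;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class FlashcardsContext : DbContext
{
    public FlashcardsContext(string nameOrConnectionString)
        : base(nameOrConnectionString) { }

    public DbSet<Set> Sets { get; set; }
}

// Drops and recreates the database on initialization, then seeds it.
public class TestInitializer : DropCreateDatabaseAlways<FlashcardsContext>
{
    protected override void Seed(FlashcardsContext context)
    {
        context.Sets.Add(new Set { Name = "Seeded set" });
        context.SaveChanges();
    }
}

[TestClass]
public class SetServicesIntegrationTests
{
    private FlashcardsContext _context;

    [TestInitialize]
    public void SetUp()
    {
        Database.SetInitializer(new TestInitializer());
        // A bare name is resolved by EF's default connection factory;
        // a full connection string works too.
        _context = new FlashcardsContext("FlashcardsTest");
        _context.Database.Initialize(force: true);
    }

    [TestCleanup]
    public void TearDown()
    {
        _context.Dispose();
    }
}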
Another approach is to configure your tests to use a different database - for example SQL Server Compact (SQL CE). With some ORMs, swapping databases is quite easy. This is faster than hitting a 'full' database and seems cleaner, but database implementations have differences that sooner or later will surface and make your unit testing painful...
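With EF, swapping can be as simple as handing the same context a different connection string in the tests (a sketch, building on the FlashcardsContext above; the connection string is illustrative):

public static class TestContextFactory
{
    public static FlashcardsContext Create()
    {
        // Illustrative: a disposable LocalDB database. A SQL CE connection
        // string would work the same way, provided the SQL CE provider is
        // registered.
        return new FlashcardsContext(
            @"Data Source=(localdb)\v11.0;Initial Catalog=FlashcardsTest;Integrated Security=True");
    }
}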
Currently, with the rise of NoSQL solutions, hitting the database directly can be very easy, because they quite often have in-memory counterparts (RavenDB, for example).
I realize it might be a bit overwhelming at the beginning, but again, this problem has many aspects. How about you post your source code to GitHub and share it here?

Related

Sharing database connection within Symfony2 functional test between test code and code being tested

I'm trying to use SQLite in memory storage to improve speed of my functional tests suite. At the moment it's not possible to take advantage of this feature, as explained here:
https://github.com/doctrine/dbal/issues/2901
So I'm trying to find a workaround. The way I see it, I should get satisfactory results if I can ensure a single connection is used for the db set-up (loading fixtures), within the app being tested (for writing the actual data into the db), and for asserting the db writes. I control the connection for set-up and for assert-reads manually, but I need a way to ensure my client uses the same connection as well. I'm also using Liip\FunctionalTestBundle. Is there a way to inject a connection into the client created by this line:
$this->client = $this->makeClient();
without any hacking? Doctrine\DBAL\Driver\PDOSqlite\Driver::connect
is being called multiple times; I need it to run just once, without changing any of the production code.

Confusion over Symfony testing practices: functional vs integration testing

The Symfony testing documentation doesn't really mention a distinction between functional tests and integration tests, but my understanding is that they are different.
The Symfony docs describe functional testing like this:
Make a request;
Test the response;
Click on a link or submit a form;
Test the response;
Rinse and repeat.
While the Ruby on Rails docs describe it like this:
was the web request successful?
was the user redirected to the right page?
was the user successfully authenticated?
was the correct object stored in the response template?
was the appropriate message displayed to the user in the view?
The Symfony docs seem to be describing something more akin to integration testing. Clicking links, filling out forms, submitting them, etc. You're testing that all these different components are interacting properly. In their example test case, they basically test all actions of a controller by traversing the web pages.
I am confused why Symfony does not make a distinction between functional and integration testing. Does anyone in the Symfony community isolate tests to specific controller actions? Am I overthinking things?
Unit testing refers to testing the methods of a class, one by one, and checking that they make the right calls in the right context. If those methods use dependencies (injected services, or even other methods of the same class), we mock them to isolate the test to the current method only.
Integration testing refers to automatically testing a feature of your application: checking that all possible usage scenarios for the given feature work as expected. For such tests you basically use the crawler to simulate a website user working through the feature, then check that the resulting page, or even the resulting database data, is consistent.
Functional testing refers to manually challenging the application's usability, in a preproduction environment. A Quality Assurance team rolls out scenarios to check whether your website works as expected. Manual testing gives you feedback you can't get automatically, such as "this button is ugly" or "this feature is too complex to use" - whatever other subjective feedback a human (who will generally think like a customer) can give.
The way I see it, the two lists don't contradict each other. The first list (Symfony) can be seen as a method for providing answers to the second list (Rails).
Both lists sound like functional testing to me: they use the application as a whole to determine whether it satisfies the requirements. The second list (Rails) describes typical questions for determining whether requirements are met; the first list (Symfony) offers a method for answering those questions.
Integration tests are more focused on how units work together. Say you have a repository unit that depends on a database abstraction layer. A unit test can make sure the repository itself functions correctly by stubbing/mocking out the database abstraction layer. An integration test will use both units to see if they actually work together as they should.
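To make that distinction concrete (in C#, to match the original question above; all names here are hypothetical):

using System.Collections.Generic;

// The dependency: a thin database abstraction layer.
public interface IDatabaseGateway
{
    IEnumerable<string> Query(string sql);
}

// The unit under test.
public class UserRepository
{
    private readonly IDatabaseGateway _db;

    public UserRepository(IDatabaseGateway db) { _db = db; }

    public IEnumerable<string> AllUserNames()
    {
        return _db.Query("SELECT name FROM users");
    }
}

// A unit test stubs IDatabaseGateway and checks UserRepository in isolation.
// An integration test wires UserRepository to the real gateway and a real
// database, and asserts that the two units actually work together.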
Another example of integration testing is using a real database to check that the correct tables/columns exist and that queries deliver the results you expect - but also that when a logger is called to store a message, the message really ends up in a file.
PS: Functional testing that actually uses a (headless) browser is often called acceptance testing.
Both referenced docs describe functional tests. Functional tests are performed from the user's perspective (typically at the GUI layer): they test what the user will see, and what will happen if the user submits a form or clicks some button. It does not matter whether the process is automatic or manual.
Then there are integration and unit tests, which sit at a lower level. The basic premise of unit tests is that they are isolated: you test a particular object and its methods, but without external or real dependencies. This is what mocks are for (basically, a mock simulates a real object to whatever extent the unit test needs). Without an understanding of IoC it is really hard to write isolated tests.
If you write tests using real/external dependencies (no mocks), you are writing integration tests. Integration tests can cover the cooperation of two objects, or a whole package/module, including querying the database, sending mails, etc.
Yes, you are overthinking things ;)
I don't know why Symfony or Ruby on Rails state things like that. Sometimes the classification of a test depends on the eye that's looking at it. Bottom line: the name doesn't matter. The only important thing is the confidence the test gives you in what you are doing.
Apart from that, tests are alive and should evolve with your code. Sometimes I test only for a specific HTTP status code; other times I isolate a module and unit test it... it depends on the time I have to spend, the benefits, etc.
If I have a piece of code that is only used in a controller, I usually go for a functional test. If I'm writing a utility, I usually go for unit testing.

Entity Framework listening to SQL Server changes

I'm working on the following scenario:
I have a console app that populates a SQL Server database with some data, and a web app that reads the same database and displays the data on a front end. Both applications use Entity Framework to communicate with the database (they have the same connection string).
I wonder how the web app can be notified of any changes that have occurred in the database. Bear in mind that the two applications do not reference each other whatsoever.
Is there an event provided by EF that fires when something has changed? In essence, I would like to know when a change has happened, as well as the nature of that change.
I had a similar requirement and I solved it using the EF function:
[context].Database.CompatibleWithModel(throwIfNoMetadata: true)
It will return whether your model matches the underlying database structure, using the metadata table.
Note that I was using a Code First approach.
The MSDN definition is here:
http://msdn.microsoft.com/en-us/library/system.data.entity.database.compatiblewithmodel(v=vs.103).aspx
Edit:
Just found an amazing article with a demonstration:
http://blog.oneunicorn.com/2011/04/08/code-first-what-is-that-edmmetadata-table/
This is not something related to EF at all. EF is just a library that makes SQL calls and maps them to objects; it has no inside knowledge of the database. As such, when data changes in one application, another application doesn't know unless it queries to see whether that data has changed (and you're not going to be constantly running queries for that - it's too impractical).
There are, potentially, some ways to do this, such as adding triggers to the database which then call extended stored procedures to send messages to the app, but this is a lot of work to go through, and it can possibly compromise the robustness of the database.
There used to be something called Notification Services, but that was deprecated. There is now something called SqlDependency, which may help you in some cases... but it all depends on what exactly you're trying to do.
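A rough sketch of the SqlDependency route (connection string, table, and column names here are illustrative, and the database needs Service Broker enabled):

using System;
using System.Data.SqlClient;

public class ChangeListener
{
    private readonly string _connectionString;

    public ChangeListener(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Listen()
    {
        SqlDependency.Start(_connectionString);

        using (var connection = new SqlConnection(_connectionString))
        // Notification queries have restrictions: explicit column list,
        // two-part table names, no SELECT *.
        using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Items", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnDataChanged;

            connection.Open();
            // Executing the command is what registers the notification.
            command.ExecuteReader().Dispose();
        }
    }

    private void OnDataChanged(object sender, SqlNotificationEventArgs e)
    {
        // Fires once per registration; call Listen() again to re-subscribe.
        Console.WriteLine("Change detected: {0}", e.Info);
    }
}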
In any event, it's usually easier to find a different way to do what you want. This is a complex topic and really requires a lot of SQL Server knowledge.

Multi-Database Transactional System & ASP.NET MVC

So I have a challenge: build an ASP.NET MVC customer application - a site that people online can use to interact with organizations.
One of the requirements is financial processing and accounting.
I'm very comfortable using SQL Transactions and stored procedures to do this; i.e. CreateCustomer also creates an entity, and an account record. We have a stored procedure to do this, that does a begin transaction, creates some setup records we need, then does a commit. I'm not seeing a good way to do this with an ORM, and after reading some great blog articles I'm starting to wonder if I'm going down the wrong path.
Part of the complexity here is the data itself:
I'm querying x databases (one per existing customer) to get some of my data, though my app has its own data store as well. I need to query the x databases, run stored procedures on the x databases, and also hit my own data store.
I'm not seeing strong support for things like stored procedures (and thereby transactions), though some support does seem to be present.
Maybe I'm just trying to make my app a nail here, cause the MVC hammer is sooo shiny. I'm plenty comfortable with raw ADO.NET of course, but I'm in love with the expressive feel to writing Linq code in C# and I'd rather not give up on it.
Down to the question:
Is this a bad idea? Should I try to use Linq / Entity Framework, or something like nHibernate... and stick with the ORM pattern or should I trash it and use raw ADO.NET data access?
Edit: a note on scale; from a queries per second standpoint this app is not "huge". But, from a data complexity perspective, it does need to query against 50+ databases (all identical, or close to it) to read data from an external application and publish data back to that application. ORM feels right when dealing with "my" data store, but feels very wrong for accessing the data from the external application.
From a certain size (number of databases) up, you have to change the paradigm. Are you at that size?
When you deploy what is ultimately a distributed application, yet try to control it as an ordinary local application, you are going to run into a set of fundamental issues around availability, scalability, and correctness. If you reach for concepts like 'distributed transactions', 'linked servers', and 'ORM', you are down the wrong path. True distributed applications use terms like 'message', 'queue', and 'service'. Linq, EF, and nHibernate are all fine and good, but none of them brings you anything beyond what a simple Transact-SQL SELECT statement brings. In other words, if a SELECT solves your issues, then the various client-side ORMs will work. If not, they won't add any miraculous value.
I recommend you go over the slides of the SQLCAT talk "High Performance Distributed Applications in Real World Deployments", which explain how a site like MySpace manages to read and write into a store of nearly 500 servers and thousands of databases.
Ultimately, what you need to internalize is this: one database can have 95% availability (uptime and acceptable service response time), and the availability of databases that must all respond compounds multiplicatively. A system consisting of 10 databases with 95% availability each has 0.95^10, or about 59%, availability. A system of 100 databases, each with 99.5% availability, has about 60% availability, and 1,000 databases with 99.95% availability each (5 minutes of downtime per week) likewise end up around 60%. And this is the ideal situation; in reality there is always a snowball effect caused by resource consumption (e.g. threads blocked trying to access an unavailable or slow resource) that makes things far worse.
This means that one cannot write a large distributed system that relies on synchronous, tightly coupled operations and transactions. It is simply impossible. You always rely on asynchronous operations (usually messaging and queues), which is something completely different from your run-of-the-mill database application.
Use the TransactionScope object available in System.Transactions.
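A minimal sketch (the method names are illustrative, with bodies elided): everything inside the scope commits or rolls back together, and if the operations span more than one database the transaction is promoted to a distributed one, which requires MSDTC to be running.

using System.Transactions;

public class CustomerCreator
{
    public void CreateCustomer()
    {
        using (var scope = new TransactionScope())
        {
            CreateEntityRecord();   // writes to database A
            CreateAccountRecord();  // writes to database B

            // Without this call, everything rolls back when the scope
            // is disposed.
            scope.Complete();
        }
    }

    private void CreateEntityRecord() { /* SqlCommand / EF work here */ }
    private void CreateAccountRecord() { /* SqlCommand / EF work here */ }
}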
What I have chosen is to use Entity Framework to allow access to the application's main data store, and create a custom DAL for access to external application data and for access to stored procedures within the application.
Here's hoping Entity Framework 4.0 fixes the issue. For now, I'm using the approach described here:
http://social.msdn.microsoft.com/forums/en-US/adodotnetentityframework/thread/44a0a7c2-7c1b-43bc-98e0-4d072b94b2ab/

Scalable/Reusable Authorization Model

Ok, so I'm looking for a bit of architecture guidance, my team is getting a chance to re-cast certain decisions with a new feature that we're building, and I wanted to see what SO thought :-) There are of course certain things that we're not changing, so the solution would have to fit in this model. Namely, that we've got an ASP.NET application, which uses web services to allow users to perform actions on the system.
The problem comes in because, as with many systems, different users need access to different functions. Some roles have access to the Y button, others have access to both the Y and B buttons, while another still has access only to B. Most of the time that I see this, developers just put in a mish-mosh of if statements to deal with the UI state. My fear is that, left unchecked, this will become an unmaintainable mess, because in addition to the authorization logic in the GUI, it needs to be put in the web services (which are called via AJAX) to ensure that only authorized users call certain methods.
So my question to you is: how can a system be designed to reduce the random ad-hoc if statements here and there that check for specific roles, in a way that could be reused in both the GUI/WebForms code and the web service code?
Just for clarity, this is an ASP.NET web application, using webforms, and Script# for the AJAX functionality. Don't let the script# throw you off of answering, it's not fundamentally different than asp.net ajax :-)
Moving on from traditional group, role, or operation-level permissions, there is a push toward "claims-based" authorization, like what was delivered with WCF.
Zermatt is the codename for the Microsoft class library that will help developers build claims-based applications on the server and client. Active Directory will become one of the STSs (Security Token Services) an application would be able to authorize against, concurrently with your own as well as other industry-standard servers...
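(Zermatt later shipped as Windows Identity Foundation, and its model eventually landed in .NET as System.Security.Claims.) The appeal for your case is that a single claim check can replace the scattered role ifs, and the WebForms code and the web services can ask the same question - a sketch using the modern types, with a hypothetical claim type:

using System.Security.Claims;

public static class ButtonAuthorization
{
    // Hypothetical claim type identifying an action the user may perform.
    private const string ActionClaim = "http://example.com/claims/action";

    // Called from the page (to show/hide the button) and from the web
    // service (to reject unauthorized calls) alike.
    public static bool CanPerform(ClaimsPrincipal user, string action)
    {
        return user.HasClaim(ActionClaim, action);
    }
}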
In Code Complete (p. 411) Steve McConnell gives the following advice (which Bill Gates reads as a bedtime story in the Microsoft commercial).
"used in appropriate circumstances, table driven code is simpler than complicated logic, easier to modify, and more efficient."
"You can use a table to describe logic that's too dynamic to represent in code."
"The table-driven approach is more economical than the previous approach [rote object oriented design]"
Using a table-based approach you can easily add new "users" (in the modeling sense of a user/agent along with its actions). It's a good way to avoid many ifs. I've used it before in situations like yours, and it kept the code nice and tidy.
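A small sketch of what that table could look like for the Y/B button example above (role and action names are illustrative; in practice the table could live in the database, so changes require no code edits):

using System.Collections.Generic;

public static class PermissionTable
{
    // The "table": role -> permitted actions.
    private static readonly Dictionary<string, HashSet<string>> Permissions =
        new Dictionary<string, HashSet<string>>
        {
            { "Clerk",   new HashSet<string> { "B" } },
            { "Manager", new HashSet<string> { "Y", "B" } },
        };

    // One lookup shared by the GUI and the web services - no scattered ifs.
    public static bool IsAllowed(string role, string action)
    {
        HashSet<string> allowed;
        return Permissions.TryGetValue(role, out allowed)
            && allowed.Contains(action);
    }
}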
