Unit test: Real database vs. mocking - asp.net

I'm unit-testing an ASP.NET MVC 3 web application that our team is building.
The problem is that we have to mock a lot of things, and our unit tests don't cover all of the webserver- and database-related behaviour.
Example:
I have a method with the following code:
public List<Useraccount> GetUseraccounts(Company company)
{
    return company.Useraccounts.ToList<Useraccount>();
}
My developer complains that he has to inject a fake company object that he prepares himself. He'd like to have a real object from the database.
My question:
Is it possible to use a real database (it could also be SQLite/SQL Server Express or similar) in unit tests? Is this useful?
What are the pros and cons?
Without a real database we need to mock too many objects. We cannot verify, for example, that calls like this work:
Useraccount useraccount = UnitOfWork.UseraccountRepository.Get(u => u.EnableCode == enableCode && u.IsEnabled == false).Single<Useraccount>();

Testing against a real database is integration testing, not unit testing. You can still run the integration tests in the same manner as unit tests - that is, run them via NUnit or MSTest or whatever, and via a command line or some such on your build server - but there are a couple of extra steps:
You need to set up the test data, inject it into the database, run the test, then remove the test data again. In an ideal world, you'd create a test database at the start of your integration test run, run all the tests, then remove it after all integration tests are finished. This can be impractical, though.
Your integration tests will run much more slowly than unit tests. Be prepared for this by running them, for example, nightly in a build server job.
As for using SQLite or whatever, I'd say don't: use the exact type of database you're using in the real world, otherwise it's not a trustworthy test.
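A minimal sketch of the seed/run/clean-up cycle described above, using NUnit and reusing the repository names from the question (the UnitOfWork constructor, the Add/Remove methods and Save are assumptions about your data layer, not a known API):

using System.Linq;
using NUnit.Framework;

[TestFixture]
[Category("Integration")]
public class UseraccountRepositoryIntegrationTests
{
    private UnitOfWork unitOfWork;   // assumed to open a connection to the test database
    private Company testCompany;

    [SetUp]
    public void SeedTestData()
    {
        // Insert exactly the data this fixture needs into the real test database.
        unitOfWork = new UnitOfWork("TestDatabaseConnectionString");
        testCompany = new Company { Name = "Integration Test Co" };
        testCompany.Useraccounts.Add(new Useraccount { EnableCode = "ABC123", IsEnabled = false });
        unitOfWork.CompanyRepository.Add(testCompany);
        unitOfWork.Save();
    }

    [Test]
    public void Get_returns_disabled_useraccount_by_enable_code()
    {
        Useraccount useraccount = unitOfWork.UseraccountRepository
            .Get(u => u.EnableCode == "ABC123" && u.IsEnabled == false)
            .Single();

        Assert.AreEqual("ABC123", useraccount.EnableCode);
    }

    [TearDown]
    public void RemoveTestData()
    {
        // Remove what the test inserted so the run stays repeatable.
        unitOfWork.CompanyRepository.Remove(testCompany);
        unitOfWork.Save();
        unitOfWork.Dispose();
    }
}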

Try the Effort tool. Effort is a powerful tool that enables a convenient way to create automated tests for Entity Framework-based applications.
It is basically an ADO.NET provider that executes all the data operations on a lightweight in-process main-memory database instead of a traditional external database. It also provides some intuitive helper methods that make it really easy to use this provider with existing ObjectContext or DbContext classes. A simple addition to existing code might be enough to create data-driven tests that can run without the presence of an external database.
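For example, with EF6 you can hand your context a transient in-memory connection from Effort. A sketch (MyContext and its entity sets stand in for your own context, which needs a constructor that accepts an existing connection):

using System.Data.Common;
using System.Data.Entity;
using System.Linq;
using NUnit.Framework;

public class MyContext : DbContext
{
    // EF6 can wrap an existing connection; "true" means the context disposes it.
    public MyContext(DbConnection connection) : base(connection, contextOwnsConnection: true) { }

    public DbSet<Company> Companies { get; set; }
    public DbSet<Useraccount> Useraccounts { get; set; }
}

[TestFixture]
public class EffortBackedTests
{
    [Test]
    public void Can_save_and_query_without_an_external_database()
    {
        // Effort supplies an ADO.NET connection to a transient in-memory database.
        DbConnection connection = Effort.DbConnectionFactory.CreateTransient();
        using (var context = new MyContext(connection))
        {
            context.Companies.Add(new Company { Name = "Test Co" });
            context.SaveChanges();

            Assert.AreEqual(1, context.Companies.Count());
        }
    }
}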

Related

What's the purpose of an integration server?

I'm new to DevOps, so forgive me if this is trivial, but given the following workflow, what is the purpose of the integration server?
I've been given the following steps as an example of an approach to DevOps at my organisation:
Developers check in changes to source control (TFS).
Build server checks for changes.
Artefacts of the build are deployed to an "integration server" which has a copy of our ERP on it.
A release management application takes the output from this ERP environment and moves it to the test, pre-production, and production environments as and when needed.
Is this approach correct, and if so, is the purpose of an integration server merely to provide a working implementation of the code that isn't accessed for any purpose other than moving code onto other servers?
My answer makes some assumptions about what appears to be going on in your environment.
When you check in changes to source control with AX, it adds *.xpo text files containing only the code/objects you changed.
It sounds like your "integration server" is a build/staging server. Imagine these two scenarios:
You have a customization with 3 objects, and you add 2 of the objects to source control and forget one. When you build on the integration server, you could get compile errors because of that missing dependent object.
In your development environment, you create test forms and jobs that are basically junk you are experimenting with. You do not add these objects to source control. You wouldn't want this code to be deployed to your other environments, so the integration environment ensures the code is strictly from the repo.
Doing full compiles/syncs against the integration server will also help identify issues. Then you can deploy that environment in its entirety to your other environments.
The big thing to realize is that your repo really only contains your changes to the base (sys/syp) code. So part of the integration/build process is combining your code with the base code.

How do I submit test results to Microsoft Test Management Server via HTTP?

My company uses Microsoft Test Management Server to host its tests and results. For manual tests this works fine: a QA engineer runs the test and marks its status. I have been tasked with writing some automated tests, and I need them to submit results to the server. I know there is a code API, but I want to do this from a non-.NET test environment (I am going to use AutoIt), so I would like to submit results via an HTTP API. How can I do this? Where can I find some good examples? Or is there a better way? We are very much an MS TFS shop, so whatever I do needs to fit into that environment.
Thank you!
Microsoft has a UI automation framework and API called CodedUI that is fully integrated with Microsoft Test Manager.
You can:
Generate test automations from Action recordings of manual tests
Generate test automations in Visual Studio
Code test automations from scratch in VS
These automations can then be associated with a Test Case in MTM and pushed to test environments manually or via an API. I usually have new code built, deployed, and tested automatically using these techniques.
You can also plug other UI frameworks into this model.
If you want to submit test results from a non-Windows environment then you should use the cross-platform API. As part of Team Explorer Everywhere you get both a command-line client and a Java-based object model for manipulating TFS.
http://www.microsoft.com/en-us/download/details.aspx?id=47727
I should note that the API for constructing test result submissions is quite complex, as your tests in MTM are part of a hierarchy of Suites and Plans, and each Test Case can exist in more than one location. You will need to create a Test Run and populate it with the appropriate data.

Shared test fixture for Meteor Velocity (Cucumber and Jasmine)

How do I share a fixture between my Cucumber and Jasmine tests?
I can create a fixture with one Jasmine server integration test that can be used by other Jasmine server integration tests. But (due to the different "mirrors", I guess?) I cannot use the same fixture in a Cucumber test. The Mongo collection does not have the data created by the Jasmine server integration tests.
One option is to save the state to a flat file, or use nock or something similar outside of Meteor. But it would be a lot simpler to reference a common collection (on the same mirror?) for test fixtures. Is this possible?
You can use the package-fixture pattern for fixtures to achieve what you're asking for. See here: https://github.com/meteor-velocity/velocity#fixtures--test-data
Any packages that you create with the debugOnly flag in the package descriptor will not be bundled in production.
Everything is possible. However, I do not recommend making tests depend on each other. As Wikipedia states:
Ideally, each test case is independent from the others.
A few reasons why your tests should be independent:
Easier to narrow down the problem when tests fail (if tests depend on each other, some failures happen simply because a predecessor failed)
Allows for parallelisation to reduce total test run time (as your test suite grows)
Currently Velocity hard-codes port 5000 for the test mirror instance of your app, but I know that there are efforts to make this port configurable (which would have to be supported by the test frameworks themselves).
The summary answer to this is: shared runtime state between test tools is not supported usage (although both can execute code from the same fixture package). The usage I am going for is not a conventional pattern and involves some sort of dependency between tests.
To get what I was going for I had to write my own tool. What I wanted was basically a wrapper around nock to help me generate test fixtures by recording the results of my e2e tests with integrations turned on.

Integration Testing best practices

Our team has hundreds of integration tests that hit a database and verify results. I've got two base classes for all the integration tests, one for retrieve-only tests and one for create/update/delete tests. The retrieve-only base class regenerates the database during the TestFixtureSetup so it only executes once per test class. The CUD base class regenerates the database before each test. Each repository class has its own corresponding test class.
As you can imagine, this whole thing takes quite some time (approaching 7-8 minutes to run and growing quickly). Having this run as part of our CI (CruiseControl.Net) is not a problem, but running locally takes a long time and really prohibits running them before committing code.
My question is are there any best practices to help speed up the execution of these types of integration tests?
I'm unable to execute them in-memory (a la sqlite) because we use some database specific functionality (computed columns, etc.) that aren't supported in sqlite.
Also, the whole team has to be able to execute them, so running them on a local instance of SQL Server Express or something could be error prone unless the connection strings are all the same for those instances.
How are you accomplishing this in your shop and what works well?
Thanks!
Keep your fast (unit) and slow (integration) tests separate, so that you can run them separately. Use whatever method for grouping/categorizing the tests is provided by your testing framework. If the testing framework does not support grouping the tests, move the integration tests into a separate module that has only integration tests.
The fast tests should take only some seconds to run in total and should have high code coverage. These kinds of tests allow the developers to refactor ruthlessly, because they can make a small change, run all the tests, and be very confident that the change did not break anything.
The slow tests can take many minutes to run and they will make sure that the individual components work together right. When the developers do changes that might possibly break something which is tested by the integration tests but not the unit tests, they should run those integration tests before committing. Otherwise, the slow tests are run by the CI server.
In NUnit you can decorate your test classes (or methods) with an attribute, e.g.:
[Category("Integration")]
public class SomeTestFixture
{
    ...
}

[Category("Unit")]
public class SomeOtherTestFixture
{
    ...
}
You can then stipulate in the build process on the server that all categories get run and just require that your developers run a subset of the available test categories. What categories they are required to run would depend on things you will understand better than I will. But the gist is that they are able to test at the unit level and the server handles the integration tests.
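If you use the NUnit console runner, its category filter switches make this straightforward to wire up: something along the lines of nunit-console YourTests.dll /exclude:Integration for the quick local run, and no filter (or a separate step with /include:Integration) on the build server. The exact switch names vary by NUnit version, so treat this as a sketch rather than the definitive syntax.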
I'm a Java developer but have dealt with a similar problem. I found that running a local database instance works well because of the speed (no data to send over the network) and because this way you don't have contention on your integration test database.
The general approach we use to solving this problem is to set up the build scripts to read the database connection strings from a configuration file, and then set up one file per environment. For example, one file for WORKSTATION, another for CI. Then you set up the build scripts to read the config file based on the specified environment. So builds running on a developer workstation run using the WORKSTATION configuration, and builds running in the CI environment use the CI settings.
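A minimal C# sketch of that idea (the environment variable name and file layout here are hypothetical, not an established convention):

using System;
using System.IO;

public static class TestConfig
{
    public static string GetConnectionString()
    {
        // The build script (or developer) sets TEST_ENVIRONMENT, e.g. to "CI" on the build agents.
        string environment = Environment.GetEnvironmentVariable("TEST_ENVIRONMENT") ?? "WORKSTATION";

        // One plain-text connection string file per environment, e.g. config\CI.connection.txt
        string configFile = Path.Combine("config", environment + ".connection.txt");
        return File.ReadAllText(configFile).Trim();
    }
}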
It also helps tremendously if the entire database schema can be created from a single script, so each developer can quickly set up a local database for testing. You can even extend this concept to the next level and add the database setup script to the build process, so the entire database setup can be scripted to keep up with changes in the database schema.
We have an SQL Server Express instance with the same DB definition running for every dev machine as part of the dev environment. With Windows authentication the connection strings are stable - no username/password in the string.
What we would really like to do, but haven't yet, is see if we can get our system to run on SQL Server Compact Edition, which is like SQLite with SQL Server's engine. Then we could run them in-memory, and possibly in parallel as well (with multiple processes).
Have you done any measurements (using timers or similar) to determine where the tests spend most of their time?
If you already know that the database recreation is why the tests are time-consuming, a different approach would be to regenerate the database once and use transactions to preserve the state between tests. Each CUD-type test starts a transaction in setup and performs a rollback in teardown. This can significantly reduce the time spent on database setup for each test, since a transaction rollback is cheaper than a full database recreation.
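A minimal sketch of the rollback-per-test pattern with NUnit and System.Transactions (this assumes your data access enlists in the ambient transaction):

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class CudRepositoryTests
{
    private TransactionScope transaction;

    [SetUp]
    public void BeginTransaction()
    {
        // Everything the test writes to the database happens inside this ambient transaction.
        transaction = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls back all changes,
        // which is far cheaper than regenerating the database for each test.
        transaction.Dispose();
    }

    [Test]
    public void Insert_then_read_back()
    {
        // ... exercise the repository against the real database here
    }
}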

Integrating Automated Web Testing Into Build Process

I'm looking for suggestions to improve the process of automating functional testing of a website. Here's what I've tried in the past.
I used to have a test project using WATIN. You effectively write what look like "unit tests" and use WATIN to automate a browser to click around your site etc.
Of course, you need a site to be running. So I made the test actually copy the code from my web project to a local directory and started a web server pointing to that directory before any of the tests run.
That way, someone new could simply get latest from our source control and run our build script, and see all the tests run. They could also simply run all the tests from the IDE.
The problem I ran into was that I spent more time maintaining the code that set up the test environment than the tests themselves. Not to mention that it took a long time to run because of all that copying. Also, I needed to test various scenarios, including installation, meaning I needed to be able to set the database to various initial states.
I was curious on what you've done to automate functional testing to solve some of these issues and still keep it simple.
MORE DETAILS
Since people asked for more details, here it is. I'm running ASP.NET using Visual Studio and Cassini (the built in web server). My unit tests run in MbUnit (but that's not so important. Could be NUnit or XUnit.NET). Typically, I have a separate unit test framework run all my WATIN tests. In the AssemblyLoad phase, I start the webserver and copy all my web application code locally.
I'm interested in solutions for any platform, but I may need more descriptions on what each thing means. :)
Phil,
Automation can just be hard to maintain, but the more you use your automation for deployment, the more you can leverage it for test setup (and vice versa).
Frankly, it's easier to evolve automation code, factoring it and refactoring it into specific, small units of functionality, when using a build tool that isn't just driving statically-compiled, pre-factored units of functionality, as is the case with NAnt and MSBuild. This is one of the reasons that many people who were relatively early users of tools like NAnt have moved on to Rake. The freedom to treat build code as any other code - to continually evolve its content and shape - is greater with Rake. You don't end up with the same stasis in automation artifacts as easily and as quickly with Rake, and it's a lot easier to script in Rake than in NAnt or MSBuild.
So, some part of your struggle is inherently bound up in the tools. To keep your automation sensible and maintained, you should be wary of obstructions that static build tools like NAnt and MSBuild impose.
I would suggest that you not couple your test environment bootstrapping to assembly load. That's an inside-out coupling that only serves brief convenience. There's nothing wrong (and likely everything right) with going to the command line and executing the build task that sets up the environment before running tests, whether from the IDE, from the command line, or from an interactive console like the C# REPL from the Mono Project or IRB.
Test data setup is simply a pain in the butt sometimes. It has to be done.
You're going to need a library that you can call to create and clean up database state. You can make those calls right from your test code, but I personally tend to avoid doing this because there is more than one good use of test data or sample data control code.
I drive all sample data control from HTTP. I write controllers with actions specifically for controlling sample data and issue GETs against those actions through Selenium. I use these to create and clean up data. I can compose GETs to these actions to create common scenarios of setup data, and I can pass specific values for data as request parameters (or form parameters if needs be).
I keep these controllers in an area that I usually call "test_support".
My automation for deploying the website does not deploy the test_support area or its routes and mapping. As part of my deployment verification automation, I make sure that the test_support code is not in the production app.
I also use the test_support code to automate control over the entire environment - replacing services with fakes, turning off subsystems to simulate failures and failovers, activating or deactivating authentication and access control for functional testing that isn't concerned with these facets, etc.
There's a great secondary value to controlling your web app's sample data or test data from the web: when demoing the app, or when doing exploratory testing, you can create the data scenarios you need just by issuing some GETs against known (or guessable) URLs in the test_support area. Making a disciplined effort to stick to RESTful routes and resource orientation here will really pay off.
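One possible shape for such a controller, as an ASP.NET MVC sketch (the action names, routes, and the UnitOfWork members are illustrative assumptions, not the author's actual code):

using System.Web.Mvc;

// Lives in the "test_support" area, which is never deployed to production.
public class SampleDataController : Controller
{
    // GET /test_support/SampleData/CreateCompany?name=Acme&userCount=3
    public ActionResult CreateCompany(string name, int userCount = 1)
    {
        var company = new Company { Name = name };
        for (int i = 0; i < userCount; i++)
        {
            company.Useraccounts.Add(new Useraccount { IsEnabled = true });
        }

        var unitOfWork = new UnitOfWork();   // assumed data-access entry point
        unitOfWork.CompanyRepository.Add(company);
        unitOfWork.Save();

        return Content("created " + name);
    }

    // GET /test_support/SampleData/Reset
    public ActionResult Reset()
    {
        // Wipe whatever the previous scenario created (RemoveAll is an assumed helper).
        var unitOfWork = new UnitOfWork();
        unitOfWork.CompanyRepository.RemoveAll();
        unitOfWork.Save();

        return Content("reset");
    }
}

Selenium (or WatiN) tests, demo scripts, and exploratory sessions can then arrange the data they need simply by issuing GETs against these actions.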
There's a lot more to this functional automation (including test, deployment, demoing, etc.), so the better designed these resources are, the better the time you'll have maintaining them over the long haul, and the more opportunities you'll find to leverage them in unforeseen but beneficial ways.
For example, writing domain model code over the semantic model of your web pages will help create much more understandable test code and decrease the brittleness. If you do this well, you can use those same models with a variety of different drivers so that you can leverage them in stress tests and load tests as well as functional tests, and use them from the command line as exploratory tools. By the way, this kind of thing is easier to do when you're not bound to driver types as you are when you use a static language. There's a reason why many leading testing thinkers and doers work in Ruby, and why Watir is written in Ruby. Reuse, composition, and expressiveness are much easier to achieve in Ruby than in C# test code. But that's another story.
Let's catch up sometime and talk more about the other 90% of this stuff :)
We used Plasma on one project. It emulates a web server in process - just point it at the root of your web application project.
It was surprisingly stable - no copying files or starting up an out of process server.
Here is how a test using Plasma looks for us...
[Test]
public void Can_log_in() {
    AspNetResponse response = WebApp.ProcessRequest("/Login.aspx");
    AspNetForm form = response.GetForm();

    form["UserName"] = User.UserName;
    form["Password"] = User.Password;

    AspNetResponse loggedIn = WebApp.ProcessRequest(Button.Click(form, "LoginUser"));
    Assert.IsTrue(loggedIn.IsRedirect());

    AspNetResponse homePage = WebApp.ProcessRequest(loggedIn.GetRedirectUrl());
    Assert.AreEqual(homePage.Status, 200);
}
All the "AspNetResponse" and "AspNetForm" classes are included with Plasma.
We are currently using an automated build process for our ASP.NET MVC application.
We use the following tools:
TeamCity
SVN
nUnit
Selenium
We use an msbuild script that runs on a build agent which can be any amount of machines.
The msbuild script gets the latest version of code from svn and builds it.
On success it then deploys the artifacts to a given machine/folder and creates the virtual site in IIS.
We then use MSBuild contrib tasks to run sql scripts to install the database and load data, you could also do a restore.
On success we kick off the NUnit tests. The test setup ensures that Selenium is up and running and then drives the Selenium tests in much the same way that WatiN does. Selenium has a good recorder for tests, which can be exported to C#.
The good thing about Selenium is that you can drive Firefox, Chrome and IE rather than being restricted to IE, which was the case with WatiN the last time I looked at it. You can also use Selenium for load testing with Selenium Grid, so you can reuse the same tests.
On success msbuild then tags the build in svn. TeamCity has a job that runs overnight that will deploy the latest tag to a staging environment ready for the business users to check the project status the following morning.
In a previous life we had NAnt & MSBuild scripts to fully manage the environment (installing Java, Selenium, etc.); however, this takes a lot of time, so as a prerequisite we assume each build agent already has these installed. In time we will include these tasks.
Why do you need to copy code? Ditch Cassini and let Visual Studio create a virtual directory for you. Sure the devs must remember to build before running web tests if the web app has changed. We have found that this is not a big deal, especially if you run web tests in CI.
Data is a big challenge. As far as I can see, you must choose between imperfect alternatives. Here's how we handle it. First, I should explain that we are working with a large complex legacy WebForms app. Also I should mention that the domain code is not well-suited for creating test data from within the test project.
This left us with a couple of choices. We could: (a) run data setup scripts under the build, or (b) create all data via web tests using the actual web site. The problem with option (a) is that tests become coupled with scripts at a minute level. It makes my head throb to think about synchronizing web test code with T-SQL. So we went with (b).
One benefit of (b) is that your setup also validates application behavior. The problem is...time.
Ideally tests should be independent, without temporal coupling (can run in any order) and not sharing any context (e.g., common test data). The common way to handle this is to set up and tear down data with every test. After some careful thought, we decided to break this rule.
We use Gallio (MbUnit 3), which provides some nice features that support our strategy. First, it lets you specify execution order at the fixture and test level. We have four "setup" fixtures which are ordered -4, -3, -2, -1. These run in the specified order and before all "non setup" fixtures, which by default have an order of 0.
Our web test project depends on the build script for one thing only: a single well-known username/password. This is a coupling I can live with. As the setup tests run they build up a "data context" object that holds identifiers of data (companies, users, vendors, clients, etc.) that is later used (but never changed) throughout all other fixtures. (By identifiers, I don't necessarily mean keys. In most cases our web UI does not expose unique keys. We must navigate the app using names or other proxies for true identifiers. More on this below.)
Gallio also allows you to specify that a test or fixture depends on another test or fixture. When a precedent fails, the dependent is skipped. This reduces the evil of temporal coupling by preventing "cascading failures", which can wreak much confusion.
Creating baseline test data once, instead of before each test, speeds things up a lot. However, the setup tests still might take 10 minutes to run. When I'm working on new tests I want to run and rerun them frequently. Enter another cool Gallio feature: Ambience. Ambience is a wrapper around db4o that provides a very simple way to persist objects. We use it to persist the data context automatically. Thus the setup tests only need to be run once between rebuilds of the database. After that you can run any or all other fixtures repeatedly.
So what about cleaning up test data? Don't we need to start from a known state? This is a rule we have found it expedient to break. A strategy that is working for us is to use long random values for things like company name, username, etc. We have found that it is not very difficult to keep a test run inside a logical "data space" such that it does not bump into other data. Certainly I fear the day that I spend hours chasing down a phantom failing test only to find that it's some data collision. It's a trade off that is working for us currently.
We are using Watin. I quite like it. Another key to success is something Scott Bellware alluded to. As we create tests we are building up an abstract model of our UI. So instead of this:
browser.TextField("ctl0_tab2_newNote").TypeText("foo");
You will see this in our tests:
User.NotesTab.NewNote.TypeText("foo");
This approach provides three benefits. First, we never repeat a magic string. This greatly reduces brittleness. Second, tests are much easier to read and understand. Last, we hide most of the WatiN framework behind our own abstractions. In the second example, only TypeText is a WatiN method. This will make it easier to change as the framework changes.
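A rough sketch of what such an abstraction layer might look like over WatiN (the class and property names are illustrative, not the poster's actual code):

using WatiN.Core;

// Thin page-model wrappers so the control IDs live in exactly one place.
public class NotesTab
{
    private readonly Browser browser;
    public NotesTab(Browser browser) { this.browser = browser; }

    public TextField NewNote
    {
        get { return browser.TextField(Find.ById("ctl0_tab2_newNote")); }
    }
}

public class UserPages
{
    private readonly Browser browser;
    public UserPages(Browser browser) { this.browser = browser; }

    public NotesTab NotesTab
    {
        get { return new NotesTab(browser); }
    }
}

// In a test:
//     var User = new UserPages(browser);
//     User.NotesTab.NewNote.TypeText("foo");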
Hope this helps.
It was difficult, but not impossible, to build an integration test phase into the build process using Maven. What happened was essentially this:
Ignore all JUnit tests in a specific directory unless the integration-test phase fires.
Add a maven profile to execute the integration tests.
For the pre-integration-test phase:
Start Jetty running the application hitting a test database.
Start the Selenium server.
Run the Selenium integration tests in the integration-test phase.
Stop the Selenium server.
Stop Jetty.
The difficulty in this step was really setting up Jetty - we couldn't get it to just launch from a war, so we actually had to have Jetty unpack the war and then run the server - but it works well and is automated: all you have to do is type mvn -PintegrationTest (that was our integration test profile name) and off it goes.
Do you mean automatically starting testing after the build finishes?
You could write automated scripts to copy the build files to a working IIS instance once the build has compiled successfully, and then start the automated BVTs by calling mstest.exe or another method.
You could also give AutoItX a try, or use a scripting language such as Python or Ruby.
