WatiN test data reset/clean up - integration-testing

I'm wondering how people are currently resetting their data / cleaning up test remnants for their WatiN/Watir tests?
For example, let's say there's a test to add a user into the system and the username has to be unique. Obviously the first run without any users should work fine, but the second run will fail without manual intervention.

There are a couple of strategies you could use for this. I am assuming that you are using WatiN, with NUnit or VS unit tests to run your tests.
Use transactions
An approach used in unit testing is to "wrap" the whole test in a transaction and roll the transaction back when the test completes. In .NET you can use System.Transactions for this.
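A minimal sketch of that wrapping, assuming NUnit (the fixture and test names here are made up):

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class AddUserTransactionTests
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        // Everything enlisted in this ambient transaction is rolled back in TearDown.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        // Disposing without calling Complete() rolls the transaction back.
        _scope.Dispose();
    }

    [Test]
    public void AddingAUser_InsertsARow()
    {
        // Drive the page with WatiN and assert against the database here.
    }
}

Note that only work enlisted in the test process's ambient transaction is rolled back; changes made by the web application in a separate process are not covered.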
Build a "stub page"
Build a page in your application that uses the existing business logic to delete your data. This page would need to be secured and ideally not even deployed to production.
This is the approach that I would recommend.
Call a web service
Develop a web service, or call one directly from the app tier of the application, to perform the delete. You will probably need to develop this as well.
Clean up directly
Build some classes in your test code to access the data and clean it up.
With any of these you will need to clean up both before and after you run your test, i.e. in the test setup and test cleanup methods. The reason to do it twice is that you should assume that your test has failed and not cleaned up properly.
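For example, with NUnit that double cleanup could look roughly like this (CleanupTestUsers is a hypothetical helper that uses whichever of the strategies above you pick):

using NUnit.Framework;

[TestFixture]
public class AddUserTests
{
    [SetUp]
    public void SetUp()
    {
        // Clean first, in case a previous run failed and left data behind.
        TestDataHelper.CleanupTestUsers();
    }

    [TearDown]
    public void TearDown()
    {
        // Clean again so the next test starts from a known state.
        TestDataHelper.CleanupTestUsers();
    }

    [Test]
    public void CanAddUser()
    {
        // Drive the add-user page with WatiN here.
    }
}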
Use LINQ to SQL
AFAIK, if you are using LINQ to SQL, it works in-memory and wraps the whole update in a transaction for you automatically. If you simply don't call the SubmitChanges() method then you should be fine, but I haven't tested this myself.

I asked a developer to make a script that resets the database. After a suite of tests, I just call that script and start from a clean database.
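A minimal sketch of that approach, assuming NUnit and a plain .sql reset script (the path and connection string are placeholders, and this assumes the script contains no GO batch separators, which SqlCommand cannot execute):

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[SetUpFixture]
public class DatabaseReset
{
    [OneTimeTearDown]  // on older NUnit versions the suite-level attribute names differ
    public void ResetDatabase()
    {
        // The script truncates the tables and reloads any baseline data.
        var script = File.ReadAllText(@"C:\tests\reset-database.sql");
        using (var connection = new SqlConnection("Server=.;Database=MyAppTest;Integrated Security=true"))
        using (var command = new SqlCommand(script, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}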

Mike, your question isn't unique to Watir/WatiN. It applies to any UI testing, so search around for similar solutions for Selenium, Windmill, and even headless integration tests (HtmlUnit, API tests, etc.). I've answered this question a couple of times personally on StackOverflow.

WatiN is for UI testing.
In order to test the scenario you are looking for, you can generate the user ID in C# code so that it is unique on every run (rather than reusing the value that was stored when you created the test).
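For example, a tiny helper along these lines (the names and the WatiN usage comment are illustrative):

using System;

public static class TestData
{
    // Appends a short random suffix so repeated runs never collide
    // with users created by earlier runs.
    public static string UniqueUserName(string prefix)
    {
        return prefix + "_" + Guid.NewGuid().ToString("N").Substring(0, 8);
    }
}

// Usage in a WatiN test:
// var userName = TestData.UniqueUserName("testuser");   // e.g. "testuser_3f2a9c1d"
// browser.TextField(Find.ByName("username")).TypeText(userName);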

Related

Robot framework for distributed testing

We are in the process of finalizing a test framework and are pretty impressed with Robot Framework and STAF.
We are unable to decide on the optimal approach for the following:
Want to be able to start tests from a server by selecting clients
Can we display all clients in the existing network?
Number of clients can increase/decrease over time
When we select a client we want to go fetch client properties dynamically
Is there a way to display dynamic properties in RIDE/STAX?
Can we use any other framework and integrate it with Robot? STAF/STAX?
User should be able to choose tests the client supports and build a config
Can we use RIDE or something similar to build test configurations per client?
Launch all clients in parallel, monitor and report results
Is there a way to launch and monitor results in parallel?
This is only a partial answer, based on my understanding of your question and lacking full knowledge of your situation.
Robot Framework can be started from a .cmd or .bat file. I don't know if the results can be sent somewhere else to be saved from there, but I'm fairly certain that yes, they can.
1.1. If you can get that list in Python or Java, then yes, you can do it in Robot Framework and pass variables and pass/fail results around the test suite. You might be able to use both at once, but I'm not sure yet.
Handled with the first item.
Robot + Python/Java can probably handle that.
3.1. I don't know, I use PyCharm as my Robot Framework IDE. It has an integrated console and allows for quick and easy managing of Python/RobotFramework files, as well as a lot of other languages, but I'd imagine that using Robot's Log to Console keyword, you could send the results directly to the console. So, yes.
3.2. Short answer: Not that I've ever heard of, but if you can run those with Java/Python and return the Pass/Fail results to Robot Framework, then yes.
Using multiple tags, the Robot Framework programmer can, at runtime, either run the test excluding particular tags or run the test including particular tags.
In theory: yes. Again, not something I've ever done, and honestly telling you how is beyond me, but I don't know a reason why you couldn't as long as you don't have any custom keywords that move the mouse cursor.

Generating test data for functional tests

I need to generate some test data for a Web and a WPF application (both .NET apps). I have two possible solutions in mind.
Generate the test data using SQL scripts. This approach has the issue that I need to validate the inserted data.
Use the insert API of my .NET code to generate the test data. I am not sure whether this is overkill, but it would reuse the validation logic of my code.
Do you have any other suggestions? Maybe Microsoft tools for supporting those creation tasks?
Thanks!
Oliver
Don't use SQL/the database in your tests. It is an extra dependency that has nothing to do with your program logic. Normally, I would create an interface called IDataProvider and then provide different implementations, i.e. DbDataProvider for normal execution or FakeDataProvider for testing. With that IDataProvider interface you can plug in whatever data source you want for your testing, e.g. a text file, XML, JSON, an object builder using your own API, etc.
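A rough sketch of that shape (the User type and the member names are made up; only the interface/implementation split matters):

using System.Collections.Generic;

public class User
{
    public string UserName { get; set; }
}

public interface IDataProvider
{
    IEnumerable<User> GetUsers();
    void AddUser(User user);
}

// Used by the application at runtime; talks to the real database.
public class DbDataProvider : IDataProvider
{
    public IEnumerable<User> GetUsers() { throw new System.NotImplementedException("real DB query goes here"); }
    public void AddUser(User user) { throw new System.NotImplementedException("real DB insert goes here"); }
}

// Used by tests: everything lives in memory, no database required.
public class FakeDataProvider : IDataProvider
{
    private readonly List<User> _users = new List<User>();
    public IEnumerable<User> GetUsers() { return _users; }
    public void AddUser(User user) { _users.Add(user); }
}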
Using the insert API to generate test data is the better idea. But why would you worry about validating your test data? Normally, a test should fail if the test data is wrong or you are not doing the right functional test.

SpecFlow, Webdriver and Mocks - is it possible?

The question, in short, is that we keep stumbling upon BDD definitions that more or less require different states, which leads to the need for a mock of sorts for ASP.NET/MVC. I know of none, which is why I ask here.
Details:
We are developing a project in ASP.NET (MVC3/Razor engine) and are using SpecFlow to drive our development.
We quite often stumble into situations where we need the webpage under test to perform in a certain manner so that we can verify the behavior, e.g.:
Scenario: Should render alternatively when backend system is down
Given that the backend system is down
And there are no channels for the page to display
When I inspect the webpage under test
Then the page renders alternative HTML indicating that there is a problem
For a unit test, this is less of an issue: mock the controller's dependencies and verify that it delivers the correct results. For a SpecFlow test, however, this more or less requires alternate configurations.
So, is it possible at all, or are there some known software patterns for developing web pages using BDD that I've missed?
Even when using SpecFlow, you can still use a mocking framework. What I would do is use the [BeforeScenario] attribute to set up the mocks for the test, e.g.:
[BeforeScenario]
public void BeforeShouldRenderAlternatively()
{
// Do mock setups.
}
This SO question might come in handy for you also.
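For the "backend system is down" scenario above, the hook might arrange a mock roughly like this (using Moq; IBackendSystem, BackendUnavailableException and TestDependencies are assumptions standing in for whatever abstraction and composition root your site actually uses):

using Moq;
using TechTalk.SpecFlow;

[Binding]
public class BackendDownSteps
{
    [BeforeScenario("BackendDown")]
    public void GivenTheBackendSystemIsDown()
    {
        // Hypothetical service interface the controller depends on.
        var backend = new Mock<IBackendSystem>();
        backend.Setup(b => b.GetChannels()).Throws(new BackendUnavailableException());

        // Hypothetical hook: register the mock with whatever container or
        // factory the site under test resolves IBackendSystem from.
        TestDependencies.Register<IBackendSystem>(backend.Object);
    }
}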
You could use Deleporter
Deleporter is a little .NET library that teleports arbitrary delegates into an ASP.NET application in some other process (e.g., hosted in IIS) and runs them there.
It lets you delve into a remote ASP.NET application’s internals without any special cooperation from the remote app, and then you can do any of the following:
Cross-process mocking, by combining it with any mocking tool. For example, you could inject a temporary mock database or simulate the passing of time (e.g., if your integration tests want to specify what happens after 30 days or whatever)
Test different configurations, by writing to static properties in the remote ASP.NET appdomain or using the ConfigurationManager API to edit its entries.
Run teardown or cleanup logic such as flushing caches. For example, recently I needed to restore a SQL database to a known state after each test in the suite. The trouble was that the ASP.NET connection pool was still holding open connections to the old database, causing connection errors. I resolved this easily by using Deleporter to issue a SqlConnection.ClearAllPools() command in the remote appdomain – the ASP.NET app under test didn't need to know anything about it.
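As a rough illustration of that last point, the teardown of such a test might look like this (Deleporter.Run is my assumption of the library's entry point based on the description above, so check the project's documentation for the exact API; RestoreTestDatabase is a hypothetical helper):

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    [TearDown]
    public void TearDown()
    {
        // Runs the delegate inside the remote ASP.NET worker process, so its
        // pooled connections to the test database are released first.
        Deleporter.Run(() => SqlConnection.ClearAllPools());

        RestoreTestDatabase(); // reload the known database state
    }

    private void RestoreTestDatabase()
    {
        // Restore or re-seed the test database here.
    }
}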

Web application: Acceptance testing: Initial state for a test and test isolation?

Greetings,
I am currently exploring extreme programming and trying to stick to it as much as possible. This means I will need to turn my (by now unexpectedly thick stack of) user stories into acceptance tests once I begin an iteration (after planning the release, of course).
I am not entirely sure about the implementation language I am going to use, however, I am sure that this is going to be a dynamic web application with a database backend, served by a webserver. Right now, I plan to develop the first release on a local machine with a local testing environment, so it is possible to assume that security is no concern on the acceptance tests (so, I can give the acceptance tests root access to the testing database involved, for example). I am still a bit unsure about the acceptance test framework to use, however, since this is going to be a web application, I think I will use Selenium RC in order to write the tests and run them (I mention this in case someone is able to point me to something better :) ).
However, there still is a dark area left: I do not have data for this application yet, because I am implementing a new, fresh application. Thus, I cannot take a snapshot of a current production database to build a test database from. Additionally, the application is stateful (as any web application with a database backend is), so using a single database for all acceptance tests is going to cause ugly problems with regard to test isolation (and at least for unit tests, that reads as "This can result in great fun and lots of gray hair").
So, how do I solve this problem? Do I create artificial testing databases (and maintain them whenever the database schema changes) and write the acceptance tests such that each test loads the appropriate database state into the testing database before running? (How fast or slow will it be to load a dozen records a hundred times, when a lot of acceptance tests run?) Should I create a single example database, load it for all tests and hope for the best? Should I recreate the test data I need in the acceptance tests every time? Or, how do people do this?
According to further research, the proper way to do this is to bring the database into a defined state using the appropriate setUp methods. This potentially involves deleting all existing data in the tables, adding a certain test set of data to the tables, and then running the test on exactly that data. Afterwards, the tearDown method cleans up whatever was done to the tables (either setUp drops everything, or tearDown drops everything again). There are tools like DbUnit to simplify this process. This results in some reduction of testing speed; however, it establishes total isolation of tests, which is a good thing, because then green simply means green and red simply means red, and not "given the current order of test execution, this works".
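In C# terms (the implementation language above is still undecided, so treat this purely as a sketch of the shape; the table, columns, and connection string are placeholders):

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class RegistrationAcceptanceTests
{
    private const string ConnectionString = "Server=.;Database=AppTest;Integrated Security=true";

    [SetUp]
    public void LoadKnownState()
    {
        // Bring the tables into the exact state this test expects.
        Execute("DELETE FROM Users");
        Execute("INSERT INTO Users (UserName, Email) VALUES ('alice', 'alice@example.org')");
    }

    [TearDown]
    public void ClearState()
    {
        // Drop whatever the test did so the next test starts clean.
        Execute("DELETE FROM Users");
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}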
Besides that, the speed issue will probably be less significant for me, as I can focus on a small number of tests while developing code for a single user story and have my CI server run all the tests (which then takes more time) in the background when I think I am done.

How to automate functional/integration tests and database rollbacks

In contrast to my previous question, I'll try to give my requirements.
I am trying to find some framework/methodology/"thing" that would fit the following:
Ability to write an automated test, preferably written in Visual Studio, using C#.
Test should drive a web browser and interact with the SUT just like a user would.
Test should be able to setup a test scenario in DB.
Test should be able to assert that user interactions had the expected effect in DB.
After test is completed, it should be able to roll back all changes it made in DB.
My first attempt was to use an NUnit test to drive Selenium (and WatiN before that), but I faced a bit of a problem (check the link above) when using TransactionScope to roll back the changes the Selenium-driven browser made in the DB.
Has anyone done anything like this in the "real world"? I've found some references through Google, but haven't been able to find any concrete examples of how to implement this. There wouldn't be any problems if I were doing unit testing; in that case TransactionScope would be quite enough.
Edit: R. Harvey pointed me to this question, which is almost identical to my situation.
However, that question is only almost identical. My application is part of a family of services, all of them accessing the same set of database tables. The amount of test data required does not allow for efficient use of drop/create scripts, so is there an alternative solution for this?
We are using SQL Server 2005, and I'm not very proficient in database magic, so if there's some way to use SQL scripting other than drop/create, that could be an option.
Edit 2:
Based on the answers and some additional head scratching, we'll go for more lightweight databases for developers to perform unit, integration and functional testing. This enables us to use SQL scripts for setting up and tearing down the tests.
Changes made in a transaction are only visible inside said transaction. Also, wrapping the test in a transaction scope (if that were possible) would make the test behave differently from the real thing in a very critical aspect (transactions).
It is much better to use a database image that you restore before every test suite. This way after the suite completes and the verification is done, you drop the test database. The next run, during the suite setup, the database is re-created from the saved image in a pristine state ready for testing. Even better would be to have a script that deploys the database from scratch and run that script during suite setup.
By the way, it is not feasible to restore to a pristine state before every test. More generally, it is not feasible to have lengthy individual test setup and cleanup steps. As you add more tests, the time spent restoring the database to a test-ready condition between tests becomes unmanageable. Suites with hundreds of tests are quite common, and full test runs of tens of thousands of tests would mean hours and hours spent just restoring the database. Design your individual tests so that they can be run independently, i.e. test N has to produce valid results even if test N-1 failed.
Another thing to consider is failure investigation: you want a failed test to leave the database in a state that can be investigated for meaningful information, and you want subsequent tests to be able to run and produce valid results. Sometimes these requirements will contradict each other, but you must take them into consideration and design your tests around them.
If the amount of data required to restore the database to a known-good state makes drop/create scripts prohibitive, and you are running your tests on the Developer or Enterprise edition of SQL Server 2005, you could look into creating a database snapshot of the good state and reverting to it before each test. This is considerably faster than a full restore, although it may still be too time-consuming if you have hundreds of tests.
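A hedged sketch of the snapshot approach (the database, logical file, and path names are placeholders; the snapshot is created once against the known-good state, then reverted to before each test):

using System.Data.SqlClient;

public static class DatabaseSnapshot
{
    // Connect to master, not to the database being reverted.
    private const string MasterConnectionString = "Server=.;Database=master;Integrated Security=true";

    // Run once, after the database has been put into its known-good state.
    public static void Create()
    {
        Execute(@"CREATE DATABASE AppTest_Snapshot
                  ON (NAME = AppTest_Data, FILENAME = 'C:\Snapshots\AppTest.ss')
                  AS SNAPSHOT OF AppTest");
    }

    // Run before each test; the restore needs exclusive access, so close or
    // clear any pooled connections to AppTest first.
    public static void RevertToSnapshot()
    {
        Execute("RESTORE DATABASE AppTest FROM DATABASE_SNAPSHOT = 'AppTest_Snapshot'");
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}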
Don't miss Amnesia, which I recommended on this related question.
