Integration testing using Selenium and NUnit - From UI to DB - asp.net

I am having some problems while trying to create integration tests with Selenium and NUnit.
I'm trying to use Selenium RC in NUnit tests to drive my ASP.NET web app, and would like the tests to actually do all the stuff in the DB that a real user would do. Naturally it would be nice if the database could get rolled back after Selenium has done its thing and I've asserted that the DB contains the new rows (etc.) with the data from the UI.
So, here's the setup I have (in some sort of pseudocode):
TestMethod()
{
    using (new TransactionScope())
    {
        Selenium.StartSelenium();
        Selenium.SelectAndClickAndDoStuffInUI();
        AssertSomething();
    }
}
Now, the SelectAndClickAndDoStuffInUI method clicks around in the UI and thus fires up our proprietary data-access framework. Our framework writes all the stuff into the DB, and the AssertSomething method asserts that everything is fine in the DB. The framework uses transactions (with the "Required" option) in its inner workings.
So everything is fine, right? No, sadly not. The TransactionScope in the example above never gets committed (there is no txScope.Complete() call there), so all the inner transactions should get rolled back too, right? Well, they do not, and everything Selenium does through the UI gets committed to the DB.
I've really tried to understand where this goes wrong, but so far haven't found the answer.
Thanks for reading, and (finally) here's the actual question:
Why does the TransactionScope not get rolled back in the case shown in my example?
I will gladly provide additional information about the situation and setup!

You are driving the UI of an ASP.NET app. That means your test is not able to roll back the changes you made.
The TransactionScope only covers code running in your own test process, and the ambient transaction does not flow over HTTP. How would the transaction manager be able to undo a click inside a web interface? The server could be anywhere; Selenium is just remote-controlling a browser, so the web application opens and commits its own transactions in its own process, completely independent of your test's scope.
You should create your "real" unit tests with mock objects and not access the database at all. That's a little hard with a normal ASP.NET page, but you can take a look at ASP.NET MVC for a possible solution to this problem.
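That said, if you want to keep end-to-end tests that hit the real database, a common workaround is to reset the database around each test instead of relying on a rolled-back transaction. A minimal sketch with NUnit, where TestDatabase is a hypothetical helper you would write against your own schema (e.g. restoring a backup or truncating the tables the UI writes to):
using NUnit.Framework;

[TestFixture]
public class CheckoutUiTests
{
    [SetUp]
    public void ResetDatabase()
    {
        // Hypothetical helper: restore a known snapshot (or truncate the
        // tables the UI writes to) so every test starts from the same state.
        TestDatabase.RestoreSnapshot();
    }

    [Test]
    public void Ui_actions_write_expected_rows()
    {
        // Drive the UI with Selenium here, then assert directly against
        // the database; no TransactionScope involved.
    }
}
This is slower than a rollback, but it works across process boundaries, which is exactly what the transaction cannot do.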

Related

How can a Symfony web app itself detect unapplied Doctrine Migrations?

The normal way of dealing with Doctrine Migrations is via the standard Commands: during development one runs the commands manually, e.g. to generate diffs and apply the migrations, and deployment typically involves applying them by the same approach, but automatically. Occasionally when working in a team on a local instance there are new migrations, but I've updated my source from version control rather than done a deployment, so I need to apply the new migrations manually, and I need to know that I need to do that! An improvement could be to display a warning on a rendered webpage that migrations are out of sync and action needs to be taken.
Is there a way to access the Migrations API directly in PHP/Symfony code, so that I could detect a mismatch between committed and applied migrations? I haven't found any documentation about that. I've had an initial poke around the code and it seems heavily skewed towards Commands (reasonably enough).
Firstly, updating your source code from version control is a deployment too, and applying Doctrine Migrations should be part of it. You should create a checklist of all the steps you need to do during a deployment, including rollbacks. Depending on the complexity of the application, many things could go wrong.
To answer your question: you can execute a diff migration from your code with the Process component and parse the output to determine whether there are migrations to be applied.

Populate SQLite only once upon installation of UWP app

I am developing a UWP application using Visual Studio 2017, version 15.9.6.
I want to use a local Windows SQLite database. I want to run an SQL script named mySql.txt when the user installs the application for the first time. I don't want to run it every time the user runs the app, as it contains insert statements that would cause duplicate rows. So I want to run that script only once, preferably at installation time.
How can I do that? I am very new to UWP and .NET. Please guide me step-by-step if possible.
You can make sure the initialization/seeding is done only once for the app. For that you may utilize ApplicationData.Current.LocalSettings.
These allow you to store simple values that are bound to your app. Once the user uninstalls the app, these data are removed as well. This fits your scenario exactly.
Suppose your database initialization code is in the method InitializeDb(). You could use the following to make sure the initialization is done only once:
if (!ApplicationData.Current.LocalSettings.Values.ContainsKey("DbInitialized"))
{
    InitializeDb();
    ApplicationData.Current.LocalSettings.Values["DbInitialized"] = true;
}
This code first checks if we have initialized the db previously and if not, performs the initialization and stores a flag into app settings to make sure the next time the initialization is skipped.
You can run this code during app initialization, for example in the OnLaunched method, or when the database service is first required.
This is of course the simplest implementation, so you can (and should) add some exception handling, so that if the initialization fails it can be retried, and so on. Also, you may want to handle app updates and DB updates, in which case you can use ApplicationData.Current.Version, which allows you to track the version of the application data; it can be used to keep track of the DB version as well, so you can perform the appropriate migrations between versions.
Finally, for even better user convenience, there is also a way to perform such update steps while the app update itself is being installed. See this article for more info.
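To illustrate the versioning idea, here is a rough sketch of using ApplicationData.Current.Version and SetVersionAsync; RunMigration is a hypothetical helper that applies the SQL script for one version step, and the target version constant is illustrative:
using System.Threading.Tasks;
using Windows.Storage;

private async Task EnsureDbVersionAsync()
{
    const uint TargetDbVersion = 2; // bump this whenever the schema changes

    if (ApplicationData.Current.Version < TargetDbVersion)
    {
        await ApplicationData.Current.SetVersionAsync(TargetDbVersion, request =>
        {
            // Apply each migration step from the stored version up to the target.
            for (uint v = request.CurrentVersion; v < request.DesiredVersion; v++)
            {
                RunMigration(v, v + 1); // hypothetical helper applying one step
            }
        });
    }
}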

Real time collaboration with CodeMirror

I started this little project where I would do real-time collaboration on code using CodeMirror.
I have a messaging system set up such that it's easy to pass objects from one user to another. My problem is getting it integrated with CodeMirror. I found out that it has an onchange event and a replaceRange(string, from, to) method.
I pass the onchange objects to the other users and use replaceRange to update their views. The problem is that replaceRange itself triggers a new onchange, and the messages bounce back and forth. Does anyone know of a way to update the local view without triggering an onchange, or have suggestions for other paths to take? (The messaging system is already set up, and it's easy to pass JavaScript objects to other clients.)
You can use Firepad.
Firepad is an open-source (on GitHub) realtime collaborative plug-in for CodeMirror. You can get it set up with CodeMirror in four extra lines of code and a few minutes. It uses Firebase for the backend.
To get this to work properly, you'll have to merge changes as well. See http://ot.substance.io/ for a demo of an open-source solution (also using CodeMirror).

WF4 versioning - how do I cancel / terminate an invalid instance?

I've got a state machine workflow implemented using Workflow Foundation 4.1. I'm using the SQL Server persistence store and the WorkflowApplication class for loading and running workflows.
I'm making changes to the state machine workflow model, and am finding that existing instances break very easily. I've written code that can replay the workflow back into the correct state (basically a migration), which works fine; however, I also need to be able to clear out the old workflow instance.
The main issue is that if the workflow is invalid, I can't even load it, so I can't terminate or cancel it either.
Is there a way to use the workflow API to remove a workflow without loading it (i.e., some command on the SqlPersistenceStore), or do I have to clean the database manually?
The SqlWorkflowInstanceStore doesn't allow you to do so directly; you will need to go into the database and delete the record there. If you are using AppFabric, there is actually a command to delete a workflow instance without loading it first, for just that purpose (the Remove-ASAppServiceInstance PowerShell cmdlet, if I remember correctly), and you should be able to invoke that PowerShell command from code.
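If you go the manual route, a rough sketch of the cleanup looks like the following. The schema and column names are from the default SqlWorkflowInstanceStoreSchema.sql as I remember them, so verify them against your database before running anything:
using System;
using System.Data.SqlClient;

public static class WorkflowCleanup
{
    // WARNING: bypasses the instance store API entirely; back up the
    // database first, and check that no other tables reference the row.
    public static void DeleteBrokenInstance(string connectionString, Guid instanceId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = conn.CreateCommand())
        {
            // Table and column names assumed from the default store schema.
            cmd.CommandText =
                "DELETE FROM [System.Activities.DurableInstancing].[InstancesTable] " +
                "WHERE Id = @instanceId";
            cmd.Parameters.AddWithValue("@instanceId", instanceId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}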

Integrating Automated Web Testing Into Build Process

I'm looking for suggestions to improve the process of automating functional testing of a website. Here's what I've tried in the past.
I used to have a test project using WatiN. You effectively write what look like "unit tests" and use WatiN to automate a browser to click around your site, etc.
Of course, you need a site to be running. So I made the tests actually copy the code from my web project to a local directory and start a web server pointing to that directory before any of the tests run.
That way, someone new could simply get the latest from our source control, run our build script, and see all the tests run. They could also simply run all the tests from the IDE.
The problem I ran into was that I spent more time maintaining the code that set up the test environment than the tests themselves. Not to mention that it took a long time to run because of all that copying. Also, I needed to test various scenarios, including installation, meaning I needed to be able to set the database to various initial states.
I'm curious about what you've done to automate functional testing that solves some of these issues while still keeping things simple.
MORE DETAILS
Since people asked for more details, here they are. I'm running ASP.NET using Visual Studio and Cassini (the built-in web server). My unit tests run in MbUnit (but that's not so important; it could be NUnit or xUnit.net). Typically, I have a separate unit test framework run all my WatiN tests. In the AssemblyLoad phase, I start the web server and copy all my web application code locally.
I'm interested in solutions for any platform, but I may need more descriptions on what each thing means. :)
Phil,
Automation can just be hard to maintain, but the more you use your automation for deployment, the more you can leverage it for test setup (and vice versa).
Frankly, it's easier to evolve automation code, factoring it and refactoring it into specific, small units of functionality, when using a build tool that isn't
just driving statically-compiled, pre-factored units of functionality, as is the case with NAnt and MSBuild. This is one of the reasons that many people who were relatively early users of tools like NAnt have moved on to Rake. The freedom to treat build code as any other code - to continually evolve its content and shape - is greater with Rake. You don't end up with the same stasis in automation artifacts as easily and as quickly with Rake, and it's a lot easier to script in Rake than in NAnt or MSBuild.
So, some part of your struggle is inherently bound up in the tools. To keep your automation sensible and maintained, you should be wary of obstructions that static build tools like NAnt and MSBuild impose.
I would suggest that you not couple your test environment bootstrapping to assembly load. That's an inside-out coupling that only serves brief convenience. There's nothing wrong (and likely everything right) with going to the command line and executing the build task that sets up the environment before running tests, whether from the IDE, from the command line, or from an interactive console like the C# REPL from the Mono Project, or from IRB.
Test data setup is simply a pain in the butt sometimes. It has to be done.
You're going to need a library that you can call to create and clean up database state. You can make those calls right from your test code, but I personally tend to avoid doing this because there is more than one good use of test data or sample data control code.
I drive all sample data control from HTTP. I write controllers with actions specifically for controlling sample data and issue GETs against those actions through Selenium. I use these to create and clean up data. I can compose GETs to these actions to create common scenarios of setup data, and I can pass specific values for data as request parameters (or form parameters if needs be).
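As a minimal sketch of the idea in ASP.NET MVC (the controller, the routes in the comments, and the SampleData helper are all illustrative, not a prescribed API):
using System.Web.Mvc;

public class SampleDataController : Controller
{
    // GET /test_support/sampledata/createcustomer?name=Acme
    public ActionResult CreateCustomer(string name)
    {
        // SampleData is a hypothetical helper that calls into the domain layer.
        int id = SampleData.CreateCustomer(name);
        return Content(id.ToString()); // hand the identifier back to the test
    }

    // GET /test_support/sampledata/cleanup
    public ActionResult Cleanup()
    {
        SampleData.RemoveAll(); // hypothetical
        return Content("ok");
    }
}
A Selenium test can then compose a scenario with plain GETs, e.g. selenium.Open("/test_support/sampledata/createcustomer?name=Acme").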
I keep these controllers in an area that I usually call "test_support".
My automation for deploying the website does not deploy the test_support area or its routes and mapping. As part of my deployment verification automation, I make sure that the test_support code is not in the production app.
I also use the test_support code to automate control over the entire environment - replacing services with fakes, turning off subsystems to simulate failures and failovers, activating or deactivating authentication and access control for functional testing that isn't concerned with these facets, etc.
There's a great secondary value to controlling your web app's sample data or test data from the web: when demoing the app, or when doing exploratory testing, you can create the data scenarios you need just by issuing some GETs against known (or guessable) URLs in the test_support area. Making a disciplined effort to stick to RESTful routes and resource orientation here will really pay off.
There's a lot more to this functional automation (including testing, deployment, demoing, etc.), so the better designed these resources are, the better time you'll have maintaining them over the long haul, and the more opportunities you'll find to leverage them in unforeseen but beneficial ways.
For example, writing domain model code over the semantic model of your web pages will help create much more understandable test code and decrease brittleness. If you do this well, you can use those same models with a variety of different drivers, so that you can leverage them in stress tests and load tests as well as functional tests, and even use them from the command line as exploratory tools. By the way, this kind of thing is easier to do when you're not bound to driver types, as you are when you use a static language. There's a reason why many leading testing thinkers and doers work in Ruby, and why Watir is written in Ruby. Reuse, composition, and expressiveness are much easier to achieve in Ruby than in C# test code. But that's another story.
Let's catch up sometime and talk more about the other 90% of this stuff :)
We used Plasma on one project. It emulates a web server in process - just point it at the root of your web application project.
It was surprisingly stable - no copying files or starting up an out-of-process server.
Here is how a test using Plasma looks for us...
[Test]
public void Can_log_in() {
    AspNetResponse response = WebApp.ProcessRequest("/Login.aspx");
    AspNetForm form = response.GetForm();

    form["UserName"] = User.UserName;
    form["Password"] = User.Password;

    AspNetResponse loggedIn = WebApp.ProcessRequest(Button.Click(form, "LoginUser"));
    Assert.IsTrue(loggedIn.IsRedirect());

    AspNetResponse homePage = WebApp.ProcessRequest(loggedIn.GetRedirectUrl());
    Assert.AreEqual(200, homePage.Status);
}
All the "AspNetResponse" and "AspNetForm" classes are included with Plasma.
We are currently using an automated build process for our asp.net mvc application.
We use the following tools:
TeamCity
SVN
NUnit
Selenium
We use an MSBuild script that runs on a build agent, which can be any number of machines.
The MSBuild script gets the latest version of the code from SVN and builds it.
On success it then deploys the artifacts to a given machine/folder and creates the virtual site in IIS.
We then use MSBuild contrib tasks to run sql scripts to install the database and load data, you could also do a restore.
On success we kick off the NUnit tests. The test setup ensures that Selenium is up and running and then drives the Selenium tests, much in the same way that WatiN does; a sketch of such a test appears below. Selenium has a good test recorder, which can export the tests to C#.
The good thing about Selenium is that you can drive Firefox, Chrome, and IE, rather than being restricted to IE, which was the case with WatiN the last time I looked at it. You can also use Selenium for load testing with Selenium Grid, so you can reuse the same tests.
On success msbuild then tags the build in svn. TeamCity has a job that runs overnight that will deploy the latest tag to a staging environment ready for the business users to check the project status the following morning.
In a previous life we had NAnt and MSBuild scripts to fully manage the environment (installing Java, Selenium, etc.); however, this takes a lot of time, so as a prerequisite we assume each build agent already has these installed. In time we will include these tasks.
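For reference, one of these NUnit-driven Selenium RC tests can look roughly like this (assuming the Selenium server is already listening on localhost:4444 and the site has been deployed by the build; the URL and browser string are illustrative):
using NUnit.Framework;
using Selenium;

[TestFixture]
public class SmokeTests
{
    private ISelenium selenium;

    [SetUp]
    public void Start()
    {
        // Connect to the Selenium RC server that the build has started.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
            "http://localhost/mysite/"); // assumed deployment URL
        selenium.Start();
    }

    [TearDown]
    public void Stop()
    {
        selenium.Stop();
    }

    [Test]
    public void Home_page_has_a_title()
    {
        selenium.Open("/");
        selenium.WaitForPageToLoad("30000");
        Assert.IsNotEmpty(selenium.GetTitle());
    }
}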
Why do you need to copy code? Ditch Cassini and let Visual Studio create a virtual directory for you. Sure, the devs must remember to build before running web tests if the web app has changed. We have found that this is not a big deal, especially if you run web tests in CI.
Data is a big challenge. As far as I can see, you must choose between imperfect alternatives. Here's how we handle it. First, I should explain that we are working with a large complex legacy WebForms app. Also I should mention that the domain code is not well-suited for creating test data from within the test project.
This left us with a couple of choices. We could: (a) run data setup scripts under the build, or (b) create all data via web tests using the actual web site. The problem with option (a) is that tests become coupled with scripts at a minute level. It makes my head throb to think about synchronizing web test code with T-SQL. So we went with (b).
One benefit of (b) is that your setup also validates application behavior. The problem is...time.
Ideally tests should be independent, without temporal coupling (can run in any order) and not sharing any context (e.g., common test data). The common way to handle this is to set up and tear down data with every test. After some careful thought, we decided to break this rule.
We use Gallio (MbUnit 3), which provides some nice features that support our strategy. First, it lets you specify execution order at the fixture and test level. We have four "setup" fixtures which are ordered -4, -3, -2, -1. These run in the specified order and before all "non setup" fixtures, which by default have an order of 0.
Our web test project depends on the build script for one thing only: a single well-known username/password. This is a coupling I can live with. As the setup tests run, they build up a "data context" object that holds identifiers of data (companies, users, vendors, clients, etc.) that is later used (but never changed) throughout all other fixtures. (By identifiers, I don't necessarily mean keys. In most cases our web UI does not expose unique keys. We must navigate the app using names or other proxies for true identifiers. More on this below.)
Gallio also allows you to specify that a test or fixture depends on another test or fixture. When a precedent fails, the dependent is skipped. This reduces the evil of temporal coupling by preventing "cascading failures", which can cause much confusion.
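In code, the ordering and dependency declarations look roughly like this (treat the attribute spellings as a sketch; the exact property and attribute names should be checked against the MbUnit 3 / Gallio documentation for your version):
// Setup fixtures run first because of their negative order.
[TestFixture(Order = -4)]
public class CreateBaselineCompanies
{
    [Test]
    public void Create_companies() { /* seed data through the UI */ }
}

// Skipped automatically if CreateBaselineCompanies fails.
[TestFixture]
[DependsOn(typeof(CreateBaselineCompanies))]
public class VendorFixture
{
    [Test]
    public void Can_edit_vendor() { /* ... */ }
}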
Creating baseline test data once, instead of before each test, speeds things up a lot. However, the setup tests still might take 10 minutes to run. When I'm working on new tests I want to run and rerun them frequently. Enter another cool Gallio feature: Ambience. Ambience is a wrapper around db4o that provides a very simple way to persist objects. We use it to persist the data context automatically. Thus the setup tests must only be run once between rebuilds of the database. After that you can run any or all of the other fixtures repeatedly.
So what about cleaning up test data? Don't we need to start from a known state? This is a rule we have found it expedient to break. A strategy that is working for us is to use long random values for things like company name, username, etc. We have found that it is not very difficult to keep a test run inside a logical "data space" such that it does not bump into other data. Certainly I fear the day when I spend hours chasing down a phantom failing test only to find that it's some data collision. It's a trade-off that is working for us currently.
We are using WatiN. I quite like it. Another key to success is something Scott Bellware alluded to. As we create tests we are building up an abstract model of our UI. So instead of this:
browser.TextField("ctl0_tab2_newNote").TypeText("foo");
You will see this in our tests:
User.NotesTab.NewNote.TypeText("foo");
This approach provides three benefits. First, we never repeat a magic string. This greatly reduces brittleness. Second, tests are much easier to read and understand. Last, we hide most of the WatiN framework behind our own abstractions. In the second example, only TypeText is a WatiN method. This will make it easier to adapt as the framework changes.
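For illustration, a stripped-down sketch of what such an abstraction can look like over WatiN (assuming WatiN 2.x's Browser and TextField types; UserScreen, NotesTab, and NewNote are names from our UI model, not WatiN's):
using WatiN.Core;

public class UserScreen
{
    private readonly Browser browser;
    public UserScreen(Browser browser) { this.browser = browser; }

    public NotesTab NotesTab
    {
        get { return new NotesTab(browser); }
    }
}

public class NotesTab
{
    private readonly Browser browser;
    public NotesTab(Browser browser) { this.browser = browser; }

    // The magic control id lives in exactly one place.
    public TextField NewNote
    {
        get { return browser.TextField("ctl0_tab2_newNote"); }
    }
}
The tests then create a UserScreen once and read like the second example above.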
Hope this helps.
It was difficult, but not impossible, to build an integration test phase into the build process using Maven. What happened was essentially this:
Ignore all JUnit tests in a specific directory unless the integration-test phase fires.
Add a Maven profile to execute the integration tests.
In the pre-integration-test phase:
Start Jetty running the application against a test database.
Start the Selenium server.
Run the Selenium integration tests in the integration-test phase.
Then stop the Selenium server and stop Jetty.
The difficulty in this step was really setting up Jetty - we couldn't get it to just launch from a WAR, so we actually have to have Jetty unpack the WAR and then run the server - but it works well and is automated - all you have to do is type mvn -PintegrationTest (that was our integration test profile name) and off it goes.
Do you mean automatically starting the tests after the build finishes?
You could write automated scripts to copy the build files to a working IIS site when the build compiles successfully, and then start the automated BVTs by calling mstest.exe or other methods.
You could also give AutoItX a try, or a scripting language such as Python or Ruby.
