Automated testing on a mature codebase

I'm working with a large, mature PHP codebase, and I'm considering making a case to management for automated testing. As part of this I want to have the core of a usable testing suite in place to demo the impact on our completion times, etc. The only thing is, I'm pretty much a novice at automated testing myself, and there are a couple of real-life considerations I'm not altogether clear on.
For one thing, how do I include my tests in Git? I don't want to push them live!
For another, I clearly don't want to create dependencies in the production code on a testing tool. So should I be writing the tests to load the main application into them as a preamble to the actual tests? (This would seem to imply that a mirror directory structure for tests is needed in our development environments.)
Lastly, I am leaning toward PHPtest and/or Selenium to run the show. In particular I want to test browser results, including the HTML and subsequent JS and AJAX operations. Selenium seems ideal for that, but I'm unclear on how I would integrate it into the codebase. I assume it's JavaScript, but I'm unclear on that. The remaining features that aren't user-accessible would then be tested by PHPtest, a fairly small number of items I would assume. Does that seem in any way sane?

Related

How do I keep compiled code libraries up-to-date across multiple web sites using version control?

Currently, we have a long list of various websites throughout our company's intranet. Most are inside a firewall and require an Active Directory account to access. One of our problems, as of late, has been the increase in the number of websites and the addition of a common code library that stores our database access classes, common helper functions, serialization methods, etc. The goal is to use that framework across all websites throughout the company.
Currently, we have upgraded the in-house data entry application with these changes consistently. It is up-to-date. The problem, however, is maintaining all of the other websites. Is there a best practice for finding out which versions are on each website and upgrading accordingly? Can I have a centralized place where I keep these DLLs and have sites reference them? What's the best way to go about finding out what versions are on these websites without having to go through each and every single website, find out the version, and upgrade after every change?
Keep in mind, we run the newest TFS and are a .NET development team.
At my job we have a similar setup to yours: lots of internal applications that use common libraries. I have spent the best part of a year sorting this all out.
The first thing to note is that nothing you mentioned really has anything to do with TFS, but is really a symptom of the way your applications, and their components, are packaged and deployed.
Here are some ideas to get you started:
Set up automated/continuous builds
This is the first thing you need to do. Use the build facility in TFS if you must, or make the investment into something like TeamCity (which is great). Evaluate everything. Find something which you love and that everyone else can live with. The reason why you need to find something you love is because you will ultimately be responsible for it.
The reason why setting up automated builds is so important is because that's your jumping off point to solve the rest of your issues.
Set up automated deployment
Every deployable artifact should now be built by your build server. No more manual deployment. No more deployment from workstations. No more Visual Studio Publish feature. It's hard to step away from this, but it's worth it.
If you have lots of web projects, then look into using Web Deploy, which can easily be automated with MSBuild/PowerShell, or go fancy and try something like Octopus Deploy.
Package common components using NuGet
By now your common code should have its own automated builds, but how do you automatically deploy a common component? Package it up with NuGet and either put it on a share for consumption or host it in a NuGet server (TeamCity has one built in). A good build server can automatically update your NuGet packages for you (if you always need to be on the latest version), and you can inspect which version you are referencing by checking your packages.config.
I know this is a lot to take in, but it is in its essence the fundamentals of moving towards continuous delivery (http://continuousdelivery.com/).
Please be aware that getting this right will take a long time, but the process is incremental and you can evolve it over time. However, the longer you wait, the harder it will be. Don't feel like you need to upgrade all your projects at the same time; you don't. Just the ones that are causing the most pain.
I hope this helps.
I'd just like to step outside the space of a specific solution for your problem and address the underlying desire you have to consolidate your workload.
Be aware that any patching/upgrading scenario will have costs that you must address - there is no magic pill.
Particularly, what you want to achieve will typically incur either a build/deploy overhead (as jonnii has outlined), or a runtime overhead (in validating the new versions to ensure everything works as expected).
In your case, because you have already built your products, I expect you will go the build/deploy route.
Just remember that even with binary equivalence (everything compiles, and unit tests pass), there is still the risk that the application will behave somewhat differently after an upgrade, so you will not be able to avoid at least some rudimentary testing across all of your applications (the GAC approach is particularly vulnerable to this risk).
You might find it easier to accept that just because you have built a new version of a binary, doesn't mean that it should be rolled out to all web applications, even ones that are already functioning correctly (if something ain't broke...).
If that is acceptable, then you will reduce your workload by only incurring resource expense on testing applications that actually need to be touched.

Web application: Acceptance testing: Initial state for a test and test isolation?

Greetings,
I am currently exploring extreme programming and trying to stick to it as much as possible. This means I will need to turn my (by now unexpectedly thick) stack of user stories into acceptance tests once I begin an iteration (after planning the release, of course).
I am not entirely sure about the implementation language I am going to use, however, I am sure that this is going to be a dynamic web application with a database backend, served by a webserver. Right now, I plan to develop the first release on a local machine with a local testing environment, so it is possible to assume that security is no concern on the acceptance tests (so, I can give the acceptance tests root access to the testing database involved, for example). I am still a bit unsure about the acceptance test framework to use, however, since this is going to be a web application, I think I will use Selenium RC in order to write the tests and run them (I mention this in case someone is able to point me to something better :) ).
However, there is still a dark area left: I do not have data for this application yet, because I am implementing a new, fresh application. Thus, I cannot grab a snapshot of the current production database to use as a test database, and additionally, the application is stateful (as any web application with a database backend is), so using a single database for all acceptance tests is going to cause ugly problems with regard to test isolation (and at least for unit tests, that reads as "This can result in great fun and lots of gray hair").
So, how do I solve this problem? Do I create artificial testing databases (and maintain them whenever the database schema changes) and write the acceptance tests such that each acceptance test loads the appropriate database state into the testing database before running the test? (How fast or slow will it be to load a dozen records a hundred times, when a lot of acceptance tests run?) Should I create a single example database, load this for all tests and hope for the best? Should I recreate the test data I need all the time in the acceptance tests? Or, how do people do this?
According to further research, the proper way to do this is to bring the database into a defined state using the appropriate setUp methods. This potentially involves deleting all existing data in the tables, adding a certain test set of data to the tables and then running the test on exactly this data. Afterwards, the tearDown method cleans up whatever was done to the tables (either setUp drops everything, or tearDown drops everything again). There are tools like dbUnit to simplify this process. This results in some reduction of testing speed; however, it establishes total isolation of tests, which is a good thing, because then green simply means green and red simply means red, and not "Given the current order of test execution, this works".
Besides that, the speed issue will probably be less significant for me, as I can focus on a small number of tests while developing code for a single user story and have my CI server run all the tests (which then takes more time) in the background when I think I am done.
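To illustrate the shape of that setUp/tearDown approach, here is a minimal sketch using NUnit and plain ADO.NET; the stack, connection string and table names are assumptions purely for illustration, and dbUnit and friends wrap the same idea:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class PlaceOrderAcceptanceTests
{
    // Hypothetical test database; adjust to your own environment.
    private const string ConnectionString =
        "Server=localhost;Database=MyAppTest;Integrated Security=true";

    [SetUp]
    public void ResetDatabase()
    {
        // Bring the database into a defined state before every test:
        // clear the affected tables, then insert exactly the rows this test expects.
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText = @"
                    DELETE FROM Orders;
                    DELETE FROM Customers;
                    INSERT INTO Customers (Id, Name) VALUES (1, 'Test Customer');";
                command.ExecuteNonQuery();
            }
        }
    }

    [Test]
    public void SubmittingTheOrderForm_CreatesAnOrder()
    {
        // Drive the browser (e.g. via Selenium RC) against the known state here,
        // then assert on the resulting database contents.
    }
}
```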

Testing ASP.NET WebForms applications

If you're in my position, you have a big WebForms application which has escalated into this unmaintainable thing. Things break when you add new features, and you need an inexpensive, maintainable way to do some kind of automated testing.
Now, from my understanding, the right thing to do would be to try building an abstraction layer over the page and user control model present in ASP.NET WebForms; however, seeing as that would require a major investment in an existing application, it is not an option.
I'm trying to push for REST-like development as much as possible, because it has some nice properties. While doing this I've written a simple spider bot that crawls all the URLs it can find and simply tries GETting them.
This allowed me to quickly find bad data that was causing problems and avoid having my end users click on broken things; however, this is of course not enough.
I continued work on my crawler and it has developed into a simple REST client that tries different input combinations, looking for a probable bug or crash. It's more intelligent than just an exhaustive search (because it knows about the ASP.NET WebForms application layer), and my goal here is to basically explore the state of the web application, hoping to hit all the corner cases before our users do.
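To give an idea of the approach, a stripped-down version looks roughly like this (the start URL is a placeholder and the link extraction is deliberately naive):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

class SmokeCrawler
{
    static void Main()
    {
        var pending = new Queue<string>(new[] { "http://localhost/MyApp/" });
        var seen = new HashSet<string>();

        while (pending.Count > 0)
        {
            string url = pending.Dequeue();
            if (!seen.Add(url)) continue;   // skip URLs we have already hit

            try
            {
                using (var client = new WebClient())
                {
                    string html = client.DownloadString(url);

                    // Queue every same-site link we can find for a later GET.
                    foreach (Match m in Regex.Matches(html, "href=\"(/[^\"]*)\""))
                        pending.Enqueue(new Uri(new Uri(url), m.Groups[1].Value).ToString());
                }
            }
            catch (WebException ex)
            {
                // A 500 or similar here usually points at bad data or a broken page.
                Console.WriteLine("FAILED {0}: {1}", url, ex.Message);
            }
        }
    }
}
```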
Does anyone have any experience doing something similar?
Also, for the test gurus out there: is this a complete waste of time, or will I actually be able to say something about the quality here? From my perspective it seems to hit a sweet spot, in that it will try things a potential end user would through a browser.
As I said before, we're stuck in a bad place. And we need a simple way out of it, right now.
We've tried things like Selenium, but it demands a lot of extra work, and we change things all the time; it's just not possible to maintain multiple Selenium test suites for 50 different applications.
Of all the types of testing to implement, unit testing is both the easiest and the most likely to yield results, in terms of fewer bugs and more maintainable code. Get that worked out before you deal with automated integration testing.
Pick an IoC container - I like Ninject for this personally
Find a convenient place to inject "service" classes into your Page (the constructor of a base Page class or override the module that loads pages, whatever works for you)
Pick a unit test framework and if you don't have an automated build then set one up; include running a full suite of unit tests in that build
Every time you go near a piece of logic in an aspx.cs file, see if you can't isolate it in a service and wrap unit tests around it (see the sketch below)
Take a look at whether the MVP Pattern would be good for you - we found it decreased productivity as much as it increased testability (it did both a lot), but it works for some people
See about slowly migrating your app over to MVC, a page at a time if necessary
And remember, you are not going to fix this problem overnight, you don't have time. Just keep improving test coverage and you'll see the benefits over time.
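To make the "isolate it in a service" point concrete, here is a minimal sketch; the service, the business rule and the numbers are invented for illustration, and NUnit is used, but any test framework works:

```csharp
using NUnit.Framework;

// Logic that used to live in an aspx.cs event handler, pulled into a plain class.
public interface IDiscountService
{
    decimal ApplyDiscount(decimal orderTotal);
}

public class DiscountService : IDiscountService
{
    public decimal ApplyDiscount(decimal orderTotal)
    {
        // Hypothetical business rule: 10% off orders over 100.
        return orderTotal > 100m ? orderTotal * 0.9m : orderTotal;
    }
}

// The code-behind now only delegates, e.g.:
//   lblTotal.Text = _discountService.ApplyDiscount(total).ToString("C");
// where _discountService is injected by your IoC container (Ninject, etc.).

[TestFixture]
public class DiscountServiceTests
{
    [Test]
    public void OrdersOverOneHundred_GetTenPercentOff()
    {
        var service = new DiscountService();
        Assert.AreEqual(180m, service.ApplyDiscount(200m));
    }

    [Test]
    public void SmallOrders_AreNotDiscounted()
    {
        var service = new DiscountService();
        Assert.AreEqual(50m, service.ApplyDiscount(50m));
    }
}
```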
What part of your application is breaking? The UI, or the business logic?
Business logic should be completely separated from the user interface, and should be tested separately. In particular, it's much easier to use automated unit testing tools against separated business logic than it is against UI.
If I am right, you have a large web form and want to run some standard end-user tests each time you do a new release.
I can recommend the Selenium IDE add-on for Firefox.
It will allow you to record your user actions, e.g. filling in a form, and replay those actions at any time: an easy way to run some tests over a form with different data.
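If you later need to run such tests outside the IDE, they can be rewritten in C# against Selenium's API and run from a unit test framework; a rough WebDriver-flavoured sketch (the URL and element ids are made up) would be:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class ContactFormTests
{
    [Test]
    public void SubmittingTheForm_ShowsAConfirmation()
    {
        // URL and element ids below are placeholders for your own form.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("http://localhost/MyApp/Contact.aspx");
            driver.FindElement(By.Id("txtName")).SendKeys("Test User");
            driver.FindElement(By.Id("txtEmail")).SendKeys("test@example.com");
            driver.FindElement(By.Id("btnSubmit")).Click();

            Assert.IsTrue(driver.PageSource.Contains("Thank you"));
        }
    }
}
```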
For internal code testing, write some unit tests using NUnit.

How to automate functional/integration tests and database rollbacks

In contrast to my previous question, I'll try to give my requirements.
I am trying to find some framework/methodology/"thing" that would fit the following:
Ability to write an automated test, preferably written in Visual Studio, using C#.
Test should drive a web browser and interact with the SUT just like a user would.
Test should be able to set up a test scenario in the DB.
Test should be able to assert that user interactions had the expected effect in the DB.
After the test is completed, it should be able to roll back all changes it made in the DB.
My first attempt was to use an NUnit test to drive Selenium (and WatiN before that), but I faced a bit of a problem (check the link above) while using TransactionScope to roll back the changes the Selenium-driven browser made in the DB.
Has anyone done anything like this in the "real world"? I've found some references through Google, but haven't been able to find any concrete examples of how to implement this. There wouldn't be any problems if I were doing unit testing. In that case TransactionScope would be quite enough.
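For reference, the per-test TransactionScope pattern that is quite enough for in-process unit tests looks roughly like this (names are hypothetical); the catch here is that the browser-driven web application runs in its own process, so its connections never enlist in this ambient transaction:

```csharp
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class InProcessRollbackExample
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Data access performed in this test process enlists in the ambient
        // transaction and is rolled back when the scope is disposed without
        // calling Complete().
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        _scope.Dispose();
    }

    [Test]
    public void SomeDataAccessTest()
    {
        // In-process repository/DAL calls here are covered by the scope;
        // a web application driven through the browser is not.
    }
}
```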
Edit: R. Harvey pointed me to this question, which is almost identical to my situation.
However, that question is only almost identical. My application is part of a family of services, all of them accessing the same set of database tables. The amount of test data required does not allow for efficient use of drop/create scripts, so is there some alternative solution for this?
We are using SQL Server 2005, and I'm not very proficient in database magic, so if there's some way to use SQL scripting other than drop/create, then that could be an option.
Edit 2:
Based on the answers and some additional head scratching, we'll go for more lightweight databases for developers to perform unit, integration and functional testing. This enables us to use SQL scripts for setting up and tearing down the tests.
Changes made in a transaction are only visible inside said transaction. Also, wrapping the test in a transaction scope (if possible) would make the test behave differently from the real thing in a very critical aspect (transactions).
It is much better to use a database image that you restore before every test suite. This way, after the suite completes and the verification is done, you drop the test database. On the next run, during the suite setup, the database is re-created from the saved image in a pristine state, ready for testing. Even better would be to have a script that deploys the database from scratch, and to run that script during suite setup.
By the way, it is not feasible to restore to a pristine state before every test. More generally, it is not feasible to have lengthy individual test setup and cleanup steps. As you add more tests, the time spent restoring the database to a test-ready condition between tests will become unmanageable. Suites with hundreds of tests are quite common, and full test runs of tens of thousands of tests would mean hours and hours spent just restoring databases for tests. Design your individual tests so that they can be run independently, i.e. test N has to produce valid results even if test N-1 failed.
Another thing to consider is failure investigation: you want your failed test to leave the database in a state that can be investigated for meaningful information, and you want subsequent tests to be able to run and produce valid results. Sometimes these requirements will contradict each other, but you must take them into consideration and design your tests around them.
If the amount of data required to restore the database to a known-good state is prohibitive for drop/create scripts, and you are running your tests on the Developer or Enterprise edition of SQL Server 2005, you could look into creating a database snapshot of the good state and reverting to it before each test. This is considerably faster than a full restore, although it may still be too time-consuming if you have hundreds of tests.
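As a rough sketch of the snapshot idea (the database, logical file and path names are invented, and the exact revert sequence may need adjusting for your environment):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class SnapshotRevertingTests
{
    // Connect to master: you cannot revert a database over a connection that uses it.
    private const string MasterConnectionString =
        "Server=localhost;Database=master;Integrated Security=true";

    [TestFixtureSetUp]
    public void CreateSnapshotOfKnownGoodState()
    {
        Execute(@"
            CREATE DATABASE MyAppTest_Snapshot
            ON (NAME = MyAppTest_Data, FILENAME = 'C:\Snapshots\MyAppTest.ss')
            AS SNAPSHOT OF MyAppTest;");
    }

    [SetUp]
    public void RevertToSnapshot()
    {
        // Kick out any lingering connections, revert to the snapshot, then reopen.
        Execute(@"
            ALTER DATABASE MyAppTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            RESTORE DATABASE MyAppTest FROM DATABASE_SNAPSHOT = 'MyAppTest_Snapshot';
            ALTER DATABASE MyAppTest SET MULTI_USER;");
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(MasterConnectionString))
        {
            connection.Open();
            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```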
Don't miss Amnesia, which I recommended on this related question.

Testing the UI in an ASP.NET page?

What's the best way to automate testing the UI in an ASP.NET page?
Watir or WatiN are a great place to start.
More info here
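To give a feel for it, a minimal WatiN example driven from NUnit (the URL and control names below are placeholders) looks something like this:

```csharp
using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class SearchPageTests
{
    [Test, RequiresSTA] // WatiN's IE automation needs a single-threaded apartment
    public void Searching_ShowsResults()
    {
        // URL and control names are placeholders for your own page.
        using (var browser = new IE("http://localhost/MyApp/Search.aspx"))
        {
            browser.TextField(Find.ByName("txtQuery")).TypeText("widgets");
            browser.Button(Find.ByName("btnSearch")).Click();

            Assert.IsTrue(browser.ContainsText("Results"));
        }
    }
}
```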
This is quite a loosely defined question, so a good answer is almost impossible.
I would dare to suggest that using Selenium might help with automating the task.
If you are the only coder on a project, I would suggest testing it by hand. That said, you will likely suffer from coder myopia: since you wrote the code and know what it is supposed to do, you may subconsciously avoid actions that will break it.
I have worked with different automation methods and they tend to be fairly heavy. In other words, you will find yourself updating your tests more often than you would like. In my opinion, automated testing only becomes necessary when you have more than one developer on a project and they are not aware of the full scope.
In the ideal environment, a developer would have a dedicated tester who would write and maintain tests, as well as validate that the code was functionally correct and met the business requirements.
In the real world, lots of developers are basically lone wolves with limited resources and time and the best way to have solid, bug-free code is to understand the business requirements and then make sure that when writing the code, you make no mistakes. :-)
Not sure about the "best" way, that's probably quite a loaded question...
One way is to use the Web Tests in the Test edition of Visual Studio; see the MSDN documentation.
Also here's a simple tutorial.
What specifically are you testing for? Cross browser compliance? Performance? Usability? That's a pretty broad question - can you define it a little more?
In terms of User Acceptance? Bug hunting? Load testing?
For the first one, get other people to use it and comment on it.
For the second one, you should use the test plans and test cases that you wrote beforehand to test the UI, in terms of data validation (server-side as well as JavaScript), range checking and all that stuff. I believe there are also tools that simulate clicks which you could use.
For the third, try JMeter.
As for testing the engine behind the website, you can bypass the web interface and write test classes that call the engine directly (if it isn't coded directly into the ASP) to test its functions. I would call this a different task from testing the UI, however.
AspUnit, which can be found on SourceForge.net. However, the project is no longer actively developed, but it will work on .NET 1.1 and 2.0.
Set up a room with several terminals running your application
Prepare a list of tasks to be completed
Bring in volunteers to run through the tasks
Monitor the actions of the volunteers, either through taping or a one-way mirror
Rinse and repeat!
I vote for Test Manager in Visual Studio 2010 and then generate "Coded UI tests" for it!
Very easy to create assertions
Very nice code (Readable!)
Easy and maintainable, because the code is easy to read and you can change how controls are found on the page
I did a quick comparison of WatiN, Selenium and Test Manager in VS2010.
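For a flavour of what a hand-written (rather than recorder-generated) Coded UI test can look like, here is a rough sketch; the URL and control ids are invented, and in practice the recorder generates most of this for you via a UIMap:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class LoginPageCodedUITests
{
    [TestMethod]
    public void LoggingIn_ShowsWelcomeMessage()
    {
        // Launch the browser against a hypothetical local page.
        BrowserWindow browser = BrowserWindow.Launch(new Uri("http://localhost/MyApp/Login.aspx"));

        // Find the user name box by its (invented) HTML id and type into it.
        var userName = new HtmlEdit(browser);
        userName.SearchProperties[HtmlControl.PropertyNames.Id] = "txtUserName";
        userName.Text = "testuser";

        // Click the (invented) login button.
        var login = new HtmlButton(browser);
        login.SearchProperties[HtmlControl.PropertyNames.Id] = "btnLogin";
        Mouse.Click(login);

        // Assert on the resulting page content.
        var welcome = new HtmlDiv(browser);
        welcome.SearchProperties[HtmlControl.PropertyNames.Id] = "divWelcome";
        Assert.IsTrue(welcome.InnerText.Contains("Welcome"));
    }
}
```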

Resources