Free automatic regression test tool for ASP.NET applications?

We work in a small team. We often run into problems where developer1 changes a stored procedure or function and it breaks developer2's work. Such issues are only discovered by chance later. Please guide me on how such issues can be prevented. Is there a free tool that we can run to catch them?

Slowly introduce unit tests, focused integration tests, and full system tests.
For all of these, use a .NET unit test framework; what you do inside the test is what makes it one of the above kinds. Make sure to keep the three types of tests separate, as they differ greatly in how long they take to execute.
For the unit test framework I suggest NUnit, but there are others; one I've found interesting but never made the jump to is xUnit.net.
For full system tests I suggest running them in the unit test framework using WatiN. You could also go with Selenium RC.
We often run into problems where developer1 changes a stored procedure or function and it breaks developer2's work. Such issues are only discovered by chance later.
For that specific type of scenario I strongly suggest focused integration tests. Full system tests might catch such a scenario, but they still leave you to figure out why it broke.
Instead, focus the test on the specific database access code that calls the procedure. By adding scenarios there that capture the expectations developer2 had of the procedure when writing the related .NET code, regressions in that integration code are revealed very quickly and can be dealt with very effectively. Also note that developer1 can easily run the focused integration tests that involve that procedure or area of the database many times, which is a lot more likely to happen than doing the same with full system tests.
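As a minimal sketch of such a focused integration test, assuming NUnit and SQL Server (the connection string, procedure name, parameter, and expected columns below are hypothetical placeholders, not anything from the original question):

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;   // or Microsoft.Data.SqlClient on newer stacks
    using NUnit.Framework;

    [TestFixture]
    [Category("FocusedIntegration")]
    public class GetActiveOrdersProcedureTests
    {
        // Hypothetical connection string pointing at a disposable test database.
        private const string ConnectionString =
            @"Server=(localdb)\MSSQLLocalDB;Database=ShopTest;Integrated Security=true";

        [Test]
        public void Returns_the_columns_the_calling_code_expects()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("usp_GetActiveOrders", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@CustomerId", 42);

                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // These assertions encode developer2's expectations of the procedure,
                    // so a breaking change by developer1 fails this test immediately.
                    var columns = new List<string>();
                    for (int i = 0; i < reader.FieldCount; i++)
                        columns.Add(reader.GetName(i));

                    Assert.That(columns, Does.Contain("OrderId"));
                    Assert.That(columns, Does.Contain("Status"));
                }
            }
        }
    }

Developer1 can run tests like this in seconds after touching the procedure, which is exactly what makes them cheaper than a full system run.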

You can do either automated unit testing using tools such as NUnit, or automated black-box testing using tools such as Selenium. Note that both options (even with free tools) may need significant investment in time and effort. Typically, unit test cases are created by the developers themselves, while a separate QA team handles automated black-box testing - mostly because unit test cases are generally written in languages such as C# or VB.NET, while automated black-box testing tools typically use scripting languages.
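As a rough illustration of the black-box style, here is a sketch using the Selenium WebDriver .NET bindings driven from NUnit (the URL, element ids, and credentials are hypothetical, and WebDriver is only one of the Selenium flavours mentioned above):

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    [Category("BlackBox")]
    public class LoginSmokeTest
    {
        [Test]
        public void Valid_user_lands_on_the_dashboard()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                // Hypothetical URL, element ids and credentials; substitute your application's own.
                driver.Navigate().GoToUrl("http://localhost:5000/Login.aspx");
                driver.FindElement(By.Id("UserName")).SendKeys("testuser");
                driver.FindElement(By.Id("Password")).SendKeys("secret");
                driver.FindElement(By.Id("LoginButton")).Click();

                Assert.That(driver.Title, Does.Contain("Dashboard"));
            }
        }
    }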

Related

Suggest a suitable Automated Testing Tool for my project

We are in search of an automated testing tool for our project. Since we are in the testing department, we would prefer a tool that involves less programming. Please suggest some tools for us. Till now we have been testing our application manually.
Our project is being developed in Java.
Is there any freeware tool that I could use or is it better to go for a paid tool?
Thanks in Advance.
Less programming? You'll need something like JUnit to write unit tests if you want to do serious regression testing, but unit tests require you to write some code.
Here's a big list of open-source testing tools, some of them may offer what you want: http://java-source.net/open-source/testing-tools/junit
For example, T2 claims to be a random testing tool. As such, it is fully automatic, but keep in mind that the code coverage of random testing is in general very limited; it should be used as a complement to other testing methods. T2 checks for internal errors, runtime exceptions, method specifications, and class invariants.
Not sure if you mean a CI tool or not, but we use Hudson at Zappos and it works pretty well.
http://hudson-ci.org/
..and there's also CruiseControl: http://cruisecontrol.sourceforge.net/
If you're not talking about CI, maybe you mean QA testing - in which case you should take a look at something like Selenium (for web apps):
http://seleniumhq.org/
If you're doing GUI testing, I'm not really familiar with that area, but I've heard about WinRunner and Rational:
http://en.wikipedia.org/wiki/HP_WinRunner
http://www-01.ibm.com/software/rational/offerings/quality/
..though neither are really free tools. Something like AutoIT might help you move widgets around, but it lacks the reporting parts:
http://www.autoitscript.com/autoit3/index.shtml
There could be two answers to your question:
Besides Selenium, which has ample advantages, I have been reading about another tool that uses the same API Selenium uses. The only API difference I have seen so far is that it reduces the complexity of the functions, making it easier and simpler for a user who is still learning.
The tool is called 'Helium', and its functions and code are claimed to be 50% (or more) less complex than Selenium's.
The only problem with this tool is that it is a paid tool; for learning purposes and for implementing a not-so-big-scale project you can use it, but after some time it is going to cost you.
I have implemented some code with Helium. Please let me know if you face any issues initially or are thinking of adopting it.
The other option: you can use Selenium Builder (http://khyatisehgal.wordpress.com/2014/05/26/selenium-builder-exporting-and-execution/), which is an advanced form of the Selenium IDE. It exports your commands in different languages and works more effectively and efficiently than the Selenium IDE does (http://khyatisehgal.wordpress.com/2014/05/25/selenium-builder/). So you can import the scripts into the Eclipse IDE and just execute them as is.
Please let me know if you have any doubts about either tool.

How to initiate automated testing?

I started as a software engineer at the company I'm currently at. Over time, I was either the only one willing or able to take responsibility for various systems, and so I was "promoted" to IT Manager. Now, during my time as a software engineer, I would create functional tests for the various software modules I built, and as a result, even today I am able to quickly test the parts of the system that I have worked on. However, there is a large code base with little to no coverage from the various other developers who have been working here.
Now, as IT Manager, I want to be able to test that all the parts of the system are working, but there is:
A) no budgeted time dedicated to creating code test coverage
and
B) No desire from the "chief software engineer" to start creating testing suites to help me monitor that the software is functioning.
I don't expect the software team to drop everything they are doing and spend 2 weeks creating test suites, but it would be nice if they started expanding the test suite
coverage over time so I can confirm that the various parts of the system are working.
So boiling it down, how do I get the software team to start building test suites?
Other caveats:
A) I'm still asked to do software projects in addition to managing our IT dept (a unix engineer, desktop support guy, and related office and production equipment)
B) My unix admin has a really hard time getting production systems up running the full code base, and we aren't getting good help from the software team. He can't run any kind of diagnostic to see where the web app is failing on the new installs. The VP of the company keeps telling me to go in and do print_r's in the code to see what is happening. This sucks!!!
First, you need to investigate Test Driven Development so that you are comfortable explaining it in terms that your developers will understand, as well as your management. Since you seem to be developing web applications, and you have technical skills, I suggest that you take the plunge and choose an open source tool for testing web applications, get it installed, and start building tests for anything that you develop yourself.
Twill is an example of the kind of testing tool that you would need.
Then, as manager, you need to entice developers to follow your example, and reward them for doing so. And punish them when they don't use the testing framework and it leads to preventable problems. As soon as you get one such incident, you should be able to get your boss on board and pick up some momentum.
Overall, remember that the objective is to do less work to get a good result. Cutting corners is a way of doing less work, but leads to the risk of bad, or spectacularly bad results. Keep management informed of the risk levels and potential costs at risk.
Don't just force people to do testing for testing's sake. It has to help them be more productive so choose the first projects for it carefully.
That's a good question. And if there was one correct answer to it, much more software projects would be successful and deliver high quality.
I don't think it is a good idea to make such a change top-down. It has to be driven by the developers themselves. Training in the direction of TDD would be good, but that is a long-term investment and it takes time.
If you want a faster solution you should consider functional, acceptance, and system tests. With these tests you exercise pretty much the whole application through all its layers. If you are developing web applications you should consider using Selenium to automate your tests. It is easy to create tests with it (Selenium IDE).
But using only such tests (and not unit tests) doesn't give you the advantages that come from TDD.
Automating your tests is crucial.
Do you have a Test or QA team?
I would first start to see if they have Test Cases that they use to qualify the build. If not you will have to develop these test cases to test the core functionality of your product.
The next step would be automating the test cases.
If the application is poorly developed, without any troubleshooting tools or debugging features, it will be tough until these are added as requirements for the next release.
My 2 cents.
I'll have to disagree with michaelkebe: these changes need support from the executive level, in addition to a few key developers, in order to fully succeed.
Without that support, you'll just be some developers who look like they are 'wasting time on writing tests for stuff that already works.'
There needs to be a clear vision, and it needs to be repeated loudly and often.
I'm not necessarily advocating for Agile here, but often times it clicks for business owners.
If you can sell them on that, the things that you're excited about (delivering software fast, easy maintenance, automated testing, etc.) will fall into place.

Who writes the automated UI tests? Developers or Testers?

We're in the initial stages of a large project, and have decided that some form of automated UI testing is likely going to be useful for us, but have not yet sorted out exactly how this is going to work...
The primary goal is to automate a basic install and run-through of the app, so if a developer causes a major breakage (e.g. the app won't install, the network won't connect, a window won't display) the testers don't have to waste their time (and get annoyed) installing and configuring a broken build.
A secondary goal is to help testers when dealing with repetitive tasks.
My question is: Who should create these kinds of tests? The implicit assumption in our team has been that the testers will do it, but everything I've read on the net always seems to imply that the developers will create them, as a kind of 'extended unit test'.
Some thoughts:
The developers seem to be in a much better position to do this, given that they know control ID's, classes, etc, and have a much better picture of how the app is working
The testers have the advantage of NOT knowing how the app is working, and hence can produce tests which may be much more useful
I've written some initial scripts using IronRuby and White. This has worked really well, and is powerful enough to do literally anything, but then you need to be able to write code to write the UI tests
All of the automated UI test tools we've tried (TestComplete, etc) seem to be incredibly complex and fragile, and while the testers can use them, it takes them about 100 times longer and they're constantly running into "accidental complexity" caused by the UI test tools.
Our testers can't code, and while they're plenty smart, all I got were funny looks when I suggested that testers could potentially write simple ruby scripts (even though said scripts are about 100x easier to read and write than the mangled mess of buttons and datagrids that seems to be the standard for automated UI test tools).
I'd really appreciate any feedback from others who have tried UI automation in a team of both developers and testers. Who did what, and did it work well? Thanks in advance!
Edit: The application in question is a C# WPF "rich client" application which connects to a server using WCF
Ideally it should really be QA who end up writing the tests. The problem with using a programmatic solution is the learning curve involved in getting the QA people up to speed with using the tool. Developers can certainly help with this learning curve and help the process by mentoring, but it still takes time and is a drag on development.
The alternative is to use a simple GUI tool which backs a language (and data scripts) and enables QA to build scripts visually, delving into the finer details of the language only when really necessary - development can also get involved here also.
The most successful attempts I've seen have definitely been with the latter, but setting this up is the hard part. Selenium has worked well for simple web applications and simple threads through the application. JMeter (for scripted web conversations for web services) has also worked well. Another option is an in-house test harness: a simple tool on top of a scripting language (Groovy, Python, Ruby) that allows QA to put test data into the application either via a GUI or via data files. The data files can be simple properties files, or in more complex cases structured data files (something like YAML or even Excel). That way they can build the basic smoke tests to start, and later expand them into various scenario-driven tests.
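As a rough .NET-flavoured sketch of the data-file idea, assuming NUnit (the CSV layout, file name, and ShippingCalculator class are hypothetical stand-ins, not part of the answer above):

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using NUnit.Framework;

    public class ShippingCostTests
    {
        // Each CSV row is one scenario: weightKg,destination,expectedCost.
        // QA can add new rows to the file without touching any code.
        private static IEnumerable<TestCaseData> ScenariosFromFile()
        {
            return File.ReadLines("shipping-scenarios.csv")
                       .Skip(1)                                // skip the header row
                       .Select(line => line.Split(','))
                       .Select(f => new TestCaseData(decimal.Parse(f[0]), f[1], decimal.Parse(f[2])));
        }

        [TestCaseSource("ScenariosFromFile")]
        public void Shipping_cost_matches_the_expected_value(decimal weightKg, string destination, decimal expected)
        {
            Assert.That(ShippingCalculator.Calculate(weightKg, destination), Is.EqualTo(expected));
        }
    }

    // Stand-in for the real code under test; in practice this is the application's own logic.
    public static class ShippingCalculator
    {
        public static decimal Calculate(decimal weightKg, string destination)
        {
            return destination == "domestic" ? weightKg * 2.5m : weightKg * 6.0m;
        }
    }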
Finally... I think rich client apps are much more difficult to test in this way, but it depends on the nature of the language and the tools available to you...
In my experience, testers who can code will switch jobs for a pay raise as developers.
I agree with you on the automated UI testing tools. Every place I've worked that was rich enough to afford WinRunner or LoadRunner couldn't afford the staff to actually use it. The prices may have changed, but back then, these were in the high 5-digit to low 6-digit price tags (think of the price of a starter home). The products were hard to use, and were usually kept uninstalled in a locked cabinet because everyone was afraid of getting in trouble for breaking them.
I worked over 7 years as an application developer before I finally switched to testing and test automation. Testing is much more challenging than coding, and any automation developer who wants to succeed should master testing skills.
Some time ago I put my thoughts on skill matrices in a couple of blog posts.
If you're interested in discussing further:
http://automation-beyond.com/2009/05/28/qa-automation-skill-matrices/
Thanks.
I think having the developers write the tests will be of the most use. That way, you can get "breakage checking" throughout your dev cycle, not just at the end. If you do nightly automated builds, you can catch and fix bugs when they're small, before they grow into huge, mean, man-eating bugs.
What about the testers proposing the tests, and the developers actually writing them?
I believe at first it largely depends on the tools you use.
Our company currently uses Selenium (We're a Java shop).
The Selenium IDE (which records actions in Firefox) works OK, but developers need to manually correct mistakes it makes against our webapps, so it's not really appropriate for QA to write tests with.
One thing I tried in the past (with some success) was to write library functions as wrappers for Selenium functions. They read as plain English:
selenium.clickButton("Button Text")
...but behind the scenes the wrapper checks for proper layout and tags on the button, that it has an id, etc.
Unfortunately this required a lot of setup to make writing tests easy.
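For comparison, here is a rough C# sketch of such a wrapper using the Selenium WebDriver .NET bindings rather than the original RC-style API; the XPath convention and the specific checks are assumptions for illustration only:

    using NUnit.Framework;
    using OpenQA.Selenium;

    // The test calls ClickButton("Save") and the helper does the element lookup
    // plus the extra sanity checks behind the scenes.
    public class WebAppDriver
    {
        private readonly IWebDriver _driver;

        public WebAppDriver(IWebDriver driver)
        {
            _driver = driver;
        }

        public void ClickButton(string buttonText)
        {
            // Hypothetical convention: buttons are <button> elements found by their visible text.
            IWebElement button = _driver.FindElement(
                By.XPath("//button[normalize-space(text())='" + buttonText + "']"));

            // The checks the plain-English call hides from the test author.
            Assert.That(string.IsNullOrEmpty(button.GetAttribute("id")), Is.False,
                "button should have an id");
            Assert.That(button.Displayed, Is.True, "button should be visible before clicking");

            button.Click();
        }
    }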
I recently became aware of a tool called Twist (from Thoughtworks, built on the Eclipse engine), which is a wrapper for Selenium, allowing plain English style tests to be written. I am hoping to be able to supply this to the testers, who can write simple assertions in plain English!
It automatically creates stubs for new assertions too, so the testers could write the tests, and pass them to developers if they need new code.
I've found the most reasonable option is to have enough specs that the QA folks can stub out the tests: basically figure out what they want to test at each 'screen' or on each component, and stub those out. The stubs should be named very descriptively as to what they're testing. This also offers a way to crystallize functional requirements. In fact, writing the requirements in this fashion is particularly easy, and it helps non-technical people really work through the muddy waters of their own thought process.
The stubs can be filled in via a combination of QA/dev people. This allows you to CHEAPLY train QA people as to how to write tests, and they typically slurp it up as it furthers their job security.
I think it depends mostly on the skill level of your test team, the tools available, and the team culture with respect to how developers and testers interact with each other. My current situation is that we have a relatively technical test team; all testers are expected to have development skills. In our case, testers write the UI automation. If your test team doesn't have those skills they will not be set up for success, and in that case it may be best for developers to write your UI automation.
Other factors to consider:
What other testing tasks are on the testers' plate?
Who are your customers and what are their expectations related to quality?
What is the skill level of the development team and what is their willingness to take on test automation work?
-Ron

Continuous Integration vs. Nightly Builds

Reading this post has left me wondering; are nightly builds ever better for a situation than continuous integration? The consensus of the answers seems to be pretty lopsided in favor of continuous integration, is that evangelism or is there really no reason to use nightly builds when continuous integration is an option?
If you're really doing continuous integration with all available tests, nightly builds would be redundant, since the last thing checked in that day would already have been tested.
On the other hand, if your CI regime only involves running a subset of all available tests, for example because some of your tests take a long time to run, then you can use nightlies additionally to run all tests. This'll let you catch many bugs early, and if you can't catch them early, you can at least catch them overnight.
I don't know, though, if that's technically still CI, since you're only doing a "partial" build each time, by ignoring some of the tests.
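One common way to achieve that split, assuming an NUnit-based suite, is to tag the slow tests with a category and exclude that category on the CI build; the category name and test names below are just one possible convention:

    using NUnit.Framework;

    public class OrderProcessingTests
    {
        [Test]
        public void Totals_are_summed_correctly()
        {
            // Fast, isolated check: runs on every CI build.
            Assert.That(2 + 3, Is.EqualTo(5));
        }

        [Test]
        [Category("Slow")]   // excluded from the CI run, picked up by the nightly run
        public void Full_catalogue_import_completes()
        {
            // Long-running, end-to-end style check goes here.
            Assert.Pass();
        }
    }

The CI build would then run something like nunit3-console Tests.dll --where "cat != Slow", while the nightly build omits the filter and executes everything.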
In our organization, nightly builds and CI builds have two distinct purposes. The CI build is a 'latest code' build in which the unit tests are run against the last check in as you would expect. We also run several code metrics on the CI build.
For nightly builds, however, we only incorporate source code that has been through the peer review process and is deemed ready for testing.
This way, the nightly build always contains code that is 'feature ready' for testing, while the CI build contains features that, while functional (to the extent that the unit tests pass), may not be ready to send to the test group.
The test group writes new CRs exclusively from one of the nightly builds as opposed to the CI build, although those are also available for informal exploratory-type testing.
Yes, if you have a process you want attached to a build, but it is resource heavy. For example, on my team we run JTest during the nightly build. We can't run it during the day because:
It requires a lot of resources, which may not be available
It takes 4 hours to complete each time
If you have a nice robust CI process in place a "nightly" is still useful.
As mentioned, a "nightly" build can do exhaustive tests and perhaps some high-level system tests. End-to-end stuff.
The concept of a "nightly" build is easily understood by everyone in the organization. If you have trouble communicating CI builds out to other groups (for example, a QA group that doesn't have the same handle on Agile that the Dev group might) a "nightly" is a powerful and simple concept.
If your nightly is a separate set of resources, it can be managed separately and used to cut "gold" images with some claims to software integrity. For example, developer writes code, some trusted build system that dev can't touch builds it, QA tests the gold build and signs it. In such a situation, the nightly build functions like a production build system.
Just some thoughts.
In my professional opinion, the only reason to use nightly builds is when the build process takes so long that it can't complete in a "reasonable" amount of time.
For example, if your build process takes 5 hours to complete there is really no reason to do a build on check in.
Beyond that, there is so much value in knowing as soon as possible when a build fails that it overrides other concerns.
It depends on the purpose and length of each of your builds. Basically, you should identify what you are trying to learn from CI and decide whether it is worthwhile to spend the resources on running multiple builds.
We used continuous integration at my last job for a few different purposes.
First, we used it to make sure that the repository, and thus the developers, always had a version of the code that compiled. Few things are worse for team members than having to manage another person's broken changes through commenting, uncommenting, reverting and merging, all because one person checked in bad code. For this, we had a build that ran immediately on every check-in, with no tests or other validation, so we knew as soon as possible whether the code was safe to update to. Builds usually took about ten minutes and the machine was probably running at around 50% on a normal workday. No documentation was generated here, just a quiet pass or a loud fail siren.
Second, we wanted to know as soon as possible whether or not any rules were broken. The quicker that you find a broken rule, the easier it is to fix. For this purpose, we had a separate machine that ran a full build and validation of the code. This machine was running 12-14 hours a day continuously on a normal workday. Email status of the build was sent out describing broken unit tests, code compliance, etc.
We stopped there as far as automatically triggered builds go. A nightly build on top of that seemed a bit extreme for us. But I suppose if you wanted to have a snapshot build archived daily, you may want to schedule a third build with the extra steps required for that. Though, we did have another build that wrapped up and archived our QA deployment artifacts for quick and easy deployment, but we only ever manually triggered that one.
I think the other posts cover the common reasons, like having a build process that takes "too long" or having to run only a subset of tests during the CI build. But there's another reason which is political.
In some organizations the official builds are handled by a minimally responsive build/infrastructure/release management/SCM team. In these cases you might put them in charge of the nightly build and then run the CI build out of development. This avoids a fight because their build remains "the official build" and your CI build give you the feedback you need.
We have both continuous integration and nightly builds in place. They serve two different purposes.
Our continuous integration mechanism builds the software and runs unit tests under the continuous integration suite.
Our nightly build tags the source under version control, builds the software, and runs the unit tests under the nightly build suite. The software built here is then used in various system tests and stress tests.
I think one of the main differentiators for the nightly build is system tests.

What is a good CI build-process

What constitutes a good CI build-process?
We use CI, but is deployment to production even a realistic CI goal when you have dependencies on several services that must also be deployed, and other apps may depend on those services too?
Is a CI build process good enough when it is automated up to QA and manual from there?
Well "it depends" :)
We use our CI system to:
build & unit test
deploy to a single box, run integration tests and code analysis
deploy to the lab environment
run acceptance tests in a prod-like system
drop builds that pass into the code drop for prod deployment
This is for a greenfield project of about a dozen services and databases deployed to 20+ servers, that also had dependencies on half a dozen other 'external' services.
Using a CI tool to deploy your product to a production environment as a realistic goal? again... "it depends"
Why would you want to do this?
if you have the process you can roll changes (and roll back) faster and more often
less chance for human error
you can test the same deployment strategy in a test environment before going to production and catch issues earlier
Some technical things you have to address before you can answer this:
what are the uptime requirements for your system -- are you allowed to have downtime or does it need to be up 24/7?
do you have change control processes in place that require human intervention/approval?
is your deployment robust enough for any component to roll back to a known-good state if a deployment fails?
is your system designed to handle different versions of services or clients in case one or several component deployments fails (and you have the above rollback to last known good)?
does the process have the smarts to handle a partial deployment where a component cannot handle mixed versions of its dependencies/clients?
how are you handling database deployments/upgrades?
do you have monitoring in place so you know when something goes wrong?
Here are a couple of recent related links about automation and building the tools you need.
When it comes down to it, the more complex your system the more difficult it is to automate everything, but that does not mean it is not a worthy goal; it just takes a lot more effort and willpower to get it done -- everything from knowing the difficulties you're going to face, to the problems you have to account for (failure will happen), to the political challenges of building infrastructure (vs. more product features).
Now here's the big secret... the technical challenges are challenging but not impossible... the political challenges may be insurmountable. Everything about this costs money, whether it's dev time or buying 3rd party solutions. So really, can you build the $1K, $10K, $100K, or $1M solution?
Whatever solution you go for make sure the automation is robust first, complete second... i.e. make sure you have as robust a solution as you can for getting deployment to a test environment rather than a fragile solution that deploys to production.
CI is not intended as a deployment mechanism. It is good to have your CI execute any automated deployment to a QA/Test server, to ensure those aspects of your build work, but I would not use a CI system like Cruise Control or Bamboo as the means of deployment.
CI is for building the codebase periodically to automate execution of automated tests, verification of the codebase via static analysis and other checks of that nature.
Be sure you understand the idea behind a CI build. CI stands for Continuous Integration and CI builds are really intended to be throw-away builds that are performed when a developer checks code in to the source control system (or at some specified interval) to ensure that the newest changes do not break the code base (hence the idea of continuously integrating the changes to the code base).
To that end, the technology used for the actual build server process is largely irrelevant compared to what actually happens during the build. As #pdavis mentioned, the CI build should compile the code base, execute some code analysis (FxCop, StyleCop, Lint, etc.), execute unit tests and code coverage, and execute any other custom analysis you want performed that should impact the concept of a "successful" or "failed" build.
Having a CI build automatically deploy to an environment really doesn't fall under the control of a build server. That being said, you can always create a separate project that runs on the build server and handles the deployment when it detects certain conditions (such as a build completing successfully), but that should always be done as a completely independent thing.
I am starting on a new project at work that I am really looking forward to. We are still in the initial design stage and have just recently completed the Logical System Architecture. We have ordered new servers for the testing and staging environments and are setting up a Continuous Integration (CI) build system based on Cruise Control (http://cruisecontrol.sourceforge.net/) and MSBuild (http://msdn2.microsoft.com/en-us/library/wea2sca5.aspx), which is basically an improved port of ANT. It appears that Visual Studio 2005 project and solution files are all now in MSBuild format.
Cruise Control will automatically pull the source from Visual Source Safe (ok, it isn't Subversion but we can deal), compile it, and then run it through FxCop (http://www.gotdotnet.com/Team/FxCop/), NUnit (http://www.nunit.org/), NCover (http://ncover.org/site/), and last but not least Simian (http://www.redhillconsulting.com.au/products/simian/). Cruise Control has a pretty good website interface for displaying all of the compiled results from the various tools and can even display code changes from one build to the next. It also keeps track of all builds in a build history.
I'm looking forward to the test-driven development and think that this type of approach, combined with NUnit/NCover, should give us a pretty good idea before we roll out changes that we haven't broken anything. There are also plans to incorporate some type of automated user interface testing once we are far enough along in the project. Depending on the tool, this should be just a matter of installing the tool on the build server and calling it from Cruise Control. Sweet.
A good CI process will have full or nearly-full unit test coverage. Unit tests test classes and methods, vs. integration tests, which test multiple parts of the system. When you set up your CI builds, have them automate the unit tests. That way, the CI builds can run multiple times per day. We have ours set to run every 2 hours.
You can have longer running builds that run once per day. These can use other services and run integration tests.
I was watching a ThoughtWorks presentation (creators of Cruise Control) and they actually addressed this issue. Their answer is that NO deployment is too complex to test. Why? Because otherwise, your customers become your testers, which is exactly where you don't want to be.
If you have a complex deployment structure, set up a virtualization server. Have it pretend to be all the systems you need to talk to. They can always start in a known good state, because you can reset to a clean image.
To answer your initial question, a good process is one which enables communication between the repository and the developers. If the repository is in a bad state (non-compiling code, failed tests, etc.), the developers know about it as soon as possible, so that they can correct it.
The later a bug is discovered, the costlier it is to fix. So bugs should be discovered as early as possible. This is the motivation behind CI.
A good CI setup should catch as many bugs as possible. The whole application comprises code (often in multiple languages), database schema, deployment files, etc. Errors in any of these can cause bugs, so the CI process should try to exercise as many of them as possible.
CI does not replace a proper QA discipline. Also, CI need not be very comprehensive on day one of the project. One can start with a simple CI process that does basic compilation & unit testing initially. As you discover more classes of bugs in QA, you should adapt the CI process to try to catch future occurrences of those bugs. It can also involve static code-analysis checks, so that you can implement consistent coding and design ideals across the codebase.
